AI Curriculum
AI, taught step by step
A clear, no-hype path to understanding modern AI. Designed for curious minds who want to build intuition, not just memorize jargon. We start with the basics of rules and chance, then climb all the way to the neural networks and language models changing our world today.
No PhD required. Just bring your curiosity. We’ll build up the math and concepts step by step, so you always know why things work, not just that they work.
Foundations
Welcome to the bedrock of AI. Think of this module as the class warm-up where we learn how computers follow rules, deal with uncertainty, and search for answers—exactly the skills we’ll lean on all year.
Question we are chasing: How can something as ordinary as metal and silicon learn to follow rules, handle uncertainty, and still find its way through a messy world?
#1
What is Computation?
How simple rules can create complex behavior
Meet the simplest “thinking” machine ever built and see how strict little rules add up to powerful behavior.
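To get a taste of rules adding up to complexity, here is a minimal sketch (our illustration, not necessarily the lesson's exact machine) of an elementary cellular automaton: one tiny lookup table, applied to every cell over and over, generates surprisingly intricate patterns.

```python
RULE_110 = {  # maps each (left, self, right) neighborhood to the next state
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the rule to every cell at once (edges wrap around)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15   # a single "on" cell in the middle
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Eight short lines of rules, and the printout already looks anything but simple — that's the whole point of this lesson.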
#2
Probability & Distributions
How AI talks about uncertainty
Build intuition for chance, bell curves, and belief updates using familiar, classroom-friendly examples.
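"Belief updates" sound abstract until you run one. Here's a small sketch of Bayes' rule with classroom-flavored numbers we invented purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numer = p_evidence_given_h * prior
    denom = numer + p_evidence_given_not_h * (1 - prior)
    return numer / denom

# Made-up example: 1% of students saw the answer key (prior = 0.01);
# those who did score very high 90% of the time, everyone else 9%.
posterior = bayes_update(0.01, 0.90, 0.09)
print(f"P(saw the key | very high score) = {posterior:.3f}")
```

Notice the surprise: even after the suspicious score, the belief only climbs to about 9% — rare hypotheses need strong evidence.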
#3
Algorithms & Graph Search
How computers find good routes
Walk alongside Dijkstra’s algorithm and see how a computer calmly chooses the cheapest path in a network.
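A compact sketch of that calm choosing, using Python's built-in priority queue (the road map and its costs are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Cheapest known cost from `start` to every reachable node.
    `graph` maps node -> list of (neighbor, edge_cost) pairs."""
    dist = {start: 0}
    frontier = [(0, start)]               # priority queue of (cost so far, node)
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > dist.get(node, float("inf")):
            continue                      # stale entry; a cheaper route was found
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return dist

roads = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(roads, "A"))   # the detour A→C→B→D beats the direct-looking routes
```

The algorithm never panics: it just keeps expanding the cheapest frontier node until every cost is settled.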
Machine Learning
Here’s where the magic shows up: we stop hand-writing every rule and let data teach the model. Think of it as coaching instead of scripting.
Question we are chasing: How can a machine spot patterns from examples the way a student learns from practice problems?
#4
What is Machine Learning?
From examples to predictions
Get comfortable with what “training on data” really means and why different tasks need different learning setups.
#5
Regression & Classification
Predicting numbers and choosing categories
See the two big jobs of prediction: guessing numbers and sorting things into categories.
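The number-guessing half can fit in a few lines. This is ordinary least-squares regression on one feature, with made-up study-time data (our example, not the lesson's dataset):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

hours  = [1, 2, 3, 4, 5]       # hours studied (invented)
scores = [52, 55, 61, 64, 68]  # quiz scores (invented)
slope, intercept = fit_line(hours, scores)
print(f"score = {slope:.1f} * hours + {intercept:.1f}")
```

Classification swaps the "guess a number" output for a "pick a category" output, but the train-on-examples loop is the same idea.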
#6
Decision Trees & Random Forests
Learning by asking better questions
Watch a model ask a series of yes/no questions, and see why a whole forest of trees can be wiser than one.
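A trained tree is just nested questions. Here's a hand-built one (the splits and fruit are invented, purely to show the shape a learned tree takes):

```python
def classify(fruit):
    """Each branch is a yes/no question about one feature."""
    if fruit["diameter_cm"] > 6:          # Question 1: is it big?
        if fruit["color"] == "orange":    # Question 2: big and orange?
            return "grapefruit"
        return "apple"
    if fruit["color"] == "green":         # Question 2': small and green?
        return "lime"
    return "plum"

print(classify({"diameter_cm": 4, "color": "green"}))
```

A random forest trains many such trees on shuffled views of the data and lets them vote, which smooths out any single tree's bad questions.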
#7
Support Vector Machines
Finding the safest separating line
See how SVMs draw a line (or plane) that leaves the widest safety margin between classes.
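The "widest safety margin" idea shrinks nicely to one dimension. This toy (numbers invented) brute-forces the SVM spirit: the best boundary sits midway between the closest points of each class, and those closest points are the support vectors:

```python
lows  = [1.0, 1.5, 2.0]    # class A measurements (invented)
highs = [5.0, 5.5, 7.0]    # class B measurements (invented)

# The two points nearest the gap are the "support vectors";
# the widest-margin threshold sits exactly midway between them.
support_low, support_high = max(lows), min(highs)
threshold = (support_low + support_high) / 2
margin = (support_high - support_low) / 2

print(f"boundary at {threshold}, margin {margin}")
```

In higher dimensions the real algorithm solves an optimization problem instead of taking a midpoint, but the goal is identical: leave the biggest possible buffer on both sides.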
#8
Clustering & K-Means
Finding groups without labels
See K-means gently gather unlabeled data into groups by pulling points toward “centers.”
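That gentle pulling is a two-step loop: assign each point to its nearest center, then move each center to the average of its points. A bare-bones sketch with two obvious, invented blobs:

```python
import random

def dist2(a, b):
    """Squared distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Assign points to nearest center, re-average centers, repeat."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = tuple(sum(c) / len(cluster)
                                   for c in zip(*cluster))
    return centers

data = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(sorted(kmeans(data, 2)))   # one center lands in each blob
```

No labels anywhere — the structure emerges from distance alone.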
#9
Dimensionality Reduction
Keeping the important information
Learn why “too many columns” can overwhelm models and how PCA compresses data into simpler, clearer views.
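For two columns, PCA can be written out by hand: build the 2x2 covariance matrix and take its top eigenvector, which points along the data's main stretch. A sketch with invented points lying near the line y = x:

```python
import math

def first_principal_component(points):
    """Top eigenvector of the 2x2 covariance matrix (2-D only)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]:
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = sxy, lam - sxx          # an eigenvector for lam (sxy != 0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9)]
print(first_principal_component(data))   # roughly (0.707, 0.707): the y = x direction
```

Projecting onto that one direction keeps almost all the spread — which is exactly how PCA turns many columns into a few.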
Neural Networks
Inspired by the brain, powered by math. Here we’ll treat neural nets like a story of information flowing through layers, changing just enough each time to become something meaningful.
Question we are chasing: How do stacks of numbers and weights turn raw input into a confident prediction?
#10
Feedforward Neural Networks
From neurons to layered predictions
See how layers, weights, and activations team up to turn input values into a thoughtful prediction.
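That teamwork fits in a dozen lines. Here is a tiny 2→3→1 network with hand-picked, purely illustrative weights — every layer is just "weighted sum, then activation":

```python
import math

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """One dense layer: weighted sum per neuron, then the activation."""
    return [activation(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

hidden = lambda x: layer(x, [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]],
                         [0.0, 0.1, -0.1], relu)
output = lambda h: layer(h, [[1.0, -1.0, 0.5]], [0.2],
                         lambda z: 1 / (1 + math.exp(-z)))   # sigmoid

print(output(hidden([1.0, 2.0])))   # a single probability-like score
```

Stack more layers and widen them and you have the same machine at any scale — the arithmetic never changes, only the number of weights.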
#11
Training & Backpropagation
How a network learns from mistakes
Follow how error signals flow backward so the network can nudge its weights in the right direction.
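The entire learning loop, shrunk to one weight (the numbers are invented): run the forward pass, measure the error, use the chain rule to get the gradient, and nudge the weight the other way.

```python
x, target = 2.0, 10.0        # one training example (invented)
w, learning_rate = 0.0, 0.05

for _ in range(30):
    prediction = w * x                    # forward pass
    error = prediction - target           # loss = error ** 2
    gradient = 2 * error * x              # backward pass (chain rule)
    w -= learning_rate * gradient         # nudge the weight downhill

print(f"learned w = {w:.4f} (ideal is {target / x})")
```

Backpropagation is this exact nudge, repeated for millions of weights at once — the chain rule routes each weight its own share of the blame.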
Sequence Models
Time matters. Language, music, weather—they all happen in a sequence. These models learn to remember the past to predict the future.
Question we are chasing: How can a model use the past to make sense of what comes next in a sequence?
#12
Hidden Markov Models
When the real state is hidden from view
Use what we can see to infer the hidden states we can’t, and decode the most likely path underneath the data.
#13
RNNs & LSTMs
Neural networks with memory
See how recurrent models carry information forward in time—and how LSTMs remember the important bits longer.
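One recurrent step is just "new memory = squash(old memory + new input)". This sketch (weights invented) also shows the catch that motivates LSTMs: without fresh input, the plain RNN's memory fades fast.

```python
import math

def rnn_step(hidden, x, w_hidden, w_input):
    """One recurrent step: new memory mixes old memory with new input."""
    return math.tanh(w_hidden * hidden + w_input * x)

hidden = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:           # a blip, then silence
    hidden = rnn_step(hidden, x, w_hidden=0.5, w_input=1.0)
    print(round(hidden, 3))               # watch the memory of the blip decay
```

LSTMs add learned gates that decide what to keep, so important blips can survive many steps instead of washing out.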
Language & Transformers
These lessons power the language revolution. We turn words into math and teach models to track context and meaning as they read.
Question we are chasing: How can a model capture word meaning, hold onto context, and generate fluent language one token at a time?
#14
Embeddings & Word2Vec
How words become meaningful vectors
Learn how models map words into a space where similar meanings land close together.
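"Close together" is usually measured with cosine similarity. Real embeddings have hundreds of learned dimensions; these 3-D vectors are hand-made purely to illustrate the geometry:

```python
import math

def cosine_similarity(u, v):
    """How aligned two vectors are (1 = same direction, 0 = unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

vectors = {                       # toy "embeddings", invented by hand
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}
print(cosine_similarity(vectors["king"], vectors["queen"]))  # near 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```

Word2Vec's contribution is learning such vectors automatically, by rewarding words that appear in similar contexts for pointing in similar directions.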
#15
Attention & Transformers
How models decide what to focus on
See how attention lets every word look around the sentence and pull in just the context it needs.
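For a single query, that "looking around" is scaled dot-product attention: score the query against every key, softmax the scores into weights, then blend the values. A minimal sketch with invented toy vectors:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # softmax: turn scores into positive weights that sum to 1
    exps = [math.exp(s - max(scores)) for s in scores]
    weights = [e / sum(exps) for e in exps]
    # blend the value vectors by those weights
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return blended, weights

keys   = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out, weights = attention([0.0, 2.0], keys, values)
print(weights)   # the query matches the second key hardest, so it gets the most weight
```

A transformer runs this for every token against every other token, in parallel, many layers deep.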
#16
Large Language Models
Predicting the next token at scale
Understand how transformer models train on massive text and generate fluent language one token at a time.
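An LLM learns next-token probabilities with a transformer over billions of tokens; this bigram counter is the same "predict what comes next" idea in deliberate miniature (the ten-word corpus is invented):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)          # word -> counts of what follows it
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" — seen twice after "the", vs. "mat" once
```

Swap the counter for a transformer and the ten words for the internet, and generation is still this loop: predict a token, append it, predict again.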
Advanced Topics
Here we peek over the horizon: agents that learn by doing, and systems that train together without spilling secrets.
Question we are chasing: How can AI learn from its own experience and still respect privacy and real-world limits?
#17
Reinforcement Learning
Learning by trying, failing, and improving
Watch an agent learn by chasing rewards and dodging penalties—no answer key required.
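Here's tabular Q-learning on a tiny invented world: five states in a row, with a reward for reaching the right end. The agent starts knowing nothing and stumbles around; the update rule slowly spreads the reward's value backward along the path.

```python
import random

N_STATES, GOAL = 5, 4                       # states 0..4; reach 4 for +1
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = random.Random(0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        action = (rng.choice([0, 1]) if rng.random() < epsilon
                  else max((0, 1), key=lambda a: q[state][a]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning: move toward reward + discounted best future value
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print([round(max(qs), 2) for qs in q])   # values climb as you near the goal
```

No answer key anywhere — just reward, trial, and the update rule passing credit backward one step at a time.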
#18
Federated Learning
Training together without sharing raw data
Explore how many devices can train a shared model while keeping personal data safely on each device.
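The core trick, federated averaging, can be sketched in a few lines: each device fits its own model on data that never leaves it, and only the model parameters travel to the server to be averaged (devices, data, and the through-the-origin model are all invented for illustration):

```python
def local_slope(points):
    """Least-squares slope through the origin: w = sum(xy) / sum(x*x)."""
    return (sum(x * y for x, y in points)
            / sum(x * x for x, _ in points))

# Three devices, each privately holding noisy samples of roughly y = 2x:
device_data = [
    [(1, 2.1), (2, 3.9)],
    [(1, 1.8), (3, 6.3)],
    [(2, 4.2), (4, 7.8)],
]

local_models = [local_slope(data) for data in device_data]          # trained locally
global_model = sum(local_models) / len(local_models)                # averaged centrally
print(f"global slope = {global_model:.2f}  (raw data never left the devices)")
```

Real systems repeat this round many times, weight devices by data size, and add privacy protections on top — but the "share parameters, not data" shape is exactly this.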