You don't need a CS degree or a math PhD to learn AI. You do need a plan. Most people who try to learn AI on their own bounce between random tutorials, get overwhelmed by the math, and quit after a few weeks. This roadmap gives you a clear sequence — what to study, when to study it, and which courses to use at each stage. It's based on the paths that actually worked for self-taught ML engineers, not what looks good on a curriculum diagram.

The plan is broken into five phases, each building on the last. You'll start by building intuition about what AI is, then pick up the Python and math you need, learn classical machine learning, move into deep learning, and finally specialize in a subfield that matches your career goals. Each phase includes specific course recommendations so you're never guessing what to do next.
Phase 1: Build Intuition (Month 1)
Before you touch any code or math, spend the first few weeks understanding what AI actually is and what it can do. This sounds obvious, but skipping this step is why many people burn out — they dive into gradient descent without knowing why they're learning it. Your goal in Phase 1 is to build a mental map of the field so you know where everything fits. Think of this phase as reconnaissance. You're figuring out what areas of AI exist (computer vision, NLP, robotics, recommender systems), what tools people use (Python, PyTorch, TensorFlow), and what problems AI can actually solve versus what's still hype. This context makes every subsequent phase more efficient because you'll know where you're headed.
During this phase, pay attention to which areas excite you. Do you find yourself fascinated by image recognition? Language models? Recommendation engines? Your natural curiosity is a signal — it points toward the specialization you'll choose in Phase 5. Keep a simple note of topics that grab your attention. You don't need to commit to anything yet, but this early self-awareness saves time later.
Start with the big picture
Take Elements of AI from the University of Helsinki. It's free, non-technical, and takes about 30 hours. Teemu Roos explains what AI is, how machine learning works at a conceptual level, and what the societal implications are. You won't write any code, and that's the point — you need to understand the forest before examining individual trees.
If you want to go a step further, Andrew Ng's AI For Everyone on Coursera covers similar ground with more focus on how AI fits into business. It's about 6 hours and gives you vocabulary to talk about AI projects intelligently. Between these two courses, you'll have a solid conceptual foundation.
Try the fastest hands-on intro
Once you've gotten the big picture, spend an afternoon on Kaggle's Intro to Machine Learning. Dan Becker walks you through building a basic prediction model in about 3 hours. You won't understand everything that's happening under the hood — that's fine. The goal is to see the end-to-end workflow: data in, model trained, predictions out. It makes the rest of your learning feel more concrete because you've already seen the finish line.
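That end-to-end loop can be sketched in a few lines of scikit-learn. The house-price numbers below are invented purely for illustration, but the shape of the code mirrors what you'll see in the Kaggle course:

```python
# A minimal sketch of the workflow: data in, model trained, predictions out.
# The features and prices here are made-up toy numbers.
from sklearn.tree import DecisionTreeRegressor

# "Data in": square footage and bedroom count for five houses, with sale prices
X = [[900, 2], [1200, 3], [1500, 3], [2000, 4], [2500, 4]]
y = [150_000, 200_000, 240_000, 310_000, 380_000]

# "Model trained": fit a decision tree to the examples
model = DecisionTreeRegressor(random_state=0)
model.fit(X, y)

# "Predictions out": estimate the price of an unseen 1400 sq ft, 3-bed house
prediction = model.predict([[1400, 3]])
print(prediction)
```

Don't worry about what `DecisionTreeRegressor` does internally yet; the point is that three steps (load data, fit, predict) form the skeleton of nearly every ML project you'll ever build.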
Phase 2: Python and Math Basics (Month 2)
You need two things before you can do serious ML work: Python fluency and enough math to follow along with ML courses. You don't need to become a math expert — you need to be comfortable enough that equations don't make you freeze.
Python for data work
If you already know Python, skip this. If not, Jose Portilla's Python for Data Science and Machine Learning Bootcamp on Udemy covers Python basics plus the data science stack: NumPy, Pandas, Matplotlib, and scikit-learn. It goes on sale regularly for $15-20. Focus on the NumPy and Pandas sections — these libraries are the lingua franca of ML in Python, and you'll use them every day. Specifically, make sure you can create and manipulate NumPy arrays, filter and group Pandas DataFrames, and plot basic charts with Matplotlib. These three skills cover about 80% of the Python you'll use in ML work.
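As a quick self-check for those three skills, make sure code like the following reads naturally to you. The data is toy data invented for the example:

```python
# Self-check for the three core skills: NumPy arrays, Pandas filtering and
# grouping, and a basic Matplotlib chart. All data here is made up.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# 1. Create and manipulate a NumPy array
a = np.arange(12).reshape(3, 4)   # 3x4 array holding 0..11
col_means = a.mean(axis=0)        # mean of each column: [4., 5., 6., 7.]

# 2. Filter and group a Pandas DataFrame
df = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen", "Bergen"],
    "temp": [2.0, 4.0, 6.0, 8.0],
})
warm = df[df["temp"] > 3]                         # boolean filtering
mean_by_city = df.groupby("city")["temp"].mean()  # grouping and aggregating

# 3. Plot a basic chart with Matplotlib
fig, ax = plt.subplots()
ax.plot(a[0], col_means, marker="o")
ax.set_xlabel("x")
ax.set_ylabel("column mean")
fig.savefig("check.png")
```

If any of the three blocks feels foreign, that's the section of the course to spend extra time on.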
Don't spend more than 3-4 weeks on Python. You don't need to master object-oriented programming or web frameworks. You need to be comfortable writing functions, working with arrays, and manipulating DataFrames. That's it for now.
Math foundations
For math, take Luis Serrano's Mathematics for Machine Learning and Data Science Specialization on Coursera. It covers linear algebra, calculus, and probability — the three pillars of ML math — specifically in the context of machine learning. Serrano uses visual explanations and real ML examples instead of abstract proofs, which makes the material stick.
If you prefer a more rigorous approach to linear algebra specifically, Gilbert Strang's Linear Algebra on MIT OpenCourseWare is the classic. It's more math than ML, but Strang is one of the best math lecturers alive, and linear algebra is the single most important math skill for ML. Fair warning: this course is a full MIT semester, so it's a bigger time investment.
Don't try to learn all the math before moving on. Get comfortable with matrix operations, derivatives, chain rule, and basic probability distributions, then start Phase 3. You'll fill in gaps as you encounter them — learning math in context is more effective than studying it in isolation.
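That checklist translates directly into a few lines of NumPy. The matrix, the function, and the distribution below are arbitrary examples, not anything from the courses:

```python
# The Phase 2 math checklist in code: matrix operations, a derivative via
# the chain rule, and a probability distribution. All values are arbitrary.
import numpy as np

# Matrix operations: a 2x2 matrix times a vector
A = np.array([[2.0, 0.0], [1.0, 3.0]])
v = np.array([1.0, 2.0])
Av = A @ v  # matrix-vector product: [2., 7.]

# Derivatives and the chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x).
# Example: h(x) = (x^2 + 1)^3, so h'(x) = 3*(x^2+1)^2 * 2x
def h(x):
    return (x**2 + 1) ** 3

x = 1.5
analytic = 3 * (x**2 + 1) ** 2 * 2 * x            # chain rule by hand
numeric = (h(x + 1e-6) - h(x - 1e-6)) / 2e-6      # central difference check
# analytic and numeric agree to several decimal places

# Basic probability: sample from a normal distribution and check its shape
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)
# the sample mean is close to 0 and the sample std close to 1
```

If you can predict roughly what each line produces before running it, you have enough math to start Phase 3.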
Phase 3: ML Fundamentals (Months 3-4)
Now you're ready to learn machine learning properly. This phase is the core of your education, and you should take it seriously — the fundamentals you build here determine how quickly you can learn everything else. Many people rush through this phase to get to deep learning. Resist that urge. A strong understanding of linear regression, logistic regression, decision trees, and ensemble methods will serve you better in your career than a shallow understanding of transformers. Most real-world ML problems are still solved with classical methods, and interviewers at top companies will test your understanding of these fundamentals.
Your main course
Take Andrew Ng's Machine Learning Specialization on Coursera. This three-course sequence covers linear regression, logistic regression, neural networks, decision trees, clustering, recommender systems, and reinforcement learning. It uses Python and moves at a measured pace with clear explanations. Complete all the programming assignments — don't just watch the videos.
Alternatively, if you prefer a faster pace and more practical approach, Google's ML Crash Course covers similar ground in about 15 hours. It's free, interactive, and designed for people who already know how to code. The interactive visualizations help build intuition about how models learn. You could do both — Google's course first for the quick overview, then Ng's specialization for the depth.
Practice with Kaggle
While taking your main course, start working through Kaggle's beginner competitions. The Titanic survival prediction and House Prices datasets are the standard starting points. Don't worry about winning — focus on applying what you're learning. Read other people's notebooks, try their techniques, and iterate on your models. This is where the learning really happens — courses teach you the concepts, but Kaggle teaches you the workflow of a real ML project.

After you're comfortable with Kaggle notebooks, follow up with Kaggle's Intro to Deep Learning and Intermediate Machine Learning micro-courses. They're short (a few hours each) and focus on practical skills like handling missing data, feature engineering, and cross-validation — things that matter in real projects but get glossed over in lecture courses.
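Two of those practical skills, imputing missing values and scoring with cross-validation, can be sketched together in scikit-learn. The six-row dataset below is invented for illustration:

```python
# Sketch of two practical skills: filling in missing values and scoring a
# model with cross-validation. The toy data here is made up.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy feature matrix with one missing value (np.nan in the second column)
X = np.array([
    [1.0, 5.0],
    [2.0, np.nan],
    [3.0, 8.0],
    [4.0, 2.0],
    [5.0, 7.0],
    [6.0, 4.0],
])
y = np.array([11.0, 22.0, 33.0, 44.0, 55.0, 66.0])

# Pipeline: fill missing values with the column mean, then fit a regression.
# Keeping the imputer inside the pipeline means each cross-validation fold
# computes its fill value from training data only, avoiding leakage.
model = make_pipeline(SimpleImputer(strategy="mean"), LinearRegression())

# 3-fold cross-validation gives a more honest score than a single split
scores = cross_val_score(model, X, y, cv=3, scoring="r2")
print(scores.mean())
```

The key habit to build is the one in the comment: preprocessing belongs inside the pipeline, so your validation scores reflect what the model will see in production.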
Phase 4: Deep Learning (Months 5-6)
With ML fundamentals solid, it's time to tackle deep learning. This is where AI gets exciting — neural networks, computer vision, NLP, generative models. You have two excellent options here, and ideally you'll do both. Deep learning has a steeper learning curve than classical ML. You'll need to understand gradient descent at a deeper level, learn about activation functions and loss functions, and develop intuition for things like learning rates and batch sizes. Don't be discouraged if your first neural network doesn't work — debugging neural networks is a skill that takes time to develop.
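The role of the learning rate is easiest to see on the simplest possible case: gradient descent on a one-parameter quadratic loss, chosen here purely for illustration:

```python
# Gradient descent on a one-parameter loss, showing how the learning rate
# changes behavior. The loss function is a toy example with minimum at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # derivative of the loss, computed by hand

def descend(lr, steps=50, w0=0.0):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)   # the core update rule behind all deep learning
    return w

w_good = descend(lr=0.1)    # converges close to 3
w_slow = descend(lr=0.001)  # barely moves in 50 steps
w_bad = descend(lr=1.1)     # diverges: each step overshoots further
```

The same three failure modes (converge, crawl, explode) show up when training real neural networks; the only difference is that the loss surface has millions of parameters instead of one.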
Option A: Top-down with fast.ai
Jeremy Howard's Practical Deep Learning for Coders takes the practical-first approach. You'll train image classifiers, text models, and tabular models from lesson one, then gradually learn the theory behind them. Howard's teaching philosophy is that you should know what's possible before worrying about why it works. If you learn best by building things, start here.
Option B: Bottom-up with Andrew Ng
The Deep Learning Specialization takes the theory-first approach. You'll implement neural networks from scratch, understand backpropagation mathematically, and build up to CNNs and sequence models. If you learn best by understanding the mechanics first, start here. The five-course structure gives you a clear path, and the assignments are well-designed.
The ideal approach is to do fast.ai first (it's free and faster), then the Deep Learning Specialization to fill in the theoretical foundations. After both, you'll be able to build models quickly and understand what's happening under the hood. If you can only pick one, choose based on your learning style — builders go fast.ai, theorists go Ng.
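To make "implement backpropagation from scratch" concrete, here is what it looks like at its smallest scale: one hidden layer, hand-derived gradients, and a finite-difference check. All the numbers are random and exist only for illustration:

```python
# A one-hidden-layer network with manual backpropagation, verified against
# a numerical gradient. Data and weights are random toy values.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 inputs, 3 features each
t = rng.normal(size=(4, 1))   # regression targets
W1 = rng.normal(size=(3, 5))  # input-to-hidden weights
W2 = rng.normal(size=(5, 1))  # hidden-to-output weights

def forward(W1, W2):
    h = np.tanh(x @ W1)                 # hidden layer with tanh activation
    y = h @ W2                          # linear output layer
    return h, y, np.mean((y - t) ** 2)  # mean squared error loss

# Backward pass: apply the chain rule layer by layer
h, y, loss = forward(W1, W2)
dy = 2 * (y - t) / y.size       # dL/dy
dW2 = h.T @ dy                  # dL/dW2
dh = dy @ W2.T                  # dL/dh
dW1 = x.T @ (dh * (1 - h**2))   # dL/dW1, using tanh'(z) = 1 - tanh(z)^2

# Gradient check: nudge one weight and compare to the analytic gradient
eps = 1e-6
W1_nudged = W1.copy()
W1_nudged[0, 0] += eps
numeric = (forward(W1_nudged, W2)[2] - loss) / eps
# numeric should closely match dW1[0, 0]
```

The gradient check at the end is the habit worth keeping: whenever you derive a gradient by hand, a finite-difference comparison catches sign errors and dropped terms immediately.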
Supplement with MIT 6.S191
Alexander Amini's MIT 6.S191 is a great supplement during this phase. It covers deep learning fundamentals in about 10 lectures, with a different perspective than Ng or Howard. Watching the same concepts explained by different instructors solidifies your understanding. It's free and can be completed in a weekend.
Phase 5: Specialize (Months 7+)
After Phase 4, you have a solid ML/DL foundation. Now it's time to pick a direction. The AI field is broad, and trying to learn everything at once is a mistake. Choose one specialization based on your interests and career goals, and go deep. The three most in-demand specializations right now are NLP (driven by the LLM boom), computer vision (autonomous vehicles, medical imaging, augmented reality), and MLOps (deploying and maintaining models in production). Each has a distinct career path, and the courses below will get you started in any of them.
Natural Language Processing
NLP is where the industry momentum is right now, driven by large language models. If you want to work with text data, build chatbots, create search systems, or fine-tune LLMs, this is your path. Start with the Hugging Face NLP Course — it's free, practical, and teaches you to use the Transformers library, which is the standard tool for working with modern NLP models. You'll learn tokenization, fine-tuning pre-trained models, and building NLP pipelines. Then go deeper with Stanford's CS224N from Christopher Manning if you want to understand the theory behind transformers and attention mechanisms. CS224N is graduate-level and math-heavy, but the payoff is a real understanding of how LLMs work internally.
For a practical angle on LLMs, take Generative AI with Large Language Models on Coursera, which covers transformer architecture, fine-tuning, and RLHF. Follow that with ChatGPT Prompt Engineering for Developers and LangChain for LLM Application Development to learn how to build applications on top of LLMs.
Computer Vision
If you want to work with images and video, Stanford's CS231N taught by Fei-Fei Li is the definitive course. It covers image classification, object detection, segmentation, and generative models. The assignments have you implement key architectures from scratch in Python and NumPy. It's challenging — expect to spend 15+ hours per week — but the depth of understanding you gain is unmatched. Computer vision jobs exist in autonomous driving (Waymo, Tesla, Cruise), medical imaging (detecting tumors, analyzing X-rays), augmented reality (Apple, Meta), satellite imagery analysis, and manufacturing quality control. It's one of the most mature subfields of AI with clear career paths.
MLOps and Production ML
If you want to focus on deploying and maintaining ML systems (a skill set in high demand), take the MLOps Specialization by Andrew Ng on Coursera, followed by Full Stack Deep Learning from Sergey Karayev and Josh Tobin. These courses cover the 90% of ML work that isn't model training: data pipelines, model serving, monitoring, and iteration. MLOps engineers are among the highest-paid people in AI because most companies have far more models that need deploying and maintaining than they have people who know how to do it. If you enjoy systems engineering and infrastructure work, these skills are both lucrative and scarce.
Common Mistakes to Avoid
- Spending months on math before touching ML. You need basic math, not a math degree. Learn enough to get started, then fill gaps as you go.
- Watching courses passively without doing the assignments. ML is a skill, not a body of knowledge. You have to write code and train models to learn it.
- Collecting certificates instead of building projects. After your first 1-2 certificates, shift your energy to building things. A GitHub portfolio of projects speaks louder than a LinkedIn wall of certificates.
- Trying to learn everything at once. Pick one specialization after the fundamentals and go deep. You can always branch out later.
- Skipping classical ML to jump straight to deep learning. Linear regression, decision trees, and ensemble methods are still used constantly in industry. Don't skip them.
- Studying alone without any community. Join the fast.ai forums, a Kaggle discussion group, or a local ML meetup. Having people to ask questions and share progress with makes a huge difference in motivation.
- Giving up when you hit a wall. Everyone gets stuck on backpropagation, probability, or debugging model training at some point. It's normal. Take a break, try a different explanation, and come back. The confusion is part of the process.
Frequently Asked Questions
Do I need to be good at math to learn AI?
You need to be comfortable with basic math, but you don't need to be a math prodigy. The key areas are linear algebra (matrix operations, vectors), calculus (derivatives, chain rule), and probability (distributions, Bayes' theorem). If you can follow the Mathematics for Machine Learning specialization on Coursera, you have enough math. Many working ML engineers will tell you they learned the math alongside the ML, not before it.
How much time per week should I dedicate?
A realistic pace is 10-15 hours per week. At that rate, you can complete this roadmap in about 9-12 months. If you can do 20+ hours per week (e.g., between jobs or during a summer), you could compress it to 5-6 months. Less than 5 hours per week makes it hard to maintain momentum — the concepts build on each other and you'll spend too much time re-learning things you forgot.
Can I skip Phase 1 if I already know what AI is?
If you already understand the difference between supervised and unsupervised learning, know what a neural network is at a high level, and have a sense of where different AI techniques are used in practice, yes — skip straight to Phase 2. If you're unsure, spend a few hours on Elements of AI to confirm. It's better to spend a week confirming your foundations than to discover gaps months later.
Should I use a laptop or do I need a GPU?
A laptop is fine through Phase 3. For deep learning in Phase 4 and beyond, you'll need GPU access for training larger models. Don't buy a GPU — use Google Colab (free tier gives you limited GPU access) or Kaggle notebooks (free GPU for 30 hours per week). If you need more, Colab Pro at $10/month is enough for learning. Only invest in hardware if you're training models regularly for work or research.
Is this roadmap still relevant with AI tools like ChatGPT writing code?
Yes, more than ever. AI tools can write boilerplate code, but they can't replace understanding. When a model isn't working, you need to know whether the issue is your data, your architecture, your loss function, or your training procedure. LLMs are excellent at speeding up your workflow, but they're not a substitute for understanding what you're building. The people who benefit most from AI coding tools are the ones who already understand the fundamentals.