You want to break into AI but the sheer number of courses, tutorials, and bootcamps makes it hard to know where to start. This guide lays out a 36-week learning path that takes you from zero programming knowledge to job-ready AI practitioner. Every course recommended here is either free or under $80, and each one was chosen because it teaches skills employers actually ask for. No fluff, no detours into theory you will never use. Just a concrete week-by-week plan built around the best available courses in 2026.
Stage 1: Foundations (Weeks 1-4)
Before you touch any machine learning library, you need two things: basic Python fluency and enough math to understand what algorithms are doing under the hood. Skipping foundations is the number one reason people stall out at the intermediate level. Four weeks is enough to get both pieces in place if you put in 10-15 hours per week.
Weeks 1-2: Python for Data Science
Start with Python for Data Science and Machine Learning Bootcamp. Jose Portilla's course covers Python fundamentals, NumPy, Pandas, and Matplotlib in a hands-on style. You do not need prior coding experience. Spend about 12-15 hours per week and focus on the exercises rather than just watching videos. By the end of week two, you should be comfortable loading a CSV, cleaning data, and making basic plots.
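As a checkpoint for that week-two goal, here is a minimal pandas sketch of all three skills. The inline CSV string stands in for a real file (in practice you would call `pd.read_csv("yourfile.csv")`), and the column names are invented for illustration:

```python
import io

import matplotlib
matplotlib.use("Agg")  # render plots off-screen; drop this line in a notebook
import matplotlib.pyplot as plt
import pandas as pd

# Inline CSV stands in for a real file on disk.
raw = io.StringIO(
    "region,units,price\n"
    "north,10,2.5\n"
    "south,,3.0\n"
    "north,7,2.5\n"
    "south,12,\n"
)

# Loading: read the CSV into a DataFrame.
df = pd.read_csv(raw)

# Cleaning: drop rows with missing values (one strategy among several).
clean = df.dropna()

# Plotting: total units sold per region as a bar chart.
totals = clean.groupby("region")["units"].sum()
totals.plot(kind="bar", title="Units per region")
plt.savefig("units_per_region.png")
```

If each of these steps makes sense to you, and you could adapt them to a new dataset, you are ready for the math weeks.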
Weeks 3-4: Math Essentials
Take Mathematics for Machine Learning from Imperial College London on Coursera. Focus on the linear algebra and multivariate calculus modules. You do not need to finish the PCA module yet. Simultaneously, watch the first 10 lectures of MIT 18.06 Linear Algebra by Gilbert Strang. His geometric intuitions for matrix operations will save you hours of confusion later. Budget 10-12 hours per week across both resources.
At the end of Stage 1, you should be able to: write Python scripts that manipulate dataframes, explain what a dot product and matrix multiplication represent, and take partial derivatives of simple functions. If any of these feel shaky, spend an extra week here. Rushing past foundations always costs more time later.
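For the second and third checkpoints, a quick NumPy self-test (the numbers here are arbitrary):

```python
import numpy as np

# Dot product: sum of elementwise products; geometrically, it measures
# how strongly two vectors point in the same direction.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
dot = a @ b  # 1*4 + 2*5 + 3*6 = 32

# Matrix multiplication: each output entry is the dot product of a row
# of the left matrix with a column of the right one. This matrix doubles
# the second coordinate of any vector it multiplies.
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])
Mv = M @ v  # [3, 8]

# Partial derivative of f(x, y) = x**2 * y with respect to x at (3, 2),
# approximated numerically; the analytic answer is 2*x*y = 12.
f = lambda x, y: x**2 * y
h = 1e-6
df_dx = (f(3 + h, 2) - f(3, 2)) / h
```

If you can predict each result before running the code, and explain why, the foundations are in place.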
Stage 2: Core Machine Learning (Weeks 5-12)
This is where you learn the bread and butter of ML: regression, classification, clustering, decision trees, ensemble methods, and model evaluation. You will spend eight weeks here because these fundamentals underpin everything that follows, including deep learning and generative AI.
Weeks 5-8: Your Main ML Course
Enroll in the Machine Learning Specialization by Andrew Ng on Coursera. This three-course specialization, updated in 2022, uses Python instead of the original's Octave, which makes it far more practical. Course 1 covers supervised learning (linear regression and logistic regression). Course 2 covers advanced algorithms (neural networks, decision trees, and tree ensembles like XGBoost). Course 3 covers unsupervised learning, recommender systems, and reinforcement learning basics. Spend 12-15 hours per week and complete every programming assignment.
Weeks 9-10: Practical Kaggle Skills
Now apply what you learned. Complete Kaggle Intro to Machine Learning and Kaggle Intermediate Machine Learning back to back. These are short, focused courses that teach you to use scikit-learn on real datasets. You will learn to handle missing values, categorical variables, pipelines, and cross-validation. Each takes about 6-8 hours. After finishing, enter one active Kaggle competition and submit at least three solutions. Your score does not matter; the process of iterating on a real problem is what teaches you.
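The workflow those courses teach, imputation, categorical encoding, a pipeline, and cross-validation, fits in one short scikit-learn sketch. The toy dataset below is invented for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Tiny invented dataset with a missing value and a categorical column.
X = pd.DataFrame({
    "age": [25, 32, None, 41, 29, 35, 52, 47],
    "city": ["a", "b", "a", "b", "a", "b", "a", "b"],
})
y = np.array([0, 1, 0, 1, 0, 1, 1, 1])

# Numeric columns get median imputation; categoricals get one-hot encoding.
preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=50, random_state=0)),
])

# Cross-validation scores the whole pipeline, so imputation and encoding
# are refit on each training fold -- no leakage from the validation fold.
scores = cross_val_score(model, X, y, cv=2)
```

The key habit the Kaggle courses drill in is that last point: preprocessing lives inside the pipeline so cross-validation stays honest.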
Weeks 11-12: Feature Engineering and Evaluation
Take Kaggle Feature Engineering to learn techniques like target encoding, mutual information, and feature creation. This is the skill that separates good ML practitioners from beginners. Spend the remaining time revisiting your Kaggle competition entry and improving it using these techniques. Also complete Google's Machine Learning Crash Course as a fast review. Google's course fills gaps with interactive visualizations that help cement concepts like regularization, ROC curves, and embeddings.
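To make target encoding concrete: it replaces each category with the mean of the target for that category. A bare-bones pandas sketch on invented data (real implementations add smoothing and compute the means out-of-fold to avoid leakage, which the Kaggle course covers):

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["a", "a", "b", "b", "b"],
    "bought": [1, 0, 1, 1, 0],
})

# Mean of the target per category...
means = df.groupby("city")["bought"].mean()

# ...mapped back onto the column as a new numeric feature.
df["city_encoded"] = df["city"].map(means)
```

Here "a" becomes 0.5 and "b" becomes 2/3, turning a string column into a signal a model can use directly.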
Stage 3: Deep Learning (Weeks 13-20)
With solid ML fundamentals, you are ready for neural networks. Deep learning is not harder than classical ML; it is just different. You trade interpretability for power, and you need to learn new debugging techniques. Eight weeks gives you time to build genuine understanding rather than just copying tutorial code.
Weeks 13-16: Deep Learning Theory and Practice
You have two strong options here, and you should pick based on your learning style. If you prefer top-down, code-first learning, take fast.ai Practical Deep Learning for Coders. Jeremy Howard teaches you to train state-of-the-art models in the first lesson and then gradually peels back layers of abstraction. If you prefer bottom-up understanding, take the Deep Learning Specialization by Andrew Ng. It starts with the math of a single neuron and builds up to convolutional and recurrent networks. Both are excellent. The fast.ai approach gets you to results faster; the Ng approach gives you more theoretical grounding.
Weeks 17-18: Computer Vision or NLP Focus
Pick one specialization track. For computer vision, watch the first 10 lectures of Stanford CS231n. For natural language processing, take the HuggingFace NLP Course. Both are free. The HuggingFace course is particularly practical because it teaches you the tools you will actually use in production: tokenizers, transformer models, and the HuggingFace ecosystem. Spend 12-15 hours per week.
Weeks 19-20: Generative AI Fundamentals
Take Generative AI with Large Language Models by DeepLearning.AI and AWS. This course covers transformer architecture, training processes, fine-tuning, RLHF, and prompt engineering with real code exercises. Follow it with ChatGPT Prompt Engineering for Developers for a focused look at building applications on top of LLMs. These two courses together take about 20-25 hours and bring you up to speed on the technology driving most AI hiring in 2026.
Stage 4: Specialization (Weeks 21-28)
Now you specialize based on the type of job you want. Pick one track below and stick with it for eight weeks. Trying to learn everything at once is a trap. Employers hire specialists, not generalists who know a little about everything.
Track A: ML Engineer
Take Full Stack Deep Learning to learn experiment tracking, deployment, testing, and monitoring. Follow with MLOps Specialization to understand ML pipelines, model serving, and CI/CD for models. This track prepares you for roles where you build and deploy models in production. Spend 15 hours per week.
Track B: AI Application Developer
Take LangChain for LLM Application Development and then build three projects: a RAG chatbot, an AI agent with tool use, and a fine-tuned model for a specific domain. This track is where the most job openings are in 2026. Companies need people who can integrate LLMs into products, not just train models from scratch.
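To make the RAG project concrete, the core loop is: embed your documents, retrieve the ones closest to the query, and stuff them into the prompt. The sketch below fakes the embedding step with bag-of-words vectors so it runs with no API keys; in a real project you would swap in an embedding model and send `prompt` to an LLM where indicated:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refunds are processed within five business days",
    "shipping takes two weeks for international orders",
    "our office is closed on public holidays",
]
doc_vecs = [embed(d) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(zip(docs, doc_vecs), key=lambda dv: cosine(q, dv[1]),
                    reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("how long do refunds take")
prompt = (f"Answer using this context: {context[0]}\n"
          f"Question: how long do refunds take")
# In a real RAG chatbot, `prompt` would now go to an LLM API call.
```

LangChain wraps each of these steps (embeddings, vector store, retriever, prompt template) in a reusable component, but understanding the bare loop first makes the framework much easier to debug.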
Track C: Research-Oriented
Take Stanford CS229 for rigorous ML theory, then NYU Deep Learning with Yann LeCun for cutting-edge architecture understanding. Complement with fast.ai Part 2 which covers building a deep learning framework from scratch. This track suits people aiming for research labs or PhD programs.
Stage 5: Portfolio and Job Prep (Weeks 29-36)
Courses alone will not land you a job. You need a portfolio that proves you can solve real problems. Spend eight weeks building three polished projects and preparing for interviews.
Weeks 29-32: Build Three Portfolio Projects
Each project should have: a clear problem statement, a dataset you collected or curated yourself, a trained model with documented experiments, a deployed demo (Streamlit, Gradio, or a simple API), and a well-written README. One project should use classical ML, one should use deep learning, and one should involve generative AI or LLMs. Put all three on GitHub with clean code and documentation.
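For the deployed-demo requirement, Streamlit and Gradio each need only a few lines; if you take the simple-API route instead, even the standard library is enough for a first version. A hedged sketch of a prediction endpoint, where `predict` is a placeholder for your real trained model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features: dict) -> float:
    # Placeholder for a real model loaded from disk (e.g. via joblib).
    return 2.0 * features.get("x", 0.0) + 1.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the model, return the prediction as JSON.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def run(port: int = 8000):
    # Serve until interrupted; POST JSON like {"x": 2.0} to localhost:8000.
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

In a portfolio project you would likely replace this with FastAPI or a Streamlit front end, but being able to explain a request/response cycle this simply is exactly the kind of thing interviewers probe.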
Weeks 33-34: Certifications (Optional)
If you are targeting corporate roles, consider Google AI Essentials or Microsoft AI Fundamentals (AI-900). These certifications are most useful for getting past HR filters at large companies. They are not substitutes for skills, but they signal familiarity with specific platforms. Skip them if you are targeting startups; startups care about your GitHub, not your certificates.
Weeks 35-36: Interview Preparation
Practice ML system design questions, review statistics fundamentals, and prepare to explain every project in your portfolio in detail. Be ready to whiteboard a simple model training pipeline and discuss tradeoffs between approaches. Mock interviews with peers are more valuable than any course at this stage.
Timeline Summary
- Weeks 1-4: Python and math foundations with Python for Data Science and Machine Learning Bootcamp, Mathematics for Machine Learning, and MIT 18.06 Linear Algebra
- Weeks 5-12: Core ML with the Machine Learning Specialization, Kaggle Intro to Machine Learning, Kaggle Intermediate Machine Learning, Kaggle Feature Engineering, and Google's Machine Learning Crash Course
- Weeks 13-20: Deep learning with fast.ai Practical Deep Learning for Coders or the Deep Learning Specialization, plus the HuggingFace NLP Course or Stanford CS231n, plus Generative AI with Large Language Models
- Weeks 21-28: Specialization track (ML Engineer, AI App Developer, or Research)
- Weeks 29-36: Portfolio projects, optional certification, and interview prep
Total time commitment: 10-15 hours per week for 36 weeks, or roughly 360-540 hours. This is equivalent to one semester of full-time study. If you can dedicate 30+ hours per week, you can compress this to about 18 weeks. If you can only do 5 hours per week, stretch it to 72 weeks. The order matters more than the speed.
Frequently Asked Questions
Can I skip the math foundations if I already know Python?
You can skip the Python course but do not skip the math. Linear algebra and calculus show up constantly when debugging model behavior, reading papers, and understanding why certain architectures work. Take Mathematics for Machine Learning even if you studied math in college. The ML-specific framing is what matters.
Do I need a GPU to follow this path?
Not for Stages 1-2. For Stages 3-5, you need GPU access for training deep learning models. Google Colab's free tier is sufficient for most course exercises. If you want to train larger models, Colab Pro at $10/month or a Kaggle notebook with free GPU hours will cover you. You do not need to buy a gaming PC.
Should I take Andrew Ng's courses or fast.ai first?
For the ML Specialization in Stage 2, Andrew Ng is the clear choice because it covers classical ML thoroughly. For deep learning in Stage 3, pick based on your style: fast.ai if you want to build things immediately and learn theory as needed, or the Deep Learning Specialization if you want to understand every equation before writing code. Both paths lead to the same place.
What if I want to focus on generative AI and skip classical ML?
You will hit a wall. Generative AI roles require understanding of attention mechanisms, loss functions, optimization, and evaluation metrics. All of those concepts come from classical ML and deep learning foundations. Companies hiring for GenAI roles test these fundamentals in interviews. Follow the stages in order.
How do I know when I am job-ready?
You are job-ready when you can: (1) take a business problem and frame it as an ML task, (2) select and train an appropriate model, (3) evaluate it rigorously, (4) deploy it so others can use it, and (5) explain every decision you made. If your portfolio projects demonstrate all five, start applying. Do not wait until you feel 100% ready because that day never comes.