
AI Roadmap for Beginners — What to Learn First

Cursarium Team · February 5, 2026 · 12 min read

Most AI roadmaps you'll find online are overwhelming. They list 47 topics, 12 math subjects, and 30 courses, then tell you to "just start." That's not a roadmap — it's a reading list. This guide gives you an actual sequence: what to learn first, what to skip, and when to move on. It's designed for people who know basic programming or are willing to learn, and who want to understand AI well enough to build real things — not just talk about ChatGPT at dinner parties.

The Big Picture: What AI Actually Is

Before you start learning tools and frameworks, you need a mental model of what you're getting into. AI is a broad field. Machine learning is a subset of AI. Deep learning is a subset of machine learning. Generative AI (ChatGPT, Midjourney, Stable Diffusion) is an application of deep learning. Most of the exciting stuff you see in the news uses deep learning, but you need to understand regular machine learning first.

Here's the simplest way to think about it: traditional programming is writing explicit rules ("if temperature > 100, turn on fan"). Machine learning is giving the computer examples and letting it figure out the rules. You provide data — thousands of labeled examples — and the algorithm finds patterns. That's it. Everything else is details about how to find those patterns more efficiently, with less data, or for more complex problems.

If you want a gentle, non-technical overview before diving in, the Elements of AI course from the University of Helsinki is excellent. It takes about 15 hours, requires zero coding, and gives you a solid conceptual foundation. Andrew Ng's AI for Everyone on Coursera covers similar ground with more business context.

Step 1: Get Comfortable with Python

Python is the language of AI. Not because it's the fastest or the best-designed language, but because the entire ecosystem — TensorFlow, PyTorch, scikit-learn, Hugging Face, pandas, NumPy — is built around it. You could learn AI concepts in any language, but you'd be fighting the tooling the entire time.

You don't need to be a Python expert. You need to be comfortable with variables, loops, functions, lists, dictionaries, and basic file I/O. You should understand how to install packages with pip, use Jupyter notebooks, and read error messages without panicking. That's the bar.

How long this takes

If you've never programmed: 4–6 weeks at 1–2 hours per day. If you know another language (Java, JavaScript, C++): 1–2 weeks to pick up Python syntax. The Python for Data Science and Machine Learning course on Udemy by Jose Portilla is a popular starting point that covers Python basics alongside the data science libraries you'll need next.

What to practice

  • Write scripts that read CSV files and compute basic statistics
  • Use NumPy to create arrays and do matrix operations
  • Use pandas to load, filter, and transform datasets
  • Plot data with matplotlib — histograms, scatter plots, line charts
  • Build a small project: a script that analyzes a dataset you care about (sports stats, music, weather data)
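The first few practice items above fit in a dozen lines. Here's a sketch using a tiny in-memory dataset in place of a CSV you'd normally load with `pd.read_csv` (the column names are made up for illustration):

```python
import numpy as np
import pandas as pd

# Stand-in for a CSV file; with a real file you'd call pd.read_csv("data.csv")
df = pd.DataFrame({
    "city": ["Oslo", "Lima", "Oslo", "Lima"],
    "temp_c": [4.0, 19.0, 6.0, 21.0],
})

# Basic statistics with pandas
mean_temp = df["temp_c"].mean()
by_city = df.groupby("city")["temp_c"].mean()

# The same column as a NumPy array, for array-style operations
temps = df["temp_c"].to_numpy()
centered = temps - temps.mean()  # subtract the mean from every entry

print(mean_temp)             # 12.5
print(by_city.to_dict())     # per-city averages
```

If you can write this without looking anything up, you've cleared the Python bar for the rest of the roadmap.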

Step 2: Learn Basic Math (Only What You Need)

This is where most beginners either panic or waste months studying math they'll never use. Let's be precise about what you actually need.

Linear algebra (2–3 weeks)

You need to understand vectors, matrices, matrix multiplication, and transpose. That's about 80% of the linear algebra you'll use in ML. You don't need proofs or abstract vector spaces. Gilbert Strang's MIT Linear Algebra course is the gold standard, but it's more than you need right now. For a faster path, watch the first 10 lectures of 3Blue1Brown's "Essence of Linear Algebra" series on YouTube — it's free and builds geometric intuition that textbooks miss.
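All four of those operations map directly onto NumPy, so you can check your understanding as you watch the lectures. A minimal sketch:

```python
import numpy as np

# A 2x3 matrix and a 3-vector
A = np.array([[1, 2, 3],
              [4, 5, 6]])
x = np.array([1, 0, -1])

y = A @ x     # matrix-vector product: each row of A dotted with x
At = A.T      # transpose: rows become columns, shape (2, 3) -> (3, 2)
G = A @ A.T   # matrix-matrix product, shape (2, 2)

print(y)      # [-2 -2]
print(G)      # [[14 32], [32 77]]
```

If you can predict each output's shape and values before running this, you have the 80% that matters.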

Calculus (1–2 weeks)

You need derivatives and partial derivatives. That's it. Specifically, you need to understand that a derivative tells you the rate of change of a function, and that gradient descent uses derivatives to find the minimum of a loss function. You don't need integration, differential equations, or multivariable calculus theorems. Khan Academy's calculus content covers what you need for free.
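That connection between derivatives and gradient descent fits in a few lines. Here's a toy sketch minimizing f(x) = (x − 3)², whose derivative is 2(x − 3):

```python
# Gradient descent on f(x) = (x - 3)**2. The minimum is at x = 3;
# each step moves x a small amount against the derivative.
x = 0.0
lr = 0.1  # learning rate: how far to step each iteration
for _ in range(100):
    grad = 2 * (x - 3)  # derivative of f at the current x
    x -= lr * grad      # step downhill

print(round(x, 4))  # converges to 3.0
```

Real ML does exactly this, except the function has millions of parameters and the derivatives are computed automatically by the framework. Understanding this loop is the calculus payoff.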

Probability and statistics (2 weeks)

You need probability distributions (normal, Bernoulli), conditional probability, Bayes' theorem, mean/median/standard deviation, and the concept of statistical significance. The Mathematics for Machine Learning specialization on Coursera by Imperial College London packages all three math areas into one structured path if you prefer a single course.
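Bayes' theorem in particular is worth computing by hand at least once. A classic worked example (the numbers here are the standard textbook ones, chosen for illustration): a condition with 1% prevalence, a test with 99% sensitivity and a 5% false-positive rate.

```python
# Bayes' theorem: P(condition | positive test)
p_d = 0.01             # P(condition) -- prior
p_pos_given_d = 0.99   # P(positive | condition) -- sensitivity
p_pos_given_not = 0.05 # P(positive | no condition) -- false-positive rate

# Total probability of a positive test
p_pos = p_pos_given_d * p_d + p_pos_given_not * (1 - p_d)

# Bayes' theorem
p_d_given_pos = p_pos_given_d * p_d / p_pos

print(round(p_d_given_pos, 3))  # about 0.167
```

Despite the "99% accurate" test, a positive result means only about a 17% chance of having the condition, because the condition is rare. That counterintuitive result is exactly the kind of statistical reasoning ML work demands.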

Step 3: Your First ML Course

This is the most important step, and your choice of first course matters a lot. The wrong course will bore you, confuse you, or teach you theory without application. The right course will make everything click.

You have three strong options depending on your learning style:

Option A: Andrew Ng's Machine Learning Specialization

The Machine Learning Specialization on Coursera is the updated version of Ng's legendary Stanford course. It covers supervised learning (regression, classification), unsupervised learning (clustering, anomaly detection), and recommender systems. Ng is an exceptional teacher who explains concepts without dumbing them down. The course uses Python (the original used Octave/MATLAB, which was a common complaint). Expect 3–4 months at 5–8 hours per week.

Option B: Google's Machine Learning Crash Course

The Google ML Crash Course is faster — about 15 hours total — and gets you building with TensorFlow quickly. It's less thorough than Ng's course but more practical. Good if you're impatient and want to see results fast. It's also completely free.

Option C: Kaggle's Intro to Machine Learning

The Kaggle Intro to Machine Learning course is the fastest on-ramp — about 3 hours of content. It teaches you to build a basic model using scikit-learn and make submissions to Kaggle competitions. Not deep, but it gives you a quick win and shows you what ML looks like in practice. Follow it up with Kaggle Intermediate ML for more depth.

Our recommendation: If you have the patience, go with Option A (Ng's specialization). It builds the strongest foundation. If you need a fast win to stay motivated, start with Option C (Kaggle), then circle back to Option A.

Step 4: Build Something

This step is not optional, and you should not skip it to take another course. The gap between "I completed a course" and "I can build ML systems" is enormous, and the only way to bridge it is to build something with messy, real-world data where nobody has pre-cleaned the dataset for you.

Project ideas that actually teach you something

  • Predict housing prices using a dataset from Kaggle. Simple regression problem, but you'll learn data cleaning, feature engineering, and model evaluation.
  • Build a text classifier that sorts emails, reviews, or news articles into categories. Teaches you NLP basics and text preprocessing.
  • Create a recommendation system for movies, books, or music. Covers collaborative filtering and matrix factorization.
  • Train an image classifier on a custom dataset you create yourself — take 100 photos of two different objects and classify them. Forces you to deal with data collection.
  • Build a simple chatbot using the OpenAI API or a Hugging Face model. Teaches you API integration and prompt design.
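To make the first idea concrete, here's a sketch of the housing-price project compressed to its skeleton. It uses synthetic data so it runs anywhere; with a real Kaggle dataset you'd load a CSV, clean it, and engineer features before this step:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real housing data: price ~ $100 per square foot, plus noise
rng = np.random.default_rng(0)
sqft = rng.uniform(500, 3000, size=200)
price = 100 * sqft + rng.normal(0, 20_000, size=200)

# The standard ML workflow: split, fit, evaluate on held-out data
X = sqft.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

model = LinearRegression().fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE: ${mae:,.0f}")
```

The split/fit/evaluate pattern here is the same one you'll use for every project on the list; only the data loading, cleaning, and model class change.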

The Kaggle Feature Engineering course is a great companion during this phase — it teaches you how to create better input features for your models, which is often more impactful than choosing a fancier algorithm.

Document everything

Put your projects on GitHub with clear README files. Explain what problem you solved, what data you used, what approaches you tried (including ones that didn't work), and what results you got. This becomes your portfolio. Hiring managers and collaborators will judge you more on how you think through problems than on whether your accuracy score is 0.92 or 0.94.

Step 5: Go Deeper

Once you've completed Steps 1–4 (Python, math basics, first ML course, first project), you've built a real foundation. Now you specialize based on what interests you. Here are the main paths:

Deep learning

Start with the Deep Learning Specialization by Andrew Ng, then move to fast.ai Practical Deep Learning for a top-down, code-first approach. If you want academic rigor, MIT 6.S191 and NYU Deep Learning with Yann LeCun are both excellent and free. For computer vision specifically, look at Stanford CS231n.

Natural language processing

The Hugging Face NLP Course is the best starting point — free, practical, and uses the most popular NLP library. For more depth, Stanford CS224N taught by Chris Manning covers NLP theory and transformer architectures thoroughly.

Generative AI and LLMs

The Generative AI with LLMs course from DeepLearning.AI covers how large language models work under the hood. For building applications on top of LLMs, the LangChain course teaches you the most popular framework for chaining LLM calls. The ChatGPT Prompt Engineering course is a short but useful primer on getting better outputs from language models.

MLOps and production ML

If you want to deploy models rather than just train them, the Full Stack Deep Learning course covers the full lifecycle. The MLOps Specialization on Coursera goes deeper into CI/CD for ML, model monitoring, and pipeline orchestration.

What NOT to Learn First

This is as important as knowing what to learn. Beginners waste enormous amounts of time on topics they don't need yet.

  • Reinforcement learning — Fascinating but not where most AI jobs are. Save it for later unless you specifically want to work on robotics, game AI, or recommendation systems.
  • Advanced mathematics — You don't need measure theory, real analysis, or abstract algebra. If a YouTube comment says "you need to understand the Riemannian manifold structure of the loss landscape," ignore it.
  • Building neural networks from scratch in C++ — Understanding backpropagation conceptually is valuable. Implementing it from raw code is an exercise, not a prerequisite.
  • Every framework — Pick TensorFlow or PyTorch. Don't try to learn both simultaneously. PyTorch is more popular in research; TensorFlow has better production tooling. Either is fine to start.
  • Kaggle competition tricks — Feature stacking, complex ensembles, and leaderboard strategies teach you competition skills, not engineering skills. Focus on building clean, deployable models first.

  • Reading research papers — You're not ready for arXiv yet. Papers assume graduate-level knowledge. Wait until you've completed a deep learning course.

A Realistic Timeline

If you're studying part-time (10–15 hours per week) alongside a job or school, here's a realistic timeline:

  • Month 1–2: Python fundamentals and NumPy/pandas
  • Month 2–3: Essential math (linear algebra, calculus basics, statistics)
  • Month 3–5: First ML course (Andrew Ng's specialization)
  • Month 5–6: First project — build something end-to-end
  • Month 6–9: Specialization area (deep learning, NLP, or gen AI)
  • Month 9–12: Second and third projects, start applying for roles or contributing to open source

That's roughly a year from zero to job-ready if you're consistent. Some people do it in six months studying full-time. Some take 18 months going slower. The key variable isn't intelligence — it's consistency. An hour every day beats eight hours every Saturday.

Don't try to optimize the perfect learning path. Pick the next step from this roadmap, start it today, and adjust as you go. The people who succeed in AI aren't the ones who found the perfect course — they're the ones who kept building after the course ended.

Frequently Asked Questions

Do I need a computer science degree to learn AI?

No. A CS degree helps because it gives you programming fluency, data structures knowledge, and math background. But many successful ML engineers and researchers are self-taught or come from other fields — physics, mathematics, biology, even music. What matters is your willingness to learn Python, work through the math, and build projects. The courses listed in this roadmap cover everything a CS degree would teach you about AI specifically.

Should I learn TensorFlow or PyTorch first?

PyTorch. As of 2026, PyTorch dominates both research and increasingly production environments. It has a more intuitive API, better debugging experience, and more community support. TensorFlow is still relevant — especially for mobile deployment and Google Cloud integration — but PyTorch is the safer first choice. You can always learn TensorFlow later once you understand the concepts.
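To see what "more intuitive API" means in practice, here is autograd, PyTorch's core trick, in a few lines (a toy sketch, not a training loop):

```python
import torch

# Define a tensor that tracks gradients, compute with it, differentiate.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x
y.backward()        # computes dy/dx = 2x + 3 = 7 at x = 2

print(x.grad)  # tensor(7.)
```

Every neural network you train in PyTorch is this same mechanism scaled up: the framework records your computation and differentiates it for you, which is why the gradient-descent intuition from the math step carries over directly.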

How much math do I really need?

Less than you think for getting started, more than you think for going deep. To complete your first ML course and build basic models, you need high school algebra, basic derivatives, and intuitive understanding of vectors and matrices. To understand research papers and design novel architectures, you'll eventually need linear algebra, multivariate calculus, probability theory, and optimization. Start with the minimum and add math as you hit walls.

Can I learn AI on a Chromebook or old laptop?

Yes, with caveats. Google Colab gives you free access to GPUs in the cloud, and Kaggle also provides free GPU notebooks. You can complete most beginner and intermediate courses entirely in the browser. You'll hit limits when training large models or working with big datasets, but by that point you'll know enough to set up a cloud instance on AWS, GCP, or Azure. You don't need to buy an expensive GPU to start.

What if I start a course and it's too hard?

That's normal and it's useful information. If the math is the blocker, go back to the math step in this roadmap and spend a few weeks there. If the programming is the blocker, spend more time on Python basics. If the concepts just aren't clicking, try a different instructor — Andrew Ng, fast.ai's Jeremy Howard, and MIT's Alexander Amini all explain the same concepts in very different ways. Sometimes a different teaching style makes everything click.
