Become an AI Researcher Course – LLM, Math, PyTorch, Neural Networks, Transformers
Welcome to the full course on becoming an AI Researcher. This course guides you step by step, starting with the foundational mathematics essential for understanding modern AI, then moving into PyTorch fundamentals. You will then learn the building blocks of AI, from single neurons to multi-layer neural networks. The course concludes with an in-depth module on Transformers, the architecture underpinning today's Large Language Models (LLMs) and generative AI.
Course from @vukrosic.
Github: https://github.com/vukrosic/become-elite-ai-researcher
Vuk on X: https://x.com/VukRosic99
❤️ Try interactive AI courses we love, right in your browser: https://scrimba.com/freeCodeCamp-AI (Made possible by a grant from our friends at Scrimba)
⭐️ Contents ⭐️
Introduction & Course Overview
– 00:00:00 Welcome & Course Overview
– 00:05:28 Requirements & Setup for the Course
Module 1: Foundational Mathematics for AI Research
– 00:10:48 Math Lesson: Functions (Linear, Quadratic, Cubic, Square Root)
– 00:19:10 Math Lesson: Derivatives (Rate of Change)
– 00:33:19 Math Lesson: Vectors (Magnitude, Dot Product, Normalization)
– 00:46:07 Math Lesson: Gradients (Steepest Ascent/Descent, Partial Derivatives)
– 00:55:03 Math Lesson: Matrices (Multiplication, Transpose, Identity)
– 01:08:39 Math Lesson: Probability (Expected Value, Conditional Probability)
Module 2: PyTorch Fundamentals
– 01:19:19 START: PyTorch Fundamentals & Creating Tensors
– 01:26:03 PyTorch Lesson: Reshaping and Viewing Tensors
– 01:27:48 PyTorch Lesson: Squeezing and Unsqueezing Dimensions
– 01:41:02 PyTorch Lesson: Indexing and Slicing Tensors
– 01:49:55 PyTorch Lesson: Special Tensors (Zeros, Ones, Linspace)
Module 3: Neural Networks
– 01:54:00 START: Coding Neural Networks from Scratch
– 01:54:29 Neural Networks Lesson: Single Neuron (Weights, Bias, Weighted Sum)
– 01:57:11 Neural Networks Lesson: Activation Functions (Sigmoid, ReLU, tanh)
– 02:03:07 Neural Networks Lesson: Multi-Layer Networks & Backpropagation
Module 4: Transformers (for Large Language Models)
– 02:11:59 START: Understanding Transformers for LLMs
– 02:14:14 Transformers Lesson: Attention Mechanism (Query, Key, Value)
– 02:32:39 Transformers Lesson: Self-Attention & Causal Self-Attention
– 02:40:48 Transformers Lesson: Rotary Positional Embeddings (RoPE)
– 02:44:07 Transformers Lesson: Multi-Head Attention
– 02:55:03 Transformers Lesson: Transformer Block (Feed-Forward, Add & Norm)
– 03:04:15 Tokenization (for GPT Architecture)
Conclusion
– 03:06:47 Conclusion & Next Steps
🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual
—
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
