The moment we stopped understanding AI [AlexNet]



Thanks to KiwiCo for sponsoring today’s video! Go to https://www.kiwico.com/welchlabs and use code WELCHLABS for 50% off your first month of monthly lines and/or 20% off your first Panda Crate.

Activation Atlas Posters!
https://www.welchlabs.com/resources/5gtnaauv6nb9lrhoz9cp604padxp5o
https://www.welchlabs.com/resources/activation-atlas-poster-mixed5b-13×19
https://www.welchlabs.com/resources/large-activation-atlas-poster-mixed4c-24×36
https://www.welchlabs.com/resources/activation-atlas-poster-mixed4c-13×19

Special thanks to the Patrons:
Juan Benet, Ross Hanson, Yan Babitski, AJ Englehardt, Alvin Khaled, Eduardo Barraza, Hitoshi Yamauchi, Jaewon Jung, Mrgoodlight, Shinichi Hayashi, Sid Sarasvati, Dominic Beaumont, Shannon Prater, Ubiquity Ventures, Matias Forti

Welch Labs
Ad free videos and exclusive perks: https://www.patreon.com/welchlabs
Watch on TikTok: https://www.tiktok.com/@welchlabs
Learn More or Contact: https://www.welchlabs.com/
Instagram: https://www.instagram.com/welchlabs
X: https://twitter.com/welchlabs

References
AlexNet Paper
https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf

Original Activation Atlas article (a great interactive atlas, explore here): https://distill.pub/2019/activation-atlas/
Carter, et al., “Activation Atlas”, Distill, 2019.

Feature Visualization Article: https://distill.pub/2017/feature-visualization/
Olah, et al., “Feature Visualization”, Distill, 2017.

Great LLM Explainability work: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Templeton, et al., “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet”, Transformer Circuits Thread, 2024.

Jason Yosinski’s “Deep Visualization Toolbox” video inspired many of the visuals:

Great LLM/GPT Intro paper
https://arxiv.org/pdf/2304.10557

3Blue1Brown’s GPT videos are excellent, as always:

Andrej Karpathy’s walkthrough is amazing:

Goodfellow’s Deep Learning Book
https://www.deeplearningbook.org/

OpenAI’s 10,000 V100 GPU cluster (1+ exaflop) https://news.microsoft.com/source/features/innovation/openai-azure-supercomputer/
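(Back-of-envelope check, assuming the commonly cited ~125 TFLOPS FP16 tensor-core peak per V100, a figure not from the linked article: 10,000 GPUs × 125 TFLOPS ≈ 1.25 exaFLOPS, consistent with the 1+ exaflop claim.)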

GPT-3 size, etc.: Language Models are Few-Shot Learners, Brown et al., 2020.

Unique token count for ChatGPT: https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
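For reference, a minimal sketch of the token-counting approach from the linked cookbook, using the tiktoken library (the model name and text here are just example inputs):

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")  # picks the matching encoding (cl100k_base)
text = "The moment we stopped understanding AI"
tokens = enc.encode(text)        # text -> list of token IDs
print(len(tokens), tokens)       # token count and the IDs themselves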

GPT-4 training size etc, speculative:
https://patmcguinness.substack.com/p/gpt-4-details-revealed
https://www.semianalysis.com/p/gpt-4-architecture-infrastructure

Historical Neural Network Videos

Errata
1:40 should be: “word fragment is appended to the end of the original input”. Thanks to Chris A for finding this one.
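A minimal toy sketch of the loop that corrected sentence describes, with a random stand-in for the real network (toy_next_token is purely illustrative, not the model from the video):

import random

def toy_next_token(tokens):
    # stand-in for the network: the real model would predict the most likely next fragment
    return random.choice(["the", " cat", " sat", " on", " the", " mat", "."])

def generate(prompt_tokens, n_new=5):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        nxt = toy_next_token(tokens)   # predict a word fragment
        tokens.append(nxt)             # append it to the end of the original input
    return "".join(tokens)

print(generate(["The", " dog"]))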
