Live Workshop

AI Bootcamp

Everyone’s heard of ChatGPT, but what truly powers these modern large language models? It all starts with the transformer architecture. This bootcamp demystifies LLMs, taking you from concept to code and giving you a full, hands-on understanding of how transformers work. You’ll gain intuitive insights into the core components—autoregressive decoding, multi-head attention, and more—while bridging theory, math, and code. By the end, you’ll be ready to understand, build, and optimize LLMs, with the skills to read research papers, evaluate models, and confidently tackle ML interviews.

  • 5.0 / 5 (1 rating)
Workshop Instructor

Alvin Wan

Currently at OpenAI. Previously, he was a Senior Research Scientist at Apple working on large language models for Apple Intelligence. Before that, he worked on Tesla Autopilot, and he earned his PhD at UC Berkeley, where his work garnered 3,000+ citations and 800+ stars.

How The Workshop Works

01 Remote

You can take the course from anywhere in the world, as long as you have a computer and an internet connection.

02 Self-Paced

Learn at your own pace, whenever it's convenient for you. With no rigid schedule to worry about, you can take the course on your own terms.

03 Community

Join a vibrant community of other students who are also learning with AI Bootcamp. Ask questions, get feedback and collaborate with others to take your skills to the next level.

04 Structured

Learn in a cohesive fashion that's easy to follow. With a clear progression from basic principles to advanced techniques, you'll grow stronger and more skilled with each module.

Workshop Overview

What you will learn
  • Understand and evaluate large language models

  • The importance of pretraining and fine-tuning for model performance

  • How to integrate AI models into real-world applications

  • Differences between training, fine-tuning, and evaluating models

  • What problems large language models can solve across industries

  • How to use developer tools like TensorFlow and PyTorch effectively

  • Strategies for reading and evaluating AI research papers

  • How to ace machine learning interviews with confidence

  • How to implement practical applications of AI, from chatbots to content generation

  • Differences between common AI ecosystems and platforms

  • Understanding the self-attention mechanism in transformers

  • Challenges in training large models and how to address them

  • What ethical considerations are crucial in AI development

  • How to build a portfolio of AI projects to showcase your skills

  • Emerging trends and future directions in AI research

In this bootcamp, we dive deep into Large Language Models (LLMs) to help you understand, build, and optimize their architecture for real-world applications. LLMs are transforming industries—from customer support to content creation—but understanding how these models function, and optimizing them for specific tasks, presents complex challenges.

Over an intensive, multi-week curriculum, we cover:

  • The technical foundations of LLMs, including autoregressive decoding, positional encoding, and multi-head attention

  • The LLM lifecycle, from large-scale pretraining to fine-tuning and instruction tuning for niche applications

  • Industry best practices for model evaluation, pinpointing performance bottlenecks, and employing cutting-edge architectures to balance efficiency and scalability
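As a small taste of the material: autoregressive decoding simply means generating one token at a time, feeding each prediction back in as input. A minimal sketch in plain Python, where the hypothetical `next_token` function is a toy stand-in for a real model:

```python
def next_token(context):
    # Toy stand-in for a real language model: a real LLM would return a
    # probability distribution over its vocabulary given the full context.
    vocab = {"the": "cat", "cat": "sat", "sat": "down", "down": "."}
    return vocab.get(context[-1], ".")

def generate(prompt, max_tokens=4):
    # Autoregressive loop: each predicted token is appended to the
    # context, which is then fed back in to predict the next token.
    tokens = prompt.split()
    for _ in range(max_tokens):
        tokens.append(next_token(tokens))
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate("the"))  # the cat sat down .
```

A real transformer replaces the lookup table with attention layers, but the generation loop itself looks exactly like this.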

This bootcamp includes hours of in-depth instruction, hands-on coding sessions, and access to a dedicated community for ongoing support and discussions. Additionally, you’ll have exclusive access to code templates, an expansive reference library, and downloadable resources for continuous learning.

Your guide through this bootcamp is Alvin Wan, currently at OpenAI and formerly a Senior Research Scientist at Apple, who earned his PhD at UC Berkeley and is internationally recognized for his work in efficient AI and model design. With his blend of industry experience and research expertise, Alvin will take you from foundational concepts to advanced applications, giving you a solid grounding in the practical skills required to build, optimize, and evaluate LLMs.

Our students work at

  • Salesforce, Intuit, Adobe, Disney, Heroku, AT&T, VMware, Microsoft, Amazon

Workshop Syllabus and Content

Module 1

Introduction to AI and LLMs

3 Lessons

Foundational Model Knowledge

  • 01 Introduction
    Free
  • 02 Understanding Large Language Models
    Free
  • 03 Demystifying AI Terminology
    Free
Module 2

AI Ecosystem and Market Overview

5 Lessons

The AI Landscape

  • 01 Overview of the AI Ecosystem
    Free
  • 02 The Market Landscape
    Free
  • 03 Key AI-Centric Startups
    Sneak Peek
  • 04 Platform Integrations
    Sneak Peek
  • 05 App Integrations
    Sneak Peek
Module 3

Developer Tools and Frameworks

3 Lessons

Building Blocks of AI Development

  • 01 Introduction to Developer Libraries and Frameworks
    Sneak Peek
  • 02 Understanding Datasets and Checkpoints
    Sneak Peek
  • 03 Overview of APIs and Vector Databases
    Sneak Peek
Module 4

How LLMs Predict

3 Lessons

Decoding the Prediction Mechanism

  • 01 Autoregressive Decoding Explained
    Sneak Peek
  • 02 The Role of Vectors in LLMs
    Sneak Peek
  • 03 The Architecture of a Large Language Model
    Sneak Peek
Module 5

Embeddings and Transformations

3 Lessons

Transforming Inputs to Outputs

  • 01 Converting Words into Vectors
    Sneak Peek
  • 02 Transformer Architecture
    Sneak Peek
  • 03 Interacting with Word Embeddings
    Sneak Peek
Module 6

Self-Attention Mechanism

3 Lessons

Enhancing Contextual Understanding

  • 01 What is Self-Attention?
    Sneak Peek
  • 02 Queries, Keys, and Values
    Sneak Peek
  • 03 Multi-Head Attention and Its Variants
    Sneak Peek
Module 7

Positional Encoding and Context

3 Lessons

Understanding Contextual Relevance

  • 01 Importance of Positional Encoding
    Sneak Peek
  • 02 Skip Connections and Their Benefits
    Sneak Peek
  • 03 Normalization Techniques in Neural Networks
    Sneak Peek
Module 8

Advanced Attention Mechanisms

2 Lessons

Diving Deeper into Attention

  • 01 Multi-Query and Grouped-Query Attention
    Sneak Peek
  • 02 Transformer Diagrams and Flash Attention
    Sneak Peek
Module 9

Optimizing LLM Inference

3 Lessons

Enhancing Performance and Efficiency

  • 01 Memory and Compute Bound Issues
    Sneak Peek
  • 02 Techniques for Making LLM Inference Faster
    Sneak Peek
  • 03 Quantization and Speculative Decoding
    Sneak Peek
Module 10

Practical Applications of LLMs and Interview Prep

4 Lessons

Building Real-World Applications

  • 01 Creating Chatbots and Code Editors
    Sneak Peek
  • 02 Integrating LLMs with Existing Platforms
    Sneak Peek
  • 03 Future Trends in AI and LLM Development
    Sneak Peek
  • 04 Preparing for Machine Learning Interviews
    Sneak Peek

Subscribe for a Free Lesson

By subscribing to the newline newsletter, you will also receive weekly, hands-on tutorials and updates on upcoming courses in your inbox.

Meet the Workshop Instructor

Alvin Wan

👋 Hi, I’m Alvin Wan. I’m currently at OpenAI; previously, I was a Senior Research Scientist at Apple, and I earned my PhD at UC Berkeley specializing in efficient deep learning. My focus is on speeding up Large Language Models (LLMs) and advancing computer vision applications for self-driving cars and virtual reality. My work has earned international recognition for its design and social impact, and I’m here to share insights gained from the front lines of AI research. This workshop is your gateway to understanding, building, and optimizing LLMs, minus the jargon and complexity. (Oh, and my Mandarin? Just about on par with a kindergartener’s!) Join me to master LLMs in real-world applications.

Purchase the course today

One-Time Purchase

AI Bootcamp

$4,000 (regularly $4,500; $500 off)
AI Bootcamp
  • Discord Community Access
  • Full Transcripts
  • Money Back Guarantee
  • Lifetime Access

Frequently Asked Questions

How is this bootcamp structured, and what topics does it cover?

This bootcamp covers Large Language Models (LLMs) from foundational concepts to implementation-ready skills. Topics include LLM terminology, transformer architecture, embeddings, autoregressive decoding, multi-head attention, model evaluation, fine-tuning, optimization, and real-world applications in areas like customer service, content generation, and data analytics.

Is this bootcamp suitable for my skill level?

The bootcamp is designed for individuals with a basic understanding of programming and machine learning. However, it’s adaptable for all levels: introductory modules build core understanding, while advanced sections, like self-attention mechanisms and performance optimization, are structured for those wanting to dive deeper.

Will I get real-world examples and practical applications in this bootcamp?

Absolutely! The bootcamp emphasizes hands-on, practical applications of LLMs. You’ll work on real-world use cases like building chatbots, analyzing data with LLMs, and creating custom coding assistants. Every module bridges theory with practice, providing clear examples and exercises.

How frequently is the bootcamp content updated?

The bootcamp content is reviewed and updated regularly to reflect advances in LLM technologies and industry practices. This includes updates on tools, frameworks, and techniques, ensuring you stay current in the rapidly evolving AI field.

Does this bootcamp cover the latest tools and integrations?

Yes, it covers a broad array of current tools and integrations. This includes popular libraries for transformer models, fine-tuning frameworks, and vector databases, giving you a complete view of the LLM ecosystem and hands-on practice with these tools.

How are complex concepts like self-attention and autoregressive decoding explained?

We break down complex concepts through visualizations, intuitive analogies, and interactive examples. For instance, self-attention and autoregressive decoding are explained with step-by-step walkthroughs, helping you grasp the underlying math and logic with ease.
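To give a flavor of that step-by-step style: at its core, self-attention scores each query against every key with a scaled dot product, normalizes the scores with a softmax, and uses the resulting weights to take a weighted average of the values. A minimal pure-Python sketch with toy 2-dimensional vectors (real implementations use batched matrix multiplies and learned projection matrices):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    # For each query: score it against every key (scaled dot product),
    # softmax the scores, then take a weighted average of the values.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Two toy tokens with 2-dimensional embeddings; each token attends
# most strongly to itself, so its own value dominates its output.
q = k = v = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention(q, k, v)
print(out)
```

The bootcamp builds this intuition up piece by piece before showing how the same computation is vectorized and extended to multiple attention heads.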

Will I be able to access this bootcamp on my mobile or tablet?

Yes, the bootcamp content is optimized for multiple devices, including mobile, tablet, and desktop, allowing you to learn flexibly wherever you are.

Is there a certificate upon completion of the bootcamp?

Yes, a certificate is provided upon successful completion of the bootcamp, demonstrating your mastery of the material.

Can I ask questions during the bootcamp?

Yes, you can ask questions within each lesson’s comments section or through our community-driven Discord channel, where instructors and peers are available to help.

Can I download the course materials?

While the videos are not downloadable, you’ll have lifetime access to them online, along with downloadable code samples and other resources for offline study.

What is the price of the bootcamp?

The bootcamp is currently priced at $4,000 USD. Additionally, there’s an option to access the course via a monthly subscription that includes this and other advanced AI modules.

How is this bootcamp different from other content available on LLMs?

This bootcamp stands out by combining foundational knowledge with real-world applications, interactive labs, and personalized support. Unlike other courses, we focus on industry-specific challenges and provide extensive, hands-on experience in LLMs, preparing you to implement these skills directly in your work.
