ApartsinProjects/LLMBook

Building Conversational AI using LLM and Agents

A comprehensive, hands-on textbook covering modern Large Language Model technology from foundations to production deployment.

Course Overview

28 Modules + Capstone + 10 Appendices covering:

| Part | Modules | Topics |
|---|---|---|
| I: Foundations | 00–05 | ML/PyTorch basics, NLP, tokenization, attention, transformers, decoding |
| II: Understanding LLMs | 06–08 | Pre-training, scaling laws, modern models, inference optimization |
| III: Working with LLMs | 09–11 | APIs, prompt engineering, hybrid ML+LLM architectures |
| IV: Training & Adapting | 12–17 | Synthetic data, fine-tuning, PEFT, distillation, alignment, interpretability |
| V: Retrieval & Conversation | 18–20 | Embeddings, vector databases, RAG, conversational AI |
| VI: Agents & Applications | 21–25 | AI agents, multi-agent systems, multimodal, applications, evaluation |
| VII: Production & Strategy | 26–27 | Deployment, safety, ethics, LLM strategy, ROI |
| Capstone | | End-to-end conversational AI agent project |
| Appendices | A–J | Math, ML, Python, setup, Git, glossary, hardware, models, prompts, benchmarks |

Repository Structure

LLMBook/
├── index.html                              # Interactive syllabus (GitHub Pages)
├── part-1-foundations/
│   ├── module-00-ml-pytorch-foundations/
│   ├── module-01-foundations-nlp-text-representation/
│   ├── module-02-tokenization-subword-models/
│   ├── module-03-sequence-models-attention/
│   ├── module-04-transformer-architecture/
│   └── module-05-decoding-text-generation/
├── part-2-understanding-llms/
│   ├── module-06-pretraining-scaling-laws/
│   ├── module-07-modern-llm-landscape/
│   └── module-08-inference-optimization/
├── part-3-working-with-llms/
│   ├── module-09-llm-apis/
│   ├── module-10-prompt-engineering/
│   └── module-11-hybrid-ml-llm/
├── part-4-training-adapting/
│   ├── module-12-synthetic-data/
│   ├── module-13-fine-tuning-fundamentals/
│   ├── module-14-peft/
│   ├── module-15-distillation-merging/
│   ├── module-16-alignment-rlhf-dpo/
│   └── module-17-interpretability/
├── part-5-retrieval-conversation/
│   ├── module-18-embeddings-vector-db/
│   ├── module-19-rag/
│   └── module-20-conversational-ai/
├── part-6-agents-applications/
│   ├── module-21-ai-agents/
│   ├── module-22-multi-agent-systems/
│   ├── module-23-multimodal/
│   ├── module-24-llm-applications/
│   └── module-25-evaluation-observability/
├── part-7-production-strategy/
│   ├── module-26-production-safety-ethics/
│   └── module-27-strategy-product-roi/
├── capstone/
└── appendices/
    ├── appendix-a-mathematical-foundations/
    ├── appendix-b-ml-essentials/
    ├── appendix-c-python-for-llm/
    ├── appendix-d-environment-setup/
    ├── appendix-e-git-collaboration/
    ├── appendix-f-glossary/
    ├── appendix-g-hardware-compute/
    ├── appendix-h-model-cards/
    ├── appendix-i-prompt-templates/
    └── appendix-j-datasets-benchmarks/

How This Book Is Built

Each chapter is produced by a 36-agent AI team orchestrated through 13 phases:

  1. Setup: Chapter Lead defines scope, outline, and coordination plan
  2. Planning: Curriculum alignment, deep explanation design, teaching flow review
  3. Content Building: Examples and analogies, code pedagogy, visual learning, exercises
  4. Structural Review: Book-level organization and coherence
  5. Self-Containment: Prerequisite availability verification
  6. Engagement & Memorability: Title/hook design, first-page conversion, aha-moments, project catalysts, demos, mnemonics
  7. Writing Clarity: Plain-language rewriting, sentence flow, jargon gating, micro-chunking, fatigue detection
  8. Learning Quality Review: Student advocate, cognitive load optimizer, misconception analyst, research scientist
  9. Integrity Check: Fact checker, terminology keeper, cross-reference architect
  10. Visual Identity: Brand consistency across all figures and callouts
  11. Final Polish: Narrative continuity, style/voice, engagement, senior developmental editor
  12. Frontier & Currency: Research frontier mapping, content update scouting
  13. Quality Challenge: Skeptical reader challenges distinctiveness and quality

Target Audience

Software engineers with Python experience who want to build production LLM applications. Assumes basic linear algebra and probability; all other prerequisites are covered in the appendices.

License

All rights reserved. This material is for educational use.
