Learn Retrieval-Augmented Generation, vector search, embeddings, AI agents, function calling, evaluation, monitoring, hybrid search, reranking, and more, all in a free, open-source, hands-on course by DataTalks.Club.
Star this repo to stay updated with new modules and cohort announcements
| Resource | Link |
|---|---|
| Course materials | GitHub repository |
| Video lectures | YouTube playlist |
| Cohort schedule & deadlines | courses.datatalks.club |
| Slack community | #course-llm-zoomcamp |
| Announcements | Telegram |
| Full FAQ | datatalks.club/faq/llm-zoomcamp.html |
| 2025 cohort projects | courses.datatalks.club/llm-zoomcamp-2025/projects |
- About This Course
- Who Should Join
- Prerequisites
- How to Take LLM Zoomcamp
- Quick Links & Resources
- What You'll Learn
- Course Syllabus
- Capstone Project
- How to Get a Certificate
- Cost
- Meet the Instructors
- Sponsors
- Testimonials
- Community & Support
- FAQ
- Contributing
- About DataTalks.Club
- License
Teams turn to large language models because they want applications that answer questions or search information more intelligently. But once they start building, they discover how unstable these systems can be: answers shift between runs, retrieval quality depends heavily on how data is indexed, and a small prompt change can break a feature that worked yesterday.
LLM Zoomcamp teaches you how to build practical, production-ready LLM applications step by step, from the basics of Large Language Models and RAG all the way to a fully deployed end-to-end AI assistant.
This course is for people who learn by doing. After completing it, you'll have a working codebase and the hands-on experience to build your own LLM-powered applications.
| Audience | Why This Course? |
|---|---|
| Software Engineers | Add LLMs, RAG, and modern search capabilities to real products |
| Data Engineers | Understand how vector search, hybrid search, and retrieval pipelines fit into production systems |
| ML Practitioners | Get a structured way to evaluate and monitor LLM-based applications |
| Python Developers New to LLMs | A clear, practical introduction to building end-to-end AI applications |
| Technical PMs / Tech Leads | Build a working understanding of how LLM systems behave in real usage |
| Engineers Maintaining LLM Features | Fix drift, inconsistent answers, and unreliable retrieval in existing systems |
> [!NOTE]
> You don't need prior experience with AI or ML. The course focuses on the engineering side of modern LLM applications and guides you through concepts step by step.
No advanced ML background is required, but you should be comfortable with the basics.
| Category | Requirement |
|---|---|
| Python | Intermediate: you can write and debug scripts confidently |
| Command Line | Comfortable running commands in a terminal |
| Docker | Basic familiarity (used for some tooling) |
| ML / LLMs | Beginner level: knowing what an LLM is helps, but isn't required |
| Hardware | Any modern laptop or PC; no GPU needed, cloud alternatives provided |
| Cost | ~$1–5 in API credits if running the code (see Cost section) |
> [!NOTE]
> If you can write a Python function and have heard of ChatGPT, you have enough to get started.
There are two ways to follow the course. Here's how they compare:
| | Live Cohort | Self-Paced |
|---|---|---|
| Best for | People who want structure, deadlines & peers | People with irregular schedules |
| Start | June 8, 2026, 17:00 CET | Anytime; all materials are always available |
| Lectures | Pre-recorded, same as self-paced | Pre-recorded on YouTube |
| Homework | Graded with automatic scoring | Available but not scored |
| Leaderboard | Yes | No |
| Peer Review | Yes | No |
| Certificate | Yes (on project completion) | No |
| Cost | Free | Free |
| Register | Sign up here | Just clone the repo |
> [!IMPORTANT]
> "Live cohort" does not mean live classes. All lectures are pre-recorded. "Live" means homework deadlines, scoring, peer review, and certificates are enabled.
- Watch the course videos on YouTube
- Follow the materials on GitHub
- Ask questions and share progress in Slack
- Build a project for your portfolio, even outside a live cohort
| Topic | Tools | You'll Be Able To… |
|---|---|---|
| LLMs & RAG Fundamentals | OpenAI API, Elasticsearch | Build a Q&A system backed by a document store |
| Vector Search & Embeddings | Qdrant, dlt | Retrieve semantically relevant documents at scale |
| AI Agents | OpenAI Function Calling | Give your LLM the ability to take actions and use tools |
| Data Ingestion | dlt | Ingest and update knowledge bases from any source |
| Evaluation | LLM-as-a-Judge, eval frameworks | Measure and improve retrieval and answer quality systematically |
| Monitoring | Grafana, dashboards | Track real-world performance and catch regressions early |
| Best Practices | LangChain, hybrid search tools | Improve retrieval with hybrid search, reranking, and orchestration |
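The evaluation row above can be made concrete with two standard retrieval metrics: hit rate (did any relevant document come back?) and MRR (how high was the first relevant one ranked?). A minimal sketch; the binary relevance data is made up for illustration, not course data:

```python
# Hit rate and MRR: two common offline retrieval metrics.
# Each inner list answers "was the doc at this rank relevant?" for one query.

def hit_rate(relevance):
    """Fraction of queries whose results contain at least one relevant doc."""
    return sum(any(row) for row in relevance) / len(relevance)

def mrr(relevance):
    """Mean reciprocal rank of the first relevant result per query."""
    total = 0.0
    for row in relevance:
        for rank, is_relevant in enumerate(row, start=1):
            if is_relevant:
                total += 1.0 / rank
                break
    return total / len(relevance)

results = [
    [True, False, False],   # relevant doc ranked 1st
    [False, True, False],   # relevant doc ranked 2nd
    [False, False, False],  # nothing relevant retrieved
]
print(hit_rate(results))  # 2/3
print(mrr(results))       # (1 + 0.5 + 0) / 3 = 0.5
```

In practice you compute these over a ground-truth set of query/document pairs, which is exactly what the evaluation module walks through.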
Recommended approach:
- Watch the video for each module
- Complete the homework to reinforce the concepts
- Build your capstone project applying everything end-to-end
| Module | Topic | Key Tools | What You'll Be Able to Do After |
|---|---|---|---|
| 1 – Intro to LLMs & RAG | Foundations | OpenAI API, Elasticsearch | Build a basic RAG pipeline with text search |
| 2 – Agents | Agentic RAG | OpenAI Function Calling | Add autonomous tool use and function calling to RAG |
| 3 – Vector Search | Retrieval | Qdrant, dlt | Index and retrieve documents using semantic embeddings |
| Workshop – Data Ingestion | Pipelines | dlt | Ingest data from external sources into your RAG system |
| 4 – Evaluation | Quality | LLM-as-a-Judge | Measure retrieval and answer quality with offline and online eval |
| 5 – Monitoring | Observability | Grafana | Monitor user feedback and system health with live dashboards |
| 6 – Best Practices | Production | LangChain, hybrid search | Combine vector + keyword search; rerank results for higher precision |
| 7 – End-to-End Project | Capstone reference | All tools | Follow a complete worked example: a fitness assistant built with LLMs |
| Capstone Project | Your project | Your choice | Ship a complete RAG application of your own from scratch |
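The hybrid search idea from Module 6 is often implemented with reciprocal rank fusion (RRF), which merges a keyword ranking and a vector ranking into one list. A minimal sketch; the document IDs and rankings below are invented for illustration:

```python
# Reciprocal rank fusion (RRF): each doc scores the sum of 1/(k + rank)
# across the rankings it appears in, so docs ranked well by BOTH keyword
# and vector search float to the top.

def rrf(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. from Elasticsearch
vector_hits = ["doc1", "doc5", "doc3"]   # e.g. from Qdrant

print(rrf([keyword_hits, vector_hits]))
# ['doc1', 'doc3', 'doc5', 'doc7']
```

Here `doc1` wins because it ranks in the top two of both lists; `doc7` appears only once, near the bottom. The constant `k=60` is a conventional default that damps the influence of any single top rank.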
The capstone is your chance to apply everything end-to-end: a complete, working RAG application built and owned by you.
What you'll build:
- A searchable knowledge base: choose a dataset, ingest, clean, and store it for retrieval
- A retrieval pipeline: implement the full RAG flow (retrieve context, assemble prompts, call an LLM, return grounded answers)
- An evaluation process: measure how well your system retrieves and answers using search metrics or LLM-as-a-Judge
- A user-facing interface: a simple UI or API (Streamlit, FastAPI, or similar) so others can try your app
- Monitoring & feedback loops: track queries, feedback, and performance over time
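The retrieval-pipeline step above can be sketched end to end in a few lines. Everything here is illustrative: the tiny keyword-overlap retriever stands in for Elasticsearch or Qdrant, and `call_llm()` stubs out the OpenAI call a real app would make:

```python
# Minimal RAG flow: retrieve context, assemble a grounded prompt, call an LLM.
# The knowledge base, retriever, and call_llm() are toy stand-ins for the
# real components (document store, vector/keyword search, OpenAI API).

KNOWLEDGE_BASE = [
    {"id": 1, "text": "Protein needs for strength training are roughly 1.6 to 2.2 g per kg."},
    {"id": 2, "text": "Carbohydrates fuel high intensity exercise."},
    {"id": 3, "text": "Sleep is essential for muscle recovery."},
]

def retrieve(query, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(doc["text"].lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, docs):
    """Assemble a prompt that grounds the answer in retrieved context."""
    context = "\n".join(doc["text"] for doc in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def call_llm(prompt):
    """Stand-in for a real LLM call (the course uses the OpenAI API here)."""
    return "[stubbed answer grounded in the prompt context]"

def rag(query):
    docs = retrieve(query)
    return call_llm(build_prompt(query, docs))

print(rag("How much protein for strength training?"))
```

Swapping the retriever for a real search backend and `call_llm()` for an actual API call turns this skeleton into the application the modules build up.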
Example project ideas:
- Fitness & nutrition assistant
- Study companion for textbooks or course notes
- Medical FAQ assistant
- Codebase Q&A bot
- News summarization and retrieval tool
> [!NOTE]
> See the full capstone project guidelines and browse all 2025 cohort submissions for inspiration.
Certificates are available to live cohort participants only.
To earn your certificate:
- Complete the final project: build a real-world RAG application demonstrating all course concepts
- Peer review 3 projects: evaluate and provide written feedback on three fellow students' submissions
- Meet the deadlines: submit your project and reviews within the cohort schedule
Certificates are issued after all peer reviews are completed. Self-paced learners are not eligible for certification but can build portfolio projects freely.
The course is 100% free. If you run the code yourself, expect small API costs:
| Service | Estimated Cost | Notes |
|---|---|---|
| OpenAI API | ~$1–5 | For LLM calls and embeddings during exercises |
| All other tools | $0 | Everything else has a free tier |
- **Alexey Grigorev**, Founder of DataTalks.Club: creator of multiple open-source ML courses reaching tens of thousands of learners worldwide. Former principal data scientist with deep expertise in ML systems and engineering.
- **Timur Kamaliev**, Senior Data Scientist: AI engineer specializing in building production LLM systems, RAG pipelines, and agentic applications. Hands-on practitioner with real-world experience shipping GenAI products.
A huge thanks to our sponsors for making this course possible!
> [!TIP]
> Interested in supporting the DataTalks.Club community? Reach out to alexey@datatalks.club.
"This course gave me hands-on experience in building LLM-powered applications, including prompt engineering, retrieval-augmented generation (RAG), pipeline orchestration, and vector search optimization."
– Alexander Daniel Rios, LLM Zoomcamp Graduate
"Not gonna lie: this course took longer than planned. By the end, I was running on fumes, forcing myself to push through the final modules. But I made it. What I loved: hands-on experience building real AI systems (not just theory!), deep dives into RAG, vector databases, evaluation, and monitoring, and the wealth of production-ready practices that matter in enterprise environments."
– Vasiliy Chernykh, LLM Zoomcamp Graduate
Read more testimonials from past graduates →
Join the #course-llm-zoomcamp channel on DataTalks.Club Slack for discussions, troubleshooting, and networking with fellow learners and the course team.
To keep discussions useful for everyone:
- Follow our posting guidelines when asking questions
- Review the community guidelines
We actively encourage sharing your progress online throughout the course. Posting what you're building on LinkedIn, Twitter/X, or a blog helps you get noticed, connect with others in the field, and earn bonus points toward your homework and project scores.
Full FAQ: datatalks.club/faq/llm-zoomcamp.html
**Is this course really free?** Yes: all videos, materials, and homework are free. You may spend $1–5 in OpenAI API credits if you run the code yourself.

**Do I need a GPU?** No. All exercises are designed to run on a standard laptop using cloud APIs.

**What does "live cohort" mean? Are there live classes?** No mandatory live classes. "Live" means homework deadlines, automatic scoring, a leaderboard, peer review, and certificate eligibility are all enabled. All lectures are pre-recorded.

**Can I join after the cohort has started?** Yes: you can join after the start date, but deadlines remain fixed. Some homework forms may already be closed.

**Can I join mid-cohort or self-paced?** Yes. All materials stay available after each cohort ends. Self-paced learners are always welcome, though certificates require a live cohort.

**Will I get a certificate?** Yes: complete the final project and peer review 3 students' projects during the live cohort to earn your certificate. Self-paced mode does not include certification.

**Do I need to complete every homework to get a certificate?** Missing some homework may be acceptable, but you must complete the final project and peer reviews. Check the cohort schedule for specific requirements.

**What if I get stuck?** Post in #course-llm-zoomcamp on Slack; the community and instructors are active there. Also check the FAQ page for detailed answers.

**How much time should I expect to spend?** Expect roughly 5–10 hours per week, depending on your background and how deep you go into the materials.
Found a bug in the course materials? Know how to improve an explanation or fix broken code? Contributions are welcome and appreciated.
- Fork the repository
- Make your fix or improvement
- Open a pull request with a clear description
Every contribution, big or small, helps future learners. Thank you!
DataTalks.Club is a global online community of data enthusiasts: a place to learn, share knowledge, ask questions, and support each other through free courses, events, and an active Slack community.
Website • Slack • Newsletter • Events • Google Calendar • YouTube • GitHub • LinkedIn • Twitter
> [!NOTE]
> Most activity happens on Slack; join us there for updates, discussions, and community events. Learn more at DataTalksClub Community Navigation.
This project is licensed under the MIT License.

