Six courses, one project. Each course adds new capabilities to the same system — mirroring how real-world AI applications grow over time.
Kickstart your repo and ship a minimal tech-support assistant in plain Python with the OpenAI SDK (no frameworks or orchestration libraries). Understand tokenization, context windows, and how cost and latency trade off against answer quality.
You’ll build: a CLI (or simple web) assistant that answers support questions and keeps short conversational context.
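Here’s a minimal sketch of that first assistant, assuming the official `openai` package and an `OPENAI_API_KEY` in your environment; the model name and turn limit are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a concise tech-support assistant."}]
MAX_TURNS = 6  # keep only recent exchanges to stay inside the context window

while True:
    question = input("you> ")
    if question.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": question})
    # Trim: keep the system prompt plus the most recent turns.
    trimmed = [history[0]] + history[1:][-2 * MAX_TURNS:]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=trimmed)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("bot>", answer)
```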
Design prompts that clarify, summarize, and adapt to user preferences. Add lightweight telemetry to track token usage and spot regressions.
You’ll build: reusable system/user prompt templates + a small “prompt budget” dashboard.
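A template-plus-budget setup can be sketched like this, again assuming the `openai` SDK; the template text, model, and budget number are placeholders you'd tune to your own cost target:

```python
from openai import OpenAI

client = OpenAI()
SYSTEM_TEMPLATE = (
    "You are a support agent. Tone: {tone}. "
    "Ask one clarifying question if the request is ambiguous."
)
PROMPT_BUDGET = 2_000  # tokens per request; illustrative

def ask(question: str, tone: str = "friendly") -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE.format(tone=tone)},
            {"role": "user", "content": question},
        ],
    )
    # The API reports prompt/completion/total token counts per call.
    usage = resp.usage
    if usage.total_tokens > PROMPT_BUDGET:
        print(f"[budget] over by {usage.total_tokens - PROMPT_BUDGET} tokens")
    return resp.choices[0].message.content
```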
Ground answers in your own documents and fail gracefully with fallbacks and “I don’t know (but here’s what to try)” patterns.
You’ll build: a simple RAG pipeline (ingest → chunk → embed → retrieve) and an eval script to compare answer quality.
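The retrieval core fits in a few functions, assuming the `openai` embeddings API; the chunk size, file name, and model are illustrative, and a real pipeline would persist the vectors instead of recomputing them:

```python
import math
from openai import OpenAI

client = OpenAI()

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-width chunking; real pipelines split on structure.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Ingest: chunk the docs and embed each chunk once.
docs = open("support_docs.txt").read()  # hypothetical source file
chunks = chunk(docs)
vectors = embed(chunks)

# Retrieve: embed the query and return the best-matching chunks.
def retrieve(query: str, k: int = 3) -> list[str]:
    qvec = embed([query])[0]
    ranked = sorted(zip(chunks, vectors), key=lambda cv: cosine(qvec, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]
```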
Align behavior with business rules, safety, and compliance. Filter risky queries, enforce tone/policy, and log moderation events.
You’ll build: a guardrail layer (policy checks + moderation hooks) that runs before/after model calls.
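A minimal sketch of that wrapper, assuming the `openai` moderation endpoint; the blocked-topic list and log lines are illustrative policy, not a complete one:

```python
from openai import OpenAI

client = OpenAI()
BLOCKED_TOPICS = ("password reset for someone else", "refund override")  # illustrative

def guarded_answer(question: str) -> str:
    # Pre-check: keyword policy plus the moderation endpoint.
    mod = client.moderations.create(model="omni-moderation-latest", input=question)
    if mod.results[0].flagged or any(t in question.lower() for t in BLOCKED_TOPICS):
        print(f"[moderation] blocked query: {question!r}")
        return "I can't help with that request, but I can connect you with a human agent."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    answer = resp.choices[0].message.content

    # Post-check: run the model's answer through moderation too.
    post = client.moderations.create(model="omni-moderation-latest", input=answer)
    if post.results[0].flagged:
        print("[moderation] blocked answer")
        return "Let me route this to a human agent."
    return answer
```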
Go beyond chat: add actions and escalation. Trigger webhooks, post to Slack, and hand off to humans when confidence is low — prioritizing critical cases.
You’ll build: Slack alerts + webhook actions with a scoring system that routes issues faster.
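Confidence-scored routing can be sketched with only the standard library, assuming a Slack incoming-webhook URL; the URL, threshold, and scoring rule below are all placeholders:

```python
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def score(ticket: dict) -> float:
    # Toy severity score: critical keywords outrank everything else.
    return 0.9 if "outage" in ticket["text"].lower() else 0.3

def escalate(ticket: dict) -> None:
    # Slack incoming webhooks accept a JSON body with a "text" field.
    payload = {"text": f"Escalation (score {score(ticket):.2f}): {ticket['text']}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

ticket = {"text": "Customer reports a full outage of the billing page"}
if score(ticket) > 0.5:  # high severity / low confidence: hand off to a human
    escalate(ticket)
```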
Implement plan-act-observe loops so your assistant can call tools, analyze results, and iterate toward a solution. Use Model Context Protocol (MCP) to register tools via a common interface.
You’ll build: a reasoning loop that chains multiple tool calls and an MCP tool registry your agent can query at runtime.
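Here's a minimal sketch of the loop using OpenAI tool calling; a real MCP registry is a client/server protocol, so a plain dict stands in for it here, and the tool itself is a stub:

```python
import json
from openai import OpenAI

client = OpenAI()

def check_order_status(order_id: str) -> str:
    return f"Order {order_id} shipped yesterday."  # stub for a real lookup

REGISTRY = {"check_order_status": check_order_status}  # stand-in for an MCP registry
TOOLS = [{
    "type": "function",
    "function": {
        "name": "check_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 1234?"}]
for _ in range(5):  # cap iterations so the loop always terminates
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:       # observe: no further actions planned, so answer
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:  # act: run each tool the model requested
        args = json.loads(call.function.arguments)
        result = REGISTRY[call.function.name](**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```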
One project. Real feedback. Your pace.