🤝 Course Creator

AI agent team builds quality courses on any topic

Google ADK · A2A Protocol · Gemini 2.5 Pro · Vertex AI · Cloud Run · Python

📋 Project Overview & Problem Statement

Challenge: Building a course on a new topic — even a short one — typically takes hours of research, fact-checking, and writing. Single-LLM tools rush straight to writing, often producing content that sounds confident but contains gaps or errors. Visitors and students don't always notice, but the credibility damage is real.

Solution: Course Creator is a team of four specialised AI agents that divide the work the way a small editorial team would. A Researcher gathers information, a Judge reviews it for completeness, a Content Builder turns the approved research into structured course material, and an Orchestrator coordinates the handoffs. Each agent runs as its own Cloud Run service and talks to the others over Google's A2A (Agent-to-Agent) protocol.

Key Benefits

🔁 What Makes This Different — A2A Multi-Agent

Most AI multi-agent demos run inside a single Python process — one container, one runtime, all agents sharing memory. This build doesn't. Each of the four agents runs as a separate Cloud Run microservice and communicates over Google's Agent-to-Agent (A2A) protocol — standard JSON-RPC over HTTP, with each agent publishing an "agent card" at /.well-known/agent-card.json describing what it does.
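An agent card is a small JSON document that advertises what a service can do. A minimal illustrative card for the Researcher is sketched below — the field names follow the general A2A agent-card shape, but the values (service URL, skill id) are invented for this example and are not taken from the repo:

```python
import json

# Illustrative agent card for the Researcher service. Values are invented;
# the real card is served by the agent at /.well-known/agent-card.json.
agent_card = {
    "name": "researcher",
    "description": "Gathers web research on a requested course topic.",
    "url": "https://researcher-xxxx.a.run.app",  # hypothetical Cloud Run URL
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "research_topic",
            "name": "Research a topic",
            "description": "Uses Google Search to collect findings.",
        }
    ],
}

# The Orchestrator fetches and parses this JSON to discover the agent.
card_json = json.dumps(agent_card)
parsed = json.loads(card_json)
print(parsed["name"])  # → researcher
```

Because discovery is just an HTTP GET plus JSON parsing, any client that can read the card can call the agent — no shared codebase required.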

Think of it as the difference between a small startup where everyone shouts across one room, and a real company with departments, formal handoffs, and each team in its own building. Both can produce work, but only the second can scale, swap members, or audit the trail.

Single-Process Multi-Agent vs. A2A

| Dimension | Single-Process (CrewAI / LangGraph in one script) | A2A (this build) |
|---|---|---|
| Code layout | One script imports all agents | Each agent is its own deployable service |
| Failure blast radius | One agent bug crashes the pipeline | Other agents keep responding; only the failing one is degraded |
| Scaling | Scale everything together | Scale only the slow agent |
| Upgrades | Swap an agent → rewrite imports across the codebase | Swap an agent → change one URL in the Orchestrator |
| Observability | One log stream, hard to attribute | Per-agent Cloud Run logs + HTTP traces between hops |
| Auth boundary | Trust everything in-process | Service-to-service IAM at every call |

What This Means For The Output

🖥️ Application Features

🔍 Researcher Agent

Independent Cloud Run service that uses Google Search via the ADK google_search_tool to gather information on the requested topic. Saves findings to the shared session state for the Judge.

⚖️ Judge Agent

Strict editor that evaluates the Researcher's findings against the original request. Returns a Pydantic-structured response (status: pass | fail) so the Orchestrator can branch deterministically.

✍️ Content Builder Agent

Turns approved research into well-structured Markdown course material. No tools, no external calls — pure synthesis from the verified findings.

🎯 Orchestrator Agent

Front-door agent that wires the others together using ADK's LoopAgent and SequentialAgent. Talks to remote agents via RemoteA2aAgent with authenticated httpx clients.

🔁 EscalationChecker

Custom BaseAgent that reads the Judge's verdict and decides whether to break the research loop (status pass) or run another iteration (status fail), up to a maximum of 3 rounds.
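The checker's decision logic is simple enough to sketch in plain Python. This is a simplified stand-in for the real `BaseAgent` subclass — the function name `should_stop` and the iteration bookkeeping are hypothetical, not from the repo:

```python
MAX_ITERATIONS = 3  # matches the loop cap described above

def should_stop(judge_status: str, iteration: int) -> bool:
    """Return True when the research loop should end.

    Stops on a passing verdict, or when the iteration budget is spent.
    """
    if judge_status == "pass":
        return True
    return iteration >= MAX_ITERATIONS

# A failing verdict on round 1 triggers another iteration...
print(should_stop("fail", 1))  # False
# ...a pass ends the loop immediately...
print(should_stop("pass", 1))  # True
# ...and round 3 ends it regardless of verdict.
print(should_stop("fail", 3))  # True
```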

🌐 Web App Frontend

FastAPI web app deployed as a fifth Cloud Run service. Sends the user's topic to the Orchestrator, displays per-agent progress, and renders the final Markdown course.

🤖 AI Integration & Intelligence

🧠 Gemini 2.5 Pro via Vertex AI

Every agent runs on Gemini 2.5 Pro through Vertex AI (GOOGLE_GENAI_USE_VERTEXAI=true). No API keys passed around — service accounts handle auth at the Cloud Run boundary.

🔍 Google Search Tool

The Researcher agent uses ADK's built-in google_search_tool to fetch live web context. Search results feed into the LLM's reasoning before it writes its findings.

📋 Pydantic Structured Output

The Judge returns a JudgeFeedback schema (status + feedback fields) instead of free-form text. This lets the EscalationChecker make a deterministic decision without parsing prose.
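Based on the two fields named above, the schema can be sketched with Pydantic roughly like this — the actual model in the repo may differ in details:

```python
from typing import Literal

from pydantic import BaseModel

class JudgeFeedback(BaseModel):
    """Structured verdict from the Judge (sketch based on the fields above)."""
    status: Literal["pass", "fail"]  # constrained so the Orchestrator can branch
    feedback: str                    # what the Researcher should fix on a fail

verdict = JudgeFeedback(status="fail", feedback="Missing coverage of prerequisites.")
print(verdict.status)  # fail
```

The `Literal` constraint is the point: an LLM asked to fill this schema cannot return "maybe" or a paragraph of hedging, so the downstream branch is a string comparison, not prose parsing.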

🔄 Quality Loop with Max Iterations

The LoopAgent wraps Researcher → Judge → EscalationChecker and re-runs the cycle up to 3 times until the Judge passes the research, then hands off to the Content Builder.

🛠️ Technical Architecture & Implementation

Agent Framework

Google ADK A2A Protocol RemoteA2aAgent LoopAgent SequentialAgent BaseAgent

Backend Stack

Python 3.11+ FastAPI uv (package manager) Pydantic httpx uvicorn

AI & Search

Gemini 2.5 Pro Vertex AI google_search_tool

Deployment & Infrastructure

Google Cloud Run Docker Service-to-Service IAM Agent Cards

System Architecture

📖 Development Setup & Installation Guide

Prerequisites

Quick Start Installation

```bash
# Clone the repository
git clone https://github.com/lyven81/course-creator.git
cd course-creator

# Install dependencies
uv sync

# Set up Google Cloud credentials
gcloud auth application-default login
export GOOGLE_CLOUD_PROJECT=your-project-id

# Run all 4 agents + the web app locally
./run_local.sh

# Open the app
open http://localhost:8000
```

Environment Configuration

```bash
GOOGLE_GENAI_USE_VERTEXAI="true"
GOOGLE_CLOUD_PROJECT="your-project-id"
GOOGLE_CLOUD_LOCATION="global"
```

Available Scripts

🚀 Deployment on Google Cloud Run

Each agent is deployed as a separate Cloud Run service. The Web App is configured with environment variables pointing to each agent's /.well-known/agent-card.json URL so the Orchestrator can discover them.
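The exact variable names depend on the repo's code; as a sketch, that wiring could look like the following (the env var names and service URLs are illustrative, not taken from the repo):

```shell
# Point the web app / Orchestrator at each agent's card (names illustrative).
gcloud run services update course-creator \
  --set-env-vars \
RESEARCHER_AGENT_CARD_URL=https://researcher-xxxx.a.run.app/.well-known/agent-card.json,\
JUDGE_AGENT_CARD_URL=https://judge-xxxx.a.run.app/.well-known/agent-card.json,\
CONTENT_BUILDER_AGENT_CARD_URL=https://content-builder-xxxx.a.run.app/.well-known/agent-card.json
```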

```bash
# One-shot deploy of all services
./deploy.sh

# Or deploy each service individually
gcloud run deploy researcher --source agents/researcher
gcloud run deploy judge --source agents/judge
gcloud run deploy content-builder --source agents/content_builder
gcloud run deploy orchestrator --source agents/orchestrator
gcloud run deploy course-creator --source app
```

Production Notes

📊 Key Metrics

- **4** specialised AI agents
- **5** Cloud Run services
- **3** max quality-loop iterations
- **A2A** agent-to-agent protocol

Business Value