Lee Yih Ven
AI Project

Course Creator — LoopAgent quality gate

Researcher-Judge loop in ADK gates content generation on quality.

A second post on Course Creator — this time on the Researcher → Judge loop that gates downstream content generation.

The risk with LLM-generated material is that it sounds confident but contains gaps. For Course Creator, the Content Builder shouldn't start writing course material until the research is genuinely solid. ADK's LoopAgent is built for this kind of workflow.

The pattern:

  1. Researcher gathers information on the topic
  2. Judge agent reviews it for completeness and accuracy
  3. If the Judge says "fail," the loop continues: the Researcher tries again with the Judge's feedback as input.
  4. If the Judge says "pass," the loop breaks and research is handed off to the Content Builder.
  5. Cap the loop so it can't run forever.

The custom piece is a small BaseAgent (the EscalationChecker) that reads the Judge's verdict and decides whether to break or continue. ADK provides LoopAgent and SequentialAgent as the building blocks; the loop-control logic is yours to write.

The leverage point in this whole architecture is the Judge prompt. A vague Judge passes weak research. A precise Judge with explicit pass/fail criteria forces the Researcher to actually do the work. The Judge ends up shaping the final output more than the Researcher does.
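As an illustration of "explicit pass/fail criteria," here is a hypothetical Judge prompt (not the one Course Creator uses); the point is that every criterion is concrete enough to fail on:

```python
# A hypothetical Judge prompt. Vague criteria ("is this good research?")
# pass weak output; concrete, checkable criteria force rework.
JUDGE_PROMPT = """You are a strict reviewer of research notes for a course.
Respond with exactly one word, PASS or FAIL, followed by feedback.

FAIL the research unless ALL of the following hold:
1. At least three distinct sources are cited, with dates.
2. Every key claim is attributed to a source.
3. The topic's prerequisites and common misconceptions are covered.
4. No section merely restates the course outline.

If FAIL, list the specific criteria that were not met."""
```

Structuring the verdict as a single PASS/FAIL token also makes it trivial for the loop-control code to parse.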

The same pattern fits any "generate-and-verify" workflow — code generation with a linting/test agent, image generation with a vision-model critic, data extraction with a schema validator. It's one of the cheapest ways to add reliability to LLM output without fine-tuning.

Live demo →
#GoogleADK #LoopAgent #AIAgents