Multi-agent simulation that reveals which marketing channel delivers the best ROI, and which one to cut
Challenge: Social media managers and digital marketers routinely allocate budget across multiple channels (Email, Social, Paid Search, Display, and Affiliate) based on intuition, industry benchmarks, or whatever worked last quarter. This approach makes it hard to know with confidence which channel is generating real ROI and which is silently draining budget.
Solution: Social Media Marketer runs a competitive multi-agent simulation in which five AI agents, each managing one marketing channel, compete with the same starting budget over four rounds. Using a real campaign dataset, each agent picks campaigns, allocates spend, and is scored against an ROI benchmark. The result is a ranked leaderboard plus clear recommendations: which channel to scale, which to hold, and which to cut.
Before the simulation starts, a baseline table shows each channel's historical average uplift from the dataset, colour-coded green or red against the benchmark. This gives context before any agent runs.
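The baseline computation can be sketched with pandas; the toy dataset and the `channel`/`uplift` column names here are assumptions, not the app's real schema:

```python
import pandas as pd

# Hedged sketch: average each channel's historical uplift and flag it
# against the ROI benchmark (default 9%, per the config defaults below).
ROI_BENCHMARK = 0.09

campaigns = pd.DataFrame({
    "channel": ["Email", "Email", "Social", "Paid Search"],
    "uplift":  [0.12, 0.08, 0.05, 0.11],
})

baseline = campaigns.groupby("channel", as_index=False)["uplift"].mean()
baseline["status"] = ["green" if u >= ROI_BENCHMARK else "red"
                      for u in baseline["uplift"]]
```

In the app, a table like this would typically be rendered with Streamlit's dataframe styling so the green/red flags appear as cell colours.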
Emma (Email), Sam (Social), Parker (Paid Search), Diana (Display), and Alex (Affiliate) each receive campaign options from their channel, decide which to run, and allocate budget. Each has a distinct strategic persona.
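The persona setup might be declared as a simple mapping; the wording of each decision style below is illustrative, not the app's shipped prompts:

```python
# Hypothetical persona table: one agent per channel, each with a distinct
# strategic style that is injected into its system prompt.
AGENTS = {
    "Emma":   {"channel": "Email",       "style": "data-driven, favours proven segments"},
    "Sam":    {"channel": "Social",      "style": "trend-chasing, tolerates higher variance"},
    "Parker": {"channel": "Paid Search", "style": "intent-focused, cost-per-click aware"},
    "Diana":  {"channel": "Display",     "style": "reach-first, brand-awareness bias"},
    "Alex":   {"channel": "Affiliate",   "style": "partnership-led, commission-sensitive"},
}
```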
A progress bar advances as each agent completes its round. Agent decision cards appear one by one, showing the campaign picked, the uplift achieved, the budget change, and one-sentence reasoning from the AI.
After all rounds, agents are ranked by final budget. The table shows total ROI, average uplift, how many rounds hit the benchmark, and a status flag (OK or FLAGGED) for agents that missed the benchmark twice in a row.
A horizontal bar chart visualises each channel's total ROI (green for profit, red for loss), making the performance spread immediately visible without reading the table.
Three recommendation cards (Scale Up, Hold, Cut / Review) assign every channel to an action category based on ROI and flagging status. One-click download exports the full report as Markdown.
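The Markdown export could be as simple as a table renderer over the final leaderboard; the `agent`/`budget`/`roi`/`action` field names here are assumptions:

```python
def render_report(leaderboard):
    """Hypothetical export helper: leaderboard is a list of dicts already
    sorted by final budget, highest first."""
    lines = [
        "# Simulation Report",
        "",
        "| Rank | Agent | Final budget | Total ROI | Action |",
        "|------|-------|--------------|-----------|--------|",
    ]
    for rank, row in enumerate(leaderboard, start=1):
        lines.append(
            f"| {rank} | {row['agent']} | ${row['budget']:,.2f} "
            f"| {row['roi']:+.1%} | {row['action']} |"
        )
    return "\n".join(lines)
```

In Streamlit, the returned string would be handed to `st.download_button` so the user gets the report as a `.md` file.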
All five agents use the same Claude Haiku model. Each gets a unique system prompt defining its channel focus, decision style, and strategic strength, so differences in campaign selection reflect persona, not model capability.
Each agent responds in strict JSON: campaign choice (1-3), budget allocation percentage (15-25%), and one-sentence reasoning. Structured output ensures reliable parsing and consistent simulation mechanics.
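The expected shape and range checks might look like the sketch below; the exact key names (`campaign`, `allocation_pct`, `reasoning`) are assumptions:

```python
# Illustrative example of the structured decision each agent must return.
example_response = {
    "campaign": 2,          # which of the 1-3 offered campaigns to run
    "allocation_pct": 20,   # percent of budget to spend, within 15-25
    "reasoning": "Campaign 2 targets repeat buyers with the highest expected uplift.",
}

def validate(decision):
    """Reject decisions outside the ranges the prompt enforces."""
    if not 1 <= decision["campaign"] <= 3:
        raise ValueError("campaign choice out of range")
    if not 15 <= decision["allocation_pct"] <= 25:
        raise ValueError("allocation percentage out of range")
    if not isinstance(decision["reasoning"], str):
        raise ValueError("reasoning must be a string")
    return decision
```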
Decisions are scored against the expected uplift from the campaign dataset. Above the benchmark, the agent earns back more than it spent; below it, the agent takes a partial loss. Two consecutive losses trigger a FLAGGED status, mirroring real-world performance-review logic.
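One plausible scoring rule with these properties is to pay out the spend scaled by uplift relative to the benchmark, so hitting the benchmark exactly breaks even; this is a sketch of the mechanic, not the app's actual formula:

```python
ROI_BENCHMARK = 0.09   # minimum uplift to pass a round
KILL_THRESHOLD = 2     # consecutive misses before FLAGGED

def score_round(budget, allocation_pct, uplift, miss_streak):
    """Spend a slice of the budget, return it scaled against the benchmark,
    and track consecutive benchmark misses (assumed payout rule)."""
    spend = budget * allocation_pct / 100
    payout = spend * (1 + uplift - ROI_BENCHMARK)  # gain above, loss below
    miss_streak = 0 if uplift >= ROI_BENCHMARK else miss_streak + 1
    flagged = miss_streak >= KILL_THRESHOLD
    return budget - spend + payout, miss_streak, flagged
```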
If the API returns malformed JSON, the agent falls back to a safe default choice without crashing the simulation. This ensures a complete run even under network failures or rate limits.
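The fallback path can be sketched as a guarded parse; the default values and field names are assumptions:

```python
import json

# Conservative default used whenever the model's reply cannot be parsed,
# so a round always completes (values are illustrative).
SAFE_DEFAULT = {
    "campaign": 1,
    "allocation_pct": 15,
    "reasoning": "fallback: defaulted after a malformed response",
}

def parse_decision(raw):
    """Return the agent's decision, or the safe default on bad JSON."""
    try:
        decision = json.loads(raw)
    except json.JSONDecodeError:
        return dict(SAFE_DEFAULT)
    required = {"campaign", "allocation_pct", "reasoning"}
    if not isinstance(decision, dict) or not required <= decision.keys():
        return dict(SAFE_DEFAULT)
    return decision
```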
agents.py, simulation.py, data.py, report.py, config.py: each file has one job. Settings held in st.session_state override config.py values before agents are initialised, so each run uses the user's chosen settings. simulation_report.md is written after each run and offered as a download via Streamlit's download button.

Configurable defaults:
- STARTING_BUDGET: starting budget per agent (default: $1,000)
- ROI_BENCHMARK: minimum uplift to pass (default: 0.09 = 9%)
- ROUNDS: number of simulation rounds (default: 4)
- KILL_THRESHOLD: consecutive misses before flagging (default: 2)
- MODEL: Claude model to use (default: claude-haiku-4-5-20251001)

Output: simulation_report.md, a full round-by-round breakdown saved automatically after each run.
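Putting the documented defaults together, config.py would plausibly be a flat module of constants like this (values taken from the defaults above; the file's actual layout is an assumption):

```python
# config.py -- simulation defaults (overridable from the sidebar via
# st.session_state before agents are initialised)
STARTING_BUDGET = 1_000                 # dollars per agent
ROI_BENCHMARK = 0.09                    # 9% minimum uplift to pass a round
ROUNDS = 4                              # number of simulation rounds
KILL_THRESHOLD = 2                      # consecutive misses before FLAGGED
MODEL = "claude-haiku-4-5-20251001"     # Claude model shared by all agents
```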