📊 Social Media Marketer

Core Source Code — Multi-Agent Simulation Engine

Python · Claude AI · Streamlit · Anthropic SDK

🔍 About This Code Showcase

This page highlights the core simulation logic behind Social Media Marketer — how five AI agents are defined, how they make decisions using Claude, and how the scoring and reporting system works.

API keys and environment configuration are omitted. The showcase focuses on three key parts: agent persona design, the Claude API decision loop, and the economic scoring engine.

🗂️ File Structure

```
social-media-marketer/
├── app.py            # Streamlit web UI — full simulation interface
├── main.py           # CLI entry point using Rich terminal output
├── agents.py         # MarketingAgent class + 5 persona definitions
├── simulation.py     # run_round() — one round per agent
├── data.py           # Load CSV, filter by channel, compute stats
├── report.py         # Build leaderboard, recommendations, save .md
├── config.py         # Budget, benchmark, rounds, model name
├── campaigns.csv     # 50 real campaigns — 5 channels, 4 objectives
├── .env              # ANTHROPIC_API_KEY (not committed)
├── requirements.txt
└── run.bat           # Windows launcher — installs deps + opens browser
```

🤖 Agent Personas — agents.py

All five agents share one LLM model. Each is differentiated entirely by its system prompt persona — demonstrating how prompt design shapes decision-making behaviour.

📄 agents.py — Persona definitions
```python
PERSONAS = {
    "Email": {
        "name": "Emma",
        "title": "Email Marketing Agent",
        "style": "patient, data-driven, retention-focused",
        "strength": "nurturing existing customers and re-engaging lapsed ones",
    },
    "Social": {
        "name": "Sam",
        "title": "Social Media Agent",
        "style": "creative, trend-aware, engagement-obsessed",
        "strength": "acquiring new customers and building brand awareness",
    },
    "Paid Search": {
        "name": "Parker",
        "title": "Paid Search Agent",
        "style": "analytical, ROI-obsessed, intent-focused",
        "strength": "capturing high-intent buyers ready to convert",
    },
    "Display": { ... },
    "Affiliate": { ... },
}
```
📄 agents.py — Claude API decision call
```python
def decide(self, options, round_num):
    # Build a prompt from the agent's persona + available campaign options
    opts_text = ""
    for i, c in enumerate(options, 1):
        opts_text += (
            f"\n  Option {i}: Objective={c['objective']}, "
            f"Segment={c['segment']}, Duration={c['duration']} days"
        )
    prompt = (
        f"You are {self.persona['name']}, a {self.channel} marketing specialist.\n"
        f"Style: {self.persona['style']}\n"
        f"Strength: {self.persona['strength']}\n\n"
        f"Current budget: ${self.budget:.0f}\n"
        f"Round: {round_num}/4\n"
        f"ROI benchmark: {ROI_BENCHMARK * 100:.0f}% minimum uplift\n\n"
        f"Campaign options for your {self.channel} channel:{opts_text}\n\n"
        f"Pick the option most likely to beat the benchmark.\n"
        f'Respond ONLY with valid JSON: {{"choice": 1, "budget_pct": 20, "reasoning": "one sentence"}}'
    )

    # Call Claude — structured JSON output only
    response = self._client.messages.create(
        model=MODEL,
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.content[0].text.strip()
    decision = json.loads(text)

    # Unpack the structured decision: 1-based "choice" in the prompt → 0-based index
    choice = decision["choice"] - 1
    pct = decision["budget_pct"]
    reasoning = decision["reasoning"]
    return options[choice], pct, reasoning
```
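The call above assumes Claude returns bare JSON. In practice a model sometimes wraps the payload in a markdown fence or adds surrounding prose, so a defensive parser is a common safeguard. A minimal sketch — the `parse_decision` helper and its fallback values are illustrative, not part of the showcase code:

```python
import json
import re


def parse_decision(text, n_options):
    """Extract {"choice", "budget_pct", "reasoning"} from a model reply,
    tolerating markdown fences or surrounding prose."""
    # Grab the first {...} span, even if it is wrapped in ```json fences
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return {"choice": 1, "budget_pct": 10, "reasoning": "fallback: no JSON found"}
    try:
        decision = json.loads(match.group(0))
    except json.JSONDecodeError:
        return {"choice": 1, "budget_pct": 10, "reasoning": "fallback: invalid JSON"}
    # Clamp the choice into the valid 1..n_options range
    decision["choice"] = max(1, min(int(decision.get("choice", 1)), n_options))
    return decision
```

The clamp step matters: an out-of-range `choice` would otherwise raise an `IndexError` when indexing `options`.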

⚖️ Economic Scoring Engine — agents.py

Each agent's decision is scored against the campaign's actual expected uplift from the dataset. Good decisions earn back more than was spent. Poor decisions result in a partial loss.

📄 agents.py — apply_result()
```python
def apply_result(self, campaign, allocated, round_num, reasoning):
    uplift = campaign["uplift"]
    if uplift >= ROI_BENCHMARK:
        # Above benchmark: agent earns back allocated + uplift bonus
        earnings = allocated * (1 + uplift)
        outcome = "PROFIT"
        self._below_count = 0
    else:
        # Below benchmark: partial refund only — proportional to how far short
        earnings = allocated * (uplift / ROI_BENCHMARK)
        outcome = "LOSS"
        self._below_count += 1
        # Two consecutive misses → agent is flagged for CMO review
        if self._below_count >= KILL_THRESHOLD:
            self.flagged = True

    net = earnings - allocated
    self.budget += net
    record = {
        "round": round_num,
        "campaign": campaign,
        "allocated": allocated,
        "earnings": earnings,
        "net": net,
        "outcome": outcome,
        "reasoning": reasoning,
    }
    return record
```
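To make the payoff asymmetry concrete, here is the arithmetic for both branches. The constants are illustrative (a 10% benchmark and a $200 allocation), not the project's actual config values:

```python
ROI_BENCHMARK = 0.10  # assumed 10% minimum uplift
allocated = 200.0

# Above benchmark: uplift 0.18 → allocation returned plus the full uplift
uplift = 0.18
earnings_win = allocated * (1 + uplift)                # 200 * 1.18 = 236.0
net_win = earnings_win - allocated                     # +36.0 profit

# Below benchmark: uplift 0.04 → only 40% of the allocation refunded
uplift = 0.04
earnings_loss = allocated * (uplift / ROI_BENCHMARK)   # 200 * 0.4 = 80.0
net_loss = earnings_loss - allocated                   # -120.0 loss
```

Note the asymmetry: beating the benchmark by 8 points gains $36, while missing it by 6 points costs $120 — the engine punishes misses much harder than it rewards wins.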

📊 Data Loading & Channel Stats — data.py

The CSV is loaded once, cached by Streamlit, and filtered per channel when agents need their campaign options each round.

📄 data.py — load_campaigns() and channel_stats()
```python
import csv
import random
from datetime import datetime


def load_campaigns(filepath="campaigns.csv"):
    campaigns = []
    with open(filepath, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            start = datetime.strptime(row["start_date"], "%Y-%m-%d")
            end = datetime.strptime(row["end_date"], "%Y-%m-%d")
            campaigns.append({
                "channel": row["channel"],
                "objective": row["objective"],
                "segment": row["target_segment"],
                "uplift": float(row["expected_uplift"]),
                "duration": (end - start).days,
            })
    return campaigns


def get_options(campaigns, channel, n=3):
    # Filter by channel, then return n random options each round
    pool = [c for c in campaigns if c["channel"] == channel]
    return random.sample(pool, min(n, len(pool)))


def channel_stats(campaigns):
    # Aggregate avg, best, worst uplift per channel for the baseline table
    stats = {}
    for c in campaigns:
        ch = c["channel"]
        if ch not in stats:
            stats[ch] = []
        stats[ch].append(c["uplift"])
    return {
        ch: {"avg": round(sum(v) / len(v), 4), "best": max(v), "worst": min(v)}
        for ch, v in stats.items()
    }
```
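Given those helpers, a round's data flow looks like this. The functions are restated so the sketch runs standalone, and the inline campaign list is made-up sample data standing in for `campaigns.csv`:

```python
import random

# Illustrative rows in the shape load_campaigns() produces
campaigns = [
    {"channel": "Email", "objective": "Retention", "segment": "Lapsed", "uplift": 0.12, "duration": 14},
    {"channel": "Email", "objective": "Upsell", "segment": "Active", "uplift": 0.08, "duration": 7},
    {"channel": "Social", "objective": "Awareness", "segment": "New", "uplift": 0.15, "duration": 21},
]


def get_options(campaigns, channel, n=3):
    # Sample up to n campaigns from the agent's own channel
    pool = [c for c in campaigns if c["channel"] == channel]
    return random.sample(pool, min(n, len(pool)))


def channel_stats(campaigns):
    # Group uplifts by channel, then summarize each group
    stats = {}
    for c in campaigns:
        stats.setdefault(c["channel"], []).append(c["uplift"])
    return {
        ch: {"avg": round(sum(v) / len(v), 4), "best": max(v), "worst": min(v)}
        for ch, v in stats.items()
    }


options = get_options(campaigns, "Email")  # both Email campaigns, since only 2 exist
stats = channel_stats(campaigns)
# stats["Email"] → {"avg": 0.1, "best": 0.12, "worst": 0.08}
```

Because `random.sample` never repeats an element, an agent can't be offered the same campaign twice within one round.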

⚙️ Key Design Decisions