Lee Yih Ven
Case Study

Finding Which Campaigns Work

50 marketing campaigns analyzed; Display and Paid Search beat Email.

A marketing team had run 50 campaigns across different channels, segments, and objectives. They wanted to know which combinations actually drove uplift — and which channels were absorbing budget without delivering.

The dataset captured each campaign's uplift against expectation, broken down by channel, customer segment, objective, and timing.

On channel: Display (0.0944 mean uplift), Paid Search (0.0936), and Other (0.0986) consistently outperformed Social (0.0741) and Email (0.0749). The gaps were too wide and too consistent to be noise.

On segment: High Value (0.0913) and New Customer (0.0904) responded best. Churn Risk (0.0751) and Deal Seeker (0.0714) lagged — they need different messaging or more precise targeting before they earn the same budget.

On objective: Cross-sell (0.0927) and Retention (0.0923) won. Reactivation (0.0837) underdelivered.

On timing: February (0.106) and November (0.114) were the strongest months. September (0.062) was the weakest.
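The single-dimension breakdowns above are each a group-and-average over the campaign table. A minimal sketch in pandas, using invented data and assumed column names (`channel`, `segment`, `objective`, `month`, `uplift`):

```python
import pandas as pd

# Hypothetical schema: one row per campaign with its observed uplift.
# Values and column names are illustrative, not the team's actual data.
campaigns = pd.DataFrame({
    "channel":   ["Display", "Email", "Paid Search", "Email"],
    "segment":   ["High Value", "Churn Risk", "New Customer", "Deal Seeker"],
    "objective": ["Cross-sell", "Reactivation", "Retention", "Retention"],
    "month":     ["Feb", "Sep", "Nov", "Feb"],
    "uplift":    [0.102, 0.061, 0.118, 0.070],
})

# One groupby per dimension reproduces each breakdown.
for dim in ["channel", "segment", "objective", "month"]:
    print(campaigns.groupby(dim)["uplift"].mean().sort_values(ascending=False))
```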

On duration: no meaningful relationship with uplift. Long campaigns and short campaigns performed similarly. Duration is an operational choice, not a performance lever.
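The duration check reduces to a simple correlation between campaign length and uplift. A sketch with invented numbers and an assumed `duration_days` column:

```python
import pandas as pd

# Toy data standing in for the 50 campaigns; values are illustrative.
df = pd.DataFrame({
    "duration_days": [7, 14, 30, 60, 90],
    "uplift":        [0.09, 0.07, 0.10, 0.08, 0.09],
})

# Pearson correlation; a small |r| mirrors the "no relationship" finding.
r = df["duration_days"].corr(df["uplift"])
print(f"duration-uplift correlation: {r:.3f}")
```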

The most uncomfortable finding was budget alignment. Email had 14 campaigns — the highest count — and the lowest uplift. Display and Paid Search, the strongest channels, had only 11 and 10 campaigns. The team was spending most heavily on what worked least.
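The misalignment surfaces as soon as campaign count and mean uplift sit in the same table. A sketch of that check, with illustrative counts and uplifts rather than the report's:

```python
import pandas as pd

# Invented per-campaign rows: the highest-volume channel is not the best one.
df = pd.DataFrame({
    "channel": ["Email"] * 3 + ["Display"] * 2 + ["Paid Search"] * 2,
    "uplift":  [0.070, 0.080, 0.075, 0.095, 0.093, 0.094, 0.093],
})

# Campaign count vs. mean uplift per channel, busiest channel first.
summary = (df.groupby("channel")["uplift"]
             .agg(campaigns="count", mean_uplift="mean")
             .sort_values("campaigns", ascending=False))
print(summary)
```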

A surprise from the combination matrix: Email pairs unexpectedly well with Retention for New Customers (0.140 uplift). The channel is weak on average but strong in the right context.
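A combination matrix like this is a pivot over multiple dimensions at once. A toy version, with invented rows chosen to echo the Email + Retention + New Customer cell:

```python
import pandas as pd

# Illustrative data only; the 0.14 cell mimics the surprise combination.
df = pd.DataFrame({
    "channel":   ["Email", "Email", "Display", "Email"],
    "objective": ["Retention", "Reactivation", "Cross-sell", "Retention"],
    "segment":   ["New Customer", "Churn Risk", "High Value", "New Customer"],
    "uplift":    [0.14, 0.05, 0.11, 0.14],
})

# Mean uplift for every channel x objective x segment combination.
matrix = df.pivot_table(index=["channel", "objective"],
                        columns="segment", values="uplift", aggfunc="mean")
print(matrix)
```

Weak-on-average channels can still own strong cells in a matrix like this, which is why the averages alone don't settle the budget question.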

The recommendation: cut Email's volume in half, double down on Display and Paid Search, save big-budget campaigns for February and November, and treat the strong combinations as templates.

Full report → Dashboard →
#MarketingAnalytics #CampaignOptimization #PerformanceMarketing