Sunny Car Accessories

Browser-native agentic commerce — a Malaysian car-accessory storefront agents can actually shop on

WebMCP (W3C) navigator.modelContext Vanilla JS Static HTML Chrome 146+

Project Overview & Problem Statement

Challenge: Today's AI shopping agents (ChatGPT, Gemini, Claude, Auto Browser) shop the way a confused human would — screenshot the product page, guess where the "Add to Cart" button is, click pixel coordinates, and hope nothing changed. The result: abandoned carts mid-checkout, wrong variants added, scraped prices that drift from real cart prices, A/B tests that break agent workflows. For Malaysian SME merchants, this translates to lost revenue from a growing channel they can't even measure.

Solution: Sunny Car Accessories is a multi-category storefront — 18 SKUs across Interior / Exterior / Performance / Tech — that exposes structured callable tools to AI agents via the proposed W3C WebMCP standard (navigator.modelContext). Humans see a normal storefront. Agents see a typed contract: "Here are my 7 tools, here are their parameters, here's how to call them." No screenshotting. No pixel-clicking. No abandoned agent carts.

Key Benefits

How to Experience This Demo — 3 Test Paths

The UI is intentionally identical to a regular static-HTML storefront. This is what WebMCP does — it adds an invisible agent-callable layer on top of a normal-looking shop. A skeptical visitor who isn't told about WebMCP would never notice the difference. The 3 paths below let you experience exactly what's different about a WebMCP-enabled store versus a non-WebMCP one.

Path 1 — Browse as a Human

What it looks like: Click categories, pick a product, add to cart, check out. Standard e-commerce UX.

What's happening: Pure static interaction. No LLM involved anywhere. The Inspector Panel sits unobtrusively in the corner — a normal shopper won't notice it.

Cost: zero. Difference vs non-WebMCP store: none visible to the human user.

Path 2 — Try as Agent (Inspector buttons)

What it looks like: Open the Inspector Panel → Try as Agent tab → click any "▶ Run [tool]" button.

What's happening: The page invokes its own registered WebMCP tools with sample params and shows the typed JSON result in the Log tab. Still no LLM. This simulates exactly what a real AI agent would call.

Cost: zero. Difference vs non-WebMCP store: a non-WebMCP page has nothing for these buttons to invoke — they wouldn't exist or would do nothing.
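A minimal sketch of the kind of in-page dispatcher those "▶ Run" buttons could call. The `registry`, `registerTool`, and `invokeTool` names here are illustrative, not the demo's actual implementation (though the demo's console snippet exposes a similarly named `webMCP.invokeTool` helper); the stub `getCart` tool exists only so the sketch is runnable.

```javascript
// Hypothetical in-page tool registry; names are illustrative.
const registry = new Map();

function registerTool(tool) {
  registry.set(tool.name, tool);
}

// What a "▶ Run [tool]" button would do: look up the registered
// tool and call it with sample params, returning typed JSON.
async function invokeTool(name, params) {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(params);
}

// Stub tool so the dispatcher can be exercised without an LLM.
registerTool({
  name: 'getCart',
  async execute() {
    return { type: 'text', text: JSON.stringify({ items: [], total: 0 }) };
  },
});
```

Because the page invokes its own tools, this path costs nothing and needs no agent installed — it exercises exactly the code path a real agent would hit.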

Path 3 — Real Auto Browser test

What it looks like: Visitor installs Auto Browser (the first WebMCP-aware AI agent), points it at the demo URL, types natural language: "Find me a dashcam under RM 500 and add it to my cart."

What's happening: Auto Browser does the LLM reasoning, decides which tool to call (searchAccessories, then addToCart), calls it via navigator.modelContext, gets typed JSON back. The LLM cost belongs to Auto Browser, not the demo.

Cost: zero on the merchant side. Difference vs non-WebMCP store: Auto Browser cannot reliably shop on a non-WebMCP store — it would screenshot, guess at "Add to Cart," click pixel coordinates, and probably abandon the cart mid-checkout.

The merchant-side cost story

This is the strongest WebMCP value for Malaysian SMEs: agent-ready storefront with zero ongoing AI cost — agents pay their own way. Compare this to "AI shopping assistant" SaaS subscriptions (RM 200/month per merchant) that route through the SaaS's LLM endpoint.

Application Features

1. Multi-View SPA

Single-file static HTML with a state-driven view router across home / category / product / cart / checkout / track. No backend, no framework, no build step. Survives any static host.

2. Per-View Tool Scoping

The killer WebMCP feature: tool surface morphs as the user navigates. Home shows 1 tool. Product page shows 3. Cart shows 2. Checkout shows 2. Agents see only what's relevant where they are — never 43 tools at once.
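The view-to-tools mapping above can be sketched as a small lookup swapped in on every navigation. This is a hedged illustration, not the demo's actual code: `VIEW_TOOLS`, `toolsForView`, and `applyToolsForView` are assumed names, and the `navigator.modelContext.provideContext({tools})` call follows the proposed W3C shape.

```javascript
// Illustrative per-view tool map, mirroring the demo's tool surface.
const VIEW_TOOLS = {
  home:     ['searchAccessories'],
  category: ['searchAccessories', 'getProductDetails'],
  product:  ['getProductDetails', 'checkFitment', 'addToCart'],
  cart:     ['getCart', 'removeFromCart'],
  checkout: ['getCart', 'checkout'],
  track:    [],
};

function toolsForView(view, allTools) {
  const names = VIEW_TOOLS[view] ?? [];
  return allTools.filter(t => names.includes(t.name));
}

// On each navigation, swap the active tool set. Guarded so the same
// code runs unchanged in browsers without navigator.modelContext.
function applyToolsForView(view, allTools) {
  const tools = toolsForView(view, allTools);
  if (typeof navigator !== 'undefined' && 'modelContext' in navigator) {
    navigator.modelContext.provideContext({ tools });
  }
  return tools; // also drives the Inspector's "N tools" badge
}
```

The design choice: scoping is a pure lookup, so the same function feeds both the live API (when present) and the Inspector's simulated badge.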

3. Inspector Panel (4 Tabs)

Floating panel demonstrates WebMCP is real, not a label. Tabs: Tools (live JSON schemas), Try as Agent (run any tool with sample params), Log (timestamped invocations), Source (the actual provideContext() code per view).

4. Vehicle Fitment Checker

The killer demo tool: checkFitment({sku, vehicleMake, vehicleModel, year}) returns a typed verdict on whether a part fits a customer's car. Pixel-clicking agents simply cannot do this reliably.
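A minimal sketch of the verdict logic, under stated assumptions: the `FITMENT` table below is made-up sample data (the real demo's catalogue is not shown here), and the return shape follows the typed `{type: 'text', text: ...}` result format the demo's console example documents.

```javascript
// Hypothetical fitment table — sample data, not the demo's catalogue.
const FITMENT = {
  'INT-002': [{ make: 'Honda', model: 'Civic', years: [2016, 2021] }],
};

// Returns a typed verdict an agent can act on directly, instead of
// guessing from a rendered compatibility badge in a screenshot.
function checkFitment({ sku, vehicleMake, vehicleModel, year }) {
  const entries = FITMENT[sku] ?? [];
  const fits = entries.some(e =>
    e.make === vehicleMake &&
    e.model === vehicleModel &&
    year >= e.years[0] && year <= e.years[1]
  );
  return { type: 'text', text: JSON.stringify({ sku, fits }) };
}
```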

5. requestUserInteraction() Gate

The checkout tool triggers a custom "Allow once / Deny" confirmation modal before placing an order. Destructive actions (charging the customer) require explicit human approval — even when an agent initiated them.
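A sketch of how such a gate could be wired. The exact signature of requestUserInteraction() is still under W3C discussion, so the confirmation step is injected here as a `confirmWithUser` callback — an assumption for illustration, not the spec's final API.

```javascript
// `confirmWithUser` stands in for the browser's requestUserInteraction();
// its real signature is not finalized, so it is injected as a parameter.
async function checkout({ paymentMethod }, confirmWithUser) {
  const approved = await confirmWithUser({
    message: `Place this order and pay via ${paymentMethod}?`,
    options: ['Allow once', 'Deny'],
  });
  if (!approved) {
    // The agent gets a typed refusal instead of a silent failure.
    return { type: 'text', text: JSON.stringify({ placed: false, reason: 'user denied' }) };
  }
  return { type: 'text', text: JSON.stringify({ placed: true, paymentMethod }) };
}
```

The key property: the destructive branch is unreachable until a human resolves the prompt, regardless of who initiated the call.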

6. Compatibility Mode Fallback

WebMCP is in W3C incubation — most browsers don't yet expose navigator.modelContext. The Inspector detects this and runs in compatibility mode: same simulated tool surface, honest "not detected" status banner, zero broken UI.
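The detection branch can be sketched as a pure function. `webMCPMode` is an assumed name; taking a navigator-like object as a parameter keeps the logic testable outside a browser.

```javascript
// Takes a navigator-like object so the branch is testable anywhere.
function webMCPMode(nav) {
  return nav && 'modelContext' in nav ? 'live' : 'compat';
}

// In the page itself this would be called as:
//   const mode = webMCPMode(typeof navigator !== 'undefined' ? navigator : null);
// 'compat' keeps the Inspector fully functional with simulated tools and
// an honest "not detected" banner; 'live' registers the real tool surface.
```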

WebMCP Tool Surface (7 Tools)

searchAccessories(query, category, maxPrice, vehicleMake)
getProductDetails(sku)
checkFitment(sku, vehicleMake, vehicleModel, year) 🔑
addToCart(sku, variant, quantity)
getCart() (no params, read-only)
removeFromCart(key)
checkout(paymentMethod) (gated)

Per-View Tool Mapping

Home · 1 tool: searchAccessories
Category · 2 tools: search + getDetails
Product · 3 tools: getDetails + checkFitment + addToCart
Cart · 2 tools: getCart + removeFromCart
Checkout · 2 tools: getCart + checkout (gated)
Track · 0 tools (post-purchase)

WebMCP Integration & Intelligence

Typed Contract over Pixel-Clicking

Each tool registers with a JSON inputSchema describing parameters, types, and constraints. The agent reads the schema and calls the tool with structured parameters — no screenshot interpretation, no DOM scraping, no broken layout assumptions.
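What one such descriptor could look like, as a sketch: the description text, the JSON-Schema-style `inputSchema` details, and the `missingRequired` helper below are illustrative assumptions, not the demo's exact code.

```javascript
// Sketch of one tool descriptor with a JSON-Schema-style inputSchema.
const searchAccessories = {
  name: 'searchAccessories',
  description: 'Search the catalogue by keyword, category, price cap, and vehicle make.',
  inputSchema: {
    type: 'object',
    properties: {
      query:       { type: 'string' },
      category:    { type: 'string', enum: ['Interior', 'Exterior', 'Performance', 'Tech'] },
      maxPrice:    { type: 'number', minimum: 0 },
      vehicleMake: { type: 'string' },
    },
    required: ['query'],
  },
};

// Illustrative check of required params against the schema — a real
// agent runtime would run full JSON Schema validation, not just this.
function missingRequired(schema, params) {
  return (schema.required ?? []).filter(k => !(k in params));
}
```

Because the agent reads this contract before calling, a missing or mistyped parameter is caught as data, not discovered as a broken click.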

Per-Page Surface (Pattern B)

On every view change, navigator.modelContext.provideContext({tools}) swaps the active tool set. An agent on the cart page sees removeFromCart but has no access to checkout — destructive tools are exposed only on the view that matches the user's current intent.

Human-in-the-Loop on Destructive Actions

The checkout tool calls requestUserInteraction() before placing an order. The browser pauses agent execution and shows a confirmation prompt. Even a runaway agent cannot charge the customer without explicit approval.

89% Token Efficiency vs Pixel-Clicking

Per the WebMCP deck: typed tool calls consume 20–100 tokens. Screenshot + DOM parsing typically consumes 2,000–5,000+ tokens per page. Per-view scoping (3–5 tools at a time, not 43) keeps the agent context clean.

Tier 1 Test Results — Verified 2026-04-30

End-to-end tested on Chrome 147 with experimental web platform features enabled. All five test cases passed:

1A — Per-view tool scoping: Inspector status badge updates Home (1) → Category (2) → Product (3) → Cart (2) → Checkout (2). ✓ Pass
1B — Try as Agent playground: clicking ▶ Run checkFitment on a product page executes the tool and updates page state with no human button click. ✓ Pass
1C — requestUserInteraction(): running checkout from the Inspector triggers the "Allow once / Deny" confirmation modal before the order is placed. ✓ Pass
1D — Source tab proof: the Source tab shows the actual navigator.modelContext.provideContext() code per view — different code per view, demonstrating real per-view scoping. ✓ Pass
Status badge accuracy: "WebMCP active · N tools on [view]" updates correctly on every view transition. ✓ Pass
Browser-support disclaimer. WebMCP is in W3C incubation as of April 2026. The navigator.modelContext API is currently flag-gated even on Chrome 147 stable — the deck's "146+ stable" claim is aspirational; live API access requires chrome://flags/#enable-experimental-web-platform-features or a similar flag enabled.

On any browser without the API, the Inspector Panel runs in compatibility mode — it still demonstrates exactly what tools an agent would see, the per-view scoping, the JSON schemas, and the confirmation flow. The same code switches automatically to live agent integration the moment the browser enables WebMCP — no code change needed on this page.

To see the green ✅ today: Chrome 147 → chrome://flags → search "model context" or enable Experimental Web Platform Features → relaunch → reload the demo.

Technical Architecture & Implementation

Frontend Stack

HTML5 CSS3 Vanilla JavaScript Poppins Font localStorage

WebMCP Stack

navigator.modelContext (W3C) provideContext() registerTool() requestUserInteraction() submitEvent.agentInvoked

Deployment

GitHub Pages Static Single-File HTML No Backend No Build Step

System Architecture

Production Deployment Notes

This demo runs entirely in the browser as a static page. For a real merchant deployment, the following would be added:

Required for Production

Architecture Comparison

DEMO (this page):
  Customer Browser → static HTML → localStorage cart → mocked checkout
  // Single user, single tab, no real persistence

PRODUCTION:
  Customer Browser → CDN-served HTML → backend API → Inventory DB (PostgreSQL/Firestore)
  → Payment processor (Stripe/Razorpay/iPay88) → Order confirmation → backend → browser
  + WebMCP Origin Trial token in <meta> tag
  + Agent-traffic analytics (submitEvent.agentInvoked → log to BigQuery)
  + Server-side fraud checks before checkout completes
  + Inventory write-back on every addToCart

Development Setup & Testing Guide

Prerequisites

Quick Start

# Clone the repository
git clone https://github.com/lyven81/ai-project.git
cd ai-project/projects/sunny-car-accessories

# Open the demo directly in any browser
# Windows:
start index.html
# Mac:
open index.html

# Or run a local server (recommended for Auto Browser testing):
python -m http.server 8000
# Then visit http://localhost:8000/

Console Verification (paste into DevTools)

'modelContext' in navigator;
// → true (Chrome 146+ with flags) or false (compat mode)

navigator.modelContext.tools.map(t => t.name);
// → list of tools registered on the current view

await webMCP.invokeTool('checkFitment', {
  sku: 'INT-002', vehicleMake: 'Honda', vehicleModel: 'Civic', year: 2020
});
// → { type: "text", text: '{"fits":true,...}' }

Project Files

projects/sunny-car-accessories/
  index.html                    # Main app — multi-view SPA + WebMCP layer + Inspector Panel
  images/
    hero/
      hero-banner.jpg           # 1600×600 — Malaysian car-accessory shop interior
    categories/
      interior.jpg exterior.jpg performance.jpg tech.jpg
                                # 1200×600 each — category banners
    products/
      int-001-...jpg ...        # 18 product images, 800×800 each
                                # AI-generated via Imagen 3 with consistent style anchor

Key Metrics

7 · WebMCP Tools Registered
18 · SKUs across 4 categories
89% · Token reduction vs pixel-clicking
5/5 · Tier 1 tests passed

Business Value