Browser-native agentic commerce — a Malaysian car-accessory storefront agents can actually shop on
Challenge: Today's AI shopping agents (ChatGPT, Gemini, Claude, Auto Browser) shop the way a confused human would — screenshot the product page, guess where the "Add to Cart" button is, click pixel coordinates, and hope nothing changed. The result: abandoned carts mid-checkout, wrong variants added, scraped prices that drift from real cart prices, A/B tests that break agent workflows. For Malaysian SME merchants, this translates to lost revenue from a growing channel they can't even measure.
Solution: Sunny Car Accessories is a multi-category storefront — 18 SKUs across Interior / Exterior / Performance / Tech — that exposes structured callable tools to AI agents via the proposed W3C WebMCP standard (navigator.modelContext). Humans see a normal storefront. Agents see a typed contract: "Here are my 7 tools, here are their parameters, here's how to call them." No screenshotting. No pixel-clicking. No abandoned agent carts.
- addToCart({sku, variant, quantity}) is called directly — no wrong-variant disputes, no abandoned carts mid-checkout, no price drift
- checkFitment and getStockLevel answer pre-sale questions without the customer pinging WhatsApp / Messenger
- submitEvent.agentInvoked tells the merchant "this was an agent" — the new revenue channel finally becomes visible

The UI is intentionally identical to a regular static-HTML storefront. This is what WebMCP does — it adds an invisible agent-callable layer on top of a normal-looking shop. A skeptical visitor who isn't told about WebMCP would never notice the difference. The 3 paths below let you experience exactly what's different about a WebMCP-enabled store versus a non-WebMCP one.
Path 1 — Browse as a human shopper
What it looks like: Click categories, pick a product, add to cart, check out. Standard e-commerce UX.
What's happening: Pure static interaction. No LLM involved anywhere. The Inspector Panel sits unobtrusively in the corner — a normal shopper won't notice it.
Cost: zero. Difference vs a non-WebMCP store: none visible to the human user.
Path 2 — Run tools from the Inspector Panel
What it looks like: Open the Inspector Panel → Try as Agent tab → click any "▶ Run [tool]" button.
What's happening: The page invokes its own registered WebMCP tools with sample params and shows the typed JSON result in the Log tab. Still no LLM. This simulates exactly what a real AI agent would call.
Cost: zero. Difference vs a non-WebMCP store: a non-WebMCP page has nothing for these buttons to invoke — they wouldn't exist or would do nothing.
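The "▶ Run" path can be sketched as a local registry plus a direct invocation, which is all the playground needs (the helper names, the sample tool data, and the `DASH-001` SKU here are illustrative, not the demo's actual code):

```javascript
// Local tool registry standing in for the page's registered WebMCP tools.
const toolRegistry = new Map();

function registerTool(tool) {
  toolRegistry.set(tool.name, tool);
}

// What a "▶ Run [tool]" button does: look the tool up and call it with params.
// No LLM anywhere — this is the same structured call a real agent would make.
function invokeTool(name, params) {
  const tool = toolRegistry.get(name);
  if (!tool) throw new Error(`Tool not registered: ${name}`);
  // A real agent would validate params against tool.inputSchema first.
  return tool.execute(params);
}

// Example tool mirroring the demo's getStockLevel, with stubbed data.
registerTool({
  name: "getStockLevel",
  inputSchema: {
    type: "object",
    properties: { sku: { type: "string" } },
    required: ["sku"],
  },
  execute: ({ sku }) => ({ sku, inStock: true, quantity: 12 }),
});

// The typed JSON result is what the Inspector renders in its Log tab.
const result = invokeTool("getStockLevel", { sku: "DASH-001" });
```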
Path 3 — Shop through a real AI agent
What it looks like: The visitor installs Auto Browser (the first WebMCP-aware AI agent), points it at the demo URL, and types natural language: "Find me a dashcam under RM 500 and add it to my cart."
What's happening: Auto Browser does the LLM reasoning, decides which tools to call (searchAccessories, then addToCart), invokes them via navigator.modelContext, and gets typed JSON back. The LLM cost belongs to Auto Browser, not the demo.
Cost: zero on the merchant side. Difference vs a non-WebMCP store: Auto Browser cannot reliably shop on a non-WebMCP store — it would screenshot, guess at "Add to Cart," click pixel coordinates, and likely abandon the cart mid-checkout.
This is the strongest WebMCP value for Malaysian SMEs: agent-ready storefront with zero ongoing AI cost — agents pay their own way. Compare this to "AI shopping assistant" SaaS subscriptions (RM 200/month per merchant) that route through the SaaS's LLM endpoint.
Single-file static HTML with a state-driven view router across home / category / product / cart / checkout / track. No backend, no framework, no build step. Survives any static host.
The killer WebMCP feature: tool surface morphs as the user navigates. Home shows 1 tool. Product page shows 3. Cart shows 2. Checkout shows 2. Agents see only what's relevant where they are — never 43 tools at once.
Floating panel demonstrates WebMCP is real, not a label. Tabs: Tools (live JSON schemas), Try as Agent (run any tool with sample params), Log (timestamped invocations), Source (the actual provideContext() code per view).
The killer demo tool: checkFitment({sku, vehicleMake, vehicleModel, year}) returns a typed verdict on whether a part fits a customer's car. Pixel-clicking agents physically cannot do this reliably.
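A sketch of what such a tool could look like: the tool name and parameter list come from the demo, but the fitment data, result fields, and vehicle examples below are hypothetical.

```javascript
// Hypothetical fitment data; a real store would back this with its catalogue.
const fitmentTable = {
  "DASH-001": [{ make: "Perodua", model: "Myvi", years: [2018, 2024] }],
};

const checkFitment = {
  name: "checkFitment",
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string" },
      vehicleMake: { type: "string" },
      vehicleModel: { type: "string" },
      year: { type: "integer" },
    },
    required: ["sku", "vehicleMake", "vehicleModel", "year"],
  },
  execute({ sku, vehicleMake, vehicleModel, year }) {
    const rows = fitmentTable[sku] || [];
    const fits = rows.some(
      (r) =>
        r.make === vehicleMake &&
        r.model === vehicleModel &&
        year >= r.years[0] &&
        year <= r.years[1]
    );
    // Typed verdict — exactly what a pixel-clicking agent cannot obtain reliably.
    return { sku, fits, verdict: fits ? "fits" : "does_not_fit" };
  },
};

const verdict = checkFitment.execute({
  sku: "DASH-001",
  vehicleMake: "Perodua",
  vehicleModel: "Myvi",
  year: 2020,
});
```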
The checkout tool triggers a custom "Allow once / Deny" confirmation modal before placing an order. Destructive actions (charging the customer) require explicit human approval — even when an agent initiated them.
WebMCP is in W3C incubation — most browsers don't yet expose navigator.modelContext. The Inspector detects this and runs in compatibility mode: same simulated tool surface, honest "not detected" status banner, zero broken UI.
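The detect-and-fall-back pattern behind compatibility mode can be sketched like this. It assumes the shape of the proposed `navigator.modelContext.provideContext({tools})` call; the wrapper name `webMCP` appears in the demo, but the `mode` and `listTools` members here are illustrative.

```javascript
// Feature-detecting wrapper: forward to the native API when it exists,
// otherwise keep a local registry so the Inspector stays fully functional.
const webMCP = (() => {
  const native =
    (globalThis.navigator && globalThis.navigator.modelContext) || null;
  let current = []; // mirror of the active tool set, for the Inspector UI
  return {
    mode: native ? "native" : "compat", // drives the honest status banner
    provideContext({ tools }) {
      current = tools;
      if (native) native.provideContext({ tools }); // forward when available
    },
    listTools: () => current.map((t) => t.name),
  };
})();

webMCP.provideContext({ tools: [{ name: "searchAccessories" }] });
```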
Each tool registers with a JSON inputSchema describing parameters, types, and constraints. The agent reads the schema and calls the tool with structured parameters — no screenshot interpretation, no DOM scraping, no broken layout assumptions.
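For example, an inputSchema for addToCart might look like the following; the parameter names match the demo's tool signature, but the descriptions and constraints (quantity bounds, required fields) are assumed for illustration.

```javascript
// JSON Schema the agent reads before calling the tool — the typed contract.
const addToCartSchema = {
  type: "object",
  properties: {
    sku: { type: "string", description: "Product SKU, e.g. a dashcam SKU" },
    variant: { type: "string", description: "Variant id (colour, size, ...)" },
    quantity: { type: "integer", minimum: 1, maximum: 10 },
  },
  required: ["sku", "quantity"],
};

// The structured call an agent builds from that schema —
// no screenshot interpretation, no DOM scraping.
const call = {
  tool: "addToCart",
  params: { sku: "DASH-001", variant: "black", quantity: 1 },
};
```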
On every view change, navigator.modelContext.provideContext({tools}) swaps the active tool set. The agent on the cart page sees removeFromCart but loses access to checkout — destructive tools live only on the page where the user is in that intent.
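The per-view swap can be sketched as follows. A plain local object stands in for `navigator.modelContext` so the sketch runs anywhere; in a WebMCP browser the same `provideContext` call would go to the real API. The exact tool lists per view are partly assumed (updateQuantity and applyVoucher are placeholders).

```javascript
// Stand-in for navigator.modelContext with the proposed provideContext shape.
const modelContext = {
  tools: [],
  provideContext({ tools }) {
    this.tools = tools; // replaces, not extends: the whole surface swaps
  },
};

// Each view exposes only the tools relevant to the user's current intent.
const toolsByView = {
  home: [{ name: "searchAccessories" }],
  product: [{ name: "checkFitment" }, { name: "getStockLevel" }, { name: "addToCart" }],
  cart: [{ name: "removeFromCart" }, { name: "updateQuantity" }],
  checkout: [{ name: "checkout" }, { name: "applyVoucher" }],
};

// Called on every view change: agents never see stale or off-view tools.
function updateWebMCPContext(view) {
  modelContext.provideContext({ tools: toolsByView[view] || [] });
}

updateWebMCPContext("product");
const activeTools = modelContext.tools.map((t) => t.name);
```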
The checkout tool calls requestUserInteraction() before placing an order. The browser pauses agent execution and shows a confirmation prompt. Even a runaway agent cannot charge the customer without explicit approval.
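A minimal sketch of this pause-and-confirm pattern, with the DOM modal replaced by a programmatic resolver so it runs anywhere. The function names, result fields, and order id below are hypothetical; only the Allow once / Deny flow comes from the demo.

```javascript
// Creates a pending decision; in the browser, the modal's Allow once / Deny
// buttons would call allow() or deny() to resolve it.
function requestUserConfirmation(message) {
  let resolveFn;
  const decision = new Promise((resolve) => {
    resolveFn = resolve;
  });
  return {
    message,
    decision,
    allow: () => resolveFn(true),
    deny: () => resolveFn(false),
  };
}

// The checkout tool's execute() awaits the decision: agent execution pauses
// here, and no order is placed without explicit human approval.
async function checkoutExecute(params, ui) {
  const approved = await ui.decision;
  if (!approved) return { status: "denied_by_user" };
  return { status: "order_placed", orderId: "ORD-0001" }; // stubbed confirmation
}

// Simulate: an agent triggers checkout, then the human clicks "Allow once".
const ui = requestUserConfirmation("Place order for RM 499?");
const pending = checkoutExecute({}, ui);
ui.allow();
```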
Per the WebMCP deck: typed tool calls consume 20–100 tokens. Screenshot + DOM parsing typically consumes 2,000–5,000+ tokens per page. Per-view scoping (3–5 tools at a time, not 43) keeps the agent context clean.
End-to-end tested on Chrome 147 with experimental web platform features enabled. All five test cases passed:
| Test | What was checked | Result |
|---|---|---|
| 1A — Per-view tool scoping | Inspector status badge updates: Home (1) → Category (2) → Product (3) → Cart (2) → Checkout (2) | ✓ Pass |
| 1B — Try as Agent playground | Clicking ▶ Run checkFitment on a product page executes the tool, updates page state, no human button click | ✓ Pass |
| 1C — requestUserInteraction() | Running checkout from the Inspector triggers the "Allow once / Deny" confirmation modal before placing the order | ✓ Pass |
| 1D — Source tab proof | Source tab shows the actual navigator.modelContext.provideContext() code per view — different code per view, demonstrating real per-view scoping | ✓ Pass |
| Status badge accuracy | "WebMCP active · N tools on [view]" updates correctly on every view transition | ✓ Pass |
The navigator.modelContext API is currently flag-gated even on Chrome 147 stable — the deck's "146+ stable" claim is aspirational; live API access requires chrome://flags/#enable-experimental-web-platform-features or a similar flag.
To enable: open chrome://flags → search "model context" or enable Experimental Web Platform Features → relaunch → reload the demo.
- The webMCP object wraps navigator.modelContext when present and falls back to a local registry otherwise — the Inspector behaves identically in both cases.
- The showView(name, params) function shows/hides view containers, calls the matching render function, and triggers updateWebMCPContext() to swap the tool surface.
- The Inspector Panel sits on top of the webMCP wrapper. It updates on every view change: shows live tools, runs sample invocations, logs results, and displays source code.
- When requestUserInteraction() isn't available natively, a modal overlay polyfills the experience — Allow once / Deny buttons resolve a Promise the tool's execute() awaits.

This demo runs entirely in the browser as a static page. For a real merchant deployment, the following would be added:
- A checkout execute() that hits a real payment processor server-side and returns the order confirmation

02_Pau-AI/template/webmcp-webstore/