
WebMCP: Chrome Just Turned Every Website Into an API for AI Agents

By Rohit Ghumare • February 14, 2026 • 18 min read

Two days ago, Google's Chrome team shipped something that will fundamentally change how AI agents interact with the web.

It's called WebMCP. And within 48 hours of the announcement, it hit 980 stars on GitHub, got 1.2 million impressions on a single tweet, and had developers across five different frameworks building demos.

Here's what actually happened, what it means, and why you should care.

The Problem WebMCP Solves

Right now, when an AI agent needs to interact with a website, it does one of two things:

  1. Screenshots and clicks. The agent takes a screenshot, uses vision to figure out what's on screen, then simulates mouse clicks and keyboard input. This is what tools like Playwright-based agents do. It works, but it's slow, expensive (thousands of tokens per screenshot), and breaks constantly when UI changes.
  2. Backend MCP servers. The website builds a separate server that exposes its functionality through the Model Context Protocol. This is reliable, but requires the website to build and maintain a whole separate API surface just for AI agents.

WebMCP introduces a third option: the website itself tells agents what it can do.

Instead of agents guessing what buttons do by looking at pixels, websites register structured tools directly in the browser. The agent sees a list of available actions, each with a name, description, and typed input schema. It calls the function. The website runs it. Done.

No screenshots. No DOM scraping. No separate server. The existing frontend JavaScript becomes the agent interface.
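
To make the register/discover/call loop concrete, here is a minimal sketch using a stand-in registry object. The mock modelContext, the tool body, and the 'sku-42' ID are all illustrative; only the descriptor shape (name, description, inputSchema, execute) follows the real API.

```javascript
// Stand-in registry for navigator.modelContext, for illustration only;
// the real API lives in the browser behind a flag, and its execute
// handlers may be async. Everything here is a mock.
const modelContext = {
  tools: new Map(),
  registerTool(tool) {
    this.tools.set(tool.name, tool);
  },
};

// 1. The website registers a tool, as the page would do on load.
modelContext.registerTool({
  name: 'add_to_cart',
  description: 'Add a product to the shopping cart',
  inputSchema: {
    type: 'object',
    properties: { productId: { type: 'string', description: 'Product ID' } },
    required: ['productId'],
  },
  execute({ productId }) {
    return { content: [{ type: 'text', text: `added ${productId}` }] };
  },
});

// 2. The agent reads the list of available tools...
const toolNames = [...modelContext.tools.keys()];

// 3. ...and calls one with structured arguments instead of clicking.
const result = modelContext.tools
  .get('add_to_cart')
  .execute({ productId: 'sku-42' });
// result.content[0].text is "added sku-42" -- structured data, no pixels.
```

The agent never parses HTML or guesses coordinates; the contract is the descriptor, not the UI.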

How It Works

WebMCP adds a new API to the browser: navigator.modelContext. Websites register tools through it, and AI agents discover and call those tools through the browser.

There are two ways to register tools.

The JavaScript Way (Imperative API)

This gives you full control. You write a function, describe it with a JSON Schema, and register it:

if ('modelContext' in navigator) {
  navigator.modelContext.registerTool({
    name: 'add_to_cart',
    description: 'Add a product to the shopping cart',
    inputSchema: {
      type: 'object',
      properties: {
        productId: {
          type: 'string',
          description: 'The product ID to add'
        },
        quantity: {
          type: 'number',
          description: 'How many to add'
        }
      },
      required: ['productId', 'quantity']
    },
    async execute({ productId, quantity }) {
      const result = await addToCart(productId, quantity);
      return {
        content: [{
          type: 'text',
          text: JSON.stringify(result)
        }]
      };
    }
  });
}

That's it. An AI agent can now add items to the cart on your site without seeing the UI. It calls add_to_cart with the right parameters, your existing addToCart() function runs, and the agent gets back structured data.

The HTML Way (Declarative API)

For simple form-based actions, you don't even need JavaScript. Add a few attributes to an existing <form>:

<form toolname="search_flights"
      tooldescription="Search for available flights"
      action="/api/search">
  <input name="origin"
         toolparamdescription="Departure airport code (e.g. SFO)" />
  <input name="destination"
         toolparamdescription="Arrival airport code (e.g. JFK)" />
  <input name="date" type="date"
         toolparamdescription="Travel date in YYYY-MM-DD" />
  <button type="submit">Search</button>
</form>

The browser reads these attributes and automatically exposes the form as an agent-callable tool. The agent fills in the fields and submits. From the server's perspective, it looks identical to a human submission.

There's even a CSS pseudo-class :tool-form-active so you can style the form differently while an agent is using it, and a SubmitEvent.agentInvoked boolean so your submit handler can tell whether a human or an agent submitted.
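
As a sketch of how a page might use that flag (the handler and the mock event objects below are hypothetical; only the agentInvoked property comes from the draft as described above):

```javascript
// Branch on whether an agent or a human submitted a form.
// `agentInvoked` is the SubmitEvent flag described in the WebMCP draft;
// the plain-object events here are mocks so this runs outside a browser.
function describeSubmission(event) {
  return event.agentInvoked
    ? 'submitted by an AI agent'
    : 'submitted by a human';
}

// In a real page you would attach this to the form element:
//   form.addEventListener('submit', (e) => track(describeSubmission(e)));
console.log(describeSubmission({ agentInvoked: true }));  // submitted by an AI agent
console.log(describeSubmission({ agentInvoked: false })); // submitted by a human
```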

The Numbers

Early benchmarks from the Chrome team and MCP-B project show real improvements:

  • Tokens per action (simple task): 3,801 screenshot-based vs. 433 with WebMCP (89% fewer tokens)
  • Tokens per action (complex task): roughly 8,000+ vs. ~1,800 (77% fewer tokens)
  • Computational overhead: 67% reduction relative to the screenshot baseline
  • Task accuracy: variable with pixel guessing vs. ~98% with structured calls

The token reduction is the big one. A simple counter task that costs 3,801 tokens with screenshots costs 433 tokens with WebMCP. That's not a marginal improvement. That's the difference between “AI agents browsing the web is economically viable” and “it's not.”

Who Built It

WebMCP was co-developed by engineers at Google and Microsoft working together through the W3C Web Machine Learning Community Group:

  • Brandon Walderman (Microsoft) — Spec editor
  • Khushal Sagar (Google) — Spec editor
  • Dominic Farolino (Google) — Spec editor

The fact that Google and Microsoft are collaborating on this matters. When both major browser vendors co-author a spec, adoption tends to follow. The spec was published as a W3C Community Group Draft on February 10, 2026. It's not a finalized standard yet, but it's in Chrome 146 Canary right now behind a flag.

How to Try It Today

  1. Download Chrome Canary (version 146+)
  2. Go to chrome://flags/#enable-webmcp-testing
  3. Enable the “WebMCP for testing” flag
  4. Relaunch Chrome

Chrome 146 stable is expected around March 10, 2026. For now, the flag is the only way in.

What the Developer Community Built in 48 Hours

The speed of adoption was genuinely surprising. Here's what showed up within two days of the announcement:

MCP-B — The Browser Bridge

Alex Nahas, formerly at Amazon, built MCP-B (“B” for browser) before the W3C spec even existed. It started as an internal solution at Amazon where backend MCP servers couldn't access authenticated web apps because they lacked session cookies.

MCP-B is a Chrome extension that collects WebMCP tools from all your open tabs and bridges them to desktop MCP clients like Claude Desktop or Cursor. It also includes built-in AI agents that help you create WebMCP tools without leaving the browser.

The project now provides the @mcp-b/global polyfill that adds navigator.modelContext to browsers that don't have native support yet, plus a forked Chrome DevTools MCP server that exposes list_webmcp_tools and call_webmcp_tool functions.

Five Framework Examples

The WebMCP-org/examples repo shipped production-ready demos across five frameworks:

  • Vanilla TS: shopping cart, using navigator.modelContext.registerTool() directly
  • React: task manager, using a useWebMCP() hook with Zod validation
  • Rails 7: bookmarks manager, with Stimulus controller integration
  • Angular 19: note-taking app, built on Angular signals and services
  • Phoenix LiveView: counter and item list, with Elixir server-side state sync

Community members added Vue and Nuxt 3 implementations within hours.

DoorDash-Style Food Ordering Demo

Pietro Schirano built a starter template for a DoorDash-like food ordering app where an AI agent adds items to cart, configures options, and navigates checkout — all through WebMCP tool calls, never touching the UI.

Pietro Schirano (@skirano, Feb 14, 2026; 551 likes, 39.1K views): “WebMCP is the future of the web. Agents can now interact with any website without ever seeing the UI. I built a starter template to show how: A DoorDash like app where the agent adds items to cart...”

TODO App with Agent Control

Japanese developer @azukiazusa9 built a clean TODO app demo showing navigator.modelContext.provideContext registering add, delete, and list tools. Their video shows an AI agent creating and managing TODO items through structured tool calls.

azukiazusa (@azukiazusa9, Feb 12, 2026; 145 likes, 13.3K views): “WebMCP completely understood. Register tools with window.navigator.modelContext.provideContext and the AI agent executes JavaScript callbacks to add TODO items.”

Travel Booking Reference App

The Chrome team shipped a live travel booking demo at travel-demo.bandarra.me that demonstrates both APIs — declarative HTML forms for flight search and imperative JavaScript for complex itinerary building. This serves as the official reference implementation.

Chrome DevTools Quickstart

The DevTools quickstart shows a development loop where AI writes WebMCP tools, Vite hot-reloads, the AI tests its own tools, and iterates. Three included examples (get_page_title, get_counter, set_counter) demonstrate the pattern.
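
A rough sketch of what the counter pair might look like as tool descriptors. The tool names come from the quickstart, but the bodies below are assumptions, not the quickstart's actual code:

```javascript
// Illustrative sketch of the quickstart's counter tools. The names
// (get_counter, set_counter) match the quickstart; the implementations
// are guesses for demonstration.
let counter = 0;

const tools = {
  get_counter: {
    description: 'Read the current counter value',
    inputSchema: { type: 'object', properties: {} },
    execute() {
      return { content: [{ type: 'text', text: String(counter) }] };
    },
  },
  set_counter: {
    description: 'Set the counter to a specific value',
    inputSchema: {
      type: 'object',
      properties: { value: { type: 'number', description: 'New value' } },
      required: ['value'],
    },
    execute({ value }) {
      counter = value;
      return { content: [{ type: 'text', text: `counter set to ${value}` }] };
    },
  },
};

// An agent's round trip: set the counter, then read it back.
tools.set_counter.execute({ value: 7 });
const reading = tools.get_counter.execute().content[0].text; // "7"
```

The AI-writes, AI-tests loop works precisely because the tools return structured text the model can check against its own intent.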

What People Are Saying

The response from the developer community was immediate and largely enthusiastic. Here are some of the takes that stood out:

Maximiliano Firtman (@firt, Feb 11, 2026; 2,700 likes, 1.2M views): “Chrome 146 includes an early preview of WebMCP, accessible via a flag, that lets AI agents query and execute services without browsing the web app like a user. Services can be declared through an imperative navigator.modelContext API or declaratively through a form.”

Wes Bos (@wesbos, Feb 12, 2026; 757 likes, 75.5K views): “Taking the new WebMCP spec proposal for a rip. Immediately see two benefits: this is WAY faster than a bot screenshotting and clicking buttons. Adapting existing apps will be much easier...”

Philipp Schmid (@_philschmid, Feb 12, 2026; 385 likes, 55.9K views): “MCP Servers Are Coming to the Web. MCP lets AI agents call tools on backends. WebMCP brings the same idea to the frontend, letting developers expose their website's functionality as structured tools using plain JavaScript (or even HTML), no separate server needed.”

Steren Giannini (@steren, Feb 13, 2026; 196 likes, 12.5K views): “WebMCP sits in a very interesting spot: It's not a spec for HTTP APIs, like MCP servers are. It's a way to help browsers with agentic capabilities use webapps more reliably (instead of asking them to screenshot and click).”

Glenn Gabe (@glenngabe, Feb 13, 2026; 47 likes, 3.8K views): “This is a big deal. Agents can bypass the UI via WebMCP. Chrome Team announces WebMCP is available for early preview: ‘As the agentic web evolves, we want to help websites play an active role in how AI agents interact with them.’”

Miguel Angel Duran (1,700 likes): “The web will change forever with WebMCP! An open standard so AI agents don't ‘click’ but talk directly to your web. No DOM scraping. No fragile automations. Structured tools.”

On LinkedIn, SEO expert Dan Petrovic at DEJAN AI called WebMCP “the biggest shift in technical SEO since structured data” and coined the term “Agentic CRO” for optimizing how AI agents interact with websites — a discipline that didn't exist a week ago.

WebMCP vs. Backend MCP — When to Use Which

WebMCP doesn't replace backend MCP servers. They solve different problems:

  • Where it runs: WebMCP runs in the browser tab; a backend MCP server runs on your infrastructure.
  • User presence: WebMCP assumes a user is present and shares their context; a backend MCP agent acts alone.
  • Authentication: WebMCP inherits the user's session cookies; backend MCP uses OAuth or API keys.
  • Use case: collaborative workflows vs. automated pipelines.
  • Setup effort: add JavaScript to existing pages vs. build and deploy a separate server.
  • Headless operation: WebMCP needs a browser; backend MCP runs headless.

The key distinction is the user. WebMCP is for cooperative workflows where a human is watching and the agent operates within their session. Backend MCP is for autonomous pipelines where the agent operates independently.

A travel site might use WebMCP so an agent can search flights while the user watches and approves. That same site might use a backend MCP server so an agent can check prices overnight and send a notification when a fare drops.

Security Model

WebMCP's security approach is worth examining because it's different from traditional APIs:

  • HTTPS required. WebMCP only works in secure contexts (localhost exempted for development).
  • Callback-based control. The website decides what tools to expose and what they do. An agent can only call functions the site explicitly registers.
  • User confirmation. Tools can request user approval before executing through ModelContextClient.requestUserInteraction().
  • Origin-scoped. Tools are scoped to the page origin. One site can't access another site's tools.
  • Advisory hints. Tools can declare readOnlyHint and destructiveHint flags so agents know which actions are safe and which aren't.
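
As a sketch of how those hints might appear on a tool descriptor. Placing them under an annotations object follows the backend MCP convention; the WebMCP spec may settle on a different shape, and both tools below are hypothetical:

```javascript
// Two hypothetical tool descriptors carrying advisory safety hints.
// The `annotations` field placement follows the MCP convention; the
// WebMCP spec may differ.
const getOrders = {
  name: 'get_orders',
  description: "List the current user's orders",
  inputSchema: { type: 'object', properties: {} },
  annotations: { readOnlyHint: true }, // safe to call speculatively
  execute() {
    return { content: [{ type: 'text', text: '[]' }] };
  },
};

const deleteAccount = {
  name: 'delete_account',
  description: "Permanently delete the user's account",
  inputSchema: { type: 'object', properties: {} },
  annotations: { destructiveHint: true }, // agent should confirm first
  execute() {
    return { content: [{ type: 'text', text: 'deleted' }] };
  },
};

// A cautious agent policy: only destructive tools require confirmation.
const needsConfirmation = (tool) => tool.annotations?.destructiveHint === true;
```

The hints are advisory: a well-behaved agent routes destructive calls through user confirmation, but the site must still enforce its own checks server-side.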

The spec is honest about open questions. Prompt injection is still a risk — if an agent reads untrusted content from one tool and passes it to another, malicious instructions could execute. The Bug0 analysis calls out the “lethal trifecta” of agents that can read private data, parse untrusted content, and take actions. WebMCP narrows the attack surface by limiting agents to explicitly exposed functions rather than arbitrary DOM access, but it doesn't eliminate the risk entirely.

The Broader Ecosystem

WebMCP doesn't exist in isolation. It's part of a growing stack:

  • MCP (Anthropic) — The backend protocol. Agents call tools on servers. This is the original standard that WebMCP extends to the browser.
  • WebMCP (W3C) — The frontend protocol. Websites expose tools in the browser. What we've been discussing.
  • MCP-B (WebMCP-org) — The bridge. Chrome extension that connects browser-side WebMCP tools to desktop MCP clients. Adds inter-site tool cooperation.
  • MCP Apps (Anthropic) — Interactive UI extensions for MCP servers. Where WebMCP turns websites into tools for agents, MCP Apps turn agent tools into UIs for humans.

Together, these create a loop: websites expose tools to agents (WebMCP), agents expose UIs to users (MCP Apps), and everything connects through the same protocol layer (MCP).

What This Means for Different Roles

Frontend Developers

WebMCP is a net new interface surface. Your sites will need to declare what they can do, not just render UI. The good news: it's JavaScript you already know. The registerTool() API is straightforward. The bad news: now you need to think about tool discoverability, schema design, and how agents will interpret your descriptions. “Add to cart” means something different to a language model than to a human scanning a button.
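
A hypothetical before/after of the same tool's description, plus a crude heuristic for catching descriptions an agent can't act on. Both descriptors and the word-count threshold are made up for illustration:

```javascript
// Two descriptions of the same action. The first assumes human context
// (a user looking at a product page); the second spells out what a
// language model needs to know. Both are hypothetical examples.
const terse = {
  name: 'add_to_cart',
  description: 'Add to cart',
};

const explicit = {
  name: 'add_to_cart',
  description:
    "Add a product to the current user's shopping cart. Requires a productId from a prior search result. Does not start checkout.",
};

// A cheap lint: flag descriptions too short to disambiguate intent.
// The 8-word threshold is arbitrary, chosen for illustration.
const tooTerse = (tool) => tool.description.split(/\s+/).length < 8;
```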

SEO and Growth Teams

Dan Petrovic's “Agentic CRO” framing is not an overreaction. When AI agents can call structured tools on websites, the agents that deliver the best results will send users to sites with the best tool implementations. Tool descriptions become the new meta descriptions. Schema design becomes the new structured data. This is a real competitive surface.

DevRel Teams

If your platform has a web interface, WebMCP is a new integration point you'll need to document. Developers will ask “how do I expose my app's features to AI agents?” and the answer should include WebMCP alongside your REST/GraphQL APIs. Writing good tool descriptions — clear, unambiguous, with well-typed schemas — is a documentation challenge that falls squarely in DevRel territory.

Product Teams

WebMCP changes the question from “how does a human use our product?” to “how do a human and an agent use our product together?” The declarative API makes it cheap to experiment. Add toolname and tooldescription to your existing forms and see what agents do with them. No backend changes needed.

Limitations and Open Questions

The spec itself flags several unresolved areas:

  • Tool discovery at scale. If every website exposes 50 tools (the recommended max per page), how do agents efficiently discover relevant tools across the web?
  • Multi-agent conflicts. What happens when two agents try to use the same website's tools simultaneously?
  • Headless scenarios. WebMCP requires a browser. Server-side rendering and headless environments don't have navigator.modelContext.
  • No native support yet. Only Chrome 146 Canary behind a flag. Firefox and Safari haven't announced support. The @mcp-b/global polyfill fills the gap for now.
  • Spec instability. Multiple TODO placeholders in the spec. The API surface will change before standardization.

Getting Started

The fastest path from zero to a working WebMCP integration:

Option 1: Native API (Chrome 146 Canary)

// Enable chrome://flags/#enable-webmcp-testing first

navigator.modelContext.registerTool({
  name: 'get_product',
  description: 'Get product details by ID',
  inputSchema: {
    type: 'object',
    properties: {
      id: { type: 'string', description: 'Product ID' }
    },
    required: ['id']
  },
  async execute({ id }) {
    const product = await fetch(`/api/products/${id}`).then(r => r.json());
    return {
      content: [{ type: 'text', text: JSON.stringify(product) }]
    };
  }
});

Option 2: MCP-B Polyfill (Any Browser)

npm install @mcp-b/global

// In your app's entry point:
import '@mcp-b/global';

// Now navigator.modelContext is available
navigator.modelContext.registerTool({ /* same API */ });

Option 3: React Hook

import { useState } from 'react';
import { useWebMCP } from '@mcp-b/react-webmcp';
import { z } from 'zod';

function TodoApp() {
  const [todos, setTodos] = useState([]);

  useWebMCP('add_todo', {
    description: 'Add a new todo item',
    schema: z.object({
      title: z.string().describe('The todo title'),
      priority: z.enum(['low', 'medium', 'high'])
    }),
    handler: async ({ title, priority }) => {
      const newTodo = { id: Date.now(), title, priority, done: false };
      setTodos(prev => [...prev, newTodo]);
      return { content: [{ type: 'text', text: `Added: ${title}` }] };
    }
  });

  return <div>{/* your UI */}</div>;
}

Option 4: Zero-Build HTML

<form toolname="subscribe"
      tooldescription="Subscribe a user to the newsletter"
      action="/api/subscribe"
      method="POST">
  <input name="email" type="email"
         toolparamdescription="User email address" required />
  <button type="submit">Subscribe</button>
</form>

Timeline

  • Feb 10, 2026: W3C Community Group Draft published
  • Feb 11, 2026: Chrome team announces early preview
  • Feb 12, 2026: Available in Chrome 146 Canary behind a flag
  • ~Mar 10, 2026: Chrome 146 stable release (expected)
  • Mid-2026: Broader browser support (unconfirmed)

Where to Go From Here

WebMCP doesn't replace the web. It gives the web a way to participate in the agentic era on its own terms — by telling agents what it can do, instead of hoping they figure it out.
