How to Add WebMCP to Your React App: A Step-by-Step Guide

Mar 1, 2026

Christophe Barre

co-founder of Tandem


A hands-on guide to implementing WebMCP in React and Next.js apps. Both APIs explained with code, from HTML form attributes to navigator.modelContext.registerTool(). Plus: how Tandem’s Claude Code skill can automate the setup.

Updated February 28, 2026

TL;DR: WebMCP adds two APIs to Chrome that let your web app expose structured tools for AI agents: a Declarative API (HTML attributes on forms) and an Imperative API (JavaScript tool registration). Both are available in Chrome 146 Canary behind a flag. This guide walks through implementing each in a React/Next.js app, with patterns for per-component tool registration, tool naming best practices, and debugging with Chrome’s inspector. The spec is early — build prototypes, not production features. For teams that want to skip the manual work, Tandem offers a Claude Code skill that generates WebMCP bindings from your existing React components.

Prerequisites: get your environment ready

Before writing any code, you need three things set up.

Chrome 146 Canary. WebMCP is only available in the Canary channel as of February 2026. Download and install it alongside your regular Chrome — they run independently. Several developers in the community reported needing to try multiple channels (Beta, Dev, Canary) before finding the right build, so start with Canary.

Enable the WebMCP flag. Open chrome://flags in Canary, search for “WebMCP for testing,” and enable it. Relaunch the browser. This activates the navigator.modelContext API that powers the Imperative API, and enables the browser to parse Declarative API attributes on HTML forms.

Install the Model Context Tool Inspector extension. This Chrome extension, available from the Chrome Web Store, gives you a panel to inspect registered tools on any page, execute them manually with custom parameters, or test them via a connected AI (the demo uses Gemini 2.5 Flash). It’s the equivalent of Chrome DevTools for WebMCP — essential for debugging.

With these in place, you can see WebMCP in action on Google’s hosted demos before implementing it yourself.
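Once the flag is on, you can also confirm support programmatically before running any of the code in the next sections. A minimal feature-detection sketch (`navigator.modelContext` is the API surface the flag enables):

```javascript
// Returns true only when the WebMCP Imperative API surface is present.
// In browsers without the flag (or without WebMCP at all),
// navigator.modelContext is simply undefined.
function hasWebMCP(nav = globalThis.navigator) {
  return nav != null && 'modelContext' in nav;
}
```

Call hasWebMCP() before registering tools so the same bundle runs cleanly in browsers without support.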

Approach 1: the Declarative API (HTML forms)

The Declarative API is the fastest path to making your app agent-ready. If you have any HTML forms — search bars, contact forms, filter panels, settings pages — you can make them callable by AI agents with a few HTML attributes.

Basic form annotation

Take a standard contact form:

<form action="/api/contact" method="POST">
  <input type="text" name="name" placeholder="Your name"/>
  <input type="email" name="email" placeholder="Email address"/>
  <select name="subject">
    <option value="support">Support</option>
    <option value="sales">Sales</option>
    <option value="other">Other</option>
  </select>
  <textarea name="message" placeholder="Your message"></textarea>
  <button type="submit">Send</button>
</form>

To make this agent-ready, add three attributes — toolname and tooldescription on the form, and toolparamdescription on each field:

<form
  action="/api/contact"
  method="POST"
  toolname="submitContactForm"
  tooldescription="Submit a contact form with name, email, subject, and message to reach the support or sales team"
>
  <input
    type="text"
    name="name"
    placeholder="Your name"
    toolparamdescription="Full name of the person submitting the form"
/>
  <input
    type="email"
    name="email"
    placeholder="Email address"
    toolparamdescription="Email address for receiving a reply"
/>
  <select
    name="subject"
    toolparamdescription="Category of inquiry: support, sales, or other"
>
    <option value="support">Support</option>
    <option value="sales">Sales</option>
    <option value="other">Other</option>
  </select>
  <textarea
    name="message"
    placeholder="Your message"
    toolparamdescription="Detailed description of the inquiry or request"
></textarea>
  <button type="submit">Send</button>
</form>

That’s it. Chrome automatically reads these attributes and generates a structured tool schema. When an agent visits your page, it discovers a submitContactForm tool with typed parameters and descriptions — no DOM parsing required.

Handling agent submissions

When an AI agent fills and submits this form, the submit event includes an agentInvoked flag. Use this to differentiate agent from human submissions and return proper tool responses:

document.querySelector('form').addEventListener('submit', (event) => {
  event.preventDefault();

  const formData = new FormData(event.target);
  const isAgentSubmission = event.agentInvoked;

  // Validate
  const errors = validateContactForm(formData);

  if (errors.length > 0 && isAgentSubmission) {
    // Return structured error to the agent
    event.respondWith({
      success: false,
      errors: errors
    });
    return;
  }

  if (isAgentSubmission) {
    // Submit and return structured confirmation
    submitForm(formData).then((result) => {
      event.respondWith({
        success: true,
        ticketId: result.id,
        message: "Contact form submitted successfully"
      });
    });
  } else {
    // Normal human submission flow
    submitForm(formData);
  }
});

The event.respondWith() method sends a structured response back to the agent, so it knows whether the action succeeded and can decide what to do next. This closes the feedback loop — the agent doesn’t have to re-scan the page to check if the form was submitted.

Adding agent-aware UI states

WebMCP introduces CSS classes that are applied while an agent is interacting with your form. You can use these to show visual indicators to the human user:

/* Highlight fields the agent is currently filling */
.tool-form-active input:focus {
  border-color: #4A90D9;
  box-shadow: 0 0 0 2px rgba(74, 144, 217, 0.2);
}

/* Show a review banner before agent submits */
.tool-submit-active::before {
  content: "AI is about to submit this form. Please review.";
  display: block;
  background: #FFF8E1;
  padding: 12px;
  border-radius: 4px;
  margin-bottom: 12px;
}

These classes — tool-form-active and tool-submit-active — are part of the WebMCP spec and provide the human-in-the-loop confirmation experience. The user sees what the agent is doing and can intervene before submission.

Approach 2: the Imperative API (JavaScript)

For anything beyond simple forms — multi-step workflows, stateful interactions, dynamic content — you’ll use the Imperative API. This is where WebMCP gets powerful for SaaS products.

Registering a basic tool

The core API is navigator.modelContext.registerTool():

await navigator.modelContext.registerTool({
  name: "searchProducts",
  description: "Search the product catalog by keyword with optional filters for category and price range",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search keywords"
      },
      category: {
        type: "string",
        description: "Product category filter",
        enum: ["electronics", "clothing", "home", "all"]
      },
      maxPrice: {
        type: "number",
        description: "Maximum price in USD"
      }
    },
    required: ["query"]
  },
  outputSchema: {
    type: "object",
    properties: {
      results: {
        type: "array",
        description: "Array of matching products"
      },
      totalCount: {
        type: "number",
        description: "Total number of results"
      }
    }
  },
  execute: async (params) => {
    const results = await searchProducts(params.query, {
      category: params.category,
      maxPrice: params.maxPrice
    });
    return {
      results: results.items,
      totalCount: results.total
    };
  }
});

The schema follows JSON Schema conventions — the same format you’d use for OpenAI or Anthropic tool definitions. If you’ve written tool definitions for any LLM API, this will feel familiar.
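Because the schema is plain JSON Schema, moving a tool definition between WebMCP and an LLM API is mostly a renaming exercise. A sketch of the mapping to Anthropic-style tool definitions (the `input_schema` field name is Anthropic's; the WebMCP shape is the one used above):

```javascript
// Convert a WebMCP-style tool definition into the shape Anthropic's
// Messages API expects in its `tools` array. The schema object carries
// over unchanged because both sides speak JSON Schema.
function toAnthropicTool({ name, description, inputSchema }) {
  return { name, description, input_schema: inputSchema };
}

const webmcpTool = {
  name: "searchProducts",
  description: "Search the product catalog by keyword",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"]
  },
  execute: async () => ({})
};

// Note the execute callback stays behind: server-side APIs only see the schema.
const anthropicTool = toAnthropicTool(webmcpTool);
```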

React pattern: per-component tool registration

The real power of the Imperative API in React is tying tool registration to component lifecycle. Tools appear when the relevant UI renders and disappear when it unmounts:

import { useEffect } from 'react';

function ProductSearch({ categories }) {
  useEffect(() => {
    // Register tool when component mounts
    const registration = navigator.modelContext?.registerTool({
      name: "searchProducts",
      description: "Search products by keyword and category",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search terms" },
          category: {
            type: "string",
            enum: categories.map(c => c.slug),
            description: "Filter by category"
          }
        },
        required: ["query"]
      },
      execute: async (params) => {
        // Use your existing search logic
        const results = await api.searchProducts(params);
        return { results, count: results.length };
      }
    });

    // Unregister when component unmounts
    return () => {
      registration?.then(reg => {
        navigator.modelContext?.unregisterTool(reg);
      });
    };
  }, [categories]);

  return (
    <div>
      {/* Your normal search UI */}
    </div>
  );
}

This pattern creates the contextual tool loading that AI Jason highlighted as “the coolest part” of WebMCP. As users navigate your app, the exposed tools change automatically. A dashboard page might expose getMetrics and exportReport. A settings page might expose updateProfile and changeNotificationPreferences. The agent always sees only what’s relevant.
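If several components follow this pattern, the register/unregister bookkeeping can be factored into one helper. A sketch, assuming registerTool returns a handle synchronously (if it resolves asynchronously, as in the example above, chain the unregister through .then):

```javascript
// Register a batch of tool definitions and return a single cleanup
// function -- convenient as the body of a useEffect. Sketch only:
// it assumes the registerTool/unregisterTool surface used in this guide.
function registerToolBatch(modelContext, toolDefs) {
  if (!modelContext) return () => {}; // no WebMCP: cleanup is a no-op
  const handles = toolDefs.map((def) => modelContext.registerTool(def));
  return () => handles.forEach((h) => modelContext.unregisterTool(h));
}

// Illustration with a stand-in modelContext (test scaffolding, not spec):
const unregistered = [];
const mockContext = {
  registerTool: (def) => def.name,
  unregisterTool: (handle) => unregistered.push(handle)
};
const cleanup = registerToolBatch(mockContext, [{ name: "a" }, { name: "b" }]);
cleanup(); // unregisters both tools
```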

Next.js considerations

In Next.js, WebMCP tools must be registered client-side since navigator.modelContext is a browser API. Use the "use client" directive and guard against server-side execution:

"use client";

import { useEffect } from 'react';

export function WebMCPProvider({ children }) {
  useEffect(() => {
    // Guard: only run in browser with WebMCP support
    if (typeof navigator === 'undefined' || !navigator.modelContext) {
      return;
    }

    // Register app-wide tools here
    const tools = registerGlobalTools();

    return () => {
      tools.forEach(t => navigator.modelContext?.unregisterTool(t));
    };
  }, []);

  return <>{children}</>;
}

For apps using Incremental Static Regeneration (ISR), the registration happens in client-side components that hydrate after the initial render. There’s no conflict with SSR or static generation — the WebMCP tools simply don’t exist until the JavaScript executes in the browser.

Best practices for tool design

Writing good tool definitions is its own skill. A poorly described tool is like a poorly written API doc — the agent won’t use it correctly.

Naming conventions

Name tools as clear verb-noun pairs that describe what they do. Think of them as function names that an LLM needs to understand from the name alone.

Good: searchProducts, addToCart, updateUserSettings, getOrderStatus

Bad: doSearch, handler1, processInput, action

Description writing for LLMs

Tool descriptions are consumed by language models, not humans. Write them the way you’d write a system prompt for a tool-use call — specific, unambiguous, and explicit about edge cases.

Good: “Search the product catalog by keyword. Returns up to 20 results sorted by relevance. Supports filtering by category and price range. Returns empty array if no matches found.”

Bad: “Searches products.”

If a description is too vague, the model may hallucinate parameters or misuse the tool. The Early Preview Program is designed partly so developers can test how different LLMs interpret their tool descriptions.

Categorize your tools

Alex Nahas, who built the original MCP-B that became WebMCP, recommends categorizing tools into three types:

Read-only tools fetch information — product details, account status, available dates. These should always be available so agents can answer user questions without navigating through menus. Lowest risk.

Navigation tools tell the agent what your website does and where things live. Think of these as a map: “Here are the main sections. Here’s what each one contains.” These help the agent orient before taking action.

Write tools take action — filling forms, submitting requests, completing transactions. These are where human-in-the-loop confirmation matters most. Always implement confirmation flows for write tools that affect user data or trigger irreversible actions.
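That confirmation step can be expressed as a wrapper around a write tool’s execute. A sketch — the confirm callback stands in for something like the spec’s requestUserInteraction(), injected here so the pattern is testable outside the browser:

```javascript
// Wraps a write tool's execute() so a human approves before it runs.
// `confirm` is an injected async callback (a hypothetical stand-in for
// a browser confirmation such as requestUserInteraction()).
function withConfirmation(execute, confirm) {
  return async (params) => {
    const approved = await confirm(
      `About to run a write action with ${JSON.stringify(params)}. Proceed?`
    );
    if (!approved) {
      return { success: false, error: "User declined the action" };
    }
    return execute(params);
  };
}

// Example: guard a destructive tool behind confirmation.
const deleteItem = async ({ id }) => ({ success: true, deletedId: id });
const guardedDelete = withConfirmation(deleteItem, async () => true);
```

Read-only and navigation tools can skip the wrapper; applying it only to write tools keeps low-risk queries friction-free.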

Debugging with the Tool Inspector

The Model Context Tool Inspector extension is your primary debugging tool. Open it on any page and you’ll see all registered WebMCP tools with their schemas.

Key things to check during development: Are all expected tools showing up when you navigate to a page? Are tool descriptions clear enough that you can understand what they do without seeing the UI? Do the input schemas accurately reflect what your functions expect? Are tools unregistering properly when components unmount?

The inspector also lets you execute tools manually with custom parameters — essential for testing edge cases before connecting a real AI agent.

Current limitations and caveats

Be clear-eyed about what you’re building on:

The spec is an early draft. Method names, parameter shapes, and the entire navigator.modelContext interface could change between Chrome versions. Bug0’s analysis is direct: “Experiment with it. Build prototypes. Don’t ship it to production.”

Chrome-only for now. No other browser has shipped an implementation. Microsoft’s co-authorship of the spec strongly suggests Edge support is coming, but there’s no public timeline for Firefox or Safari.

Security model has open questions. Prompt injection through tool descriptions, data exfiltration through tool chaining, and multi-agent conflicts (two agents on the same page) are acknowledged concerns without fully resolved solutions. The requestUserInteraction() confirmation method helps, but it’s not a complete answer.

No tool discovery without visiting. Currently, tools only exist when a page is open in a tab. An agent can’t know what tools your app offers without navigating there first. Future work explores manifest-based discovery — something like .well-known/webmcp — but that’s not implemented yet.

No headless support yet. WebMCP currently requires a visible browser tab. Headless browser automation (Playwright, Puppeteer) can’t discover or call WebMCP tools. This limits server-side automation use cases for now.

Skip the manual work: Tandem’s Claude Code skill

If you’re looking at the code samples above and thinking “this is straightforward but repetitive” — that’s exactly the feedback we heard from our engineering customers.

Tandem has built a Claude Code skill (currently in beta) that helps engineers expose their existing React components as WebMCP tools. Instead of manually writing tool registrations, schemas, and lifecycle management for each component, you install the skill and let Claude Code generate the bindings from your existing codebase.

The skill analyzes your React components, identifies user-facing functions and form interactions, and generates proper WebMCP tool registrations with appropriate schemas, descriptions, and safety patterns. It handles the registerTool/unregisterTool lifecycle, adds human-in-the-loop confirmation for write operations, and follows the naming and categorization best practices outlined above.

“WebMCP is the missing link between the AI agents people are building and the SaaS products they need to interact with. We built the Claude Code skill because our customers kept asking: ‘How do I make my app work with all these new browser agents?’ The answer should be minutes of setup, not months of engineering.” — Manuel Darcemont, CTO & Co-founder, Tandem

The skill is in beta and available by request. If you’re interested, reach out to christophe@usetandem.ai for early access. It works with any React or Next.js codebase and follows the same patterns described in this guide — it just does the repetitive work for you.

For the strategic context on why WebMCP matters for your product beyond the technical implementation, read our guide: Make Your SaaS Agent-Ready with WebMCP.

FAQ

How long does it take to add WebMCP to an existing React app?

For the Declarative API (annotating existing forms), a few hours. For the Imperative API with per-component registration, expect a day or two for a typical SaaS app with 5-10 key workflows. Tandem’s Claude Code skill can reduce the Imperative API work to under an hour for most codebases.

Do I need to change my backend for WebMCP?

No. WebMCP runs entirely client-side. Your existing APIs, authentication, and backend logic stay the same. The only addition is tool registration JavaScript in your frontend code. If you want to distinguish agent from human requests, you can check the agentInvoked flag on form submissions.

Will WebMCP tools work with my existing state management (Redux, Zustand, etc.)?

Yes. WebMCP tool execute functions are regular JavaScript — they can read from and dispatch to any state management system. The tools are just a structured interface layer on top of your existing application logic.
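To make that concrete, here is a sketch of an execute handler talking to a store. The tiny hand-rolled store below stands in for Redux or Zustand (its API is hypothetical, not a real library’s):

```javascript
// Minimal store standing in for Redux/Zustand-style state management.
function createStore(initialState) {
  let state = initialState;
  return {
    getState: () => state,
    setState: (patch) => { state = { ...state, ...patch }; }
  };
}

const cartStore = createStore({ items: [] });

// A WebMCP-style tool definition whose execute reads and writes the store,
// exactly as your existing UI event handlers would.
const addToCartTool = {
  name: "addToCart",
  description: "Add a product to the shopping cart by id",
  inputSchema: {
    type: "object",
    properties: { productId: { type: "string" } },
    required: ["productId"]
  },
  execute: async ({ productId }) => {
    cartStore.setState({ items: [...cartStore.getState().items, productId] });
    return { success: true, itemCount: cartStore.getState().items.length };
  }
};
```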

Can I restrict which tools are available to which agents?

The current spec doesn’t include per-agent tool visibility. All registered tools are visible to any agent operating in the browser. If you need to restrict access, implement checks inside your execute functions based on user permissions or session state.

How do I test WebMCP without connecting a real AI?

The Model Context Tool Inspector extension lets you manually execute tools with custom parameters. You can also write automated tests using the navigator.modelContext API directly — register tools, call them programmatically, and assert on the results.
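One way to do that without a browser is to unit-test the tool definition objects themselves, with a stub standing in for navigator.modelContext (the stub below is test scaffolding, not part of the spec):

```javascript
// Minimal test harness: capture what gets registered, then drive the
// tool's execute() directly.
function stubModelContext() {
  const tools = new Map();
  return {
    registerTool(def) { tools.set(def.name, def); return def.name; },
    unregisterTool(handle) { tools.delete(handle); },
    get(name) { return tools.get(name); }
  };
}

const ctx = stubModelContext();
ctx.registerTool({
  name: "echo",
  description: "Echo back the input (demo tool)",
  inputSchema: { type: "object", properties: { text: { type: "string" } } },
  execute: async ({ text }) => ({ text })
});
```

In a real suite you would import your components’ tool definitions and assert on their schemas and execute results the same way.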

What happens if a user doesn’t have Chrome Canary?

Nothing breaks. WebMCP is purely additive. In browsers without WebMCP support, the navigator.modelContext API simply doesn’t exist, your registration code is guarded by a feature check, and your app works normally. No degradation for human users.

Glossary

navigator.modelContext: The browser API surface that WebMCP introduces. Used to register tools (registerTool()), unregister tools (unregisterTool()), and manage agent interactions.

registerTool(): The JavaScript method for imperatively declaring a WebMCP tool. Takes a name, description, input/output schemas, and an execute callback function.

Declarative API: WebMCP’s HTML-attribute-based approach. Add toolname, tooldescription, and toolparamdescription to existing form elements for automatic tool generation.

Imperative API: WebMCP’s JavaScript-based approach. Use navigator.modelContext.registerTool() for complex, dynamic, or stateful interactions that go beyond simple form submission.

agentInvoked: A boolean flag on the SubmitEvent that indicates whether a form submission was triggered by an AI agent (true) or a human user (false). Allows backend differentiation.

requestUserInteraction(): A WebMCP method that pauses agent execution and prompts the user for explicit confirmation before a sensitive action is executed. Enables human-in-the-loop patterns.

Model Context Tool Inspector: A Chrome extension for debugging WebMCP implementations. Shows registered tools, their schemas, and lets you execute them manually or via a connected AI model.

Claude Code Skill: A SKILL.md file that extends Claude Code’s capabilities with domain-specific instructions. Tandem’s WebMCP skill (in beta) generates tool registrations from existing React components.
