
Making XMTP docs available where developers work

From llms.txt to a docs MCP server

Connect to the xmtp-docs MCP server in Claude Code in VS Code

A developer building with XMTP recently asked, “Do you have an MCP server for your docs?”

We didn’t.

MCP (Model Context Protocol) servers are emerging as a standard way for AI agents to access external sources of truth—like technical documentation—through well-defined tools instead of relying on their training data alone.

We do provide llms.txt files. These structured text files contain all our technical docs and are intended to help models answer questions about XMTP. To learn more about the llms.txt file standard, see https://llmstxt.org/.

Providing llms.txt files via public URLs is a relatively simple way to give developers an easy path to feed XMTP docs to their AI agents.
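
For example, an agent (or a script preparing its context) can pull the bundle down with a single request. Here's a minimal sketch; the URL is a placeholder, since the actual llms.txt URLs are published in the XMTP docs.

```ts
// Minimal sketch: fetch an llms.txt bundle so it can be fed to an agent.
// The URL below is a placeholder, not the actual published location.
const LLMS_TXT_URL = "https://docs.xmtp.org/llms.txt";

async function loadDocsBundle(): Promise<string> {
  const res = await fetch(LLMS_TXT_URL);
  if (!res.ok) throw new Error(`Failed to fetch llms.txt: ${res.status}`);
  return res.text();
}

// The returned text can be attached to a prompt or saved locally for an agent to read.
const bundle = await loadDocsBundle();
console.log(`Loaded ${bundle.length} characters of docs`);
```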

However, it was only the first step in the evolution of our docs. The next step was an MCP server.

What is a docs MCP server?

An MCP server makes docs available to AI agents inside the developer tools people already use, such as VS Code and Cursor.

When a developer asks the agent a question, the agent can use the MCP server to search, retrieve, and cite relevant docs to shape its answer.
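
Under the hood, that interaction is an ordinary MCP tool call. Here's a minimal sketch using the TypeScript MCP SDK's client; the server command and the search_docs tool name are illustrative placeholders, not the real server's interface.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to a locally running docs MCP server over stdio.
// "node server.js" and "search_docs" are placeholders for illustration.
const transport = new StdioClientTransport({ command: "node", args: ["server.js"] });
const client = new Client({ name: "docs-demo", version: "0.1.0" });
await client.connect(transport);

// Ask the server to search the docs, just as an editor's agent would.
const result = await client.callTool({
  name: "search_docs",
  arguments: { query: "create a group chat" },
});
console.log(result.content);
```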

💡
Docs MCP vs protocol MCP
It’s worth clarifying the difference between an MCP server for XMTP docs and an MCP server for the XMTP protocol itself. Here, we’re talking about a docs MCP server that exposes docs for agents to read—not actions they can perform via the protocol.

In our case, the MCP server uses the same LLM-friendly docs bundle (llms.txt) that we provide as a standalone solution, but exposes it through an interface that the agent can actively query.

In practice, llms.txt and a docs MCP server complement each other: One provides a static source of truth, while the other gives agents a structured, repeatable way to query it as developers work.

This leads to more predictable retrieval and fewer hallucinated answers—not because the model became smarter, but because it can perform a controlled search of the actual docs instead of guessing.

What do you mean by “guessing”?

Without an llms.txt file or a docs MCP server, when you ask an agent a question like, “How do I create a group chat with XMTP?” it might answer from outdated training data, invent method names that don’t exist, or blend XMTP concepts with patterns it has seen in other messaging protocols.

This isn’t malicious. Without a source of truth, large language models have no choice but to rely on runtime guesswork.

When you provide an llms.txt file or an MCP server for your docs, you’re giving developers—and their agents—a concrete source of truth to work from.

Why a docs MCP server is so valuable

Developers don’t need docs any less; they’re just spending less time navigating docs themselves.

Increasingly, they’re asking AI agents questions inside editors, terminals, and chat interfaces. Today, the question isn’t so much “Where is this in the docs?” as it is “What do I need to do next?”

Documentation hasn’t stopped mattering—but how it’s accessed is changing.

For anyone who writes docs—whether you’re a technical writer, developer, or product manager—this shift presents a real opportunity.

Instead of trying to anticipate every way someone might browse or read your docs, an MCP server allows an AI agent to handle navigation based on the developer’s questions. The agent can use your docs to provide relevant, situation-aware answers directly inside the tools where developers already work.

More efficient token usage

When you give an AI assistant an llms.txt file, the model must read and process a large block of text each time it answers a question. That’s why we provide llms.txt files scoped by developer use case, such as building an agent versus a chat app. This can work well, but it still requires the model to consider a large amount of information at once and decide what to focus on in the moment.

A docs MCP server works differently. Instead of loading the entire llms.txt into context, the agent can search the docs and retrieve only the specific sections it needs, when it needs them.

Tokens are still used in both cases. The difference is how they’re spent. With MCP, tokens go toward targeted queries and retrieval, instead of repeatedly loading and reinterpreting the full llms.txt file.
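
As a rough illustration, here's a sketch of that difference, assuming the bundle uses markdown-style headings and using the common four-characters-per-token heuristic. The numbers it prints are illustrative, not measurements of our actual docs.

```ts
// Rough illustration of context size: static llms.txt vs targeted retrieval.
// Assumes the bundle uses markdown-style "## " headings as section delimiters.
import { readFileSync } from "node:fs";

const bundle = readFileSync("llms.txt", "utf8");

// Static approach: the whole bundle enters the context on every question.
console.log(`Full bundle: ~${Math.round(bundle.length / 4)} tokens`); // ~4 chars/token heuristic

// MCP approach: only the sections matching the question are retrieved.
const query = "group chat";
const matching = bundle
  .split(/^## /m)
  .filter((section) => section.toLowerCase().includes(query));
const retrieved = matching.join("\n## ");
console.log(`Retrieved sections: ~${Math.round(retrieved.length / 4)} tokens`);
```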

What I built (and what I didn’t)

With Claude Code in VS Code at my side, I built a small, local MCP server that exposes our docs through two tools: one that searches the docs for relevant sections, and one that retrieves the full text of a matching section.
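
Here's a minimal sketch of what such a server can look like with the TypeScript MCP SDK. The tool names search_docs and get_section, the file path, and the heading-based splitting are illustrative assumptions, not the actual implementation in the repo.

```ts
import { readFileSync } from "node:fs";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Load the same llms.txt bundle used for the static approach and split it
// into sections, assuming markdown-style "## " headings as delimiters.
const bundle = readFileSync("llms.txt", "utf8");
const sections = bundle
  .split(/^## /m)
  .slice(1) // skip any preamble before the first heading
  .map((chunk) => {
    const newline = chunk.indexOf("\n");
    return { heading: chunk.slice(0, newline).trim(), body: chunk.slice(newline + 1) };
  });

const server = new McpServer({ name: "xmtp-docs", version: "0.1.0" });

// Tool 1: search the docs and return the headings of matching sections.
server.tool("search_docs", { query: z.string() }, async ({ query }) => {
  const q = query.toLowerCase();
  const hits = sections.filter((s) => (s.heading + " " + s.body).toLowerCase().includes(q));
  return {
    content: [{ type: "text", text: hits.map((h) => h.heading).join("\n") }],
  };
});

// Tool 2: retrieve the full text of a single section by its heading,
// so only the relevant part of the docs enters the agent's context.
server.tool("get_section", { heading: z.string() }, async ({ heading }) => {
  const section = sections.find((s) => s.heading === heading);
  return {
    content: [{ type: "text", text: section ? section.body : "Section not found" }],
  };
});

// Serve over stdio so local clients like Claude Code or Cursor can connect.
await server.connect(new StdioServerTransport());
```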

There was plenty I deliberately didn’t build. I didn’t need to build everything. I just needed to build enough to change how XMTP docs show up in the world.

Try it

The XMTP docs MCP server is open source and available here:

👉 https://github.com/xmtp/xmtp-docs-mcp

You can browse the code, see how it works, and try it out with Claude Code or Cursor.
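
For example, many MCP-aware clients, including Claude Code and Cursor, accept a JSON config along these lines. The command and path here are placeholders; check the repo README for the actual setup steps.

```json
{
  "mcpServers": {
    "xmtp-docs": {
      "command": "node",
      "args": ["path/to/xmtp-docs-mcp/server.js"]
    }
  }
}
```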

You can fork the repo to create your own docs MCP server.

If you have any questions, please open an issue.

If you’ve been curious about MCP, or about how docs fit into AI-assisted development tools, I hope this makes it feel more approachable.

AI at work

Working at a startup means we don’t have developers waiting around for stuff to do, like helping a tech writer build a docs MCP server—though I’m sure every single one would have loved to help me!

At XMTP Labs, we’re encouraged to use AI to help us expand our agency. We even have an AI stipend to spend on the tools we need. That made it possible for me to use Claude Code to build a docs MCP server that developers were asking for.

If you’re interested in learning more about how to use AI to increase your agency as a doc writer, please reach out!

Want to work with a team that believes AI doesn’t replace us but makes us better?

Take a look at the XMTP Labs Careers page.
