
MCP: One Protocol to Connect Them All

Model Context Protocol (MCP) is an open protocol for connecting AI applications to external systems. Instead of writing custom integration code for every service — GitHub’s API, Jira’s API, Slack’s API — you connect through MCP servers that speak a common language.

The architecture

MCP uses a client-server model. The AI application (like Claude Code) is the client. Each external system gets an MCP server that wraps its API. Communication between client and server uses JSON-RPC 2.0 — a lightweight, structured request/response format.
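To make the wire format concrete, here is a sketch of a JSON-RPC 2.0 exchange of the kind MCP uses, shown as Python dicts. `tools/list` is a method defined by the MCP specification; the tool entry in the response is illustrative.

```python
import json

# A JSON-RPC 2.0 request: protocol version, an id for pairing
# request and response, a method name, and method parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# The server's reply echoes the same id so the client can match
# it to the pending request.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "create_issue",
                          "description": "Create a GitHub issue"}]},
}

# On the wire, both sides exchange these as serialized JSON.
wire = json.dumps(request)
print(wire)
```

The `id` field is what makes the format "structured": every response carries the id of the request it answers, so a client can have several calls in flight at once.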

When Claude Code needs to create a GitHub issue, it doesn’t call the GitHub API directly. It sends a JSON-RPC request to the GitHub MCP server, which translates it into a GitHub API call, executes it, and returns the result.
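The translation step can be sketched as a pure function: given an incoming JSON-RPC `tools/call` request, produce the GitHub REST endpoint and payload. The tool name `create_issue` and its argument names are hypothetical here, and a real server would also perform the HTTP POST and wrap GitHub's reply in a JSON-RPC response.

```python
import json

GITHUB_API = "https://api.github.com"

def translate_create_issue(rpc_request: dict) -> tuple[str, dict]:
    """Map a JSON-RPC 'tools/call' request onto GitHub's
    create-issue REST endpoint. Translation only; no HTTP here."""
    args = rpc_request["params"]["arguments"]
    url = f"{GITHUB_API}/repos/{args['repo']}/issues"
    payload = {"title": args["title"], "body": args.get("body", "")}
    return url, payload

rpc = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {"repo": "octocat/hello-world",
                      "title": "Bug: crash on start"},
    },
}

url, payload = translate_create_issue(rpc)
print(url)
```

The point of the indirection is that only the server knows GitHub's URL scheme, auth headers, and error formats; the client never sees them.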

It’s an open protocol, not a Claude feature

MCP is not exclusive to Claude or Anthropic. Any AI application can implement the MCP client protocol. Any developer can build an MCP server. The protocol specification is open — it standardizes the interface, not the implementation.

This is the key value proposition: standardization. Without MCP, connecting an AI agent to 5 services means writing 5 custom integrations with 5 different authentication flows, error formats, and data structures. With MCP, you write one client that speaks to any MCP server. The complexity of each service’s API is encapsulated inside its server.
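The "one client, any server" idea can be sketched in a few lines: the client knows only JSON-RPC framing, and the transport (stdio, HTTP, or here a plain callable standing in for a server process) is swappable. This is a minimal illustration, not the MCP SDK's actual API.

```python
import itertools
import json

class MCPClient:
    """Minimal JSON-RPC client: one implementation that can talk
    to any server, because the framing is the same for all of them."""

    def __init__(self, transport):
        self._transport = transport      # callable: raw JSON in, raw JSON out
        self._ids = itertools.count(1)   # monotonically increasing request ids

    def call(self, method: str, params: dict) -> dict:
        req = {"jsonrpc": "2.0", "id": next(self._ids),
               "method": method, "params": params}
        resp = json.loads(self._transport(json.dumps(req)))
        if "error" in resp:
            raise RuntimeError(resp["error"]["message"])
        return resp["result"]

# A fake transport standing in for a real MCP server process.
def fake_server(raw: str) -> str:
    req = json.loads(raw)
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"echoed": req["method"]}})

client = MCPClient(fake_server)
result = client.call("tools/list", {})
print(result)
```

Pointing the same `MCPClient` at a GitHub server, a Jira server, or a Slack server requires no client-side changes; each server's peculiarities stay behind the shared interface.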

No central registry

MCP servers are discovered through direct configuration, not a central registry. You tell your client which servers to connect to — there’s no marketplace or discovery service that automatically finds available servers. Configuration is explicit.
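As an illustration of what explicit configuration looks like, here is the general shape of an `mcpServers` config block used by Claude Code and similar clients: each named server entry specifies the command to launch it and any environment it needs. Treat the server package name and token placeholder as examples, not a prescription.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

Nothing in this file is discovered automatically: if a server is not listed here, the client does not know it exists.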


One-liner: MCP is an open protocol that standardizes how AI applications connect to external systems — one client interface, any number of servers, JSON-RPC underneath.