Imagine you have an incredibly smart assistant who can analyze data, write code, and answer complex questions — but who sits in a windowless room with no phone, no internet, and no access to any of your files. Every time you need the assistant to check your email, you have to print it out, walk it to the room, slide it under the door, wait for a response, and then walk the answer back. Now multiply that by every tool you use: your calendar, your database, your project management system, your cloud infrastructure. That is essentially the state of AI integrations before the Model Context Protocol — and it is as inefficient as it sounds.
Before MCP, every AI application had to build its own custom integration for every data source and tool it wanted to access. Want Claude to read your Google Drive? Build a custom integration. Want it to query your database? Another custom integration. Want it to access Slack? Yet another one. Every AI company and every tool vendor had to negotiate, build, and maintain a unique connector. The math was brutal: N AI applications times M tools equals N times M custom integrations, each with its own authentication flow, data format, and failure modes.
It was like the early internet before HTTP — everyone had their own way of sending documents between computers, and none of them talked to each other. Then HTTP came along and said: here is one standard way to request and serve documents. The web exploded.
MCP is doing the same thing for AI. Announced by Anthropic in late 2024 and open-sourced from day one, the Model Context Protocol is a universal standard that lets any AI model connect to any tool or data source through a single, well-defined protocol. Build an MCP server once, and every MCP-compatible AI application — Claude Desktop, Claude Code, VS Code Copilot, Cursor, Windsurf, Zed, and more — can use it immediately. No custom integrations. No vendor lock-in. One protocol to rule them all.
This is the definitive guide to MCP. By the end, you will understand the architecture, the three core primitives, how the transport layer works, and you will have built your own MCP servers in both Python and TypeScript. Let us get into it.
What Is MCP?
The Model Context Protocol (MCP) is an open standard protocol for communication between AI applications (called clients or hosts) and external data sources and tools (called servers). Think of it as a universal language that AI models and tools can speak to understand each other, regardless of who built them.
The USB Analogy
The best way to understand MCP is through the USB analogy. Remember the days before USB? Every device — printers, scanners, keyboards, cameras — had its own proprietary cable and connector. Your desk was a spaghetti mess of incompatible cables, and buying a new device meant praying it came with the right port. Then USB arrived and said: one connector, one protocol, every device. USB-C took it further: one cable for charging, data, video, and audio across laptops, phones, tablets, and monitors.
MCP is the USB-C of AI integrations. One standard connector for everything. A GitHub MCP server works with Claude, with Cursor, with VS Code Copilot, and with any future AI application that implements the MCP client specification. Build it once, use it everywhere.
Who Created It and Why
MCP was created by Anthropic and open-sourced under a permissive license. The specification, SDKs, and reference implementations are all publicly available on GitHub. Anthropic did not build MCP to lock developers into Claude — they built it because the N times M integration problem was holding back the entire AI industry.
Here is the math. Suppose there are 10 AI applications and 50 tools. Without a standard protocol, you need 10 times 50 equals 500 custom integrations. Each one needs to be built, tested, documented, and maintained. Now add one more AI application, and you need 50 more integrations. Add one more tool, and you need 10 more. The problem scales terribly.
With MCP, each AI application implements one MCP client, and each tool implements one MCP server. That is 10 plus 50 equals 60 implementations total. Add a new AI application? One more client. Add a new tool? One more server. The problem becomes linear instead of multiplicative.
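The arithmetic above is easy to sanity-check with a toy calculation (an illustration of the scaling argument, not anything from the spec):

```python
# Toy calculation for the integration math above.
n_apps, m_tools = 10, 50

without_mcp = n_apps * m_tools  # one custom integration per (app, tool) pair
with_mcp = n_apps + m_tools     # one client per app, one server per tool

print(without_mcp)  # 500
print(with_mcp)     # 60

# Adding an 11th AI application costs 50 more integrations without MCP,
# but only 1 more MCP client with it.
print((n_apps + 1) * m_tools - without_mcp)  # 50
```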
What MCP Is NOT
To avoid confusion, let us be clear about what MCP is not:
- MCP is not an API. It is a protocol specification, like HTTP or WebSocket. APIs are built on top of protocols.
- MCP is not a framework. It is not LangChain, CrewAI, or AutoGen. Frameworks provide opinionated structures for building applications. MCP provides a communication standard.
- MCP is not a library. While SDKs exist for Python and TypeScript, the protocol itself is language-agnostic. You can implement it in Rust, Go, Java, or any language that can handle JSON-RPC.
- MCP is not Anthropic-only. It is an open standard. Microsoft, Google, and many open-source projects are adopting it.
The closest analogy in software engineering is the Language Server Protocol (LSP), created by Microsoft for VS Code. LSP standardized how code editors communicate with language-specific intelligence servers (autocomplete, go-to-definition, error checking). Before LSP, every editor needed its own plugin for every language. After LSP, a language server works with any editor. MCP does the same thing, but for AI models connecting to tools and data.
Current Adoption
As of early 2026, MCP has been adopted by a rapidly growing list of applications and platforms:
| Application | Type | MCP Support |
|---|---|---|
| Claude Desktop | AI Assistant | Full (host + client) |
| Claude Code | CLI Agent | Full (host + client) |
| VS Code (GitHub Copilot) | IDE | MCP server support |
| Cursor | AI IDE | Full MCP support |
| Windsurf | AI IDE | Full MCP support |
| Zed | Code Editor | MCP integration |
| Sourcegraph Cody | Code AI | MCP server support |
The Architecture of MCP
MCP follows a client-server architecture with three distinct components. Understanding how these pieces fit together is essential before diving into the primitives and transport layers.
Three Core Components
The architecture flows like this:
┌──────────────────────────────────────────────────┐
│                     MCP HOST                      │
│              (e.g., Claude Desktop)               │
│                                                   │
│  ┌──────────┐    ┌──────────┐    ┌──────────┐     │
│  │   MCP    │    │   MCP    │    │   MCP    │     │
│  │ Client 1 │    │ Client 2 │    │ Client 3 │     │
│  └────┬─────┘    └────┬─────┘    └────┬─────┘     │
└───────┼───────────────┼───────────────┼───────────┘
        │               │               │
        ▼               ▼               ▼
   ┌──────────┐    ┌──────────┐    ┌──────────┐
   │   MCP    │    │   MCP    │    │   MCP    │
   │ Server A │    │ Server B │    │ Server C │
   │ (GitHub) │    │   (DB)   │    │ (Slack)  │
   └────┬─────┘    └────┬─────┘    └────┬─────┘
        │               │               │
        ▼               ▼               ▼
   ┌──────────┐    ┌──────────┐    ┌──────────┐
   │  GitHub  │    │ Postgres │    │  Slack   │
   │   API    │    │ Database │    │   API    │
   └──────────┘    └──────────┘    └──────────┘
MCP Hosts are the AI applications that want to access external tools and data. Claude Desktop, Claude Code, Cursor, and any custom AI application you build can be an MCP host. The host is responsible for managing the user interface, running the AI model, and coordinating connections to one or more MCP servers. In the HTTP analogy, the host is like a web browser — it is the application the user interacts with, and it knows how to speak the protocol to get things done.
MCP Clients are protocol-level connectors that live inside hosts. Each client maintains a one-to-one connection with a specific MCP server. If Claude Desktop is connected to three MCP servers (GitHub, a database, and Slack), it has three MCP clients running internally. The client handles all the low-level communication: sending JSON-RPC messages, negotiating capabilities, and managing the connection lifecycle. You typically do not build clients directly — the host application includes them.
MCP Servers are the services that expose tools, resources, and prompts to AI applications. A GitHub MCP server might expose tools like create_issue, search_repos, and list_pull_requests. A database MCP server might expose tools like run_query and list_tables. Each server exposes its capabilities through a standard interface, and any MCP client can discover and use them. In the HTTP analogy, MCP servers are like web servers — they serve content and functionality to any client that speaks the protocol.
MCP servers can run locally on your machine (using stdio transport, where they run as a subprocess) or remotely as web services (using HTTP+SSE transport). This flexibility means you can start with a simple local server for personal use and later deploy it as a shared service for your entire team.
How It Differs from Traditional API Integrations
In a traditional integration, your AI application directly calls an external API. You write HTTP requests, handle authentication, parse responses, and manage errors — all in custom code baked into your application. If the API changes, you update your code. If you want to support a new AI application, you rewrite all of that.
With MCP, there is an abstraction layer. The AI application does not know or care how the MCP server talks to GitHub, Slack, or your database. It only knows how to speak MCP. The server handles all the API-specific logic. This separation of concerns means:
- AI applications can support new tools without code changes — just point them at a new MCP server
- Tool providers can update their APIs without breaking AI integrations — they just update their MCP server
- The AI model can discover what tools are available dynamically at runtime, through the standard capability negotiation
The Three Primitives: Tools, Resources, and Prompts
MCP defines three core primitives — three types of things that servers can expose to clients. Each serves a different purpose and is controlled by a different party. Understanding these three primitives is the key to understanding MCP.
Tools (Model-Controlled)
Tools are functions that the AI model can call to perform actions. They are the most commonly used primitive and the one most people think of first when they hear “MCP.” Tools let the model do things: search files, run database queries, send messages, create GitHub issues, deploy code, and anything else you can express as a function call.
Each tool is defined with a name, a description (which the model reads to understand when to use the tool), and an input schema (defined in JSON Schema format). When the model decides a tool is needed to answer the user’s question, it generates the appropriate arguments, the MCP client sends the call to the server, the server executes the function, and the result flows back to the model.
Here is a complete example of a tool definition:
{
"name": "query_database",
"description": "Execute a read-only SQL query against the application database. Use this tool when the user asks about data stored in our systems — customer counts, order history, revenue figures, etc. Only SELECT queries are allowed.",
"inputSchema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The SQL SELECT query to execute"
},
"database": {
"type": "string",
"enum": ["production", "analytics", "staging"],
"description": "Which database to query"
},
"limit": {
"type": "integer",
"default": 100,
"description": "Maximum number of rows to return"
}
},
"required": ["query", "database"]
}
}
The critical thing to understand is that tools are model-controlled. The AI model decides when to call a tool based on the user’s intent. The user says “how many customers signed up last month?” and the model determines that it needs to call query_database to answer that question. The model generates the SQL, picks the database, and makes the call. This is the same concept as function calling or tool calling in the Claude and OpenAI APIs, but standardized across all MCP-compatible applications.
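On the wire, that tool call is a plain JSON-RPC 2.0 exchange. A request for the query_database tool above might look roughly like this (the message shape follows the MCP spec; the id and values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "query": "SELECT COUNT(*) FROM customers WHERE created_at >= '2026-01-01'",
      "database": "analytics"
    }
  }
}
```

And the server's response carries the result back as content blocks:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      { "type": "text", "text": "[{\"count\": 1284}]" }
    ],
    "isError": false
  }
}
```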
Resources (Application-Controlled)
Resources are data that the application can expose to the AI model. If tools are like POST endpoints in REST (they perform actions), resources are like GET endpoints (they provide data). Resources give the model context — background information, file contents, configuration, documentation — that helps it understand the user’s situation and generate better responses.
Resources are identified by URIs, just like web pages. A file system MCP server might expose resources like file:///home/user/project/README.md. A database server might expose db://users/123 to represent a specific user record. A project management server might expose jira://PROJECT-456 for a specific ticket.
Here is an example of a resource definition:
{
"uri": "docs://api/authentication",
"name": "Authentication API Documentation",
"description": "Complete documentation for the authentication API, including endpoints, request/response formats, and error codes",
"mimeType": "text/markdown"
}
Resources are application-controlled, not model-controlled. The host application decides when to fetch and present resources to the model. For example, when you open a project in Claude Code, the application might automatically fetch the project’s README and configuration files as resources, giving the model context before you even ask a question. Resources can also be dynamic — a server can support subscriptions so the client is notified when a resource changes.
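When the host decides to pull that documentation in, it issues a resources/read request and the server returns the content (shape per the MCP spec; values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "resources/read",
  "params": { "uri": "docs://api/authentication" }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "contents": [
      {
        "uri": "docs://api/authentication",
        "mimeType": "text/markdown",
        "text": "# Authentication API\n..."
      }
    ]
  }
}
```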
Prompts (User-Controlled)
Prompts are pre-built prompt templates that servers can expose. They give users (or applications) quick access to common workflows without having to type out the full instructions every time. A code review MCP server might expose a /review-code prompt that includes a detailed template for analyzing code quality, security, and performance. A documentation server might expose a /summarize prompt optimized for generating concise summaries.
Here is an example of a prompt definition:
{
"name": "review-code",
"description": "Perform a thorough code review with focus on bugs, security, performance, and maintainability",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
},
{
"name": "language",
"description": "Programming language of the code",
"required": false
},
{
"name": "focus",
"description": "Specific area to focus on (security, performance, readability)",
"required": false
}
]
}
Prompts are user-controlled. The user explicitly selects a prompt from the available list, provides any required arguments, and the expanded prompt is sent to the model. This differs from tools (where the model decides) and resources (where the application decides).
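Under the hood, selecting a prompt triggers a prompts/get request, and the server returns ready-to-send messages (shape per the MCP spec; arguments and text abbreviated):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": {
    "name": "review-code",
    "arguments": { "code": "def add(a, b): return a + b", "language": "python" }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "description": "Code review prompt",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Review the following python code for bugs, security, performance..."
        }
      }
    ]
  }
}
```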
Comparison Table
| Aspect | Tools | Resources | Prompts |
|---|---|---|---|
| Controlled by | AI Model | Application | User |
| Direction | Model → Server (action) | Server → Model (data) | Server → User (template) |
| REST analogy | POST endpoints | GET endpoints | Pre-built query templates |
| Example | create_issue, run_query | file contents, DB records | /review-code, /summarize |
| Discovery | tools/list | resources/list | prompts/list |
| Use case | Perform actions | Provide context | Templated workflows |
Transport Layer: How MCP Communicates
The protocol needs a way to move messages between clients and servers. MCP supports two transport mechanisms, each suited to different deployment scenarios.
stdio (Standard I/O) Transport
The stdio transport is the simplest and most common way to run MCP servers. The host application launches the MCP server as a subprocess on the same machine, and they communicate via standard input (stdin) and standard output (stdout). Messages are JSON-RPC 2.0 objects, sent as newline-delimited JSON.
Here is what happens under the hood when you configure a stdio MCP server in Claude Desktop:
- You add the server configuration to `claude_desktop_config.json`
- Claude Desktop launches the server process (e.g., `python weather_server.py`)
- The client sends an `initialize` request over stdin
- The server responds with its capabilities (what tools, resources, and prompts it offers)
- The client sends a `tools/list` request to discover available tools
- When the model wants to call a tool, the client sends a `tools/call` request over stdin
- The server executes the tool and sends the result back over stdout
stdio is ideal for local development, personal tools, and single-user scenarios. It requires no network configuration, no authentication setup, and no infrastructure. You just need the server script on your machine.
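To make the framing concrete, here is a minimal stdlib-only sketch of how a client frames the initialize request as newline-delimited JSON-RPC. In practice the SDK handles all of this; the protocolVersion string and clientInfo values here are illustrative:

```python
import json

def frame(message: dict) -> bytes:
    """Serialize one JSON-RPC message as a single newline-terminated line."""
    return (json.dumps(message) + "\n").encode()

init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}

line = frame(init_request)
# A stdio host would write `line` to the server subprocess's stdin
# (e.g., proc.stdin.write(line)) and read the response line from its stdout.
print(line.decode().strip())
```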
HTTP + Server-Sent Events (SSE) Transport
For remote servers, shared team tools, and production deployments, MCP supports HTTP with Server-Sent Events. The client connects to the server over HTTP, sends requests as HTTP POST messages, and receives responses and notifications via an SSE stream.
This transport enables scenarios that stdio cannot handle:
- Remote access: The server runs on a different machine, in the cloud, or behind a load balancer
- Multi-user: Multiple clients can connect to the same server simultaneously
- Authentication: Standard HTTP authentication (Bearer tokens, OAuth) can be used
- Monitoring: Standard HTTP logging, metrics, and tracing tools work out of the box
- Scalability: The server can be deployed as a containerized service with horizontal scaling
Transport Comparison
| Aspect | stdio | HTTP + SSE |
|---|---|---|
| Setup complexity | Minimal — just run a script | Moderate — needs web server |
| Best for | Local dev, personal tools | Remote, shared, production |
| Authentication | OS-level (file permissions) | HTTP auth (tokens, OAuth) |
| Scalability | Single user, single machine | Multi-user, load balanced |
| Debugging | Read stdout/stderr | HTTP logs, network tools |
| Network required | No | Yes |
Building Your First MCP Server — Complete Tutorial
Theory is great, but nothing beats building something. In this section, we will build two complete, runnable MCP servers: one in Python and one in TypeScript. Both will be fully functional and ready to connect to Claude Desktop or Claude Code.
Python MCP Server: Weather Service
Step 1: Install dependencies
# Create a new project directory
mkdir mcp-weather-server && cd mcp-weather-server
# Initialize with uv (recommended) or pip
uv init
uv add mcp httpx
# Or with pip
pip install mcp httpx
Step 2: Create the server
Create a file called weather_server.py:
"""MCP Weather Server — exposes weather tools, resources, and prompts."""
import json
import httpx
from mcp.server.fastmcp import FastMCP
# Create the MCP server
mcp = FastMCP("weather-service")
# --- TOOLS (Model-Controlled) ---
@mcp.tool()
async def get_weather(city: str, units: str = "celsius") -> str:
"""Get the current weather for a city.
Use this tool when the user asks about weather conditions,
temperature, or forecasts for a specific location.
Args:
city: The city name (e.g., "Tokyo", "New York", "London")
units: Temperature units — "celsius" or "fahrenheit"
"""
# Using the free Open-Meteo API (no API key required)
# First, geocode the city name
async with httpx.AsyncClient() as client:
geo_response = await client.get(
"https://geocoding-api.open-meteo.com/v1/search",
params={"name": city, "count": 1}
)
geo_data = geo_response.json()
if "results" not in geo_data:
return f"Could not find location: {city}"
location = geo_data["results"][0]
lat = location["latitude"]
lon = location["longitude"]
name = location["name"]
country = location.get("country", "")
# Fetch weather data
temp_unit = "fahrenheit" if units == "fahrenheit" else "celsius"
weather_response = await client.get(
"https://api.open-meteo.com/v1/forecast",
params={
"latitude": lat,
"longitude": lon,
"current": "temperature_2m,wind_speed_10m,relative_humidity_2m,weather_code",
"temperature_unit": temp_unit,
}
)
weather = weather_response.json()["current"]
unit_symbol = "°F" if units == "fahrenheit" else "°C"
return (
f"Weather in {name}, {country}:\n"
f"Temperature: {weather['temperature_2m']}{unit_symbol}\n"
f"Humidity: {weather['relative_humidity_2m']}%\n"
f"Wind Speed: {weather['wind_speed_10m']} km/h\n"
f"Conditions: Weather code {weather['weather_code']}"
)
@mcp.tool()
async def get_forecast(city: str, days: int = 3) -> str:
"""Get a multi-day weather forecast for a city.
Args:
city: The city name
days: Number of days to forecast (1-7)
"""
days = min(max(days, 1), 7)
async with httpx.AsyncClient() as client:
geo_response = await client.get(
"https://geocoding-api.open-meteo.com/v1/search",
params={"name": city, "count": 1}
)
geo_data = geo_response.json()
if "results" not in geo_data:
return f"Could not find location: {city}"
location = geo_data["results"][0]
weather_response = await client.get(
"https://api.open-meteo.com/v1/forecast",
params={
"latitude": location["latitude"],
"longitude": location["longitude"],
"daily": "temperature_2m_max,temperature_2m_min,weather_code",
"forecast_days": days,
}
)
daily = weather_response.json()["daily"]
lines = [f"Forecast for {location['name']}:"]
for i in range(days):
lines.append(
f" {daily['time'][i]}: "
f"{daily['temperature_2m_min'][i]}°C — "
f"{daily['temperature_2m_max'][i]}°C "
f"(code: {daily['weather_code'][i]})"
)
return "\n".join(lines)
# --- RESOURCES (Application-Controlled) ---
@mcp.resource("weather://supported-cities")
async def list_supported_cities() -> str:
"""List of major cities with reliable weather data."""
cities = [
"Tokyo", "New York", "London", "Paris", "Sydney",
"Berlin", "Toronto", "Singapore", "Dubai", "Seoul",
"San Francisco", "Mumbai", "São Paulo", "Cairo", "Bangkok"
]
return json.dumps({"cities": cities, "note": "Any city works, these are examples"})
# --- PROMPTS (User-Controlled) ---
@mcp.prompt()
def weather_report(city: str) -> str:
"""Generate a detailed weather report for a city."""
return f"""Please provide a comprehensive weather report for {city}.
Include:
1. Current conditions (temperature, humidity, wind)
2. A 3-day forecast
3. What to wear and any weather advisories
4. Best time of day for outdoor activities
Use the get_weather and get_forecast tools to gather the data,
then present it in a clear, friendly format."""
if __name__ == "__main__":
mcp.run(transport="stdio")
That is a complete, runnable MCP server in about 80 lines of meaningful code. It exposes two tools (get_weather and get_forecast), one resource (weather://supported-cities), and one prompt (weather_report).
The FastMCP class (from the mcp package) is the high-level API that handles all the JSON-RPC boilerplate, capability negotiation, and message routing for you. The decorators @mcp.tool(), @mcp.resource(), and @mcp.prompt() map directly to the three MCP primitives.
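FastMCP derives each tool's JSON Schema from its Python type hints and docstring. The tools/list entry it generates for get_weather would come out roughly like this (reconstructed by hand, not actual generated output):

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a city. ...",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" },
      "units": { "type": "string", "default": "celsius" }
    },
    "required": ["city"]
  }
}
```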
TypeScript MCP Server: Database Query Service
Step 1: Setup
# Create project
mkdir mcp-database-server && cd mcp-database-server
npm init -y
npm install @modelcontextprotocol/sdk better-sqlite3 zod
npm install -D typescript @types/better-sqlite3 @types/node
npx tsc --init
Step 2: Create the server
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import Database from "better-sqlite3";
import { z } from "zod";
// Open (or create) a SQLite database
const db = new Database("./data.db");
// Create a sample table for demonstration
db.exec(`
CREATE TABLE IF NOT EXISTS products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
category TEXT,
price REAL,
stock INTEGER
)
`);
// Insert sample data if empty
const count = db.prepare("SELECT COUNT(*) as c FROM products").get() as any;
if (count.c === 0) {
const insert = db.prepare(
"INSERT INTO products (name, category, price, stock) VALUES (?, ?, ?, ?)"
);
const products = [
["Mechanical Keyboard", "Electronics", 149.99, 50],
["Ergonomic Mouse", "Electronics", 79.99, 120],
["4K Monitor", "Electronics", 599.99, 30],
["Standing Desk", "Furniture", 449.99, 15],
["Desk Lamp", "Furniture", 39.99, 200],
];
for (const p of products) {
insert.run(...p);
}
}
// Create the MCP server
const server = new McpServer({
name: "database-query",
version: "1.0.0",
});
// --- TOOLS ---
server.tool(
"query",
"Execute a read-only SQL query against the database. Only SELECT statements are allowed. Use this when the user asks about products, inventory, or any data in the database.",
{
sql: z.string().describe("The SQL SELECT query to execute"),
},
async ({ sql }) => {
// Security: only allow SELECT queries
const trimmed = sql.trim().toUpperCase();
if (!trimmed.startsWith("SELECT")) {
return {
content: [
{ type: "text", text: "Error: Only SELECT queries are allowed." },
],
};
}
try {
const rows = db.prepare(sql).all();
return {
content: [
{
type: "text",
text: JSON.stringify(rows, null, 2),
},
],
};
} catch (error: any) {
return {
content: [
{ type: "text", text: `Query error: ${error.message}` },
],
};
}
}
);
server.tool(
"list_tables",
"List all tables in the database with their schemas.",
{},
async () => {
const tables = db
.prepare(
"SELECT name, sql FROM sqlite_master WHERE type='table' ORDER BY name"
)
.all();
return {
content: [
{
type: "text",
text: JSON.stringify(tables, null, 2),
},
],
};
}
);
server.tool(
"describe_table",
"Get the column information for a specific table.",
{
table_name: z.string().describe("Name of the table to describe"),
},
async ({ table_name }) => {
try {
const columns = db.prepare(`PRAGMA table_info(${table_name})`).all();
return {
content: [
{
type: "text",
text: JSON.stringify(columns, null, 2),
},
],
};
} catch (error: any) {
return {
content: [
{ type: "text", text: `Error: ${error.message}` },
],
};
}
}
);
// --- Start the server ---
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Database MCP server running on stdio");
}
main().catch(console.error);
This TypeScript server exposes three tools for interacting with a SQLite database: query (execute SELECT statements), list_tables (discover the schema), and describe_table (inspect column details). It includes a security check that prevents non-SELECT queries from executing. Note that describe_table interpolates the table name directly into the PRAGMA statement, which is fine for a local demo but should be validated before deploying anything like this. Compile the server before use (e.g., npx tsc, with outDir set to dist in tsconfig.json so the output lands at dist/index.js).
Step 3: Connect to Claude Desktop
To use your MCP server with Claude Desktop, edit your configuration file. On macOS, it is located at ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows, check %APPDATA%\Claude\claude_desktop_config.json.
{
"mcpServers": {
"weather": {
"command": "python",
"args": ["/absolute/path/to/weather_server.py"]
},
"database": {
"command": "node",
"args": ["/absolute/path/to/dist/index.js"]
}
}
}
After saving the configuration and restarting Claude Desktop, you will see the MCP tools icon in the chat interface. Claude now has access to your weather and database tools. Try asking: “What is the weather in Tokyo?” or “Show me all products in the database.” Claude will discover the appropriate tools, call them, and present the results in natural language.
Step 4: Connect to Claude Code
For Claude Code, add your MCP servers to a .mcp.json file at the root of your project:
{
"mcpServers": {
"weather": {
"command": "python",
"args": ["/absolute/path/to/weather_server.py"]
}
}
}
Or register them at the user level with claude mcp add --scope user so they are available across all projects. Claude Code will automatically discover the tools when it starts up, and you can use them in your conversations just like the built-in tools.
Popular MCP Servers and the Ecosystem
One of the most exciting aspects of MCP is the rapidly growing ecosystem of pre-built servers. You do not need to build everything from scratch — there are already servers for the most popular tools and services.
Official and Reference Servers
Anthropic and the MCP community maintain a collection of reference servers that cover common use cases:
| Server | What It Does | Transport | Source |
|---|---|---|---|
| Filesystem | Read, write, search files on disk | stdio | Official |
| GitHub | Repos, issues, PRs, commits, actions | stdio | Official |
| GitLab | Projects, merge requests, pipelines | stdio | Official |
| Google Drive | Search, read files from Drive | stdio | Official |
| Slack | Channels, messages, users | stdio | Official |
| PostgreSQL | Query databases, inspect schemas | stdio | Official |
| SQLite | Query and manage SQLite databases | stdio | Official |
| Brave Search | Web and local search via Brave | stdio | Official |
| Puppeteer | Browser automation, screenshots | stdio | Official |
| Notion | Pages, databases, search | stdio | Community |
| Linear | Issues, projects, teams | stdio | Community |
| Docker | Container management, images, logs | stdio | Community |
| Kubernetes | Cluster management, pods, services | stdio / HTTP | Community |
| Stripe | Payments, customers, subscriptions | stdio | Community |
| AWS | S3, Lambda, CloudWatch, EC2 | stdio | Community |
Discovering MCP Servers
Several directories and registries have emerged to help you find MCP servers:
- Smithery (smithery.ai) — A curated registry of MCP servers with installation instructions and ratings
- MCP Hub — Community-maintained directory with categories and search
- awesome-mcp-servers on GitHub — A curated list in the awesome-list tradition, organized by category
- npm / PyPI — Many MCP servers are published as packages you can install with `npm install` or `pip install`
MCP in Claude Code — Deep Dive
If you are reading this blog, there is a good chance you are a developer — and Claude Code is where MCP gets really interesting for developers. Claude Code is itself an MCP host, and its built-in capabilities (Read, Write, Edit, Bash, Grep, Glob) are essentially MCP tools under the hood.
Built-In Tools as MCP
When you use Claude Code and it reads a file, edits code, or runs a shell command, it is using the same tool-calling pattern that MCP standardizes. The difference is that these tools are built directly into the Claude Code host rather than running as external MCP servers. But the mental model is identical: the AI model sees a list of available tools with descriptions and schemas, decides which one to call, generates the arguments, and processes the result.
This means Claude Code was designed from the ground up to be extensible via MCP. You can add capabilities to Claude Code just by pointing it at an MCP server.
Adding Custom MCP Servers
There are two levels of MCP configuration in Claude Code:
Project-level (in a .mcp.json file at the project root, which can be checked into version control):
{
"mcpServers": {
"project-db": {
"command": "python",
"args": ["./tools/db_server.py"],
"env": {
"DATABASE_URL": "postgresql://localhost:5432/myapp"
}
}
}
}
Project-level servers are only available when you are working in that specific project. This is ideal for project-specific tools like database access, deployment scripts, or custom linters.
User-level (registered with claude mcp add --scope user and stored in ~/.claude.json):
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."
}
},
"slack": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-slack"],
"env": {
"SLACK_BOT_TOKEN": "xoxb-..."
}
}
}
}
User-level servers are available in every project. This is ideal for universal tools like GitHub, Slack, and Notion that you use across all your work.
Real Workflow Example
Imagine you have Claude Code configured with GitHub, Notion, and Slack MCP servers. Here is a realistic workflow:
- You tell Claude Code: “Check the latest bug reports in our GitHub repo, summarize them in a Notion page, and post a summary to the #engineering Slack channel.”
- Claude Code uses the GitHub MCP server to call `list_issues` with `labels=["bug"]` and `state="open"`
- It reads each issue's details using `get_issue`
- It calls the Notion MCP server's `create_page` tool with a structured summary
- It calls the Slack MCP server's `send_message` tool to post to #engineering
- All of this happens in a single conversation, using standard MCP tools, with no custom code
This is the power of MCP. Each server was built independently, possibly by different teams or open-source contributors. But because they all speak the same protocol, Claude Code can orchestrate them seamlessly.
MCP vs Other Approaches
MCP did not arrive in a vacuum. There are several other approaches to connecting AI models with external tools. Understanding how MCP compares helps you make informed architectural decisions.
MCP vs OpenAI Function Calling
OpenAI’s function calling (and Anthropic’s tool use) lets you define tools in API calls and have the model generate structured arguments. It is a powerful feature — but it is provider-specific and requires custom integration code for each tool.
With function calling, the tool definitions and execution logic live in your application code. If you build a GitHub integration for your OpenAI-powered app, you cannot reuse it in a Claude-powered app without rewriting it. The function definitions may look similar, but the glue code — authentication, error handling, response formatting — is baked into each application.
MCP separates the tool definition and execution into a standalone server. Build a GitHub MCP server once, and it works with any MCP host. The tool definitions travel with the server, not the application.
MCP vs OpenAI Plugins (Deprecated)
OpenAI Plugins, launched in 2023 and later deprecated, were an earlier attempt to solve the same problem. Plugins used OpenAPI specifications to describe available endpoints, and ChatGPT could call them. However, plugins were OpenAI-only, required hosting a public API endpoint with an OpenAPI spec, and had significant security and reliability issues. MCP addresses all of these limitations: it is an open standard, supports local servers (no public endpoints needed), and has a more robust security model.
MCP vs LangChain Tools
LangChain provides a framework for building AI applications, including a tool abstraction. LangChain tools are Python or JavaScript functions decorated with metadata. They are useful within the LangChain ecosystem, but they are framework-specific — you cannot use a LangChain tool outside of LangChain without extracting the logic.
MCP tools run as independent servers that any MCP client can connect to. They are language-agnostic, framework-agnostic, and transport-agnostic. A Python MCP server works with a TypeScript MCP client. A LangChain tool only works within LangChain.
That said, LangChain has started adding MCP integration, so you can use MCP servers as LangChain tools. The two approaches are converging rather than competing.
MCP vs Custom REST APIs
You might wonder: why not just have the AI call REST APIs directly? The answer is that REST APIs were designed for machine-to-machine communication between known systems. They assume you know the endpoint URL, the request format, and the authentication method in advance. There is no standard discovery mechanism — you have to read the docs and write client code.
MCP adds a discovery and negotiation layer. When an MCP client connects to a server, it automatically discovers what tools, resources, and prompts are available, along with their schemas. The AI model can then decide which tools to use based on the descriptions. No custom client code needed.
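The discovery step is plain JSON-RPC 2.0. A sketch of a tools/list exchange, with a weather tool like the one used elsewhere in this article (field values abbreviated):

```json
// Client asks what the server offers
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// Server replies with every tool and its input schema
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "inputSchema": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    ]
  }
}
```

The model reads the descriptions and schemas and decides which tool fits the task — no hand-written client code per API.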
Detailed Comparison Table
| Feature | MCP | Function Calling | LangChain | REST APIs |
|---|---|---|---|---|
| Type | Protocol | API Feature | Framework | Architecture |
| Provider lock-in | None | High | Framework | None |
| Tool discovery | Automatic | Manual | Automatic | Manual |
| Language support | Any | Any | Python / JS | Any |
| Reusability | Build once, use everywhere | Per application | Within framework | Custom clients |
| Resources support | Yes | No | No (separate) | Yes (GET) |
| Prompt templates | Yes | No | Yes | No |
| Local execution | stdio transport | In-process | In-process | Needs server |
Security Considerations
Connecting AI models to tools and data is powerful — and power comes with responsibility. MCP includes several security mechanisms, and understanding them is essential for building production-ready servers.
Tool Authorization
Not every tool should be callable without review. MCP hosts implement authorization policies that control which tools the model can call. In Claude Desktop, for example, you see a confirmation dialog when the model wants to use a tool for the first time. You can approve individual calls, approve all calls to a specific tool, or deny the request.
For production deployments, you should implement server-side authorization as well. Just because a client requests a tool call does not mean the server should execute it. Validate inputs, check permissions, and enforce access controls.
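Server-side checks can be as simple as a guard at the top of each tool. A minimal sketch — the `ALLOWED_TABLES` whitelist and the `caller_role` parameter are hypothetical names for whatever your deployment uses:

```python
# Sketch: server-side authorization, independent of any host-side approval dialog.
ALLOWED_TABLES = {"orders", "customers"}  # hypothetical whitelist


def authorize_query(table: str, caller_role: str) -> None:
    """Raise ValueError unless the caller may read this table."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"Table '{table}' is not exposed by this server.")
    if caller_role != "analyst":
        raise ValueError(f"Role '{caller_role}' may not run queries.")
```

A tool would call `authorize_query` before touching the database, so a malicious or confused client request still cannot reach data the server does not choose to expose.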
Data Access Control
Resources expose data to the AI model, which means sensitive data could potentially reach the model’s context window. Design your MCP servers with the principle of least privilege:
- Only expose the data the AI actually needs
- Implement row-level and column-level filtering
- Redact sensitive fields (passwords, API keys, PII) before returning them
- Use read-only database connections for query tools
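Redaction is easy to bolt on just before a result leaves the server. A minimal sketch, where the sensitive field names are examples:

```python
# Sketch: strip sensitive fields from rows before they reach the model's context.
SENSITIVE_FIELDS = {"password", "api_key", "ssn"}  # example field names


def redact(rows: list[dict]) -> list[dict]:
    """Return copies of the rows with sensitive fields masked."""
    return [
        {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v) for k, v in row.items()}
        for row in rows
    ]
```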
Credential Management
MCP servers often need credentials to access external APIs (GitHub tokens, database passwords, API keys). Best practices:
- Pass credentials via environment variables, not command-line arguments (which may appear in process lists)
- Use secrets managers (AWS Secrets Manager, HashiCorp Vault) for production deployments
- Rotate credentials regularly
- Never log credentials
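Inside the server, read credentials from the environment and fail fast when they are missing. A sketch using the same variable name as the GitHub config shown earlier:

```python
import os


def load_token() -> str:
    """Read the GitHub token from the environment; refuse to start without it."""
    token = os.environ.get("GITHUB_PERSONAL_ACCESS_TOKEN")
    if not token:
        raise RuntimeError(
            "GITHUB_PERSONAL_ACCESS_TOKEN is not set. "
            "Configure it in the 'env' block of your MCP server entry."
        )
    return token
```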
For shared configuration files (for example, a .claude/settings.json committed to a repository), never include credentials directly. Use environment variable references or a separate, gitignored secrets file.
Sandboxing and Audit Logging
For tools that execute code or run shell commands, sandboxing is critical. Consider:
- Running MCP servers in containers with limited permissions
- Using filesystem access controls to restrict which directories are accessible
- Implementing timeout mechanisms for long-running operations
- Logging every tool call with its inputs and outputs for audit purposes
- Implementing rate limiting to prevent abuse
User Consent Model
The MCP specification encourages a user consent model where potentially dangerous operations require explicit approval. Before a tool deletes a file, sends an email, or deploys code, the user should be asked to confirm. Most MCP hosts implement this at the UI level, but server-side safeguards are an important additional layer.
Building Production MCP Servers
Moving from a prototype MCP server to a production-ready one involves several engineering concerns.
Error Handling
MCP tools should never throw unhandled exceptions. Catch errors, return descriptive error messages, and use the isError flag in tool results to signal failures:
import asyncio
import json
import logging
import sqlite3

logger = logging.getLogger(__name__)

@mcp.tool()
async def query_database(sql: str) -> str:
    """Execute a SQL query."""
    try:
        # Validate input
        if not sql.strip().upper().startswith("SELECT"):
            return "Error: Only SELECT queries are allowed for safety."
        # Execute with a timeout (execute_query is the server's own query helper)
        result = await asyncio.wait_for(
            execute_query(sql),
            timeout=30.0
        )
        return json.dumps(result, default=str)
    except asyncio.TimeoutError:
        return "Error: Query timed out after 30 seconds. Try a simpler query."
    except sqlite3.OperationalError as e:
        return f"SQL Error: {e}. Check your query syntax."
    except Exception as e:
        logger.exception("Unexpected error in query_database")
        return f"Internal error: {type(e).__name__}. The issue has been logged."
Logging and Monitoring
For MCP servers, log to stderr (not stdout, which is reserved for the JSON-RPC protocol in stdio transport). Include structured logging with request IDs, tool names, execution times, and error details. For HTTP-based servers, integrate with standard monitoring tools like Prometheus, Grafana, or Datadog.
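A sketch of stderr-only logging for a stdio server, with one structured line per tool call — the field names and the `log_tool_call` helper are illustrative:

```python
import logging
import sys
import time

# Route all logs to stderr: stdout must stay clean for JSON-RPC framing.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("mcp-server")


def log_tool_call(tool: str, arguments: dict, started: float, ok: bool) -> None:
    """Emit one structured line per tool call for later auditing."""
    elapsed_ms = (time.monotonic() - started) * 1000
    logger.info("tool=%s ok=%s elapsed_ms=%.1f args=%r", tool, ok, elapsed_ms, arguments)
```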
Testing
Test your MCP servers at multiple levels:
- Unit tests: Test individual tool functions with known inputs and expected outputs
- Integration tests: Use the MCP SDK’s test client to simulate the full protocol flow (initialize → list tools → call tool → verify result)
- End-to-end tests: Connect a real MCP host (like Claude Code) to your server and verify the complete workflow
# Example: Testing with the MCP SDK's test utilities
import pytest
from mcp.client.session import ClientSession
from mcp.client.stdio import stdio_client, StdioServerParameters

@pytest.mark.asyncio
async def test_weather_tool():
    server_params = StdioServerParameters(
        command="python",
        args=["weather_server.py"]
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            tool_names = [t.name for t in tools.tools]
            assert "get_weather" in tool_names

            # Call the weather tool
            result = await session.call_tool(
                "get_weather",
                arguments={"city": "London"}
            )
            assert "London" in result.content[0].text
            assert "Temperature" in result.content[0].text
Deployment Options
MCP servers can be deployed in several ways depending on your needs:
- Local binary/script: Simplest option. Distribute the server script, users run it locally via stdio. Great for personal tools and open-source distribution.
- Docker container: Package the server with all dependencies. Users pull the image and point their MCP client at the container. Good for consistency across environments.
- Cloud function: Deploy as an AWS Lambda, Google Cloud Function, or Azure Function. Use the HTTP+SSE transport. Scales automatically, pay per invocation.
- Dedicated service: Run as a persistent web service (on Kubernetes, ECS, or a VM). Best for high-traffic, low-latency, and shared team scenarios.
The Future of MCP
MCP is still in its early days, but the trajectory is clear. Here is where things are headed.
Growing Industry Adoption
MCP is no longer just Anthropic’s project. Microsoft has added MCP support to VS Code and GitHub Copilot. Google has shown interest. The open-source community is building hundreds of servers. When major competitors adopt the same standard, it typically means the standard has won. Think of HTTP, JSON, or SQL — no single company owns them, and that is precisely why they dominate.
MCP Marketplaces
Just as app stores transformed mobile and browser extension stores transformed the web, MCP marketplaces are emerging. Smithery.ai is an early example — a registry where you can discover, install, and rate MCP servers. Expect more polished marketplaces with one-click installation, security audits, and verified publishers.
Server-to-Server Communication
The current MCP model is host-to-server: an AI application connects to MCP servers. But what about AI agents that use other agents’ tools? Server-to-server MCP communication would enable composable AI systems where a planning agent delegates tasks to specialized agents, each with their own MCP tools. This is the architecture that will power complex, multi-step AI workflows.
Authentication Standards
OAuth integration for MCP is actively being developed. This will allow MCP servers to use standard OAuth flows for authentication, making it easy to build servers that access user data from third-party services (Google, Microsoft, Salesforce) with proper authorization. No more asking users to generate personal access tokens manually.
Streaming and Performance
Current MCP tools return complete results. Future improvements include streaming results (useful for large dataset queries or real-time data), progress reporting for long-running operations, and partial results that the model can start processing before the tool finishes. The newer Streamable HTTP transport is a step in this direction.
The Interface Layer for AI
If we think about where AI is headed — models that can reason, plan, and act autonomously — they will need a standardized way to interact with the digital world. MCP is positioning itself as that interface layer. Just as operating systems provide a standardized interface between applications and hardware, MCP provides a standardized interface between AI models and tools. The model does not need to know how GitHub’s API works. It just needs to know how to speak MCP.
Getting Started: Your Next Steps
You now understand what MCP is, how it works architecturally, what the three primitives do, how the transport layer operates, and how to build servers in both Python and TypeScript. Here is how to put that knowledge into practice.
Try a Pre-Built MCP Server
The fastest way to experience MCP is to install Claude Desktop and add a pre-built server. Start with the filesystem server — it lets Claude read and search files on your computer:
// claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    }
  }
}
Restart Claude Desktop, then ask: “What files are in my Documents folder?” Claude will use the filesystem MCP server to answer.
Build Your Own Server
Take one of the examples from this article — the Python weather server or the TypeScript database server — and modify it for your own use case. Maybe build a server that queries your company’s internal API, searches your notes, or manages your task list. Start simple: one or two tools, stdio transport, local execution.
Integrate with Your Development Workflow
If you use Claude Code, add MCP servers that enhance your development workflow. The GitHub server lets Claude create issues and PRs. A database server lets Claude query your dev database. A deployment server could let Claude trigger deployments. Each server you add makes Claude Code more capable — without any changes to Claude Code itself.
Contribute to the Ecosystem
The MCP ecosystem is still young, which means there are enormous opportunities to contribute. Build a server for a tool or service that does not have one yet. Improve an existing server with better error handling, more tools, or documentation. Submit a PR to the specification if you find a use case it does not cover well.
Essential Resources
- MCP Specification: spec.modelcontextprotocol.io — The authoritative source for the protocol
- MCP Documentation: modelcontextprotocol.io — Guides, tutorials, and SDK references
- Python SDK: pip install mcp — The official Python SDK with FastMCP
- TypeScript SDK: npm install @modelcontextprotocol/sdk — The official TypeScript SDK
- Reference Servers: github.com/modelcontextprotocol/servers — Official and community servers
- Claude Code Documentation: docs.anthropic.com/en/docs/claude-code — MCP configuration for Claude Code
Conclusion
The Model Context Protocol is one of those rare technologies that solves a problem so fundamental that once you understand it, you cannot imagine going back. Before MCP, connecting AI to tools was an artisanal craft — hand-built, fragile, and duplicated endlessly across every application and every vendor. After MCP, it is an engineering discipline — standardized, composable, and reusable.
The N times M problem is real. Every AI company was building the same GitHub integration, the same Slack integration, the same database connector — each slightly different, each maintained separately, each breaking in its own way. MCP collapses that complexity into N plus M, and the results are already visible in the ecosystem: hundreds of servers, dozens of compatible hosts, and a community that is growing faster than almost any open-source project in the AI space.
But MCP is more than an engineering convenience. It represents a philosophical shift in how we think about AI capabilities. Instead of building monolithic AI applications that try to do everything, MCP enables a modular architecture where capabilities are distributed across specialized servers. Need weather data? There is a server for that. Need GitHub access? There is a server for that. Need to query your proprietary database? Build a server in an afternoon.
The analogy to HTTP is not hyperbole. HTTP did not just make it easier to fetch web pages — it enabled an entire ecosystem of web servers, web applications, CDNs, APIs, and services that no one could have predicted in 1991. MCP has the same potential. We are at the beginning of the AI tooling ecosystem, and MCP is the protocol that will underpin it.
If you are a developer, start building MCP servers. If you are a company with internal tools, expose them via MCP. If you are evaluating AI platforms, prioritize ones that support MCP. The protocol is open, the SDKs are mature, and the ecosystem is ready. The only thing missing is your server.
References
- Anthropic. “Introducing the Model Context Protocol.” Anthropic Blog, November 2024. anthropic.com/news/model-context-protocol
- Model Context Protocol. “MCP Specification.” spec.modelcontextprotocol.io
- Model Context Protocol. “Documentation and Guides.” modelcontextprotocol.io
- GitHub. “Model Context Protocol Servers Repository.” github.com/modelcontextprotocol/servers
- Anthropic. “Claude Code Documentation.” docs.anthropic.com/en/docs/claude-code
- Microsoft. “Language Server Protocol.” microsoft.github.io/language-server-protocol
- JSON-RPC Working Group. “JSON-RPC 2.0 Specification.” jsonrpc.org/specification
Disclaimer: This article is for informational and educational purposes only. References to specific companies, products, or technologies do not constitute endorsements. Technology landscapes evolve rapidly — always verify details against official documentation.