
Model Context Protocol (MCP): Connect Claude to Any Tool


Every Claude agent you build needs tools. In the examples throughout this series, those tools were custom Python functions you wrote — a weather lookup, a search wrapper, a database query. That works, but it means writing integration code for every system your agent needs to connect to. Connecting to Slack, GitHub, PostgreSQL, Google Drive, and five other tools means writing a separate tool definition and integration layer for nine different systems.

The Model Context Protocol (MCP) is Anthropic's answer to this integration problem. It is an open standard that defines a universal way for AI models to connect to tools, data sources, and services. Any system that implements the MCP server specification can be used by any MCP-compatible AI client — including Claude — without writing custom integration code every time.


What MCP Actually Is

MCP is a client-server protocol. The architecture has three components:

  • MCP Hosts: Applications that contain the AI model. Claude Desktop, Claude.ai, and your custom application are MCP hosts
  • MCP Clients: The component inside the host that connects to MCP servers and manages the protocol communication
  • MCP Servers: Standalone processes that expose tools, resources, and prompts to AI models. A GitHub MCP server exposes tools like create_issue and list_pull_requests. A PostgreSQL MCP server exposes tools like run_query and list_tables

The critical insight: the MCP server and the host application are decoupled. The GitHub MCP server does not know or care whether Claude Desktop, a custom Python agent, or another AI model is connecting to it. And your Claude agent does not need to know how GitHub's API works — it just calls the tools the MCP server exposes.
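This decoupling works because client and server speak a shared wire format: MCP messages are JSON-RPC 2.0. As an illustrative, abbreviated sketch (the tool shown is a made-up example, not the real GitHub server's schema), a client discovers tools by sending:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

and the server replies with a list of tool definitions, each with a name, description, and JSON Schema for its inputs:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create_issue",
        "description": "Create an issue in a GitHub repository",
        "inputSchema": {
          "type": "object",
          "properties": {
            "title": {"type": "string"}
          },
          "required": ["title"]
        }
      }
    ]
  }
}
```

Any host that can produce the request and consume the response can use any server — which is exactly why no per-integration glue code is needed.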

MCP is Like a Universal Plug for AI Tools

A useful mental model: MCP is like a USB standard for AI tool connections. Before USB, every peripheral needed a different connector. USB gave you one standard port that works with any device. MCP gives AI models one standard connection mechanism that works with any tool server. As the MCP ecosystem grows, Claude can connect to hundreds of pre-built integrations without any custom code.


    The MCP Ecosystem

    Anthropic and the developer community have published MCP servers for a growing list of systems. Notable examples:

    • File system: Read and write local files within a configured directory
    • GitHub: List repos, read files, create issues, manage pull requests
    • PostgreSQL: Query databases, list schemas, execute read-only SQL
    • Slack: Post messages, read channel history, manage conversations
    • Google Drive: Read and search Google Drive files
    • Brave Search: Real-time web search through Brave's search API
    • Puppeteer: Browser automation for web scraping and testing
    • SQLite: Query and analyse SQLite databases

    The full list of Anthropic-maintained MCP servers is available at github.com/modelcontextprotocol/servers.


    Using MCP in Claude Desktop

    The simplest way to use MCP is through Claude Desktop, which includes a built-in MCP client. You configure servers in a JSON configuration file.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/documents"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
      }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your_brave_api_key"
      }
    }
  }
}
```

    This configuration file is placed at:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json

    After saving the configuration and restarting Claude Desktop, Claude can see and use the tools provided by each configured server.


    Using MCP in Your Python Applications

    For programmatic use, the Python mcp SDK lets your application act as an MCP client:

```python
import asyncio

import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

anthropic_client = anthropic.Anthropic()

async def run_claude_with_mcp(user_message: str):
    """Run Claude with an MCP server providing tools."""

    # Connect to the filesystem MCP server
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/workspace"]
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Complete the MCP handshake before making any requests
            await session.initialize()

            # List available tools from the MCP server
            tools_response = await session.list_tools()

            # Convert MCP tool format to Anthropic tool format
            tools = [
                {
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                }
                for tool in tools_response.tools
            ]

            messages = [{"role": "user", "content": user_message}]

            # Agent loop with MCP tool execution
            while True:
                response = anthropic_client.messages.create(
                    model="claude-sonnet-4-6",
                    max_tokens=4096,
                    tools=tools,
                    messages=messages
                )

                if response.stop_reason == "end_turn":
                    return response.content[0].text

                tool_results = []
                for block in response.content:
                    if block.type == "tool_use":
                        # Execute the tool via MCP
                        mcp_result = await session.call_tool(block.name, block.input)
                        tool_results.append({
                            "type": "tool_result",
                            "tool_use_id": block.id,
                            "content": str(mcp_result.content)
                        })

                messages.append({"role": "assistant", "content": response.content})
                messages.append({"role": "user", "content": tool_results})

# Run the agent
result = asyncio.run(run_claude_with_mcp(
    "List the files in the workspace directory and summarise what each one is for."
))
print(result)
```

    MCP Resources and Prompts

    Beyond tools, MCP servers can expose two additional capability types:

    Resources

    Resources are data sources that can be read into Claude's context. A PostgreSQL MCP server might expose each database table as a resource. A file system server exposes individual files as resources. Claude can request specific resources to include in its context window.

```python
# List available resources from an MCP server
resources = await session.list_resources()

# Read a specific resource
resource_content = await session.read_resource("file:///workspace/README.md")
```

    Prompts

    MCP servers can expose pre-built prompt templates — reusable system prompt patterns that guide Claude for specific tasks within that server's domain.

```python
# List available prompts
prompts = await session.list_prompts()

# Get a specific prompt template
prompt = await session.get_prompt("code_review", {"language": "python"})
```

    Use Existing MCP Servers Before Building Custom Ones

    Before writing a custom tool integration, check whether an MCP server already exists for the system you need to connect to. The MCP ecosystem is growing rapidly — GitHub, Slack, databases, search engines, file systems, and many more are already covered by production-ready servers maintained by Anthropic and the community. Using an existing server saves significant development time and gives you a tested, maintained integration.


      Building a Custom MCP Server

      When an existing server does not cover your needs, you can build your own. The MCP Python SDK makes this straightforward:

```python
import asyncio

from mcp import types
from mcp.server import Server
from mcp.server.stdio import stdio_server

server = Server("my-custom-server")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_customer",
            description="Retrieve customer data by customer ID from the internal CRM",
            inputSchema={
                "type": "object",
                "properties": {
                    "customer_id": {
                        "type": "string",
                        "description": "The unique customer ID"
                    }
                },
                "required": ["customer_id"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_customer":
        customer_id = arguments["customer_id"]
        # Query your actual database here
        result = {"id": customer_id, "name": "Example Customer", "status": "active"}
        return [types.TextContent(type="text", text=str(result))]

    raise ValueError(f"Unknown tool: {name}")

# Start the server over stdio
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

asyncio.run(main())
```

      Summary

      MCP transforms Claude agent development by providing a standardised way to connect to any tool, data source, or service. Instead of writing bespoke integration code for every system, you configure MCP servers and Claude can immediately use their capabilities.

      Key points:

      • MCP is an open standard — any AI model and any service can implement it
      • Use Claude Desktop configuration for personal productivity with MCP servers
      • Use the Python MCP SDK for programmatic agent applications
      • Check the existing server ecosystem before building custom servers
      • MCP servers expose tools, resources, and prompts — all three can be used in Claude agents

      Next: one of the most important cost-saving techniques in the Claude API — Prompt Caching with Claude: Cut API Costs by Up to 90%.


      This post is part of the Anthropic AI Tutorial Series. Previous post: The Agentic Loop: How Claude Reasons, Acts, and Self-Corrects.