MCP Servers: The Complete Guide to Model Context Protocol in 2026


Master MCP servers for Claude Code with this comprehensive guide. Covers 50+ servers, context window management, lazy loading, plugins, security best practices, and hidden gems.


Jason Macht

Founder @ White Space

January 27, 2026
25 min read

If you've been working with AI assistants, you've probably hit the wall. The AI can answer questions, write code, help you think through problems—but it can't actually do anything in your environment. It can't read your files, query your database, or interact with the tools you use every day.

That's exactly the problem Model Context Protocol solves.

MCP servers are the bridge between AI models and the real world. They let Claude (and other AI assistants) reach out and interact with external systems—your filesystem, your CRM, your analytics platform, whatever you need.

But here's what most guides don't tell you: MCP servers can also destroy your productivity if you set them up wrong. Too many servers, and you'll blow through your context window before you even start working. Wrong configuration, and you'll expose sensitive systems to AI access you didn't intend.

This guide covers everything. The full ecosystem of MCP servers. The critical context window considerations. The lazy loading features that changed everything in late 2025. The hidden gems that power users swear by. And the security practices that'll keep you from making expensive mistakes.

Let's jump in.

What Are MCP Servers?

Model Context Protocol (MCP) is an open standard developed by Anthropic that creates a universal way for AI assistants to connect with external tools and data sources. Think of it as a USB standard for AI—before USB, every device needed its own connector. MCP does the same thing for AI integrations.

An MCP server is a lightweight program that exposes specific capabilities to AI models. It could give Claude the ability to:

  • Read and write files on your computer
  • Query a database
  • Browse the web
  • Interact with APIs like GitHub, Slack, or your CRM
  • Execute code in a sandboxed environment
  • Control a browser through Playwright
  • Search the internet for real-time information
  • Manage persistent memory across sessions

The beautiful thing here is that MCP is model-agnostic. While Anthropic created it for Claude, the protocol is open and any AI system can implement it. You're not locked into one ecosystem.

The Ecosystem Today

As of early 2026, the MCP ecosystem has exploded:

  • 3,000+ servers indexed on MCP.so
  • 2,200+ servers on Smithery with automated installation
  • Official MCP Registry launched by the MCP Steering Group
  • Major adoption by OpenAI (ChatGPT desktop), Google DeepMind, Microsoft Copilot, Replit, and Sourcegraph
  • Linux Foundation governance after Anthropic donated MCP to the Agentic AI Foundation (AAIF)

This isn't a niche protocol anymore. It's becoming the standard for AI-tool integration across the industry.

Why Anthropic Created MCP

Before MCP, every AI integration was a custom job. Want Claude to access your database? Build a custom integration. Want it to read files? Another integration. Each connection required specific code, authentication handling, and maintenance.

This fragmentation was holding back the entire AI ecosystem. Developers were spending more time on plumbing than on actual AI applications.

MCP standardizes all of this. One protocol, one authentication pattern, one way to expose tools to AI. Build an MCP server once, and any MCP-compatible AI can use it.

How MCP Servers Work

The architecture is straightforward once you see it. There are three main components:

MCP Hosts - These are the AI applications that want to use external tools. Claude Desktop, Claude Code, Cursor, and various IDEs are all MCP hosts.

MCP Clients - The client runs inside the host application and manages connections to servers. It handles protocol negotiation, capability discovery, and message routing.

MCP Servers - These are the programs that expose specific capabilities. Each server is a separate process that communicates with clients over a standardized protocol (JSON-RPC 2.0).

Here's how the communication flows:

┌─────────────────────────────────────────────────────────────┐
│                      MCP Host (Claude Code)                  │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                    MCP Client                        │    │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐          │    │
│  │  │ Server A │  │ Server B │  │ Server C │          │    │
│  │  │ (Files)  │  │(Database)│  │  (API)   │          │    │
│  │  └────┬─────┘  └────┬─────┘  └────┬─────┘          │    │
│  └───────┼─────────────┼─────────────┼────────────────┘    │
└──────────┼─────────────┼─────────────┼──────────────────────┘
           │             │             │
           ▼             ▼             ▼
      ┌────────┐    ┌────────┐    ┌────────┐
      │  File  │    │Postgres│    │ GitHub │
      │ System │    │   DB   │    │  API   │
      └────────┘    └────────┘    └────────┘

When Claude needs to do something—say, read a file—it sends a request through the MCP client. The client routes that request to the appropriate server (in this case, the filesystem server). The server executes the action and returns the result back through the same chain.
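On the wire, that request/response pair is plain JSON-RPC 2.0 over the transport. Here's a sketch of what the messages look like (the `read_file` tool name and the file path are illustrative, not a specific server's exact schema). The request from host to server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "notes.md" }
  }
}
```

And the server's response, carrying the result back up the chain:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "...file contents..." }]
  }
}
```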

Communication Transports

MCP supports two transport mechanisms:

stdio - The server runs as a subprocess and communicates through standard input/output. This is the most common approach and what you'll use for local servers.

SSE/HTTP - For remote servers accessible over HTTP. Useful when the MCP server runs on a different machine or as a hosted service. This is becoming more common as MCP matures.

What Servers Expose

MCP servers can expose three types of capabilities:

| Capability | Description | Example |
|---|---|---|
| Tools | Functions the AI can call | read_file, query_database, send_email |
| Resources | Data the AI can access | File contents, database schemas, API documentation |
| Prompts | Pre-built prompt templates | Code review templates, analysis frameworks |

Most servers focus on tools—specific actions the AI can take. But resources and prompts add powerful capabilities for more complex use cases.
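Concretely, a single tool definition as a server advertises it in its tools/list response looks roughly like this (the notify_team tool and its schema are invented for illustration):

```json
{
  "name": "notify_team",
  "description": "Post a status update to the team channel",
  "inputSchema": {
    "type": "object",
    "properties": {
      "message": { "type": "string", "description": "Update text" }
    },
    "required": ["message"]
  }
}
```

Every one of those name, description, and schema strings gets loaded into the model's context, which is where the next section's problem comes from.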

The Context Window Problem (Critical)

Here's where most MCP guides fail you. They show you how to add servers but don't mention the elephant in the room: context window consumption.

Every tool from every connected server gets preloaded into your model's context window. Tool names, descriptions, full JSON schemas, parameters, types, constraints. Multiply that by 50, 100, or 200 tools and you're burning through tokens faster than you can say "hello, Claude."

The Real-World Impact

Let me show you actual numbers from the community:

  • 7 MCP servers active: 67,300 tokens consumed (33.7% of 200k context) before any conversation
  • GitHub MCP server alone: Nearly 25% of Claude Sonnet's context window
  • Full MCP setup reported by one developer: 143K of 200K tokens (72% usage) with MCP tools consuming 82K tokens

That leaves you with almost nothing for your actual work.

One user reported their MCP tools context hitting ~81,986 tokens—exceeding the recommended 25,000 limit—before they even typed their first message.
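To see why the numbers climb so fast, here's a back-of-envelope estimator. The 400-tokens-per-tool average below is an assumption for illustration (real schemas vary a lot); the point is that cost scales linearly with tool count, and tool counts multiply quickly across servers.

```typescript
// Rough context-cost estimator for preloaded MCP tool definitions.
// avgTokensPerTool is an assumed average; real schemas vary widely.
function mcpContextCost(
  toolCount: number,
  avgTokensPerTool: number,
  contextWindow: number = 200_000
): { tokens: number; percentOfWindow: number } {
  const tokens = toolCount * avgTokensPerTool;
  return {
    tokens,
    // percentage of the context window, one decimal place
    percentOfWindow: Math.round((tokens / contextWindow) * 1000) / 10,
  };
}

// e.g. 7 servers exposing ~170 tools at ~400 tokens each
console.log(mcpContextCost(170, 400)); // { tokens: 68000, percentOfWindow: 34 }
```

Those assumed averages land right around the community-reported figure of ~67K tokens for a 7-server setup, which is the plausible range, not a precise measurement.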

Why This Happens

Context bloat usually emerges from reasonable decisions made repeatedly:

  1. Teams add more integrations - You start with filesystem access, then GitHub, then Slack, then your database...
  2. Over-documented schemas - Developers trying to be helpful add verbose descriptions to every parameter
  3. Static tool discovery - Agents see the full universe of available tools even when most are irrelevant
  4. Long sessions accumulate state - Prior tool results and conversational context pile up without pruning

The Effects

As context grows, you'll notice:

  • Slower reasoning - Models spend increasing effort deciding what not to use
  • Higher costs - Larger context windows mean larger prompts, driving up inference costs
  • Reduced accuracy - When multiple tools overlap in purpose, agents may select the wrong tool or hesitate
  • Hallucinated tool calls - Overloaded models are more prone to inventing parameters or misreading schemas
  • Token limit errors - Eventually you hit the ceiling and the session breaks

Tool Search and Lazy Loading (The Fix)

Claude Code 2.1.7 shipped the most important MCP feature since the protocol launched: Tool Search with lazy loading.

Instead of preloading all tool definitions at session start, Claude Code now loads a lightweight search index and fetches tool details on-demand when you actually need them.

How It Works

  1. Claude Code detects when your MCP tool descriptions would use more than 10% of context
  2. When triggered, tools marked as "deferred" are excluded from the initial prompt entirely
  3. A lightweight search index is loaded instead
  4. When you ask for a specific action, Claude queries the index
  5. Only the relevant tool definitions get pulled into context
  6. You use the tool, and context stays clean

The Numbers

According to Anthropic's engineering team benchmarks:

| Metric | Before Tool Search | After Tool Search | Improvement |
|---|---|---|---|
| Token usage (50+ tools) | ~77K tokens | ~8.7K tokens | 89% reduction |
| Token usage (134K worst case) | ~134K tokens | ~5K tokens | 96% reduction |
| Opus 4 accuracy on MCP evals | 49% | 74% | +25 points |
| Opus 4.5 accuracy | 79.5% | 88.1% | +8.6 points |

That's not a marginal improvement. That's a fundamental change in how many MCP servers you can practically use.

How to Use It

Tool Search is now enabled by default for all Claude Code users. You don't need to opt in.

To check your context usage, run /context in Claude Code to see the breakdown.

For MCP server developers, you can mark tools as deferred in your server implementation. Tools with defer_loading: true won't be loaded until Claude searches for them.
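As a sketch, a deferred tool definition might look like the following. The search_archive tool and its schema are invented, and the exact placement of the flag is worth verifying against the current MCP docs before relying on it:

```json
{
  "name": "search_archive",
  "description": "Search the long-tail document archive (rarely needed)",
  "defer_loading": true,
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Search terms" }
    },
    "required": ["query"]
  }
}
```

Mark your rarely-used tools deferred and leave your one or two hot-path tools eagerly loaded.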

The ToolSearch Tool

When you have deferred tools, Claude Code exposes a special ToolSearch tool. You can use it in two ways:

Keyword search - When you're unsure which tool to use:

ToolSearch query: "slack message"

Returns up to 5 matching tools ranked by relevance.

Direct selection - When you know the exact tool name:

ToolSearch query: "select:mcp__slack__read_channel"

Returns just that tool.

Both modes load the tools into context, making them available for Claude to call.

Claude Code Plugins vs MCP Servers

This is a common point of confusion. Let me clarify the relationship.

MCP Servers

Individual connections to external tools/services. Each server exposes one or more tools via the MCP protocol. You configure them in .mcp.json and they connect Claude to specific capabilities.

Plugins

A packaging format that can contain:

  • MCP Servers
  • Slash commands
  • Skills (auto-triggered prompts)
  • Hooks (event handlers)
  • Sub-agents (specialized AI agents)

Plugins bundle all these components into a single distributable unit. Install a plugin and you get everything—tools, commands, and workflows—without manual configuration.

When to Use Each

Use MCP servers directly when:

  • You need a specific tool integration
  • You want fine-grained control over configuration
  • You're connecting to a custom or proprietary system

Use plugins when:

  • You want a complete workflow solution
  • Team consistency matters (everyone gets the same setup)
  • You don't want to manually configure MCP servers

Plugins can include MCP servers, so they're not mutually exclusive. Many developers use both—plugins for common workflows, direct MCP servers for specific integrations.

Plugin Installation

# Install a plugin
claude /plugins install @username/plugin-name

# List installed plugins
claude /plugins list

# Plugin MCP servers work identically to user-configured servers

The Complete MCP Server Directory

Now let's get into the meat of it. Here's a comprehensive breakdown of MCP servers by category, with honest assessments of when to use each.

Development & Code

GitHub MCP Server

Package: @modelcontextprotocol/server-github

The essential server for any developer. Full GitHub integration—create issues, open PRs, review code, manage repositories, search across organizations.

Tools exposed: ~24 tools including create_issue, create_pull_request, push_files, search_code, list_commits

Pros:

  • Comprehensive coverage of GitHub API
  • Well-maintained by the MCP team
  • Good defaults for most workflows

Cons:

  • High context cost (~25% of Sonnet's window in some configurations)
  • Requires Personal Access Token with broad permissions
  • Some operations feel redundant with gh CLI

Best for: Teams doing PR-heavy workflows, automated code reviews, repo management

Configuration:

{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
    }
  }
}

Context7 MCP

Package: @upstash/context7-mcp

This one's a game-changer for anyone tired of Claude generating code for deprecated APIs. Context7 fetches real-time, version-specific documentation from source repositories and injects it directly into your prompt.

Tools exposed: resolve-library-id, query-docs

Pros:

  • Eliminates outdated code suggestions
  • Works with all major frameworks (React, Vue, Next.js, etc.)
  • Free tier available
  • Can use HTTP transport (no local process needed)

Cons:

  • Adds latency for doc fetches
  • Requires API key for higher rate limits
  • Limited to indexed libraries

Best for: Full-stack developers working with rapidly evolving frameworks

Configuration (HTTP):

claude mcp add --transport http context7 https://mcp.context7.com/mcp

Filesystem Server

Package: @modelcontextprotocol/server-filesystem

The fundamental server. Gives Claude the ability to read, write, search, and navigate files on your system.

Important note: Claude Code already has built-in file access for the current project directory. This server is useful when you need access to files outside your project.

Pros:

  • Low context overhead
  • Essential for cross-project work
  • Security via path scoping

Cons:

  • Redundant within Claude Code projects
  • Easy to accidentally expose too much

Best for: Accessing files outside the working directory, batch operations across projects

Configuration:

{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
  }
}

Desktop Commander

Package: @wonderwhy-er/desktop-commander

A powerhouse for local development. Combines terminal command execution, file operations, and process management. Think of it as a more capable version of the filesystem server.

Tools exposed: Terminal execution, file read/write/search, process management, configuration management

Pros:

  • Comprehensive local control
  • Background process management
  • Cross-platform support
  • Can run in Docker for isolation
  • Works with Claude Desktop Pro subscription (no API costs)

Cons:

  • Higher security surface
  • Overkill for simple file operations

Best for: Power users wanting full system control through natural language

Configuration:

npx @wonderwhy-er/desktop-commander@latest setup

Database & Backend

Supabase MCP

Package: @supabase/mcp-server-supabase

The official Supabase server with 20+ tools for database design, migrations, SQL queries, branching, and TypeScript type generation.

Tools exposed: search_docs, generate_typescript_types, table design, migrations, SQL execution, project management

Pros:

  • Official support from Supabase
  • Combines Postgres, auth, and storage
  • Read-only mode for safety
  • OAuth authentication (no PAT needed anymore)

Cons:

  • Only for development/testing (not production workloads)
  • Supabase-specific (won't work with vanilla Postgres)

Best for: Full-stack developers building on Supabase

Configuration:

{
  "supabase": {
    "command": "npx",
    "args": ["-y", "@supabase/mcp-server-supabase@latest", "--read-only", "--project-ref=<project-ref>"]
  }
}

PostgreSQL/SQLite Servers

Packages: @modelcontextprotocol/server-postgres, server-sqlite

Direct database access for querying, schema inspection, and analysis.

Pros:

  • Direct SQL access
  • Good for data exploration and reporting
  • Schema documentation generation

Cons:

  • High risk if misconfigured
  • No query sanitization by default
  • Easy to expose production data accidentally

Best for: Data analysis, schema documentation, query debugging

Critical: Always use read-only connections unless you specifically need mutations. Never point at production databases without extreme caution.
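One way to enforce read-only access with the Postgres server is to connect as a dedicated role that only has SELECT grants. The role, host, and database names below are placeholders:

```json
{
  "postgres": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-postgres",
      "postgresql://readonly_user@localhost:5432/app_dev"
    ]
  }
}
```

Create the role on the database side with something like `CREATE ROLE readonly_user LOGIN; GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;` so that even a misbehaving query can't mutate anything.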

Browser Automation

Playwright MCP Server

Package: @playwright/mcp (official from Microsoft)

Browser automation using Playwright's accessibility tree rather than screenshot-based approaches. This is the recommended choice for browser automation.

Tools exposed: browser_navigate, browser_click, browser_fill_form, browser_snapshot, browser_take_screenshot

Pros:

  • Multi-browser support (Chromium, Firefox, WebKit)
  • Accessibility tree-based (more reliable than screenshots)
  • Built-in parallel testing support
  • Active development from Microsoft

Cons:

  • Requires Chromium installation
  • Can be resource-intensive
  • Learning curve for complex interactions

Best for: E2E testing, web scraping, form automation, debugging web apps

Configuration:

{
  "playwright": {
    "command": "npx",
    "args": ["@playwright/mcp@latest", "--headless"]
  }
}

Puppeteer MCP Server

Package: @modelcontextprotocol/server-puppeteer

The original browser automation server. Note: Some users have reported this server being deprecated in favor of Playwright.

Pros:

  • Simpler API
  • Lower overhead for basic tasks

Cons:

  • Chromium-only (limited cross-browser)
  • Less active development
  • May be deprecated

Best for: Simple browser tasks if you already have Puppeteer workflows

Search & Research

Brave Search MCP

Package: @modelcontextprotocol/server-brave-search or @brave/brave-search-mcp-server

Web search through Brave's API. Fast, privacy-focused, and well-documented.

Tools exposed: brave_web_search, brave_local_search

Pros:

  • Fast (sub-second queries)
  • Privacy-focused
  • Good API documentation
  • Free tier available

Cons:

  • Requires API key
  • Results may differ from Google
  • Limited to Brave's index

Best for: Real-time web information, privacy-conscious users

Configuration:

{
  "brave-search": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-brave-search"],
    "env": {
      "BRAVE_API_KEY": "${BRAVE_API_KEY}"
    }
  }
}

Exa MCP Server

Package: exa-mcp-server

Neural search with semantic understanding. Unlike keyword search, Exa understands meaning and context.

Tools exposed: web_search_exa, company_research, crawling, linkedin_search

Pros:

  • AI-powered semantic search
  • Company research tools built-in
  • LinkedIn search capabilities
  • Content extraction included

Cons:

  • Paid API
  • Different results than traditional search
  • Learning curve for effective queries

Best for: Research-heavy workflows, competitive analysis, lead research

Configuration:

claude mcp add exa -e EXA_API_KEY=YOUR_API_KEY -- npx -y exa-mcp-server

Firecrawl MCP

Package: firecrawl-mcp or @mendableai/mcp-server-firecrawl

Turns websites into structured, LLM-ready data. Handles JavaScript rendering, batch processing, and content extraction.

Tools exposed: firecrawl_scrape, firecrawl_crawl, firecrawl_search, firecrawl_extract, firecrawl_map

Pros:

  • JavaScript rendering included
  • Batch processing support
  • Structured output for LLMs
  • Handles complex sites well

Cons:

  • Paid API
  • Rate limits on free tier
  • Can be slow for large crawls

Best for: Web scraping, content extraction, competitor research

Productivity & Collaboration

Notion MCP

Package: @notionhq/notion-mcp-server

Official Notion server for workspace access. Read and write pages, manage databases, search content.

Pros:

  • Official Notion support
  • Full workspace access
  • Database and page management

Cons:

  • Requires internal integration setup
  • Acts with your full Notion permissions
  • Learning curve for complex databases

Best for: Documentation workflows, knowledge management, project tracking

Configuration:

{
  "notionApi": {
    "command": "npx",
    "args": ["-y", "@notionhq/notion-mcp-server"],
    "env": {
      "OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer ntn_****\", \"Notion-Version\": \"2022-06-28\"}"
    }
  }
}

Slack MCP

Package: @modelcontextprotocol/server-slack

Connect Claude to your Slack workspace. Read messages, post updates, search history.

Tools exposed: list_channels, post_message, search_messages, get_thread

Pros:

  • Direct Slack access
  • Thread support
  • Message search

Cons:

  • Requires Slack app setup
  • Bot token permissions can be broad
  • Message posting disabled by default (safety)

Best for: Team communication automation, notification systems

Memory & Knowledge

Knowledge Graph Memory Server

Package: @modelcontextprotocol/server-memory

Persistent memory using a local knowledge graph. Claude can remember information about you across sessions.

Tools exposed: Entity creation, relationship mapping, observation tracking

Pros:

  • Persistent context across sessions
  • Structured knowledge storage
  • No external API needed
  • MIT licensed

Cons:

  • Basic implementation
  • Manual entity management
  • No automatic memory formation

Best for: Personal assistants, project continuity, relationship tracking

Configuration:

{
  "memory": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-memory"],
    "env": {
      "MEMORY_FILE_PATH": "/path/to/memory.jsonl"
    }
  }
}

Enhanced Alternatives

  • mcp-memory-service - Adds semantic search, 5ms retrieval, and D3.js visualization
  • Claude Code Memory Server - Neo4j-based with relationship mapping across sessions

Reasoning & Meta

Sequential Thinking MCP

Package: @modelcontextprotocol/server-sequential-thinking

A structured framework for step-by-step problem-solving. Unlike Claude's internal reasoning, this makes the thinking process visible and controllable.

Tools exposed: sequential_thinking with thought tracking, revision, and branching

Pros:

  • Transparent reasoning process
  • Ability to revise and branch thoughts
  • User intervention possible
  • Good for complex problems

Cons:

  • Adds overhead for simple tasks
  • Requires understanding of when to use it
  • Can be verbose

Best for: Complex architectural decisions, debugging, planning tasks where you need to see the reasoning

Configuration:

{
  "sequential-thinking": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
  }
}

Multi-Model Orchestration

Multi-MCP

Package: multi_mcp (github.com/religa/multi_mcp)

Orchestrates multiple AI models (OpenAI GPT, Anthropic Claude, Google Gemini) for code review, security analysis, and consensus building.

Tools exposed: Code review, chat, compare, debate modes

Pros:

  • Multi-model consensus for better accuracy
  • OWASP Top 10 security checks
  • CLI model support (runs as subprocesses)

Cons:

  • Complex setup
  • Requires API keys for multiple providers
  • Higher cost for multi-model queries

Best for: Security reviews, architectural decisions, second opinions on code

PAL MCP Server

Package: pal-mcp-server

Connects Claude Code to Gemini, OpenAI, OpenRouter, Azure, Grok, Ollama, and custom models.

Features: Conversation threading, model debates, second opinions

Best for: Users who want to leverage multiple LLMs through a single interface

Hidden Gems and Hot Takes

Here's the section you won't find in official documentation. These are the servers and patterns that power users swear by but don't get much attention.

1. Google Search Console MCP

One of my favorites. Connects to your GSC data and lets you query trends, create visualizations, and analyze search performance directly from Claude.

Why it's underrated: Most people use the GSC web interface. Having it in Claude means you can ask natural language questions about your search data and get instant analysis.

2. codegraphcontext

Indexes your local code into a graph database, providing context to AI assistants with graphical code visualizations.

Why it's underrated: Traditional file search is linear. Graph-based context understands relationships between functions, classes, and modules.

3. blind-auditor

A zero-cost MCP server that forces the AI to self-correct its generated output using prompt injection, independent self-auditing, and context isolation.

Why it's underrated: Free quality control layer on top of any AI workflow.

4. skill-cortex-server

Enables all IDEs/CLIs to access Claude Code's Skills capabilities.

Why it's underrated: Brings Claude Code's skill system to other environments.

5. roundtable

A meta-MCP server that unifies multiple AI coding assistants (Codex, Claude Code, Cursor, Gemini) through intelligent auto-discovery.

Why it's underrated: Zero-configuration access to the entire AI coding ecosystem through a standardized interface.

Hot Takes

Hot take #1: Most people are using too many MCP servers. If you haven't used a server in the last month, disable it. The context overhead isn't worth it.

Hot take #2: The official GitHub MCP server is overkill for most workflows. The gh CLI via Bash is often faster and more predictable.

Hot take #3: Context7 should be mandatory for anyone doing library-heavy development. The time saved from not debugging deprecated APIs pays for itself immediately.

Hot take #4: Playwright > Puppeteer in every scenario now. Don't bother with Puppeteer for new projects.

Hot take #5: The memory servers are overhyped for most use cases. CLAUDE.md files and project context work better for 90% of "memory" needs.

Security Best Practices

MCP servers can be powerful, but they also introduce real security risks. Here's what you need to know.

The Risks

  1. No built-in authentication - The MCP SDK doesn't include authentication. You must implement your own.
  2. Tool poisoning - Malicious servers can expose harmful tools
  3. Prompt injection - Attackers can exploit MCP proxy servers
  4. Supply chain attacks - MCP packages currently lack digital signatures
  5. Arbitrary code execution - Local servers with inadequate restrictions can be exploited

OWASP Recommendations

The OWASP MCP Security Cheatsheet recommends:

  • Sandbox MCP servers when possible
  • Enforce authentication and authorization for all MCP interactions
  • Implement logging for all data and interactions
  • Use vetted registries with namespace isolation and signature checks

Practical Security Measures

1. Principle of Least Privilege

Only configure servers you actually need. More servers = more attack surface.

// BAD: Overly permissive
{
  "filesystem": {
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/"]
  }
}

// GOOD: Scoped to project
{
  "filesystem": {
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects/myproject"]
  }
}

2. Use Environment Variables for Secrets

Never put API keys directly in .mcp.json:

{
  "env": {
    "API_KEY": "${MY_SERVICE_API_KEY}"  // Good: variable reference
    // "API_KEY": "sk-abc123..."        // Bad: hardcoded secret
  }
}

3. Read-Only by Default

For database servers, always start with read-only access:

{
  "supabase": {
    "args": ["--read-only", "--project-ref=xxx"]
  }
}

4. Docker Isolation

For high-risk servers, run in Docker containers:

{
  "desktop-commander": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/desktop-commander"]
  }
}

5. Verify Package Hashes

Before running a new MCP server, verify the package hash:

npm view @modelcontextprotocol/server-github dist.integrity

6. Monitor and Log

Keep logs of all MCP interactions. This helps with auditing and anomaly detection.

MCP Registries and Where to Find Servers

The MCP ecosystem has multiple registries. Here's how they compare:

| Registry | Servers | Focus | Best For |
|---|---|---|---|
| Official MCP Registry | Reference | Spec-compliant | Verified servers |
| MCP.so | 3,000+ | Community index | Discovery |
| Smithery | 2,200+ | Hosting & deployment | One-click install |
| mcpservers.org | Directory | Discovery | Browsing |
| ClaudeMCP.com | Curated | Claude-specific | Claude users |
| awesome-mcp-servers | Collection | GitHub | Developers |

The Fragmentation Problem

Be aware: these registries require separate publishing. The spec-first main registry at modelcontextprotocol.io is intended to be the single source of truth, but in practice many servers only exist on one or two registries.

Recommendation: Start with Smithery for ease of installation, then check MCP.so for broader discovery, and verify against the official registry when security matters.

Setting Up Your First MCP Server

Let's get practical. Here's a complete setup guide for Claude Code.

Prerequisites

  • Claude Code installed (npm install -g @anthropic-ai/claude-code)
  • Node.js 18+
  • Basic terminal familiarity

The .mcp.json Configuration File

MCP servers are configured in .mcp.json in your project root:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/projects"]
    }
  }
}

Recommended Starter Configuration

Here's what I'd recommend for most web developers:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    }
  }
}

Start with 2-3 servers. Add more only when you have a specific need.

Verifying Your Setup

After creating .mcp.json, restart Claude Code:

# Check MCP status
claude mcp list

# Inspect a specific server's configuration and status
claude mcp get github

Building Custom MCP Servers

When existing servers don't cover your needs, build your own.

SDK Overview

Anthropic provides official SDKs:

  • TypeScript/JavaScript: @modelcontextprotocol/sdk
  • Python: mcp

Example: Building a Weather Server

TypeScript:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Placeholder for your own weather-provider call
async function fetchWeatherAPI(city: string) {
  return { city, tempC: 21, conditions: "clear" };
}

const server = new Server(
  { name: "weather-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_weather",
    description: "Get current weather for a city",
    inputSchema: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" }
      },
      required: ["city"]
    }
  }]
}));

// Handle tool invocations
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_weather") {
    const city = request.params.arguments?.city as string;
    const weather = await fetchWeatherAPI(city);
    return {
      content: [{ type: "text", text: JSON.stringify(weather, null, 2) }]
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

When to Build Custom

Build a custom MCP server when:

  • No existing server covers your use case
  • You need to integrate with internal/proprietary systems
  • You want fine-grained control over AI access
  • You're building a product that AI assistants should integrate with

For standard use cases, check the registries first. Someone's probably built it.

FAQ

Q: Are MCP servers secure?

Security depends entirely on configuration. The protocol itself is secure, but misconfigured servers can expose sensitive data or allow unintended actions. Follow the security practices above.

Q: Can I use MCP servers with ChatGPT?

Yes. OpenAI adopted MCP in March 2025. The ChatGPT desktop app supports MCP servers.

Q: How many MCP servers can I run?

No hard limit, but each server is a process consuming memory and CPU. More importantly, watch your context usage. Tool Search helps, but 2-8 servers is the practical range for most setups.

Q: What happens if an MCP server crashes?

Claude Code handles crashes gracefully. You'll get an error when trying to use that server's tools. Restart Claude Code to respawn servers.

Q: Do MCP servers affect performance?

Servers run as separate processes, so minimal direct overhead. The main performance impact is context usage—more tools mean more tokens consumed per interaction.

Q: Is there a cost for MCP servers?

The servers themselves are free and open source. But if a server connects to a paid API (like Exa or Firecrawl), you'll pay for that API usage.

Q: Can I use MCP servers with other IDEs?

Yes. Cursor, VS Code (with extensions), and many other tools support MCP. Configuration varies by client.

Getting Started

MCP servers fundamentally change what's possible with AI assistants. Instead of asking Claude questions and copying outputs, you can have it directly interact with your systems, automate workflows, and perform real work.

Start simple:

  1. Pick 2-3 servers that match your workflow
  2. Configure them in .mcp.json
  3. Run /context to monitor context usage
  4. Add more servers only when you have a specific need

The combination of MCP servers with Claude Code is what we're using to build Clawdbot—a persistent AI assistant that runs 24/7 on a VPS.

And if you're comparing automation approaches more broadly, check out our comparison of Make.com, n8n, and Claude Code. MCP servers are what put Claude Code in a different category entirely.

The ecosystem is evolving fast. New servers launch daily, Tool Search continues to improve, and the official registry is bringing better verification and discovery. Keep an eye on the MCP GitHub for updates.

That's all I got for now. Until next time.

Want to get more out of your business with automation and AI?

Let's talk about how we can streamline your operations and save you time.