MCP for Skeptics: Why the Model Context Protocol is Worth It (even if it doesn't seem like it)

Confession of a converted skeptic

When Anthropic announced the Model Context Protocol (MCP) in November 2024, my first reaction was: “Ah, another protocol promising to solve all integration problems”. As a DevOps Manager who has watched dozens of “universal standards” be born and die, I had reasons to be skeptical.

But after several months of watching MCP gain massive adoption (OpenAI integrated it in March 2025, Google DeepMind in April), I decided to investigate beyond the hype. And I have to admit something: I was wrong.

What is MCP? (The non-marketing version)

MCP is, essentially, “USB for AI integrations”. Before USB, each peripheral needed its own port and drivers. The same happens with AI integrations: each tool needs its own custom connector.

The problem is mathematical: M AI applications × N data sources = M×N integrations. MCP converts it to M+N, where:

  • M MCP clients (one per AI application)
  • N MCP servers (one per data source)

// Before: integration nightmare
const integrations = [
  'claude-slack', 'claude-github', 'claude-postgres',
  'chatgpt-slack', 'chatgpt-github', 'chatgpt-postgres',
  'custom-agent-slack', 'custom-agent-github', 'custom-agent-postgres'
];

// With MCP: a standard protocol
const mcpServers = ['slack-mcp', 'github-mcp', 'postgres-mcp'];
const mcpClients = ['claude-mcp', 'chatgpt-mcp', 'custom-agent-mcp'];
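The same arithmetic, as a quick sketch (the names are just the ones from the lists above):

```python
# Hypothetical app and data-source names, only to make the arithmetic concrete.
apps = ["claude", "chatgpt", "custom-agent"]   # M AI applications
sources = ["slack", "github", "postgres"]      # N data sources

point_to_point = len(apps) * len(sources)  # M x N bespoke connectors
with_mcp = len(apps) + len(sources)        # M clients + N servers

print(f"Without MCP: {point_to_point} integrations")  # 9
print(f"With MCP: {with_mcp} components")             # 6
```

At 3×3 the difference looks small; at 10 applications and 50 data sources it is 500 connectors versus 60 components.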

My initial reasons for skepticism

1. “It’s just another standard”

We’ve seen decades of integration standards rise and fall. CORBA and SOAP faded into oblivion, while REST and GraphQL each promised to be the definitive one.

2. “Do we really need this?”

With REST APIs and GraphQL working well, why complicate things with another protocol? JSON-RPC seems like overkill.

3. “Adoption will be slow”

New standards take years to be adopted. Who will build MCP servers without clients? Who will integrate clients without servers?

4. “Security and complexity”

Giving AI access to internal systems, databases, filesystems… what could go wrong?

Why I changed my mind

1. Anthropic did their homework

They didn’t launch MCP just as a specification. It arrived with:

  • Working client: Claude Desktop
  • Reference servers: GitHub, Slack, PostgreSQL, Puppeteer, Google Drive
  • Complete SDKs: Python, TypeScript, Java, Kotlin, C#
  • Tooling: MCP Inspector for debugging

Dogfooding from day one. Anthropic itself uses MCP in production.

2. Explosive adoption

In less than a year:

  • OpenAI integrated MCP in ChatGPT Desktop and Agents SDK
  • Google DeepMind announced support in Gemini
  • Atlassian launched Remote MCP Server for Jira/Confluence
  • Zed, Replit, Sourcegraph integrated MCP
  • Thousands of community servers on GitHub

3. Real use cases

It’s not just theoretical hype. I see teams using MCP for:

  • Internal data access from Claude Desktop
  • IDE integration with enterprise context
  • Workflow automation with access to multiple systems
  • Data analysis with direct database access

4. Well-thought-out architecture

# Simplified MCP architecture
Host (Claude Desktop):
  ├── MCP Client
  └── Connections to MCP Servers
    ├── GitHub MCP Server
    ├── Slack MCP Server
    └── PostgreSQL MCP Server

Each server is a separate process. Security isolation, easy debugging, horizontal scalability.
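The “separate process” part is concrete: with the stdio transport, the client spawns the server and exchanges one JSON-RPC message per line over its stdin/stdout. A minimal sketch (the inline echo server is hypothetical, standing in for a real MCP server binary):

```python
import json
import subprocess
import sys

# Hypothetical stdio-transport server: reads one JSON-RPC request per line
# from stdin, writes one response per line to stdout. A real MCP server
# would be launched the same way by the host application.
server_code = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}\n"
    "    print(json.dumps(resp), flush=True)\n"
)
proc = subprocess.Popen(
    [sys.executable, "-c", server_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# The "client" sends a request and reads the matching response.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.terminate()

print(response["result"])  # {'ok': True}
```

Crash the server and the host keeps running; that process boundary is where the isolation and debuggability come from.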

What convinced me technically

Well-defined primitives

MCP isn’t a monolithic protocol. It defines clear primitives:

  • Tools: Functions that AI can execute
  • Resources: Data that AI can read
  • Prompts: Predefined templates to guide AI
  • Roots: Filesystem entry points the client exposes to servers
  • Sampling: Lets a server request LLM completions back through the client
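Tools are the primitive you’ll touch first. A server advertises each one with a name, a description, and a JSON Schema for its arguments; here is a sketch of what such a declaration looks like (the tool itself is made up):

```python
# Hypothetical tool declaration, shaped like an entry a server would return
# from tools/list: name, description, and a JSON Schema for the arguments.
weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# A client can check arguments against the schema before calling the tool.
args = {"city": "Madrid"}
missing = [k for k in weather_tool["inputSchema"]["required"] if k not in args]
print(missing)  # []
```

Because the schema travels with the tool, any MCP client can validate and present the tool without knowing anything about the server behind it.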

JSON-RPC over standard transports

They didn’t reinvent the wheel. Messages are JSON-RPC 2.0: standard, proven, simple. Transport is stdio for local servers and streamable HTTP (with server-sent events) for remote ones. Nothing revolutionary, but it works.
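For the skeptics: this is roughly what one of those messages looks like on the wire. The `tools/call` method name comes from the MCP spec; the tool name and arguments here are made up:

```python
import json

# A tools/call request as it travels over the wire (JSON-RPC 2.0).
# "query_database" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}
wire = json.dumps(request)

# The server replies with the same id, so the client can match
# responses to requests even when several are in flight.
decoded = json.loads(wire)
print(decoded["method"], decoded["id"])  # tools/call 42
```

Any language with a JSON library can speak this, which is exactly why the SDK list above grew so fast.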

Authentication and permissions

Integrated OAuth, respect for existing permissions, granular access control. Security by design.

Use cases that actually work

1. Development with enterprise context

# MCP server for Jira access (illustrative command)
npx @atlassian/mcp-server-jira

Claude Desktop can now access your Jira tickets, understand project context, suggest fixes based on bug history.

2. Frictionless data analysis

# MCP server for PostgreSQL (reference server)
npx -y @modelcontextprotocol/server-postgres postgresql://localhost/mydb

Ask Claude: “What are the most active users in the last week?” and get SQL + analysis.

3. Workflow automation

With access to GitHub + Slack + Jira, Claude can:

  • Create PRs based on Jira tickets
  • Notify in Slack when there are deployments
  • Analyze performance metrics

Docker + MCP = Perfect combination

What I like most is the Docker integration. Each MCP server is a container:

# docker-compose.yml
services:
  mcp-github:
    image: mcp/github-server
    environment:
      - GITHUB_TOKEN=${GITHUB_TOKEN}
  
  mcp-postgres:
    image: mcp/postgres-server
    environment:
      - DATABASE_URL=${DATABASE_URL}

Consistent distribution, dependency isolation, scalability. Everything you need for production.

Remote MCP: The next level

Atlassian launched Remote MCP Servers - MCP servers that run in the cloud, not locally. This solves:

  • Configuration problems (you don’t need to install anything)
  • Centralized security (OAuth + corporate permissions)
  • Scalability (Cloudflare Workers + edge computing)

My remaining concerns

1. Security

In April 2025, security researchers found vulnerabilities in MCP:

  • Prompt injection in tools
  • File exfiltration combining tools
  • Impersonation of trusted tools

This is serious. We need better security practices before using MCP in production.

2. Operational complexity

Managing multiple MCP servers, connection debugging, error handling… it can get complex fast.

3. Subtle vendor lock-in

Although MCP is open source, the best servers are being developed by large companies. What if Anthropic decides to change direction?

Why MCP will succeed

1. Perfect timing

It arrived just when we need to integrate AI with existing systems. It’s not too early, not too late.

2. Giant adoption

OpenAI, Google, Anthropic, Atlassian… when the big players adopt a standard, it usually works.

3. Community development

Thousands of developers already contributing MCP servers. The ecosystem is growing organically.

4. Clear use cases

It’s not a solution looking for a problem. Solves real integration pain points.

My pragmatic recommendation

For skeptics like me: Try it in a small project. Install Claude Desktop, connect a GitHub or Slack MCP server, and see for yourselves.

For development teams: Start with MCP servers for your internal systems. Documentation, databases, monitoring tools.

For companies: Watch the evolution of Remote MCP Servers. Atlassian is leading the way, but more will come.

Conclusion: From skepticism to pragmatic confidence

MCP isn’t perfect. It has security problems, operational complexity, and the usual risk of any new technology. But it works.

What convinced me wasn’t the promises, but seeing real adoption by serious companies, solving real problems, with open and transparent code.

As the skeptic I am, I’ll keep watching closely. But I’ll also start experimenting. In a world where AI needs access to enterprise data, MCP seems to be the best bet we have.


What do you skeptics think? Have you tried MCP? Do you think it will be another standard that dies in oblivion, or does it really have potential?

My advice: keep healthy skepticism, but don’t close yourself to experimenting. Sometimes the standards that work are the ones you least expected.

PS: If you decide to try it, start with the MCP GitHub server. It’s easy to configure and the use cases are obvious.
