Anthropic wants access to your computer and data. The more data they can use, the more useful their service becomes. Recently, Anthropic launched computer use and subsequently introduced Model Context Protocol (MCP), which is an effort to create a standardized framework for AI interactions with local and remote resources.
The host app can retrieve information from MCP servers (files or text, called resources) or invoke server functions (tools). The design is deliberately simple, and to encourage developers to create servers, Anthropic provides TypeScript and Python SDKs and has open-sourced everything.
Technical Overview
- MCP Server: an HTTP listener that uses JSON-RPC for requests and Server-Sent Events (SSE) for asynchronous communication (unlike WebSockets, SSE is a one-way channel that lets the server push events back to the app over plain HTTP)
- MCP Schema: A JSON schema defining the structure of resources and interactions.
- MCP Client: A TypeScript or Python library within the host app that connects to the server, manages server connections, processes queries, and handles responses.
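Under the hood, the client-server traffic is ordinary JSON-RPC 2.0 envelopes. As a rough illustration (the method name and result shape below are simplified stand-ins, not a faithful copy of the MCP schema), the exchange can be sketched in Python:

```python
import json

def make_request(request_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

def parse_response(raw: str) -> dict:
    """Parse a JSON-RPC 2.0 response, raising on protocol errors."""
    msg = json.loads(raw)
    if msg.get("jsonrpc") != "2.0":
        raise ValueError("not a JSON-RPC 2.0 message")
    if "error" in msg:
        raise RuntimeError(f"server error {msg['error']['code']}: {msg['error']['message']}")
    return msg["result"]

# Hypothetical exchange: the client asks a server to list its resources.
req = make_request(1, "resources/list", {})
raw_reply = json.dumps({"jsonrpc": "2.0", "id": 1,
                        "result": {"resources": [{"uri": "file:///notes.txt"}]}})
result = parse_response(raw_reply)
```

The SDKs hide this envelope handling entirely; the point is only that the wire format is standard JSON-RPC, not something exotic.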
While technically not groundbreaking, MCP marks the beginning of a standardization process. It addresses common technical needs, such as:
- 🔀 Connection lifecycle management.
- ⚡ Error handling.
- 🔬 Logging and monitoring capabilities.
- ✅ Schema/input validation.
- ⏳ Timeouts.
- 📃 Unified message formats.
- 📚 Testing.
- and more.
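Most of these concerns are plain plumbing that every integration would otherwise reimplement. As a minimal sketch of two of them, input validation and timeouts, here is a hypothetical tool-call wrapper (not the actual SDK internals):

```python
import concurrent.futures

def validate_input(params: dict, schema: dict) -> None:
    """Toy schema check: required keys and their expected types."""
    for key, expected_type in schema.items():
        if key not in params:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(params[key], expected_type):
            raise TypeError(f"field {key} must be {expected_type.__name__}")

def call_with_timeout(fn, params: dict, schema: dict, timeout_s: float = 5.0):
    """Validate the input, then run the tool with a hard deadline."""
    validate_input(params, schema)
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn, params).result(timeout=timeout_s)

# Hypothetical "echo" tool with a one-field schema.
result = call_with_timeout(lambda p: p["text"].upper(),
                           {"text": "hello"}, {"text": str})
```

Having one agreed-upon place for this boilerplate, rather than a slightly different version in every server, is the real value standardization offers here.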
That's nice, but what's next?
- 🔍 Google has introduced Project Mariner.
- 💻 Microsoft has Copilot.
- 🤖 OpenAI is incorporating tools into ChatGPT, with rumors swirling about their own browser.
But here’s the catch: none of these companies are discussing MCP. If you believe you can simply add MCP to your AI agent and expect seamless integration with all AI products, you’re likely mistaken 😥.
Need or force?
Standards like OpenAPI and OpenTelemetry gained popularity because they addressed pain points shared by millions of developers working with distributed systems and complex infrastructures.
The OpenAI API succeeded because it offered something so compelling that everyone wanted to use it, not because OpenAI open-sourced a schema and SDK.
Another example: take cloud providers like AWS, GCP, Azure, Alibaba, and Yandex. Even though they offer similar products, their APIs remain non-standardized. If you're building a product like Pinecone, you must integrate with each cloud individually. Maybe MCP falls into this category, useful only for niche scenarios?
Need something bigger?
As I mentioned earlier, Anthropic is solving their own commercial problem: they want access to your data. But why should I, as an AI developer building agents, care to adopt MCP?
What we really need is an AI-to-AI or AI-Agent-to-AI-Agent communication protocol, not just another wrapper around HTTP using opinionated protocols like JSON-RPC and SSE.
I'm thinking about a future where:
- 🤝 Claude Desktop uses a built-in code interpreter capable of interacting seamlessly with the external world and asking other agents for help using an internal language of LLMs. In this case, SDKs like MCP become unnecessary.
- 🔗 AI systems communicate directly with each other without relying on us as intermediaries.
- 🖼️ Protocols support real-time multimodality (speech, vision, text).
- 🧠 Context sharing is effortless, including system prompts, conversation history, and active prompts.
- 🏗️ Hierarchical problem-solving allows tasks to be delegated to knowledge subgraphs.
- 📊 Features like conversation summarization, compositional function calls, and conditional execution of functions.
Think of it as a globally distributed CrewAI or AutoGen.
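None of this exists today, so purely as a speculative sketch (every field name here is invented), an agent-to-agent delegation message might carry shared context and a delegation budget alongside the subtask:

```python
import json

def delegate(task: str, context: list, constraints: dict) -> str:
    """Build a hypothetical agent-to-agent delegation envelope that
    shares conversation context alongside the subtask."""
    return json.dumps({
        "type": "delegate",
        "task": task,
        "context": context,        # shared history, no re-summarization per hop
        "constraints": constraints,
    })

msg = json.loads(delegate(
    "summarize the attached thread",
    [{"role": "user", "content": "..."}],
    {"deadline_s": 30, "max_depth": 2},  # hierarchical delegation budget
))
```

The interesting part is what the envelope would carry, shared context and bounded recursion, rather than the transport underneath it.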
Final thoughts
I believe the engineers at OpenAI and Anthropic are some of the smartest minds in AI, and they surely understand these challenges.
That said, MCP feels overhyped and not a true "game changer"—at least not yet. You might be better off waiting for further breakthroughs in local AI assistants before investing significant effort in adopting and maintaining this solution.