MCP, or the Model Context Protocol, was launched by Anthropic late last year as an attempt to make LLMs call APIs reliably.
Many folks in the industry, technical and non-technical alike, do not realize that LLMs are pretty bad at calling APIs — at least without some serious coaxing via prompts or bespoke code.
APIs are the foundation for building and connecting software, so it’s surprising that LLMs are unable to use such a standard technology out of the box.
How does MCP work today?
The current implementation of MCP is typically deployed as an “MCP Server” that runs locally on the same machine where you’re accessing your models. So if you use a program like Claude Desktop or Cursor on your Mac, the MCP server runs behind the scenes on that same Mac.
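For example, Claude Desktop looks for local servers in its claude_desktop_config.json file and launches each one as a subprocess. A minimal sketch, assuming a hypothetical server named “weather” built as a Node script:

```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/path/to/weather-server/build/index.js"]
    }
  }
}
```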
This server provides the LLM with basic information such as:
Which APIs are available to use?
How should I think about when to use these APIs?
How do I actually call these APIs?
What data would I get back if I were to make an API call?
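Each of those questions maps onto something the server declares up front. Here is a minimal sketch of such a server using the official TypeScript SDK (@modelcontextprotocol/sdk); the tool name, description, and upstream weather URL are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// The name/version identify this server to the host application.
const server = new McpServer({ name: "weather", version: "1.0.0" });

// Registering a tool answers all four questions in one place:
// the name and description say *which* API exists and *when* to use it,
// the zod schema says *how* to call it, and the handler's return value
// is *what data comes back*.
server.tool(
  "get_weather",
  "Get the current weather for a city",
  { city: z.string().describe("City name, e.g. 'San Francisco'") },
  async ({ city }) => {
    // Hypothetical upstream API; the server, not the LLM, makes the HTTP call.
    const res = await fetch(
      `https://api.example.com/weather?city=${encodeURIComponent(city)}`
    );
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Talk to the host (e.g. Claude Desktop) over stdin/stdout.
await server.connect(new StdioServerTransport());
```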
If you’ve written any significant amount of code in the last decade, you’d quickly realize that this setup of needing a local server to call a remote server (or an API for your APIs) is really weird.
Why is the current version of MCP so weird?
This section is not intended to dunk on the fine engineers at Anthropic who created MCP. They built it to solve a real problem, and real constraints in how LLMs work shaped the solution.
LLMs are unbelievably good at following patterns in language, be it a human language or a programming one. They aren’t great at being asked a very human question, being expected to switch to behaving like a computer for a brief moment mid-sentence, and then finishing the conversation as a human. They can sort of do this, but it’s very unreliable.
In order to solve this problem, an LLM needs outside help. In most software, outside help usually comes in the form of calling an external API. But most LLMs don’t have the infrastructure needed to call APIs.
To solve this, you need an API to call APIs.
To give LLMs an API to call APIs, the Anthropic team came up with a clever solution: install an MCP server that plugs into wherever your models are being used and handles this human-to-machine-and-back-to-human translation, without requiring any fundamental changes to the model.
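To make that concrete, here is a sketch of what the host application (the “API for your APIs”) does using the same SDK’s client; the server command and tool call are hypothetical and match the server sketch above:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// The host launches the local MCP server as a subprocess and
// communicates with it over stdin/stdout.
const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["build/index.js"] })
);

// 1. Ask the server which tools exist; their names, descriptions,
//    and schemas are what get placed into the model's context.
const { tools } = await client.listTools();

// 2. When the model emits a structured tool request mid-conversation,
//    the host relays it to the server and returns the result as text
//    the model can keep reasoning over.
const result = await client.callTool({
  name: "get_weather",
  arguments: { city: "San Francisco" },
});
```

The model never opens a network connection itself; it only emits a structured request, and the host and MCP server do the mechanical work on its behalf.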
What’s next?
A future in which any LLM can reliably call external APIs without much friction will be an enormous unlock for every industry. While not perfect, and a little weird, MCP is the best first step in that direction so far.
We built Supergood for this inevitable future: demand from LLMs for access to a product’s APIs will exceed the supply of APIs available for them to use.
Supergood generates and actively maintains APIs for products that do not have them. We pair human-in-the-loop code generation with our in-house observability platform to quickly create new integrations and free your engineering team from ongoing maintenance.
If you want to learn more, schedule some time with me!
We’re also hiring Engineers and Generalists in San Francisco.