How we give Claude access to real-time data
If you've ever tried to get an LLM to accurately use data from the web, you'll know the struggle.
Ask it to find menu prices from Chicago restaurants and you'll get a wall of text about Lou Malnati's famous deep dish instead of a clean table of restaurants, prices, and items.
We've been working with Claude to tackle this problem, and we've found that by combining Anthropic's Model Context Protocol (MCP) with real-time structured web data powered by Supergood integrations, we can dramatically improve the accuracy and usefulness of LLMs.
LLM + Web Data: Great for demos, not for scale
Let's look at an example: trying to find pets available for adoption. Using a tool like Perplexity, which lets LLMs search the internet, you might ask "What pets are available for adoption in Burlington, NC? Show the results as a table."
The tool retrieves unstructured data gathered from various parts of the internet and jams it into a table. At first glance, this seems acceptable for a consumer performing cursory research.
But upon closer inspection, these results are sparse, inaccurate, and generally impossible to build a quality product around. In reality, there are actually dozens of dogs and rabbits (and one pig named Perry) available for adoption in Burlington that Perplexity doesn’t seem to be aware of.
If you're part of a product, engineering, or data & analytics team building for scale, and you're looking to use AI to integrate web data into your product or analyze it, your $20 subscription to your favorite LLM won't cut it.
Solving the scale problem requires better tools
When we use Anthropic's Model Context Protocol (MCP) and provide Claude with a tool generated with Supergood, the difference is striking:
We’ve overcome two key limitations of off-the-shelf LLMs:
We have full control over the sources the model can use, forcing it to prioritize high-quality structured datasets that off-the-shelf LLMs can't access.
We can enable LLMs to search across entire, massive datasets, rather than cramming whatever unstructured data you can fit into a context window.
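Here's a minimal sketch of what that control looks like in practice with the Anthropic Python SDK: we register a single tool and use `tool_choice` to force Claude to call it rather than answer from memory. The `search_adoptable_pets` tool, its schema, and the model alias are hypothetical stand-ins for a Supergood-generated integration.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A single, curated tool: the only "source" Claude is allowed to use.
pet_search_tool = {
    "name": "search_adoptable_pets",  # hypothetical Supergood-backed tool
    "description": "Search a structured, real-time dataset of adoptable pets.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "state": {"type": "string", "description": "Two-letter state code"},
            "species": {"type": "string", "description": "Optional filter"},
        },
        "required": ["city", "state"],
    },
}

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias
    max_tokens=1024,
    tools=[pet_search_tool],
    # Force a tool call instead of an answer from the model's memory.
    tool_choice={"type": "tool", "name": "search_adoptable_pets"},
    messages=[{
        "role": "user",
        "content": "What pets are available for adoption in Burlington, NC?",
    }],
)

# Claude returns structured arguments; our code runs the real query,
# then feeds the results back in a follow-up message.
tool_call = next(block for block in response.content if block.type == "tool_use")
print(tool_call.input)  # e.g. {"city": "Burlington", "state": "NC"}
```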
This is huge for common AI applications, where access to high-quality, structured information dramatically increases the accuracy of complex queries.
As an example, RAG applications require data to be chunked in a way that preserves meaningful relationships and context while staying within token limits. When data is well-structured, you cut out the fluff that carries a high token cost but doesn't add to (and sometimes even degrades) the quality of results.
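To make the token-cost point concrete, here's a toy comparison (with made-up page markup and field names) between chunking raw page text and chunking a structured record:

```python
# The same listing, twice: once as raw page text, once as a structured record.
raw_page_text = """
<div class="pet-card"><span>ADOPT ME!</span>
  Perry is a sweet, social pig looking for his forever home...
  Share on Facebook | Tweet | Donate | See our other animals
</div>
"""

structured_record = {
    "name": "Perry",
    "species": "pig",
    "location": "Burlington, NC",
    "status": "available",
}

def record_to_chunk(record: dict) -> str:
    """Render a structured record as a dense, retrieval-friendly chunk."""
    return "; ".join(f"{key}={value}" for key, value in record.items())

print(record_to_chunk(structured_record))
# name=Perry; species=pig; location=Burlington, NC; status=available
# Every token carries signal; the share buttons and markup are gone.
```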
This extends beyond RAG applications and is a hard truth about LLMs — they work way better when they have access to structured data.
MCP bridges the gap between models and tools
MCP is Anthropic's protocol for standardizing how models interact with external tools. Think of it as a universal adapter that lets models like Claude seamlessly connect with specialized tools while maintaining consistent output formats.
The protocol defines how tools describe their capabilities to the model and provides standard formats for data ingest & output. This standardization means that once a tool implements MCP, it can be used by any model that supports the protocol.
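For a flavor of what that looks like, here's a sketch of a small MCP server built with the official Python SDK's FastMCP helper. The function's type hints and docstring become the capability description the model sees; the tool name and stubbed data are hypothetical, standing in for a Supergood-backed lookup.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pet-adoption-data")

@mcp.tool()
def search_adoptable_pets(city: str, state: str, species: str | None = None) -> list[dict]:
    """Search a structured, real-time dataset of adoptable pets."""
    # Stubbed results; a real server would query the live integration here.
    results = [
        {"name": "Perry", "species": "pig", "city": "Burlington", "state": "NC"},
    ]
    if species:
        results = [r for r in results if r["species"] == species]
    return results

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio, so any MCP client can call it
```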
Long-term, this should lead to better interoperability across the AI ecosystem. Short-term, this lets us see drastic improvements to Claude output when equipped with better tools.
Structured, real-time web data: AI’s killer power tool
Perplexity and similar tools do a great job at what they're built for: giving humans quick answers from the web. But embedding web data for production-worthy AI applications requires a fundamentally different approach.
The challenge with web data isn't just extraction; it's maintaining a reliable data pipeline that outputs consistently high-quality data and can handle the mess of modern web apps. The engineering cost of keeping up with an ever-changing web makes it nearly impossible to maintain consistent data quality at scale.
This is where Supergood comes in. Supergood generates unofficial APIs that are officially maintained. The platform combines LLMs with our proprietary observability data and human-in-the-loop expertise to ensure high-quality, structured data, reliably delivered.
The result? Your AI-powered apps and workflows can use real-time web data from all of the sources that you need, without requiring you to dedicate your best engineers to building & maintaining a bunch of flaky integrations and data sources.
See it for yourself
MCP is just getting started, but the potential for utilizing it with reliable web data is already clear. If you're experimenting with AI agents and tools, hit us up at hello@supergood.ai. We'd love to chat and see if real-time web data can take your product to the next level.
We're hiring!
We’re looking for talented engineers and generalists to join our team in San Francisco. Check us out at https://supergood.ai/careers