Last week I spent three hours copying and pasting context into Claude just to get useful suggestions for a client's Meteor app. Database schemas, API endpoints, custom middleware—the whole circus. If you've worked with AI coding assistants, you know the drill. The constant context-switching kills productivity.
MCP servers fix this problem. They're not revolutionary, just practical.
What MCP Servers Actually Do
MCP (Model Context Protocol) is Anthropic's attempt at standardizing how AI tools connect to data sources. Released in November 2024, it's basically a protocol that lets your AI assistant talk directly to your databases, APIs, and file systems without you playing middleman.
The USB-C comparison everyone uses actually makes sense here. Before USB-C, every device needed its own cable. Before MCP, every AI tool needed custom integration code for every data source. Now you write one MCP server, and it works with Claude, Cursor, Codeium, whatever.
OpenAI and Google jumped on board pretty quickly, which tells you something. When competitors adopt your standard, you've probably built something useful.
How MCP Actually Works
The architecture is dead simple: a client-server model speaking JSON-RPC 2.0. Your AI tool is the client; your MCP server exposes your data and tools. The server provides three things:
Tools - Functions the AI can call. Database queries, API calls, file operations. Whatever you need.
Resources - Read-only data sources. Config files, documentation, schemas.
Prompts - Saved templates for common tasks. Because explaining your database structure fifty times gets old.
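Under the hood, a tool invocation is just a pair of JSON-RPC 2.0 messages. Here's a sketch of what the client sends and the server returns when a tool gets called—the tool name, arguments, and result text are illustrative, not from a real server:

```typescript
// Hypothetical tools/call request a client sends to an MCP server.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query_database",
    arguments: { query: "SELECT id, email FROM users LIMIT 5" },
  },
};

// The server's response carries the same id and the tool's output.
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '[{"id":1,"email":"a@example.com"}]' }],
  },
};
```

The SDKs generate and parse these messages for you; you never write them by hand.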
The TypeScript SDK handles most of the protocol stuff. You just define your tools and their parameters. Here's what setting up a basic server looks like:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({
  name: "my-tools-server",
  version: "1.0.0",
});

await server.connect(new StdioServerTransport());
Nothing fancy. Just Node.js with types.
Does It Actually Save Time?
We tested this on a Meteor.js project last month. The before and after was pretty clear. Without MCP: spend 20 minutes gathering context, pasting schemas, explaining our weird legacy patterns. With MCP: the AI already knew everything. Queries ran directly against our MongoDB. File structure was instantly accessible.
The time savings add up. Not just the obvious stuff like eliminating copy-paste. It's the accuracy improvement. When the AI has real-time access to your actual schema instead of your half-remembered description of it, you get better suggestions. Fewer hallucinations about field names that don't exist.
Block and Apollo are already using this in production. So are development tools like Zed, Replit, and Sourcegraph. This isn't beta software anymore.
Here's a typical workflow at Matrix Web Solutions now: Client needs API modifications. Instead of documenting everything for the AI, we spin up an MCP server that connects to their database, exposes their existing endpoints, and provides access to their business logic. The AI gets actual context, not our interpretation of it.
Building an MCP Server in TypeScript
The setup is straightforward if you've built any Node.js service. Install the SDK, configure TypeScript for ES2022 and Node16 modules, write your server.
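Those compiler settings amount to a few lines. A minimal tsconfig.json along these lines matches the SDK's ESM packaging (the directory names are illustrative):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```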
Each tool needs a name, description, and parameter schema. That's how the AI knows what to call and when. No complex documentation required—the protocol handles that communication.
Exposing a database? Create a tool that accepts queries and returns results. Need to hit an external API? Wrap it in a tool with proper error handling. The pattern's the same regardless of what you're building.
import { z } from "zod";

server.tool(
  "query_database",
  "Run a SELECT query on the production database",
  { query: z.string() },
  async ({ query }) => {
    // Your actual database logic here
    const results = await db.query(query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
);
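For the external-API case, the handler body follows the same pattern. Here's a sketch of the error-handling part—the endpoint, function name, and fetcher injection are illustrative, not from the MCP SDK:

```typescript
// Illustrative wrapper for exposing an external API call as a tool handler.
// The fetch function is injected so the logic can be tested without a network.
type FetchLike = (
  url: string
) => Promise<{ ok: boolean; status: number; text(): Promise<string> }>;

async function callInventoryApi(fetchFn: FetchLike, endpoint: string) {
  try {
    const res = await fetchFn(endpoint);
    if (!res.ok) {
      // Specific failure details give the AI something to act on.
      return {
        isError: true,
        message: `Inventory API returned HTTP ${res.status} for ${endpoint}`,
      };
    }
    return { isError: false, message: await res.text() };
  } catch (err) {
    return {
      isError: true,
      message: `Inventory API unreachable: ${(err as Error).message}`,
    };
  }
}
```

The tool handler then converts that result into the protocol's content format instead of letting exceptions escape.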
One thing to watch: stdio transport works fine for local development with Claude Desktop. For production or remote access, use the Streamable HTTP transport. SSE is deprecated now.
Security (The Part Everyone Skips)
Security researchers found some obvious problems in 2025. Prompt injection, over-permissioned tools, authentication bypasses. Not surprising when you're letting AI execute commands.
The protocol includes basic security—host applications approve servers, control connections. But your implementation matters more. Input validation isn't optional when your MCP server touches production data.
We treat MCP servers like public API endpoints. Sanitize everything. Use read-only database credentials where possible. Rate limit aggressively. Never trust input just because it came from an AI.
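Rate limiting doesn't need infrastructure to start with; an in-process token bucket in front of each handler goes a long way. A minimal sketch (the class is mine, not from the MCP SDK):

```typescript
// Minimal token-bucket rate limiter for guarding tool handlers (illustrative).
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSecond: number) {
    this.tokens = capacity;
    this.last = Date.now();
  }

  // Returns true if a call is allowed, false if the caller should back off.
  tryTake(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSecond
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A handler checks tryTake() before touching the database and returns an error result instead of a query result when the bucket is empty.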
The scariest vulnerability we've seen: MCP servers that blindly execute SQL from AI requests. That's how you wake up to an empty database. Always use parameterized queries, even when the AI is generating them.
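A cheap first line of defense is refusing anything that isn't a single SELECT statement before the query ever reaches the driver. A sketch of that guard (illustrative, and no substitute for read-only credentials and parameterized queries):

```typescript
// Rejects anything but a single SELECT statement (illustrative guard;
// combine with read-only DB credentials and parameterized queries).
function isReadOnlySelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (trimmed.includes(";")) return false; // no stacked statements
  return /^select\b/i.test(trimmed);       // must start with SELECT
}

// Parameterized queries keep values out of the SQL text entirely, so
// AI-generated input can't change the statement's structure, e.g.:
//   db.query("SELECT * FROM users WHERE id = $1", [userId]);
```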
Patterns That Actually Work
After building a few MCP servers, some patterns emerged:
Specific tools beat generic ones. A get_user_by_id tool works better than a generic run_sql tool. The AI understands specific tools better and uses them more accurately.
Cache expensive operations. Your database doesn't need hammering every time the AI thinks. Cache schemas, cache common queries, cache anything that doesn't change often.
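A tiny TTL cache covers most of this; no Redis required for a single-process server. A minimal sketch (the class name is mine):

```typescript
// Tiny TTL cache for expensive lookups like database schemas (illustrative).
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const hit = this.store.get(key);
    if (!hit || hit.expires < now) return undefined; // expired or missing
    return hit.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
}
```

A schema tool checks the cache first and only hits the database on a miss; a TTL of a few minutes is plenty for things that change on deploy, not per request.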
Error messages matter. "Database connection failed" tells the AI nothing. "Database connection failed - PostgreSQL not responding on port 5432" gives it something to work with.
Start read-only. It's tempting to give the AI write access immediately. Don't. Start with read operations, build confidence, then gradually add mutations. Much easier than explaining why production data disappeared.
The Growing Ecosystem
There's already an MCP server for most developer tools. PostgreSQL, MongoDB, GitHub, Slack, Google Drive, AWS S3. The official repos have solid implementations you can use or learn from.
The community's building servers for everything. Jira, Docker, Spotify, you name it. Quality varies wildly. Some are production-ready, others are weekend projects. Check the code before trusting them with anything important.
What's useful for agencies? Custom MCP servers for client systems. We built one for a client's proprietary inventory system. Now any developer on the project can get AI assistance without learning their weird API first. The MCP server translates between their domain-specific stuff and standard tool interfaces.
Performance Reality Check
Latency kills the experience. If your tools take more than 2 seconds to respond, users think something's broken. Profile everything.
Database queries need indexes. External APIs need timeouts. Batch requests when possible. The basics don't stop applying just because there's an AI in the middle.
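For the timeout part, wrapping any external call in a hard deadline is a one-function job. A sketch (the helper is mine, not from the MCP SDK):

```typescript
// Races a promise against a deadline so a slow API can't stall the
// whole tool response (illustrative helper).
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}
```

Usage is just withTimeout(fetchInventory(), 2000, "inventory API"); the rejection message tells the AI which dependency stalled, which beats a silent hang.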
We learned this the hard way. First MCP server we built did full table scans on every query. Worked fine in development. Production? The AI would timeout waiting for responses. Now we optimize queries before exposing them as tools.
Transport choice matters too. Stdio for local development, Streamable HTTP for production. Don't use SSE—it's deprecated and has issues with larger responses.
Where This Is Going
MCP is gaining adoption faster than most protocol standards. That's usually a good sign. When OpenAI and Google adopt your competitor's standard, you've built something that solves a real problem.
For development agencies, MCP servers offer tangible benefits. Less time explaining context to AI. More accurate suggestions. Faster development cycles. It's not transformative, just incrementally better in ways that compound.
We're using MCP servers on most new projects now. The setup time pays off quickly through reduced context-switching and better AI assistance. Clients don't care about the technical details, but they notice when projects move faster.
Should You Start Using MCP?
If you're using AI coding assistants regularly, probably yes. The investment is minimal—a few hours to build your first server. The productivity gain is immediate.
Start small. Pick one tool or data source you constantly copy-paste into Claude. Build an MCP server for it. See if it helps. Most developers who try it end up building more servers.
The TypeScript SDK is solid. Documentation is decent. Community examples are everywhere. There's no significant barrier to entry if you know Node.js.
Final Thoughts
MCP servers solve a specific problem: connecting AI to your tools without manual intervention. They're not revolutionary, just useful. In development, useful usually wins.
At Matrix Web Solutions, MCP has become part of our standard toolkit. Not because it's exciting technology, but because it makes our work slightly easier every day. Those small improvements compound into significant time savings.
The ecosystem will keep growing. More tools will get MCP servers. The protocol will evolve. But the core value proposition—less copy-paste, more actual work—remains constant.
If you're still copying context into AI assistants manually, you're wasting time. Build an MCP server. Your future self will thank you.