# MCP Server vs REST API: When to Use Which (Honest Take)
MCP is not a REST replacement. Decision table, real code from both, side-by-side comparison across 10 dimensions, and the hybrid pattern that ships in production.
TL;DR — MCP is not a REST replacement. It's a contract layer for AI agents that sits on top of the same kind of backend code you'd write anyway. Use REST for humans and traditional app clients. Use MCP when an AI agent needs to discover and invoke tools on your system. Most production systems end up with both — the MCP server calls the REST handler underneath.
I run both in production. My MCP servers (AppHandoff, MCP Beast) and my REST APIs (the blog, the admin dashboard, the contact pipeline) live in the same codebase. They share auth, they share the database, and half the time the MCP tool handler is a 10-line wrapper over an existing REST function. This post is the honest answer to "should this be an MCP server or a REST API?" — no buzzwords, no product pitch.
## Decision table (skim this and leave)
| Need | Pick | Why |
|---|---|---|
| Web app, mobile app, or CLI calling your backend | REST / GraphQL | Mature tooling, cacheable, every language has a client |
| AI agent needs to do something in your system | MCP | Tool discovery, structured schemas, audit per-call |
| AI agent needs to read public or semi-public data | REST (+ retrieval) | Don't add MCP just to expose a GET |
| Multiple AI clients (Claude, Cursor, custom) need the same tools | MCP | One contract, many clients |
| You need fine-grained per-tool access control for agents | MCP | Built into the model |
| You need sub-100ms latency for user-facing UI | REST | Fewer hops |
| You haven't shipped the REST version yet | REST first | Build once, wrap with MCP later |
That's the whole post. The rest is the receipts.
## What MCP is actually for
Model Context Protocol is an open standard from Anthropic. At the wire level it's JSON-RPC 2.0 over stdio or HTTP. The real product is what it standardises:
- `tools/list` — "what can this server do?" (agent-readable)
- `tools/call` — invoke a named tool with structured arguments
- `resources/list` + `resources/read` — expose content by URI
- `prompts/list` + `prompts/get` — reusable prompt templates
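On the wire, each of these is a plain JSON-RPC 2.0 request. A minimal sketch of building a `tools/call` envelope — the method name comes from the spec; the tool name and arguments here are hypothetical:

```ts
// Build a JSON-RPC 2.0 envelope for an MCP tools/call request.
interface JsonRpcRequest {
  jsonrpc: '2.0'
  id: number
  method: string
  params: Record<string, unknown>
}

function buildToolsCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    // MCP nests tool arguments under params.arguments
    params: { name, arguments: args },
  }
}

const req = buildToolsCall(1, 'list_handoff_tickets', { status: 'open' })
```

POST that body to an HTTP-transport server (or write it to stdin for a stdio one) and you've invoked a tool — no generated client, no SDK required.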
An MCP server is valuable for one specific reason: the agent can discover your tools and understand their schemas without you writing a custom client. Claude, Cursor, Claude Code, Claude Desktop, and any MCP-compatible agent all speak the same protocol. Build the tool once.
If your answer to "why MCP?" doesn't involve an LLM as the primary consumer, you don't need MCP.
## What REST still wins at
REST / GraphQL are not going anywhere. They win at:
- Caching. CDNs, `stale-while-revalidate`, edge caching, browser HTTP cache. None of this applies to MCP tool calls — every `tools/call` is effectively uncacheable.
- Public or semi-public read APIs. The blog API on this site is a GET with `s-maxage=60, stale-while-revalidate=300`. Wrapping it in MCP would lose the caching and be slower.
- Any UI client. Web, mobile, desktop — they talk REST/GraphQL because every framework has a client, every debugger speaks HTTP, and browser DevTools inspects it natively.
- Latency-critical paths. Fewer hops, smaller envelopes, predictable performance.
Here's the actual REST handler powering inspiredbyfrustration.com/blog — 20 lines, edge-cached, human-readable:
```ts
// inspired-api/dashboard/app/api/blog/route.ts
import { NextRequest, NextResponse } from 'next/server'

export async function GET(request: NextRequest) {
  const origin = request.headers.get('origin')
  const { searchParams } = new URL(request.url)
  const siteId = searchParams.get('siteId') || (await getTenantIdFromOrigin(origin))
  const limitParam = searchParams.get('limit')
  const limit = limitParam ? Math.min(parseInt(limitParam, 10), 100) : undefined
  const posts = await getPublishedPosts(limit, siteId)
  return NextResponse.json(
    { data: posts },
    {
      headers: {
        ...corsHeaders(origin),
        'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
      },
    }
  )
}
```
This would be worse as an MCP server. You can't cache it on Cloudflare, and nothing in the call pattern benefits from tool discovery. Leave it alone.
## Side-by-side comparison
| Dimension | REST | MCP |
|---|---|---|
| Protocol | HTTP verbs + JSON/GraphQL | JSON-RPC 2.0 over stdio or HTTP |
| Discovery | OpenAPI (optional, often stale) | tools/list (required, live, schema-typed) |
| Auth | Bearer tokens, OAuth, cookies | Bearer tokens in headers (HTTP) or process env (stdio) |
| Caching | Full HTTP cache ecosystem | None — every call executes |
| Streaming | SSE / WebSockets (custom) | SSE transport is part of the spec |
| Client ecosystem | Every language, every framework | MCP SDKs (TS, Python, others growing) |
| Rate limiting | You build it | You build it (same tools) |
| Primary consumer | Humans + app clients | AI agents |
| Error format | HTTP status + JSON body | JSON-RPC error object |
| Versioning | URL path or header | Capability negotiation at handshake |
Neither is a strict superset of the other. Picking one is about who's calling you, not which is "newer."
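One practical consequence of the differing error formats: if you wrap REST handlers in MCP tools, you have to translate HTTP failures into JSON-RPC error objects. A sketch of that mapping — the `-32000..-32099` range is reserved by JSON-RPC for server-defined errors, but the specific code assignments below are this sketch's own convention, not part of any spec:

```ts
// Translate an HTTP status from the underlying REST/service layer
// into a JSON-RPC 2.0 error object an MCP client can act on.
interface JsonRpcError {
  code: number
  message: string
  data?: Record<string, unknown>
}

function httpToJsonRpcError(
  status: number,
  body?: Record<string, unknown>
): JsonRpcError {
  switch (status) {
    case 401:
    case 403:
      return { code: -32001, message: 'unauthorized', data: body }
    case 404:
      return { code: -32002, message: 'not_found', data: body }
    case 429:
      return { code: -32003, message: 'rate_limited', data: body }
    default:
      return { code: -32000, message: `upstream_http_${status}`, data: body }
  }
}
```

Keeping the mapping in one function means every wrapped tool fails the same way, which matters once an agent is deciding whether to retry.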
## Hybrid patterns — MCP wrapping REST
The question on a real codebase isn't "MCP or REST?" It's "do I need an MCP surface on top of the REST API I already have?"
In AppHandoff this is exactly how it works. The tool handler is 15 lines:
```ts
// handler for mcp tool "list_handoff_tickets"
async function listHandoffTickets({ projectId, status }) {
  // reuse the same service function the REST /api/handoffs endpoint uses
  const tickets = await getHandoffTickets({ projectId, status })
  return { content: [{ type: 'text', text: JSON.stringify(tickets) }] }
}
```
The REST endpoint exists for the dashboard. The MCP tool exists for Claude, Cursor, and anyone's custom agent. Both call getHandoffTickets() underneath. One source of truth, two front doors.
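Stripped of framework details, the pattern is: one service function, two thin adapters. A runnable sketch with a stubbed-out service — all names and data here are hypothetical, standing in for the real DB-backed code:

```ts
// One source of truth: the service function both surfaces call.
interface Ticket { id: string; status: string }

async function getHandoffTickets(filter: { status?: string }): Promise<Ticket[]> {
  // stub standing in for the real database query
  const all: Ticket[] = [
    { id: 't1', status: 'open' },
    { id: 't2', status: 'closed' },
  ]
  return filter.status ? all.filter(t => t.status === filter.status) : all
}

// Front door 1: REST-style handler returning a JSON body.
async function restHandler(query: { status?: string }) {
  return { data: await getHandoffTickets(query) }
}

// Front door 2: MCP tool handler returning MCP content blocks.
async function mcpToolHandler(args: { status?: string }) {
  const tickets = await getHandoffTickets(args)
  return { content: [{ type: 'text', text: JSON.stringify(tickets) }] }
}
```

Fix a bug in `getHandoffTickets` and both the dashboard and the agent get the fix; there is nothing to keep in sync.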
If you're starting from scratch, build the REST layer first. Then wrap the verbs that agents actually need as MCP tools. Don't MCP the whole surface — agents rarely need your admin endpoints.
### The proxy pattern
If you're federating many MCP servers (AppHandoff actually does this — users can register their own MCP endpoints), you need to proxy tool calls over JSON-RPC. The full implementation is 40 lines:
```ts
// apphandoff/apps/web/lib/mcp-foreign.ts
async function execCustomMcp(tool, args, secrets, dryRun) {
  const serverUrl = tool.config.mcp_server_url
  const headers = {
    'Content-Type': 'application/json',
    ...(secrets.MCP_AUTH_TOKEN ? { Authorization: `Bearer ${secrets.MCP_AUTH_TOKEN}` } : {}),
  }
  if (dryRun) {
    const res = await fetchWithTimeout(serverUrl, {
      method: 'POST', headers,
      body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list', params: {} }),
    }, 10_000)
    const json = await res.json()
    return { data: { tools_count: (json?.result?.tools ?? []).length }, status: 'success' }
  }
  const res = await fetchWithTimeout(serverUrl, {
    method: 'POST', headers,
    body: JSON.stringify({
      jsonrpc: '2.0', id: 1, method: 'tools/call',
      params: { name: tool.config.remote_tool_name ?? tool.name, arguments: args },
    }),
  }, 10_000)
  const json = await res.json()
  if (json.error) return { error: json.error.message, status: 'error' }
  return { data: json.result, status: 'success' }
}
```
Every call is wrapped in a 10-second timeout. The response goes through a 2 MB size cap. And a rate limiter + 3-state circuit breaker sit in front of the whole thing — details in MCP Server Architecture if you want the full stack.
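The 3-state circuit breaker is conceptually simple: closed (calls pass), open (calls fail fast), half-open (one probe allowed after a cooldown). A minimal sketch of the idea — not the AppHandoff implementation, and the thresholds are illustrative:

```ts
// Minimal 3-state circuit breaker: closed → open after N consecutive
// failures, open → half-open after a cooldown, half-open → closed on
// the next success (or straight back to open on failure).
type BreakerState = 'closed' | 'open' | 'half-open'

class CircuitBreaker {
  private state: BreakerState = 'closed'
  private failures = 0
  private openedAt = 0

  constructor(private maxFailures = 3, private cooldownMs = 30_000) {}

  canCall(now = Date.now()): boolean {
    if (this.state === 'open' && now - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open' // allow one probe call through
    }
    return this.state !== 'open'
  }

  recordSuccess(): void {
    this.state = 'closed'
    this.failures = 0
  }

  recordFailure(now = Date.now()): void {
    this.failures++
    if (this.state === 'half-open' || this.failures >= this.maxFailures) {
      this.state = 'open'
      this.openedAt = now
    }
  }
}
```

The point of the breaker in a proxy is that one dead upstream MCP server fails in milliseconds instead of burning a 10-second timeout on every call.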
## Errors: the one place MCP beats naive REST
A REST 429 is a number. An MCP error is a structured payload the agent can act on:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32003,
    "message": "rate_limited",
    "data": { "retry_after_ms": 30000, "limit_per_min": 60 }
  }
}
```
When I made AppHandoff return structured errors instead of plain 429s, Claude stopped giving up and started waiting 30 seconds and retrying. No prompt change, no model change — just better error payloads. Support tickets dropped roughly in half.
You can do the same thing in REST. Nothing stops you from returning a JSON body with retry_after_ms. Most people don't. MCP bakes it into the protocol, so it's the default.
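The REST equivalent is a 429 whose body carries the same machine-readable hints. A sketch — the field names are assumptions chosen to mirror the MCP payload, plus the standard `Retry-After` header for clients that honour it:

```ts
// Build a rate-limit response an agent can act on, instead of a bare 429.
function rateLimitResponse(retryAfterMs: number, limitPerMin: number) {
  return {
    status: 429,
    headers: {
      // Retry-After is specified in whole seconds
      'Retry-After': String(Math.ceil(retryAfterMs / 1000)),
      'Content-Type': 'application/json',
    },
    body: {
      error: 'rate_limited',
      retry_after_ms: retryAfterMs,
      limit_per_min: limitPerMin,
    },
  }
}
```

Same information, same effect on agent behaviour — REST just makes you remember to do it, where MCP's error object nudges you toward it.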
## Migration checklist — REST → add MCP
If you already have a REST API and you're considering MCP:
- List the 3–10 actions an agent would actually take. Not all your endpoints. Usually it's `create_*`, `update_*`, `search_*`, and `list_*` for one or two entities.
- Define a JSON Schema for each tool's inputs. This is what the agent reads. Include units, ranges, enum values, and whether the call is idempotent.
- Write handlers that call your existing service layer. If your REST route has a controller and a service, the MCP tool calls the service directly. Do not reimplement business logic.
- Pick a transport. HTTP if you're serving multiple customers, stdio if it runs locally on the user's machine. Mixed is allowed; see my real Cursor config.
- Add guardrails. Rate limit, circuit breaker, response cap, structured errors. Same tools you'd use on a public REST API — just now applied per-tool.
- Log every tool call. `tool_name`, `status`, `duration_ms`, `caller_id`. You need this for the error-rate dashboard; you really need it when a customer says "the agent did something weird."
- Document what's idempotent. Agents retry. If `send_email` isn't idempotent, say so in the tool description.
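To make the schema step concrete: here's a hypothetical input schema for a `search_posts` tool — note the enum, the bounded integer, and the idempotency note in the description, all of which the agent reads via `tools/list`:

```ts
// Hypothetical JSON Schema for a search_posts tool's inputs.
// This object is what an MCP client sees when it lists your tools.
const searchPostsInputSchema = {
  type: 'object',
  description: 'Search published blog posts. Read-only and idempotent.',
  properties: {
    query: { type: 'string', description: 'Full-text search terms' },
    status: {
      type: 'string',
      enum: ['published', 'draft'],
      description: 'Post status filter',
    },
    limit: {
      type: 'integer',
      minimum: 1,
      maximum: 100,
      default: 10,
      description: 'Max results to return',
    },
  },
  required: ['query'],
} as const
```

Every constraint you encode here is one less thing the model has to guess at call time.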
## The decision
Build REST for humans. Build MCP for agents. If you're only ever going to have one AI feature in one product, skip MCP and expose a tight REST endpoint. If you expect Claude + Cursor + a custom agent to all hit your system, MCP pays back the setup cost fast.
And if you already have a REST API — great, you're halfway done. Keep it, call it from your MCP tools, and let both serve their own audience.
Need help deciding, or need someone to actually ship the MCP layer? I've built production MCP servers and maintain the REST APIs they sit on top of. Describe the system you want to make AI-native and I'll tell you what the first version should look like.