How to Build an MCP Server in Dart (Step-by-Step Guide)

Most Flutter and Dart developers think of Dart as a UI language. A means to an end — write your widgets, ship your app, done. But Dart is a capable, compiled, strongly-typed server-side language too. And right now, there's a genuinely exciting opportunity to use it in one of the hottest areas in AI: Model Context Protocol (MCP) servers.

This post walks you through what MCP is, why it matters, and how to build your own MCP server in Dart — complete with tools, Docker deployment, and integration with Claude Desktop.


What Is the Model Context Protocol (MCP)?

MCP is an open protocol introduced by Anthropic in November 2024. The idea is elegant: rather than each AI app building its own bespoke integrations with every tool and data source, MCP provides a standardized way to connect AI models to external context.

The best analogy: MCP is USB-C for AI applications. Just like USB-C standardized how devices talk to peripherals, MCP standardizes how AI assistants talk to your data.

The Three Components

MCP has three main actors:

  - Host: the AI application the user interacts with (Claude Desktop, an IDE, etc.)
  - Client: the connector inside the host that maintains a 1:1 connection to a server
  - Server: the program you write, exposing tools, resources, and prompts

Communication between them happens over JSON-RPC 2.0 messages.

When you ask Claude Desktop a question, here's what actually happens:

  1. The host sends your question + a list of available MCP tools to the LLM
  2. The LLM decides which tool to use
  3. The MCP client calls that tool on your server
  4. Your server returns the result
  5. The LLM synthesizes a response using that fresh, real data
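On the wire, steps 3 and 4 are plain JSON-RPC 2.0 messages. Here's roughly what a tool call looks like, using a hypothetical get_weather tool (shapes per the MCP spec, values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Bengaluru" }
  }
}
```

and the server's reply carries the result back as content blocks:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "28°C, partly cloudy" }]
  }
}
```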

This is what makes MCP servers powerful — the AI can now access live data from your databases, APIs, or internal services, not just its training data.

What Can You Connect?

The ecosystem is already rich. The official modelcontextprotocol/servers repo on GitHub covers, among others:

  - Filesystem and Git access
  - Databases (PostgreSQL, SQLite)
  - Web fetching
  - Persistent memory and knowledge graphs

You can also find a curated community list at github.com/punkpeye/awesome-mcp-servers.


Why Build an MCP Server in Dart?

Fair question. Python and TypeScript have first-party MCP SDKs from Anthropic. Dart is the new kid on this particular block.

But here's the thing: if you're a Flutter/Dart developer, you already know the language. You don't need to context-switch to TypeScript to expose your Firebase backend or your app's internal APIs to an AI assistant. And Dart compiles to native binaries — small, fast Docker images, no runtime overhead.

The Dart community has already shipped packages for this:

  - dart_mcp: the official package from the Dart team
  - mcp_dart: a community implementation of the MCP SDK

The dart_mcp package from the official Dart team is particularly exciting. It's experimental and evolving quickly, but it signals that Dart is going to be a first-class citizen in the MCP ecosystem.

One real-world example: Jhin Lee (@leehack) built a Flutter app that uses mcp_dart to talk to a Playwright MCP server, enabling full browser automation — built with Gemini in about an hour.


Building Your MCP Server in Dart

Let's build one from scratch. We'll create a server that exposes an about_aditya tool — a simple but illustrative example that shows the full flow.

Step 1: Scaffold the Project

Dart's server-shelf template gives you a production-ready server project with a Dockerfile included out of the box.

dart create -t server-shelf mcp_server_demo
cd mcp_server_demo

Your project structure will look like this:

mcp_server_demo/
├── bin/
│   └── server.dart
├── test/
├── Dockerfile
├── pubspec.yaml
└── README.md

Step 2: Add the MCP Package

dart pub add dart_mcp

This pulls in dart_mcp and its dependency json_rpc_2. You're ready to write your server.

Step 3: Create Your MCP Server Class

The dart_mcp package gives you a base class MCPServer and a set of mixins for capabilities. Here's the core structure:

import 'package:dart_mcp/dart_mcp.dart';

class MyMCPServer extends MCPServer with ToolsSupport {
  MyMCPServer({required super.channel})
      : super.fromStreamChannel(
          implementation: ServerImplementation(
            name: 'my-mcp-server',
            version: '0.1.0',
          ),
          instructions:
              'Provides information and analytics data for my app',
        );
}

A few things to note:

  - The ToolsSupport mixin is what advertises the tools capability to clients; dart_mcp ships similar mixins for other capabilities
  - ServerImplementation carries the name and version your server reports during the initialize handshake
  - instructions is a human-readable hint that tells the connecting LLM what your server is for

Step 4: Register Tools

Tools are registered inside the initialize method, which is called once when a client connects:

@override
FutureOr<InitializeResult> initialize(InitializeRequest request) {
  registerTool(
    Tool(
      name: 'about_aditya',
      description: 'Returns information about Aditya Thakur',
      inputSchema: ObjectSchema(),
    ),
    (_) => CallToolResult(
      content: [
        TextContent(
          text: '''
I'm a Software Engineer at Scapia, working on the Flutter app
as part of the Travel Growth team. Previously a Developer Advocate
at 100ms, focusing on live audio-video infrastructure.
          ''',
        ),
      ],
    ),
  );

  return super.initialize(request);
}

The registerTool method takes two arguments:

  1. A Tool descriptor — name, description, and input schema
  2. A handler function that receives the call parameters and returns a CallToolResult
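Once registered, this is what clients see when they send tools/list. For the tool above, the response would look roughly like this (field names per the MCP spec; exact serialization may differ by package version):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "about_aditya",
        "description": "Returns information about Aditya Thakur",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}
```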

Step 5: Accept Inputs with Schemas

Real tools need parameters. Here's how you'd define a more complex analytics tool with typed inputs:

registerTool(
  Tool(
    name: 'analytics_data',
    description: 'Fetches app analytics for a given time range and metric',
    inputSchema: ObjectSchema(
      properties: {
        'days': NumberSchema(
          description: 'Number of days to fetch data for (default: 30)',
        ),
        'metric': StringSchema(
          description: 'Metric to fetch (e.g. "screenPageViews")',
        ),
        'dimension': StringSchema(
          description: 'Dimension to group by (e.g. "date", "city")',
        ),
      },
    ),
  ),
  (params) async {
    // JSON numbers may decode as double, so go through num instead of casting straight to int
    final days = (params.arguments?['days'] as num?)?.toInt() ?? 30;
    final metric = params.arguments?['metric'] as String? ?? 'screenPageViews';
    final dimension = params.arguments?['dimension'] as String? ?? 'date';

    try {
      final data = await fetchAnalyticsData(days, metric, dimension);
      return CallToolResult(
        content: [TextContent(text: data)],
      );
    } catch (e) {
      return CallToolResult(
        content: [TextContent(text: 'Error: ${e.toString()}')],
      );
    }
  },
);

The LLM will automatically populate days, metric, and dimension from the conversation context — it reads the schema descriptions to understand what each field means.
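For example, if you ask "which cities drove the most page views last week?", the model could emit a call like this (values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "analytics_data",
    "arguments": {
      "days": 7,
      "metric": "screenPageViews",
      "dimension": "city"
    }
  }
}
```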

Step 6: Wire Up the Entry Point

In bin/server.dart, connect your MCP server to stdio:

import 'dart:io';
import 'package:dart_mcp/dart_mcp.dart';
import 'package:stream_channel/stream_channel.dart';
import 'package:mcp_server_demo/my_mcp_server.dart';

void main() {
  final channel = stdioChannel(input: stdin, output: stdout);
  MyMCPServer(channel: channel);
}
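When a client spawns your binary, the first message to arrive on stdin is the initialize request. Roughly (fields per the MCP spec, values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "claude-desktop", "version": "1.0.0" }
  }
}
```

Your server responds with the name and version from ServerImplementation plus the capabilities its mixins (like ToolsSupport) contribute, and the session is live.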

Connecting to Claude Desktop (Local Testing)

To test your server locally, you'll integrate it with Claude Desktop before thinking about deployment.

Option A: Run Directly with dart run

Edit your Claude Desktop config file (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "my_mcp_server": {
      "command": "dart",
      "args": ["run", "/path/to/mcp_server_demo/bin/server.dart"]
    }
  }
}

Option B: Docker (Recommended)

Build your Docker image first. The server-shelf template already includes a multi-stage Dockerfile that compiles your Dart code to a native binary:

docker build -t mcp_server_demo .

Then update your Claude Desktop config:

{
  "mcpServers": {
    "mcp_server_demo": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "mcp_server_demo"]
    }
  }
}

Restart Claude Desktop. Go to Settings → Developer — you should see your server listed. Now ask Claude anything that would trigger your tools, and watch it call them in real time.


The Deployment Problem (and How to Solve It)

The stdio-based approach works perfectly for Claude Desktop, which spawns your server as a child process. But what if you want to deploy your MCP server to the cloud and have it accessible to multiple clients?

Stdio doesn't work over the network. That's the problem.

The Solution: Streamable HTTP

The MCP spec defines a Streamable HTTP transport for exactly this scenario. Here's what you need to implement:

| Endpoint      | Purpose                                     |
| ------------- | ------------------------------------------- |
| GET /health   | Liveness probe for your hosting platform    |
| POST /mcp     | Client → server JSON-RPC (the main channel) |

The key insight is that your existing MCPServer class doesn't need to change at all. You just wrap it in a session management layer that:

  1. Listens for HTTP POST requests on /mcp
  2. Assigns each connecting client a unique session ID (Mcp-Session-Id)
  3. Creates a StreamChannel for that session
  4. Hands that channel to a new instance of your MCPServer
  5. Streams responses back over SSE (Server-Sent Events) within the HTTP response

Think of it like a hotel receptionist: the HTTP server is the public address (your phone number), the session registry is the receptionist who routes each caller to the right room, and the StreamChannel is the private conversation that happens in that room.
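In code, the receptionist pattern might look roughly like this. This is a sketch, not a drop-in implementation: it assumes the shelf and stream_channel packages, the MyMCPServer class from earlier, and it glosses over session expiry and response-stream multiplexing:

```dart
import 'dart:async';
import 'dart:convert';
import 'dart:io';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as shelf_io;
import 'package:stream_channel/stream_channel.dart';

// Sketch: one StreamChannelController per Mcp-Session-Id. We keep the
// local end; the foreign end would be handed to a fresh MCPServer.
final _sessions = <String, StreamChannelController<String>>{};

Future<Response> _handleMcp(Request request) async {
  final sessionId = request.headers['mcp-session-id'] ??
      DateTime.now().microsecondsSinceEpoch.toRadixString(36);

  final session = _sessions.putIfAbsent(sessionId, () {
    final controller = StreamChannelController<String>();
    // MyMCPServer(channel: controller.foreign); // the class from earlier
    return controller;
  });

  // Forward the client's JSON-RPC message into the session...
  session.local.sink.add(await request.readAsString());

  // ...and stream the server's replies back as SSE events. A real
  // implementation must multiplex this stream across requests and
  // expire idle sessions; this sketch ignores both.
  return Response.ok(
    session.local.stream.map((msg) => utf8.encode('data: $msg\n\n')),
    headers: {
      'Content-Type': 'text/event-stream',
      'Mcp-Session-Id': sessionId,
    },
  );
}

void main() async {
  FutureOr<Response> router(Request req) {
    if (req.url.path == 'health') return Response.ok('ok'); // liveness probe
    if (req.url.path == 'mcp') return _handleMcp(req);
    return Response.notFound('not found');
  }

  final port = int.parse(Platform.environment['PORT'] ?? '8080');
  await shelf_io.serve(router, InternetAddress.anyIPv4, port);
}
```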

SSE vs Streamable HTTP — quick comparison:

  - HTTP+SSE (the original remote transport): two endpoints, a long-lived GET stream for server → client messages plus a separate POST endpoint for client → server. Sessions are sticky, which makes load balancing awkward. Deprecated in the 2025-03-26 spec revision.
  - Streamable HTTP: a single POST /mcp endpoint; the server streams its replies back inside the HTTP response, optionally as SSE. This is the current recommended remote transport.

For cloud deployment, Streamable HTTP is the right choice. It's simpler to implement and works with every load balancer and reverse proxy out of the box.

Deploying to Railway / Render / Fly.io

Since you already have a Dockerfile from the server-shelf template, deploying is straightforward on any container platform:

  1. Push your image to a container registry (Docker Hub, GCR, etc.)
  2. Deploy as a web service on Railway, Render, or Fly.io
  3. Set the PORT environment variable (standard for these platforms)
  4. Point your MCP client config to your deployed URL instead of localhost

Your cloud-deployed MCP server endpoint would look like:

{
  "mcpServers": {
    "my_mcp_server": {
      "url": "https://your-server.railway.app/mcp"
    }
  }
}

What This Unlocks

Once your MCP server is running, the possibilities are genuinely exciting:

  - Expose your Firebase backend or app-internal APIs to any MCP-capable assistant
  - Let Claude answer questions from your live analytics instead of its training data
  - Drive tools like browser automation (the Playwright example above) from natural language

The dart_mcp package is experimental and evolving quickly, but the fundamentals are stable. The MCP spec itself (version 2025-03-26) is the authoritative reference, and the Dart team shipping their own package is a strong signal of long-term support.


Quick Reference: The Full Stack

Here's a summary of everything you need to build and ship a Dart MCP server:

# 1. Create project
dart create -t server-shelf my_mcp_server

# 2. Add MCP package
dart pub add dart_mcp

# 3. Build Docker image
docker build -t my_mcp_server .

# 4. Test locally with Claude Desktop
# Edit: ~/Library/Application Support/Claude/claude_desktop_config.json

# 5. Deploy to cloud
# Push image → deploy as web service → point clients at /mcp endpoint

Key packages:

  - dart_mcp: MCP server base classes and capability mixins
  - json_rpc_2: JSON-RPC 2.0 message handling (pulled in transitively)
  - stream_channel: the channel abstraction used by the transports
  - shelf: the HTTP server from the server-shelf template

Key classes:

  - MCPServer + ToolsSupport: your server and its tool capability
  - Tool, ObjectSchema, StringSchema, NumberSchema: tool descriptors and input schemas
  - CallToolResult, TextContent: what your handlers return


Wrapping Up

Dart is having a quiet but real moment in the AI tooling space. The dart_mcp package from the official Dart team, combined with the simplicity of the MCP spec, makes it genuinely practical to build AI-connected servers in Dart today.

If you're a Flutter developer who's been watching the MCP ecosystem from the sidelines — this is your on-ramp.

Build the server. Connect it to Claude. Expose your APIs. It's less work than you think, and the result is something that actually feels like the future.

If you enjoyed this, you might also like:


Have questions or built something cool with Dart MCP? Reach out - I'd love to see what you're building.