# mcpwire

The simplest way to connect to MCP servers. Two lines to connect. One line to call tools.

```typescript
import { connect } from "mcpwire";
const server = await connect("http://localhost:3000/mcp");
const result = await server.callTool("search", { query: "hello" });
```

No transport configuration. No protocol negotiation. No boilerplate. Just connect and go.
## Why mcpwire?

The official MCP SDK is powerful but verbose: connecting to a server takes 15-30+ lines of setup code. mcpwire wraps it in a clean, ergonomic API so you can focus on building, not configuring.
| | mcpwire | Official SDK |
|---|---|---|
| Lines to connect | 2 | 15-30+ |
| Auto transport detection | Yes | Manual |
| OpenAI/Anthropic tool format | Built-in | DIY |
| Server discovery | Built-in | None |
| Learning curve | 3 methods | 20+ classes |
## Install

```bash
npm install mcpwire
```

## Quick start

```typescript
import { connect } from "mcpwire";
const server = await connect("http://localhost:3000/mcp");
// List tools
const tools = await server.tools();
console.log(tools);
// Call a tool
const result = await server.callTool("get_weather", { city: "NYC" });
console.log(result.content[0].text);
// Read a resource
const docs = await server.readResource("file:///README.md");
console.log(docs[0].text);
// Clean up
await server.close();
```

## Local servers over stdio

```typescript
import { connectStdio } from "mcpwire";
const server = await connectStdio("npx", [
"-y",
"@modelcontextprotocol/server-filesystem",
"/home/user/documents",
]);
const files = await server.resources();
console.log(files);
```

## Use with OpenAI

```typescript
import { connect } from "mcpwire";
import OpenAI from "openai";
const server = await connect("http://localhost:3000/mcp");
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "What's the weather in NYC?" }],
tools: await server.toolsForOpenAI(),
});
// Execute tool calls
for (const call of response.choices[0].message.tool_calls || []) {
const result = await server.callTool(
call.function.name,
JSON.parse(call.function.arguments)
);
console.log(result);
}
```

## Use with Anthropic

```typescript
import { connect } from "mcpwire";
import Anthropic from "@anthropic-ai/sdk";
const server = await connect("http://localhost:3000/mcp");
const anthropic = new Anthropic();
const response = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "What's the weather in NYC?" }],
tools: await server.toolsForAnthropic(),
});

// Execute tool calls (Claude returns them as tool_use content blocks)
for (const block of response.content) {
  if (block.type === "tool_use") {
    const result = await server.callTool(block.name, block.input);
    console.log(result);
  }
}
```

## Multiple servers

```typescript
import { connect, MultiServer } from "mcpwire";
const weather = await connect("http://localhost:3001/mcp");
const search = await connect("http://localhost:3002/mcp");
const multi = new MultiServer({ weather, search });
// All tools from all servers, namespaced
const tools = await multi.tools();
// [{ name: "weather:get_forecast", ... }, { name: "search:query", ... }]
// Route tool calls automatically
const result = await multi.callTool("weather:get_forecast", { city: "NYC" });
console.log(result.text); // convenience accessor
// Works with OpenAI/Anthropic too
const openaiTools = await multi.toolsForOpenAI();
await multi.close(); // disconnects all servers
```

## Tool results

```typescript
const raw = await server.callTool("search", { query: "hello" });
// Wrap in ToolResponse for convenience methods
import { ToolResponse } from "mcpwire";
const result = new ToolResponse(raw);
result.text; // first text content part
result.texts; // all text parts as string[]
result.json(); // parse first text as JSON
result.isError; // check if tool errored
result.data; // first binary content as data URI
result.length; // number of content parts
```

## Server discovery

```typescript
import { discover } from "mcpwire";
// Find servers configured in Claude Desktop, Cursor, etc.
const servers = discover();
for (const info of servers) {
console.log(`${info.name}: ${info.url || info.command}`);
}
```

## API

### connect(url, options?)

Connect to an MCP server over HTTP. Auto-detects Streamable HTTP vs SSE transport.
### connectStdio(command, args)

Connect to a local MCP server process over stdio.
### discover()

Find MCP servers configured on your machine (Claude Desktop, Cursor, etc.).
### Server methods

| Method | Returns | Description |
|---|---|---|
| `tools()` | `Tool[]` | List available tools |
| `callTool(name, args)` | `ToolResult` | Call a tool |
| `resources()` | `Resource[]` | List available resources |
| `readResource(uri)` | `ResourceContent[]` | Read a resource |
| `prompts()` | `Prompt[]` | List available prompts |
| `getPrompt(name, args)` | `PromptMessage[]` | Get a prompt |
| `toolsForOpenAI()` | OpenAI tool format | Tools formatted for OpenAI API |
| `toolsForAnthropic()` | Anthropic tool format | Tools formatted for Anthropic API |
| `close()` | `void` | Disconnect |
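The prompt methods (`prompts()`, `getPrompt()`) have no example elsewhere in this README. Per the MCP specification, prompt messages arrive as `{ role, content: { type: "text", text } }` objects. The helper below is a hypothetical sketch (not part of mcpwire, and assuming text-only content) that flattens them into the plain chat-message shape most LLM clients accept:

```typescript
// MCP prompt message shape for text content (per the MCP specification)
interface PromptMessage {
  role: "user" | "assistant";
  content: { type: "text"; text: string };
}

// Hypothetical helper (not part of mcpwire): flatten prompt messages into
// plain { role, content } chat messages
function toChatMessages(
  messages: PromptMessage[]
): { role: string; content: string }[] {
  return messages.map((m) => ({ role: m.role, content: m.content.text }));
}

// Usage sketch (requires a live server; the "summarize" prompt name and its
// argument are illustrative):
// const raw = await server.getPrompt("summarize", { topic: "MCP" });
// const msgs = toChatMessages(raw);
```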
### MultiServer

Aggregates multiple servers. Tools are namespaced as `serverName:toolName`.
| Method | Returns | Description |
|---|---|---|
| `tools()` | `Tool[]` | All tools from all servers (namespaced) |
| `callTool(name, args)` | `ToolResponse` | Call a namespaced tool |
| `resources()` | `Resource[]` | All resources from all servers |
| `prompts()` | `Prompt[]` | All prompts from all servers |
| `toolsForOpenAI()` | OpenAI format | Aggregated tools for OpenAI |
| `toolsForAnthropic()` | Anthropic format | Aggregated tools for Anthropic |
| `close()` | `void` | Disconnect all servers |
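The namespacing rule can be pictured with a tiny helper. This is a hypothetical sketch of the split, not mcpwire's actual implementation:

```typescript
// Hypothetical sketch: how a namespaced name like "weather:get_forecast"
// routes to a server. Split on the first ":" only, so tool names that
// themselves contain ":" still resolve.
function splitNamespaced(name: string): { server: string; tool: string } {
  const i = name.indexOf(":");
  if (i === -1) throw new Error(`Tool name is not namespaced: ${name}`);
  return { server: name.slice(0, i), tool: name.slice(i + 1) };
}
```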
### ToolResponse

Wraps a raw `ToolResult` with convenience accessors.
| Property/Method | Returns | Description |
|---|---|---|
| `.text` | `string?` | First text content part |
| `.texts` | `string[]` | All text content parts |
| `.json()` | `T` | Parse first text as JSON |
| `.isError` | `boolean` | Whether the tool errored |
| `.data` | `string?` | First binary content as data URI |
| `.length` | `number` | Number of content parts |
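These accessors read directly off the MCP tool-result shape (`{ content: [{ type, text?, … }], isError? }`). As an illustration of what `.text` and `.texts` return, here is a hypothetical re-implementation of the two text accessors — a sketch, not mcpwire's actual code:

```typescript
// Minimal MCP tool-result shape (text parts only)
interface ContentPart { type: string; text?: string }
interface RawToolResult { content: ContentPart[]; isError?: boolean }

// What .text reads: the first text part, or undefined if there is none
function firstText(raw: RawToolResult): string | undefined {
  return raw.content.find((p) => p.type === "text")?.text;
}

// What .texts reads: every text part, in order
function allTexts(raw: RawToolResult): string[] {
  return raw.content
    .filter((p) => p.type === "text" && p.text !== undefined)
    .map((p) => p.text as string);
}
```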
## Transports

mcpwire automatically handles transport selection:
- Streamable HTTP (default) - Recommended for remote servers
- SSE (fallback) - Legacy HTTP+SSE transport
- stdio - For local process-spawned servers
```typescript
// Force a specific transport
const server = await connect(url, { transport: "sse" });
```

## Tool filtering

When you connect to 5-10 MCP servers, tool definitions alone can eat 100k+ tokens. Filter tools so only the relevant ones are sent to the LLM:

```typescript
import { connect, filterTools, searchTools, estimateToolTokens } from "mcpwire";
const server = await connect(url);
const all = await server.tools();
// Check how many tokens tool definitions would cost
console.log(`All tools: ~${estimateToolTokens(all)} tokens`);
// Filter by name pattern
const weatherOnly = filterTools(all, { include: [/weather/i] });
// Exclude admin tools
const safeTools = filterTools(all, { exclude: [/^admin_/] });
// Dynamically search based on user query
const relevant = searchTools(all, "weather forecast temperature");
```

## Error handling

MCP servers fail in weird ways. Handle failures gracefully:

```typescript
import { connect, withRetry, CircuitBreaker } from "mcpwire";
const server = await connect(url);
// Automatic retry with exponential backoff
const result = await withRetry(
() => server.callTool("flaky_api", { id: 123 }),
{ maxRetries: 3, initialDelay: 500 }
);
// Circuit breaker for repeated failures
const breaker = new CircuitBreaker({ failureThreshold: 5, resetTimeout: 30000 });
const safe = await breaker.call(() => server.callTool("api", args));
```

## Input sanitization

43% of MCP servers have command injection vulnerabilities. Sanitize inputs before they reach the server:

```typescript
import { sanitize, SanitizeError } from "mcpwire";
sanitize({ query: "weather in NYC" }); // ok
sanitize({ path: "../../etc/passwd" }); // throws SanitizeError
sanitize({ cmd: "hello; rm -rf /" }); // throws SanitizeError
// Custom patterns
sanitize(
{ sql: "SELECT * FROM users" },
{ blockedPatterns: [/SELECT\s+\*\s+FROM/i] }
);
```

## Portable config

Define servers once, use them in Claude Desktop, Cursor, VS Code, and Windsurf:

```json
{
"servers": {
"weather": { "url": "http://localhost:3001/mcp" },
"files": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
}
}
}
```

```typescript
import { loadConfig, toClaudeDesktop, toVSCode } from "mcpwire";
const config = loadConfig(); // finds mcp.json in cwd or parents
const claude = toClaudeDesktop(config); // Claude Desktop format
const vscode = toVSCode(config); // VS Code / Cursor format
```

## Contributing

Contributions welcome! Please read CONTRIBUTING.md first.

```bash
git clone https://github.com/ctonneslan/mcpwire.git
cd mcpwire
npm install
npm run build
```

## License

MIT
## CLI

Test MCP servers directly from your terminal:

```bash
# Install globally
npm install -g mcpwire
# List tools on a server
mcpwire http://localhost:3000/mcp tools
# Call a tool
mcpwire http://localhost:3000/mcp call get_weather '{"city":"NYC"}'
# List resources
mcpwire http://localhost:3000/mcp resources
# Find configured servers
mcpwire discover
```

## Changelog

- 0.6.0 - Tool filtering (`filterTools`, `searchTools`), retry with backoff (`withRetry`, `CircuitBreaker`), input sanitization (`sanitize`), portable config (`loadConfig`, `toClaudeDesktop`, `toVSCode`)
- 0.5.0 - Multi-server aggregation (`MultiServer`), tool result helpers (`ToolResponse`)
- 0.4.0 - Vercel AI SDK integration (`toolsForVercelAI`), better error messages
- 0.3.0 - CLI tool (`npx mcpwire`)
- 0.2.0 - Google Gemini support (`toolsForGemini`)
- 0.1.0 - Initial release