Model Context Protocol (MCP) Servers Cheat Sheet
Overview
Model Context Protocol (MCP) is a universal, open standard designed to connect AI systems with external data sources and tools. MCP servers act as the bridge between AI models (clients) and the external world, enabling AI assistants to access and invoke functions, retrieve information, and interact with various services in a standardized way.
What sets MCP apart is its ability to replace fragmented integrations with a single, unified protocol. Before MCP, each AI model provider had its own proprietary methods for tool use and function calling, creating a complex ecosystem in which developers had to implement a different integration approach for each model. MCP solves this by providing a standardized interface that works across different AI models and services, significantly simplifying the development and deployment of AI applications.
MCP servers have emerged as a critical component of the AI infrastructure stack, allowing organizations to build secure, scalable, and standardized connections between their AI models and the tools, data sources, and services they need to access. Whether deployed on cloud platforms or on-premises, MCP servers enable AI systems to interact with the external world safely and efficiently while maintaining control over those interactions.
Core Concepts
Model Context Protocol (MCP)
MCP is the standardized protocol that defines how AI models interact with external tools and services. It provides a universal interface for connecting AI systems with data sources and functions.
MCP Server
An MCP server implements the Model Context Protocol and acts as a bridge between AI models (clients) and external tools or services. It handles requests from AI models, executes the appropriate functions, and returns results.
MCP Client
An MCP client is any AI model or application that communicates with an MCP server to access external tools and services. Clients send requests to the server and receive responses according to the MCP specification.
Tools
Tools are functions or services that an MCP server makes available to AI models. These can include data retrieval functions, computational tools, API integrations, or any other capability that extends the AI model's functionality.
Context
Context refers to the information and capabilities available to an AI model through the MCP server. This includes the tools the model can access, the data it can retrieve, and the operations it can perform.
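The concepts above can be sketched as the simple JSON request/response shape used by the example servers later in this sheet (note: the official MCP specification is JSON-RPC based; the `tool`/`parameters` field names here are the simplified convention these examples use, not the spec's wire format):

```python
import json

# Illustrative request: which tool the client wants, and its arguments
request = {
    "tool": "get_weather",
    "parameters": {"location": "Paris"}
}

# Illustrative response: the tool's result wrapped in a "result" field
response = {
    "result": {"temperature": 25, "conditions": "Sunny", "location": "Paris"}
}

# Both sides exchange plain JSON over HTTP POST
wire = json.dumps(request)
decoded = json.loads(wire)
assert decoded["tool"] == "get_weather"
```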
Installation and Setup
AWS Serverless MCP Server
# Clone the AWS Serverless MCP Server repository
git clone https://github.com/aws-samples/aws-serverless-mcp-server.git
cd aws-serverless-mcp-server
# Install dependencies
npm install
# Deploy using AWS CDK
npm run cdk bootstrap
npm run cdk deploy
Basic Node.js MCP Server
// Install dependencies
// npm install express cors body-parser
const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
const app = express();
app.use(cors());
app.use(bodyParser.json());
// Define available tools
const tools = {
  get_weather: async (params) => {
    const { location } = params;
    // Implement weather retrieval logic
    return { temperature: 25, conditions: "Sunny", location };
  },
  search_database: async (params) => {
    const { query } = params;
    // Implement database search logic
    return { results: [`Result for: ${query}`] };
  }
};
// MCP server endpoint
app.post('/mcp', async (req, res) => {
  try {
    const { tool, parameters } = req.body;
    if (!tools[tool]) {
      return res.status(400).json({ error: `Tool '${tool}' not found` });
    }
    const result = await tools[tool](parameters);
    res.json({ result });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
// Start server
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`MCP Server running on port ${port}`);
});
Python MCP Server
# Install dependencies
# pip install fastapi uvicorn pydantic
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Dict, Any, Optional
import uvicorn

app = FastAPI()

# Define request model
class MCPRequest(BaseModel):
    tool: str
    parameters: Dict[str, Any]

# Define available tools
async def get_weather(location: str) -> Dict[str, Any]:
    # Implement weather retrieval logic
    return {"temperature": 25, "conditions": "Sunny", "location": location}

async def search_database(query: str) -> Dict[str, Any]:
    # Implement database search logic
    return {"results": [f"Result for: {query}"]}

# Tool registry
tools = {
    "get_weather": get_weather,
    "search_database": search_database
}

@app.post("/mcp")
async def mcp_endpoint(request: MCPRequest):
    if request.tool not in tools:
        raise HTTPException(status_code=400, detail=f"Tool '{request.tool}' not found")
    try:
        result = await tools[request.tool](**request.parameters)
        return {"result": result}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=3000)
Docker Deployment
# Dockerfile for Python MCP Server
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 3000
CMD ["uvicorn", "mcp_server:app", "--host", "0.0.0.0", "--port", "3000"]
# docker-compose.yml
version: '3'
services:
  mcp-server:
    build: .
    ports:
      - "3000:3000"
    environment:
      - LOG_LEVEL=info
      - AUTH_ENABLED=true
      - AUTH_API_KEY=your-secret-api-key
    volumes:
      - ./config:/app/config
Tool Implementation
Basic Tool Structure
// JavaScript tool implementation
const tools = {
  // Simple tool with direct implementation
  get_current_time: async (params) => {
    const { timezone = 'UTC' } = params;
    return {
      time: new Date().toLocaleString('en-US', { timeZone: timezone }),
      timezone
    };
  },
  // Tool that calls an external API
  fetch_stock_price: async (params) => {
    const { symbol } = params;
    try {
      const response = await fetch(`https://api.example.com/stocks/${symbol}`);
      const data = await response.json();
      return {
        symbol,
        price: data.price,
        currency: data.currency,
        timestamp: data.timestamp
      };
    } catch (error) {
      throw new Error(`Failed to fetch stock price: ${error.message}`);
    }
  }
};
# Python tool implementation
async def get_current_time(timezone: str = 'UTC') -> Dict[str, Any]:
    from datetime import datetime
    import pytz
    tz = pytz.timezone(timezone)
    current_time = datetime.now(tz)
    return {
        "time": current_time.strftime("%Y-%m-%d %H:%M:%S"),
        "timezone": timezone
    }

async def fetch_stock_price(symbol: str) -> Dict[str, Any]:
    import aiohttp
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://api.example.com/stocks/{symbol}") as response:
            if response.status != 200:
                raise Exception(f"API returned status code {response.status}")
            data = await response.json()
            return {
                "symbol": symbol,
                "price": data["price"],
                "currency": data["currency"],
                "timestamp": data["timestamp"]
            }
Tool Manifest
{
  "tools": [
    {
      "name": "get_current_time",
      "description": "Get the current time in a specified timezone",
      "parameters": {
        "type": "object",
        "properties": {
          "timezone": {
            "type": "string",
            "description": "Timezone identifier (e.g., 'UTC', 'America/New_York')"
          }
        },
        "required": []
      }
    },
    {
      "name": "fetch_stock_price",
      "description": "Get the current stock price for a given symbol",
      "parameters": {
        "type": "object",
        "properties": {
          "symbol": {
            "type": "string",
            "description": "Stock symbol (e.g., 'AAPL', 'MSFT')"
          }
        },
        "required": ["symbol"]
      }
    }
  ]
}
Advanced Tool with Authentication
// Tool that requires authentication
const authenticatedTools = {
  get_user_data: async (params, context) => {
    const { userId } = params;
    const { authToken } = context;
    if (!authToken) {
      throw new Error('Authentication required');
    }
    try {
      const response = await fetch(`https://api.example.com/users/${userId}`, {
        headers: {
          'Authorization': `Bearer ${authToken}`
        }
      });
      if (!response.ok) {
        throw new Error(`API returned status ${response.status}`);
      }
      return await response.json();
    } catch (error) {
      throw new Error(`Failed to fetch user data: ${error.message}`);
    }
  }
};
Authentication and Security
API Key Authentication
// Express middleware for API key authentication
function apiKeyAuth(req, res, next) {
  const apiKey = req.headers['x-api-key'];
  if (!apiKey || apiKey !== process.env.MCP_API_KEY) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
}
// Apply middleware to MCP endpoint
app.post('/mcp', apiKeyAuth, async (req, res) => {
  // MCP request handling
});
# FastAPI API key authentication
from fastapi import Depends, HTTPException, Security
from fastapi.security.api_key import APIKeyHeader
import os

API_KEY = os.getenv("MCP_API_KEY")
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

async def get_api_key(api_key: str = Security(api_key_header)):
    if not api_key or api_key != API_KEY:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return api_key

@app.post("/mcp")
async def mcp_endpoint(request: MCPRequest, api_key: str = Depends(get_api_key)):
    # MCP request handling
    ...
JWT Authentication
// JWT authentication middleware
const jwt = require('jsonwebtoken');

function jwtAuth(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  const token = authHeader.split(' ')[1];
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(401).json({ error: 'Invalid token' });
  }
}
// Apply middleware to MCP endpoint
app.post('/mcp', jwtAuth, async (req, res) => {
  // MCP request handling with access to req.user
});
# FastAPI JWT authentication
import os
from fastapi import Depends, HTTPException
from fastapi.security import OAuth2PasswordBearer
import jwt
from jwt.exceptions import PyJWTError

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
JWT_SECRET = os.getenv("JWT_SECRET")

async def get_current_user(token: str = Depends(oauth2_scheme)):
    try:
        payload = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
        username = payload.get("sub")
        if username is None:
            raise HTTPException(status_code=401, detail="Invalid authentication credentials")
    except PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid authentication credentials")
    # Get user from database or return user info from the token
    return {"username": username}

@app.post("/mcp")
async def mcp_endpoint(request: MCPRequest, current_user: dict = Depends(get_current_user)):
    # MCP request handling with access to current_user
    ...
Rate Limiting
// Rate limiting middleware
const rateLimit = require('express-rate-limit');
const mcpLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests from this IP, please try again later'
});
// Apply rate limiting to MCP endpoint
app.post('/mcp', mcpLimiter, async (req, res) => {
  // MCP request handling
});
# FastAPI rate limiting with slowapi
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/mcp")
@limiter.limit("100/minute")
async def mcp_endpoint(request: Request, mcp_request: MCPRequest):
    # MCP request handling (slowapi requires a Request parameter on the endpoint)
    ...
Advanced MCP Server Features
Tool Discovery
// Endpoint for tool discovery
app.get('/mcp/tools', apiKeyAuth, (req, res) => {
  const toolManifest = {
    tools: Object.keys(tools).map(toolName => {
      return {
        name: toolName,
        description: toolDescriptions[toolName] || '',
        parameters: toolParameters[toolName] || { type: 'object', properties: {} }
      };
    })
  };
  res.json(toolManifest);
});
# FastAPI tool discovery endpoint
@app.get("/mcp/tools")
async def get_tools(api_key: str = Depends(get_api_key)):
    tool_manifest = {
        "tools": [
            {
                "name": name,
                "description": tool_descriptions.get(name, ""),
                "parameters": tool_parameters.get(name, {"type": "object", "properties": {}})
            }
            for name in tools.keys()
        ]
    }
    return tool_manifest
Streaming Responses
// Express streaming response
app.post('/mcp/stream', apiKeyAuth, (req, res) => {
  const { tool, parameters } = req.body;
  if (!streamingTools[tool]) {
    return res.status(400).json({ error: `Streaming tool '${tool}' not found` });
  }
  // Set headers for streaming
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  // Create streaming tool instance
  const stream = streamingTools[tool](parameters);
  // Handle data events
  stream.on('data', (data) => {
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  });
  // Handle end event
  stream.on('end', () => {
    res.write('data: [DONE]\n\n');
    res.end();
  });
  // Handle errors
  stream.on('error', (error) => {
    res.write(`data: ${JSON.stringify({ error: error.message })}\n\n`);
    res.end();
  });
  // Handle client disconnect
  req.on('close', () => {
    stream.destroy();
  });
});
# FastAPI streaming response
from fastapi import Response
from fastapi.responses import StreamingResponse
import json
import asyncio
@app.post("/mcp/stream")
async def stream_mcp(request: MCPRequest, api_key: str = Depends(get_api_key)):
    if request.tool not in streaming_tools:
        raise HTTPException(status_code=400, detail=f"Streaming tool '{request.tool}' not found")

    async def event_generator():
        try:
            async for data in streaming_tools[request.tool](**request.parameters):
                yield f"data: {json.dumps(data)}\n\n"
                await asyncio.sleep(0.01)  # Small delay to prevent CPU hogging
            yield "data: [DONE]\n\n"
        except Exception as e:
            yield f"data: {json.dumps({'error': str(e)})}\n\n"

    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream"
    )
Logging and Monitoring
// Winston logger setup
const winston = require('winston');
const { v4: uuidv4 } = require('uuid');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'mcp-server.log' })
  ]
});
// Logging middleware
function loggingMiddleware(req, res, next) {
  const start = Date.now();
  const requestId = req.headers['x-request-id'] || uuidv4();
  // Log request
  logger.info({
    type: 'request',
    method: req.method,
    path: req.path,
    tool: req.body.tool,
    requestId
  });
  // Capture response
  const originalSend = res.send;
  res.send = function(body) {
    const duration = Date.now() - start;
    // Log response
    logger.info({
      type: 'response',
      method: req.method,
      path: req.path,
      statusCode: res.statusCode,
      duration,
      requestId
    });
    return originalSend.call(this, body);
  };
  next();
}
// Apply logging middleware
app.use(loggingMiddleware);
# FastAPI logging middleware
import logging
import time
import uuid
from fastapi import Request
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("mcp-server.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger("mcp-server")

@app.middleware("http")
async def logging_middleware(request: Request, call_next):
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    start_time = time.time()
    # Log request
    logger.info({
        "type": "request",
        "method": request.method,
        "path": request.url.path,
        "request_id": request_id
    })
    # Process request
    response = await call_next(request)
    # Log response
    duration = time.time() - start_time
    logger.info({
        "type": "response",
        "method": request.method,
        "path": request.url.path,
        "status_code": response.status_code,
        "duration": duration,
        "request_id": request_id
    })
    return response
Error Handling
// Error handling middleware
function errorHandler(err, req, res, next) {
  const requestId = req.headers['x-request-id'] || uuidv4();
  logger.error({
    type: 'error',
    error: err.message,
    stack: err.stack,
    path: req.path,
    requestId
  });
  res.status(500).json({
    error: 'Internal server error',
    requestId
  });
}
// Apply error handling middleware
app.use(errorHandler);
# FastAPI exception handlers
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from starlette.exceptions import HTTPException as StarletteHTTPException

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
    logger.error({
        "type": "validation_error",
        "path": request.url.path,
        "errors": exc.errors(),
        "request_id": request.headers.get("X-Request-ID", str(uuid.uuid4()))
    })
    return JSONResponse(
        status_code=422,
        content={"detail": exc.errors(), "type": "validation_error"}
    )

@app.exception_handler(StarletteHTTPException)
async def http_exception_handler(request: Request, exc: StarletteHTTPException):
    logger.error({
        "type": "http_error",
        "path": request.url.path,
        "status_code": exc.status_code,
        "detail": exc.detail,
        "request_id": request.headers.get("X-Request-ID", str(uuid.uuid4()))
    })
    return JSONResponse(
        status_code=exc.status_code,
        content={"detail": exc.detail, "type": "http_error"}
    )

@app.exception_handler(Exception)
async def general_exception_handler(request: Request, exc: Exception):
    logger.error({
        "type": "server_error",
        "path": request.url.path,
        "error": str(exc),
        "request_id": request.headers.get("X-Request-ID", str(uuid.uuid4()))
    })
    return JSONResponse(
        status_code=500,
        content={"detail": "Internal server error", "type": "server_error"}
    )
Cloud Deployment
AWS Lambda Deployment
# serverless.yml for AWS Lambda deployment
service: mcp-server
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    MCP_API_KEY: ${env:MCP_API_KEY}
    LOG_LEVEL: info
functions:
  mcp:
    handler: handler.mcp
    events:
      - http:
          path: mcp
          method: post
          cors: true
  toolDiscovery:
    handler: handler.toolDiscovery
    events:
      - http:
          path: mcp/tools
          method: get
          cors: true
// handler.js for AWS Lambda
const serverless = require('serverless-http');
const express = require('express');
const app = express();
// ... MCP server implementation ...
// Export Lambda handlers
module.exports.mcp = serverless(app);
module.exports.toolDiscovery = serverless(app);
Azure Functions Deployment
// function.json for Azure Functions
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["post"],
      "route": "mcp"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
// index.js for Azure Functions
module.exports = async function (context, req) {
  // Validate request
  if (!req.body || !req.body.tool) {
    context.res = {
      status: 400,
      body: { error: "Missing required fields" }
    };
    return;
  }
  const { tool, parameters } = req.body;
  // Validate API key
  const apiKey = req.headers['x-api-key'];
  if (!apiKey || apiKey !== process.env.MCP_API_KEY) {
    context.res = {
      status: 401,
      body: { error: "Unauthorized" }
    };
    return;
  }
  try {
    // Execute tool
    if (!tools[tool]) {
      context.res = {
        status: 400,
        body: { error: `Tool '${tool}' not found` }
      };
      return;
    }
    const result = await tools[tool](parameters);
    context.res = {
      status: 200,
      body: { result }
    };
  } catch (error) {
    context.log.error(`Error executing tool ${tool}: ${error.message}`);
    context.res = {
      status: 500,
      body: { error: error.message }
    };
  }
};
Kubernetes Deployment
# kubernetes-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  labels:
    app: mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
        - name: mcp-server
          image: your-registry/mcp-server:latest
          ports:
            - containerPort: 3000
          env:
            - name: MCP_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mcp-secrets
                  key: api-key
            - name: LOG_LEVEL
              value: "info"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: mcp-server-service
spec:
  selector:
    app: mcp-server
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: v1
kind: Secret
metadata:
  name: mcp-secrets
type: Opaque
data:
  api-key: <base64-encoded-api-key>
Integration with AI Models
OpenAI Integration
// Client-side integration with OpenAI
const { OpenAI } = require('openai');

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

async function callOpenAIWithMCP(prompt, mcpServerUrl, mcpApiKey) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'mcp_server',
          description: 'Call the MCP server to access external tools and data',
          parameters: {
            type: 'object',
            properties: {
              tool: {
                type: 'string',
                description: 'The name of the tool to call'
              },
              parameters: {
                type: 'object',
                description: 'Parameters for the tool'
              }
            },
            required: ['tool']
          }
        }
      }
    ],
    tool_choice: 'auto'
  });
  // Check if the model wants to call a tool
  const message = response.choices[0].message;
  if (message.tool_calls && message.tool_calls.length > 0) {
    const toolCall = message.tool_calls[0];
    if (toolCall.function.name === 'mcp_server') {
      const { tool, parameters } = JSON.parse(toolCall.function.arguments);
      // Call MCP server
      const mcpResponse = await fetch(mcpServerUrl, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-API-Key': mcpApiKey
        },
        body: JSON.stringify({ tool, parameters })
      });
      const mcpResult = await mcpResponse.json();
      // Continue the conversation with the tool result
      const finalResponse = await openai.chat.completions.create({
        model: 'gpt-4',
        messages: [
          { role: 'user', content: prompt },
          message,
          {
            role: 'tool',
            tool_call_id: toolCall.id,
            content: JSON.stringify(mcpResult)
          }
        ]
      });
      return finalResponse.choices[0].message.content;
    }
  }
  return message.content;
}
Anthropic Integration
// Client-side integration with Anthropic
const { Anthropic } = require('@anthropic-ai/sdk');

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

async function callAnthropicWithMCP(prompt, mcpServerUrl, mcpApiKey) {
  const response = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
    tools: [
      {
        name: 'mcp_server',
        description: 'Call the MCP server to access external tools and data',
        input_schema: {
          type: 'object',
          properties: {
            tool: {
              type: 'string',
              description: 'The name of the tool to call'
            },
            parameters: {
              type: 'object',
              description: 'Parameters for the tool'
            }
          },
          required: ['tool']
        }
      }
    ]
  });
  // Check if the model wants to call a tool
  const toolUse = response.content.find(block => block.type === 'tool_use');
  if (toolUse && toolUse.name === 'mcp_server') {
    const { tool, parameters } = toolUse.input;
    // Call MCP server
    const mcpResponse = await fetch(mcpServerUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': mcpApiKey
      },
      body: JSON.stringify({ tool, parameters })
    });
    const mcpResult = await mcpResponse.json();
    // Continue the conversation with the tool result
    // (Anthropic expects tool results as a tool_result block in a user message)
    const finalResponse = await anthropic.messages.create({
      model: 'claude-3-opus-20240229',
      max_tokens: 1024,
      messages: [
        { role: 'user', content: prompt },
        { role: 'assistant', content: response.content },
        {
          role: 'user',
          content: [
            {
              type: 'tool_result',
              tool_use_id: toolUse.id,
              content: JSON.stringify(mcpResult)
            }
          ]
        }
      ]
    });
    const textBlock = finalResponse.content.find(block => block.type === 'text');
    return textBlock ? textBlock.text : '';
  }
  const textBlock = response.content.find(block => block.type === 'text');
  return textBlock ? textBlock.text : '';
}
Best Practices
Security Best Practices
- Authentication: Always implement proper authentication for MCP servers
- Authorization: Implement fine-grained access control for tools
- Input Validation: Validate all input parameters to prevent injection attacks
- Rate Limiting: Implement rate limiting to prevent abuse
- Secrets Management: Use secure methods for storing and accessing API keys and secrets
- HTTPS: Always use HTTPS for production deployments
- Minimal Permissions: Follow the principle of least privilege for tool implementations
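The input-validation practice above can be sketched as a small schema check run before a tool executes. This is a minimal illustration; the `TYPE_MAP` helper and `validate_parameters` function are assumptions for this sketch, not part of any MCP library.

```python
# Map JSON Schema type names to Python types (illustrative subset)
TYPE_MAP = {"string": str, "number": (int, float), "object": dict, "boolean": bool}

def validate_parameters(parameters, schema):
    """Reject missing, unexpected, or wrongly typed parameters early."""
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in parameters:
            raise ValueError(f"Missing required parameter: {name}")
    for name, value in parameters.items():
        if name not in props:
            raise ValueError(f"Unexpected parameter: {name}")
        expected = TYPE_MAP.get(props[name].get("type"))
        if expected and not isinstance(value, expected):
            raise ValueError(f"Parameter '{name}' has wrong type")

# Example: the fetch_stock_price schema from the tool manifest
schema = {"properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}
validate_parameters({"symbol": "AAPL"}, schema)  # passes silently
```

A production server would more likely use a full JSON Schema validator, but the shape of the check is the same.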
Performance Optimization
- Caching: Implement caching for frequently used tool results
- Connection Pooling: Use connection pooling for database and API connections
- Asynchronous Processing: Use async/await for I/O-bound operations
- Horizontal Scaling: Design for horizontal scaling to handle increased load
- Timeout Handling: Implement proper timeout handling for external API calls
- Resource Limits: Set appropriate CPU and memory limits for containers
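The caching practice above can be sketched as a small in-process TTL cache keyed by tool name and parameters. This is a minimal illustration; `cached_call` is a hypothetical helper, and a production server would more likely use a shared cache such as Redis.

```python
import time

_cache = {}  # (tool_name, params) -> (stored_at, result)

def cached_call(tool_name, parameters, fn, ttl_seconds=60):
    """Return a cached tool result if it is fresh, else execute the tool."""
    key = (tool_name, tuple(sorted(parameters.items())))
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < ttl_seconds:
        return entry[1]            # cache hit: skip the expensive call
    result = fn(**parameters)      # cache miss: execute the tool
    _cache[key] = (time.time(), result)
    return result

calls = []
def slow_tool(location):
    calls.append(location)         # track real executions
    return {"temperature": 25, "location": location}

cached_call("get_weather", {"location": "Paris"}, slow_tool)
cached_call("get_weather", {"location": "Paris"}, slow_tool)
assert len(calls) == 1             # second call served from cache
```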
Reliability
- Error Handling: Implement comprehensive error handling and reporting
- Retries: Add retry logic for transient failures
- Circuit Breakers: Implement circuit breakers for external dependencies
- Health Checks: Add health check endpoints for monitoring
- Logging: Implement structured logging for troubleshooting
- Monitoring: Set up monitoring and alerting for key metrics
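The retry practice above can be sketched as a helper that retries a tool call with exponential backoff. This is a minimal illustration; `call_with_retries` is a hypothetical helper, not a library function, and real code would retry only on transient error types.

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Retry fn on failure, doubling the delay between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise              # attempts exhausted: surface the error
            time.sleep(base_delay * (2 ** attempt))

state = {"count": 0}
def flaky_tool():
    state["count"] += 1
    if state["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

assert call_with_retries(flaky_tool) == "ok"
assert state["count"] == 3         # failed twice, then succeeded
```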
Development Workflow
- Version Control: Use version control for MCP server code
- CI/CD: Implement continuous integration and deployment pipelines
- Testing: Write unit and integration tests for tools
- Documentation: Document all tools and their parameters
- Code Reviews: Conduct thorough code reviews for security and quality
- Semantic Versioning: Use semantic versioning for API changes
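The testing practice above can be sketched as a small test for an async tool. This is illustrative; a real test would import the tool from the server module rather than define a stand-in inline.

```python
import asyncio

# Stand-in for the get_weather tool defined earlier in this sheet
async def get_weather(location):
    return {"temperature": 25, "conditions": "Sunny", "location": location}

def test_get_weather():
    # Run the async tool to completion and assert on its output shape
    result = asyncio.run(get_weather("Paris"))
    assert result["location"] == "Paris"
    assert "temperature" in result and "conditions" in result

test_get_weather()
```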
Troubleshooting
Common Issues
Authentication Failures
- Cause: Incorrect API keys, expired tokens, or misconfigured authentication
- Solution: Verify API keys, check token expiration, and ensure proper authentication configuration
Tool Execution Errors
- Cause: Invalid parameters, external API failures, or bugs in tool implementations
- Solution: Validate parameters, add error handling for external APIs, and test tools thoroughly
Performance Issues
- Cause: Inefficient tool implementations, missing caching, or resource constraints
- Solution: Optimize tool code, implement caching, and allocate appropriate resources
Integration Problems
- Cause: Incorrect tool schemas, mismatched parameter types, or protocol misunderstandings
- Solution: Verify tool schemas, ensure parameter types match, and follow the MCP specification
This comprehensive MCP Servers cheat sheet provides everything needed to build, deploy, and integrate Model Context Protocol servers. From basic setup to advanced deployment patterns, use these examples and best practices to create powerful, standardized connections between AI models and external tools and services.