Claude Code Router
"Clase de la hoja"
Complete Claude Code Router commands and workflows for routing Claude Code requests to different AI models, enabling multi-model development and cost optimization.
Overview
Claude Code Router is an open-source tool that acts as a local proxy for Claude Code requests, allowing you to intercept, modify, and route requests to different AI models, including GPT-4, Gemini, local models, and other OpenAI-compatible APIs. This lets developers keep Claude Code's interface while using different models for specific tasks, cost optimization, or access to free API credits.
**Usage Note**: Claude Code Router modifies API requests and requires proper configuration of API keys and endpoints. Always handle credentials securely and comply with each API's terms of service.
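Once the router is running (see Installation below), Claude Code talks to the local proxy instead of the Anthropic API directly. As a rough sketch of what a request through the proxy looks like, the snippet below posts an Anthropic-style messages payload to the local endpoint used later in this sheet; the exact response shape depends on which upstream model the router selects.
// Minimal sketch: send an Anthropic-style request through the local router proxy.
// Node 18+ (global fetch), run as an ES module for top-level await.
// The port and /v1/messages path follow the configuration examples below.
const response = await fetch("http://localhost:8080/v1/messages", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-api-key": process.env.ANTHROPIC_API_KEY ?? "router-proxy-key",
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-3-sonnet",   // the router may rewrite this according to its routing rules
    max_tokens: 256,
    messages: [{ role: "user", content: "Explain what this repository does." }],
  }),
});
console.log(await response.json());   // response shape depends on the upstream provider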
Installation
Quick Setup
# Install via npm
npm install -g claude-code-router
# Install via pip
pip install claude-code-router
# Clone from GitHub
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
npm install
Docker Installation
# Pull Docker image
docker pull claudecode/router:latest
# Run with Docker
docker run -p 8080:8080 -e OPENAI_API_KEY=your-key claudecode/router
# Docker Compose
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
Build from Source
# Clone repository
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
# Install dependencies
npm install
# Build project
npm run build
# Start router
npm start
Configuration
Basic Configuration
{
"router": {
"port": 8080,
"host": "localhost",
"logLevel": "info",
"enableCors": true
},
"models": {
"claude-3-sonnet": {
"provider": "anthropic",
"apiKey": "${ANTHROPIC_API_KEY}",
"endpoint": "https://api.anthropic.com/v1/messages"
},
"gpt-4": {
"provider": "openai",
"apiKey": "${OPENAI_API_KEY}",
"endpoint": "https://api.openai.com/v1/chat/completions"
},
"gemini-pro": {
"provider": "google",
"apiKey": "${GOOGLE_API_KEY}",
"endpoint": "https://generativelanguage.googleapis.com/v1/models"
}
},
"routing": {
"defaultModel": "claude-3-sonnet",
"fallbackModel": "gpt-4",
"loadBalancing": false,
"costOptimization": true
}
}
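The ${ANTHROPIC_API_KEY}-style placeholders are resolved from environment variables when the configuration is loaded. The router's own loader is not shown here; a minimal sketch of that kind of interpolation, for reference:
// Illustrative only: expand ${VAR} placeholders in a config object from process.env.
// This is not the router's actual loader, just the general idea.
import { readFileSync } from "node:fs";

function resolveEnv(value) {
  if (typeof value === "string") {
    return value.replace(/\$\{(\w+)\}/g, (_, name) => process.env[name] ?? "");
  }
  if (Array.isArray(value)) return value.map(resolveEnv);
  if (value && typeof value === "object") {
    return Object.fromEntries(Object.entries(value).map(([k, v]) => [k, resolveEnv(v)]));
  }
  return value;
}

const config = resolveEnv(JSON.parse(readFileSync("config.json", "utf8")));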
Advanced Routing Rules
{
"routingRules": [
{
"name": "code_generation",
"condition": {
"messageContains": ["write code", "implement", "create function"],
"fileTypes": [".py", ".js", ".ts", ".java"]
},
"targetModel": "gpt-4",
"priority": 1
},
{
"name": "documentation",
"condition": {
"messageContains": ["document", "explain", "comment"],
"taskType": "documentation"
},
"targetModel": "claude-3-sonnet",
"priority": 2
},
{
"name": "cost_optimization",
"condition": {
"tokenCount": { "max": 1000 },
"complexity": "low"
},
"targetModel": "gpt-3.5-turbo",
"priority": 3
}
]
}
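The rules are declarative: the first matching rule by priority wins, otherwise the default model from the routing section applies. A simplified sketch of that evaluation (not the router's actual code; tokenCount, complexity, and taskType checks are omitted for brevity):
// Simplified rule matcher mirroring the config above: lower priority number wins.
// The request shape ({ messages, files }) is an assumption for illustration.
function selectModel(request, routingRules, routing) {
  const text = request.messages.map(m => m.content).join(" ").toLowerCase();
  const matches = routingRules.filter(rule => {
    const c = rule.condition;
    const textHit = !c.messageContains ||
      c.messageContains.some(p => text.includes(p.toLowerCase()));
    const fileHit = !c.fileTypes ||
      (request.files ?? []).some(f => c.fileTypes.some(ext => f.endsWith(ext)));
    return textHit && fileHit;
  });
  matches.sort((a, b) => a.priority - b.priority);
  return matches[0]?.targetModel ?? routing.defaultModel;
}

// selectModel({ messages: [{ content: "implement a parser" }], files: ["lexer.ts"] },
//             config.routingRules, config.routing)   // → "gpt-4"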
Basic Commands
Router Management
| Command | Description |
|---|---|
| router start | Start the Claude Code Router |
| router stop | Stop the router service |
| router restart | Restart the router |
| router status | Check router status |
| router config | View current configuration |
| router logs | View router logs |
| router health | Health check endpoint |
Model Management
| Command | Description |
|---|---|
| router models list | List available models |
| router models add | Add new model configuration |
| router models remove | Remove model configuration |
| router models test | Test model connectivity |
| router models switch | Switch default model |
| router models status | Check model status |
Claude Code Integration
Configure Claude Code
# Set Claude Code to use router
export ANTHROPIC_API_BASE=http://localhost:8080/v1
export ANTHROPIC_API_KEY=router-proxy-key
# Alternative configuration
claude-code config set api.base_url http://localhost:8080/v1
claude-code config set api.key router-proxy-key
# Verify configuration
claude-code config show
Router Proxy Setup
# Start router with specific configuration
claude-code-router --config config.json --port 8080
# Start with environment variables
ROUTER_PORT=8080 ROUTER_CONFIG=config.json claude-code-router
# Start with custom models
claude-code-router --models gpt-4,claude-3-sonnet,gemini-pro
Model Providers
OpenAI Integration
{
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseUrl": "https://api.openai.com/v1",
"models": {
"gpt-4": {
"maxTokens": 8192,
"temperature": 0.7,
"costPerToken": 0.00003
},
"gpt-3.5-turbo": {
"maxTokens": 4096,
"temperature": 0.7,
"costPerToken": 0.000002
}
}
}
}
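The costPerToken figures make spend easy to sanity-check: a 1,000-token GPT-4 exchange at 0.00003 USD/token is roughly $0.03, while the same volume on gpt-3.5-turbo is roughly $0.002. A small helper for that arithmetic (the usage field names follow OpenAI-style responses and are an assumption):
// Rough cost estimate from a provider response's token usage.
// prompt_tokens / completion_tokens follow OpenAI-style usage objects;
// adjust the field names if your provider reports usage differently.
function estimateCost(usage, costPerToken) {
  const totalTokens = (usage.prompt_tokens ?? 0) + (usage.completion_tokens ?? 0);
  return totalTokens * costPerToken;
}

estimateCost({ prompt_tokens: 700, completion_tokens: 300 }, 0.00003);   // ≈ 0.03 USD (gpt-4)
estimateCost({ prompt_tokens: 700, completion_tokens: 300 }, 0.000002);  // ≈ 0.002 USD (gpt-3.5-turbo)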
Google Gemini Integration
{
"google": {
"apiKey": "${GOOGLE_API_KEY}",
"baseUrl": "https://generativelanguage.googleapis.com/v1",
"models": {
"gemini-pro": {
"maxTokens": 32768,
"temperature": 0.7,
"costPerToken": 0.000001
},
"gemini-pro-vision": {
"maxTokens": 16384,
"temperature": 0.7,
"supportsImages": true
}
}
}
}
Local Model Integration
{
"local": {
"baseUrl": "http://localhost:11434/v1",
"models": {
"llama2": {
"endpoint": "/api/generate",
"maxTokens": 4096,
"temperature": 0.7,
"costPerToken": 0
},
"codellama": {
"endpoint": "/api/generate",
"maxTokens": 8192,
"temperature": 0.3,
"specialization": "code"
}
}
}
}
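The base URL above matches a local Ollama server's default port (11434). Before routing to it, confirm the models are actually pulled; this check uses Ollama's standard model-listing endpoint and is independent of the router:
// List locally available Ollama models to confirm llama2 / codellama are pulled.
// Node 18+ (global fetch), run as an ES module for top-level await.
const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
console.log(models.map(m => m.name));   // e.g. ["llama2:latest", "codellama:latest"]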
OpenRouter Integration
{
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseUrl": "https://openrouter.ai/api/v1",
"models": {
"anthropic/claude-3-sonnet": {
"maxTokens": 200000,
"costPerToken": 0.000015
},
"openai/gpt-4": {
"maxTokens": 8192,
"costPerToken": 0.00003
},
"google/gemini-pro": {
"maxTokens": 32768,
"costPerToken": 0.000001
}
}
}
}
Routing Strategies
Load Balancing
{
"loadBalancing": {
"enabled": true,
"strategy": "round_robin",
"models": ["gpt-4", "claude-3-sonnet", "gemini-pro"],
"weights": {
"gpt-4": 0.4,
"claude-3-sonnet": 0.4,
"gemini-pro": 0.2
},
"healthCheck": {
"enabled": true,
"interval": 60,
"timeout": 10
}
}
}
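With round_robin plus weights, traffic is split roughly 40/40/20 across the three models. A minimal sketch of weighted selection over the healthy models (illustrative, not the router's implementation):
// Weighted random pick over healthy models, using the weights from the config above.
function pickModel(weights, healthy = Object.keys(weights)) {
  const candidates = healthy.filter(m => (weights[m] ?? 0) > 0);
  const total = candidates.reduce((sum, m) => sum + weights[m], 0);
  let r = Math.random() * total;
  for (const model of candidates) {
    r -= weights[model];
    if (r <= 0) return model;
  }
  return candidates[candidates.length - 1];   // guard against floating-point drift
}

pickModel({ "gpt-4": 0.4, "claude-3-sonnet": 0.4, "gemini-pro": 0.2 });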
Cost Optimization
{
"costOptimization": {
"enabled": true,
"budget": {
"daily": 10.00,
"monthly": 300.00,
"currency": "USD"
},
"rules": [
{
"condition": "tokenCount < 500",
"model": "gpt-3.5-turbo"
},
{
"condition": "tokenCount >= 500 && tokenCount < 2000",
"model": "claude-3-haiku"
},
{
"condition": "tokenCount >= 2000",
"model": "claude-3-sonnet"
}
]
}
}
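These rules send small prompts to a cheaper model and escalate as the token count grows. A simplified version of that decision with a daily-budget guard (how the router tracks spend internally is not specified here, so spentToday is assumed to come from its usage stats):
// Pick a model by estimated prompt size, mirroring the cost-optimization rules above.
function pickByCost(tokenCount, spentToday, dailyBudget = 10.0) {
  if (spentToday >= dailyBudget) {
    throw new Error("Daily budget exhausted; refusing to route request");
  }
  if (tokenCount < 500) return "gpt-3.5-turbo";
  if (tokenCount < 2000) return "claude-3-haiku";
  return "claude-3-sonnet";
}

pickByCost(1200, 3.75);   // → "claude-3-haiku"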
Intelligent Routing
{
"intelligentRouting": {
"enabled": true,
"rules": [
{
"name": "code_tasks",
"patterns": ["implement", "write code", "debug", "refactor"],
"model": "gpt-4",
"confidence": 0.8
},
{
"name": "analysis_tasks",
"patterns": ["analyze", "explain", "review", "understand"],
"model": "claude-3-sonnet",
"confidence": 0.9
},
{
"name": "creative_tasks",
"patterns": ["create", "design", "brainstorm", "generate"],
"model": "gemini-pro",
"confidence": 0.7
}
]
}
}
Monitoring and Analytics
Usage Tracking
# View usage statistics
curl http://localhost:8080/api/stats
# Model usage breakdown
curl http://localhost:8080/api/stats/models
# Cost analysis
curl http://localhost:8080/api/stats/costs
# Performance metrics
curl http://localhost:8080/api/stats/performance
Logging Configuration
{
"logging": {
"level": "info",
"format": "json",
"outputs": ["console", "file"],
"file": {
"path": "./logs/router.log",
"maxSize": "100MB",
"maxFiles": 10,
"compress": true
},
"metrics": {
"enabled": true,
"interval": 60,
"endpoint": "/metrics"
}
}
}
Health Monitoring
# Health check endpoint
curl http://localhost:8080/health
# Detailed health status
curl http://localhost:8080/health/detailed
# Model availability
curl http://localhost:8080/health/models
# Performance metrics
curl http://localhost:8080/metrics
Advanced Features
Request Transformation
// Custom request transformer
const requestTransformer = {
transform: (request, targetModel) => {
// Modify request based on target model
if (targetModel === 'gpt-4') {
request.temperature = 0.3; // Lower temperature for code
} else if (targetModel === 'claude-3-sonnet') {
request.max_tokens = 4096; // Adjust token limit
}
return request;
}
};
// Response transformer
const responseTransformer = {
transform: (response, sourceModel) => {
// Standardize response format
return {
content: response.content || response.message,
model: sourceModel,
usage: response.usage,
timestamp: new Date().toISOString()
};
}
};
Plugin System
// Router plugin interface
class RouterPlugin {
constructor(config) {
this.config = config;
}
onRequest(request, context) {
// Pre-process request
return request;
}
onResponse(response, context) {
// Post-process response
return response;
}
onError(error, context) {
// Handle errors
console.error('Router error:', error);
}
}
// Load plugins
const plugins = [
new CostTrackingPlugin(),
new CachePlugin(),
new RateLimitPlugin()
];
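As a concrete example of the interface, a cost-tracking plugin along the lines of the CostTrackingPlugin referenced above could look like this (a sketch against the interface shown, not the plugin actually shipped with the router; the context fields are assumptions):
// Sketch: accumulate token usage per model via onResponse.
class CostTrackingPlugin extends RouterPlugin {
  constructor(config) {
    super(config);
    this.totals = {};   // model -> total tokens observed
  }
  onResponse(response, context) {
    const model = context.targetModel ?? "unknown";     // context.targetModel is an assumption
    const tokens = response.usage?.total_tokens ?? 0;   // OpenAI-style usage field, also an assumption
    this.totals[model] = (this.totals[model] ?? 0) + tokens;
    return response;
  }
}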
Caching System
{
"cache": {
"enabled": true,
"type": "redis",
"connection": {
"host": "localhost",
"port": 6379,
"password": "${REDIS_PASSWORD}"
},
"ttl": 3600,
"keyPrefix": "claude-router:",
"compression": true,
"rules": [
{
"pattern": "explain*",
"ttl": 7200
},
{
"pattern": "generate*",
"ttl": 1800
}
]
}
}
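Cache hits only happen if identical prompts map to identical keys. A sketch of the kind of key derivation involved, using the keyPrefix from the config above (the router's exact key scheme is not documented here):
// Derive a deterministic cache key from the target model and message contents.
import { createHash } from "node:crypto";

function cacheKey(model, messages, keyPrefix = "claude-router:") {
  const digest = createHash("sha256")
    .update(model)
    .update(JSON.stringify(messages))
    .digest("hex");
  return keyPrefix + digest;
}

cacheKey("claude-3-sonnet", [{ role: "user", content: "explain this function" }]);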
Security and Authentication
API Key Management
{
"security": {
"authentication": {
"enabled": true,
"type": "bearer",
"keys": [
{
"key": "router-key-1",
"permissions": ["read", "write"],
"rateLimit": 1000
},
{
"key": "router-key-2",
"permissions": ["read"],
"rateLimit": 100
}
]
},
"encryption": {
"enabled": true,
"algorithm": "AES-256-GCM",
"keyRotation": 86400
}
}
}
Rate Limiting
{
"rateLimit": {
"enabled": true,
"global": {
"requests": 1000,
"window": 3600
},
"perKey": {
"requests": 100,
"window": 3600
},
"perModel": {
"gpt-4": {
"requests": 50,
"window": 3600
},
"claude-3-sonnet": {
"requests": 100,
"window": 3600
}
}
}
}
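Each requests/window pair describes a quota per fixed window, e.g. 50 GPT-4 requests per 3600-second window. A minimal fixed-window counter that enforces that shape (illustrative only; the router's internal limiter may use a different algorithm):
// Fixed-window rate limiter keyed by an arbitrary string (API key, model name, ...).
const windows = new Map();   // key -> { windowStart, count }

function allowRequest(key, limit, windowSeconds) {
  const now = Date.now();
  const state = windows.get(key);
  if (!state || now - state.windowStart >= windowSeconds * 1000) {
    windows.set(key, { windowStart: now, count: 1 });
    return true;
  }
  if (state.count < limit) {
    state.count += 1;
    return true;
  }
  return false;   // over quota until the window resets
}

allowRequest("gpt-4", 50, 3600);   // per-model limit from the config above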
Troubleshooting
Common Issues
# Router not starting
- Check port availability
- Verify configuration file
- Check API key validity
- Review log files
# Model connection failures
- Test API endpoints
- Verify API keys
- Check network connectivity
- Review rate limits
# Performance issues
- Monitor memory usage
- Check cache configuration
- Review routing rules
- Optimize model selection
Debug Mode
# Enable debug logging
DEBUG=claude-router:* npm start
# Verbose logging
claude-code-router --log-level debug
# Request tracing
curl -H "X-Debug: true" http://localhost:8080/v1/messages
# Performance profiling
curl http://localhost:8080/debug/profile
Performance Optimization
{
"performance": {
"connectionPool": {
"maxConnections": 100,
"timeout": 30000,
"keepAlive": true
},
"compression": {
"enabled": true,
"level": 6,
"threshold": 1024
},
"clustering": {
"enabled": true,
"workers": "auto"
}
}
}
Deployment
Production Deployment
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claude-router
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claude-router
  template:
    metadata:
      labels:
        app: claude-router
    spec:
      containers:
        - name: claude-router
          image: claudecode/router:latest
          ports:
            - containerPort: 8080
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: openai
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: anthropic
Docker Compose
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - ROUTER_CONFIG=/app/config/production.json
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    restart: unless-stopped
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
volumes:
  redis_data: