# Claude Code Router

Complete Claude Code Router commands and workflows for routing Claude Code requests to different AI models, enabling multi-model development and cost optimization.

## Overview

Claude Code Router is an open-source tool that acts as a local proxy for Claude Code requests, letting you intercept, modify, and route requests to different AI models, including GPT-4, Gemini, local models, and other OpenAI-compatible APIs. This lets developers keep Claude Code's powerful interface while using different models for specific tasks, cost optimization, or access to free API credits.

⚠️ Usage notice: Claude Code Router modifies API requests and requires proper configuration of API keys and endpoints. Always handle credentials securely and comply with each provider's API terms of service.
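Conceptually, Claude Code keeps talking to what it thinks is the Anthropic API while the router listens on localhost, applies its routing rules, and forwards the call to the selected backend. A minimal sketch of that flow, assuming the router exposes an Anthropic-style `/v1/messages` endpoint on its default port 8080 (adjust the path and key to your setup):

```javascript
// Minimal sketch: send one request through a locally running router instance.
// The /v1/messages path, port 8080, and proxy key are illustrative assumptions.
const response = await fetch("http://localhost:8080/v1/messages", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-api-key": process.env.ROUTER_PROXY_KEY ?? "router-proxy-key",
  },
  body: JSON.stringify({
    model: "claude-3-sonnet", // the router may rewrite this to gpt-4, gemini-pro, ...
    max_tokens: 256,
    messages: [{ role: "user", content: "Summarize what this repository does." }],
  }),
});

console.log(await response.json()); // returned in the shape Claude Code expects
```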
## Installation

### Quick Setup

```bash
# Install via npm
npm install -g claude-code-router
# Install via pip
pip install claude-code-router
# Clone from GitHub
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
npm install
```
### Docker Installation

```bash
# Pull Docker image
docker pull claudecode/router:latest
# Run with Docker
docker run -p 8080:8080 -e OPENAI_API_KEY=your-key claudecode/router
```

```yaml
# Docker Compose
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```
### Build from Source

```bash
# Clone repository
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
# Install dependencies
npm install
# Build project
npm run build
# Start router
npm start
```
## Configuration

### Basic Configuration

```json
{
"router": {
"port": 8080,
"host": "localhost",
"logLevel": "info",
"enableCors": true
},
"models": {
"claude-3-sonnet": {
"provider": "anthropic",
"apiKey": "${ANTHROPIC_API_KEY}",
"endpoint": "https://api.anthropic.com/v1/messages"
},
"gpt-4": {
"provider": "openai",
"apiKey": "${OPENAI_API_KEY}",
"endpoint": "https://api.openai.com/v1/chat/completions"
},
"gemini-pro": {
"provider": "google",
"apiKey": "${GOOGLE_API_KEY}",
"endpoint": "https://generativelanguage.googleapis.com/v1/models"
}
},
"routing": {
"defaultModel": "claude-3-sonnet",
"fallbackModel": "gpt-4",
"loadBalancing": false,
"costOptimization": true
}
}
```

### Advanced Routing Rules

```json
{
"routingRules": [
{
"name": "code_generation",
"condition": {
"messageContains": ["write code", "implement", "create function"],
"fileTypes": [".py", ".js", ".ts", ".java"]
},
"targetModel": "gpt-4",
"priority": 1
},
{
"name": "documentation",
"condition": {
"messageContains": ["document", "explain", "comment"],
"taskType": "documentation"
},
"targetModel": "claude-3-sonnet",
"priority": 2
},
{
"name": "cost_optimization",
"condition": {
"tokenCount": { "max": 1000 },
"complexity": "low"
},
"targetModel": "gpt-3.5-turbo",
"priority": 3
}
]
}
```
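These rules are declarative, so the router has to reduce them to a single model choice at request time. A rough sketch of how that matching could work; `pickModel` and the exact condition semantics are illustrative, not the router's actual implementation (token-count and complexity conditions are omitted here):

```javascript
// Illustrative rule matcher: lowest priority number wins, defaultModel is the fallback.
function pickModel(request, rules, defaultModel) {
  const text = request.messages.map((m) => m.content).join(" ").toLowerCase();
  const matches = rules.filter((rule) => {
    const { messageContains = [], fileTypes = [] } = rule.condition;
    const keywordHit = messageContains.some((kw) => text.includes(kw));
    const fileHit =
      fileTypes.length === 0 ||
      (request.files ?? []).some((f) => fileTypes.some((ext) => f.endsWith(ext)));
    return keywordHit && fileHit;
  });
  matches.sort((a, b) => a.priority - b.priority);
  return matches[0]?.targetModel ?? defaultModel;
}

// Example: a request that mentions "implement" and touches a .ts file routes to gpt-4.
const model = pickModel(
  { messages: [{ content: "Please implement the parser" }], files: ["src/parser.ts"] },
  [{ name: "code_generation", condition: { messageContains: ["implement"], fileTypes: [".ts"] }, targetModel: "gpt-4", priority: 1 }],
  "claude-3-sonnet"
);
console.log(model); // "gpt-4"
```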
## Core Commands

### Router Management
| Command | Description |
|---|---|
| `router start` | Start the Claude Code Router |
| `router stop` | Stop the router service |
| `router restart` | Restart the router |
| `router status` | Check router status |
| `router config` | Show the current configuration |
| `router logs` | View router logs |
| `router health` | Health check endpoint |
### Model Management

| Command | Description |
|---|---|
| `router models list` | List available models |
| `router models add` | Add a new model configuration |
| `router models remove` | Remove a model configuration |
| `router models test` | Test model connectivity |
| `router models switch` | Switch the default model |
| `router models status` | Check model status |
## Claude Code Integration

### Configure Claude Code

```bash
# Set Claude Code to use router
export ANTHROPIC_API_BASE=http://localhost:8080/v1
export ANTHROPIC_API_KEY=router-proxy-key
# Alternative configuration
claude-code config set api.base_url http://localhost:8080/v1
claude-code config set api.key router-proxy-key
# Verify configuration
claude-code config show
```

### Router Proxy Configuration

```bash
# Start router with specific configuration
claude-code-router --config config.json --port 8080
# Start with environment variables
ROUTER_PORT=8080 ROUTER_CONFIG=config.json claude-code-router
# Start with custom models
claude-code-router --models gpt-4,claude-3-sonnet,gemini-pro
```

## Model Providers
### OpenAI Integration

```json
{
"openai": {
"apiKey": "${OPENAI_API_KEY}",
"baseUrl": "https://api.openai.com/v1",
"models": {
"gpt-4": {
"maxTokens": 8192,
"temperature": 0.7,
"costPerToken": 0.00003
},
"gpt-3.5-turbo": {
"maxTokens": 4096,
"temperature": 0.7,
"costPerToken": 0.000002
}
}
}
}
```
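The `costPerToken` fields make per-request spend easy to estimate before a call is sent. A small sketch, assuming cost is approximated as (prompt tokens + completion tokens) × the per-token rate declared above:

```javascript
// Rough cost estimate based on the per-token rates declared in the provider config.
// The token counts come from whatever tokenizer you use; the rates are illustrative.
const rates = { "gpt-4": 0.00003, "gpt-3.5-turbo": 0.000002 };

function estimateCost(model, promptTokens, completionTokens) {
  const rate = rates[model];
  if (rate === undefined) throw new Error(`No rate configured for ${model}`);
  return (promptTokens + completionTokens) * rate;
}

console.log(estimateCost("gpt-4", 1200, 400).toFixed(4));         // "0.0480"
console.log(estimateCost("gpt-3.5-turbo", 1200, 400).toFixed(4)); // "0.0032"
```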
### Google Gemini Integration

```json
{
"google": {
"apiKey": "${GOOGLE_API_KEY}",
"baseUrl": "https://generativelanguage.googleapis.com/v1",
"models": {
"gemini-pro": {
"maxTokens": 32768,
"temperature": 0.7,
"costPerToken": 0.000001
},
"gemini-pro-vision": {
"maxTokens": 16384,
"temperature": 0.7,
"supportsImages": true
}
}
}
}
```

### Local Model Integration

```json
{
"local": {
"baseUrl": "http://localhost:11434/v1",
"models": {
"llama2": {
"endpoint": "/api/generate",
"maxTokens": 4096,
"temperature": 0.7,
"costPerToken": 0
},
"codellama": {
"endpoint": "/api/generate",
"maxTokens": 8192,
"temperature": 0.3,
"specialization": "code"
}
}
}
}
```
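With `costPerToken: 0`, local models are attractive for high-volume or low-stakes work. The config above points at an Ollama-style server on port 11434; a direct call to that backend, to confirm a model is reachable before routing traffic to it, might look like the following sketch (field names follow Ollama's generate API, but verify against your local server):

```javascript
// Sketch: call a local Ollama-style backend directly, outside the router,
// to confirm the model referenced in the "local" provider config is reachable.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "codellama",
    prompt: "Write a function that reverses a string in Python.",
    stream: false, // return a single JSON object instead of a token stream
  }),
});

const data = await res.json();
console.log(data.response); // generated text, at zero API cost
```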
### OpenRouter Integration

```json
{
"openrouter": {
"apiKey": "${OPENROUTER_API_KEY}",
"baseUrl": "https://openrouter.ai/api/v1",
"models": {
"anthropic/claude-3-sonnet": {
"maxTokens": 200000,
"costPerToken": 0.000015
},
"openai/gpt-4": {
"maxTokens": 8192,
"costPerToken": 0.00003
},
"google/gemini-pro": {
"maxTokens": 32768,
"costPerToken": 0.000001
}
}
}
}
```

## Routing Strategies

### Load Balancing

```json
{
"loadBalancing": {
"enabled": true,
"strategy": "round_robin",
"models": ["gpt-4", "claude-3-sonnet", "gemini-pro"],
"weights": {
"gpt-4": 0.4,
"claude-3-sonnet": 0.4,
"gemini-pro": 0.2
},
"healthCheck": {
"enabled": true,
"interval": 60,
"timeout": 10
}
}
}
```
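With weights plus a health check, the router spreads traffic across models and skips backends that are currently failing. A simplified sketch of weighted selection over healthy candidates (not the router's actual scheduler):

```javascript
// Weighted pick over the models that passed the last health check.
function pickWeighted(weights, healthy) {
  const candidates = Object.entries(weights).filter(([model]) => healthy.has(model));
  if (candidates.length === 0) throw new Error("No healthy models available");
  const total = candidates.reduce((sum, [, w]) => sum + w, 0);
  let roll = Math.random() * total;
  for (const [model, weight] of candidates) {
    roll -= weight;
    if (roll <= 0) return model;
  }
  return candidates[candidates.length - 1][0]; // guard against floating-point drift
}

const weights = { "gpt-4": 0.4, "claude-3-sonnet": 0.4, "gemini-pro": 0.2 };
const healthy = new Set(["gpt-4", "gemini-pro"]); // claude-3-sonnet failed its last health check
console.log(pickWeighted(weights, healthy)); // "gpt-4" about twice as often as "gemini-pro"
```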
### Cost Optimization

```json
{
"costOptimization": {
"enabled": true,
"budget": {
"daily": 10.00,
"monthly": 300.00,
"currency": "USD"
},
"rules": [
{
"condition": "tokenCount < 500",
"model": "gpt-3.5-turbo"
},
{
"condition": "tokenCount >= 500 && tokenCount < 2000",
"model": "claude-3-haiku"
},
{
"condition": "tokenCount >= 2000",
"model": "claude-3-sonnet"
}
]
}
}
```
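The rules above send each request to the cheapest tier that can handle it, while the budget caps total spend. A sketch of how those thresholds might be applied, with budget accounting reduced to a single in-memory daily counter:

```javascript
// Sketch of the cost-optimization rules: small requests go to cheaper models,
// and requests are refused once the illustrative daily budget is exhausted.
let spentToday = 0;
const dailyBudget = 10.0;

function chooseByCost(tokenCount, estimatedCost) {
  if (spentToday + estimatedCost > dailyBudget) {
    throw new Error("Daily budget exceeded; request blocked by cost optimizer");
  }
  spentToday += estimatedCost;
  if (tokenCount < 500) return "gpt-3.5-turbo";
  if (tokenCount < 2000) return "claude-3-haiku";
  return "claude-3-sonnet";
}

console.log(chooseByCost(300, 0.001));  // "gpt-3.5-turbo"
console.log(chooseByCost(1500, 0.004)); // "claude-3-haiku"
console.log(chooseByCost(5000, 0.02));  // "claude-3-sonnet"
```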
### Intelligent Routing

```json
{
"intelligentRouting": {
"enabled": true,
"rules": [
{
"name": "code_tasks",
"patterns": ["implement", "write code", "debug", "refactor"],
"model": "gpt-4",
"confidence": 0.8
},
{
"name": "analysis_tasks",
"patterns": ["analyze", "explain", "review", "understand"],
"model": "claude-3-sonnet",
"confidence": 0.9
},
{
"name": "creative_tasks",
"patterns": ["create", "design", "brainstorm", "generate"],
"model": "gemini-pro",
"confidence": 0.7
}
]
}
}
```
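Intelligent routing classifies each prompt by keyword patterns and applies the matching rule. The sketch below treats a rule as applicable when any of its patterns appears in the prompt and breaks ties by the configured confidence; the router's real classifier may be more sophisticated:

```javascript
// A rule applies when any of its patterns appears in the prompt; among applicable
// rules, the one with the highest confidence wins. Purely illustrative scoring.
function classify(prompt, rules, fallback) {
  const text = prompt.toLowerCase();
  const applicable = rules.filter((r) => r.patterns.some((p) => text.includes(p)));
  if (applicable.length === 0) return fallback;
  applicable.sort((a, b) => b.confidence - a.confidence);
  return applicable[0].model;
}

const rules = [
  { name: "code_tasks", patterns: ["implement", "write code", "debug", "refactor"], model: "gpt-4", confidence: 0.8 },
  { name: "analysis_tasks", patterns: ["analyze", "explain", "review", "understand"], model: "claude-3-sonnet", confidence: 0.9 },
];

console.log(classify("Please review and explain this diff", rules, "gemini-pro")); // "claude-3-sonnet"
console.log(classify("Refactor the auth module", rules, "gemini-pro"));            // "gpt-4"
console.log(classify("Summarize the meeting notes", rules, "gemini-pro"));         // "gemini-pro"
```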
## Monitoring and Analytics

### Usage Tracking

```bash
# View usage statistics
curl http://localhost:8080/api/stats
# Model usage breakdown
curl http://localhost:8080/api/stats/models
# Cost analysis
curl http://localhost:8080/api/stats/costs
# Performance metrics
curl http://localhost:8080/api/stats/performance
```
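Because these endpoints return JSON, the same numbers can be pulled programmatically, for example for a daily cost report. A sketch, assuming `/api/stats/models` returns a per-model array with `name`, `requests`, and `cost` fields (the actual response shape may differ in your version):

```javascript
// Sketch: pull the per-model breakdown and print a simple cost table.
// The response shape ({ models: [{ name, requests, cost }] }) is assumed, not guaranteed.
const stats = await fetch("http://localhost:8080/api/stats/models").then((r) => r.json());

for (const m of stats.models ?? []) {
  console.log(`${m.name.padEnd(20)} ${String(m.requests).padStart(8)} req  $${(m.cost ?? 0).toFixed(2)}`);
}
```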
### Logging Configuration

```json
{
"logging": {
"level": "info",
"format": "json",
"outputs": ["console", "file"],
"file": {
"path": "./logs/router.log",
"maxSize": "100MB",
"maxFiles": 10,
"compress": true
},
"metrics": {
"enabled": true,
"interval": 60,
"endpoint": "/metrics"
}
}
}
```
### Health Monitoring

```bash
# Health check endpoint
curl http://localhost:8080/health
# Detailed health status
curl http://localhost:8080/health/detailed
# Model availability
curl http://localhost:8080/health/models
# Performance metrics
curl http://localhost:8080/metrics
```

## Advanced Features

### Request and Response Transformers

```javascript
// Custom request transformer
const requestTransformer = {
transform: (request, targetModel) => {
// Modify request based on target model
if (targetModel === 'gpt-4') {
request.temperature = 0.3; // Lower temperature for code
} else if (targetModel === 'claude-3-sonnet') {
request.max_tokens = 4096; // Adjust token limit
}
return request;
}
};
// Response transformer
const responseTransformer = {
transform: (response, sourceModel) => {
// Standardize response format
return {
content: response.content || response.message,
model: sourceModel,
usage: response.usage,
timestamp: new Date().toISOString()
};
}
};
```

### Plugin System
```javascript
// Router plugin interface
class RouterPlugin {
constructor(config) {
this.config = config;
}
onRequest(request, context) {
// Pre-process request
return request;
}
onResponse(response, context) {
// Post-process response
return response;
}
onError(error, context) {
// Handle errors
console.error('Router error:', error);
}
}
// Load plugins
const plugins = [
new CostTrackingPlugin(),
new CachePlugin(),
new RateLimitPlugin()
];
```

### Caching System
```json
{
"cache": {
"enabled": true,
"type": "redis",
"connection": {
"host": "localhost",
"port": 6379,
"password": "${REDIS_PASSWORD}"
},
"ttl": 3600,
"keyPrefix": "claude-router:",
"compression": true,
"rules": [
{
"pattern": "explain*",
"ttl": 7200
},
{
"pattern": "generate*",
"ttl": 1800
}
]
}
}
```
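Cache hits only happen when identical requests map to identical keys, so key construction matters. One way to derive a key and a TTL from the config above is sketched below; the key layout (prefix + hash of model and messages) and the prefix-style pattern matching are illustrative choices, not the router's documented behavior:

```javascript
import { createHash } from "node:crypto";

// Sketch: derive a cache key and TTL for a request, following the config above.
function cacheEntry(request, config) {
  const hash = createHash("sha256")
    .update(request.model + JSON.stringify(request.messages))
    .digest("hex");
  const firstMessage = (request.messages[0]?.content ?? "").toLowerCase();
  const rule = config.rules.find((r) =>
    firstMessage.startsWith(r.pattern.replace("*", ""))
  );
  return { key: `${config.keyPrefix}${hash}`, ttl: rule?.ttl ?? config.ttl };
}

const entry = cacheEntry(
  { model: "claude-3-sonnet", messages: [{ role: "user", content: "Explain this stack trace" }] },
  { keyPrefix: "claude-router:", ttl: 3600, rules: [{ pattern: "explain*", ttl: 7200 }] }
);
console.log(entry.ttl); // 7200 — "explain*" requests are cached longer
```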
## Security and Authentication

### API Key Management

```json
{
"security": {
"authentication": {
"enabled": true,
"type": "bearer",
"keys": [
{
"key": "router-key-1",
"permissions": ["read", "write"],
"rateLimit": 1000
},
{
"key": "router-key-2",
"permissions": ["read"],
"rateLimit": 100
}
]
},
"encryption": {
"enabled": true,
"algorithm": "AES-256-GCM",
"keyRotation": 86400
}
}
}
```

### Rate Limiting
```json
{
"rateLimit": {
"enabled": true,
"global": {
"requests": 1000,
"window": 3600
},
"perKey": {
"requests": 100,
"window": 3600
},
"perModel": {
"gpt-4": {
"requests": 50,
"window": 3600
},
"claude-3-sonnet": {
"requests": 100,
"window": 3600
}
}
}
}
```
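Each `requests`/`window` pair above describes a quota over a rolling time window. A minimal in-memory sketch of how such a limit could be enforced; a production router would track this per API key and per model, likely in Redis:

```javascript
// Minimal sliding-window limiter: allow `requests` calls per `window` seconds.
function createLimiter({ requests, window }) {
  const timestamps = [];
  return function allow(now = Date.now()) {
    const cutoff = now - window * 1000;
    while (timestamps.length && timestamps[0] < cutoff) timestamps.shift();
    if (timestamps.length >= requests) return false; // over quota, reject or queue
    timestamps.push(now);
    return true;
  };
}

const gpt4Limit = createLimiter({ requests: 50, window: 3600 }); // per the perModel config above
console.log(gpt4Limit()); // true until 50 calls have been made within the hour
```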
## Troubleshooting

### Common Issues

```bash
# Router not starting
- Check port availability
- Verify configuration file
- Check API key validity
- Review log files
# Model connection failures
- Test API endpoints
- Verify API keys
- Check network connectivity
- Review rate limits
# Performance issues
- Monitor memory usage
- Check cache configuration
- Review routing rules
- Optimize model selection
```

### Debug Mode
```bash
# Enable debug logging
DEBUG=claude-router:* npm start
# Verbose logging
claude-code-router --log-level debug
# Request tracing
curl -H "X-Debug: true" http://localhost:8080/v1/messages
# Performance profiling
curl http://localhost:8080/debug/profile
```

### Performance Optimization
```json
{
"performance": {
"connectionPool": {
"maxConnections": 100,
"timeout": 30000,
"keepAlive": true
},
"compression": {
"enabled": true,
"level": 6,
"threshold": 1024
},
"clustering": {
"enabled": true,
"workers": "auto"
}
}
}
```

## Deployment

### Production Deployment
```yaml
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claude-router
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claude-router
  template:
    metadata:
      labels:
        app: claude-router
    spec:
      containers:
        - name: claude-router
          image: claudecode/router:latest
          ports:
            - containerPort: 8080
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: openai
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: anthropic
```

### Docker Compose
```yaml
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - ROUTER_CONFIG=/app/config/production.json
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    restart: unless-stopped
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
volumes:
  redis_data:
```

## Resources

- [Claude Code Router GitHub Repository](https://github.com/musistudio/claude-code-router)
- [Documentation](https://docs.claude-router.dev/)
- [Discord Community](https://discord.gg/claude-router)
- [YouTube](https://youtube.com/claude-router)
- [OpenRouter Documentation](https://openrouter.ai/docs)