
# Claude Code Router

A complete Claude Code Router reference with commands and workflows for routing Claude Code requests to different AI models, enabling multi-model development and cost optimization.

## Overview

Claude Code Router is an open-source tool that acts as a local proxy for Claude Code requests, letting you intercept, modify, and route them to different AI models, including GPT-4, Gemini, local models, and other OpenAI-compatible APIs. This lets developers keep Claude Code's powerful interface while using different models for specific tasks, optimizing costs, or tapping free API credits.

⚠️ **Usage Notice**: Claude Code Router modifies API requests and requires correct configuration of API keys and endpoints. Always handle credentials securely and comply with each API provider's terms of service.

## Installation

### Quick Setup

```bash
# Install via npm
npm install -g claude-code-router

# Install via pip
pip install claude-code-router

# Clone from GitHub
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
npm install
```

### Docker Installation

```bash
# Pull Docker image
docker pull claudecode/router:latest

# Run with Docker
docker run -p 8080:8080 -e OPENAI_API_KEY=your-key claudecode/router
```

```yaml
# Docker Compose
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```

### Building from Source

```bash
# Clone repository
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router

# Install dependencies
npm install

# Build project
npm run build

# Start router
npm start
```

## Configuration

### Basic Configuration

```json
{
  "router": {
    "port": 8080,
    "host": "localhost",
    "logLevel": "info",
    "enableCors": true
  },
  "models": {
    "claude-3-sonnet": {
      "provider": "anthropic",
      "apiKey": "${ANTHROPIC_API_KEY}",
      "endpoint": "https://api.anthropic.com/v1/messages"
    },
    "gpt-4": {
      "provider": "openai",
      "apiKey": "${OPENAI_API_KEY}",
      "endpoint": "https://api.openai.com/v1/chat/completions"
    },
    "gemini-pro": {
      "provider": "google",
      "apiKey": "${GOOGLE_API_KEY}",
      "endpoint": "https://generativelanguage.googleapis.com/v1/models"
    }
  },
  "routing": {
    "defaultModel": "claude-3-sonnet",
    "fallbackModel": "gpt-4",
    "loadBalancing": false,
    "costOptimization": true
  }
}
```
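Placeholders such as `${ANTHROPIC_API_KEY}` are resolved from the environment when the config is loaded. A minimal sketch of that expansion (the `expandEnv` helper is illustrative, not part of the router's API):

```javascript
// Illustrative sketch (not router code): recursively expand "${VAR}"
// placeholders in a config tree from environment variables.
function expandEnv(value, env = process.env) {
  if (typeof value === 'string') {
    // Unknown variables expand to an empty string in this sketch.
    return value.replace(/\$\{(\w+)\}/g, (_, name) => env[name] ?? '');
  }
  if (Array.isArray(value)) return value.map((v) => expandEnv(v, env));
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, expandEnv(v, env)])
    );
  }
  return value; // numbers, booleans, null pass through unchanged
}

expandEnv({ apiKey: '${ANTHROPIC_API_KEY}' }, { ANTHROPIC_API_KEY: 'sk-test' });
// → { apiKey: 'sk-test' }
```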

### Advanced Routing Rules

```json
{
  "routingRules": [
    {
      "name": "code_generation",
      "condition": {
        "messageContains": ["write code", "implement", "create function"],
        "fileTypes": [".py", ".js", ".ts", ".java"]
      },
      "targetModel": "gpt-4",
      "priority": 1
    },
    {
      "name": "documentation",
      "condition": {
        "messageContains": ["document", "explain", "comment"],
        "taskType": "documentation"
      },
      "targetModel": "claude-3-sonnet",
      "priority": 2
    },
    {
      "name": "cost_optimization",
      "condition": {
        "tokenCount": { "max": 1000 },
        "complexity": "low"
      },
      "targetModel": "gpt-3.5-turbo",
      "priority": 3
    }
  ]
}
```
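A sketch of how such rules might be evaluated: match the `messageContains` keywords against the prompt and take the highest-priority hit. This mirrors the config shape above but is not the router's actual matcher:

```javascript
// Illustrative matcher: pick the highest-priority rule whose
// "messageContains" keywords appear in the prompt, else the default model.
function routeMessage(message, rules, defaultModel) {
  const text = message.toLowerCase();
  const matches = rules
    .filter((r) => r.condition.messageContains.some((kw) => text.includes(kw)))
    .sort((a, b) => a.priority - b.priority); // lower number = higher priority
  return matches.length ? matches[0].targetModel : defaultModel;
}

const routingRules = [
  { name: 'code_generation',
    condition: { messageContains: ['write code', 'implement', 'create function'] },
    targetModel: 'gpt-4', priority: 1 },
  { name: 'documentation',
    condition: { messageContains: ['document', 'explain', 'comment'] },
    targetModel: 'claude-3-sonnet', priority: 2 },
];

routeMessage('Please implement a parser', routingRules, 'claude-3-sonnet'); // 'gpt-4'
```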

## Core Commands

### Router Management

| Command | Description |
|---------|-------------|
| `router start` | Start the Claude Code Router |
| `router stop` | Stop the router service |
| `router restart` | Restart the router |
| `router status` | Check the router status |
| `router config` | Show the current configuration |
| `router logs` | View router logs |
| `router health` | Health check endpoint |

### Model Management

| Command | Description |
|---------|-------------|
| `router models list` | List available models |
| `router models add` | Add a new model configuration |
| `router models remove` | Remove a model configuration |
| `router models test` | Test model connectivity |
| `router models switch` | Switch the default model |
| `router models status` | Check model status |

## Claude Code Integration

### Configure Claude Code

```bash
# Set Claude Code to use router
export ANTHROPIC_API_BASE=http://localhost:8080/v1
export ANTHROPIC_API_KEY=router-proxy-key

# Alternative configuration
claude-code config set api.base_url http://localhost:8080/v1
claude-code config set api.key router-proxy-key

# Verify configuration
claude-code config show
```
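With the base URL pointed at the router, any client that speaks the Anthropic Messages API goes through the proxy transparently. A minimal sketch of building such a request (`buildRouterRequest` is an illustrative helper, not a router API; the proxy key must match your router configuration):

```javascript
// Sketch: build an Anthropic-style Messages request aimed at the local
// router instead of api.anthropic.com.
function buildRouterRequest(prompt, model = 'claude-3-sonnet') {
  return {
    url: 'http://localhost:8080/v1/messages',
    options: {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        'x-api-key': 'router-proxy-key',
      },
      body: JSON.stringify({
        model,
        max_tokens: 1024,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// const { url, options } = buildRouterRequest('Summarize this repo');
// const res = await fetch(url, options); // requires the router to be running
```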

### Router Proxy Configuration

```bash
# Start router with specific configuration
claude-code-router --config config.json --port 8080

# Start with environment variables
ROUTER_PORT=8080 ROUTER_CONFIG=config.json claude-code-router

# Start with custom models
claude-code-router --models gpt-4,claude-3-sonnet,gemini-pro
```

## Model Providers

### OpenAI Integration

```json
{
  "openai": {
    "apiKey": "${OPENAI_API_KEY}",
    "baseUrl": "https://api.openai.com/v1",
    "models": {
      "gpt-4": {
        "maxTokens": 8192,
        "temperature": 0.7,
        "costPerToken": 0.00003
      },
      "gpt-3.5-turbo": {
        "maxTokens": 4096,
        "temperature": 0.7,
        "costPerToken": 0.000002
      }
    }
  }
}
```
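Because the provider config carries both `maxTokens` and `costPerToken`, a router can pick the cheapest model whose context window still covers the request. A hypothetical helper sketching that selection (model data copied from the config above):

```javascript
// Model limits and prices copied from the OpenAI config above.
const openaiModels = {
  'gpt-4': { maxTokens: 8192, costPerToken: 0.00003 },
  'gpt-3.5-turbo': { maxTokens: 4096, costPerToken: 0.000002 },
};

// Illustrative helper, not a router API: cheapest model that fits.
function cheapestModel(models, requiredTokens) {
  const fits = Object.entries(models)
    .filter(([, m]) => m.maxTokens >= requiredTokens)
    .sort(([, a], [, b]) => a.costPerToken - b.costPerToken);
  return fits.length ? fits[0][0] : null;
}

cheapestModel(openaiModels, 1000); // 'gpt-3.5-turbo'
cheapestModel(openaiModels, 6000); // 'gpt-4'
```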

### Google Gemini Integration

```json
{
  "google": {
    "apiKey": "${GOOGLE_API_KEY}",
    "baseUrl": "https://generativelanguage.googleapis.com/v1",
    "models": {
      "gemini-pro": {
        "maxTokens": 32768,
        "temperature": 0.7,
        "costPerToken": 0.000001
      },
      "gemini-pro-vision": {
        "maxTokens": 16384,
        "temperature": 0.7,
        "supportsImages": true
      }
    }
  }
}
```

### Local Model Integration

```json
{
  "local": {
    "baseUrl": "http://localhost:11434/v1",
    "models": {
      "llama2": {
        "endpoint": "/api/generate",
        "maxTokens": 4096,
        "temperature": 0.7,
        "costPerToken": 0
      },
      "codellama": {
        "endpoint": "/api/generate",
        "maxTokens": 8192,
        "temperature": 0.3,
        "specialization": "code"
      }
    }
  }
}
```

### OpenRouter Integration

```json
{
  "openrouter": {
    "apiKey": "${OPENROUTER_API_KEY}",
    "baseUrl": "https://openrouter.ai/api/v1",
    "models": {
      "anthropic/claude-3-sonnet": {
        "maxTokens": 200000,
        "costPerToken": 0.000015
      },
      "openai/gpt-4": {
        "maxTokens": 8192,
        "costPerToken": 0.00003
      },
      "google/gemini-pro": {
        "maxTokens": 32768,
        "costPerToken": 0.000001
      }
    }
  }
}
```

## Routing Strategies

### Load Balancing

```json
{
  "loadBalancing": {
    "enabled": true,
    "strategy": "round_robin",
    "models": ["gpt-4", "claude-3-sonnet", "gemini-pro"],
    "weights": {
      "gpt-4": 0.4,
      "claude-3-sonnet": 0.4,
      "gemini-pro": 0.2
    },
    "healthCheck": {
      "enabled": true,
      "interval": 60,
      "timeout": 10
    }
  }
}
```
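With `"strategy": "round_robin"` the listed models are cycled in order, while the `weights` map suggests weighted selection. A sketch of weighted picking under that assumption (`pickWeighted` is illustrative, not the router's shipped strategy code):

```javascript
// Cumulative-weight sampling over the "weights" map above.
function pickWeighted(weights, rand = Math.random()) {
  const entries = Object.entries(weights);
  const total = entries.reduce((sum, [, w]) => sum + w, 0);
  let threshold = rand * total; // scale so weights need not sum to 1
  for (const [model, weight] of entries) {
    threshold -= weight;
    if (threshold <= 0) return model;
  }
  return entries[entries.length - 1][0]; // guard against rounding drift
}

const weights = { 'gpt-4': 0.4, 'claude-3-sonnet': 0.4, 'gemini-pro': 0.2 };
pickWeighted(weights, 0.5); // 'claude-3-sonnet'
```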

### Cost Optimization

```json
{
  "costOptimization": {
    "enabled": true,
    "budget": {
      "daily": 10.00,
      "monthly": 300.00,
      "currency": "USD"
    },
    "rules": [
      {
        "condition": "tokenCount < 500",
        "model": "gpt-3.5-turbo"
      },
      {
        "condition": "tokenCount >= 500 && tokenCount < 2000",
        "model": "claude-3-haiku"
      },
      {
        "condition": "tokenCount >= 2000",
        "model": "claude-3-sonnet"
      }
    ]
  }
}
```
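The tiers above route purely by token count. A minimal sketch of evaluating those rules (thresholds copied from the config; the helper name is hypothetical):

```javascript
// Token-count tiers mirroring the cost-optimization rules above.
function modelForTokens(tokenCount) {
  if (tokenCount < 500) return 'gpt-3.5-turbo';   // cheap model for short requests
  if (tokenCount < 2000) return 'claude-3-haiku'; // mid-tier
  return 'claude-3-sonnet';                       // large requests
}

modelForTokens(300);  // 'gpt-3.5-turbo'
modelForTokens(2500); // 'claude-3-sonnet'
```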

### Intelligent Routing

```json
{
  "intelligentRouting": {
    "enabled": true,
    "rules": [
      {
        "name": "code_tasks",
        "patterns": ["implement", "write code", "debug", "refactor"],
        "model": "gpt-4",
        "confidence": 0.8
      },
      {
        "name": "analysis_tasks",
        "patterns": ["analyze", "explain", "review", "understand"],
        "model": "claude-3-sonnet",
        "confidence": 0.9
      },
      {
        "name": "creative_tasks",
        "patterns": ["create", "design", "brainstorm", "generate"],
        "model": "gemini-pro",
        "confidence": 0.7
      }
    ]
  }
}
```
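One plausible reading of these rules is to score each by the fraction of its patterns found in the message and route to the best-scoring rule; the `confidence` field could act as an acceptance threshold, though this sketch does not enforce it:

```javascript
// Rule data mirroring the config above (confidence omitted in this sketch).
const intelligentRules = [
  { name: 'code_tasks',
    patterns: ['implement', 'write code', 'debug', 'refactor'],
    model: 'gpt-4' },
  { name: 'analysis_tasks',
    patterns: ['analyze', 'explain', 'review', 'understand'],
    model: 'claude-3-sonnet' },
];

// Illustrative classifier, not the shipped matcher.
function classify(message, rules) {
  const text = message.toLowerCase();
  let best = null;
  for (const rule of rules) {
    const hits = rule.patterns.filter((p) => text.includes(p)).length;
    const score = hits / rule.patterns.length;
    if (hits > 0 && (!best || score > best.score)) {
      best = { model: rule.model, score };
    }
  }
  return best ? best.model : null; // null = fall back to the default model
}

classify('analyze and explain this function', intelligentRules); // 'claude-3-sonnet'
```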

## Monitoring and Analytics

### Usage Tracking

```bash
# View usage statistics
curl http://localhost:8080/api/stats

# Model usage breakdown
curl http://localhost:8080/api/stats/models

# Cost analysis
curl http://localhost:8080/api/stats/costs

# Performance metrics
curl http://localhost:8080/api/stats/performance
```

### Logging Configuration

```json
{
  "logging": {
    "level": "info",
    "format": "json",
    "outputs": ["console", "file"],
    "file": {
      "path": "./logs/router.log",
      "maxSize": "100MB",
      "maxFiles": 10,
      "compress": true
    },
    "metrics": {
      "enabled": true,
      "interval": 60,
      "endpoint": "/metrics"
    }
  }
}
```

### Health Monitoring

```bash

# Health check endpoint
curl http://localhost:8080/health

# Detailed health status
curl http://localhost:8080/health/detailed

# Model availability
curl http://localhost:8080/health/models

# Performance metrics
curl http://localhost:8080/metrics
```

### Request and Response Transformers

```javascript
// Custom request transformer
const requestTransformer = {
  transform: (request, targetModel) => {
    // Modify request based on target model
    if (targetModel === 'gpt-4') {
      request.temperature = 0.3; // Lower temperature for code
    } else if (targetModel === 'claude-3-sonnet') {
      request.max_tokens = 4096; // Adjust token limit
    }
    return request;
  }
};

// Response transformer
const responseTransformer = {
  transform: (response, sourceModel) => {
    // Standardize response format
    return {
      content: response.content || response.message,
      model: sourceModel,
      usage: response.usage,
      timestamp: new Date().toISOString()
    };
  }
};
```

### Plugin System
```javascript
// Router plugin interface
class RouterPlugin {
  constructor(config) {
    this.config = config;
  }
  
  onRequest(request, context) {
    // Pre-process request
    return request;
  }
  
  onResponse(response, context) {
    // Post-process response
    return response;
  }
  
  onError(error, context) {
    // Handle errors
    console.error('Router error:', error);
  }
}

// Load plugins
const plugins = [
  new CostTrackingPlugin(),
  new CachePlugin(),
  new RateLimitPlugin()
];
```

### Caching System
```json
{
  "cache": {
    "enabled": true,
    "type": "redis",
    "connection": {
      "host": "localhost",
      "port": 6379,
      "password": "${REDIS_PASSWORD}"
    },
    "ttl": 3600,
    "keyPrefix": "claude-router:",
    "compression": true,
    "rules": [
      {
        "pattern": "explain*",
        "ttl": 7200
      },
      {
        "pattern": "generate*",
        "ttl": 1800
      }
    ]
  }
}
```

## Security and Authentication

### API Key Management
```json
{
  "security": {
    "authentication": {
      "enabled": true,
      "type": "bearer",
      "keys": [
        {
          "key": "router-key-1",
          "permissions": ["read", "write"],
          "rateLimit": 1000
        },
        {
          "key": "router-key-2",
          "permissions": ["read"],
          "rateLimit": 100
        }
      ]
    },
    "encryption": {
      "enabled": true,
      "algorithm": "AES-256-GCM",
      "keyRotation": 86400
    }
  }
}
```

### Rate Limiting
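The configuration below pairs a request budget with a window in seconds, globally, per key, and per model. A minimal in-memory fixed-window counter illustrating the idea (not the router's actual limiter):

```javascript
// Fixed-window limiter: "requests" per "window" seconds, as in the config.
class FixedWindowLimiter {
  constructor(requests, windowSeconds) {
    this.limit = requests;
    this.windowMs = windowSeconds * 1000;
    this.count = 0;
    this.windowStart = 0;
  }

  // Returns true if the request is allowed within the current window.
  allow(now = Date.now()) {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // start a fresh window
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}

const limiter = new FixedWindowLimiter(2, 3600); // 2 requests per hour
limiter.allow(); // true
```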
```json
{
  "rateLimit": {
    "enabled": true,
    "global": {
      "requests": 1000,
      "window": 3600
    },
    "perKey": {
      "requests": 100,
      "window": 3600
    },
    "perModel": {
      "gpt-4": {
        "requests": 50,
        "window": 3600
      },
      "claude-3-sonnet": {
        "requests": 100,
        "window": 3600
      }
    }
  }
}
```

## Troubleshooting

### Common Issues
```bash
# Router not starting
- Check port availability
- Verify configuration file
- Check API key validity
- Review log files

# Model connection failures
- Test API endpoints
- Verify API keys
- Check network connectivity
- Review rate limits

# Performance issues
- Monitor memory usage
- Check cache configuration
- Review routing rules
- Optimize model selection
```

### Debug Mode
```bash
# Enable debug logging
DEBUG=claude-router:* npm start

# Verbose logging
claude-code-router --log-level debug

# Request tracing
curl -H "X-Debug: true" http://localhost:8080/v1/messages

# Performance profiling
curl http://localhost:8080/debug/profile
```

### Performance Optimization
```json
{
  "performance": {
    "connectionPool": {
      "maxConnections": 100,
      "timeout": 30000,
      "keepAlive": true
    },
    "compression": {
      "enabled": true,
      "level": 6,
      "threshold": 1024
    },
    "clustering": {
      "enabled": true,
      "workers": "auto"
    }
  }
}
```

## Deployment

### Production Deployment
```yaml
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claude-router
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claude-router
  template:
    metadata:
      labels:
        app: claude-router
    spec:
      containers:
      - name: claude-router
        image: claudecode/router:latest
        ports:
        - containerPort: 8080
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: api-keys
              key: openai
        - name: ANTHROPIC_API_KEY
          valueFrom:
            secretKeyRef:
              name: api-keys
              key: anthropic
```

### Docker Compose
```yaml
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - ROUTER_CONFIG=/app/config/production.json
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    restart: unless-stopped
    
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    
volumes:
  redis_data:
```

## Resources

- [GitHub Repository](https://github.com/musistudio/claude-code-router)
- [Documentation](https://docs.claude-router.dev/)
- [Discord Community](https://discord.gg/claude-router)
- [YouTube Channel](https://youtube.com/claude-router)
- [OpenRouter Documentation](https://openrouter.ai/docs)