
Claude Code Router


Comprehensive Claude Code Router commands and workflows for routing Claude Code requests to different AI models, enabling multi-model development and cost optimization.

Overview

Claude Code Router is an open-source tool that acts as a local proxy for Claude Code requests, letting you intercept, modify, and route them to different AI models such as GPT-4, Gemini, local models, and other OpenAI-compatible APIs. This allows developers to keep Claude Code's powerful interface while using different models for specific tasks, optimizing costs, or taking advantage of free API credits.

⚠️ **Usage Note**: Claude Code Router modifies API requests and requires proper configuration of API keys and endpoints. Handle credentials securely and comply with each provider's API terms.
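
As a minimal sketch of the credential-handling advice above, API keys can be kept out of the configuration file and supplied through environment variables. The `.env` filename and the shell pattern are illustrative assumptions, not part of the router itself; only `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` come from the configurations shown later.

```bash
# Keep API keys in a local file instead of hard-coding them (illustrative filename)
cat > .env <<'EOF'
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
EOF
chmod 600 .env          # restrict read access to the current user

# Export the keys into the environment before starting the router
set -a; source .env; set +a
```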

Installation

Quick Setup

```bash
# Install via npm
npm install -g claude-code-router

# Install via pip
pip install claude-code-router

# Clone from GitHub
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router
npm install
```

Docker Installation

```bash
# Pull Docker image
docker pull claudecode/router:latest

# Run with Docker
docker run -p 8080:8080 -e OPENAI_API_KEY=your-key claudecode/router
```

```yaml
# Docker Compose
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```

Building from Source

```bash
# Clone repository
git clone https://github.com/musistudio/claude-code-router.git
cd claude-code-router

# Install dependencies
npm install

# Build project
npm run build

# Start router
npm start
```

Configuration

Basic Configuration

```json
{
  "router": {
    "port": 8080,
    "host": "localhost",
    "logLevel": "info",
    "enableCors": true
  },
  "models": {
    "claude-3-sonnet": {
      "provider": "anthropic",
      "apiKey": "${ANTHROPIC_API_KEY}",
      "endpoint": "https://api.anthropic.com/v1/messages"
    },
    "gpt-4": {
      "provider": "openai",
      "apiKey": "${OPENAI_API_KEY}",
      "endpoint": "https://api.openai.com/v1/chat/completions"
    },
    "gemini-pro": {
      "provider": "google",
      "apiKey": "${GOOGLE_API_KEY}",
      "endpoint": "https://generativelanguage.googleapis.com/v1/models"
    }
  },
  "routing": {
    "defaultModel": "claude-3-sonnet",
    "fallbackModel": "gpt-4",
    "loadBalancing": false,
    "costOptimization": true
  }
}
```
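
A possible smoke test for this configuration is to save it as `config.json`, start the router with the `--config` flag shown later under "Router Proxy Setup", and query the health endpoint from the monitoring section; the exact response body is not guaranteed here.

```bash
# Start the router with the basic configuration above
claude-code-router --config config.json --port 8080 &

# Give it a moment to bind, then confirm the proxy answers on the configured port
sleep 2
curl -s http://localhost:8080/health
```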

Advanced Routing Rules

```json
{
  "routingRules": [
    {
      "name": "code_generation",
      "condition": {
        "messageContains": ["write code", "implement", "create function"],
        "fileTypes": [".py", ".js", ".ts", ".java"]
      },
      "targetModel": "gpt-4",
      "priority": 1
    },
    {
      "name": "documentation",
      "condition": {
        "messageContains": ["document", "explain", "comment"],
        "taskType": "documentation"
      },
      "targetModel": "claude-3-sonnet",
      "priority": 2
    },
    {
      "name": "cost_optimization",
      "condition": {
        "tokenCount": { "max": 1000 },
        "complexity": "low"
      },
      "targetModel": "gpt-3.5-turbo",
      "priority": 3
    }
  ]
}
```
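
A rough way to verify a rule such as `code_generation` is to send a prompt containing one of its trigger phrases through the proxy and then inspect the per-model statistics. The request body below follows the Anthropic Messages format the proxy fronts, and `router-proxy-key` is the placeholder key used in the integration section, so treat both as assumptions about your setup.

```bash
# Prompt contains "write code", so the code_generation rule should route it to gpt-4
curl -s http://localhost:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer router-proxy-key" \
  -d '{
        "model": "claude-3-sonnet",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "write code that parses a CSV file"}]
      }'

# Check which backend model actually served the request
curl -s http://localhost:8080/api/stats/models
```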

Core Commands

Router Management

| Command | Description |
| --- | --- |
| `router start` | Start the Claude Code Router |
| `router stop` | Stop the router service |
| `router restart` | Restart the router |
| `router status` | Check router status |
| `router config` | View current configuration |
| `router logs` | View router logs |
| `router health` | Health check endpoint |
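
A typical lifecycle using the commands above might look like the sketch below; it assumes the subcommands are invoked exactly as listed in the table.

```bash
router start        # bring the proxy up
router status       # confirm it is running
router logs         # inspect recent log output
router health       # query the health check endpoint
router restart      # apply configuration changes
```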

Model Management

| Command | Description |
| --- | --- |
| `router models list` | List available models |
| `router models add` | Add a new model configuration |
| `router models remove` | Remove a model configuration |
| `router models test` | Test model connectivity |
| `router models switch` | Switch the default model |
| `router models status` | Check model status |
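
The table does not show argument syntax, so the model-name arguments below are hypothetical; the general flow of adding, testing, and switching a model would look roughly like this.

```bash
router models list                 # see which models are configured
router models add gemini-pro       # hypothetical argument: register a new model entry
router models test gemini-pro      # hypothetical argument: verify connectivity to the provider
router models switch gemini-pro    # hypothetical argument: make it the default model
router models status               # check provider availability
```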

Claude Code Integration

Configure Claude Code

```bash
# Set Claude Code to use the router
export ANTHROPIC_API_BASE=http://localhost:8080/v1
export ANTHROPIC_API_KEY=router-proxy-key

# Alternative configuration
claude-code config set api.base_url http://localhost:8080/v1
claude-code config set api.key router-proxy-key

# Verify configuration
claude-code config show
```

Router Proxy Setup

```bash
# Start router with a specific configuration
claude-code-router --config config.json --port 8080

# Start with environment variables
ROUTER_PORT=8080 ROUTER_CONFIG=config.json claude-code-router

# Start with custom models
claude-code-router --models gpt-4,claude-3-sonnet,gemini-pro
```

Model Providers

OpenAI Integration

```json
{
  "openai": {
    "apiKey": "${OPENAI_API_KEY}",
    "baseUrl": "https://api.openai.com/v1",
    "models": {
      "gpt-4": {
        "maxTokens": 8192,
        "temperature": 0.7,
        "costPerToken": 0.00003
      },
      "gpt-3.5-turbo": {
        "maxTokens": 4096,
        "temperature": 0.7,
        "costPerToken": 0.000002
      }
    }
  }
}
```

Google Gemini Integration

```json
{
  "google": {
    "apiKey": "${GOOGLE_API_KEY}",
    "baseUrl": "https://generativelanguage.googleapis.com/v1",
    "models": {
      "gemini-pro": {
        "maxTokens": 32768,
        "temperature": 0.7,
        "costPerToken": 0.000001
      },
      "gemini-pro-vision": {
        "maxTokens": 16384,
        "temperature": 0.7,
        "supportsImages": true
      }
    }
  }
}
```

Local Model Integration

```json
{
  "local": {
    "baseUrl": "http://localhost:11434/v1",
    "models": {
      "llama2": {
        "endpoint": "/api/generate",
        "maxTokens": 4096,
        "temperature": 0.7,
        "costPerToken": 0
      },
      "codellama": {
        "endpoint": "/api/generate",
        "maxTokens": 8192,
        "temperature": 0.3,
        "specialization": "code"
      }
    }
  }
}
```
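
Before routing to a local backend, it is worth confirming that the server behind `baseUrl` is actually running. Assuming an Ollama-style server on port 11434 (the port used above), a quick check could be:

```bash
# List locally installed models (assumes Ollama's /api/tags endpoint)
curl -s http://localhost:11434/api/tags

# Pull a model if it is missing (assumes the Ollama CLI is the local backend)
ollama pull codellama
```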

OpenRouter Integration

```json
{
  "openrouter": {
    "apiKey": "${OPENROUTER_API_KEY}",
    "baseUrl": "https://openrouter.ai/api/v1",
    "models": {
      "anthropic/claude-3-sonnet": {
        "maxTokens": 200000,
        "costPerToken": 0.000015
      },
      "openai/gpt-4": {
        "maxTokens": 8192,
        "costPerToken": 0.00003
      },
      "google/gemini-pro": {
        "maxTokens": 32768,
        "costPerToken": 0.000001
      }
    }
  }
}
```

Routing Strategies

Load Balancing

```json
{
  "loadBalancing": {
    "enabled": true,
    "strategy": "round_robin",
    "models": ["gpt-4", "claude-3-sonnet", "gemini-pro"],
    "weights": {
      "gpt-4": 0.4,
      "claude-3-sonnet": 0.4,
      "gemini-pro": 0.2
    },
    "healthCheck": {
      "enabled": true,
      "interval": 60,
      "timeout": 10
    }
  }
}
```

Cost Optimization

```json
{
  "costOptimization": {
    "enabled": true,
    "budget": {
      "daily": 10.00,
      "monthly": 300.00,
      "currency": "USD"
    },
    "rules": [
      { "condition": "tokenCount < 500", "model": "gpt-3.5-turbo" },
      { "condition": "tokenCount >= 500 && tokenCount < 2000", "model": "claude-3-haiku" },
      { "condition": "tokenCount >= 2000", "model": "claude-3-sonnet" }
    ]
  }
}
```
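
To see whether spending stays inside the configured budget, the cost-statistics endpoint from the monitoring section can be polled; the field names in the `jq` filter are assumptions about the response shape, not a documented contract.

```bash
# Fetch current cost figures from the router (field names in the jq filter are assumed)
curl -s http://localhost:8080/api/stats/costs | jq '.daily, .monthly'
```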

Intelligent Routing

```json
{
  "intelligentRouting": {
    "enabled": true,
    "rules": [
      {
        "name": "code_tasks",
        "patterns": ["implement", "write code", "debug", "refactor"],
        "model": "gpt-4",
        "confidence": 0.8
      },
      {
        "name": "analysis_tasks",
        "patterns": ["analyze", "explain", "review", "understand"],
        "model": "claude-3-sonnet",
        "confidence": 0.9
      },
      {
        "name": "creative_tasks",
        "patterns": ["create", "design", "brainstorm", "generate"],
        "model": "gemini-pro",
        "confidence": 0.7
      }
    ]
  }
}
```

Monitoring and Analytics

Usage Tracking

```bash
# View usage statistics
curl http://localhost:8080/api/stats

# Model usage breakdown
curl http://localhost:8080/api/stats/models

# Cost analysis
curl http://localhost:8080/api/stats/costs

# Performance metrics
curl http://localhost:8080/api/stats/performance
```

Logging Configuration

```json
{
  "logging": {
    "level": "info",
    "format": "json",
    "outputs": ["console", "file"],
    "file": {
      "path": "./logs/router.log",
      "maxSize": "100MB",
      "maxFiles": 10,
      "compress": true
    },
    "metrics": {
      "enabled": true,
      "interval": 60,
      "endpoint": "/metrics"
    }
  }
}
```
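
With `format` set to `json`, the log file can be followed and filtered with `jq`; the `level` field name is an assumption about how the router structures its log entries.

```bash
# Follow the structured log and show only warnings and errors (field name assumed)
tail -f ./logs/router.log | jq -c 'select(.level == "warn" or .level == "error")'
```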

Health Monitoring

```bash
# Health check endpoint
curl http://localhost:8080/health

# Detailed health status
curl http://localhost:8080/health/detailed

# Model availability
curl http://localhost:8080/health/models

# Performance metrics
curl http://localhost:8080/metrics
```

Advanced Features

Request Transformation

```javascript
// Custom request transformer
const requestTransformer = {
  transform: (request, targetModel) => {
    // Modify request based on target model
    if (targetModel === 'gpt-4') {
      request.temperature = 0.3; // Lower temperature for code
    } else if (targetModel === 'claude-3-sonnet') {
      request.max_tokens = 4096; // Adjust token limit
    }
    return request;
  }
};

// Response transformer
const responseTransformer = {
  transform: (response, sourceModel) => {
    // Standardize response format
    return {
      content: response.content || response.message,
      model: sourceModel,
      usage: response.usage,
      timestamp: new Date().toISOString()
    };
  }
};
```

Plugin System

```javascript
// Router plugin interface
class RouterPlugin {
  constructor(config) {
    this.config = config;
  }

  onRequest(request, context) {
    // Pre-process request
    return request;
  }

  onResponse(response, context) {
    // Post-process response
    return response;
  }

  onError(error, context) {
    // Handle errors
    console.error('Router error:', error);
  }
}

// Load plugins
const plugins = [
  new CostTrackingPlugin(),
  new CachePlugin(),
  new RateLimitPlugin()
];
```

Caching System

```json
{
  "cache": {
    "enabled": true,
    "type": "redis",
    "connection": {
      "host": "localhost",
      "port": 6379,
      "password": "${REDIS_PASSWORD}"
    },
    "ttl": 3600,
    "keyPrefix": "claude-router:",
    "compression": true,
    "rules": [
      { "pattern": "explain*", "ttl": 7200 },
      { "pattern": "generate*", "ttl": 1800 }
    ]
  }
}
```
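
With Redis as the cache backend and the `keyPrefix` above, cached entries can be inspected directly; this is a standard `redis-cli` session, not a router command, and the specific key name is illustrative.

```bash
# List cached router entries by prefix
redis-cli --scan --pattern 'claude-router:*'

# Inspect the remaining TTL of a specific cache entry (key name is illustrative)
redis-cli TTL 'claude-router:example-key'
```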

Security and Authentication

API Key Management

```json
{
  "security": {
    "authentication": {
      "enabled": true,
      "type": "bearer",
      "keys": [
        { "key": "router-key-1", "permissions": ["read", "write"], "rateLimit": 1000 },
        { "key": "router-key-2", "permissions": ["read"], "rateLimit": 100 }
      ]
    },
    "encryption": {
      "enabled": true,
      "algorithm": "AES-256-GCM",
      "keyRotation": 86400
    }
  }
}
```
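
With bearer authentication enabled, clients must present one of the configured keys. A request using `router-key-1` from the example above might look like the sketch below; the endpoint and payload mirror the proxy examples elsewhere in this guide and are assumptions about your deployment.

```bash
# Authenticated request through the router using a configured bearer key
curl -s http://localhost:8080/v1/messages \
  -H "Authorization: Bearer router-key-1" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-sonnet", "max_tokens": 128, "messages": [{"role": "user", "content": "ping"}]}'
```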

Rate Limiting

```json
{
  "rateLimit": {
    "enabled": true,
    "global": { "requests": 1000, "window": 3600 },
    "perKey": { "requests": 100, "window": 3600 },
    "perModel": {
      "gpt-4": { "requests": 50, "window": 3600 },
      "claude-3-sonnet": { "requests": 100, "window": 3600 }
    }
  }
}
```

Troubleshooting

Common Issues

**Router not starting**

  • Check port availability
  • Verify the configuration file
  • Check API key validity
  • Review log files

**Model connection failures**

  • Test API endpoints
  • Verify API keys
  • Check network connectivity
  • Review rate limits

**Performance issues**

  • Monitor memory usage
  • Check cache configuration
  • Review routing rules
  • Optimize model selection

Debug Mode

```bash
# Enable debug logging
DEBUG=claude-router:* npm start

# Verbose logging
claude-code-router --log-level debug

# Request tracing
curl -H "X-Debug: true" http://localhost:8080/v1/messages

# Performance profiling
curl http://localhost:8080/debug/profile
```

Performance Optimization

```json
{
  "performance": {
    "connectionPool": {
      "maxConnections": 100,
      "timeout": 30000,
      "keepAlive": true
    },
    "compression": {
      "enabled": true,
      "level": 6,
      "threshold": 1024
    },
    "clustering": {
      "enabled": true,
      "workers": "auto"
    }
  }
}
```

Deployment

Production Deployment

```yaml
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claude-router
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claude-router
  template:
    metadata:
      labels:
        app: claude-router
    spec:
      containers:
        - name: claude-router
          image: claudecode/router:latest
          ports:
            - containerPort: 8080
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: openai
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: api-keys
                  key: anthropic
```
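
The deployment references a secret named `api-keys` with `openai` and `anthropic` entries. One way to create it and roll out the manifest, assuming the YAML above is saved as `claude-router.yaml`, is:

```bash
# Create the secret the deployment expects
kubectl create secret generic api-keys \
  --from-literal=openai="$OPENAI_API_KEY" \
  --from-literal=anthropic="$ANTHROPIC_API_KEY"

# Apply the deployment and watch the rollout
kubectl apply -f claude-router.yaml
kubectl rollout status deployment/claude-router
```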

Docker Compose

```yaml
version: '3.8'
services:
  claude-router:
    image: claudecode/router:latest
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - ROUTER_CONFIG=/app/config/production.json
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    restart: unless-stopped

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```
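
Assuming the compose file above is saved as `docker-compose.yml` in the working directory, it can be brought up and checked with standard Docker commands:

```bash
# Start the router and Redis in the background
docker compose up -d

# Tail the router's logs to confirm it started cleanly
docker compose logs -f claude-router
```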

Resources

  • [Claude Code Router GitHub](https://github.com/musistudio/claude-code-router)
  • Documentation
  • Community Discord
  • Video Tutorials
  • OpenRouter Integration