
# Continue.dev Cheatsheet


## Overview

Continue.dev is an open-source AI code assistant that puts developers in complete control. It is model-agnostic, highly customizable, and works with any LLM provider or locally hosted models. Ideal for teams that require flexibility and privacy.

Note: Free and open-source; supports multiple LLM providers.

## Installation

### VS Code

```bash
# Install from VS Code Marketplace
# Search for "Continue" in Extensions
# Or install via command line
code --install-extension Continue.continue

# Verify installation
# Check Extensions panel for Continue
```
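To confirm the install from a terminal, the VS Code CLI can list installed extensions (a quick check, assuming `code` is on your PATH):

```bash
# Should print Continue.continue if the extension is installed
code --list-extensions | grep -i continue
```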

### JetBrains IDEs

```bash
# Install from JetBrains Plugin Repository
# Go to File > Settings > Plugins
# Search for "Continue" and install
# Restart IDE after installation
```

### Manual Installation
```bash
# Clone repository
git clone https://github.com/continuedev/continue.git
cd continue

# Install dependencies
npm install

# Build extension
npm run build

# Package for VS Code
npm run package
```
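If `npm run package` produces a `.vsix` archive (the exact output filename below is an assumption), it can be installed straight into VS Code:

```bash
# Install the locally built extension package (filename is illustrative)
code --install-extension ./continue-*.vsix
```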

## Initial Configuration

### Basic Configuration
```json
// ~/.continue/config.json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiKey": "your-openai-api-key"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "your-mistral-api-key"
  }
}
```

### Multiple Model Configuration

```json
{
  "models": [
    {
      "title": "GPT-4 Turbo",
      "provider": "openai",
      "model": "gpt-4-turbo-preview",
      "apiKey": "your-openai-key"
    },
    {
      "title": "Claude 3",
      "provider": "anthropic",
      "model": "claude-3-opus-20240229",
      "apiKey": "your-anthropic-key"
    },
    {
      "title": "Local Llama",
      "provider": "ollama",
      "model": "llama2:7b"
    }
  ]
}
```

## Model Providers

### OpenAI

```json
{
  "title": "GPT-4",
  "provider": "openai",
  "model": "gpt-4",
  "apiKey": "sk-...",
  "apiBase": "https://api.openai.com/v1",
  "requestOptions": {
    "temperature": 0.7,
    "maxTokens": 2048
  }
}
```

### Anthropic Claude

```json
{
  "title": "Claude 3",
  "provider": "anthropic",
  "model": "claude-3-opus-20240229",
  "apiKey": "sk-ant-...",
  "requestOptions": {
    "temperature": 0.5,
    "maxTokens": 4096
  }
}
```

### Local Models (Ollama)

```json
{
  "title": "Local Code Llama",
  "provider": "ollama",
  "model": "codellama:7b",
  "apiBase": "http://localhost:11434",
  "requestOptions": {
    "temperature": 0.2,
    "numPredict": 1024
  }
}
```
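For the Ollama provider, the model has to be available locally before Continue can call it. A minimal sketch, assuming a default Ollama install listening on port 11434:

```bash
# Pull the model referenced in the config above
ollama pull codellama:7b

# Verify the Ollama server is running and the model is listed
curl http://localhost:11434/api/tags
```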

### Azure OpenAI

```json
{
  "title": "Azure GPT-4",
  "provider": "azure",
  "model": "gpt-4",
  "apiKey": "your-azure-key",
  "apiBase": "https://your-resource.openai.azure.com",
  "apiVersion": "2023-12-01-preview",
  "deploymentName": "gpt-4-deployment"
}
```

### OpenRouter

```json
{
  "title": "OpenRouter GPT-4",
  "provider": "openrouter",
  "model": "openai/gpt-4",
  "apiKey": "sk-or-...",
  "requestOptions": {
    "temperature": 0.7
  }
}
```

## Basic Usage

### Chat Interface

```bash
# Open Continue chat
Ctrl+Shift+M (VS Code)
Ctrl+Shift+J (JetBrains)

# Quick chat
Ctrl+I (VS Code)
Ctrl+Shift+I (JetBrains)

# Chat with selection
# 1. Select code
# 2. Right-click > "Continue: Chat"
# 3. Or use Ctrl+Shift+M
```

### Code Generation

```javascript
// Type a comment and use Continue
// Generate a REST API endpoint for user authentication
// Continue will suggest an implementation

// Or use chat:
// "Create a React component for file upload with drag and drop"
```

### Code Explanation

```bash
# Select code and ask:
"Explain this function"
"What does this regex do?"
"How does this algorithm work?"
"What are the potential issues with this code?"
```

## Keyboard Shortcuts

| Shortcut | Action | IDE |
|----------|--------|-----|
| Ctrl+Shift+M | Open chat | VS Code |
| Ctrl+I | Quick chat | VS Code |
| Ctrl+Shift+L | Select code for chat | VS Code |
| Ctrl+Shift+J | Open chat | JetBrains |
| Ctrl+Shift+I | Quick chat | JetBrains |
| Tab | Accept autocomplete | All |
| Esc | Dismiss autocomplete | All |
| Ctrl+Shift+Enter | Apply suggestion | All |

## Advanced Configuration

### Custom Slash Commands

```json
{
  "slashCommands": [
    {
      "name": "test",
      "description": "Generate unit tests",
      "prompt": "Generate comprehensive unit tests for the selected code. Include edge cases and mock dependencies."
    },
    {
      "name": "optimize",
      "description": "Optimize code performance",
      "prompt": "Analyze the selected code and suggest performance optimizations. Focus on time complexity and memory usage."
    },
    {
      "name": "security",
      "description": "Security review",
      "prompt": "Review the selected code for security vulnerabilities. Check for common issues like SQL injection, XSS, and authentication flaws."
    }
  ]
}
```
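Once defined, a custom slash command is invoked by name from the Continue chat input, typically with the relevant code selected. For example:

```bash
# In the Continue chat, with code selected:
/test
/optimize
/security
```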

### Context Providers

```json
{
  "contextProviders": [
    {
      "name": "codebase",
      "params": {
        "nRetrieve": 25,
        "nFinal": 5,
        "useReranking": true
      }
    },
    {
      "name": "file",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "git",
      "params": {}
    },
    {
      "name": "github",
      "params": {
        "repos": [
          {
            "owner": "microsoft",
            "repo": "vscode"
          }
        ]
      }
    }
  ]
}
```

### Custom Model Configuration

```json
{
  "models": [
    {
      "title": "Custom Local Model",
      "provider": "openai",
      "model": "custom-model",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed",
      "requestOptions": {
        "temperature": 0.3,
        "maxTokens": 2048,
        "stop": ["<|endoftext|>"]
      }
    }
  ]
}
```
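The `apiBase` above points at an OpenAI-compatible server (for example vLLM or LM Studio; the server and port here are assumptions). It is worth checking that the endpoint answers before wiring it into Continue:

```bash
# List the models exposed by the local OpenAI-compatible server
curl http://localhost:8000/v1/models

# Minimal chat completion request to confirm the endpoint responds
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "custom-model", "messages": [{"role": "user", "content": "Hello"}]}'
```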

## Autocomplete Configuration

### Tab Autocomplete

```json
{
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "your-mistral-key"
  },
  "tabAutocompleteOptions": {
    "useCopyBuffer": true,
    "maxPromptTokens": 1024,
    "prefixPercentage": 0.85,
    "maxSuffixPercentage": 0.25,
    "debounceDelay": 300
  }
}
```

### Autocomplete Providers

```json
{
  "tabAutocompleteModel": {
    "title": "Local Autocomplete",
    "provider": "ollama",
    "model": "deepseek-coder:6.7b",
    "requestOptions": {
      "temperature": 0.1,
      "numPredict": 256
    }
  }
}
```

## Context Management

### File Context

```bash
# Add files to context
@file:src/utils/auth.js

# Add multiple files
@file:src/components/Button.tsx @file:src/styles/button.css

# Add entire folders
@folder:src/components
```

### Codebase Context

```bash
# Search codebase for relevant context
@codebase "authentication functions"

# Find similar code patterns
@codebase "React hooks for API calls"

# Search for specific implementations
@codebase "error handling middleware"
```

### Git Context

```bash
# Reference git history
@git "recent changes to authentication"

# Compare branches
@git "differences between main and feature-branch"

# Reference specific commits
@git "changes in commit abc123"
```

## Custom Integrations

### Database Integration

```json
{
  "contextProviders": [
    {
      "name": "database",
      "params": {
        "connectionString": "postgresql://user:pass@localhost:5432/db",
        "tables": ["users", "orders", "products"]
      }
    }
  ]
}
```

### API Documentation

```json
{
  "contextProviders": [
    {
      "name": "docs",
      "params": {
        "urls": [
          "https://docs.stripe.com/api",
          "https://docs.github.com/en/rest"
        ]
      }
    }
  ]
}
```

### Jira Integration

```json
{
  "contextProviders": [
    {
      "name": "jira",
      "params": {
        "domain": "your-company.atlassian.net",
        "token": "your-jira-token",
        "email": "your-email@company.com"
      }
    }
  ]
}
```

## Language-Specific Configuration

### Python Configuration

```json
{
  "models": [
    {
      "title": "Python Specialist",
      "provider": "openai",
      "model": "gpt-4",
      "systemMessage": "You are a Python expert. Always follow PEP 8 style guidelines and use type hints."
    }
  ]
}
```

### JavaScript/TypeScript

```json
{
  "models": [
    {
      "title": "TS Expert",
      "provider": "anthropic",
      "model": "claude-3-opus-20240229",
      "systemMessage": "You are a TypeScript expert. Always use strict typing and modern ES6+ features."
    }
  ]
}
```

### Rust Configuration

```json
{
  "models": [
    {
      "title": "Rust Helper",
      "provider": "ollama",
      "model": "codellama:7b",
      "systemMessage": "You are a Rust expert. Focus on memory safety, performance, and idiomatic Rust code."
    }
  ]
}
```

## Team Configuration

### Shared Configuration

```json
// .continue/config.json (in project root)
{
  "models": [
    {
      "title": "Team GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiKey": "${OPENAI_API_KEY}"
    }
  ],
  "slashCommands": [
    {
      "name": "review",
      "description": "Code review following team standards",
      "prompt": "Review this code according to our team's coding standards: ${TEAM_STANDARDS}"
    }
  ]
}
```

### Environment Variables

```bash
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
TEAM_STANDARDS="Use TypeScript, follow ESLint rules, include unit tests"

# Use in config
{
  "apiKey": "${OPENAI_API_KEY}"
}
```
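Another common option is to export the keys in your shell profile so that any IDE started from that shell inherits them (the profile path is an assumption):

```bash
# Make the keys available to processes started from this shell
echo 'export OPENAI_API_KEY=sk-...' >> ~/.bashrc
echo 'export ANTHROPIC_API_KEY=sk-ant-...' >> ~/.bashrc
source ~/.bashrc
```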

## Performance Optimization

### Caching Configuration

```json
{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-ada-002",
    "apiKey": "your-key"
  },
  "reranker": {
    "name": "cohere",
    "params": {
      "apiKey": "your-cohere-key",
      "model": "rerank-english-v2.0"
    }
  }
}
```

### Local Embeddings

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "apiBase": "http://localhost:11434"
  }
}
```
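As with the chat models, the embedding model has to exist in the local Ollama instance first; a one-line sketch:

```bash
# Pull the embedding model referenced above
ollama pull nomic-embed-text
```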

## Troubleshooting

### Common Issues

```bash
# Extension not loading
# 1. Check VS Code/JetBrains version compatibility
# 2. Restart IDE
# 3. Reinstall extension
# 4. Check Continue logs

# API key issues
# 1. Verify API key format
# 2. Check API key permissions
# 3. Test API key with curl
# 4. Check rate limits

# Model not responding
# 1. Check internet connection
# 2. Verify model name
# 3. Check API endpoint
# 4. Review request options
```
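For the API key checks, a quick curl against the provider usually shows whether the key itself is the problem (OpenAI and Anthropic shown; the headers are the providers' documented ones):

```bash
# OpenAI: a valid key returns a JSON list of models
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# Anthropic: a valid key returns a JSON message instead of an auth error
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-opus-20240229", "max_tokens": 16, "messages": [{"role": "user", "content": "ping"}]}'
```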

### Debug Mode

```json
{
  "allowAnonymousTelemetry": false,
  "logLevel": "debug"
}
```

### Log Analysis

```bash
# VS Code logs location
# Windows: %APPDATA%\Code\logs\
# macOS: ~/Library/Logs/Code/
# Linux: ~/.config/Code/logs/

# JetBrains logs
# Check IDE logs directory
# Help > Show Log in Explorer/Finder
```
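A couple of shell commands can speed up digging through those log directories (Linux paths shown; adjust for your OS):

```bash
# Show the most recent VS Code log sessions
ls -lt ~/.config/Code/logs/ | head

# Find files mentioning Continue across all sessions
grep -ril "continue" ~/.config/Code/logs/
```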

## Best Practices

### Effective Prompting

```javascript
// ❌ Vague request
"Fix this code"

// ✅ Specific request
"Optimize this function for better performance and add error handling for edge cases"

// ❌ No context
"Create a component"

// ✅ With context
"Create a React component for displaying user profiles with TypeScript interfaces and proper prop validation"
```

### Context Management

```bash
# Use relevant context providers
@codebase "similar authentication patterns"
@file:src/types/user.ts

# Be specific about requirements
"Using the User interface from @file:src/types/user.ts, create a validation function"
```

### Model Selection

```bash
# Use appropriate models for tasks
# - GPT-4: Complex reasoning, architecture decisions
# - Claude: Long context, detailed explanations
# - Local models: Privacy, offline work
# - Specialized models: Domain-specific tasks
```

## Resources

- [Continue.dev Website](LINK_7)
- [GitHub Repository](LINK_7)
- [Documentation](LINK_7)
- [VS Code Extension](LINK_7)
- [JetBrains Plugin](LINK_7)
- [Discord Community](LINK_7)
- [Configuration Examples](LINK_7)