# OpenAI Codex Code Generation Cheatsheet

## Overview

OpenAI Codex was a powerful AI system for translating natural language into code, able to understand and generate code in dozens of programming languages. Based on the GPT-3 architecture and trained on billions of lines of publicly available code, Codex powered GitHub Copilot and provided advanced code completion, generation, and explanation capabilities. It excelled at understanding context and could generate functions, classes, and even complete applications from natural-language descriptions.

> ⚠️ Note: The OpenAI Codex API was deprecated in March 2023. This guide covers its historical usage and the migration to GPT-3.5/GPT-4 models for code-generation tasks.

## Migrating to Current Models

### GPT-3.5/GPT-4 for Code Generation
```bash
# Install the OpenAI Python library
pip install openai
```

```python
# Basic setup for code generation with current models
import openai

openai.api_key = "your-api-key-here"

# Use GPT-3.5-turbo or GPT-4 for code generation
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[
        {
            "role": "system",
            "content": "You are an expert programmer. Generate clean, efficient, and well-documented code."
        },
        {
            "role": "user",
            "content": "Write a Python function to implement binary search"
        }
    ],
    max_tokens=1000,
    temperature=0.1  # Lower temperature for more deterministic code
)

print(response.choices[0].message.content)
```
### Legacy Codex API Usage (Historical Reference)
```python
# Historical Codex API usage (deprecated)
import openai

openai.api_key = "your-api-key-here"

# Codex completion (no longer available)
response = openai.Completion.create(
    engine="code-davinci-002",  # Deprecated
    prompt="# Function to calculate factorial\ndef factorial(n):",
    max_tokens=150,
    temperature=0,
    stop=["#", "\n\n"]
)

print(response.choices[0].text)
```
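For code that still builds Codex-style completion prompts, the conversion to the chat format can be mechanized with a small helper. This is a hypothetical sketch (`completion_to_chat` is not part of the OpenAI library; the wording of the user turn is an assumption):

```python
# Hypothetical helper: wrap a legacy Codex-style completion prompt
# into the chat-message format expected by gpt-3.5-turbo / gpt-4.
def completion_to_chat(prompt: str,
                       system: str = "You are an expert programmer.") -> list:
    """Convert a Codex completion prompt into a chat `messages` list."""
    return [
        {"role": "system", "content": system},
        # The old completion prompt becomes the user turn; the model now
        # returns the continuation in response.choices[0].message.content.
        {"role": "user", "content": f"Continue this code:\n\n{prompt}"},
    ]

messages = completion_to_chat("# Function to calculate factorial\ndef factorial(n):")
```

The resulting list can be passed directly as the `messages` argument in the chat examples above.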
## Modern Code Generation Setup

### Python SDK Configuration
````python
#!/usr/bin/env python3
# modern-codex-replacement.py
import openai
import os
import json
from typing import List, Dict, Optional
from datetime import datetime


class ModernCodeGenerator:
    def __init__(self, api_key: str = None):
        self.client = openai.OpenAI(
            api_key=api_key or os.getenv("OPENAI_API_KEY")
        )
        self.conversation_history = []

    def generate_code(self, prompt: str, language: str = "python",
                      model: str = "gpt-3.5-turbo") -> str:
        """Generate code using modern OpenAI models"""
        system_prompt = f"""
        You are an expert {language} developer. Generate clean, efficient,
        and well-documented code that follows best practices. Include:
        - Proper error handling
        - Type hints (where applicable)
        - Comprehensive docstrings
        - Security considerations
        - Performance optimizations
        """
        try:
            response = self.client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": prompt}
                ],
                max_tokens=2000,
                temperature=0.1,
                top_p=1.0,
                frequency_penalty=0.0,
                presence_penalty=0.0
            )
            generated_code = response.choices[0].message.content
            # Store in conversation history
            self.conversation_history.append({
                "prompt": prompt,
                "language": language,
                "model": model,
                "response": generated_code,
                "timestamp": datetime.now().isoformat()
            })
            return generated_code
        except Exception as e:
            return f"Error generating code: {e}"

    def complete_code(self, partial_code: str, language: str = "python",
                      model: str = "gpt-3.5-turbo") -> str:
        """Complete partial code snippets"""
        prompt = f"""
        Complete this {language} code snippet. Provide only the missing parts:

        ```{language}
        {partial_code}
        ```

        Continue the code logically and maintain the existing style and patterns.
        """
        try:
            response = self.client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=1000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error completing code: {e}"

    def explain_code(self, code: str, language: str = "python") -> str:
        """Explain existing code"""
        prompt = f"""
        Explain this {language} code in detail:

        ```{language}
        {code}
        ```

        Provide:
        - High-level overview
        - Line-by-line explanation of complex parts
        - Purpose and functionality
        - Potential improvements
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=1500,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error explaining code: {e}"

    def fix_code(self, buggy_code: str, error_message: str = None,
                 language: str = "python") -> str:
        """Fix buggy code"""
        prompt = f"""
        Fix this {language} code that has issues:

        ```{language}
        {buggy_code}
        ```
        """
        if error_message:
            prompt += f"\nError message: {error_message}"
        prompt += """
        Provide:
        - Corrected code
        - Explanation of what was wrong
        - Prevention strategies for similar issues
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=1500,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error fixing code: {e}"

    def generate_tests(self, code: str, language: str = "python") -> str:
        """Generate test cases for code"""
        test_frameworks = {
            "python": "pytest",
            "javascript": "jest",
            "java": "junit",
            "csharp": "nunit",
            "go": "testing package"
        }
        framework = test_frameworks.get(language, "appropriate testing framework")
        prompt = f"""
        Generate comprehensive test cases for this {language} code using {framework}:

        ```{language}
        {code}
        ```

        Include:
        - Unit tests for all functions/methods
        - Edge cases and boundary conditions
        - Error handling tests
        - Mock objects where needed
        - Test data setup and teardown
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=2000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error generating tests: {e}"


# Example usage
def main():
    generator = ModernCodeGenerator()

    # Generate a REST API client
    prompt = """
    Create a Python class for a REST API client that handles:
    - GET, POST, PUT, DELETE requests
    - Authentication with API keys
    - Retry logic with exponential backoff
    - Rate limiting
    - Response caching
    - Comprehensive error handling
    """
    code = generator.generate_code(prompt, "python", "gpt-4")
    print("Generated Code:")
    print("=" * 50)
    print(code)


if __name__ == "__main__":
    main()
````
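Chat models usually wrap generated code in markdown fences, so a post-processing step is useful before writing output to a file. A minimal sketch (the `extract_code` helper and its regex are assumptions, not part of any SDK; the fence string is built programmatically to keep the example readable):

```python
import re

FENCE = "`" * 3  # a literal triple-backtick markdown fence

def extract_code(response_text: str) -> str:
    """Pull the first fenced code block out of a model response;
    fall back to the raw text when no fence is present."""
    match = re.search(FENCE + r"[a-zA-Z]*\n(.*?)" + FENCE,
                      response_text, re.DOTALL)
    return match.group(1).strip() if match else response_text.strip()

reply = (f"Here is the function:\n{FENCE}python\n"
         "def add(a, b):\n    return a + b\n"
         f"{FENCE}\nHope this helps!")
print(extract_code(reply))
```

A helper like this pairs naturally with `generate_code` above when the result needs to be saved or executed rather than displayed.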
## Language-Specific Code Generation

### Python Development
```python
# Python-specific code generation examples

# Data science and machine learning
prompt = """
Create a Python script for machine learning that:
- Loads data from CSV files
- Performs data preprocessing and feature engineering
- Implements multiple ML models (Random Forest, SVM, Neural Network)
- Evaluates models with cross-validation
- Visualizes results and feature importance
- Saves the best model for deployment
"""

# Web development with FastAPI
prompt = """
Build a FastAPI application that:
- Implements user authentication with JWT
- Uses SQLAlchemy for database operations
- Includes CRUD operations for a blog system
- Implements rate limiting and CORS
- Includes comprehensive error handling
- Provides OpenAPI documentation
"""

# DevOps automation
prompt = """
Create a Python automation script that:
- Manages Docker containers
- Deploys applications to Kubernetes
- Monitors application health
- Sends alerts via Slack/email
- Implements rollback functionality
- Logs all operations
"""
```
### JavaScript/TypeScript Development

```javascript
// JavaScript/TypeScript code generation

// React component with hooks
const prompt = `
Create a React TypeScript component that:
- Implements a data table with sorting, filtering, and pagination
- Uses React Query for data fetching
- Includes proper TypeScript interfaces
- Implements accessibility features (ARIA labels, keyboard navigation)
- Uses CSS modules for styling
- Includes comprehensive error boundaries
`;

// Node.js microservice
const prompt2 = `
Build a Node.js microservice with TypeScript that:
- Implements GraphQL API with Apollo Server
- Uses Prisma for database operations
- Includes authentication middleware
- Implements caching with Redis
- Uses Winston for logging
- Includes health check endpoints
`;

// Frontend build optimization
const prompt3 = `
Create a Webpack configuration that:
- Optimizes bundle size with code splitting
- Implements tree shaking
- Uses service workers for caching
- Includes source maps for debugging
- Supports hot module replacement
- Generates performance reports
`;
```
### Go Development

```go
// Go-specific code generation

// Microservice with gRPC
prompt := `
Create a Go microservice that:
- Implements gRPC server with protocol buffers
- Uses GORM for database operations
- Includes middleware for logging and authentication
- Implements graceful shutdown
- Uses Prometheus for metrics
- Includes comprehensive error handling
`

// Concurrent data processing
prompt2 := `
Implement a Go program that:
- Processes large datasets concurrently
- Uses worker pools with goroutines
- Implements backpressure handling
- Includes progress monitoring
- Uses channels for communication
- Handles graceful cancellation
`
```
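The language-specific prompts above share a common shape: a task plus a checklist of language-specific requirements. That shape can be factored into a small template helper; `LANGUAGE_HINTS` and `build_prompt` below are illustrative names, not an established API, and the hint strings are condensed from the checklists above:

```python
# Hypothetical per-language prompt templates, condensed from the
# language-specific checklists in the sections above.
LANGUAGE_HINTS = {
    "python": "Use type hints, docstrings, and PEP 8 style.",
    "typescript": "Use strict typing and proper interfaces.",
    "go": "Follow Go idioms: error values, goroutines, channels.",
}

def build_prompt(language: str, task: str) -> str:
    """Compose a code-generation prompt with language-specific guidance."""
    hint = LANGUAGE_HINTS.get(language, "Follow the language's best practices.")
    return f"Generate {language} code: {task}\n{hint}"

print(build_prompt("go", "a worker pool"))
```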
## Advanced Code Generation Techniques

### Context-Aware Generation
````python
#!/usr/bin/env python3
# context-aware-generation.py
import openai
import os
import ast
from typing import List, Dict


class ContextAwareGenerator:
    def __init__(self):
        self.client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.project_context = {}

    def analyze_codebase(self, directory: str) -> Dict:
        """Analyze existing codebase for context"""
        context = {
            "languages": set(),
            "frameworks": set(),
            "patterns": set(),
            "dependencies": set(),
            "file_structure": {}
        }
        # Analyze Python files
        for root, dirs, files in os.walk(directory):
            for file in files:
                if file.endswith('.py'):
                    file_path = os.path.join(root, file)
                    try:
                        with open(file_path, 'r') as f:
                            content = f.read()
                        # Parse AST for imports and patterns
                        tree = ast.parse(content)
                        for node in ast.walk(tree):
                            if isinstance(node, ast.Import):
                                for alias in node.names:
                                    context["dependencies"].add(alias.name)
                            elif isinstance(node, ast.ImportFrom):
                                if node.module:
                                    context["dependencies"].add(node.module)
                        context["languages"].add("python")
                        # Detect frameworks
                        if "flask" in content.lower():
                            context["frameworks"].add("Flask")
                        if "django" in content.lower():
                            context["frameworks"].add("Django")
                        if "fastapi" in content.lower():
                            context["frameworks"].add("FastAPI")
                    except Exception as e:
                        print(f"Error analyzing {file_path}: {e}")
        self.project_context = context
        return context

    def generate_with_context(self, prompt: str, language: str = "python") -> str:
        """Generate code with project context"""
        context_info = ""
        if self.project_context:
            context_info = f"""
            Project Context:
            - Languages: {', '.join(self.project_context.get('languages', []))}
            - Frameworks: {', '.join(self.project_context.get('frameworks', []))}
            - Key Dependencies: {', '.join(list(self.project_context.get('dependencies', []))[:10])}

            Please generate code that fits with this existing codebase.
            """
        full_prompt = f"{context_info}\n\nRequest: {prompt}"
        try:
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {
                        "role": "system",
                        "content": f"You are an expert {language} developer working on an existing project. Generate code that integrates well with the existing codebase."
                    },
                    {"role": "user", "content": full_prompt}
                ],
                max_tokens=2000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error generating contextual code: {e}"

    def suggest_refactoring(self, code: str, language: str = "python") -> str:
        """Suggest refactoring based on project context"""
        context_info = ""
        if self.project_context:
            frameworks = ', '.join(self.project_context.get('frameworks', []))
            if frameworks:
                context_info = f"This project uses {frameworks}. "
        prompt = f"""
        {context_info}Analyze this {language} code and suggest refactoring:

        ```{language}
        {code}
        ```

        Consider:
        - Code patterns used in the project
        - Framework-specific best practices
        - Performance optimizations
        - Maintainability improvements
        - Security enhancements
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=2000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error suggesting refactoring: {e}"


def main():
    generator = ContextAwareGenerator()

    # Analyze current project
    context = generator.analyze_codebase(".")
    print("Project Context:")
    for key, value in context.items():
        print(f"  {key}: {value}")

    # Generate code with context
    prompt = "Create a new API endpoint for user management"
    code = generator.generate_with_context(prompt)
    print("\nGenerated Code:")
    print("=" * 50)
    print(code)


if __name__ == "__main__":
    main()
````
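The AST walk at the heart of `analyze_codebase` can be exercised on its own. This standalone sketch applies the same `ast.Import`/`ast.ImportFrom` handling to a source string instead of a directory tree:

```python
import ast

def extract_dependencies(source: str) -> set:
    """Collect top-level module names imported by a Python source string."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # `import os, sys` yields one alias per module
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # `from flask import Flask` contributes "flask"
            deps.add(node.module)
    return deps

sample = "import os\nfrom flask import Flask\n"
print(sorted(extract_dependencies(sample)))  # ['flask', 'os']
```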
### Multi-Step Code Generation
````python
#!/usr/bin/env python3
# multi-step-generation.py
import openai
import os
from typing import List, Dict


class MultiStepGenerator:
    def __init__(self):
        self.client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.generation_steps = []

    def plan_implementation(self, requirement: str) -> List[str]:
        """Break down complex requirements into implementation steps"""
        prompt = f"""
        Break down this software requirement into detailed implementation steps:

        Requirement: {requirement}

        Provide a step-by-step implementation plan with:
        1. Architecture decisions
        2. Component breakdown
        3. Implementation order
        4. Dependencies between components
        5. Testing strategy

        Format as a numbered list of specific, actionable steps.
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=1500,
                temperature=0.1
            )
            plan = response.choices[0].message.content
            # Extract steps (simple parsing)
            steps = []
            for line in plan.split('\n'):
                if line.strip() and (line.strip()[0].isdigit() or line.strip().startswith('-')):
                    steps.append(line.strip())
            self.generation_steps = steps
            return steps
        except Exception as e:
            return [f"Error creating plan: {e}"]

    def implement_step(self, step: str, previous_code: str = "",
                       language: str = "python") -> str:
        """Implement a specific step"""
        context = ""
        if previous_code:
            context = f"""
            Previous implementation:

            ```{language}
            {previous_code}
            ```

            Build upon this existing code.
            """
        prompt = f"""
        {context}
        Implement this specific step: {step}

        Provide complete, working {language} code that:
        - Implements only this step
        - Integrates with previous code (if any)
        - Includes proper error handling
        - Follows best practices
        - Includes comments explaining the implementation
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=2000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error implementing step: {e}"

    def generate_complete_solution(self, requirement: str,
                                   language: str = "python") -> Dict:
        """Generate complete solution using multi-step approach"""
        print(f"Planning implementation for: {requirement}")
        # Step 1: Create implementation plan
        steps = self.plan_implementation(requirement)
        print(f"Implementation plan created with {len(steps)} steps")
        # Step 2: Implement each step
        complete_code = ""
        step_implementations = []
        for i, step in enumerate(steps, 1):
            print(f"Implementing step {i}/{len(steps)}: {step[:50]}...")
            step_code = self.implement_step(step, complete_code, language)
            step_implementations.append({
                "step": step,
                "code": step_code,
                "step_number": i
            })
            # Accumulate code for next step
            complete_code += f"\n\n# Step {i}: {step}\n{step_code}"
        # Step 3: Review and optimize the complete solution
        optimized_code = self.optimize_complete_solution(complete_code, language)
        return {
            "requirement": requirement,
            "language": language,
            "plan": steps,
            "step_implementations": step_implementations,
            "complete_code": complete_code,
            "optimized_code": optimized_code
        }

    def optimize_complete_solution(self, code: str, language: str = "python") -> str:
        """Optimize the complete solution"""
        prompt = f"""
        Review and optimize this complete {language} solution:

        ```{language}
        {code}
        ```

        Optimize for:
        - Code organization and structure
        - Performance improvements
        - Error handling consistency
        - Code reuse and DRY principles
        - Documentation and comments
        - Security considerations

        Provide the optimized version with explanations of changes.
        """
        try:
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=3000,
                temperature=0.1
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error optimizing solution: {e}"


def main():
    generator = MultiStepGenerator()

    requirement = """
    Create a web scraping system that:
    - Scrapes product data from multiple e-commerce sites
    - Handles rate limiting and anti-bot measures
    - Stores data in a database
    - Provides a REST API to query the data
    - Includes monitoring and alerting
    - Supports distributed scraping across multiple workers
    """
    solution = generator.generate_complete_solution(requirement, "python")

    print("\nImplementation Plan:")
    print("=" * 50)
    for i, step in enumerate(solution["plan"], 1):
        print(f"{i}. {step}")

    print(f"\nComplete solution generated with {len(solution['step_implementations'])} steps")
    print("Check the generated files for detailed implementation.")

    # Save results
    with open("multi_step_solution.py", "w") as f:
        f.write(solution["optimized_code"])
    print("Optimized solution saved to: multi_step_solution.py")


if __name__ == "__main__":
    main()
````
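The plan-parsing heuristic used by `plan_implementation` can be tested in isolation. This sketch keeps only lines that look like numbered or bulleted plan items, exactly as the loop above does:

```python
def parse_plan(plan_text: str) -> list:
    """Keep lines that look like numbered or bulleted plan steps."""
    steps = []
    for line in plan_text.split("\n"):
        stripped = line.strip()
        # A step starts with a digit ("1. ...") or a dash ("- ...")
        if stripped and (stripped[0].isdigit() or stripped.startswith("-")):
            steps.append(stripped)
    return steps

plan = """Implementation plan:
1. Define the data model
2. Build the scraper workers
- Add retry handling
Notes: revisit later
"""
print(parse_plan(plan))
```

Note that this heuristic silently drops prose lines ("Implementation plan:", "Notes: ..."), which is the intended behavior when the model mixes commentary into its answer.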
## IDE and Editor Integration

### VS Code Integration
```json
// VS Code settings for modern code generation
{
  "openai.apiKey": "${env:OPENAI_API_KEY}",
  "openai.model": "gpt-3.5-turbo",
  "openai.maxTokens": 1000,
  "openai.temperature": 0.1,
  "openai.codeCompletion": true,
  "openai.codeExplanation": true,
  "openai.codeGeneration": true,
  "openai.languages": [
    "python",
    "javascript",
    "typescript",
    "go",
    "java",
    "cpp",
    "csharp"
  ]
}
```
### Vim/Neovim Plugin

```lua
-- Neovim Lua configuration for code generation
local function generate_code()
  local prompt = vim.fn.input("Code generation prompt: ")
  if prompt == "" then
    return
  end

  local language = vim.bo.filetype
  local api_key = os.getenv("OPENAI_API_KEY")
  if not api_key then
    print("Error: OPENAI_API_KEY not set")
    return
  end

  -- Call Python script for code generation
  local cmd = string.format(
    "python3 -c \"import openai; openai.api_key='%s'; response=openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'Generate %s code: %s'}], max_tokens=1000, temperature=0.1); print(response.choices[0].message.content)\"",
    api_key, language, prompt
  )
  local output = vim.fn.system(cmd)

  -- Insert generated code at cursor
  local lines = vim.split(output, "\n")
  vim.api.nvim_put(lines, "l", true, true)
end

-- Key mapping
vim.keymap.set("n", "<leader>cg", generate_code, { desc = "Generate code" })
```
### Emacs Integration

```elisp
;; Emacs Lisp configuration for code generation
(defun openai-generate-code (prompt)
  "Generate code using OpenAI API"
  (interactive "sCode generation prompt: ")
  (let* ((api-key (getenv "OPENAI_API_KEY"))
         (language (file-name-extension (buffer-file-name)))
         (python-script (format
                         "import openai; openai.api_key='%s'; response=openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'Generate %s code: %s'}], max_tokens=1000, temperature=0.1); print(response.choices[0].message.content)"
                         api-key language prompt)))
    (if api-key
        (let ((output (shell-command-to-string (format "python3 -c \"%s\"" python-script))))
          (insert output))
      (message "Error: OPENAI_API_KEY not set"))))

;; Key binding
(global-set-key (kbd "C-c g") 'openai-generate-code)
```
## Command-Line Tools

### CLI Code Generator
```bash
#!/bin/bash
# codegen-cli.sh - Command line code generator

OPENAI_API_KEY="${OPENAI_API_KEY}"
MODEL="gpt-3.5-turbo"
MAX_TOKENS=1500
TEMPERATURE=0.1

if [ -z "$OPENAI_API_KEY" ]; then
    echo "Error: OPENAI_API_KEY environment variable not set"
    exit 1
fi

show_help() {
    echo "Code Generation CLI Tool"
    echo "Usage:"
    echo "  codegen-cli.sh generate <language> <prompt>"
    echo "  codegen-cli.sh complete <file>"
    echo "  codegen-cli.sh explain <file>"
    echo "  codegen-cli.sh fix <file> [error_message]"
    echo "  codegen-cli.sh test <file>"
}

call_openai_api() {
    local system_prompt="$1"
    local user_prompt="$2"

    python3 << EOF
import openai
import json
import sys

openai.api_key = "$OPENAI_API_KEY"

try:
    response = openai.ChatCompletion.create(
        model="$MODEL",
        messages=[
            {"role": "system", "content": "$system_prompt"},
            {"role": "user", "content": "$user_prompt"}
        ],
        max_tokens=$MAX_TOKENS,
        temperature=$TEMPERATURE
    )
    print(response.choices[0].message.content)
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)
EOF
}

case "$1" in
    "generate")
        if [ $# -lt 3 ]; then
            echo "Usage: codegen-cli.sh generate <language> <prompt>"
            exit 1
        fi
        language="$2"
        prompt="$3"
        system_prompt="You are an expert $language developer. Generate clean, efficient, and well-documented code."
        user_prompt="Generate $language code: $prompt"
        echo "Generating $language code..."
        echo "=========================="
        call_openai_api "$system_prompt" "$user_prompt"
        ;;
    "complete")
        if [ $# -lt 2 ]; then
            echo "Usage: codegen-cli.sh complete <file>"
            exit 1
        fi
        file="$2"
        if [ ! -f "$file" ]; then
            echo "Error: File $file not found"
            exit 1
        fi
        # Detect language from file extension
        extension="${file##*.}"
        case "$extension" in
            "py") language="python" ;;
            "js") language="javascript" ;;
            "ts") language="typescript" ;;
            "go") language="go" ;;
            "java") language="java" ;;
            *) language="unknown" ;;
        esac
        code_content=$(cat "$file")
        user_prompt="Complete this $language code:\n\n$code_content"
        echo "Completing code in $file..."
        echo "=========================="
        call_openai_api "You are an expert programmer. Complete the provided code." "$user_prompt"
        ;;
    "explain")
        if [ $# -lt 2 ]; then
            echo "Usage: codegen-cli.sh explain <file>"
            exit 1
        fi
        file="$2"
        if [ ! -f "$file" ]; then
            echo "Error: File $file not found"
            exit 1
        fi
        code_content=$(cat "$file")
        user_prompt="Explain this code in detail:\n\n$code_content"
        echo "Explaining code in $file..."
        echo "=========================="
        call_openai_api "You are an expert programmer. Explain code clearly and comprehensively." "$user_prompt"
        ;;
    "fix")
        if [ $# -lt 2 ]; then
            echo "Usage: codegen-cli.sh fix <file> [error_message]"
            exit 1
        fi
        file="$2"
        error_message="$3"
        if [ ! -f "$file" ]; then
            echo "Error: File $file not found"
            exit 1
        fi
        code_content=$(cat "$file")
        user_prompt="Fix this buggy code:\n\n$code_content"
        if [ -n "$error_message" ]; then
            user_prompt="$user_prompt\n\nError message: $error_message"
        fi
        echo "Fixing code in $file..."
        echo "======================"
        call_openai_api "You are an expert debugger. Fix code issues and explain the problems." "$user_prompt"
        ;;
    "test")
        if [ $# -lt 2 ]; then
            echo "Usage: codegen-cli.sh test <file>"
            exit 1
        fi
        file="$2"
        if [ ! -f "$file" ]; then
            echo "Error: File $file not found"
            exit 1
        fi
        code_content=$(cat "$file")
        user_prompt="Generate comprehensive test cases for this code:\n\n$code_content"
        echo "Generating tests for $file..."
        echo "============================="
        call_openai_api "You are an expert test engineer. Generate comprehensive test cases." "$user_prompt"
        ;;
    *)
        show_help
        ;;
esac
```
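The script's extension-to-language detection is easy to mirror in Python when the same logic is needed outside the shell; `detect_language` is a hypothetical helper using the same mapping as the `case` statement above:

```python
from pathlib import Path

# Same extension-to-language mapping as the shell script's `case` block
EXTENSION_MAP = {
    ".py": "python",
    ".js": "javascript",
    ".ts": "typescript",
    ".go": "go",
    ".java": "java",
}

def detect_language(filename: str) -> str:
    """Map a file extension to a language name, defaulting to 'unknown'."""
    return EXTENSION_MAP.get(Path(filename).suffix, "unknown")

print(detect_language("app.ts"))  # typescript
```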
### Prompt Engineering for Code Generation

```python
# Best practices for code generation prompts
# Specific and detailed prompts
good_prompt = """
Create a Python class for a database connection pool that:
- Supports PostgreSQL and MySQL
- Implements connection pooling with configurable min/max connections
- Handles connection timeouts and retries
- Includes health checks for connections
- Provides async/await support
- Implements proper resource cleanup
- Includes comprehensive logging
- Follows the context manager protocol
"""
# Include constraints and requirements
constrained_prompt = """
Implement a rate limiter in Go that:
- Uses the token bucket algorithm
- Supports distributed rate limiting with Redis
- Handles burst traffic up to 1000 requests/second
- Includes metrics collection
- Must be thread-safe
- Should have minimal memory footprint
- Include benchmark tests
- Follow Go best practices and idioms
"""
# Request specific output format
formatted_prompt = """
Generate a React TypeScript component with the following structure:
1. Interface definitions for props and state
2. Component implementation with hooks
3. CSS module styles
4. Unit tests with React Testing Library
5. Storybook stories for documentation
Component requirements:
- Data table with sorting and filtering
- Pagination support
- Accessibility compliance (ARIA labels)
- Responsive design
- Error boundary handling
"""
```

### Code Quality Validation

```python
#!/usr/bin/env python3
# code-quality-validator.py
import ast
import subprocess
import tempfile
import os
from typing import List, Dict


class CodeQualityValidator:
    def __init__(self):
        self.quality_checks = {
            "python": self.validate_python_code,
            "javascript": self.validate_javascript_code,
            "typescript": self.validate_typescript_code
        }

    def validate_generated_code(self, code: str, language: str) -> Dict:
        """Validate generated code quality"""
        validator = self.quality_checks.get(language)
        if not validator:
            return {"error": f"No validator for language: {language}"}
        return validator(code)

    def validate_python_code(self, code: str) -> Dict:
        """Validate Python code quality"""
        issues = []
        # Syntax check
        try:
            ast.parse(code)
        except SyntaxError as e:
            issues.append(f"Syntax error: {e}")
            return {"valid": False, "issues": issues}
        # Create temporary file for linting
        with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
            f.write(code)
            temp_file = f.name
        try:
            # Run flake8 for style checking
            result = subprocess.run(
                ['flake8', '--max-line-length=88', temp_file],
                capture_output=True,
                text=True
            )
            if result.returncode != 0:
                issues.extend(result.stdout.split('\n'))
            # Run bandit for security checking
            result = subprocess.run(
                ['bandit', '-f', 'txt', temp_file],
                capture_output=True,
                text=True
            )
            if result.returncode != 0 and "No issues identified" not in result.stdout:
                issues.append(f"Security issues found: {result.stdout}")
            # Check for best practices
            if 'import *' in code:
                issues.append("Avoid wildcard imports")
            if 'except:' in code and 'except Exception:' not in code:
                issues.append("Use specific exception handling")
            # Check for documentation
            if 'def ' in code and '"""' not in code:
                issues.append("Missing docstrings for functions")
        finally:
            os.unlink(temp_file)
        return {
            "valid": len(issues) == 0,
            "issues": [issue for issue in issues if issue.strip()],
            "language": "python"
        }

    def validate_javascript_code(self, code: str) -> Dict:
        """Validate JavaScript code quality"""
        issues = []
        # Create temporary file
        with tempfile.NamedTemporaryFile(mode='w', suffix='.js', delete=False) as f:
            f.write(code)
            temp_file = f.name
        try:
            # Run ESLint if available
            result = subprocess.run(
                ['eslint', '--format', 'compact', temp_file],
                capture_output=True,
                text=True
            )
            if result.returncode != 0:
                issues.extend(result.stdout.split('\n'))
            # Basic checks
            if 'eval(' in code:
                issues.append("Avoid using eval() - security risk")
            if 'var ' in code:
                issues.append("Use 'let' or 'const' instead of 'var'")
        except FileNotFoundError:
            issues.append("ESLint not available - install for better validation")
        finally:
            os.unlink(temp_file)
        return {
            "valid": len(issues) == 0,
            "issues": [issue for issue in issues if issue.strip()],
            "language": "javascript"
        }

    def validate_typescript_code(self, code: str) -> Dict:
        """Validate TypeScript code quality"""
        issues = []
        # Create temporary file
        with tempfile.NamedTemporaryFile(mode='w', suffix='.ts', delete=False) as f:
            f.write(code)
            temp_file = f.name
        try:
            # Run TypeScript compiler
            result = subprocess.run(
                ['tsc', '--noEmit', '--strict', temp_file],
                capture_output=True,
                text=True
            )
            if result.returncode != 0:
                issues.extend(result.stderr.split('\n'))
            # Check for TypeScript best practices
            if ': any' in code:
                issues.append("Avoid using 'any' type - use specific types")
            if '// @ts-ignore' in code:
                issues.append("Avoid @ts-ignore - fix type issues instead")
        except FileNotFoundError:
            issues.append("TypeScript compiler not available")
        finally:
            os.unlink(temp_file)
        return {
            "valid": len(issues) == 0,
            "issues": [issue for issue in issues if issue.strip()],
            "language": "typescript"
        }


def main():
    validator = CodeQualityValidator()

    # Example Python code validation
    python_code = '''
def calculate_factorial(n):
    """Calculate factorial of a number."""
    if n < 0:
        raise ValueError("Factorial not defined for negative numbers")
    if n == 0 or n == 1:
        return 1
    return n * calculate_factorial(n - 1)
'''
    result = validator.validate_generated_code(python_code, "python")
    print("Python Code Validation:")
    print(f"Valid: {result['valid']}")
    if result['issues']:
        print("Issues:")
        for issue in result['issues']:
            print(f"  - {issue}")


if __name__ == "__main__":
    main()
```

## Troubleshooting and Common Issues

### API Migration Issues

```python
# Common migration issues from Codex to GPT models

# Issue 1: Different response format
# Old Codex format:
#   response.choices[0].text
# New GPT format:
#   response.choices[0].message.content

# Issue 2: Different prompt structure
# Old Codex (completion):
prompt = "# Function to calculate factorial\ndef factorial(n):"

# New GPT (chat):
messages = [
    {"role": "system", "content": "You are an expert Python developer."},
    {"role": "user", "content": "Write a function to calculate factorial"}
]

# Issue 3: Stop sequences
# Old Codex:
#   stop=["#", "\n\n"]
# New GPT: handled differently -
# use system prompts to control the output format
```

### Performance Optimization

```python
# Optimize API usage for better performance
import re
from typing import List


class OptimizedCodeGenerator(ModernCodeGenerator):
    # Inherits generate_code from ModernCodeGenerator above
    def __init__(self):
        super().__init__()
        self.cache = {}
        self.batch_requests = []

    def cached_generation(self, prompt: str, language: str) -> str:
        """Use caching to avoid duplicate requests"""
        cache_key = f"{language}:{hash(prompt)}"
        if cache_key in self.cache:
            return self.cache[cache_key]
        # Generate code
        result = self.generate_code(prompt, language)
        # Cache result
        self.cache[cache_key] = result
        return result

    def batch_generation(self, prompts: List[str], language: str) -> List[str]:
        """Batch multiple requests into a single API call for efficiency"""
        # Combine prompts
        combined_prompt = "Generate code for the following requests:\n\n"
        for i, prompt in enumerate(prompts, 1):
            combined_prompt += f"{i}. {prompt}\n\n"
        combined_prompt += f"Provide {language} code for each request, numbered accordingly."
        # Single API call
        response = self.generate_code(combined_prompt, language)
        # Parse responses (simplified: split on numbered markers)
        parts = [p.strip() for p in re.split(r"(?m)^\d+\.\s*", response) if p.strip()]
        return parts[:len(prompts)]
```
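The trickiest part of `batch_generation` is splitting one combined reply back into per-request answers. A regex-based sketch of that parsing, isolated and testable (real model output varies, so this is a heuristic, not a guarantee):

```python
import re

def split_numbered(response: str, count: int) -> list:
    """Split a reply that answers several numbered requests into chunks.

    Splits at lines beginning with "1. ", "2. ", etc., then drops
    empty fragments and truncates to the expected number of answers.
    """
    parts = re.split(r"(?m)^\d+\.\s*", response)
    chunks = [p.strip() for p in parts if p.strip()]
    return chunks[:count]

reply = "1. def a(): pass\n2. def b(): pass\n3. def c(): pass"
print(split_numbered(reply, 3))
```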
## Resources and Documentation

### Official Resources

- [OpenAI API Documentation](https://platform.openai.com/docs)
- [OpenAI Python Library](https://github.com/openai/openai-python)
- [GPT-4 Model Documentation](https://platform.openai.com/docs/models/gpt-4)
- [OpenAI Cookbook](https://github.com/openai/openai-cookbook)

### Migration Guides

- [Codex to GPT Migration Guide](https://platform.openai.com/docs/guides/code)
- [API Migration Documentation](https://platform.openai.com/docs/api-reference)
- [Code Generation Best Practices](https://platform.openai.com/docs/guides/code/best-practices)

### Community Resources

- [OpenAI Developer Community](https://community.openai.com/)
- [GitHub Copilot Documentation](https://docs.github.com/en/copilot)
- [Code Generation Examples](https://github.com/openai/openai-cookbook/tree/main/examples)
- [Prompt Engineering Guide](https://github.com/dair-ai/Prompt-Engineering-Guide)
- [GitHub Copilot](https://github.com/features/copilot)
- [Tabnine](https://www.tabnine.com/)
- [Codeium](https://codeium.com/)
- [Amazon CodeWhisperer](https://aws.amazon.com/codewhisperer/)