# k6 Load Testing Cheatsheet

## Installation
| Platform | Command |
|---|---|
| Ubuntu/Debian | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69 && echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \| sudo tee /etc/apt/sources.list.d/k6.list && sudo apt-get update && sudo apt-get install k6 |
| RHEL/CentOS/Fedora | sudo dnf install https://dl.k6.io/rpm/repo.rpm && sudo dnf install k6 |
| macOS (Homebrew) | brew install k6 |
| macOS (MacPorts) | sudo port install k6 |
| Windows (Chocolatey) | choco install k6 |
| Windows (winget) | winget install k6 --source winget |
| Linux (Snap) | sudo snap install k6 |
| Docker | docker pull grafana/k6:latest |
| Verify Installation | k6 version |
## Basic Commands

| Command | Description |
|---|---|
| k6 run script.js | Execute a load test script |
| k6 run --vus 10 script.js | Run test with 10 virtual users |
| k6 run --duration 30s script.js | Run test for 30 seconds |
| k6 run --vus 50 --duration 5m script.js | Run with 50 VUs for 5 minutes |
| k6 run --iterations 1000 script.js | Run for exactly 1000 iterations total |
| k6 run --vus 10 --iterations 100 script.js | 100 iterations shared across 10 VUs |
| k6 version | Display k6 version information |
| k6 run --verbose script.js | Run with verbose (debug-level) logging |
| k6 inspect script.js | Analyze a script without executing it |
| k6 inspect --execution-requirements script.js | Show execution requirements as JSON |
| k6 run --stage 30s:10,1m:20,30s:0 script.js | Run with ramping stages (ramp up, hold, ramp down) |
| k6 run --http-debug script.js | Log HTTP requests and responses (headers only) |
| k6 run --http-debug="full" script.js | Full HTTP debug output, including bodies |
| k6 run --no-summary script.js | Disable the end-of-test summary |
| k6 run --summary-export=summary.json script.js | Export the end-of-test summary to a JSON file |
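The run commands above assume a test script; a minimal sketch of what `script.js` might contain is below. Note that CLI flags such as `--vus` and `--duration` take precedence over the values exported from the script, and the target URL here is a placeholder.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Equivalent of `k6 run --vus 10 --duration 30s script.js`;
// CLI flags override these values when both are given.
export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  http.get('https://test.k6.io'); // placeholder target, replace with your own
  sleep(1);
}
```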
## Advanced Usage

| Command | Description |
|---|---|
| k6 run -e API_URL=https://api.test.com script.js | Pass an environment variable to the script (read via __ENV.API_URL) |
| k6 run -e VAR1=value1 -e VAR2=value2 script.js | Pass multiple environment variables |
| k6 run --out json=results.json script.js | Export metrics as newline-delimited JSON |
| k6 run --out influxdb=http://localhost:8086/mydb script.js | Send metrics to InfluxDB |
| k6 run --out json=results.json --out influxdb=http://localhost:8086/k6 script.js | Output to multiple backends simultaneously |
| k6 run --out csv=results.csv script.js | Export metrics to CSV |
| k6 run --out cloud script.js | Stream results to k6 Cloud (requires authentication) |
| k6 run --no-thresholds script.js | Skip evaluation of thresholds defined in the script |
| k6 run --rps 100 script.js | Limit requests per second across all VUs |
| k6 run --insecure-skip-tls-verify script.js | Skip TLS certificate verification (testing only) |
| k6 run --user-agent "k6/custom-agent" script.js | Set a custom User-Agent header |
| k6 run --batch 20 script.js | Maximum parallel connections per http.batch() call |
| k6 run --batch-per-host 10 script.js | Limit parallel batch requests per host |
| k6 run --blacklist-ip 10.0.0.0/8,192.168.0.0/16 script.js | Block requests to the given IP ranges |
| k6 run --system-tags=proto,status,method script.js | Include only the listed system tags in metrics |
| k6 archive script.js | Create a test archive with all dependencies |
| k6 archive -O test-archive.tar script.js | Create an archive with a custom output filename |
| k6 run archive.tar | Execute a test from an archived bundle |
| k6 cloud script.js | Execute the test on k6 Cloud infrastructure |
| k6 login cloud | Authenticate with k6 Cloud |
| k6 login cloud --token YOUR_API_TOKEN | Authenticate using an API token |
| docker run -v $(pwd):/scripts grafana/k6 run /scripts/test.js | Run a k6 test in Docker with a volume mount |

Note: there are no --threshold, --max-duration, or --grace-stop CLI flags; thresholds, maxDuration, and gracefulStop are configured in the script's options object (see Configuration below).
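The `--out json` output is newline-delimited JSON, one object per metric sample. A rough Node.js sketch of post-processing such a file follows; the sample lines are hand-written to mimic the shape recent k6 versions emit, so verify the field names against your own output:

```javascript
// Compute the average http_req_duration from k6's NDJSON output.
// Sample lines below are hand-written in k6's assumed format.
const lines = [
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2024-01-01T00:00:00Z","value":120.5,"tags":{"status":"200"}}}',
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2024-01-01T00:00:01Z","value":179.5,"tags":{"status":"200"}}}',
  '{"type":"Point","metric":"vus","data":{"time":"2024-01-01T00:00:00Z","value":10}}',
];

const durations = lines
  .map((l) => JSON.parse(l))
  .filter((p) => p.type === 'Point' && p.metric === 'http_req_duration')
  .map((p) => p.data.value);

const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
console.log(avg); // average of the http_req_duration samples
```

In a real script you would read the lines from `results.json` (e.g. with `fs.readFileSync(...).split('\n')`) instead of an inline array.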
## Configuration

### Script Configuration (options object)

```javascript
// Basic load test configuration
export let options = {
  vus: 10,          // Number of virtual users
  duration: '30s',  // Test duration
  // iterations: 1000, // Total iterations (use instead of duration)
};

export default function () {
  // Test logic here
}
```
### Stages Configuration (Ramping)

```javascript
export let options = {
  stages: [
    { duration: '2m', target: 10 }, // Ramp up to 10 VUs over 2 minutes
    { duration: '5m', target: 10 }, // Stay at 10 VUs for 5 minutes
    { duration: '2m', target: 50 }, // Ramp up to 50 VUs over 2 minutes
    { duration: '5m', target: 50 }, // Stay at 50 VUs for 5 minutes
    { duration: '2m', target: 0 },  // Ramp down to 0 VUs
  ],
};
```
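Each stage ramps linearly from the previous target to its own. The following plain-JavaScript sketch (not the k6 API, just an illustration of the ramping model) shows what the VU level would be at a given second for a stage list like the one above:

```javascript
// Linear interpolation of VU targets across ramping stages (illustration only).
function vusAt(stages, tSeconds, startVUs = 0) {
  let elapsed = 0;
  let prevTarget = startVUs;
  for (const { duration, target } of stages) {
    // Parse durations like '2m' or '30s' into seconds.
    const d = parseInt(duration, 10) * (duration.endsWith('m') ? 60 : 1);
    if (tSeconds <= elapsed + d) {
      const frac = (tSeconds - elapsed) / d;
      return Math.round(prevTarget + frac * (target - prevTarget));
    }
    elapsed += d;
    prevTarget = target;
  }
  return prevTarget; // after the last stage
}

const stages = [
  { duration: '2m', target: 10 }, // ramp up
  { duration: '5m', target: 10 }, // hold
  { duration: '2m', target: 0 },  // ramp down
];

console.log(vusAt(stages, 60));  // halfway through the ramp-up -> 5
console.log(vusAt(stages, 300)); // during the hold -> 10
```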
### Thresholds Configuration

```javascript
export let options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must complete below 500ms
    http_req_failed: ['rate<0.01'],   // Error rate must be below 1%
    http_reqs: ['rate>100'],          // Request rate must be above 100 req/s
    checks: ['rate>0.95'],            // 95% of checks must pass
    'http_req_duration{status:200}': ['p(99)<1000'], // Tagged threshold
  },
};
```
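A threshold like `p(95)<500` takes the 95th percentile of all recorded values and compares it against the limit. A rough sketch of that evaluation in plain JavaScript; this uses the nearest-rank method, while k6's exact interpolation may differ slightly:

```javascript
// Evaluate a p(95)<500 style threshold over a set of samples (illustration).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method; k6's implementation may interpolate differently.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// 100 simulated response times: 95 fast ones plus 5 slow outliers.
const samples = [
  ...Array.from({ length: 95 }, (_, i) => 100 + i), // 100..194 ms
  ...Array.from({ length: 5 }, () => 2000),         // outliers
];

const p95 = percentile(samples, 95);
const passes = p95 < 500;
console.log(p95, passes); // the outliers land above p95, so the threshold passes
```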
### Scenarios Configuration (Advanced)

```javascript
export let options = {
  scenarios: {
    // Constant VUs scenario
    constant_load: {
      executor: 'constant-vus',
      vus: 10,
      duration: '5m',
      gracefulStop: '30s',
    },
    // Ramping VUs scenario
    ramping_load: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 50 },
        { duration: '5m', target: 50 },
        { duration: '2m', target: 0 },
      ],
      gracefulRampDown: '30s',
    },
    // Per-VU iterations
    per_vu_iterations: {
      executor: 'per-vu-iterations',
      vus: 10,
      iterations: 100, // Each VU runs 100 iterations
      maxDuration: '10m',
    },
    // Shared iterations
    shared_iterations: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 1000, // 1000 iterations shared across 10 VUs
      maxDuration: '10m',
    },
    // Constant arrival rate
    constant_arrival_rate: {
      executor: 'constant-arrival-rate',
      rate: 100,       // 100 iterations per timeUnit
      timeUnit: '1s',  // per second
      duration: '5m',
      preAllocatedVUs: 50,
      maxVUs: 100,
    },
  },
};
```
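With `constant-arrival-rate`, k6 starts iterations at a fixed rate regardless of how long each one takes, so you must pre-allocate enough VUs to sustain the rate. A back-of-the-envelope sizing sketch based on Little's law (concurrency ≈ arrival rate × time in system), with a headroom factor added as an assumption for variance:

```javascript
// Rule-of-thumb estimate for preAllocatedVUs in a constant-arrival-rate scenario.
function estimateVUs(ratePerSecond, avgIterationSeconds, headroom = 1.5) {
  // Little's law: concurrency = arrival rate * time in system,
  // padded with headroom for variance in iteration duration.
  return Math.ceil(ratePerSecond * avgIterationSeconds * headroom);
}

// 100 iterations/s, each taking ~0.5 s on average:
console.log(estimateVUs(100, 0.5)); // 75 -> preAllocatedVUs: 75, maxVUs somewhat higher
```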
### Environment Variables

```javascript
// Access environment variables in the script
import { check } from 'k6';
import http from 'k6/http';

const API_URL = __ENV.API_URL || 'https://default-api.com';
const API_KEY = __ENV.API_KEY;

export default function () {
  let response = http.get(`${API_URL}/endpoint`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` },
  });
  check(response, {
    'status is 200': (r) => r.status === 200,
  });
}
```
### Custom Metrics

```javascript
import { Counter, Gauge, Rate, Trend } from 'k6/metrics';

// Define custom metrics
let myCounter = new Counter('my_custom_counter');
let myGauge = new Gauge('my_custom_gauge');
let myRate = new Rate('my_custom_rate');
let myTrend = new Trend('my_custom_trend');

export default function () {
  myCounter.add(1);   // Increment counter
  myGauge.add(100);   // Set current gauge value
  myRate.add(true);   // Record a success for the rate
  myTrend.add(250);   // Add a sample to the trend
}
```
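Counter accumulates a running sum, Gauge keeps only the last value, Rate tracks the fraction of truthy values, and Trend collects a distribution. A plain-JavaScript sketch of those aggregation semantics (an illustration, not the k6 internals):

```javascript
// Mimic how the four k6 metric types aggregate added values (illustration only).
function aggregate(kind, values) {
  switch (kind) {
    case 'counter': // running sum
      return values.reduce((a, b) => a + b, 0);
    case 'gauge':   // last value wins
      return values[values.length - 1];
    case 'rate':    // fraction of truthy entries
      return values.filter(Boolean).length / values.length;
    case 'trend': { // distribution statistics
      const sorted = [...values].sort((a, b) => a - b);
      return {
        min: sorted[0],
        max: sorted[sorted.length - 1],
        avg: values.reduce((a, b) => a + b, 0) / values.length,
      };
    }
  }
}

console.log(aggregate('counter', [1, 1, 1]));        // 3
console.log(aggregate('gauge', [50, 100]));          // 100
console.log(aggregate('rate', [true, true, false])); // ~0.667
console.log(aggregate('trend', [250, 150, 200]));    // { min: 150, max: 250, avg: 200 }
```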
## Common Use Cases

### Use Case: Basic API Load Test

```bash
# Create a simple test script
cat > api-test.js << 'EOF'
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  let response = http.get('https://api.example.com/users');
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
  sleep(1);
}
EOF

# Run the test
k6 run api-test.js

# Run with custom VUs and duration
k6 run --vus 50 --duration 5m api-test.js
```
### Use Case: Spike Testing

```bash
# Create spike test with sudden load increase
cat > spike-test.js << 'EOF'
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 10 },   // Normal load
    { duration: '30s', target: 100 }, // Spike to 100 users
    { duration: '1m', target: 100 },  // Sustain spike
    { duration: '30s', target: 10 },  // Return to normal
    { duration: '1m', target: 10 },   // Recovery
  ],
  thresholds: {
    http_req_duration: ['p(95)<2000'], // Relaxed threshold for spike
    http_req_failed: ['rate<0.05'],    // Allow 5% error rate during spike
  },
};

export default function () {
  http.get('https://api.example.com/products');
  sleep(1);
}
EOF

# Run spike test
k6 run spike-test.js
```
### Use Case: API Authentication & Data Posting

```bash
# Test with authentication and POST requests
cat > auth-test.js << 'EOF'
import http from 'k6/http';
import { check } from 'k6';

export let options = {
  vus: 20,
  duration: '2m',
};

export default function () {
  // Credentials come from the CLI (-e USERNAME=... -e PASSWORD=...), with fallbacks
  let loginRes = http.post('https://api.example.com/auth/login',
    JSON.stringify({
      username: __ENV.USERNAME || 'testuser',
      password: __ENV.PASSWORD || 'testpass',
    }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  let token = loginRes.json('token');

  // Use the token for an authenticated request
  let response = http.post('https://api.example.com/data',
    JSON.stringify({ name: 'test', value: 123 }),
    { headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json',
    }}
  );

  check(response, {
    'authenticated request successful': (r) => r.status === 201,
  });
}
EOF

# Run with environment variables
k6 run -e USERNAME=myuser -e PASSWORD=mypass auth-test.js
```
### Use Case: Soak Testing (Extended Duration)

```bash
# Long-running stability test
cat > soak-test.js << 'EOF'
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '5m', target: 20 }, // Ramp up
    { duration: '4h', target: 20 }, // Soak for 4 hours
    { duration: '5m', target: 0 },  // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  let response = http.get('https://api.example.com/health');
  check(response, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(Math.random() * 5 + 3); // Random sleep 3-8 seconds
}
EOF

# Run soak test with metrics export
k6 run --out json=soak-results.json soak-test.js
```
### Use Case: Multi-Scenario Test with Different Endpoints

```bash
# Complex test with multiple user behaviors
cat > multi-scenario.js << 'EOF'
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    // Scenario 1: Browse products
    browsers: {
      executor: 'constant-vus',
      exec: 'browseProducts',
      vus: 30,
      duration: '5m',
    },
    // Scenario 2: Search functionality
    searchers: {
      executor: 'constant-vus',
      exec: 'searchProducts',
      vus: 10,
      duration: '5m',
    },
    // Scenario 3: Checkout process
    buyers: {
      executor: 'constant-arrival-rate',
      exec: 'checkoutFlow',
      rate: 5,
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 10,
    },
  },
};

export function browseProducts() {
  http.get('https://shop.example.com/products');
  sleep(2);
}

export function searchProducts() {
  http.get('https://shop.example.com/search?q=laptop');
  sleep(3);
}

export function checkoutFlow() {
  http.post('https://shop.example.com/cart/add', { productId: 123 });
  sleep(1);
  http.post('https://shop.example.com/checkout', { cartId: 456 });
  sleep(2);
}
EOF

# Run multi-scenario test
k6 run multi-scenario.js
```
## Best Practices

- **Start Small, Scale Gradually**: Begin with low VU counts and short durations to validate your script works correctly before running large-scale tests. Use `--vus 1 --iterations 1` for initial validation.
- **Use Realistic Think Time**: Add `sleep()` between requests to simulate real user behavior. Random sleep times (`sleep(Math.random() * 3 + 1)`) create more realistic patterns than fixed intervals.
- **Set Appropriate Thresholds**: Define clear pass/fail criteria using thresholds. Monitor both response times (p95, p99) and error rates to catch performance degradation early.
- **Tag Your Requests**: Use tags to group related requests for better analysis: `http.get(url, { tags: { name: 'user_login' } })`. This enables filtering metrics by specific operations.
- **Monitor System Resources**: Watch CPU, memory, and network on both the k6 load generator and the target system. k6 itself can become a bottleneck with insufficient resources.
- **Use Scenarios for Complex Tests**: Leverage scenarios instead of simple VU/duration settings when you need different load patterns, multiple user behaviors, or precise control over request rates.
- **Export Metrics for Analysis**: Always export results to external systems (InfluxDB, JSON, CSV) for historical comparison and detailed analysis. The terminal summary is useful but limited.
- **Version Control Your Tests**: Store test scripts in Git alongside application code. This enables tracking performance changes across releases and running tests in CI/CD pipelines.
- **Separate Data from Scripts**: Use environment variables (`-e` flag) or external data files for test configuration. This makes scripts reusable across environments (dev, staging, production).
- **Run from CI/CD Pipelines**: Integrate k6 tests into your deployment pipeline to catch performance regressions automatically. Use thresholds to fail builds when performance degrades.
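Several of the practices above (env-based configuration, data files, request tags, randomized think time) combined in one hedged k6 sketch; the URL, `users.json` file, and `BASE_URL` variable are placeholders, not part of any real API:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';
import { SharedArray } from 'k6/data';

// Data separated from the script: loaded once, then shared across VUs.
const users = new SharedArray('users', () => JSON.parse(open('./users.json')));

// Environment-based config: k6 run -e BASE_URL=https://... script.js
const BASE_URL = __ENV.BASE_URL || 'https://staging.example.com';

export const options = {
  vus: 10,
  duration: '1m',
  thresholds: { http_req_duration: ['p(95)<500'] },
};

export default function () {
  const user = users[Math.floor(Math.random() * users.length)];
  // Tagged request: metrics can later be filtered by name:user_login.
  http.post(`${BASE_URL}/login`, JSON.stringify(user), {
    headers: { 'Content-Type': 'application/json' },
    tags: { name: 'user_login' },
  });
  sleep(Math.random() * 3 + 1); // randomized think time, 1-4 seconds
}
```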
## Troubleshooting
| Issue | Solution |
|---|---|
| High memory usage on load generator | Reduce VUs, use --batch to limit concurrent requests, or distribute load across multiple k6 instances. Check for memory leaks in test scripts. |
| "context deadline exceeded" errors | Increase timeout values in HTTP requests: http.get(url, { timeout: '60s' }). Check network connectivity and target system capacity. |
| Inconsistent results between runs | Ensure consistent test environment, use fixed seed for random data, run longer tests for statistical significance, and check for external factors (network, other traffic). |
| TLS/SSL certificate errors | Use --insecure-skip-tls-verify for testing environments only. For production, ensure proper certificates or use --tls-cert and --tls-key for client certificates. |
| Script import errors ("module not found") | Verify module paths, ensure k6 supports the module (limited Node.js compatibility), use k6 archive (k6 archive) to bundle dependencies, or use remote modules with full URLs. |
| Rate limiting by target system | Implement proper sleep() intervals, use --rps to limit request rate, distribute requests across multiple IPs, or coordinate with target system owners for testing. |
| Metrics not appearing in output | Check output configuration (--out), verify backend connectivity (InfluxDB, Grafana), ensure custom metrics are properly defined, and validate metric names don't conflict with built-ins. |
| Docker container exits immediately | Mount script volume correctly: docker run -v $(pwd):/scripts grafana/k6 run /scripts/test.js. Use -i flag for piping scripts: docker run -i grafana/k6 run - <script.js. |
| Thresholds failing unexpectedly | Review threshold definitions for typos, check if target system can handle expected load, analyze metrics to understand actual performance, and adjust thresholds to realistic values. |
| k6 process hangs or doesn't terminate | Use --grace-stop to allow VUs to finish iterations cleanly, check for infinite loops in test scripts, ensure external dependencies (APIs) are responsive, or use --max-duration as hard stop. |