Locust Load Testing Cheatsheet
Installation

| Platform | Command |
|---|---|
| Ubuntu/Debian | `sudo apt update && sudo apt install -y python3-pip && pip3 install locust` |
| RHEL/CentOS | `sudo yum install -y python3-pip && pip3 install locust` |
| macOS | `brew install python3 && pip3 install locust` |
| Windows | Install Python 3 from python.org, then run `pip install locust` |
| Docker | `docker pull locustio/locust` <br> `docker run -p 8089:8089 -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/locustfile.py` |
| Specific Version | `pip3 install locust==<version>` |
| With Dev Dependencies | INLINE_CODE_26 |
| Verify Installation | `locust --version` |

Basic Commands

| Command | Description |
|---|---|
| `locust -f locustfile.py` | Start Locust with web UI on default port 8089 |
| `locust -f locustfile.py --host=https://example.com` | Start with specific target host |
| `locust -f locustfile.py --headless -u 100 -r 10` | Run headless mode with 100 users, spawn rate 10/sec |
| `locust -f locustfile.py --headless -u 500 -t 30m` | Run for 30 minutes with 500 users |
| `locust -f locustfile.py --web-port 8090` | Start web UI on custom port 8090 |
| `locust -f locustfile.py --web-host 0.0.0.0` | Bind web UI to all network interfaces |
| `locust -f locustfile.py --csv=results` | Save statistics to CSV files |
| `locust -f locustfile.py --html=report.html` | Generate HTML report after test |
| `locust -f locustfile.py --loglevel DEBUG` | Set logging level to DEBUG |
| `locust -f locustfile.py --logfile locust.log` | Write logs to file |
| `locust -f locustfile.py WebsiteUser` | Run specific user class only |
| `locust -f locustfile.py WebsiteUser MobileUser` | Run multiple user classes |
| `locust -f locustfile1.py,locustfile2.py` | Load multiple locustfiles |
| INLINE_CODE_41 | Use Python module path instead of file |
| `locust --version` | Display Locust version |
| `locust --help` | Show all available command-line options |

Advanced Usage

| Command | Description |
|---|---|
| `locust -f locustfile.py --master` | Start master node for distributed testing |
| `locust -f locustfile.py --worker --master-host=192.168.1.100` | Start worker node connecting to master |
| `locust -f locustfile.py --master --expect-workers=4` | Start master expecting 4 workers before beginning |
| `locust -f locustfile.py --master --master-bind-host=192.168.1.100 --master-bind-port=5557` | Master with custom bind address and port |
| `locust -f locustfile.py --worker --master-host=192.168.1.100 --master-port=5557` | Worker with custom master port |
| `locust -f locustfile.py WebUser APIUser` | Run with weighted user classes (75% WebUser, 25% APIUser via their `weight` attributes) |
| INLINE_CODE_50 | Enable step load mode (increment users every 60s) |
| `locust -f locustfile.py --tags critical --exclude-tags slow` | Run only tasks tagged as 'critical', exclude 'slow' |
| `locust --config locust.conf` | Load configuration from file |
| `locust -f locustfile.py --reset-stats` | Reset statistics once spawning has finished |
| `locust -f locustfile.py --stop-timeout 60` | Set graceful shutdown timeout to 60 seconds |
| INLINE_CODE_55 | Enable modern web UI interface |
| `locust -f locustfile.py --headless --json` | Output statistics in JSON format |
| INLINE_CODE_57 | Set connection timeout to 30 seconds |
| `locust -f locustfile.py --headless -t 1h` | Run for exactly 1 hour then stop |
| `docker-compose up --scale worker=4` | Scale distributed test to 4 workers using Docker Compose |

Configuration

Configuration File (locust.conf)
```ini
# locust.conf - Configuration file format
locustfile = locustfile.py
host = https://api.example.com
users = 1000
spawn-rate = 100
run-time = 30m
headless = true
csv = results
html = report.html
loglevel = INFO
logfile = locust.log
```
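
To run with this file, point Locust at it with the `--config` flag; options passed on the command line take precedence over values from the file.

```bash
locust --config locust.conf
```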
Basic Locustfile Structure
```python
# locustfile.py - Minimal example
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)  # Wait 1-5 seconds between tasks

    @task
    def index_page(self):
        self.client.get("/")

    @task(3)  # 3x more likely than other tasks
    def view_item(self):
        self.client.get("/item/123")
```
Advanced Locustfile with Authentication
```python
from locust import HttpUser, task, between
import random

class AuthenticatedUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        """Called when a user starts - log in here"""
        response = self.client.post("/login", json={
            "username": "testuser",
            "password": "password123"
        })
        self.token = response.json()["token"]

    @task
    def protected_endpoint(self):
        headers = {"Authorization": f"Bearer {self.token}"}
        self.client.get("/api/protected", headers=headers)

    @task(2)
    def create_resource(self):
        headers = {"Authorization": f"Bearer {self.token}"}
        self.client.post("/api/items",
                         json={"name": "Test", "value": random.randint(1, 100)},
                         headers=headers)
```
Custom Load Shape
```python
from locust import LoadTestShape

class StagesLoadShape(LoadTestShape):
    """
    Custom load pattern with stages:
    - Ramp to 100 users over 60s
    - Hold at 100 for 120s
    - Ramp to 500 over 60s
    - Hold at 500 for 180s
    """
    stages = [
        {"duration": 60, "users": 100, "spawn_rate": 10},
        {"duration": 180, "users": 100, "spawn_rate": 10},
        {"duration": 240, "users": 500, "spawn_rate": 50},
        {"duration": 420, "users": 500, "spawn_rate": 50},
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage in self.stages:
            if run_time < stage["duration"]:
                return (stage["users"], stage["spawn_rate"])
        return None
```
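
When a `LoadTestShape` subclass like this is present in the locustfile, Locust picks it up automatically and the shape controls user count and spawn rate, so the run can be started without `-u`/`-r` (the host below is a placeholder):

```bash
locust -f locustfile.py --headless --host=https://example.com
```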
Docker Compose Configuration
```yaml
# docker-compose.yml - Distributed testing setup
version: '3'
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master --expect-workers=4
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host=master
```
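
With the compose file above, extra workers can be started to satisfy `--expect-workers=4` by scaling the worker service:

```bash
docker-compose up --scale worker=4
```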
Environment Variables
```bash
# Set environment variables for Locust
export LOCUST_LOCUSTFILE=locustfile.py
export LOCUST_HOST=https://api.example.com
export LOCUST_USERS=1000
export LOCUST_SPAWN_RATE=100
export LOCUST_RUN_TIME=30m
export LOCUST_HEADLESS=true

# Run with environment variables
locust
```
Common Use Cases
Use Case 1: API Load Testing with Authentication
```bash
# Create locustfile for API testing
cat > api_test.py << 'EOF'
from locust import HttpUser, task, between

class APIUser(HttpUser):
    wait_time = between(1, 2)

    def on_start(self):
        # Authenticate once per user
        response = self.client.post("/api/auth/login", json={
            "username": "testuser",
            "password": "testpass"
        })
        self.token = response.json()["access_token"]

    @task(3)
    def get_users(self):
        self.client.get("/api/users",
                        headers={"Authorization": f"Bearer {self.token}"})

    @task(1)
    def create_user(self):
        self.client.post("/api/users",
                         json={"name": "New User", "email": "test@example.com"},
                         headers={"Authorization": f"Bearer {self.token}"})
EOF

# Run the test
locust -f api_test.py --headless --host=https://api.example.com \
    -u 500 -r 50 -t 10m --html=api_report.html
```
Use Case 2: Distributed Load Testing Across Multiple Machines
```bash
# On master machine (192.168.1.100)
locust -f locustfile.py --master --master-bind-host=0.0.0.0 \
    --expect-workers=3 --web-host=0.0.0.0

# On worker machine 1
locust -f locustfile.py --worker --master-host=192.168.1.100

# On worker machine 2
locust -f locustfile.py --worker --master-host=192.168.1.100

# On worker machine 3
locust -f locustfile.py --worker --master-host=192.168.1.100

# Access web UI from any machine
# http://192.168.1.100:8089
```
Use Case 3: CI/CD Integration with Automated Testing
```bash
# Create test script for CI/CD pipeline
cat > run_load_test.sh << 'EOF'
#!/bin/bash
# Run load test and capture exit code
locust -f locustfile.py --headless \
    --host=https://staging.example.com \
    -u 1000 -r 100 -t 5m \
    --html=report.html \
    --csv=results \
    --exit-code-on-error 1

# Check if test passed
if [ $? -eq 0 ]; then
    echo "Load test passed"
    exit 0
else
    echo "Load test failed"
    exit 1
fi
EOF

chmod +x run_load_test.sh
./run_load_test.sh
```
Use Case 4: Testing with Multiple User Scenarios
```bash
# Create multi-scenario locustfile
cat > multi_scenario.py << 'EOF'
from locust import HttpUser, task, between

class BrowserUser(HttpUser):
    weight = 3  # ~75% of users
    wait_time = between(2, 5)

    @task
    def browse_pages(self):
        self.client.get("/")
        self.client.get("/products")
        self.client.get("/about")

class MobileUser(HttpUser):
    weight = 1  # ~25% of users
    wait_time = between(1, 3)

    @task
    def mobile_api(self):
        self.client.get("/api/mobile/products")

class AdminUser(HttpUser):
    weight = 0.1  # Very few admin users
    wait_time = between(5, 10)

    def on_start(self):
        self.client.post("/admin/login", json={
            "username": "admin", "password": "admin123"
        })

    @task
    def admin_dashboard(self):
        self.client.get("/admin/dashboard")
EOF

# Run with all user types
locust -f multi_scenario.py --headless --host=https://example.com \
    -u 1000 -r 100 -t 15m
```
Use Case 5: Step Load Testing with Progressive Ramps
```bash
# Create step load configuration
cat > step_load.py << 'EOF'
from locust import HttpUser, task, between, LoadTestShape

class WebUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def load_page(self):
        self.client.get("/")

class StepLoadShape(LoadTestShape):
    step_time = 120   # 2 minutes per step
    step_load = 100   # Add 100 users per step
    spawn_rate = 10
    time_limit = 600  # 10 minutes total

    def tick(self):
        run_time = self.get_run_time()
        if run_time > self.time_limit:
            return None
        current_step = int(run_time // self.step_time)
        return (current_step + 1) * self.step_load, self.spawn_rate
EOF

# Run step load test
locust -f step_load.py --headless --host=https://example.com \
    --html=step_load_report.html
```
Best Practices
- **Use realistic wait times**: Set `wait_time = between(1, 5)` to simulate real user behavior with pauses between actions and avoid unrealistic constant hammering.
- **Implement proper authentication**: Use the `on_start()` method to authenticate once per user instead of on every request, reducing overhead and mimicking real sessions.
- **Tag your tasks**: Use `@tag('critical', 'api')` decorators to organize tests and run specific subsets during development or targeted testing (a combined sketch follows this list).
- **Monitor resource usage**: Watch CPU and memory on both the Locust machines and the target servers; Locust workers should stay below 80% CPU for accurate results.
- **Start with small loads**: Begin tests with 10-50 users to verify the test logic before scaling to thousands of concurrent users.
- **Use distributed mode for scale**: A single machine is limited to roughly 5,000-10,000 users; use a master-worker setup to simulate larger loads across multiple machines.
- **Implement proper error handling**: Use `response.failure()` to mark failed requests and catch exceptions so that crashes in the test code don't stop load generation.
- **Version control your tests**: Store locustfiles in Git alongside application code, treating performance tests as first-class citizens in your test strategy.
- **Use a realistic spawn rate**: Don't spawn all users at once; ramp up gradually (10-100 users/sec) to avoid overwhelming the system and producing false errors.
- **Generate analysis reports**: Always use the `--html` and `--csv` flags to capture results for post-test analysis and historical comparison.
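
A minimal combined sketch of task tagging and explicit failure handling, as referenced in the list above; the endpoint paths, tag names, and the 2-second threshold are illustrative assumptions rather than values from this cheatsheet.

```python
from locust import HttpUser, task, tag, between

class TaggedUser(HttpUser):
    wait_time = between(1, 5)

    @tag('critical', 'api')
    @task
    def checkout(self):
        # catch_response lets the test decide what counts as a failure
        with self.client.get("/api/checkout", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"Unexpected status {response.status_code}")
            elif response.elapsed.total_seconds() > 2:
                response.failure("Checkout took longer than 2 seconds")
            else:
                response.success()

    @tag('slow')
    @task
    def report(self):
        self.client.get("/api/reports/monthly")
```

Running `locust -f locustfile.py --tags critical --exclude-tags slow` would then exercise only the checkout task.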
Troubleshooting

| Issue | Solution |
|---|---|
| `locust: command not found` after install | Ensure virtual environment is activated: `source venv/bin/activate` then reinstall: `pip install locust` |
| Workers not connecting to master | Check firewall allows port 5557, verify master IP address is correct, ensure both master and worker use same locustfile |
| `Connection refused` errors during test | Target server may be down or blocking requests; check server logs, verify host URL is correct, ensure firewall allows traffic |
| Locust using 100% CPU on worker | Reduce number of users per worker (max ~5000), add more worker machines, or optimize locustfile to reduce processing overhead |
| Statistics not updating in web UI | Check browser console for errors, try different browser, ensure no proxy/firewall blocking WebSocket connections on port 8089 |
| `gevent` installation fails on Windows | Install Visual C++ Build Tools from Microsoft, or use pre-compiled wheels: `pip install --only-binary :all: gevent greenlet` |
| Test results inconsistent/unreliable | Ensure workers have sufficient resources, check network latency between workers and target, verify spawn rate isn't too aggressive |
| `SSL: CERTIFICATE_VERIFY_FAILED` errors | Disable SSL verification (testing only): `self.client.verify = False` in `on_start()`, or provide a certificate bundle path |
| Memory usage grows continuously | Check for memory leaks in locustfile (storing too much data), restart workers periodically, or reduce test duration |
| Cannot bind to port 8089 | Port already in use; use `--web-port 8090` to use a different port, or kill the existing Locust process: `pkill -f locust` |
| Docker container exits immediately | Ensure locustfile path is correct in volume mount, check container logs: `docker logs <container>`, verify command syntax |
| Tasks not executing in expected order | Use `SequentialTaskSet` for ordered execution instead of random task selection, or implement custom task scheduling logic (see the sketch below) |
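
For the last row, a minimal sketch of ordered task execution with `SequentialTaskSet`; the cart endpoints are illustrative assumptions.

```python
from locust import HttpUser, SequentialTaskSet, task, between

class CheckoutFlow(SequentialTaskSet):
    # Tasks in a SequentialTaskSet run in declaration order, not randomly
    @task
    def view_cart(self):
        self.client.get("/cart")

    @task
    def add_payment(self):
        self.client.post("/cart/payment", json={"method": "card"})

    @task
    def confirm(self):
        self.client.post("/cart/confirm")

class ShopUser(HttpUser):
    wait_time = between(1, 3)
    tasks = [CheckoutFlow]
```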