pytest Cheatsheet
Installation
| Platform / Method | Command |
|---|---|
| pip (All platforms) | INLINE_CODE_8 |
| Ubuntu/Debian | INLINE_CODE_9 |
| Fedora/RHEL | INLINE_CODE_10 |
| Arch Linux | INLINE_CODE_11 |
| macOS (Homebrew) | INLINE_CODE_12 |
| Windows (Chocolatey) | INLINE_CODE_13 |
| Virtual Environment | INLINE_CODE_14 |
| With Common Plugins | INLINE_CODE_15 |
| Verify Installation | INLINE_CODE_16 |
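A typical first-time setup in an isolated virtual environment might look like the following sketch (the plugin selection is just a common example, not a requirement):

```bash
# Create and activate an isolated environment
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install pytest together with frequently used plugins
pip install pytest pytest-cov pytest-xdist

# Confirm the installed version
pytest --version
```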
Basic Commands
| Command | Description |
|---|---|
| INLINE_CODE_17 | Run all tests in current directory and subdirectories |
| INLINE_CODE_18 | Run tests in a specific file |
| INLINE_CODE_19 | Run all tests in a specific directory |
| INLINE_CODE_20 | Run a specific test function |
| INLINE_CODE_21 | Run all tests in a specific class |
| INLINE_CODE_22 | Run a specific test method in a class |
| INLINE_CODE_23 | Run tests with verbose output (show test names) |
| INLINE_CODE_24 | Run tests with very verbose output (show full details) |
| INLINE_CODE_25 | Run tests in quiet mode (minimal output) |
| INLINE_CODE_26 | Show print statements and stdout during test execution |
| INLINE_CODE_27 | Stop after first test failure |
| INLINE_CODE_28 | Stop after N test failures |
| INLINE_CODE_29 | Run tests matching keyword expression |
| INLINE_CODE_30 | Run tests matching multiple keywords (AND/NOT logic) |
| INLINE_CODE_31 | Run tests marked with specific marker |
| INLINE_CODE_32 | Run tests excluding specific marker |
| INLINE_CODE_33 | Show which tests would be run without executing them |
| INLINE_CODE_34 | Run only tests that failed in last run (last failed) |
| INLINE_CODE_35 | Run failed tests first, then others (failed first) |
| INLINE_CODE_36 | Show local variables in tracebacks on failure |
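These selection and output flags combine freely; a couple of sketches of typical invocations (the keyword "user" is hypothetical):

```bash
# Verbose run of tests matching "user" but not "slow", stopping at the first failure
pytest -v -k "user and not slow" -x

# Re-run only last run's failures, showing stdout and local variables
pytest --lf -s -l
```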
Advanced Usage
| Command | Description |
|---|---|
| INLINE_CODE_37 | Run tests in parallel using all available CPU cores (requires pytest-xdist) |
| INLINE_CODE_38 | Run tests in parallel with 4 worker processes |
| INLINE_CODE_39 | Show the 10 slowest test durations |
| INLINE_CODE_40 | Show slowest tests taking at least 1 second |
| INLINE_CODE_41 | Run tests with code coverage report (requires pytest-cov) |
| INLINE_CODE_42 | Generate HTML coverage report in htmlcov/ directory |
| INLINE_CODE_43 | Show coverage with missing line numbers in terminal |
| INLINE_CODE_44 | Fail if coverage is below 80% |
| INLINE_CODE_45 | Include branch coverage analysis |
| INLINE_CODE_46 | Generate JUnit XML report for CI/CD integration |
| INLINE_CODE_47 | Generate HTML test report (requires pytest-html) |
| INLINE_CODE_48 | Drop into Python debugger (PDB) on test failures |
| INLINE_CODE_49 | Drop into PDB at the start of each test |
| INLINE_CODE_50 | Show setup and teardown of fixtures during execution |
| INLINE_CODE_51 | List all available fixtures and their docstrings |
| INLINE_CODE_52 | List all registered markers |
| INLINE_CODE_53 | Use shorter traceback format |
| INLINE_CODE_54 | Show one line per failure in traceback |
| INLINE_CODE_55 | Disable traceback output |
| INLINE_CODE_56 | Treat deprecation warnings as errors |
| INLINE_CODE_57 | Disable output capturing (same as -s) |
| INLINE_CODE_58 | Set timeout of 300 seconds per test (requires pytest-timeout) |
| INLINE_CODE_59 | Repeat each test 3 times (requires pytest-repeat) |
| INLINE_CODE_60 | Run tests in random order (requires pytest-random-order) |
| INLINE_CODE_61 | Show short test summary for all tests (passed, failed, skipped, etc.) |
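In CI these options are frequently combined; a hedged sketch assuming pytest-xdist and pytest-cov are installed and the package is named myproject:

```bash
# Parallel run with coverage, a JUnit XML report for the CI server, and timing info
pytest -n auto --cov=myproject --cov-report=term-missing \
       --junitxml=report.xml --durations=10
```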
Configuration
Configuring pytest.ini
Place this file in the project root directory:
[pytest]
# Minimum pytest version required
minversion = 7.0
# Directories to search for tests
testpaths = tests
# Test file patterns
python_files = test_*.py *_test.py
# Test class patterns
python_classes = Test* *Tests
# Test function patterns
python_functions = test_*
# Default command line options
addopts =
    -ra
    --strict-markers
    --strict-config
    --verbose
    --cov=myproject
    --cov-report=html
    --cov-report=term-missing
# Custom markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    smoke: marks tests as smoke tests
    database: marks tests requiring database connection
    api: marks tests for API testing
# Directories to ignore
norecursedirs = .git .tox dist build *.egg venv node_modules
# Warning filters
filterwarnings =
    error
    ignore::UserWarning
    ignore::DeprecationWarning
# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
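Because addopts above always enables coverage, individual runs occasionally need to bypass it; pytest-cov's --no-cov flag does that, and the custom markers registered above can be deselected directly:

```bash
# Quick local run without the coverage overhead configured in addopts
pytest --no-cov -x

# Skip every test carrying the slow marker registered in the config
pytest -m "not slow"
```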
pyproject.toml Configuration
For modern Python projects, configure pytest in pyproject.toml:
[tool.pytest.ini_options]
minversion = "7.0"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
    "-ra",
    "--strict-markers",
    "--cov=myproject",
    "--cov-branch",
    "--cov-report=html",
    "--cov-report=term-missing:skip-covered",
    "--cov-fail-under=80",
]
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
    "unit: unit tests",
    "smoke: smoke tests",
]
filterwarnings = [
    "error",
    "ignore::UserWarning",
]
conftest.py - Shared Fixtures
Place this file at the root of the test directory to share fixtures across test modules:
import pytest

# Session-scoped fixture (runs once per test session)
@pytest.fixture(scope="session")
def database():
    """Provide database connection for entire test session"""
    db = create_database_connection()
    yield db
    db.close()

# Module-scoped fixture (runs once per test module)
@pytest.fixture(scope="module")
def api_client():
    """Provide API client for test module"""
    client = APIClient()
    yield client
    client.cleanup()

# Function-scoped fixture (default, runs for each test)
@pytest.fixture
def sample_data():
    """Provide sample data for testing"""
    return {"id": 1, "name": "Test User"}

# Autouse fixture (automatically used by all tests)
@pytest.fixture(autouse=True)
def reset_state():
    """Reset application state before each test"""
    clear_cache()
    yield
    cleanup_resources()

# Parametrized fixture
@pytest.fixture(params=["sqlite", "postgres", "mysql"])
def db_type(request):
    """Test with multiple database types"""
    return request.param

# Configure pytest hooks
def pytest_configure(config):
    """Add custom configuration"""
    config.addinivalue_line(
        "markers", "custom: custom marker description"
    )
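Tests then request these fixtures simply by naming them as parameters; a minimal sketch assuming the conftest.py above (with its placeholder helpers implemented) and a hypothetical tests/test_example.py:

```python
# tests/test_example.py -- fixture names are resolved from conftest.py automatically
def test_sample_data_has_expected_fields(sample_data):
    assert sample_data["id"] == 1
    assert sample_data["name"] == "Test User"

# Runs once per parameter of the db_type fixture (sqlite, postgres, mysql)
def test_supported_database_types(db_type):
    assert db_type in {"sqlite", "postgres", "mysql"}
```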
Common Use Cases
Use Case 1: Basic Unit Testing
# Create test file
cat > test_calculator.py << 'EOF'
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
EOF
# Run the tests
pytest test_calculator.py -v
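Plain assert statements cover most cases; for code that is expected to raise, pytest.raises can be used. A small sketch extending the same hypothetical calculator with a divide function:

```python
import pytest

def divide(a, b):
    return a / b

def test_divide_by_zero():
    # The with-block asserts that the enclosed code raises the given exception
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```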
Use Case 2: Testing with Fixtures
# Create test with fixtures
cat > test_user.py << 'EOF'
import pytest
@pytest.fixture
def user_data():
    return {"username": "testuser", "email": "test@example.com"}

def test_user_creation(user_data):
    assert user_data["username"] == "testuser"
    assert "@" in user_data["email"]
EOF
# Run tests with fixture details
pytest test_user.py -v --setup-show
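Fixtures can also request other fixtures, which keeps setup code composable; a minimal sketch building on the user_data fixture above (admin_user is hypothetical):

```python
import pytest

@pytest.fixture
def user_data():
    return {"username": "testuser", "email": "test@example.com"}

@pytest.fixture
def admin_user(user_data):
    # A fixture depends on another fixture by naming it as a parameter
    return {**user_data, "role": "admin"}

def test_admin_role(admin_user):
    assert admin_user["role"] == "admin"
    assert admin_user["username"] == "testuser"
```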
Use Case 3: Parametrized Testing
# Create parametrized tests
cat > test_math.py << 'EOF'
import pytest
@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(input, expected):
    assert input ** 2 == expected
EOF
# Run parametrized tests
pytest test_math.py -v
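Individual parameter sets can carry their own IDs or marks via pytest.param; a sketch of how the same kind of test might label its cases and mark one as an expected failure (the failing case is deliberate, purely for illustration):

```python
import pytest

@pytest.mark.parametrize("value,expected", [
    pytest.param(2, 4, id="small"),
    pytest.param(3, 9, id="odd"),
    pytest.param(5, 26, marks=pytest.mark.xfail, id="wrong-on-purpose"),
])
def test_square_with_ids(value, expected):
    assert value ** 2 == expected
```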
Use Case 4: Integration Testing with Markers
# Create tests with markers
cat > test_api.py << 'EOF'
import pytest
@pytest.mark.unit
def test_data_validation():
    assert True

@pytest.mark.integration
def test_api_endpoint():
    # Simulated API test
    assert True

@pytest.mark.slow
@pytest.mark.integration
def test_full_workflow():
    # Long-running test
    assert True
EOF
# Run only unit tests
pytest test_api.py -m unit -v
# Run integration tests excluding slow ones
pytest test_api.py -m "integration and not slow" -v
Use Case 5: Coverage Report Generation
# Run tests with coverage and generate reports
pytest --cov=myproject --cov-report=html --cov-report=term-missing
# View coverage report
# HTML report will be in htmlcov/index.html
# Run with coverage threshold
pytest --cov=myproject --cov-fail-under=80
# Generate terminal and HTML reports (usable as input for a coverage badge tool)
pytest --cov=myproject --cov-report=term --cov-report=html
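Coverage behaviour itself (measured sources, branch coverage, excluded lines) is configured through coverage.py; a minimal sketch in pyproject.toml, assuming the package is called myproject:

```toml
[tool.coverage.run]
source = ["myproject"]
branch = true
omit = ["*/tests/*"]

[tool.coverage.report]
# Lines matching these regexes are excluded from the report
exclude_lines = [
    "pragma: no cover",
    "if __name__ == .__main__.:",
]
```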
Best Practices
- **Use descriptive test names**: prefix tests with `test_` and describe what they verify (e.g. `test_user_registration_with_valid_email`)
- **Follow the AAA pattern**: structure tests into Arrange (set up), Act (execute), and Assert (verify) sections for clarity (see the sketch after this list)
- **Use fixtures for setup/teardown**: prefer pytest fixtures over setup/teardown methods for better reusability and dependency injection
- **Mark tests appropriately**: use markers (`@pytest.mark.slow`, `@pytest.mark.integration`) to categorize tests and enable selective execution
- **Keep tests isolated**: each test should be independent and not rely on state left behind by other tests; use fixtures with appropriate scopes
- **Use parametrize for similar tests**: instead of writing several near-identical tests, use `@pytest.mark.parametrize` to cover multiple inputs
- **Configure pytest.ini or pyproject.toml**: define project-wide defaults for test discovery, markers, and command-line options in configuration files
- **Write focused assertions**: keep assertions simple and clear; pytest's assertion introspection reports detailed failure information automatically
- **Use conftest.py for shared fixtures**: place reusable fixtures in `conftest.py` files at the appropriate directory levels
- **Run tests frequently**: test while developing, e.g. with `pytest -x` to stop at the first failure for faster feedback
- **Monitor test coverage**: review coverage reports regularly and aim for 80%+ coverage, but prioritize meaningful tests over the raw percentage
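To illustrate the AAA pattern from the list above, a small self-contained sketch (ShoppingCart is hypothetical and exists only to show the structure):

```python
class ShoppingCart:
    """Hypothetical class used only to illustrate the AAA structure."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_sums_item_prices():
    # Arrange: create the object under test and its inputs
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)

    # Act: execute the behaviour being tested
    result = cart.total()

    # Assert: verify the outcome
    assert result == 15.0
```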
Troubleshooting
| Issue | Solution |
|---|---|
| Tests not discovered | Ensure files match patterns: INLINE_CODE_69 or INLINE_CODE_70, functions start with INLINE_CODE_71, classes start with INLINE_CODE_72 |
| Import errors in tests | Add empty INLINE_CODE_73 files in test directories, or install package in editable mode: INLINE_CODE_74 |
| Fixture not found | Check fixture is defined in same file or INLINE_CODE_75, verify correct scope, ensure fixture name matches parameter |
| Tests pass locally but fail in CI | Check for environment-specific dependencies, ensure consistent Python versions, verify all dependencies in INLINE_CODE_76 |
| Slow test execution | Use INLINE_CODE_77 to identify slow tests, consider parallel execution with INLINE_CODE_78, mark slow tests with INLINE_CODE_79 |
| Coverage not working | Install pytest-cov: INLINE_CODE_80, ensure source path is correct: INLINE_CODE_81, check INLINE_CODE_82 configuration |
| Markers not recognized | Register markers in INLINE_CODE_83 or INLINE_CODE_84 under INLINE_CODE_85, use INLINE_CODE_86 to catch typos |
| PDB not working with capture | Use INLINE_CODE_87 to disable output capturing, or use INLINE_CODE_88 instead of INLINE_CODE_89 |
| Fixtures running in wrong order | Check fixture scope (session > module > class > function), use INLINE_CODE_90 carefully, review dependency chain |
| Parallel tests failing | Ensure tests are isolated and don't share state, check for race conditions, use proper locking for shared resources |
| Memory leaks in tests | Use INLINE_CODE_91 for proper cleanup, ensure fixtures yield and cleanup properly, check for circular references |
| Warnings cluttering output | Configure warning filters in pytest.ini: INLINE_CODE_93, or use the -W flag on the command line |
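The last row's command-line form uses pytest's -W option, which mirrors Python's own warning filter syntax; a quick sketch:

```bash
# Turn DeprecationWarning into errors for this run only
pytest -W error::DeprecationWarning

# Silence a specific warning category without touching pytest.ini
pytest -W ignore::UserWarning
```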