pytest Cheatsheet
Installation
| Platform | Command |
|---|---|
| pip (All platforms) | pip install pytest |
| Ubuntu/Debian | sudo apt install python3-pytest |
| Fedora/RHEL | sudo dnf install python3-pytest |
| Arch Linux | sudo pacman -S python-pytest |
| macOS (Homebrew) | brew install pytest |
| Windows (Chocolatey) | choco install pytest |
| Virtual Environment | python -m venv venv && source venv/bin/activate && pip install pytest |
| With Common Plugins | pip install pytest pytest-cov pytest-xdist pytest-mock pytest-html |
| Verify Installation | pytest --version |
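If the pytest command is not on your PATH (common with per-user or virtual-environment installs), you can also confirm the installed version from Python itself. A minimal sketch (the filename is arbitrary):

```python
# check_pytest.py -- confirm pytest is importable and which version is active.
import pytest

print(f"pytest {pytest.__version__} is installed")
```

Running python -m pytest instead of the bare pytest command also works and additionally prepends the current directory to sys.path, which can sidestep import problems.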
Basic Commands
| Command | Description |
|---|---|
| pytest | Run all tests in current directory and subdirectories |
| pytest test_file.py | Run tests in a specific file |
| pytest tests/ | Run all tests in a specific directory |
| pytest test_file.py::test_function | Run a specific test function |
| pytest test_file.py::TestClass | Run all tests in a specific class |
| pytest test_file.py::TestClass::test_method | Run a specific test method in a class |
| pytest -v | Run tests with verbose output (show test names) |
| pytest -vv | Run tests with very verbose output (show full details) |
| pytest -q | Run tests in quiet mode (minimal output) |
| pytest -s | Show print statements and stdout during test execution |
| pytest -x | Stop after first test failure |
| pytest --maxfail=3 | Stop after N test failures |
| pytest -k "test_user" | Run tests matching keyword expression |
| pytest -k "user and not admin" | Run tests matching multiple keywords (AND/NOT logic) |
| pytest -m "slow" | Run tests marked with specific marker |
| pytest -m "not slow" | Run tests excluding specific marker |
| pytest --collect-only | Show which tests would be run without executing them |
| pytest --lf | Run only tests that failed in last run (last failed) |
| pytest --ff | Run failed tests first, then others (failed first) |
| pytest -l | Show local variables in tracebacks on failure |
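The node IDs and -k expressions above map directly onto how tests are laid out in a file. The test_shop.py sketch below is hypothetical and exists only to show what each selector targets.

```python
# test_shop.py -- illustrative layout for the selection commands above.
import pytest


def test_user_login():            # pytest test_shop.py::test_user_login
    assert True


class TestCart:                   # pytest test_shop.py::TestCart
    def test_add_item(self):      # pytest test_shop.py::TestCart::test_add_item
        assert True

    @pytest.mark.slow             # selected by: pytest -m slow (register 'slow' to avoid a warning)
    def test_checkout_flow(self):
        assert True


# Keyword selection examples (matching is case-insensitive):
#   pytest -k "user"                   -> runs test_user_login
#   pytest -k "cart and not checkout"  -> runs TestCart::test_add_item
```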
Advanced Usage
| Command | Description |
|---|---|
| pytest -n auto | Run tests in parallel using all available CPU cores (requires pytest-xdist) |
| pytest -n 4 | Run tests in parallel with 4 worker processes |
| pytest --durations=10 | Show the 10 slowest test durations |
| pytest --durations=10 --durations-min=1.0 | Show slowest tests taking at least 1 second |
| pytest --cov=myproject | Run tests with code coverage report (requires pytest-cov) |
| pytest --cov=myproject --cov-report=html | Generate HTML coverage report in htmlcov/ directory |
| pytest --cov=myproject --cov-report=term-missing | Show coverage with missing line numbers in terminal |
| pytest --cov=myproject --cov-fail-under=80 | Fail if coverage is below 80% |
| pytest --cov-branch | Include branch coverage analysis |
| pytest --junitxml=report.xml | Generate JUnit XML report for CI/CD integration |
| pytest --html=report.html | Generate HTML test report (requires pytest-html) |
| pytest --pdb | Drop into Python debugger (PDB) on test failures |
| pytest --trace | Drop into PDB at the start of each test |
| pytest --setup-show | Show setup and teardown of fixtures during execution |
| pytest --fixtures | List all available fixtures and their docstrings |
| pytest --markers | List all registered markers |
| pytest --tb=short | Use shorter traceback format |
| pytest --tb=line | Show one line per failure in traceback |
| pytest --tb=no | Disable traceback output |
| pytest -W error::DeprecationWarning | Treat deprecation warnings as errors |
| pytest --capture=no | Disable output capturing (same as -s) |
| pytest --timeout=300 | Set timeout of 300 seconds per test (requires pytest-timeout) |
| pytest --count=3 | Repeat each test 3 times (requires pytest-repeat) |
| pytest --random-order | Run tests in random order (requires pytest-random-order) |
| pytest -ra | Show a short summary of all non-passing outcomes (failures, skips, xfails, errors) at the end of the run |
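The same flags can also be passed programmatically through pytest.main(), which is handy for small wrapper scripts or custom entry points. A minimal sketch; the flag combination shown is only an example.

```python
# run_tests.py -- invoke pytest from Python with the same CLI flags.
import sys

import pytest

if __name__ == "__main__":
    # pytest.main returns an exit code: 0 = all passed, 1 = failures,
    # 5 = no tests collected, etc.
    exit_code = pytest.main(["-x", "--tb=short", "--durations=10", "tests/"])
    sys.exit(exit_code)
```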
Configuration
pytest.ini Configuration File
Place in project root directory:
```ini
[pytest]
# Minimum pytest version required
minversion = 7.0

# Directories to search for tests
testpaths = tests

# Test file patterns
python_files = test_*.py *_test.py

# Test class patterns
python_classes = Test* *Tests

# Test function patterns
python_functions = test_*

# Default command line options
addopts =
    -ra
    --strict-markers
    --strict-config
    --verbose
    --cov=myproject
    --cov-report=html
    --cov-report=term-missing

# Custom markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    smoke: marks tests as smoke tests
    database: marks tests requiring database connection
    api: marks tests for API testing

# Directories to ignore
norecursedirs = .git .tox dist build *.egg venv node_modules

# Warning filters
filterwarnings =
    error
    ignore::UserWarning
    ignore::DeprecationWarning

# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
```
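With log_cli enabled as above, log records emitted inside tests stream to the terminal at the configured level. A minimal sketch of a test that produces such output (the messages are invented for illustration):

```python
# test_logging_demo.py -- records appear live because log_cli = true above.
import logging

logger = logging.getLogger(__name__)


def test_emits_log_output():
    logger.info("starting the check")  # shown at INFO and above
    logger.debug("hidden unless log_cli_level is lowered to DEBUG")
    assert 1 + 1 == 2
```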
pyproject.toml Configuration
For modern Python projects that use pyproject.toml:
```toml
[tool.pytest.ini_options]
minversion = "7.0"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
    "-ra",
    "--strict-markers",
    "--cov=myproject",
    "--cov-branch",
    "--cov-report=html",
    "--cov-report=term-missing:skip-covered",
    "--cov-fail-under=80",
]
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
    "unit: unit tests",
    "smoke: smoke tests",
]
filterwarnings = [
    "error",
    "ignore::UserWarning",
]
```
conftest.py - Shared Fixtures
Place in test directory root for shared fixtures:
```python
import pytest


# Session-scoped fixture (runs once per test session)
@pytest.fixture(scope="session")
def database():
    """Provide database connection for entire test session"""
    db = create_database_connection()
    yield db
    db.close()


# Module-scoped fixture (runs once per test module)
@pytest.fixture(scope="module")
def api_client():
    """Provide API client for test module"""
    client = APIClient()
    yield client
    client.cleanup()


# Function-scoped fixture (default, runs for each test)
@pytest.fixture
def sample_data():
    """Provide sample data for testing"""
    return {"id": 1, "name": "Test User"}


# Autouse fixture (automatically used by all tests)
@pytest.fixture(autouse=True)
def reset_state():
    """Reset application state before each test"""
    clear_cache()
    yield
    cleanup_resources()


# Parametrized fixture
@pytest.fixture(params=["sqlite", "postgres", "mysql"])
def db_type(request):
    """Test with multiple database types"""
    return request.param


# Configure pytest hooks
def pytest_configure(config):
    """Add custom configuration"""
    config.addinivalue_line(
        "markers", "custom: custom marker description"
    )
```
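Tests request these fixtures simply by naming them as parameters; pytest injects them and runs teardown in reverse scope order. A sketch of a test module that consumes the fixtures defined above (the assertions are illustrative only):

```python
# test_using_conftest.py -- fixtures above are injected by parameter name.
def test_sample_data_shape(sample_data):
    # Function-scoped fixture: a fresh dict for every test.
    assert sample_data["name"] == "Test User"


def test_query_runs(database, sample_data):
    # Session-scoped 'database' is created once and shared across the session.
    assert database is not None


def test_runs_per_backend(db_type):
    # Parametrized fixture: this test runs three times, once per param.
    assert db_type in {"sqlite", "postgres", "mysql"}
```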
Common Use Cases
Use Case 1: Basic Unit Testing
```bash
# Create test file
cat > test_calculator.py << 'EOF'
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
EOF

# Run the tests
pytest test_calculator.py -v
```
Use Case 2: Testing with Fixtures
```bash
# Create test with fixtures
cat > test_user.py << 'EOF'
import pytest

@pytest.fixture
def user_data():
    return {"username": "testuser", "email": "test@example.com"}

def test_user_creation(user_data):
    assert user_data["username"] == "testuser"
    assert "@" in user_data["email"]
EOF

# Run tests with fixture details
pytest test_user.py -v --setup-show
```
Use Case 3: Parametrized Testing
```bash
# Create parametrized tests
cat > test_math.py << 'EOF'
import pytest

@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(input, expected):
    assert input ** 2 == expected
EOF

# Run parametrized tests
pytest test_math.py -v
```
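Individual cases can also be wrapped in pytest.param to attach readable IDs or marks, which show up in the -v output. A hedged variation on the example above:

```python
# test_math_ids.py -- same idea as above, with explicit IDs and an xfail case.
import pytest


@pytest.mark.parametrize(
    "value,expected",
    [
        pytest.param(2, 4, id="two"),
        pytest.param(3, 9, id="three"),
        # Illustrative expected failure: (-2) ** 2 is 4, not -4.
        pytest.param(-2, -4, id="negative", marks=pytest.mark.xfail),
    ],
)
def test_square(value, expected):
    assert value ** 2 == expected
```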
Use Case 4: Integration Testing with Markers
```bash
# Create tests with markers
cat > test_api.py << 'EOF'
import pytest

@pytest.mark.unit
def test_data_validation():
    assert True

@pytest.mark.integration
def test_api_endpoint():
    # Simulated API test
    assert True

@pytest.mark.slow
@pytest.mark.integration
def test_full_workflow():
    # Long-running test
    assert True
EOF

# Run only unit tests
pytest test_api.py -m unit -v

# Run integration tests excluding slow ones
pytest test_api.py -m "integration and not slow" -v
```
Use Case 5: Coverage Report Generation
```bash
# Run tests with coverage and generate reports
pytest --cov=myproject --cov-report=html --cov-report=term-missing

# View coverage report: the HTML report is written to htmlcov/index.html

# Run with a coverage threshold
pytest --cov=myproject --cov-fail-under=80

# Generate both terminal and HTML reports in a single run
pytest --cov=myproject --cov-report=term --cov-report=html
```
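Code that should not count against the coverage total (developer-only helpers, for example) can be excluded with coverage.py's default `# pragma: no cover` comment. A minimal sketch with an invented module:

```python
# mymodule.py -- excluding code from the coverage report.
def transfer(amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return amount


def debug_dump(state):  # pragma: no cover
    # Developer-only helper; excluded from coverage totals.
    print(state)
```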
Best Practices
- Use descriptive test names: Name tests with the test_ prefix and describe what they test (e.g., test_user_registration_with_valid_email)
- Follow AAA pattern: Structure tests with Arrange (setup), Act (execute), Assert (verify) sections for clarity (see the example after this list)
- Use fixtures for setup/teardown: Leverage pytest fixtures instead of setup/teardown methods for better reusability and dependency injection
- Mark tests appropriately: Use markers (@pytest.mark.slow, @pytest.mark.integration) to categorize tests and enable selective execution
- Keep tests isolated: Each test should be independent and not rely on the state from other tests; use fixtures with appropriate scopes
- Use parametrize for similar tests: Instead of writing multiple similar tests, use @pytest.mark.parametrize to test multiple inputs
- Configure pytest.ini or pyproject.toml: Set project-wide defaults for test discovery, markers, and command-line options in configuration files
- Write focused assertions: Use simple, clear assertions; pytest’s introspection shows detailed failure information automatically
- Use conftest.py for shared fixtures: Place reusable fixtures in conftest.py files at appropriate directory levels
- Run tests frequently: Execute tests during development with pytest -x to stop on first failure for faster feedback
- Monitor test coverage: Regularly check coverage reports and aim for 80%+ coverage, but focus on meaningful tests over percentage
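As referenced in the AAA bullet above, a test reads best when the three phases are visibly separated. A minimal, self-contained sketch; the ShoppingCart class is invented purely for illustration.

```python
# test_cart_aaa.py -- Arrange / Act / Assert structure made explicit.
class ShoppingCart:
    """Tiny stand-in class so the example is self-contained."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_total_sums_item_prices():
    # Arrange: build the object under test and its inputs.
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)

    # Act: perform the single behaviour being verified.
    result = cart.total()

    # Assert: one clear expectation.
    assert result == 15.0
```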
Troubleshooting
| Issue | Solution |
|---|---|
| Tests not discovered | Ensure files match patterns: test_*.py or *_test.py, functions start with test_, classes start with Test |
| Import errors in tests | Add empty __init__.py files in test directories, or install package in editable mode: pip install -e . |
| Fixture not found | Check fixture is defined in same file or conftest.py, verify correct scope, ensure fixture name matches parameter |
| Tests pass locally but fail in CI | Check for environment-specific dependencies, ensure consistent Python versions, verify all dependencies in requirements.txt |
| Slow test execution | Use pytest --durations=10 to identify slow tests, consider parallel execution with pytest -n auto, mark slow tests with @pytest.mark.slow |
| Coverage not working | Install pytest-cov: pip install pytest-cov, ensure source path is correct: --cov=myproject, check .coveragerc configuration |
| Markers not recognized | Register markers in pytest.ini or pyproject.toml under [tool.pytest.ini_options], use --strict-markers to catch typos |
| PDB not working with capture | Use pytest -s --pdb to disable output capturing, or use pytest.set_trace() instead of pdb.set_trace() |
| Fixtures running in wrong order | Check fixture scope (session > module > class > function), use @pytest.fixture(autouse=True) carefully, review dependency chain |
| Parallel tests failing | Ensure tests are isolated and don’t share state, check for race conditions, use proper locking for shared resources (see the tmp_path sketch after this table) |
| Memory leaks in tests | Use @pytest.fixture(scope="function") for proper cleanup, ensure fixtures yield and cleanup properly, check for circular references |
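For the "Parallel tests failing" row above, the most common fix is to stop tests from sharing on-disk state. pytest's built-in tmp_path fixture gives every test its own directory, which stays safe under pytest -n auto. A minimal sketch:

```python
# test_isolated_io.py -- per-test temporary directory, safe for parallel runs.
def test_writes_report(tmp_path):
    # tmp_path is a pathlib.Path unique to this test invocation.
    report = tmp_path / "report.txt"
    report.write_text("ok")

    assert report.read_text() == "ok"
```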