pytest Cheatsheet

Installation

Platform | Command
pip (All platforms) | pip install pytest
Ubuntu/Debian | sudo apt install python3-pytest
Fedora/RHEL | sudo dnf install python3-pytest
Arch Linux | sudo pacman -S python-pytest
macOS (Homebrew) | brew install pytest
Windows (Chocolatey) | choco install pytest
Virtual Environment | python -m venv venv && source venv/bin/activate && pip install pytest
With Common Plugins | pip install pytest pytest-cov pytest-xdist pytest-mock pytest-html
Verify Installation | pytest --version
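
To confirm the installation works end to end, a minimal test file is enough. The file and test names below are placeholders chosen for this sketch.

# test_sanity.py -- minimal smoke test to confirm pytest is wired up
def test_pytest_is_working():
    assert 1 + 1 == 2

# Run it with: pytest -q test_sanity.py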

Basic Commands

Command | Description
pytest | Run all tests in current directory and subdirectories
pytest test_file.py | Run tests in a specific file
pytest tests/ | Run all tests in a specific directory
pytest test_file.py::test_function | Run a specific test function
pytest test_file.py::TestClass | Run all tests in a specific class
pytest test_file.py::TestClass::test_method | Run a specific test method in a class
pytest -v | Run tests with verbose output (show test names)
pytest -vv | Run tests with very verbose output (show full details)
pytest -q | Run tests in quiet mode (minimal output)
pytest -s | Show print statements and stdout during test execution
pytest -x | Stop after first test failure
pytest --maxfail=3 | Stop after N test failures
pytest -k "test_user" | Run tests matching keyword expression
pytest -k "user and not admin" | Run tests matching multiple keywords (AND/NOT logic)
pytest -m "slow" | Run tests marked with specific marker
pytest -m "not slow" | Run tests excluding specific marker
pytest --collect-only | Show which tests would be run without executing them
pytest --lf | Run only tests that failed in last run (last failed)
pytest --ff | Run failed tests first, then others (failed first)
pytest -l | Show local variables in tracebacks on failure
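
As a rough illustration of how the selection flags above combine, consider a hypothetical test_users.py; the test names and the slow marker are invented for this sketch (the marker would need to be registered, as shown in the Configuration section).

# test_users.py (hypothetical module used to illustrate test selection)
import pytest

def test_user_create():
    assert True

def test_user_delete():
    assert True

@pytest.mark.slow
def test_admin_report():
    assert True

# pytest test_users.py::test_user_create   -> runs exactly one test
# pytest -k "user and not delete"          -> selects only test_user_create
# pytest -m "not slow"                     -> skips test_admin_report
# pytest --lf                              -> reruns only what failed last time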

Advanced Usage

Command | Description
pytest -n auto | Run tests in parallel using all available CPU cores (requires pytest-xdist)
pytest -n 4 | Run tests in parallel with 4 worker processes
pytest --durations=10 | Show the 10 slowest test durations
pytest --durations=10 --durations-min=1.0 | Show slowest tests taking at least 1 second
pytest --cov=myproject | Run tests with code coverage report (requires pytest-cov)
pytest --cov=myproject --cov-report=html | Generate HTML coverage report in htmlcov/ directory
pytest --cov=myproject --cov-report=term-missing | Show coverage with missing line numbers in terminal
pytest --cov=myproject --cov-fail-under=80 | Fail if coverage is below 80%
pytest --cov-branch | Include branch coverage analysis
pytest --junitxml=report.xml | Generate JUnit XML report for CI/CD integration
pytest --html=report.html | Generate HTML test report (requires pytest-html)
pytest --pdb | Drop into the Python debugger (PDB) on test failures
pytest --trace | Drop into PDB at the start of each test
pytest --setup-show | Show setup and teardown of fixtures during execution
pytest --fixtures | List all available fixtures and their docstrings
pytest --markers | List all registered markers
pytest --tb=short | Use a shorter traceback format
pytest --tb=line | Show one line per failure in tracebacks
pytest --tb=no | Disable traceback output
pytest -W error::DeprecationWarning | Treat deprecation warnings as errors
pytest --capture=no | Disable output capturing (same as -s)
pytest --timeout=300 | Set a timeout of 300 seconds per test (requires pytest-timeout)
pytest --count=3 | Repeat each test 3 times (requires pytest-repeat)
pytest --random-order | Run tests in random order (requires pytest-random-order)
pytest -ra | Show a short summary of all non-passing outcomes (failures, errors, skips, xfails)
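
A quick way to see the timing and parallelism flags in action is a throwaway module with an artificially slow test. Everything below is illustrative; the -n and --timeout lines assume pytest-xdist and pytest-timeout are installed.

# test_timing.py (throwaway module for experimenting with timing flags)
import time

def test_fast():
    assert sum(range(10)) == 45

def test_slow_io():
    time.sleep(1.5)  # stand-in for a slow external call
    assert True

# pytest --durations=5 test_timing.py   -> test_slow_io appears as the slowest test
# pytest -n auto test_timing.py         -> spreads the tests across CPU cores
# pytest --timeout=10 test_timing.py    -> fails any test that runs longer than 10 seconds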

Configuration

pytest.ini Configuration File

Place in project root directory:

[pytest]
# Minimum pytest version required
minversion = 7.0

# Directories to search for tests
testpaths = tests

# Test file patterns
python_files = test_*.py *_test.py

# Test class patterns
python_classes = Test* *Tests

# Test function patterns
python_functions = test_*

# Default command line options
addopts = 
    -ra
    --strict-markers
    --strict-config
    --verbose
    --cov=myproject
    --cov-report=html
    --cov-report=term-missing

# Custom markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    smoke: marks tests as smoke tests
    database: marks tests requiring database connection
    api: marks tests for API testing

# Directories to ignore
norecursedirs = .git .tox dist build *.egg venv node_modules

# Warning filters
filterwarnings =
    error
    ignore::UserWarning
    ignore::DeprecationWarning

# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
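
With log_cli enabled as above, log records emitted inside tests stream to the terminal while they run. The small module below is a sketch for trying that out; the module and test names are arbitrary.

# test_logging_demo.py (illustrative; pairs with the log_cli settings above)
import logging

logger = logging.getLogger(__name__)

def test_logs_are_shown_live():
    logger.info("starting the expensive part")
    result = sum(range(100))
    logger.info("finished, result=%d", result)
    assert result == 4950

# With log_cli = true and log_cli_level = INFO, both INFO lines appear
# in the terminal as the test executes (no -s needed).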

pyproject.toml Configuration

Modern Python projects using pyproject.toml:

[tool.pytest.ini_options]
minversion = "7.0"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]

addopts = [
    "-ra",
    "--strict-markers",
    "--cov=myproject",
    "--cov-branch",
    "--cov-report=html",
    "--cov-report=term-missing:skip-covered",
    "--cov-fail-under=80",
]

markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
    "unit: unit tests",
    "smoke: smoke tests",
]

filterwarnings = [
    "error",
    "ignore::UserWarning",
]
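
Because the filter list starts with "error", any warning that is not explicitly ignored fails the test. Warnings a test expects can be captured with pytest.warns, or allowed for a single test with the filterwarnings marker. The sketch below is illustrative and not part of the configuration above.

import warnings

import pytest

def test_expected_warning_is_captured():
    # With "error" in filterwarnings, unexpected warnings fail the test,
    # so expected ones should be asserted explicitly.
    with pytest.warns(DeprecationWarning):
        warnings.warn("old API", DeprecationWarning)

@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_warning_allowed_for_this_test_only():
    warnings.warn("unclosed resource", ResourceWarning)
    assert True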

conftest.py - Shared Fixtures

Place in test directory root for shared fixtures:

import pytest

# NOTE: create_database_connection, APIClient, clear_cache, and
# cleanup_resources below stand in for your project's own helpers.

# Session-scoped fixture (runs once per test session)
@pytest.fixture(scope="session")
def database():
    """Provide database connection for entire test session"""
    db = create_database_connection()
    yield db
    db.close()

# Module-scoped fixture (runs once per test module)
@pytest.fixture(scope="module")
def api_client():
    """Provide API client for test module"""
    client = APIClient()
    yield client
    client.cleanup()

# Function-scoped fixture (default, runs for each test)
@pytest.fixture
def sample_data():
    """Provide sample data for testing"""
    return {"id": 1, "name": "Test User"}

# Autouse fixture (automatically used by all tests)
@pytest.fixture(autouse=True)
def reset_state():
    """Reset application state before each test"""
    clear_cache()
    yield
    cleanup_resources()

# Parametrized fixture
@pytest.fixture(params=["sqlite", "postgres", "mysql"])
def db_type(request):
    """Test with multiple database types"""
    return request.param

# Configure pytest hooks
def pytest_configure(config):
    """Add custom configuration"""
    config.addinivalue_line(
        "markers", "custom: custom marker description"
    )
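
Any test module under the same test root can request these fixtures by name, since pytest injects them by parameter name. The sketch below assumes the conftest.py above; the db_type test runs once per database listed in the fixture's params.

# test_uses_shared_fixtures.py (sketch; relies on the conftest.py above)
def test_sample_data_shape(sample_data):
    assert sample_data["name"] == "Test User"

def test_runs_once_per_database(db_type):
    # Parametrized fixture: this test runs three times,
    # once each for "sqlite", "postgres", and "mysql".
    assert db_type in {"sqlite", "postgres", "mysql"}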

Common Use Cases

Use Case 1: Basic Unit Testing

# Create test file
cat > test_calculator.py << 'EOF'
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
EOF

# Run the tests
pytest test_calculator.py -v

Use Case 2: Testing with Fixtures

# Create test with fixtures
cat > test_user.py << 'EOF'
import pytest

@pytest.fixture
def user_data():
    return {"username": "testuser", "email": "test@example.com"}

def test_user_creation(user_data):
    assert user_data["username"] == "testuser"
    assert "@" in user_data["email"]
EOF

# Run tests with fixture details
pytest test_user.py -v --setup-show

Use Case 3: Parametrized Testing

# Create parametrized tests
cat > test_math.py << 'EOF'
import pytest

@pytest.mark.parametrize("value,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(value, expected):
    assert value ** 2 == expected
EOF

# Run parametrized tests
pytest test_math.py -v
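
For more readable -v output, each parameter set can be given an id, and individual cases can carry their own marks via pytest.param. This is an optional variation on the test above, not required by pytest.

import pytest

@pytest.mark.parametrize(
    "value,expected",
    [
        pytest.param(2, 4, id="two"),
        pytest.param(3, 9, id="three"),
        pytest.param(10_000, 100_000_000, id="large", marks=pytest.mark.slow),
    ],
)
def test_square_with_ids(value, expected):
    assert value ** 2 == expected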

Use Case 4: Integration Testing with Markers

# Create tests with markers
cat > test_api.py << 'EOF'
import pytest

@pytest.mark.unit
def test_data_validation():
    assert True

@pytest.mark.integration
def test_api_endpoint():
    # Simulated API test
    assert True

@pytest.mark.slow
@pytest.mark.integration
def test_full_workflow():
    # Long-running test
    assert True
EOF

# Run only unit tests
pytest test_api.py -m unit -v

# Run integration tests excluding slow ones
pytest test_api.py -m "integration and not slow" -v

Use Case 5: Coverage Report Generation

# Run tests with coverage and generate reports
pytest --cov=myproject --cov-report=html --cov-report=term-missing

# View coverage report
# HTML report will be in htmlcov/index.html

# Run with coverage threshold
pytest --cov=myproject --cov-fail-under=80

# Generate coverage data in one run; pytest does not create a badge itself,
# but separate badge tools can read the resulting coverage output
pytest --cov=myproject --cov-report=term --cov-report=html
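
Code that is intentionally untestable (debug-only branches, defensive fallbacks) can be excluded from the report with coverage.py's pragma comment, which keeps a --cov-fail-under threshold meaningful. A minimal sketch, with a made-up function:

def load_config(path):
    """Read a config file, falling back to an empty string if it is missing."""
    try:
        with open(path) as fh:
            return fh.read()
    except FileNotFoundError:  # pragma: no cover
        return ""

# The except branch is omitted from the coverage report, so it does not
# drag down the --cov-fail-under threshold.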

Best Practices

  • Use descriptive test names: Name tests with test_ prefix and describe what they test (e.g., test_user_registration_with_valid_email)
  • Follow AAA pattern: Structure tests with Arrange (setup), Act (execute), Assert (verify) sections for clarity (see the sketch after this list)
  • Use fixtures for setup/teardown: Leverage pytest fixtures instead of setup/teardown methods for better reusability and dependency injection
  • Mark tests appropriately: Use markers (@pytest.mark.slow, @pytest.mark.integration) to categorize tests and enable selective execution
  • Keep tests isolated: Each test should be independent and not rely on the state from other tests; use fixtures with appropriate scopes
  • Use parametrize for similar tests: Instead of writing multiple similar tests, use @pytest.mark.parametrize to test multiple inputs
  • Configure pytest.ini or pyproject.toml: Set project-wide defaults for test discovery, markers, and command-line options in configuration files
  • Write focused assertions: Use simple, clear assertions; pytest’s introspection shows detailed failure information automatically
  • Use conftest.py for shared fixtures: Place reusable fixtures in conftest.py files at appropriate directory levels
  • Run tests frequently: Execute tests during development with pytest -x to stop on first failure for faster feedback
  • Monitor test coverage: Regularly check coverage reports and aim for 80%+ coverage, but focus on meaningful tests over percentage
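
A minimal sketch of the Arrange / Act / Assert structure referenced in the list above; ShoppingCart is a hypothetical class defined inline so the example is self-contained.

class ShoppingCart:
    """Toy class used only for this example."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_sums_item_prices():
    # Arrange: build the object under test and its inputs
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 1.25)

    # Act: perform the single behaviour being verified
    total = cart.total()

    # Assert: check the observable outcome
    assert total == 13.75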

Troubleshooting

Issue | Solution
Tests not discovered | Ensure files match the patterns test_*.py or *_test.py, functions start with test_, and classes start with Test
Import errors in tests | Add empty __init__.py files in test directories, or install the package in editable mode with pip install -e . (see the conftest.py sketch after this table)
Fixture not found | Check that the fixture is defined in the same file or in a conftest.py, verify the scope is correct, and ensure the fixture name matches the test parameter
Tests pass locally but fail in CI | Check for environment-specific dependencies, ensure consistent Python versions, and verify all dependencies are listed in requirements.txt
Slow test execution | Use pytest --durations=10 to identify slow tests, consider parallel execution with pytest -n auto, and mark slow tests with @pytest.mark.slow
Coverage not working | Install pytest-cov (pip install pytest-cov), ensure the source path is correct (--cov=myproject), and check the .coveragerc configuration
Markers not recognized | Register markers under [pytest] in pytest.ini or [tool.pytest.ini_options] in pyproject.toml, and use --strict-markers to catch typos
PDB not working with capture | Use pytest -s --pdb to disable output capturing, or call pytest.set_trace() instead of pdb.set_trace()
Fixtures running in wrong order | Check fixture scopes (session > module > class > function), use autouse=True fixtures sparingly, and review the fixture dependency chain
Parallel tests failing | Ensure tests are isolated and do not share state, check for race conditions, and use proper locking for shared resources
Memory leaks in tests | Use function-scoped fixtures for per-test cleanup, make sure fixtures yield and then clean up properly, and check for circular references
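
For the import-error row above, one common workaround besides pip install -e . is a root-level conftest.py that puts the source directory on sys.path. This sketch assumes a src/ layout and should be adapted to the real project structure.

# conftest.py at the project root (sketch; assumes code lives under src/)
import sys
from pathlib import Path

# Make "import myproject" work without installing the package first.
sys.path.insert(0, str(Path(__file__).resolve().parent / "src"))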