Pytest Cheatsheet
Installation
| Command | Description |
|---|---|
| `pip install pytest` | Install pytest |
| `pip install pytest-cov` | Install the coverage plugin |
| `pip install pytest-xdist` | Install the parallel execution plugin |
| `pip install pytest-html` | Install the HTML report plugin |
| `pytest --version` | Verify the installation and show the pytest version |
Basic Commands
| Command | Description |
|---|---|
| `pytest` | Run all tests in the current directory and subdirectories |
| `pytest test_file.py` | Run tests in a specific file |
| `pytest tests/` | Run tests in a specific directory |
| `pytest -v` | Verbose output, one line per test |
| `pytest -q` | Quiet output |
| `pytest -x` | Stop after the first failure |
| `pytest -k "expression"` | Run only tests whose names match the expression |
| `pytest -m marker` | Run only tests with the given marker |
| `pytest -s` | Disable output capturing (show print output) |
| `pytest --lf` | Re-run only the tests that failed last time |
| `pytest --ff` | Run previously failed tests first, then the rest |
| `pytest --collect-only` | List the tests that would run without executing them |
Advanced Usage
| Command | Description |
|---|---|
| `pytest -n auto` | Run tests in parallel using all available CPU cores (requires pytest-xdist) |
| `pytest -n 4` | Run tests in parallel with 4 worker processes |
| `pytest --durations=10` | Show the 10 slowest test durations |
| `pytest --durations=0 --durations-min=1.0` | Show slowest tests taking at least 1 second |
| `pytest --cov=myproject` | Run tests with code coverage report (requires pytest-cov) |
| `pytest --cov=myproject --cov-report=html` | Generate HTML coverage report in htmlcov/ directory |
| `pytest --cov=myproject --cov-report=term-missing` | Show coverage with missing line numbers in terminal |
| `pytest --cov=myproject --cov-fail-under=80` | Fail if coverage is below 80% |
| `pytest --cov=myproject --cov-branch` | Include branch coverage analysis |
| `pytest --junitxml=report.xml` | Generate JUnit XML report for CI/CD integration |
| `pytest --html=report.html` | Generate HTML test report (requires pytest-html) |
| `pytest --pdb` | Drop into Python debugger (PDB) on test failures |
| `pytest --trace` | Drop into PDB at the start of each test |
| `pytest --setup-show` | Show setup and teardown of fixtures during execution |
| `pytest --fixtures` | List all available fixtures and their docstrings |
| `pytest --markers` | List all registered markers |
| `pytest --tb=short` | Use shorter traceback format |
| `pytest --tb=line` | Show one line per failure in traceback |
| `pytest --tb=no` | Disable traceback output |
| `pytest -W error::DeprecationWarning` | Treat deprecation warnings as errors |
| `pytest --capture=no` | Disable output capturing (same as -s) |
| `pytest --timeout=300` | Set timeout of 300 seconds per test (requires pytest-timeout) |
| `pytest --count=3` | Repeat each test 3 times (requires pytest-repeat) |
| `pytest --random-order` | Run tests in random order (requires pytest-random-order) |
| `pytest -rA` | Show short test summary for all tests (passed, failed, skipped, etc.) |
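The same options can also be driven from Python, which is occasionally handy in scripts or CI wrappers. A minimal sketch (the `tests/` path is a placeholder): `pytest.main()` accepts the same argument list as the command line and returns an exit code.

```python
import sys

import pytest

# Stop on the first failure, use short tracebacks, and report the
# 10 slowest tests; pytest.main() mirrors the CLI arguments.
exit_code = pytest.main([
    "tests/",          # placeholder test directory
    "-x",
    "--tb=short",
    "--durations=10",
])
sys.exit(exit_code)
```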
Configuration
pytest.ini Configuration File
Place this file in the project's root directory:
[pytest]
# Minimum pytest version required
minversion = 7.0
# Directories to search for tests
testpaths = tests
# Test file patterns
python_files = test_*.py *_test.py
# Test class patterns
python_classes = Test* *Tests
# Test function patterns
python_functions = test_*
# Default command line options
addopts =
    -ra
    --strict-markers
    --strict-config
    --verbose
    --cov=myproject
    --cov-report=html
    --cov-report=term-missing
# Custom markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    smoke: marks tests as smoke tests
    database: marks tests requiring database connection
    api: marks tests for API testing
# Directories to ignore
norecursedirs = .git .tox dist build *.egg venv node_modules
# Warning filters
filterwarnings =
    error
    ignore::UserWarning
    ignore::DeprecationWarning
# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
pyproject.toml Configuration
Modern Python projects can configure pytest in pyproject.toml instead:
[tool.pytest.ini_options]
minversion = "7.0"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
"-ra",
"--strict-markers",
"--cov=myproject",
"--cov-branch",
"--cov-report=html",
"--cov-report=term-missing:skip-covered",
"--cov-fail-under=80",
]
markers = [
"slow: marks tests as slow",
"integration: integration tests",
"unit: unit tests",
"smoke: smoke tests",
]
filterwarnings = [
"error",
"ignore::UserWarning",
]
conftest.py - Shared Fixtures
Place this file at the root of the test directory to share fixtures across tests:
import pytest
# Session-scoped fixture (runs once per test session)
@pytest.fixture(scope="session")
def database():
    """Provide database connection for entire test session"""
    db = create_database_connection()
    yield db
    db.close()

# Module-scoped fixture (runs once per test module)
@pytest.fixture(scope="module")
def api_client():
    """Provide API client for test module"""
    client = APIClient()
    yield client
    client.cleanup()

# Function-scoped fixture (default, runs for each test)
@pytest.fixture
def sample_data():
    """Provide sample data for testing"""
    return {"id": 1, "name": "Test User"}

# Autouse fixture (automatically used by all tests)
@pytest.fixture(autouse=True)
def reset_state():
    """Reset application state before each test"""
    clear_cache()
    yield
    cleanup_resources()

# Parametrized fixture
@pytest.fixture(params=["sqlite", "postgres", "mysql"])
def db_type(request):
    """Test with multiple database types"""
    return request.param

# Configure pytest hooks
def pytest_configure(config):
    """Add custom configuration"""
    config.addinivalue_line(
        "markers", "custom: custom marker description"
    )
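Tests anywhere below this `conftest.py` can request the fixtures simply by naming them as parameters. A brief sketch of how the fixtures defined above might be consumed (the test names are illustrative):

```python
# test_example.py - fixtures are injected by parameter name
def test_user_has_name(sample_data):
    assert sample_data["name"] == "Test User"


def test_runs_once_per_database(db_type):
    # db_type is a parametrized fixture, so this test runs three times:
    # once each for "sqlite", "postgres", and "mysql".
    assert db_type in {"sqlite", "postgres", "mysql"}
```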
Common Use Cases
Use Case 1: Basic Unit Testing
# Create test file
cat > test_calculator.py << 'EOF'
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
EOF
# Run the tests
pytest test_calculator.py -v
Use Case 2: Testing with Fixtures
# Create test with fixtures
cat > test_user.py << 'EOF'
import pytest
@pytest.fixture
def user_data():
return {"username": "testuser", "email": "test@example.com"}
def test_user_creation(user_data):
assert user_data["username"] == "testuser"
assert "@" in user_data["email"]
EOF
# Run tests with fixture details
pytest test_user.py -v --setup-show
Use Case 3: Parametrized Testing
# Create parametrized tests
cat > test_math.py << 'EOF'
import pytest
@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(input, expected):
    assert input ** 2 == expected
EOF
# Run parametrized tests
pytest test_math.py -v
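Parametrization can also carry readable test IDs and per-case marks via `pytest.param`; a hedged sketch extending the example above:

```python
import pytest


@pytest.mark.parametrize(
    "value,expected",
    [
        pytest.param(2, 4, id="small"),
        pytest.param(10, 100, id="large"),
        # Deliberately wrong expectation, kept visible as an expected failure
        pytest.param(-3, -9, id="negative", marks=pytest.mark.xfail),
    ],
)
def test_square_with_ids(value, expected):
    assert value ** 2 == expected
```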
Use Case 4: Integration Testing with Markers
# Create tests with markers
cat > test_api.py << 'EOF'
import pytest
@pytest.mark.unit
def test_data_validation():
    assert True

@pytest.mark.integration
def test_api_endpoint():
    # Simulated API test
    assert True

@pytest.mark.slow
@pytest.mark.integration
def test_full_workflow():
    # Long-running test
    assert True
EOF
# Run only unit tests
pytest test_api.py -m unit -v
# Run integration tests excluding slow ones
pytest test_api.py -m "integration and not slow" -v
Use Case 5: Coverage Report Generation
# Run tests with coverage and generate reports
pytest --cov=myproject --cov-report=html --cov-report=term-missing
# View coverage report
# HTML report will be in htmlcov/index.html
# Run with coverage threshold
pytest --cov=myproject --cov-fail-under=80
# Generate coverage data that badge tools (e.g. coverage-badge) can read, plus terminal and HTML reports
pytest --cov=myproject --cov-report=term --cov-report=html
Best Practices
- **Use descriptive test names**: name tests with the `test_` prefix and describe what they verify (e.g., `test_user_registration_with_valid_email`)
- **Follow the AAA pattern**: structure tests into Arrange (set up), Act (execute), and Assert (verify) sections for clarity; see the sketch after this list
- **Use fixtures for setup/teardown**: lean on pytest fixtures instead of setup/teardown methods for better reusability and dependency injection
- **Mark tests appropriately**: use markers (`@pytest.mark.slow`, `@pytest.mark.integration`) to categorize tests and enable selective execution
- **Keep tests isolated**: each test should be independent and not rely on the state of other tests; use fixtures with appropriate scopes
- **Use parametrize for similar tests**: instead of writing multiple near-identical tests, use `@pytest.mark.parametrize` to cover multiple inputs
- **Configure pytest.ini or pyproject.toml**: set defaults for test discovery, markers, and command-line options in configuration files
- **Write focused assertions**: use simple, clear assertions; pytest's introspection automatically shows detailed failure information
- **Use conftest.py for shared fixtures**: place reusable fixtures in `conftest.py` files at the appropriate directory levels
- **Run tests frequently**: run tests during development with `pytest -x` to stop on the first failure for faster feedback
- **Monitor test coverage**: review coverage reports regularly and aim for 80%+ coverage, but favor meaningful tests over chasing a percentage
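As an illustration of the Arrange-Act-Assert pattern mentioned above, here is a minimal sketch; `ShoppingCart` is a hypothetical class defined inline only so the example is self-contained:

```python
class ShoppingCart:
    """Hypothetical class under test, included so the example runs on its own."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total_sums_item_prices():
    # Arrange: build the object and data the test needs
    cart = ShoppingCart()
    cart.add("apple", 1.50)
    cart.add("bread", 2.25)

    # Act: perform the single behaviour being verified
    total = cart.total()

    # Assert: one clear check; pytest reports the values on failure
    assert total == 3.75
```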
Troubleshooting
| Issue | Solution |
|---|---|
| Tests not discovered | Ensure files match the patterns `test_*.py` or `*_test.py`, functions start with `test_`, and classes start with `Test` |
| Import errors in tests | Add empty `__init__.py` files in test directories, or install the package in editable mode: `pip install -e .` |
| Fixture not found | Check the fixture is defined in the same file or in `conftest.py`, verify the correct scope, ensure the fixture name matches the parameter |
| Tests pass locally but fail in CI | Check for environment-specific dependencies, ensure consistent Python versions, verify all dependencies are listed in `requirements.txt` |
| Slow test execution | Use `--durations=10` to identify slow tests, consider parallel execution with `pytest-xdist`, mark slow tests with `@pytest.mark.slow` |
| Coverage not working | Install pytest-cov: `pip install pytest-cov`, ensure the source path is correct: `--cov=myproject`, check the `.coveragerc` configuration |
| Markers not recognized | Register markers in `pytest.ini` or `pyproject.toml` under `markers`, use `--strict-markers` to catch typos |
| PDB not working with capture | Use `-s` to disable output capturing, or use `--pdb` instead of manual `pdb.set_trace()` calls |
| Fixtures running in wrong order | Check fixture scope (session > module > class > function), use `autouse=True` carefully, review the dependency chain |
| Parallel tests failing | Ensure tests are isolated and don't share state, check for race conditions, use proper locking for shared resources |
| Memory leaks in tests | Use `yield` fixtures for proper cleanup, ensure fixtures yield and clean up properly, check for circular references |
| Deprecation warnings cluttering output | Configure warning filters in pytest.ini: `filterwarnings = ignore::DeprecationWarning`, or use the `-W` flag: `pytest -W ignore::UserWarning` |
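For the isolation issues above (parallel tests failing, shared state), pytest's built-in `tmp_path` and `monkeypatch` fixtures give each test its own scratch directory and automatically reverted environment changes; a minimal sketch:

```python
import json


def test_report_written_to_private_directory(tmp_path, monkeypatch):
    # tmp_path is a unique temporary directory per test, so parallel
    # workers never collide on the same files.
    report = tmp_path / "report.json"
    report.write_text(json.dumps({"passed": 3}))

    # monkeypatch reverts this environment change after the test,
    # so no state leaks into other tests.
    monkeypatch.setenv("REPORT_DIR", str(tmp_path))

    assert json.loads(report.read_text()) == {"passed": 3}
```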