Bandit Python Security Linter Cheat Sheet

Overview

Bandit is a security linter designed to find common security issues in Python code. It parses Python source files into an AST and scans them for known insecure patterns and anti-patterns. Bandit is widely used in DevSecOps pipelines to catch security issues early in the development process, making it an essential tool for secure Python development.

⚠️ Note: Bandit is designed for identifying potential security issues and should be used as part of a comprehensive security testing strategy. It may produce false positives and should be combined with other security testing methods.
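
As an illustration, the hypothetical snippet below shows two patterns Bandit typically flags; the test IDs in the comments are the usual ones, but exact IDs and messages vary by Bandit version.

# vulnerable_example.py (illustrative only)
import subprocess

password = "hunter2"  # typically flagged as a hardcoded password string (B105)

def list_directory(user_input):
    # typically flagged: subprocess call with shell=True on untrusted input (B602)
    subprocess.call("ls " + user_input, shell=True)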

Installation

Using pip

# Install Bandit
pip install bandit

# Install with TOML support (needed to read pyproject.toml configuration)
pip install bandit[toml]

# Install development version
pip install git+https://github.com/PyCQA/bandit.git

# Verify Installation
bandit --version

Using conda

# Install from conda-forge
conda install -c conda-forge bandit

# Create dedicated environment
conda create -n security-tools bandit
conda activate security-tools

Using package managers

# Ubuntu/Debian
sudo apt update
sudo apt install bandit

# CentOS/RHEL/Fedora
sudo dnf install bandit
# or
sudo yum install bandit

# macOS with Homebrew
brew install bandit

# Arch Linux
sudo pacman -S bandit

Docker Installation

# Pull a community-maintained Bandit image
docker pull securecodewarrior/bandit

# Run Bandit in container
docker run --rm -v $(pwd):/code securecodewarrior/bandit bandit -r /code

# Build custom image
cat > Dockerfile << 'EOF'
FROM python:3.9-slim
RUN pip install bandit
WORKDIR /app
ENTRYPOINT ["bandit"]
EOF

docker build -t custom-bandit .
docker run --rm -v $(pwd):/app custom-bandit -r .

Basic Usage

Simple Scans

# Scan a single file
bandit example.py

# Scan a directory recursively
bandit -r /path/to/project

# Scan current directory
bandit -r .

# Scan with verbose output
bandit -v -r .

# Scan specific files
bandit file1.py file2.py file3.py

# Scan with specific confidence level
bandit -r . -i   # Show low confidence and above (all issues)
bandit -r . -ii  # Show medium and high confidence issues
bandit -r . -iii # Show only high confidence issues

Output Formats

# JSON output
bandit -r . -f json

# XML output
bandit -r . -f xml

# CSV output
bandit -r . -f csv

# HTML output
bandit -r . -f html

# YAML output
bandit -r . -f yaml

# Custom output
bandit -r . -f custom --msg-template "{abspath}:{line}: {test_id}[bandit]: {severity}: {msg}"

# Save output to file
bandit -r . -f json -o bandit-report.json
bandit -r . -f html -o bandit-report.html
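
A quick way to post-process a saved JSON report is the standard json module; the field names used below (results, issue_severity, filename, line_number, test_id, issue_text) are those emitted by bandit -f json.

# summarize_report.py - print only high severity findings from a saved report
import json

with open('bandit-report.json') as f:
    report = json.load(f)

for issue in report.get('results', []):
    if issue.get('issue_severity') == 'HIGH':
        print(f"{issue['filename']}:{issue['line_number']}  "
              f"{issue['test_id']}  {issue['issue_text']}")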

Severity and Confidence Filtering

# Filter by severity (LOW, MEDIUM, HIGH)
bandit -r . -l  # Low severity and above
bandit -r . -ll # Medium severity and above
bandit -r . -lll # High severity only

# Filter by confidence (LOW, MEDIUM, HIGH)
bandit -r . -i   # Low confidence and above (all)
bandit -r . -ii  # Medium and high confidence
bandit -r . -iii # High confidence only

# Combine severity and confidence
bandit -r . -ll -ii # Medium+ severity, Medium+ confidence

Configuration

Configuration File (.bandit)

# .bandit configuration file
tests: ['B201', 'B301']
skips: ['B101', 'B601']

exclude_dirs: ['*/tests/*', '*/venv/*', '*/env/*']

# Severity levels: LOW, MEDIUM, HIGH
severity: MEDIUM

# Confidence levels: LOW, MEDIUM, HIGH
confidence: MEDIUM

# Output format
format: json

# Include line numbers
include_line_numbers: true

# Aggregate results
aggregate: vuln

pyproject.toml Configuration

[tool.bandit]
exclude_dirs = ["tests", "venv", ".venv", "env", ".env"]
tests = ["B201", "B301"]
skips = ["B101", "B601"]

[tool.bandit.assert_used]
skips = ['*_test.py', '*test_*.py']
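
Note: Bandit only reads the [tool.bandit] section when the file is passed explicitly, e.g. bandit -c pyproject.toml -r . (this requires the bandit[toml] extra).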

Command Line Configuration

# Exclude directories
bandit -r . --exclude /tests/,/venv/,/.venv/

# Skip specific tests
bandit -r . --skip B101,B601

# Run specific tests only
bandit -r . --tests B201,B301

# Exclude files by pattern
bandit -r . --exclude "*/migrations/*,*/settings/*"

# Include only specific file patterns
bandit -r . --include "*.py"

Advanced Usage

Custom Test Selection

# List all available tests (they are shown at the end of the help output)
bandit --help

# Run specific vulnerability tests
bandit -r . --tests B101  # assert_used
bandit -r . --tests B102  # exec_used
bandit -r . --tests B103  # set_bad_file_permissions
bandit -r . --tests B104  # hardcoded_bind_all_interfaces
bandit -r . --tests B105  # hardcoded_password_string

# Skip specific tests
bandit -r . --skip B101,B102,B103

# Test IDs are grouped by prefix (B1xx general checks, B2xx framework
# misconfiguration, B3xx/B4xx blacklisted calls and imports, B5xx crypto/TLS,
# B6xx injection, B7xx templating/XSS); pass the IDs you want explicitly
bandit -r . --tests B301,B302,B303

Baseline and Progressive Scanning

# Create baseline
bandit -r . -f json -o baseline.json

# Compare a new scan against the baseline (report only new issues)
bandit -r . -b baseline.json

# Equivalent long form
bandit -r . --baseline baseline.json

# Update baseline
bandit -r . -f json -o new-baseline.json

Integration with Git

# Pre-commit hook script
#!/bin/bash
# .git/hooks/pre-commit
bandit -r . -ll -ii
if [ $? -ne 0 ]; then
    echo "Bandit found security issues. Commit aborted."
    exit 1
fi

# Make executable
chmod +x .git/hooks/pre-commit

# Git hook with specific files
#!/bin/bash
# Check only modified Python files
git diff --cached --name-only --diff-filter=ACM | grep '\.py$' | xargs bandit -ll -ii

CI/CD Integration

GitHub Actions

# .github/workflows/security.yml
name: Security Scan

on: [push, pull_request]

jobs:
  bandit:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3

    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.9'

    - name: Install Bandit
      run: pip install bandit[toml]

    - name: Run Bandit
      run: bandit -r . -f json -o bandit-report.json

    - name: Upload results
      uses: actions/upload-artifact@v3
      with:
        name: bandit-report
        path: bandit-report.json

    - name: Bandit Report
      uses: tj-actions/bandit@v5.1
      with:
        options: "-r . -f json"
        exit_zero: true

GitLab CI

# .gitlab-ci.yml
stages:
  - security

bandit:
  stage: security
  image: python:3.9
  before_script:
    - pip install bandit[toml]
  script:
    - bandit -r . -f json -o bandit-report.json
  artifacts:
    reports:
      sast: bandit-report.json
    paths:
      - bandit-report.json
    expire_in: 1 week
  allow_failure: true

Jenkins Pipeline

// Jenkinsfile
pipeline {
    agent any

    stages {
        stage('Security Scan') {
            steps {
                script {
                    sh 'pip install bandit[toml]'
                    sh 'bandit -r . -f json -o bandit-report.json || true'
                    sh 'bandit -r . -f html -o bandit-report.html || true'
                }
            }
            post {
                always {
                    archiveArtifacts artifacts: 'bandit-report.json', fingerprint: true
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: '.',
                        reportFiles: 'bandit-report.html',
                        reportName: 'Bandit Security Report'
                    ])
                }
            }
        }
    }
}

Azure DevOps

# azure-pipelines.yml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'

- script: |
    pip install bandit[toml]
    bandit -r . -f json -o $(Agent.TempDirectory)/bandit-report.json
  displayName: 'Run Bandit Security Scan'

- task: PublishTestResults@2
  inputs:
    testResultsFiles: '$(Agent.TempDirectory)/bandit-report.json'
    testRunTitle: 'Bandit Security Scan'

Common Vulnerability Patterns

Hardcoded Passwords (B105, B106, B107)

# BAD: Hardcoded password
password = "secret123"
api_key = "abc123def456"

# GOOD: Environment variables
import os
password = os.environ.get('PASSWORD')
api_key = os.environ.get('API_KEY')

# GOOD: Configuration file
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
password = config.get('database', 'password')

SQL Injection (B608)

# BAD: String formatting
query = "SELECT * FROM users WHERE id = %s" % user_id
query = f"SELECT * FROM users WHERE id = \\{user_id\\}"

# GOOD: paramètreized queries
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))

Command Injection (B602, B605, B606, B607)

# BAD: Shell injection
import os
os.system(f"ls \\{user_input\\}")
os.popen(f"grep \\{pattern\\} \\{filename\\}")

# GOOD: Subprocessus with list
import subprocessus
subprocessus.run(['ls', user_input])
subprocessus.run(['grep', pattern, filename])

Insecure Random (B311)

# BAD: Predictable random
import random
token = random.randint(1000, 9999)

# GOOD: Cryptographically secure
import secrets
token = secrets.randbelow(9999)       # secure random int in [0, 9999)
secure_token = secrets.token_hex(16)  # 32-character hex token

Unsafe YAML Loading (B506)

# BAD: Unsafe YAML loading
import yaml
data = yaml.load(user_input)

# GOOD: Safe YAML loading
data = yaml.safe_load(user_input)
data = yaml.load(user_input, Loader=yaml.SafeLoader)

Custom Rules and Plugins

Creating Custom Tests

# custom_bandit_test.py
import bandit
from bandit.core import test_properties

@test_properties.test_id('B999')
@test_properties.checks('Call')
def custom_security_check(context):
    """Check for custom security pattern"""
    if context.call_function_name_qual == 'dangerous_function':
        return bandit.Issue(
            severity=bandit.HIGH,
            confidence=bandit.HIGH,
            text="Use of dangerous_function detected",
            lineno=context.node.lineno,
        )

Plugin Development

# Bandit discovers third-party tests through the 'bandit.plugins'
# setuptools entry-point group, so a custom test ships as a small
# installable package (package and module names below are illustrative).

# pyproject.toml of the plugin package
[project.entry-points."bandit.plugins"]
custom_security_check = "custom_bandit_test:custom_security_check"

# After the package is installed, Bandit loads the test automatically.

Using Custom Tests

# Once the plugin package is installed, run only the custom test by its ID
bandit -r . --tests B999

# Or run it alongside the built-in tests (default behaviour)
bandit -r .

Automation and Scripting

Automated Scanning Script

#!/usr/bin/env python3
# bandit_scanner.py

import subprocess
import json
import sys
import argparse
from pathlib import Path

class BanditScanner:
    def __init__(self, project_path, config_file=None):
        self.project_path = Path(project_path)
        self.config_file = config_file
        self.results = {}

    def run_scan(self, output_format='json', severity='MEDIUM', confidence='MEDIUM'):
        """Run Bandit scan with specified parameters"""
        cmd = [
            'bandit', '-r', str(self.project_path),
            '-f', output_format,
            f'-l{self._severity_to_flag(severity)}',
            f'-i{self._confidence_to_flag(confidence)}'
        ]

        if self.config_file:
            cmd.extend(['--configfile', self.config_file])

        try:
            result = subprocess.run(cmd, capture_output=True, text=True, check=False)

            if output_format == 'json':
                self.results = json.loads(result.stdout) if result.stdout else {}
            else:
                self.results = result.stdout

            return result.returncode in (0, 1)  # bandit exits 1 when issues are found

        except subprocess.CalledProcessError as e:
            print(f"Error running Bandit: {e}")
            return False
        except json.JSONDecodeError as e:
            print(f"Error parsing JSON output: {e}")
            return False

    def _severity_to_flag(self, severity):
        """Convert severity to extra characters appended to -l"""
        mapping = {'LOW': '', 'MEDIUM': 'l', 'HIGH': 'll'}
        return mapping.get(severity.upper(), 'l')

    def _confidence_to_flag(self, confidence):
        """Convert confidence to extra characters appended to -i"""
        # -i = low and above, -ii = medium and above, -iii = high only
        mapping = {'LOW': '', 'MEDIUM': 'i', 'HIGH': 'ii'}
        return mapping.get(confidence.upper(), 'i')

    def get_summary(self):
        """Get scan summary"""
        if not isinstance(self.results, dict):
            return "No results available"

        metrics = self.results.get('metrics', {})
        return {
            'total_lines': metrics.get('_totals', {}).get('loc', 0),
            'total_issues': len(self.results.get('results', [])),
            'high_severity': len([r for r in self.results.get('results', [])
                                  if r.get('issue_severity') == 'HIGH']),
            'medium_severity': len([r for r in self.results.get('results', [])
                                    if r.get('issue_severity') == 'MEDIUM']),
            'low_severity': len([r for r in self.results.get('results', [])
                                 if r.get('issue_severity') == 'LOW'])
        }

    def get_issues_by_severity(self, severity='HIGH'):
        """Get issues filtered by severity"""
        if not isinstance(self.results, dict):
            return []

        return [issue for issue in self.results.get('results', [])
                if issue.get('issue_severity') == severity.upper()]

    def generate_report(self, output_file='bandit_report.html'):
        """Generate HTML report"""
        cmd = [
            'bandit', '-r', str(self.project_path),
            '-f', 'html', '-o', output_file
        ]

        if self.config_file:
            cmd.extend(['--configfile', self.config_file])

        try:
            subprocess.run(cmd, check=True)
            return True
        except subprocess.CalledProcessError:
            return False

    def save_results(self, output_file='bandit_results.json'):
        """Save results to file"""
        if isinstance(self.results, dict):
            with open(output_file, 'w') as f:
                json.dump(self.results, f, indent=2)
        else:
            with open(output_file, 'w') as f:
                f.write(str(self.results))

def main():
    parser = argparse.ArgumentParser(description='Automated Bandit Scanner')
    parser.add_argument('project_path', help='Path to project to scan')
    parser.add_argument('--config', help='Bandit configuration file')
    parser.add_argument('--severity', default='MEDIUM',
                       choices=['LOW', 'MEDIUM', 'HIGH'],
                       help='Minimum severity level')
    parser.add_argument('--confidence', default='MEDIUM',
                       choices=['LOW', 'MEDIUM', 'HIGH'],
                       help='Minimum confidence level')
    parser.add_argument('--output', help='Output file for results')
    parser.add_argument('--report', help='Generate HTML report')

    args = parser.parse_args()

    scanner = BanditScanner(args.project_path, args.config)

    print(f"Scanning \\{args.project_path\\}...")
    success = scanner.run_scan(severity=args.severity, confidence=args.confidence)

    if success:
        summary = scanner.get_summary()
        print(f"Scan completed successfully!")
        print(f"Total lines of code: \\{summary['total_lines']\\}")
        print(f"Total issues found: \\{summary['total_issues']\\}")
        print(f"High severity: \\{summary['high_severity']\\}")
        print(f"Medium severity: \\{summary['medium_severity']\\}")
        print(f"Low severity: \\{summary['low_severity']\\}")

        if args.output:
            scanner.save_results(args.output)
            print(f"Results saved to \\{args.output\\}")

        if args.report:
            if scanner.generate_report(args.report):
                print(f"HTML report generated: \\{args.report\\}")
            else:
                print("Failed to generate HTML report")

        # Exit with error code if high severity issues found
        if summary['high_severity'] > 0:
            print("High severity issues found!")
            sys.exit(1)
    else:
        print("Scan failed!")
        sys.exit(1)

if __name__ == '__main__':
    main()
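
Example invocation, assuming the script above is saved as bandit_scanner.py: python bandit_scanner.py /path/to/project --severity HIGH --output results.json --report report.html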

Batch Processing Script

#!/bin/bash
# batch_bandit_scan.sh

# Configuration
PROJECTS_DIR="/path/to/projects"
REPORTS_DIR="/path/to/reports"
DATE=$(date +%Y%m%d_%H%M%S)

# Create reports directory
mkdir -p "$REPORTS_DIR"

# Function to scan a project
scan_project() {
    local project_path="$1"
    local project_name=$(basename "$project_path")
    local report_file="$REPORTS_DIR/${project_name}_${DATE}.json"
    local html_report="$REPORTS_DIR/${project_name}_${DATE}.html"

    echo "Scanning $project_name..."

    # Run Bandit scans (JSON and HTML reports)
    bandit -r "$project_path" -f json -o "$report_file" -ll -ii
    bandit -r "$project_path" -f html -o "$html_report" -ll -ii

    # Check for high severity issues
    high_issues=$(jq '.results | map(select(.issue_severity == "HIGH")) | length' "$report_file")

    if [ "$high_issues" -gt 0 ]; then
        echo "WARNING: $project_name has $high_issues high severity issues!"
        echo "$project_name" >> "$REPORTS_DIR/high_severity_projects.txt"
    fi

    echo "Scan completed for $project_name"
}

# Scan every directory that contains Python files
find "$PROJECTS_DIR" -name "*.py" -type f | while read -r file; do
    project_dir=$(dirname "$file")
    if [ ! -f "$project_dir/.bandit_scanned" ]; then
        scan_project "$project_dir"
        touch "$project_dir/.bandit_scanned"
    fi
done

echo "Batch scanning completed. Reports saved to $REPORTS_DIR"

Integration with IDEs

VS Code Integration

// .vscode/settings.json
{
    "python.linting.banditEnabled": true,
    "python.linting.banditArgs": [
        "--severity-level", "medium",
        "--confidence-level", "medium"
    ],
    "python.linting.enabled": true
}

PyCharm Integration

# External tool configuration
# Program: bandit
# Arguments: -r $FileDir$ -f json
# Working directory: $ProjectFileDir$

Vim/Neovim Integration

" .vimrc or init.vim
" Bandit integration with ALE
let g:ale_linters = {
\   'python': ['bandit', 'flake8', 'pylint'],
\}

let g:ale_python_bandit_options = '-ll -ii'

Best Practices

Configuration Management

# .bandit - Comprehensive configuration
tests: ['B101', 'B102', 'B103', 'B104', 'B105', 'B106', 'B107', 'B108', 'B110', 'B112', 'B201', 'B301', 'B302', 'B303', 'B304', 'B305', 'B306', 'B307', 'B308', 'B309', 'B310', 'B311', 'B312', 'B313', 'B314', 'B315', 'B316', 'B317', 'B318', 'B319', 'B320', 'B321', 'B322', 'B323', 'B324', 'B325', 'B401', 'B402', 'B403', 'B404', 'B405', 'B406', 'B407', 'B408', 'B409', 'B410', 'B411', 'B412', 'B413', 'B501', 'B502', 'B503', 'B504', 'B505', 'B506', 'B507', 'B601', 'B602', 'B603', 'B604', 'B605', 'B606', 'B607', 'B608', 'B609', 'B610', 'B611', 'B701', 'B702', 'B703']

skips: ['B101']  # Skip assert_used in test files

exclude_dirs: [
    '*/tests/*',
    '*/test/*',
    '*/.venv/*',
    '*/venv/*',
    '*/.env/*',
    '*/env/*',
    '*/migrations/*',
    '*/node_modules/*',
    '*/.git/*'
]

# Severity: LOW, MEDIUM, HIGH
severity: MEDIUM

# Confidence: LOW, MEDIUM, HIGH
confidence: MEDIUM

False Positive Management

# Inline comments to suppress warnings (nosec must be on the flagged line)
password = "default"  # nosec B105

# Suppress a specific test
import subprocess
subprocess.call(shell_command, shell=True)  # nosec B602

# Suppress multiple tests
eval(user_input)  # nosec B307,B102

Team Workflow Integration

# Pre-commit configuration (.pre-commit-config.yaml)
repos:
  - repo: https://github.com/PyCQA/bandit
    rev: '1.7.5'
    hooks:
      - id: bandit
        args: ['-ll', '-ii']
        exclude: ^tests/

# Makefile integration
.PHONY: security-scan
security-scan:
    bandit -r . -ll -ii -f json -o security-report.json
    @echo "Security scan completed. Check security-report.json for results."

.PHONY: security-check
security-check:
    @bandit -r . -ll -ii || { \
        echo "Security issues found. Please review and fix."; \
        exit 1; \
    }

Troubleshooting

Common Issues

# Issue: ImportError when running Bandit
# Solution: Ensure proper Python environment
python -m pip install --upgrade bandit

# Issue: configuration not being read
# Solution: Verify configuration file location and syntax
bandit --help-config

# Issue: Too many false positives
# Solution: Tune configuration and use suppressions
bandit -r . --skip B101,B601 -ll -ii

# Issue: Performance issues with large codebases
# Solution: Exclude unnecessary directories
bandit -r . --exclude "*/venv/*,*/node_modules/*,*/.git/*"

# Issue: Integration with CI/CD failing
# Solution: Use appropriate exit codes and error handling
bandit -r . -ll -ii || true  # Continue on errors

Performance Optimization

# Parallel processing (if available)
bandit -r . --processes 4

# Exclude large directories
bandit -r . --exclude "*/venv/*,*/env/*,*/node_modules/*,*/.git/*,*/migrations/*"

# Use specific tests only
bandit -r . --tests B201,B301,B401,B501

# Limit recursion depth
find . -name "*.py" -not -path "*/venv/*" | head -100 | xargs bandit

Debugging

# Verbose output
bandit -v -r .

# Debug mode
bandit -d -r .

# Show skipped files
bandit -r . --verbose

# Test specific file with all details
bandit -v -ll -iii specific_file.py

Resources

- Bandit documentation: https://bandit.readthedocs.io/
- Bandit source and issue tracker: https://github.com/PyCQA/bandit

This cheat sheet provides comprehensive guidance for using Bandit to identify security vulnerabilities in Python code. Always combine static analysis with other security testing methods for comprehensive coverage.
