
Docker Bench Cheat Sheet

Overview

Docker Bench for Security is an open-source script that automatically checks for dozens of common best practices around deploying Docker containers in production. Developed and maintained by Docker, Inc., the tool implements the security recommendations of the CIS Docker Benchmark, providing a comprehensive security assessment framework for Docker installations. It performs automated audits of the host configuration, the Docker daemon and its configuration files, container images and build files, container runtime settings, and Docker security operations, making it a staple for DevSecOps teams and security professionals working with containerized environments.

The core functionality of Docker Bench centers on its ability to perform over 100 automated security checks covering host configuration hardening, Docker daemon security settings, container image security practices, container runtime security configuration, and Dockerfile best practices. Each check maps to a specific CIS Docker Benchmark recommendation, giving clear guidance on security improvements and compliance requirements. The tool generates detailed reports that categorize findings by severity, making it easy for security teams to prioritize remediation and track security posture over time.

Docker Bench's strength lies in its comprehensive coverage of Docker security domains and its ability to integrate into CI/CD pipelines for continuous security monitoring. The tool supports multiple output formats, including human-readable text reports and JSON for programmatic processing, and can feed security orchestration platforms. Its lightweight design and minimal dependencies make it suitable for deployment across environments, from development workstations to production Kubernetes clusters, enabling organizations to maintain consistent security standards throughout the container lifecycle.

Installation

Direct Download and Execution

Installing and running Docker Bench directly:

# Download and run Docker Bench
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo ./docker-bench-security.sh

# Alternative: Download specific version
wget https://github.com/docker/docker-bench-security/archive/v1.5.0.tar.gz
tar -xzf v1.5.0.tar.gz
cd docker-bench-security-1.5.0
sudo ./docker-bench-security.sh

# Make executable and run
chmod +x docker-bench-security.sh
sudo ./docker-bench-security.sh

# Run with specific options
sudo ./docker-bench-security.sh -l /var/log/docker-bench.log

Docker Container Execution

Running Docker Bench as a container:

# Run Docker Bench container
docker run --rm --net host --pid host --userns host --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /etc:/etc:ro \
    -v /usr/bin/containerd:/usr/bin/containerd:ro \
    -v /usr/bin/runc:/usr/bin/runc:ro \
    -v /usr/lib/systemd:/usr/lib/systemd:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    --label docker_bench_security \
    docker/docker-bench-security

# Run with custom configuration
docker run --rm --net host --pid host --userns host --cap-add audit_control \
    -v /path/to/custom/config:/usr/local/bin/docker-bench-security/config \
    -v /etc:/etc:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    docker/docker-bench-security

# Run with output to file
docker run --rm --net host --pid host --userns host --cap-add audit_control \
    -v /etc:/etc:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v $(pwd):/tmp \
    docker/docker-bench-security -l /tmp/docker-bench-report.log
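
A quick first triage of the saved log is possible with grep, since Docker Bench tags each finding with a level such as [INFO], [PASS], or [WARN] in its text output (a minimal sketch; the report filename matches the run above):

# Count and preview findings in the saved report
grep -c "\[WARN\]" docker-bench-report.log            # number of warnings
grep -c "\[PASS\]" docker-bench-report.log            # number of passing checks
grep "\[WARN\]" docker-bench-report.log | head -n 10  # first findings to review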

Kubernetes Deployment

Deploying Docker Bench in Kubernetes:

# docker-bench-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: docker-bench-security
  namespace: security
spec:
  template:
    spec:
      hostPID: true
      hostNetwork: true
      serviceAccountName: docker-bench-security
      containers:
      - name: docker-bench-security
        image: docker/docker-bench-security
        command: ["./docker-bench-security.sh"]
        args: ["-l", "/tmp/docker-bench-report.log", "-j"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: etc
          mountPath: /etc
          readOnly: true
        - name: var-lib
          mountPath: /var/lib
          readOnly: true
        - name: usr-bin-containerd
          mountPath: /usr/bin/containerd
          readOnly: true
        - name: usr-bin-runc
          mountPath: /usr/bin/runc
          readOnly: true
        - name: output
          mountPath: /tmp
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: etc
        hostPath:
          path: /etc
      - name: var-lib
        hostPath:
          path: /var/lib
      - name: usr-bin-containerd
        hostPath:
          path: /usr/bin/containerd
      - name: usr-bin-runc
        hostPath:
          path: /usr/bin/runc
      - name: output
        hostPath:
          path: /tmp/docker-bench-output
      restartPolicy: Never
  backoffLimit: 1

---
# ServiceAccount and RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-bench-security
  namespace: security

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: docker-bench-security
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: docker-bench-security
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: docker-bench-security
subjects:
- kind: ServiceAccount
  name: docker-bench-security
  namespace: security
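
A minimal sketch of applying these manifests and collecting the Job's output with kubectl (namespace and resource names match the YAML above):

# Create the namespace if needed, apply the manifests, and wait for the scan Job to finish
kubectl create namespace security --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f docker-bench-job.yaml
kubectl -n security wait --for=condition=complete job/docker-bench-security --timeout=300s

# Read the scan output from the Job's pod
kubectl -n security logs job/docker-bench-security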

Custom Installation

# Create custom Docker Bench installation
mkdir -p /opt/docker-bench-security
cd /opt/docker-bench-security

# Download latest version
curl -L https://github.com/docker/docker-bench-security/archive/master.tar.gz | tar -xz --strip-components=1

# Create wrapper script
cat > /usr/local/bin/docker-bench << 'EOF'
#!/bin/bash
cd /opt/docker-bench-security
sudo ./docker-bench-security.sh "$@"
EOF

chmod +x /usr/local/bin/docker-bench

# Test the installation
docker-bench --help
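
With the wrapper in place, a scheduled audit can be a single cron entry (the schedule and log path below are illustrative):

# Nightly audit at 02:00 via the wrapper script
sudo mkdir -p /var/log/docker-bench
echo '0 2 * * * root /usr/local/bin/docker-bench -l /var/log/docker-bench/nightly.log' | sudo tee /etc/cron.d/docker-bench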

Basic Usage

Standard Security Audit

Running basic Docker security audits:

# Run complete security audit
sudo ./docker-bench-security.sh

# Run with verbose output
sudo ./docker-bench-security.sh -v

# Run with specific log file
sudo ./docker-bench-security.sh -l /var/log/docker-bench-$(date +%Y%m%d).log

# Run with JSON output
sudo ./docker-bench-security.sh -j

# Run with both log and JSON output
sudo ./docker-bench-security.sh -l docker-bench.log -j

# Run specific test sections
sudo ./docker-bench-security.sh -c host_configuration
sudo ./docker-bench-security.sh -c docker_daemon_configuration
sudo ./docker-bench-security.sh -c container_images

Selective Testing

Running specific security checks:

# Run only host configuration checks
sudo ./docker-bench-security.sh -c host_configuration

# Run only Docker daemon checks
sudo ./docker-bench-security.sh -c docker_daemon_configuration

# Run only container runtime checks
sudo ./docker-bench-security.sh -c container_runtime

# Run only Docker security operations checks
sudo ./docker-bench-security.sh -c docker_security_operations

# Run only container image checks
sudo ./docker-bench-security.sh -c container_images

# Run only Docker daemon configuration file checks
sudo ./docker-bench-security.sh -c docker_daemon_configuration_files

# Skip specific checks
sudo ./docker-bench-security.sh -e check_2_1,check_2_2

# Include only specific checks
sudo ./docker-bench-security.sh -i check_4_1,check_4_2,check_4_3
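
The check IDs accepted by -c, -e, and -i are defined in the scripts under the repository's tests/ directory; assuming that standard layout, they can be listed with:

# Enumerate available check IDs from the cloned repository
grep -ho 'check_[0-9][0-9_]*' tests/*.sh | sort -u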

Output Formatting

Customizing output formats:

# Generate JSON report
sudo ./docker-bench-security.sh -j > docker-bench-report.json

# Generate log file with timestamp
sudo ./docker-bench-security.sh -l "docker-bench-$(date +%Y%m%d-%H%M%S).log"

# Generate both console and file output
sudo ./docker-bench-security.sh -l docker-bench.log | tee console-output.txt

# Generate quiet output (only failures)
sudo ./docker-bench-security.sh -q

# Generate summary only
sudo ./docker-bench-security.sh -s

# Custom output directory
mkdir -p /var/log/docker-bench
sudo ./docker-bench-security.sh -l /var/log/docker-bench/audit-$(date +%Y%m%d).log
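
To track drift between audits, the warning counts of two saved logs can be compared directly (a minimal sketch; file names are illustrative and assume the [WARN] tag used in the text output):

# Compare warning counts between two audit logs
OLD=$(grep -c "\[WARN\]" /var/log/docker-bench/audit-20250101.log)
NEW=$(grep -c "\[WARN\]" /var/log/docker-bench/audit-20250201.log)
echo "Warnings changed from $OLD to $NEW"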

Advanced Features

Custom Configuration

Creating custom Docker Bench configurations:

# Create custom configuration directory
mkdir -p ~/.docker-bench-security

# Create custom test exclusions
cat > ~/.docker-bench-security/excluded_checks << 'EOF'
# Exclude specific checks that don't apply to our environment
check_2_1    # Restrict network traffic between containers
check_2_8    # Enable user namespace support
check_4_6    # Add HEALTHCHECK instruction to container image
EOF

# Create custom included checks
cat > ~/.docker-bench-security/included_checks << 'EOF'
# Include only critical security checks
check_1_1_1  # Ensure a separate partition for containers has been created
check_1_1_2  # Ensure only trusted users are allowed to control Docker daemon
check_2_1    # Restrict network traffic between containers
check_2_2    # Set the logging level
check_2_3    # Allow Docker to make changes to iptables
EOF

# Run with custom configuration (strip trailing comments and join the check IDs with commas)
sudo ./docker-bench-security.sh -e "$(grep -v '^#' ~/.docker-bench-security/excluded_checks | awk 'NF {print $1}' | paste -sd, -)"

Integration with CI/CD

Integrating Docker Bench into CI/CD pipelines:

# .gitlab-ci.yml
docker_security_scan:
  stage: security
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker info
  script:
    - |
      docker run --rm --net host --pid host --userns host --cap-add audit_control \
        -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
        -v /etc:/etc:ro \
        -v /usr/bin/containerd:/usr/bin/containerd:ro \
        -v /usr/bin/runc:/usr/bin/runc:ro \
        -v /usr/lib/systemd:/usr/lib/systemd:ro \
        -v /var/lib:/var/lib:ro \
        -v /var/run/docker.sock:/var/run/docker.sock:ro \
        --label docker_bench_security \
        docker/docker-bench-security -j > docker-bench-report.json
    - |
      # Parse results and fail if critical issues are found
      CRITICAL_ISSUES=$(jq '[.tests[] | select(.result == "FAIL" and .severity == "CRITICAL")] | length' docker-bench-report.json)
      if [ "$CRITICAL_ISSUES" -gt 0 ]; then
        echo "Critical security issues found: $CRITICAL_ISSUES"
        exit 1
      fi
  artifacts:
    reports:
      junit: docker-bench-report.json
    paths:
      - docker-bench-report.json
    expire_in: 1 week
  only:
    - master
    - develop

# GitHub Actions workflow
name: Docker Security Scan
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  docker-bench-security:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v3

    - name: Run Docker Bench Security
      run: |
        docker run --rm --net host --pid host --userns host --cap-add audit_control \
          -v /etc:/etc:ro \
          -v /var/lib:/var/lib:ro \
          -v /var/run/docker.sock:/var/run/docker.sock:ro \
          -v ${{ github.workspace }}:/tmp \
          docker/docker-bench-security -j -l /tmp/docker-bench-report.json

    - name: Parse Security Results
      run: |
        # Check for critical failures
        CRITICAL_FAILS=$(jq '[.tests[] | select(.result == "FAIL" and .severity == "CRITICAL")] | length' docker-bench-report.json)
        HIGH_FAILS=$(jq '[.tests[] | select(.result == "FAIL" and .severity == "HIGH")] | length' docker-bench-report.json)

        echo "Critical failures: $CRITICAL_FAILS"
        echo "High severity failures: $HIGH_FAILS"

        # Fail build if critical issues found
        if [ "$CRITICAL_FAILS" -gt 0 ]; then
          echo ": :error::Critical security issues found"
          exit 1
        fi

        # Warning for high severity issues
        if [ "$HIGH_FAILS" -gt 0 ]; then
          echo ": :warning::High severity security issues found"
        fi

    - name: Upload Security Report
      uses: actions/upload-artifact@v3
      with:
        name: docker-bench-security-report
        path: docker-bench-report.json
        retention-days: 30

Automated Remediation

Creating automated remediation scripts:

#!/bin/bash
# docker-bench-remediation.sh - Automated remediation for common Docker security issues

# Function to remediate specific Docker Bench findings
remediate_docker_security() {
    local check_id="$1"
    local description="$2"

    echo "Remediating: $check_id - $description"

    # Ensure the daemon config exists so the jq merges below have something to edit
    sudo mkdir -p /etc/docker
    [ -f /etc/docker/daemon.json ] || echo '{}' | sudo tee /etc/docker/daemon.json > /dev/null

    case "$check_id" in
        "check_2_2")
            # Set Docker demonio logging level
            echo "Setting Docker demonio logging level to info"
            sudo mkdir -p /etc/docker
            echo '\\\\{"log-level": "info"\\\\}'|sudo tee /etc/docker/demonio.json
            sudo systemctl restart docker
            ;;

        "check_2_5")
            # Disable legacy registry
            echo "Disabling legacy registry (v1)"
            sudo mkdir -p /etc/docker
            jq '. + \\\\{"disable-legacy-registry": true\\\\}' /etc/docker/demonio.json|sudo tee /etc/docker/demonio.json.tmp
            sudo mv /etc/docker/demonio.json.tmp /etc/docker/demonio.json
            sudo systemctl restart docker
            ;;

        "check_2_8")
            # Enable user namespace suppuerto
            echo "Enabling user namespace suppuerto"
            sudo mkdir -p /etc/docker
            jq '. + \\\\{"userns-remap": "default"\\\\}' /etc/docker/demonio.json|sudo tee /etc/docker/demonio.json.tmp
            sudo mv /etc/docker/demonio.json.tmp /etc/docker/demonio.json
            sudo systemctl restart docker
            ;;

        "check_2_11")
            # Enable Docker Content Trust
            echo "Enabling Docker Content Trust"
            echo 'expuerto DOCKER_CONTENT_TRUST=1'|sudo tee -a /etc/environment
            expuerto DOCKER_CONTENT_TRUST=1
            ;;

        "check_2_13")
            # Configure centralized and remote logging
            echo "Configuring centralized logging"
            sudo mkdir -p /etc/docker
            jq '. + \\\\{"log-driver": "syslog", "log-opts": \\\\{"syslog-address": "tcp://localhost:514"\\\\}\\\\}' /etc/docker/demonio.json|sudo tee /etc/docker/demonio.json.tmp
            sudo mv /etc/docker/demonio.json.tmp /etc/docker/demonio.json
            sudo systemctl restart docker
            ;;

        "check_2_14")
            # Disable operations on legacy registry
            echo "Disabling operations on legacy registry"
            sudo mkdir -p /etc/docker
            jq '. + \\\\{"disable-legacy-registry": true\\\\}' /etc/docker/demonio.json|sudo tee /etc/docker/demonio.json.tmp
            sudo mv /etc/docker/demonio.json.tmp /etc/docker/demonio.json
            sudo systemctl restart docker
            ;;

        *)
            echo "No automated remediation available for $check_id"
            ;;
    esac
\\\\}

# Run Docker Bench and parse results
echo "Running Docker Bench Security scan..."
sudo ./docker-bench-security.sh -j > docker-bench-results.json

# Parse failed checks and attempt remediation
echo "Parsing results and attempting remediation..."
jq -r '.tests[] | select(.result == "FAIL") | "\(.id)|\(.desc)"' docker-bench-results.json | while IFS='|' read -r check_id description; do
    remediate_docker_security "$check_id" "$description"
done

# Re-run Docker Bench to verify improvements
echo "Re-running Docker Bench to verify improvements..."
sudo ./docker-bench-security.sh -j > docker-bench-results-after.json

# Compare results
echo "Comparing before and after results..."
BEFORE_FAILS=$(jq '[.tests[] | select(.result == "FAIL")] | length' docker-bench-results.json)
AFTER_FAILS=$(jq '[.tests[] | select(.result == "FAIL")] | length' docker-bench-results-after.json)

echo "Failed checks before remediation: $BEFORE_FAILS"
echo "Failed checks after remediation: $AFTER_FAILS"
echo "Improvements: $((BEFORE_FAILS - AFTER_FAILS))"

Automation Scripts

Comprehensive Security Monitoring

#!/usr/bin/env python3
# Comprehensive Docker security monitoring with Docker Bench

import subprocess
import json
import os
import smtplib
from datetime import datetime, timedelta
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import logging

class DockerBenchMonitoring:
    def __init__(self, config_file="docker-bench-config.json"):
        self.config_file = config_file
        self.load_config()
        self.setup_logging()

    def load_config(self):
        """Load monitoring configuración"""
        try:
            with open(self.config_file, 'r') as f:
                self.config = json.load(f)
        except FileNotFoundError:
            # Default configuration
            self.config = {
                "monitoring": {
                    "interval_hours": 24,
                    "severity_threshold": "HIGH",
                    "max_failures": 5
                },
                "notifications": {
                    "email": {
                        "enabled": False,
                        "smtp_server": "localhost",
                        "smtp_port": 587,
                        "username": "",
                        "password": "",
                        "from": "docker-bench@example.com",
                        "to": "security@example.com"
                    },
                    "webhook": {
                        "enabled": False,
                        "url": "",
                        "headers": {}
                    }
                },
                "remediation": {
                    "auto_remediate": False,
                    "allowed_checks": []
                }
            }

    def setup_logging(self):
        """Setup logging configuración"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('docker-bench-monitoring.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)

    def run_docker_bench(self, output_file=None):
        """Run Docker Bench Security scan"""
        if not output_file:
            output_file = f"docker-bench-\\\\{datetime.now().strftime('%Y%m%d-%H%M%S')\\\\}.json"

        self.logger.info("Running Docker Bench Security scan...")

        try:
            # Run Docker Bench as container
            cmd = [
                "docker", "run", "--rm", "--net", "host", "--pid", "host",
                "--userns", "host", "--cap-add", "audit_control",
                "-v", "/etc:/etc:ro",
                "-v", "/usr/bin/containerd:/usr/bin/containerd:ro",
                "-v", "/usr/bin/runc:/usr/bin/runc:ro",
                "-v", "/usr/lib/systemd:/usr/lib/systemd:ro",
                "-v", "/var/lib:/var/lib:ro",
                "-v", "/var/run/docker.sock:/var/run/docker.sock:ro",
                "--label", "docker_bench_security",
                "docker/docker-bench-security", "-j"
            ]

            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)

            if result.returncode == 0:
                # Parse JSON output
                scan_results = json.loads(result.stdout)

                # Save results to file
                with open(output_file, 'w') as f:
                    json.dump(scan_results, f, indent=2)

                self.logger.info(f"Docker Bench scan completed. Results saved to {output_file}")
                return scan_results, output_file
            else:
                self.logger.error(f"Docker Bench scan failed: {result.stderr}")
                return None, None

        except subprocess.TimeoutExpired:
            self.logger.error("Docker Bench scan timed out")
            return None, None
        except json.JSONDecodeError as e:
            self.logger.error(f"Failed to parse Docker Bench output: {e}")
            return None, None
        except Exception as e:
            self.logger.error(f"Error running Docker Bench: {e}")
            return None, None

    def analyze_results(self, scan_results):
        """Analyze Docker Bench results"""
        if not scan_results:
            return None

        analysis = {
            "timestamp": datetime.now().isoformat(),
            "total_checks": len(scan_results.get("tests", [])),
            "passed": 0,
            "failed": 0,
            "warnings": 0,
            "info": 0,
            "critical_failures": [],
            "high_failures": [],
            "medium_failures": [],
            "summary": {}
        }

        # Analyze each test result
        for test in scan_results.get("tests", []):
            result = test.get("result", "").upper()
            severity = test.get("severity", "INFO").upper()

            if result == "PASS":
                analysis["passed"] += 1
            elif result == "FAIL":
                analysis["failed"] += 1

                # Categorize by severity
                if severity == "CRITICAL":
                    analysis["critical_failures"].append(test)
                elif severity == "HIGH":
                    analysis["high_failures"].append(test)
                elif severity == "MEDIUM":
                    analysis["medium_failures"].append(test)
            elif result == "WARN":
                analysis["warnings"] += 1
            else:
                analysis["info"] += 1

        # Generate summary by section
        sections = {}
        for test in scan_results.get("tests", []):
            section = test.get("section", "Unknown")
            if section not in sections:
                sections[section] = {"total": 0, "passed": 0, "failed": 0}

            sections[section]["total"] += 1
            if test.get("result", "").upper() == "PASS":
                sections[section]["passed"] += 1
            elif test.get("result", "").upper() == "FAIL":
                sections[section]["failed"] += 1

        analysis["summary"] = sections

        self.logger.info(f"Analysis complete: \\\\{analysis['passed']\\\\} passed, \\\\{analysis['failed']\\\\} failed")
        return analysis

    def check_thresholds(self, analysis):
        """Check if results exceed configured thresholds"""
        if not analysis:
            return False

        threshold_config = self.config.get("monitoring", {})
        severity_threshold = threshold_config.get("severity_threshold", "HIGH")
        max_failures = threshold_config.get("max_failures", 5)

        # Count failures by severity
        critical_count = len(analysis.get("critical_failures", []))
        high_count = len(analysis.get("high_failures", []))
        medium_count = len(analysis.get("medium_failures", []))

        # Check thresholds
        if severity_threshold == "CRITICAL" and critical_count > max_failures:
            return True
        elif severity_threshold == "HIGH" and (critical_count + high_count) > max_failures:
            return True
        elif severity_threshold == "MEDIUM" and (critical_count + high_count + medium_count) > max_failures:
            return True

        return False

    def send_notification(self, analysis, threshold_exceeded=False):
        """Send notification about scan results"""
        notification_config = self.config.get("notifications", {})

        # Prepare notification content
        subject = "Docker Bench Security Repuerto"
        if threshold_exceeded:
            subject += " - ALERT: Thresholds Exceeded"

        body = self.generate_notification_body(analysis, threshold_exceeded)

        # Send email notification
        if notification_config.get("email", \\\\{\\\\}).get("enabled", False):
            self.send_email_notification(subject, body)

        # Send webhook notification
        if notification_config.get("webhook", \\\\{\\\\}).get("enabled", False):
            self.send_webhook_notification(analysis, threshold_exceeded)

    def generate_notification_body(self, analysis, threshold_exceeded):
        """Generate notification message body"""
        body = f"""
Docker Bench Security Scan Report
Generated: {analysis['timestamp']}

SUMMARY:
========
Total Checks: {analysis['total_checks']}
Passed: {analysis['passed']}
Failed: {analysis['failed']}
Warnings: {analysis['warnings']}

FAILURES BY SEVERITY:
====================
Critical: {len(analysis['critical_failures'])}
High: {len(analysis['high_failures'])}
Medium: {len(analysis['medium_failures'])}

"""

        if threshold_exceeded:
            body += "\n⚠️  ALERT: Security thresholds have been exceeded!\n\n"

        # Add critical failures details
        if analysis['critical_failures']:
            body += "CRITICAL FAILURES:\n"
            body += "==================\n"
            for failure in analysis['critical_failures'][:5]:  # Limit to first 5
                body += f"- \\\\{failure.get('id', 'Unknown')\\\\}: \\\\{failure.get('desc', 'No Descripción')\\\\}\n"

            if len(analysis['critical_failures']) > 5:
                body += f"... and \\\\{len(analysis['critical_failures']) - 5\\\\} more\n"
            body += "\n"

        # Add section summary
        body += "SUMMARY BY SECTION:\n"
        body += "==================\n"
        for section, stats in analysis['summary'].items():
            pass_rate = (stats['passed'] / stats['total']) * 100 if stats['total'] > 0 else 0
            body += f"\\\\{section\\\\}: \\\\{stats['passed']\\\\}/\\\\{stats['total']\\\\} passed (\\\\{pass_rate:.1f\\\\}%)\n"

        return body

    def send_email_notification(self, subject, body):
        """Send email notification"""
        email_config = self.config.get("notifications", {}).get("email", {})

        try:
            msg = MIMEMultipart()
            msg['From'] = email_config["from"]
            msg['To'] = email_config["to"]
            msg['Subject'] = subject

            msg.attach(MIMEText(body, 'plain'))

            server = smtplib.SMTP(email_config["smtp_server"], email_config["smtp_port"])
            server.starttls()

            if email_config.get("nombre de usuario") and email_config.get("contraseña"):
                server.login(email_config["nombre de usuario"], email_config["contraseña"])

            text = msg.as_string()
            server.sendmail(email_config["from"], email_config["to"], text)
            server.quit()

            self.logger.info("Email notification sent successfully")

        except Exception as e:
            self.logger.error(f"Failed to send email notification: \\\\{e\\\\}")

    def send_webhook_notification(self, analysis, threshold_exceeded):
        """Send webhook notification"""
        webhook_config = self.config.get("notifications", {}).get("webhook", {})

        try:
            import requests

            payload = {
                "timestamp": analysis["timestamp"],
                "alert": threshold_exceeded,
                "summary": {
                    "total_checks": analysis["total_checks"],
                    "passed": analysis["passed"],
                    "failed": analysis["failed"],
                    "critical_failures": len(analysis["critical_failures"]),
                    "high_failures": len(analysis["high_failures"])
                },
                "details": analysis
            }

            headers = webhook_config.get("headers", {})
            headers["Content-Type"] = "application/json"

            response = requests.post(
                webhook_config["url"],
                json=payload,
                headers=headers,
                timeout=30
            )

            if response.status_code == 200:
                self.logger.info("Webhook notification sent successfully")
            else:
                self.logger.error(f"Webhook notification failed: \\\\{response.status_code\\\\}")

        except Exception as e:
            self.logger.error(f"Failed to send webhook notification: \\\\{e\\\\}")

    def run_monitoring_cycle(self):
        """Run complete monitoring cycle"""
        self.logger.info("Starting Docker Bench monitoring cycle")

        # Run Docker Bench scan
        scan_results, output_file = self.run_docker_bench()

        if not scan_results:
            self.logger.error("Failed to run Docker Bench scan")
            return False

        # Analyze results
        analysis = self.analyze_results(scan_results)

        if not analysis:
            self.logger.error("Failed to analyze scan results")
            return False

        # Check thresholds
        threshold_exceeded = self.check_thresholds(analysis)

        if threshold_exceeded:
            self.logger.warning("Security thresholds exceeded!")

        # Send notifications
        self.send_notification(analysis, threshold_exceeded)

        # Save analysis results
        analysis_file = f"docker-bench-analysis-\\\\{datetime.now().strftime('%Y%m%d-%H%M%S')\\\\}.json"
        with open(analysis_file, 'w') as f:
            json.dump(analysis, f, indent=2)

        self.logger.info(f"Monitoring cycle completed. Analysis saved to \\\\{analysis_file\\\\}")
        return True

# Usage
if __name__ == "__main__":
    monitor = DockerBenchMonitoring()
    monitor.run_monitoring_cycle()
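
The class falls back to built-in defaults when docker-bench-config.json is absent; a starting configuration that mirrors those defaults might look like this (all values are placeholders to adjust):

# Example docker-bench-config.json mirroring the script's defaults
cat > docker-bench-config.json << 'EOF'
{
  "monitoring": {"interval_hours": 24, "severity_threshold": "HIGH", "max_failures": 5},
  "notifications": {
    "email": {"enabled": false, "smtp_server": "localhost", "smtp_port": 587,
              "username": "", "password": "",
              "from": "docker-bench@example.com", "to": "security@example.com"},
    "webhook": {"enabled": false, "url": "", "headers": {}}
  },
  "remediation": {"auto_remediate": false, "allowed_checks": []}
}
EOF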

Compliance Reporting Script

#!/usr/bin/env python3
# Docker Bench compliance reporting

import json
import subprocess
from datetime import datetime
import pandas as pd

class DockerBenchCompliance:
    def __init__(self):
        self.cis_mapping = self.load_cis_mapping()

    def load_cis_mapping(self):
        """Load CIS Docker Benchmark mapping"""
        return {
            "1": "Host Configuration",
            "2": "Docker Daemon Configuration",
            "3": "Docker Daemon Configuration Files",
            "4": "Container Images and Build File",
            "5": "Container Runtime",
            "6": "Docker Security Operations"
        }

    def run_compliance_scan(self, output_format="json"):
        """Run Docker Bench compliance scan"""

        cmd = [
            "docker", "run", "--rm", "--net", "host", "--pid", "host",
            "--userns", "host", "--cap-add", "audit_control",
            "-v", "/etc:/etc:ro",
            "-v", "/var/lib:/var/lib:ro",
            "-v", "/var/run/docker.sock:/var/run/docker.sock:ro",
            "docker/docker-bench-security"
        ]

        if output_format == "json":
            cmd.append("-j")

        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)

            if result.returncode == 0:
                if output_format == "json":
                    return json.loads(result.stdout)
                else:
                    return result.stdout
            else:
                print(f"Docker Bench scan failed: \\\\{result.stderr\\\\}")
                return None

        except Exception as e:
            print(f"Error running Docker Bench: \\\\{e\\\\}")
            return None

    def generate_compliance_report(self, scan_results):
        """Generate compliance report"""

        if not scan_results:
            return None

        report = {
            "report_metadata": {
                "generated_at": datetime.now().isoformat(),
                "benchmark": "CIS Docker Benchmark v1.2.0",
                "tool": "Docker Bench for Security"
            },
            "executive_summary": {},
            "detailed_results": {},
            "recommendations": []
        }

        # Calculate overall compliance score
        total_tests = len(scan_results.get("tests", []))
        passed_tests = len([t for t in scan_results.get("tests", []) if t.get("result") == "PASS"])

        compliance_score = (passed_tests / total_tests) * 100 if total_tests > 0 else 0

        repuerto["executive_summary"] = \\\\{
            "overall_compliance_score": round(compliance_score, 2),
            "total_controls": total_tests,
            "passed_controls": passed_tests,
            "failed_controls": total_tests - passed_tests,
            "compliance_level": self.get_compliance_level(compliance_score)
        \\\\}

        # Group results by CIS section
        sections = {}
        for test in scan_results.get("tests", []):
            section_id = test.get("id", "").split("_")[1] if "_" in test.get("id", "") else "unknown"
            section_name = self.cis_mapping.get(section_id, f"Section {section_id}")

            if section_name not in sections:
                sections[section_name] = {
                    "total": 0,
                    "passed": 0,
                    "failed": 0,
                    "tests": []
                }

            sections[section_name]["total"] += 1
            sections[section_name]["tests"].append(test)

            if test.get("result") == "PASS":
                sections[section_name]["passed"] += 1
            else:
                sections[section_name]["failed"] += 1

        # Calculate section compliance scores
        for section_name, section_data in sections.items():
            section_score = (section_data["passed"] / section_data["total"]) * 100
            section_data["compliance_score"] = round(section_score, 2)

        repuerto["detailed_results"] = sections

        # Generate recommendations
        failed_tests = [t for t in scan_results.get("tests", []) if t.get("result") == "FAIL"]

        for test in failed_tests[:10]:  # Top 10 recommendations
            recommendation = {
                "control_id": test.get("id", ""),
                "title": test.get("desc", ""),
                "severity": test.get("severity", "MEDIUM"),
                "remediation": self.get_remediation_guidance(test.get("id", ""))
            }
            report["recommendations"].append(recommendation)

        return report

    def get_compliance_level(self, score):
        """Determine compliance level based on score"""
        if score >= 95:
            return "Excellent"
        elif score >= 85:
            return "Good"
        elif score >= 70:
            return "Fair"
        elif score >= 50:
            return "Poor"
        else:
            return "Critical"

    def get_remediation_guidance(self, check_id):
        """Get remediation guidance for specific check"""

        remediation_guide = {
            "check_2_2": "Configure the Docker daemon logging level by adding '\"log-level\": \"info\"' to /etc/docker/daemon.json",
            "check_2_5": "Disable the legacy registry by adding '\"disable-legacy-registry\": true' to /etc/docker/daemon.json",
            "check_2_8": "Enable user namespace support by adding '\"userns-remap\": \"default\"' to /etc/docker/daemon.json",
            "check_2_11": "Enable Docker Content Trust by setting the DOCKER_CONTENT_TRUST=1 environment variable",
            "check_2_13": "Configure centralized logging by setting an appropriate log driver in /etc/docker/daemon.json",
            "check_4_1": "Create a user for the container in the Dockerfile using the USER instruction",
            "check_4_6": "Add a HEALTHCHECK instruction to the container image Dockerfile",
            "check_5_1": "Do not disable the AppArmor profile by avoiding --security-opt apparmor=unconfined",
            "check_5_2": "Do not disable SELinux security options by avoiding --security-opt label=disable"
        }

        return remediation_guide.get(check_id, "Refer to the CIS Docker Benchmark documentation for detailed remediation steps")

    def export_to_csv(self, report, filename="docker-compliance-report.csv"):
        """Export compliance report to CSV"""

        # Prepare data for CSV export
        csv_data = []

        for section_name, section_data in report["detailed_results"].items():
            for test in section_data["tests"]:
                csv_data.append({
                    "Section": section_name,
                    "Control_ID": test.get("id", ""),
                    "Description": test.get("desc", ""),
                    "Result": test.get("result", ""),
                    "Severity": test.get("severity", ""),
                    "Section_Compliance_Score": section_data["compliance_score"]
                })

        # Create DataFrame and export
        df = pd.DataFrame(csv_data)
        df.to_csv(filename, index=False)

        print(f"Compliance report exported to {filename}")

    def generate_html_report(self, report, filename="docker-compliance-report.html"):
        """Generate HTML compliance report"""

        html_template = """
<!DOCTYPE html>
<html>
<head>
    <title>Docker Compliance Report</title>
    <style>
        body {{ font-family: Arial, sans-serif; margin: 20px; }}
        .header {{ background-color: #f0f0f0; padding: 20px; }}
        .summary {{ background-color: #e6f3ff; padding: 15px; margin: 20px 0; }}
        .section {{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; }}
        .excellent {{ color: #4caf50; }}
        .good {{ color: #8bc34a; }}
        .fair {{ color: #ff9800; }}
        .poor {{ color: #f44336; }}
        .critical {{ color: #d32f2f; }}
        table {{ width: 100%; border-collapse: collapse; margin: 10px 0; }}
        th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
        th {{ background-color: #f2f2f2; }}
        .pass {{ background-color: #c8e6c9; }}
        .fail {{ background-color: #ffcdd2; }}
    </style>
</head>
<body>
    <div class="header">
        <h1>Docker Security Compliance Report</h1>
        <p>Generated: {report_date}</p>
        <p>Benchmark: {benchmark}</p>
    </div>

    <div class="summary">
        <h2>Executive Summary</h2>
        <p><strong>Overall Compliance Score:</strong> <span class="{compliance_class}">{compliance_score}%</span></p>
        <p><strong>Compliance Level:</strong> {compliance_level}</p>
        <p><strong>Total Controls:</strong> {total_controls}</p>
        <p><strong>Passed:</strong> {passed_controls}</p>
        <p><strong>Failed:</strong> {failed_controls}</p>
    </div>

    {sections_html}

    <div class="section">
        <h2>Top Recommendations</h2>
        {recommendations_html}
    </div>
</body>
</html>
        """

        # Generate sections HTML
        sections_html = ""
        for section_name, section_data in report["detailed_results"].items():
            score_class = self.get_score_class(section_data["compliance_score"])

            tests_html = ""
            for test in section_data["tests"]:
                result_class = "pass" if test.get("result") == "PASS" else "fail"
                tests_html += f"""
                <tr class="{result_class}">
                    <td>{test.get("id", "")}</td>
                    <td>{test.get("desc", "")}</td>
                    <td>{test.get("result", "")}</td>
                    <td>{test.get("severity", "")}</td>
                </tr>
                """

            sections_html += f"""
            <div class="section">
                <h3>{section_name}</h3>
                <p><strong>Section Score:</strong> <span class="{score_class}">{section_data["compliance_score"]}%</span></p>
                <table>
                    <tr>
                        <th>Control ID</th>
                        <th>Description</th>
                        <th>Result</th>
                        <th>Severity</th>
                    </tr>
                    {tests_html}
                </table>
            </div>
            """

        # Generate recommendations HTML
        recommendations_html = "<ul>"
        for rec in report["recommendations"]:
            recommendations_html += f"""
            <li>
                <strong>{rec["control_id"]}</strong>: {rec["title"]}
                <br><small><strong>Remediation:</strong> {rec["remediation"]}</small>
            </li>
            """
        recommendations_html += "</ul>"

        # Fill template
        compliance_class = self.get_score_class(report["executive_summary"]["overall_compliance_score"])

        html_content = html_template.format(
            report_date=report["report_metadata"]["generated_at"],
            benchmark=report["report_metadata"]["benchmark"],
            compliance_score=report["executive_summary"]["overall_compliance_score"],
            compliance_class=compliance_class,
            compliance_level=report["executive_summary"]["compliance_level"],
            total_controls=report["executive_summary"]["total_controls"],
            passed_controls=report["executive_summary"]["passed_controls"],
            failed_controls=report["executive_summary"]["failed_controls"],
            sections_html=sections_html,
            recommendations_html=recommendations_html
        )

        with open(filename, 'w') as f:
            f.write(html_content)

        print(f"HTML compliance repuerto generated: \\\\{filename\\\\}")

    def get_score_class(self, score):
        """Get CSS class for compliance score"""
        if score >= 95:
            return "excellent"
        elif score >= 85:
            return "good"
        elif score >= 70:
            return "fair"
        elif score >= 50:
            return "poor"
        else:
            return "critical"

# Usage
if __name__ == "__main__":
    compliance = DockerBenchCompliance()

    # Run compliance scan
    scan_results = compliance.run_compliance_scan()

    if scan_results:
        # Generate compliance report
        report = compliance.generate_compliance_report(scan_results)

        # Export to different formats
        compliance.export_to_csv(report)
        compliance.generate_html_report(report)

        # Save JSON report
        with open("docker-compliance-report.json", 'w') as f:
            json.dump(report, f, indent=2)

        print(f"Compliance Score: {report['executive_summary']['overall_compliance_score']}%")
        print(f"Compliance Level: {report['executive_summary']['compliance_level']}")

Integration Examples

Kubernetes Integration

# docker-bench-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: docker-bench-security
  namespace: security
  labels:
    app: docker-bench-security
spec:
  selector:
    matchLabels:
      app: docker-bench-security
  template:
    metadata:
      labels:
        app: docker-bench-security
    spec:
      hostPID: true
      hostNetwork: true
      serviceAccountName: docker-bench-security
      containers:
      - name: docker-bench-security
        image: docker/docker-bench-security
        comando: ["./docker-bench-security.sh"]
        args: ["-j", "-l", "/tmp/docker-bench-repuerto.log"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: etc
          mountPath: /etc
          readOnly: true
        - name: var-lib
          mountPath: /var/lib
          readOnly: true
        - name: usr-bin-containerd
          mountPath: /usr/bin/containerd
          readOnly: true
        - name: usr-bin-runc
          mountPath: /usr/bin/runc
          readOnly: true
        - name: output
          mountPath: /tmp
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
          requests:
            memory: "128Mi"
            cpu: "100m"
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: etc
        hostPath:
          path: /etc
      - name: var-lib
        hostPath:
          path: /var/lib
      - name: usr-bin-containerd
        hostPath:
          path: /usr/bin/containerd
      - name: usr-bin-runc
        hostPath:
          path: /usr/bin/runc
      - name: output
        hostPath:
          path: /var/log/docker-bench
      restartPolicy: Always
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
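
Once the DaemonSet is applied, per-node scan output can be pulled straight from its pods using the label defined above (a minimal sketch):

# Inspect the per-node scan pods and read their output
kubectl apply -f docker-bench-daemonset.yaml
kubectl -n security get pods -l app=docker-bench-security -o wide
kubectl -n security logs -l app=docker-bench-security --tail=50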

Prometheus Integration

# docker-bench-exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-bench-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-bench-exporter
  template:
    metadata:
      labels:
        app: docker-bench-exporter
    spec:
      containers:
      - name: docker-bench-exporter
        image: custom/docker-bench-exporter:latest
        ports:
        - containerPort: 8080
        env:
        - name: SCAN_INTERVAL
          value: "3600"  # 1 hour
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock

---
apiVersion: v1
kind: Service
metadata:
  name: docker-bench-exporter
  namespace: monitoring
  labels:
    app: docker-bench-exporter
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: metrics
  selector:
    app: docker-bench-exporter

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: docker-bench-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: docker-bench-exporter
  endpoints:
  - port: metrics
    interval: 60s
    path: /metrics
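
Before relying on the ServiceMonitor, the exporter endpoint can be checked by hand; the port and path match the manifests above, while the metric names themselves depend on the custom exporter image:

# Manually verify the endpoint Prometheus will scrape
kubectl -n monitoring port-forward deploy/docker-bench-exporter 8080:8080 &
curl -s http://localhost:8080/metrics | head -n 20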

Troubleshooting

Common Issues

Permission Problems:

# Ensure the Docker daemon is running
sudo systemctl status docker
sudo systemctl start docker

# Check Docker socket permissions (avoid world-writable permissions such as chmod 666;
# prefer adding your user to the docker group as shown below)
ls -la /var/run/docker.sock

# Run with proper privileges
sudo ./docker-bench-security.sh

# Check user groups
groups $USER
sudo usermod -aG docker $USER

Container Execution Issues:

# Check Docker version compatibility
docker version

# Pull latest Docker Bench image
docker pull docker/docker-bench-security:latest

# Run with debug output
docker run --rm --net host --pid host --userns host --cap-add audit_control \
    -v /etc:/etc:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    docker/docker-bench-security -v

# Check container logs
docker logs $(docker ps -a | grep docker-bench-security | awk '{print $1}')

Output and Parsing Issues:

# Verify JSON output format
./docker-bench-security.sh -j | jq '.'

# Check log file permissions
touch docker-bench.log
chmod 644 docker-bench.log

# Validate output directory
mkdir -p /var/log/docker-bench
sudo chown $USER:$USER /var/log/docker-bench

# Test specific checks
./docker-bench-security.sh -c host_configuration -v

Performance Optimization

Optimizing Docker Bench performance:

# Run specific sections only
./docker-bench-security.sh -c container_runtime

# Skip time-consuming checks
./docker-bench-security.sh -e check_1_1_1,check_1_1_2

# Use faster execution mode
./docker-bench-security.sh -q

# Limit resource usage
docker run --rm --memory=256m --cpus=0.5 \
    --net host --pid host --userns host --cap-add audit_control \
    -v /etc:/etc:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    docker/docker-bench-security

Security Considerations

Safe Usage Practices

Environment Security:

- Run Docker Bench in isolated environments when possible
- Limit network access during security scans
- Use read-only mounts for system directories
- Implement proper access controls for scan results
- Keep Docker Bench and the Docker daemon up to date

Data Protection:

- Encrypt sensitive scan results and reports (see the example below)
- Implement secure storage for compliance data
- Use secure channels for transmitting reports
- Regularly clean up temporary files and logs
- Implement data retention policies
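
One way to encrypt a report at rest before archiving it, using symmetric GPG (illustrative; any vetted encryption workflow works):

# Encrypt a report and remove the plaintext copy
gpg --symmetric --cipher-algo AES256 docker-bench-report.json   # produces docker-bench-report.json.gpg
shred -u docker-bench-report.json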

Operational Security

Monitoring and Alerting:

- Monitor Docker Bench execution and results
- Set up alerting for critical security findings
- Track compliance score trends over time
- Implement automated remediation where appropriate
- Regularly review security configurations

Integration Security:

- Secure CI/CD pipeline integration
- Protect API keys and credentials
- Implement proper RBAC for Kubernetes deployments
- Monitor for unauthorized Docker Bench usage
- Regularly assess the security of the monitoring infrastructure

References

  1. Docker Bench for Security GitHub
  2. CIS Docker Benchmark
  3. Docker Security Best Practices
  4. NIST Container Security Guide
  5. Docker Official Security Documentation