
# Falco Cheatsheet


## Overview

Falco is an open-source runtime security tool that detects unexpected application behavior and alerts on threats at runtime. It secures and monitors a system through system calls: it parses Linux syscalls from the kernel at runtime and asserts the event stream against a powerful rules engine.
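
For orientation, this is roughly what a single alert looks like when `json_output` is enabled. The values below are illustrative and the exact set of fields can vary slightly between Falco versions:

```json
{
  "time": "2024-05-01T12:34:56.789012345Z",
  "rule": "Terminal shell in container",
  "priority": "Notice",
  "source": "syscall",
  "tags": ["container", "shell", "mitre_execution"],
  "output": "12:34:56.789: Notice A shell was spawned in a container with an attached terminal ...",
  "output_fields": {
    "container.id": "3ad7b26ded6d",
    "proc.name": "bash",
    "user.name": "root"
  }
}
```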

### Key Features

- **Runtime Threat Detection**: Real-time monitoring of system calls and kernel events
- **Kubernetes Native**: Deep integration with Kubernetes environments
- **Flexible Rules Engine**: Customizable rules for threat detection
- **Multiple Output Channels**: Alerts via syslog, files, HTTP, gRPC, and more
- **eBPF Support**: Modern eBPF probe for efficient kernel monitoring
- **Cloud-Native Integration**: Works with tools across the CNCF ecosystem

## Installation

### Binary Installation

```bash
# Download and install Falco
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get install -y falco

# Start Falco service
systemctl enable falco
systemctl start falco
```

### Package Manager Installation

```bash
# Ubuntu/Debian
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update
apt-get install falco

# RHEL/CentOS/Fedora
rpm --import https://falco.org/repo/falcosecurity-3672BA8F.asc
curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo
yum install falco

# Arch Linux
yay -S falco

# macOS (Homebrew)
brew install falco
```

### Container Installation

```bash
# Pull Falco image
docker pull falcosecurity/falco:latest

# Run Falco in container
docker run --rm -i -t \
    --privileged \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /dev:/host/dev \
    -v /proc:/host/proc:ro \
    -v /boot:/host/boot:ro \
    -v /lib/modules:/host/lib/modules:ro \
    -v /usr:/host/usr:ro \
    -v /etc:/host/etc:ro \
    falcosecurity/falco:latest
```

### Kubernetes Installation

```bash
# Install via Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco

# Install via kubectl
kubectl apply -f https://raw.githubusercontent.com/falcosecurity/falco/master/deploy/kubernetes/falco-daemonset-configmap.yaml

# Verify installation
kubectl get pods -l app=falco
kubectl logs -l app=falco
```

### eBPF Installation

```bash
# Install with eBPF probe
falco --modern-bpf

# Install eBPF probe manually
falco-driver-loader bpf

# Check whether the kernel-module driver is loaded (eBPF probes do not show up in lsmod)
lsmod | grep falco
```
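
On recent Falco releases the driver can also be selected in `falco.yaml` instead of on the command line. A minimal sketch, assuming a version that supports the `engine` block:

```yaml
# /etc/falco/falco.yaml (excerpt) - select the modern eBPF driver
engine:
  kind: modern_ebpf   # other kinds on such versions: kmod, ebpf
```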

## Basic Usage

### Running Falco

```bash
# Run Falco with default configuration
falco

# Run with specific configuration file
falco -c /etc/falco/falco.yaml

# Run with custom rules
falco -r /path/to/custom_rules.yaml

# Run in daemon mode
falco -d

# Run with specific output format
falco --json-output
```

### Command-Line Options

```bash
# Verbose output
falco -v

# Very verbose output
falco -vv

# Disable specific rule tags
falco -T filesystem,network

# Enable specific rule tags only
falco -t container,k8s_audit

# Dry run (validate configuration)
falco --dry-run

# Print supported fields
falco --list

# Print version information
falco --version
```

### Configuration File

```yaml
# /etc/falco/falco.yaml
rules_file:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/falco_rules.local.yaml
  - /etc/falco/k8s_audit_rules.yaml

time_format_iso_8601: false
json_output: false
json_include_output_property: true
json_include_tags_property: true

log_stderr: true
log_syslog: true
log_level: info

priority: debug

buffered_outputs: false
outputs:
  rate: 1
  max_burst: 1000

syslog_output:
  enabled: true

file_output:
  enabled: false
  keep_alive: false
  filename: ./events.txt

stdout_output:
  enabled: true

webserver:
  enabled: true
  listen_port: 8765
  k8s_healthz_endpoint: /healthz
  ssl_enabled: false

grpc:
  enabled: false
  bind_address: "0.0.0.0:5060"
  threadiness: 8

grpc_output:
  enabled: false
```
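
Any key in `falco.yaml` can also be overridden at startup with `-o`, which is handy for experiments without editing the file; nested keys use dot notation:

```bash
# Override individual configuration keys on the command line
falco -o json_output=true -o priority=warning

# Nested keys, e.g. enable the embedded web server on a different port
falco -o webserver.enabled=true -o webserver.listen_port=8765
```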

## Rules and Detection

### Default Rules

```yaml
# Container rules
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
  output: >
    A shell was spawned in a container with an attached terminal (user=%user.name %container.info
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository)
  priority: NOTICE
  tags: [container, shell, mitre_execution]

- rule: File below /etc opened for writing
  desc: an attempt to write to any file below /etc
  condition: open_write and fd.typechar='f' and fd.name startswith /etc
  output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])"
  priority: ERROR
  tags: [filesystem, mitre_persistence]
```
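
Default rules are best adjusted in `/etc/falco/falco_rules.local.yaml` rather than by editing the shipped rule set. A hedged sketch: disabling a rule uses `enabled: false`, and narrowing a condition uses `append: true` on older releases (newer releases provide an `override` syntax instead); the image name below is a made-up example:

```yaml
# /etc/falco/falco_rules.local.yaml
# Disable a default rule that is too noisy in this environment
- rule: File below /etc opened for writing
  enabled: false

# Narrow a default rule by appending to its condition (older `append: true` form)
- rule: Terminal shell in container
  condition: and not container.image.repository = "mycorp/debug-shell"
  append: true
```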

### Custom Rules

```yaml
# custom_rules.yaml
- rule: Detect cryptocurrency mining
  desc: Detect cryptocurrency mining activities
  condition: >
    spawned_process and
    (proc.name in (xmrig, cpuminer, ccminer, cgminer, bfgminer) or
     proc.cmdline contains "stratum+tcp" or
     proc.cmdline contains "mining.pool" or
     proc.cmdline contains "--donate-level")
  output: >
    Cryptocurrency mining detected (user=%user.name command=%proc.cmdline
    container=%container.info image=%container.image.repository)
  priority: CRITICAL
  tags: [cryptocurrency, mining, malware]

- rule: Suspicious network activity
  desc: Detect suspicious network connections
  condition: >
    inbound_outbound and
    (fd.rip in (suspicious_ips) or
     fd.rport in (4444, 5555, 6666, 7777, 8888, 9999) or
     fd.rip startswith "10.0.0" and fd.rport = 22)
  output: >
    Suspicious network activity detected (user=%user.name command=%proc.cmdline
    connection=%fd.rip:%fd.rport direction=%evt.type container=%container.info)
  priority: WARNING
  tags: [network, suspicious]

- rule: Privilege escalation attempt
  desc: Detect privilege escalation attempts
  condition: >
    spawned_process and
    (proc.name in (sudo, su, pkexec, doas) or
     proc.cmdline contains "chmod +s" or
     proc.cmdline contains "setuid" or
     proc.cmdline contains "setgid")
  output: >
    Privilege escalation attempt detected (user=%user.name command=%proc.cmdline
    parent=%proc.pname container=%container.info)
  priority: WARNING
  tags: [privilege_escalation, mitre_privilege_escalation]
```
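
To check that custom rules actually fire, the falcosecurity event-generator can replay benign versions of suspicious activity while Falco is running; the invocation below assumes the upstream container image:

```bash
# Generate sample suspicious events against a running Falco instance
docker run --rm -it falcosecurity/event-generator run syscall --loop
```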

### Kubernetes Audit Rules

```yaml
# k8s_audit_rules.yaml
- rule: K8s Secret Created
  desc: Detect any attempt to create a secret
  condition: ka and secret and kcreate
  output: K8s Secret Created (user=%ka.user.name verb=%ka.verb name=%ka.target.name reason=%ka.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Secret Deleted
  desc: Detect any attempt to delete a secret
  condition: ka and secret and kdelete
  output: K8s Secret Deleted (user=%ka.user.name verb=%ka.verb name=%ka.target.name reason=%ka.reason)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

- rule: K8s ConfigMap Modified
  desc: Detect any attempt to modify a configmap
  condition: ka and configmap and kmodify
  output: K8s ConfigMap Modified (user=%ka.user.name verb=%ka.verb name=%ka.target.name reason=%ka.reason)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
```

## Advanced Configuration

### Output Channels

```yaml
# HTTP Output
http_output:
  enabled: true
  url: "http://webhook.example.com/falco"
  user_agent: "falcosecurity/falco"

# gRPC Output
grpc_output:
  enabled: true
  address: "grpc-server:5060"
  tls: true
  cert: "/etc/ssl/certs/client.crt"
  key: "/etc/ssl/private/client.key"
  ca: "/etc/ssl/certs/ca.crt"

# Program Output
program_output:
  enabled: true
  keep_alive: false
  program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
```
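
For fan-out to chat, ticketing, or SIEM systems, many deployments pair Falco with Falcosidekick instead of wiring each output by hand. A sketch using the official Helm chart; the `falcosidekick.*` values are assumed from the chart's documented options and the webhook URL is a placeholder:

```bash
# Enable Falcosidekick alongside Falco and forward alerts to Slack
helm upgrade --install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/services/XXX"
```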

### Performance Tuning

```yaml
# Performance configuration
syscall_event_drops:
  actions:
    - log
    - alert
  rate: 0.03333
  max_burst: 10

syscall_event_timeouts:
  max_consecutives: 1000

metadata_download:
  max_mb: 100
  chunk_wait_us: 1000
  watch_freq_sec: 1

base_syscalls:
  custom_set: []
  repair: false
```

### Rule Conditions and Macros

```yaml
# Macros for reusable conditions
- macro: container
  condition: container.id != host

- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

- macro: shell_procs
  condition: proc.name in (bash, csh, ksh, sh, tcsh, zsh, dash)

- macro: sensitive_files
  condition: >
    fd.name startswith /etc or
    fd.name startswith /root/.ssh or
    fd.name startswith /home/*/.ssh

# Lists for grouping values
- list: shell_binaries
  items: [bash, csh, ksh, sh, tcsh, zsh, dash]

- list: package_mgmt_binaries
  items: [dpkg, apt, apt-get, yum, rpm, dnf, zypper]

- list: network_tools
  items: [nc, ncat, netcat, nmap, dig, nslookup, host]
```
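
Macros and lists are referenced from rule conditions just like built-in fields; for example, a rule built on the `network_tools` list and the macros defined above:

```yaml
- rule: Network tool launched in container
  desc: A known network reconnaissance tool was executed inside a container
  condition: spawned_process and container and proc.name in (network_tools)
  output: >
    Network tool launched in container (user=%user.name command=%proc.cmdline
    container=%container.info image=%container.image.repository)
  priority: NOTICE
  tags: [network, container]
```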

## Kubernetes Integration

### Helm Configuration

```yaml
# values.yaml for Helm chart
falco:
  rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/k8s_audit_rules.yaml

  json_output: true
  json_include_output_property: true

  grpc:
    enabled: true
    bind_address: "0.0.0.0:5060"

  grpc_output:
    enabled: true

  webserver:
    enabled: true
    listen_port: 8765

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 512Mi

nodeSelector:
  kubernetes.io/os: linux

tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

serviceAccount:
  create: true
  name: falco

rbac:
  create: true

customRules:
  custom_rules.yaml: |-
    - rule: Detect kubectl usage
      desc: Detect kubectl command execution
      condition: spawned_process and proc.name = kubectl
      output: kubectl command executed (user=%user.name command=%proc.cmdline)
      priority: INFO
      tags: [k8s, kubectl]
```
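
Applying a values file like the one above is a normal Helm install/upgrade (the file name `values.yaml` is just the example used here):

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm upgrade --install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  -f values.yaml
```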

### DaemonSet Configuration

```yaml
# falco-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
  namespace: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      serviceAccountName: falco
      hostNetwork: true
      hostPID: true
      containers:
      - name: falco
        image: falcosecurity/falco:latest
        args:
          - /usr/bin/falco
          - --cri=/host/run/containerd/containerd.sock
          - --k8s-api
          - --k8s-api-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          - --k8s-api-token=/var/run/secrets/kubernetes.io/serviceaccount/token
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /host/var/run/docker.sock
          name: docker-socket
        - mountPath: /host/run/containerd/containerd.sock
          name: containerd-socket
        - mountPath: /host/dev
          name: dev-fs
        - mountPath: /host/proc
          name: proc-fs
          readOnly: true
        - mountPath: /host/boot
          name: boot-fs
          readOnly: true
        - mountPath: /host/lib/modules
          name: lib-modules
        - mountPath: /host/usr
          name: usr-fs
          readOnly: true
        - mountPath: /host/etc
          name: etc-fs
          readOnly: true
        - mountPath: /etc/falco
          name: falco-config
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: containerd-socket
        hostPath:
          path: /run/containerd/containerd.sock
      - name: dev-fs
        hostPath:
          path: /dev
      - name: proc-fs
        hostPath:
          path: /proc
      - name: boot-fs
        hostPath:
          path: /boot
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: usr-fs
        hostPath:
          path: /usr
      - name: etc-fs
        hostPath:
          path: /etc
      - name: falco-config
        configMap:
          name: falco-config
```
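
The DaemonSet above expects a ConfigMap named `falco-config` in the `falco` namespace; one way to create it from local configuration files before applying the manifest:

```bash
kubectl create namespace falco --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap falco-config --namespace falco \
  --from-file=falco.yaml=/etc/falco/falco.yaml \
  --from-file=falco_rules.yaml=/etc/falco/falco_rules.yaml
kubectl apply -f falco-daemonset.yaml
```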

## CI/CD Integration

### GitHub Actions

```yaml
# .github/workflows/falco.yml
name: Falco Security Monitoring

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  falco-rules-test:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v3

    - name: Install Falco
      run: |
        curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | sudo apt-key add -
        echo "deb https://download.falco.org/packages/deb stable main" | sudo tee -a /etc/apt/sources.list.d/falcosecurity.list
        sudo apt-get update
        sudo apt-get install -y falco

    - name: Validate Falco rules
      run: |
        sudo falco --dry-run -r rules/custom_rules.yaml

    - name: Test Falco rules
      run: |
        # Start Falco in background
        sudo falco -r rules/custom_rules.yaml --json-output > falco-events.json &
        FALCO_PID=$!

        # Run test scenarios
        sleep 5

        # Test 1: Shell in container
        docker run --rm alpine sh -c "echo 'test'"

        # Test 2: File modification
        sudo touch /etc/test-file

        # Wait and stop Falco
        sleep 10
        sudo kill $FALCO_PID

        # Check for expected events
        if grep -q "A shell was spawned in a container" falco-events.json; then
          echo "✅ Container shell detection working"
        else
          echo "❌ Container shell detection failed"
          exit 1
        fi

    - name: Upload Falco events
      uses: actions/upload-artifact@v3
      with:
        name: falco-events
        path: falco-events.json

  falco-k8s-deploy:
    runs-on: ubuntu-latest
    needs: falco-rules-test
    if: github.ref == 'refs/heads/main'
    steps:
    - name: Checkout code
      uses: actions/checkout@v3

    - name: Setup kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'latest'

    - name: Configure kubectl
      run: |
        mkdir -p "$HOME/.kube"
        echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > "$HOME/.kube/config"

    - name: Deploy Falco to Kubernetes
      run: |
        kubectl create namespace falco --dry-run=client -o yaml | kubectl apply -f -
        kubectl apply -f k8s/falco-configmap.yaml
        kubectl apply -f k8s/falco-daemonset.yaml

    - name: Verify Falco deployment
      run: |
        kubectl rollout status daemonset/falco -n falco --timeout=300s
        kubectl get pods -n falco -l app=falco
```

### GitLab CI

```yaml
# .gitlab-ci.yml
stages:
  - validate
  - test
  - deploy

falco-validate:
  stage: validate
  image: falcosecurity/falco:latest
  script:
    - falco --dry-run -r rules/custom_rules.yaml
  only:
    - main
    - merge_requests

falco-test:
  stage: test
  image: ubuntu:22.04
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - apt-get update && apt-get install -y curl gnupg docker.io
    - curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
    - echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
    - apt-get update && apt-get install -y falco
  script:
    - falco -r rules/custom_rules.yaml --json-output > falco-events.json &
    - FALCO_PID=$!
    - sleep 5
    - docker run --rm alpine sh -c "echo 'test'"
    - sleep 10
    - kill $FALCO_PID
    - grep -q "A shell was spawned in a container" falco-events.json
  artifacts:
    paths:
      - falco-events.json
    expire_in: 1 week
  only:
    - main
    - merge_requests

falco-deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl create namespace falco --dry-run=client -o yaml | kubectl apply -f -
    - kubectl apply -f k8s/
    - kubectl rollout status daemonset/falco -n falco --timeout=300s
  only:
    - main
```

## Automation Scripts

### Falco Event Processing

```bash
#!/bin/bash
# falco-event-processor.sh

set -e

# Configuration
FALCO_EVENTS_FILE="${FALCO_EVENTS_FILE:-/var/log/falco/events.json}"
PROCESSED_EVENTS_DIR="${PROCESSED_EVENTS_DIR:-/var/log/falco/processed}"
ALERT_WEBHOOK_URL="${ALERT_WEBHOOK_URL:-}"
SLACK_WEBHOOK_URL="${SLACK_WEBHOOK_URL:-}"

# Create processed events directory
mkdir -p "$PROCESSED_EVENTS_DIR"

# Function to process Falco events
process_events() {
    echo "Processing Falco events from $FALCO_EVENTS_FILE..."

    # Read new events (since last processing)
    local last_processed_file="$PROCESSED_EVENTS_DIR/last_processed.timestamp"
    local last_processed_time="0"

    if [ -f "$last_processed_file" ]; then
        last_processed_time=$(cat "$last_processed_file")
    fi

    # Process events newer than last processed time
    while IFS= read -r line; do
        if [ -n "$line" ]; then
            process_single_event "$line"
        fi
    done < <(jq -r --arg timestamp "$last_processed_time" '
        select(.time > $timestamp) | @json
    ' "$FALCO_EVENTS_FILE" 2>/dev/null || echo "")

    # Update last processed timestamp
    date +%s > "$last_processed_file"
}

# Function to process a single event
process_single_event() {
    local event_json="$1"

    # Parse event details
    local priority=$(echo "$event_json" | jq -r '.priority // "INFO"')
    local rule=$(echo "$event_json" | jq -r '.rule // "Unknown"')
    local output=$(echo "$event_json" | jq -r '.output // "No output"')
    local time=$(echo "$event_json" | jq -r '.time // "Unknown"')
    local tags=$(echo "$event_json" | jq -r '.tags[]? // empty' | tr '\n' ',' | sed 's/,$//')

    echo "Processing event: $rule (Priority: $priority)"

    # Save event to processed directory
    local event_file="$PROCESSED_EVENTS_DIR/event_$(date +%s%N).json"
    echo "$event_json" > "$event_file"

    # Handle based on priority
    case "$priority" in
        "CRITICAL"|"ERROR")
            handle_critical_event "$event_json"
            ;;
        "WARNING")
            handle_warning_event "$event_json"
            ;;
        "NOTICE"|"INFO")
            handle_info_event "$event_json"
            ;;
    esac
}

# Function to handle critical events
handle_critical_event() {
    local event_json="$1"
    local rule=$(echo "$event_json" | jq -r '.rule')
    local output=$(echo "$event_json" | jq -r '.output')

    echo "🚨 CRITICAL EVENT: $rule"
    echo "Details: $output"

    # Send immediate alerts
    send_slack_alert "🚨 CRITICAL Falco Alert" "$rule: $output" "danger"
    send_webhook_alert "$event_json"

    # Log to syslog
    logger -p local0.crit "Falco Critical: $rule - $output"
}

# Function to handle warning events
handle_warning_event() {
    local event_json="$1"
    local rule=$(echo "$event_json" | jq -r '.rule')
    local output=$(echo "$event_json" | jq -r '.output')

    echo "⚠️ WARNING EVENT: $rule"
    echo "Details: $output"

    # Send warning alerts
    send_slack_alert "⚠️ WARNING Falco Alert" "$rule: $output" "warning"

    # Log to syslog
    logger -p local0.warning "Falco Warning: $rule - $output"
}

# Function to handle info events
handle_info_event() {
    local event_json="$1"
    local rule=$(echo "$event_json" | jq -r '.rule')

    echo "ℹ️ INFO EVENT: $rule"

    # Log to syslog
    logger -p local0.info "Falco Info: $rule"
}

# Function to send Slack alerts
send_slack_alert() {
    local title="$1"
    local message="$2"
    local color="$3"

    if [ -n "$SLACK_WEBHOOK_URL" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{
                \"attachments\": [{
                    \"color\": \"$color\",
                    \"title\": \"$title\",
                    \"text\": \"$message\",
                    \"footer\": \"Falco Security Monitor\",
                    \"ts\": $(date +%s)
                }]
            }" \
            "$SLACK_WEBHOOK_URL" &
    fi
}

# Function to send webhook alerts
send_webhook_alert() {
    local event_json="$1"

    if [ -n "$ALERT_WEBHOOK_URL" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "$event_json" \
            "$ALERT_WEBHOOK_URL" &
    fi
}

# Function to generate summary report
generate_summary_report() {
    echo "Generating Falco events summary..."

    local report_file="$PROCESSED_EVENTS_DIR/summary_$(date +%Y%m%d).json"

    # Count events by priority
    local critical_count=$(find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -exec jq -r '.priority' {} \; | grep -c "CRITICAL" || echo "0")
    local error_count=$(find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -exec jq -r '.priority' {} \; | grep -c "ERROR" || echo "0")
    local warning_count=$(find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -exec jq -r '.priority' {} \; | grep -c "WARNING" || echo "0")
    local info_count=$(find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -exec jq -r '.priority' {} \; | grep -c -E "NOTICE|INFO" || echo "0")

    # Generate summary
    cat > "$report_file" << EOF
{
  "date": "$(date -I)",
  "summary": {
    "critical": $critical_count,
    "error": $error_count,
    "warning": $warning_count,
    "info": $info_count,
    "total": $((critical_count + error_count + warning_count + info_count))
  },
| "top_rules": $(find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -exec jq -r '.rule' {} \; | sort | uniq -c | sort -nr | head -10 | jq -R 'split(" ") | {count: .[0], rule: .[1:] | join(" ")}' | jq -s .) |
}
EOF

    echo "Summary report generated: $report_file"

    # Print summary to console
    echo "=== Daily Falco Events Summary ==="
    echo "Critical: $critical_count"
    echo "Error: $error_count"
    echo "Warning: $warning_count"
    echo "Info: $info_count"
    echo "Total: $((critical_count + error_count + warning_count + info_count))"
}

# Function to cleanup old events
cleanup_old_events() {
    local retention_days="${RETENTION_DAYS:-30}"

    echo "Cleaning up events older than $retention_days days..."

    find "$PROCESSED_EVENTS_DIR" -name "event_*.json" -mtime +$retention_days -delete
    find "$PROCESSED_EVENTS_DIR" -name "summary_*.json" -mtime +$retention_days -delete

    echo "Cleanup completed"
}

# Main execution
main() {
    case "${1:-process}" in
        "process")
            process_events
            ;;
        "summary")
            generate_summary_report
            ;;
        "cleanup")
            cleanup_old_events
            ;;
        "monitor")
            # Continuous monitoring mode
            while true; do
                process_events
                sleep 60
            done
            ;;
        *)
| echo "Usage: $0 {process | summary | cleanup | monitor}" |
            exit 1
            ;;
    esac
}

main "$@"
```
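
A typical way to drive the processor is via cron (paths and schedule are illustrative):

```bash
# /etc/cron.d/falco-event-processor
*/5 * * * *  root  /usr/local/bin/falco-event-processor.sh process
0 6 * * *    root  /usr/local/bin/falco-event-processor.sh summary
0 3 * * 0    root  /usr/local/bin/falco-event-processor.sh cleanup
```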

### Falco Rules Management

```python
#!/usr/bin/env python3
# falco-rules-manager.py

import yaml
import json
import sys
import os
import subprocess
from pathlib import Path
from datetime import datetime

class FalcoRulesManager:
    def __init__(self, rules_dir="rules", config_file="/etc/falco/falco.yaml"):
        self.rules_dir = Path(rules_dir)
        self.config_file = Path(config_file)
        self.falco_binary = "falco"

    def validate_rules(self):
        """Validate Falco rules syntax"""
        print("Validating Falco rules...")

        success = True
        for rules_file in self.rules_dir.glob("**/*.yaml"):
            try:
                # Validate YAML syntax
                with open(rules_file) as f:
                    yaml.safe_load(f)

                # Validate with Falco
                result = subprocess.run([
                    self.falco_binary, "--dry-run", "-r", str(rules_file)
                ], capture_output=True, text=True)

                if result.returncode != 0:
                    print(f"❌ Validation failed for {rules_file}: {result.stderr}")
                    success = False
                else:
                    print(f"✅ {rules_file} validated successfully")

            except yaml.YAMLError as e:
                print(f"❌ YAML syntax error in {rules_file}: {e}")
                success = False
            except Exception as e:
                print(f"❌ Error validating {rules_file}: {e}")
                success = False

        return success

    def generate_rules_documentation(self):
        """Generate documentation for Falco rules"""
        print("Generating rules documentation...")

        all_rules = []
        all_macros = []
        all_lists = []

        for rules_file in self.rules_dir.glob("**/*.yaml"):
            try:
                with open(rules_file) as f:
                    data = yaml.safe_load(f)

                if not isinstance(data, list):
                    continue

                for item in data:
                    if 'rule' in item:
                        all_rules.append({
                            'file': str(rules_file),
                            'rule': item['rule'],
                            'desc': item.get('desc', 'No description'),
                            'condition': item.get('condition', ''),
                            'output': item.get('output', ''),
                            'priority': item.get('priority', 'INFO'),
                            'tags': item.get('tags', [])
                        })
                    elif 'macro' in item:
                        all_macros.append({
                            'file': str(rules_file),
                            'macro': item['macro'],
                            'condition': item.get('condition', '')
                        })
                    elif 'list' in item:
                        all_lists.append({
                            'file': str(rules_file),
                            'list': item['list'],
                            'items': item.get('items', [])
                        })

            except Exception as e:
                print(f"❌ Error processing {rules_file}: {e}")

        # Generate markdown documentation
        doc_content = self._generate_markdown_doc(all_rules, all_macros, all_lists)

        doc_file = self.rules_dir / "RULES_DOCUMENTATION.md"
        with open(doc_file, 'w') as f:
            f.write(doc_content)

        print(f"✅ Documentation generated: {doc_file}")

        # Generate JSON summary
        summary = {
            'generated_at': datetime.now().isoformat(),
            'total_rules': len(all_rules),
            'total_macros': len(all_macros),
            'total_lists': len(all_lists),
            'rules_by_priority': self._count_by_priority(all_rules),
            'rules_by_tag': self._count_by_tags(all_rules)
        }

        summary_file = self.rules_dir / "rules_summary.json"
        with open(summary_file, 'w') as f:
            json.dump(summary, f, indent=2)

        print(f"✅ Summary generated: {summary_file}")

    def _generate_markdown_doc(self, rules, macros, lists):
        """Generate markdown documentation"""
        content = f"""# Falco Rules Documentation

Generated on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}

## Summary

- **Total Rules**: {len(rules)}
- **Total Macros**: {len(macros)}
- **Total Lists**: {len(lists)}

## Rules

"""

        # Sort rules by priority
        priority_order = ['CRITICAL', 'ERROR', 'WARNING', 'NOTICE', 'INFO']
        rules_sorted = sorted(rules, key=lambda x: priority_order.index(x['priority']) if x['priority'] in priority_order else 999)

        for rule in rules_sorted:
            content += f"""### {rule['rule']}

**Priority**: {rule['priority']}  
**File**: `{rule['file']}`  
**Tags**: {', '.join(rule['tags']) if rule['tags'] else 'None'}

**Description**: {rule['desc']}

**Condition**: `{rule['condition']}`

**Output**: `{rule['output']}`

"""

    content += "\n## Macros\n\n"
    for macro in macros:
        content += f"""### {macro['macro']}

File: {macro['file']}

Condition:

{macro['condition')}

"""

    content += "\n## Lists\n\n"
    for lst in lists:
        content += f"""### {lst['list']}

File: {lst['file']}

Items: {', '.join(lst['items']) if lst['items'] else 'None'}


"""

    return content

    def _count_by_priority(self, rules):
        """Count rules by priority"""
        counts = {}
        for rule in rules:
            priority = rule['priority']
            counts[priority] = counts.get(priority, 0) + 1
        return counts

    def _count_by_tags(self, rules):
        """Count rules by tags"""
        counts = {}
        for rule in rules:
            for tag in rule['tags']:
                counts[tag] = counts.get(tag, 0) + 1
        return counts

    def test_rules(self):
        """Test Falco rules with sample events"""
        print("Testing Falco rules...")

        # Create test scenarios
        test_scenarios = [
            {
                'name': 'Container shell test',
                'command': 'docker run --rm alpine sh -c "echo test"',
                'expected_rules': ['Terminal shell in container']
            },
            {
                'name': 'File modification test',
                'command': 'touch /tmp/test-file && rm /tmp/test-file',
                'expected_rules': []
            }
        ]

        for scenario in test_scenarios:
            print(f"Running test: {scenario['name']}")

            # Start Falco in background, loading every rules file in the rules directory
            rules_args = []
            for rules_file in sorted(self.rules_dir.glob("*.yaml")):
                rules_args += ["-r", str(rules_file)]

            falco_process = subprocess.Popen(
                [self.falco_binary, *rules_args, "--json-output"],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

            # Wait a moment for Falco to start
            import time
            time.sleep(2)

            # Run test command
            subprocess.run(scenario['command'], shell=True, capture_output=True)

            # Wait for events
            time.sleep(3)

            # Stop Falco
            falco_process.terminate()
            stdout, stderr = falco_process.communicate(timeout=5)

            # Check for expected events
            events = []
            for line in stdout.split('\n'):
                if line.strip():
                    try:
                        event = json.loads(line)
                        events.append(event)
                    except json.JSONDecodeError:
                        continue

            print(f"  Captured {len(events)} events")

            # Validate expected rules
            triggered_rules = [event.get('rule', '') for event in events]
            for expected_rule in scenario['expected_rules']:
                if expected_rule in triggered_rules:
                    print(f"  ✅ Expected rule triggered: {expected_rule}")
                else:
                    print(f"  ❌ Expected rule not triggered: {expected_rule}")

    def deploy_rules(self, target="local"):
        """Deploy rules to target environment"""
        print(f"Deploying rules to {target}...")

        if target == "local":
            # Copy rules to local Falco configuration
            import shutil
            for rules_file in self.rules_dir.glob("*.yaml"):
                dest = Path("/etc/falco") / rules_file.name
                shutil.copy2(rules_file, dest)
                print(f"✅ Deployed {rules_file} to {dest}")

            # Restart Falco service
            subprocess.run(["systemctl", "restart", "falco"], check=True)
            print("✅ Falco service restarted")

        elif target == "kubernetes":
            # Deploy to Kubernetes via ConfigMap: render the manifest once, then apply it
            configmap_yaml = subprocess.run([
                "kubectl", "create", "configmap", "falco-rules",
                f"--from-file={self.rules_dir}",
                "--dry-run=client", "-o", "yaml"
            ], capture_output=True, text=True, check=True).stdout

            subprocess.run(
                ["kubectl", "apply", "-f", "-"],
                input=configmap_yaml, text=True, check=True)

            # Restart the Falco DaemonSet so the new rules are picked up
            subprocess.run([
                "kubectl", "rollout", "restart", "daemonset/falco", "-n", "falco"
            ], check=True)

            print("✅ Rules deployed to Kubernetes")

        else:
            print(f"❌ Unknown target: {target}")
            return False

        return True

def main():
    import argparse

    parser = argparse.ArgumentParser(description='Falco Rules Manager')
    parser.add_argument('action', choices=['validate', 'document', 'test', 'deploy'],
                        help='Action to perform')
    parser.add_argument('--rules-dir', default='rules',
                        help='Directory containing Falco rules')
    parser.add_argument('--target', default='local',
                        help='Deployment target (local, kubernetes)')

    args = parser.parse_args()

    manager = FalcoRulesManager(args.rules_dir)

    if args.action == 'validate':
        success = manager.validate_rules()
        sys.exit(0 if success else 1)
    elif args.action == 'document':
        manager.generate_rules_documentation()
    elif args.action == 'test':
        manager.test_rules()
    elif args.action == 'deploy':
        success = manager.deploy_rules(args.target)
        sys.exit(0 if success else 1)


if __name__ == "__main__":
    main()
```

## Troubleshooting

### Common Issues

```bash
# Driver loading issues
falco-driver-loader

# Check driver status
lsmod | grep falco

# eBPF probe issues
falco --modern-bpf --dry-run

# Permission issues
sudo falco

# Configuration validation
falco --dry-run -c /etc/falco/falco.yaml

# Verbose debugging
falco -vv
```

### Performance Optimization

```bash
# Reduce syscall overhead
falco --modern-bpf

# Optimize buffer sizes
falco -o syscall_event_drops.rate=0.1

# Disable unnecessary rules
falco -T filesystem,network

# Monitor performance
falco --stats-interval 30
```

### Analyzing Events

```bash
# Check Falco logs
journalctl -u falco -f

# Parse JSON events
tail -f /var/log/falco/events.json | jq '.rule, .output'

# Count events by priority
jq -r '.priority' /var/log/falco/events.json | sort | uniq -c

# Filter by rule
jq 'select(.rule == "Terminal shell in container")' /var/log/falco/events.json
```

## Best Practices

### Rule Development

1. **Specific Conditions**: Write precise rule conditions to minimize false positives (see the sketch after this list)
2. **Performance Awareness**: Consider the performance impact of complex conditions
3. **Clear Outputs**: Write clear, actionable output messages
4. **Proper Tagging**: Use consistent tags to organize rules
5. **Regular Testing**: Test rules against realistic scenarios
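
A minimal sketch of the first point: instead of alerting on every shell, the condition is tightened and exceptions are kept in an explicit, reviewable list (the image names are made up):

```yaml
- list: allowed_shell_images
  items: ["mycorp/ci-runner", "mycorp/debug-toolbox"]

- rule: Interactive shell in unapproved container
  desc: Shell with a TTY in a container whose image is not on the allow list
  condition: >
    spawned_process and container and shell_procs and proc.tty != 0
    and not container.image.repository in (allowed_shell_images)
  output: >
    Unexpected interactive shell (user=%user.name image=%container.image.repository
    cmdline=%proc.cmdline container_id=%container.id)
  priority: WARNING
  tags: [container, shell]
```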

### Deployment Guidelines

1. **Gradual Rollout**: Roll new rules out to production incrementally (see the sketch after this list)
2. **Monitoring**: Monitor rule performance and false-positive rates
3. **Documentation**: Keep rule documentation complete and current
4. **Version Control**: Manage rules in version control
5. **Regular Updates**: Update rules as threat intelligence evolves
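
A sketch of a gradual rollout with Helm, promoting the same version-controlled rule set through environments one at a time (tag and file names are illustrative):

```bash
# Tag the reviewed rule set, then promote it environment by environment
git tag rules-v1.4.0 && git push --tags
helm upgrade falco falcosecurity/falco -n falco -f values.yaml -f overlays/staging.yaml
# ...watch alert volume and false positives in staging, then:
helm upgrade falco falcosecurity/falco -n falco -f values.yaml -f overlays/production.yaml
```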

### Security Considerations

1. **Least Privilege**: Run Falco with the minimum required privileges (see the sketch after this list)
2. **Secure Outputs**: Secure output channels and endpoints
3. **Log Protection**: Protect Falco logs against tampering
4. **Regular Audits**: Audit and review rules regularly
5. **Incident Response**: Integrate alerts with incident response workflows
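
For the least-privilege point, the container does not have to run with `--privileged`. A hedged sketch using explicit capabilities; the exact capability set depends on the driver and Falco version, so verify it against the official documentation:

```bash
# Assumed capability set for the modern eBPF driver; adjust per the Falco docs
docker run --rm -i -t \
  --cap-add SYS_ADMIN --cap-add SYS_RESOURCE --cap-add SYS_PTRACE \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /proc:/host/proc:ro \
  -v /etc:/host/etc:ro \
  falcosecurity/falco:latest falco --modern-bpf
```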

This comprehensive Falco cheatsheet provides everything needed for professional runtime threat detection and security monitoring, from basic usage through advanced automation and integration scenarios.