# Falco Cheat Sheet

## Overview
Falco is an open-source runtime security tool designed to detect anomalous activity in applications, containers, and Kubernetes environments. Originally created by Sysdig and now a CNCF graduated project, Falco captures system calls at the kernel level to provide deep runtime visibility and threat detection.
## Key Features
- Runtime Security Monitoring: Real-time detection of anomalous behavior
- Container Security: Deep visibility into container runtime activity
- Kubernetes Integration: Native Kubernetes security monitoring
- Custom Rules: Flexible rule engine for custom security policies
- Multiple Outputs: Integration with SIEM, alerting, and response systems
- eBPF Support: Modern kernel-based monitoring with minimal overhead
- Cloud Native: Designed for modern containerized environments
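
The rule engine works with three kinds of objects: lists, macros, and rules. A minimal, hypothetical rule file showing their shape (the names `my_editors` and `opened_for_write` are made up for illustration; the field names and format are Falco's):

```yaml
# Lists name sets of values; macros name reusable conditions;
# rules tie a condition to an output message and a priority.
- list: my_editors              # hypothetical list
  items: [vi, vim, nano]

- macro: opened_for_write       # hypothetical macro
  condition: evt.type in (open, openat) and evt.is_open_write=true

- rule: Editor Opened File for Writing
  desc: Illustrative only - a text editor opened a file for writing
  condition: opened_for_write and proc.name in (my_editors)
  output: "Editor write (user=%user.name file=%fd.name command=%proc.cmdline)"
  priority: NOTICE
```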
## Installation

### Docker Installation (Recommended)
```bash
# Run Falco in a container with host privileges
docker run --rm -i -t \
  --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev \
  -v /proc:/host/proc:ro \
  -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -v /usr:/host/usr:ro \
  -v /etc:/host/etc:ro \
  falcosecurity/falco:latest

# Run with custom configuration
docker run --rm -i -t \
  --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev \
  -v /proc:/host/proc:ro \
  -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -v /usr:/host/usr:ro \
  -v /etc:/host/etc:ro \
  -v $(pwd)/falco.yaml:/etc/falco/falco.yaml \
  falcosecurity/falco:latest
```
### Kubernetes Installation

```bash
# Install using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Install Falco with default configuration
helm install falco falcosecurity/falco

# Install with custom values
helm install falco falcosecurity/falco \
  --set falco.grpc.enabled=true \
  --set falco.grpcOutput.enabled=true \
  --set falco.httpOutput.enabled=true

# Install as DaemonSet with custom configuration
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-config
  namespace: falco
data:
  falco.yaml: |
    rules_file:
      - /etc/falco/falco_rules.yaml
      - /etc/falco/falco_rules.local.yaml
      - /etc/falco/k8s_audit_rules.yaml
      - /etc/falco/rules.d
    time_format_iso_8601: true
    json_output: true
    json_include_output_property: true
    log_stderr: true
    log_syslog: true
    log_level: info
    priority: debug
    buffered_outputs: false
    outputs:
      rate: 1
      max_burst: 1000
    syslog_output:
      enabled: true
    file_output:
      enabled: false
      keep_alive: false
      filename: ./events.txt
    stdout_output:
      enabled: true
    webserver:
      enabled: true
      listen_port: 8765
      k8s_healthz_endpoint: /healthz
      ssl_enabled: false
      ssl_certificate: /etc/falco/falco.pem
    grpc:
      enabled: false
      bind_address: "0.0.0.0:5060"
      threadiness: 0
    grpc_output:
      enabled: false
EOF

# Apply DaemonSet
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
  namespace: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      serviceAccount: falco
      hostNetwork: true
      hostPID: true
      containers:
        - name: falco
          image: falcosecurity/falco:latest
          securityContext:
            privileged: true
          args:
            - /usr/bin/falco
            - --cri=/run/containerd/containerd.sock
            - --k8s-api
            - --k8s-api-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            - --k8s-api-token=/var/run/secrets/kubernetes.io/serviceaccount/token
          volumeMounts:
            - mountPath: /host/var/run/docker.sock
              name: docker-socket
            - mountPath: /host/dev
              name: dev-fs
            - mountPath: /host/proc
              name: proc-fs
              readOnly: true
            - mountPath: /host/boot
              name: boot-fs
              readOnly: true
            - mountPath: /host/lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /host/usr
              name: usr-fs
              readOnly: true
            - mountPath: /host/etc
              name: etc-fs
              readOnly: true
            - mountPath: /etc/falco
              name: config-volume
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
        - name: dev-fs
          hostPath:
            path: /dev
        - name: proc-fs
          hostPath:
            path: /proc
        - name: boot-fs
          hostPath:
            path: /boot
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: usr-fs
          hostPath:
            path: /usr
        - name: etc-fs
          hostPath:
            path: /etc
        - name: config-volume
          configMap:
            name: falco-config
EOF
```
### Ubuntu/Debian Installation

```bash
# Add Falco repository
# (apt-key is deprecated on newer releases; prefer a signed-by keyring there)
curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list

# Update and install
apt-get update -y
apt-get install -y falco

# Install kernel headers (required to build the kernel module driver)
apt-get install -y linux-headers-$(uname -r)

# Start Falco service
systemctl enable falco
systemctl start falco

# Check status
systemctl status falco
```
### CentOS/RHEL Installation

```bash
# Add Falco repository
rpm --import https://falco.org/repo/falcosecurity-3672BA8F.asc
curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo

# Install
yum install -y falco

# Install kernel headers
yum install -y kernel-devel-$(uname -r)

# Start service
systemctl enable falco
systemctl start falco
```
## Basic Usage

### Command Line Interface

```bash
# Basic Falco execution
falco

# Run with a specific configuration file
falco -c /etc/falco/falco.yaml

# Run with custom rules
falco -r /path/to/custom_rules.yaml

# Run for 45 seconds, emitting alerts as JSON
falco -M 45 -o json_output=true

# Run with Kubernetes API integration
# (-K takes a bearer token file or cert files, not a kubeconfig)
falco -k http://localhost:8080 -K /path/to/bearer_token_file

# Run with a specific log level
falco -o log_level=debug

# Validate a rules file (--validate takes rules files, not falco.yaml)
falco --validate /path/to/custom_rules.yaml

# List supported filter fields
falco --list

# Print version information
falco --version
```
### Configuration Management

```bash
# Main configuration file
cat > /etc/falco/falco.yaml << 'EOF'
# File(s) or directories containing Falco rules
rules_file:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/falco_rules.local.yaml
  - /etc/falco/k8s_audit_rules.yaml
  - /etc/falco/rules.d

# If enabled, times in log and output messages are in ISO 8601. By default,
# times are in the local time zone, as governed by /etc/localtime.
time_format_iso_8601: false

# Whether to output events in json or text
json_output: true

# When using json output, whether or not to include the "output" property
# itself (e.g. "File below a known binary directory opened for writing
# (user=root ....") in the json output.
json_include_output_property: true

# When using json output, whether or not to include the "tags" property.
# If true, outputs from rules with no tags get an empty "tags" array; if
# false, the "tags" field is omitted from the json output entirely.
json_include_tags_property: true

# Send informational logs to stderr and/or syslog. Note these are *not*
# security notification logs! They are Falco lifecycle (and possibly error) logs.
log_stderr: true
log_syslog: true

# Minimum log level for Falco's internal logging. Note: these levels are
# separate from the priority field of rules. Can be one of "emergency",
# "alert", "critical", "error", "warning", "notice", "info", "debug".
log_level: info

# Minimum rule priority level to load and run. All rules having a priority
# at least this severe will be loaded/run. Same set of values as log_level.
priority: debug

# Whether output to any of the output channels below is buffered.
# Defaults to false.
buffered_outputs: false

# Falco uses a shared buffer between the kernel and userspace to pass
# system call information. When Falco detects that this buffer is full and
# system calls have been dropped, it can take one or more of these actions:
#   - ignore: do nothing (default when list of actions is empty)
#   - log:    log a DEBUG message noting that the buffer was full
#   - alert:  emit a Falco alert noting that the buffer was full
#   - exit:   exit Falco with a non-zero rc
syscall_event_drops:
  actions:
    - log
    - alert
  rate: 0.03333
  max_burst: 1

# Falco continuously monitors output performance. When an output channel
# cannot deliver an alert within the deadline below (in milliseconds), an
# error is logged naming the blocking output. The notification is not
# discarded from the output queue, so a blocked channel may stay blocked
# indefinitely; a timeout usually indicates a misconfiguration or I/O
# problem that Falco cannot recover from and that you must fix.
# A value of 0 disables the timeout (the default).
output_timeout: 2000

# A throttling mechanism implemented as a token bucket limits the rate of
# Falco notifications:
#   - rate: tokens (i.e. rights to send a notification) gained per second.
#     Defaults to 1.
#   - max_burst: maximum number of outstanding tokens. Defaults to 1000.
# With these defaults, Falco can send up to 1000 notifications after an
# initial quiet period, then up to 1 notification per second afterward.
# The full burst is regained after 1000 seconds of no activity.
outputs:
  rate: 1
  max_burst: 1000

# Where security notifications are sent. Multiple outputs can be enabled.
syslog_output:
  enabled: true

# If keep_alive is true, the file is opened once and continuously written
# to, with each output message on its own line. If false, the file is
# re-opened for each output message. The file is also closed and reopened
# if Falco is signaled with SIGUSR1.
file_output:
  enabled: false
  keep_alive: false
  filename: ./events.txt

stdout_output:
  enabled: true

# Falco contains an embedded webserver that can accept K8s audit event
# webhooks and exposes a health endpoint.
webserver:
  enabled: true
  listen_port: 8765
  k8s_healthz_endpoint: /healthz
  ssl_enabled: false
  ssl_certificate: /etc/falco/falco.pem

# Possible additional things you might want to do with program output:
#   - send to a slack webhook:
#       program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
#   - logging (alternate method than syslog):
#       program: logger -t falco-test
#   - send over a network connection:
#       program: nc host.example.com 80

# gRPC server configuration. The gRPC server is secure by default (mutual
# TLS), so you need to generate certificates and update their paths here.
# Off by default. When threadiness is 0, Falco automatically sets it to the
# number of online cores.
grpc:
  enabled: false
  bind_address: "0.0.0.0:5060"
  threadiness: 0
  private_key: "/etc/falco/certs/server.key"
  cert_chain: "/etc/falco/certs/server.crt"
  root_certs: "/etc/falco/certs/ca.crt"

# gRPC output service. Off by default. When enabled, all output events are
# kept in memory until a gRPC client reads them, so make sure a consumer
# exists or leave this disabled.
grpc_output:
  enabled: false

# Container orchestrator metadata fetching params
metadata_download:
  max_mb: 100
  chunk_wait_us: 1000
  watch_freq_sec: 1

# Plugins to load (none by default)
load_plugins: []
EOF

# Custom rules file
cat > /etc/falco/falco_rules.local.yaml << 'EOF'
# Custom Falco Rules
# Valid priorities: EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE,
# INFORMATIONAL, DEBUG

# Lists referenced by the rules below (Falco requires lists and macros to
# be defined before the rules that use them)
- list: shell_binaries
  items: [bash, csh, ksh, sh, tcsh, zsh, dash]

- list: shell_interpreters
  items: [awk, gawk, mawk, nawk, python, python2, python3, ruby, perl, php]

- list: setuid_binaries
  items: [sudo, su, newgrp, newuidmap, newgidmap, gpasswd, chfn, chsh, expiry, passwd]

- list: suspicious_ports
  items: [4444, 5555, 6666, 7777, 8888, 9999, 1337, 31337]

- list: suspicious_ips
  items: ["192.168.1.100", "10.0.0.100", "172.16.0.100"]

- list: crypto_miners
  items: [xmrig, cpuminer, cgminer, bfgminer, sgminer, nheqminer]

# Detect shell spawned in container
- rule: Shell Spawned in Container
  desc: Detect shell spawned in container
  condition: >
    spawned_process and container and
    (proc.name in (shell_binaries) or
     (proc.name in (shell_interpreters) and not proc.args contains "-c"))
  output: >
    Shell spawned in container (user=%user.name user_loginuid=%user.loginuid %container.info
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository)
  priority: WARNING
  tags: [container, shell, mitre_execution]

# Detect privilege escalation
- rule: Privilege Escalation via Setuid
  desc: Detect privilege escalation via setuid
  condition: >
    spawned_process and
    (proc.name in (setuid_binaries) or
     (proc.args contains "chmod +s" or proc.args contains "chmod u+s"))
  output: >
    Privilege escalation via setuid (user=%user.name user_loginuid=%user.loginuid
    command=%proc.cmdline container_id=%container.id image=%container.image.repository)
  priority: CRITICAL
  tags: [privilege_escalation, mitre_privilege_escalation]

# Detect suspicious network activity
- rule: Suspicious Network Activity
  desc: Detect suspicious network connections
  condition: >
    (inbound_outbound) and
    ((fd.sport in (suspicious_ports)) or
     (fd.dport in (suspicious_ports)) or
     (fd.sip in (suspicious_ips)) or
     (fd.dip in (suspicious_ips)))
  output: >
    Suspicious network activity (user=%user.name command=%proc.cmdline connection=%fd.name
    container_id=%container.id image=%container.image.repository)
  priority: CRITICAL
  tags: [network, suspicious, mitre_command_and_control]

# Detect file modifications in sensitive directories
- rule: Sensitive File Modification
  desc: Detect modifications to sensitive files
  condition: >
    open_write and
    (fd.name startswith "/etc/" or
     fd.name startswith "/usr/bin/" or
     fd.name startswith "/usr/sbin/" or
     fd.name startswith "/bin/" or
     fd.name startswith "/sbin/")
  output: >
    Sensitive file modification (user=%user.name command=%proc.cmdline file=%fd.name
    container_id=%container.id image=%container.image.repository)
  priority: CRITICAL
  tags: [filesystem, sensitive, mitre_persistence]

# Detect crypto mining activity
- rule: Crypto Mining Activity
  desc: Detect potential crypto mining activity
  condition: >
    spawned_process and
    (proc.name in (crypto_miners) or
     proc.args contains "stratum" or
     proc.args contains "mining" or
     proc.args contains "cryptonight")
  output: >
    Crypto mining activity detected (user=%user.name command=%proc.cmdline
    container_id=%container.id image=%container.image.repository)
  priority: CRITICAL
  tags: [crypto_mining, malware, mitre_impact]
EOF
```
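
The `outputs` rate/max_burst settings above implement a token bucket, and the refill arithmetic from the config comments can be sanity-checked directly (a trivial sketch using the documented defaults):

```shell
# Token-bucket parameters from the falco.yaml defaults above
rate=1          # tokens (notification rights) gained per second
max_burst=1000  # maximum outstanding tokens

# Seconds of quiet needed to regain the full burst after it is spent
refill_seconds=$(( max_burst / rate ))
echo "$refill_seconds"   # prints 1000
```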
## Advanced Usage

### Custom Rule Development

```bash
# Advanced custom rules with macros
cat > /etc/falco/advanced_rules.yaml << 'EOF'
# Advanced Falco Rules with Macros

# Macros for reusable conditions
- macro: container
  condition: container.id != host

- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

- macro: open_write
  condition: (evt.type=openat or evt.type=open) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0

- macro: inbound_outbound
  condition: >
    ((evt.type in (accept,listen) and evt.dir=<) or
     (evt.type in (connect) and evt.dir=< and fd.typechar=4))

# Lists for advanced rules (defined before the rules and macros that use them)
- list: package_mgmt_binaries
  items: [
    apt, apt-config, apt-get, aptitude, aptitude-curses,
    dpkg, dpkg-preconfigure, dpkg-reconfigure, dpkg-divert,
    yum, rpm, rpmkey, rpmdb, rpm2cpio,
    pup, gem, pip, pip3, sbt,
    npm, yarn,
    apk,
    snapd, snap,
    microdnf,
    zypper
  ]

- list: docker_binaries
  items: [docker, dockerd, exe, docker-compose, docker-entrypoint, docker-runc-cur, docker-current, dockerd-current]

- list: language_binaries
  items: [
    node, nodejs, java, javac, python, python2, python3, go, ruby, php, erlang, lua, R,
    scala, groovy, kotlin, clojure, haskell, ocaml, perl, rust, swift, dart, julia
  ]

- macro: user_known_package_manager_in_container
  condition: >
    container and user.name != "_apt" and
    proc.name in (package_mgmt_binaries)

# Advanced rule with multiple conditions
- rule: Package Management Process Launched in Container
  desc: >
    Package management process ran inside container.
    Package management binaries are programs like apt, yum, apk, etc.
    These are often used by attackers to install additional software.
  condition: >
    spawned_process and container and user.name != "_apt" and
    proc.name in (package_mgmt_binaries)
  output: >
    Package management process launched in container (user=%user.name user_loginuid=%user.loginuid
    command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
  priority: ERROR
  tags: [process, software]

# Rule with exceptions (adapted from the upstream falco_rules.yaml; the
# bin_dir macro and the many user_known_*/application exception macros in
# the condition are defined there, so load this alongside the default rules)
- rule: Write below binary dir
  desc: an attempt to write to any file below a set of binary directories
  condition: >
    bin_dir and evt.dir = < and open_write and
    not proc.name in (docker_binaries) and
    not exe_running_docker_save and
    not python_running_get_pip and
    not python_running_ms_oms and
    not user_known_package_manager_in_container and
    not user_known_k8s_client_container and
    not nomachine_writing_usr_bin_files and
    not language_binaries and
    not openvpn_writing_tmp and
    not parent_java_running_zookeeper and
    not elasticsearch_writing_exec and
    not user_known_btmp_wtmp_edits and
    not parent_python_running_denyhosts and
    not fluentd_writing_conf_files and
    not user_known_read_ssh_information and
    not run_by_qualys and
    not run_by_sumologic and
    not run_by_appdynamics and
    not user_known_rpm_binaries and
    not maven_writing_groovy and
    not chef_writing_conf and
    not kubectl_writing_state and
    not cassandra_writing_exec and
    not user_known_vpn_binaries and
    not parent_linux_image_upgrade_script and
    not parent_node_running_npm and
    not user_known_centrify_binaries and
    not user_known_ms_binaries and
    not user_known_ms_sql_binaries and
    not parent_ucf_writing_conf and
    not parent_supervise_writing_status and
    not supervise_writing_status and
    not pki_realm_writing_realms and
    not htpasswd_writing_passwd and
    not lvprogs_writing_conf and
    not ovsdb_writing_openvswitch and
    not user_known_write_below_binary_dir_activities and
    not parent_python_running_pip and
    not parent_conda_running_conda and
    not user_known_write_rpm_database and
    not parent_python_running_sdcli and
    not parent_sap_running_hdbindexserver and
    not parent_sap_running_hdbcompileserver and
    not nginx_starting_nginx and
    not nginx_running_aws_s3_cp and
    not parent_node_running_gyp and
    not parent_node_running_npm_install and
    not parent_java_running_jenkins and
    not user_known_write_monitored_dir_conditions
  output: >
    File below a known binary directory opened for writing (user=%user.name user_loginuid=%user.loginuid
    command=%proc.cmdline pid=%proc.pid file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2] container_id=%container.id image=%container.image.repository)
  priority: ERROR
  tags: [filesystem, mitre_persistence]
EOF

# Validate the rules file
falco --validate /etc/falco/advanced_rules.yaml

# Test the rules for 60 seconds
falco -r /etc/falco/advanced_rules.yaml -M 60
```
### Kubernetes Integration

```bash
# Kubernetes audit log integration
cat > /etc/falco/k8s_audit_rules.yaml << 'EOF'
# Kubernetes Audit Rules

# Detect privilege escalation in Kubernetes
- rule: K8s Privilege Escalation
  desc: Detect privilege escalation attempts in Kubernetes
  condition: >
    ka.verb in (create, update, patch) and
    ka.target.resource in (pods, deployments, daemonsets, statefulsets, replicasets) and
    (ka.request_object contains "privileged: true" or
     ka.request_object contains "allowPrivilegeEscalation: true" or
     ka.request_object contains "hostNetwork: true" or
     ka.request_object contains "hostPID: true" or
     ka.request_object contains "hostIPC: true")
  output: >
    Kubernetes privilege escalation attempt (user=%ka.user.name verb=%ka.verb
    target=%ka.target.resource reason=%ka.response_reason obj=%ka.request_object)
  priority: CRITICAL
  source: k8s_audit
  tags: [k8s, privilege_escalation]

# Detect secret access
- rule: K8s Secret Access
  desc: Detect access to Kubernetes secrets
  condition: >
    ka.verb in (get, list, watch, create, update, patch, delete) and
    ka.target.resource = secrets
  output: >
    Kubernetes secret access (user=%ka.user.name verb=%ka.verb
    target=%ka.target.resource secret=%ka.target.name namespace=%ka.target.namespace)
  priority: INFORMATIONAL
  source: k8s_audit
  tags: [k8s, secrets]

# Detect pod creation with suspicious images
- rule: K8s Suspicious Image
  desc: Detect pod creation with suspicious container images
  condition: >
    ka.verb = create and
    ka.target.resource = pods and
    (ka.request_object contains "image: latest" or
     ka.request_object contains "image: alpine" or
     ka.request_object contains "image: busybox" or
     ka.request_object contains "image: ubuntu")
  output: >
    Kubernetes pod created with suspicious image (user=%ka.user.name
    pod=%ka.target.name namespace=%ka.target.namespace image=%ka.request_object)
  priority: WARNING
  source: k8s_audit
  tags: [k8s, suspicious_image]
EOF

# Configure Kubernetes audit policy
cat > /etc/kubernetes/audit-policy.yaml << 'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    namespaces: ["default", "kube-system", "kube-public"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods", "services", "secrets", "configmaps"]
      - group: "apps"
        resources: ["deployments", "daemonsets", "statefulsets"]
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
EOF

# Update kube-apiserver configuration
# Add to /etc/kubernetes/manifests/kube-apiserver.yaml:
#   --audit-log-path=/var/log/audit.log
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
```
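
The `--audit-webhook-config-file` flag above references a kubeconfig-style webhook file that is not shown. A hypothetical sketch, pointing the apiserver at Falco's embedded webserver (the node address is an assumption you must fill in; `/k8s-audit` is the default audit endpoint in older Falco releases, so adjust it to your `webserver` settings):

```yaml
# /etc/kubernetes/audit-webhook.yaml (illustrative; adjust server address)
apiVersion: v1
kind: Config
clusters:
  - name: falco
    cluster:
      # Falco's embedded webserver (listen_port 8765 in the config above)
      server: http://FALCO_NODE_IP:8765/k8s-audit
contexts:
  - name: default
    context:
      cluster: falco
      user: ""
current-context: default
users: []
```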
### Output Integration

```bash
# NOTE: each snippet below overwrites /etc/falco/falco.yaml for brevity;
# in practice, merge the relevant section into your full configuration.

# Slack integration
cat > /etc/falco/falco.yaml << 'EOF'
program_output:
  enabled: true
  keep_alive: false
  program: |
    jq '{
      "text": "Falco Alert",
      "attachments": [{
        "color": "danger",
        "fields": [{
          "title": "Rule",
          "value": .rule,
          "short": true
        }, {
          "title": "Priority",
          "value": .priority,
          "short": true
        }, {
          "title": "Output",
          "value": .output,
          "short": false
        }]
      }]
    }' | curl -X POST -H 'Content-type: application/json' --data @- YOUR_SLACK_WEBHOOK_URL
EOF

# Elasticsearch integration
cat > /etc/falco/falco.yaml << 'EOF'
program_output:
  enabled: true
  keep_alive: false
  program: |
    jq '. + {"@timestamp": now | todate}' | \
    curl -X POST "http://elasticsearch:9200/falco-$(date +%Y.%m.%d)/_doc" \
      -H "Content-Type: application/json" -d @-
EOF

# Splunk integration
cat > /etc/falco/falco.yaml << 'EOF'
program_output:
  enabled: true
  keep_alive: false
  program: |
    jq '{
      "time": now,
      "event": .,
      "source": "falco",
      "sourcetype": "falco:alert"
    }' | curl -X POST "http://splunk:8088/services/collector/event" \
      -H "Authorization: Splunk YOUR_HEC_TOKEN" \
      -H "Content-Type: application/json" -d @-
EOF

# Custom webhook integration
cat > /etc/falco/falco.yaml << 'EOF'
http_output:
  enabled: true
  url: "http://webhook-server:8080/falco-alerts"
  user_agent: "falco/0.32.0"
  ca_file: ""
  ca_bundle: ""
  ca_path: ""
  insecure: false
  echo: false
  mtls: false
  client_cert: ""
  client_key: ""
  headers:
    Authorization: "Bearer YOUR_API_TOKEN"
    Content-Type: "application/json"
EOF
```
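
`program_output` pipes each alert to the program's stdin as a single JSON line, which is why the snippets above can start with `jq`. A minimal sketch of that parsing step (the sample alert is made up but shaped like Falco's JSON output; POSIX `sed` is used here only so the sketch runs anywhere, `jq` is the robust choice):

```shell
# A made-up alert line, shaped like Falco's JSON output
alert='{"output":"File below /etc opened for writing","priority":"Error","rule":"Sensitive File Modification"}'

# Pull out the priority field (jq equivalent: jq -r .priority)
priority=$(printf '%s' "$alert" | sed -n 's/.*"priority":"\([^"]*\)".*/\1/p')
echo "$priority"   # prints Error
```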
## Automation and Scripting

### Comprehensive Monitoring Script

```bash
#!/bin/bash
# falco_monitor.sh - Comprehensive Falco monitoring and alerting
set -euo pipefail

# Configuration
FALCO_CONFIG="/etc/falco/falco.yaml"
RULES_DIR="/etc/falco/rules.d"
LOG_FILE="/var/log/falco_monitor.log"
ALERT_THRESHOLD=10
WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Check Falco status
check_falco_status() {
    log "Checking Falco status..."
    if systemctl is-active --quiet falco; then
        log "✅ Falco service is running"
        return 0
    else
        log "❌ Falco service is not running"
        return 1
    fi
}

# Monitor Falco alerts
monitor_alerts() {
    log "Starting Falco alert monitoring..."
    # Count alerts in the last hour (grep -c already prints 0 on no match,
    # so "|| true" only guards its non-zero exit status under "set -e")
    local alert_count
    alert_count=$(journalctl -u falco --since "1 hour ago" | grep -c "Priority:" || true)
    log "Alert count in last hour: $alert_count"
    if [ "$alert_count" -gt "$ALERT_THRESHOLD" ]; then
        log "⚠️ High alert volume detected: $alert_count alerts"
        send_alert "High Falco alert volume: $alert_count alerts in the last hour"
    fi
}

# Send alert to webhook
send_alert() {
    local message="$1"
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"🚨 Falco Alert: $message\"}" \
        "$WEBHOOK_URL" 2>/dev/null || log "Failed to send webhook alert"
}

# Update Falco rules
update_rules() {
    log "Updating Falco rules..."
    # Backup current rules
    cp -r "$RULES_DIR" "${RULES_DIR}.backup.$(date +%Y%m%d_%H%M%S)"
    # Download latest rules (example)
    # wget -O /tmp/falco_rules.yaml https://raw.githubusercontent.com/falcosecurity/falco/master/rules/falco_rules.yaml
    # Validate rules before applying (--validate takes rules files, not falco.yaml)
    if falco --validate "$RULES_DIR"/*.yaml; then
        log "✅ Rules validation successful"
        systemctl reload falco
        log "✅ Falco rules reloaded"
    else
        log "❌ Rules validation failed"
        return 1
    fi
}

# Generate Falco report
generate_report() {
    log "Generating Falco report..."
    local report_file="/tmp/falco_report_$(date +%Y%m%d_%H%M%S).txt"
    cat > "$report_file" << EOF
Falco Security Report
Generated: $(date)

=== System Status ===
Falco Service: $(systemctl is-active falco)
Falco Version: $(falco --version 2>/dev/null | head -1)
Configuration: $FALCO_CONFIG

=== Alert Summary (Last 24 Hours) ===
Total Alerts: $(journalctl -u falco --since "24 hours ago" | grep -c "Priority:" || true)
Critical/Emergency: $(journalctl -u falco --since "24 hours ago" | grep -c "Priority: Critical\|Priority: Emergency" || true)
Error/Warning: $(journalctl -u falco --since "24 hours ago" | grep -c "Priority: Warning\|Priority: Error" || true)
Notice/Info/Debug: $(journalctl -u falco --since "24 hours ago" | grep -c "Priority: Notice\|Priority: Informational\|Priority: Debug" || true)

=== Top Alert Types ===
$(journalctl -u falco --since "24 hours ago" | grep "Priority:" | awk -F'rule=' '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -nr | head -10)

=== Recent Critical Alerts ===
$(journalctl -u falco --since "24 hours ago" | grep "Priority: Critical\|Priority: Emergency" | tail -10)

=== Configuration Summary ===
Rules Files: $(grep -E "^rules_file:" "$FALCO_CONFIG" | wc -l)
Output Channels: $(grep -cE "enabled: true" "$FALCO_CONFIG")
Log Level: $(grep -E "^log_level:" "$FALCO_CONFIG" | awk '{print $2}')
EOF
    log "Report generated: $report_file"
    echo "$report_file"
}

# Performance monitoring
monitor_performance() {
    log "Monitoring Falco performance..."
    # Check CPU and memory usage (pgrep -x avoids matching this script itself)
    local falco_pid
    falco_pid=$(pgrep -x falco | head -n1 || true)
    if [ -n "$falco_pid" ]; then
        local cpu_usage
        local mem_usage
        cpu_usage=$(ps -p "$falco_pid" -o %cpu --no-headers | tr -d ' ')
        mem_usage=$(ps -p "$falco_pid" -o %mem --no-headers | tr -d ' ')
        log "Falco Performance - CPU: ${cpu_usage}%, Memory: ${mem_usage}%"
        # Alert if resource usage is high (bc handles the float comparison)
        if (( $(echo "$cpu_usage > 80" | bc -l) )); then
            log "⚠️ High CPU usage detected: ${cpu_usage}%"
            send_alert "High Falco CPU usage: ${cpu_usage}%"
        fi
        if (( $(echo "$mem_usage > 80" | bc -l) )); then
            log "⚠️ High memory usage detected: ${mem_usage}%"
            send_alert "High Falco memory usage: ${mem_usage}%"
        fi
    else
        log "❌ Falco process not found"
    fi
}

# Main execution
main() {
    log "Starting Falco monitoring script..."
    case "${1:-monitor}" in
        "status")
            check_falco_status
            ;;
        "monitor")
            check_falco_status
            monitor_alerts
            monitor_performance
            ;;
        "update")
            update_rules
            ;;
        "report")
            generate_report
            ;;
        "all")
            check_falco_status
            monitor_alerts
            monitor_performance
            generate_report
            ;;
        *)
            echo "Usage: $0 {status|monitor|update|report|all}"
            exit 1
            ;;
    esac
    log "Falco monitoring script completed"
}

# Execute main function
main "$@"
```
### Kubernetes Deployment Automation

```bash
#!/bin/bash
# deploy_falco_k8s.sh - Automated Falco deployment for Kubernetes
set -euo pipefail

# Configuration
NAMESPACE="falco"
HELM_RELEASE="falco"
VALUES_FILE="/tmp/falco-values.yaml"

# Create namespace (idempotent)
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Generate Helm values
cat > "$VALUES_FILE" << 'EOF'
falco:
  # Enable gRPC output
  grpc:
    enabled: true
  grpcOutput:
    enabled: true

  # Enable HTTP output
  httpOutput:
    enabled: true
    url: "http://falco-webhook:8080/alerts"

  # Custom rules
  rules:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/k8s_audit_rules.yaml

  # JSON output
  jsonOutput: true
  jsonIncludeOutputProperty: true

  # Log configuration
  logLevel: info
  priority: debug

# Resource limits
resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 512Mi

# Node selector for specific nodes
nodeSelector:
  kubernetes.io/os: linux

# Tolerations for master nodes
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists

# Service account
serviceAccount:
  create: true
  name: falco

# RBAC
rbac:
  create: true

# Custom configuration
customRules:
  custom_rules.yaml: |
    - rule: Suspicious kubectl Usage
      desc: Detect suspicious kubectl usage
      condition: >
        spawned_process and proc.name = kubectl and
        (proc.args contains "exec" or
         proc.args contains "port-forward" or
         proc.args contains "proxy")
      output: >
        Suspicious kubectl usage (user=%user.name command=%proc.cmdline
        container_id=%container.id image=%container.image.repository)
      priority: WARNING
      tags: [k8s, kubectl, suspicious]

    - rule: Container Drift Detection
      desc: Detect when a container is running a different binary than expected
      condition: >
        spawned_process and container and
        proc.name != container.image.repository
      output: >
        Container drift detected (user=%user.name command=%proc.cmdline
        expected=%container.image.repository actual=%proc.name
        container_id=%container.id image=%container.image.repository)
      priority: ERROR
      tags: [container, drift, security]

# Webhook configuration
webhook:
  enabled: true
  image:
    repository: falcosecurity/falco-webhook
    tag: latest
  service:
    type: ClusterIP
    port: 8080
EOF

# Deploy Falco using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm upgrade --install "$HELM_RELEASE" falcosecurity/falco \
  --namespace "$NAMESPACE" \
  --values "$VALUES_FILE" \
  --wait

# Verify deployment
kubectl get pods -n "$NAMESPACE"
kubectl get services -n "$NAMESPACE"

# Create webhook deployment
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: falco-webhook
  namespace: falco
spec:
  replicas: 1
  selector:
    matchLabels:
      app: falco-webhook
  template:
    metadata:
      labels:
        app: falco-webhook
    spec:
      containers:
        - name: webhook
          image: nginx:alpine
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: webhook-config
              mountPath: /etc/nginx/conf.d
      volumes:
        - name: webhook-config
          configMap:
            name: webhook-config
---
apiVersion: v1
kind: Service
metadata:
  name: falco-webhook
  namespace: falco
spec:
  selector:
    app: falco-webhook
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webhook-config
  namespace: falco
data:
  default.conf: |
    server {
      listen 8080;
      location /alerts {
        access_log /var/log/nginx/falco-alerts.log;
        add_header Content-Type text/plain;
        return 200 "Alert received\n";
      }
    }
EOF

echo "Falco deployment completed successfully!"
echo "Check status with: kubectl get pods -n $NAMESPACE"
echo "View logs with: kubectl logs -n $NAMESPACE -l app.kubernetes.io/name=falco"
```
### Alert Processing Script
bash
#!/bin/bash
# process_falco_alerts.sh - Process and analyze Falco alerts
set -euo pipefail
# Configuration
ALERT_LOG="/var/log/falco_alerts.json"
PROCESSED_LOG="/var/log/falco_processed.log"
ELASTICSEARCH_URL="http://localhost:9200"
SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# Process alerts from stdin or a file
process_alerts() {
    local input_source="${1:-/dev/stdin}"
    # Accept the literal word "stdin" as an alias for /dev/stdin
    [[ "$input_source" == "stdin" ]] && input_source="/dev/stdin"
    while IFS= read -r line; do
        # Parse JSON alert
        local rule priority output timestamp
        rule=$(echo "$line" | jq -r '.rule // "unknown"')
        priority=$(echo "$line" | jq -r '.priority // "unknown"')
        output=$(echo "$line" | jq -r '.output // "unknown"')
        timestamp=$(echo "$line" | jq -r '.time // "unknown"')
        # Log processed alert
        echo "[$timestamp] Processed alert: $rule ($priority)" >> "$PROCESSED_LOG"
        # Send to Elasticsearch
        send_to_elasticsearch "$line"
        # Send high-severity alerts to Slack (Falco priorities follow syslog levels)
        if [[ "$priority" =~ ^(Emergency|Alert|Critical|Error)$ ]]; then
            send_to_slack "$rule" "$priority" "$output"
        fi
        # Custom processing based on rule type
        case "$rule" in
            "Shell Spawned in Container")
                handle_shell_alert "$line"
                ;;
            "Privilege Escalation"*)
                handle_privilege_escalation "$line"
                ;;
            "Suspicious Network Activity")
                handle_network_alert "$line"
                ;;
            *)
                handle_generic_alert "$line"
                ;;
        esac
    done < "$input_source"
}
# Send alert to Elasticsearch
send_to_elasticsearch() {
    local alert="$1"
    local index="falco-$(date +%Y.%m.%d)"
    curl -s -X POST "$ELASTICSEARCH_URL/$index/_doc" \
        -H "Content-Type: application/json" \
        -d "$alert" > /dev/null || echo "Failed to send to Elasticsearch"
}
# Send alert to Slack
send_to_slack() {
    local rule="$1"
    local priority="$2"
    local output="$3"
    local color
    # Map Falco priorities to Slack attachment colors
    case "$priority" in
        Emergency|Alert|Critical) color="danger" ;;
        Error|Warning) color="warning" ;;
        *) color="good" ;;
    esac
    local payload
    payload=$(jq -n \
        --arg rule "$rule" \
        --arg priority "$priority" \
        --arg output "$output" \
        --arg color "$color" \
        '{
            "text": "🚨 Falco Security Alert",
            "attachments": [{
                "color": $color,
                "fields": [
                    {"title": "Rule", "value": $rule, "short": true},
                    {"title": "Priority", "value": $priority, "short": true},
                    {"title": "Details", "value": $output, "short": false}
                ]
            }]
        }')
    curl -s -X POST -H 'Content-type: application/json' \
        --data "$payload" "$SLACK_WEBHOOK" > /dev/null || echo "Failed to send to Slack"
}
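Falco's priority names correspond directly to the syslog severity levels, which makes numeric mapping easy when a downstream SIEM expects severity numbers instead of names. A minimal helper (the function name is ours, not part of Falco):

```shell
# Map Falco priority names to syslog-style numeric severities (0 = most severe).
# Falco's priority scale mirrors the syslog levels, so the mapping is direct.
priority_to_severity() {
    case "$1" in
        Emergency)     echo 0 ;;
        Alert)         echo 1 ;;
        Critical)      echo 2 ;;
        Error)         echo 3 ;;
        Warning)       echo 4 ;;
        Notice)        echo 5 ;;
        Informational) echo 6 ;;
        Debug)         echo 7 ;;
        *)             echo 6 ;;  # treat unknown values as informational
    esac
}

priority_to_severity "Critical"   # prints 2
```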
# Handle shell spawn alerts
handle_shell_alert() {
    local alert="$1"
    local container_id user command
    # Falco's JSON output_fields use dotted field names (container.id, user.name, ...)
    container_id=$(echo "$alert" | jq -r '.output_fields["container.id"] // "unknown"')
    user=$(echo "$alert" | jq -r '.output_fields["user.name"] // "unknown"')
    command=$(echo "$alert" | jq -r '.output_fields["proc.cmdline"] // "unknown"')
    echo "🐚 Shell Alert - Container: $container_id, User: $user, Command: $command" >> "$PROCESSED_LOG"
    # Additional processing for shell alerts
    # Could trigger container isolation, user investigation, etc.
}
# Handle privilege escalation alerts
handle_privilege_escalation() {
    local alert="$1"
    local user command container_id
    user=$(echo "$alert" | jq -r '.output_fields["user.name"] // "unknown"')
    command=$(echo "$alert" | jq -r '.output_fields["proc.cmdline"] // "unknown"')
    container_id=$(echo "$alert" | jq -r '.output_fields["container.id"] // "unknown"')
    echo "⬆️ Privilege Escalation - User: $user, Command: $command, Container: $container_id" >> "$PROCESSED_LOG"
    # High-priority alert - trigger an immediate notification
    send_to_slack "URGENT: Privilege Escalation Detected" "Critical" "User $user executed: $command"
}
# Handle network alerts
handle_network_alert() {
    local alert="$1"
    local connection user command
    connection=$(echo "$alert" | jq -r '.output_fields["fd.name"] // "unknown"')
    user=$(echo "$alert" | jq -r '.output_fields["user.name"] // "unknown"')
    command=$(echo "$alert" | jq -r '.output_fields["proc.cmdline"] // "unknown"')
    echo "🌐 Network Alert - Connection: $connection, User: $user, Command: $command" >> "$PROCESSED_LOG"
    # Could trigger network monitoring, connection blocking, etc.
}
# Handle generic alerts
handle_generic_alert() {
    local alert="$1"
    local rule priority
    rule=$(echo "$alert" | jq -r '.rule // "unknown"')
    priority=$(echo "$alert" | jq -r '.priority // "unknown"')
    echo "📋 Generic Alert - Rule: $rule, Priority: $priority" >> "$PROCESSED_LOG"
}
# Generate alert statistics
generate_stats() {
    local timeframe="${1:-1h}"
    echo "Falco Alert Statistics (Last $timeframe)"
    echo "========================================"
    # Count alerts by priority
    echo "Alerts by Priority:"
    journalctl -u falco --since "$timeframe ago" | \
        grep "Priority:" | \
        awk -F'Priority: ' '{print $2}' | \
        awk '{print $1}' | \
        sort | uniq -c | sort -nr
    echo ""
    # Count alerts by rule
    echo "Top 10 Alert Rules:"
    journalctl -u falco --since "$timeframe ago" | \
        grep "rule=" | \
        awk -F'rule=' '{print $2}' | \
        awk '{print $1}' | \
        sort | uniq -c | sort -nr | head -10
    echo ""
    # Count alerts by container
    echo "Top 10 Containers with Alerts:"
    journalctl -u falco --since "$timeframe ago" | \
        grep "container_id=" | \
        awk -F'container_id=' '{print $2}' | \
        awk '{print $1}' | \
        sort | uniq -c | sort -nr | head -10
}
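`generate_stats` depends on journalctl; when Falco writes JSON alerts to a file instead (via `json_output` and `file_output`), the same per-rule counts can be pulled out with plain sed/sort and no jq. A sketch, assuming one JSON object per line (the function name is ours):

```shell
# Count alerts per rule in a JSON-lines alert log, without jq.
# Assumes one alert object per line, as produced by Falco's json_output.
stats_by_rule() {
    sed -n 's/.*"rule" *: *"\([^"]*\)".*/\1/p' "$1" | sort | uniq -c | sort -rn
}

# Example with a synthetic log file:
printf '%s\n' \
    '{"rule":"Terminal shell in container","priority":"Notice"}' \
    '{"rule":"Terminal shell in container","priority":"Notice"}' \
    '{"rule":"Write below etc","priority":"Error"}' > /tmp/sample_alerts.json
stats_by_rule /tmp/sample_alerts.json
```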
# Main execution
main() {
    case "${1:-process}" in
        "process")
            if [ ! -t 0 ]; then
                # stdin is piped or redirected
                process_alerts "/dev/stdin"
            elif [ -f "$ALERT_LOG" ]; then
                process_alerts "$ALERT_LOG"
            else
                echo "No input source available"
                exit 1
            fi
            ;;
        "stats")
            generate_stats "${2:-1h}"
            ;;
        "test")
            # Test alert processing with sample data
            echo '{"rule":"Test Rule","priority":"Critical","output":"Test alert output","time":"2023-01-01T00:00:00Z"}' | process_alerts /dev/stdin
            ;;
        *)
            echo "Usage: $0 {process|stats [timeframe]|test}"
            echo "  process: Process alerts from stdin or log file"
            echo "  stats:   Generate alert statistics"
            echo "  test:    Test alert processing"
            exit 1
            ;;
    esac
}
# Execute main function
main "$@"
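To run the processor continuously, one option is a small systemd service that follows Falco's JSON log and pipes it into the script. The unit below is a sketch; the unit name and paths are assumptions, and it is staged in /tmp for review rather than written straight into /etc:

```shell
# Generate a systemd unit that streams Falco's JSON log into the processor.
# Unit name and paths are illustrative; adjust them to your layout.
cat > /tmp/falco-alert-processor.service << 'EOF'
[Unit]
Description=Process Falco alerts
After=falco.service

[Service]
ExecStart=/bin/sh -c 'tail -F /var/log/falco_alerts.json | /usr/local/bin/process_falco_alerts.sh process'
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Then, as root:
#   cp /tmp/falco-alert-processor.service /etc/systemd/system/
#   systemctl daemon-reload && systemctl enable --now falco-alert-processor
```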
Integration Examples
SIEM Integration
bash
# Splunk Universal Forwarder configuration
cat > /opt/splunkforwarder/etc/apps/falco/local/inputs.conf << 'EOF'
[monitor:///var/log/falco.log]
disabled = false
sourcetype = falco:alert
index = security
[monitor:///var/log/falco_audit.log]
disabled = false
sourcetype = falco:audit
index = security
EOF
# Elastic Stack integration
cat > /etc/filebeat/conf.d/falco.yml << 'EOF'
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/falco.log
    json.keys_under_root: true
    json.add_error_key: true
    fields:
      logtype: falco
    fields_under_root: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "falco-%{+yyyy.MM.dd}"
  template.name: "falco"
  template.pattern: "falco-*"
  template.settings:
    index.number_of_shards: 1
    index.number_of_replicas: 0

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
EOF
# QRadar integration via syslog
cat > /etc/rsyslog.d/49-falco.conf << 'EOF'
# Forward Falco alerts to QRadar
if $programname == 'falco' then @@qradar-server:514
& stop
EOF
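Some SIEMs prefer CEF records over raw JSON on the syslog path. A minimal formatter can sit between Falco's JSON output and the forwarder; this is a sketch (field extraction via sed, severity passed through as the Falco priority name, function name ours):

```shell
# Convert a Falco JSON alert (one object per line) into a minimal CEF record.
# CEF layout: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|Extension
to_cef() {
    local line="$1" rule priority output
    rule=$(sed -n 's/.*"rule" *: *"\([^"]*\)".*/\1/p' <<< "$line")
    priority=$(sed -n 's/.*"priority" *: *"\([^"]*\)".*/\1/p' <<< "$line")
    output=$(sed -n 's/.*"output" *: *"\([^"]*\)".*/\1/p' <<< "$line")
    printf 'CEF:0|Falco|Falco|1.0|%s|%s|%s|msg=%s\n' \
        "$rule" "$rule" "$priority" "$output"
}

to_cef '{"rule":"Terminal shell in container","priority":"Notice","output":"A shell was spawned"}'
```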
CI/CD Integration
bash
# Jenkins pipeline for Falco rule testing
cat > Jenkinsfile << 'EOF'
pipeline {
    agent any
    stages {
        stage('Validate Falco Rules') {
            steps {
                script {
                    sh '''
                        # Install Falco (legacy apt-key method)
                        curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
                        echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
                        apt-get update -y
                        apt-get install -y falco
                        # Validate rule syntax
                        falco --validate /path/to/custom_rules.yaml
                        # Run briefly against live events as a smoke test
                        falco -r /path/to/custom_rules.yaml -M 10
                    '''
                }
            }
        }
        stage('Deploy Rules') {
            when {
                branch 'main'
            }
            steps {
                script {
                    sh '''
                        # Deploy to Kubernetes
                        kubectl create configmap falco-rules \
                            --from-file=/path/to/custom_rules.yaml \
                            --namespace=falco \
                            --dry-run=client -o yaml | kubectl apply -f -
                        # Restart Falco pods
                        kubectl rollout restart daemonset/falco -n falco
                    '''
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'rules/*.yaml', fingerprint: true
        }
        failure {
            emailext (
                subject: "Falco Rule Validation Failed: ${env.JOB_NAME} - ${env.BUILD_NUMBER}",
                body: "Rule validation failed. Check console output for details.",
                to: "${env.CHANGE_AUTHOR_EMAIL}"
            )
        }
    }
}
EOF
# GitHub Actions workflow
cat > .github/workflows/falco.yml << 'EOF'
name: Falco Rule Validation
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  validate-rules:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Falco
        run: |
          curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | sudo apt-key add -
          echo "deb https://download.falco.org/packages/deb stable main" | sudo tee -a /etc/apt/sources.list.d/falcosecurity.list
          sudo apt-get update -y
          sudo apt-get install -y falco
      - name: Validate Rules
        run: |
          for rule_file in rules/*.yaml; do
            echo "Validating $rule_file"
            falco --validate "$rule_file"
          done
      - name: Test Rules
        run: |
          # Run for 30 seconds against live events as a smoke test
          falco -r rules/custom_rules.yaml -M 30
      - name: Upload artifacts
        uses: actions/upload-artifact@v3
        with:
          name: falco-rules
          path: rules/
EOF
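The same validation can run locally before changes ever reach CI. A sketch of a git pre-commit hook (staged in /tmp here for inspection; install it as `.git/hooks/pre-commit` to activate, and note it assumes `falco` is on the PATH):

```shell
# Generate a git pre-commit hook that validates staged Falco rule files.
# Written to /tmp for review; copy to .git/hooks/pre-commit to activate.
cat > /tmp/pre-commit-falco << 'EOF'
#!/bin/sh
for f in $(git diff --cached --name-only --diff-filter=ACM -- 'rules/*.yaml'); do
    echo "Validating $f"
    falco --validate "$f" || exit 1
done
EOF
chmod +x /tmp/pre-commit-falco
```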
Troubleshooting
Common Issues and Solutions
bash
# Check Falco service status
systemctl status falco
# View Falco logs
journalctl -u falco -f
# Check for kernel module issues
dmesg | grep falco
# Verify kernel headers
ls /lib/modules/$(uname -r)/build
# Validate the rules referenced by the configuration
falco --validate /etc/falco/falco_rules.yaml
# Check for permission issues
ls -la /dev/falco*
# Monitor Falco performance (pgrep -d, joins multiple PIDs with commas)
top -p "$(pgrep -d, -x falco)"
# Check for dropped events
journalctl -u falco | grep -i "drop"
# Verify eBPF probe
ls -la /root/.falco/
# Test rule syntax
falco --validate /path/to/rules.yaml
# Debug mode
falco -o log_level=debug
# List supported filter fields for conditions and outputs
falco --list
Performance Optimization
bash
# Optimize Falco configuration for performance
cat > /etc/falco/falco.yaml << 'EOF'
# Performance optimizations
syscall_event_drops:
  actions:
    - log
  rate: 0.1
  max_burst: 10

outputs:
  rate: 10
  max_burst: 100

# Load fewer, simpler rule files
rules_file:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/essential_rules.yaml  # only essential rules

# Optimize output handling
json_output: true
buffered_outputs: true

# Disable unnecessary outputs
file_output:
  enabled: false
syslog_output:
  enabled: false
EOF
# Monitor resource usage
cat > /usr/local/bin/falco_monitor.sh << 'EOF'
#!/bin/bash
# Sample Falco CPU/memory usage once a minute (oldest exact-match PID)
while true; do
  pid=$(pgrep -o -x falco)
  echo "$(date): CPU: $(ps -p "$pid" -o %cpu --no-headers)%, Memory: $(ps -p "$pid" -o %mem --no-headers)%"
  sleep 60
done
EOF
chmod +x /usr/local/bin/falco_monitor.sh
Security Considerations
Operational Security
bash
# Secure Falco configuration
chmod 600 /etc/falco/falco.yaml
chown root:root /etc/falco/falco.yaml
# Protect rule files
chmod 644 /etc/falco/falco_rules*.yaml
chown root:root /etc/falco/falco_rules*.yaml
# Secure log files
chmod 640 /var/log/falco.log
chown root:adm /var/log/falco.log
# Network security for gRPC
# Use TLS certificates for gRPC communication
openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.crt -days 365
# Configure firewall rules
ufw allow from trusted_network to any port 5060
ufw deny 5060
# Audit Falco configuration changes
auditctl -w /etc/falco/ -p wa -k falco_config
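Once the certificates exist, they can be referenced from the `grpc` block of falco.yaml so the gRPC API only serves over TLS. The fragment below is staged in /tmp for review before merging into /etc/falco/falco.yaml; the certificate paths are assumptions and should match wherever you generated the key pair:

```shell
# Stage a falco.yaml grpc fragment that enables TLS on the gRPC API.
# Certificate paths are illustrative; align them with your PKI layout.
cat > /tmp/falco-grpc-tls.yaml << 'EOF'
grpc:
  enabled: true
  bind_address: "0.0.0.0:5060"
  private_key: "/etc/falco/certs/server.key"
  cert_chain: "/etc/falco/certs/server.crt"
  root_certs: "/etc/falco/certs/ca.crt"

grpc_output:
  enabled: true
EOF
```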
Best Practices
Rule Management:
- Use version control for custom rules
- Test rules in development environment
- Implement gradual rule deployment
- Monitor rule performance impact
Alert Management:
- Implement alert prioritization
- Use alert correlation and deduplication
- Set up proper escalation procedures
- Regular review of alert patterns
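Deduplication in particular can be done close to the source, before alerts fan out to Slack or a SIEM. A minimal sketch that suppresses repeats of the same rule within a time window (function name and default window are ours; assumes one JSON alert per line on stdin, and bash 4+ for associative arrays):

```shell
# Suppress repeated alerts for the same rule within a time window.
# Reads JSON-lines alerts on stdin; emits each rule at most once per window.
dedupe_alerts() {
    local window="${1:-60}"
    declare -A last_seen          # rule name -> last emit time (epoch seconds)
    local line key now
    while IFS= read -r line; do
        key=$(sed -n 's/.*"rule" *: *"\([^"]*\)".*/\1/p' <<< "$line")
        key="${key:-unknown}"
        now=$(date +%s)
        if [[ -z "${last_seen[$key]:-}" ]] || (( now - last_seen[$key] >= window )); then
            last_seen[$key]=$now
            printf '%s\n' "$line"
        fi
    done
}

# Usage: tail -F /var/log/falco_alerts.json | dedupe_alerts 60 | ./process_falco_alerts.sh
```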
Performance:
- Monitor system resource usage
- Optimize rule complexity
- Use appropriate output buffering
- Regular performance tuning
Security:
- Secure Falco configuration files
- Use encrypted communication channels
- Implement proper access controls
- Regular security updates
Conclusion
Falco provides comprehensive runtime security monitoring for containerized and cloud-native environments. This cheatsheet covers installation, configuration, custom rule development, Kubernetes integration, automation, and troubleshooting. Regular monitoring and tuning ensure optimal security coverage while maintaining system performance.