# Filebeat Cheatsheet

## Installation
| Platform | Command |
|---|---|
| Ubuntu/Debian | `wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch \| sudo apt-key add -`<br>`sudo apt-get install apt-transport-https`<br>`echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" \| sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list`<br>`sudo apt-get update && sudo apt-get install filebeat` |
| RHEL/CentOS | `sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch`<br>`sudo yum install filebeat` |
| macOS | `brew tap elastic/tap`<br>`brew install elastic/tap/filebeat-full` |
| Windows | Download from https://artifacts.elastic.co/downloads/beats/filebeat/ then extract and run: `.\install-service-filebeat.ps1` |
| Docker | `docker pull docker.elastic.co/beats/filebeat:8.11.0` |
| Kubernetes (Helm) | `helm repo add elastic https://helm.elastic.co`<br>`helm install filebeat elastic/filebeat` |
## Basic Commands

| Command | Description |
|---|---|
| `sudo systemctl start filebeat` | Start the Filebeat service |
| `sudo systemctl stop filebeat` | Stop the Filebeat service |
| `sudo systemctl restart filebeat` | Restart the Filebeat service |
| `sudo systemctl status filebeat` | Check the Filebeat service status |
| `sudo systemctl enable filebeat` | Enable Filebeat to start on boot |
| `sudo filebeat -e` | Run Filebeat in the foreground with console output |
| `sudo filebeat -e -c /path/to/filebeat.yml` | Run with a specific configuration file |
| `sudo filebeat test config` | Validate configuration file syntax |
| `sudo filebeat test output` | Test connectivity to the configured output |
| `sudo journalctl -u filebeat -f` | View Filebeat service logs in real time |
| `sudo filebeat modules list` | List all available modules |
| `sudo filebeat modules enable apache` | Enable a specific module (Apache example) |
| `sudo filebeat modules disable apache` | Disable a specific module |
| `sudo filebeat setup` | Load the index template, dashboards, and pipelines |
| `sudo filebeat version` | Display Filebeat version information |
## Module Management

| Command | Description |
|---|---|
| `sudo filebeat modules list` | Show all available modules and their status |
| `sudo filebeat modules enable nginx mysql` | Enable multiple modules at once |
| `sudo filebeat modules disable system` | Disable a module |
| `sudo filebeat modules list \| grep Enabled -A 10` | Show only enabled modules |
| `sudo filebeat export config --modules apache` | Export specific module configuration |
| `ls /etc/filebeat/modules.d/` | List module configuration files |
| `sudo vi /etc/filebeat/modules.d/nginx.yml` | Edit a module configuration file |
## Setup and Initialization

| Command | Description |
|---|---|
| `sudo filebeat setup --index-management` | Set up only the index template in Elasticsearch |
| `sudo filebeat setup --dashboards` | Set up only Kibana dashboards |
| `sudo filebeat setup --pipelines` | Set up only ingest pipelines |
| `sudo filebeat setup -E output.elasticsearch.hosts=['es:9200']` | Set up against a specific Elasticsearch host |
| `sudo filebeat setup -E output.elasticsearch.username=elastic -E output.elasticsearch.password=pass` | Set up with authentication credentials |
| `sudo filebeat export template` | Export the index template to stdout |
| `sudo filebeat export ilm-policy` | Export the ILM (Index Lifecycle Management) policy |
## Advanced Usage

| Command | Description |
|---|---|
| `sudo filebeat -e -d "*"` | Run with debug logging for all components |
| `sudo filebeat -e -d "publish,harvester"` | Debug specific components only |
| `sudo filebeat -e --strict.perms=false` | Disable strict permission checking on the config file |
| `sudo filebeat -e -E http.enabled=true -E http.host=localhost -E http.port=5066` | Enable the HTTP monitoring endpoint |
| `curl http://localhost:5066/stats` | Query the monitoring endpoint for statistics |
| `curl http://localhost:5066/state` | Get detailed state information |
| `sudo filebeat -e -E output.console.enabled=true -E output.elasticsearch.enabled=false` | Output events to the console instead of Elasticsearch |
| `sudo filebeat -e -E filebeat.config.inputs.workers=4` | Run with a specific number of workers |
| `sudo filebeat -e -E output.elasticsearch.bulk_max_size=100` | Adjust the bulk indexing batch size |
| `sudo filebeat -e -E queue.mem.events=8192` | Set the in-memory queue event limit |
| `sudo filebeat -e -E output.elasticsearch.compression_level=3` | Enable compression for the Elasticsearch output |
| `sudo filebeat test config -e -d "processors"` | Test processors configuration with debug output |
| `sudo filebeat migrate-registry` | Migrate the registry from an older Filebeat version |
| `sudo filebeat export config` | Export the complete configuration to stdout |
| `curl http://localhost:5066/autodiscover` | Check autodiscover status |
## Keystore Management

| Command | Description |
|---|---|
| `sudo filebeat keystore create` | Create a new keystore for secrets |
| `sudo filebeat keystore add ES_PASSWORD` | Add a secret to the keystore (prompts for the value) |
| `sudo filebeat keystore list` | List all keys in the keystore |
| `sudo filebeat keystore remove ES_PASSWORD` | Remove a key from the keystore |
## Configuration

### Main Configuration File

Location: `/etc/filebeat/filebeat.yml` (Linux) or `C:\Program Files\filebeat\filebeat.yml` (Windows)
### Basic Input Configuration

```yaml
# Filestream input (recommended for log files)
filebeat.inputs:
  - type: filestream
    id: my-app-logs
    enabled: true
    paths:
      - /var/log/myapp/*.log
    fields:
      app: myapp
      environment: production
    fields_under_root: true
```
### Log Input Configuration

```yaml
# Log input (legacy, but still supported)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log
    exclude_lines: ['^DEBUG']
    include_lines: ['^ERR', '^WARN']
    multiline.pattern: '^[[:space:]]'
    multiline.negate: false
    multiline.match: after
```
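The `include_lines` and `exclude_lines` entries are regular expressions applied per line; Filebeat runs `include_lines` first, then `exclude_lines`. As a rough preview of their combined effect on sample data, `grep` can stand in for Filebeat's matching (a sketch, not Filebeat itself):

```shell
# Approximate the filtering from the config above on four sample lines.
# include_lines keeps ^ERR/^WARN; exclude_lines then drops ^DEBUG.
printf '%s\n' 'ERR disk full' 'DEBUG probe ok' 'WARN slow query' 'INFO started' \
  | grep -E '^ERR|^WARN' \
  | grep -vE '^DEBUG'
```

Only `ERR disk full` and `WARN slow query` survive the pipeline.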
### Container Input Configuration

```yaml
# Docker container logs
filebeat.inputs:
  - type: container
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    processors:
      - add_docker_metadata: ~
```
### Elasticsearch Output

```yaml
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "${ES_PASSWORD}"  # From the keystore
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  pipeline: "filebeat-%{[agent.version]}-apache-access-default"
```
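The `index` pattern expands per event: `%{[agent.version]}` is the shipper's version and `%{+yyyy.MM.dd}` the event's date. A small shell sketch of the concrete name the pattern yields today (the version string is an assumption for illustration), useful when checking indices with `curl .../_cat/indices`:

```shell
# Sketch: compute the index name the pattern above would produce today.
version="8.11.0"  # assumed agent.version
index="filebeat-${version}-$(date +%Y.%m.%d)"
echo "$index"     # e.g. filebeat-8.11.0-2024.05.01
```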
### Logstash Output

```yaml
output.logstash:
  hosts: ["logstash:5044"]
  loadbalance: true
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  ssl.certificate: "/etc/pki/client/cert.pem"
  ssl.key: "/etc/pki/client/cert.key"
```
### Kafka Output

```yaml
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat"
  partition.round_robin:
    reachable_only: false
  compression: gzip
  max_message_bytes: 1000000
```
### Processors Configuration

```yaml
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.id"]
  - decode_json_fields:
      fields: ["message"]
      target: "json"
      overwrite_keys: true
```
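Processors also accept richer conditions than the `when.not.contains.tags` shorthand above, including `and`, `equals`, and `contains`. A hedged sketch (the field values are assumptions, not from this document) that drops health-check noise from one input only:

```yaml
processors:
  - drop_event:
      when:
        and:
          - equals:
              input.type: filestream
          - contains:
              message: "GET /healthz"   # assumed health-check path
```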
### Autodiscover for Docker

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              processors:
                - add_docker_metadata: ~
```
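With `hints.enabled: true`, containers can also configure their own collection through `co.elastic.logs/*` labels instead of templates. A minimal docker-compose sketch (the service name is illustrative):

```yaml
services:
  web:
    image: nginx
    labels:
      # Ask Filebeat to parse this container's stdout/stderr with the nginx module
      co.elastic.logs/module: "nginx"
      co.elastic.logs/fileset.stdout: "access"
      co.elastic.logs/fileset.stderr: "error"
```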
### Autodiscover for Kubernetes

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
```
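In Kubernetes, hints are read from pod annotations and override `hints.default_config` per pod. A minimal pod sketch (the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    co.elastic.logs/module: "nginx"          # parse with the nginx module
    co.elastic.logs/exclude_lines: "^DEBUG"  # drop debug lines at the source
spec:
  containers:
    - name: web
      image: nginx
```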
### Module Configuration Example

```yaml
# /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]
```
## Common Use Cases

### Use Case 1: Collecting Application Logs

```yaml
filebeat.inputs:
  - type: filestream
    id: myapp-logs
    enabled: true
    paths:
      - /var/log/myapp/*.log
    fields:
      app: myapp
      env: production
```
### Use Case 2: Setting Up Nginx Log Collection

```shell
# Enable the Nginx module
sudo filebeat modules enable nginx
# Configure the module
sudo vi /etc/filebeat/modules.d/nginx.yml
```

```yaml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]
```
### Use Case 3: Docker Container Log Collection

```shell
# Run Filebeat in Docker to collect container logs
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:8.11.0
```
### Use Case 4: Kubernetes DaemonSet Deployment

```shell
# Deploy Filebeat as a DaemonSet using Helm
helm repo add elastic https://helm.elastic.co
helm repo update

# Create a values file
cat > filebeat-values.yaml <<EOF
daemonset:
  enabled: true
filebeatConfig:
  filebeat.yml: |
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: \${NODE_NAME}
          hints.enabled: true
    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
EOF

# Install Filebeat
helm install filebeat elastic/filebeat \
  --namespace logging --create-namespace \
  -f filebeat-values.yaml
```
### Use Case 5: Multiline Log Parsing (Stack Traces)

```yaml
filebeat.inputs:
  - type: filestream
    id: java-app
    enabled: true
    paths:
      - /var/log/java-app/*.log
    parsers:
      - multiline:
          type: pattern
          pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
          negate: false
          match: after
```
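Before shipping a multiline pattern, it helps to check it against sample lines: with `negate: false` and `match: after`, every line that matches is appended to the previous event. A rough `grep -E` stand-in (the `\b` from the config is dropped here for ERE portability; this approximates, not replicates, Filebeat's Go-regexp matching):

```shell
# Count which sample lines the stack-trace pattern would fold into the
# preceding event: the indented "at ..." frame and the "Caused by:" line.
pattern='^[[:space:]]+(at|\.{3})|^Caused by:'
printf '%s\n' \
  'java.lang.NullPointerException: boom' \
  '    at com.example.App.main(App.java:10)' \
  'Caused by: java.io.IOException' \
  '2024-01-01 INFO started' \
  | grep -cE "$pattern"
```

`grep -c` reports 2 here, so the two continuation lines would be merged into the exception's event.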
### Use Case 6: Sending Logs to Multiple Outputs

Filebeat supports only one active output at a time. To deliver the same events to multiple destinations, send everything to Logstash and let it fan out:

```yaml
# /etc/filebeat/filebeat.yml
output.logstash:
  hosts: ["logstash:5044"]
  loadbalance: true
```
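On the Logstash side, one pipeline can then write the same events to several destinations. A hedged sketch of such a pipeline, reusing the Elasticsearch and Kafka endpoints from the output examples earlier (not a configuration from this document):

```
# logstash.conf sketch: receive from Filebeat, fan out to two outputs
input {
  beats { port => 5044 }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
  kafka {
    bootstrap_servers => "kafka1:9092"
    topic_id => "filebeat"
  }
}
```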
### Use Case 7: Filtering and Enriching Logs

```yaml
processors:
  - drop_event:
      when:
        regexp:
          message: "^DEBUG"
  - add_fields:
      target: ''
      fields:
        datacenter: us-east-1
        team: platform
  - decode_json_fields:
      fields: ["message"]
      target: "json"
      overwrite_keys: true
  - drop_fields:
      fields: ["agent.ephemeral_id", "ecs.version"]
```
## Best Practices

- **Use Filestream Input**: Prefer `filestream` over the legacy `log` input type for better performance and more reliable handling of file rotation
- **Enable Modules**: Use pre-built modules (nginx, apache, mysql, etc.) instead of custom configurations when possible for faster setup and better parsing
- **Implement Backpressure Handling**: Configure `queue.mem.events` and `output.elasticsearch.bulk_max_size` to handle load spikes without data loss
- **Secure Credentials**: Store sensitive information (passwords, API keys) in the Filebeat keystore rather than in plain text in configuration files
- **Monitor Resource Usage**: Enable the HTTP endpoint (`http.enabled: true`) to monitor Filebeat performance and harvester status
- **Use Index Lifecycle Management (ILM)**: Configure ILM policies to automatically manage index retention and reduce storage costs
- **Tag and Enrich Logs**: Add custom fields and metadata using processors to make logs more searchable and contextual
- **Test Before Production**: Always run `filebeat test config` and `filebeat test output` before deploying configuration changes
- **Handle Multiline Logs**: Configure multiline patterns for stack traces and other multi-line application logs to prevent log fragmentation
- **Implement Autodiscover**: Use autodiscover in dynamic environments (Docker, Kubernetes) to automatically detect and configure new containers
- **Regular Updates**: Keep Filebeat updated to match your Elasticsearch version for compatibility and security patches
- **Set Appropriate Permissions**: Ensure Filebeat has read access to log files while following the principle of least privilege
## Troubleshooting

| Issue | Solution |
|---|---|
| Filebeat not starting | Check configuration syntax: `sudo filebeat test config`<br>Check service status: `sudo systemctl status filebeat`<br>Review logs: `sudo journalctl -u filebeat -n 50` |
| No data in Elasticsearch | Test output connectivity: `sudo filebeat test output`<br>Check Elasticsearch is running: `curl http://elasticsearch:9200`<br>Verify the index exists: `curl http://elasticsearch:9200/_cat/indices?v` |
| Permission denied errors | Ensure Filebeat has read access: `sudo chmod 644 /var/log/myapp/*.log`<br>Check file ownership: `ls -la /var/log/myapp/`<br>Ensure the config is owned by root: `sudo chown root:root /etc/filebeat/filebeat.yml` |
| Duplicate events | Check the registry file: `/var/lib/filebeat/registry/filebeat/data.json`<br>Ensure unique input IDs in the configuration<br>Avoid multiple Filebeat instances reading the same files |
| High memory usage | Reduce the queue size: `queue.mem.events: 2048`<br>Limit concurrent harvesters per input: `harvester_limit: 100`<br>Enable compression: `output.elasticsearch.compression_level: 3` |
| Logs not being tailed | Check that the file paths are correct: `ls -la /var/log/myapp/*.log`<br>Verify the input is enabled in the configuration<br>Check the `close_inactive` setting isn't too aggressive |
| Connection timeout to Elasticsearch | Increase the timeout: `output.elasticsearch.timeout: 90`<br>Check network connectivity: `telnet elasticsearch 9200`<br>Verify credentials: `curl -u elastic:password http://elasticsearch:9200` |
| Module not working | Verify the module is enabled: `sudo filebeat modules list`<br>Check log paths in the module config: `cat /etc/filebeat/modules.d/nginx.yml`<br>Ensure ingest pipelines are loaded: `sudo filebeat setup --pipelines` |
| Multiline logs not parsing | Test the pattern against sample logs<br>Check the `multiline.negate` and `multiline.match` settings<br>Review harvester debug logs: `sudo filebeat -e -d "harvester"` |
| SSL/TLS connection errors | Verify certificate paths and permissions<br>Check certificate validity: `openssl x509 -in cert.pem -text -noout`<br>Disable SSL verification for testing only: `output.elasticsearch.ssl.verification_mode: none` |
| Registry file corruption | Stop Filebeat: `sudo systemctl stop filebeat`<br>Back up the registry: `sudo cp -r /var/lib/filebeat/registry /tmp/registry.bak`<br>Remove the registry: `sudo rm -rf /var/lib/filebeat/registry`<br>Restart (logs will be reprocessed): `sudo systemctl start filebeat` |
| Autodiscover not detecting containers | Check Docker socket permissions: `ls -la /var/run/docker.sock`<br>Verify the autodiscover config syntax<br>Enable debug logging: `sudo filebeat -e -d "autodiscover"` |
## Quick Reference: Common File Locations

| Item | Location (Linux) | Location (Windows) |
|---|---|---|
| Main config | `/etc/filebeat/filebeat.yml` | `C:\Program Files\filebeat\filebeat.yml` |
| Module configs | `/etc/filebeat/modules.d/` | `C:\Program Files\filebeat\modules.d\` |
| Registry | `/var/lib/filebeat/registry/` | `C:\ProgramData\filebeat\registry\` |
| Logs | `/var/log/filebeat/` | `C:\ProgramData\filebeat\logs\` |
| Binary | `/usr/share/filebeat/bin/filebeat` | `C:\Program Files\filebeat\filebeat.exe` |
| Data directory | `/var/lib/filebeat/` | `C:\ProgramData\filebeat\` |
## Performance Tuning Parameters

| Parameter | Default | Description | Recommended Range |
|---|---|---|---|
| `queue.mem.events` | 4096 | Number of events the in-memory queue can hold | 2048-8192 |
| `queue.mem.flush.min_events` | 2048 | Minimum events buffered before a flush | 1024-4096 |
| `output.elasticsearch.bulk_max_size` | 50 | Max events per bulk request | 50-1600 |
| `output.elasticsearch.worker` | 1 | Number of output workers | 1-4 |
| `harvester_limit` (per input) | 0 (unlimited) | Max concurrent file readers for an input | 100-500 |
| `close_inactive` | 5m | Close a file handle after this period of inactivity | 1m-10m |
| `scan_frequency` | 10s | How often to check for new files | 5s-30s |
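Taken together, a throughput-oriented configuration might combine these settings as below; treat the numbers as starting points drawn from the ranges above, not universal values:

```yaml
# Sketch: moderate-throughput tuning using values from the table above
queue.mem:
  events: 8192
  flush.min_events: 2048

output.elasticsearch:
  worker: 2
  bulk_max_size: 1600
  compression_level: 3
```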