
| Platform | Command |
| --- | --- |
| Ubuntu/Debian (td-agent) | `curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-jammy-td-agent4.sh \| sh` |
| RHEL/CentOS | `curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent4.sh \| sh` |
| macOS | `brew install fluentd` |
| Ruby gem | `gem install fluentd` |
| Docker | `docker pull fluent/fluentd:latest` |
| Kubernetes | Deploy as a DaemonSet (see the configuration section) |
| Command | Description |
| --- | --- |
| `fluentd -c fluent.conf` | Start Fluentd with the given configuration file |
| `fluentd -c fluent.conf -vv` | Run with verbose debug output |
| `fluentd -c fluent.conf --dry-run` | Validate the configuration without starting |
| `fluentd --setup ./fluent` | Create a default configuration directory structure |
| `fluentd --version` | Show Fluentd version information |
| `sudo systemctl start td-agent` | Start the td-agent service (Linux) |
| `sudo systemctl stop td-agent` | Stop the td-agent service |
| `sudo systemctl restart td-agent` | Restart the td-agent service |
| `sudo systemctl status td-agent` | Check the td-agent service status |
| `sudo systemctl reload td-agent` | Reload the configuration without restarting |
| `sudo systemctl enable td-agent` | Enable td-agent at system startup |
| `sudo journalctl -u td-agent -f` | Follow td-agent service logs in real time |
| `echo '{"msg":"test"}' \| fluent-cat debug.test` | Send a test log message to Fluentd |
| `curl -X POST -d 'json={"event":"test"}' http://localhost:8888/test.cycle` | Send a test log over HTTP |
| `td-agent-gem list \| grep fluent-plugin` | List installed Fluentd plugins |
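The start, validate, and test commands above compose into a quick smoke test. A minimal sketch, assuming a gem-based `fluentd` install (for package installs, use the `td-agent` paths and `systemctl` instead); the file name `smoke.conf` is illustrative:

```
# Minimal config: accept forward input, print events to stdout
cat > smoke.conf <<'EOF'
<source>
  @type forward
  port 24224
</source>
<match debug.**>
  @type stdout
</match>
EOF

# Validate the syntax without starting
fluentd -c smoke.conf --dry-run

# Start in the background, then send a test event
fluentd -c smoke.conf &
sleep 3
echo '{"msg":"test"}' | fluent-cat debug.test
```

If everything is wired up, the event appears on Fluentd's stdout with the tag `debug.test`.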
| Command | Description |
| --- | --- |
| `fluentd -c fluent.conf -d /var/run/fluentd.pid` | Run Fluentd as a daemon with a PID file |
| `fluentd -c fluent.conf -o /var/log/fluentd.log` | Run with output to a specific log file |
| `fluentd -c fluent.conf --workers 4` | Run with multiple worker processes |
| `fluentd -c fluent.conf -vvv` | Run with trace-level logging for debugging |
| `fluentd --show-plugin-config=input:tail` | Show configuration options for a specific plugin |
| `td-agent-gem install fluent-plugin-elasticsearch` | Install the Elasticsearch output plugin |
| `td-agent-gem install fluent-plugin-kafka -v 0.17.5` | Install a specific version of the Kafka plugin |
| `td-agent-gem update fluent-plugin-s3` | Update the S3 plugin to the latest version |
| `td-agent-gem uninstall fluent-plugin-mongo` | Remove the MongoDB plugin |
| `td-agent-gem search -r fluent-plugin` | Search the repository for available plugins |
| `fluent-cat --host 192.168.1.100 --port 24224 app.logs` | Send logs to a remote Fluentd instance |
| `fluent-cat app.logs < /path/to/logfile.json` | Send the contents of a log file to Fluentd |
| `docker run -d -p 24224:24224 -v /data/fluentd:/fluentd/etc fluent/fluentd` | Run Fluentd in Docker with a mounted configuration |
| `sudo kill -USR1 $(cat /var/run/td-agent/td-agent.pid)` | Flush buffers and reopen the Fluentd log file |
| `sudo kill -USR2 $(cat /var/run/td-agent/td-agent.pid)` | Gracefully reload the configuration without restarting |
Default configuration file locations:

- td-agent (Linux packages): `/etc/td-agent/td-agent.conf`
- fluentd gem: `./fluent/fluent.conf` (created by `--setup`) or `~/.fluentd/fluent.conf`
- Docker image: `/fluentd/etc/fluent.conf`
- fluent-package: `/etc/fluent/fluent.conf`

```
# Source: Input plugins
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Filter: Process/transform logs
<filter app.**>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    tag ${tag}
  </record>
</filter>

# Match: Output plugins
<match app.**>
  @type elasticsearch
  host elasticsearch.local
  port 9200
  index_name fluentd
  type_name fluentd
</match>
```

```yaml
type: input/output/filter
<plugin_name>
  # Configuration parameters
</plugin_name>
```

```
# Forward input (receive from other Fluentd instances)
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Tail log files
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>

# HTTP input
<source>
  @type http
  port 8888
  bind 0.0.0.0
  body_size_limit 32m
  keepalive_timeout 10s
</source>

# Syslog input
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag system.syslog
</source>
```

```yaml
<source>
  @type tail
  path /var/log/nginx/access.log
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>
```

```
# Add/modify record fields
<filter app.**>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    environment production
    timestamp ${time}
  </record>
</filter>

# Parse unstructured logs
<filter app.logs>
  @type parser
  key_name message
  <parse>
    @type json
  </parse>
</filter>

# Grep filter (include/exclude)
<filter app.**>
  @type grep
  <regexp>
    key level
    pattern /^(ERROR|FATAL)$/
  </regexp>
</filter>

# Modify tag
<match app.raw.**>
  @type rewrite_tag_filter
  <rule>
    key level
    pattern /^ERROR$/
    tag app.error.${tag}
  </rule>
</match>
```

```yaml
<filter tag_name>
  @type record_transformer
  <record>
    # Transformation logic
  </record>
</filter>
```

```
# Elasticsearch output
<match app.**>
  @type elasticsearch
  host elasticsearch.local
  port 9200
  logstash_format true
  logstash_prefix fluentd
  <buffer>
    @type file
    path /var/log/fluentd/buffer/elasticsearch
    flush_interval 10s
    retry_max_interval 300s
  </buffer>
</match>

# S3 output
<match logs.**>
  @type s3
  aws_key_id YOUR_AWS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_KEY
  s3_bucket your-bucket-name
  s3_region us-east-1
  path logs/
  time_slice_format %Y%m%d%H
  <buffer time>
    timekey 3600
    timekey_wait 10m
  </buffer>
</match>

# File output
<match debug.**>
  @type file
  path /var/log/fluentd/output
  <buffer>
    timekey 1d
    timekey_use_utc true
  </buffer>
</match>

# Forward to another Fluentd
<match forward.**>
  @type forward
  <server>
    host 192.168.1.100
    port 24224
  </server>
  <buffer>
    @type file
    path /var/log/fluentd/buffer/forward
  </buffer>
</match>

# Stdout (debugging)
<match debug.**>
  @type stdout
</match>
```

```yaml
<match tag_pattern>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```

```
<match pattern.**>
  @type elasticsearch

  # File buffer with advanced settings
  <buffer>
    @type file
    path /var/log/fluentd/buffer

    # Flush settings
    flush_mode interval
    flush_interval 10s
    flush_at_shutdown true

    # Retry settings
    retry_type exponential_backoff
    retry_wait 10s
    retry_max_interval 300s
    retry_timeout 72h
    retry_max_times 17

    # Chunk settings
    chunk_limit_size 5M
    queue_limit_length 32
    overflow_action drop_oldest_chunk

    # Compression
    compress gzip
  </buffer>
</match>

# Memory buffer for high-performance
<match fast.**>
  @type forward
  <buffer>
    @type memory
    flush_interval 5s
    chunk_limit_size 1M
    queue_limit_length 64
  </buffer>
</match>
```

```yaml
<buffer>
  @type file
  path /var/log/fluentd-buffers/myapp
  flush_mode interval
  retry_type exponential_backoff
</buffer>
```

```
<system>
  workers 4
  root_dir /var/log/fluentd
</system>

# Worker-specific sources
<worker 0>
  <source>
    @type forward
    port 24224
  </source>
</worker>

<worker 1-3>
  <source>
    @type tail
    path /var/log/app/*.log
    tag app.logs
  </source>
</worker>
```

```yaml
<system>
  workers 4
</system>
```

```
# Route to different pipelines using labels
<source>
  @type forward
  @label @mainstream
</source>

<source>
  @type tail
  path /var/log/secure.log
  @label @security
</source>

<label @mainstream>
  <filter **>
    @type record_transformer
    <record>
      pipeline mainstream
    </record>
  </filter>

  <match **>
    @type elasticsearch
    host es-main
  </match>
</label>

<label @security>
  <filter **>
    @type grep
    <regexp>
      key message
      pattern /authentication failure/
    </regexp>
  </filter>

  <match **>
    @type s3
    s3_bucket security-logs
  </match>
</label>
```

```yaml
<label @SYSTEM>
  <match **>
    @type forward
    send_timeout 60s
  </match>
</label>
```

```bash
# Install Elasticsearch plugin
sudo td-agent-gem install fluent-plugin-elasticsearch

# Configure Fluentd
sudo tee /etc/td-agent/td-agent.conf > /dev/null <<'EOF'
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>

<match nginx.access>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
  logstash_prefix nginx
  <buffer>
    flush_interval 10s
  </buffer>
</match>
EOF

# Restart td-agent
sudo systemctl restart td-agent

# Verify logs are flowing
sudo journalctl -u td-agent -f
```

```yaml
<source>
  @type tail
  path /var/log/nginx/access.log
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>

<match nginx.access>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
```

```bash
# Deploy Fluentd DaemonSet
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF

# Check DaemonSet status
kubectl get daemonset -n kube-system fluentd
kubectl logs -n kube-system -l k8s-app=fluentd-logging --tail=50
```

```yaml
# There is no "@type kubernetes" input plugin; tail the container logs instead
<source>
  @type tail
  path /var/log/containers/*.log
  tag kube.*
  <parse>
    @type json
  </parse>
</source>

<match kube.**>
  @type elasticsearch
  host elasticsearch
  logstash_format true
</match>
```

```bash
# Install S3 plugin
sudo td-agent-gem install fluent-plugin-s3

# Configure S3 output
sudo tee /etc/td-agent/td-agent.conf > /dev/null <<'EOF'
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/td-agent/app.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

<match app.logs>
  @type s3

  aws_key_id YOUR_AWS_ACCESS_KEY
  aws_sec_key YOUR_AWS_SECRET_KEY
  s3_bucket my-application-logs
  s3_region us-east-1

  path logs/%Y/%m/%d/
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}

  <buffer time>
    @type file
    path /var/log/td-agent/s3
    timekey 3600
    timekey_wait 10m
    chunk_limit_size 256m
  </buffer>

  <format>
    @type json
  </format>
</match>
EOF

# Restart and verify
sudo systemctl restart td-agent
sudo systemctl status td-agent
```

```yaml
<match app.logs>
  @type s3
  aws_key_id YOUR_AWS_KEY
  aws_sec_key YOUR_AWS_SECRET
  s3_bucket your-bucket
  path logs/
  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3
    timekey 1d
    timekey_use_utc true
  </buffer>
</match>
```

```bash
# Configure routing to multiple destinations
sudo tee /etc/td-agent/td-agent.conf > /dev/null <<'EOF'
<source>
  @type tail
  path /var/log/app/application.log
  pos_file /var/log/td-agent/app.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

# Copy logs to multiple destinations
<match app.logs>
  @type copy

  # Send to Elasticsearch
  <store>
    @type elasticsearch
    host elasticsearch.local
    port 9200
    logstash_format true
  </store>

  # Send to S3 for archival
  <store>
    @type s3
    s3_bucket app-logs-archive
    path logs/
    <buffer time>
      timekey 86400
    </buffer>
  </store>

  # Send errors to Slack (a <store> takes exactly one @type; filter to
  # ERROR level upstream, e.g. with a grep filter or rewrite_tag_filter)
  <store>
    @type slack
    webhook_url https://hooks.slack.com/services/YOUR/WEBHOOK/URL
    channel alerts
    username fluentd
  </store>
</match>
EOF

sudo systemctl restart td-agent
```

```yaml
<match app.logs>
  @type copy
  <store>
    @type elasticsearch
    host es1
  </store>
  <store>
    @type forward
    send_timeout 60s
    <server>
      host log-collector
    </server>
  </store>
</match>
```

```bash
# Configure APM log forwarding
sudo tee /etc/td-agent/td-agent.conf > /dev/null <<'EOF'
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/td-agent/app.pos
  tag app.logs
  <parse>
    @type json
    time_key timestamp
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>

# Enrich logs with metadata
<filter app.logs>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    environment "#{ENV['ENVIRONMENT'] || 'production'}"
    service_name myapp
    trace_id ${record['trace_id']}
  </record>
</filter>

# Calculate response time metrics
<filter app.logs>
  @type prometheus
  <metric>
    name http_request_duration_seconds
    type histogram
    desc HTTP request duration
    key response_time
  </metric>
</filter>

# Forward to APM system
<match app.logs>
  @type http
  endpoint http://apm-server:8200/intake/v2/events
  <buffer>
    flush_interval 5s
  </buffer>
</match>
EOF

sudo systemctl restart td-agent
```

```yaml
<match app.metrics>
  @type prometheus
  <metric>
    name fluentd_processed_records_total
    type counter
    desc Total number of records processed
  </metric>
</match>
```

  • Use `pos_file` for tail inputs and set appropriate `timekey` values in buffers to prevent disk-space problems. Use `rotate_age` and `rotate_size` for file outputs.

  • Tag logs hierarchically: Use dot-notation tags (e.g., `app.production.web`) to enable flexible routing and filtering. This allows matching patterns such as `app.**` or `app.production.*`.

  • Monitor Fluentd performance: Track buffer queue length, retry counts, and emit rates. Use the Prometheus plugin or built-in monitoring to spot bottlenecks before they cause data loss.

  • Protect sensitive data: Use `@type secure_forward` for encrypted log transport, filter sensitive fields with `record_modifier`, and restrict file permissions on configuration files that contain credentials.

  • Test configuration changes: Always validate configuration syntax with `--dry-run` before deploying. Test routing logic with small log volumes before applying it in production.

  • Use multi-worker mode judiciously: Enable workers for CPU-intensive operations (parsing, filtering), but be aware that some plugins do not support multi-worker mode. Start with 2-4 workers and monitor CPU utilization.

  • Implement graceful degradation: Configure `overflow_action` in buffers to handle backpressure (use `drop_oldest_chunk` or `block` depending on your requirements). Set sensible `retry_timeout` values to prevent endless retries.

  • Separate concerns with labels: Use `@label` directives to create isolated processing pipelines for different log types. This improves maintainability and prevents unintended routing.

  • Keep plugins up to date: Update Fluentd and plugins regularly to pick up security fixes and performance improvements. Pin plugin versions in production to ensure consistency.
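The monitoring advice above can be made concrete with the built-in `monitor_agent` input, which exposes plugin and buffer metrics over HTTP. A minimal sketch; port 24220 is the conventional default:

```
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
```

Querying `http://localhost:24220/api/plugins.json` then returns per-plugin metrics such as `buffer_queue_length`, `buffer_total_queued_size`, and `retry_count`, which can be scraped or alerted on.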

Troubleshooting

| Problem | Solution |
| --- | --- |
| Fluentd won't start | Check syntax: `fluentd -c fluent.conf --dry-run`. Review logs: `sudo journalctl -u td-agent -n 100`. Verify file permissions on config and buffer directories. |
| Logs not being collected | Verify the `pos_file` exists and is writable. Check that file path patterns match actual log locations. Ensure log files have read permissions. Test with `tail -f` on the log file. |
| High memory usage | Switch from memory buffers to file buffers. Reduce `chunk_limit_size` and `queue_limit_length`. Enable multi-worker mode to distribute load. Check for memory leaks in custom plugins. |
| Buffer queue growing | Increase `flush_interval` or reduce log volume. Check downstream system capacity (Elasticsearch, S3). Verify network connectivity. Review `retry_max_interval` settings. |
| Logs being dropped | Check the buffer `overflow_action` setting. Increase `queue_limit_length` and `chunk_limit_size`. Monitor disk space for file buffers. Review the `retry_timeout` configuration. |
| Plugin installation fails | Ensure Ruby development headers are installed: `sudo apt-get install ruby-dev build-essential`. Use the correct gem command: `td-agent-gem`, not `gem`. Check plugin compatibility with the Fluentd version. |
| Parse errors in logs | Validate the parser configuration with sample logs. Use `@type regexp` with proper regex patterns. Add error handling: `emit_invalid_record_to_error true`. Check time format strings. |
| Cannot connect to Elasticsearch | Verify Elasticsearch is running: `curl http://elasticsearch:9200`. Check firewall rules. Validate credentials if using authentication. Review Elasticsearch logs for rejection reasons. |
| Duplicate logs appearing | Check that the `pos_file` location persists across restarts. Verify only one Fluentd instance is running. Review the `read_from_head` setting (should be `false` in production). |
| Slow log processing | Enable multi-worker mode. Optimize regex patterns in filters. Use `@type grep` before expensive parsers. Profile with the `--trace` flag to identify bottlenecks. |
| SSL/TLS connection errors | Verify certificate paths and permissions. Check certificate expiration dates. Ensure the CA bundle is up to date. Use `verify_ssl false` for testing only (not production). |
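Several of the rows above boil down to "is the buffer backing up, and why?". A hedged diagnostic sketch, assuming a td-agent install with file buffers under `/var/log/td-agent/buffer`, a `monitor_agent` source on port 24220, and an Elasticsearch downstream (all paths and hostnames illustrative):

```
# Disk used by file buffers (grows when the downstream can't keep up)
du -sh /var/log/td-agent/buffer/* 2>/dev/null

# Queue length and retry counts per plugin (requires a monitor_agent input)
curl -s http://localhost:24220/api/plugins.json \
  | grep -oE '"(buffer_queue_length|retry_count)":[0-9]+'

# Confirm the downstream accepts connections
curl -s -o /dev/null -w '%{http_code}\n' http://elasticsearch:9200
```

A steadily climbing `buffer_queue_length` with nonzero `retry_count` points at the downstream or the network; a flat queue with missing logs points back at the input side (`pos_file`, paths, permissions).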