
Grafana Alloy Commands

Grafana Alloy is a flexible, vendor-neutral OpenTelemetry distribution for collecting, processing, and exporting telemetry data (metrics, logs, traces). The successor to Grafana Agent, it uses a component-based configuration language.

Installation

Linux Package Repositories

Debian/Ubuntu

sudo mkdir -p /etc/apt/keyrings/
wget -qO - https://apt.grafana.com/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/grafana.gpg
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install alloy

RHEL/CentOS/Fedora

sudo tee /etc/yum.repos.d/grafana.repo << EOF
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
EOF
sudo dnf install alloy

macOS

brew install grafana/grafana/alloy
alloy --version

Binary Download

# Download a release (check the releases page for the current version)
wget https://github.com/grafana/alloy/releases/download/v1.14.0/alloy-v1.14.0-linux-amd64.zip
unzip alloy-v1.14.0-linux-amd64.zip
sudo mv alloy /usr/local/bin/
alloy --version

Docker

docker pull grafana/alloy:latest
docker run -v /path/to/config.alloy:/etc/alloy/config.alloy \
  grafana/alloy:latest run /etc/alloy/config.alloy --server.http.listen-addr=0.0.0.0:12345

Helm Chart

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install alloy grafana/alloy --namespace monitoring --create-namespace \
  -f values.yaml

Basic Commands

Command | Description
alloy run config.alloy | Run Alloy with the given configuration file
alloy run config.alloy --server.http.listen-addr=0.0.0.0:12345 | Run with a custom HTTP server address
alloy fmt config.alloy | Format the configuration file to stdout (also catches syntax errors)
alloy fmt -w config.alloy | Format the configuration file in place
alloy convert --source-format=prometheus --output=config.alloy prometheus.yml | Convert a Prometheus config to Alloy syntax
alloy validate config.alloy | Validate a configuration file without running it
alloy --version | Show the Alloy version
alloy --help | Show help
alloy run --help | Show options of the run command

Configuration Basics

File Extension and Syntax

Alloy uses the .alloy file extension and a component-based configuration language (HCL-like).

Basic Structure

// Comments use //

// Component instantiation: <component_type>.<label> { ... }
prometheus.scrape "example" {
  targets    = [{"__address__" = "localhost:9090"}]
  forward_to = [prometheus.remote_write.grafana.receiver]
}

// Forward data from one component to another
prometheus.remote_write "grafana" {
  endpoint {
    url = "https://prometheus.grafana.net/api/prom/push"
    headers = {
      // Alloy strings are not interpolated; read the token with sys.env()
      "Authorization" = string.format("Bearer %s", sys.env("GRAFANA_TOKEN")),
    }
  }
}

Variables and Secrets

// Alloy has no free-standing variable assignments; read environment
// variables where needed with sys.env()
prometheus.remote_write "default" {
  endpoint {
    url = sys.env("PROMETHEUS_URL")
  }
}

// Load secrets from a file with local.file
local.file "token" {
  filename  = "/etc/alloy/token"
  is_secret = true
}

// Reference another component's export
loki.write "grafana" {
  endpoint {
    url = "https://logs.grafana.net/loki/api/v1/push"
    basic_auth {
      password = local.file.token.content
    }
  }
}

Argument and Export Blocks

// Most components accept arguments
prometheus.scrape "example" {
  targets         = [{"__address__" = "localhost:9090"}]
  scrape_interval = "30s"
  scrape_timeout  = "10s"
  forward_to      = [prometheus.remote_write.grafana.receiver]
}

// Many components also export values (visible in the UI under Exports),
// e.g. prometheus.exporter.unix exports targets

Components - Sources

Prometheus Scrape

prometheus.scrape "kubernetes" {
  targets    = discovery.kubernetes.nodes.targets
  scrape_interval = "30s"
  scrape_timeout  = "10s"
  metrics_path    = "/metrics"
  scheme          = "http"

  forward_to = [prometheus.relabel.drop_internal.receiver]
}

Loki File Source

loki.source.file "app_logs" {
  targets = [
    {
      __path__ = "/var/log/app/*.log",
      job      = "app",
      env      = "production",
    }
  ]

  forward_to = [loki.relabel.add_labels.receiver]
}

OpenTelemetry Receiver (OTLP)

otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  http {
    endpoint = "0.0.0.0:4318"
  }

  output {
    traces  = [otelcol.processor.batch.default.input]
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
  }
}

Prometheus Remote Write Receiver

prometheus.receive_http "example" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9009
  }

  forward_to = [prometheus.relabel.example.receiver]
}

Loki Relabel

loki.relabel "add_labels" {
  forward_to = [loki.write.grafana.receiver]

  rule {
    source_labels = ["__path__"]
    target_label  = "job"
    replacement   = "app-logs"
  }
}

Components - Processors

Batch Processor

otelcol.processor.batch "default" {
  send_batch_size    = 1000
  timeout            = "10s"
  send_batch_max_size = 2000

  output {
    traces  = [otelcol.exporter.otlp.grafana.input]
    metrics = [otelcol.exporter.prometheus.grafana.input]
    logs    = [otelcol.exporter.loki.grafana.input]
  }
}

Filter Processor

otelcol.processor.filter "drop_internal" {
  metrics {
    // OTTL expressions; matching metrics are dropped
    metric = ["IsMatch(name, \"internal_.*\")"]
  }

  output {
    metrics = [otelcol.exporter.prometheus.grafana.input]
  }
}

Resource Detection Processor

otelcol.processor.resourcedetection "default" {
  detectors = ["env", "system", "gcp", "ec2", "azure", "docker", "kubernetes_node"]

  output {
    traces  = [otelcol.processor.batch.default.input]
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
  }
}

Attribute Processor

otelcol.processor.attributes "add_env" {
  action {
    key    = "environment"
    value  = "production"
    action = "insert"
  }

  action {
    key         = "pod_name"
    from_attribute = "k8s.pod.name"
    action      = "insert"
  }

  output {
    traces  = [otelcol.processor.batch.default.input]
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
  }
}

Memory Limiter Processor

otelcol.processor.memory_limiter "default" {
  check_interval       = "5s"
  limit_mib            = 512
  spike_limit_mib      = 256

  output {
    traces  = [otelcol.processor.batch.default.input]
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
  }
}

Span Processor (Traces)

otelcol.processor.span "extract_attributes" {
  name {
    to_attributes {
      rules = ["^/api/(?P<version>v\\d)/(?P<resource>\\w+)"]
    }
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

Components - Exporters

Prometheus Remote Write

prometheus.remote_write "grafana" {
  endpoint {
    url = "https://prometheus.grafana.net/api/prom/push"

    basic_auth {
      username = "GRAFANA_USER_ID"
      password = sys.env("GRAFANA_TOKEN")
    }

    headers = {
      "X-Custom-Header" = "value",
    }

    tls_config {
      insecure_skip_verify = false
    }

    queue_config {
      capacity = 10000
    }
  }

  // The WAL is always enabled; its location follows --storage.path
  wal {
    truncate_frequency = "2h"
  }
}

Loki Write

loki.write "grafana" {
  endpoint {
    url = "https://logs.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "GRAFANA_USER_ID"
      password = sys.env("GRAFANA_TOKEN")
    }
  }

  tenant_id = "production"
}

OpenTelemetry Exporter (OTLP)

otelcol.exporter.otlp "grafana_cloud" {
  client {
    endpoint = "tempo.grafana.net:4317"

    auth = otelcol.auth.basic.grafana.handler

    tls {
      insecure = false
    }
  }

  retry_on_failure {
    enabled       = true
    initial_interval = "5s"
    max_interval  = "30s"
    max_elapsed_time = "5m"
  }
}

OTLP Authentication

otelcol.auth.basic "grafana" {
  username = "GRAFANA_USER_ID"
  password = sys.env("GRAFANA_TOKEN")
}

// OTLP with basic auth
otelcol.exporter.otlp "example" {
  client {
    endpoint = "tempo.grafana.net:4317"
    auth = otelcol.auth.basic.grafana.handler
  }
}

Prometheus Exporter (node_exporter Style)

prometheus.exporter.unix "local_system" {
  disable_collectors = ["netdev", "netstat"]
}

prometheus.scrape "local_system" {
  targets = prometheus.exporter.unix.local_system.targets
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Components - Discovery

Kubernetes Discovery

discovery.kubernetes "cluster" {
  role = "pod"
  namespaces {
    names = ["default", "monitoring", "production"]
  }
}

discovery.relabel "annotated_pods" {
  targets = discovery.kubernetes.cluster.targets

  // Keep only pods annotated prometheus.io/scrape=true;
  // prometheus.scrape itself has no relabel block
  rule {
    source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_scrape"]
    regex         = "true"
    action        = "keep"
  }
}

prometheus.scrape "kubernetes" {
  targets    = discovery.relabel.annotated_pods.output
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Docker Discovery

discovery.docker "local" {
  host = "unix:///var/run/docker.sock"
}

prometheus.scrape "docker" {
  targets    = discovery.docker.local.targets
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Consul Discovery

discovery.consul "example" {
  server   = "localhost:8500"
  datacenter = "dc1"
  services = ["prometheus", "app"]
}

prometheus.scrape "consul" {
  targets    = discovery.consul.example.targets
  forward_to = [prometheus.remote_write.grafana.receiver]
}

File-Based Discovery

discovery.file "dynamic_targets" {
  files = ["/etc/alloy/targets.json"]
  refresh_interval = "30s"
}

prometheus.scrape "file_targets" {
  targets    = discovery.file.dynamic_targets.targets
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Metrics Collection

Prometheus Scraping with Relabeling

// prometheus.scrape has no relabel blocks: targets are relabeled
// in discovery.relabel, metrics in prometheus.relabel
discovery.relabel "instances" {
  targets = [
    {
      __address__ = "prometheus.example.com:9090",
      job         = "prometheus",
    },
    {
      __address__ = "alertmanager.example.com:9093",
      job         = "alertmanager",
    },
  ]

  rule {
    source_labels = ["__address__"]
    target_label  = "instance"
    regex         = "([^:]+)(?::\\d+)?"
    replacement   = "$1"
  }
}

prometheus.scrape "prometheus" {
  targets         = discovery.relabel.instances.output
  metrics_path    = "/metrics"
  scrape_interval = "30s"
  scrape_timeout  = "10s"
  forward_to      = [prometheus.relabel.drop_internal.receiver]
}

prometheus.relabel "drop_internal" {
  forward_to = [prometheus.remote_write.grafana.receiver]

  rule {
    source_labels = ["__name__"]
    regex         = "up|job|instance"
    action        = "keep"
  }
}

Node Exporter Integration

prometheus.exporter.unix "node_metrics" {
  // Enable exactly this set of collectors
  set_collectors = ["cpu", "diskstats", "filesystem", "loadavg", "meminfo", "netdev", "textfile"]

  textfile {
    directory = "/var/lib/node_exporter/textfile_collector"
  }
}

prometheus.scrape "node_exporter" {
  targets    = prometheus.exporter.unix.node_metrics.targets
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Custom Metrics Endpoint

prometheus.scrape "custom_app" {
  targets = [
    {
      __address__ = "app.example.com:8080",
      __metrics_path__ = "/api/metrics",
      job = "custom-app",
      env = "production",
    },
  ]

  scrape_interval = "15s"
  forward_to = [prometheus.remote_write.grafana.receiver]
}

Log Collection

File Tailing with Loki

loki.source.file "application" {
  targets = [
    {
      __path__ = "/var/log/app/app.log",
      job      = "app",
      service  = "web",
      env      = "production",
    },
    {
      __path__ = "/var/log/app/error.log",
      job      = "app",
      level    = "error",
    },
  ]

  forward_to = [loki.relabel.add_labels.receiver]
}

Journal Logs (systemd)

loki.source.journal "systemd" {
  path   = "/var/log/journal"
  labels = {
    job = "systemd",
  }

  forward_to = [loki.relabel.add_labels.receiver]
}

Parsing JSON Logs

loki.relabel "parse_json" {
  forward_to = [loki.process.extract_json.receiver]

  rule {
    source_labels = ["__path__"]
    target_label  = "filename"
  }
}

loki.process "extract_json" {
  forward_to = [loki.write.grafana.receiver]

  // Stages are blocks named stage.<name>
  stage.json {
    expressions = {
      timestamp = "ts",
      message   = "msg",
      level     = "level",
      service   = "service",
    }
  }

  stage.labels {
    values = {
      level   = "level",
      service = "service",
    }
  }

  stage.timestamp {
    source = "timestamp"
    format = "Unix"
  }
}

Multiline Logs (Stack Traces)

loki.process "multiline" {
  forward_to = [loki.write.grafana.receiver]

  stage.multiline {
    firstline = "^\\d{4}-\\d{2}-\\d{2}"
  }

  stage.regex {
    expression = "^(?P<timestamp>\\d{4}-\\d{2}-\\d{2}) (?P<level>\\w+) (?P<message>.*)"
  }

  stage.labels {
    values = {
      level = "level",
    }
  }
}

Trace Collection

OpenTelemetry Traces Pipeline

otelcol.receiver.otlp "app" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  http {
    endpoint = "0.0.0.0:4318"
  }

  output {
    traces = [otelcol.processor.memory_limiter.default.input]
  }
}

otelcol.processor.memory_limiter "default" {
  check_interval  = "5s"
  limit_mib       = 512
  spike_limit_mib = 256

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  send_batch_size = 100
  timeout         = "10s"

  output {
    traces = [otelcol.exporter.otlp.grafana.input]
  }
}

otelcol.exporter.otlp "grafana" {
  client {
    endpoint = "tempo.grafana.net:4317"
    auth = otelcol.auth.basic.grafana.handler
  }
}

otelcol.auth.basic "grafana" {
  username = "GRAFANA_USER_ID"
  password = sys.env("GRAFANA_TOKEN")
}

Jaeger Receiver

otelcol.receiver.jaeger "default" {
  protocols {
    grpc {
      endpoint = "0.0.0.0:14250"
    }

    thrift_http {
      endpoint = "0.0.0.0:14268"
    }
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

Zipkin Receiver

otelcol.receiver.zipkin "default" {
  endpoint = "0.0.0.0:9411"

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

Kubernetes Deployment

Helm Values (values.yaml)

alloy:
  configMap:
    create: true
    content: |
      otelcol.receiver.otlp "default" {
        grpc {
          endpoint = "0.0.0.0:4317"
        }
        http {
          endpoint = "0.0.0.0:4318"
        }
        output {
          traces = [otelcol.processor.batch.default.input]
        }
      }

      otelcol.processor.batch "default" {
        send_batch_size = 100
        output {
          traces = [otelcol.exporter.otlp.grafana.input]
        }
      }

      otelcol.exporter.otlp "grafana" {
        client {
          endpoint = "tempo.grafana.net:4317"
          auth     = otelcol.auth.basic.grafana.handler
        }
      }

      otelcol.auth.basic "grafana" {
        username = "GRAFANA_USER_ID"
        password = sys.env("GRAFANA_TOKEN")
      }

  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP

controller:
  type: daemonset

serviceAccount:
  create: true

rbac:
  create: true

Helm Install

# Install with custom values
helm install alloy grafana/alloy \
  --namespace monitoring \
  --create-namespace \
  -f values.yaml

# Upgrade an existing installation
helm upgrade alloy grafana/alloy \
  --namespace monitoring \
  -f values.yaml

# Uninstall
helm uninstall alloy --namespace monitoring

DaemonSet Example

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: alloy
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: alloy
  template:
    metadata:
      labels:
        app: alloy
    spec:
      serviceAccountName: alloy
      containers:
      - name: alloy
        image: grafana/alloy:latest
        args:
        - run
        - /etc/alloy/config.alloy
        - --server.http.listen-addr=0.0.0.0:12345
        ports:
        - name: http
          containerPort: 12345
        - name: otlp-grpc
          containerPort: 4317
        - name: otlp-http
          containerPort: 4318
        volumeMounts:
        - name: config
          mountPath: /etc/alloy
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: alloy-config
      - name: varlog
        hostPath:
          path: /var/log
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: alloy-config
  namespace: monitoring
data:
  config.alloy: |
    prometheus.scrape "kubernetes" {
      targets = discovery.kubernetes.nodes.targets
      forward_to = [prometheus.remote_write.grafana.receiver]
    }

    discovery.kubernetes "nodes" {
      role = "node"
    }

    prometheus.remote_write "grafana" {
      endpoint {
        url = "https://prometheus.grafana.net/api/prom/push"
        basic_auth {
          username = "GRAFANA_USER_ID"
          password = sys.env("GRAFANA_TOKEN")
        }
      }
    }

ServiceMonitor Example

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
  namespace: production
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
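
A ServiceMonitor only has an effect if something consumes it. Alloy can watch ServiceMonitor objects itself through the prometheus.operator.servicemonitors component; a minimal sketch, assuming the prometheus.remote_write "grafana" component from the earlier examples:

```alloy
// Scrape the targets selected by ServiceMonitor objects in all namespaces
prometheus.operator.servicemonitors "default" {
  forward_to = [prometheus.remote_write.grafana.receiver]
}
```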

Docker Configuration

Compose File

version: '3.8'
services:
  alloy:
    image: grafana/alloy:latest
    container_name: alloy
    command:
      - run
      - /etc/alloy/config.alloy
      - --server.http.listen-addr=0.0.0.0:12345
    ports:
      - "12345:12345"
      - "4317:4317"
      - "4318:4318"
    volumes:
      - ./alloy-config.alloy:/etc/alloy/config.alloy
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/log:/var/log:ro
    environment:
      - GRAFANA_TOKEN=${GRAFANA_TOKEN}
    networks:
      - monitoring

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge

Running the Container

# Run with a configuration file
docker run -d \
  --name alloy \
  -v /path/to/config.alloy:/etc/alloy/config.alloy \
  -p 12345:12345 \
  -p 4317:4317 \
  -p 4318:4318 \
  -e GRAFANA_TOKEN=$GRAFANA_TOKEN \
  grafana/alloy:latest run /etc/alloy/config.alloy \
  --server.http.listen-addr=0.0.0.0:12345

# Show logs
docker logs -f alloy

# Stop the container
docker stop alloy
docker rm alloy

Debugging

Web UI Access

# The Alloy UI (default port 12345) shows component status,
# arguments, exports, and health
# Browser: http://localhost:12345
curl http://localhost:12345/-/ready
curl http://localhost:12345/-/healthy

Log Levels

Alloy's own log level is set in the configuration file via the logging block, not via a command-line flag:

logging {
  level  = "debug"   // debug, info (default), warn, error
  format = "logfmt"  // logfmt or json
}

Validating Configuration

# Check syntax by formatting to stdout (errors abort)
alloy fmt config.alloy

# Validate a configuration file without running it
alloy validate config.alloy

Inspecting Components

# The web UI lists every running component with its arguments,
# exports, health, and debug info:
#   http://localhost:12345
# Per-component documentation is in the component reference:
#   https://grafana.com/docs/alloy/latest/reference/components/

Pprof Profiling

# Alloy serves Go pprof endpoints on its main HTTP server
alloy run config.alloy --server.http.listen-addr=0.0.0.0:12345

# Capture a heap profile
curl http://localhost:12345/debug/pprof/heap > heap.prof
go tool pprof heap.prof

# Goroutine dump
curl http://localhost:12345/debug/pprof/goroutine

Self-Tracing

// Alloy can emit traces about its own pipelines via the
// tracing block in the configuration file
tracing {
  sampling_fraction = 0.1
  write_to          = [otelcol.exporter.otlp.grafana.input]
}

Environment Variables

Most of these variables are conventions used by the examples in this document and are read in the configuration with sys.env().

Variable | Description
ALLOY_CONFIG_FILE | Path to the configuration file
GRAFANA_TOKEN | Authentication token for Grafana Cloud
PROMETHEUS_URL | Prometheus server endpoint
LOKI_URL | Loki API endpoint
TEMPO_URL | Tempo API endpoint
OTEL_EXPORTER_OTLP_ENDPOINT | OpenTelemetry OTLP exporter endpoint
OTEL_EXPORTER_OTLP_HEADERS | OTLP exporter headers
OTEL_SDK_DISABLED | Disable the OpenTelemetry SDK
LOG_LEVEL | Logging level: debug, info, warn, error
ALLOY_REMOTE_CONFIG_URL | Remote configuration URL
NODE_NAME | Node identifier (k8s node name)
POD_NAME | Pod name (for k8s)
NAMESPACE | Kubernetes namespace
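
The OTEL_* variables above configure an instrumented application's SDK rather than Alloy itself. Pointing an application at a local Alloy OTLP receiver might look like this (the endpoint and header values are assumptions matching the receiver examples above):

```shell
# Send the application's telemetry to Alloy's OTLP/HTTP receiver
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
# Optional extra headers, e.g. a (hypothetical) tenant ID
export OTEL_EXPORTER_OTLP_HEADERS="x-scope-orgid=production"

echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```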

Using Environment Variables

# Export variables
export GRAFANA_TOKEN="glc_xxx"
export PROMETHEUS_URL="https://prometheus.grafana.net"

# Run Alloy
alloy run config.alloy

# Reference in config.alloy via sys.env() (strings are not interpolated)
prometheus.remote_write "grafana" {
  endpoint {
    url = string.format("%s/api/prom/push", sys.env("PROMETHEUS_URL"))
    basic_auth {
      password = sys.env("GRAFANA_TOKEN")
    }
  }
}

Resources