Grafana Alloy Commands
Grafana Alloy is Grafana's flexible, vendor-neutral distribution of the OpenTelemetry Collector for collecting, processing, and exporting telemetry data (metrics, logs, and traces). As the successor to Grafana Agent, it uses a component-based configuration language (Alloy syntax, formerly River).
Installation
Linux Package Repositories
Debian/Ubuntu
sudo mkdir -p /etc/apt/keyrings/
wget -qO - https://apt.grafana.com/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/grafana.gpg
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install alloy
RHEL/CentOS/Fedora
sudo tee /etc/yum.repos.d/grafana.repo << EOF
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
EOF
sudo dnf install alloy
macOS
brew install grafana/grafana/alloy
alloy --version
Binary Download
# Download a release binary (check https://github.com/grafana/alloy/releases for the latest version)
wget https://github.com/grafana/alloy/releases/download/v<VERSION>/alloy-linux-amd64.zip
unzip alloy-linux-amd64.zip
sudo mv alloy-linux-amd64 /usr/local/bin/alloy
alloy --version
Docker
docker pull grafana/alloy:latest
docker run -v /path/to/config.alloy:/etc/alloy/config.alloy \
grafana/alloy:latest run /etc/alloy/config.alloy --server.http.listen-addr=0.0.0.0:12345
Helm Chart
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install alloy grafana/alloy --namespace monitoring --create-namespace \
-f values.yaml
Basic Commands
| Command | Description |
|---|---|
| alloy run config.alloy | Run Alloy with the specified configuration file |
| alloy run config.alloy --server.http.listen-addr=0.0.0.0:12345 | Run with a custom HTTP server address for the UI |
| alloy fmt config.alloy | Format a configuration file and print the result (also catches syntax errors) |
| alloy fmt -w config.alloy | Format a configuration file in place |
| alloy validate config.alloy | Validate a configuration without running it (available in recent releases) |
| alloy convert --source-format=prometheus --output=config.alloy prometheus.yml | Convert a Prometheus (or promtail/static) config to Alloy syntax |
| alloy --version | Display the Alloy version |
| alloy --help | Show help information |
| alloy run --help | Show run command options |
Configuration Basics
File Extension and Syntax
Alloy uses .alloy file extension with component-based configuration language (similar to HCL).
Basic Structure
// Comments use //
// Component instantiation: <component_type>.<unique_name> { ... }
prometheus.scrape "example" {
targets = [{"__address__" = "localhost:9090"}]
forward_to = [prometheus.remote_write.grafana.receiver]
}
// Components are wired together by referencing each other's exports.
// Note: "${GRAFANA_TOKEN}" inside a string is NOT expanded by Alloy;
// use the sys.env() stdlib function instead.
prometheus.remote_write "grafana" {
  endpoint {
    url = "https://prometheus.grafana.net/api/prom/push"
    basic_auth {
      username = "GRAFANA_USER_ID"
      password = sys.env("GRAFANA_TOKEN")
    }
  }
}
Variables and Secrets
// Alloy has no free-standing variables; values come from component
// exports and stdlib functions such as sys.env()
prometheus.remote_write "grafana" {
  endpoint {
    url = sys.env("PROMETHEUS_URL")
    basic_auth {
      password = sys.env("GRAFANA_TOKEN")
    }
  }
}
// Read a secret from disk with the local.file component
local.file "token" {
  filename  = "/var/lib/alloy/token"
  is_secret = true
}
// Reference the export elsewhere as: local.file.token.content
Argument and Export Blocks
// Most components accept arguments
prometheus.scrape "example" {
targets = [{"__address__" = "localhost:9090"}]
scrape_interval = "30s"
scrape_timeout = "10s"
forward_to = [prometheus.remote_write.grafana.receiver]
}
// Many components export values (visible in the UI)
// Example: discovery.kubernetes and prometheus.exporter.unix both export targets
Components - Sources
Prometheus Scrape
prometheus.scrape "kubernetes" {
targets = discovery.kubernetes.nodes.targets
scrape_interval = "30s"
scrape_timeout = "10s"
metrics_path = "/metrics"
scheme = "http"
forward_to = [prometheus.relabel.drop_internal.receiver]
}
Loki File Source
loki.source.file "app_logs" {
targets = [
{
__path__ = "/var/log/app/*.log",
job = "app",
env = "production",
}
]
forward_to = [loki.relabel.add_labels.receiver]
}
OpenTelemetry Receiver (OTLP)
otelcol.receiver.otlp "default" {
grpc {
endpoint = "0.0.0.0:4317"
}
http {
endpoint = "0.0.0.0:4318"
}
output {
traces = [otelcol.processor.batch.default.input]
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
}
}
Prometheus Remote Write Receiver
prometheus.receive_http "example" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9009
  }
  forward_to = [prometheus.relabel.example.receiver]
}
Loki Relabel
loki.relabel "add_labels" {
forward_to = [loki.write.grafana.receiver]
rule {
source_labels = ["__path__"]
target_label = "job"
replacement = "app-logs"
}
}
Components - Processors
Batch Processor
otelcol.processor.batch "default" {
send_batch_size = 1000
timeout = "10s"
send_batch_max_size = 2000
output {
traces = [otelcol.exporter.otlp.grafana.input]
metrics = [otelcol.exporter.prometheus.grafana.input]
logs = [otelcol.exporter.loki.grafana.input]
}
}
Filter Processor
otelcol.processor.filter "drop_internal" {
  error_mode = "ignore"
  metrics {
    // OTTL conditions: matching data points are dropped
    metric = [
      "name matches \"internal_.*\"",
    ]
  }
  output {
    metrics = [otelcol.exporter.prometheus.grafana.input]
  }
}
Resource Detection Processor
otelcol.processor.resourcedetection "default" {
  detectors = ["env", "system", "gcp", "ec2", "azure", "docker", "kubernetes_node"]
output {
traces = [otelcol.processor.batch.default.input]
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
}
}
Attribute Processor
otelcol.processor.attributes "add_env" {
action {
key = "environment"
value = "production"
action = "insert"
}
action {
key = "pod_name"
from_attribute = "k8s.pod.name"
action = "insert"
}
output {
traces = [otelcol.processor.batch.default.input]
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
}
}
Memory Limiter Processor
otelcol.processor.memory_limiter "default" {
check_interval = "5s"
limit_mib = 512
spike_limit_mib = 256
output {
traces = [otelcol.processor.batch.default.input]
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
}
}
Span Processor (Traces)
otelcol.processor.span "extract_attributes" {
name {
to_attributes {
rules = ["^/api/(?P<version>v\\d)/(?P<resource>\\w+)"]
}
}
output {
traces = [otelcol.processor.batch.default.input]
}
}
Components - Exporters
Prometheus Remote Write
prometheus.remote_write "grafana" {
  endpoint {
    url = "https://prometheus.grafana.net/api/prom/push"
    basic_auth {
      username = "GRAFANA_USER_ID"
      password = sys.env("GRAFANA_TOKEN")
    }
    headers = {
      "X-Custom-Header" = "value",
    }
    tls_config {
      insecure_skip_verify = false
    }
    queue_config {
      capacity = 10000
    }
  }
  // The WAL is always enabled; its location comes from --storage.path
  wal {
    truncate_frequency = "2h"
  }
}
Loki Write
loki.write "grafana" {
  endpoint {
    url       = "https://logs.grafana.net/loki/api/v1/push"
    tenant_id = "production"
    basic_auth {
      username = "GRAFANA_USER_ID"
      password = sys.env("GRAFANA_TOKEN")
    }
  }
}
OpenTelemetry Exporter (OTLP)
otelcol.exporter.otlp "grafana_cloud" {
client {
endpoint = "tempo.grafana.net:4317"
    auth = otelcol.auth.basic.grafana.handler
tls {
insecure = false
}
}
retry_on_failure {
enabled = true
initial_interval = "5s"
max_interval = "30s"
max_elapsed_time = "5m"
}
}
OTLP Authentication
otelcol.auth.basic "grafana" {
username = "GRAFANA_USER_ID"
  password = sys.env("GRAFANA_TOKEN")
}
// OTLP with basic auth
otelcol.exporter.otlp "example" {
client {
endpoint = "tempo.grafana.net:4317"
auth = otelcol.auth.basic.grafana.handler
}
}
Prometheus Exporter (Node Exporter Style)
prometheus.exporter.unix "local_system" {
  disable_collectors = ["netdev", "netstat"]
}
prometheus.scrape "local_system" {
targets = prometheus.exporter.unix.local_system.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
Components - Discovery
Kubernetes Discovery
discovery.kubernetes "cluster" {
role = "pod"
namespaces {
names = ["default", "monitoring", "production"]
}
}
// Target relabeling lives in a separate discovery.relabel component,
// not inside prometheus.scrape
discovery.relabel "annotated_pods" {
  targets = discovery.kubernetes.cluster.targets
  rule {
    source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_scrape"]
    regex         = "true"
    action        = "keep"
  }
}
prometheus.scrape "kubernetes" {
  targets    = discovery.relabel.annotated_pods.output
  forward_to = [prometheus.remote_write.grafana.receiver]
}
Docker Discovery
discovery.docker "local" {
host = "unix:///var/run/docker.sock"
}
prometheus.scrape "docker" {
targets = discovery.docker.local.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
Consul Discovery
discovery.consul "example" {
server = "localhost:8500"
datacenter = "dc1"
services = ["prometheus", "app"]
}
prometheus.scrape "consul" {
targets = discovery.consul.example.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
File-based Discovery
discovery.file "dynamic_targets" {
files = ["/etc/alloy/targets.json"]
refresh_interval = "30s"
}
prometheus.scrape "file_targets" {
targets = discovery.file.dynamic_targets.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
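discovery.file reads targets in the same JSON/YAML format as Prometheus file_sd. A hypothetical /etc/alloy/targets.json might look like:

```json
[
  {
    "targets": ["app1.example.com:8080", "app2.example.com:8080"],
    "labels": {
      "job": "app",
      "env": "production"
    }
  }
]
```

Edits to the file are picked up on the next refresh_interval without restarting Alloy.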
Metrics Collection
Prometheus Scraping with Relabeling
// Target relabeling (discovery.relabel) and metric relabeling
// (prometheus.relabel) are separate components in Alloy
discovery.relabel "instances" {
  targets = [
    {
      __address__ = "prometheus.example.com:9090",
      job         = "prometheus",
    },
    {
      __address__ = "alertmanager.example.com:9093",
      job         = "alertmanager",
    },
  ]
  rule {
    source_labels = ["__address__"]
    target_label  = "instance"
    regex         = "([^:]+)(?::\\d+)?"
    replacement   = "${1}"
  }
}
prometheus.scrape "prometheus" {
  targets         = discovery.relabel.instances.output
  metrics_path    = "/metrics"
  scrape_interval = "30s"
  scrape_timeout  = "10s"
  forward_to      = [prometheus.relabel.keep_essential.receiver]
}
prometheus.relabel "keep_essential" {
  forward_to = [prometheus.remote_write.grafana.receiver]
  rule {
    source_labels = ["__name__"]
    regex         = "up|scrape_.*"
    action        = "keep"
  }
}
Node Exporter Integration
prometheus.exporter.unix "node_metrics" {
  set_collectors     = ["cpu", "diskstats", "filesystem", "loadavg", "meminfo", "netdev", "textfile"]
  disable_collectors = ["netstat"]
  textfile {
    directory = "/var/lib/node_exporter/textfile_collector"
  }
}
prometheus.scrape "node_exporter" {
targets = prometheus.exporter.unix.node_metrics.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
Custom Metrics Endpoint
prometheus.scrape "custom_app" {
targets = [
{
__address__ = "app.example.com:8080",
__metrics_path__ = "/api/metrics",
job = "custom-app",
env = "production",
},
]
scrape_interval = "15s"
forward_to = [prometheus.remote_write.grafana.receiver]
}
Log Collection
File Tailing with Loki
loki.source.file "application" {
targets = [
{
__path__ = "/var/log/app/app.log",
job = "app",
service = "web",
env = "production",
},
{
__path__ = "/var/log/app/error.log",
job = "app",
level = "error",
},
]
forward_to = [loki.relabel.add_labels.receiver]
}
Journal Logs (systemd)
loki.source.journal "systemd" {
path = "/var/log/journal"
labels = {
job = "systemd",
}
forward_to = [loki.relabel.add_labels.receiver]
}
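Journal fields (prefixed __journal_) only become labels if mapped through relabel rules. A minimal sketch, reusing the rules export of a loki.relabel component (component names are illustrative):

```
// Rules-only component: forward_to can be empty when only the
// rules export is consumed
loki.relabel "journal" {
  forward_to = []
  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }
}
loki.source.journal "systemd_units" {
  relabel_rules = loki.relabel.journal.rules
  forward_to    = [loki.write.grafana.receiver]
}
```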
Parse JSON Logs
loki.relabel "parse_json" {
  forward_to = [loki.process.extract_json.receiver]
  // loki.source.file adds a filename label automatically
  rule {
    source_labels = ["filename"]
    target_label  = "log_file"
  }
}
loki.process "extract_json" {
  forward_to = [loki.write.grafana.receiver]
  stage.json {
    expressions = {
      timestamp = "ts",
      message   = "msg",
      level     = "level",
      service   = "service",
    }
  }
  stage.labels {
    values = {
      level   = "level",
      service = "service",
    }
  }
  stage.timestamp {
    source = "timestamp"
    format = "Unix"
  }
}
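For reference, a hypothetical log line that the pipeline above would handle:

```
{"ts": 1712345678, "level": "error", "service": "web", "msg": "connection refused"}
```

The json stage extracts the four fields, the labels stage promotes level and service to Loki labels, and the timestamp stage replaces the ingestion time with ts (interpreted as Unix seconds).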
Multiline Logs (Stack Traces)
loki.process "multiline" {
  forward_to = [loki.write.grafana.receiver]
  stage.multiline {
    firstline = "^\\d{4}-\\d{2}-\\d{2}"
  }
  stage.regex {
    expression = "^(?P<timestamp>\\d{4}-\\d{2}-\\d{2}) (?P<level>\\w+) (?P<message>.*)"
  }
  stage.labels {
    values = {
      level = "level",
    }
  }
}
Traces Collection
OpenTelemetry Traces Pipeline
otelcol.receiver.otlp "app" {
grpc {
endpoint = "0.0.0.0:4317"
}
http {
endpoint = "0.0.0.0:4318"
}
output {
traces = [otelcol.processor.memory_limiter.default.input]
}
}
otelcol.processor.memory_limiter "default" {
check_interval = "5s"
limit_mib = 512
spike_limit_mib = 256
output {
traces = [otelcol.processor.batch.default.input]
}
}
otelcol.processor.batch "default" {
send_batch_size = 100
timeout = "10s"
output {
traces = [otelcol.exporter.otlp.grafana.input]
}
}
otelcol.exporter.otlp "grafana" {
client {
endpoint = "tempo.grafana.net:4317"
auth = otelcol.auth.basic.grafana.handler
}
}
otelcol.auth.basic "grafana" {
username = "GRAFANA_USER_ID"
  password = sys.env("GRAFANA_TOKEN")
}
Jaeger Receiver
otelcol.receiver.jaeger "default" {
protocols {
grpc {
endpoint = "0.0.0.0:14250"
}
thrift_http {
endpoint = "0.0.0.0:14268"
}
}
output {
traces = [otelcol.processor.batch.default.input]
}
}
Zipkin Receiver
otelcol.receiver.zipkin "default" {
endpoint = "0.0.0.0:9411"
output {
traces = [otelcol.processor.batch.default.input]
}
}
Kubernetes Deployment
Helm Values (values.yaml)
# values.yaml for the grafana/alloy chart:
# the configuration goes under alloy.configMap.content
alloy:
  configMap:
    content: |
      otelcol.receiver.otlp "default" {
        grpc {
          endpoint = "0.0.0.0:4317"
        }
        http {
          endpoint = "0.0.0.0:4318"
        }
        output {
          traces = [otelcol.processor.batch.default.input]
        }
      }
      otelcol.processor.batch "default" {
        send_batch_size = 100
        output {
          traces = [otelcol.exporter.otlp.grafana.input]
        }
      }
      otelcol.exporter.otlp "grafana" {
        client {
          endpoint = "tempo.grafana.net:4317"
          auth     = otelcol.auth.basic.grafana.handler
        }
      }
      otelcol.auth.basic "grafana" {
        username = "GRAFANA_USER_ID"
        password = sys.env("GRAFANA_TOKEN")
      }
serviceAccount:
  create: true
rbac:
  create: true
controller:
  type: daemonset   # or "deployment" / "statefulset"
  replicas: 1       # used for deployment/statefulset only
Helm Install
# Install with custom values
helm install alloy grafana/alloy \
--namespace monitoring \
--create-namespace \
-f values.yaml
# Upgrade existing installation
helm upgrade alloy grafana/alloy \
--namespace monitoring \
-f values.yaml
# Uninstall
helm uninstall alloy --namespace monitoring
DaemonSet Example
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: alloy
namespace: monitoring
spec:
selector:
matchLabels:
app: alloy
template:
metadata:
labels:
app: alloy
spec:
serviceAccountName: alloy
containers:
- name: alloy
image: grafana/alloy:latest
args:
- run
- /etc/alloy/config.alloy
- --server.http.listen-addr=0.0.0.0:12345
ports:
- name: http
containerPort: 12345
- name: otlp-grpc
containerPort: 4317
- name: otlp-http
containerPort: 4318
volumeMounts:
- name: config
mountPath: /etc/alloy
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
name: alloy-config
- name: varlog
hostPath:
path: /var/log
---
apiVersion: v1
kind: ConfigMap
metadata:
name: alloy-config
namespace: monitoring
data:
config.alloy: |
prometheus.scrape "kubernetes" {
targets = discovery.kubernetes.nodes.targets
forward_to = [prometheus.remote_write.grafana.receiver]
}
discovery.kubernetes "nodes" {
role = "node"
}
prometheus.remote_write "grafana" {
endpoint {
url = "https://prometheus.grafana.net/api/prom/push"
basic_auth {
username = "GRAFANA_USER_ID"
          password = sys.env("GRAFANA_TOKEN")
}
}
}
ServiceMonitor Example
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: app-metrics
namespace: production
spec:
selector:
matchLabels:
app: myapp
endpoints:
- port: metrics
interval: 30s
path: /metrics
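A ServiceMonitor is normally consumed by the Prometheus Operator; Alloy can also scrape ServiceMonitor-selected targets directly via its prometheus.operator.servicemonitors component. A minimal sketch (the remote-write component name is an assumption, and Alloy needs RBAC to watch the CRDs):

```
prometheus.operator.servicemonitors "services" {
  namespaces = ["production"]
  forward_to = [prometheus.remote_write.grafana.receiver]
}
```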
Docker Configuration
Compose File
version: '3.8'
services:
alloy:
image: grafana/alloy:latest
container_name: alloy
command:
- run
- /etc/alloy/config.alloy
- --server.http.listen-addr=0.0.0.0:12345
ports:
- "12345:12345"
- "4317:4317"
- "4318:4318"
volumes:
- ./alloy-config.alloy:/etc/alloy/config.alloy
- /var/run/docker.sock:/var/run/docker.sock
- /var/log:/var/log:ro
environment:
- GRAFANA_TOKEN=${GRAFANA_TOKEN}
networks:
- monitoring
prometheus:
image: prom/prometheus:latest
container_name: prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
networks:
- monitoring
networks:
monitoring:
driver: bridge
Run Container
# Run with config file
docker run -d \
--name alloy \
-v /path/to/config.alloy:/etc/alloy/config.alloy \
-p 12345:12345 \
-p 4317:4317 \
-p 4318:4318 \
-e GRAFANA_TOKEN=$GRAFANA_TOKEN \
grafana/alloy:latest run /etc/alloy/config.alloy \
--server.http.listen-addr=0.0.0.0:12345
# View logs
docker logs -f alloy
# Stop container
docker stop alloy
docker rm alloy
Debugging
Web UI Access
# Access the Alloy UI (default port 12345)
# Shows each component's health, arguments, and exports
curl http://localhost:12345
# Browser: http://localhost:12345
Log Levels
// Logging is configured in the configuration file itself via the
// top-level logging block (there is no --log.level flag)
logging {
  level  = "debug"   // debug, info (default), warn, error
  format = "logfmt"  // logfmt or json
}
Validate Configuration
# Check syntax by formatting (prints to stdout, non-zero exit on parse errors)
alloy fmt config.alloy
# Validate the configuration without running it (available in recent releases)
alloy validate config.alloy
Inspect Components
# There is no CLI for listing components; use the web UI instead:
# http://localhost:12345 shows every component instance with its
# health, arguments, and exports
# Component documentation lives in the reference:
# https://grafana.com/docs/alloy/latest/reference/components/
Pprof Profiling
# Alloy's HTTP server exposes Go pprof endpoints under /debug/pprof
curl http://localhost:12345/debug/pprof/heap > heap.prof
go tool pprof heap.prof
# Goroutine profile
curl http://localhost:12345/debug/pprof/goroutine
Traces Inspection
// Alloy can export its own internal traces via the top-level tracing block
tracing {
  sampling_fraction = 0.1
  write_to          = [otelcol.exporter.otlp.grafana.input]
}
Environment Variables
| Variable | Description |
|---|---|
| CONFIG_FILE | Configuration file path used by the packaged systemd service (set in /etc/default/alloy or /etc/sysconfig/alloy) |
| CUSTOM_ARGS | Extra command-line flags passed to the packaged systemd service |
| GRAFANA_TOKEN | User-defined convention for a Grafana Cloud token, read with sys.env("GRAFANA_TOKEN") |
| PROMETHEUS_URL | User-defined Prometheus push endpoint, read with sys.env() |
| LOKI_URL | User-defined Loki push endpoint, read with sys.env() |
| TEMPO_URL | User-defined Tempo endpoint, read with sys.env() |
| HTTP_PROXY / HTTPS_PROXY / NO_PROXY | Standard proxy variables honored for outbound connections |
| HOSTNAME / NODE_NAME / POD_NAME | Typically injected in Kubernetes (downward API) and read with sys.env() |
Note: Alloy does not consume most of these itself; any environment variable can be referenced in configuration with sys.env("NAME").
Using Environment Variables
# Export variables
export GRAFANA_TOKEN="glc_xxx"
export PROMETHEUS_URL="https://prometheus.grafana.net/api/prom/push"
# Run Alloy
alloy run config.alloy
# In config.alloy, read them with sys.env() -- note that "${VAR}"
# inside a string literal is NOT expanded by Alloy
prometheus.remote_write "grafana" {
  endpoint {
    url = sys.env("PROMETHEUS_URL")
    basic_auth {
      password = sys.env("GRAFANA_TOKEN")
    }
  }
}
Resources
- Official Documentation: https://grafana.com/docs/alloy/latest/
- Configuration Reference: https://grafana.com/docs/alloy/latest/reference/
- Components Catalog: https://grafana.com/docs/alloy/latest/reference/components/
- GitHub Repository: https://github.com/grafana/alloy
- Release Notes: https://github.com/grafana/alloy/releases
- Grafana Cloud Integration: https://grafana.com/products/cloud/
- OpenTelemetry Spec: https://opentelemetry.io/docs/
- Community Discussions: https://community.grafana.com/
- Tutorials & Examples: https://grafana.com/docs/alloy/latest/tutorials/
- Helm Chart: https://github.com/grafana/helm-charts/tree/main/charts/alloy
- Docker Images: https://hub.docker.com/r/grafana/alloy
- Feedback & Issues: https://github.com/grafana/alloy/issues