RedELK

RedELK is Outflank's open-source SIEM purpose-built for red teams. It centralizes, correlates, and visualizes logs in near real time from C2 frameworks (Cobalt Strike out of the box; Covenant, Sliver, and others via custom parsers), redirectors, and phishing infrastructure.

Prerequisites:

  • Docker and Docker Compose
  • 8GB+ RAM (16GB recommended for large deployments)
  • Linux or macOS (Windows with WSL2)
  • Git
git clone https://github.com/outflanknl/RedELK.git
cd RedELK
# Navigate to docker directory
cd docker

# Copy and customize environment file
cp .env.example .env

# Start all services (Elasticsearch, Kibana, Logstash, Filebeat)
docker-compose up -d

# Verify services are running
docker-compose ps

# Check Elasticsearch health
curl -u elastic:password http://localhost:9200/_cluster/health
# Install Elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.10.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.10.0-linux-x86_64.tar.gz
cd elasticsearch-8.10.0/
./bin/elasticsearch

# Install Kibana (separate terminal)
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.10.0-linux-x86_64.tar.gz
tar -xzf kibana-8.10.0-linux-x86_64.tar.gz
cd kibana-8.10.0/
./bin/kibana

# Install Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.10.0-linux-x86_64.tar.gz
tar -xzf logstash-8.10.0-linux-x86_64.tar.gz
cd logstash-8.10.0/
./bin/logstash -f /etc/logstash/conf.d/c2-parsing.conf

RedELK consists of four primary components:

| Component     | Purpose                                               | Port                |
|---------------|-------------------------------------------------------|---------------------|
| Elasticsearch | Search and analytics engine; stores logs              | 9200, 9300          |
| Kibana        | Web UI for visualization and dashboards               | 5601                |
| Logstash      | Log parsing, enrichment, and filtering                | 5044 (Beats input)  |
| Filebeat      | Lightweight log shipper on C2 servers and redirectors | n/a (ships to 5044) |

Data flow: C2/Redirector → Filebeat → Logstash → Elasticsearch ← Kibana

# After docker-compose up -d
# Navigate to http://localhost:5601
# Default credentials: elastic / password (from .env)
# Check cluster status
curl -u elastic:password http://localhost:9200/_cluster/health

# List indices
curl -u elastic:password http://localhost:9200/_cat/indices

# Check document count
curl -u elastic:password http://localhost:9200/_cat/count?v
# Create RedELK index pattern in Kibana
# Stack Management > Index Patterns > Create index pattern
# Pattern: redelk-*
# Timestamp field: @timestamp

# Import the predefined RedELK dashboards (saved objects) via
# Stack Management > Saved Objects > Import, or the Kibana import API
# (the .ndjson export filename below is illustrative):
curl -u elastic:password -X POST "http://localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" --form file=@redelk-dashboards.ndjson
# filebeat.yml configuration
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/cobaltstrike/logs/*.log
    fields:
      log_source: cobalt_strike
      team_server: ts01.red.local
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
# filebeat.yml output
output.logstash:
  hosts: ["localhost:5044"]  # must match the Logstash beats input port
  ssl.enabled: true
  ssl.certificate_authorities: ["/path/to/ca.crt"]
  ssl.certificate: "/path/to/cert.crt"
  ssl.key: "/path/to/key.key"
# Enable Covenant logging output
# In Covenant server config, set up Syslog output to Logstash
# Configure Logstash filter to parse Covenant JSON logs

# Logstash filter example
filter {
  if [log_source] == "covenant" {
    json {
      source => "message"
    }
    mutate {
      add_field => { "c2_framework" => "covenant" }
    }
  }
}
# Sliver has no built-in log-streaming command; the server writes JSON
# logs locally (typically under ~/.sliver/logs/ on the team server),
# so ship them with Filebeat and decode the JSON in Logstash:
filebeat.inputs:
  - type: log
    paths:
      - /root/.sliver/logs/*.log
    fields:
      log_source: sliver
# For custom C2 frameworks, send logs to Logstash via syslog or HTTP
# Logstash input filter
input {
  syslog {
    port => 5514
    type => "custom_c2"
  }
}

filter {
  if [type] == "custom_c2" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:beacon_id}\] %{GREEDYDATA:action}" }
    }
  }
}
# Configure Apache to log redirector traffic
# In Apache vhost config
CustomLog /var/log/apache2/redelk-access.log combined
ErrorLog /var/log/apache2/redelk-error.log

# filebeat.yml for Apache logs
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/redelk-access.log
    fields:
      log_source: apache_redirector
      redirector_id: redir01
    multiline.pattern: '^\d{1,3}\.\d{1,3}\.'
    multiline.negate: true
    multiline.match: after
# Nginx configuration
access_log /var/log/nginx/redelk-access.log combined;
error_log /var/log/nginx/redelk-error.log warn;

# filebeat.yml for Nginx
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/redelk-access.log
    fields:
      log_source: nginx_redirector
# HAProxy configuration for logging
# (local0 is routed by rsyslog to /var/log/haproxy.log on most distros)
global
    log /dev/log local0

frontend redelk_frontend
    bind *:80
    mode http
    log global
    option httplog
    default_backend redelk_backend

backend redelk_backend
    balance roundrobin
    server c2_server 10.0.0.10:8080

# filebeat.yml for HAProxy
filebeat.inputs:
  - type: log
    paths:
      - /var/log/haproxy.log
    fields:
      log_source: haproxy
# GoPhish sends logs via HTTP to Logstash
# Logstash HTTP input filter
input {
  http {
    port => 8080
    type => "gophish"
  }
}

filter {
  if [type] == "gophish" {
    json {
      source => "message"
    }
    mutate {
      add_field => {
        "campaign"   => "%{[campaign_name]}"
        "email_sent" => "%{[email_sent]}"
      }
    }
  }
}
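The enrichment logic can be sanity-checked offline. This Python sketch mimics the filter above; the `campaign_name`/`email_sent` field names follow the filter, but real GoPhish webhook payloads may differ:

```python
import json

def enrich_gophish_event(raw_body: str) -> dict:
    # Mimics the Logstash filter: parse the JSON body and copy
    # campaign_name into a top-level "campaign" field. Field names
    # are illustrative, not a guaranteed GoPhish schema.
    event = json.loads(raw_body)
    event["campaign"] = event.get("campaign_name", "unknown")
    event["type"] = "gophish"
    return event

body = '{"campaign_name": "op_alpha", "email_sent": true}'
print(enrich_gophish_event(body)["campaign"])  # op_alpha
```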
# Create custom Logstash pipeline for email events
input {
  file {
    path => "/opt/gophish/logs/gophish.log"
    start_position => "beginning"
    type => "email_tracking"
  }
}

filter {
  if [type] == "email_tracking" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:recipient}\] %{DATA:status} %{GREEDYDATA:details}" }
    }
    mutate {
      add_field => { "event_type" => "email_delivery" }
    }
  }
}
# Track phishing clicks via GoPhish webhook
# Logstash webhook input
input {
  http {
    port => 9000
    type => "phishing_click"
  }
}

filter {
  if [type] == "phishing_click" {
    json { source => "message" }
    mutate {
      add_field => {
        "event_type" => "click"
        "ip_address" => "%{[headers][x-forwarded-for]}"
        "user_agent" => "%{[headers][user-agent]}"
      }
    }
    geoip {
      source => "ip_address"
    }
  }
}
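One caveat: the sprintf reference copies the whole X-Forwarded-For header, which may contain a comma-separated proxy chain, while geoip needs a single address. The left-most hop is the original client, as this Python sketch (for offline triage of exported click events) illustrates:

```python
def client_ip_from_xff(xff_header: str) -> str:
    """Return the left-most (original client) address from an
    X-Forwarded-For header value, the address the geoip lookup
    should receive."""
    return xff_header.split(",")[0].strip()

print(client_ip_from_xff("203.0.113.7, 10.0.0.2"))  # 203.0.113.7
```

In the Logstash pipeline itself, a `mutate { split => ... }` on the header before the geoip filter achieves the same effect.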
filebeat.config.modules:
  enabled: false

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/redelk/*.log
    fields:
      environment: production
      team: red_team

output.logstash:
  hosts: ["logstash.internal:5044"]  # must match the Logstash beats input port
  loadbalance: true
  worker: 4

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /opt/cobaltstrike/logs/*.log
    fields:
      log_source: cobalt_strike
      team_server: ts01.red.local
      operator: operator1
      campaign: operation_alpha
      customer: acme_corp
    fields_under_root: false
# Stack traces or multi-line logs
filebeat.inputs:
  - type: log
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
    multiline.max_lines: 100
    multiline.timeout: 5s

# JSON logs
filebeat.inputs:
  - type: log
    multiline.pattern: '^\{'
    multiline.negate: true
    multiline.match: after
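With `negate: true` and `match: after`, any line that does *not* match the pattern is appended to the previous event, and a matching line starts a new one. This Python sketch of that joining behaviour makes it easy to preview how a sample log would be split into events:

```python
import re

def join_multiline(lines, pattern=r"^\["):
    """Sketch of Filebeat's multiline behaviour with negate: true and
    match: after — non-matching lines are appended to the previous
    event; matching lines start a new event."""
    start = re.compile(pattern)
    events = []
    for line in lines:
        if start.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

logs = ["[2024-05-01] beacon task", "  stack line 1", "[2024-05-01] next"]
print(join_multiline(logs))  # two events, the first spanning two lines
```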
# File: /etc/logstash/conf.d/c2-parsing.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if [log_source] == "cobalt_strike" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:beacon_id} %{DATA:action} %{GREEDYDATA:command}" }
    }
  }
  
  if [log_source] == "covenant" {
    json { source => "message" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "redelk-%{+YYYY.MM.dd}"
  }
}
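The `%{+YYYY.MM.dd}` index name is resolved from each event's `@timestamp` in UTC, producing one index per day. A small Python sketch of the same naming scheme (the helper name is illustrative):

```python
from datetime import datetime, timezone

def redelk_index(ts: datetime) -> str:
    """Reproduce the daily index name that Logstash's
    %{+YYYY.MM.dd} sprintf produces from an event's @timestamp
    (always evaluated in UTC)."""
    return ts.astimezone(timezone.utc).strftime("redelk-%Y.%m.%d")

print(redelk_index(datetime(2024, 5, 1, 23, 30, tzinfo=timezone.utc)))
# redelk-2024.05.01
```

Daily indices are what make the retention and lifecycle policies later in this guide straightforward to apply.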
# Enrich logs with geographic data
filter {
  geoip {
    source => "client_ip"
    target => "geoip"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "redelk-geo-%{+YYYY.MM.dd}"
  }
}
# Detect blue team activity
filter {
  if [user_agent] =~ /Nmap|Masscan|nessus/i {
    mutate {
      add_tag => ["potential_scanner"]
      add_field => { "alert_level" => "high" }
    }
  }
  
  # Detect sandbox/analysis environment
  if [user_agent] =~ /Cuckoo|Hybrid|VirusTotal/ {
    mutate {
      add_tag => ["sandbox_detection"]
    }
  }
}
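The same detection logic is useful outside Logstash, e.g. for triaging exported redirector logs. This Python sketch uses the identical patterns and tag names as the filter above:

```python
import re

# Same patterns as the Logstash conditionals above: scanners are matched
# case-insensitively, sandbox vendors case-sensitively.
SCANNER_RE = re.compile(r"Nmap|Masscan|nessus", re.IGNORECASE)
SANDBOX_RE = re.compile(r"Cuckoo|Hybrid|VirusTotal")

def tag_user_agent(ua: str) -> list[str]:
    """Return the tags the Logstash filter would add for this UA."""
    tags = []
    if SCANNER_RE.search(ua):
        tags.append("potential_scanner")
    if SANDBOX_RE.search(ua):
        tags.append("sandbox_detection")
    return tags

print(tag_user_agent("Mozilla/5.0 Nmap Scripting Engine"))
# ['potential_scanner']
```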
# Navigate in Kibana UI
# Analytics > Dashboards > Search for "redelk"

# Available dashboards:
# - RedELK Overview
# - Beacon Activity
# - IOC Tracking
# - Redirector Traffic
# - Phishing Campaign
# - Geographic Distribution
# - Detection Alerts
# Create custom visualization
# Kibana > Visualize > Create > Area Chart
# X-axis: timestamp
# Y-axis: count of beacons
# Bucket: Split series by beacon_id

# Saved search filter
log_source: cobalt_strike AND action: beacon_callback
# Create IOC dashboard
# Kibana > Visualize > Create > Data Table
# Columns: ip_address, domain, username, hostname
# Filter by log_source and event_type

# Markdown widget for IOC export
Indicators of Compromise detected in last 24 hours
# Timeline chart in Kibana
# Visualize > Line Chart
# X-axis: @timestamp
# Y-axis: count
# Split by: campaign

# Create search query
campaign: operation_* AND event_type: (beacon OR phishing)
# Map visualization
# Kibana > Visualize > Maps
# Location field: geoip.location
# Data: client_ip
# Filter: redirector logs

# Heatmap of traffic by country
# Enable Alerting in Kibana
# Stack Management > Rules and Connectors > Create Rule

# Webhook connector for alerting
# Webhook URL: https://hooks.slack.com/services/YOUR/WEBHOOK
# Rule: Detect blue team scanning activity
# Condition: user_agent contains (nmap OR masscan OR nessus)
# Timeframe: last 5 minutes
# Action: Alert with Slack notification
# Rule: Detect sandbox/analysis environment
# Condition: user_agent matches (cuckoo OR hybrid OR virustotal)
# Action: Create alert, tag logs with sandbox_detection
# Enrichment: Import threat intel feeds
# Rule: client_ip in [bad_ip_list]
# Action: Create high-priority alert, block in redirector
# Rule: Track beacon check-ins
# Condition: action == "beacon_callback"
# Visualization: Count of unique beacons over time
# Alert threshold: No check-ins for 10+ minutes
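The "no check-ins for 10+ minutes" condition can be prototyped before wiring it into a Kibana rule. A minimal Python sketch, assuming last-seen times have already been aggregated per beacon (e.g. from a terms aggregation on `beacon_id`):

```python
from datetime import datetime, timedelta

def silent_beacons(last_seen: dict, now: datetime,
                   threshold: timedelta = timedelta(minutes=10)) -> list:
    """Return beacon IDs whose last check-in is older than the
    threshold — the same condition the alert rule above describes."""
    return sorted(b for b, ts in last_seen.items() if now - ts > threshold)

now = datetime(2024, 5, 1, 12, 0)
seen = {"b1": now - timedelta(minutes=3), "b2": now - timedelta(minutes=25)}
print(silent_beacons(seen, now))  # ['b2']
```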
# Create IOC index
PUT /ioc-tracking
{
  "mappings": {
    "properties": {
      "indicator": { "type": "keyword" },
      "indicator_type": { "type": "keyword" },
      "source": { "type": "keyword" },
      "campaign": { "type": "keyword" },
      "timestamp": { "type": "date" }
    }
  }
}

# Add IOCs from logs
POST /ioc-tracking/_doc
{
  "indicator": "192.168.1.100",
  "indicator_type": "ip",
  "source": "cobalt_strike",
  "campaign": "operation_alpha",
  "timestamp": "2026-04-17T10:00:00Z"
}
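Indexing IOCs one `_doc` POST at a time does not scale; Elasticsearch's `_bulk` API takes an NDJSON body of alternating action and source lines. A Python sketch that builds such a body for the `ioc-tracking` index:

```python
import json

def iocs_to_bulk(iocs: list[dict], index: str = "ioc-tracking") -> str:
    """Build an NDJSON body for Elasticsearch's _bulk API: an action
    line followed by a source line per document, with the trailing
    newline _bulk requires."""
    lines = []
    for ioc in iocs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(ioc))
    return "\n".join(lines) + "\n"

body = iocs_to_bulk([{"indicator": "192.168.1.100", "indicator_type": "ip"}])
print(body)
```

POST the result to `http://localhost:9200/_bulk` with `Content-Type: application/x-ndjson`.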
# Export IOCs via Kibana
# Discover > Filter by indicator_type
# Export to CSV/JSON

# Create saved search for quick export
# Name: "Active IOCs"
# Query: campaign: operation_* AND @timestamp >= now-24h
# Kibana visualization
# Data Table showing:
# - IP addresses with beacon count
# - Domains with traffic volume
# - Hostnames with first seen date
# - Usernames with activity count
# Check Elasticsearch connectivity
curl -u elastic:password http://localhost:9200/

# Verify Logstash can reach Elasticsearch
docker logs redelk-logstash | grep -i error

# Check firewall rules
netstat -tlnp | grep 9200
# Verify Filebeat is running
systemctl status filebeat

# Check Filebeat logs
tail -f /var/log/filebeat/filebeat

# Verify data is reaching Logstash
tcpdump -i lo port 5044

# Check Elasticsearch indices
curl -u elastic:password http://localhost:9200/_cat/indices?v
# Increase JVM heap (Elasticsearch/Logstash)
# export ES_JAVA_OPTS="-Xms2g -Xmx2g"

# In docker-compose.yml
environment:
  - "ES_JAVA_OPTS=-Xms4g -Xmx4g"

# Restart services
docker-compose restart
# Add debug output to Logstash config
output {
  stdout { codec => rubydebug }
  elasticsearch { ... }
}

# Check Logstash logs
docker logs redelk-logstash --tail 100
| Practice                          | Benefit                                |
|-----------------------------------|----------------------------------------|
| Separate indices by log source    | Easier querying and retention policies |
| Add custom fields at Filebeat     | Enriches logs before indexing          |
| Use `fields_under_root: false`    | Keeps Filebeat metadata organized      |
| Implement log rotation            | Prevents disk space exhaustion         |
| Regular backups of Elasticsearch  | Recovery from data loss                |
| Monitor cluster health            | Catches issues early                   |
| Use index lifecycle policies      | Auto-archive old logs                  |
| Enable TLS/SSL                    | Protects data in transit               |
| Restrict Kibana access            | Limits exposure of sensitive logs      |
| Document custom parsers           | Aids team handoff and debugging        |
# Enable X-Pack security in elasticsearch.yml
xpack.security.enabled: true
# Realm syntax for Elasticsearch 7.x/8.x: realms.<type>.<name>.order
xpack.security.authc.realms.native.native1.order: 0

# Create API keys for Filebeat instead of passwords
POST /_security/api_key
{
  "name": "filebeat-key",
  "role_descriptors": {
    "filebeat_role": {
      "cluster": ["monitor"],
      "index": [{"names": ["redelk-*"], "privileges": ["write", "index", "manage"]}]
    }
  }
}
# Create Index Lifecycle Policy
PUT _ilm/policy/redelk-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0d",
        "actions": {}
      },
      "warm": {
        "min_age": "30d",
        "actions": {
          "set_priority": { "priority": 50 }
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "set_priority": { "priority": 0 }
        }
      },
      "delete": {
        "min_age": "180d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
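Reading the policy: an index enters a phase once its age passes that phase's `min_age`, so the thresholds above mean hot until day 30, warm until day 90, cold until day 180, then deletion. A Python sketch of that mapping:

```python
def ilm_phase(age_days: int) -> str:
    """Which phase of the redelk-policy above an index of the given
    age falls into (min_age thresholds: warm 30d, cold 90d,
    delete 180d)."""
    if age_days >= 180:
        return "delete"
    if age_days >= 90:
        return "cold"
    if age_days >= 30:
        return "warm"
    return "hot"

print([ilm_phase(d) for d in (0, 45, 120, 200)])
# ['hot', 'warm', 'cold', 'delete']
```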
| Tool      | Purpose                         | Integration                      |
|-----------|---------------------------------|----------------------------------|
| Mythic C2 | Alternative C2 framework        | Logstash filter for Mythic logs  |
| Sliver    | Command & control framework     | JSON logs shipped via Filebeat   |
| ELK Stack | Open-source SIEM foundation     | RedELK is built on ELK           |
| VECTR     | Detection framework correlation | Export RedELK IOCs to VECTR      |
| Splunk    | Enterprise SIEM alternative     | Similar parsing/visualization    |
| Wazuh     | Host-based monitoring           | Complements with endpoint data   |