RedELK
RedELK is Outflank's open-source "Red Team SIEM": it centralizes, correlates, and visualizes logs from C2 frameworks such as Cobalt Strike, Covenant, and Sliver, as well as redirectors and phishing infrastructure, in real time.
Installation
Prerequisites
- Docker and Docker Compose
- 8GB+ RAM (16GB recommended for large deployments)
- Linux or macOS (Windows with WSL2)
- Git
Clone RedELK Repository
git clone https://github.com/outflanknl/RedELK.git
cd RedELK
Docker Compose Setup
# Navigate to docker directory
cd docker
# Copy and customize environment file
cp .env.example .env
# Start all services (Elasticsearch, Kibana, Logstash, Filebeat)
docker-compose up -d
# Verify services are running
docker-compose ps
# Check Elasticsearch health
curl -u elastic:password http://localhost:9200/_cluster/health
Manual Installation (Advanced)
# Install Elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.10.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.10.0-linux-x86_64.tar.gz
cd elasticsearch-8.10.0/
./bin/elasticsearch
# Install Kibana (separate terminal)
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.10.0-linux-x86_64.tar.gz
tar -xzf kibana-8.10.0-linux-x86_64.tar.gz
cd kibana-8.10.0/
./bin/kibana
# Install Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.10.0-linux-x86_64.tar.gz
tar -xzf logstash-8.10.0-linux-x86_64.tar.gz
Architecture
RedELK consists of four primary components:
| Component | Purpose | Port |
|---|---|---|
| Elasticsearch | Search and analytics engine, stores logs | 9200, 9300 |
| Kibana | Web UI for visualization and dashboarding | 5601 |
| Logstash | Log parsing, enrichment, and filtering | 5044 (Beats input) |
| Filebeat | Lightweight log shipper on C2 servers and redirectors | n/a (ships to Logstash) |
Data flow: C2/Redirector → Filebeat → Logstash → Elasticsearch, which Kibana queries for visualization
Quick Start
Access Kibana Dashboard
# After docker-compose up -d
# Navigate to http://localhost:5601
# Default credentials: elastic / password (from .env)
Verify Elasticsearch Cluster
# Check cluster status
curl -u elastic:password http://localhost:9200/_cluster/health
# List indices
curl -u elastic:password http://localhost:9200/_cat/indices
# Check document count
curl -u elastic:password http://localhost:9200/_cat/count?v
Initial Configuration
# Create RedELK index pattern in Kibana
# Stack Management > Index Patterns > Create index pattern
# Pattern: redelk-*
# Timestamp field: @timestamp
# Apply predefined dashboards
# Import the RedELK dashboard export via Stack Management > Saved Objects > Import,
# or use the Kibana saved objects API (export file name illustrative):
curl -u elastic:password -H "kbn-xsrf: true" \
  -F file=@redelk-dashboards.ndjson \
  http://localhost:5601/api/saved_objects/_import
C2 Log Integration
Cobalt Strike Teamserver Logs
# filebeat.yml configuration
filebeat.inputs:
- type: log
enabled: true
paths:
- /path/to/cobaltstrike/logs/*.log
fields:
log_source: cobalt_strike
team_server: ts01.red.local
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
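The multiline settings above mean "a line that does not start with `[` belongs to the previous event." As a sanity check before deploying, the joining rule can be sketched in Python (sample log lines are illustrative):

```python
def join_multiline(lines, pattern_prefix="["):
    """Group raw lines into events: a line starting with the prefix
    opens a new event; any other line is appended to the current one
    (mirrors multiline.negate: true / multiline.match: after)."""
    events = []
    for line in lines:
        if line.startswith(pattern_prefix) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

raw = ["[10:00:01] beacon checkin", "  extra detail", "[10:00:05] task queued"]
print(join_multiline(raw))  # → two events; the continuation line joins the first
```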
Forward to Logstash
# filebeat.yml output
output.logstash:
hosts: ["localhost:5044"]
ssl.enabled: true
ssl.certificate_authorities: ["/path/to/ca.crt"]
ssl.certificate: "/path/to/cert.crt"
ssl.key: "/path/to/key.key"
Covenant Integration
# Enable Covenant logging output
# In Covenant server config, set up Syslog output to Logstash
# Configure Logstash filter to parse Covenant JSON logs
# Logstash filter example
filter {
if [log_source] == "covenant" {
json {
source => "message"
}
mutate {
add_field => { "c2_framework" => "covenant" }
}
}
}
Sliver Log Forwarding
# Sliver has no native Logstash streaming output; the server writes
# JSON logs under ~/.sliver/logs/ — ship those with Filebeat instead
# (adjust the path to the account running sliver-server):
filebeat.inputs:
- type: log
  paths:
    - /root/.sliver/logs/*.json
  fields:
    log_source: sliver
Custom C2 Log Parsing
# For custom C2 frameworks, send logs to Logstash via syslog or HTTP
# Logstash input filter
input {
syslog {
port => 5514
type => "custom_c2"
}
}
filter {
if [type] == "custom_c2" {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:beacon_id}\] %{GREEDYDATA:action}" }
}
}
}
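The grok patterns `TIMESTAMP_ISO8601`, `DATA`, and `GREEDYDATA` correspond roughly to the Python regex below, which is handy for testing a parse offline before loading it into Logstash (the sample log line is illustrative):

```python
import re

# Rough Python equivalent of the grok pattern above
C2_LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?)"
    r" \[(?P<beacon_id>[^\]]+)\] (?P<action>.+)"
)

m = C2_LINE.match("2024-05-01T10:00:00 [beacon-42] shell whoami")
print(m.groupdict())
```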
Redirector Logging
Apache Redirector Logs
# Configure Apache to log redirector traffic
# In Apache vhost config
CustomLog /var/log/apache2/redelk-access.log combined
ErrorLog /var/log/apache2/redelk-error.log
# filebeat.yml for Apache logs
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/apache2/redelk-access.log
fields:
log_source: apache_redirector
redirector_id: redir01
multiline.pattern: '^\d{1,3}\.\d{1,3}\.'
multiline.negate: true
multiline.match: after
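The `combined` log format captures client IP, timestamp, request line, status, referer, and user agent. A simplified parsing sketch, useful for offline analysis of redirector hits (the regex is a reduced version of the full combined format; the sample line is illustrative):

```python
import re

# Simplified matcher for Apache/Nginx "combined" access log lines
COMBINED = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('203.0.113.7 - - [01/May/2024:10:00:00 +0000] '
        '"GET /login HTTP/1.1" 200 512 "-" "Mozilla/5.0"')
hit = COMBINED.match(line)
print(hit.group("client_ip"), hit.group("status"), hit.group("user_agent"))
```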
Nginx Redirector Logs
# Nginx configuration
access_log /var/log/nginx/redelk-access.log combined;
error_log /var/log/nginx/redelk-error.log warn;
# filebeat.yml for Nginx
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/redelk-access.log
fields:
log_source: nginx_redirector
HAProxy Traffic Analysis
# HAProxy configuration for logging
global
log stdout local0
frontend redelk_frontend
bind *:80
mode http
log global
option httplog
default_backend redelk_backend
backend redelk_backend
balance roundrobin
server c2_server 10.0.0.10:8080
# filebeat.yml for HAProxy
filebeat.inputs:
- type: log
paths:
- /var/log/haproxy.log
fields:
log_source: haproxy
Phishing Campaign Tracking
Abschnitt betitelt „Phishing Campaign Tracking“GoPhish Integration
# GoPhish sends logs via HTTP to Logstash
# Logstash HTTP input filter
input {
http {
port => 8080
type => "gophish"
}
}
filter {
if [type] == "gophish" {
json {
source => "message"
}
mutate {
add_field => { "campaign" => "%{[campaign_name]}" }
add_field => { "email_sent" => "%{[email_sent]}" }
}
}
}
Email Delivery Tracking
# Create custom Logstash pipeline for email events
input {
file {
path => "/opt/gophish/logs/gophish.log"
start_position => "beginning"
type => "email_tracking"
}
}
filter {
if [type] == "email_tracking" {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:recipient}\] %{DATA:status} %{GREEDYDATA:details}" }
}
mutate {
add_field => { "event_type" => "email_delivery" }
}
}
}
Click Tracking
# Track phishing clicks via GoPhish webhook
# Logstash webhook input
input {
http {
port => 9000
type => "phishing_click"
}
}
filter {
if [type] == "phishing_click" {
json { source => "message" }
mutate {
add_field => { "event_type" => "click" }
add_field => { "ip_address" => "%{[headers][x-forwarded-for]}" }
add_field => { "user_agent" => "%{[headers][user-agent]}" }
}
geoip {
source => "ip_address"
}
}
}
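Because click events arrive through a redirector, the true client address is usually the left-most entry of the X-Forwarded-For chain. A small helper showing that extraction (function name and fallback value are illustrative):

```python
def client_ip_from_xff(xff_header: str, fallback: str = "") -> str:
    """X-Forwarded-For may hold a chain 'client, proxy1, proxy2';
    the original client is the left-most entry."""
    if not xff_header:
        return fallback
    return xff_header.split(",")[0].strip()

print(client_ip_from_xff("198.51.100.9, 10.0.0.2", "127.0.0.1"))  # → 198.51.100.9
```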
Filebeat Configuration
Main filebeat.yml Structure
filebeat.config.modules:
enabled: false
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/redelk/*.log
fields:
environment: production
team: red_team
output.logstash:
hosts: ["logstash.internal:5044"]
loadbalance: true
worker: 4
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
Custom Fields Addition
filebeat.inputs:
- type: log
enabled: true
paths:
- /opt/cobaltstrike/logs/*.log
fields:
log_source: cobalt_strike
team_server: ts01.red.local
operator: operator1
campaign: operation_alpha
customer: acme_corp
fields_under_root: false
Multiline Log Patterns
# Stack traces or multi-line logs
filebeat.inputs:
- type: log
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
multiline.max_lines: 100
multiline.timeout: 5s
# JSON logs
filebeat.inputs:
- type: log
multiline.pattern: '^\{'
multiline.negate: true
multiline.match: after
Logstash Pipelines
Abschnitt betitelt „Logstash Pipelines“C2 Traffic Parsing Pipeline
# File: /etc/logstash/conf.d/c2-parsing.conf
input {
beats {
port => 5044
}
}
filter {
if [log_source] == "cobalt_strike" {
grok {
match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:beacon_id} %{DATA:action} %{GREEDYDATA:command}" }
}
}
if [log_source] == "covenant" {
json { source => "message" }
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "redelk-%{+YYYY.MM.dd}"
}
}
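In the output stanza, `%{+YYYY.MM.dd}` expands to the event's date, producing one index per day. The equivalent naming logic in Python:

```python
from datetime import datetime, timezone

def daily_index(prefix: str, ts: datetime) -> str:
    """Mimic Logstash's index => "redelk-%{+YYYY.MM.dd}" naming."""
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

print(daily_index("redelk", datetime(2024, 5, 1, tzinfo=timezone.utc)))  # → redelk-2024.05.01
```

Daily indices are what make per-day retention and the lifecycle policies described later practical.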
GeoIP Enrichment
# Enrich logs with geographic data
filter {
geoip {
source => "client_ip"
target => "geoip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "redelk-geo-%{+YYYY.MM.dd}"
}
}
Custom Detection Filters
# Detect blue team activity
filter {
if [user_agent] =~ /(?i)nmap|masscan|nessus/ {
mutate {
add_tag => ["potential_scanner"]
add_field => { "alert_level" => "high" }
}
}
# Detect sandbox/analysis environment
if [user_agent] =~ /(?i)cuckoo|hybrid|virustotal/ {
mutate {
add_tag => ["sandbox_detection"]
}
}
}
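The conditionals above can be mirrored as case-insensitive keyword checks; this sketch applies the same tags the Logstash filter would add (keyword lists taken from the filter; event shape is illustrative):

```python
import re

SCANNER_UA = re.compile(r"nmap|masscan|nessus", re.IGNORECASE)
SANDBOX_UA = re.compile(r"cuckoo|hybrid|virustotal", re.IGNORECASE)

def tag_event(event: dict) -> dict:
    """Attach the same tags/fields the Logstash filter would add."""
    ua = event.get("user_agent", "")
    tags = event.setdefault("tags", [])
    if SCANNER_UA.search(ua):
        tags.append("potential_scanner")
        event["alert_level"] = "high"
    if SANDBOX_UA.search(ua):
        tags.append("sandbox_detection")
    return event

print(tag_event({"user_agent": "Mozilla/5.0 Nmap Scripting Engine"}))
```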
Kibana Dashboards
Accessing Pre-built Dashboards
# Navigate in Kibana UI
# Analytics > Dashboards > Search for "redelk"
# Available dashboards:
# - RedELK Overview
# - Beacon Activity
# - IOC Tracking
# - Redirector Traffic
# - Phishing Campaign
# - Geographic Distribution
# - Detection Alerts
Beacon Activity Dashboard
# Create custom visualization
# Kibana > Visualize > Create > Area Chart
# X-axis: timestamp
# Y-axis: count of beacons
# Bucket: Split series by beacon_id
# Saved search filter
log_source: cobalt_strike AND action: beacon_callback
IOC Overview Dashboard
# Create IOC dashboard
# Kibana > Visualize > Create > Data Table
# Columns: ip_address, domain, username, hostname
# Filter by log_source and event_type
# Markdown widget for IOC export
Indicators of Compromise detected in last 24 hours
Campaign Timeline Visualization
# Timeline chart in Kibana
# Visualize > Line Chart
# X-axis: @timestamp
# Y-axis: count
# Split by: campaign
# Create search query
campaign: operation_* AND event_type: (beacon OR phishing)
Geographic Visualization
# Map visualization
# Kibana > Visualize > Maps
# Location field: geoip.location
# Data: client_ip
# Filter: redirector logs
# Heatmap of traffic by country
Alarm Rules
Automated Alerting Setup
# Enable Alerting in Kibana
# Stack Management > Rules and Connectors > Create Rule
# Webhook connector for alerting
# Webhook URL: https://hooks.slack.com/services/YOUR/WEBHOOK
Blue Team Detection Rule
# Rule: Detect blue team scanning activity
# Condition: user_agent contains (nmap OR masscan OR nessus)
# Timeframe: last 5 minutes
# Action: Alert with Slack notification
Sandbox Detection Rule
# Rule: Detect sandbox/analysis environment
# Condition: user_agent matches (cuckoo OR hybrid OR virustotal)
# Action: Create alert, tag logs with sandbox_detection
Known-Bad IP Alerting
# Enrichment: Import threat intel feeds
# Rule: client_ip in [bad_ip_list]
# Action: Create high-priority alert, block in redirector
Beacon Check-in Rule
# Rule: Track beacon check-ins
# Condition: action == "beacon_callback"
# Visualization: Count of unique beacons over time
# Alert threshold: No check-ins for 10+ minutes
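The "no check-ins for 10+ minutes" threshold amounts to comparing each beacon's last-seen timestamp against a cutoff; a sketch of that evaluation (beacon IDs and timestamps are illustrative):

```python
from datetime import datetime, timedelta, timezone

def stale_beacons(last_seen: dict, now: datetime, threshold_min: int = 10):
    """Return beacon IDs whose last check-in is older than the threshold."""
    cutoff = now - timedelta(minutes=threshold_min)
    return sorted(b for b, ts in last_seen.items() if ts < cutoff)

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
seen = {
    "beacon-1": now - timedelta(minutes=3),
    "beacon-2": now - timedelta(minutes=25),
}
print(stale_beacons(seen, now))  # → ['beacon-2']
```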
IOC Management
Tracking Indicators
# Create IOC index
PUT /ioc-tracking
{
"mappings": {
"properties": {
"indicator": { "type": "keyword" },
"indicator_type": { "type": "keyword" },
"source": { "type": "keyword" },
"campaign": { "type": "keyword" },
"timestamp": { "type": "date" }
}
}
}
# Add IOCs from logs
POST /ioc-tracking/_doc
{
"indicator": "192.168.1.100",
"indicator_type": "ip",
"source": "cobalt_strike",
"campaign": "operation_alpha",
"timestamp": "2026-04-17T10:00:00Z"
}
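When harvesting IOCs from beacon and redirector logs, the same indicator tends to appear many times; deduplicating on (indicator, indicator_type) before posting keeps the index clean. A sketch (document shape follows the mapping above):

```python
def dedupe_iocs(iocs):
    """Keep the first occurrence of each (indicator, indicator_type) pair."""
    seen, unique = set(), []
    for ioc in iocs:
        key = (ioc["indicator"], ioc["indicator_type"])
        if key not in seen:
            seen.add(key)
            unique.append(ioc)
    return unique

batch = [
    {"indicator": "192.168.1.100", "indicator_type": "ip", "campaign": "operation_alpha"},
    {"indicator": "192.168.1.100", "indicator_type": "ip", "campaign": "operation_alpha"},
    {"indicator": "evil.example", "indicator_type": "domain", "campaign": "operation_alpha"},
]
print(len(dedupe_iocs(batch)))  # → 2
```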
Exporting IOC Lists
# Export IOCs via Kibana
# Discover > Filter by indicator_type
# Export to CSV/JSON
# Create saved search for quick export
# Name: "Active IOCs"
# Query: campaign: operation_* AND timestamp: > now-24h
IOC Aggregation Dashboard
# Kibana visualization
# Data Table showing:
# - IP addresses with beacon count
# - Domains with traffic volume
# - Hostnames with first seen date
# - Usernames with activity count
Troubleshooting
Elasticsearch Connection Issues
# Check Elasticsearch connectivity
curl -u elastic:password http://localhost:9200/
# Verify Logstash can reach Elasticsearch
docker logs redelk-logstash | grep -i error
# Check firewall rules
netstat -tlnp | grep 9200
Missing Logs in Kibana
# Verify Filebeat is running
systemctl status filebeat
# Check Filebeat logs
tail -f /var/log/filebeat/filebeat
# Verify data is reaching Logstash
tcpdump -i lo port 5044
# Check Elasticsearch indices
curl -u elastic:password http://localhost:9200/_cat/indices?v
High Memory Usage
# Increase JVM heap (Elasticsearch/Logstash)
# export ES_JAVA_OPTS="-Xms2g -Xmx2g"
# In docker-compose.yml
environment:
- "ES_JAVA_OPTS=-Xms4g -Xmx4g"
# Restart services
docker-compose restart
Logstash Filter Debugging
# Add debug output to Logstash config
output {
stdout { codec => rubydebug }
elasticsearch { ... }
}
# Check Logstash logs
docker logs redelk-logstash --tail 100
Best Practices
| Practice | Benefit |
|---|---|
| Separate indices by log source | Easier querying and retention policies |
| Add custom fields at Filebeat | Enriches logs before indexing |
| Use fields_under_root: false | Keeps Filebeat metadata organized |
| Implement log rotation | Prevents disk space exhaustion |
| Regular backups of Elasticsearch | Recovery from data loss |
| Monitor cluster health | Catches issues early |
| Use index lifecycle policies | Auto-archive old logs |
| Enable TLS/SSL | Protects data in transit |
| Restrict Kibana access | Limits exposure of sensitive logs |
| Document custom parsers | Aids team handoff and debugging |
Security Hardening
# Enable X-Pack security in elasticsearch.yml
xpack.security.enabled: true
xpack.security.authc:
  realms:
    native:
      native1:
        order: 0
# Create API keys for Filebeat instead of passwords
POST /_security/api_key
{
"name": "filebeat-key",
"role_descriptors": {
"filebeat_role": {
"cluster": ["monitor"],
"index": [{"names": ["redelk-*"], "privileges": ["write", "index", "manage"]}]
}
}
}
Retention Policies
# Create Index Lifecycle Policy
PUT _ilm/policy/redelk-policy
{
"policy": {
"phases": {
"hot": {
"min_age": "0d",
"actions": {}
},
"warm": {
"min_age": "30d",
"actions": {
"set_priority": { "priority": 50 }
}
},
"cold": {
"min_age": "90d",
"actions": {
"set_priority": { "priority": 0 }
}
},
"delete": {
"min_age": "180d",
"actions": {
"delete": {}
}
}
}
}
}
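The policy moves indices through phases once they pass each `min_age`. The resolution logic for the thresholds above is roughly:

```python
def ilm_phase(age_days: int) -> str:
    """Resolve which phase an index falls into under the thresholds
    used above (hot 0d, warm 30d, cold 90d, delete 180d)."""
    if age_days >= 180:
        return "delete"
    if age_days >= 90:
        return "cold"
    if age_days >= 30:
        return "warm"
    return "hot"

print([ilm_phase(d) for d in (5, 45, 120, 200)])  # → ['hot', 'warm', 'cold', 'delete']
```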
Related Tools
| Tool | Purpose | Integration |
|---|---|---|
| Mythic C2 | Alternative C2 framework | Logstash filter for Mythic logs |
| Sliver | Command & control framework | Native Logstash output |
| ELK Stack | Open-source SIEM foundation | RedELK built on ELK |
| VECTR | Detection framework correlation | Export RedELK IOCs to VECTR |
| Splunk | Enterprise SIEM alternative | Similar parsing/visualization |
| Wazuh | Host-based monitoring | Complement with endpoint data |