
SecurityOnion Cheatsheet

SecurityOnion is a free and open-source Linux distribution for threat hunting, enterprise security monitoring, and log management. It integrates a comprehensive suite of security tools, including Elasticsearch, Logstash, Kibana, Suricata, Zeek, Wazuh, TheHive, Cortex, and many other security-focused applications, into a cohesive platform.

Platform Overview

Architecture and Components

SecurityOnion follows a distributed architecture with different node types serving specific functions. The platform integrates multiple open-source security tools into a unified ecosystem for comprehensive network security monitoring and incident response.

Core components include network security monitoring (NSM) tools like Suricata and Zeek for traffic analysis, log management through the Elastic Stack, host-based monitoring via Wazuh agents, and case management through TheHive integration.

Key Features

# Core SecurityOnion Capabilities
- Full packet capture and network security monitoring
- Intrusion detection and prevention (Suricata)
- Network traffic analysis (Zeek/Bro)
- Log aggregation and analysis (Elastic Stack)
- Host-based intrusion detection (Wazuh)
- Threat hunting and investigation tools
- Case management and incident response
- Distributed deployment architecture
- Web-based management interface (SOC)

Installation and Setup

ISO Installation

# Download SecurityOnion ISO
wget https://github.com/Security-Onion-Solutions/securityonion/releases/latest/download/securityonion-2.3.x-x86_64.iso

# Verify checksum
sha256sum securityonion-2.3.x-x86_64.iso

# Create bootable USB (Linux)
sudo dd if=securityonion-2.3.x-x86_64.iso of=/dev/sdX bs=4M status=progress
sync

# Boot from USB and follow installation wizard
# Minimum requirements:
# - 16GB RAM (32GB+ recommended)
# - 200GB storage (1TB+ recommended)
# - Dual network interfaces (management + monitoring)

# Post-installation network configuration
sudo so-setup

# Initial setup wizard will configure:
# - Network interfaces
# - Node type (standalone, manager, search, forward, heavy, etc.)
# - User accounts and authentication
# - SSL certificates
# - Service configuration

Distributed Deployment

# Manager Node Setup (first node)
sudo so-setup
# Select: Install
# Select: Manager
# Configure management interface
# Configure monitoring interface(s)
# Set admin credentials
# Configure grid settings

# Search Node Setup
sudo so-setup
# Select: Install
# Select: Search Node
# Enter manager IP address
# Configure network settings
# Join existing grid

# Forward Node Setup
sudo so-setup
# Select: Install
# Select: Forward Node
# Enter manager IP address
# Configure monitoring interfaces
# Set log forwarding destination

# Heavy Node Setup (combined search + forward)
sudo so-setup
# Select: Install
# Select: Heavy Node
# Enter manager IP address
# Configure all interfaces and servicios

# Verify grid status
sudo so-status
sudo so-grid-status

# Check node connectivity
sudo so-test

Docker-based Installation

# Install Docker and Docker Compose
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

# Clone SecurityOnion repository
git clone https://github.com/Security-Onion-Solutions/securityonion.git
cd securityonion

# Configure environment
cp .env.example .env
nano .env

# Example .env configuration
SO_MANAGER_IP=192.168.1.100
SO_INTERFACE_MONITOR=eth1
SO_INTERFACE_MANAGEMENT=eth0
SO_ADMIN_USER=admin
SO_ADMIN_PASS=SecurePassword123!

# Deploy with Docker Compose
sudo docker-compose up -d

# Check deployment status
sudo docker-compose ps
sudo docker-compose logs -f

Core Tools and Services

Suricata (IDS/IPS)

# Suricata configuration and management
sudo so-suricata-restart
sudo so-suricata-status

# View Suricata configuration
sudo cat /opt/so/conf/suricata/suricata.yaml

# Update Suricata rules
sudo so-rule-update

# Custom rule management
sudo nano /opt/so/rules/local.rules

# Example custom rules
alert tcp any any -> $HOME_NET 22 (msg:"SSH Connection Attempt"; sid:1000001; rev:1;)
alert http any any -> any any (msg:"Suspicious User Agent"; content:"User-Agent: BadBot"; sid:1000002; rev:1;)

# Test rule syntax
sudo suricata -T -c /opt/so/conf/suricata/suricata.yaml

# Monitor Suricata alerts
sudo tail -f /nsm/suricata/eve.json

# Suricata performance tuning
sudo nano /opt/so/conf/suricata/suricata.yaml
# Adjust:
# - af-packet workers
# - ring-size
# - block-size
# - use-mmap

# Restart Suricata with new configuration
sudo so-suricata-restart

# Check Suricata statistics
sudo suricata-sc -c stats

# Rule management commands
sudo so-rule-update --help
sudo so-rule-update --force
sudo so-rule-update --ruleset=emerging-threats
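For quick triage outside of jq, the same eve.json stream can be tallied with a few lines of Python; a minimal sketch (the sample records below are hypothetical):

```python
import json
from collections import Counter

def tally_signatures(lines):
    """Count alert signatures in Suricata EVE JSON lines, skipping non-alert events."""
    counts = Counter()
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip truncated or malformed lines
        if event.get("event_type") == "alert":
            counts[event["alert"].get("signature", "unknown")] += 1
    return counts

# Hypothetical EVE records; in practice read from /nsm/suricata/eve.json
sample = [
    '{"event_type": "alert", "alert": {"signature": "SSH Connection Attempt"}}',
    '{"event_type": "flow"}',
    '{"event_type": "alert", "alert": {"signature": "SSH Connection Attempt"}}',
]
print(tally_signatures(sample).most_common(1))  # [('SSH Connection Attempt', 2)]
```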

Zeek (Network Analysis)

# Zeek configuration and management
sudo so-zeek-restart
sudo so-zeek-status

# View Zeek configuration
sudo cat /opt/so/conf/zeek/node.cfg

# Zeek log locations
ls -la /nsm/zeek/logs/current/

# Common Zeek logs
tail -f /nsm/zeek/logs/current/conn.log
tail -f /nsm/zeek/logs/current/dns.log
tail -f /nsm/zeek/logs/current/http.log
tail -f /nsm/zeek/logs/current/ssl.log
tail -f /nsm/zeek/logs/current/files.log

# Custom Zeek scripts
sudo nano /opt/so/conf/zeek/local.zeek

# Example custom script
@load base/protocols/http
event http_request(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string) {
    if (/malware/ in original_URI) {
        print fmt("Suspicious HTTP request: %s %s", c$id$orig_h, original_URI);
    }
}

# Deploy Zeek configuration
sudo so-zeek-restart

# Zeek packet analysis
sudo zeek -r /nsm/pcap/file.pcap local.zeek

# Extract files from network traffic
sudo zeek -r traffic.pcap /opt/so/conf/zeek/extract-files.zeek

# Zeek intelligence framework
sudo nano /opt/so/conf/zeek/intel.dat
# Format: indicator<tab>indicator_type<tab>meta.source
192.168.1.100   Intel::ADDR malicious-ip-list
evil.com    Intel::DOMAIN   suspicious-domains

# Load intelligence data
sudo so-zeek-restart

# Monitor intelligence matches
tail -f /nsm/zeek/logs/current/intel.log
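The intel file format is strict about tab separation, so generating entries programmatically avoids silent mismatches. A minimal Python sketch; note that Zeek's intel framework also expects a `#fields` header line, so verify the exact header against your deployment:

```python
def make_intel_file(entries):
    """Build a Zeek intel file body: tab-separated rows under a #fields header.

    entries: list of (indicator, indicator_type, source) tuples,
    e.g. ("evil.com", "Intel::DOMAIN", "suspicious-domains").
    """
    header = "#fields\tindicator\tindicator_type\tmeta.source"
    rows = ["\t".join(entry) for entry in entries]
    return "\n".join([header] + rows) + "\n"

body = make_intel_file([
    ("192.168.1.100", "Intel::ADDR", "malicious-ip-list"),
    ("evil.com", "Intel::DOMAIN", "suspicious-domains"),
])
print(body)
```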

Wazuh (HIDS)

# Wazuh manager configuration
sudo nano /opt/so/conf/wazuh/ossec.conf

# Deploy Wazuh agent (on monitored systems)
# Download agent
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.x.x-1_amd64.deb

# Install agent
sudo dpkg -i wazuh-agent_4.x.x-1_amd64.deb

# Configure agent
sudo nano /var/ossec/etc/ossec.conf
# Set manager IP:
<client>
  <server>
    <address>192.168.1.100</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
</client>

# Start agent
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent

# Register agent on manager
sudo /var/ossec/bin/manage_agents
# Select option 'A' to add agent
# Enter agent name and IP

# Extract agent key
sudo /var/ossec/bin/manage_agents
# Select option 'E' to extract key

# Import key on agent
sudo /var/ossec/bin/manage_agents
# Select option 'I' to import key

# Restart agent
sudo systemctl restart wazuh-agent

# Check agent status
sudo /var/ossec/bin/agent_control -lc

# Custom Wazuh rules
sudo nano /opt/so/conf/wazuh/rules/local_rules.xml

# Example custom rule
<group name="local,">
  <rule id="100001" level="10">
    <if_sid>5716</if_sid>
    <srcip>!192.168.1.0/24</srcip>
    <description>SSH login from external network</description>
    <group>authentication_success,pci_dss_10.2.5,</group>
  </rule>
</group>

# Test rule configuration
sudo /var/ossec/bin/ossec-logtest

# Restart Wazuh manager
sudo systemctl restart wazuh-manager

# Monitor Wazuh alerts
sudo tail -f /var/ossec/logs/alerts/alerts.log
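Wazuh can also write alerts as JSON (commonly /var/ossec/logs/alerts/alerts.json; path assumed here), which is easier to filter by rule level than the plain-text log. A minimal sketch with hypothetical sample records:

```python
import json

def high_level_wazuh_alerts(lines, min_level=10):
    """Filter Wazuh JSON alerts (one object per line) at or above min_level."""
    hits = []
    for line in lines:
        try:
            alert = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines
        rule = alert.get("rule", {})
        if rule.get("level", 0) >= min_level:
            hits.append((rule.get("id"), rule.get("description")))
    return hits

# Hypothetical alert lines; in practice read from alerts.json
sample = [
    '{"rule": {"id": "100001", "level": 10, "description": "SSH login from external network"}}',
    '{"rule": {"id": "5715", "level": 3, "description": "sshd: authentication success"}}',
]
print(high_level_wazuh_alerts(sample))  # [('100001', 'SSH login from external network')]
```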

Elastic Stack (ELK)

# Elasticsearch management
sudo so-elasticsearch-restart
sudo so-elasticsearch-status

# Check Elasticsearch cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

# View Elasticsearch indices
curl -X GET "localhost:9200/_cat/indices?v"

# Elasticsearch configuration
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml
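The cluster-health check above is easy to fold into monitoring scripts. A minimal standard-library sketch; `fetch_health` assumes Elasticsearch answers unauthenticated HTTP on localhost:9200, as in a default standalone node, so adjust for secured deployments:

```python
import json
import urllib.request

def fetch_health(host="localhost", port=9200, timeout=5):
    """GET _cluster/health (requires a reachable, unauthenticated node)."""
    url = f"http://{host}:{port}/_cluster/health"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def summarize_health(health):
    """Reduce a _cluster/health response to the fields worth alerting on."""
    return {
        "status": health.get("status"),
        "nodes": health.get("number_of_nodes"),
        "unassigned_shards": health.get("unassigned_shards"),
    }

# Hypothetical response for illustration; live, call summarize_health(fetch_health())
sample = {"status": "green", "number_of_nodes": 1, "unassigned_shards": 0}
print(summarize_health(sample))  # {'status': 'green', 'nodes': 1, 'unassigned_shards': 0}
```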

# Logstash management
sudo so-logstash-restart
sudo so-logstash-status

# Logstash configuration
ls -la /opt/so/conf/logstash/conf.d/

# Custom Logstash pipeline
sudo nano /opt/so/conf/logstash/conf.d/custom.conf

# Example Logstash configuration
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "custom" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "custom-logs-%{+YYYY.MM.dd}"
  }
}

# Test Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /opt/so/conf/logstash/conf.d/custom.conf
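Before wiring a grok pattern into Logstash, it can help to sanity-check it against sample lines. A rough Python-regex approximation of the pattern above (the real grok definitions are more permissive than this sketch):

```python
import re

# Approximate equivalents of TIMESTAMP_ISO8601, LOGLEVEL, and GREEDYDATA
LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:?\d{2})?) "
    r"(?P<level>TRACE|DEBUG|INFO|WARN|WARNING|ERROR|FATAL) "
    r"(?P<message>.*)"
)

m = LINE.match("2023-01-01T12:00:00Z ERROR disk quota exceeded")
print(m.group("level"), "-", m.group("message"))  # ERROR - disk quota exceeded
```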

# Kibana management
sudo so-kibana-restart
sudo so-kibana-status

# Access Kibana web interface
# https://manager-ip/kibana

# Create custom Kibana dashboard
# 1. Navigate to Kibana > Dashboard
# 2. Create new dashboard
# 3. Add visualizations
# 4. Save dashboard

# Export/Import Kibana objects
# Export
curl -X POST "localhost:5601/api/saved_objects/_export" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{"type": "dashboard"}' > dashboards.ndjson

# Import
curl -X POST "localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  -F file=@dashboards.ndjson

TheHive (Case Management)

# TheHive configuration
sudo nano /opt/so/conf/thehive/application.conf

# Example configuration
play.http.secret.key = "your-secret-key"
db.janusgraph {
  storage.backend = berkeleyje
  storage.directory = /opt/thehive/db
}

# Start TheHive
sudo so-thehive-restart
sudo so-thehive-status

# Access TheHive web interface
# https://manager-ip:9000

# Create organization and users
# 1. Login as admin
# 2. Navigate to Admin > Organizations
# 3. Create organization
# 4. Add users to organization

# API usage examples
THEHIVE_URL="https://localhost:9000"
API_KEY="your-api-key"

# Create case
curl -X POST "$THEHIVE_URL/api/case" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Suspicious Network Activity",
    "description": "Detected unusual traffic patterns",
    "severity": 2,
    "tlp": 2,
    "tags": ["network", "suspicious"]
  }'

# Add observable to case
curl -X POST "$THEHIVE_URL/api/case/{case-id}/artifact" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dataType": "ip",
    "data": "192.168.1.100",
    "message": "Suspicious IP address",
    "tags": ["malicious"]
  }'

# Search cases
curl -X POST "$THEHIVE_URL/api/case/_search" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "_and": [
        {"_field": "status", "_value": "Open"},
        {"_field": "severity", "_gte": 2}
      ]
    }
  }'

Cortex (Analysis Engine)

# Cortex configuration
sudo nano /opt/so/conf/cortex/application.conf

# Example configuration
play.http.secret.key = "cortex-secret-key"
cortex.storage {
  provider = localfs
  localfs.location = /opt/cortex/files
}

# Start Cortex
sudo so-cortex-restart
sudo so-cortex-status

# Access Cortex web interface
# https://manager-ip:9001

# Install analyzers
sudo docker pull cortexneurons/virustotal_3_0
sudo docker pull cortexneurons/shodan_host_1_0
sudo docker pull cortexneurons/abuse_finder_1_0

# Configure analyzers
# 1. Login to Cortex
# 2. Navigate to Organization > Analyzers
# 3. Enable and configure analyzers
# 4. Add API keys for external services

# API usage examples
CORTEX_URL="https://localhost:9001"
API_KEY="your-cortex-api-key"

# Submit analysis job
curl -X POST "$CORTEX_URL/api/analyzer/VirusTotal_GetRepuerto_3_0/run" \
  -H "autorización: Bearer $API_clave" \
  -H "Content-Type: application/json" \
  -d '\\\\{
    "data": "malicious-hash",
    "dataType": "hash",
    "tlp": 2
  \\\\}'

# Get job results
curl -X GET "$CORTEX_URL/api/job/\\\\{job-id\\\\}" \
  -H "autorización: Bearer $API_clave"

# List available analyzers
curl -X GET "$CORTEX_URL/api/analyzer" \
  -H "autorización: Bearer $API_clave"
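Cortex jobs run asynchronously, so scripts typically poll the job endpoint until a terminal state. A minimal polling sketch; the `fetch` callable stands in for a wrapper around GET /api/job/{job-id}, and the status names (Waiting/InProgress/Success/Failure) should be confirmed against your Cortex version:

```python
import time

def job_finished(job):
    """Cortex reports terminal job states as Success or Failure (assumed)."""
    return job.get("status") in ("Success", "Failure")

def wait_for_job(fetch, job_id, timeout=120, interval=5):
    """Poll fetch(job_id) until the job completes or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch(job_id)
        if job_finished(job):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Example with a stubbed fetch that succeeds on the second poll:
responses = iter([{"status": "InProgress"}, {"status": "Success"}])
print(wait_for_job(lambda _id: next(responses), "job-1", interval=0))
```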

Network Security Monitoring

Packet Capture and Analysis

# Full packet capture configuration
sudo nano /opt/so/conf/stenographer/config

# Example stenographer configuration
{
  "Threads": [
    { "PacketsDirectory": "/nsm/pcap", "MaxDirectoryFiles": 30000, "DiskFreePercentage": 10 }
  ],
  "StenotypePath": "/usr/bin/stenotype",
  "Interface": "eth1",
  "Port": 1234,
  "Host": "127.0.0.1",
  "Flags": [],
  "CertPath": "/opt/so/conf/stenographer/certs"
}

# Start packet capture
sudo so-stenographer-restart

# Query packet capture
sudo stenoread 'host 192.168.1.100' -w output.pcap

# Time-based queries
sudo stenoread 'host 192.168.1.100 and after 2023-01-01T00:00:00Z and before 2023-01-01T23:59:59Z' -w output.pcap

# Protocol-specific queries
sudo stenoread 'tcp and port 80' -w http-traffic.pcap
sudo stenoread 'udp and port 53' -w dns-traffic.pcap

# Advanced packet analysis with tcpdump
sudo tcpdump -i eth1 -w capture.pcap
sudo tcpdump -r capture.pcap 'host 192.168.1.100'
sudo tcpdump -r capture.pcap -A 'port 80'

# Packet analysis with tshark
sudo tshark -i eth1 -w capture.pcap
sudo tshark -r capture.pcap -Y "ip.addr == 192.168.1.100"
sudo tshark -r capture.pcap -Y "http.request.method == GET" -T fields -e http.host -e http.request.uri

# Extract files from packet capture
sudo tcpflow -r capture.pcap -o extracted_files/
sudo foremost -i capture.pcap -o carved_files/

# Network statistics
sudo capinfos capture.pcap
sudo editcap -A '2023-01-01 00:00:00' -B '2023-01-01 23:59:59' input.pcap output.pcap
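When tools like capinfos are unavailable, basic pcap facts can be read directly from the file header. A minimal sketch for classic libpcap files (pcapng is not handled by this sketch):

```python
import struct

def pcap_packet_count(path):
    """Count packet records in a classic libpcap file, handling both byte orders."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"\xd4\xc3\xb2\xa1":
            endian = "<"  # little-endian capture
        elif magic == b"\xa1\xb2\xc3\xd4":
            endian = ">"  # big-endian capture
        else:
            raise ValueError("not a classic pcap file (pcapng not supported)")
        f.read(20)  # remainder of the 24-byte global header
        count = 0
        while True:
            hdr = f.read(16)  # per-packet record header
            if len(hdr) < 16:
                break
            _sec, _usec, incl_len, _orig_len = struct.unpack(endian + "IIII", hdr)
            f.seek(incl_len, 1)  # skip the captured bytes
            count += 1
    return count
```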

Traffic Analysis and Hunting

# Zeek-based traffic analysis
# Connection analysis
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h id.resp_p proto duration bytes | sort | uniq -c | sort -nr

# DNS analysis
sudo zcat /nsm/zeek/logs/*/dns.log.gz | zeek-cut query answers | grep -v "^-" | sort | uniq -c | sort -nr

# HTTP analysis
sudo zcat /nsm/zeek/logs/*/http.log.gz | zeek-cut host uri user_agent | grep -E "(exe|zip|rar)" | sort | uniq

# SSL/TLS analysis
sudo zcat /nsm/zeek/logs/*/ssl.log.gz | zeek-cut server_name subject issuer | sort | uniq

# File analysis
sudo zcat /nsm/zeek/logs/*/files.log.gz | zeek-cut mime_type filename md5 | grep -E "(exe|pdf|doc)" | sort | uniq

# Custom Zeek analysis scripts
cat > /tmp/analyze_traffic.zeek << 'EOF'
@load base/protocols/http
@load base/protocols/dns

global suspicious_domains: set[string] = {
    "evil.com",
    "malware.net",
    "phishing.org"
};

event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) {
    if (query in suspicious_domains) {
        print fmt("Suspicious DNS query: %s -> %s", c$id$orig_h, query);
    }
}

event http_request(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string) {
    if (/\.(exe|zip|rar)$/ in original_URI) {
        print fmt("Suspicious file download: %s -> %s", c$id$orig_h, original_URI);
    }
}
EOF

# Run analysis on packet capture
sudo zeek -r capture.pcap /tmp/analyze_traffic.zeek

# Threat hunting queries
# Long connections
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut duration id.orig_h id.resp_h | awk '$1 > 3600' | sort -nr

# Large data transfers
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut orig_bytes resp_bytes id.orig_h id.resp_h | awk '$1+$2 > 1000000' | sort -nr

# Unusual ports
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.resp_p proto | sort | uniq -c | sort -nr | head -20

# Beaconing detection
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h ts | awk '{print $1, $2, strftime("%H:%M", $3)}' | sort | uniq -c | awk '$1 > 10'
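Binning connections per minute, as above, is a coarse beaconing test; a more direct one is to look for low variance in inter-arrival times per source/destination pair. A minimal sketch with hypothetical timestamps:

```python
from statistics import mean, stdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-arrival times for one src/dst pair.

    Values near 0 indicate highly regular (beacon-like) traffic.
    Returns None when there are too few events to judge.
    """
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2 or mean(deltas) == 0:
        return None
    return stdev(deltas) / mean(deltas)

# Hypothetical connection times (seconds): one every ~60s vs. irregular bursts
regular = [0, 60, 121, 180, 241, 300]
bursty = [0, 5, 200, 210, 900, 905]
print(round(beacon_score(regular), 3), round(beacon_score(bursty), 3))
```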

Alert Management and Investigation

# Suricata alert analysis
# Real-time alert monitoring
sudo tail -f /nsm/suricata/eve.json | jq 'select(.event_type=="alert")'

# Alert statistics
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert") | .alert.signature' | sort | uniq -c | sort -nr

# High-priority alerts
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert" and .alert.severity<=2)'

# Alert correlation with Zeek logs
# Extract IPs from alerts
sudo cat /nsm/suricata/eve.json | jq -r 'select(.event_type=="alert") | "\(.src_ip) \(.dest_ip)"' | sort | uniq > alert_ips.txt

# Find corresponding Zeek connections
while read src_ip dest_ip; do
    sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h | grep "$src_ip.*$dest_ip"
done < alert_ips.txt

# Custom alert processing script
cat > /usr/local/bin/process_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import sys
from datetime import datetime

def process_alert(alert):
    if alert.get('event_type') == 'alert':
        severity = alert.get('alert', {}).get('severity', 0)
        signature = alert.get('alert', {}).get('signature', '')
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')
        timestamp = alert.get('timestamp', '')

        if severity <= 2:  # High priority alerts
            print(f"HIGH PRIORITY: {timestamp} - {signature}")
            print(f"  Source: {src_ip} -> Destination: {dest_ip}")
            print(f"  Severity: {severity}")
            print("-" * 50)

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            process_alert(alert)
        except json.JSONDecodeError:
            continue
EOF

chmod +x /usr/local/bin/process_alerts.py

# Process alerts
sudo tail -f /nsm/suricata/eve.json | /usr/local/bin/process_alerts.py

# Alert enrichment with threat intelligence
cat > /usr/local/bin/enrich_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import requests
import sys

def check_virustotal(ip):
    # Placeholder for VirusTotal API integration
    # Replace with actual API key and implementation
    return {"reputation": "unknown"}

def enrich_alert(alert):
    if alert.get('event_type') == 'alert':
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')

        # Enrich with threat intelligence
        src_intel = check_virustotal(src_ip)
        dest_intel = check_virustotal(dest_ip)

        alert['enrichment'] = {
            'src_intel': src_intel,
            'dest_intel': dest_intel
        }

    return alert

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            enriched = enrich_alert(alert)
            print(json.dumps(enriched))
        except json.JSONDecodeError:
            continue
EOF

chmod +x /usr/local/bin/enrich_alerts.py

System Administration

Service Management

# SecurityOnion service management
sudo so-status                    # Check all services
sudo so-restart                   # Restart all services
sudo so-stop                      # Stop all services
sudo so-start                     # Start all services

# Individual service management
sudo so-elasticsearch-restart
sudo so-logstash-restart
sudo so-kibana-restart
sudo so-suricata-restart
sudo so-zeek-restart
sudo so-wazuh-restart
sudo so-thehive-restart
sudo so-cortex-restart

# Check service logs
sudo so-elasticsearch-logs
sudo so-logstash-logs
sudo so-kibana-logs

# Docker container management
sudo docker ps                   # List running containers
sudo docker logs container_name  # View container logs
sudo docker exec -it container_name /bin/bash  # Access container shell

# System resource monitoring
sudo so-top                       # SecurityOnion-specific top
htop                             # System resource usage
iotop                            # I/O monitoring
nethogs                          # Network usage by process

# Disk space management
df -h                            # Check disk usage
sudo du -sh /nsm/*               # Check NSM data usage
sudo find /nsm -name "*.log.gz" -mtime +30 -delete  # Clean old logs

# Log rotation configuration
sudo nano /etc/logrotate.d/securityonion

# Example logrotate configuration
/nsm/*/logs/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslog
    endscript
}

Configuration Management

# Backup SecurityOnion configuration
sudo so-backup

# Custom backup script
cat > /usr/local/bin/so-custom-backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/opt/so/backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p $BACKUP_DIR

# Backup configurations
cp -r /opt/so/conf $BACKUP_DIR/
cp -r /etc/nsm $BACKUP_DIR/
cp /etc/hostname $BACKUP_DIR/
cp /etc/hosts $BACKUP_DIR/

# Backup Elasticsearch indices list
curl -X GET "localhost:9200/_cat/indices?v" > $BACKUP_DIR/elasticsearch_indices.txt

# Backup Kibana objects
curl -X POST "localhost:5601/api/saved_objects/_export" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{"type": ["dashboard", "visualization", "search"]}' > $BACKUP_DIR/kibana_objects.ndjson

# Create archive
tar -czf $BACKUP_DIR.tar.gz -C /opt/so/backup $(basename $BACKUP_DIR)
rm -rf $BACKUP_DIR

echo "Backup created: $BACKUP_DIR.tar.gz"
EOF

chmod +x /usr/local/bin/so-custom-backup.sh

# Schedule regular backups
echo "0 2 * * * /usr/local/bin/so-custom-backup.sh" | sudo crontab -

# Configuration validation
sudo so-test                      # Test configuration
sudo so-checklist                 # Security checklist

# Update SecurityOnion
sudo so-update                    # Update packages
sudo so-upgrade                   # Upgrade to new version

# Network interface configuration
sudo nano /etc/netplan/01-netcfg.yaml

# Example netplan configuration
network:
  version: 2
  ethernets:
    eth0:  # Management interface
      dhcp4: true
    eth1:  # Monitoring interface
      dhcp4: false
      dhcp6: false

# Apply network configuration
sudo netplan apply

# Firewall configuration
sudo ufw status
sudo ufw allow from 192.168.1.0/24 to any port 443
sudo ufw allow from 192.168.1.0/24 to any port 9000
sudo ufw allow from 192.168.1.0/24 to any port 5601

# SSL certificate management
sudo so-ssl-update                # Update SSL certificates
sudo openssl x509 -in /etc/ssl/certs/so.crt -text -noout  # View certificate details

Performance Tuning

# Elasticsearch performance tuning
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml

# Key performance settings
cluster.name: securityonion
node.name: so-node-1
path.data: /nsm/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# JVM heap size (50% of available RAM, max 32GB)
sudo nano /opt/so/conf/elasticsearch/jvm.options
-Xms16g
-Xmx16g

# Logstash performance tuning
sudo nano /opt/so/conf/logstash/logstash.yml

# Key performance settings
pipeline.workers: 8
pipeline.batch.size: 1000
pipeline.batch.delay: 50
path.queue: /nsm/logstash/queue
queue.type: persisted
queue.max_bytes: 10gb

# Suricata performance tuning
sudo nano /opt/so/conf/suricata/suricata.yaml

# AF_PACKET configuration
af-packet:
  - interface: eth1
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    mmap-locked: yes
    tpacket-v3: yes
    ring-size: 200000
    block-size: 32768

# Threading configuration
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0-1" ]
    - receive-cpu-set:
        cpu: [ "2-5" ]
    - worker-cpu-set:
        cpu: [ "6-15" ]

# Zeek performance tuning
sudo nano /opt/so/conf/zeek/node.cfg

# Worker configuración
[worker-1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=8
pin_cpus=2,3,4,5,6,7,8,9

# System-level optimizations
# Increase file descriptor limits
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf

# Optimize network buffers
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.netdev_max_backlog = 5000" | sudo tee -a /etc/sysctl.conf

# Apply sysctl changes
sudo sysctl -p

# Disk I/O optimization
# Use deadline scheduler for SSDs
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# Mount options for performance
sudo nano /etc/fstab
# Add noatime,nodiratime options to reduce disk writes
/dev/sda1 /nsm ext4 defaults,noatime,nodiratime 0 2

Automation and Integration

API Integration

#!/usr/bin/env python3
# SecurityOnion API integration examples

import requests
import json
from datetime import datetime, timedelta

class SecurityOnionAPI:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.session = requests.Session()
        self.login(username, password)

    def login(self, username, password):
        """Authenticate with SecurityOnion"""
        login_data = {
            'username': username,
            'password': password
        }
        response = self.session.post(f"{self.base_url}/auth/login", json=login_data)
        if response.status_code == 200:
            print("Authentication successful")
        else:
            raise Exception("Authentication failed")

    def search_alerts(self, query, start_time=None, end_time=None):
        """Search for alerts in Elasticsearch"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=24)
        if not end_time:
            end_time = datetime.now()

        search_query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": "alert"}},
                        {"query_string": {"query": query}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }

        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=search_query
        )
        return response.json()

    def get_zeek_logs(self, log_type, start_time=None, end_time=None):
        """Retrieve Zeek logs"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=1)
        if not end_time:
            end_time = datetime.now()

        query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": log_type}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }

        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=query
        )
        return response.json()

    def create_case(self, title, description, severity=2):
        """Create case in TheHive"""
        case_data = {
            "title": title,
            "description": description,
            "severity": severity,
            "tlp": 2,
            "tags": ["automated"]
        }

        response = self.session.post(
            f"{self.base_url}/thehive/api/case",
            json=case_data
        )
        return response.json()

# Usage example
if __name__ == "__main__":
    so_api = SecurityOnionAPI("https://so-manager", "admin", "password")

    # Search for high-severity alerts
    alerts = so_api.search_alerts("alert.severity:[1 TO 2]")
    print(f"Found {len(alerts['hits']['hits'])} high-severity alerts")

    # Get recent DNS logs
    dns_logs = so_api.get_zeek_logs("dns")
    print(f"Found {len(dns_logs['hits']['hits'])} DNS events")

    # Create case for investigation
    case = so_api.create_case(
        "Automated Alert Investigation",
        "High-severity alerts detected requiring investigation"
    )
    print(f"Created case: {case.get('id')}")

Automated Response Scripts

#!/bin/bash
# Automated incident response script

LOG_FILE="/var/log/so-automated-response.log"
ALERT_THRESHOLD=10
TIME_WINDOW=300  # 5 minutes

log_message() {
    echo "$(date): $1" >> $LOG_FILE
}

check_alert_volume() {
    RECENT_ALERTS=$(sudo tail -n 1000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .timestamp' | \
        awk -v threshold=$(date -d "-${TIME_WINDOW} seconds" +%s) \
        'BEGIN{count=0} {
            gsub(/[TZ]/, " ", $1);
            if(mktime(gensub(/-/, " ", "g", $1)) > threshold) count++
        } END{print count}')

    if [ "$RECENT_ALERTS" -gt "$ALERT_THRESHOLD" ]; then
        log_message "High alert volume detected: $RECENT_ALERTS alerts in last $TIME_WINDOW seconds"
        return 0
    else
        log_message "Normal alert volume: $RECENT_ALERTS alerts"
        return 1
    fi
}

block_suspicious_ip() {
    local IP=$1
    log_message "Blocking suspicious IP: $IP"

    # Add to firewall
    sudo iptables -I INPUT -s $IP -j DROP

    # Add to Suricata block list
    echo "$IP" | sudo tee -a /opt/so/rules/block.rules

    # Restart Suricata to apply new rules
    sudo so-suricata-restart

    log_message "IP $IP blocked successfully"
}

analyze_top_alerting_ips() {
    TOP_IPS=$(sudo tail -n 10000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .src_ip' | \
        sort | uniq -c | sort -nr | head -5 | awk '$1 > 5 {print $2}')

    for IP in $TOP_IPS; do
        log_message "Analyzing suspicious IP: $IP"

        # Check if IP is external (not RFC 1918)
        if [[ ! $IP =~ ^192\.168\. ]] && [[ ! $IP =~ ^10\. ]] && [[ ! $IP =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]; then
            block_suspicious_ip $IP
        fi
    done
}

send_notification() {
    local MESSAGE=$1
    log_message "Sending notification: $MESSAGE"

    # Send email notification (configure sendmail/postfix)
    echo "$MESSAGE" | mail -s "SecurityOnion Alert" admin@company.com

    # Send Slack notification (configure webhook)
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"$MESSAGE\"}" \
        $SLACK_WEBHOOK_URL
}

main() {
    log_message "Starting automated response check"

    if check_alert_volume; then
        analyze_top_alerting_ips
        send_notification "High alert volume detected - automated response activated"
    fi

    log_message "Automated response check completed"
}

# Run main function
main

Integration with SOAR Platforms

#!/usr/bin/env python3
# Integration with external SOAR platforms

import requests
import json
from datetime import datetime

class SOARIntegration:
    def __init__(self, soar_url, api_key):
        self.soar_url = soar_url
        self.api_key = api_key
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }

    def create_incident(self, title, description, severity, artifacts):
        """Create incident in SOAR platform"""
        incident_data = {
            'name': title,
            'description': description,
            'severity': severity,
            'artifacts': artifacts,
            'source': 'SecurityOnion',
            'created_time': datetime.now().isoformat()
        }

        response = requests.post(
            f"{self.soar_url}/api/incidents",
            headers=self.headers,
            json=incident_data
        )
        return response.json()

    def add_artifact(self, incident_id, artifact_type, value, description):
        """Add artifact to existing incident"""
        artifact_data = {
            'type': artifact_type,
            'value': value,
            'description': description
        }

        response = requests.post(
            f"{self.soar_url}/api/incidents/{incident_id}/artifacts",
            headers=self.headers,
            json=artifact_data
        )
        return response.json()

    def run_playbook(self, incident_id, playbook_name):
        """Execute playbook for incident"""
        playbook_data = {
            'playbook': playbook_name,
            'incident_id': incident_id
        }

        response = requests.post(
            f"{self.soar_url}/api/playbooks/run",
            headers=self.headers,
            json=playbook_data
        )
        return response.json()

# Example usage
def process_security_alert(alert_data):
    soar = SOARIntegration("https://soar-platform", "api-key")

    # Extract relevant information
    title = f"Security Alert: {alert_data.get('alert', {}).get('signature', 'Unknown')}"
    description = f"Alert detected at {alert_data.get('timestamp')}"
    severity = alert_data.get('alert', {}).get('severity', 3)

    # Create artifacts
    artifacts = []
    if alert_data.get('src_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['src_ip'],
            'description': 'Source IP address'
        })

    if alert_data.get('dest_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['dest_ip'],
            'description': 'Destination IP address'
        })

    # Create incident
    incident = soar.create_incident(title, description, severity, artifacts)

    # Run appropriate playbook based on alert type
    if 'malware' in title.lower():
        soar.run_playbook(incident['id'], 'malware-investigation')
    elif 'phishing' in title.lower():
        soar.run_playbook(incident['id'], 'phishing-response')
    else:
        soar.run_playbook(incident['id'], 'generic-investigation')

    return incident

Resources