SecurityOnion Cheatsheet

SecurityOnion is a free and open-source Linux distribution for threat hunting, enterprise security monitoring, and log management. It bundles a comprehensive suite of security tools, including Elasticsearch, Logstash, Kibana, Suricata, Zeek, Wazuh, TheHive, Cortex, and many other security-focused applications, integrated into a cohesive platform.

Overview

Architecture and Components

SecurityOnion follows a distributed architecture with different node types serving specific functions. The platform integrates multiple open-source security tools into a unified ecosystem for comprehensive network security monitoring and incident response.

Core components include network security monitoring (NSM) tools such as Suricata and Zeek for traffic analysis, log management through the Elastic Stack, host-based monitoring via Wazuh agents, and case management through the TheHive integration.
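What ties these components together is structured JSON: Suricata, for example, writes alerts as newline-delimited EVE JSON that the Elastic Stack ingests and the hunting interfaces query. A minimal Python sketch of that record shape (the sample record below is illustrative and trimmed to a few fields; real EVE records carry many more):

```python
import json

# Illustrative EVE JSON alert record, as Suricata would append it to eve.json
sample = ('{"timestamp":"2023-01-01T12:00:00.000000+0000","event_type":"alert",'
          '"src_ip":"192.168.1.50","dest_ip":"203.0.113.7",'
          '"alert":{"signature":"SSH Connection Attempt","severity":2}}')

def summarize(line: str) -> str:
    """Reduce one EVE JSON line to a one-line triage summary."""
    ev = json.loads(line)
    a = ev.get("alert", {})
    return f'[{a.get("severity")}] {a.get("signature")}: ' \
           f'{ev.get("src_ip")} -> {ev.get("dest_ip")}'

print(summarize(sample))
# [2] SSH Connection Attempt: 192.168.1.50 -> 203.0.113.7
```

The same pattern (filter on `event_type`, then pull fields out of the nested `alert` object) underlies the `jq` one-liners used later in this cheatsheet.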

Key Features

Core SecurityOnion capabilities:

- Full packet capture and network security monitoring
- Intrusion detection and prevention (Suricata)
- Network traffic analysis (Zeek/Bro)
- Log aggregation and analysis (Elastic Stack)
- Host-based intrusion detection (Wazuh)
- Threat hunting and investigation tools
- Case management and incident response
- Distributed deployment architecture
- Web-based management interface (SOC)

Installation and Setup

ISO Installation

```bash
# Download the SecurityOnion ISO
wget https://github.com/Security-Onion-Solutions/securityonion/releases/latest/download/securityonion-2.3.x-x86_64.iso

# Verify the checksum
sha256sum securityonion-2.3.x-x86_64.iso

# Create a bootable USB (Linux)
sudo dd if=securityonion-2.3.x-x86_64.iso of=/dev/sdX bs=4M status=progress
sync

# Boot from the USB and follow the installation wizard
# Minimum requirements:
# - 16GB RAM (32GB+ recommended)
# - 200GB storage (1TB+ recommended)
# - Dual network interfaces (management + monitoring)

# Post-installation setup
sudo so-setup

# The initial setup wizard configures:
# - Network interfaces
# - Node type (standalone, manager, search, forward, heavy, etc.)
# - User accounts and authentication
# - SSL certificates
# - Service configuration
```

Distributed Deployment

```bash
# Manager node setup (first node)
sudo so-setup
# Select: Install
# Select: Manager
# Configure the management interface
# Configure monitoring interface(s)
# Set admin credentials
# Configure grid settings

# Search node setup
sudo so-setup
# Select: Install
# Select: Search Node
# Enter the manager IP address
# Configure network settings
# Join the existing grid

# Forward node setup
sudo so-setup
# Select: Install
# Select: Forward Node
# Enter the manager IP address
# Configure monitoring interfaces
# Set the log forwarding destination

# Heavy node setup (combined search + forward)
sudo so-setup
# Select: Install
# Select: Heavy Node
# Enter the manager IP address
# Configure all interfaces and services

# Verify grid status
sudo so-status
sudo so-grid-status

# Check node connectivity
sudo so-test
```

Docker-Based Installation

```bash
# Install Docker and Docker Compose
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

# Clone the SecurityOnion repository
git clone https://github.com/Security-Onion-Solutions/securityonion.git
cd securityonion

# Configure the environment
cp .env.example .env
nano .env

# Example .env configuration
SO_MANAGER_IP=192.168.1.100
SO_INTERFACE_MONITOR=eth1
SO_INTERFACE_MANAGEMENT=eth0
SO_ADMIN_USER=admin
SO_ADMIN_PASS=SecurePassword123!

# Deploy with Docker Compose
sudo docker-compose up -d

# Check deployment status
sudo docker-compose ps
sudo docker-compose logs -f
```

Core Tools and Services

Suricata (IDS/IPS)

```bash
# Suricata service management
sudo so-suricata-restart
sudo so-suricata-status

# View the Suricata configuration
sudo cat /opt/so/conf/suricata/suricata.yaml

# Update Suricata rules
sudo so-rule-update

# Custom rule management
sudo nano /opt/so/rules/local.rules

# Example custom rules
alert tcp any any -> $HOME_NET 22 (msg:"SSH Connection Attempt"; sid:1000001; rev:1;)
alert http any any -> any any (msg:"Suspicious User Agent"; content:"User-Agent: BadBot"; sid:1000002; rev:1;)

# Test rule syntax
sudo suricata -T -c /opt/so/conf/suricata/suricata.yaml

# Monitor Suricata alerts
sudo tail -f /nsm/suricata/eve.json

# Performance tuning: adjust af-packet workers, ring-size,
# block-size, and use-mmap in suricata.yaml
sudo nano /opt/so/conf/suricata/suricata.yaml

# Restart Suricata with the new configuration
sudo so-suricata-restart

# Check Suricata statistics via the unix socket client
sudo suricatasc -c dump-counters

# Rule management commands
sudo so-rule-update --help
sudo so-rule-update --force
sudo so-rule-update --ruleset=emerging-threats
```

Zeek (Network Analysis)

```bash
# Zeek service management
sudo so-zeek-restart
sudo so-zeek-status

# View the Zeek configuration
sudo cat /opt/so/conf/zeek/node.cfg

# Zeek log locations
ls -la /nsm/zeek/logs/current/

# Common Zeek logs
tail -f /nsm/zeek/logs/current/conn.log
tail -f /nsm/zeek/logs/current/dns.log
tail -f /nsm/zeek/logs/current/http.log
tail -f /nsm/zeek/logs/current/ssl.log
tail -f /nsm/zeek/logs/current/files.log

# Custom Zeek scripts
sudo nano /opt/so/conf/zeek/local.zeek

# Example custom script
@load base/protocols/http
event http_request(c: connection, method: string, original_URI: string,
                   unescaped_URI: string, version: string)
    {
    if ( /malware/ in original_URI )
        print fmt("Suspicious HTTP request: %s %s", c$id$orig_h, original_URI);
    }

# Deploy the Zeek configuration
sudo so-zeek-restart

# Offline packet analysis with Zeek
sudo zeek -r /nsm/pcap/file.pcap local.zeek

# Extract files from network traffic
sudo zeek -r traffic.pcap /opt/so/conf/zeek/extract-files.zeek

# Zeek intelligence framework
sudo nano /opt/so/conf/zeek/intel.dat

# Format (tab-separated): indicator<TAB>indicator_type<TAB>meta.source
192.168.1.100	Intel::ADDR	malicious-ip-list
evil.com	Intel::DOMAIN	suspicious-domains

# Load the intelligence data
sudo so-zeek-restart

# Monitor intelligence matches
tail -f /nsm/zeek/logs/current/intel.log
```
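The intel file format above is strictly TAB-separated, and Zeek's input framework additionally expects a literal `#fields` header line; hand-editing with spaces is a common source of silently ignored indicators. A small Python helper can generate the file safely (the file name and indicator list here are just examples):

```python
# Indicators as (indicator, indicator_type, meta.source) tuples,
# matching the intel.dat columns shown above
indicators = [
    ("192.168.1.100", "Intel::ADDR", "malicious-ip-list"),
    ("evil.com", "Intel::DOMAIN", "suspicious-domains"),
]

def build_intel_file(rows):
    """Render rows into Zeek intel-file format: a #fields header
    followed by TAB-separated columns."""
    lines = ["#fields\tindicator\tindicator_type\tmeta.source"]
    lines += ["\t".join(row) for row in rows]
    return "\n".join(lines) + "\n"

# Write the file, then point intel.dat at it and restart Zeek
with open("intel.dat", "w") as f:
    f.write(build_intel_file(indicators))
```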

Wazuh (HIDS)

```bash
# Wazuh manager configuration
sudo nano /opt/so/conf/wazuh/ossec.conf

# Deploy the Wazuh agent (on monitored systems)

# Download the agent
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.x.x-1_amd64.deb

# Install the agent
sudo dpkg -i wazuh-agent_4.x.x-1_amd64.deb

# Configure the agent
sudo nano /var/ossec/etc/ossec.conf

# Point the agent at the manager, e.g.:
# <server>
#   <address>192.168.1.100</address>
#   <port>1514</port>
#   <protocol>tcp</protocol>
# </server>

# Start the agent
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent

# Register the agent on the manager
sudo /var/ossec/bin/manage_agents
# Select option 'A' to add an agent
# Enter the agent name and IP

# Extract the agent key
sudo /var/ossec/bin/manage_agents
# Select option 'E' to extract the key

# Import the key on the agent
sudo /var/ossec/bin/manage_agents
# Select option 'I' to import the key

# Restart the agent
sudo systemctl restart wazuh-agent

# Check agent status
sudo /var/ossec/bin/agent_control -lc

# Custom Wazuh rules
sudo nano /opt/so/conf/wazuh/rules/local_rules.xml

# Example custom rule (SSH login from outside 192.168.1.0/24):
# <rule id="100001" level="10">
#   <if_sid>5716</if_sid>
#   <srcip>!192.168.1.0/24</srcip>
#   <description>SSH login from external network</description>
#   <group>authentication_success,pci_dss_10.2.5,</group>
# </rule>

# Test the rule configuration
sudo /var/ossec/bin/ossec-logtest

# Restart the Wazuh manager
sudo systemctl restart wazuh-manager

# Monitor Wazuh alerts
sudo tail -f /var/ossec/logs/alerts/alerts.log
```

Elastic Stack (ELK)

```bash
# Elasticsearch management
sudo so-elasticsearch-restart
sudo so-elasticsearch-status

# Check Elasticsearch cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

# View Elasticsearch indices
curl -X GET "localhost:9200/_cat/indices?v"

# Elasticsearch configuration
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml

# Logstash management
sudo so-logstash-restart
sudo so-logstash-status

# Logstash configuration
ls -la /opt/so/conf/logstash/conf.d/

# Custom Logstash pipeline
sudo nano /opt/so/conf/logstash/conf.d/custom.conf

# Example Logstash configuration
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "custom" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "custom-logs-%{+YYYY.MM.dd}"
  }
}

# Test the Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /opt/so/conf/logstash/conf.d/custom.conf

# Kibana management
sudo so-kibana-restart
sudo so-kibana-status

# Access the Kibana web interface
# https://manager-ip/kibana

# Create a custom Kibana dashboard:
# 1. Navigate to Kibana > Dashboard
# 2. Create a new dashboard
# 3. Add visualizations
# 4. Save the dashboard

# Export Kibana saved objects
curl -X POST "localhost:5601/api/saved_objects/_export" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{"type": "dashboard"}' > dashboards.ndjson

# Import Kibana saved objects
curl -X POST "localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  -F file=@dashboards.ndjson
```

TheHive (Case Management)

```bash
# TheHive configuration
sudo nano /opt/so/conf/thehive/application.conf

# Example configuration
play.http.secret.key = "your-secret-key"
db.janusgraph {
  storage.backend = berkeleyje
  storage.directory = /opt/thehive/db
}

# Start TheHive
sudo so-thehive-restart
sudo so-thehive-status

# Access the TheHive web interface
# https://manager-ip:9000

# Create an organization and users:
# 1. Log in as admin
# 2. Navigate to Admin > Organizations
# 3. Create an organization
# 4. Add users to the organization

# API usage examples
THEHIVE_URL="https://localhost:9000"
API_KEY="your-api-key"

# Create a case
curl -X POST "$THEHIVE_URL/api/case" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Suspicious Network Activity",
    "description": "Detected unusual traffic patterns",
    "severity": 2,
    "tlp": 2,
    "tags": ["network", "suspicious"]
  }'

# Add an observable to a case
curl -X POST "$THEHIVE_URL/api/case/{case-id}/artifact" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dataType": "ip",
    "data": "192.168.1.100",
    "message": "Suspicious IP address",
    "tags": ["malicious"]
  }'

# Search cases
curl -X POST "$THEHIVE_URL/api/case/_search" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "_and": [
        {"_field": "status", "_value": "Open"},
        {"_field": "severity", "_gte": 2}
      ]
    }
  }'
```

Cortex (Analysis Engine)

```bash
# Cortex configuration
sudo nano /opt/so/conf/cortex/application.conf

# Example configuration
play.http.secret.key = "cortex-secret-key"
cortex.storage {
  provider = localfs
  localfs.location = /opt/cortex/files
}

# Start Cortex
sudo so-cortex-restart
sudo so-cortex-status

# Access the Cortex web interface
# https://manager-ip:9001

# Install analyzers
sudo docker pull cortexneurons/virustotal_3_0
sudo docker pull cortexneurons/shodan_host_1_0
sudo docker pull cortexneurons/abuse_finder_1_0

# Configure analyzers:
# 1. Log in to Cortex
# 2. Navigate to Organization > Analyzers
# 3. Enable and configure analyzers
# 4. Add API keys for external services

# API usage examples
CORTEX_URL="https://localhost:9001"
API_KEY="your-cortex-api-key"

# Submit an analysis job
curl -X POST "$CORTEX_URL/api/analyzer/VirusTotal_GetReport_3_0/run" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": "malicious-hash",
    "dataType": "hash",
    "tlp": 2
  }'

# Get job results
curl -X GET "$CORTEX_URL/api/job/{job-id}" \
  -H "Authorization: Bearer $API_KEY"

# List available analyzers
curl -X GET "$CORTEX_URL/api/analyzer" \
  -H "Authorization: Bearer $API_KEY"
```

Network Security Monitoring

Packet Capture and Analysis

```bash
# Full packet capture configuration
sudo nano /opt/so/conf/stenographer/config

# Example stenographer configuration
{
  "Threads": [
    {
      "PacketsDirectory": "/nsm/pcap",
      "MaxDirectoryFiles": 30000,
      "DiskFreePercentage": 10
    }
  ],
  "StenotypePath": "/usr/bin/stenotype",
  "Interface": "eth1",
  "Port": 1234,
  "Host": "127.0.0.1",
  "Flags": [],
  "CertPath": "/opt/so/conf/stenographer/certs"
}

# Start packet capture
sudo so-stenographer-restart

# Query the packet capture
sudo stenoread 'host 192.168.1.100' -w output.pcap

# Time-based queries
sudo stenoread 'host 192.168.1.100 and after 2023-01-01T00:00:00Z and before 2023-01-01T23:59:59Z' -w output.pcap

# Protocol-specific queries
sudo stenoread 'tcp and port 80' -w http-traffic.pcap
sudo stenoread 'udp and port 53' -w dns-traffic.pcap

# Advanced packet analysis with tcpdump
sudo tcpdump -i eth1 -w capture.pcap
sudo tcpdump -r capture.pcap 'host 192.168.1.100'
sudo tcpdump -r capture.pcap -A 'port 80'

# Packet analysis with tshark
sudo tshark -i eth1 -w capture.pcap
sudo tshark -r capture.pcap -Y "ip.addr == 192.168.1.100"
sudo tshark -r capture.pcap -Y "http.request.method == GET" -T fields -e http.host -e http.request.uri

# Extract files from a packet capture
sudo tcpflow -r capture.pcap -o extracted_files/
sudo foremost -i capture.pcap -o carved_files/

# Capture statistics and time-slicing
sudo capinfos capture.pcap
sudo editcap -A '2023-01-01 00:00:00' -B '2023-01-01 23:59:59' input.pcap output.pcap
```

Traffic Analysis and Hunting

```bash
# Zeek-based traffic analysis

# Connection analysis
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h id.resp_p proto duration orig_bytes resp_bytes | sort | uniq -c | sort -nr

# DNS analysis
sudo zcat /nsm/zeek/logs/*/dns.log.gz | zeek-cut query answers | grep -v "^-" | sort | uniq -c | sort -nr

# HTTP analysis
sudo zcat /nsm/zeek/logs/*/http.log.gz | zeek-cut host uri user_agent | grep -E "(exe|zip|rar)" | sort | uniq

# SSL/TLS analysis
sudo zcat /nsm/zeek/logs/*/ssl.log.gz | zeek-cut server_name subject issuer | sort | uniq

# File analysis
sudo zcat /nsm/zeek/logs/*/files.log.gz | zeek-cut mime_type filename md5 | grep -E "(exe|pdf|doc)" | sort | uniq

# Custom Zeek analysis script
cat > /tmp/analyze_traffic.zeek << 'EOF'
@load base/protocols/http
@load base/protocols/dns

global suspicious_domains: set[string] = { "evil.com", "malware.net", "phishing.org" };

event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count)
    {
    if ( query in suspicious_domains )
        print fmt("Suspicious DNS query: %s -> %s", c$id$orig_h, query);
    }

event http_request(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string)
    {
    if ( /\.(exe|zip|rar)$/ in original_URI )
        print fmt("Suspicious file download: %s -> %s", c$id$orig_h, original_URI);
    }
EOF

# Run the analysis on a packet capture
sudo zeek -r capture.pcap /tmp/analyze_traffic.zeek

# Threat hunting queries

# Long connections (> 1 hour)
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut duration id.orig_h id.resp_h | awk '$1 > 3600' | sort -nr

# Large data transfers (> 1 MB)
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut orig_bytes resp_bytes id.orig_h id.resp_h | awk '$1+$2 > 1000000' | sort -nr

# Unusual ports
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.resp_p proto | sort | uniq -c | sort -nr | head -20

# Beaconing detection (many connections to the same host in the same minute)
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h ts | awk '{print $1, $2, strftime("%H:%M", $3)}' | sort | uniq -c | awk '$1 > 10'
```
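The minute-bucketing one-liner above catches chatty beacons but misses slower, regular check-ins. A complementary approach tests whether inter-arrival times between a source/destination pair are suspiciously regular. A self-contained sketch (the thresholds and synthetic timestamps are illustrative, not tuned values):

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(events, min_gaps=6, max_jitter=0.1):
    """events: (src, dst, ts) tuples, e.g. from zeek-cut id.orig_h id.resp_h ts.
    Flags pairs with many connections whose inter-arrival times are regular
    (coefficient of variation below max_jitter)."""
    by_pair = defaultdict(list)
    for src, dst, ts in events:
        by_pair[(src, dst)].append(ts)
    beacons = []
    for pair, times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) >= min_gaps:
            m = mean(gaps)
            if m > 0 and pstdev(gaps) / m < max_jitter:
                beacons.append(pair)
    return beacons

# Synthetic example: one host checks in every 60 seconds, another at random
events = [("10.0.0.5", "203.0.113.7", 1000 + 60 * i) for i in range(10)]
events += [("10.0.0.8", "198.51.100.2", t)
           for t in (1000, 1013, 1200, 1950, 2100, 2111, 2500, 3000)]
print(find_beacons(events))
# [('10.0.0.5', '203.0.113.7')]
```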

Alert Management and Investigation

```bash
# Suricata alert analysis

# Real-time alert monitoring
sudo tail -f /nsm/suricata/eve.json | jq 'select(.event_type=="alert")'

# Alert statistics
sudo cat /nsm/suricata/eve.json | jq -r 'select(.event_type=="alert") | .alert.signature' | sort | uniq -c | sort -nr

# High-priority alerts
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert" and .alert.severity<=2)'

# Alert correlation with Zeek logs

# Extract IPs from alerts
sudo cat /nsm/suricata/eve.json | jq -r 'select(.event_type=="alert") | "\(.src_ip) \(.dest_ip)"' | sort | uniq > alert_ips.txt

# Find the corresponding Zeek connections
while read src_ip dest_ip; do
    sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h | grep "$src_ip.*$dest_ip"
done < alert_ips.txt

# Custom alert processing script
cat > /usr/local/bin/process_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import sys

def process_alert(alert):
    if alert.get('event_type') == 'alert':
        severity = alert.get('alert', {}).get('severity', 0)
        signature = alert.get('alert', {}).get('signature', '')
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')
        timestamp = alert.get('timestamp', '')

        if severity <= 2:  # High-priority alerts
            print(f"HIGH PRIORITY: {timestamp} - {signature}")
            print(f"  Source: {src_ip} -> Destination: {dest_ip}")
            print(f"  Severity: {severity}")
            print("-" * 50)

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            process_alert(alert)
        except json.JSONDecodeError:
            continue
EOF

chmod +x /usr/local/bin/process_alerts.py

# Process alerts
sudo tail -f /nsm/suricata/eve.json | /usr/local/bin/process_alerts.py

# Alert enrichment with threat intelligence
cat > /usr/local/bin/enrich_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import sys

def check_virustotal(ip):
    # Placeholder for VirusTotal API integration
    # Replace with an actual API key and implementation
    return {"reputation": "unknown"}

def enrich_alert(alert):
    if alert.get('event_type') == 'alert':
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')

        # Enrich with threat intelligence
        alert['enrichment'] = {
            'src_intel': check_virustotal(src_ip),
            'dest_intel': check_virustotal(dest_ip)
        }
    return alert

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            print(json.dumps(enrich_alert(alert)))
        except json.JSONDecodeError:
            continue
EOF

chmod +x /usr/local/bin/enrich_alerts.py
```

System Administration

Service Management

```bash
# SecurityOnion service management
sudo so-status      # Check all services
sudo so-restart     # Restart all services
sudo so-stop        # Stop all services
sudo so-start       # Start all services

# Individual service management
sudo so-elasticsearch-restart
sudo so-logstash-restart
sudo so-kibana-restart
sudo so-suricata-restart
sudo so-zeek-restart
sudo so-wazuh-restart
sudo so-thehive-restart
sudo so-cortex-restart

# Check service logs
sudo so-elasticsearch-logs
sudo so-logstash-logs
sudo so-kibana-logs

# Docker container management
sudo docker ps                                  # List running containers
sudo docker logs container_name                 # View container logs
sudo docker exec -it container_name /bin/bash   # Access a container shell

# System resource monitoring
sudo so-top   # SecurityOnion-specific top
htop          # System resource usage
iotop         # I/O monitoring
nethogs       # Network usage by process

# Disk space management
df -h                                               # Check disk usage
sudo du -sh /nsm/                                   # Check NSM data usage
sudo find /nsm -name "*.log.gz" -mtime +30 -delete  # Clean old logs

# Log rotation configuration
sudo nano /etc/logrotate.d/securityonion

# Example logrotate configuration
/nsm/*/logs/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslogd
    endscript
}
```

Configuration Management

```bash
# Back up the SecurityOnion configuration
sudo so-backup

# Custom backup script
cat > /usr/local/bin/so-custom-backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/opt/so/backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p $BACKUP_DIR

# Back up configurations
cp -r /opt/so/conf $BACKUP_DIR/
cp -r /etc/nsm $BACKUP_DIR/
cp /etc/hostname $BACKUP_DIR/
cp /etc/hosts $BACKUP_DIR/

# Back up the Elasticsearch index list
curl -X GET "localhost:9200/_cat/indices?v" > $BACKUP_DIR/elasticsearch_indices.txt

# Back up Kibana objects
curl -X POST "localhost:5601/api/saved_objects/_export" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{"type": ["dashboard", "visualization", "search"]}' > $BACKUP_DIR/kibana_objects.ndjson

# Create an archive
tar -czf $BACKUP_DIR.tar.gz -C /opt/so/backup $(basename $BACKUP_DIR)
rm -rf $BACKUP_DIR

echo "Backup created: $BACKUP_DIR.tar.gz"
EOF

chmod +x /usr/local/bin/so-custom-backup.sh

# Schedule regular backups
echo "0 2 * * * /usr/local/bin/so-custom-backup.sh" | sudo crontab -

# Configuration validation
sudo so-test        # Test the configuration
sudo so-checklist   # Security checklist

# Update SecurityOnion
sudo so-update      # Update packages
sudo so-upgrade     # Upgrade to a new version

# Network interface configuration
sudo nano /etc/netplan/01-netcfg.yaml

# Example netplan configuration
network:
  version: 2
  ethernets:
    eth0:            # Management interface
      dhcp4: true
    eth1:            # Monitoring interface
      dhcp4: false
      dhcp6: false

# Apply the network configuration
sudo netplan apply

# Firewall configuration
sudo ufw status
sudo ufw allow from 192.168.1.0/24 to any port 443
sudo ufw allow from 192.168.1.0/24 to any port 9000
sudo ufw allow from 192.168.1.0/24 to any port 5601

# SSL certificate management
sudo so-ssl-update                                        # Update SSL certificates
sudo openssl x509 -in /etc/ssl/certs/so.crt -text -noout  # View certificate details
```

Performance Tuning

```bash
# Elasticsearch performance tuning
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml

# Key performance settings
cluster.name: securityonion
node.name: so-node-1
path.data: /nsm/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# JVM heap size (50% of available RAM, max 32GB)
sudo nano /opt/so/conf/elasticsearch/jvm.options
-Xms16g
-Xmx16g

# Logstash performance tuning
sudo nano /opt/so/conf/logstash/logstash.yml

# Key performance settings
pipeline.workers: 8
pipeline.batch.size: 1000
pipeline.batch.delay: 50
path.queue: /nsm/logstash/queue
queue.type: persisted
queue.max_bytes: 10gb

# Suricata performance tuning
sudo nano /opt/so/conf/suricata/suricata.yaml

# AF_PACKET configuration
af-packet:
  - interface: eth1
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    mmap-locked: yes
    tpacket-v3: yes
    ring-size: 200000
    block-size: 32768

# Threading configuration
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0-1" ]
    - receive-cpu-set:
        cpu: [ "2-5" ]
    - worker-cpu-set:
        cpu: [ "6-15" ]

# Zeek performance tuning
sudo nano /opt/so/conf/zeek/node.cfg

# Worker configuration
[worker-1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=8
pin_cpus=2,3,4,5,6,7,8,9

# System-level optimizations

# Increase file descriptor limits
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf

# Optimize network buffers
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.netdev_max_backlog = 5000" | sudo tee -a /etc/sysctl.conf

# Apply sysctl changes
sudo sysctl -p

# Disk I/O optimization: use the deadline scheduler for SSDs
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# Mount options for performance
sudo nano /etc/fstab
# Add noatime,nodiratime options to reduce disk writes
/dev/sda1  /nsm  ext4  defaults,noatime,nodiratime  0  2
```
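The heap-sizing rule quoted above (half the available RAM, capped at 32 GB, the cap commonly chosen so compressed object pointers stay enabled) is easy to misapply when provisioning mixed hardware. A trivial sketch of the rule:

```python
def es_heap_gb(total_ram_gb: int) -> int:
    """Elasticsearch heap: 50% of RAM, capped at 32 GB."""
    return min(total_ram_gb // 2, 32)

# Heap sizes for a few typical node sizes
for ram in (16, 32, 64, 128):
    print(f"{ram} GB RAM -> -Xms{es_heap_gb(ram)}g -Xmx{es_heap_gb(ram)}g")
```

Set `-Xms` and `-Xmx` in `jvm.options` to the same value, as in the example above, so the heap never resizes at runtime.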

Automation and Integration

API Integration

```python
#!/usr/bin/env python3
# SecurityOnion API integration examples

import requests
from datetime import datetime, timedelta

class SecurityOnionAPI:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.session = requests.Session()
        self.login(username, password)

    def login(self, username, password):
        """Authenticate with SecurityOnion"""
        login_data = {
            'username': username,
            'password': password
        }
        response = self.session.post(f"{self.base_url}/auth/login", json=login_data)
        if response.status_code == 200:
            print("Authentication successful")
        else:
            raise Exception("Authentication failed")

    def search_alerts(self, query, start_time=None, end_time=None):
        """Search for alerts in Elasticsearch"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=24)
        if not end_time:
            end_time = datetime.now()

        search_query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": "alert"}},
                        {"query_string": {"query": query}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }

        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=search_query
        )
        return response.json()

    def get_zeek_logs(self, log_type, start_time=None, end_time=None):
        """Retrieve Zeek logs"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=1)
        if not end_time:
            end_time = datetime.now()

        query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": log_type}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }

        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=query
        )
        return response.json()

    def create_case(self, title, description, severity=2):
        """Create a case in TheHive"""
        case_data = {
            "title": title,
            "description": description,
            "severity": severity,
            "tlp": 2,
            "tags": ["automated"]
        }

        response = self.session.post(
            f"{self.base_url}/thehive/api/case",
            json=case_data
        )
        return response.json()

# Usage example
if __name__ == "__main__":
    so_api = SecurityOnionAPI("https://so-manager", "admin", "password")

    # Search for high-severity alerts
    alerts = so_api.search_alerts("alert.severity:[1 TO 2]")
    print(f"Found {len(alerts['hits']['hits'])} high-severity alerts")

    # Get recent DNS logs
    dns_logs = so_api.get_zeek_logs("dns")
    print(f"Found {len(dns_logs['hits']['hits'])} DNS events")

    # Create a case for the investigation
    case = so_api.create_case(
        "Automated Alert Investigation",
        "High-severity alerts detected requiring investigation"
    )
    print(f"Created case: {case.get('id')}")
```

Automated Response Scripts

```bash
#!/bin/bash
# Automated incident response script

LOG_FILE="/var/log/so-automated-response.log"
ALERT_THRESHOLD=10
TIME_WINDOW=300  # 5 minutes

log_message() {
    echo "$(date): $1" >> $LOG_FILE
}

check_alert_volume() {
    RECENT_ALERTS=$(sudo tail -n 1000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .timestamp' | \
        awk -v threshold=$(date -d "-${TIME_WINDOW} seconds" +%s) \
            'BEGIN{count=0}
             { gsub(/[TZ]/, " ", $1);
               if (mktime(gensub(/[-:]/, " ", "g", $1)) > threshold) count++ }
             END{print count}')

    if [ "$RECENT_ALERTS" -gt "$ALERT_THRESHOLD" ]; then
        log_message "High alert volume detected: $RECENT_ALERTS alerts in last $TIME_WINDOW seconds"
        return 0
    else
        log_message "Normal alert volume: $RECENT_ALERTS alerts"
        return 1
    fi
}

block_suspicious_ip() {
    local IP=$1
    log_message "Blocking suspicious IP: $IP"

    # Add to the firewall
    sudo iptables -I INPUT -s $IP -j DROP

    # Add to the Suricata block list
    echo "$IP" | sudo tee -a /opt/so/rules/block.rules

    # Restart Suricata to apply the new rules
    sudo so-suricata-restart

    log_message "IP $IP blocked successfully"
}

analyze_top_alerting_ips() {
    TOP_IPS=$(sudo tail -n 10000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .src_ip' | \
        sort | uniq -c | sort -nr | head -5 | awk '$1 > 5 {print $2}')

    for IP in $TOP_IPS; do
        log_message "Analyzing suspicious IP: $IP"

        # Block only external IPs (skip RFC 1918 ranges)
        if [[ ! $IP =~ ^192\.168\. ]] && [[ ! $IP =~ ^10\. ]] && \
           [[ ! $IP =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]; then
            block_suspicious_ip $IP
        fi
    done
}

send_notification() {
    local MESSAGE=$1
    log_message "Sending notification: $MESSAGE"

    # Send an email notification (configure sendmail/postfix)
    echo "$MESSAGE" | mail -s "SecurityOnion Alert" admin@company.com

    # Send a Slack notification (configure the webhook)
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"$MESSAGE\"}" \
        $SLACK_WEBHOOK_URL
}

main() {
    log_message "Starting automated response check"

    if check_alert_volume; then
        analyze_top_alerting_ips
        send_notification "High alert volume detected - automated response activated"
    fi

    log_message "Automated response check completed"
}

# Run the main function
main
```
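The RFC 1918 checks in the shell script above are done with regular expressions, which are easy to get subtly wrong (the `172.16.0.0/12` alternation especially). If the response logic is ever ported to Python, the standard `ipaddress` module expresses the same check declaratively. A sketch:

```python
import ipaddress

# The same RFC 1918 ranges the shell regexes try to match
PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("192.168.0.0/16", "10.0.0.0/8", "172.16.0.0/12")]

def is_external(ip: str) -> bool:
    """True if the address falls outside the RFC 1918 private ranges,
    i.e. a candidate for automated blocking."""
    addr = ipaddress.ip_address(ip)
    return not any(addr in net for net in PRIVATE_NETS)

print(is_external("172.20.1.9"))   # False (inside 172.16.0.0/12)
print(is_external("203.0.113.7"))  # True
```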

Integration with SOAR Platforms

```python
#!/usr/bin/env python3
# Integration with external SOAR platforms

import requests
from datetime import datetime

class SOARIntegration:
    def __init__(self, soar_url, api_key):
        self.soar_url = soar_url
        self.api_key = api_key
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }

    def create_incident(self, title, description, severity, artifacts):
        """Create an incident in the SOAR platform"""
        incident_data = {
            'name': title,
            'description': description,
            'severity': severity,
            'artifacts': artifacts,
            'source': 'SecurityOnion',
            'created_time': datetime.now().isoformat()
        }

        response = requests.post(
            f"{self.soar_url}/api/incidents",
            headers=self.headers,
            json=incident_data
        )
        return response.json()

    def add_artifact(self, incident_id, artifact_type, value, description):
        """Add an artifact to an existing incident"""
        artifact_data = {
            'type': artifact_type,
            'value': value,
            'description': description
        }

        response = requests.post(
            f"{self.soar_url}/api/incidents/{incident_id}/artifacts",
            headers=self.headers,
            json=artifact_data
        )
        return response.json()

    def run_playbook(self, incident_id, playbook_name):
        """Execute a playbook for an incident"""
        playbook_data = {
            'playbook': playbook_name,
            'incident_id': incident_id
        }

        response = requests.post(
            f"{self.soar_url}/api/playbooks/run",
            headers=self.headers,
            json=playbook_data
        )
        return response.json()

# Example usage
def process_security_alert(alert_data):
    soar = SOARIntegration("https://soar-platform", "api-key")

    # Extract the relevant information
    title = f"Security Alert: {alert_data.get('alert', {}).get('signature', 'Unknown')}"
    description = f"Alert detected at {alert_data.get('timestamp')}"
    severity = alert_data.get('alert', {}).get('severity', 3)

    # Create artifacts
    artifacts = []
    if alert_data.get('src_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['src_ip'],
            'description': 'Source IP address'
        })

    if alert_data.get('dest_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['dest_ip'],
            'description': 'Destination IP address'
        })

    # Create the incident
    incident = soar.create_incident(title, description, severity, artifacts)

    # Run the appropriate playbook based on the alert type
    if 'malware' in title.lower():
        soar.run_playbook(incident['id'], 'malware-investigation')
    elif 'phishing' in title.lower():
        soar.run_playbook(incident['id'], 'phishing-response')
    else:
        soar.run_playbook(incident['id'], 'generic-investigation')

    return incident
```

Resources