SecurityOnion Cheatsheet
SecurityOnion is a free and open-source Linux distribution for threat hunting, enterprise security monitoring, and log management. It includes a comprehensive suite of security tools including Elasticsearch, Logstash, Kibana, Suricata, Zeek, Wazuh, TheHive, Cortex, and many other security-focused applications integrated into a cohesive platform.
Platform Overview
Architecture and Components
SecurityOnion follows a distributed architecture with different node types serving specific functions. The platform integrates multiple open-source security tools into a unified ecosystem for comprehensive network security monitoring and incident response.
Core components include network security monitoring (NSM) tools like Suricata and Zeek for traffic analysis, log management through the Elastic Stack, host-based monitoring via Wazuh agents, and case management through TheHive integration.
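As a rough sketch, a small distributed grid under this model might look like the following (node names and counts are illustrative, not defaults):
# Illustrative grid layout for a small deployment
# so-manager    - grid management, SOC web interface, rule distribution
# so-search-01  - search node: Elasticsearch/Logstash log storage and indexing
# so-forward-01 - forward node: Suricata/Zeek sensor on a tap or SPAN port
# so-forward-02 - second sensor covering another network segment
# so-heavy-01   - heavy node: combined search + forward for a remote site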
Key Features
# Core SecurityOnion Capabilities
- Full packet capture and network security monitoring
- Intrusion detection and prevention (Suricata)
- Network traffic analysis (Zeek/Bro)
- Log aggregation and analysis (Elastic Stack)
- Host-based intrusion detection (Wazuh)
- Threat hunting and investigation tools
- Case management and incident response
- Distributed deployment architecture
- Web-based management interface (SOC)
Installation and Setup
ISO Installation
# Download SecurityOnion ISO
wget https://github.com/Security-Onion-Solutions/securityonion/releases/latest/download/securityonion-2.3.x-x86_64.iso
# Verify checksum
sha256sum securityonion-2.3.x-x86_64.iso
# Create bootable USB (Linux)
sudo dd if=securityonion-2.3.x-x86_64.iso of=/dev/sdX bs=4M status=progress
sync
# Boot from USB and follow the installation wizard
# Minimum requirements:
# - 16GB RAM (32GB+ recommended)
# - 200GB storage (1TB+ recommended)
# - Dual network interfaces (management + monitoring)
# Post-installation network configuration
sudo so-setup
# Initial setup wizard will configure:
# - Network interfaces
# - Node type (standalone, manager, search, forward, heavy, etc.)
# - User accounts and authentication
# - SSL certificates
# - Service configuration
Distributed Deployment
# Manager Node Setup (first node)
sudo so-setup
# Select: Install
# Select: Manager
# Configure management interface
# Configure monitoring interface(s)
# Set admin credentials
# Configure grid settings
# Search Node Setup
sudo so-setup
# Select: Install
# Select: Search Node
# Enter manager IP address
# Configure network settings
# Join existing grid
# Forward Node Setup
sudo so-setup
# Select: Install
# Select: Forward Node
# Enter manager IP address
# Configure monitoring interfaces
# Set log forwarding destination
# Heavy Node Setup (combined search + forward)
sudo so-setup
# Select: Install
# Select: Heavy Node
# Enter manager IP address
# Configure all interfaces and services
# Verify grid status
sudo so-status
sudo so-grid-status
# Check node connectivity
sudo so-test
Docker-based Installation
# Install Docker and Docker Compose
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Clone SecurityOnion repository
git clone https://github.com/Security-Onion-Solutions/securityonion.git
cd securityonion
# Configure environment
cp .env.example .env
nano .env
# Example .env configuration
SO_MANAGER_IP=192.168.1.100
SO_INTERFACE_MONITOR=eth1
SO_INTERFACE_MANAGEMENT=eth0
SO_ADMIN_USER=admin
SO_ADMIN_PASS=SecurePassword123!
# Deploy with Docker Compose
sudo docker-compose up -d
# Check deployment status
sudo docker-compose ps
sudo docker-compose logs -f
Core Tools and Services
Suricata (IDS/IPS)
# Suricata configuration and management
sudo so-suricata-restart
sudo so-suricata-status
# View Suricata configuration
sudo cat /opt/so/conf/suricata/suricata.yaml
# Update Suricata rules
sudo so-rule-update
# Custom rule management
sudo nano /opt/so/rules/local.rules
# Example custom rules
alert tcp any any -> $HOME_NET 22 (msg:"SSH Connection Attempt"; sid:1000001; rev:1;)
alert http any any -> any any (msg:"Suspicious User Agent"; content:"User-Agent: BadBot"; sid:1000002; rev:1;)
# Test rule syntax
sudo suricata -T -c /opt/so/conf/suricata/suricata.yaml
# Monitor Suricata alerts
sudo tail -f /nsm/suricata/eve.json
# Suricata performance tuning
sudo nano /opt/so/conf/suricata/suricata.yaml
# Adjust:
# - af-packet workers
# - ring-size
# - block-size
# - use-mmap
# Restart Suricata with new configuration
sudo so-suricata-restart
# Check Suricata statistics
sudo suricata-sc -c stats
# Rule management commands
sudo so-rule-update --help
sudo so-rule-update --force
sudo so-rule-update --ruleset=emerging-threats
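To confirm a custom rule actually fires, you can filter eve.json on its SID; the jq expression below assumes the standard eve.json alert schema and the example SID 1000001 from the rule above:
# Check whether custom rule 1000001 has triggered
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert" and .alert.signature_id==1000001)'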
Zeek (Network Analysis)
# Zeek configuration and management
sudo so-zeek-restart
sudo so-zeek-status
# View Zeek configuration
sudo cat /opt/so/conf/zeek/node.cfg
# Zeek log locations
ls -la /nsm/zeek/logs/current/
# Common Zeek logs
tail -f /nsm/zeek/logs/current/conn.log
tail -f /nsm/zeek/logs/current/dns.log
tail -f /nsm/zeek/logs/current/http.log
tail -f /nsm/zeek/logs/current/ssl.log
tail -f /nsm/zeek/logs/current/files.log
# Custom Zeek scripts
sudo nano /opt/so/conf/zeek/local.zeek
# Example custom script
@load base/protocols/http
event http_request(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string) {
    if (/malware/ in original_URI) {
        print fmt("Suspicious HTTP request: %s %s", c$id$orig_h, original_URI);
    }
}
# Deploy Zeek configuration
sudo so-zeek-restart
# Zeek packet analysis
sudo zeek -r /nsm/pcap/file.pcap local.zeek
# Extract files from network traffic
sudo zeek -r traffic.pcap /opt/so/conf/zeek/extract-files.zeek
# Zeek intelligence framework
sudo nano /opt/so/conf/zeek/intel.dat
# Format (tab-separated): indicator<TAB>indicator_type<TAB>meta.source
# The file must begin with a #fields header line:
#fields	indicator	indicator_type	meta.source
192.168.1.100	Intel::ADDR	malicious-ip-list
evil.com	Intel::DOMAIN	suspicious-domains
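Creating intel.dat on its own does nothing; Zeek only consumes files registered in Intel::read_files. A minimal sketch to add to local.zeek, assuming the path used above:
# Register the intel file in /opt/so/conf/zeek/local.zeek
@load frameworks/intel/seen
redef Intel::read_files += { "/opt/so/conf/zeek/intel.dat" };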
# Load intelligence data
sudo so-zeek-restart
# Monitor intelligence matches
tail -f /nsm/zeek/logs/current/intel.log
Wazuh (HIDS)
# Wazuh manager configuration
sudo nano /opt/so/conf/wazuh/ossec.conf
# Deploy Wazuh agent (on monitored systems)
# Download agent
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.x.x-1_amd64.deb
# Install agent
sudo dpkg -i wazuh-agent_4.x.x-1_amd64.deb
# Configure agent
sudo nano /var/ossec/etc/ossec.conf
# Set manager IP:
<client>
  <server>
    <address>192.168.1.100</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
</client>
# Start agent
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent
# Register agent on manager
sudo /var/ossec/bin/manage_agents
# Select option 'A' to add agent
# Enter agent name and IP
# Extract agent key
sudo /var/ossec/bin/manage_agents
# Select option 'E' to extract the key
# Import key on agent
sudo /var/ossec/bin/manage_agents
# Select option 'I' to import the key
# Restart agent
sudo systemctl restart wazuh-agent
# Check agent status
sudo /var/ossec/bin/agent_control -lc
# Custom Wazuh rules
sudo nano /opt/so/conf/wazuh/rules/local_rules.xml
# Example custom rule
<group name="local,">
  <rule id="100001" level="10">
    <if_sid>5715</if_sid>
    <srcip>!192.168.1.0/24</srcip>
    <description>SSH login from external network</description>
    <group>authentication_success,pci_dss_10.2.5,</group>
  </rule>
</group>
# Test rule configuration
sudo /var/ossec/bin/ossec-logtest
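ossec-logtest reads sample log lines from stdin; pasting a line like the one below (hostname and IP are illustrative) walks through decoding and shows whether rule 100001 matches:
# Example input line for ossec-logtest (illustrative values)
Dec 10 01:02:03 server sshd[1234]: Accepted password for admin from 203.0.113.5 port 4242 ssh2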
# Restart Wazuh manager
sudo systemctl restart wazuh-manager
# Monitor Wazuh alerts
sudo tail -f /var/ossec/logs/alerts/alerts.log
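Wazuh also writes alerts as JSON alongside the plain log, which is easier to filter; a sketch assuming the default alerts.json path and standard rule/agent fields:
# Follow high-level alerts only (rule.level >= 10)
sudo tail -f /var/ossec/logs/alerts/alerts.json | jq 'select(.rule.level >= 10) | {time: .timestamp, rule: .rule.description, agent: .agent.name}'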
Elastic Stack (ELK)
# Elasticsearch management
sudo so-elasticsearch-restart
sudo so-elasticsearch-status
# Check Elasticsearch cluster health
curl -X GET "localhôte:9200/_cluster/health?pretty"
# View Elasticsearch indices
curl -X GET "localhôte:9200/_cat/indices?v"
# Elasticsearch configuration
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml
# Logstash management
sudo so-logstash-restart
sudo so-logstash-status
# Logstash configuration
ls -la /opt/so/conf/logstash/conf.d/
# Custom Logstash pipeline
sudo nano /opt/so/conf/logstash/conf.d/custom.conf
# Example Logstash configuration
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_type] == "custom" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "custom-logs-%{+YYYY.MM.dd}"
  }
}
# Test Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /opt/so/conf/logstash/conf.d/custom.conf
# Kibana management
sudo so-kibana-restart
sudo so-kibana-status
# Access Kibana web interface
# https://manager-ip/kibana
# Create custom Kibana dashboard
# 1. Navigate to Kibana > Dashboard
# 2. Create new dashboard
# 3. Add visualizations
# 4. Save dashboard
# Export/Import Kibana objects
# Export
curl -X POST "localhôte:5601/api/saved_objects/_export" \
-H "Content-Type: application/json" \
-H "kbn-xsrf: true" \
-d '\\\\{"type": "dashboard"\\\\}' > dashboards.ndjson
# Import
curl -X POST "localhôte:5601/api/saved_objects/_import" \
-H "kbn-xsrf: true" \
-F file=@dashboards.ndjson
TheHive (Case Management)
# TheHive configuration
sudo nano /opt/so/conf/thehive/application.conf
# Example configuration
play.http.secret.key = "your-secret-key"
db.janusgraph {
  storage.backend = berkeleyje
  storage.directory = /opt/thehive/db
}
# Start TheHive
sudo so-thehive-restart
sudo so-thehive-status
# Access TheHive web interface
# https://manager-ip:9000
# Create organization and users
# 1. Login as admin
# 2. Navigate to Admin > Organizations
# 3. Create organization
# 4. Add users to organization
# API usage examples
THEHIVE_URL="https://localhost:9000"
API_KEY="your-api-key"
# Create case
curl -X POST "$THEHIVE_URL/api/case" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Suspicious Network Activity",
    "description": "Detected unusual traffic patterns",
    "severity": 2,
    "tlp": 2,
    "tags": ["network", "suspicious"]
  }'
# Add observable to case
curl -X POST "$THEHIVE_URL/api/case/{case-id}/artifact" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dataType": "ip",
    "data": "192.168.1.100",
    "message": "Suspicious IP address",
    "tags": ["malicious"]
  }'
# Search cases
curl -X POST "$THEHIVE_URL/api/case/_search" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "_and": [
        {"_field": "status", "_value": "Open"},
        {"_field": "severity", "_gte": 2}
      ]
    }
  }'
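Updating a case follows the same pattern; a hedged sketch against the TheHive 3-style REST API (the PATCH endpoint and field names may differ across versions):
# Resolve a case ({case-id} as above)
curl -X PATCH "$THEHIVE_URL/api/case/{case-id}" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "Resolved",
    "resolutionStatus": "TruePositive",
    "summary": "Confirmed malicious traffic; host contained"
  }'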
Cortex (Analysis Engine)
# Cortex configuration
sudo nano /opt/so/conf/cortex/application.conf
# Example configuration
play.http.secret.key = "cortex-secret-key"
cortex.storage {
  provider = localfs
  localfs.location = /opt/cortex/files
}
# Start Cortex
sudo so-cortex-restart
sudo so-cortex-status
# Access Cortex web interface
# https://manager-ip:9001
# Install analyzers
sudo docker pull cortexneurons/virustotal_3_0
sudo docker pull cortexneurons/shodan_host_1_0
sudo docker pull cortexneurons/abuse_finder_1_0
# Configure analyzers
# 1. Login to Cortex
# 2. Navigate to Organization > Analyzers
# 3. Enable and configure analyzers
# 4. Add API keys for external services
# API usage examples
CORTEX_URL="https://localhost:9001"
API_KEY="your-cortex-api-key"
# Submit analysis job
curl -X POST "$CORTEX_URL/api/analyzer/VirusTotal_GetReport_3_0/run" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": "malicious-hash",
    "dataType": "hash",
    "tlp": 2
  }'
# Get job results
curl -X GET "$CORTEX_URL/api/job/{job-id}" \
  -H "Authorization: Bearer $API_KEY"
# List available analyzers
curl -X GET "$CORTEX_URL/api/analyzer" \
  -H "Authorization: Bearer $API_KEY"
Network Security Monitoring
Packet Capture and Analysis
# Full packet capture configuration
sudo nano /opt/so/conf/stenographer/config
# Example stenographer configuration
{
  "Threads": [
    { "PacketsDirectory": "/nsm/pcap", "MaxDirectoryFiles": 30000, "DiskFreePercentage": 10 }
  ],
  "StenotypePath": "/usr/bin/stenotype",
  "Interface": "eth1",
  "Port": 1234,
  "Host": "127.0.0.1",
  "Flags": [],
  "CertPath": "/opt/so/conf/stenographer/certs"
}
# Start packet capture
sudo so-stenographer-restart
# Query packet capture
sudo stenoread 'host 192.168.1.100' -w output.pcap
# Time-based queries
sudo stenoread 'host 192.168.1.100 and after 2023-01-01T00:00:00Z and before 2023-01-01T23:59:59Z' -w output.pcap
# Protocol-specific queries
sudo stenoread 'tcp and port 80' -w http-traffic.pcap
sudo stenoread 'udp and port 53' -w dns-traffic.pcap
# Advanced packet analysis with tcpdump
sudo tcpdump -i eth1 -w capture.pcap
sudo tcpdump -r capture.pcap 'host 192.168.1.100'
sudo tcpdump -r capture.pcap -A 'port 80'
# Packet analysis with tshark
sudo tshark -i eth1 -w capture.pcap
sudo tshark -r capture.pcap -Y "ip.addr == 192.168.1.100"
sudo tshark -r capture.pcap -Y "http.request.method == GET" -T fields -e http.host -e http.request.uri
# Extract files from packet capture
sudo tcpflow -r capture.pcap -o extracted_files/
sudo foremost -i capture.pcap -o carved_files/
# Network statistics
sudo capinfos capture.pcap
sudo editcap -A '2023-01-01 00:00:00' -B '2023-01-01 23:59:59' input.pcap output.pcap
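For a quick first pass over a capture, tshark's statistics modules summarize conversations and protocols without printing per-packet output:
# Capture summaries (-q suppresses per-packet output, -z selects a statistics module)
sudo tshark -r capture.pcap -q -z conv,ip # IP conversation matrix
sudo tshark -r capture.pcap -q -z endpoints,ip # Per-endpoint packet/byte counts
sudo tshark -r capture.pcap -q -z io,phs # Protocol hierarchy summary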
Traffic Analysis and Hunting
# Zeek-based traffic analysis
# Connection analysis
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h id.resp_p proto duration orig_bytes resp_bytes | sort | uniq -c | sort -nr
# DNS analysis
sudo zcat /nsm/zeek/logs/*/dns.log.gz | zeek-cut query answers | grep -v "^-" | sort | uniq -c | sort -nr
# HTTP analysis
sudo zcat /nsm/zeek/logs/*/http.log.gz | zeek-cut host uri user_agent | grep -E "(exe|zip|rar)" | sort | uniq
# SSL/TLS analysis
sudo zcat /nsm/zeek/logs/*/ssl.log.gz | zeek-cut server_name subject issuer | sort | uniq
# File analysis
sudo zcat /nsm/zeek/logs/*/files.log.gz | zeek-cut mime_type filename md5 | grep -E "(exe|pdf|doc)" | sort | uniq
# Custom Zeek analysis scripts
cat > /tmp/analyze_traffic.zeek << 'EOF'
@load base/protocols/http
@load base/protocols/dns
global suspicious_domains: set[string] = {
    "evil.com",
    "malware.net",
    "phishing.org"
};
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) {
    if (query in suspicious_domains) {
        print fmt("Suspicious DNS query: %s -> %s", c$id$orig_h, query);
    }
}
event http_request(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string) {
    if (/\.(exe|zip|rar)$/ in original_URI) {
        print fmt("Suspicious file download: %s -> %s", c$id$orig_h, original_URI);
    }
}
EOF
# Run analysis on packet capture
sudo zeek -r capture.pcap /tmp/analyze_traffic.zeek
# Threat hunting queries
# Long connections
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut duration id.orig_h id.resp_h | awk '$1 > 3600' | sort -nr
# Large data transfers
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut orig_bytes resp_bytes id.orig_h id.resp_h | awk '$1+$2 > 1000000' | sort -nr
# Unusual ports
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.resp_p proto | sort | uniq -c | sort -nr | head -20
# Beaconing detection (crude: counts connections per source/dest/minute; see the interval-based sketch below)
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h ts | awk '{print $1, $2, strftime("%H:%M", $3)}' | sort | uniq -c | awk '$1 > 10'
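The per-minute count above is a coarse heuristic; real beacons are better identified by near-constant intervals between connections. A minimal interval-variance sketch (script name and thresholds are illustrative):
cat > /tmp/beacon_check.py << 'EOF'
#!/usr/bin/env python3
# Flag src/dst pairs whose connection intervals show very low jitter.
# Expects "orig_h resp_h ts" triples on stdin (e.g. from zeek-cut).
import sys
from collections import defaultdict
from statistics import mean, pstdev

times = defaultdict(list)
for line in sys.stdin:
    parts = line.split()
    if len(parts) != 3:
        continue
    try:
        times[(parts[0], parts[1])].append(float(parts[2]))
    except ValueError:
        continue

for (src, dst), ts_list in times.items():
    if len(ts_list) < 10:  # need enough samples to judge regularity
        continue
    ts_list.sort()
    gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]
    avg = mean(gaps)
    if avg > 0 and pstdev(gaps) / avg < 0.1:  # low jitter => beacon-like
        print(f"possible beacon: {src} -> {dst} interval ~{avg:.1f}s n={len(ts_list)}")
EOF
sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h ts | python3 /tmp/beacon_check.py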
Alert Management and Investigation
# Suricata alert analysis
# Real-time alert monitoring
sudo tail -f /nsm/suricata/eve.json | jq 'select(.event_type=="alert")'
# Alert statistics
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert") | .alert.signature' | sort | uniq -c | sort -nr
# High-priority alerts
sudo cat /nsm/suricata/eve.json | jq 'select(.event_type=="alert" and .alert.severity<=2)'
# Alert correlation with Zeek logs
# Extract IPs from alerts
sudo cat /nsm/suricata/eve.json | jq -r 'select(.event_type=="alert") | "\(.src_ip) \(.dest_ip)"' | sort | uniq > alert_ips.txt
# Find corresponding Zeek connections
while read src_ip dest_ip; do
    sudo zcat /nsm/zeek/logs/*/conn.log.gz | zeek-cut id.orig_h id.resp_h | grep "$src_ip.*$dest_ip"
done < alert_ips.txt
# Custom alert processing script
cat > /usr/local/bin/process_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import sys

def process_alert(alert):
    if alert.get('event_type') == 'alert':
        severity = alert.get('alert', {}).get('severity', 0)
        signature = alert.get('alert', {}).get('signature', '')
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')
        timestamp = alert.get('timestamp', '')
        if severity <= 2:  # High-priority alerts
            print(f"HIGH PRIORITY: {timestamp} - {signature}")
            print(f"  Source: {src_ip} -> Destination: {dest_ip}")
            print(f"  Severity: {severity}")
            print("-" * 50)

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            process_alert(alert)
        except json.JSONDecodeError:
            continue
EOF
chmod +x /usr/local/bin/process_alerts.py
# Process alerts
sudo tail -f /nsm/suricata/eve.json | /usr/local/bin/process_alerts.py
# Alert enrichment with threat intelligence
cat > /usr/local/bin/enrich_alerts.py << 'EOF'
#!/usr/bin/env python3
import json
import sys

def check_virustotal(ip):
    # Placeholder for VirusTotal API integration
    # Replace with an actual API key and implementation
    return {"reputation": "unknown"}

def enrich_alert(alert):
    if alert.get('event_type') == 'alert':
        src_ip = alert.get('src_ip', '')
        dest_ip = alert.get('dest_ip', '')
        # Enrich with threat intelligence
        alert['enrichment'] = {
            'src_intel': check_virustotal(src_ip),
            'dest_intel': check_virustotal(dest_ip)
        }
    return alert

if __name__ == "__main__":
    for line in sys.stdin:
        try:
            alert = json.loads(line.strip())
            print(json.dumps(enrich_alert(alert)))
        except json.JSONDecodeError:
            continue
EOF
chmod +x /usr/local/bin/enrich_alerts.py
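To replace the check_virustotal placeholder with a real lookup, a hedged sketch against the VirusTotal v3 REST API is below; VT_API_KEY is an assumed environment variable you must supply, and error handling is kept minimal:
# Hypothetical drop-in for check_virustotal in enrich_alerts.py
import os
import requests

def check_virustotal(ip):
    api_key = os.environ.get("VT_API_KEY")  # assumed env var
    if not api_key:
        return {"reputation": "unknown"}
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
        headers={"x-apikey": api_key},
        timeout=10,
    )
    if resp.status_code != 200:
        return {"reputation": "unknown"}
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return {"reputation": "malicious" if stats.get("malicious", 0) > 0 else "clean"}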
System Administration
Service Management
# SecurityOnion service management
sudo so-status # Check all services
sudo so-restart # Restart all services
sudo so-stop # Stop all services
sudo so-start # Start all services
# Individual service management
sudo so-elasticsearch-restart
sudo so-logstash-restart
sudo so-kibana-restart
sudo so-suricata-restart
sudo so-zeek-restart
sudo so-wazuh-restart
sudo so-thehive-restart
sudo so-cortex-restart
# Check service logs
sudo so-elasticsearch-logs
sudo so-logstash-logs
sudo so-kibana-logs
# Docker container management
sudo docker ps # List running containers
sudo docker logs container_name # View container logs
sudo docker exec -it container_name /bin/bash # Access container shell
# System resource monitoring
sudo so-top # SecurityOnion-specific top
htop # System resource usage
iotop # I/O monitoring
nethogs # Network usage by process
# Disk space management
df -h # Check disk usage
sudo du -sh /nsm/* # Check NSM data usage
sudo find /nsm -name "*.log.gz" -mtime +30 -delete # Clean old logs
# Log rotation configuration
sudo nano /etc/logrotate.d/securityonion
# Example logrotate configuration
/nsm/*/logs/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslogd
    endscript
}
Configuration Management
# Backup SecurityOnion configuration
sudo so-backup
# Custom backup script
cat > /usr/local/bin/so-custom-backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/opt/so/backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p $BACKUP_DIR
# Backup configurations
cp -r /opt/so/conf $BACKUP_DIR/
cp -r /etc/nsm $BACKUP_DIR/
cp /etc/hostname $BACKUP_DIR/
cp /etc/hosts $BACKUP_DIR/
# Backup Elasticsearch indices list
curl -X GET "localhost:9200/_cat/indices?v" > $BACKUP_DIR/elasticsearch_indices.txt
# Backup Kibana objects
curl -X POST "localhost:5601/api/saved_objects/_export" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{"type": ["dashboard", "visualization", "search"]}' > $BACKUP_DIR/kibana_objects.ndjson
# Create archive
tar -czf $BACKUP_DIR.tar.gz -C /opt/so/backup $(basename $BACKUP_DIR)
rm -rf $BACKUP_DIR
echo "Backup created: $BACKUP_DIR.tar.gz"
EOF
chmod +x /usr/local/bin/so-custom-backup.sh
# Schedule regular backups
echo "0 2 * * * /usr/local/bin/so-custom-backup.sh"|sudo crontab -
# configuration validation
sudo so-test # Test configuration
sudo so-checklist # Security checklist
# Update SecurityOnion
sudo so-update # Update packages
sudo so-upgrade # Upgrade to new version
# Network interface configuration
sudo nano /etc/netplan/01-netcfg.yaml
# Example netplan configuration
network:
  version: 2
  ethernets:
    eth0:            # Management interface
      dhcp4: true
    eth1:            # Monitoring interface
      dhcp4: false
      dhcp6: false
# Apply network configuration
sudo netplan apply
# Firewall configuration
sudo ufw status
sudo ufw allow from 192.168.1.0/24 to any port 443
sudo ufw allow from 192.168.1.0/24 to any port 9000
sudo ufw allow from 192.168.1.0/24 to any port 5601
# SSL certificate management
sudo so-ssl-update # Update SSL certificates
sudo openssl x509 -in /etc/ssl/certs/so.crt -text -noout # View certificate details
Performance Tuning
# Elasticsearch performance tuning
sudo nano /opt/so/conf/elasticsearch/elasticsearch.yml
# Key performance settings
cluster.name: securityonion
node.name: so-node-1
path.data: /nsm/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
# JVM heap size (50% of available RAM, max 32GB)
sudo nano /opt/so/conf/elasticsearch/jvm.options
-Xms16g
-Xmx16g
# Logstash performance tuning
sudo nano /opt/so/conf/logstash/logstash.yml
# Key performance settings
pipeline.workers: 8
pipeline.batch.size: 1000
pipeline.batch.delay: 50
path.queue: /nsm/logstash/queue
queue.type: persisted
queue.max_bytes: 10gb
# Suricata performance tuning
sudo nano /opt/so/conf/suricata/suricata.yaml
# AF_PACKET configuration
af-packet:
  - interface: eth1
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    mmap-locked: yes
    tpacket-v3: yes
    ring-size: 200000
    block-size: 32768
# Threading configuration
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0-1" ]
    - receive-cpu-set:
        cpu: [ "2-5" ]
    - worker-cpu-set:
        cpu: [ "6-15" ]
# Zeek performance tuning
sudo nano /opt/so/conf/zeek/node.cfg
# Worker configuration
[worker-1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=8
pin_cpus=2,3,4,5,6,7,8,9
# System-level optimizations
# Increase file descriptor limits
echo "* soft nofile 65536"|sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536"|sudo tee -a /etc/security/limits.conf
# Optimize network buffers
echo "net.core.rmem_max = 134217728"|sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728"|sudo tee -a /etc/sysctl.conf
echo "net.core.netdev_max_backlog = 5000"|sudo tee -a /etc/sysctl.conf
# Apply sysctl changes
sudo sysctl -p
# Disk I/O optimization
# Use deadline scheduler for SSDs
echo deadline | sudo tee /sys/block/sda/queue/scheduler
# Mount options for performance
sudo nano /etc/fstab
# Add noatime,nodiratime options to reduce disk writes
/dev/sda1 /nsm ext4 defaults,noatime,nodiratime 0 2
Automation and Integration
API Integration
#!/usr/bin/env python3
# SecurityOnion API integration examples
import requests
from datetime import datetime, timedelta

class SecurityOnionAPI:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.session = requests.Session()
        self.login(username, password)

    def login(self, username, password):
        """Authenticate with SecurityOnion"""
        login_data = {
            'username': username,
            'password': password
        }
        response = self.session.post(f"{self.base_url}/auth/login", json=login_data)
        if response.status_code == 200:
            print("Authentication successful")
        else:
            raise Exception("Authentication failed")

    def search_alerts(self, query, start_time=None, end_time=None):
        """Search for alerts in Elasticsearch"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=24)
        if not end_time:
            end_time = datetime.now()
        search_query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": "alert"}},
                        {"query_string": {"query": query}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }
        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=search_query
        )
        return response.json()

    def get_zeek_logs(self, log_type, start_time=None, end_time=None):
        """Retrieve Zeek logs"""
        if not start_time:
            start_time = datetime.now() - timedelta(hours=1)
        if not end_time:
            end_time = datetime.now()
        query = {
            "query": {
                "bool": {
                    "must": [
                        {"match": {"event_type": log_type}},
                        {"range": {
                            "@timestamp": {
                                "gte": start_time.isoformat(),
                                "lte": end_time.isoformat()
                            }
                        }}
                    ]
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 1000
        }
        response = self.session.post(
            f"{self.base_url}/elasticsearch/_search",
            json=query
        )
        return response.json()

    def create_case(self, title, description, severity=2):
        """Create case in TheHive"""
        case_data = {
            "title": title,
            "description": description,
            "severity": severity,
            "tlp": 2,
            "tags": ["automated"]
        }
        response = self.session.post(
            f"{self.base_url}/thehive/api/case",
            json=case_data
        )
        return response.json()

# Usage example
if __name__ == "__main__":
    so_api = SecurityOnionAPI("https://so-manager", "admin", "password")
    # Search for high-severity alerts
    alerts = so_api.search_alerts("alert.severity:[1 TO 2]")
    print(f"Found {len(alerts['hits']['hits'])} high-severity alerts")
    # Get recent DNS logs
    dns_logs = so_api.get_zeek_logs("dns")
    print(f"Found {len(dns_logs['hits']['hits'])} DNS events")
    # Create case for investigation
    case = so_api.create_case(
        "Automated Alert Investigation",
        "High-severity alerts detected requiring investigation"
    )
    print(f"Created case: {case.get('id')}")
Automated Response Scripts
#!/bin/bash
# Automated incident response script
LOG_FILE="/var/log/so-automated-response.log"
ALERT_THRESHOLD=10
TIME_WINDOW=300 # 5 minutes

log_message() {
    echo "$(date): $1" >> $LOG_FILE
}

check_alert_volume() {
    RECENT_ALERTS=$(sudo tail -n 1000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .timestamp' | \
        awk -v threshold=$(date -d "-${TIME_WINDOW} seconds" +%s) \
        'BEGIN{count=0} {
            gsub(/[TZ]/, " ", $1);
            if (mktime(gensub(/-/, " ", "g", $1)) > threshold) count++
        } END{print count}')
    if [ "$RECENT_ALERTS" -gt "$ALERT_THRESHOLD" ]; then
        log_message "High alert volume detected: $RECENT_ALERTS alerts in last $TIME_WINDOW seconds"
        return 0
    else
        log_message "Normal alert volume: $RECENT_ALERTS alerts"
        return 1
    fi
}

block_suspicious_ip() {
    local IP=$1
    log_message "Blocking suspicious IP: $IP"
    # Add to firewall
    sudo iptables -I INPUT -s $IP -j DROP
    # Add to Suricata block list
    echo "$IP" | sudo tee -a /opt/so/rules/block.rules
    # Restart Suricata to apply new rules
    sudo so-suricata-restart
    log_message "IP $IP blocked successfully"
}

analyze_top_alerting_ips() {
    TOP_IPS=$(sudo tail -n 10000 /nsm/suricata/eve.json | \
        jq -r 'select(.event_type=="alert") | .src_ip' | \
        sort | uniq -c | sort -nr | head -5 | awk '$1 > 5 {print $2}')
    for IP in $TOP_IPS; do
        log_message "Analyzing suspicious IP: $IP"
        # Check if IP is external (skip RFC 1918 ranges)
        if [[ ! $IP =~ ^192\.168\. ]] && [[ ! $IP =~ ^10\. ]] && [[ ! $IP =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]; then
            block_suspicious_ip $IP
        fi
    done
}

send_notification() {
    local MESSAGE=$1
    log_message "Sending notification: $MESSAGE"
    # Send email notification (configure sendmail/postfix)
    echo "$MESSAGE" | mail -s "SecurityOnion Alert" admin@company.com
    # Send Slack notification (configure webhook)
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"$MESSAGE\"}" \
        $SLACK_WEBHOOK_URL
}

main() {
    log_message "Starting automated response check"
    if check_alert_volume; then
        analyze_top_alerting_ips
        send_notification "High alert volume detected - automated response activated"
    fi
    log_message "Automated response check completed"
}

# Run main function
main
Integration with SOAR Platforms
#!/usr/bin/env python3
# Integration with external SOAR platforms
import requests
from datetime import datetime

class SOARIntegration:
    def __init__(self, soar_url, api_key):
        self.soar_url = soar_url
        self.api_key = api_key
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }

    def create_incident(self, title, description, severity, artifacts):
        """Create incident in SOAR platform"""
        incident_data = {
            'name': title,
            'description': description,
            'severity': severity,
            'artifacts': artifacts,
            'source': 'SecurityOnion',
            'created_time': datetime.now().isoformat()
        }
        response = requests.post(
            f"{self.soar_url}/api/incidents",
            headers=self.headers,
            json=incident_data
        )
        return response.json()

    def add_artifact(self, incident_id, artifact_type, value, description):
        """Add artifact to existing incident"""
        artifact_data = {
            'type': artifact_type,
            'value': value,
            'description': description
        }
        response = requests.post(
            f"{self.soar_url}/api/incidents/{incident_id}/artifacts",
            headers=self.headers,
            json=artifact_data
        )
        return response.json()

    def run_playbook(self, incident_id, playbook_name):
        """Execute playbook for incident"""
        playbook_data = {
            'playbook': playbook_name,
            'incident_id': incident_id
        }
        response = requests.post(
            f"{self.soar_url}/api/playbooks/run",
            headers=self.headers,
            json=playbook_data
        )
        return response.json()

# Example usage
def process_security_alert(alert_data):
    soar = SOARIntegration("https://soar-platform", "api-key")
    # Extract relevant information
    title = f"Security Alert: {alert_data.get('alert', {}).get('signature', 'Unknown')}"
    description = f"Alert detected at {alert_data.get('timestamp')}"
    severity = alert_data.get('alert', {}).get('severity', 3)
    # Create artifacts
    artifacts = []
    if alert_data.get('src_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['src_ip'],
            'description': 'Source IP address'
        })
    if alert_data.get('dest_ip'):
        artifacts.append({
            'type': 'ip',
            'value': alert_data['dest_ip'],
            'description': 'Destination IP address'
        })
    # Create incident
    incident = soar.create_incident(title, description, severity, artifacts)
    # Run appropriate playbook based on alert type
    if 'malware' in title.lower():
        soar.run_playbook(incident['id'], 'malware-investigation')
    elif 'phishing' in title.lower():
        soar.run_playbook(incident['id'], 'phishing-response')
    else:
        soar.run_playbook(incident['id'], 'generic-investigation')
    return incident