Sguil Cheat Sheet
Overview
Sguil (pronounced "sgweel") is a comprehensive network security monitoring (NSM) platform that provides real-time analysis and correlation of network security events. Developed by Bamm Visscher, Sguil serves as a centralized console for security analysts to monitor, investigate, and respond to network security incidents across distributed sensor networks. The platform integrates multiple security tools, including Snort for intrusion detection, Barnyard2 for alert processing, and various network monitoring utilities, to create a unified security operations center (SOC) environment. Sguil's strength lies in its ability to provide context-rich security event analysis by correlating alerts with full packet captures, session data, and historical information.
The core architecture of Sguil consists of three main components: sensors that collect network data and generate alerts, a central server that aggregates and correlates security events, and client interfaces that provide analysts with powerful investigation capabilities. Sensors typically run Snort IDS, tcpdump for packet capture, and various log collection agents, while the central server maintains a MySQL database for event storage and correlation. The client interface provides real-time alert monitoring, packet analysis capabilities, and collaborative features that enable security teams to efficiently triage and investigate security incidents.
Sguil's comprehensive approach to network security monitoring makes it particularly valuable for organizations that need to maintain detailed audit trails and perform forensic analysis of security incidents. The platform supports distributed deployments across multiple network segments, enabling organizations to monitor complex network infrastructures while maintaining centralized visibility and control. With its open-source foundation and extensive customization capabilities, Sguil has become a cornerstone technology for many security operations centers and incident response teams worldwide.
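The components communicate over a small set of TCP ports; in this sheet the server listens on 7734 for analyst clients and 7735 for sensor agents. Before digging into configuration, a quick connectivity check can confirm a server is reachable from both sides. This is a minimal sketch assuming those default ports and the placeholder server address used later in this sheet:
# On the server: confirm sguild is listening on the client and sensor ports
ss -tlnp | grep -E ':7734|:7735'
# From an analyst workstation: check the client port (placeholder address)
nc -zv 192.168.1.100 7734
# From a sensor: check the sensor agent port
nc -zv 192.168.1.100 7735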
Installation
Ubuntu/Debian Installation
Installing Sguil on Ubuntu/Debian systems:
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install required dependencies
sudo apt install -y mysql-server mysql-client tcl tk tcl-dev tk-dev \
tclx8.4 tcllib mysqltcl wireshark tshark snort barnyard2 \
apache2 php php-mysql libmysqlclient-dev build-essential \
git wget curl
# Install additional Tcl packages
sudo apt install -y tcl-tls tcl-trf
# Download Sguil
cd /opt
sudo git clone https://github.com/bammv/sguil.git
sudo chown -R $USER:$USER sguil
# Create Sguil user
sudo useradd -r -s /bin/false sguil
sudo usermod -a -G sguil $USER
# Setup directory structure
sudo mkdir -p /var/log/sguil
sudo mkdir -p /var/lib/sguil
sudo mkdir -p /etc/sguil
sudo chown -R sguil:sguil /var/log/sguil /var/lib/sguil
sudo chmod 755 /var/log/sguil /var/lib/sguil
# Install Sguil components
cd /opt/sguil
sudo cp -r server /opt/sguil-server
sudo cp -r client /opt/sguil-client
sudo cp -r sensor /opt/sguil-sensor
CentOS/RHEL Installation
# Install EPEL repository
sudo yum install -y epel-release
# Install required packages
sudo yum groupinstall -y "Development Tools"
sudo yum install -y mysql-server mysql mysql-devel tcl tk tcl-devel \
tk-devel wireshark snort barnyard2 httpd php php-mysql \
git wget curl
# Install additional Tcl packages
sudo yum install -y tcl-tls
# Start and enable MySQL
sudo systemctl start mysqld
sudo systemctl enable mysqld
# Secure the MySQL installation
sudo mysql_secure_installation
# Download and install Sguil
cd /opt
sudo git clone https://github.com/bammv/sguil.git
sudo chown -R $USER:$USER sguil
# Create system user
sudo useradd -r -s /bin/false sguil
# Setup directories
sudo mkdir -p /var/log/sguil /var/lib/sguil /etc/sguil
sudo chown -R sguil:sguil /var/log/sguil /var/lib/sguil
Docker Installation
Running Sguil in Docker containers:
# Create Docker network for Sguil
docker network create sguil-network
# Create MySQL container for Sguil database
docker run -d --name sguil-mysql \
--network sguil-network \
-e MYSQL_ROOT_PASSWORD=sguilpassword \
-e MYSQL_DATABASE=sguildb \
-e MYSQL_USER=sguil \
-e MYSQL_PASSWORD=sguilpass \
-v sguil-mysql-data:/var/lib/mysql \
mysql:5.7
# Create Sguil server container
cat > Dockerfile.sguil-server << 'EOF'
FROM ubuntu:20.04
# Install dependencies
RUN apt-get update && apt-get install -y \
tcl tk tcl-dev tk-dev tclx8.4 tcllib mysqltcl \
mysql-client git && \
rm -rf /var/lib/apt/lists/*
# Copy Sguil server
COPY server /opt/sguil-server
WORKDIR /opt/sguil-server
# Create sguil user
RUN useradd -r -s /bin/false sguil
# Setup directories
RUN mkdir -p /var/log/sguil /var/lib/sguil && \
chown -R sguil:sguil /var/log/sguil /var/lib/sguil
EXPOSE 7734 7735
CMD ["./sguild"]
EOF
# Build and run Sguil server
docker build -f Dockerfile.sguil-server -t sguil-server .
docker run -d --name sguil-server \
--network sguil-network \
-p 7734:7734 -p 7735:7735 \
-v sguil-logs:/var/log/sguil \
sguil-server
# Create Sguil sensor container
cat > Dockerfile.sguil-sensor << 'EOF'
FROM ubuntu:20.04
# Install dependencies
RUN apt-get update && apt-get install -y \
tcl tk snort barnyard2 tcpdump wireshark-common \
git && rm -rf /var/lib/apt/lists/*
# Copy Sguil sensor
COPY sensor /opt/sguil-sensor
WORKDIR /opt/sguil-sensor
# Create sguil user
RUN useradd -r -s /bin/false sguil
EXPOSE 7736
CMD ["./sensor_agent.tcl"]
EOF
# Build and run Sguil sensor
docker build -f Dockerfile.sguil-sensor -t sguil-sensor .
docker run -d --name sguil-sensor \
--network sguil-network \
--cap-add=NET_ADMIN \
--cap-add=NET_RAW \
-v /var/log/snort:/var/log/snort \
sguil-sensor
Source Installation
# Download latest Sguil source
cd /tmp
wget https://github.com/bammv/sguil/archive/master.tar.gz
tar -xzf master.tar.gz
cd sguil-master
# Install server components
sudo mkdir -p /opt/sguil-server
sudo cp -r server/* /opt/sguil-server/
sudo chown -R sguil:sguil /opt/sguil-server
# Install client components
sudo mkdir -p /opt/sguil-client
sudo cp -r client/* /opt/sguil-client/
sudo chown -R $USER:$USER /opt/sguil-client
# Install sensor components
sudo mkdir -p /opt/sguil-sensor
sudo cp -r sensor/* /opt/sguil-sensor/
sudo chown -R sguil:sguil /opt/sguil-sensor
# Make scripts executable
sudo chmod +x /opt/sguil-server/sguild
sudo chmod +x /opt/sguil-client/sguil.tk
sudo chmod +x /opt/sguil-sensor/sensor_agent.tcl
# Create symbolic links
sudo ln -s /opt/sguil-server/sguild /usr/local/bin/sguild
sudo ln -s /opt/sguil-client/sguil.tk /usr/local/bin/sguil
sudo ln -s /opt/sguil-sensor/sensor_agent.tcl /usr/local/bin/sensor_agent
Basic Usage
Database Setup
Setting up the Sguil MySQL database:
# Connect to MySQL as root
mysql -u root -p
# Create Sguil database and user
CREATE DATABASE sguildb;
CREATE USER 'sguil'@'localhost' IDENTIFIED BY 'sguilpassword';
GRANT ALL PRIVILEGES ON sguildb.* TO 'sguil'@'localhost';
FLUSH PRIVILEGES;
EXIT;
# Import the Sguil database schema
cd /opt/sguil-server
mysql -u sguil -p sguildb < lib/sql_scripts/create_sguildb.sql
# Verify database creation
mysql -u sguil -p sguildb -e "SHOW TABLES;"
# Create additional indexes for performance
mysql -u sguil -p sguildb << 'EOF'
CREATE INDEX event_timestamp_idx ON event (timestamp);
CREATE INDEX event_src_ip_idx ON event (src_ip);
CREATE INDEX event_dst_ip_idx ON event (dst_ip);
CREATE INDEX event_signature_idx ON event (signature);
EOF
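Once sensors begin reporting, a quick sanity query against the schema just imported shows whether events are arriving and which sensor (sid) they belong to; this is a hedged example using only columns from the stock event table:
# Count events per sensor ID to confirm data is flowing
mysql -u sguil -p sguildb -e "SELECT sid, COUNT(*) AS events FROM event GROUP BY sid;"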
Server Configuration
Configuring the Sguil server:
# Create server configuration
cat > /etc/sguil/sguild.conf << 'EOF'
# Sguil Server Configuration
# Database configuration
set DBHOST localhost
set DBPORT 3306
set DBNAME sguildb
set DBUSER sguil
set DBPASS sguilpassword
# Server configuration
set SERVER_HOST 0.0.0.0
set SERVER_PORT 7734
set SENSOR_PORT 7735
# Logging configuration
set DEBUG 1
set DAEMON 0
set MAX_DBCONNECTIONS 10
# File paths
set TMP_DIR /tmp
set LOG_DIR /var/log/sguil
set ARCHIVE_DIR /var/lib/sguil/archive
# Email configuration
set SMTP_SERVER localhost
set FROM_EMAIL sguil@localhost
# Sensor configuration
set SENSOR_TIMEOUT 300
set MAX_SENSORS 50
# Event processing
set MAX_EVENTS_PER_QUERY 1000
set EVENT_CACHE_SIZE 10000
# Auto-categorization rules
set AUTO_CAT_RULES /etc/sguil/autocat.conf
EOF
# Create auto-categorization rules
cat > /etc/sguil/autocat.conf << 'EOF'
# Auto-categorization rules for Sguil
# Format: signature_id | category | comment
# DNS events
1:2100001 | Cat V | DNS Query
1:2100002 | Cat V | DNS Response
# HTTP events
1:2100010 | Cat IV | HTTP Traffic
1:2100011 | Cat III | Suspicious HTTP
# SSH events
1:2100020 | Cat IV | SSH Traffic
1:2100021 | Cat II | SSH Brute Force
# Malware events
1:2100030 | Cat I | Malware Detected
1:2100031 | Cat I | Trojan Activity
EOF
# Set proper permissions
sudo chown sguil:sguil /etc/sguil/sguild.conf
sudo chmod 600 /etc/sguil/sguild.conf
Starting Sguil Server
Starting and managing the Sguil server:
# Start Sguil server manually
cd /opt/sguil-server
sudo -u sguil ./sguild -c /etc/sguil/sguild.conf
# Start in daemon mode
sudo -u sguil ./sguild -c /etc/sguil/sguild.conf -D
# Create systemd service unit
sudo tee /etc/systemd/system/sguild.service > /dev/null << 'EOF'
[Unit]
Description=Sguil Server
After=network.target mysql.service
[Service]
Type=simple
User=sguil
Group=sguil
WorkingDirectory=/opt/sguil-server
ExecStart=/opt/sguil-server/sguild -c /etc/sguil/sguild.conf -D
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable sguild
sudo systemctl start sguild
# Check service status
sudo systemctl status sguild
# View server logs
sudo journalctl -u sguild -f
Sensor Configuration
Configuring Sguil sensors:
# Create sensor configuration
cat > /etc/sguil/sensor_agent.conf << 'EOF'
# Sguil Sensor Configuration
# Server connection
set SERVER_HOST 192.168.1.100
set SERVER_PORT 7735
set HOSTNAME sensor01
set NET_GROUP Internal
# Sensor configuration
set INTERFACE eth0
set SENSOR_ID 1
set ENCODING ascii
# Snort configuration
set SNORT_PERF_FILE /var/log/snort/snort.stats
set WATCH_DIR /var/log/snort
# Packet capture
set PCAP_DIR /var/log/sguil/pcap
set MAX_PCAP_SIZE 100000000
set PCAP_RING_BUFFER 1
# Barnyard2 configuration
set BY_PORT 7736
set BY_HOST localhost
# File monitoring
set PORTSCAN_DIR /var/log/sguil/portscan
set SANCP_DIR /var/log/sguil/sancp
# Logging
set DEBUG 1
set LOG_DIR /var/log/sguil
EOF
# Create sensor startup script
cat > /opt/sguil-sensor/start_sensor.sh << 'EOF'
#!/bin/bash
# Sguil Sensor Startup Script
SENSOR_DIR="/opt/sguil-sensor"
CONFIG_FILE="/etc/sguil/sensor_agent.conf"
LOG_DIR="/var/log/sguil"
PCAP_DIR="/var/log/sguil/pcap"
# Create directories
mkdir -p $LOG_DIR $PCAP_DIR
# Start packet capture
tcpdump -i eth0 -s 1514 -w $PCAP_DIR/sensor01.pcap &
TCPDUMP_PID=$!
# Start Snort
snort -D -i eth0 -c /etc/snort/snort.conf -l /var/log/snort
# Start Barnyard2
barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo &
BARNYARD_PID=$!
# Clean up background processes when the script exits
trap "kill $TCPDUMP_PID $BARNYARD_PID" EXIT
# Start sensor agent (runs in the foreground)
cd $SENSOR_DIR
./sensor_agent.tcl -c $CONFIG_FILE
EOF
chmod +x /opt/sguil-sensor/start_sensor.sh
# Start sensor
sudo /opt/sguil-sensor/start_sensor.sh
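After the startup script launches, it is worth confirming that the agent actually registered with the server. A hedged check, using the systemd unit created earlier and the sensor hostname from the configuration above:
# On the Sguil server: look for the sensor (hostname sensor01) registering
sudo journalctl -u sguild --since "10 minutes ago" | grep -i sensor01
# On the sensor: confirm the agent process and its connection to the server port
ps aux | grep '[s]ensor_agent.tcl'
ss -tnp | grep ':7735'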
Advanced Features
Custom Rules and Signatures
Creating custom Snort rules for Sguil:
# Create custom rules directory
sudo mkdir -p /etc/snort/rules/custom
# Create custom rule file
cat > /etc/snort/rules/custom/local.rules << 'EOF'
# Custom Sguil Rules
# Detect suspicious DNS queries
alert udp any any -> any 53 (msg:"CUSTOM DNS Query to suspicious domain"; content:"|01 00 00 01|"; offset:2; depth:4; content:"malware"; nocase; sid:2100001; rev:1; classtype:trojan-activity;)
# Detect HTTP POST to suspicious URLs
alert tcp any any -> any 80 (msg:"CUSTOM Suspicious HTTP POST"; content:"POST"; http_method; content:"upload"; http_uri; nocase; sid:2100002; rev:1; classtype:web-application-attack;)
# Detect SSH brute force attempts
alert tcp any any -> any 22 (msg:"CUSTOM SSH Brute Force Attempt"; flags:S; threshold:type both, track by_src, count 10, seconds 60; sid:2100003; rev:1; classtype:attempted-recon;)
# Detect large file downloads
alert tcp any 80 -> any any (msg:"CUSTOM Large File Download"; flow:established,from_server; dsize:>1000000; sid:2100004; rev:1; classtype:policy-violation;)
# Detect IRC traffic
alert tcp any any -> any 6667 (msg:"CUSTOM IRC Traffic Detected"; content:"NICK"; offset:0; depth:4; sid:2100005; rev:1; classtype:policy-violation;)
EOF
# Update Snort configuration to include custom rules
echo 'include $RULE_PATH/custom/local.rules' >> /etc/snort/snort.conf
# Restart Snort to load new rules
sudo systemctl restart snort
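Before relying on new rules in production, it can help to validate the configuration and replay a known capture through Snort to confirm the custom SIDs fire. A minimal sketch, assuming a sample capture at /tmp/sample.pcap (hypothetical path):
# Validate the configuration without starting detection
sudo snort -T -c /etc/snort/snort.conf
# Replay a capture and print alerts to the console; look for SIDs 2100001-2100005
sudo snort -q -A console -c /etc/snort/snort.conf -r /tmp/sample.pcap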
Event Correlation Scripts
Creating event correlation and analysis scripts:
#!/usr/bin/env python3
# Sguil Event Correlation Script
import mysql.connector
import json
import time
from datetime import datetime, timedelta
import logging
class SguilCorrelator:
    def __init__(self, db_config):
        self.db_config = db_config
        self.setup_logging()
    def setup_logging(self):
        """Setup logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('/var/log/sguil/correlator.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)
    def connect_database(self):
        """Connect to Sguil database"""
        try:
            connection = mysql.connector.connect(**self.db_config)
            return connection
        except mysql.connector.Error as e:
            self.logger.error(f"Database connection failed: {e}")
            return None
    def get_recent_events(self, hours=1):
        """Get recent events from Sguil database"""
        connection = self.connect_database()
        if not connection:
            return []
        try:
            cursor = connection.cursor(dictionary=True)
            # Calculate time threshold
            time_threshold = datetime.now() - timedelta(hours=hours)
            query = """
                SELECT
                    sid, cid, signature, signature_gen, signature_id,
                    src_ip, src_port, dst_ip, dst_port, ip_proto,
                    timestamp, status
                FROM event
                WHERE timestamp >= %s
                ORDER BY timestamp DESC
            """
            cursor.execute(query, (time_threshold,))
            events = cursor.fetchall()
            cursor.close()
            connection.close()
            return events
        except mysql.connector.Error as e:
            self.logger.error(f"Query failed: {e}")
            return []
    def correlate_brute_force(self, events):
        """Correlate brute force attacks"""
        brute_force_attacks = {}
        for event in events:
            # Look for SSH brute force patterns
            if 'ssh' in event['signature'].lower() and 'brute' in event['signature'].lower():
                src_ip = event['src_ip']
                dst_ip = event['dst_ip']
                key = f"{src_ip}->{dst_ip}"
                if key not in brute_force_attacks:
                    brute_force_attacks[key] = {
                        'src_ip': src_ip,
                        'dst_ip': dst_ip,
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }
                brute_force_attacks[key]['count'] += 1
                brute_force_attacks[key]['last_seen'] = event['timestamp']
                brute_force_attacks[key]['events'].append(event)
        # Filter significant attacks
        significant_attacks = {k: v for k, v in brute_force_attacks.items() if v['count'] >= 5}
        return significant_attacks
    def correlate_port_scans(self, events):
        """Correlate port scanning activities"""
        port_scans = {}
        for event in events:
            # Look for port scan indicators
            if any(keyword in event['signature'].lower() for keyword in ['scan', 'probe', 'recon']):
                src_ip = event['src_ip']
                if src_ip not in port_scans:
                    port_scans[src_ip] = {
                        'src_ip': src_ip,
                        'target_ports': set(),
                        'target_hosts': set(),
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }
                port_scans[src_ip]['target_ports'].add(event['dst_port'])
                port_scans[src_ip]['target_hosts'].add(event['dst_ip'])
                port_scans[src_ip]['count'] += 1
                port_scans[src_ip]['last_seen'] = event['timestamp']
                port_scans[src_ip]['events'].append(event)
        # Convert sets to lists for JSON serialization
        for scan in port_scans.values():
            scan['target_ports'] = list(scan['target_ports'])
            scan['target_hosts'] = list(scan['target_hosts'])
        # Filter significant scans
        significant_scans = {k: v for k, v in port_scans.items()
                             if len(v['target_ports']) >= 5 or len(v['target_hosts']) >= 3}
        return significant_scans
    def correlate_malware_activity(self, events):
        """Correlate malware-related activities"""
        malware_activities = {}
        malware_keywords = ['trojan', 'malware', 'backdoor', 'botnet', 'c2', 'command']
        for event in events:
            signature_lower = event['signature'].lower()
            if any(keyword in signature_lower for keyword in malware_keywords):
                src_ip = event['src_ip']
                if src_ip not in malware_activities:
                    malware_activities[src_ip] = {
                        'src_ip': src_ip,
                        'malware_types': set(),
                        'target_hosts': set(),
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }
                # Extract malware type
                for keyword in malware_keywords:
                    if keyword in signature_lower:
                        malware_activities[src_ip]['malware_types'].add(keyword)
                malware_activities[src_ip]['target_hosts'].add(event['dst_ip'])
                malware_activities[src_ip]['count'] += 1
                malware_activities[src_ip]['last_seen'] = event['timestamp']
                malware_activities[src_ip]['events'].append(event)
        # Convert sets to lists
        for activity in malware_activities.values():
            activity['malware_types'] = list(activity['malware_types'])
            activity['target_hosts'] = list(activity['target_hosts'])
        return malware_activities
    def generate_correlation_report(self, correlations):
        """Generate correlation report"""
        report = {
            'timestamp': datetime.now().isoformat(),
            'summary': {
                'brute_force_attacks': len(correlations.get('brute_force', {})),
                'port_scans': len(correlations.get('port_scans', {})),
                'malware_activities': len(correlations.get('malware', {}))
            },
            'correlations': correlations
        }
        # Save report
        report_file = f"/var/log/sguil/correlation-{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
        with open(report_file, 'w') as f:
            json.dump(report, f, indent=2, default=str)
        self.logger.info(f"Correlation report generated: {report_file}")
        return report
    def run_correlation(self, hours=1):
        """Run complete correlation analysis"""
        self.logger.info("Starting event correlation analysis")
        # Get recent events
        events = self.get_recent_events(hours)
        self.logger.info(f"Analyzing {len(events)} events from last {hours} hour(s)")
        if not events:
            self.logger.info("No events to correlate")
            return None
        # Perform correlations
        correlations = {
            'brute_force': self.correlate_brute_force(events),
            'port_scans': self.correlate_port_scans(events),
            'malware': self.correlate_malware_activity(events)
        }
        # Generate report
        report = self.generate_correlation_report(correlations)
        # Log summary
        summary = report['summary']
        self.logger.info(f"Correlation complete: {summary['brute_force_attacks']} brute force, "
                         f"{summary['port_scans']} port scans, {summary['malware_activities']} malware activities")
        return report
# Usage
if __name__ == "__main__":
    db_config = {
        'host': 'localhost',
        'database': 'sguildb',
        'user': 'sguil',
        'password': 'sguilpassword'
    }
    correlator = SguilCorrelator(db_config)
    report = correlator.run_correlation(hours=24)
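The correlator is meant to run periodically rather than continuously. One way to schedule it, assuming the script is saved as /opt/sguil-server/scripts/sguil_correlator.py (a hypothetical path), is a cron entry:
# Run the correlation analysis hourly and append output to the correlator log
echo '0 * * * * root /usr/bin/python3 /opt/sguil-server/scripts/sguil_correlator.py >> /var/log/sguil/correlator.log 2>&1' | sudo tee /etc/cron.d/sguil-correlator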
Automated Response Scripts
Creating automated incident response scripts:
#!/bin/bash
# Sguil Automated Response Script
# Configuration
SGUIL_DB_HOST="localhost"
SGUIL_DB_USER="sguil"
SGUIL_DB_PASS="sguilpassword"
SGUIL_DB_NAME="sguildb"
RESPONSE_LOG="/var/log/sguil/automated_response.log"
BLOCK_LIST="/etc/sguil/blocked_ips.txt"
# Logging function
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$RESPONSE_LOG"
}
# Block IP address using iptables
block_ip() {
    local ip="$1"
    local reason="$2"
    # Check if IP is already blocked
    if iptables -L INPUT -n | grep -q "$ip"; then
        log_message "IP $ip already blocked"
        return 0
    fi
    # Add iptables rule
    iptables -I INPUT -s "$ip" -j DROP
    if [ $? -eq 0 ]; then
        log_message "BLOCKED IP: $ip - Reason: $reason"
        echo "$ip" >> "$BLOCK_LIST"
        # Send notification
        send_notification "IP Blocked" "Automatically blocked IP $ip due to: $reason"
        return 0
    else
        log_message "FAILED to block IP: $ip"
        return 1
    fi
}
# Unblock IP address
unblock_ip() {
    local ip="$1"
    # Remove iptables rule
    iptables -D INPUT -s "$ip" -j DROP 2>/dev/null
    if [ $? -eq 0 ]; then
        log_message "UNBLOCKED IP: $ip"
        # Remove from block list
        sed -i "/$ip/d" "$BLOCK_LIST"
        return 0
    else
        log_message "FAILED to unblock IP: $ip (may not have been blocked)"
        return 1
    fi
}
# Send notification
send_notification() {
    local subject="$1"
    local message="$2"
    # Send email notification (if mail is configured)
    if command -v mail >/dev/null 2>&1; then
        echo "$message" | mail -s "Sguil Alert: $subject" security@company.com
    fi
    # Send to syslog
    logger -t sguil-response "$subject: $message"
}
# Check for brute force attacks
check_brute_force() {
    log_message "Checking for brute force attacks..."
    # Query database for recent SSH brute force events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, COUNT(*) as count
        FROM event
        WHERE signature LIKE '%SSH%brute%'
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip
        HAVING count >= 10
    " | while read ip count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "SSH brute force attack ($count attempts in 1 hour)"
        fi
    done
}
# Check for port scanning
check_port_scans() {
    log_message "Checking for port scanning activities..."
    # Query database for port scan events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, COUNT(DISTINCT dst_port) as port_count, COUNT(*) as event_count
        FROM event
        WHERE (signature LIKE '%scan%' OR signature LIKE '%probe%')
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip
        HAVING port_count >= 20 OR event_count >= 50
    " | while read ip port_count event_count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "Port scanning activity ($port_count ports, $event_count events)"
        fi
    done
}
# Check for malware activity
check_malware() {
    log_message "Checking for malware activities..."
    # Query database for malware-related events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, signature, COUNT(*) as count
        FROM event
        WHERE (signature LIKE '%trojan%' OR signature LIKE '%malware%' OR signature LIKE '%backdoor%')
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip, signature
        HAVING count >= 5
    " | while read ip signature count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "Malware activity: $signature ($count events)"
        fi
    done
}
# Check for high-priority alerts
check_high_priority() {
    log_message "Checking for high-priority alerts..."
    # Query database for high-priority events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, dst_ip, signature, COUNT(*) as count
        FROM event
        WHERE signature_gen = 1 AND signature_id IN (1, 2, 3) -- High priority signature IDs
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 30 MINUTE)
        GROUP BY src_ip, dst_ip, signature
    " | while read src_ip dst_ip signature count; do
        if [ -n "$src_ip" ]; then
            # Send immediate notification for high-priority events
            send_notification "High Priority Alert" "Source: $src_ip, Target: $dst_ip, Signature: $signature, Count: $count"
            # Consider blocking if multiple high-priority events
            if [ "$count" -ge 3 ]; then
                block_ip "$src_ip" "Multiple high-priority alerts: $signature"
            fi
        fi
    done
}
# Cleanup old blocked IPs
cleanup_blocks() {
    log_message "Cleaning up old blocked IPs..."
    # Remove blocks older than 24 hours
    if [ -f "$BLOCK_LIST" ]; then
        # Create temporary file with current blocks
        temp_file=$(mktemp)
        # Check each blocked IP
        while read -r ip; do
            if [ -n "$ip" ]; then
                # Check if IP should remain blocked (implement your logic here)
                # For now, keep all blocks for 24 hours
                echo "$ip" >> "$temp_file"
            fi
        done < "$BLOCK_LIST"
        # Replace block list
        mv "$temp_file" "$BLOCK_LIST"
    fi
}
# Main execution
main() {
    log_message "Starting automated response checks..."
    # Create block list file if it doesn't exist
    touch "$BLOCK_LIST"
    # Run security checks
    check_brute_force
    check_port_scans
    check_malware
    check_high_priority
    # Cleanup old blocks
    cleanup_blocks
    log_message "Automated response checks completed"
}
# Execute main function
main "$@"
Automation Scripts
Comprehensive Monitoring Script
#!/usr/bin/env python3
# Comprehensive Sguil monitoring and management
import mysql.connector
import subprocess
import json
import time
import os
import sys
from datetime import datetime, timedelta
import logging
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
class SguilMonitor:
    def __init__(self, config_file="sguil_monitor.json"):
        self.load_config(config_file)
        self.setup_logging()
    def load_config(self, config_file):
        """Load monitoring configuration"""
        default_config = {
            "database": {
                "host": "localhost",
                "user": "sguil",
                "password": "sguilpassword",
                "database": "sguildb"
            },
            "monitoring": {
                "check_interval": 300,  # 5 minutes
                "alert_thresholds": {
                    "events_per_hour": 1000,
                    "unique_sources": 100,
                    "high_priority_events": 10
                }
            },
            "notifications": {
                "email": {
                    "enabled": False,
                    "smtp_server": "localhost",
                    "smtp_port": 587,
                    "username": "",
                    "password": "",
                    "from": "sguil@company.com",
                    "to": "security@company.com"
                }
            },
            "response": {
                "auto_block": True,
                "block_threshold": 50,
                "block_duration": 3600  # 1 hour
            }
        }
        if os.path.exists(config_file):
            with open(config_file, 'r') as f:
                user_config = json.load(f)
            # Merge configurations
            self.config = {**default_config, **user_config}
        else:
            self.config = default_config
            # Save default config
            with open(config_file, 'w') as f:
                json.dump(default_config, f, indent=2)
    def setup_logging(self):
        """Setup logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('/var/log/sguil/monitor.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)
    def connect_database(self):
        """Connect to Sguil database"""
        try:
            db_config = self.config["database"]
            connection = mysql.connector.connect(**db_config)
            return connection
        except mysql.connector.Error as e:
            self.logger.error(f"Database connection failed: {e}")
            return None
    def check_system_health(self):
        """Check Sguil system health"""
        health_status = {
            "timestamp": datetime.now().isoformat(),
            "services": {},
            "database": {},
            "disk_space": {},
            "overall_status": "healthy"
        }
        # Check Sguil server process
        try:
            result = subprocess.run(['pgrep', '-f', 'sguild'], capture_output=True, text=True)
            health_status["services"]["sguild"] = "running" if result.returncode == 0 else "stopped"
        except Exception as e:
            health_status["services"]["sguild"] = f"error: {e}"
        # Check Snort processes
        try:
            result = subprocess.run(['pgrep', '-f', 'snort'], capture_output=True, text=True)
            health_status["services"]["snort"] = "running" if result.returncode == 0 else "stopped"
        except Exception as e:
            health_status["services"]["snort"] = f"error: {e}"
        # Check database connectivity
        connection = self.connect_database()
        if connection:
            try:
                cursor = connection.cursor()
                cursor.execute("SELECT COUNT(*) FROM event WHERE timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)")
                recent_events = cursor.fetchone()[0]
                health_status["database"]["status"] = "connected"
                health_status["database"]["recent_events"] = recent_events
                cursor.close()
                connection.close()
            except Exception as e:
                health_status["database"]["status"] = f"error: {e}"
        else:
            health_status["database"]["status"] = "disconnected"
        # Check disk space
        try:
            result = subprocess.run(['df', '-h', '/var/log/sguil'], capture_output=True, text=True)
            if result.returncode == 0:
                lines = result.stdout.strip().split('\n')
                if len(lines) > 1:
                    fields = lines[1].split()
                    health_status["disk_space"]["usage"] = fields[4]
                    health_status["disk_space"]["available"] = fields[3]
        except Exception as e:
            health_status["disk_space"]["error"] = str(e)
        # Determine overall status
        if (health_status["services"].get("sguild") != "running" or
                health_status["services"].get("snort") != "running" or
                health_status["database"].get("status") != "connected"):
            health_status["overall_status"] = "critical"
        elif int(health_status["disk_space"].get("usage", "0%").replace("%", "")) > 90:
            health_status["overall_status"] = "warning"
        return health_status
    def analyze_security_events(self, hours=1):
        """Analyze recent security events"""
        connection = self.connect_database()
        if not connection:
            return None
        try:
            cursor = connection.cursor(dictionary=True)
            # Get event statistics
            time_threshold = datetime.now() - timedelta(hours=hours)
            # Total events
            cursor.execute("""
                SELECT COUNT(*) as total_events
                FROM event
                WHERE timestamp >= %s
            """, (time_threshold,))
            total_events = cursor.fetchone()["total_events"]
            # Events by severity
            cursor.execute("""
                SELECT
                    CASE
                        WHEN signature_id IN (1, 2, 3) THEN 'high'
                        WHEN signature_id IN (4, 5, 6) THEN 'medium'
                        ELSE 'low'
                    END as severity,
                    COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY severity
            """, (time_threshold,))
            severity_stats = {row["severity"]: row["count"] for row in cursor.fetchall()}
            # Top source IPs
            cursor.execute("""
                SELECT src_ip, COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY src_ip
                ORDER BY count DESC
                LIMIT 10
            """, (time_threshold,))
            top_sources = cursor.fetchall()
            # Top signatures
            cursor.execute("""
                SELECT signature, COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY signature
                ORDER BY count DESC
                LIMIT 10
            """, (time_threshold,))
            top_signatures = cursor.fetchall()
            # Unique source IPs
            cursor.execute("""
                SELECT COUNT(DISTINCT src_ip) as unique_sources
                FROM event
                WHERE timestamp >= %s
            """, (time_threshold,))
            unique_sources = cursor.fetchone()["unique_sources"]
            cursor.close()
            connection.close()
            analysis = {
                "timestamp": datetime.now().isoformat(),
                "time_period_hours": hours,
                "total_events": total_events,
                "unique_sources": unique_sources,
                "severity_breakdown": severity_stats,
                "top_sources": top_sources,
                "top_signatures": top_signatures
            }
            return analysis
        except mysql.connector.Error as e:
            self.logger.error(f"Event analysis failed: {e}")
            return None
    def check_alert_thresholds(self, analysis):
        """Check if analysis exceeds alert thresholds"""
        if not analysis:
            return []
        alerts = []
        thresholds = self.config["monitoring"]["alert_thresholds"]
        # Check events per hour
        events_per_hour = analysis["total_events"] / analysis["time_period_hours"]
        if events_per_hour > thresholds["events_per_hour"]:
            alerts.append({
                "type": "high_event_rate",
                "message": f"High event rate: {events_per_hour:.0f} events/hour (threshold: {thresholds['events_per_hour']})",
                "severity": "warning"
            })
        # Check unique sources
        if analysis["unique_sources"] > thresholds["unique_sources"]:
            alerts.append({
                "type": "many_unique_sources",
                "message": f"Many unique sources: {analysis['unique_sources']} (threshold: {thresholds['unique_sources']})",
                "severity": "warning"
            })
        # Check high priority events
        high_priority_count = analysis["severity_breakdown"].get("high", 0)
        if high_priority_count > thresholds["high_priority_events"]:
            alerts.append({
                "type": "high_priority_events",
                "message": f"High priority events: {high_priority_count} (threshold: {thresholds['high_priority_events']})",
                "severity": "critical"
            })
        return alerts
    def send_notification(self, subject, body, alerts=None):
        """Send notification about monitoring results"""
        email_config = self.config["notifications"]["email"]
        if not email_config.get("enabled", False):
            return
        try:
            msg = MIMEMultipart()
            msg['From'] = email_config["from"]
            msg['To'] = email_config["to"]
            msg['Subject'] = f"Sguil Monitor: {subject}"
            # Add alerts to body if provided
            if alerts:
                body += "\n\nALERTS:\n"
                for alert in alerts:
                    body += f"- [{alert['severity'].upper()}] {alert['message']}\n"
            msg.attach(MIMEText(body, 'plain'))
            server = smtplib.SMTP(email_config["smtp_server"], email_config["smtp_port"])
            server.starttls()
            if email_config.get("username") and email_config.get("password"):
                server.login(email_config["username"], email_config["password"])
            text = msg.as_string()
            server.sendmail(email_config["from"], email_config["to"], text)
            server.quit()
            self.logger.info("Notification sent successfully")
        except Exception as e:
            self.logger.error(f"Failed to send notification: {e}")
    def auto_response(self, analysis, alerts):
        """Perform automated response actions"""
        if not self.config["response"]["auto_block"]:
            return
        # Check for sources that should be blocked
        block_threshold = self.config["response"]["block_threshold"]
        for source in analysis.get("top_sources", []):
            if source["count"] >= block_threshold:
                self.block_ip(source["src_ip"], f"Exceeded event threshold: {source['count']} events")
    def block_ip(self, ip, reason):
        """Block IP address using iptables"""
        try:
            # Check if already blocked
            result = subprocess.run(['iptables', '-L', 'INPUT', '-n'], capture_output=True, text=True)
            if ip in result.stdout:
                self.logger.info(f"IP {ip} already blocked")
                return
            # Add block rule
            subprocess.run(['iptables', '-I', 'INPUT', '-s', ip, '-j', 'DROP'], check=True)
            self.logger.info(f"Blocked IP {ip}: {reason}")
            # Log to file
            with open('/var/log/sguil/blocked_ips.log', 'a') as f:
                f.write(f"{datetime.now().isoformat()} - BLOCKED {ip}: {reason}\n")
        except subprocess.CalledProcessError as e:
            self.logger.error(f"Failed to block IP {ip}: {e}")
    def generate_report(self, health_status, analysis, alerts):
        """Generate monitoring report"""
        report = {
            "timestamp": datetime.now().isoformat(),
            "health_status": health_status,
            "security_analysis": analysis,
            "alerts": alerts,
            "summary": {
                "overall_health": health_status.get("overall_status", "unknown"),
                "total_events": analysis.get("total_events", 0) if analysis else 0,
                "alert_count": len(alerts),
                "critical_alerts": len([a for a in alerts if a.get("severity") == "critical"])
            }
        }
        # Save report
        report_file = f"/var/log/sguil/monitor-report-{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
        with open(report_file, 'w') as f:
            json.dump(report, f, indent=2)
        self.logger.info(f"Monitoring report saved: {report_file}")
        return report
    def run_monitoring_cycle(self):
        """Run complete monitoring cycle"""
        self.logger.info("Starting Sguil monitoring cycle")
        # Check system health
        health_status = self.check_system_health()
        # Analyze security events
        analysis = self.analyze_security_events(hours=1)
        # Check alert thresholds
        alerts = self.check_alert_thresholds(analysis) if analysis else []
        # Generate report
        report = self.generate_report(health_status, analysis, alerts)
        # Send notifications if needed
        if alerts or health_status.get("overall_status") != "healthy":
            subject = "System Alert" if health_status.get("overall_status") != "healthy" else "Security Alert"
            body = f"Sguil monitoring report generated at {datetime.now()}\n\n"
            body += f"System Status: {health_status.get('overall_status', 'unknown')}\n"
            if analysis:
                body += f"Events in last hour: {analysis['total_events']}\n"
                body += f"Unique sources: {analysis['unique_sources']}\n"
            self.send_notification(subject, body, alerts)
        # Perform automated response
        if analysis and alerts:
            self.auto_response(analysis, alerts)
        self.logger.info("Monitoring cycle completed")
        return report
    def start_continuous_monitoring(self):
        """Start continuous monitoring"""
        self.logger.info("Starting continuous Sguil monitoring")
        check_interval = self.config["monitoring"]["check_interval"]
        while True:
            try:
                self.run_monitoring_cycle()
                time.sleep(check_interval)
            except KeyboardInterrupt:
                self.logger.info("Monitoring stopped by user")
                break
            except Exception as e:
                self.logger.error(f"Monitoring cycle failed: {e}")
                time.sleep(60)  # Wait 1 minute before retrying
# Usage
if __name__ == "__main__":
    monitor = SguilMonitor()
    if len(sys.argv) > 1 and sys.argv[1] == "--continuous":
        monitor.start_continuous_monitoring()
    else:
        monitor.run_monitoring_cycle()
Integration Examples
ELK Stack Integration
#!/usr/bin/env python3
# Sguil to Elasticsearch integration
import mysql.connector
import json
from datetime import datetime
from elasticsearch import Elasticsearch
class SguilElasticsearchIntegration:
    def __init__(self, sguil_db_config, elasticsearch_config):
        self.sguil_db_config = sguil_db_config
        self.es = Elasticsearch([elasticsearch_config])
    def connect_sguil_db(self):
        """Connect to Sguil database"""
        return mysql.connector.connect(**self.sguil_db_config)
    def fetch_events(self, since_timestamp=None):
        """Fetch events from Sguil database"""
        connection = self.connect_sguil_db()
        cursor = connection.cursor(dictionary=True)
        if since_timestamp:
            query = "SELECT * FROM event WHERE timestamp > %s ORDER BY timestamp"
            cursor.execute(query, (since_timestamp,))
        else:
            query = "SELECT * FROM event ORDER BY timestamp DESC LIMIT 1000"
            cursor.execute(query)
        events = cursor.fetchall()
        cursor.close()
        connection.close()
        return events
    def transform_event(self, event):
        """Transform Sguil event for Elasticsearch"""
        return {
            "@timestamp": event["timestamp"].isoformat(),
            "sguil": {
                "sid": event["sid"],
                "cid": event["cid"],
                "signature": event["signature"],
                "signature_gen": event["signature_gen"],
                "signature_id": event["signature_id"],
                "signature_rev": event["signature_rev"]
            },
            "source": {
                "ip": event["src_ip"],
                "port": event["src_port"]
            },
            "destination": {
                "ip": event["dst_ip"],
                "port": event["dst_port"]
            },
            "network": {
                "protocol": event["ip_proto"]
            },
            "event": {
                "severity": self.get_severity(event["signature_id"]),
                "category": self.get_category(event["signature"])
            }
        }
    def get_severity(self, signature_id):
        """Determine event severity"""
        if signature_id in [1, 2, 3]:
            return "high"
        elif signature_id in [4, 5, 6]:
            return "medium"
        else:
            return "low"
    def get_category(self, signature):
        """Determine event category"""
        signature_lower = signature.lower()
        if any(word in signature_lower for word in ["trojan", "malware", "backdoor"]):
            return "malware"
        elif any(word in signature_lower for word in ["scan", "probe", "recon"]):
            return "reconnaissance"
        elif any(word in signature_lower for word in ["brute", "force", "login"]):
            return "brute_force"
        elif any(word in signature_lower for word in ["dos", "ddos", "flood"]):
            return "denial_of_service"
        else:
            return "other"
    def index_events(self, events, index_name="sguil-events"):
        """Index events in Elasticsearch"""
        for event in events:
            doc = self.transform_event(event)
            doc_id = f"{event['sid']}-{event['cid']}"
            self.es.index(
                index=f"{index_name}-{datetime.now().strftime('%Y.%m.%d')}",
                id=doc_id,
                body=doc
            )
    def sync_events(self, index_name="sguil-events"):
        """Sync events from Sguil to Elasticsearch"""
        # Get last synced timestamp
        try:
            result = self.es.search(
                index=f"{index_name}-*",
                body={
                    "size": 1,
                    "sort": [{"@timestamp": {"order": "desc"}}],
                    "_source": ["@timestamp"]
                }
            )
            if result["hits"]["hits"]:
                last_timestamp = result["hits"]["hits"][0]["_source"]["@timestamp"]
            else:
                last_timestamp = None
        except Exception:
            last_timestamp = None
        # Fetch and index new events
        events = self.fetch_events(last_timestamp)
        if events:
            self.index_events(events, index_name)
            print(f"Indexed {len(events)} events to Elasticsearch")
# Usage
sguil_config = {
    'host': 'localhost',
    'database': 'sguildb',
    'user': 'sguil',
    'password': 'sguilpassword'
}
es_config = {
    'host': 'localhost',
    'port': 9200
}
integration = SguilElasticsearchIntegration(sguil_config, es_config)
integration.sync_events()
Splunk Integration
#!/bin/bash
# Sguil to Splunk integration script
SGUIL_DB_HOST="localhost"
SGUIL_DB_USER="sguil"
SGUIL_DB_PASS="sguilpassword"
SGUIL_DB_NAME="sguildb"
SPLUNK_HEC_URL="https://splunk.company.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN="your-hec-token"
# Export Sguil events as JSON for Splunk
export_events_to_splunk() {
    local since_timestamp="$1"
    # Query Sguil database
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -e "
        SELECT
            UNIX_TIMESTAMP(timestamp) as time,
            'sguil' as source,
            'sguil:event' as sourcetype,
            JSON_OBJECT(
                'sid', sid,
                'cid', cid,
                'signature', signature,
                'signature_gen', signature_gen,
                'signature_id', signature_id,
                'src_ip', src_ip,
                'src_port', src_port,
                'dst_ip', dst_ip,
                'dst_port', dst_port,
                'ip_proto', ip_proto,
                'timestamp', timestamp
            ) as event
        FROM event
        WHERE timestamp > '$since_timestamp'
        ORDER BY timestamp
    " -N | while read time source sourcetype event; do
        # Send to Splunk HEC
        curl -k -X POST "$SPLUNK_HEC_URL" \
            -H "Authorization: Splunk $SPLUNK_HEC_TOKEN" \
            -H "Content-Type: application/json" \
            -d "{\"time\": $time, \"source\": \"$source\", \"sourcetype\": \"$sourcetype\", \"event\": $event}"
    done
}
# Get last sync timestamp
LAST_SYNC_FILE="/var/log/sguil/last_splunk_sync"
if [ -f "$LAST_SYNC_FILE" ]; then
    LAST_SYNC=$(cat "$LAST_SYNC_FILE")
else
    LAST_SYNC=$(date -d "1 hour ago" '+%Y-%m-%d %H:%M:%S')
fi
# Export events
export_events_to_splunk "$LAST_SYNC"
# Update last sync timestamp
date '+%Y-%m-%d %H:%M:%S' > "$LAST_SYNC_FILE"
Troubleshooting
Common Issues
Database Connection Problems:
# Check MySQL service
sudo systemctl status mysql
sudo systemctl start mysql
# Test database connection
mysql -h localhost -u sguil -p sguildb -e "SELECT COUNT(*) FROM event;"
# Check database permissions
mysql -u root -p -e "SHOW GRANTS FOR 'sguil'@'localhost';"
# Repair database tables
mysql -u sguil -p sguildb -e "REPAIR TABLE event;"
mysql -u sguil -p sguildb -e "OPTIMIZE TABLE event;"
Sensor Connectivity Issues:
# Check sensor agent process
ps aux | grep sensor_agent
# Test network connectivity to server
telnet sguil-server 7735
# Check sensor logs
tail -f /var/log/sguil/sensor_agent.log
# Verify sensor configuration
cat /etc/sguil/sensor_agent.conf
# Test Snort configuration
snort -T -c /etc/snort/snort.conf
Performance Issues:
# Check database performance
mysql -u sguil -p sguildb -e "SHOW procesoLIST;"
mysql -u sguil -p sguildb -e "SHOW ENGINE INNODB STATUS\G"
# Optimize database
mysql -u sguil -p sguildb -e "ANALYZE TABLE event;"
mysql -u sguil -p sguildb -e "OPTIMIZE TABLE event;"
# Check disk space
df -h /var/log/sguil
df -h /var/lib/mysql
# Monitor system resources
top -p $(pgrep sguild)
iostat -x 1
Performance Optimization
Optimizing Sguil performance:
# MySQL optimization
cat >> /etc/mysql/mysql.conf.d/sguil.cnf << 'EOF'
[mysqld]
# Sguil optimizations
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2
query_cache_size = 256M
query_cache_type = 1
max_connections = 200
EOF
# Restart MySQL
sudo systemctl restart mysql
# Archive old events
mysql -u sguil -p sguildb -e "
DELETE FROM event
WHERE timestamp < DATE_SUB(NOW(), INTERVAL 30 DAY);
"
# Create database indexes
mysql -u sguil -p sguildb -e "
CREATE INDEX idx_event_timestamp ON event (timestamp);
CREATE INDEX idx_event_src_ip ON event (src_ip);
CREATE INDEX idx_event_dst_ip ON event (dst_ip);
CREATE INDEX idx_event_signature_id ON event (signature_id);
"
Security Considerations
Access Control
Database Security:
- Use strong passwords for database accounts
- Limit database access to the hosts that need it
- Take regular backups of the Sguil database (see the sketch below)
- Implement database encryption for sensitive data
- Monitor database access logs
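As one example of the backup recommendation above, a nightly logical dump can be scripted with mysqldump; this is a sketch assuming the sguil database credentials used throughout this sheet and a hypothetical backup directory:
# Nightly dump of the Sguil database, keeping 14 days of archives
BACKUP_DIR="/var/backups/sguil"   # hypothetical location
mkdir -p "$BACKUP_DIR"
mysqldump -u sguil -p'sguilpassword' --single-transaction sguildb | gzip > "$BACKUP_DIR/sguildb-$(date +%F).sql.gz"
find "$BACKUP_DIR" -name 'sguildb-*.sql.gz' -mtime +14 -delete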
Network Security:
- Use encrypted connections between components
- Implement firewall rules for the Sguil ports (see the sketch below)
- Apply regular security updates to all components
- Monitor network traffic to the Sguil infrastructure
- Implement network segmentation
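For the firewall recommendation, a minimal iptables sketch restricting the Sguil ports used in this sheet (7734 for analyst clients, 7735 for sensor agents) to placeholder subnets:
# Allow analyst clients (placeholder subnet) to reach the client port
sudo iptables -A INPUT -p tcp -s 10.0.10.0/24 --dport 7734 -j ACCEPT
# Allow sensors (placeholder subnet) to reach the sensor agent port
sudo iptables -A INPUT -p tcp -s 10.0.20.0/24 --dport 7735 -j ACCEPT
# Drop everything else destined for the Sguil ports
sudo iptables -A INPUT -p tcp -m multiport --dports 7734,7735 -j DROP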
Data Protection
Event Data Security:
- Encrypt sensitive event data at rest
- Implement data retention policies
- Secure packet capture storage
- Regularly clean up temporary files
- Implement access logging for event data
Operational Security:
- Perform regular security assessments of the Sguil infrastructure
- Monitor for unauthorized access attempts
- Implement proper backup and recovery procedures
- Keep Sguil and its dependencies up to date
- Maintain incident response procedures for a compromise of the Sguil infrastructure