
Sguil

Sguil Cheat Sheet


Overview

Sguil (pronounced "sgweel") is a comprehensive network security monitoring platform that provides real-time analysis and correlation of network security events. Developed by Bamm Visscher, Sguil serves as a centralized console from which security analysts monitor, investigate, and respond to security incidents across distributed sensor networks. The platform integrates multiple security tools, including Snort for intrusion detection, Barnyard2 for alert processing, and various network monitoring utilities, into a unified security operations center (SOC). Sguil's strength lies in its ability to analyze security events in context, correlating alerts with full packet captures, session data, and historical information.

Sguil's core architecture consists of three main components: sensors that collect network data and generate alerts, a central server that aggregates and correlates security events, and client interfaces that give analysts powerful investigation capabilities. Sensors typically run the Snort IDS, tcpdump for packet capture, and various log collection agents, while the central server maintains a MySQL database for event storage and correlation. The client interface provides real-time alert monitoring, packet analysis capabilities, and collaborative features that allow security teams to triage and investigate security incidents effectively.
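
As a quick illustration of how the pieces connect, the analyst client only needs the central server's address and client port. The sketch below assumes the stock /opt/sguil-client/sguil.conf layout installed later in this guide; the SERVERHOST/SERVERPORT variable names follow Sguil's usual Tcl "set NAME value" convention, so verify them against the file shipped with your version.

# Point the analyst client at the central server (illustrative values)
sudo sed -i \
    -e 's/^set SERVERHOST .*/set SERVERHOST 192.168.1.100/' \
    -e 's/^set SERVERPORT .*/set SERVERPORT 7734/' \
    /opt/sguil-client/sguil.conf

# Launch the client; it prompts for a Sguil username and password
/opt/sguil-client/sguil.tk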

Sguil's comprehensive approach to network security monitoring makes it particularly valuable for organizations that need to maintain detailed audit trails and perform forensic analysis of security incidents. The platform supports distributed deployments across multiple network segments, allowing organizations to monitor complex network infrastructures while retaining centralized visibility and control. Sguil has become a cornerstone technology for many security operations centers and incident response teams around the world.

Installation

Ubuntu/Debian Installation

Installing Sguil on Ubuntu/Debian systems:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install required dependencies
sudo apt install -y mysql-server mysql-client tcl tk tcl-dev tk-dev \
    tclx8.4 tcllib mysqltcl wireshark tshark snort barnyard2 \
    apache2 php php-mysql libmysqlclient-dev build-essential \
    git wget curl

# Install additional Tcl packages
sudo apt install -y tcl-tls tcl-trf

# Download Sguil
cd /opt
sudo git clone https://github.com/bammv/sguil.git
sudo chown -R $USER:$USER sguil

# Create Sguil user
sudo useradd -r -s /bin/false sguil
sudo usermod -a -G sguil $USER

# Setup directory structure
sudo mkdir -p /var/log/sguil
sudo mkdir -p /var/lib/sguil
sudo mkdir -p /etc/sguil
sudo chown -R sguil:sguil /var/log/sguil /var/lib/sguil
sudo chmod 755 /var/log/sguil /var/lib/sguil

# Install Sguil components
cd /opt/sguil
sudo cp -r server /opt/sguil-server
sudo cp -r client /opt/sguil-client
sudo cp -r sensor /opt/sguil-sensor
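
Sguil's server and client are Tcl programs, so it is worth confirming that the Tcl extensions installed above actually load before going further. A hedged sanity check:

# Verify the Tcl extensions Sguil depends on can be loaded by tclsh
echo 'foreach p {Tclx mysqltcl tls} {
    if {[catch {package require $p} ver]} {
        puts "$p: NOT FOUND ($ver)"
    } else {
        puts "$p: version $ver"
    }
}' | tclsh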

CentOS/RHEL Installation

# Install EPEL repository
sudo yum install -y epel-release

# Install required packages
sudo yum groupinstall -y "Development Tools"
sudo yum install -y mysql-server mysql mysql-devel tcl tk tcl-devel \
    tk-devel wireshark snort barnyard2 httpd php php-mysql \
    git wget curl

# Install additional Tcl packages
sudo yum install -y tcl-tls

# Start and enable MySQL
sudo systemctl start mysqld
sudo systemctl enable mysqld

# Secure MySQL installation
sudo mysql_secure_installation

# Download and install Sguil
cd /opt
sudo git clone https://github.com/bammv/sguil.git
sudo chown -R $USER:$USER sguil

# Create system user
sudo useradd -r -s /bin/false sguil

# Setup directories
sudo mkdir -p /var/log/sguil /var/lib/sguil /etc/sguil
sudo chown -R sguil:sguil /var/log/sguil /var/lib/sguil

Docker Installation

Running Sguil in Docker containers:

# Create Docker network for Sguil
docker network create sguil-network

# Create MySQL container for Sguil database
docker run -d --name sguil-mysql \
    --network sguil-network \
    -e MYSQL_ROOT_PASSWORD=sguilpassword \
    -e MYSQL_DATABASE=sguildb \
    -e MYSQL_USER=sguil \
    -e MYSQL_PASSWORD=sguilpass \
    -v sguil-mysql-data:/var/lib/mysql \
    mysql:5.7

# Create Sguil server container
cat > Dockerfile.sguil-server << 'EOF'
FROM ubuntu:20.04

# Install dependencies
RUN apt-get update && apt-get install -y \
    tcl tk tcl-dev tk-dev tclx8.4 tcllib mysqltcl \
    mysql-client git && \
    rm -rf /var/lib/apt/lists/*

# Copy Sguil server
COPY server /opt/sguil-server
WORKDIR /opt/sguil-server

# Create sguil user
RUN useradd -r -s /bin/false sguil

# Setup directories
RUN mkdir -p /var/log/sguil /var/lib/sguil && \
    chown -R sguil:sguil /var/log/sguil /var/lib/sguil

EXPOSE 7734 7735

CMD ["./sguild"]
EOF

# Build and run Sguil server
docker build -f Dockerfile.sguil-server -t sguil-server .
docker run -d --name sguil-server \
    --network sguil-network \
    -p 7734:7734 -p 7735:7735 \
    -v sguil-logs:/var/log/sguil \
    sguil-server

# Create Sguil sensor container
cat > Dockerfile.sguil-sensor << 'EOF'
FROM ubuntu:20.04

# Install dependencies
RUN apt-get update && apt-get install -y \
    tcl tk snort barnyard2 tcpdump wireshark-common \
    git && rm -rf /var/lib/apt/lists/*

# Copy Sguil sensor
COPY sensor /opt/sguil-sensor
WORKDIR /opt/sguil-sensor

# Create sguil user
RUN useradd -r -s /bin/false sguil

EXPOSE 7736

CMD ["./sensor_agent.tcl"]
EOF

# Build and run Sguil sensor
docker build -f Dockerfile.sguil-sensor -t sguil-sensor .
docker run -d --name sguil-sensor \
    --network sguil-network \
    --cap-add=NET_ADMIN \
    --cap-add=NET_RAW \
    -v /var/log/snort:/var/log/snort \
    sguil-sensor

Source Installation

# Download latest Sguil source
cd /tmp
wget https://github.com/bammv/sguil/archive/master.tar.gz
tar -xzf master.tar.gz
cd sguil-master

# Install server components
sudo mkdir -p /opt/sguil-server
sudo cp -r server/* /opt/sguil-server/
sudo chown -R sguil:sguil /opt/sguil-server

# Install client components
sudo mkdir -p /opt/sguil-client
sudo cp -r client/* /opt/sguil-client/
sudo chown -R $USER:$USER /opt/sguil-client

# Install sensor components
sudo mkdir -p /opt/sguil-sensor
sudo cp -r sensor/* /opt/sguil-sensor/
sudo chown -R sguil:sguil /opt/sguil-sensor

# Make scripts executable
sudo chmod +x /opt/sguil-server/sguild
sudo chmod +x /opt/sguil-client/sguil.tk
sudo chmod +x /opt/sguil-sensor/sensor_agent.tcl

# Create symbolic links
sudo ln -s /opt/sguil-server/sguild /usr/local/bin/sguild
sudo ln -s /opt/sguil-client/sguil.tk /usr/local/bin/sguil
sudo ln -s /opt/sguil-sensor/sensor_agent.tcl /usr/local/bin/sensor_agent
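
A few illustrative sanity checks confirm the copied components and symlinks are in place:

# Confirm the symlinks resolve and the Tcl entry points exist
ls -l /usr/local/bin/sguild /usr/local/bin/sguil /usr/local/bin/sensor_agent
head -n 1 /opt/sguil-server/sguild /opt/sguil-client/sguil.tk \
    /opt/sguil-sensor/sensor_agent.tcl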

Basic Usage

Database Setup

Configuring the Sguil MySQL database:

# Connect to MySQL as root
mysql -u root -p

# Create Sguil database and user
CREATE DATABASE sguildb;
CREATE USER 'sguil'@'localhost' IDENTIFIED BY 'sguilpassword';
GRANT ALL PRIVILEGES ON sguildb.* TO 'sguil'@'localhost';
FLUSH PRIVILEGES;
EXIT;

# Import Sguil database schema
cd /opt/sguil-server
mysql -u sguil -p sguildb < lib/sql_scripts/create_sguildb.sql

# Verify database creation
mysql -u sguil -p sguildb -e "SHOW TABLES;"

# Create additional indexes for performance
mysql -u sguil -p sguildb << 'EOF'
CREATE INDEX event_timestamp_idx ON event (timestamp);
CREATE INDEX event_src_ip_idx ON event (src_ip);
CREATE INDEX event_dst_ip_idx ON event (dst_ip);
CREATE INDEX event_signature_idx ON event (signature);
EOF
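
A quick query confirms the schema import and the extra indexes on the event table:

# Verify the schema and indexes
mysql -u sguil -p sguildb -e "SHOW TABLES; SHOW INDEX FROM event\G"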

Server Configuration

Configuring the Sguil server:

# Create server configuration
cat > /etc/sguil/sguild.conf << 'EOF'
# Sguil Server Configuration

# Database configuration
set DBHOST localhost
set DBPORT 3306
set DBNAME sguildb
set DBUSER sguil
set DBPASS sguilpassword

# Server configuration
set SERVER_HOST 0.0.0.0
set SERVER_PORT 7734
set SENSOR_PORT 7735

# Logging configuration
set DEBUG 1
set DAEMON 0
set MAX_DBCONNECTIONS 10

# File paths
set TMP_DIR /tmp
set LOG_DIR /var/log/sguil
set ARCHIVE_DIR /var/lib/sguil/archive

# Email configuration
set SMTP_SERVER localhost
set FROM_EMAIL sguil@localhost

# Sensor configuration
set SENSOR_TIMEOUT 300
set MAX_SENSORS 50

# Event processing
set MAX_EVENTS_PER_QUERY 1000
set EVENT_CACHE_SIZE 10000

# Auto-categorization rules
set AUTO_CAT_RULES /etc/sguil/autocat.conf
EOF

# Create auto-categorization rules
cat > /etc/sguil/autocat.conf << 'EOF'
# Auto-categorization rules for Sguil
# Format: signature_id|category|comment

# DNS events
1:2100001|Cat V|DNS Query
1:2100002|Cat V|DNS Response

# HTTP events
1:2100010|Cat IV|HTTP Traffic
1:2100011|Cat III|Suspicious HTTP

# SSH events
1:2100020|Cat IV|SSH Traffic
1:2100021|Cat II|SSH Brute Force

# Malware events
1:2100030|Cat I|Malware Detected
1:2100031|Cat I|Trojan Activity
EOF

# Set proper permissions
sudo chown sguil:sguil /etc/sguil/sguild.conf
sudo chmod 600 /etc/sguil/sguild.conf
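
Because sguild.conf is plain Tcl, tclsh can evaluate it to catch typos before the server starts; running the check as the sguil user also confirms the permissions set above. This is only an optional, illustrative check:

# Evaluate the configuration with tclsh to surface syntax errors early
sudo -u sguil tclsh << 'EOF'
source /etc/sguil/sguild.conf
puts "Database: $DBUSER@$DBHOST:$DBPORT/$DBNAME"
puts "Client port: $SERVER_PORT  Sensor port: $SENSOR_PORT"
EOF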

Starting the Sguil Server

Starting and managing the Sguil server:

# Start Sguil server manually
cd /opt/sguil-server
sudo -u sguil ./sguild -c /etc/sguil/sguild.conf

# Start in daemon mode
sudo -u sguil ./sguild -c /etc/sguil/sguild.conf -D

# Create systemd service
sudo tee /etc/systemd/system/sguild.service > /dev/null << 'EOF'
[Unit]
Description=Sguil Server
After=network.target mysql.service

[Service]
Type=simple
User=sguil
Group=sguil
WorkingDirectory=/opt/sguil-server
ExecStart=/opt/sguil-server/sguild -c /etc/sguil/sguild.conf -D
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable sguild
sudo systemctl start sguild

# Check service status
sudo systemctl status sguild

# View server logs
sudo journalctl -u sguild -f
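
Once the service is up, the server should be listening on the client and sensor ports defined in sguild.conf:

# Confirm sguild is listening on the client (7734) and sensor (7735) ports
sudo ss -tlnp | grep -E ':(7734|7735)'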

Sensor Configuration

Configuring Sguil sensors:

# Create sensor configuration
cat > /etc/sguil/sensor_agent.conf << 'EOF'
# Sguil Sensor Configuration

# Server connection
set SERVER_HOST 192.168.1.100
set SERVER_PORT 7735
set HOSTNAME sensor01
set NET_GROUP Internal

# Sensor configuration
set INTERFACE eth0
set SENSOR_ID 1
set ENCODING ascii

# Snort configuration
set SNORT_PERF_FILE /var/log/snort/snort.stats
set WATCH_DIR /var/log/snort

# Packet capture
set PCAP_DIR /var/log/sguil/pcap
set MAX_PCAP_SIZE 100000000
set PCAP_RING_BUFFER 1

# Barnyard2 configuration
set BY_PORT 7736
set BY_HOST localhost

# File monitoring
set PORTSCAN_DIR /var/log/sguil/portscan
set SANCP_DIR /var/log/sguil/sancp

# Logging
set DEBUG 1
set LOG_DIR /var/log/sguil
EOF

# Create sensor startup script
cat > /opt/sguil-sensor/start_sensor.sh << 'EOF'
#!/bin/bash

# Sguil Sensor Startup Script

SENSOR_DIR="/opt/sguil-sensor"
CONFIG_FILE="/etc/sguil/sensor_agent.conf"
LOG_DIR="/var/log/sguil"
PCAP_DIR="/var/log/sguil/pcap"

# Create directories
mkdir -p $LOG_DIR $PCAP_DIR

# Start packet capture
tcpdump -i eth0 -s 1514 -w $PCAP_DIR/sensor01.pcap &
TCPDUMP_PID=$!

# Start Snort
snort -D -i eth0 -c /etc/snort/snort.conf -l /var/log/snort

# Start Barnyard2
barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo &
BARNYARD_PID=$!

# Start sensor agent
cd $SENSOR_DIR
./sensor_agent.tcl -c $CONFIG_FILE

# Cleanup on exit
trap "kill $TCPDUMP_PID $BARNYARD_PID" EXIT
EOF

chmod +x /opt/sguil-sensor/start_sensor.sh

# Start sensor
sudo /opt/sguil-sensor/start_sensor.sh
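
A few checks confirm the sensor components started and that the agent reached the server (the log path matches the one used in the troubleshooting section later):

# Confirm the sensor processes are running
pgrep -af 'sensor_agent|snort|barnyard2|tcpdump'

# Confirm an established connection to the server's sensor port
ss -tnp | grep ':7735'

# Watch the agent log for connection and upload messages
tail -f /var/log/sguil/sensor_agent.log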

Advanced Features

Custom Rules and Signatures

Creating custom rules for Sguil:

# Create custom rules directory
sudo mkdir -p /etc/snort/rules/custom

# Create custom rule file
cat > /etc/snort/rules/custom/local.rules << 'EOF'
# Custom Sguil Rules

# Detect suspicious DNS queries
alert udp any any -> any 53 (msg:"CUSTOM DNS Query to suspicious domain"; content:"|01 00 00 01|"; offset:2; depth:4; content:"malware"; nocase; sid:2100001; rev:1; classtype:trojan-activity;)

# Detect HTTP POST to suspicious URLs
alert tcp any any -> any 80 (msg:"CUSTOM Suspicious HTTP POST"; method:POST; content:"upload"; http_uri; nocase; sid:2100002; rev:1; classtype:web-application-attack;)

# Detect SSH brute force attempts
alert tcp any any -> any 22 (msg:"CUSTOM SSH Brute Force Attempt"; flags:S; threshold:type both, track by_src, count 10, seconds 60; sid:2100003; rev:1; classtype:attempted-recon;)

# Detect large file downloads
alert tcp any 80 -> any any (msg:"CUSTOM Large File Download"; flow:established,from_server; dsize:>1000000; sid:2100004; rev:1; classtype:policy-violation;)

# Detect IRC traffic
alert tcp any any -> any 6667 (msg:"CUSTOM IRC Traffic Detected"; content:"NICK"; offset:0; depth:4; sid:2100005; rev:1; classtype:policy-violation;)
EOF

# Update Snort configuration to include custom rules
echo 'include $RULE_PATH/custom/local.rules' >> /etc/snort/snort.conf

# Restart Snort to load new rules
sudo systemctl restart snort
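
Snort's test mode (the same check used in the troubleshooting section) validates that the custom rules parse cleanly; run it before or after the restart so a bad rule does not silently take the sensor's detection offline:

# Validate the configuration and the new custom rules without starting Snort
sudo snort -T -c /etc/snort/snort.conf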

Event Correlation Scripts

Creating event correlation and analysis scripts:

#!/usr/bin/env python3
# Sguil Event Correlation Script

import mysql.connector
import json
import time
from datetime import datetime, timedelta
import logging

class SguilCorrelator:
    def __init__(self, db_config):
        self.db_config = db_config
        self.setup_logging()

    def setup_logging(self):
        """Setup logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('/var/log/sguil/correlator.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)

    def connect_database(self):
        """Connect to Sguil database"""
        try:
            connection = mysql.connector.connect(**self.db_config)
            return connection
        except mysql.connector.Error as e:
            self.logger.error(f"Database connection failed: {e}")
            return None

    def get_recent_events(self, hours=1):
        """Get recent events from Sguil database"""
        connection = self.connect_database()
        if not connection:
            return []

        try:
            cursor = connection.cursor(dictionary=True)

            # Calculate time threshold
            time_threshold = datetime.now() - timedelta(hours=hours)

            query = """
            SELECT
                sid, cid, signature, signature_gen, signature_id,
                src_ip, src_port, dst_ip, dst_port, ip_proto,
                timestamp, status
            FROM event
            WHERE timestamp >= %s
            ORDER BY timestamp DESC
            """

            cursor.execute(query, (time_threshold,))
            events = cursor.fetchall()

            cursor.close()
            connection.close()

            return events

        except mysql.connector.Error as e:
            self.logger.error(f"Query failed: {e}")
            return []

    def correlate_brute_force(self, events):
        """Correlate brute force attacks"""
        brute_force_attacks = {}

        for event in events:
            # Look for SSH brute force patterns
            if 'ssh' in event['signature'].lower() and 'brute' in event['signature'].lower():
                src_ip = event['src_ip']
                dst_ip = event['dst_ip']

                key = f"{src_ip}->{dst_ip}"

                if key not in brute_force_attacks:
                    brute_force_attacks[key] = {
                        'src_ip': src_ip,
                        'dst_ip': dst_ip,
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }

                brute_force_attacks[key]['count'] += 1
                brute_force_attacks[key]['last_seen'] = event['timestamp']
                brute_force_attacks[key]['events'].append(event)

        # Filter significant attacks
        significant_attacks = {k: v for k, v in brute_force_attacks.items() if v['count'] >= 5}

        return significant_attacks

    def correlate_port_scans(self, events):
        """Correlate port scanning activities"""
        port_scans = {}

        for event in events:
            # Look for port scan indicators
            if any(keyword in event['signature'].lower() for keyword in ['scan', 'probe', 'recon']):
                src_ip = event['src_ip']

                if src_ip not in port_scans:
                    port_scans[src_ip] = {
                        'src_ip': src_ip,
                        'target_ports': set(),
                        'target_hosts': set(),
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }

                port_scans[src_ip]['target_ports'].add(event['dst_port'])
                port_scans[src_ip]['target_hosts'].add(event['dst_ip'])
                port_scans[src_ip]['count'] += 1
                port_scans[src_ip]['last_seen'] = event['timestamp']
                port_scans[src_ip]['events'].append(event)

        # Convert sets to lists for JSON serialization
        for scan in port_scans.values():
            scan['target_ports'] = list(scan['target_ports'])
            scan['target_hosts'] = list(scan['target_hosts'])

        # Filter significant scans
        significant_scans = {k: v for k, v in port_scans.items()
                           if len(v['target_ports']) >= 5 or len(v['target_hosts']) >= 3}

        return significant_scans

    def correlate_malware_activity(self, events):
        """Correlate malware-related activities"""
        malware_activities = {}

        malware_keywords = ['trojan', 'malware', 'backdoor', 'botnet', 'c2', 'command']

        for event in events:
            signature_lower = event['signature'].lower()

            if any(keyword in signature_lower for keyword in malware_keywords):
                src_ip = event['src_ip']

                if src_ip not in malware_activities:
                    malware_activities[src_ip] = {
                        'src_ip': src_ip,
                        'malware_types': set(),
                        'target_hosts': set(),
                        'count': 0,
                        'first_seen': event['timestamp'],
                        'last_seen': event['timestamp'],
                        'events': []
                    }

                # Extract malware type
                for keyword in malware_keywords:
                    if keyword in signature_lower:
                        malware_activities[src_ip]['malware_types'].add(keyword)

                malware_activities[src_ip]['target_hosts'].add(event['dst_ip'])
                malware_activities[src_ip]['count'] += 1
                malware_activities[src_ip]['last_seen'] = event['timestamp']
                malware_activities[src_ip]['events'].append(event)

        # Convert sets to lists
        for activity in malware_activities.values():
            activity['malware_types'] = list(activity['malware_types'])
            activity['target_hosts'] = list(activity['target_hosts'])

        return malware_activities

    def generate_correlation_report(self, correlations):
        """Generate correlation report"""
        report = {
            'timestamp': datetime.now().isoformat(),
            'summary': {
                'brute_force_attacks': len(correlations.get('brute_force', {})),
                'port_scans': len(correlations.get('port_scans', {})),
                'malware_activities': len(correlations.get('malware', {}))
            },
            'correlations': correlations
        }

        # Save report
        report_file = f"/var/log/sguil/correlation-{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
        with open(report_file, 'w') as f:
            json.dump(report, f, indent=2, default=str)

        self.logger.info(f"Correlation report generated: {report_file}")
        return report

    def run_correlation(self, hours=1):
        """Run complete correlation analysis"""
        self.logger.info("Starting event correlation analysis")

        # Get recent events
        events = self.get_recent_events(hours)
        self.logger.info(f"Analyzing {len(events)} events from last {hours} hour(s)")

        if not events:
            self.logger.info("No events to correlate")
            return None

        # Perform correlations
        correlations = {
            'brute_force': self.correlate_brute_force(events),
            'port_scans': self.correlate_port_scans(events),
            'malware': self.correlate_malware_activity(events)
        }

        # Generate report
        report = self.generate_correlation_report(correlations)

        # Log summary
        summary = report['summary']
        self.logger.info(f"Correlation complete: {summary['brute_force_attacks']} brute force, "
                        f"{summary['port_scans']} port scans, {summary['malware_activities']} malware activities")

        return report

# Usage
if __name__ == "__main__":
    db_config = {
        'host': 'localhost',
        'database': 'sguildb',
        'user': 'sguil',
        'password': 'sguilpassword'
    }

    correlator = SguilCorrelator(db_config)
    report = correlator.run_correlation(hours=24)
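
The correlator is intended to run on a schedule. A minimal cron sketch, assuming the script above is saved as /opt/sguil-server/sguil_correlator.py (an illustrative path):

# Run the correlation script hourly via cron (script path is illustrative)
echo '0 * * * * root /usr/bin/python3 /opt/sguil-server/sguil_correlator.py' | \
    sudo tee /etc/cron.d/sguil-correlator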

Automated Response Scripts

Creating automated incident response scripts:

#!/bin/bash
# Sguil Automated Response Script

# Configuration
SGUIL_DB_HOST="localhost"
SGUIL_DB_USER="sguil"
SGUIL_DB_PASS="sguilpassword"
SGUIL_DB_NAME="sguildb"
RESPONSE_LOG="/var/log/sguil/automated_response.log"
BLOCK_LIST="/etc/sguil/blocked_ips.txt"

# Logging function
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$RESPONSE_LOG"
}

# Block IP address using iptables
block_ip() {
    local ip="$1"
    local reason="$2"

    # Check if IP is already blocked
    if iptables -L INPUT -n|grep -q "$ip"; then
        log_message "IP $ip already blocked"
        return 0
    fi

    # Add iptables rule
    iptables -I INPUT -s "$ip" -j DROP

    if [ $? -eq 0 ]; then
        log_message "BLOCKED IP: $ip - Reason: $reason"
        echo "$ip" >> "$BLOCK_LIST"

        # Send notification
        send_notification "IP Blocked" "Automatically blocked IP $ip due to: $reason"

        return 0
    else
        log_message "FAILED to block IP: $ip"
        return 1
    fi
}

# Unblock IP address
unblock_ip() {
    local ip="$1"

    # Remove iptables rule
    iptables -D INPUT -s "$ip" -j DROP 2>/dev/null

    if [ $? -eq 0 ]; then
        log_message "UNBLOCKED IP: $ip"

        # Remove from block list
        sed -i "/$ip/d" "$BLOCK_LIST"

        return 0
    else
        log_message "FAILED to unblock IP: $ip (may not have been blocked)"
        return 1
    fi
}

# Send notification
send_notification() {
    local subject="$1"
    local message="$2"

    # Send email notification (if mail is configured)
    if command -v mail >/dev/null 2>&1; then
        echo "$message"|mail -s "Sguil Alert: $subject" security@company.com
    fi

    # Send to syslog
    logger -t sguil-response "$subject: $message"
}

# Check for brute force attacks
check_brute_force() {
    log_message "Checking for brute force attacks..."

    # Query database for recent SSH brute force events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, COUNT(*) as count
        FROM event
        WHERE signature LIKE '%SSH%brute%'
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip
        HAVING count >= 10
    "|while read ip count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "SSH brute force attack ($count attempts in 1 hour)"
        fi
    done
}

# Check for port scanning
check_port_scans() {
    log_message "Checking for port scanning activities..."

    # Query database for port scan events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, COUNT(DISTINCT dst_port) as port_count, COUNT(*) as event_count
        FROM event
        WHERE (signature LIKE '%scan%' OR signature LIKE '%probe%')
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip
        HAVING port_count >= 20 OR event_count >= 50
    "|while read ip port_count event_count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "Port scanning activity ($port_count ports, $event_count events)"
        fi
    done
}

# Check for malware activity
check_malware() {
    log_message "Checking for malware activities..."

    # Query database for malware-related events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, signature, COUNT(*) as count
        FROM event
        WHERE (signature LIKE '%trojan%' OR signature LIKE '%malware%' OR signature LIKE '%backdoor%')
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
        GROUP BY src_ip, signature
        HAVING count >= 5
    "|while read ip signature count; do
        if [ -n "$ip" ]; then
            block_ip "$ip" "Malware activity: $signature ($count events)"
        fi
    done
}

# Check for high-priority alerts
check_high_priority() {
    log_message "Checking for high-priority alerts..."

    # Query database for high-priority events
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -N -e "
        SELECT src_ip, dst_ip, signature, COUNT(*) as count
        FROM event
        WHERE signature_gen = 1 AND signature_id IN (1, 2, 3)  -- High priority signature IDs
        AND timestamp >= DATE_SUB(NOW(), INTERVAL 30 MINUTE)
        GROUP BY src_ip, dst_ip, signature
    "|while read src_ip dst_ip signature count; do
        if [ -n "$src_ip" ]; then
            # Send immediate notification for high-priority events
            send_notification "High Priority Alert" "Source: $src_ip, Target: $dst_ip, Signature: $signature, Count: $count"

            # Consider blocking if multiple high-priority events
            if [ "$count" -ge 3 ]; then
                block_ip "$src_ip" "Multiple high-priority alerts: $signature"
            fi
        fi
    done
}

# Cleanup old blocked IPs
cleanup_blocks() {
    log_message "Cleaning up old blocked IPs..."

    # Remove blocks older than 24 hours
    if [ -f "$BLOCK_LIST" ]; then
        # Create temporary file with current blocks
        temp_file=$(mktemp)

        # Check each blocked IP
        while read -r ip; do
            if [ -n "$ip" ]; then
                # Check if IP should remain blocked (implement your logic here)
                # For now, keep all blocks for 24 hours
                echo "$ip" >> "$temp_file"
            fi
        done < "$BLOCK_LIST"

        # Replace block list
        mv "$temp_file" "$BLOCK_LIST"
    fi
}

# Main execution
main() {
    log_message "Starting automated response checks..."

    # Create block list file if it doesn't exist
    touch "$BLOCK_LIST"

    # Run security checks
    check_brute_force
    check_port_scans
    check_malware
    check_high_priority

    # Cleanup old blocks
    cleanup_blocks

    log_message "Automated response checks completed"
}

# Execute main function
main "$@"

Automation Scripts

Comprehensive Monitoring Script

#!/usr/bin/env python3
# Comprehensive Sguil monitoring and management

import mysql.connector
import subprocess
import json
import time
import os
import sys
from datetime import datetime, timedelta
import logging
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

class SguilMonitor:
    def __init__(self, config_file="sguil_monitor.json"):
        self.load_config(config_file)
        self.setup_logging()

    def load_config(self, config_file):
        """Load monitoring configuration"""
        default_config = {
            "database": {
                "host": "localhost",
                "user": "sguil",
                "password": "sguilpassword",
                "database": "sguildb"
            },
            "monitoring": {
                "check_interval": 300,  # 5 minutes
                "alert_thresholds": {
                    "events_per_hour": 1000,
                    "unique_sources": 100,
                    "high_priority_events": 10
                }
            },
            "notifications": {
                "email": {
                    "enabled": False,
                    "smtp_server": "localhost",
                    "smtp_port": 587,
                    "username": "",
                    "password": "",
                    "from": "sguil@company.com",
                    "to": "security@company.com"
                }
            },
            "response": {
                "auto_block": True,
                "block_threshold": 50,
                "block_duration": 3600  # 1 hour
            }
        }

        if os.path.exists(config_file):
            with open(config_file, 'r') as f:
                user_config = json.load(f)
                # Merge configurations
                self.config = {**default_config, **user_config}
        else:
            self.config = default_config
            # Save default config
            with open(config_file, 'w') as f:
                json.dump(default_config, f, indent=2)

    def setup_logging(self):
        """Setup logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('/var/log/sguil/monitor.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)

    def connect_database(self):
        """Connect to Sguil database"""
        try:
            db_config = self.config["database"]
            connection = mysql.connector.connect(**db_config)
            return connection
        except mysql.connector.Error as e:
            self.logger.error(f"Database connection failed: {e}")
            return None

    def check_system_health(self):
        """Check Sguil system health"""
        health_status = {
            "timestamp": datetime.now().isoformat(),
            "services": {},
            "database": {},
            "disk_space": {},
            "overall_status": "healthy"
        }

        # Check Sguil server process
        try:
            result = subprocess.run(['pgrep', '-f', 'sguild'], capture_output=True, text=True)
            health_status["services"]["sguild"] = "running" if result.returncode == 0 else "stopped"
        except Exception as e:
            health_status["services"]["sguild"] = f"error: {e}"

        # Check Snort processes
        try:
            result = subprocess.run(['pgrep', '-f', 'snort'], capture_output=True, text=True)
            health_status["services"]["snort"] = "running" if result.returncode == 0 else "stopped"
        except Exception as e:
            health_status["services"]["snort"] = f"error: {e}"

        # Check database connectivity
        connection = self.connect_database()
        if connection:
            try:
                cursor = connection.cursor()
                cursor.execute("SELECT COUNT(*) FROM event WHERE timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR)")
                recent_events = cursor.fetchone()[0]
                health_status["database"]["status"] = "connected"
                health_status["database"]["recent_events"] = recent_events
                cursor.close()
                connection.close()
            except Exception as e:
                health_status["database"]["status"] = f"error: {e}"
        else:
            health_status["database"]["status"] = "disconnected"

        # Check disk space
        try:
            result = subprocess.run(['df', '-h', '/var/log/sguil'], capture_output=True, text=True)
            if result.returncode == 0:
                lines = result.stdout.strip().split('\n')
                if len(lines) > 1:
                    fields = lines[1].split()
                    health_status["disk_space"]["usage"] = fields[4]
                    health_status["disk_space"]["available"] = fields[3]
        except Exception as e:
            health_status["disk_space"]["error"] = str(e)

        # Determine overall status
        if (health_status["services"].get("sguild") != "running" or
            health_status["services"].get("snort") != "running" or
            health_status["database"].get("status") != "connected"):
            health_status["overall_status"] = "critical"
        elif int(health_status["disk_space"].get("usage", "0%").replace("%", "")) > 90:
            health_status["overall_status"] = "warning"

        return health_status

    def analyze_security_events(self, hours=1):
        """Analyze recent security events"""
        connection = self.connect_database()
        if not connection:
            return None

        try:
            cursor = connection.cursor(dictionary=True)

            # Get event statistics
            time_threshold = datetime.now() - timedelta(hours=hours)

            # Total events
            cursor.execute("""
                SELECT COUNT(*) as total_events
                FROM event
                WHERE timestamp >= %s
            """, (time_threshold,))
            total_events = cursor.fetchone()["total_events"]

            # Events by severity
            cursor.execute("""
                SELECT
                    CASE
                        WHEN signature_id IN (1, 2, 3) THEN 'high'
                        WHEN signature_id IN (4, 5, 6) THEN 'medium'
                        ELSE 'low'
                    END as severity,
                    COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY severity
            """, (time_threshold,))
            severity_stats = {row["severity"]: row["count"] for row in cursor.fetchall()}

            # Top source IPs
            cursor.execute("""
                SELECT src_ip, COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY src_ip
                ORDER BY count DESC
                LIMIT 10
            """, (time_threshold,))
            top_sources = cursor.fetchall()

            # Top signatures
            cursor.execute("""
                SELECT signature, COUNT(*) as count
                FROM event
                WHERE timestamp >= %s
                GROUP BY signature
                ORDER BY count DESC
                LIMIT 10
            """, (time_threshold,))
            top_signatures = cursor.fetchall()

            # Unique source IPs
            cursor.execute("""
                SELECT COUNT(DISTINCT src_ip) as unique_sources
                FROM event
                WHERE timestamp >= %s
            """, (time_threshold,))
            unique_sources = cursor.fetchone()["unique_sources"]

            cursor.close()
            connection.close()

            analysis = {
                "timestamp": datetime.now().isoformat(),
                "time_period_hours": hours,
                "total_events": total_events,
                "unique_sources": unique_sources,
                "severity_breakdown": severity_stats,
                "top_sources": top_sources,
                "top_signatures": top_signatures
            }

            return analysis

        except mysql.connector.Error as e:
            self.logger.error(f"Event analysis failed: {e}")
            return None

    def check_alert_thresholds(self, analysis):
        """Check if analysis exceeds alert thresholds"""
        if not analysis:
            return []

        alerts = []
        thresholds = self.config["monitoring"]["alert_thresholds"]

        # Check events per hour
        events_per_hour = analysis["total_events"] / analysis["time_period_hours"]
        if events_per_hour > thresholds["events_per_hour"]:
            alerts.append({
                "type": "high_event_rate",
                "message": f"High event rate: {events_per_hour:.0f} events/hour (threshold: {thresholds['events_per_hour']})",
                "severity": "warning"
            })

        # Check unique sources
        if analysis["unique_sources"] > thresholds["unique_sources"]:
            alerts.append({
                "type": "many_unique_sources",
                "message": f"Many unique sources: {analysis['unique_sources']} (threshold: {thresholds['unique_sources']})",
                "severity": "warning"
            })

        # Check high priority events
        high_priority_count = analysis["severity_breakdown"].get("high", 0)
        if high_priority_count > thresholds["high_priority_events"]:
            alerts.append({
                "type": "high_priority_events",
                "message": f"High priority events: {high_priority_count} (threshold: {thresholds['high_priority_events']})",
                "severity": "critical"
            })

        return alerts

    def send_notification(self, subject, body, alerts=None):
        """Send notification about monitoring results"""
        email_config = self.config["notifications"]["email"]

        if not email_config.get("enabled", False):
            return

        try:
            msg = MIMEMultipart()
            msg['From'] = email_config["from"]
            msg['To'] = email_config["to"]
            msg['Subject'] = f"Sguil Monitor: {subject}"

            # Add alerts to body if provided
            if alerts:
                body += "\n\nALERTS:\n"
                for alert in alerts:
                    body += f"- [{alert['severity'].upper()}] {alert['message']}\n"

            msg.attach(MIMEText(body, 'plain'))

            server = smtplib.SMTP(email_config["smtp_server"], email_config["smtp_port"])
            server.starttls()

            if email_config.get("username") and email_config.get("password"):
                server.login(email_config["username"], email_config["password"])

            text = msg.as_string()
            server.sendmail(email_config["from"], email_config["to"], text)
            server.quit()

            self.logger.info("Notification sent successfully")

        except Exception as e:
            self.logger.error(f"Failed to send notification: {e}")

    def auto_response(self, analysis, alerts):
        """Perform automated response actions"""
        if not self.config["response"]["auto_block"]:
            return

        # Check for sources that should be blocked
        block_threshold = self.config["response"]["block_threshold"]

        for source in analysis.get("top_sources", []):
            if source["count"] >= block_threshold:
                self.block_ip(source["src_ip"], f"Exceeded event threshold: {source['count']} events")

    def block_ip(self, ip, reason):
        """Block IP address using iptables"""
        try:
            # Check if already blocked
            result = subprocess.run(['iptables', '-L', 'INPUT', '-n'], capture_output=True, text=True)
            if ip in result.stdout:
                self.logger.info(f"IP {ip} already blocked")
                return

            # Add block rule
            subprocess.run(['iptables', '-I', 'INPUT', '-s', ip, '-j', 'DROP'], check=True)

            self.logger.info(f"Blocked IP {ip}: {reason}")

            # Log to file
            with open('/var/log/sguil/blocked_ips.log', 'a') as f:
                f.write(f"{datetime.now().isoformat()} - BLOCKED {ip}: {reason}\n")

        except subprocess.CalledProcessError as e:
            self.logger.error(f"Failed to block IP {ip}: {e}")

    def generate_report(self, health_status, analysis, alerts):
        """Generate monitoring report"""
        report = {
            "timestamp": datetime.now().isoformat(),
            "health_status": health_status,
            "security_analysis": analysis,
            "alerts": alerts,
            "summary": {
                "overall_health": health_status.get("overall_status", "unknown"),
                "total_events": analysis.get("total_events", 0) if analysis else 0,
                "alert_count": len(alerts),
                "critical_alerts": len([a for a in alerts if a.get("severity") == "critical"])
            }
        }

        # Save report
        report_file = f"/var/log/sguil/monitor-report-{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
        with open(report_file, 'w') as f:
            json.dump(report, f, indent=2)

        self.logger.info(f"Monitoring report saved: {report_file}")
        return report

    def run_monitoring_cycle(self):
        """Run complete monitoring cycle"""
        self.logger.info("Starting Sguil monitoring cycle")

        # Check system health
        health_status = self.check_system_health()

        # Analyze security events
        analysis = self.analyze_security_events(hours=1)

        # Check alert thresholds
        alerts = self.check_alert_thresholds(analysis) if analysis else []

        # Generate report
        report = self.generate_report(health_status, analysis, alerts)

        # Send notifications if needed
        if alerts or health_status.get("overall_status") != "healthy":
            subject = "System Alert" if health_status.get("overall_status") != "healthy" else "Security Alert"
            body = f"Sguil monitoring report generated at {datetime.now()}\n\n"
            body += f"System Status: {health_status.get('overall_status', 'unknown')}\n"
            if analysis:
                body += f"Events in last hour: {analysis['total_events']}\n"
                body += f"Unique sources: {analysis['unique_sources']}\n"

            self.send_notification(subject, body, alerts)

        # Perform automated response
        if analysis and alerts:
            self.auto_response(analysis, alerts)

        self.logger.info("Monitoring cycle completed")
        return report

    def start_continuous_monitoring(self):
        """Start continuous monitoring"""
        self.logger.info("Starting continuous Sguil monitoring")

        check_interval = self.config["monitoring"]["check_interval"]

        while True:
            try:
                self.run_monitoring_cycle()
                time.sleep(check_interval)
            except KeyboardInterrupt:
                self.logger.info("Monitoring stopped by user")
                break
            except Exception as e:
                self.logger.error(f"Monitoring cycle failed: {e}")
                time.sleep(60)  # Wait 1 minute before retrying

# Usage
if __name__ == "__main__":
    monitor = SguilMonitor()

    if len(sys.argv) > 1 and sys.argv[1] == "--continuous":
        monitor.start_continuous_monitoring()
    else:
        monitor.run_monitoring_cycle()
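
For unattended operation the monitor can run under systemd using its --continuous flag. The unit below mirrors the sguild.service created earlier and assumes the script is saved as /opt/sguil-server/sguil_monitor.py (an illustrative path):

# Create a systemd unit for continuous monitoring (paths are illustrative)
sudo tee /etc/systemd/system/sguil-monitor.service > /dev/null << 'EOF'
[Unit]
Description=Sguil Monitoring Script
After=network.target mysql.service

[Service]
Type=simple
ExecStart=/usr/bin/python3 /opt/sguil-server/sguil_monitor.py --continuous
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now sguil-monitor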

Integration Examples

ELK Stack Integration

#!/usr/bin/env python3
# Sguil to Elasticsearch integration

import mysql.connector
import json
from datetime import datetime
from elasticsearch import Elasticsearch

class SguilElasticsearchIntegration:
    def __init__(self, sguil_db_config, elasticsearch_config):
        self.sguil_db_config = sguil_db_config
        self.es = Elasticsearch([elasticsearch_config])

    def connect_sguil_db(self):
        """Connect to Sguil database"""
        return mysql.connector.connect(**self.sguil_db_config)

    def fetch_events(self, since_timestamp=None):
        """Fetch events from Sguil database"""
        connection = self.connect_sguil_db()
        cursor = connection.cursor(dictionary=True)

        if since_timestamp:
            query = "SELECT * FROM event WHERE timestamp > %s ORDER BY timestamp"
            cursor.execute(query, (since_timestamp,))
        else:
            query = "SELECT * FROM event ORDER BY timestamp DESC LIMIT 1000"
            cursor.execute(query)

        events = cursor.fetchall()
        cursor.close()
        connection.close()

        return events

    def transform_event(self, event):
        """Transform Sguil event for Elasticsearch"""
        return {
            "@timestamp": event["timestamp"].isoformat(),
            "sguil": {
                "sid": event["sid"],
                "cid": event["cid"],
                "signature": event["signature"],
                "signature_gen": event["signature_gen"],
                "signature_id": event["signature_id"],
                "signature_rev": event["signature_rev"]
            },
            "source": {
                "ip": event["src_ip"],
                "port": event["src_port"]
            },
            "destination": {
                "ip": event["dst_ip"],
                "port": event["dst_port"]
            },
            "network": {
                "protocol": event["ip_proto"]
            },
            "event": {
                "severity": self.get_severity(event["signature_id"]),
                "category": self.get_category(event["signature"])
            }
        }

    def get_severity(self, signature_id):
        """Determine event severity"""
        if signature_id in [1, 2, 3]:
            return "high"
        elif signature_id in [4, 5, 6]:
            return "medium"
        else:
            return "low"

    def get_category(self, signature):
        """Determine event category"""
        signature_lower = signature.lower()

        if any(word in signature_lower for word in ["trojan", "malware", "backdoor"]):
            return "malware"
        elif any(word in signature_lower for word in ["scan", "probe", "recon"]):
            return "reconnaissance"
        elif any(word in signature_lower for word in ["brute", "force", "login"]):
            return "brute_force"
        elif any(word in signature_lower for word in ["dos", "ddos", "flood"]):
            return "denial_of_service"
        else:
            return "other"

    def index_events(self, events, index_name="sguil-events"):
        """Index events in Elasticsearch"""
        for event in events:
            doc = self.transform_event(event)
            doc_id = f"{event['sid']}-{event['cid']}"

            self.es.index(
                index=f"{index_name}-{datetime.now().strftime('%Y.%m.%d')}",
                id=doc_id,
                body=doc
            )

    def sync_events(self, index_name="sguil-events"):
        """Sync events from Sguil to Elasticsearch"""
        # Get last synced timestamp
        try:
            result = self.es.search(
                index=f"{index_name}-*",
                body={
                    "size": 1,
                    "sort": [{"@timestamp": {"order": "desc"}}],
                    "_source": ["@timestamp"]
                }
            )

            if result["hits"]["hits"]:
                last_timestamp = result["hits"]["hits"][0]["_source"]["@timestamp"]
            else:
                last_timestamp = None

        except Exception:
            last_timestamp = None

        # Fetch and index new events
        events = self.fetch_events(last_timestamp)
        if events:
            self.index_events(events, index_name)
            print(f"Indexed {len(events)} events to Elasticsearch")

# Usage
sguil_config = {
    'host': 'localhost',
    'database': 'sguildb',
    'user': 'sguil',
    'password': 'sguilpassword'
}

es_config = {
    'host': 'localhost',
    'port': 9200
}

integration = SguilElasticsearchIntegration(sguil_config, es_config)
integration.sync_events()

Splunk Integration

#!/bin/bash
# Sguil to Splunk integration script

SGUIL_DB_HOST="localhost"
SGUIL_DB_USER="sguil"
SGUIL_DB_PASS="sguilpassword"
SGUIL_DB_NAME="sguildb"
SPLUNK_HEC_URL="https://splunk.company.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN="your-hec-token"

# Export Sguil events to JSON for Splunk
export_events_to_splunk() {
    local since_timestamp="$1"

    # Query Sguil database
    mysql -h "$SGUIL_DB_HOST" -u "$SGUIL_DB_USER" -p"$SGUIL_DB_PASS" "$SGUIL_DB_NAME" -e "
        SELECT
            UNIX_TIMESTAMP(timestamp) as time,
            'sguil' as source,
            'sguil:event' as sourcetype,
            JSON_OBJECT(
                'sid', sid,
                'cid', cid,
                'signature', signature,
                'signature_gen', signature_gen,
                'signature_id', signature_id,
                'src_ip', src_ip,
                'src_port', src_port,
                'dst_ip', dst_ip,
                'dst_port', dst_port,
                'ip_proto', ip_proto,
                'timestamp', timestamp
            ) as event
        FROM event
        WHERE timestamp > '$since_timestamp'
        ORDER BY timestamp
    " -N|while read time source sourcetype event; do
        # Send to Splunk HEC
        curl -k -X POST "$SPLUNK_HEC_URL" \
            -H "Authorization: Splunk $SPLUNK_HEC_TOKEN" \
            -H "Content-Type: application/json" \
            -d "{\"time\": $time, \"source\": \"$source\", \"sourcetype\": \"$sourcetype\", \"event\": $event}"
    done
}

# Get last sync timestamp
LAST_SYNC_FILE="/var/log/sguil/last_splunk_sync"
if [ -f "$LAST_SYNC_FILE" ]; then
    LAST_SYNC=$(cat "$LAST_SYNC_FILE")
else
    LAST_SYNC=$(date -d "1 hour ago" '+%Y-%m-%d %H:%M:%S')
fi

# Export events
export_events_to_splunk "$LAST_SYNC"

# Update last sync timestamp
date '+%Y-%m-%d %H:%M:%S' > "$LAST_SYNC_FILE"

Troubleshooting

Common Issues

Database Connection Issues:

# Check MySQL service
sudo systemctl status mysql
sudo systemctl start mysql

# Test database connection
mysql -h localhost -u sguil -p sguildb -e "SELECT COUNT(*) FROM event;"

# Check database permissions
mysql -u root -p -e "SHOW GRANTS FOR 'sguil'@'localhost';"

# Repair database tables
mysql -u sguil -p sguildb -e "REPAIR TABLE event;"
mysql -u sguil -p sguildb -e "OPTIMIZE TABLE event;"

Sensor Connectivity Issues:

# Check sensor agent process
ps aux|grep sensor_agent

# Test network connectivity to server
telnet sguil-server 7735

# Check sensor logs
tail -f /var/log/sguil/sensor_agent.log

# Verify sensor configuration
cat /etc/sguil/sensor_agent.conf

# Test Snort configuration
snort -T -c /etc/snort/snort.conf

Performance Issues:

# Check database performance
mysql -u sguil -p sguildb -e "SHOW PROCESSLIST;"
mysql -u sguil -p sguildb -e "SHOW ENGINE INNODB STATUS\G"

# Optimize database
mysql -u sguil -p sguildb -e "ANALYZE TABLE event;"
mysql -u sguil -p sguildb -e "OPTIMIZE TABLE event;"

# Check disk space
df -h /var/log/sguil
df -h /var/lib/mysql

# Monitor system resources
top -p $(pgrep sguild)
iostat -x 1

Performance Optimization

Optimizing Sguil performance:

# MySQL optimization
cat >> /etc/mysql/mysql.conf.d/sguil.cnf << 'EOF'
[mysqld]
# Sguil optimizations
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2
query_cache_size = 256M
query_cache_type = 1
max_connections = 200
EOF

# Restart MySQL
sudo systemctl restart mysql

# Purge events older than 30 days (see the archive example below)
mysql -u sguil -p sguildb -e "
    DELETE FROM event
    WHERE timestamp < DATE_SUB(NOW(), INTERVAL 30 DAY);
"

# Create database indexes
mysql -u sguil -p sguildb -e "
    CREATE INDEX idx_event_timestamp ON event (timestamp);
    CREATE INDEX idx_event_src_ip ON event (src_ip);
    CREATE INDEX idx_event_dst_ip ON event (dst_ip);
    CREATE INDEX idx_event_signature_id ON event (signature_id);
"

Security Considerations

Access Control

Database Security:
  - Use strong passwords for database accounts
  - Limit database access to the hosts that need it
  - Back up the Sguil database regularly
  - Encrypt the database where it holds sensitive data
  - Monitor database access logs

Network Security:
  - Use encrypted connections between components
  - Implement firewall rules for the Sguil ports (see the sketch below)
  - Apply security updates to all components regularly
  - Monitor network traffic to the Sguil infrastructure
  - Implement network segmentation
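
As a sketch of the firewall point above, the rules below restrict the Sguil ports to known analyst and sensor networks (addresses are illustrative):

# Allow analyst clients (7734) and sensors (7735) only from trusted networks
sudo iptables -A INPUT -p tcp --dport 7734 -s 192.168.1.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 7735 -s 192.168.2.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp -m multiport --dports 7734,7735 -j DROP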

Data Protection

Event Data Security:
  - Encrypt sensitive event data at rest
  - Implement data retention policies
  - Secure packet capture storage
  - Clean up temporary files regularly
  - Implement access logging for event data

Operational Security:
  - Perform regular security assessments of the Sguil infrastructure
  - Monitor for unauthorized access attempts
  - Implement proper backup and recovery procedures
  - Keep Sguil and its dependencies up to date
  - Maintain incident response procedures for a compromise of the Sguil infrastructure itself
