# Snorby Cheat Sheet

## Overview

Snorby is a Ruby on Rails web application designed to provide a complete interface for network security monitoring and intrusion detection system (IDS) management. Originally developed as a modern replacement for traditional IDS management interfaces, Snorby offers security analysts an intuitive web platform for monitoring, analyzing, and managing security events generated by Snort and other compatible intrusion detection systems. The application combines powerful data visualization with advanced event correlation and reporting, making it an essential tool for security operations centers and incident response teams.

Snorby's core architecture is built around the Ruby on Rails framework, leveraging modern web technologies to provide real-time access to security event data through dynamic dashboards, interactive charts, and comprehensive reporting interfaces. Unlike traditional command-line or desktop security tools, Snorby provides a responsive web interface that can be reached from any modern browser, allowing distributed security teams to collaborate effectively on threat analysis and incident investigations. The application integrates seamlessly with existing Snort deployments, using the same database backend while providing improved visualization and management capabilities.

Snorby's strength lies in its ability to turn raw intrusion detection data into actionable intelligence through advanced analytics, automated correlation, and customizable reporting. The application supports multi-sensor environments, allowing organizations to monitor complex network infrastructures from a centralized management console. With features such as real-time event streaming, automatic alert classification, and comprehensive audit trails, Snorby has become a core technology for organizations looking to modernize their network security monitoring while maintaining compatibility with existing Snort-based infrastructure.

## Installation

### Ubuntu/Debian Installation

Installing Snorby on Ubuntu/Debian systems:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install required dependencies
sudo apt install -y ruby ruby-dev ruby-bundler nodejs npm mysql-server \
    mysql-client libmysqlclient-dev build-essential git curl wget \
    imagemagick libmagickwand-dev

# Install specific Ruby version (Snorby requires Ruby 2.x)
sudo apt install -y ruby2.7 ruby2.7-dev
sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.7 1
sudo update-alternatives --install /usr/bin/gem gem /usr/bin/gem2.7 1

# Install Bundler
sudo gem install bundler -v '~> 1.17'

# Create Snorby user
sudo useradd -r -m -s /bin/bash snorby
sudo usermod -a -G snorby $USER

# Download Snorby
cd /opt
sudo git clone https://github.com/Snorby/snorby.git
sudo chown -R snorby:snorby snorby

# Switch to Snorby user
sudo -u snorby -i

# Navigate to Snorby directory
cd /opt/snorby

# Install Ruby dependencies
bundle install --deployment --without development test

# Create database configuration
cp config/database.yml.example config/database.yml

# Edit database configuration
cat > config/database.yml << 'EOF'
production:
  adapter: mysql2
  database: snorby
  username: snorby
  password: snorbypassword
  host: localhost
  port: 3306
  encoding: utf8

development:
  adapter: mysql2
  database: snorby_dev
  username: snorby
  password: snorbypassword
  host: localhost
  port: 3306
  encoding: utf8

test:
  adapter: mysql2
  database: snorby_test
  username: snorby
  password: snorbypassword
  host: localhost
  port: 3306
  encoding: utf8
EOF

# Create Snorby configuration
cp config/snorby_config.yml.example config/snorby_config.yml

# Exit Snorby user session
exit

# Setup MySQL database
mysql -u root -p << 'EOF'
CREATE DATABASE snorby;
CREATE USER 'snorby'@'localhost' IDENTIFIED BY 'snorbypassword';
GRANT ALL PRIVILEGES ON snorby.* TO 'snorby'@'localhost';
FLUSH PRIVILEGES;
EOF

# Initialize database as Snorby user
sudo -u snorby -i
cd /opt/snorby
bundle exec rake snorby:setup RAILS_ENV=production

# Create admin user
bundle exec rake snorby:user RAILS_ENV=production

# Exit Snorby user session
exit

### CentOS/RHEL Installation

# Install EPEL repository
sudo yum install -y epel-release

# Install required packages
sudo yum groupinstall -y "Development Tools"
sudo yum install -y ruby ruby-devel rubygems nodejs npm mysql-server \
    mysql-devel git curl wget ImageMagick ImageMagick-devel

# Install Bundler
sudo gem install bundler -v '~> 1.17'

# Start and enable MySQL
sudo systemctl start mysqld
sudo systemctl enable mysqld

# Secure MySQL installation
sudo mysql_secure_installation

# Create Snorby user
sudo useradd -r -m -s /bin/bash snorby

# Download Snorby
cd /opt
sudo git clone https://github.com/Snorby/snorby.git
sudo chown -R snorby:snorby snorby

# Configure SELinux (if enabled)
sudo setsebool -P httpd_can_network_connect 1
sudo setsebool -P httpd_can_network_connect_db 1

# Install Ruby dependencies
sudo -u snorby -i
cd /opt/snorby
bundle install --deployment --without development test
exit

# Setup database
mysql -u root -p << 'EOF'
CREATE DATABASE snorby;
CREATE USER 'snorby'@'localhost' IDENTIFIED BY 'snorbypassword';
GRANT ALL PRIVILEGES ON snorby.* TO 'snorby'@'localhost';
FLUSH PRIVILEGES;
EOF

# Configure firewall
sudo firewall-cmd --permanent --add-port=3000/tcp
sudo firewall-cmd --reload

### Docker Installation

Running Snorby in Docker containers:

# Create Docker network
docker network create snorby-network

# Create MySQL container for Snorby
docker run -d --name snorby-mysql \
    --network snorby-network \
    -e MYSQL_ROOT_PASSWORD=rootpassword \
    -e MYSQL_DATABASE=snorby \
    -e MYSQL_USER=snorby \
    -e MYSQL_PASSWORD=snorbypassword \
    -v snorby-mysql-data:/var/lib/mysql \
    mysql:5.7

# Create Snorby Dockerfile
cat > Dockerfile.snorby << 'EOF'
FROM ruby:2.7

# Install dependencies
RUN apt-get update && apt-get install -y \
    nodejs npm mysql-client libmysqlclient-dev \
    imagemagick libmagickwand-dev \
    && rm -rf /var/lib/apt/lists/*

# Create snorby user
RUN useradd -r -m -s /bin/bash snorby

# Set working directory
WORKDIR /opt/snorby

# Clone Snorby
RUN git clone https://github.com/Snorby/snorby.git . && \
    chown -R snorby:snorby /opt/snorby

# Switch to snorby user
USER snorby

# Install Ruby dependencies
RUN bundle install --deployment --without development test

# Copy configuration files
COPY database.yml config/database.yml
COPY snorby_config.yml config/snorby_config.yml

# Expose port
EXPOSE 3000

# Start command
CMD ["bundle", "exec", "rails", "server", "-e", "production", "-b", "0.0.0.0"]
EOF

# Create database configuration for container
cat > database.yml << 'EOF'
production:
  adapter: mysql2
  database: snorby
  username: snorby
  password: snorbypassword
  host: snorby-mysql
  port: 3306
  encoding: utf8
EOF

# Create Snorby configuration for container
cat > snorby_config.yml << 'EOF'
production:
  domain: localhost
  wkhtmltopdf: /usr/bin/wkhtmltopdf
  ssl: false
  mailer_sender: snorby@localhost
  geoip_uri: "http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz"
  authentication_mode: database
EOF

# Build and run Snorby container
docker build -f Dockerfile.snorby -t snorby .

# Wait for MySQL to be ready
sleep 30

# Run database setup
docker run --rm --network snorby-network \
    -v $(pwd)/database.yml:/opt/snorby/config/database.yml \
    -v $(pwd)/snorby_config.yml:/opt/snorby/config/snorby_config.yml \
    snorby bundle exec rake snorby:setup RAILS_ENV=production

# Start Snorby container
docker run -d --name snorby-app \
    --network snorby-network \
    -p 3000:3000 \
    -v $(pwd)/database.yml:/opt/snorby/config/database.yml \
    -v $(pwd)/snorby_config.yml:/opt/snorby/config/snorby_config.yml \
    snorby
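
As an alternative to the individual docker run commands above, the same two-container setup can be captured in a Compose file. This is a minimal sketch, assuming the Dockerfile.snorby, database.yml, and snorby_config.yml created in the previous steps sit in the current directory; names and passwords mirror the examples above.

# Describe both containers in a Compose file
cat > docker-compose.yml << 'EOF'
version: "3"
services:
  snorby-mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: snorby
      MYSQL_USER: snorby
      MYSQL_PASSWORD: snorbypassword
    volumes:
      - snorby-mysql-data:/var/lib/mysql
  snorby:
    build:
      context: .
      dockerfile: Dockerfile.snorby
    ports:
      - "3000:3000"
    volumes:
      - ./database.yml:/opt/snorby/config/database.yml
      - ./snorby_config.yml:/opt/snorby/config/snorby_config.yml
    depends_on:
      - snorby-mysql
volumes:
  snorby-mysql-data:
EOF

# Bring both services up (the database.yml above already points at host "snorby-mysql")
docker compose up -d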

### Manual Installation

# Install Ruby Version Manager (RVM)
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm

# Install Ruby 2.7
rvm install 2.7.0
rvm use 2.7.0 --default

# Install Bundler
gem install bundler -v '~> 1.17'

# Download Snorby
git clone https://github.com/Snorby/snorby.git
cd snorby

# Install dependencies
bundle install

# Configure database
cp config/database.yml.example config/database.yml
cp config/snorby_config.yml.example config/snorby_config.yml

# Edit configurations as needed
nano config/database.yml
nano config/snorby_config.yml

# Setup database
bundle exec rake snorby:setup

# Create admin user
bundle exec rake snorby:user

## Basic Usage

### Initial Configuration

Setting up Snorby after installation:

# Navigate to Snorby directory
cd /opt/snorby

# Configure Snorby settings
sudo -u snorby nano config/snorby_config.yml

# Example configuration:
cat > config/snorby_config.yml << 'EOF'
production:
  # Domain configuration
  domain: snorby.company.com

  # SSL configuration
  ssl: true

  # Email configuration
  mailer_sender: snorby@company.com
  smtp_settings:
    address: smtp.company.com
    port: 587
    domain: company.com
    user_name: snorby@company.com
    password: emailpassword
    authentication: plain
    enable_starttls_auto: true

  # GeoIP configuration
  geoip_uri: "http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz"

  # Authentication mode
  authentication_mode: database

  # PDF generation
  wkhtmltopdf: /usr/bin/wkhtmltopdf

  # Time zone
  time_zone: "Eastern Time (US & Canada)"

  # Lookups
  whois_enabled: true
  reputation_enabled: true

  # Performance settings
  event_page_size: 50
  cache_timeout: 300
EOF

# Set proper permissions
sudo chown snorby:snorby config/snorby_config.yml
sudo chmod 600 config/snorby_config.yml

# Download GeoIP database
sudo -u snorby mkdir -p db/geoip
sudo -u snorby wget -O db/geoip/GeoIP.dat.gz \
    "http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz"
sudo -u snorby gunzip db/geoip/GeoIP.dat.gz

# Run database migrations
sudo -u snorby bundle exec rake db:migrate RAILS_ENV=production

# Precompile assets
sudo -u snorby bundle exec rake assets:precompile RAILS_ENV=production

### Starting Snorby

Starting the Snorby web application:

# Start Snorby manually
sudo -u snorby -i
cd /opt/snorby
bundle exec rails server -e production -b 0.0.0.0 -p 3000

# Or start in background
nohup bundle exec rails server -e production -b 0.0.0.0 -p 3000 > log/snorby.log 2>&1 &

# Exit Snorby user session
exit

# Create systemd service
sudo tee /etc/systemd/system/snorby.service > /dev/null << 'EOF'
[Unit]
Description=Snorby Web Application
After=network.target mysql.service

[Service]
Type=simple
User=snorby
Group=snorby
WorkingDirectory=/opt/snorby
Environment=RAILS_ENV=production
ExecStart=/usr/local/bin/bundle exec rails server -e production -b 0.0.0.0 -p 3000
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable snorby
sudo systemctl start snorby

# Check service status
sudo systemctl status snorby

# View logs
sudo journalctl -u snorby -f

### Web Interface Access

Accessing and using the Snorby web interface:

# Access Snorby web interface
# Open browser and navigate to:
# http://your-server-ip:3000
# or
# https://snorby.company.com (if SSL configured)

# Default login credentials (if created during setup):
# Username: admin@snorby.org
# Password: snorby

# Check if Snorby is running
curl -I http://localhost:3000

# Test database connectivity
sudo -u snorby -i
cd /opt/snorby
bundle exec rails console production
# In Rails console:
# User.count
# Event.count
# exit

# Check logs for errors
sudo tail -f /opt/snorby/log/production.log
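
After the first login it is worth replacing the default password. A minimal sketch using rails runner, assuming Snorby's Devise-backed User model and the default admin address listed above:

sudo -u snorby -i
cd /opt/snorby
bundle exec rails runner -e production '
  user = User.find_by_email("admin@snorby.org")   # adjust to the account created during setup
  user.password = "NewStrongPassword!"
  user.password_confirmation = "NewStrongPassword!"
  user.save!
'
exit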

### Sensor Configuration

Configuring sensors to send data to Snorby:

# Snorby uses the same database as Snort/Barnyard2
# Configure Barnyard2 to write to Snorby database

# Example Barnyard2 configuration
cat > /etc/snort/barnyard2.conf << 'EOF'
# Barnyard2 configuration for Snorby

# Database output
output database: log, mysql, user=snorby password=snorbypassword dbname=snorby host=localhost

# Syslog output
output alert_syslog: LOG_AUTH LOG_ALERT

# Unified2 input
config reference_file: /etc/snort/reference.config
config classification_file: /etc/snort/classification.config
config gen_file: /etc/snort/gen-msg.map
config sid_file: /etc/snort/sid-msg.map

# Processing options
config logdir: /var/log/snort
config hostname: sensor01
config interface: eth0
config waldo_file: /var/log/snort/barnyard2.waldo

# Performance tuning
config max_mpls_labelchain_len: 3
config max_ip6_extensions: 4
config addressspace_id: 0
EOF

# Start Barnyard2
barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -g snort -u snort -D

# Verify events are being inserted
mysql -u snorby -p snorby -e "SELECT COUNT(*) FROM event;"
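
Barnyard2 reads unified2 spool files, so Snort itself must be configured to write them. A minimal sketch of the corresponding snort.conf line; the filename must match the -f argument used above and the size limit is illustrative:

# Have Snort emit unified2 output for Barnyard2 to consume
cat >> /etc/snort/snort.conf << 'EOF'
output unified2: filename snort.u2, limit 128
EOF

# Restart Snort so the new output module takes effect (service name may differ on your system)
sudo systemctl restart snort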

## Advanced Features

### Custom Dashboards

Creating custom dashboards and widgets:

# Custom dashboard configuration
# File: /opt/snorby/app/models/custom_dashboard.rb

class CustomDashboard
  include ActiveModel::Model

  def self.security_overview
    {
      total_events: Event.count,
      events_today: Event.where('timestamp >= ?', Date.current.beginning_of_day).count,
      unique_sources: Event.distinct.count(:src_ip),
      unique_destinations: Event.distinct.count(:dst_ip),
      top_signatures: top_signatures(10),
      hourly_stats: hourly_event_stats,
      severity_breakdown: severity_breakdown,
      geographic_stats: geographic_breakdown
    }
  end

  def self.top_signatures(limit = 10)
    Event.joins(:signature)
         .group('signature.sig_name')
         .order('count_all DESC')
         .limit(limit)
         .count
  end

  def self.hourly_event_stats
    Event.where('timestamp >= ?', 24.hours.ago)
         .group("DATE_FORMAT(timestamp, '%H')")
         .count
  end

  def self.severity_breakdown
    Event.joins(:signature)
         .group('signature.sig_priority')
         .count
  end

  def self.geographic_breakdown
    # Requires GeoIP integration
    Event.joins(:src_ip_geolocation)
         .group('src_ip_geolocations.country')
         .limit(20)
         .count
  end

  def self.threat_intelligence
    {
      malware_events: malware_related_events,
      botnet_activity: botnet_activity,
      scanning_activity: scanning_activity,
      brute_force_attempts: brute_force_attempts
    }
  end

  def self.malware_related_events
    Event.joins(:signature)
         .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                '%trojan%', '%malware%', '%backdoor%')
         .where('timestamp >= ?', 24.hours.ago)
         .count
  end

  def self.botnet_activity
    Event.joins(:signature)
         .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                '%botnet%', '%c2%')
         .where('timestamp >= ?', 24.hours.ago)
         .count
  end

  def self.scanning_activity
    Event.joins(:signature)
         .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                '%scan%', '%probe%')
         .where('timestamp >= ?', 24.hours.ago)
         .count
  end

  def self.brute_force_attempts
    Event.joins(:signature)
         .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                '%brute%', '%login%')
         .where('timestamp >= ?', 24.hours.ago)
         .count
  end
end
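
Since the helper above is a plain Ruby class, it can be exercised from the shell once the file is in place. A quick sketch using rails runner, assuming Rails autoloads app/models in the production environment:

cd /opt/snorby
sudo -u snorby bundle exec rails runner -e production 'puts CustomDashboard.security_overview.inspect'
sudo -u snorby bundle exec rails runner -e production 'puts CustomDashboard.threat_intelligence.inspect'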

### Advanced Analytics

Creating advanced analytics and correlation rules:

# Advanced analytics engine
# File: /opt/snorby/lib/analytics_engine.rb

class AnalyticsEngine
  def self.detect_anomalies
    {
      traffic_anomalies: detect_traffic_anomalies,
      behavioral_anomalies: detect_behavioral_anomalies,
      temporal_anomalies: detect_temporal_anomalies
    }
  end

  def self.detect_traffic_anomalies
    # Detect unusual traffic patterns
    current_hour_events = Event.where('timestamp >= ?', 1.hour.ago).count
    # Average events per hour over the previous week
    average_hourly_events = Event.where('timestamp >= ?', 7.days.ago).count / (7.0 * 24)

    anomalies = []

    if current_hour_events > (average_hourly_events * 2)
      anomalies << {
        type: 'high_traffic_volume',
        severity: 'warning',
        description: "Current hour events (#{current_hour_events}) significantly higher than average (#{average_hourly_events.round})",
        timestamp: Time.current
      }
    end

    # Detect port scanning
    port_scan_sources = Event.where('timestamp >= ?', 1.hour.ago)
                            .group(:src_ip)
                            .having('COUNT(DISTINCT dst_port) > ?', 20)
                            .count

    port_scan_sources.each do |src_ip, port_count|
      anomalies << {
        type: 'port_scanning',
        severity: 'high',
        description: "Source #{src_ip} accessed #{port_count} different ports in the last hour",
        src_ip: src_ip,
        timestamp: Time.current
      }
    end

    anomalies
  end

  def self.detect_behavioral_anomalies
    anomalies = []

    # Detect unusual source behavior
    unusual_sources = Event.where('timestamp >= ?', 24.hours.ago)
                          .group(:src_ip)
                          .having('COUNT(*) > ?', 1000)
                          .count

    unusual_sources.each do |src_ip, event_count|
      anomalies << {
        type: 'unusual_source_activity',
        severity: 'medium',
        description: "Source #{src_ip} generated #{event_count} events in 24 hours",
        src_ip: src_ip,
        timestamp: Time.current
      }
    end

    # Detect beaconing behavior
    beaconing_sources = detect_beaconing_behavior
    beaconing_sources.each do |src_ip, intervals|
      anomalies << {
        type: 'beaconing_behavior',
        severity: 'high',
        description: "Source #{src_ip} shows regular communication intervals (potential C2)",
        src_ip: src_ip,
        intervals: intervals,
        timestamp: Time.current
      }
    end

    anomalies
  end

  def self.detect_beaconing_behavior
    # Simplified beaconing detection
    # Look for regular communication intervals
    beaconing_sources = {}

    Event.where('timestamp >= ?', 24.hours.ago)
         .group(:src_ip, :dst_ip)
         .having('COUNT(*) > ?', 10)
         .pluck(:src_ip, :dst_ip)
         .each do |src_ip, dst_ip|

      events = Event.where(src_ip: src_ip, dst_ip: dst_ip)
                   .where('timestamp >= ?', 24.hours.ago)
                   .order(:timestamp)
                   .pluck(:timestamp)

      if events.length > 10
        intervals = []
        (1...events.length).each do |i|
          intervals << (events[i] - events[i-1]).to_i
        end

        # Check for regular intervals (simplified)
        avg_interval = intervals.sum / intervals.length
        variance = intervals.map { |i| (i - avg_interval) ** 2 }.sum / intervals.length

        if variance < (avg_interval * 0.1) # Low variance indicates regular intervals
          beaconing_sources[src_ip] = {
            dst_ip: dst_ip,
            avg_interval: avg_interval,
            variance: variance,
            event_count: events.length
          }
        end
      end
    end

    beaconing_sources
  end

  def self.detect_temporal_anomalies
    anomalies = []

    # Detect unusual activity during off-hours
    current_hour = Time.current.hour

    if (current_hour < 6 || current_hour > 22) # Off-hours
      off_hours_events = Event.where('timestamp >= ?', 1.hour.ago).count
      normal_hours_avg = Event.where('timestamp >= ? AND timestamp < ?',
                                   7.days.ago, Time.current)
                             .where('HOUR(timestamp) BETWEEN ? AND ?', 6, 22)
                             .count / (7 * 16) # 7 days, 16 hours per day

      if off_hours_events > (normal_hours_avg * 1.5)
        anomalies << {
          type: 'off_hours_activity',
          severity: 'medium',
          description: "Unusual activity during off-hours: #{off_hours_events} events (normal: #{normal_hours_avg.round})",
          timestamp: Time.current
        }
      end
    end

    anomalies
  end

  def self.generate_threat_report
    {
      timestamp: Time.current,
      anomalies: detect_anomalies,
      threat_indicators: extract_threat_indicators,
      recommendations: generate_recommendations
    }
  end

  def self.extract_threat_indicators
    indicators = []

    # Extract IOCs from recent events
    recent_events = Event.includes(:signature)
                        .where('timestamp >= ?', 24.hours.ago)
                        .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                               '%malware%', '%trojan%')

    recent_events.each do |event|
      indicators << {
        type: 'ip_address',
        value: event.src_ip,
        context: event.signature.sig_name,
        timestamp: event.timestamp,
        confidence: calculate_confidence(event)
      }
    end

    indicators.uniq { |i| [i[:type], i[:value]] }
  end

  def self.calculate_confidence(event)
    # Simplified confidence calculation
    base_confidence = 50

    # Increase confidence based on signature priority
    if event.signature.sig_priority <= 2
      base_confidence += 30
    elsif event.signature.sig_priority <= 3
      base_confidence += 20
    end

    # Increase confidence if multiple events from same source
    same_source_events = Event.where(src_ip: event.src_ip)
                             .where('timestamp >= ?', 24.hours.ago)
                             .count

    if same_source_events > 10
      base_confidence += 20
    end

    [base_confidence, 100].min
  end

  def self.generate_recommendations
    recommendations = []

    anomalies = detect_anomalies

    anomalies.values.flatten.each do |anomaly|
      case anomaly[:type]
      when 'port_scanning'
        recommendations << {
          priority: 'high',
          action: 'block_ip',
          target: anomaly[:src_ip],
          description: "Consider blocking source IP #{anomaly[:src_ip]} due to port scanning activity"
        }
      when 'beaconing_behavior'
        recommendations << {
          priority: 'high',
          action: 'investigate',
          target: anomaly[:src_ip],
          description: "Investigate potential C2 communication from #{anomaly[:src_ip]}"
        }
      when 'high_traffic_volume'
        recommendations << {
          priority: 'medium',
          action: 'monitor',
          description: "Monitor network for potential DDoS or scanning activity"
        }
      end
    end

    recommendations
  end
end
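
To run the anomaly checks on a schedule rather than on demand, a cron entry driving rails runner is one option. A sketch, assuming the file under lib/ is on the autoload path (or explicitly required):

# Edit the snorby user's crontab
sudo -u snorby crontab -e

# Example entry: run the anomaly sweep every 30 minutes and append results to a log
# */30 * * * * cd /opt/snorby && bundle exec rails runner -e production 'puts AnalyticsEngine.generate_threat_report.inspect' >> log/analytics.log 2>&1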

### Automated Reporting

Creating automated reporting functionality:

# Automated reporting system
# File: /opt/snorby/lib/automated_reporter.rb

class AutomatedReporter
  def self.generate_daily_report(date = Date.current)
    report_data = {
      date: date,
      summary: daily_summary(date),
      top_events: top_events(date),
      geographic_analysis: geographic_analysis(date),
      threat_analysis: threat_analysis(date),
      recommendations: daily_recommendations(date)
    }

    html_report = generate_html_report(report_data, 'daily')
    pdf_report = generate_pdf_report(html_report)

    # Save reports
    save_report(html_report, "daily_#{date.strftime('%Y%m%d')}.html")
    save_report(pdf_report, "daily_#{date.strftime('%Y%m%d')}.pdf")

    # Email report if configured
    email_report(html_report, "Daily Security Report - #{date}")

    report_data
  end

  def self.generate_weekly_report(week_start = Date.current.beginning_of_week)
    week_end = week_start.end_of_week

    report_data = {
      week_start: week_start,
      week_end: week_end,
      summary: weekly_summary(week_start, week_end),
      trends: weekly_trends(week_start, week_end),
      top_threats: weekly_top_threats(week_start, week_end),
      performance_metrics: weekly_performance(week_start, week_end)
    }

    html_report = generate_html_report(report_data, 'weekly')
    save_report(html_report, "weekly_#{week_start.strftime('%Y%m%d')}_#{week_end.strftime('%Y%m%d')}.html")

    report_data
  end

  def self.daily_summary(date)
    start_time = date.beginning_of_day
    end_time = date.end_of_day

    {
      total_events: Event.where(timestamp: start_time..end_time).count,
      unique_sources: Event.where(timestamp: start_time..end_time).distinct.count(:src_ip),
      unique_destinations: Event.where(timestamp: start_time..end_time).distinct.count(:dst_ip),
      high_priority_events: Event.joins(:signature)
                                .where(timestamp: start_time..end_time)
                                .where('signature.sig_priority <= ?', 2)
                                .count,
      blocked_events: Event.where(timestamp: start_time..end_time)
                          .where(blocked: true)
                          .count
    }
  end

  def self.top_events(date, limit = 20)
    start_time = date.beginning_of_day
    end_time = date.end_of_day

    Event.joins(:signature)
         .where(timestamp: start_time..end_time)
         .group('signature.sig_name')
         .order('count_all DESC')
         .limit(limit)
         .count
  end

  def self.geographic_analysis(date)
    # Requires GeoIP integration
    start_time = date.beginning_of_day
    end_time = date.end_of_day

    # Simplified geographic analysis
    Event.where(timestamp: start_time..end_time)
         .group(:src_ip)
         .having('COUNT(*) > ?', 10)
         .count
         .transform_keys { |ip| geolocate_ip(ip) }
         .group_by { |location, count| location[:country] }
         .transform_values { |entries| entries.sum { |_, count| count } }
  end

  def self.threat_analysis(date)
    start_time = date.beginning_of_day
    end_time = date.end_of_day

    {
      malware_events: Event.joins(:signature)
                          .where(timestamp: start_time..end_time)
                          .where("signature.sig_name LIKE ?", '%malware%')
                          .count,
      scanning_events: Event.joins(:signature)
                           .where(timestamp: start_time..end_time)
                           .where("signature.sig_name LIKE ?", '%scan%')
                           .count,
      brute_force_events: Event.joins(:signature)
                              .where(timestamp: start_time..end_time)
                              .where("signature.sig_name LIKE ?", '%brute%')
                              .count,
      c2_events: Event.joins(:signature)
                     .where(timestamp: start_time..end_time)
                     .where("signature.sig_name LIKE ? OR signature.sig_name LIKE ?",
                            '%c2%', '%command%')
                     .count
    }
  end

  def self.generate_html_report(data, type)
    template = ERB.new(File.read("app/views/reports/#{type}_template.html.erb"))
    template.result(binding)
  end

  def self.generate_pdf_report(html_content)
    # Requires wkhtmltopdf
    pdf_file = Tempfile.new(['report', '.pdf'])

    system("echo '#{html_content}' | wkhtmltopdf - #{pdf_file.path}")

    File.read(pdf_file.path)
  ensure
    pdf_file.close
    pdf_file.unlink
  end

  def self.save_report(content, filename)
    reports_dir = Rails.root.join('public', 'reports')
    FileUtils.mkdir_p(reports_dir)

    File.write(reports_dir.join(filename), content)
  end

  def self.email_report(html_content, subject)
    return unless Rails.application.config.action_mailer.delivery_method

    ReportMailer.security_report(html_content, subject).deliver_now
  end

  def self.geolocate_ip(ip)
    # Simplified geolocation - integrate with actual GeoIP service
    {
      ip: ip,
      country: 'Unknown',
      city: 'Unknown',
      latitude: 0,
      longitude: 0
    }
  end
end

# Mailer for reports
# File: /opt/snorby/app/mailers/report_mailer.rb
class ReportMailer < ApplicationMailer
  def security_report(html_content, subject)
    @content = html_content

    mail(
      to: Rails.application.config.security_email,
      subject: subject,
      content_type: 'text/html'
    )
  end
end
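
The reporter can be scheduled the same way; a sketch of daily and weekly cron entries (the times are arbitrary, the class and method names follow the code above):

sudo -u snorby crontab -e

# Daily report at 06:00, weekly report on Monday at 07:00
# 0 6 * * * cd /opt/snorby && bundle exec rails runner -e production 'AutomatedReporter.generate_daily_report' >> log/reports.log 2>&1
# 0 7 * * 1 cd /opt/snorby && bundle exec rails runner -e production 'AutomatedReporter.generate_weekly_report' >> log/reports.log 2>&1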

## Automation Scripts

### Comprehensive Monitoring Script

#!/bin/bash
# Comprehensive Snorby monitoring and maintenance

# Configuration
SNORBY_DIR="/opt/snorby"
LOG_DIR="/var/log/snorby"
BACKUP_DIR="/var/backups/snorby"
PID_FILE="/var/run/snorby.pid"

# Database configuration
DB_HOST="localhost"
DB_USER="snorby"
DB_PASS="snorbypassword"
DB_NAME="snorby"

# Monitoring thresholds
MAX_RESPONSE_TIME="10"
MAX_MEMORY_USAGE="80"
MAX_DISK_USAGE="90"

# Create necessary directories
mkdir -p "$LOG_DIR" "$BACKUP_DIR"

# Logging function
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_DIR/monitor.log"
}

# Check Snorby web application
check_web_application() {
    log_message "Checking Snorby web application..."

    local response_time
    response_time=$(curl -o /dev/null -s -w '%{time_total}' http://localhost:3000/ 2>/dev/null)

    if [ $? -eq 0 ]; then
        log_message "Web application is accessible (response time: ${response_time}s)"

        # Check if response time is acceptable
        if (( $(echo "$response_time > $MAX_RESPONSE_TIME" | bc -l) )); then
            log_message "WARNING: Slow response time: ${response_time}s (threshold: ${MAX_RESPONSE_TIME}s)"
            return 1
        fi

        return 0
    else
        log_message "ERROR: Web application is not accessible"
        return 1
    fi
}

# Check Snorby process
check_snorby_process() {
    log_message "Checking Snorby process..."

    if systemctl is-active --quiet snorby; then
        log_message "Snorby service is running"

        # Check memory usage
        local memory_usage
        memory_usage=$(ps -o %mem -p $(pgrep -f "rails server") | tail -n 1 | tr -d ' ')

        if [ -n "$memory_usage" ] && (( $(echo "$memory_usage > $MAX_MEMORY_USAGE" | bc -l) )); then
            log_message "WARNING: High memory usage: ${memory_usage}% (threshold: ${MAX_MEMORY_USAGE}%)"
        fi

        return 0
    else
        log_message "ERROR: Snorby service is not running"
        return 1
    fi
}

# Check database connectivity
check_database() {
    log_message "Checking database connectivity..."

    mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "SELECT 1;" >/dev/null 2>&1

    if [ $? -eq 0 ]; then
        log_message "Database connection successful"

        # Check recent events
        local recent_events
        recent_events=$(mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -N -e "
            SELECT COUNT(*) FROM event WHERE timestamp >= DATE_SUB(NOW(), INTERVAL 1 HOUR);
        " 2>/dev/null)

        log_message "Recent events (last hour): $recent_events"

        # Check database size
        local db_size
        db_size=$(mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -N -e "
            SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'DB Size in MB'
            FROM information_schema.tables
            WHERE table_schema='$DB_NAME';
        " 2>/dev/null)

        log_message "Database size: ${db_size} MB"

        return 0
    else
        log_message "ERROR: Database connection failed"
        return 1
    fi
}

# Check disk space
check_disk_space() {
    log_message "Checking disk space..."

    local usage
    usage=$(df -h "$SNORBY_DIR" | awk 'NR==2 {print $5}' | sed 's/%//')

    log_message "Disk usage: ${usage}%"

    if [ "$usage" -gt "$MAX_DISK_USAGE" ]; then
        log_message "WARNING: High disk usage: ${usage}% (threshold: ${MAX_DISK_USAGE}%)"
        return 1
    fi

    return 0
}

# Check log files
check_log_files() {
    log_message "Checking log files..."

    local production_log="$SNORBY_DIR/log/production.log"

    if [ -f "$production_log" ]; then
        # Check for recent errors
        local error_count
        error_count=$(tail -n 100 "$production_log" | grep -c "ERROR\|FATAL" || echo "0")

        if [ "$error_count" -gt 5 ]; then
            log_message "WARNING: High number of errors in production log: $error_count"

            # Show recent errors
            log_message "Recent errors:"
            tail -n 100 "$production_log" | grep "ERROR\|FATAL" | tail -n 5 | while read -r line; do
                log_message "  $line"
            done
        fi

        # Rotate large log files
        local log_size
        log_size=$(stat -c%s "$production_log" 2>/dev/null || echo "0")

        if [ "$log_size" -gt 104857600 ]; then # 100MB
            log_message "Rotating large production log file"
            mv "$production_log" "${production_log}.$(date +%Y%m%d-%H%M%S)"
            touch "$production_log"
            chown snorby:snorby "$production_log"
        fi
    fi
}

# Performance optimization
optimize_performance() {
    log_message "Running performance optimization..."

    # Clear Rails cache
    sudo -u snorby -i << 'EOF'
cd /opt/snorby
bundle exec rake tmp:cache:clear RAILS_ENV=production
bundle exec rake assets:clean RAILS_ENV=production
EOF

    # Optimize database
    mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "
        OPTIMIZE TABLE event;
        OPTIMIZE TABLE signature;
        ANALYZE TABLE event;
        ANALYZE TABLE signature;
    " >/dev/null 2>&1

    # Clean old sessions
    mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "
        DELETE FROM sessions WHERE updated_at < DATE_SUB(NOW(), INTERVAL 7 DAY);
    " >/dev/null 2>&1

    log_message "Performance optimization completed"
}

# Backup Snorby
backup_snorby() {
    log_message "Backing up Snorby..."

    local backup_file="$BACKUP_DIR/snorby-backup-$(date +%Y%m%d-%H%M%S).tar.gz"

    # Backup application files and database
    (
        cd /opt
        tar -czf "$backup_file" \
            snorby/config/ \
            snorby/db/migrate/ \
            snorby/public/uploads/ \
            2>/dev/null
    )

    # Backup database
    mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | \
        gzip > "$BACKUP_DIR/snorby-db-$(date +%Y%m%d-%H%M%S).sql.gz"

    if [ $? -eq 0 ]; then
        log_message "Backup created: $backup_file"

        # Keep only last 7 days of backups
        find "$BACKUP_DIR" -name "snorby-*.tar.gz" -mtime +7 -delete
        find "$BACKUP_DIR" -name "snorby-db-*.sql.gz" -mtime +7 -delete

        return 0
    else
        log_message "ERROR: Backup failed"
        return 1
    fi
}

# Restart Snorby service
restart_snorby() {
    log_message "Restarting Snorby service..."

    systemctl restart snorby

    # Wait for service to start
    sleep 10

    if systemctl is-active --quiet snorby; then
        log_message "Snorby service restarted successfully"
        return 0
    else
        log_message "ERROR: Failed to restart Snorby service"
        return 1
    fi
}

# Generate health report
generate_health_report() {
    log_message "Generating health report..."

    local report_file="$LOG_DIR/health-report-$(date +%Y%m%d-%H%M%S).html"

    cat > "$report_file" << EOF
<!DOCTYPE html>
<html>
<head>
    <title>Snorby Health Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .status-ok { color: green; }
        .status-warning { color: orange; }
        .status-error { color: red; }
        table { border-collapse: collapse; width: 100%; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f2f2f2; }
    </style>
</head>
<body>
    <h1>Snorby Health Report</h1>
    <p>Generated: $(date)</p>

    <h2>System Status</h2>
    <table>
        <tr><th>Component</th><th>Status</th><th>Details</th></tr>
EOF

    # Check each component and add to report
    if check_web_application >/dev/null 2>&1; then
        echo "        <tr><td>Web Application</td><td class=\"status-ok\">OK</td><td>Accessible</td></tr>" >> "$report_file"
    else
        echo "        <tr><td>Web Application</td><td class=\"status-error\">ERROR</td><td>Not accessible</td></tr>" >> "$report_file"
    fi

    if check_snorby_process >/dev/null 2>&1; then
        echo "        <tr><td>Snorby Process</td><td class=\"status-ok\">OK</td><td>Running</td></tr>" >> "$report_file"
    else
        echo "        <tr><td>Snorby Process</td><td class=\"status-error\">ERROR</td><td>Not running</td></tr>" >> "$report_file"
    fi

    if check_database >/dev/null 2>&1; then
        echo "        <tr><td>Database</td><td class=\"status-ok\">OK</td><td>Connected</td></tr>" >> "$report_file"
    else
        echo "        <tr><td>Database</td><td class=\"status-error\">ERROR</td><td>Connection failed</td></tr>" >> "$report_file"
    fi

    local disk_usage
    disk_usage=$(df -h "$SNORBY_DIR" | awk 'NR==2 {print $5}')
    echo "        <tr><td>Disk Space</td><td class=\"status-ok\">OK</td><td>Usage: $disk_usage</td></tr>" >> "$report_file"

    cat >> "$report_file" << EOF
    </table>

    <h2>Recent Activity</h2>
    <pre>$(tail -n 20 "$LOG_DIR/monitor.log" 2>/dev/null || echo "No recent activity logged")</pre>
</body>
</html>
EOF

    log_message "Health report generated: $report_file"
}

# Send alert notification
send_alert() {
    local subject="$1"
    local message="$2"

    # Send email if mail is configured
    if command -v mail >/dev/null 2>&1; then
        echo "$message" | mail -s "Snorby Alert: $subject" security@company.com
    fi

    # Log to syslog
    logger -t snorby-monitor "$subject: $message"

    log_message "Alert sent: $subject"
}

# Main monitoring function
run_monitoring() {
    log_message "Starting Snorby monitoring cycle"

    local issues=0

    # Run all checks
    check_web_application || ((issues++))
    check_snorby_process || ((issues++))
    check_database || ((issues++))
    check_disk_space || ((issues++))
    check_log_files

    # Performance optimization (weekly)
    if [ "$(date +%u)" -eq 1 ] && [ "$(date +%H)" -eq 2 ]; then
        optimize_performance
        backup_snorby
    fi

    # Generate health report (daily)
    if [ "$(date +%H)" -eq 6 ]; then
        generate_health_report
    fi

    # Restart service if issues detected
    if [ "$issues" -gt 2 ]; then
        log_message "Multiple issues detected, attempting service restart"
        restart_snorby
    fi

    # Send alerts if issues found
    if [ "$issues" -gt 0 ]; then
        send_alert "System Issues Detected" "Found $issues issues during monitoring. Check logs for details."
    fi

    log_message "Monitoring cycle completed with $issues issues"
    return $issues
}

# Command line interface
case "${1:-monitor}" in
    "monitor")
        run_monitoring
        ;;
    "restart")
        restart_snorby
        ;;
    "backup")
        backup_snorby
        ;;
    "optimize")
        optimize_performance
        ;;
    "report")
        generate_health_report
        ;;
    *)
        echo "Usage: $0 {monitor|restart|backup|optimize|report}"
        echo ""
        echo "Commands:"
        echo "  monitor  - Run complete monitoring cycle (default)"
        echo "  restart  - Restart Snorby service"
        echo "  backup   - Backup Snorby application and database"
        echo "  optimize - Run performance optimization"
        echo "  report   - Generate health report"
        exit 1
        ;;
esac
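
Saved for example as /usr/local/bin/snorby-monitor.sh (a hypothetical path), the script can be run hourly from root's crontab; the weekly optimization/backup and daily report branches coded above then trigger at the matching hours:

sudo chmod +x /usr/local/bin/snorby-monitor.sh
sudo crontab -e

# Run the full monitoring cycle at the top of every hour
# 0 * * * * /usr/local/bin/snorby-monitor.sh monitor >> /var/log/snorby/cron.log 2>&1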

## Integration Examples

### SIEM Integration

# SIEM integration for Snorby
# File: /opt/snorby/lib/siem_integration.rb

class SiemIntegration
  def self.export_to_splunk(time_range = 1.hour)
    events = Event.includes(:signature)
                  .where('timestamp >= ?', time_range.ago)

    events.find_each do |event|
      splunk_event = {
        time: event.timestamp.to_i,
        source: 'snorby',
        sourcetype: 'snort:alert',
        index: 'security',
        event: {
          sid: event.sid,
          cid: event.cid,
          signature: event.signature.sig_name,
          signature_id: event.signature.sig_id,
          src_ip: event.src_ip,
          src_port: event.src_port,
          dst_ip: event.dst_ip,
          dst_port: event.dst_port,
          protocol: event.ip_proto,
          priority: event.signature.sig_priority,
          classification: event.signature.sig_class_id
        }
      }

      send_to_splunk_hec(splunk_event)
    end
  end

  def self.send_to_splunk_hec(event_data)
    uri = URI(Rails.application.config.splunk_hec_url)
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true if uri.scheme == 'https'

    request = Net::HTTP::Post.new(uri)
    request['Authorization'] = "Splunk #{Rails.application.config.splunk_hec_token}"
    request['Content-Type'] = 'application/json'
    request.body = event_data.to_json

    response = http.request(request)

    unless response.code == '200'
      Rails.logger.error "Failed to send event to Splunk: #{response.code} #{response.body}"
    end
  end

  def self.export_to_elasticsearch(time_range = 1.hour)
    events = Event.includes(:signature)
                  .where('timestamp >= ?', time_range.ago)

    events.find_each do |event|
      es_event = {
        '@timestamp' => event.timestamp.iso8601,
        source: {
          ip: event.src_ip,
          port: event.src_port
        },
        destination: {
          ip: event.dst_ip,
          port: event.dst_port
        },
        network: {
          protocol: protocol_name(event.ip_proto)
        },
        event: {
          id: "#{event.sid}-#{event.cid}",
          category: 'network',
          type: 'alert',
          severity: severity_level(event.signature.sig_priority)
        },
        rule: {
          id: event.signature.sig_id,
          name: event.signature.sig_name,
          category: event.signature.sig_class_id
        }
      }

      send_to_elasticsearch(es_event)
    end
  end

  def self.send_to_elasticsearch(event_data)
    index_name = "snorby-#{Date.current.strftime('%Y.%m.%d')}"
    doc_id = event_data[:event][:id]

    uri = URI("#{Rails.application.config.elasticsearch_url}/#{index_name}/_doc/#{doc_id}")
    http = Net::HTTP.new(uri.host, uri.port)

    request = Net::HTTP::Put.new(uri)
    request['Content-Type'] = 'application/json'
    request.body = event_data.to_json

    response = http.request(request)

    unless ['200', '201'].include?(response.code)
      Rails.logger.error "Failed to send event to Elasticsearch: #{response.code} #{response.body}"
    end
  end

  private

  def self.protocol_name(proto_num)
    case proto_num
    when 1 then 'icmp'
    when 6 then 'tcp'
    when 17 then 'udp'
    else proto_num.to_s
    end
  end

  def self.severity_level(priority)
    case priority
    when 1..2 then 'high'
    when 3..4 then 'medium'
    else 'low'
    end
  end
end
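
The integration code reads a few custom configuration values (splunk_hec_url, splunk_hec_token, elasticsearch_url) that are not part of stock Snorby. One way to define them is a small initializer; this is a sketch with placeholder endpoints:

cat > /opt/snorby/config/initializers/siem.rb << 'EOF'
# Custom settings consumed by SiemIntegration (values are placeholders)
Rails.application.config.splunk_hec_url = "https://splunk.company.com:8088/services/collector/event"
Rails.application.config.splunk_hec_token = "YOUR-HEC-TOKEN"
Rails.application.config.elasticsearch_url = "http://elasticsearch.company.com:9200"
EOF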

## Troubleshooting

### Common Issues

**Application Won't Start**

**Database Connection Problems**

**Compliance Problems**
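
A few basic checks for these issues (a sketch; paths and service names follow the installation steps above):

# Application won't start: check the service state and the Rails log
sudo systemctl status snorby
sudo journalctl -u snorby --since "1 hour ago"
sudo tail -n 50 /opt/snorby/log/production.log

# Database connection problems: test the credentials from config/database.yml
mysql -u snorby -p -h localhost snorby -e "SELECT COUNT(*) FROM event;"

# Dependency or asset problems: reinstall gems and recompile assets
sudo -u snorby -i
cd /opt/snorby
bundle install --deployment --without development test
bundle exec rake assets:precompile RAILS_ENV=production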

### Performance Optimization

Optimizing Snorby performance:

# Ruby/Rails optimization
cat >> /opt/snorby/config/environments/production.rb << 'EOF'
# Performance optimizations
config.cache_classes = true
config.eager_load = true
config.consider_all_requests_local = false
config.action_controller.perform_caching = true
config.cache_store = :memory_store, { size: 64.megabytes }
config.assets.compile = false
config.assets.digest = true
config.log_level = :warn
EOF

# Database optimization
mysql -u root -p << 'EOF'
-- Optimize Snorby database
USE snorby;

-- Add indexes for common queries
CREATE INDEX idx_event_timestamp ON event (timestamp);
CREATE INDEX idx_event_src_ip_timestamp ON event (src_ip, timestamp);
CREATE INDEX idx_event_dst_ip_timestamp ON event (dst_ip, timestamp);
CREATE INDEX idx_event_signature_timestamp ON event (sig_id, timestamp);

-- Optimize tables
OPTIMIZE TABLE event;
OPTIMIZE TABLE signature;
ANALYZE TABLE event;
ANALYZE TABLE signature;
EOF

# System optimization
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p

## Security Considerations

### Access Control

**Web Application Security:**

  • Implement HTTPS for all Snorby access (see the reverse proxy sketch below)
  • Use strong authentication mechanisms
  • Enforce session timeouts and secure session management
  • Apply regular security updates to Ruby and Rails
  • Monitor access logs for suspicious activity
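
One common way to provide HTTPS is an nginx reverse proxy in front of the Rails server on port 3000. A minimal sketch; the hostname and certificate paths are placeholders:

cat > /etc/nginx/sites-available/snorby << 'EOF'
server {
    listen 443 ssl;
    server_name snorby.company.com;

    ssl_certificate     /etc/ssl/certs/snorby.crt;
    ssl_certificate_key /etc/ssl/private/snorby.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
EOF

ln -s /etc/nginx/sites-available/snorby /etc/nginx/sites-enabled/snorby
nginx -t && systemctl reload nginx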

### Database Security

  • Use a dedicated database user with minimal privileges (see the sketch below)
  • Encrypt database connections
  • Apply regular database security updates
  • Monitor database access logs
  • Encrypt database backups
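
A sketch of tightening the database account created during installation: replace the blanket grant with the day-to-day privileges and require TLS on the connection. The REQUIRE SSL clause in GRANT applies to MySQL 5.x; schema setup and migrations additionally need CREATE/ALTER/INDEX privileges.

mysql -u root -p << 'EOF'
REVOKE ALL PRIVILEGES ON snorby.* FROM 'snorby'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON snorby.* TO 'snorby'@'localhost' REQUIRE SSL;
FLUSH PRIVILEGES;
EOF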

### Data Protection

**Event Data Security:**

  • Encrypt sensitive event data at rest
  • Implement data retention policies
  • Secure access to packet capture data
  • Regularly clean up temporary files
  • Implement access logging for event data

**Operational Security:**

  • Perform regular security assessments of the Snorby infrastructure
  • Monitor for unauthorized access attempts
  • Implement proper backup and recovery procedures
  • Keep Snorby and its dependencies up to date
  • Maintain response procedures for a compromised Snorby instance

## References

  1. [Snorby GitHub Repository](https://github.com/Snorby/snorby)
  2. Ruby on Rails Documentation
  3. Snort IDS Documentation
  4. MySQL Performance Tuning
  5. Ruby Security Best Practices