# Splunk Cheat Sheet
Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data through a web-style interface. It is widely used for security information and event management (SIEM), IT operations, and business analytics.
## Installation and Setup
### Download and Install Splunk Enterprise
```bash
# Download Splunk Enterprise (Linux)
wget -O splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
'https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'
# Install on Ubuntu/Debian
sudo dpkg -i splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
# Install on CentOS/RHEL
sudo rpm -i splunk-9.1.2-b6b9c8185839.x86_64.rpm
# Start Splunk
sudo /opt/splunk/bin/splunk start --accept-license
# Enable boot start
sudo /opt/splunk/bin/splunk enable boot-start
```
### Splunk Universal Forwarder
```bash
# Download Universal Forwarder
wget -O splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
'https://download.splunk.com/products/universalforwarder/releases/9.1.2/linux/splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'
# Install Universal Forwarder
sudo dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
# Start forwarder
sudo /opt/splunkforwarder/bin/splunk start --accept-license
```
### Docker Installation
```bash
# Run Splunk Enterprise in Docker
docker run -d -p 8000:8000 -p 9997:9997 \
-e SPLUNK_START_ARGS='--accept-license' \
-e SPLUNK_PASSWORD='changeme123' \
--name splunk splunk/splunk:latest
# Run Universal Forwarder in Docker
docker run -d -p 9997:9997 \
-e SPLUNK_START_ARGS='--accept-license' \
-e SPLUNK_PASSWORD='changeme123' \
--name splunk-forwarder splunk/universalforwarder:latest
```
## Basic Search Commands
### Search Fundamentals
```spl
# Basic search
index=main error
# Search with time range
index=main error earliest=-24h latest=now
# Search multiple indexes
(index=main OR index=security) error
# Search with wildcards
index=main source="*access*" status=404
# Case-insensitive search
index=main Error OR error OR ERROR
```
### Search Operators
```spl
# AND operator (implicit)
index=main error failed
# OR operator
index=main (error OR warning)
# NOT operator
index=main error NOT warning
# Field searches
index=main host=webserver01 sourcetype=access_combined
# Quoted strings
index=main "connection refused"
# Field existence
index=main user=*
# Field non-existence
index=main NOT user=*
```
### Time Modifiers
```spl
# Relative time
earliest=-1h latest=now
earliest=-7d@d latest=@d
earliest=-1mon@mon latest=@mon
# Absolute time
earliest="01/01/2024:00:00:00" latest="01/31/2024:23:59:59"
# Snap to time
earliest=-1d@d latest=@d # Yesterday from midnight to midnight
earliest=@w0 latest=@w6 # This week from Sunday to Saturday
```
## Data Processing Commands
### Field Extraction
```spl
# Extract fields with regex
index=main|rex field=_raw "(?<username>\w+)@(?<domain>\w+\.\w+)"
# Extract multiple fields
index=main|rex "user=(?<user>\w+).*ip=(?<ip>\d+\.\d+\.\d+\.\d+)"
# Extract with named groups
index=main|rex field=message "Error: (?<error_code>\d+) - (?<error_msg>.*)"
# Extract fields from specific field
index=main|rex field=url "\/(?<category>\w+)\/(?<item>\w+)"
```
### Eval Operations
```spl
# Create new fields
index=main|eval new_field=field1+field2
# String operations
index=main|eval upper_user=upper(user)
index=main|eval user_domain=user."@".domain
# Conditional fields
index=main|eval status_desc=case(
status>=200 AND status<300, "Success",
status>=400 AND status<500, "Client Error",
status>=500, "Server Error",
1=1, "Unknown"
)
# Mathematical operations
index=main|eval response_time_ms=response_time*1000
index=main|eval percentage=round((part/total)*100, 2)
```
### Data Transformation
```spl
# Remove fields
index=main|fields - _raw, _time
# Keep only specific fields
index=main|fields user, ip, action
# Rename fields
index=main|rename src_ip as source_ip, dst_ip as dest_ip
# Convert field types
index=main|eval response_time=tonumber(response_time)
index=main|eval timestamp=strftime(_time, "%Y-%m-%d %H:%M:%S")
```
## Statistical Commands
### Basic Statistics
```spl
# Count events
index=main|stats count
# Count by field
index=main|stats count by user
# Multiple statistics
index=main|stats count, avg(response_time), max(bytes) by host
# Distinct count
index=main|stats dc(user) as unique_users
# List unique values
index=main|stats values(user) as users by host
```
### Advanced Statistics
```spl
# Percentiles
index=main|stats perc50(response_time), perc95(response_time), perc99(response_time)
# Standard deviation
index=main|stats avg(response_time), stdev(response_time)
# Range and variance
index=main|stats min(response_time), max(response_time), range(response_time), var(response_time)
# First and last values
index=main|stats first(user), last(user) by session_id
```
### Time-based Statistics
```spl
# Statistics over time
index=main|timechart span=1h count by status
# Average over time
index=main|timechart span=5m avg(response_time)
# Multiple metrics over time
index=main|timechart span=1h count, avg(response_time), max(bytes)
# Fill null values
index=main|timechart span=1h count|fillnull value=0
```
## Filtering and Sorting
### Where Command
```spl
# Filter results
index=main|stats count by user|where count > 100
# String comparisons
index=main|where like(user, "admin%")
index=main|where match(email, ".*@company\.com")
# Numeric comparisons
index=main|where response_time > 5.0
index=main|where bytes > 1024*1024 # Greater than 1MB
```
### Search and Where
```spl
# Search command for filtering
index=main|search user=admin OR user=root
# Complex search conditions
index=main|search (status>=400 AND status<500) OR (response_time>10)
# Search with wildcards
index=main|search user="admin*" OR user="*admin"
```
### Sorting
```spl
# Sort ascending
index=main|stats count by user|sort user
# Sort descending
index=main|stats count by user|sort -count
# Multiple sort fields
index=main|stats count, avg(response_time) by user|sort -count, user
# Sort with limit
index=main|stats count by user|sort -count|head 10
```
## Advanced Search Techniques
### Subsearches
```spl
# Basic subsearch
index=main user=[search index=security action=login|return user]
# Subsearch with formatting
index=main [search index=security failed_login|stats count by user|where count>5|format]
# Subsearch with specific return
index=main ip=[search index=blacklist|return ip]
```
### Joins
```spl
# Inner join
index=main|join user [search index=user_info|fields user, department]
# Left join
index=main|join type=left user [search index=user_info|fields user, department]
# Join with multiple fields
index=main|join user, host [search index=inventory|fields user, host, asset_tag]
```
### Lookups
```spl
# CSV lookup
index=main|lookup user_lookup.csv user OUTPUT department, manager
# Automatic lookup (configured in transforms.conf)
index=main|lookup geoip clientip
# External lookup
index=main|lookup dnslookup ip OUTPUT hostname
```
### Transactions
```spl
# Group events into transactions
index=main|transaction user startswith="login" endswith="logout"
# Transaction with time constraints
index=main|transaction user maxspan=1h maxpause=10m
# Transaction with event count
index=main|transaction session_id maxevents=100
# Transaction statistics
index=main|transaction user|stats avg(duration), count by user
```
## Visualization and Reporting
### Chart Commands
```spl
# Simple chart
index=main|chart count by status
# Chart over time
index=main|chart count over _time by status
# Chart with functions
index=main|chart avg(response_time), max(response_time) over host by status
# Chart with bins
index=main|chart count over response_time bins=10
```
### Top and Rare
```spl
# Top values
index=main|top user
# Top with limit
index=main|top limit=20 user
# Top by another field
index=main|top user by host
# Rare values
index=main|rare user
# Top with percentage
index=main|top user showperc=true
```
### Geostats
```spl
# Geographic statistics
index=main|iplocation clientip|geostats count by Country
# Geostats with latfield and longfield
index=main|geostats latfield=latitude longfield=longitude count by region
# Geostats with globallimit
index=main|iplocation clientip|geostats globallimit=10 count by City
```
## Security and SIEM Use Cases
### Failed Login Detection
```spl
# Failed login attempts
index=security sourcetype=linux_secure "Failed password"
|rex field=_raw "Failed password for (?<user>\w+) from (?<src_ip>\d+\.\d+\.\d+\.\d+)"
|stats count by user, src_ip
|where count > 5
|sort -count
```
### Brute Force Detection
```spl
# Brute force attack detection
index=security action=login result=failure
|bucket _time span=5m
|stats dc(user) as unique_users, count as attempts by src_ip, _time
|where attempts > 20 OR unique_users > 10
|sort -attempts
```
### Privilege Escalation
```spl
# Sudo usage monitoring
index=security sourcetype=linux_secure "sudo"
|rex field=_raw "sudo:\s+(?<user>\w+)\s+:\s+(?<command>.*)"
|stats count, values(command) as commands by user
|sort -count
```
### Network Traffic Analysis
```spl
# Large data transfers
index=network
|stats sum(bytes_out) as total_bytes by src_ip, dest_ip
|where total_bytes > 1073741824 # 1GB
|eval total_gb=round(total_bytes/1073741824, 2)
|sort -total_gb
```
### Malware Detection
```spl
# Suspicious process execution
index=endpoint process_name=*
|search (process_name="*.tmp" OR process_name="*.exe" OR process_name="powershell.exe")
|stats count, values(command_line) as commands by host, process_name
|where count > 10
```
## Performance Optimization
### Search Optimization
```spl
# Use specific indexes
index=main sourcetype=access_combined
# Filter early in search
index=main error earliest=-1h|stats count by host
# Use fast commands first
index=main|where status=404|stats count by uri
# Avoid wildcards at the beginning
index=main uri="/api/*" NOT uri="*debug*"
```
### Field Extraction Optimization
```spl
# Extract only needed fields
index=main|rex field=_raw "user=(?<user>\w+)"|fields user, _time
# Use field extraction in search
index=main user=admin|stats count
# Limit field extraction scope
index=main|head 1000|rex field=_raw "pattern"
```
### Memory and CPU Optimization
```spl
# Use summary indexing for frequent searches
|collect index=summary_index source="daily_stats"
# Use report acceleration
|sistats count by user
# Limit search scope
index=main earliest=-1h latest=now|head 10000
```
## Configuration and Administration
### Index Management
```bash
# Create new index
/opt/splunk/bin/splunk add index myindex -maxDataSize 1000 -maxHotBuckets 10
# List indexes
/opt/splunk/bin/splunk list index
# Clean index
/opt/splunk/bin/splunk clean eventdata -index myindex
```
### User Management
```bash
# Add user
/opt/splunk/bin/splunk add user username -password password -role user
# List users
/opt/splunk/bin/splunk list user
# Change user password
/opt/splunk/bin/splunk edit user username -password newpassword
```
### Data Input Configuration
```bash
# Monitor file
/opt/splunk/bin/splunk add monitor /var/log/messages -index main
# Monitor directory
/opt/splunk/bin/splunk add monitor /var/log/ -index main
# Network input
/opt/splunk/bin/splunk add tcp 9999 -sourcetype syslog
```
### Forwarder Configuration
```bash
# Add forward server
/opt/splunkforwarder/bin/splunk add forward-server splunk-server:9997
# Add monitor to forwarder
/opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/access.log -index web
# List forward servers
/opt/splunkforwarder/bin/splunk list forward-server
```
## REST API Usage
### Authentication
```bash
# Get session key
curl -k -u admin:password https://localhost:8089/services/auth/login \
-d username=admin -d password=password
# Use session key
curl -k -H "Authorization: Splunk <session_key>" \
https://localhost:8089/services/search/jobs
```
### Search API
```bash
# Create search job
curl -k -u admin:password https://localhost:8089/services/search/jobs \
-d search="search index=main|head 10"
# Get search results
curl -k -u admin:password \
https://localhost:8089/services/search/jobs/<sid>/results \
--get -d output_mode=json
```
### Data Input API
```bash
# List data inputs
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor
# Add monitor input
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor \
-d name=/var/log/myapp.log -d index=main
```
## Troubleshooting
### Common Issues
```bash
# Check Splunk status
/opt/splunk/bin/splunk status
# Check license usage
/opt/splunk/bin/splunk list licenser-localslave
# Restart Splunk
/opt/splunk/bin/splunk restart
# Check configuration
/opt/splunk/bin/splunk btool inputs list
/opt/splunk/bin/splunk btool outputs list
```
### Log Analysis
```bash
# Check Splunk internal logs
tail -f /opt/splunk/var/log/splunk/splunkd.log
# Check metrics
tail -f /opt/splunk/var/log/splunk/metrics.log
# Check audit logs
tail -f /opt/splunk/var/log/splunk/audit.log
```
### Performance Monitoring
```spl
# Internal Splunk metrics
index=_internal source=*metrics.log group=per_index_thruput
|stats sum(kb) as total_kb by series
|sort -total_kb
# Search performance
index=_audit action=search
|stats avg(total_run_time) as avg_runtime by user
|sort -avg_runtime
# License usage
index=_internal source=*license_usage.log type=Usage
|stats sum(b) as bytes by idx
|eval GB=round(bytes/1024/1024/1024,2)
|sort -GB
```
## Best Practices
### Search Best Practices
- **Use specific time ranges** - Avoid "All Time" searches
- **Filter early** - Apply index, sourcetype, and host filters first
- **Use fast commands** - stats, chart, and timechart are faster than transaction
- **Avoid wildcards** - Especially at the beginning of search terms
- **Use summary indexing** - For frequently run searches
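The "filter early" rule can be seen by comparing two equivalent searches; this is a sketch in which the sourcetype and field names are illustrative:

```spl
# Slower: pulls all events from the index, then filters with a search command
index=main | search sourcetype=access_combined status=404 | stats count by uri

# Faster: index, sourcetype, and field filters in the base search,
# plus an explicit time range, so far fewer events leave the indexers
index=main sourcetype=access_combined status=404 earliest=-4h
| stats count by uri
```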
### Data Onboarding Best Practices
- **Plan your index strategy** - Separate indexes by data type and retention
- **Configure sourcetypes** - Ensure proper field extraction and parsing
- **Handle timestamp extraction** - Make sure events get accurate timestamps
- **Use universal forwarders** - For distributed data collection
- **Monitor license usage** - Stay within license limits
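The sourcetype and timestamp advice typically lands in `props.conf`; a minimal sketch for a hypothetical `myapp:json` sourcetype (the setting names are standard `props.conf` keys, the values are illustrative):

```ini
# props.conf (hypothetical sourcetype stanza)
[myapp:json]
# Anchor timestamp parsing to a "timestamp" JSON key
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# Don't scan more than 40 chars past the prefix for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 40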
### Security Best Practices
- **Use role-based access** - Limit user permissions appropriately
- **Enable SSL** - For the web interface and forwarder communication
- **Take regular backups** - Back up configuration and critical data
- **Audit administrative activity** - Track configuration changes
- **Keep Splunk updated** - Apply security patches regularly
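Enabling SSL for Splunk Web is a `web.conf` change; a minimal sketch, with placeholder certificate paths:

```ini
# web.conf (certificate paths are placeholders)
[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mykey.key
serverCert = /opt/splunk/etc/auth/mycerts/mycert.pem
```

A restart of Splunk Web is required for the change to take effect.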
## Resources
- [Splunk Documentation](LINK_5)
- [Splunk Search Reference](LINK_5)
- [Splunk Community](LINK_5)
- [Splunk Education](LINK_5)
- [Splunk Apps](LINK_5)