Splunk Cheatsheet
Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data through a web-style interface. It is widely used for security information and event management (SIEM), IT operations, and business analytics.
Installation and Setup
Download and Install Splunk Enterprise
# Download Splunk Enterprise (Linux)
wget -O splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
'https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'
# Install on Ubuntu/Debian
sudo dpkg -i splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
# Install on CentOS/RHEL
sudo rpm -i splunk-9.1.2-b6b9c8185839.x86_64.rpm
# Start Splunk
sudo /opt/splunk/bin/splunk start --accept-license
# Enable boot start
sudo /opt/splunk/bin/splunk enable boot-start
Splunk Universal Forwarder
# Download Universal Forwarder
wget -O splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
'https://download.splunk.com/products/universalforwarder/releases/9.1.2/linux/splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'
# Install Universal Forwarder
sudo dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
# Start forwarder
sudo /opt/splunkforwarder/bin/splunk start --accept-license
Docker Installation
# Run Splunk Enterprise in Docker
docker run -d -p 8000:8000 -p 9997:9997 \
-e SPLUNK_START_ARGS='--accept-license' \
-e SPLUNK_PASSWORD='changeme123' \
--name splunk splunk/splunk:latest
# Run Universal Forwarder in Docker
docker run -d -p 9997:9997 \
-e SPLUNK_START_ARGS='--accept-license' \
-e SPLUNK_PASSWORD='changeme123' \
--name splunk-forwarder splunk/universalforwarder:latest
Basic Search Commands
Search Fundamentals
# Basic search
index=main error
# Search with time range
index=main error earliest=-24h latest=now
# Search multiple indexes
(index=main OR index=security) error
# Search with wildcards
index=main source="*access*" status=404
# Free-text search terms are case-insensitive by default
index=main error
Search Operators
# AND operator (implicit)
index=main error failed
# OR operator
index=main (error OR warning)
# NOT operator
index=main error NOT warning
# Field searches
index=main host=webserver01 sourcetype=access_combined
# Quoted strings
index=main "connection refused"
# Field existence
index=main user=*
# Field non-existence
index=main NOT user=*
Time Modifiers
# Relative time
earliest=-1h latest=now
earliest=-7d@d latest=@d
earliest=-1mon@mon latest=@mon
# Absolute time
earliest="01/01/2024:00:00:00" latest="01/31/2024:23:59:59"
# Snap to time
earliest=-1d@d latest=@d # Yesterday from midnight to midnight
earliest=@w0 latest=now # This week, from Sunday to now
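The snap-to syntax (`@d`, `@w0`, `@mon`) truncates a timestamp to the start of the given unit before applying any offset. A minimal Python sketch of the same truncation logic, for intuition only; the `snap_to_day`/`snap_to_week` helpers are illustrative, not part of any Splunk SDK:

```python
from datetime import datetime, timedelta

def snap_to_day(t: datetime) -> datetime:
    # @d: truncate to midnight of the same day
    return t.replace(hour=0, minute=0, second=0, microsecond=0)

def snap_to_week(t: datetime) -> datetime:
    # @w0: truncate to the most recent Sunday at midnight
    days_since_sunday = (t.weekday() + 1) % 7  # Monday=0 ... Sunday=6
    return snap_to_day(t) - timedelta(days=days_since_sunday)

now = datetime(2024, 1, 10, 15, 30)          # a Wednesday
print(snap_to_day(now))                       # 2024-01-10 00:00:00
print(snap_to_week(now))                      # 2024-01-07 00:00:00 (a Sunday)
print(snap_to_day(now) - timedelta(days=1))   # -1d@d: yesterday at midnight
```

Offsets apply after the snap, which is why `-1d@d` means "snap to midnight, then go back one day".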
Data Processing Commands
Field Extraction
# Extract fields with regex
index=main|rex field=_raw "(?<username>\w+)@(?<domain>\w+\.\w+)"
# Extract multiple fields
index=main|rex "user=(?<user>\w+).*ip=(?<ip>\d+\.\d+\.\d+\.\d+)"
# Extract with named groups
index=main|rex field=message "Error: (?<error_code>\d+) - (?<error_msg>.*)"
# Extract fields from specific field
index=main|rex field=url "\/(?<category>\w+)\/(?<item>\w+)"
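`rex` uses PCRE-style named capture groups, so the same patterns work in most regex engines. A quick Python check of the email pattern above (note Python requires `(?P<name>...)` where Splunk accepts `(?<name>...)`; the sample log line is invented):

```python
import re

raw = "2024-01-01 10:00:00 login ok for alice@example.com from 10.0.0.5"

# Same named-group pattern as the rex example above, in Python syntax
m = re.search(r"(?P<username>\w+)@(?P<domain>\w+\.\w+)", raw)
print(m.group("username"))  # alice
print(m.group("domain"))    # example.com
```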
Field Operations
# Create new fields
index=main|eval new_field=field1+field2
# String operations
index=main|eval upper_user=upper(user)
index=main|eval user_domain=user."@".domain
# Conditional fields
index=main|eval status_desc=case(
status>=200 AND status<300, "Success",
status>=400 AND status<500, "Client Error",
status>=500, "Server Error",
1=1, "Unknown"
)
# Mathematical operations
index=main|eval response_time_ms=response_time*1000
index=main|eval percentage=round((part/total)*100, 2)
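The `case()` function evaluates its condition/value pairs in order and returns the value of the first condition that matches (`1=1` is the catch-all default). The same ordering semantics in Python, purely as illustration:

```python
def status_desc(status: int) -> str:
    # Mirrors the eval case() above: first matching condition wins
    if 200 <= status < 300:
        return "Success"
    if 400 <= status < 500:
        return "Client Error"
    if status >= 500:
        return "Server Error"
    return "Unknown"  # the 1=1 default branch

print(status_desc(204))  # Success
print(status_desc(404))  # Client Error
print(status_desc(503))  # Server Error
print(status_desc(302))  # Unknown (3xx falls through every condition)
```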
Data Transformation
# Remove fields
index=main|fields - _raw, _time
# Keep only specific fields
index=main|fields user, ip, action
# Rename fields
index=main|rename src_ip as source_ip, dst_ip as dest_ip
# Convert field types
index=main|eval response_time=tonumber(response_time)
index=main|eval timestamp=strftime(_time, "%Y-%m-%d %H:%M:%S")
Statistical Commands
Basic Statistics
# Count events
index=main|stats count
# Count by field
index=main|stats count by user
# Multiple statistics
index=main|stats count, avg(response_time), max(bytes) by host
# Distinct count
index=main|stats dc(user) as unique_users
# List unique values
index=main|stats values(user) as users by host
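`count by`, `dc()`, and `values()` map directly onto grouping and set operations. A small Python analogy over invented events (the `user`/`host` field names are assumptions for the sketch):

```python
from collections import Counter

# Hypothetical events with user and host fields
events = [
    {"user": "alice", "host": "web01"},
    {"user": "bob",   "host": "web01"},
    {"user": "alice", "host": "web02"},
    {"user": "alice", "host": "web01"},
]

# stats count by user
print(Counter(e["user"] for e in events))  # Counter({'alice': 3, 'bob': 1})

# stats dc(user) as unique_users -- distinct count
print(len({e["user"] for e in events}))    # 2

# stats values(user) as users by host -- de-duplicated values per group
by_host = {}
for e in events:
    by_host.setdefault(e["host"], set()).add(e["user"])
print({h: sorted(u) for h, u in by_host.items()})
```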
Advanced Statistics
# Percentiles
index=main|stats perc50(response_time), perc95(response_time), perc99(response_time)
# Standard deviation
index=main|stats avg(response_time), stdev(response_time)
# Range and variance
index=main|stats min(response_time), max(response_time), range(response_time), var(response_time)
# First and last values
index=main|stats first(user), last(user) by session_id
Time-Based Statistics
# Statistics over time
index=main|timechart span=1h count by status
# Average over time
index=main|timechart span=5m avg(response_time)
# Multiple metrics over time
index=main|timechart span=1h count, avg(response_time), max(bytes)
# Fill null values
index=main|timechart span=1h count|fillnull value=0
Filtering and Sorting
The where Command
# Filter results
index=main|stats count by user|where count > 100
# String comparisons
index=main|where like(user, "admin%")
index=main|where match(email, ".*@company\.com")
# Numeric comparisons
index=main|where response_time > 5.0
index=main|where bytes > 1024*1024 # Greater than 1MB
Search vs. where
# Search command for filtering
index=main|search user=admin OR user=root
# Complex search conditions
index=main|search (status>=400 AND status<500) OR (response_time>10)
# Search with wildcards
index=main|search user="admin*" OR user="*admin"
Sorting
# Sort ascending
index=main|stats count by user|sort user
# Sort descending
index=main|stats count by user|sort -count
# Multiple sort fields
index=main|stats count, avg(response_time) by user|sort -count, user
# Sort with limit
index=main|stats count by user|sort -count|head 10
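`sort -count, user` sorts descending on count with user as an ascending tie-breaker, and `head 10` keeps the top rows. The Python equivalent uses a composite sort key (the rows here are invented):

```python
rows = [
    {"user": "carol", "count": 40},
    {"user": "alice", "count": 120},
    {"user": "bob",   "count": 120},
    {"user": "dave",  "count": 7},
]

# sort -count, user : count descending, user ascending as tie-breaker
rows.sort(key=lambda r: (-r["count"], r["user"]))
print([r["user"] for r in rows])  # ['alice', 'bob', 'carol', 'dave']

# sort -count | head 10 : sort, then keep the first 10 rows
top_10 = rows[:10]
```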
Advanced Search Techniques
Subsearches
# Basic subsearch
index=main user=[search index=security action=login|return user]
# Subsearch with formatting
index=main [search index=security failed_login|stats count by user|where count>5|format]
# Subsearch with specific return
index=main ip=[search index=blacklist|return ip]
Joins
# Inner join
index=main|join user [search index=user_info|fields user, department]
# Left join
index=main|join type=left user [search index=user_info|fields user, department]
# Join with multiple fields
index=main|join user, host [search index=inventory|fields user, host, asset_tag]
Lookups
# CSV lookup
index=main|lookup user_lookup.csv user OUTPUT department, manager
# Automatic lookup (configured in transforms.conf)
index=main|lookup geoip clientip
# External lookup
index=main|lookup dnslookup ip OUTPUT hostname
Transactions
# Group events into transactions
index=main|transaction user startswith="login" endswith="logout"
# Transaction with time constraints
index=main|transaction user maxspan=1h maxpause=10m
# Transaction with event count
index=main|transaction session_id maxevents=100
# Transaction statistics
index=main|transaction user|stats avg(duration), count by user
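`transaction` groups events that share a field between a start and an end condition, producing a `duration` per group. A deliberately simplified sketch of that pairing for login/logout events (timestamps are epoch seconds, data is invented, and real `transaction` handles far more cases than this):

```python
# Simplified transaction: pair each user's "login" with the next "logout"
events = [
    (100, "alice", "login"),
    (160, "alice", "logout"),
    (200, "bob",   "login"),
    (290, "bob",   "logout"),
]

open_tx = {}      # user -> time of the open "login" event
durations = {}    # user -> list of session durations (seconds)

for ts, user, action in sorted(events):
    if action == "login":
        open_tx[user] = ts
    elif action == "logout" and user in open_tx:
        durations.setdefault(user, []).append(ts - open_tx.pop(user))

print(durations)  # {'alice': [60], 'bob': [90]}
```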
Visualization and Reporting
Chart Commands
# Simple chart
index=main|chart count by status
# Chart over time
index=main|chart count over _time by status
# Chart with functions
index=main|chart avg(response_time), max(response_time) over host by status
# Chart with bins
index=main|chart count over response_time bins=10
Top and Rare
# Top values
index=main|top user
# Top with limit
index=main|top limit=20 user
# Top by another field
index=main|top user by host
# Rare values
index=main|rare user
# Top with percentage
index=main|top user showperc=true
Geostats
# Geographic statistics
index=main|iplocation clientip|geostats count by Country
# Geostats with latfield and longfield
index=main|geostats latfield=latitude longfield=longitude count by region
# Geostats with globallimit
index=main|iplocation clientip|geostats globallimit=10 count by City
Security and SIEM Use Cases
Failed Login Detection
# Failed login attempts
index=security sourcetype=linux_secure "Failed password"
|rex field=_raw "Failed password for (?<user>\w+) from (?<src_ip>\d+\.\d+\.\d+\.\d+)"
|stats count by user, src_ip
|where count > 5
|sort -count
Brute Force Detection
# Brute force attack detection
index=security action=login result=failure
|bucket _time span=5m
|stats dc(user) as unique_users, count as attempts by src_ip, _time
|where attempts > 20 OR unique_users > 10
|sort -attempts
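`bucket _time span=5m` floors each timestamp to a 5-minute boundary before the `stats`, so attempts are counted per source IP per window. The core of that detection logic in Python (sample data and thresholds are illustrative):

```python
from collections import defaultdict

SPAN = 300  # 5 minutes, in seconds

# (epoch_ts, src_ip, user) tuples for failed logins -- invented sample data
failures = [(1000, "1.2.3.4", f"user{i}") for i in range(25)] + \
           [(1000, "5.6.7.8", "alice"), (1400, "5.6.7.8", "alice")]

buckets = defaultdict(lambda: {"users": set(), "attempts": 0})
for ts, ip, user in failures:
    key = (ip, ts - ts % SPAN)        # bucket _time span=5m
    buckets[key]["users"].add(user)
    buckets[key]["attempts"] += 1

# where attempts > 20 OR unique_users > 10
flagged = [(ip, t) for (ip, t), b in buckets.items()
           if b["attempts"] > 20 or len(b["users"]) > 10]
print(flagged)  # [('1.2.3.4', 900)]
```

The spraying IP trips both thresholds in one bucket, while the slow retry against a single account stays under them.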
Privilege Escalation
# Sudo usage monitoring
index=security sourcetype=linux_secure "sudo"
|rex field=_raw "sudo:\s+(?<user>\w+)\s+:\s+(?<command>.*)"
|stats count, values(command) as commands by user
|sort -count
Network Traffic Analysis
# Large data transfers
index=network
|stats sum(bytes_out) as total_bytes by src_ip, dest_ip
|where total_bytes > 1073741824 # 1GB
|eval total_gb=round(total_bytes/1073741824, 2)
|sort -total_gb
Malware Detection
# Suspicious process execution
index=endpoint process_name=*
|search (process_name="*.tmp" OR process_name="*.exe" OR process_name="powershell.exe")
|stats count, values(command_line) as commands by host, process_name
|where count > 10
Performance Optimization
Search Optimization
# Use specific indexes
index=main sourcetype=access_combined
# Filter early in search
index=main error earliest=-1h|stats count by host
# Filter in the base search rather than piping to where
index=main status=404|stats count by uri
# Avoid wildcards at the beginning
index=main uri="/api/*" NOT uri="*debug*"
Field Extraction Optimization
# Extract only needed fields
index=main|rex field=_raw "user=(?<user>\w+)"|fields user, _time
# Use field extraction in search
index=main user=admin|stats count
# Limit field extraction scope
index=main|head 1000|rex field=_raw "pattern"
Memory and CPU Optimization
# Use summary indexing for frequent searches
|collect index=summary_index source="daily_stats"
# Summary-indexing variant of stats
|sistats count by user
# Limit search scope
index=main earliest=-1h latest=now|head 10000
Configuration and Administration
Index Management
# Create new index
/opt/splunk/bin/splunk add index myindex -maxDataSize 1000 -maxHotBuckets 10
# List indexes
/opt/splunk/bin/splunk list index
# Clean index
/opt/splunk/bin/splunk clean eventdata -index myindex
User Management
# Add user
/opt/splunk/bin/splunk add user username -password password -role user
# List users
/opt/splunk/bin/splunk list user
# Change user password
/opt/splunk/bin/splunk edit user username -password newpassword
Data Input Configuration
# Monitor file
/opt/splunk/bin/splunk add monitor /var/log/messages -index main
# Monitor directory
/opt/splunk/bin/splunk add monitor /var/log/ -index main
# Network input
/opt/splunk/bin/splunk add tcp 9999 -sourcetype syslog
Forwarder Configuration
# Add forward server
/opt/splunkforwarder/bin/splunk add forward-server splunk-server:9997
# Add monitor to forwarder
/opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/access.log -index web
# List forward servers
/opt/splunkforwarder/bin/splunk list forward-server
REST API Usage
Authentication
# Get session key
curl -k https://localhost:8089/services/auth/login \
-d username=admin -d password=password
# Use session key
curl -k -H "Authorization: Splunk <session_key>" \
https://localhost:8089/services/search/jobs
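The login endpoint returns the session key as XML (`<response><sessionKey>…</sessionKey></response>`). A Python sketch of extracting it and building the `Authorization` header; the XML body here is a representative sample, and in practice an HTTP client pointed at port 8089 would supply the real response:

```python
import xml.etree.ElementTree as ET

# Representative response body from /services/auth/login (sample value)
xml_body = "<response><sessionKey>abc123SESSIONKEY</sessionKey></response>"

session_key = ET.fromstring(xml_body).findtext("sessionKey")
headers = {"Authorization": f"Splunk {session_key}"}
print(headers)  # {'Authorization': 'Splunk abc123SESSIONKEY'}
```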
Search API
# Create search job
curl -k -u admin:password https://localhost:8089/services/search/jobs \
-d search="search index=main|head 10"
# Get search results
curl -k -u admin:password \
https://localhost:8089/services/search/jobs/<sid>/results \
--get -d output_mode=json
Data Input API
# List data inputs
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor
# Add monitor input
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor \
-d name=/var/log/myapp.log -d index=main
Troubleshooting
Common Issues
# Check Splunk status
/opt/splunk/bin/splunk status
# Check license usage
/opt/splunk/bin/splunk list licenser-localslave
# Restart Splunk
/opt/splunk/bin/splunk restart
# Check configuration
/opt/splunk/bin/splunk btool inputs list
/opt/splunk/bin/splunk btool outputs list
Log Analysis
# Check Splunk internal logs
tail -f /opt/splunk/var/log/splunk/splunkd.log
# Check metrics
tail -f /opt/splunk/var/log/splunk/metrics.log
# Check audit logs
tail -f /opt/splunk/var/log/splunk/audit.log
Performance Monitoring
# Internal Splunk metrics
index=_internal source=*metrics.log group=per_index_thruput
|stats sum(kb) as total_kb by series
|sort -total_kb
# Search performance
index=_audit action=search
|stats avg(total_run_time) as avg_runtime by user
|sort -avg_runtime
# License usage
index=_internal source=*license_usage.log type=Usage
|stats sum(b) as bytes by idx
|eval GB=round(bytes/1024/1024/1024,2)
|sort -GB
Best Practices
Search Best Practices
- **Use specific time ranges** - Avoid "All time" searches
- **Filter early** - Apply index, sourcetype, and host filters first
- **Use fast commands** - stats, chart, and timechart are faster than transaction
- **Avoid wildcards** - Especially at the start of search terms
- **Use summary indexing** - For frequently run searches
Data Onboarding Best Practices
- **Plan your index strategy** - Separate indexes by data type and retention
- **Configure sourcetypes** - Ensure proper field extraction and parsing
- **Set up proper timestamp extraction** - Ensure accurate event times
- **Use Universal Forwarders** - For distributed data collection
- **Monitor license usage** - Stay within license limits
Security Best Practices
- **Use role-based access** - Limit user permissions appropriately
- **Enable SSL** - For the web interface and forwarder communication
- **Take regular backups** - Back up configuration and critical data
- **Monitor admin activities** - Audit configuration changes
- **Keep Splunk updated** - Apply security patches regularly