
Splunk Cheatsheet


Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data through a web-style interface. It is widely used for security information and event management (SIEM), IT operations, and business analytics.

Installation and Setup

Download and Install Splunk Enterprise

```bash
# Download Splunk Enterprise (Linux)
wget -O splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
  'https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'

# Install on Ubuntu/Debian
sudo dpkg -i splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb

# Install on CentOS/RHEL
sudo rpm -i splunk-9.1.2-b6b9c8185839.x86_64.rpm

# Start Splunk
sudo /opt/splunk/bin/splunk start --accept-license

# Enable boot start
sudo /opt/splunk/bin/splunk enable boot-start
```

Splunk Universal Forwarder

```bash
# Download Universal Forwarder
wget -O splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb \
  'https://download.splunk.com/products/universalforwarder/releases/9.1.2/linux/splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb'

# Install Universal Forwarder
sudo dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb

# Start forwarder
sudo /opt/splunkforwarder/bin/splunk start --accept-license
```

Docker Installation

```bash
# Run Splunk Enterprise in Docker
docker run -d -p 8000:8000 -p 9997:9997 \
  -e SPLUNK_START_ARGS='--accept-license' \
  -e SPLUNK_PASSWORD='changeme123' \
  --name splunk splunk/splunk:latest

# Run Universal Forwarder in Docker
docker run -d -p 9997:9997 \
  -e SPLUNK_START_ARGS='--accept-license' \
  -e SPLUNK_PASSWORD='changeme123' \
  --name splunk-forwarder splunk/universalforwarder:latest
```

Basic Search Commands

Search Basics

```spl
# Basic search
index=main error

# Search with time range
index=main error earliest=-24h latest=now

# Search multiple indexes
(index=main OR index=security) error

# Search with wildcards
index=main source="*access*" status=404

# Case-insensitive search (search terms are case-insensitive by default)
index=main Error OR error OR ERROR
```

Search Operators

```spl
# AND operator (implicit)
index=main error failed

# OR operator
index=main (error OR warning)

# NOT operator
index=main error NOT warning

# Field searches
index=main host=webserver01 sourcetype=access_combined

# Quoted strings
index=main "connection refused"

# Field existence
index=main user=*

# Field non-existence
index=main NOT user=*
```

Time Modifiers

```spl
# Relative time
earliest=-1h latest=now
earliest=-7d@d latest=@d
earliest=-1mon@mon latest=@mon

# Absolute time
earliest="01/01/2024:00:00:00" latest="01/31/2024:23:59:59"

# Snap to time
earliest=-1d@d latest=@d   # Yesterday from midnight to midnight
earliest=@w0 latest=@w6    # This week from Sunday to Saturday
```

Data Processing Commands

Field Extraction

```spl
# Extract fields with regex
index=main|rex field=_raw "(?<user>\w+)@(?<domain>\w+\.\w+)"

# Extract multiple fields
index=main|rex "user=(?<user>\w+).*ip=(?<ip>\d+\.\d+\.\d+\.\d+)"

# Extract with named groups
index=main|rex field=message "Error: (?<error_code>\d+) - (?<error_message>.*)"

# Extract fields from specific field
index=main|rex field=url "\/(?<category>\w+)\/(?<item>\w+)"
```

Field Operations

```spl
# Create new fields
index=main|eval new_field=field1+field2

# String operations
index=main|eval upper_user=upper(user)
index=main|eval user_domain=user."@".domain

# Conditional fields
index=main|eval status_desc=case(
    status>=200 AND status<300, "Success",
    status>=400 AND status<500, "Client Error",
    status>=500, "Server Error",
    1=1, "Unknown")

# Mathematical operations
index=main|eval response_time_ms=response_time*1000
index=main|eval percentage=round((part/total)*100, 2)
```

Data Transformation

```spl
# Remove fields
index=main|fields - _raw, _time

# Keep only specific fields
index=main|fields user, ip, action

# Rename fields
index=main|rename src_ip as source_ip, dst_ip as dest_ip

# Convert field types
index=main|eval response_time=tonumber(response_time)
index=main|eval timestamp=strftime(_time, "%Y-%m-%d %H:%M:%S")
```

Statistical Commands

Basic Statistics

```spl
# Count events
index=main|stats count

# Count by field
index=main|stats count by user

# Multiple statistics
index=main|stats count, avg(response_time), max(bytes) by host

# Distinct count
index=main|stats dc(user) as unique_users

# List unique values
index=main|stats values(user) as users by host
```

Advanced Statistics

```spl
# Percentiles
index=main|stats perc50(response_time), perc95(response_time), perc99(response_time)

# Standard deviation
index=main|stats avg(response_time), stdev(response_time)

# Range and variance
index=main|stats min(response_time), max(response_time), range(response_time), var(response_time)

# First and last values
index=main|stats first(user), last(user) by session_id
```

Time-Based Statistics

```spl
# Statistics over time
index=main|timechart span=1h count by status

# Average over time
index=main|timechart span=5m avg(response_time)

# Multiple metrics over time
index=main|timechart span=1h count, avg(response_time), max(bytes)

# Fill null values
index=main|timechart span=1h count|fillnull value=0
```

Filtering and Sorting

The where Command

```spl
# Filter results
index=main|stats count by user|where count > 100

# String comparisons
index=main|where like(user, "admin%")
index=main|where match(email, ".*@company.com")

# Numeric comparisons
index=main|where response_time > 5.0
# More than 1 MB
index=main|where bytes > 1024*1024
```

search and where

```spl
# Search command for filtering
index=main|search user=admin OR user=root

# Complex search conditions
index=main|search (status>=400 AND status<500) OR (response_time>10)

# Search with wildcards
index=main|search user="admin*" OR user="*admin*"
```

Sorting

```spl
# Sort ascending
index=main|stats count by user|sort user

# Sort descending
index=main|stats count by user|sort -count

# Multiple sort fields
index=main|stats count, avg(response_time) by user|sort -count, user

# Sort with limit
index=main|stats count by user|sort -count|head 10
```

Advanced Search Techniques

Subsearches

```spl
# Basic subsearch
index=main user=[search index=security action=login|return user]

# Subsearch with formatting
index=main [search index=security failed_login|stats count by user|where count>5|format]

# Subsearch with specific return
index=main ip=[search index=blacklist|return ip]
```

Joins

```spl
# Inner join
index=main|join user [search index=user_info|fields user, department]

# Left join
index=main|join type=left user [search index=user_info|fields user, department]

# Join with multiple fields
index=main|join user, host [search index=inventory|fields user, host, asset_tag]
```

Lookups

```spl
# CSV lookup
index=main|lookup user_lookup.csv user OUTPUT department, manager

# Automatic lookup (configured in transforms.conf)
index=main|lookup geoip clientip

# External lookup
index=main|lookup dnslookup ip OUTPUT hostname
```

Transactions

```spl
# Group events into transactions
index=main|transaction user startswith="login" endswith="logout"

# Transaction with time constraints
index=main|transaction user maxspan=1h maxpause=10m

# Transaction with event count
index=main|transaction session_id maxevents=100

# Transaction statistics
index=main|transaction user|stats avg(duration), count by user
```

Visualization and Reporting

Chart Commands

```spl
# Simple chart
index=main|chart count by status

# Chart over time
index=main|chart count over _time by status

# Chart with functions
index=main|chart avg(response_time), max(response_time) over host by status

# Chart with bins
index=main|chart count over response_time bins=10
```

Top and Rare

```spl
# Top values
index=main|top user

# Top with limit
index=main|top limit=20 user

# Top by another field
index=main|top user by host

# Rare values
index=main|rare user

# Top with percentage
index=main|top user showperc=true
```

Geostats

```spl
# Geographic statistics
index=main|iplocation clientip|geostats count by Country

# Geostats with latfield and longfield
index=main|geostats latfield=latitude longfield=longitude count by region

# Geostats with globallimit
index=main|iplocation clientip|geostats globallimit=10 count by City
```

Security and SIEM Use Cases

Failed Login Detection

```spl
# Failed login attempts
index=security sourcetype=linux_secure "Failed password"
|rex field=_raw "Failed password for (?<user>\w+) from (?<src_ip>\d+\.\d+\.\d+\.\d+)"
|stats count by user, src_ip
|where count > 5
|sort -count
```

Brute Force Detection

```spl
# Brute force attack detection
index=security action=login result=failure
|bucket _time span=5m
|stats dc(user) as unique_users, count as attempts by src_ip, _time
|where attempts > 20 OR unique_users > 10
|sort -attempts
```

Privilege Escalation

```spl
# Sudo usage monitoring
index=security sourcetype=linux_secure "sudo"
|rex field=_raw "sudo:\s+(?<user>\w+)\s+:\s+(?<command>.*)"
|stats count, values(command) as commands by user
|sort -count
```

Network Traffic Analysis

```spl
# Large data transfers (1073741824 bytes = 1 GB)
index=network
|stats sum(bytes_out) as total_bytes by src_ip, dest_ip
|where total_bytes > 1073741824
|eval total_gb=round(total_bytes/1073741824, 2)
|sort -total_gb
```

Malware Detection

```spl
# Suspicious process execution
index=endpoint process_name=*
|search (process_name="*.tmp" OR process_name="*.exe" OR process_name="powershell.exe")
|stats count, values(command_line) as commands by host, process_name
|where count > 10
```

Performance Optimization

Search Optimization

```spl
# Use specific indexes
index=main sourcetype=access_combined

# Filter early in search
index=main error earliest=-1h|stats count by host

# Use fast commands first
index=main|where status=404|stats count by uri

# Avoid wildcards at the beginning
index=main uri="/api/*" NOT uri="debug*"
```

Field Extraction Optimization

```spl
# Extract only needed fields
index=main|rex field=_raw "user=(?<user>\w+)"|fields user, _time

# Use field extraction in search
index=main user=admin|stats count

# Limit field extraction scope
index=main|head 1000|rex field=_raw "pattern"
```

Memory and CPU Optimization

```spl
# Use summary indexing for frequent searches
|collect index=summary_index source="daily_stats"

# Use report acceleration
|sistats count by user

# Limit search scope
index=main earliest=-1h latest=now|head 10000
```

Configuration and Administration

Index Management

```bash
# Create new index
/opt/splunk/bin/splunk add index myindex -maxDataSize 1000 -maxHotBuckets 10

# List indexes
/opt/splunk/bin/splunk list index

# Clean index
/opt/splunk/bin/splunk clean eventdata -index myindex
```

User Management

```bash
# Add user
/opt/splunk/bin/splunk add user username -password password -role user

# List users
/opt/splunk/bin/splunk list user

# Change user password
/opt/splunk/bin/splunk edit user username -password newpassword
```

Data Input Configuration

```bash
# Monitor file
/opt/splunk/bin/splunk add monitor /var/log/messages -index main

# Monitor directory
/opt/splunk/bin/splunk add monitor /var/log/ -index main

# Network input
/opt/splunk/bin/splunk add tcp 9999 -sourcetype syslog
```

Forwarder Configuration

```bash
# Add forward server
/opt/splunkforwarder/bin/splunk add forward-server splunk-server:9997

# Add monitor to forwarder
/opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/access.log -index web

# List forward servers
/opt/splunkforwarder/bin/splunk list forward-server
```

REST API Usage

Authentication

```bash
# Get session key
curl -k https://localhost:8089/services/auth/login \
  -d username=admin -d password=password

# Use session key
curl -k -H "Authorization: Splunk <session_key>" \
  https://localhost:8089/services/search/jobs
```

Search API

```bash
# Create search job
curl -k -u admin:password https://localhost:8089/services/search/jobs \
  -d search="search index=main|head 10"

# Get search results
curl -k -u admin:password \
  https://localhost:8089/services/search/jobs/<sid>/results \
  --get -d output_mode=json
```
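The two curl calls follow a create-then-fetch workflow: POST the SPL to create a job, then read results from the job's sid. As a hedged sketch, the URL and payload construction can be expressed in Python (server URL and helper names are illustrative assumptions, not part of the Splunk SDK):

```python
# Sketch of the /services/search/jobs workflow used by the curl calls above.
# BASE, jobs_url, results_url, and search_payload are hypothetical helpers.
import urllib.parse

BASE = "https://localhost:8089"

def jobs_url(base: str = BASE) -> str:
    """Endpoint for creating a new search job (POST)."""
    return f"{base}/services/search/jobs"

def results_url(sid: str, base: str = BASE) -> str:
    """Endpoint for fetching results of a finished job by its sid."""
    return f"{base}/services/search/jobs/{urllib.parse.quote(sid)}/results?output_mode=json"

def search_payload(spl: str) -> bytes:
    """Form-encode the query; a bare SPL string must be prefixed with 'search'."""
    if not spl.lstrip().startswith(("search ", "|")):
        spl = "search " + spl
    return urllib.parse.urlencode({"search": spl}).encode()
```

An actual request would send `search_payload(...)` to `jobs_url()` with basic auth or a `Authorization: Splunk <session_key>` header, poll the job, then GET `results_url(sid)`.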

Data Input API

```bash
# List data inputs
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor

# Add monitor input
curl -k -u admin:password https://localhost:8089/services/data/inputs/monitor \
  -d name=/var/log/myapp.log -d index=main
```

Troubleshooting

Common Issues

```bash
# Check Splunk status
/opt/splunk/bin/splunk status

# Check license usage
/opt/splunk/bin/splunk list licenser-localslave

# Restart Splunk
/opt/splunk/bin/splunk restart

# Check configuration
/opt/splunk/bin/splunk btool inputs list
/opt/splunk/bin/splunk btool outputs list
```

Log Analysis

```bash
# Check Splunk internal logs
tail -f /opt/splunk/var/log/splunk/splunkd.log

# Check metrics
tail -f /opt/splunk/var/log/splunk/metrics.log

# Check audit logs
tail -f /opt/splunk/var/log/splunk/audit.log
```

Performance Monitoring

```spl
# Internal Splunk metrics
index=_internal source=*metrics.log group=per_index_thruput
|stats sum(kb) as total_kb by series
|sort -total_kb

# Search performance
index=_audit action=search
|stats avg(total_run_time) as avg_runtime by user
|sort -avg_runtime

# License usage
index=_internal source=*license_usage.log type=Usage
|stats sum(b) as bytes by idx
|eval GB=round(bytes/1024/1024/1024,2)
|sort -GB
```

Best Practices

Search Best Practices

  1. **Use specific time ranges** - Avoid "All time" searches
  2. **Filter early** - Apply index, sourcetype, and host filters first
  3. **Use fast commands** - stats, chart, and timechart are faster than transaction
  4. **Avoid wildcards** - Especially at the beginning of search terms
  5. **Use summary indexing** - For frequently run searches
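Applied together, these guidelines shape a typical optimized search: restrictive filters first, then fast reporting commands. A sketch (index, sourcetype, and field names are illustrative):

```spl
# Filters first (index, sourcetype, host, time), then stats instead of transaction
index=web sourcetype=access_combined host=webserver01 earliest=-4h latest=now status>=500
| stats count, avg(response_time) by uri
| sort -count
| head 20
```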

Data Onboarding Best Practices

  1. **Plan your index strategy** - Separate indexes by data type and retention
  2. **Configure sourcetypes** - Ensure proper field extraction and parsing
  3. **Set up correct time extraction** - Ensure accurate timestamps
  4. **Use Universal Forwarders** - For distributed data collection
  5. **Monitor license usage** - Stay within license limits
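As a sketch of the index-per-datatype idea, a minimal indexes.conf / inputs.conf pair might look like this (index name, paths, and retention value are illustrative assumptions):

```ini
# indexes.conf (on the indexer): a dedicated index with its own retention
[web]
homePath   = $SPLUNK_DB/web/db
coldPath   = $SPLUNK_DB/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
frozenTimePeriodInSecs = 7776000

# inputs.conf (on the Universal Forwarder): route the data with the right sourcetype
[monitor:///var/log/apache2/access.log]
index = web
sourcetype = access_combined
```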

Security Best Practices

  1. **Use role-based access** - Restrict user permissions appropriately
  2. **Enable SSL** - For the web interface and forwarder communication
  3. **Take regular backups** - Back up configuration and critical data
  4. **Monitor admin activity** - Track configuration changes
  5. **Keep Splunk updated** - Apply security patches regularly
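The first two points can be sketched in configuration (role name, allowed indexes, and certificate path are illustrative assumptions):

```ini
# web.conf: serve Splunk Web over HTTPS
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunkweb.pem

# authorize.conf: a restricted role limited to the security index
[role_analyst]
importRoles = user
srchIndexesAllowed = security
srchIndexesDefault = security
```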

Resources