tshark Cheatsheet

Overview

tshark is the command-line version of Wireshark, a powerful network protocol analyzer. It lets you capture and analyze network traffic from the command line, which makes it ideal for automated analysis, remote troubleshooting, and integration into scripts and monitoring systems.

Key Features

  • **Command-Line Interface**: Full Wireshark functionality without a GUI
  • **Live Capture**: Real-time packet capture and analysis
  • **File Analysis**: Read and analyze existing capture files
  • **Flexible Filters**: Powerful display and capture filters
  • **Multiple Output Formats**: Text, JSON, XML, CSV, and more
  • **Protocol Dissection**: Deep packet analysis for hundreds of protocols
  • **Scripting Integration**: Perfect for automation and monitoring (quick-start example below)
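
The one-liner below ties several of these features together. It is only a quick-start sketch: the interface name eth0 is a placeholder (run `tshark -D` to list the interfaces on your system).

```bash
# Capture 20 DNS queries on eth0 and print who asked for which name
tshark -i eth0 -c 20 -Y "dns.flags.response == 0" \
    -T fields -e ip.src -e dns.qry.name
```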

Installation

Package Manager Installation

```bash
# Ubuntu/Debian
sudo apt update
sudo apt install tshark

# RHEL/CentOS/Fedora
sudo yum install wireshark-cli
# or
sudo dnf install wireshark-cli

# Arch Linux
sudo pacman -S wireshark-cli

# macOS (Homebrew)
brew install wireshark

# Windows (Chocolatey)
choco install wireshark
```

Source Installation

```bash
# Download and compile from source
wget https://www.wireshark.org/download/src/wireshark-latest.tar.xz
tar -xf wireshark-latest.tar.xz
cd wireshark-*

# Install dependencies (Ubuntu/Debian)
sudo apt install cmake build-essential libglib2.0-dev libpcap-dev

# Configure and build
mkdir build && cd build
cmake ..
make -j$(nproc)
sudo make install
```

Permission Setup

```bash
# Add user to wireshark group (Linux)
sudo usermod -a -G wireshark $USER

# Set capabilities for non-root capture
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

# Verify permissions
getcap /usr/bin/dumpcap
```

Basic Usage

Interface Management

```bash
# List available interfaces
tshark -D

# Capture from specific interface
tshark -i eth0

# Capture from multiple interfaces
tshark -i eth0 -i wlan0

# Capture from any interface
tshark -i any

# List link-layer types supported by an interface
tshark -i eth0 -L
```

Basic Capture

```bash
# Capture packets (Ctrl+C to stop)
tshark -i eth0

# Capture specific number of packets
tshark -i eth0 -c 100

# Capture for specific duration (seconds)
tshark -i eth0 -a duration:60

# Capture to file
tshark -i eth0 -w capture.pcap

# Capture with ring buffer (filesize is in kB: five files of ~100 MB)
tshark -i eth0 -w capture.pcap -b filesize:100000 -b files:5
```

Reading Capture Files

```bash
# Read pcap file
tshark -r capture.pcap

# Read specific packets
tshark -r capture.pcap -c 10

# Read with packet range
tshark -r capture.pcap -Y "frame.number >= 100 and frame.number <= 200"

# Show file information
tshark -r capture.pcap -q -z io,stat,0
```

Display Filters

Basic Filtering

```bash
# Filter by protocol
tshark -i eth0 -Y "http"
tshark -i eth0 -Y "tcp"
tshark -i eth0 -Y "udp"
tshark -i eth0 -Y "dns"

# Filter by IP address
tshark -i eth0 -Y "ip.addr == 192.168.1.1"
tshark -i eth0 -Y "ip.src == 192.168.1.1"
tshark -i eth0 -Y "ip.dst == 192.168.1.1"

# Filter by port
tshark -i eth0 -Y "tcp.port == 80"
tshark -i eth0 -Y "udp.port == 53"
tshark -i eth0 -Y "tcp.srcport == 443"
```

Advanced Filtering

```bash
# Combine filters with logical operators
tshark -i eth0 -Y "tcp.port == 80 and ip.addr == 192.168.1.1"
tshark -i eth0 -Y "http or tls"
tshark -i eth0 -Y "not arp"

# Filter by packet size
tshark -i eth0 -Y "frame.len > 1000"
tshark -i eth0 -Y "tcp.len > 0"

# Filter by time
tshark -i eth0 -Y "frame.time >= \"2024-01-01 00:00:00\""

# Filter by MAC address
tshark -i eth0 -Y "eth.addr == aa:bb:cc:dd:ee:ff"

# Filter by network range
tshark -i eth0 -Y "ip.addr == 192.168.1.0/24"
```

Protocol-Specific Filters

```bash
# HTTP filters
tshark -i eth0 -Y "http.request.method == \"GET\""
tshark -i eth0 -Y "http.response.code == 200"
tshark -i eth0 -Y "http.host contains \"example.com\""

# DNS filters
tshark -i eth0 -Y "dns.qry.name contains \"google\""
tshark -i eth0 -Y "dns.flags.response == 1"

# TCP filters
tshark -i eth0 -Y "tcp.flags.syn == 1"
tshark -i eth0 -Y "tcp.flags.reset == 1"
tshark -i eth0 -Y "tcp.analysis.retransmission"

# SSL/TLS filters
tshark -i eth0 -Y "ssl.handshake.type == 1"
tshark -i eth0 -Y "tls.record.content_type == 22"
```

Capture Filters

Basic Capture Filters

```bash
# Capture specific host
tshark -i eth0 -f "host 192.168.1.1"

# Capture specific port
tshark -i eth0 -f "port 80"
tshark -i eth0 -f "tcp port 443"
tshark -i eth0 -f "udp port 53"

# Capture specific protocol
tshark -i eth0 -f "tcp"
tshark -i eth0 -f "udp"
tshark -i eth0 -f "icmp"

# Capture network range
tshark -i eth0 -f "net 192.168.1.0/24"
```

Advanced Capture Filters

```bash
# Combine filters
tshark -i eth0 -f "host 192.168.1.1 and port 80"
tshark -i eth0 -f "tcp and not port 22"

# Capture by direction
tshark -i eth0 -f "src host 192.168.1.1"
tshark -i eth0 -f "dst port 443"

# Capture by packet size
tshark -i eth0 -f "greater 1000"
tshark -i eth0 -f "less 64"

# Capture multicast/broadcast
tshark -i eth0 -f "multicast"
tshark -i eth0 -f "broadcast"
```

Output Formats and Fields

Output Formats

```bash
# Default output
tshark -i eth0

# Verbose output
tshark -i eth0 -V

# One-line summary
tshark -i eth0 -T fields -e frame.number -e ip.src -e ip.dst

# JSON output
tshark -i eth0 -T json

# XML output
tshark -i eth0 -T pdml

# CSV output
tshark -i eth0 -T fields -E separator=, -e ip.src -e ip.dst -e tcp.port
```

Custom Field Output

```bash
# Basic fields
tshark -r capture.pcap -T fields -e frame.time -e ip.src -e ip.dst

# HTTP fields
tshark -r capture.pcap -Y "http" -T fields -e http.host -e http.request.uri

# DNS fields
tshark -r capture.pcap -Y "dns" -T fields -e dns.qry.name -e dns.resp.addr

# TCP fields
tshark -r capture.pcap -Y "tcp" -T fields -e tcp.srcport -e tcp.dstport -e tcp.len

# Custom separator
tshark -r capture.pcap -T fields -E separator="|" -e ip.src -e ip.dst
```

Field Extraction

```bash
# Extract unique values
tshark -r capture.pcap -T fields -e ip.src | sort | uniq

# Count occurrences
tshark -r capture.pcap -T fields -e ip.src | sort | uniq -c

# Extract HTTP hosts
tshark -r capture.pcap -Y "http" -T fields -e http.host | sort | uniq

# Extract DNS queries
tshark -r capture.pcap -Y "dns" -T fields -e dns.qry.name | sort | uniq
```

Statistics and Analysis

Basic Statistics

```bash
# I/O statistics
tshark -r capture.pcap -q -z io,stat,1

# Protocol hierarchy
tshark -r capture.pcap -q -z phs

# Conversation statistics
tshark -r capture.pcap -q -z conv,tcp
tshark -r capture.pcap -q -z conv,udp
tshark -r capture.pcap -q -z conv,ip

# Endpoint statistics
tshark -r capture.pcap -q -z endpoints,tcp
tshark -r capture.pcap -q -z endpoints,ip
```

Advanced Statistics

```bash
# HTTP statistics
tshark -r capture.pcap -q -z http,stat
tshark -r capture.pcap -q -z http,tree

# DNS statistics
tshark -r capture.pcap -q -z dns,tree

# Response time statistics
tshark -r capture.pcap -q -z rpc,rtt,tcp

# Packet length statistics
tshark -r capture.pcap -q -z plen,tree

# Expert information
tshark -r capture.pcap -q -z expert
```

Custom Statistics

```bash
# Count packets by protocol
tshark -r capture.pcap -T fields -e _ws.col.Protocol | sort | uniq -c

# Bandwidth usage by IP
tshark -r capture.pcap -T fields -e ip.src -e frame.len | \
    awk '{bytes[$1]+=$2} END {for(ip in bytes) print ip, bytes[ip]}'

# Top talkers
tshark -r capture.pcap -T fields -e ip.src | sort | uniq -c | sort -nr | head -10

# Port usage statistics
tshark -r capture.pcap -Y "tcp" -T fields -e tcp.dstport | sort | uniq -c | sort -nr
```

Advanced Features

Decryption

```bash
# SSL/TLS decryption with key file
tshark -r capture.pcap -o ssl.keylog_file:sslkeys.log -Y "http"

# WEP decryption
tshark -r capture.pcap -o wlan.wep_key1:"01:02:03:04:05"

# WPA decryption
tshark -r capture.pcap -o wlan.wpa-pwd:"password:SSID"
```

Protocol Dissection

```bash
# Decode traffic on a non-standard port as HTTP ("decode as")
tshark -r capture.pcap -d tcp.port==8080,http

# Decode UDP port 1234 as DNS
tshark -r capture.pcap -d udp.port==1234,dns

# Show available dissectors
tshark -G protocols

# Show protocol fields
tshark -G fields | grep http
```

Time and Timestamps

```bash
# Absolute timestamps
tshark -r capture.pcap -t a

# Relative timestamps
tshark -r capture.pcap -t r

# Delta timestamps
tshark -r capture.pcap -t d

# Epoch timestamps
tshark -r capture.pcap -t e

# Custom time format
tshark -r capture.pcap -t a -T fields -e frame.time_epoch
```

Automation Scripts

Network Monitoring Script

```bash
#!/bin/bash
# network-monitor.sh

set -e

# Configuration
INTERFACE="${INTERFACE:-eth0}"
CAPTURE_DIR="${CAPTURE_DIR:-/var/log/network-captures}"
ALERT_THRESHOLD="${ALERT_THRESHOLD:-1000}"
MONITORING_DURATION="${MONITORING_DURATION:-300}"

# Create capture directory
mkdir -p "$CAPTURE_DIR"

# Function to capture and analyze traffic
monitor_network() {
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local capture_file="$CAPTURE_DIR/capture_$timestamp.pcap"

    echo "Starting network monitoring on $INTERFACE..."
    echo "Capture file: $capture_file"

    # Start capture in background
    tshark -i "$INTERFACE" -w "$capture_file" -a duration:$MONITORING_DURATION &
    local tshark_pid=$!

    # Wait for capture to complete
    wait $tshark_pid

    echo "Capture completed. Analyzing..."

    # Analyze capture
    analyze_capture "$capture_file"
}

# Function to analyze capture file
analyze_capture() {
    local capture_file="$1"

    echo "=== Network Analysis Report ==="
    echo "File: $capture_file"
    echo "Timestamp: $(date)"
    echo

    # Basic statistics
    echo "--- Basic Statistics ---"
    tshark -r "$capture_file" -q -z io,stat,0
    echo

    # Protocol hierarchy
    echo "--- Protocol Hierarchy ---"
    tshark -r "$capture_file" -q -z phs
    echo

    # Top talkers
    echo "--- Top Source IPs ---"
    tshark -r "$capture_file" -T fields -e ip.src | sort | uniq -c | sort -nr | head -10
    echo

    # Top destinations
    echo "--- Top Destination IPs ---"
    tshark -r "$capture_file" -T fields -e ip.dst | sort | uniq -c | sort -nr | head -10
    echo

    # Top ports
    echo "--- Top TCP Ports ---"
    tshark -r "$capture_file" -Y "tcp" -T fields -e tcp.dstport | sort | uniq -c | sort -nr | head -10
    echo

    # Security analysis
    security_analysis "$capture_file"
}

# Function for security analysis
security_analysis() {
    local capture_file="$1"

    echo "--- Security Analysis ---"

    # Check for suspicious activity
    local syn_flood=$(tshark -r "$capture_file" -Y "tcp.flags.syn==1 and tcp.flags.ack==0" | wc -l)
    local port_scans=$(tshark -r "$capture_file" -T fields -e ip.src -e tcp.dstport | \
                       awk '{ports[$1]++} END {for(ip in ports) if(ports[ip]>50) print ip, ports[ip]}')

    echo "SYN packets (potential SYN flood): $syn_flood"

    if [ "$syn_flood" -gt "$ALERT_THRESHOLD" ]; then
        echo "⚠️  WARNING: High number of SYN packets detected!"
        send_alert "SYN Flood Alert" "Detected $syn_flood SYN packets"
    fi

    if [ -n "$port_scans" ]; then
        echo "⚠️  WARNING: Potential port scans detected:"
        echo "$port_scans"
        send_alert "Port Scan Alert" "Potential port scanning activity detected"
    fi

    # Check for unusual protocols
    echo "--- Unusual Protocol Activity ---"
    tshark -r "$capture_file" -T fields -e _ws.col.Protocol | sort | uniq -c | sort -nr | head -20
}

# Function to send alerts
send_alert() {
    local subject="$1"
    local message="$2"

    echo "🚨 ALERT: $subject"
    echo "Details: $message"

    # Send email alert (if configured)
    if [ -n "$ALERT_EMAIL" ]; then
        echo "$message" | mail -s "$subject" "$ALERT_EMAIL"
    fi

    # Send to syslog
    logger -p local0.warning "Network Monitor Alert: $subject - $message"
}

# Main execution
main() {
    case "${1:-monitor}" in
        "monitor")
            monitor_network
            ;;
        "analyze")
            if [ -z "$2" ]; then
                echo "Usage: $0 analyze <capture_file>"
                exit 1
            fi
            analyze_capture "$2"
            ;;
        "continuous")
            while true; do
                monitor_network
                sleep 60
            done
            ;;
        *)
            echo "Usage: $0 {monitor|analyze|continuous}"
            exit 1
            ;;
    esac
}

main "$@"
```

HTTP Traffic Analyzer

```python
#!/usr/bin/env python3
# http-analyzer.py

import argparse
import json
import subprocess
import sys
from collections import Counter


class HTTPAnalyzer:
    def __init__(self, capture_file):
        self.capture_file = capture_file
        self.http_requests = []
        self.http_responses = []

    def extract_http_traffic(self):
        """Extract HTTP traffic from capture file"""
        print("Extracting HTTP traffic...")

        # Extract HTTP requests
        cmd_requests = [
            'tshark', '-r', self.capture_file,
            '-Y', 'http.request',
            '-T', 'json'
        ]

        try:
            result = subprocess.run(cmd_requests, capture_output=True, text=True, check=True)
            if result.stdout.strip():
                self.http_requests = json.loads(result.stdout)
        except subprocess.CalledProcessError as e:
            print(f"Error extracting HTTP requests: {e}")
            return False

        # Extract HTTP responses
        cmd_responses = [
            'tshark', '-r', self.capture_file,
            '-Y', 'http.response',
            '-T', 'json'
        ]

        try:
            result = subprocess.run(cmd_responses, capture_output=True, text=True, check=True)
            if result.stdout.strip():
                self.http_responses = json.loads(result.stdout)
        except subprocess.CalledProcessError as e:
            print(f"Error extracting HTTP responses: {e}")
            return False

        return True

    def analyze_requests(self):
        """Analyze HTTP requests"""
        print("\n=== HTTP Request Analysis ===")

        if not self.http_requests:
            print("No HTTP requests found")
            return

        methods = Counter()
        hosts = Counter()
        user_agents = Counter()
        urls = []

        for packet in self.http_requests:
            try:
                http_layer = packet['_source']['layers']['http']

                # Extract method
                if 'http.request.method' in http_layer:
                    methods[http_layer['http.request.method']] += 1

                # Extract host
                if 'http.host' in http_layer:
                    hosts[http_layer['http.host']] += 1

                # Extract User-Agent
                if 'http.user_agent' in http_layer:
                    user_agents[http_layer['http.user_agent']] += 1

                # Extract full URL
                if 'http.host' in http_layer and 'http.request.uri' in http_layer:
                    url = f"http://{http_layer['http.host']}{http_layer['http.request.uri']}"
                    urls.append(url)

            except KeyError:
                continue

        # Print analysis
        print(f"Total HTTP requests: {len(self.http_requests)}")

        print("\n--- Top HTTP Methods ---")
        for method, count in methods.most_common(10):
            print(f"{method}: {count}")

        print("\n--- Top Hosts ---")
        for host, count in hosts.most_common(10):
            print(f"{host}: {count}")

        print("\n--- Top User Agents ---")
        for ua, count in user_agents.most_common(5):
            print(f"{ua}: {count}")

        print("\n--- Sample URLs ---")
        for url in urls[:20]:
            print(url)

    def analyze_responses(self):
        """Analyze HTTP responses"""
        print("\n=== HTTP Response Analysis ===")

        if not self.http_responses:
            print("No HTTP responses found")
            return

        status_codes = Counter()
        content_types = Counter()
        servers = Counter()

        for packet in self.http_responses:
            try:
                http_layer = packet['_source']['layers']['http']

                # Extract status code
                if 'http.response.code' in http_layer:
                    status_codes[http_layer['http.response.code']] += 1

                # Extract content type
                if 'http.content_type' in http_layer:
                    content_types[http_layer['http.content_type']] += 1

                # Extract server
                if 'http.server' in http_layer:
                    servers[http_layer['http.server']] += 1

            except KeyError:
                continue

        # Print analysis
        print(f"Total HTTP responses: {len(self.http_responses)}")

        print("\n--- Status Codes ---")
        for code, count in status_codes.most_common(10):
            print(f"{code}: {count}")

        print("\n--- Content Types ---")
        for ct, count in content_types.most_common(10):
            print(f"{ct}: {count}")

        print("\n--- Servers ---")
        for server, count in servers.most_common(10):
            print(f"{server}: {count}")

    def security_analysis(self):
        """Perform security analysis"""
        print("\n=== Security Analysis ===")

        suspicious_patterns = [
            'admin', 'login', 'password', 'config', 'backup',
            'test', 'debug', 'dev', 'staging', 'api'
        ]

        suspicious_urls = []

        for packet in self.http_requests:
            try:
                http_layer = packet['_source']['layers']['http']

                if 'http.request.uri' in http_layer:
                    uri = http_layer['http.request.uri'].lower()

                    for pattern in suspicious_patterns:
                        if pattern in uri:
                            if 'http.host' in http_layer:
                                full_url = f"http://{http_layer['http.host']}{http_layer['http.request.uri']}"
                                suspicious_urls.append(full_url)
                            break

            except KeyError:
                continue

        if suspicious_urls:
            print("⚠️  Potentially suspicious URLs detected:")
            for url in set(suspicious_urls[:20]):
                print(f"  {url}")
        else:
            print("✅ No obviously suspicious URLs detected")

    def generate_report(self, output_file=None):
        """Generate comprehensive report"""
        if not self.extract_http_traffic():
            print("Failed to extract HTTP traffic")
            return

        # Redirect output to file if specified
        if output_file:
            sys.stdout = open(output_file, 'w')

        print("HTTP Traffic Analysis Report")
        print(f"Capture File: {self.capture_file}")
        print(f"Generated: {subprocess.run(['date'], capture_output=True, text=True).stdout.strip()}")
        print("=" * 50)

        self.analyze_requests()
        self.analyze_responses()
        self.security_analysis()

        if output_file:
            sys.stdout.close()
            sys.stdout = sys.__stdout__
            print(f"Report saved to: {output_file}")


def main():
    parser = argparse.ArgumentParser(description='HTTP Traffic Analyzer')
    parser.add_argument('capture_file', help='Path to capture file')
    parser.add_argument('-o', '--output', help='Output report file')

    args = parser.parse_args()

    analyzer = HTTPAnalyzer(args.capture_file)
    analyzer.generate_report(args.output)


if __name__ == "__main__":
    main()
```

DNS Analysis Script

```bash
#!/bin/bash
# dns-analyzer.sh

set -e

# Configuration
CAPTURE_FILE="$1"
OUTPUT_DIR="${OUTPUT_DIR:-./dns-analysis}"

if [ -z "$CAPTURE_FILE" ]; then
    echo "Usage: $0 <capture_file>"
    exit 1
fi

# Create output directory
mkdir -p "$OUTPUT_DIR"

echo "Analyzing DNS traffic in $CAPTURE_FILE..."

# Extract DNS queries
echo "Extracting DNS queries..."
tshark -r "$CAPTURE_FILE" -Y "dns.flags.response == 0" \
    -T fields -e frame.time -e ip.src -e dns.qry.name -e dns.qry.type \
    > "$OUTPUT_DIR/dns_queries.txt"

# Extract DNS responses
echo "Extracting DNS responses..."
tshark -r "$CAPTURE_FILE" -Y "dns.flags.response == 1" \
    -T fields -e frame.time -e ip.src -e dns.qry.name -e dns.resp.addr \
    > "$OUTPUT_DIR/dns_responses.txt"

# Analyze queries (fields are tab-separated, so awk uses -F'\t')
echo "Analyzing DNS queries..."
cat > "$OUTPUT_DIR/dns_analysis.txt" << EOF
DNS Traffic Analysis Report
===========================
Capture File: $CAPTURE_FILE
Generated: $(date)

--- Query Statistics ---
Total DNS queries: $(wc -l < "$OUTPUT_DIR/dns_queries.txt")
Total DNS responses: $(wc -l < "$OUTPUT_DIR/dns_responses.txt")

--- Top Queried Domains ---
$(awk -F'\t' '{print $3}' "$OUTPUT_DIR/dns_queries.txt" | sort | uniq -c | sort -nr | head -20)

--- Top Query Types ---
$(awk -F'\t' '{print $4}' "$OUTPUT_DIR/dns_queries.txt" | sort | uniq -c | sort -nr)

--- Top DNS Clients ---
$(awk -F'\t' '{print $2}' "$OUTPUT_DIR/dns_queries.txt" | sort | uniq -c | sort -nr | head -10)

--- Suspicious Domains ---
$(awk -F'\t' '{print $3}' "$OUTPUT_DIR/dns_queries.txt" | grep -E "(\.tk$|\.ml$|\.ga$|\.cf$|dga-|random-)" | sort | uniq || echo "None detected")

EOF

echo "Analysis complete. Results saved to $OUTPUT_DIR/"
echo "Summary:"
cat "$OUTPUT_DIR/dns_analysis.txt"
```

CI/CD Integration

GitHub Actions

```yaml
# .github/workflows/network-analysis.yml
name: Network Traffic Analysis

on:
  push:
    paths:
      - 'captures/*.pcap'
  workflow_dispatch:
    inputs:
      capture_file:
        description: 'Capture file to analyze'
        required: true

jobs:
  analyze-traffic:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install tshark
        run: |
          sudo apt update
          sudo apt install -y tshark

      - name: Analyze network traffic
        run: |
          # Create analysis directory
          mkdir -p analysis-results

          # Find capture files
          if [ "${{ github.event.inputs.capture_file }}" ]; then
            CAPTURE_FILE="${{ github.event.inputs.capture_file }}"
          else
            CAPTURE_FILE=$(find captures/ -name "*.pcap" -type f | head -1)
          fi

          if [ -z "$CAPTURE_FILE" ]; then
            echo "No capture file found"
            exit 1
          fi

          echo "Analyzing: $CAPTURE_FILE"

          # Make the file name available to later steps
          echo "CAPTURE_FILE=$CAPTURE_FILE" >> "$GITHUB_ENV"

          # Basic statistics
          tshark -r "$CAPTURE_FILE" -q -z io,stat,0 > analysis-results/basic-stats.txt

          # Protocol hierarchy
          tshark -r "$CAPTURE_FILE" -q -z phs > analysis-results/protocols.txt

          # Top talkers
          tshark -r "$CAPTURE_FILE" -T fields -e ip.src | sort | uniq -c | sort -nr | head -20 > analysis-results/top-sources.txt

          # HTTP analysis
          tshark -r "$CAPTURE_FILE" -Y "http" -T fields -e http.host -e http.request.uri | head -100 > analysis-results/http-requests.txt

          # DNS analysis
          tshark -r "$CAPTURE_FILE" -Y "dns" -T fields -e dns.qry.name | sort | uniq -c | sort -nr | head -50 > analysis-results/dns-queries.txt

      - name: Generate report
        run: |
          cat > analysis-results/report.md << EOF
          # Network Traffic Analysis Report

          **File:** $CAPTURE_FILE
          **Date:** $(date)

          ## Basic Statistics
          \`\`\`
          $(cat analysis-results/basic-stats.txt)
          \`\`\`

          ## Protocol Hierarchy
          \`\`\`
          $(cat analysis-results/protocols.txt)
          \`\`\`

          ## Top Source IPs
          \`\`\`
          $(cat analysis-results/top-sources.txt)
          \`\`\`

          ## HTTP Requests (Sample)
          \`\`\`
          $(cat analysis-results/http-requests.txt)
          \`\`\`

          ## DNS Queries (Top 50)
          \`\`\`
          $(cat analysis-results/dns-queries.txt)
          \`\`\`
          EOF

      - name: Upload analysis results
        uses: actions/upload-artifact@v3
        with:
          name: network-analysis-results
          path: analysis-results/

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('analysis-results/report.md', 'utf8');

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            });
```

Jenkins Pipeline

```groovy
// Jenkinsfile
pipeline {
    agent any

parameters {
    string(name: 'CAPTURE_FILE', defaultValue: '', description: 'Path to capture file')
    choice(name: 'ANALYSIS_TYPE', choices: ['basic', 'security', 'performance'], description: 'Type of analysis')
}

stages {
    stage('Setup') {
        steps {
            script {
                // Install tshark if not available
                sh '''
                    if ! command -v tshark &> /dev/null; then
                        sudo apt update
                        sudo apt install -y tshark
                    fi
                '''
            }
        }
    }

    stage('Validate Input') {
        steps {
            script {
                if (!params.CAPTURE_FILE) {
                    error("CAPTURE_FILE parameter is required")
                }

                if (!fileExists(params.CAPTURE_FILE)) {
                    error("Capture file does not exist: ${params.CAPTURE_FILE}")
                }
            }
        }
    }

    stage('Basic Analysis') {
        steps {
            script {
                sh """
                    mkdir -p analysis-results

                    # Basic statistics
                    tshark -r "${params.CAPTURE_FILE}" -q -z io,stat,0 > analysis-results/basic-stats.txt

                    # Protocol hierarchy
                    tshark -r "${params.CAPTURE_FILE}" -q -z phs > analysis-results/protocols.txt

                    # Conversation statistics
                    tshark -r "${params.CAPTURE_FILE}" -q -z conv,tcp > analysis-results/tcp-conversations.txt
                    tshark -r "${params.CAPTURE_FILE}" -q -z conv,udp > analysis-results/udp-conversations.txt
                """
            }
        }
    }

    stage('Security Analysis') {
        when {
            expression { params.ANALYSIS_TYPE == 'security' }
        }
        steps {
            script {
                sh """
                    # Check for suspicious activity
                    echo "=== Security Analysis ===" > analysis-results/security-analysis.txt

                    # SYN flood detection
                    SYN_COUNT=\$(tshark -r "${params.CAPTURE_FILE}" -Y "tcp.flags.syn==1 and tcp.flags.ack==0" | wc -l)
                    echo "SYN packets: \$SYN_COUNT" >> analysis-results/security-analysis.txt

                    # Port scan detection
                    echo "Potential port scans:" >> analysis-results/security-analysis.txt
                    tshark -r "${params.CAPTURE_FILE}" -T fields -e ip.src -e tcp.dstport | \\
                        awk '{ports[\$1]++} END {for(ip in ports) if(ports[ip]>50) print ip, ports[ip]}' >> analysis-results/security-analysis.txt

                    # Suspicious DNS queries
                    echo "Suspicious DNS queries:" >> analysis-results/security-analysis.txt
                    tshark -r "${params.CAPTURE_FILE}" -Y "dns" -T fields -e dns.qry.name | \\
                        grep -E "(\\.tk\$|\\.ml\$|\\.ga\$|\\.cf\$)" | sort | uniq >> analysis-results/security-analysis.txt || true
                """
            }
        }
    }

    stage('Performance Analysis') {
        when {
            expression { params.ANALYSIS_TYPE == 'performance' }
        }
        steps {
            script {
                sh """
                    # Performance metrics
                    echo "=== Performance Analysis ===" > analysis-results/performance-analysis.txt

                    # Bandwidth usage
                    tshark -r "${params.CAPTURE_FILE}" -T fields -e ip.src -e frame.len | \\
                        awk '{bytes[\$1]+=\$2} END {for(ip in bytes) print ip, bytes[ip]}' | \\
                        sort -k2 -nr | head -20 >> analysis-results/performance-analysis.txt

                    # Response time analysis
                    tshark -r "${params.CAPTURE_FILE}" -q -z rpc,rtt,tcp >> analysis-results/performance-analysis.txt || true

                    # Packet size distribution
                    tshark -r "${params.CAPTURE_FILE}" -q -z plen,tree >> analysis-results/performance-analysis.txt
                """
            }
        }
    }

    stage('Generate Report') {
        steps {
            script {
                sh """
                    # Create comprehensive report
                    cat > analysis-results/report.html << EOF
<html>
<head><title>Network Analysis Report</title></head>
<body>
<h1>Network Analysis Report</h1>
<p><b>File:</b> ${params.CAPTURE_FILE}</p>
<p><b>Analysis Type:</b> ${params.ANALYSIS_TYPE}</p>
<p><b>Generated:</b> \$(date)</p>
<h2>Basic Statistics</h2>
<pre>\$(cat analysis-results/basic-stats.txt)</pre>
<h2>Protocol Hierarchy</h2>
<pre>\$(cat analysis-results/protocols.txt)</pre>
EOF

                    # Add security analysis if available
                    if [ -f analysis-results/security-analysis.txt ]; then
                        cat >> analysis-results/report.html << EOF
<h2>Security Analysis</h2>
<pre>\$(cat analysis-results/security-analysis.txt)</pre>
EOF
                    fi

                    # Add performance analysis if available
                    if [ -f analysis-results/performance-analysis.txt ]; then
                        cat >> analysis-results/report.html << EOF
<h2>Performance Analysis</h2>
<pre>\$(cat analysis-results/performance-analysis.txt)</pre>
EOF
                    fi

                    echo "</body></html>" >> analysis-results/report.html
                """
            }
        }
    }
    }

post {
    always {
        archiveArtifacts artifacts: 'analysis-results/**', fingerprint: true

        publishHTML([
            allowMissing: false,
            alwaysLinkToLastBuild: true,
            keepAll: true,
            reportDir: 'analysis-results',
            reportFiles: 'report.html',
            reportName: 'Network Analysis Report'
        ])
    }

    failure {
        emailext (
            subject: "Network Analysis Failed: ${env.JOB_NAME} - ${env.BUILD_NUMBER}",
            body: "The network analysis job failed. Please check the console output for details.",
            to: "${env.CHANGE_AUTHOR_EMAIL}"
        )
    }
}

}
```

Troubleshooting

Common Issues

```bash
# Permission denied errors
sudo usermod -a -G wireshark $USER
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

# Interface not found
tshark -D
ip link show

# Capture file corruption
tshark -r capture.pcap -q -z io,stat,0

# Memory issues with large files
tshark -r large_capture.pcap -c 1000

# Display filter syntax errors
tshark -Y "invalid filter" 2>&1 | grep -i error
```

Performance Optimization

```bash
# Use capture filters to reduce data
tshark -i eth0 -f "port 80 or port 443"

# Limit snapshot length per packet (snaplen)
tshark -i eth0 -s 96

# Use ring buffer for continuous capture
tshark -i eth0 -w capture.pcap -b filesize:100000 -b files:10

# Disable name resolution
tshark -i eth0 -n

# Use specific protocols only
tshark -i eth0 -Y "tcp or udp"
```

Debugging

```bash
# Verbose output for debugging
tshark -i eth0 -V

# Show field names
tshark -G fields | grep -i http

# Test display filters
tshark -Y "tcp.port == 80" -c 1

# Check tshark version and capabilities
tshark -v
tshark -G protocols | wc -l
```

Best Practices

Capture Guidelines

  1. **Use Capture Filters**: Apply filters at capture time to reduce file size (see the combined sketch below)
  2. **Limit Packet Size**: Use the -s option for header-only captures where appropriate
  3. **Ring Buffers**: Use ring buffers for continuous monitoring
  4. **Proper Permissions**: Set up the correct permissions for non-root capture
  5. **Storage Management**: Implement file rotation for long-term captures
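
A minimal sketch that combines these guidelines into one long-running command; the interface name eth0, the target port, and the output path are placeholders to adapt:

```bash
# Hypothetical continuous capture: capture filter, short snaplen, ring buffer, no name resolution
tshark -i eth0 -n \
    -f "tcp port 443" \
    -s 96 \
    -w /var/log/network-captures/https.pcap \
    -b filesize:50000 -b files:10
```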

Analysis Best Practices

  1. **Start with Statistics**: Use the -q -z options for quick overviews
  2. **Progressive Filtering**: Start broad, then narrow down with more specific filters
  3. **Save Intermediate Results**: Write filtered results to new files for further analysis
  4. **Document Findings**: Keep detailed notes on your analysis steps
  5. **Automate Repetitive Tasks**: Use scripts for common analysis patterns (a small workflow sketch follows below)
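
A small workflow sketch for points 1 to 3, assuming a capture named capture.pcap and a host of interest at 192.168.1.10 (both placeholders):

```bash
# 1. Quick overview first: I/O statistics and protocol hierarchy
tshark -r capture.pcap -q -z io,stat,0 -z phs

# 2. Narrow down progressively and save the intermediate result to a new file
tshark -r capture.pcap -Y "http and ip.addr == 192.168.1.10" -w http-subset.pcap

# 3. Continue the analysis on the smaller intermediate file
tshark -r http-subset.pcap -T fields -e http.host -e http.request.uri
```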

Security Considerations

  1. **Sensitive Data**: Be mindful of sensitive data contained in captures
  2. **Access Control**: Restrict access to capture files
  3. **Encryption**: Consider encrypting stored capture files (a sketch follows below)
  4. **Retention Policies**: Implement appropriate data retention policies
  5. **Legal Compliance**: Ensure compliance with local laws and regulations
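
One hedged way to implement points 2 and 3 with standard Unix tooling; the group name analysts and the file names are placeholders:

```bash
# Restrict a capture file to root and a hypothetical 'analysts' group
sudo chown root:analysts capture.pcap
sudo chmod 640 capture.pcap

# Encrypt the file at rest with symmetric GnuPG encryption, then shred the plaintext
gpg --symmetric --cipher-algo AES256 capture.pcap   # produces capture.pcap.gpg
shred -u capture.pcap
```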

This comprehensive tshark cheatsheet covers everything needed for professional network analysis, from basic packet capture to advanced automation and integration scenarios.