
NetworkMiner Network Forensic Analysis Tool Cheat Sheet


Overview

NetworkMiner is a Network Forensic Analysis Tool (NFAT) for Windows that can detect operating systems, sessions, hostnames, open ports, and more by parsing network traffic captured in PCAP files. It provides a user-friendly interface for network forensics and incident response, and is available both as a free edition and as a Professional edition with advanced features for deep packet analysis, file extraction, and network reconstruction.

Note: NetworkMiner is primarily a Windows application. The free edition has limitations on PCAP file size and features; the Professional edition offers extended capabilities for enterprise use.

Installation

Windows Installation

```powershell
# Download NetworkMiner Free from the official website
# https://www.netresec.com/?page=NetworkMiner

# Extract to the desired directory
Expand-Archive -Path NetworkMiner_2-8-1.zip -DestinationPath C:\Tools\NetworkMiner

# Run NetworkMiner
cd C:\Tools\NetworkMiner
.\NetworkMiner.exe

# For the Professional version:
# purchase a license from https://www.netresec.com/
# and install it using the provided installer
```

Linux Installation (Mono)

```bash
# Install the Mono runtime
sudo apt update
sudo apt install mono-complete

# Download NetworkMiner
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip
unzip NetworkMiner_2-8-1.zip
cd NetworkMiner_2-8-1

# Run with Mono
mono NetworkMiner.exe

# Install additional dependencies if needed
sudo apt install libmono-winforms2.0-cil
sudo apt install libmono-system-windows-forms4.0-cil
```

Docker Installation

```bash
# Create a Dockerfile for NetworkMiner
cat > Dockerfile << 'EOF'
FROM mono:latest

RUN apt-get update && apt-get install -y \
    wget \
    unzip \
    libmono-winforms2.0-cil \
    libmono-system-windows-forms4.0-cil

WORKDIR /app
RUN wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip && \
    unzip NetworkMiner_2-8-1.zip && \
    rm NetworkMiner_2-8-1.zip

WORKDIR /app/NetworkMiner_2-8-1
ENTRYPOINT ["mono", "NetworkMiner.exe"]
EOF

# Build and run
docker build -t networkminer .
docker run -it --rm -v $(pwd)/pcaps:/pcaps networkminer
```

Portable Installation

```bash
# Download the portable version
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip

# Extract to a USB drive or other portable location
unzip NetworkMiner_2-8-1.zip -d /media/usb/NetworkMiner

# Create a launcher script
cat > /media/usb/NetworkMiner/run_networkminer.sh << 'EOF'
#!/bin/bash
cd "$(dirname "$0")/NetworkMiner_2-8-1"
mono NetworkMiner.exe "$@"
EOF

chmod +x /media/usb/NetworkMiner/run_networkminer.sh
```

Basic Usage

Loading PCAP Files

```powershell
# Command-line usage
NetworkMiner.exe --help

# Load a single PCAP file
NetworkMiner.exe capture.pcap

# Load multiple PCAP files
NetworkMiner.exe capture1.pcap capture2.pcap capture3.pcap

# Load a PCAP with a specific output directory
NetworkMiner.exe --output C:\Analysis\Output capture.pcap

# Load large PCAP files (Professional)
NetworkMiner.exe --maxframes 1000000 large_capture.pcap
```

GUI Operations

```text
File Menu:
- Open File(s): Load PCAP/PCAPNG files
- Open Directory: Load all PCAP files from a directory
- Receive from Sniffer: Capture live traffic (Professional)
- Clear GUI: Clear the current analysis
- Export: Export analysis results

Tools Menu:
- Reassemble Files: Extract files from traffic
- Generate Report: Create an analysis report
- Calculate Hash: Generate file hashes
- Decode Base64: Decode Base64 strings
- Convert PCAP: Convert between capture formats
```

Command-Line Interface

```bash
# Basic analysis
mono NetworkMiner.exe capture.pcap

# Specify an output directory
mono NetworkMiner.exe --output /tmp/analysis capture.pcap

# Set the maximum number of frames to process
mono NetworkMiner.exe --maxframes 100000 capture.pcap

# Enable verbose output
mono NetworkMiner.exe --verbose capture.pcap

# Process multiple files
mono NetworkMiner.exe *.pcap

# Generate a report automatically
mono NetworkMiner.exe --report capture.pcap
```

Network Analysis Features

Host Discovery

```text
Hosts Tab Analysis:
- IP addresses and MAC addresses
- Operating system detection
- Open ports and services
- Hostname resolution
- Network distance (TTL analysis)
- First and last seen timestamps

Host Information Extraction:
- DHCP hostnames
- NetBIOS names
- DNS queries and responses
- HTTP User-Agent strings
- SMB/CIFS hostnames
```
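The "network distance (TTL analysis)" above relies on the fact that operating systems send packets with a well-known initial TTL (commonly 64 for Linux/macOS, 128 for Windows, 255 for network gear), and each router hop decrements it by one. A minimal sketch of that heuristic, assuming only those three common start values (the function name is ours, not part of NetworkMiner):

```python
def estimate_hops(observed_ttl: int) -> tuple[int, int]:
    """Guess the sender's initial TTL and the hop distance.

    Picks the nearest common initial TTL at or above the observed
    value; the difference is the estimated number of router hops.
    """
    for initial in (64, 128, 255):
        if observed_ttl <= initial:
            return initial, initial - observed_ttl
    raise ValueError("TTL above 255 is invalid")

# A packet arriving with TTL 116 most likely started at 128,
# i.e. the sender is roughly 12 hops away.
print(estimate_hops(116))  # → (128, 12)
print(estimate_hops(60))   # → (64, 4)
```

The guess is ambiguous for paths longer than ~64 hops, which is why tools report it as an estimate rather than a fact.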

Session Analysis

```text
Sessions Tab Features:
- TCP session reconstruction
- Client-server relationships
- Session duration and data volume
- Protocol identification
- Application-layer protocols

Session Details:
- Source and destination hosts
- Port numbers and protocols
- Start and end timestamps
- Bytes transferred
- Frame count
```
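Session reconstruction of this kind hinges on grouping packets under a direction-independent flow key, so that client→server and server→client packets land in the same session. A hypothetical sketch of that canonicalization (not NetworkMiner's internal representation):

```python
def flow_key(src_ip, src_port, dst_ip, dst_port, proto="TCP"):
    """Return the same key for both directions of a conversation."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    # Sort the two endpoints so packet direction does not matter
    return (proto,) + tuple(sorted([a, b]))

k1 = flow_key("10.0.0.5", 51532, "93.184.216.34", 80)
k2 = flow_key("93.184.216.34", 80, "10.0.0.5", 51532)
assert k1 == k2  # both directions map to one session
print(k1)
```

With such a key, a session table is just a dictionary from key to packet list, from which duration, byte counts, and frame counts fall out directly.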

File Extraction

```text
Files Tab Capabilities:
- Automatic file extraction from traffic
- File type identification
- MD5/SHA1/SHA256 hash calculation
- File size and timestamps
- Source host identification

Supported Protocols for File Extraction:
- HTTP/HTTPS file downloads
- FTP file transfers
- SMB/CIFS file shares
- SMTP email attachments
- POP3/IMAP email content
- TFTP transfers
```
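File type identification during extraction is usually done by matching the file's leading magic bytes rather than trusting the name it was transferred under. A small illustration with a deliberately abbreviated signature table (our own list, not NetworkMiner's):

```python
# A few well-known magic-byte signatures (abbreviated example list)
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"PK\x03\x04": "ZIP archive (also DOCX/XLSX/JAR)",
    b"MZ": "Windows PE executable",
    b"%PDF": "PDF document",
}

def identify(data: bytes) -> str:
    """Match the leading bytes of a file against known signatures."""
    for magic, name in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"MZ\x90\x00\x03"))  # → Windows PE executable
print(identify(b"%PDF-1.7"))        # → PDF document
```

This is also why an extracted "invoice.pdf" that identifies as a PE executable is an immediate red flag.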

Credential Harvesting

```text
Credentials Tab Features:
- Clear-text password extraction
- Authentication attempts
- Protocol-specific credentials
- Success/failure indicators

Supported Protocols:
- HTTP Basic/Digest authentication
- FTP login credentials
- Telnet authentication
- POP3/IMAP login
- SMB/NTLM authentication
- SNMP community strings
```
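For clear-text protocols, credential recovery is often little more than protocol decoding. HTTP Basic authentication, for instance, sends the `Authorization` header as plain Base64, as this sketch shows:

```python
import base64

def decode_basic_auth(header_value: str) -> tuple[str, str]:
    """Decode an 'Authorization: Basic <b64>' header into (user, password)."""
    scheme, _, encoded = header_value.partition(" ")
    if scheme != "Basic":
        raise ValueError("not HTTP Basic authentication")
    user, _, password = base64.b64decode(encoded).decode().partition(":")
    return user, password

# 'YWxpY2U6czNjcjN0' is Base64 for 'alice:s3cr3t'
print(decode_basic_auth("Basic YWxpY2U6czNjcjN0"))  # → ('alice', 's3cr3t')
```

Base64 is an encoding, not encryption, which is why such credentials count as clear-text exposure whenever the connection is not wrapped in TLS.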

Advanced Analysis Techniques

Protocol Analysis

```bash
#!/bin/bash
# networkminer-protocol-analysis.sh

PCAP_FILE="$1"
OUTPUT_DIR="/tmp/nm_analysis"

if [ $# -ne 1 ]; then
    echo "Usage: $0 <pcap_file>"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

echo "Starting NetworkMiner analysis of $PCAP_FILE"

# Run NetworkMiner analysis
mono NetworkMiner.exe --output "$OUTPUT_DIR" "$PCAP_FILE"

# Parse results
echo "=== Host Summary ==="
if [ -f "$OUTPUT_DIR/hosts.csv" ]; then
    head -20 "$OUTPUT_DIR/hosts.csv"
fi

echo -e "\n=== Session Summary ==="
if [ -f "$OUTPUT_DIR/sessions.csv" ]; then
    head -20 "$OUTPUT_DIR/sessions.csv"
fi

echo -e "\n=== Extracted Files ==="
if [ -d "$OUTPUT_DIR/AssembledFiles" ]; then
    find "$OUTPUT_DIR/AssembledFiles" -type f | head -20
fi

echo -e "\n=== Credentials Found ==="
if [ -f "$OUTPUT_DIR/credentials.csv" ]; then
    cat "$OUTPUT_DIR/credentials.csv"
fi
```

Automated File Analysis

```python
#!/usr/bin/env python3
# networkminer-file-analyzer.py

import os
import sys
import hashlib
import subprocess
import csv
from pathlib import Path

class NetworkMinerAnalyzer:
    def __init__(self, pcap_file, output_dir):
        self.pcap_file = pcap_file
        self.output_dir = Path(output_dir)
        self.assembled_files_dir = self.output_dir / "AssembledFiles"

    def run_networkminer(self):
        """Run the NetworkMiner analysis"""
        cmd = [
            "mono", "NetworkMiner.exe",
            "--output", str(self.output_dir),
            self.pcap_file
        ]

        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                print(f"NetworkMiner error: {result.stderr}")
                return False
            return True
        except Exception as e:
            print(f"Error running NetworkMiner: {e}")
            return False

    def analyze_extracted_files(self):
        """Analyze the extracted files"""
        if not self.assembled_files_dir.exists():
            print("No assembled files directory found")
            return

        print("=== Extracted Files Analysis ===")

        for file_path in self.assembled_files_dir.rglob("*"):
            if file_path.is_file():
                self.analyze_file(file_path)

    def analyze_file(self, file_path):
        """Analyze an individual extracted file"""
        try:
            # Calculate the file hash
            with open(file_path, 'rb') as f:
                file_hash = hashlib.sha256(f.read()).hexdigest()

            # Get file info
            file_size = file_path.stat().st_size
            file_type = self.get_file_type(file_path)

            print(f"File: {file_path.name}")
            print(f"  Size: {file_size} bytes")
            print(f"  Type: {file_type}")
            print(f"  SHA256: {file_hash}")
            print(f"  Path: {file_path}")

            # Check for suspicious files
            if self.is_suspicious_file(file_path, file_type):
                print("  ⚠️  SUSPICIOUS FILE DETECTED")

            print()

        except Exception as e:
            print(f"Error analyzing {file_path}: {e}")

    def get_file_type(self, file_path):
        """Determine the file type via file(1)"""
        try:
            result = subprocess.run(['file', str(file_path)],
                                    capture_output=True, text=True)
            return result.stdout.strip().split(':', 1)[1].strip()
        except Exception:
            return "Unknown"

    def is_suspicious_file(self, file_path, file_type):
        """Check whether a file is potentially suspicious"""
        suspicious_extensions = ['.exe', '.scr', '.bat', '.cmd', '.com', '.pif']
        suspicious_types = ['executable', 'script', 'batch']

        # Check the extension
        if any(str(file_path).lower().endswith(ext) for ext in suspicious_extensions):
            return True

        # Check the file type
        if any(sus_type in file_type.lower() for sus_type in suspicious_types):
            return True

        return False

    def parse_hosts_csv(self):
        """Parse the hosts.csv file"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return

        print("=== Host Analysis ===")
        try:
            with open(hosts_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Host: {row.get('IP', 'Unknown')}")
                    print(f"  MAC: {row.get('MAC', 'Unknown')}")
                    print(f"  OS: {row.get('OS', 'Unknown')}")
                    print(f"  Hostname: {row.get('Hostname', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing hosts.csv: {e}")

    def parse_credentials_csv(self):
        """Parse the credentials.csv file"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return

        print("=== Credentials Analysis ===")
        try:
            with open(creds_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Protocol: {row.get('Protocol', 'Unknown')}")
                    print(f"  Username: {row.get('Username', 'Unknown')}")
                    print(f"  Password: {row.get('Password', 'Unknown')}")
                    print(f"  Host: {row.get('Host', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing credentials.csv: {e}")

    def generate_report(self):
        """Generate a comprehensive analysis report"""
        report_file = self.output_dir / "analysis_report.txt"

        with open(report_file, 'w') as f:
            f.write("NetworkMiner Analysis Report\n")
            f.write(f"PCAP File: {self.pcap_file}\n")
            f.write(f"Analysis Date: {os.popen('date').read().strip()}\n")
            f.write("=" * 50 + "\n\n")

            # File statistics
            if self.assembled_files_dir.exists():
                file_count = len(list(self.assembled_files_dir.rglob("*")))
                f.write(f"Extracted Files: {file_count}\n")

            # Host statistics
            hosts_file = self.output_dir / "hosts.csv"
            if hosts_file.exists():
                with open(hosts_file, 'r') as hosts_f:
                    host_count = len(hosts_f.readlines()) - 1  # Subtract header
                    f.write(f"Unique Hosts: {host_count}\n")

            f.write("\nDetailed analysis available in the CSV files and AssembledFiles directory.\n")

        print(f"Report generated: {report_file}")

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-file-analyzer.py <pcap_file> <output_dir>")
        sys.exit(1)

    pcap_file = sys.argv[1]
    output_dir = sys.argv[2]

    analyzer = NetworkMinerAnalyzer(pcap_file, output_dir)

    print("Running NetworkMiner analysis...")
    if analyzer.run_networkminer():
        print("Analysis complete. Processing results...")
        analyzer.parse_hosts_csv()
        analyzer.parse_credentials_csv()
        analyzer.analyze_extracted_files()
        analyzer.generate_report()
    else:
        print("NetworkMiner analysis failed")

if __name__ == "__main__":
    main()
```

Batch Processing Script

```bash
#!/bin/bash
# networkminer-batch-processor.sh

PCAP_DIR="$1"
OUTPUT_BASE_DIR="$2"
NETWORKMINER_PATH="/opt/NetworkMiner"

if [ $# -ne 2 ]; then
    echo "Usage: $0 <pcap_dir> <output_base_dir>"
    exit 1
fi

if [ ! -d "$PCAP_DIR" ]; then
    echo "Error: PCAP directory does not exist"
    exit 1
fi

mkdir -p "$OUTPUT_BASE_DIR"

# Process each PCAP file
find "$PCAP_DIR" \( -name "*.pcap" -o -name "*.pcapng" \) | while read -r pcap_file; do
    echo "Processing: $pcap_file"

    # Create an output directory for this PCAP
    pcap_basename=$(basename "$pcap_file" | sed 's/\.[^.]*$//')
    output_dir="$OUTPUT_BASE_DIR/$pcap_basename"
    mkdir -p "$output_dir"

    # Run NetworkMiner
    cd "$NETWORKMINER_PATH"
    mono NetworkMiner.exe --output "$output_dir" "$pcap_file"

    # Generate a summary
    echo "=== Analysis Summary for $pcap_file ===" > "$output_dir/summary.txt"
    echo "Analysis Date: $(date)" >> "$output_dir/summary.txt"
    echo "PCAP File: $pcap_file" >> "$output_dir/summary.txt"
    echo "Output Directory: $output_dir" >> "$output_dir/summary.txt"
    echo "" >> "$output_dir/summary.txt"

    # Count extracted files
    if [ -d "$output_dir/AssembledFiles" ]; then
        file_count=$(find "$output_dir/AssembledFiles" -type f | wc -l)
        echo "Extracted Files: $file_count" >> "$output_dir/summary.txt"
    fi

    # Count hosts
    if [ -f "$output_dir/hosts.csv" ]; then
        host_count=$(($(wc -l < "$output_dir/hosts.csv") - 1))
        echo "Unique Hosts: $host_count" >> "$output_dir/summary.txt"
    fi

    # Count sessions
    if [ -f "$output_dir/sessions.csv" ]; then
        session_count=$(($(wc -l < "$output_dir/sessions.csv") - 1))
        echo "Network Sessions: $session_count" >> "$output_dir/summary.txt"
    fi

    echo "Completed: $pcap_file"
    echo "Output: $output_dir"
    echo "---"
done

echo "Batch processing complete!"
echo "Results available in: $OUTPUT_BASE_DIR"
```

Integration and Automation

SIEM Integration

```python
#!/usr/bin/env python3
# networkminer-siem-integration.py

import csv
import hashlib
from pathlib import Path

import requests

class NetworkMinerSIEMIntegrator:
    def __init__(self, output_dir, siem_endpoint, api_key):
        self.output_dir = Path(output_dir)
        self.siem_endpoint = siem_endpoint
        self.api_key = api_key

    def send_to_siem(self, event_data):
        """Send an event to the SIEM"""
        try:
            headers = {
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {self.api_key}'
            }

            response = requests.post(
                self.siem_endpoint,
                json=event_data,
                headers=headers,
                timeout=10
            )
            response.raise_for_status()
            return True

        except requests.RequestException as e:
            print(f"Error sending to SIEM: {e}")
            return False

    def process_hosts(self):
        """Process and send host information to the SIEM"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return

        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'host_discovery',
                    'source': 'networkminer',
                    'timestamp': row.get('First Seen', ''),
                    'ip_address': row.get('IP', ''),
                    'mac_address': row.get('MAC', ''),
                    'hostname': row.get('Hostname', ''),
                    'operating_system': row.get('OS', ''),
                    'open_ports': row.get('Open Ports', ''),
                    'last_seen': row.get('Last Seen', '')
                }

                self.send_to_siem(event)

    def process_credentials(self):
        """Process and send credential information to the SIEM"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return

        with open(creds_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'credential_exposure',
                    'source': 'networkminer',
                    'severity': 'HIGH',
                    'protocol': row.get('Protocol', ''),
                    'username': row.get('Username', ''),
                    'password': row.get('Password', ''),
                    'host': row.get('Host', ''),
                    'timestamp': row.get('Timestamp', '')
                }

                self.send_to_siem(event)

    def process_files(self):
        """Process and send file information to the SIEM"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return

        for file_path in files_dir.rglob("*"):
            if file_path.is_file():
                try:
                    # Calculate the file hash
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()

                    event = {
                        'event_type': 'file_extraction',
                        'source': 'networkminer',
                        'filename': file_path.name,
                        'file_size': file_path.stat().st_size,
                        'file_hash': file_hash,
                        'extraction_path': str(file_path),
                        'timestamp': file_path.stat().st_mtime
                    }

                    self.send_to_siem(event)

                except Exception as e:
                    print(f"Error processing file {file_path}: {e}")

def main():
    output_dir = "/tmp/networkminer_output"
    siem_endpoint = "https://your-siem.com/api/events"
    api_key = "your-api-key"

    integrator = NetworkMinerSIEMIntegrator(output_dir, siem_endpoint, api_key)

    print("Sending NetworkMiner results to SIEM...")
    integrator.process_hosts()
    integrator.process_credentials()
    integrator.process_files()
    print("SIEM integration complete")

if __name__ == "__main__":
    main()
```

Threat Intelligence Integration

```python
#!/usr/bin/env python3
# networkminer-threat-intel.py

import csv
import hashlib
import ipaddress
from pathlib import Path

import requests

class ThreatIntelChecker:
    def __init__(self, output_dir, virustotal_api_key):
        self.output_dir = Path(output_dir)
        self.vt_api_key = virustotal_api_key
        self.vt_base_url = "https://www.virustotal.com/vtapi/v2"

    def check_ip_reputation(self, ip_address):
        """Check IP reputation with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/ip-address/report"
            params = {
                'apikey': self.vt_api_key,
                'ip': ip_address
            }

            response = requests.get(url, params=params)
            if response.status_code == 200:
                return response.json()

        except Exception as e:
            print(f"Error checking IP {ip_address}: {e}")

        return None

    def check_file_hash(self, file_hash):
        """Check a file hash with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/file/report"
            params = {
                'apikey': self.vt_api_key,
                'resource': file_hash
            }

            response = requests.get(url, params=params)
            if response.status_code == 200:
                return response.json()

        except Exception as e:
            print(f"Error checking hash {file_hash}: {e}")

        return None

    def analyze_hosts(self):
        """Analyze hosts against threat intelligence"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return

        print("=== Threat Intelligence Analysis - Hosts ===")

        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                ip = row.get('IP', '')
                if ip and self.is_external_ip(ip):
                    print(f"Checking IP: {ip}")

                    result = self.check_ip_reputation(ip)
                    if result and result.get('response_code') == 1:
                        detected_urls = result.get('detected_urls', [])
                        detected_samples = result.get('detected_communicating_samples', [])

                        if detected_urls or detected_samples:
                            print(f"  ⚠️  THREAT DETECTED for {ip}")
                            print(f"     Malicious URLs: {len(detected_urls)}")
                            print(f"     Malicious Samples: {len(detected_samples)}")
                        else:
                            print(f"  ✓  Clean reputation for {ip}")

    def analyze_files(self):
        """Analyze extracted files for threats"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return

        print("=== Threat Intelligence Analysis - Files ===")

        for file_path in files_dir.rglob("*"):
            if file_path.is_file() and file_path.stat().st_size > 0:
                try:
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()

                    print(f"Checking file: {file_path.name}")

                    result = self.check_file_hash(file_hash)
                    if result and result.get('response_code') == 1:
                        positives = result.get('positives', 0)
                        total = result.get('total', 0)

                        if positives > 0:
                            print(f"  ⚠️  MALWARE DETECTED: {file_path.name}")
                            print(f"     Detection: {positives}/{total} engines")
                            print(f"     SHA256: {file_hash}")
                        else:
                            print(f"  ✓  Clean file: {file_path.name}")

                except Exception as e:
                    print(f"Error analyzing {file_path}: {e}")

    def is_external_ip(self, ip):
        """Check whether an IP is external (not private)"""
        try:
            ip_obj = ipaddress.ip_address(ip)
            return not ip_obj.is_private
        except ValueError:
            return False

def main():
    output_dir = "/tmp/networkminer_output"
    virustotal_api_key = "your-virustotal-api-key"

    checker = ThreatIntelChecker(output_dir, virustotal_api_key)

    print("Starting threat intelligence analysis...")
    checker.analyze_hosts()
    checker.analyze_files()
    print("Threat intelligence analysis complete")

if __name__ == "__main__":
    main()
```

Reporting and Visualization

HTML Report Generator

```python
#!/usr/bin/env python3
# networkminer-html-report.py

import csv
import sys
from datetime import datetime
from pathlib import Path

class NetworkMinerHTMLReporter:
    def __init__(self, output_dir, pcap_file):
        self.output_dir = Path(output_dir)
        self.pcap_file = pcap_file
        self.report_data = {}

    def collect_data(self):
        """Collect all analysis data"""
        self.report_data = {
            'pcap_file': self.pcap_file,
            'analysis_date': datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            'hosts': self.load_hosts(),
            'sessions': self.load_sessions(),
            'files': self.load_files(),
            'credentials': self.load_credentials()
        }

    def load_hosts(self):
        """Load host data"""
        hosts_file = self.output_dir / "hosts.csv"
        hosts = []

        if hosts_file.exists():
            with open(hosts_file, 'r') as f:
                reader = csv.DictReader(f)
                hosts = list(reader)

        return hosts

    def load_sessions(self):
        """Load session data"""
        sessions_file = self.output_dir / "sessions.csv"
        sessions = []

        if sessions_file.exists():
            with open(sessions_file, 'r') as f:
                reader = csv.DictReader(f)
                sessions = list(reader)

        return sessions

    def load_files(self):
        """Load extracted-file data"""
        files_dir = self.output_dir / "AssembledFiles"
        files = []

        if files_dir.exists():
            for file_path in files_dir.rglob("*"):
                if file_path.is_file():
                    files.append({
                        'name': file_path.name,
                        'size': file_path.stat().st_size,
                        'path': str(file_path.relative_to(files_dir))
                    })

        return files

    def load_credentials(self):
        """Load credentials data"""
        creds_file = self.output_dir / "credentials.csv"
        credentials = []

        if creds_file.exists():
            with open(creds_file, 'r') as f:
                reader = csv.DictReader(f)
                credentials = list(reader)

        return credentials

    def generate_html_report(self):
        """Generate the HTML report"""
        html_template = """<!DOCTYPE html>
<html>
<head><title>NetworkMiner Analysis Report</title></head>
<body>
<h1>NetworkMiner Analysis Report</h1>
<p>PCAP File: {pcap_file}</p>
<p>Analysis Date: {analysis_date}</p>

<div class="summary">
    <div class="stat">{host_count} Unique Hosts</div>
    <div class="stat">{session_count} Network Sessions</div>
    <div class="stat">{file_count} Extracted Files</div>
    <div class="stat">{credential_count} Credentials Found</div>
</div>

{credentials_warning}

<div class="section">
    <h2>Host Information</h2>
    <table>
        <tr><th>IP Address</th><th>MAC Address</th><th>Hostname</th><th>Operating System</th><th>Open Ports</th></tr>
        {host_rows}
    </table>
</div>

<div class="section">
    <h2>Network Sessions</h2>
    <table>
        <tr><th>Client</th><th>Server</th><th>Protocol</th><th>Start Time</th><th>Duration</th></tr>
        {session_rows}
    </table>
</div>

<div class="section">
    <h2>Extracted Files</h2>
    <table>
        <tr><th>Filename</th><th>Size (bytes)</th><th>Path</th></tr>
        {file_rows}
    </table>
</div>

{credentials_section}
</body>
</html>
"""

        # Generate table rows
        host_rows = ""
        for host in self.report_data['hosts'][:20]:  # Limit to the first 20
            host_rows += f"""
            <tr>
                <td>{host.get('IP', '')}</td>
                <td>{host.get('MAC', '')}</td>
                <td>{host.get('Hostname', '')}</td>
                <td>{host.get('OS', '')}</td>
                <td>{host.get('Open Ports', '')}</td>
            </tr>
            """

        session_rows = ""
        for session in self.report_data['sessions'][:20]:  # Limit to the first 20
            session_rows += f"""
            <tr>
                <td>{session.get('Client', '')}</td>
                <td>{session.get('Server', '')}</td>
                <td>{session.get('Protocol', '')}</td>
                <td>{session.get('Start Time', '')}</td>
                <td>{session.get('Duration', '')}</td>
            </tr>
            """

        file_rows = ""
        for file_info in self.report_data['files'][:20]:  # Limit to the first 20
            file_rows += f"""
            <tr>
                <td>{file_info['name']}</td>
                <td>{file_info['size']}</td>
                <td>{file_info['path']}</td>
            </tr>
            """

        # Credentials section
        credentials_warning = ""
        credentials_section = ""

        if self.report_data['credentials']:
            credentials_warning = """
            <div class="warning">
                <strong>⚠️ Warning:</strong> Clear-text credentials were found in the network traffic!
            </div>
            """

            credential_rows = ""
            for cred in self.report_data['credentials']:
                credential_rows += f"""
                <tr>
                    <td>{cred.get('Protocol', '')}</td>
                    <td>{cred.get('Username', '')}</td>
                    <td>{'*' * len(cred.get('Password', ''))}</td>
                    <td>{cred.get('Host', '')}</td>
                </tr>
                """

            credentials_section = f"""
            <div class="section">
                <h2>Credentials (Passwords Hidden)</h2>
                <table>
                    <tr>
                        <th>Protocol</th>
                        <th>Username</th>
                        <th>Password</th>
                        <th>Host</th>
                    </tr>
                    {credential_rows}
                </table>
            </div>
            """

        # Fill the template
        html_content = html_template.format(
            pcap_file=self.report_data['pcap_file'],
            analysis_date=self.report_data['analysis_date'],
            host_count=len(self.report_data['hosts']),
            session_count=len(self.report_data['sessions']),
            file_count=len(self.report_data['files']),
            credential_count=len(self.report_data['credentials']),
            credentials_warning=credentials_warning,
            host_rows=host_rows,
            session_rows=session_rows,
            file_rows=file_rows,
            credentials_section=credentials_section
        )

        # Save the report
        report_file = self.output_dir / "analysis_report.html"
        with open(report_file, 'w') as f:
            f.write(html_content)

        print(f"HTML report generated: {report_file}")
        return report_file

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-html-report.py <output_dir> <pcap_file>")
        sys.exit(1)

    output_dir = sys.argv[1]
    pcap_file = sys.argv[2]

    reporter = NetworkMinerHTMLReporter(output_dir, pcap_file)
    reporter.collect_data()
    reporter.generate_html_report()

if __name__ == "__main__":
    main()
```

Best Practices

Analysis Workflow

```bash
#!/bin/bash
# networkminer-best-practices.sh

PCAP_FILE="$1"
CASE_NAME="$2"
ANALYST="$3"

if [ $# -ne 3 ]; then
    echo "Usage: $0 <pcap_file> <case_name> <analyst>"
    exit 1
fi

# Create the case directory structure
CASE_DIR="/cases/$CASE_NAME"
mkdir -p "$CASE_DIR"/{evidence,analysis,reports,notes}

# Copy evidence with hash verification
echo "Copying evidence file..."
cp "$PCAP_FILE" "$CASE_DIR/evidence/"
ORIGINAL_HASH=$(sha256sum "$PCAP_FILE" | cut -d' ' -f1)
COPY_HASH=$(sha256sum "$CASE_DIR/evidence/$(basename "$PCAP_FILE")" | cut -d' ' -f1)

if [ "$ORIGINAL_HASH" != "$COPY_HASH" ]; then
    echo "ERROR: Hash mismatch during evidence copy!"
    exit 1
fi

# Create the case log
CASE_LOG="$CASE_DIR/notes/case_log.txt"
echo "Case: $CASE_NAME" > "$CASE_LOG"
echo "Analyst: $ANALYST" >> "$CASE_LOG"
echo "Start Time: $(date)" >> "$CASE_LOG"
echo "Evidence File: $(basename "$PCAP_FILE")" >> "$CASE_LOG"
echo "Evidence Hash: $ORIGINAL_HASH" >> "$CASE_LOG"
echo "---" >> "$CASE_LOG"

# Run NetworkMiner analysis
echo "Running NetworkMiner analysis..."
OUTPUT_DIR="$CASE_DIR/analysis/networkminer"
mkdir -p "$OUTPUT_DIR"

mono NetworkMiner.exe --output "$OUTPUT_DIR" "$CASE_DIR/evidence/$(basename "$PCAP_FILE")"

# Document the analysis steps
echo "$(date): NetworkMiner analysis completed" >> "$CASE_LOG"
echo "Output directory: $OUTPUT_DIR" >> "$CASE_LOG"

# Generate reports
echo "Generating reports..."
python3 networkminer-html-report.py "$OUTPUT_DIR" "$(basename "$PCAP_FILE")"
cp "$OUTPUT_DIR/analysis_report.html" "$CASE_DIR/reports/"

echo "Analysis complete. Case directory: $CASE_DIR"
```

Performance Optimization

```text
NetworkMiner Performance Tips:

  1. PCAP File Size:
    • Free version limited to 5MB PCAP files
    • Professional version handles larger files
    • Split large PCAPs if using the free version

  2. Memory Usage:
    • Close unnecessary applications
    • Increase virtual memory if needed
    • Monitor RAM usage during analysis

  3. Processing Speed:
    • Use SSD storage for faster I/O
    • Process files locally (not over the network)
    • Close GUI tabs not in use

  4. Batch Processing:
    • Process multiple small files rather than one large file
    • Use command-line mode for automation
    • Schedule analysis during off-hours
```
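Besides Wireshark's `editcap`, a classic little-endian libpcap file can be split with nothing but the Python standard library: the 24-byte global header is copied into every chunk, and each 16-byte record header carries the captured length of the packet that follows. A hedged sketch under those assumptions (classic pcap only, not pcapng):

```python
import struct

def split_pcap(src_path, max_packets, prefix="chunk"):
    """Split a classic little-endian libpcap file into chunks of at
    most max_packets packets, repeating the global header per chunk."""
    paths = []
    with open(src_path, "rb") as src:
        global_header = src.read(24)  # magic, version, tz, sigfigs, snaplen, linktype
        out, count, part = None, 0, 0
        while True:
            rec_header = src.read(16)  # ts_sec, ts_usec, incl_len, orig_len
            if len(rec_header) < 16:
                break
            incl_len = struct.unpack("<I", rec_header[8:12])[0]
            data = src.read(incl_len)
            if out is None or count >= max_packets:
                if out:
                    out.close()
                part += 1
                path = f"{prefix}_{part:03d}.pcap"
                paths.append(path)
                out = open(path, "wb")
                out.write(global_header)
                count = 0
            out.write(rec_header + data)
            count += 1
        if out:
            out.close()
    return paths

# Demo: build a tiny synthetic pcap (3 stub packets) and split it in two
import os
import tempfile
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "big.pcap")
with open(src, "wb") as f:
    f.write(struct.pack("<IHHiIII", 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1))
    for i in range(3):
        f.write(struct.pack("<IIII", i, 0, 4, 4) + b"\x00" * 4)
chunks = split_pcap(src, max_packets=2, prefix=os.path.join(tmpdir, "chunk"))
print(len(chunks))  # → 2
```

For production use, `editcap -c` (shipped with Wireshark) is the battle-tested route; this sketch mainly shows why splitting is cheap: records are self-delimiting.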

Evidence Handling

```bash
#!/bin/bash
# evidence-handling-checklist.sh

echo "NetworkMiner Evidence Handling Checklist"
echo "========================================"

echo "1. Evidence Acquisition:"
echo "   □ Verify PCAP file integrity (hash)"
echo "   □ Document chain of custody"
echo "   □ Create a working copy for analysis"
echo "   □ Store the original in a secure location"

echo -e "\n2. Analysis Environment:"
echo "   □ Use an isolated analysis system"
echo "   □ Document the system configuration"
echo "   □ Ensure sufficient disk space"
echo "   □ Back up analysis results"

echo -e "\n3. Documentation:"
echo "   □ Record analysis start/end times"
echo "   □ Document tools and versions used"
echo "   □ Save all output files"
echo "   □ Create an analysis summary report"

echo -e "\n4. Quality Assurance:"
echo "   □ Verify extracted files' integrity"
echo "   □ Cross-reference findings"
echo "   □ Peer-review analysis results"
echo "   □ Archive the complete case file"
```
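The integrity steps in the checklist above (hash the original, then verify the working copy) can be scripted; a minimal sketch using streaming SHA-256, with temporary files standing in for real evidence:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(original: Path, working_copy: Path) -> bool:
    """True when the working copy is bit-identical to the original evidence."""
    return sha256_of(original) == sha256_of(working_copy)

# Demonstration with a throwaway file standing in for evidence
with tempfile.TemporaryDirectory() as d:
    original = Path(d) / "capture.pcap"
    original.write_bytes(b"\xd4\xc3\xb2\xa1" + b"\x00" * 20)  # pcap magic + stub
    copy = Path(d) / "working_copy.pcap"
    shutil.copy(original, copy)
    ok = verify_copy(original, copy)
print(ok)  # → True
```

Streaming in 64 KiB chunks matters for multi-gigabyte captures, where reading the whole file into memory to hash it would be wasteful or impossible.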

Troubleshooting

Common Issues

```bash
# Issue: NetworkMiner won't start on Linux
# Solution: Install the required Mono components
sudo apt install mono-complete libmono-winforms2.0-cil

# Issue: PCAP file too large (free version)
# Solution: Split the PCAP file
editcap -c 1000000 large_file.pcap split_file.pcap

# Issue: Missing extracted files
# - Check the file-extraction settings in the GUI
# - Verify that the PCAP contains actual file transfers

# Issue: No credentials found
# - Ensure the traffic contains clear-text protocols
# - Check for encrypted connections (HTTPS, SSH)

# Issue: Slow performance
# - Close unnecessary GUI tabs
# - Increase system memory
# - Use command-line mode for batch processing
```

Debug Mode

```bash
# Enable verbose logging
mono --debug NetworkMiner.exe capture.pcap

# Check Mono version compatibility
mono --version

# Verify the PCAP file format
file capture.pcap
tcpdump -r capture.pcap -c 10

# Test with a sample PCAP
wget https://wiki.wireshark.org/SampleCaptures/http.cap
mono NetworkMiner.exe http.cap
```

Diagnostics

```bash
# Check system logs for errors
journalctl -u mono --since "1 hour ago"

# Monitor resource usage
top -p $(pgrep mono)

# Check disk space
df -h /tmp

# Verify file permissions
ls -la NetworkMiner.exe
ls -la *.pcap
```

Resources

  • [NetworkMiner Official Website](LINK_5)
  • [NetworkMiner Professional](LINK_5)
  • [NetworkMiner Manual](LINK_5)
  • [Sample PCAP Files](LINK_5)
  • [Network Forensics Guide](LINK_5)

---

*This cheat sheet provides comprehensive guidance for using NetworkMiner in network forensic analysis. Regular practice with PCAP files and a solid understanding of network protocols will improve analysis effectiveness.*