NetworkMiner Network Forensic Analysis Tool Cheat Sheet
Overview
NetworkMiner is a Network Forensic Analysis Tool (NFAT) for Windows that can detect operating systems, sessions, hostnames, open ports, and more by analyzing network traffic captured in PCAP files. It provides a user-friendly interface for network forensics and incident response, offering both free and Professional versions, with the Professional version adding advanced features for deep packet analysis, file extraction, and network reconstruction.
⚠️ Note: NetworkMiner is primarily a Windows application, though it runs on Linux under Mono. The free version lacks several features and can struggle with very large PCAP files; the Professional version offers advanced capabilities for enterprise use.
Installation
Windows Installation
# Download NetworkMiner Free from official website
# https://www.netresec.com/?page=NetworkMiner
# Extract to desired directory
Expand-Archive -Path NetworkMiner_2-8-1.zip -DestinationPath C:\Tools\NetworkMiner
# Run NetworkMiner
cd C:\Tools\NetworkMiner
.\NetworkMiner.exe
# For Professional version
# Purchase license from https://www.netresec.com/
# Install using provided installer
Linux Installation (Mono)
# Install Mono runtime
sudo apt update
sudo apt install mono-complete
# Download NetworkMiner
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip
unzip NetworkMiner_2-8-1.zip
cd NetworkMiner_2-8-1
# Run with Mono
mono NetworkMiner.exe
# Install additional dependencies if needed
sudo apt install libmono-winforms2.0-cil
sudo apt install libmono-system-windows-forms4.0-cil
Docker Installation
# Create Dockerfile for NetworkMiner
cat > Dockerfile << 'EOF'
FROM mono:latest
RUN apt-get update && apt-get install -y \
wget \
unzip \
libmono-winforms2.0-cil \
libmono-system-windows-forms4.0-cil
WORKDIR /app
RUN wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip && \
unzip NetworkMiner_2-8-1.zip && \
rm NetworkMiner_2-8-1.zip
WORKDIR /app/NetworkMiner_2-8-1
ENTRYPOINT ["mono", "NetworkMiner.exe"]
EOF
# Build and run
docker build -t networkminer .
docker run -it --rm -v $(pwd)/pcaps:/pcaps networkminer
Portable Installation
# Download portable version
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip
# Extract to USB drive or portable location
unzip NetworkMiner_2-8-1.zip -d /media/usb/NetworkMiner
# Create launcher script
cat > /media/usb/NetworkMiner/run_networkminer.sh << 'EOF'
#!/bin/bash
cd "$(dirname "$0")/NetworkMiner_2-8-1"
mono NetworkMiner.exe "$@"
EOF
chmod +x /media/usb/NetworkMiner/run_networkminer.sh
Basic Usage
Loading PCAP Files
# Command-line usage
NetworkMiner.exe --help
# Load single PCAP file
NetworkMiner.exe capture.pcap
# Load multiple PCAP files
NetworkMiner.exe capture1.pcap capture2.pcap capture3.pcap
# Load PCAP with specific output directory
NetworkMiner.exe --output C:\Analysis\Output capture.pcap
# Load large PCAP files (Professional)
NetworkMiner.exe --maxframes 1000000 large_capture.pcap
GUI Operations
File Menu:
- Open File(s): Load PCAP/PCAPNG files
- Open Directory: Load all PCAP files from directory
- Receive from Sniffer: Capture live traffic (Professional)
- Clear GUI: Clear current analysis
- Export: Export analysis results
Tools Menu:
- Reassemble Files: Extract files from traffic
- Generate Report: Create analysis report
- Calculate hash: Generate file hashes
- Decode Base64: Decode Base64 strings
- Convert PCAP: Convert between formats
Command-Line Interface
# Basic analysis
mono NetworkMiner.exe capture.pcap
# Specify output directory
mono NetworkMiner.exe --output /tmp/analysis capture.pcap
# Set maximum frames to process
mono NetworkMiner.exe --maxframes 100000 capture.pcap
# Enable verbose output
mono NetworkMiner.exe --verbose capture.pcap
# Process multiple files
mono NetworkMiner.exe *.pcap
# Generate report automatically
mono NetworkMiner.exe --report capture.pcap
Network Analysis Features
Host Discovery
Hosts Tab Analysis:
- IP addresses and MAC addresses
- Operating system detection
- Open ports and services
- Hostname resolution
- Network distance (TTL analysis)
- First and last seen timestamps
Host Information Extraction:
- DHCP hostnames
- NetBIOS names
- DNS queries and responses
- HTTP User-Agent strings
- SMB/CIFS hostnames
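The host attributes above can be exported from the Hosts tab as CSV and summarized programmatically. A minimal sketch, assuming a hypothetical export with `IP`, `MAC`, and `OS` columns (check your export's actual header row):

```python
import csv
from collections import Counter
from io import StringIO

def summarize_os(csv_text):
    """Tally hosts per detected operating system from a hosts CSV export."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter((row.get("OS") or "Unknown") for row in reader)

# Inline sample standing in for a real export file
sample = (
    "IP,MAC,OS\n"
    "10.0.0.1,aa:bb:cc:dd:ee:ff,Windows\n"
    "10.0.0.2,11:22:33:44:55:66,Linux\n"
    "10.0.0.3,de:ad:be:ef:00:01,Windows\n"
)
print(summarize_os(sample))
```

For a real case, read the export with `open(path).read()` instead of the inline sample.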
Session Analysis
Sessions Tab Features:
- TCP session reconstruction
- Client-server relationships
- Session duration and data volume
- Protocol identification
- Application-layer protocols
Session Details:
- Source and destination hosts
- Port numbers and protocols
- Start and end timestamps
- Bytes transferred
- Frame count
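Once exported as CSV, the session details above make it easy to find top talkers. A minimal sketch, assuming hypothetical `Client`, `Server`, and `Bytes` column names (adjust to your export's header row):

```python
import csv
from io import StringIO

def top_sessions(csv_text, n=5):
    """Rank sessions by bytes transferred, largest first."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    rows.sort(key=lambda r: int(r.get("Bytes") or 0), reverse=True)
    return [(r["Client"], r["Server"], int(r["Bytes"])) for r in rows[:n]]

# Inline sample standing in for a real sessions export
sample = (
    "Client,Server,Protocol,Bytes\n"
    "10.0.0.5,93.184.216.34,TCP,120000\n"
    "10.0.0.5,10.0.0.9,TCP,800\n"
    "10.0.0.7,10.0.0.9,TCP,5600\n"
)
print(top_sessions(sample, n=2))
```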
File Extraction
Files Tab Capabilities:
- Automatic file extraction from traffic
- File type identification
- MD5/SHA1/SHA256 hash calculation
- File size and timestamps
- Source host identification
Supported Protocols for File Extraction:
- HTTP/HTTPS file downloads
- FTP file transfers
- SMB/CIFS file shares
- SMTP email attachments
- POP3/IMAP email content
- TFTP transfers
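Extracted files land under the AssembledFiles output directory, and hashing them is usually the first triage step. A minimal sketch (the demo uses a throwaway temp directory standing in for AssembledFiles):

```python
import hashlib
import tempfile
from pathlib import Path

def hash_extracted(root):
    """Yield (relative path, SHA-256 hex digest) for every file under root."""
    root = Path(root)
    for p in sorted(root.rglob("*")):
        if p.is_file():
            yield str(p.relative_to(root)), hashlib.sha256(p.read_bytes()).hexdigest()

# Demo against a throwaway directory standing in for AssembledFiles
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "doc.txt").write_text("hello")
    digests = dict(hash_extracted(d))
print(digests)
```

Point `hash_extracted()` at your real AssembledFiles path to produce a hash manifest for chain-of-custody notes or threat-intel lookups.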
Credential Harvesting
Credentials Tab Features:
- Clear-text password extraction
- Authentication attempts
- Protocol-specific credentials
- Success/failure indicators
Supported Protocols:
- HTTP Basic/Digest authentication
- FTP login credentials
- Telnet authentication
- POP3/IMAP logins
- SMB/NTLM authentication
- SNMP community strings
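Harvested credentials should be redacted before a report is shared. A minimal sketch, assuming a hypothetical credentials export with a `Password` column (match it to your export's actual header):

```python
import csv
from io import StringIO

def mask_passwords(csv_text):
    """Replace the Password column with asterisks of equal length."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    for row in rows:
        row["Password"] = "*" * len(row.get("Password") or "")
    return rows

# Inline sample standing in for a real credentials export
sample = (
    "Protocol,Username,Password,Host\n"
    "FTP,alice,hunter2,10.0.0.9\n"
)
print(mask_passwords(sample))
```

Masking by length preserves a hint about password strength while keeping the cleartext value out of the report.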
Advanced Analysis Techniques
Protocol Analysis
#!/bin/bash
# networkminer-protocol-analysis.sh
PCAP_FILE="$1"
OUTPUT_DIR="/tmp/nm_analysis"
if [ $# -ne 1 ]; then
    echo "Usage: $0 <pcap_file>"
    exit 1
fi
mkdir -p "$OUTPUT_DIR"
echo "Starting NetworkMiner analysis of $PCAP_FILE"
# Run NetworkMiner analysis
mono NetworkMiner.exe --output "$OUTPUT_DIR" "$PCAP_FILE"
# Parse results
echo "=== Host Summary ==="
if [ -f "$OUTPUT_DIR/hosts.csv" ]; then
    head -20 "$OUTPUT_DIR/hosts.csv"
fi
echo -e "\n=== Session Summary ==="
if [ -f "$OUTPUT_DIR/sessions.csv" ]; then
    head -20 "$OUTPUT_DIR/sessions.csv"
fi
echo -e "\n=== Extracted Files ==="
if [ -d "$OUTPUT_DIR/AssembledFiles" ]; then
    find "$OUTPUT_DIR/AssembledFiles" -type f | head -20
fi
echo -e "\n=== Credentials Found ==="
if [ -f "$OUTPUT_DIR/credentials.csv" ]; then
    cat "$OUTPUT_DIR/credentials.csv"
fi
Automated File Analysis
#!/usr/bin/env python3
# networkminer-file-analyzer.py
import os
import sys
import hashlib
import subprocess
import csv
from pathlib import Path

class NetworkMinerAnalyzer:
    def __init__(self, pcap_file, output_dir):
        self.pcap_file = pcap_file
        self.output_dir = Path(output_dir)
        self.assembled_files_dir = self.output_dir / "AssembledFiles"

    def run_networkminer(self):
        """Run NetworkMiner analysis"""
        cmd = [
            "mono", "NetworkMiner.exe",
            "--output", str(self.output_dir),
            self.pcap_file
        ]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                print(f"NetworkMiner error: {result.stderr}")
                return False
            return True
        except Exception as e:
            print(f"Error running NetworkMiner: {e}")
            return False

    def analyze_extracted_files(self):
        """Analyze extracted files"""
        if not self.assembled_files_dir.exists():
            print("No assembled files directory found")
            return
        print("=== Extracted Files Analysis ===")
        for file_path in self.assembled_files_dir.rglob("*"):
            if file_path.is_file():
                self.analyze_file(file_path)

    def analyze_file(self, file_path):
        """Analyze individual extracted file"""
        try:
            # Calculate file hash
            with open(file_path, 'rb') as f:
                file_hash = hashlib.sha256(f.read()).hexdigest()
            # Get file info
            file_size = file_path.stat().st_size
            file_type = self.get_file_type(file_path)
            print(f"File: {file_path.name}")
            print(f"  Size: {file_size} bytes")
            print(f"  Type: {file_type}")
            print(f"  SHA256: {file_hash}")
            print(f"  Path: {file_path}")
            # Check for suspicious files
            if self.is_suspicious_file(file_path, file_type):
                print("  ⚠️ SUSPICIOUS FILE DETECTED")
            print()
        except Exception as e:
            print(f"Error analyzing {file_path}: {e}")

    def get_file_type(self, file_path):
        """Determine file type using the `file` utility"""
        try:
            result = subprocess.run(['file', str(file_path)],
                                    capture_output=True, text=True)
            return result.stdout.strip().split(':', 1)[1].strip()
        except Exception:
            return "Unknown"

    def is_suspicious_file(self, file_path, file_type):
        """Check if file is potentially suspicious"""
        suspicious_extensions = ['.exe', '.scr', '.bat', '.cmd', '.com', '.pif']
        suspicious_types = ['executable', 'script', 'batch']
        # Check extension
        if any(str(file_path).lower().endswith(ext) for ext in suspicious_extensions):
            return True
        # Check file type
        if any(sus_type in file_type.lower() for sus_type in suspicious_types):
            return True
        return False

    def parse_hosts_csv(self):
        """Parse hosts.csv file"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        print("=== Host Analysis ===")
        try:
            with open(hosts_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Host: {row.get('IP', 'Unknown')}")
                    print(f"  MAC: {row.get('MAC', 'Unknown')}")
                    print(f"  OS: {row.get('OS', 'Unknown')}")
                    print(f"  Hostname: {row.get('Hostname', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing hosts.csv: {e}")

    def parse_credentials_csv(self):
        """Parse credentials.csv file"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return
        print("=== Credentials Analysis ===")
        try:
            with open(creds_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Protocol: {row.get('Protocol', 'Unknown')}")
                    print(f"  Username: {row.get('Username', 'Unknown')}")
                    print(f"  Password: {row.get('Password', 'Unknown')}")
                    print(f"  Host: {row.get('Host', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing credentials.csv: {e}")

    def generate_report(self):
        """Generate comprehensive analysis report"""
        report_file = self.output_dir / "analysis_report.txt"
        with open(report_file, 'w') as f:
            f.write("NetworkMiner Analysis Report\n")
            f.write(f"PCAP File: {self.pcap_file}\n")
            f.write(f"Analysis Date: {os.popen('date').read().strip()}\n")
            f.write("=" * 50 + "\n\n")
            # File statistics
            if self.assembled_files_dir.exists():
                file_count = len([p for p in self.assembled_files_dir.rglob("*") if p.is_file()])
                f.write(f"Extracted Files: {file_count}\n")
            # Host statistics
            hosts_file = self.output_dir / "hosts.csv"
            if hosts_file.exists():
                with open(hosts_file, 'r') as hosts_f:
                    host_count = len(hosts_f.readlines()) - 1  # Subtract header
                f.write(f"Unique Hosts: {host_count}\n")
            f.write("\nDetailed analysis available in CSV files and AssembledFiles directory.\n")
        print(f"Report generated: {report_file}")

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-file-analyzer.py <pcap_file> <output_dir>")
        sys.exit(1)
    pcap_file = sys.argv[1]
    output_dir = sys.argv[2]
    analyzer = NetworkMinerAnalyzer(pcap_file, output_dir)
    print("Running NetworkMiner analysis...")
    if analyzer.run_networkminer():
        print("Analysis complete. Processing results...")
        analyzer.parse_hosts_csv()
        analyzer.parse_credentials_csv()
        analyzer.analyze_extracted_files()
        analyzer.generate_report()
    else:
        print("NetworkMiner analysis failed")

if __name__ == "__main__":
    main()
Batch Processing Script
#!/bin/bash
# networkminer-batch-processor.sh
PCAP_DIR="$1"
OUTPUT_BASE_DIR="$2"
NETWORKMINER_PATH="/opt/NetworkMiner"
if [ $# -ne 2 ]; then
    echo "Usage: $0 <pcap_directory> <output_base_directory>"
    exit 1
fi
if [ ! -d "$PCAP_DIR" ]; then
    echo "Error: PCAP directory does not exist"
    exit 1
fi
mkdir -p "$OUTPUT_BASE_DIR"
# Resolve to absolute paths so the cd into NETWORKMINER_PATH doesn't break them
PCAP_DIR=$(cd "$PCAP_DIR" && pwd)
OUTPUT_BASE_DIR=$(cd "$OUTPUT_BASE_DIR" && pwd)
# Process each PCAP file
find "$PCAP_DIR" \( -name "*.pcap" -o -name "*.pcapng" \) | while read -r pcap_file; do
    echo "Processing: $pcap_file"
    # Create output directory for this PCAP
    pcap_basename=$(basename "$pcap_file" | sed 's/\.[^.]*$//')
    output_dir="$OUTPUT_BASE_DIR/$pcap_basename"
    mkdir -p "$output_dir"
    # Run NetworkMiner
    cd "$NETWORKMINER_PATH" || exit 1
    mono NetworkMiner.exe --output "$output_dir" "$pcap_file"
    # Generate summary
    echo "=== Analysis Summary for $pcap_file ===" > "$output_dir/summary.txt"
    echo "Analysis Date: $(date)" >> "$output_dir/summary.txt"
    echo "PCAP File: $pcap_file" >> "$output_dir/summary.txt"
    echo "Output Directory: $output_dir" >> "$output_dir/summary.txt"
    echo "" >> "$output_dir/summary.txt"
    # Count extracted files
    if [ -d "$output_dir/AssembledFiles" ]; then
        file_count=$(find "$output_dir/AssembledFiles" -type f | wc -l)
        echo "Extracted Files: $file_count" >> "$output_dir/summary.txt"
    fi
    # Count hosts
    if [ -f "$output_dir/hosts.csv" ]; then
        host_count=$(($(wc -l < "$output_dir/hosts.csv") - 1))
        echo "Unique Hosts: $host_count" >> "$output_dir/summary.txt"
    fi
    # Count sessions
    if [ -f "$output_dir/sessions.csv" ]; then
        session_count=$(($(wc -l < "$output_dir/sessions.csv") - 1))
        echo "Network Sessions: $session_count" >> "$output_dir/summary.txt"
    fi
    echo "Completed: $pcap_file"
    echo "Output: $output_dir"
    echo "---"
done
echo "Batch processing complete!"
echo "Results available in: $OUTPUT_BASE_DIR"
Integration and Automation
SIEM Integration
#!/usr/bin/env python3
# networkminer-siem-integration.py
import csv
import hashlib
from pathlib import Path

import requests

class NetworkMinerSIEMIntegrator:
    def __init__(self, output_dir, siem_endpoint, api_key):
        self.output_dir = Path(output_dir)
        self.siem_endpoint = siem_endpoint
        self.api_key = api_key

    def send_to_siem(self, event_data):
        """Send event to SIEM"""
        try:
            headers = {
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {self.api_key}'
            }
            response = requests.post(
                self.siem_endpoint,
                json=event_data,
                headers=headers,
                timeout=10
            )
            response.raise_for_status()
            return True
        except requests.RequestException as e:
            print(f"Error sending to SIEM: {e}")
            return False

    def process_hosts(self):
        """Process and send host information to SIEM"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'host_discovery',
                    'source': 'networkminer',
                    'timestamp': row.get('First Seen', ''),
                    'ip_address': row.get('IP', ''),
                    'mac_address': row.get('MAC', ''),
                    'hostname': row.get('Hostname', ''),
                    'operating_system': row.get('OS', ''),
                    'open_ports': row.get('Open Ports', ''),
                    'last_seen': row.get('Last Seen', '')
                }
                self.send_to_siem(event)

    def process_credentials(self):
        """Process and send credential information to SIEM"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return
        with open(creds_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'credential_exposure',
                    'source': 'networkminer',
                    'severity': 'HIGH',
                    'protocol': row.get('Protocol', ''),
                    'username': row.get('Username', ''),
                    'password': row.get('Password', ''),
                    'host': row.get('Host', ''),
                    'timestamp': row.get('Timestamp', '')
                }
                self.send_to_siem(event)

    def process_files(self):
        """Process and send file information to SIEM"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return
        for file_path in files_dir.rglob("*"):
            if file_path.is_file():
                try:
                    # Calculate file hash
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()
                    event = {
                        'event_type': 'file_extraction',
                        'source': 'networkminer',
                        'filename': file_path.name,
                        'file_size': file_path.stat().st_size,
                        'file_hash': file_hash,
                        'extraction_path': str(file_path),
                        'timestamp': file_path.stat().st_mtime
                    }
                    self.send_to_siem(event)
                except Exception as e:
                    print(f"Error processing file {file_path}: {e}")

def main():
    output_dir = "/tmp/networkminer_output"
    siem_endpoint = "https://your-siem.com/api/events"
    api_key = "your-api-key"
    integrator = NetworkMinerSIEMIntegrator(output_dir, siem_endpoint, api_key)
    print("Sending NetworkMiner results to SIEM...")
    integrator.process_hosts()
    integrator.process_credentials()
    integrator.process_files()
    print("SIEM integration complete")

if __name__ == "__main__":
    main()
Threat Intelligence Integration
#!/usr/bin/env python3
# networkminer-threat-intel.py
import csv
import hashlib
import ipaddress
from pathlib import Path

import requests

class ThreatIntelChecker:
    def __init__(self, output_dir, virustotal_api_key):
        self.output_dir = Path(output_dir)
        self.vt_api_key = virustotal_api_key
        self.vt_base_url = "https://www.virustotal.com/vtapi/v2"

    def check_ip_reputation(self, ip_address):
        """Check IP reputation with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/ip-address/report"
            params = {
                'apikey': self.vt_api_key,
                'ip': ip_address
            }
            response = requests.get(url, params=params, timeout=15)
            if response.status_code == 200:
                return response.json()
        except Exception as e:
            print(f"Error checking IP {ip_address}: {e}")
        return None

    def check_file_hash(self, file_hash):
        """Check file hash with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/file/report"
            params = {
                'apikey': self.vt_api_key,
                'resource': file_hash
            }
            response = requests.get(url, params=params, timeout=15)
            if response.status_code == 200:
                return response.json()
        except Exception as e:
            print(f"Error checking hash {file_hash}: {e}")
        return None

    def analyze_hosts(self):
        """Check external hosts against threat intelligence"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        print("=== Threat Intelligence Analysis - Hosts ===")
        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                ip = row.get('IP', '')
                if ip and self.is_external_ip(ip):
                    print(f"Checking IP: {ip}")
                    result = self.check_ip_reputation(ip)
                    if result and result.get('response_code') == 1:
                        detected_urls = result.get('detected_urls', [])
                        detected_samples = result.get('detected_communicating_samples', [])
                        if detected_urls or detected_samples:
                            print(f"  ⚠️ THREAT DETECTED for {ip}")
                            print(f"  Malicious URLs: {len(detected_urls)}")
                            print(f"  Malicious Samples: {len(detected_samples)}")
                        else:
                            print(f"  ✓ Clean reputation for {ip}")

    def analyze_files(self):
        """Check extracted files against threat intelligence"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return
        print("=== Threat Intelligence Analysis - Files ===")
        for file_path in files_dir.rglob("*"):
            if file_path.is_file() and file_path.stat().st_size > 0:
                try:
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()
                    print(f"Checking file: {file_path.name}")
                    result = self.check_file_hash(file_hash)
                    if result and result.get('response_code') == 1:
                        positives = result.get('positives', 0)
                        total = result.get('total', 0)
                        if positives > 0:
                            print(f"  ⚠️ MALWARE DETECTED: {file_path.name}")
                            print(f"  Detection: {positives}/{total} engines")
                            print(f"  SHA256: {file_hash}")
                        else:
                            print(f"  ✓ Clean file: {file_path.name}")
                except Exception as e:
                    print(f"Error analyzing {file_path}: {e}")

    def is_external_ip(self, ip):
        """Check if IP is external (not private)"""
        try:
            return not ipaddress.ip_address(ip).is_private
        except ValueError:
            return False

def main():
    output_dir = "/tmp/networkminer_output"
    virustotal_api_key = "your-virustotal-api-key"
    checker = ThreatIntelChecker(output_dir, virustotal_api_key)
    print("Starting threat intelligence analysis...")
    checker.analyze_hosts()
    checker.analyze_files()
    print("Threat intelligence analysis complete")

if __name__ == "__main__":
    main()
Reporting and Visualization
HTML Report Generator
#!/usr/bin/env python3
# networkminer-html-report.py
import csv
import sys
from pathlib import Path
from datetime import datetime

class NetworkMinerHTMLReporter:
    def __init__(self, output_dir, pcap_file):
        self.output_dir = Path(output_dir)
        self.pcap_file = pcap_file
        self.report_data = {}

    def collect_data(self):
        """Collect all analysis data"""
        self.report_data = {
            'pcap_file': self.pcap_file,
            'analysis_date': datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            'hosts': self.load_hosts(),
            'sessions': self.load_sessions(),
            'files': self.load_files(),
            'credentials': self.load_credentials()
        }

    def load_hosts(self):
        """Load host data"""
        hosts_file = self.output_dir / "hosts.csv"
        hosts = []
        if hosts_file.exists():
            with open(hosts_file, 'r') as f:
                hosts = list(csv.DictReader(f))
        return hosts

    def load_sessions(self):
        """Load session data"""
        sessions_file = self.output_dir / "sessions.csv"
        sessions = []
        if sessions_file.exists():
            with open(sessions_file, 'r') as f:
                sessions = list(csv.DictReader(f))
        return sessions

    def load_files(self):
        """Load extracted files data"""
        files_dir = self.output_dir / "AssembledFiles"
        files = []
        if files_dir.exists():
            for file_path in files_dir.rglob("*"):
                if file_path.is_file():
                    files.append({
                        'name': file_path.name,
                        'size': file_path.stat().st_size,
                        'path': str(file_path.relative_to(files_dir))
                    })
        return files

    def load_credentials(self):
        """Load credentials data"""
        creds_file = self.output_dir / "credentials.csv"
        credentials = []
        if creds_file.exists():
            with open(creds_file, 'r') as f:
                credentials = list(csv.DictReader(f))
        return credentials

    def generate_html_report(self):
        """Generate HTML report (literal CSS braces are doubled for str.format)"""
        html_template = """
<!DOCTYPE html>
<html>
<head>
<title>NetworkMiner Analysis Report</title>
<style>
body {{ font-family: Arial, sans-serif; margin: 20px; }}
.header {{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; }}
.section {{ margin: 20px 0; }}
.section h2 {{ color: #333; border-bottom: 2px solid #ccc; }}
table {{ border-collapse: collapse; width: 100%; margin: 10px 0; }}
th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
th {{ background-color: #f2f2f2; }}
.stats {{ display: flex; gap: 20px; margin: 20px 0; }}
.stat-box {{ background-color: #e9f4ff; padding: 15px; border-radius: 5px; text-align: center; }}
.warning {{ background-color: #fff3cd; border: 1px solid #ffeaa7; padding: 10px; border-radius: 5px; }}
</style>
</head>
<body>
<div class="header">
<h1>NetworkMiner Analysis Report</h1>
<p><strong>PCAP File:</strong> {pcap_file}</p>
<p><strong>Analysis Date:</strong> {analysis_date}</p>
</div>
<div class="stats">
<div class="stat-box">
<h3>{host_count}</h3>
<p>Unique Hosts</p>
</div>
<div class="stat-box">
<h3>{session_count}</h3>
<p>Network Sessions</p>
</div>
<div class="stat-box">
<h3>{file_count}</h3>
<p>Extracted Files</p>
</div>
<div class="stat-box">
<h3>{credential_count}</h3>
<p>Credentials Found</p>
</div>
</div>
{credentials_warning}
<div class="section">
<h2>Host Information</h2>
<table>
<tr>
<th>IP Address</th>
<th>MAC Address</th>
<th>Hostname</th>
<th>Operating System</th>
<th>Open Ports</th>
</tr>
{host_rows}
</table>
</div>
<div class="section">
<h2>Network Sessions</h2>
<table>
<tr>
<th>Client</th>
<th>Server</th>
<th>Protocol</th>
<th>Start Time</th>
<th>Duration</th>
</tr>
{session_rows}
</table>
</div>
<div class="section">
<h2>Extracted Files</h2>
<table>
<tr>
<th>Filename</th>
<th>Size (bytes)</th>
<th>Path</th>
</tr>
{file_rows}
</table>
</div>
{credentials_section}
</body>
</html>
"""
        # Generate table rows
        host_rows = ""
        for host in self.report_data['hosts'][:20]:  # Limit to first 20
            host_rows += f"""
<tr>
<td>{host.get('IP', '')}</td>
<td>{host.get('MAC', '')}</td>
<td>{host.get('Hostname', '')}</td>
<td>{host.get('OS', '')}</td>
<td>{host.get('Open Ports', '')}</td>
</tr>
"""
        session_rows = ""
        for session in self.report_data['sessions'][:20]:  # Limit to first 20
            session_rows += f"""
<tr>
<td>{session.get('Client', '')}</td>
<td>{session.get('Server', '')}</td>
<td>{session.get('Protocol', '')}</td>
<td>{session.get('Start Time', '')}</td>
<td>{session.get('Duration', '')}</td>
</tr>
"""
        file_rows = ""
        for file_info in self.report_data['files'][:20]:  # Limit to first 20
            file_rows += f"""
<tr>
<td>{file_info['name']}</td>
<td>{file_info['size']}</td>
<td>{file_info['path']}</td>
</tr>
"""
        # Credentials section (passwords are masked)
        credentials_warning = ""
        credentials_section = ""
        if self.report_data['credentials']:
            credentials_warning = """
<div class="warning">
<strong>⚠️ Warning:</strong> Clear-text credentials were found in the network traffic!
</div>
"""
            credential_rows = ""
            for cred in self.report_data['credentials']:
                credential_rows += f"""
<tr>
<td>{cred.get('Protocol', '')}</td>
<td>{cred.get('Username', '')}</td>
<td>{'*' * len(cred.get('Password', ''))}</td>
<td>{cred.get('Host', '')}</td>
</tr>
"""
            credentials_section = f"""
<div class="section">
<h2>Credentials (Passwords Hidden)</h2>
<table>
<tr>
<th>Protocol</th>
<th>Username</th>
<th>Password</th>
<th>Host</th>
</tr>
{credential_rows}
</table>
</div>
"""
        # Fill template
        html_content = html_template.format(
            pcap_file=self.report_data['pcap_file'],
            analysis_date=self.report_data['analysis_date'],
            host_count=len(self.report_data['hosts']),
            session_count=len(self.report_data['sessions']),
            file_count=len(self.report_data['files']),
            credential_count=len(self.report_data['credentials']),
            credentials_warning=credentials_warning,
            host_rows=host_rows,
            session_rows=session_rows,
            file_rows=file_rows,
            credentials_section=credentials_section
        )
        # Save report
        report_file = self.output_dir / "analysis_report.html"
        with open(report_file, 'w') as f:
            f.write(html_content)
        print(f"HTML report generated: {report_file}")
        return report_file

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-html-report.py <output_dir> <pcap_file>")
        sys.exit(1)
    output_dir = sys.argv[1]
    pcap_file = sys.argv[2]
    reporter = NetworkMinerHTMLReporter(output_dir, pcap_file)
    reporter.collect_data()
    reporter.generate_html_report()

if __name__ == "__main__":
    main()
Best Practices
Analysis Workflow
#!/bin/bash
# networkminer-best-practices.sh
PCAP_FILE="$1"
CASE_NAME="$2"
ANALYST="$3"
if [ $# -ne 3 ]; then
    echo "Usage: $0 <pcap_file> <case_name> <analyst_name>"
    exit 1
fi
# Create case directory structure
CASE_DIR="/cases/$CASE_NAME"
mkdir -p "$CASE_DIR"/{evidence,analysis,reports,notes}
# Copy evidence with hash verification
echo "Copying evidence file..."
cp "$PCAP_FILE" "$CASE_DIR/evidence/"
ORIGINAL_HASH=$(sha256sum "$PCAP_FILE" | cut -d' ' -f1)
COPY_HASH=$(sha256sum "$CASE_DIR/evidence/$(basename "$PCAP_FILE")" | cut -d' ' -f1)
if [ "$ORIGINAL_HASH" != "$COPY_HASH" ]; then
    echo "ERROR: Hash mismatch during evidence copy!"
    exit 1
fi
# Create case log
CASE_LOG="$CASE_DIR/notes/case_log.txt"
echo "Case: $CASE_NAME" > "$CASE_LOG"
echo "Analyst: $ANALYST" >> "$CASE_LOG"
echo "Start Time: $(date)" >> "$CASE_LOG"
echo "Evidence File: $(basename "$PCAP_FILE")" >> "$CASE_LOG"
echo "Evidence Hash: $ORIGINAL_HASH" >> "$CASE_LOG"
echo "---" >> "$CASE_LOG"
# Run NetworkMiner analysis
echo "Running NetworkMiner analysis..."
OUTPUT_DIR="$CASE_DIR/analysis/networkminer"
mkdir -p "$OUTPUT_DIR"
mono NetworkMiner.exe --output "$OUTPUT_DIR" "$CASE_DIR/evidence/$(basename "$PCAP_FILE")"
# Document analysis steps
echo "$(date): NetworkMiner analysis completed" >> "$CASE_LOG"
echo "Output directory: $OUTPUT_DIR" >> "$CASE_LOG"
# Generate reports
echo "Generating reports..."
python3 networkminer-html-report.py "$OUTPUT_DIR" "$(basename "$PCAP_FILE")"
cp "$OUTPUT_DIR/analysis_report.html" "$CASE_DIR/reports/" 2>/dev/null
echo "Analysis complete. Case directory: $CASE_DIR"
Performance Optimization
NetworkMiner Performance Tips:
1. PCAP File Size:
- The free version can struggle with very large PCAP files
- The Professional version handles large captures more efficiently
- Split oversized PCAPs (e.g., with editcap) before analysis
2. Memory Usage:
- Close unnecessary applications
- Increase virtual memory if needed
- Monitor RAM usage during analysis
3. Processing Speed:
- Use SSD storage for faster I/O
- Process files locally (not over a network share)
- Close GUI tabs not in use
4. Batch Processing:
- Process multiple small files rather than one large file
- Use command-line mode for automation
- Schedule analysis during off-hours
Evidence Handling
#!/bin/bash
# evidence-handling-checklist.sh
echo "NetworkMiner Evidence Handling Checklist"
echo "========================================"
echo "1. Evidence Acquisition:"
echo " □ Verify PCAP file integrity (hash)"
echo " □ Document chain of custody"
echo " □ Create working copy for analysis"
echo " □ Store original in secure location"
echo -e "\n2. Analysis Environment:"
echo " □ Use isolated analysis system"
echo " □ Document system configuration"
echo " □ Ensure sufficient disk space"
echo " □ Backup analysis results"
echo -e "\n3. Documentation:"
echo " □ Record analysis start/end times"
echo " □ Document tools and versions used"
echo " □ Save all output files"
echo " □ Create analysis summary report"
echo -e "\n4. Quality Assurance:"
echo " □ Verify extracted files integrity"
echo " □ Cross-reference findings"
echo " □ Peer review analysis results"
echo " □ Archive complete case file"
Troubleshooting
Common Issues
# Issue: NetworkMiner won't start on Linux
# Solution: Install required Mono components
sudo apt install mono-complete libmono-winforms2.0-cil
# Issue: PCAP file too large (free version)
# Solution: Split PCAP file
editcap -c 1000000 large_file.pcap split_file.pcap
# Issue: Missing extracted files
# Check file extraction settings in GUI
# Verify PCAP contains actual file transfers
# Issue: No credentials found
# Ensure traffic contains clear-text protocols
# Check for encrypted connections (HTTPS, SSH)
# Issue: Slow performance
# Close unnecessary GUI tabs
# Increase system memory
# Use command-line mode for batch processing
Debug Mode
# Enable verbose logging
mono --debug NetworkMiner.exe capture.pcap
# Check Mono version compatibility
mono --version
# Verify PCAP file format
file capture.pcap
tcpdump -r capture.pcap -c 10
# Test with sample PCAP
wget https://wiki.wireshark.org/SampleCaptures/http.cap
mono NetworkMiner.exe http.cap
Log Analysis
# Check system logs for errors
journalctl -u mono --since "1 hour ago"
# Monitor resource usage
top -p $(pgrep mono)
# Check disk space
df -h /tmp
# Verify file permissions
ls -la NetworkMiner.exe
ls -la *.pcap
Resources
- NetworkMiner Official Website
- NetworkMiner Professional
- NetworkMiner Manual
- Sample PCAP Files
- Network Forensics Guide
This cheat sheet provides comprehensive guidance for using NetworkMiner for network forensic analysis. Regular practice with sample PCAP files and a solid understanding of network protocols will improve analysis effectiveness.