NetworkMiner Network Forensic Analysis Tool Cheat Sheet
Overview
NetworkMiner is a Network Forensic Analysis Tool (NFAT) for Windows that can detect operating systems, sessions, hostnames, open ports, and more by analyzing network traffic captured in PCAP files. It provides a user-friendly interface for network forensics and incident response, offering both free and professional versions; the Professional version adds advanced features for deep packet analysis, file extraction, and network reconstruction.
⚠️ Note: NetworkMiner is primarily a Windows application. The free version has limitations on PCAP file size and features. Professional version offers advanced capabilities for enterprise use.
Installation
Windows Installation
# Download NetworkMiner Free from official website
# https://www.netresec.com/?page=NetworkMiner
# Extract to desired directory
Expand-Archive -Path NetworkMiner_2-8-1.zip -DestinationPath C:\Tools\NetworkMiner
# Run NetworkMiner
cd C:\Tools\NetworkMiner
.\NetworkMiner.exe
# For Professional version
# Purchase license from https://www.netresec.com/
# Install using provided installer
Linux Installation (Mono)
# Install Mono runtime
sudo apt update
sudo apt install mono-complete
# Download NetworkMiner
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip
unzip NetworkMiner_2-8-1.zip
cd NetworkMiner_2-8-1
# Run with Mono
mono NetworkMiner.exe
# Install additional dependencies if needed
sudo apt install libmono-winforms2.0-cil
sudo apt install libmono-system-windows-forms4.0-cil
Docker Installation
# Create Dockerfile for NetworkMiner
cat > Dockerfile << 'EOF'
FROM mono:latest
RUN apt-get update && apt-get install -y \
wget \
unzip \
libmono-winforms2.0-cil \
libmono-system-windows-forms4.0-cil
WORKDIR /app
RUN wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip && \
unzip NetworkMiner_2-8-1.zip && \
rm NetworkMiner_2-8-1.zip
WORKDIR /app/NetworkMiner_2-8-1
ENTRYPOINT ["mono", "NetworkMiner.exe"]
EOF
# Build and run
docker build -t networkminer .
docker run -it --rm -v $(pwd)/pcaps:/pcaps networkminer
Portable Installation
# Download portable version
wget https://www.netresec.com/files/NetworkMiner_2-8-1.zip
# Extract to USB drive or portable location
unzip NetworkMiner_2-8-1.zip -d /media/usb/NetworkMiner
# Create launcher script
cat > /media/usb/NetworkMiner/run_networkminer.sh << 'EOF'
#!/bin/bash
cd "$(dirname "$0")/NetworkMiner_2-8-1"
mono NetworkMiner.exe "$@"
EOF
chmod +x /media/usb/NetworkMiner/run_networkminer.sh
Basic Usage
Loading PCAP Files
# Command-line usage
NetworkMiner.exe --help
# Load single PCAP file
NetworkMiner.exe capture.pcap
# Load multiple PCAP files
NetworkMiner.exe capture1.pcap capture2.pcap capture3.pcap
# Load PCAP with specific output directory
NetworkMiner.exe --output C:\Analysis\Output capture.pcap
# Load large PCAP files (Professional)
NetworkMiner.exe --maxframes 1000000 large_capture.pcap
GUI Operations
File Menu:
- Open File(s): Load PCAP/PCAPNG files
- Open Directory: Load all PCAP files from directory
- Receive from Sniffer: Capture live traffic (Professional)
- Clear GUI: Clear current analysis
- Export: Export analysis results
Tools Menu:
- Reassemble Files: Extract files from traffic
- Generate Report: Create analysis report
- Calculate Hash: Generate file hashes
- Decode Base64: Decode Base64 strings
- Convert PCAP: Convert between formats
Command-Line Interface
# Basic analysis
mono NetworkMiner.exe capture.pcap
# Specify output directory
mono NetworkMiner.exe --output /tmp/analysis capture.pcap
# Set maximum frames to process
mono NetworkMiner.exe --maxframes 100000 capture.pcap
# Enable verbose output
mono NetworkMiner.exe --verbose capture.pcap
# Process multiple files
mono NetworkMiner.exe *.pcap
# Generate report automatically
mono NetworkMiner.exe --report capture.pcap
Network Analysis Features
Host Discovery
Hosts Tab Analysis:
- IP addresses and MAC addresses
- Operating system detection
- Open ports and services
- Hostname resolution
- Network distance (TTL analysis)
- First and last seen timestamps
Host Information Extraction:
- DHCP hostnames
- NetBIOS names
- DNS queries and responses
- HTTP User-Agent strings
- SMB/CIFS hostnames
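Host details can also be exported to CSV and triaged outside the GUI. A minimal sketch, assuming the export has `IP` and `OS` columns (check the header row of your own export, which may differ by version):

```python
import csv
from collections import Counter

def summarize_hosts(csv_path):
    """Count discovered hosts per detected operating system."""
    os_counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Fall back to "Unknown" when OS detection produced nothing
            os_counts[row.get("OS") or "Unknown"] += 1
    return os_counts
```

Useful for a quick first impression of an environment, e.g. spotting a lone Linux box on an all-Windows segment.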
Session Analysis
Sessions Tab Features:
- TCP session reconstruction
- Client-server relationships
- Session duration and data volume
- Protocol identification
- Application-layer protocols
Session Details:
- Source and destination hosts
- Port numbers and protocols
- Start and end timestamps
- Bytes transferred
- Frame count
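A session export lends itself to quick "top talker" triage. A hedged sketch, assuming the CSV carries a numeric `Bytes` column for bytes transferred (adjust the column name to whatever your export actually uses):

```python
import csv

def top_sessions(csv_path, n=5):
    """Return the n sessions that moved the most data, largest first."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Missing or empty byte counts sort as zero
    return sorted(rows, key=lambda r: int(r.get("Bytes") or 0), reverse=True)[:n]
```

Sorting by volume surfaces bulk transfers (exfiltration, large downloads) that are easy to miss in a long session list.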
File Extraction
Files Tab Capabilities:
- Automatic file extraction from traffic
- File type identification
- MD5/SHA1/SHA256 hash calculation
- File size and timestamps
- Source host identification
Supported Protocols for File Extraction:
- HTTP/HTTPS file downloads
- FTP file transfers
- SMB/CIFS file shares
- SMTP email attachments
- POP3/IMAP email content
- TFTP transfers
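Since every reassembled file lands under the `AssembledFiles` output directory, hashing the whole tree is a common first step before threat-intel lookups. A small sketch:

```python
import hashlib
from pathlib import Path

def hash_extracted(assembled_dir):
    """Map each reassembled file to its SHA-256 digest."""
    digests = {}
    for p in Path(assembled_dir).rglob("*"):
        if p.is_file():
            digests[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return digests
```

The resulting digests can be fed straight into VirusTotal lookups or an internal allow/deny list.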
Credential Harvesting
Credentials Tab Features:
- Clear-text password extraction
- Authentication attempts
- Protocol-specific credentials
- Success/failure indicators
Supported Protocols:
- HTTP Basic/Digest authentication
- FTP login credentials
- Telnet authentication
- POP3/IMAP login
- SMB/NTLM authentication
- SNMP community strings
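When the credentials tab is exported to CSV, you can filter for the clear-text protocols that matter most to your case. A sketch, assuming a `Protocol` column in the export (verify against your own header row):

```python
import csv

def cleartext_credentials(csv_path, protocols=("FTP", "Telnet", "HTTP", "POP3")):
    """Filter a credentials export down to clear-text protocols of interest."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if any(p.lower() in (row.get("Protocol") or "").lower()
                       for p in protocols)]
```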
Advanced Analysis Techniques
Protocol Analysis
#!/bin/bash
# networkminer-protocol-analysis.sh
PCAP_FILE="$1"
OUTPUT_DIR="/tmp/nm_analysis"
if [ $# -ne 1 ]; then
    echo "Usage: $0 <pcap_file>"
    exit 1
fi
mkdir -p "$OUTPUT_DIR"
echo "Starting NetworkMiner analysis of $PCAP_FILE"
# Run NetworkMiner analysis
mono NetworkMiner.exe --output "$OUTPUT_DIR" "$PCAP_FILE"
# Parse results
echo "=== Host Summary ==="
if [ -f "$OUTPUT_DIR/hosts.csv" ]; then
    head -20 "$OUTPUT_DIR/hosts.csv"
fi
echo -e "\n=== Session Summary ==="
if [ -f "$OUTPUT_DIR/sessions.csv" ]; then
    head -20 "$OUTPUT_DIR/sessions.csv"
fi
echo -e "\n=== Extracted Files ==="
if [ -d "$OUTPUT_DIR/AssembledFiles" ]; then
    find "$OUTPUT_DIR/AssembledFiles" -type f | head -20
fi
echo -e "\n=== Credentials Found ==="
if [ -f "$OUTPUT_DIR/credentials.csv" ]; then
    cat "$OUTPUT_DIR/credentials.csv"
fi
Automated File Analysis
#!/usr/bin/env python3
# networkminer-file-analyzer.py
import os
import sys
import hashlib
import subprocess
import csv
from pathlib import Path

class NetworkMinerAnalyzer:
    def __init__(self, pcap_file, output_dir):
        self.pcap_file = pcap_file
        self.output_dir = Path(output_dir)
        self.assembled_files_dir = self.output_dir / "AssembledFiles"

    def run_networkminer(self):
        """Run NetworkMiner analysis"""
        cmd = [
            "mono", "NetworkMiner.exe",
            "--output", str(self.output_dir),
            self.pcap_file
        ]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                print(f"NetworkMiner error: {result.stderr}")
                return False
            return True
        except Exception as e:
            print(f"Error running NetworkMiner: {e}")
            return False

    def analyze_extracted_files(self):
        """Analyze extracted files"""
        if not self.assembled_files_dir.exists():
            print("No assembled files directory found")
            return
        print("=== Extracted Files Analysis ===")
        for file_path in self.assembled_files_dir.rglob("*"):
            if file_path.is_file():
                self.analyze_file(file_path)

    def analyze_file(self, file_path):
        """Analyze individual extracted file"""
        try:
            # Calculate file hash
            with open(file_path, 'rb') as f:
                file_hash = hashlib.sha256(f.read()).hexdigest()
            # Get file info
            file_size = file_path.stat().st_size
            file_type = self.get_file_type(file_path)
            print(f"File: {file_path.name}")
            print(f"  Size: {file_size} bytes")
            print(f"  Type: {file_type}")
            print(f"  SHA256: {file_hash}")
            print(f"  Path: {file_path}")
            # Check for suspicious files
            if self.is_suspicious_file(file_path, file_type):
                print("  ⚠️ SUSPICIOUS FILE DETECTED")
            print()
        except Exception as e:
            print(f"Error analyzing {file_path}: {e}")

    def get_file_type(self, file_path):
        """Determine file type via the 'file' utility"""
        try:
            result = subprocess.run(['file', str(file_path)],
                                    capture_output=True, text=True)
            return result.stdout.strip().split(':', 1)[1].strip()
        except (OSError, IndexError):
            return "Unknown"

    def is_suspicious_file(self, file_path, file_type):
        """Check if file is potentially suspicious"""
        suspicious_extensions = ['.exe', '.scr', '.bat', '.cmd', '.com', '.pif']
        suspicious_types = ['executable', 'script', 'batch']
        # Check extension
        if any(str(file_path).lower().endswith(ext) for ext in suspicious_extensions):
            return True
        # Check file type
        if any(sus_type in file_type.lower() for sus_type in suspicious_types):
            return True
        return False

    def parse_hosts_csv(self):
        """Parse hosts.csv file"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        print("=== Host Analysis ===")
        try:
            with open(hosts_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Host: {row.get('IP', 'Unknown')}")
                    print(f"  MAC: {row.get('MAC', 'Unknown')}")
                    print(f"  OS: {row.get('OS', 'Unknown')}")
                    print(f"  Hostname: {row.get('Hostname', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing hosts.csv: {e}")

    def parse_credentials_csv(self):
        """Parse credentials.csv file"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return
        print("=== Credentials Analysis ===")
        try:
            with open(creds_file, 'r') as f:
                reader = csv.DictReader(f)
                for row in reader:
                    print(f"Protocol: {row.get('Protocol', 'Unknown')}")
                    print(f"  Username: {row.get('Username', 'Unknown')}")
                    print(f"  Password: {row.get('Password', 'Unknown')}")
                    print(f"  Host: {row.get('Host', 'Unknown')}")
                    print()
        except Exception as e:
            print(f"Error parsing credentials.csv: {e}")

    def generate_report(self):
        """Generate comprehensive analysis report"""
        report_file = self.output_dir / "analysis_report.txt"
        with open(report_file, 'w') as f:
            f.write("NetworkMiner Analysis Report\n")
            f.write(f"PCAP File: {self.pcap_file}\n")
            f.write(f"Analysis Date: {os.popen('date').read().strip()}\n")
            f.write("=" * 50 + "\n\n")
            # File statistics
            if self.assembled_files_dir.exists():
                file_count = len(list(self.assembled_files_dir.rglob("*")))
                f.write(f"Extracted Files: {file_count}\n")
            # Host statistics
            hosts_file = self.output_dir / "hosts.csv"
            if hosts_file.exists():
                with open(hosts_file, 'r') as hosts_f:
                    host_count = len(hosts_f.readlines()) - 1  # Subtract header
                f.write(f"Unique Hosts: {host_count}\n")
            f.write("\nDetailed analysis available in CSV files and AssembledFiles directory.\n")
        print(f"Report generated: {report_file}")

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-file-analyzer.py <pcap_file> <output_dir>")
        sys.exit(1)
    pcap_file = sys.argv[1]
    output_dir = sys.argv[2]
    analyzer = NetworkMinerAnalyzer(pcap_file, output_dir)
    print("Running NetworkMiner analysis...")
    if analyzer.run_networkminer():
        print("Analysis complete. Processing results...")
        analyzer.parse_hosts_csv()
        analyzer.parse_credentials_csv()
        analyzer.analyze_extracted_files()
        analyzer.generate_report()
    else:
        print("NetworkMiner analysis failed")

if __name__ == "__main__":
    main()
Batch Processing Script
#!/bin/bash
# networkminer-batch-processor.sh
PCAP_DIR="$1"
OUTPUT_BASE_DIR="$2"
NETWORKMINER_PATH="/opt/NetworkMiner"
if [ $# -ne 2 ]; then
    echo "Usage: $0 <pcap_directory> <output_base_directory>"
    exit 1
fi
if [ ! -d "$PCAP_DIR" ]; then
    echo "Error: PCAP directory does not exist"
    exit 1
fi
mkdir -p "$OUTPUT_BASE_DIR"
# Process each PCAP file
find "$PCAP_DIR" \( -name "*.pcap" -o -name "*.pcapng" \) | while read -r pcap_file; do
    echo "Processing: $pcap_file"
    # Create output directory for this PCAP
    pcap_basename=$(basename "$pcap_file" | sed 's/\.[^.]*$//')
    output_dir="$OUTPUT_BASE_DIR/$pcap_basename"
    mkdir -p "$output_dir"
    # Run NetworkMiner
    cd "$NETWORKMINER_PATH" || exit 1
    mono NetworkMiner.exe --output "$output_dir" "$pcap_file"
    # Generate summary
    echo "=== Analysis Summary for $pcap_file ===" > "$output_dir/summary.txt"
    echo "Analysis Date: $(date)" >> "$output_dir/summary.txt"
    echo "PCAP File: $pcap_file" >> "$output_dir/summary.txt"
    echo "Output Directory: $output_dir" >> "$output_dir/summary.txt"
    echo "" >> "$output_dir/summary.txt"
    # Count extracted files
    if [ -d "$output_dir/AssembledFiles" ]; then
        file_count=$(find "$output_dir/AssembledFiles" -type f | wc -l)
        echo "Extracted Files: $file_count" >> "$output_dir/summary.txt"
    fi
    # Count hosts
    if [ -f "$output_dir/hosts.csv" ]; then
        host_count=$(($(wc -l < "$output_dir/hosts.csv") - 1))
        echo "Unique Hosts: $host_count" >> "$output_dir/summary.txt"
    fi
    # Count sessions
    if [ -f "$output_dir/sessions.csv" ]; then
        session_count=$(($(wc -l < "$output_dir/sessions.csv") - 1))
        echo "Network Sessions: $session_count" >> "$output_dir/summary.txt"
    fi
    echo "Completed: $pcap_file"
    echo "Output: $output_dir"
    echo "---"
done
echo "Batch processing complete!"
echo "Results available in: $OUTPUT_BASE_DIR"
Integration and Automation
SIEM Integration
#!/usr/bin/env python3
# networkminer-siem-integration.py
import csv
import hashlib
import requests
from pathlib import Path

class NetworkMinerSIEMIntegrator:
    def __init__(self, output_dir, siem_endpoint, api_key):
        self.output_dir = Path(output_dir)
        self.siem_endpoint = siem_endpoint
        self.api_key = api_key

    def send_to_siem(self, event_data):
        """Send event to SIEM"""
        try:
            headers = {
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {self.api_key}'
            }
            response = requests.post(
                self.siem_endpoint,
                json=event_data,
                headers=headers,
                timeout=10
            )
            response.raise_for_status()
            return True
        except requests.RequestException as e:
            print(f"Error sending to SIEM: {e}")
            return False

    def process_hosts(self):
        """Process and send host information to SIEM"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'host_discovery',
                    'source': 'networkminer',
                    'timestamp': row.get('First Seen', ''),
                    'ip_address': row.get('IP', ''),
                    'mac_address': row.get('MAC', ''),
                    'hostname': row.get('Hostname', ''),
                    'operating_system': row.get('OS', ''),
                    'open_ports': row.get('Open Ports', ''),
                    'last_seen': row.get('Last Seen', '')
                }
                self.send_to_siem(event)

    def process_credentials(self):
        """Process and send credential information to SIEM"""
        creds_file = self.output_dir / "credentials.csv"
        if not creds_file.exists():
            return
        with open(creds_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                event = {
                    'event_type': 'credential_exposure',
                    'source': 'networkminer',
                    'severity': 'HIGH',
                    'protocol': row.get('Protocol', ''),
                    'username': row.get('Username', ''),
                    'password': row.get('Password', ''),
                    'host': row.get('Host', ''),
                    'timestamp': row.get('Timestamp', '')
                }
                self.send_to_siem(event)

    def process_files(self):
        """Process and send file information to SIEM"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return
        for file_path in files_dir.rglob("*"):
            if file_path.is_file():
                try:
                    # Calculate file hash
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()
                    event = {
                        'event_type': 'file_extraction',
                        'source': 'networkminer',
                        'filename': file_path.name,
                        'file_size': file_path.stat().st_size,
                        'file_hash': file_hash,
                        'extraction_path': str(file_path),
                        'timestamp': file_path.stat().st_mtime
                    }
                    self.send_to_siem(event)
                except Exception as e:
                    print(f"Error processing file {file_path}: {e}")

def main():
    output_dir = "/tmp/networkminer_output"
    siem_endpoint = "https://your-siem.com/api/events"
    api_key = "your-api-key"
    integrator = NetworkMinerSIEMIntegrator(output_dir, siem_endpoint, api_key)
    print("Sending NetworkMiner results to SIEM...")
    integrator.process_hosts()
    integrator.process_credentials()
    integrator.process_files()
    print("SIEM integration complete")

if __name__ == "__main__":
    main()
Threat Intelligence Integration
#!/usr/bin/env python3
# networkminer-threat-intel.py
import csv
import hashlib
import ipaddress
import requests
from pathlib import Path

class ThreatIntelChecker:
    def __init__(self, output_dir, virustotal_api_key):
        self.output_dir = Path(output_dir)
        self.vt_api_key = virustotal_api_key
        self.vt_base_url = "https://www.virustotal.com/vtapi/v2"

    def check_ip_reputation(self, ip_address):
        """Check IP reputation with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/ip-address/report"
            params = {
                'apikey': self.vt_api_key,
                'ip': ip_address
            }
            response = requests.get(url, params=params)
            if response.status_code == 200:
                return response.json()
        except Exception as e:
            print(f"Error checking IP {ip_address}: {e}")
        return None

    def check_file_hash(self, file_hash):
        """Check file hash with VirusTotal"""
        try:
            url = f"{self.vt_base_url}/file/report"
            params = {
                'apikey': self.vt_api_key,
                'resource': file_hash
            }
            response = requests.get(url, params=params)
            if response.status_code == 200:
                return response.json()
        except Exception as e:
            print(f"Error checking hash {file_hash}: {e}")
        return None

    def analyze_hosts(self):
        """Analyze hosts for threat intelligence"""
        hosts_file = self.output_dir / "hosts.csv"
        if not hosts_file.exists():
            return
        print("=== Threat Intelligence Analysis - Hosts ===")
        with open(hosts_file, 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                ip = row.get('IP', '')
                if ip and self.is_external_ip(ip):
                    print(f"Checking IP: {ip}")
                    result = self.check_ip_reputation(ip)
                    if result and result.get('response_code') == 1:
                        detected_urls = result.get('detected_urls', [])
                        detected_samples = result.get('detected_communicating_samples', [])
                        if detected_urls or detected_samples:
                            print(f"  ⚠️ THREAT DETECTED for {ip}")
                            print(f"  Malicious URLs: {len(detected_urls)}")
                            print(f"  Malicious Samples: {len(detected_samples)}")
                        else:
                            print(f"  ✓ Clean reputation for {ip}")

    def analyze_files(self):
        """Analyze extracted files for threats"""
        files_dir = self.output_dir / "AssembledFiles"
        if not files_dir.exists():
            return
        print("=== Threat Intelligence Analysis - Files ===")
        for file_path in files_dir.rglob("*"):
            if file_path.is_file() and file_path.stat().st_size > 0:
                try:
                    with open(file_path, 'rb') as f:
                        file_hash = hashlib.sha256(f.read()).hexdigest()
                    print(f"Checking file: {file_path.name}")
                    result = self.check_file_hash(file_hash)
                    if result and result.get('response_code') == 1:
                        positives = result.get('positives', 0)
                        total = result.get('total', 0)
                        if positives > 0:
                            print(f"  ⚠️ MALWARE DETECTED: {file_path.name}")
                            print(f"  Detection: {positives}/{total} engines")
                            print(f"  SHA256: {file_hash}")
                        else:
                            print(f"  ✓ Clean file: {file_path.name}")
                except Exception as e:
                    print(f"Error analyzing {file_path}: {e}")

    def is_external_ip(self, ip):
        """Check if IP is external (not private)"""
        try:
            ip_obj = ipaddress.ip_address(ip)
            return not ip_obj.is_private
        except ValueError:
            return False

def main():
    output_dir = "/tmp/networkminer_output"
    virustotal_api_key = "your-virustotal-api-key"
    checker = ThreatIntelChecker(output_dir, virustotal_api_key)
    print("Starting threat intelligence analysis...")
    checker.analyze_hosts()
    checker.analyze_files()
    print("Threat intelligence analysis complete")

if __name__ == "__main__":
    main()
Reporting and Visualization
HTML Report Generator
#!/usr/bin/env python3
# networkminer-html-report.py
import csv
import sys
from pathlib import Path
from datetime import datetime

class NetworkMinerHTMLReporter:
    def __init__(self, output_dir, pcap_file):
        self.output_dir = Path(output_dir)
        self.pcap_file = pcap_file
        self.report_data = {}

    def collect_data(self):
        """Collect all analysis data"""
        self.report_data = {
            'pcap_file': self.pcap_file,
            'analysis_date': datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            'hosts': self.load_hosts(),
            'sessions': self.load_sessions(),
            'files': self.load_files(),
            'credentials': self.load_credentials()
        }

    def load_hosts(self):
        """Load host data"""
        hosts_file = self.output_dir / "hosts.csv"
        hosts = []
        if hosts_file.exists():
            with open(hosts_file, 'r') as f:
                hosts = list(csv.DictReader(f))
        return hosts

    def load_sessions(self):
        """Load session data"""
        sessions_file = self.output_dir / "sessions.csv"
        sessions = []
        if sessions_file.exists():
            with open(sessions_file, 'r') as f:
                sessions = list(csv.DictReader(f))
        return sessions

    def load_files(self):
        """Load extracted files data"""
        files_dir = self.output_dir / "AssembledFiles"
        files = []
        if files_dir.exists():
            for file_path in files_dir.rglob("*"):
                if file_path.is_file():
                    files.append({
                        'name': file_path.name,
                        'size': file_path.stat().st_size,
                        'path': str(file_path.relative_to(files_dir))
                    })
        return files

    def load_credentials(self):
        """Load credentials data"""
        creds_file = self.output_dir / "credentials.csv"
        credentials = []
        if creds_file.exists():
            with open(creds_file, 'r') as f:
                credentials = list(csv.DictReader(f))
        return credentials

    def generate_html_report(self):
        """Generate HTML report"""
        # Literal CSS braces are doubled ({{ }}) so str.format() leaves them intact
        html_template = """<!DOCTYPE html>
<html>
<head>
<title>NetworkMiner Analysis Report</title>
<style>
body {{ font-family: Arial, sans-serif; margin: 20px; }}
.header {{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; }}
.section {{ margin: 20px 0; }}
.section h2 {{ color: #333; border-bottom: 2px solid #ccc; }}
table {{ border-collapse: collapse; width: 100%; margin: 10px 0; }}
th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
th {{ background-color: #f2f2f2; }}
.stats {{ display: flex; gap: 20px; margin: 20px 0; }}
.stat-box {{ background-color: #e9f4ff; padding: 15px; border-radius: 5px; text-align: center; }}
.warning {{ background-color: #fff3cd; border: 1px solid #ffeaa7; padding: 10px; border-radius: 5px; }}
</style>
</head>
<body>
<div class="header">
<h1>NetworkMiner Analysis Report</h1>
<p><strong>PCAP File:</strong> {pcap_file}</p>
<p><strong>Analysis Date:</strong> {analysis_date}</p>
</div>
<div class="stats">
<div class="stat-box">
<h3>{host_count}</h3>
<p>Unique Hosts</p>
</div>
<div class="stat-box">
<h3>{session_count}</h3>
<p>Network Sessions</p>
</div>
<div class="stat-box">
<h3>{file_count}</h3>
<p>Extracted Files</p>
</div>
<div class="stat-box">
<h3>{credential_count}</h3>
<p>Credentials Found</p>
</div>
</div>
{credentials_warning}
<div class="section">
<h2>Host Information</h2>
<table>
<tr>
<th>IP Address</th>
<th>MAC Address</th>
<th>Hostname</th>
<th>Operating System</th>
<th>Open Ports</th>
</tr>
{host_rows}
</table>
</div>
<div class="section">
<h2>Network Sessions</h2>
<table>
<tr>
<th>Client</th>
<th>Server</th>
<th>Protocol</th>
<th>Start Time</th>
<th>Duration</th>
</tr>
{session_rows}
</table>
</div>
<div class="section">
<h2>Extracted Files</h2>
<table>
<tr>
<th>Filename</th>
<th>Size (bytes)</th>
<th>Path</th>
</tr>
{file_rows}
</table>
</div>
{credentials_section}
</body>
</html>
"""
        # Generate table rows
        host_rows = ""
        for host in self.report_data['hosts'][:20]:  # Limit to first 20
            host_rows += f"""<tr>
<td>{host.get('IP', '')}</td>
<td>{host.get('MAC', '')}</td>
<td>{host.get('Hostname', '')}</td>
<td>{host.get('OS', '')}</td>
<td>{host.get('Open Ports', '')}</td>
</tr>
"""
        session_rows = ""
        for session in self.report_data['sessions'][:20]:  # Limit to first 20
            session_rows += f"""<tr>
<td>{session.get('Client', '')}</td>
<td>{session.get('Server', '')}</td>
<td>{session.get('Protocol', '')}</td>
<td>{session.get('Start Time', '')}</td>
<td>{session.get('Duration', '')}</td>
</tr>
"""
        file_rows = ""
        for file_info in self.report_data['files'][:20]:  # Limit to first 20
            file_rows += f"""<tr>
<td>{file_info['name']}</td>
<td>{file_info['size']}</td>
<td>{file_info['path']}</td>
</tr>
"""
        # Credentials section
        credentials_warning = ""
        credentials_section = ""
        if self.report_data['credentials']:
            credentials_warning = """<div class="warning">
<strong>⚠️ Warning:</strong> Clear-text credentials were found in the network traffic!
</div>
"""
            credential_rows = ""
            for cred in self.report_data['credentials']:
                credential_rows += f"""<tr>
<td>{cred.get('Protocol', '')}</td>
<td>{cred.get('Username', '')}</td>
<td>{'*' * len(cred.get('Password', ''))}</td>
<td>{cred.get('Host', '')}</td>
</tr>
"""
            credentials_section = f"""<div class="section">
<h2>Credentials (Passwords Hidden)</h2>
<table>
<tr>
<th>Protocol</th>
<th>Username</th>
<th>Password</th>
<th>Host</th>
</tr>
{credential_rows}
</table>
</div>
"""
        # Fill template
        html_content = html_template.format(
            pcap_file=self.report_data['pcap_file'],
            analysis_date=self.report_data['analysis_date'],
            host_count=len(self.report_data['hosts']),
            session_count=len(self.report_data['sessions']),
            file_count=len(self.report_data['files']),
            credential_count=len(self.report_data['credentials']),
            credentials_warning=credentials_warning,
            host_rows=host_rows,
            session_rows=session_rows,
            file_rows=file_rows,
            credentials_section=credentials_section
        )
        # Save report
        report_file = self.output_dir / "analysis_report.html"
        with open(report_file, 'w') as f:
            f.write(html_content)
        print(f"HTML report generated: {report_file}")
        return report_file

def main():
    if len(sys.argv) != 3:
        print("Usage: python3 networkminer-html-report.py <output_dir> <pcap_file>")
        sys.exit(1)
    output_dir = sys.argv[1]
    pcap_file = sys.argv[2]
    reporter = NetworkMinerHTMLReporter(output_dir, pcap_file)
    reporter.collect_data()
    reporter.generate_html_report()

if __name__ == "__main__":
    main()
Best Practices
Analysis Workflow
#!/bin/bash
# networkminer-best-practices.sh
PCAP_FILE="$1"
CASE_NAME="$2"
ANALYST="$3"
if [ $# -ne 3 ]; then
    echo "Usage: $0 <pcap_file> <case_name> <analyst_name>"
    exit 1
fi
# Create case directory structure
CASE_DIR="/cases/$CASE_NAME"
mkdir -p "$CASE_DIR"/{evidence,analysis,reports,notes}
# Copy evidence with hash verification
echo "Copying evidence file..."
cp "$PCAP_FILE" "$CASE_DIR/evidence/"
ORIGINAL_HASH=$(sha256sum "$PCAP_FILE" | cut -d' ' -f1)
COPY_HASH=$(sha256sum "$CASE_DIR/evidence/$(basename "$PCAP_FILE")" | cut -d' ' -f1)
if [ "$ORIGINAL_HASH" != "$COPY_HASH" ]; then
    echo "ERROR: Hash mismatch during evidence copy!"
    exit 1
fi
# Create case log
CASE_LOG="$CASE_DIR/notes/case_log.txt"
echo "Case: $CASE_NAME" > "$CASE_LOG"
echo "Analyst: $ANALYST" >> "$CASE_LOG"
echo "Start Time: $(date)" >> "$CASE_LOG"
echo "Evidence File: $(basename "$PCAP_FILE")" >> "$CASE_LOG"
echo "Evidence Hash: $ORIGINAL_HASH" >> "$CASE_LOG"
echo "---" >> "$CASE_LOG"
# Run NetworkMiner analysis
echo "Running NetworkMiner analysis..."
OUTPUT_DIR="$CASE_DIR/analysis/networkminer"
mkdir -p "$OUTPUT_DIR"
mono NetworkMiner.exe --output "$OUTPUT_DIR" "$CASE_DIR/evidence/$(basename "$PCAP_FILE")"
# Document analysis steps
echo "$(date): NetworkMiner analysis completed" >> "$CASE_LOG"
echo "Output directory: $OUTPUT_DIR" >> "$CASE_LOG"
# Generate reports (the reporter writes analysis_report.html into OUTPUT_DIR)
echo "Generating reports..."
python3 networkminer-html-report.py "$OUTPUT_DIR" "$(basename "$PCAP_FILE")"
cp "$OUTPUT_DIR/analysis_report.html" "$CASE_DIR/reports/"
echo "Analysis complete. Case directory: $CASE_DIR"
Performance Optimization
NetworkMiner Performance Tips:
1. PCAP File Size:
- Free version limited to 5MB PCAP files
- Professional version handles larger files
- Split large PCAPs if using free version
2. Memory Usage:
- Close unnecessary applications
- Increase virtual memory if needed
- Monitor RAM usage during analysis
3. Processing Speed:
- Use SSD storage for faster I/O
- Process files locally (not over network)
- Close GUI tabs not in use
4. Batch Processing:
- Process multiple small files vs. one large file
- Use command-line mode for automation
- Schedule analysis during off-hours
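A pre-flight size check keeps the free version from choking on oversized captures. The 5 MB threshold below mirrors the limit quoted above, but it is a parameter since the exact limit varies by version:

```python
import os

def needs_split(pcap_path, limit_bytes=5 * 1024 * 1024):
    """True if the capture exceeds the given size limit and should be
    split (e.g. with editcap) before loading into the free version."""
    return os.path.getsize(pcap_path) > limit_bytes
```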
Evidence Handling
#!/bin/bash
# evidence-handling-checklist.sh
echo "NetworkMiner Evidence Handling Checklist"
echo "========================================"
echo "1. Evidence Acquisition:"
echo " □ Verify PCAP file integrity (hash)"
echo " □ Document chain of custody"
echo " □ Create working copy for analysis"
echo " □ Store original in secure location"
echo -e "\n2. Analysis Environment:"
echo " □ Use isolated analysis system"
echo " □ Document system configuration"
echo " □ Ensure sufficient disk space"
echo " □ Backup analysis results"
echo -e "\n3. Documentation:"
echo " □ Record analysis start/end times"
echo " □ Document tools and versions used"
echo " □ Save all output files"
echo " □ Create analysis summary report"
echo -e "\n4. Quality Assurance:"
echo " □ Verify extracted files integrity"
echo " □ Cross-reference findings"
echo " □ Peer review analysis results"
echo " □ Archive complete case file"
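The acquisition and documentation steps above can be partly automated. A sketch that writes a hash manifest for an evidence directory; the JSON layout here is just an illustration, not any NetworkMiner format:

```python
import hashlib
import json
import time
from pathlib import Path

def write_manifest(evidence_dir, manifest_path):
    """Record name, size and SHA-256 of every evidence file for custody notes."""
    entries = []
    for p in sorted(Path(evidence_dir).rglob("*")):
        if p.is_file():
            entries.append({
                "file": p.name,
                "size": p.stat().st_size,
                "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
            })
    manifest = {"created": time.strftime("%Y-%m-%d %H:%M:%S"), "files": entries}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```

Re-running the manifest later and diffing it against the original is a cheap integrity check before peer review or archiving.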
Troubleshooting
Common Issues
# Issue: NetworkMiner won't start on Linux
# Solution: Install required Mono components
sudo apt install mono-complete libmono-winforms2.0-cil
# Issue: PCAP file too large (free version)
# Solution: Split PCAP file
editcap -c 1000000 large_file.pcap split_file.pcap
# Issue: Missing extracted files
# Check file extraction settings in GUI
# Verify PCAP contains actual file transfers
# Issue: No credentials found
# Ensure traffic contains clear-text protocols
# Check for encrypted connections (HTTPS, SSH)
# Issue: Slow performance
# Close unnecessary GUI tabs
# Increase system memory
# Use command-line mode for batch processing
Debug Mode
# Enable verbose logging
mono --debug NetworkMiner.exe capture.pcap
# Check Mono version compatibility
mono --version
# Verify PCAP file format
file capture.pcap
tcpdump -r capture.pcap -c 10
# Test with sample PCAP
wget https://wiki.wireshark.org/SampleCaptures/http.cap
mono NetworkMiner.exe http.cap
Log Analysis
# Check system logs for errors
journalctl --since "1 hour ago" | grep -i mono
# Monitor resource usage
top -p $(pgrep mono)
# Check disk space
df -h /tmp
# Verify file permissions
ls -la NetworkMiner.exe
ls -la *.pcap
Resources
- NetworkMiner Official Website
- NetworkMiner Professional
- NetworkMiner Manual
- Sample PCAP Files
- Network Forensics Guide
This cheat sheet provides comprehensive guidance for using NetworkMiner for network forensic analysis. Regular practice with sample PCAP files and a solid understanding of network protocols will enhance analysis effectiveness.