MTR Cheatsheet
Overview
MTR (My Traceroute) is a powerful network diagnostic tool that combines the functionality of ping and traceroute in a single utility. It continuously sends packets to a destination and displays real-time statistics on packet loss and latency for every hop along the route. By showing running statistics instead of single snapshots, MTR gives a more complete picture of network performance than the traditional tools.
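Before the details, a quick-start sketch: one non-interactive report in which each row is a hop and the columns (Loss%, Snt, Last, Avg, Best, Wrst, StDev) summarize its probes.

```bash
# Ten probes per hop, numeric output, printed once as a report
mtr --report --report-cycles 10 -n 8.8.8.8
# Columns: Loss% = packet loss, Snt = packets sent,
# Last/Avg/Best/Wrst = round-trip times in ms, StDev = RTT standard deviation
```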
Key Features
- **Real-Time Monitoring**: Continuous probing with live statistics
- **Combined Functionality**: Ping and traceroute in a single tool
- **Packet Loss Detection**: Per-hop packet loss statistics
- **Latency Analysis**: Min, max, average, and standard deviation metrics
- **Multiple Output Formats**: Text, CSV, JSON, and XML output
- **IPv4 and IPv6 Support**: Dual-stack network analysis
- **GUI and CLI Modes**: Terminal and graphical interfaces
- **Custom Parameters**: Packet size, interval, and count options
- **Network Path Visualization**: Clear route topology display
Installation
Linux Systems
```bash
# Ubuntu/Debian
sudo apt update
sudo apt install mtr mtr-tiny

# CentOS/RHEL/Fedora
sudo yum install mtr
# or
sudo dnf install mtr

# Arch Linux
sudo pacman -S mtr

# openSUSE
sudo zypper install mtr

# From source
git clone https://github.com/traviscross/mtr.git
cd mtr
./bootstrap.sh
./configure
make
sudo make install

# Verify installation
mtr --version
```
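On Linux, mtr needs raw-socket privileges; if it only works under sudo, granting the capability to its probe helper is the usual fix. A sketch, assuming the common install path /usr/bin/mtr-packet (adjust for your distribution):

```bash
# Allow unprivileged users to send raw probes
sudo setcap cap_net_raw+ep /usr/bin/mtr-packet

# Confirm the capability is set
getcap /usr/bin/mtr-packet
```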
Windows Systems
```powershell
# WinMTR (Windows GUI version)
# Download from: https://sourceforge.net/projects/winmtr/

# Using Chocolatey
choco install winmtr

# Using Scoop
scoop install winmtr

# Manual installation:
# 1. Download WinMTR from SourceForge
# 2. Extract to desired location
# 3. Run WinMTR.exe

# Command line version via WSL
wsl --install
wsl sudo apt install mtr
```
macOS Systems
```bash
# Using Homebrew
brew install mtr

# Using MacPorts
sudo port install mtr

# From source
git clone https://github.com/traviscross/mtr.git
cd mtr
./bootstrap.sh
./configure
make
sudo make install

# Note: may require additional permissions for raw sockets
sudo mtr google.com

# Verify installation
mtr --version
```
Docker Installation
```bash
# Pull the base image
docker pull alpine:latest

# Create a custom MTR container
cat > Dockerfile << EOF
FROM alpine:latest
RUN apk add --no-cache mtr
ENTRYPOINT ["mtr"]
EOF

docker build -t mtr-container .

# Run MTR in the container
docker run --rm -it mtr-container google.com

# One-liner with Alpine
docker run --rm -it alpine:latest sh -c "apk add --no-cache mtr && mtr google.com"
```
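Note that probes from inside a container traverse Docker's bridge and NAT, which adds an extra first hop and can hide the host's real path. A sketch using the host's network namespace instead (Linux-only flag):

```bash
# Use the host network stack so the traced path matches the host's
docker run --rm -it --network host mtr-container -n google.com
```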
Basic Usage
Command-Line Interface
```bash
# Basic MTR to hostname
mtr google.com

# MTR to IP address
mtr 8.8.8.8

# Run for specific number of cycles
mtr -c 10 google.com

# Report mode (non-interactive)
mtr --report google.com

# Report with specific count
mtr --report --report-cycles 20 google.com

# No DNS resolution
mtr -n google.com

# IPv6 mode
mtr -6 google.com

# IPv4 mode (explicit)
mtr -4 google.com

# Specify interface
mtr -I eth0 google.com
```
Interactive Mode
```bash
# Start interactive MTR
mtr google.com

# Interactive mode key bindings:
#   q     - quit
#   r     - reset statistics
#   d     - toggle display mode
#   n     - toggle DNS resolution
#   p     - pause/unpause
#   space - pause/unpause
#   h     - help
#   ?     - help

# Display modes in interactive mode:
#   0 - default statistics display
#   1 - latency and packet loss
#   2 - packet loss percentage only
```
Report Generation
```bash
# Generate report with 50 cycles
mtr --report --report-cycles 50 google.com

# Wide report format (no hostname truncation)
mtr --report-wide --report-cycles 30 google.com

# CSV output
mtr --csv --report-cycles 20 google.com

# JSON output
mtr --json --report-cycles 15 google.com

# XML output
mtr --xml --report-cycles 25 google.com

# Raw output format
mtr --raw --report-cycles 10 google.com
```
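The JSON format is the most convenient for scripting. A minimal sketch, assuming the `report.hubs` structure with column-named keys (`Loss%`, `Avg`, ...) emitted by recent mtr releases; verify the key names against your version's output:

```bash
# Extract host, loss, and average latency of the final hop with jq
mtr --json --report-cycles 10 google.com \
  | jq '.report.hubs[-1] | {host: .host, loss: ."Loss%", avg: .Avg}'
```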
Advanced Configuration
Packet and Timing Options
```bash
# Custom packet size (bytes)
mtr -s 1400 google.com

# Custom interval (seconds between packets)
mtr -i 2 google.com

# Timeout per packet (seconds)
mtr --timeout 5 google.com

# Maximum hops
mtr -m 20 google.com

# First hop to start from
mtr -f 3 google.com

# Specify source address
mtr -a 192.168.1.100 google.com

# Set Type of Service (ToS)
mtr -Q 0x10 google.com

# Use TCP instead of ICMP
mtr --tcp google.com

# Specify target port (with TCP or UDP probes)
mtr --tcp --port 80 google.com

# UDP mode
mtr --udp google.com
```
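Large probes can expose MTU and fragmentation problems that small ones miss. A quick sweep sketch (the sizes are illustrative):

```bash
# Compare loss and latency across probe sizes on the same path
for size in 64 512 1000 1400 1472; do
    echo "=== ${size}-byte probes ==="
    mtr --report --report-cycles 10 -s "$size" -n google.com
done
```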
Advanced Reporting Options
```bash
# Order output columns with field letters (L=loss, S=sent, N=newest,
# A=average, B=best, W=worst, V=standard deviation)
mtr -o "LS NABWV" --report google.com

# Show IP addresses alongside hostnames
mtr --show-ips --report google.com

# Display AS numbers
mtr --aslookup --report google.com

# Split output format (machine-parsable, one line per probe)
mtr --split --report-cycles 10 google.com

# Include jitter columns (J=current, M=mean, X=worst)
mtr -o "LS JMX" --report google.com

# Bit pattern for packet payload (0-255)
mtr --bitpattern 255 --report google.com

# Seconds to wait for late responses after the final probe
mtr --gracetime 5 --report google.com
```
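When escalating to an ISP or NOC, one wide report with AS numbers and raw IPs lets the recipient identify every hop. A sketch:

```bash
# Wide report with AS numbers and IPs, 50 cycles
mtr --report-wide --aslookup --show-ips --report-cycles 50 google.com
```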
Output Customization
```bash
# Display mode (0 = statistics, 1 = stripchart, 2 = stripchart with latency)
mtr --displaymode 0 --report google.com
mtr --displaymode 1 --report google.com
mtr --displaymode 2 --report google.com

# Wide format with full hostnames and all statistics
mtr --report-wide --report-cycles 30 google.com

# Curses interface for a fixed number of cycles
mtr --curses --report-cycles 20 google.com

# Strip the header line from report output
mtr --no-dns --report google.com | tail -n +2
```
Network Analysis and Troubleshooting
Comprehensive Network Analysis Script
```bash
#!/bin/bash
# comprehensive_mtr_analysis.sh

TARGET="$1"
CYCLES="${2:-100}"
OUTPUT_DIR="mtr_analysis_$(date +%Y%m%d_%H%M%S)"

if [ -z "$TARGET" ]; then
    echo "Usage: $0 <target> [cycles]"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

echo "Comprehensive MTR Network Analysis"
echo "=================================="
echo "Target: $TARGET"
echo "Cycles: $CYCLES"
echo "Output Directory: $OUTPUT_DIR"
echo ""
# Run an MTR test and analyze the results
run_mtr_analysis() {
    local test_name=$1
    local description=$2
    local mtr_options=$3
    local output_file="$OUTPUT_DIR/${test_name}.txt"
    local analysis_file="$OUTPUT_DIR/${test_name}_analysis.txt"

    echo "Running: $test_name"
    echo "Description: $description"
    echo "Options: $mtr_options"
    # Run MTR test
    eval "mtr $mtr_options --report --report-cycles $CYCLES $TARGET" > "$output_file"

    # Analyze results
    echo "Analysis for: $test_name" > "$analysis_file"
    echo "Description: $description" >> "$analysis_file"
    echo "Timestamp: $(date)" >> "$analysis_file"
    echo "========================================" >> "$analysis_file"

    # Extract key metrics
    if [ -s "$output_file" ]; then
        # Count hops
        hop_count=$(grep -c "^ *[0-9]" "$output_file")
        echo "Total hops: $hop_count" >> "$analysis_file"

        # Find problematic hops
        echo "" >> "$analysis_file"
        echo "Hop Analysis:" >> "$analysis_file"
        echo "-------------" >> "$analysis_file"

        grep "^ *[0-9]" "$output_file" | while read line; do
            hop=$(echo "$line" | awk '{print $1}')
            host=$(echo "$line" | awk '{print $2}')
            # Keep integer part: mtr prints loss as e.g. "10.0%"
            loss=$(echo "$line" | awk '{print $3}' | tr -d '%' | cut -d. -f1)
            avg=$(echo "$line" | awk '{print $6}')

            # Check for issues
            issues=""
            if [[ "$loss" =~ ^[0-9]+$ ]] && [ "$loss" -gt 0 ]; then
                issues="$issues PACKET_LOSS(${loss}%)"
            fi
            if [[ "$avg" =~ ^[0-9]+\.?[0-9]*$ ]] && (( $(echo "$avg > 200" | bc -l) )); then
                issues="$issues HIGH_LATENCY(${avg}ms)"
            fi

            if [ -n "$issues" ]; then
                echo "Hop $hop ($host): $issues" >> "$analysis_file"
            fi
        done
        # Overall assessment
        echo "" >> "$analysis_file"
        echo "Overall Assessment:" >> "$analysis_file"
        echo "------------------" >> "$analysis_file"

        # Check final hop performance
        final_line=$(tail -1 "$output_file")
        if echo "$final_line" | grep -q "^ *[0-9]"; then
            final_loss=$(echo "$final_line" | awk '{print $3}' | tr -d '%' | cut -d. -f1)
            final_avg=$(echo "$final_line" | awk '{print $6}')

            if [[ "$final_loss" =~ ^[0-9]+$ ]]; then
                if [ "$final_loss" -eq 0 ]; then
                    echo "✓ No packet loss to destination" >> "$analysis_file"
                elif [ "$final_loss" -lt 5 ]; then
                    echo "⚠ Minor packet loss: ${final_loss}%" >> "$analysis_file"
                else
                    echo "✗ Significant packet loss: ${final_loss}%" >> "$analysis_file"
                fi
            fi

            if [[ "$final_avg" =~ ^[0-9]+\.?[0-9]*$ ]]; then
                if (( $(echo "$final_avg < 50" | bc -l) )); then
                    echo "✓ Good latency: ${final_avg}ms" >> "$analysis_file"
                elif (( $(echo "$final_avg < 150" | bc -l) )); then
                    echo "⚠ Acceptable latency: ${final_avg}ms" >> "$analysis_file"
                else
                    echo "✗ High latency: ${final_avg}ms" >> "$analysis_file"
                fi
            fi
        fi

        echo "  Results saved to: $output_file"
        echo "  Analysis saved to: $analysis_file"
    else
        echo "  Test failed - no results"
        echo "Test failed - no output generated" >> "$analysis_file"
    fi

    echo ""
    sleep 2
}
# 1. Standard ICMP tests
echo "1. Standard Tests"
echo "================="
run_mtr_analysis "icmp_standard" \
    "Standard ICMP test" \
    ""

run_mtr_analysis "icmp_no_dns" \
    "ICMP test without DNS resolution" \
    "-n"

# 2. Protocol variations
echo "2. Protocol Tests"
echo "================="
run_mtr_analysis "tcp_test" \
    "TCP test (port 80)" \
    "--tcp --port 80"

run_mtr_analysis "udp_test" \
    "UDP test" \
    "--udp"

# 3. Packet size tests
echo "3. Packet Size Tests"
echo "===================="
for size in 64 512 1400; do
    run_mtr_analysis "packet_size_${size}" \
        "Test with ${size} byte packets" \
        "-s $size"
done

# 4. IPv6 test (if supported)
echo "4. IPv6 Test"
echo "============"
if ping6 -c 1 "$TARGET" >/dev/null 2>&1; then
    run_mtr_analysis "ipv6_test" \
        "IPv6 connectivity test" \
        "-6"
else
    echo "IPv6 not supported or target not reachable via IPv6"
fi
# 5. Generate comprehensive report
echo "5. Generating Comprehensive Report"
echo "=================================="

REPORT_FILE="$OUTPUT_DIR/comprehensive_report.html"

cat > "$REPORT_FILE" << EOF
<h1>MTR Network Analysis Report</h1>
<h2>Test Summary</h2>
<ul>
<li>Target: $TARGET</li>
<li>Cycles per test: $CYCLES</li>
<li>Generated: $(date)</li>
<li>Test Directory: $OUTPUT_DIR</li>
</ul>
EOF

# Append one section per test result
for result_file in "$OUTPUT_DIR"/*.txt; do
    case "$result_file" in *_analysis.txt) continue ;; esac
    test_name=$(basename "$result_file" .txt)
    analysis_file="$OUTPUT_DIR/${test_name}_analysis.txt"

    echo "<h2>$test_name</h2>" >> "$REPORT_FILE"
    if [ -f "$analysis_file" ]; then
        description=$(grep "Description:" "$analysis_file" | cut -d: -f2- | sed 's/^ *//')
        echo "<p>Description: $description</p>" >> "$REPORT_FILE"

        # Add analysis summary
        if grep -q "Overall Assessment:" "$analysis_file"; then
            echo "<h3>Analysis Summary</h3><ul>" >> "$REPORT_FILE"
            sed -n '/Overall Assessment:/,/^$/p' "$analysis_file" | tail -n +3 | while read line; do
                [ -n "$line" ] && echo "<li>$line</li>" >> "$REPORT_FILE"
            done
            echo "</ul>" >> "$REPORT_FILE"
        fi
    fi

    echo "<h3>Raw Results</h3><pre>" >> "$REPORT_FILE"
    cat "$result_file" >> "$REPORT_FILE"
    echo "</pre>" >> "$REPORT_FILE"
done

cat >> "$REPORT_FILE" << EOF
<h2>Recommendations</h2>
<ul>
<li>Review hop-by-hop analysis for bottlenecks</li>
<li>Compare different protocol results</li>
<li>Monitor packet loss patterns over time</li>
<li>Consider packet size impact on performance</li>
<li>Use results for capacity planning and SLA monitoring</li>
</ul>
EOF
echo "Comprehensive analysis completed!" echo "Results directory: $OUTPUT_DIR" echo "HTML report: $REPORT_FILE" echo ""
# Display summary
echo "Test Summary:"
echo "============="
for analysis_file in "$OUTPUT_DIR"/*_analysis.txt; do
    if [ -f "$analysis_file" ]; then
        test_name=$(basename "$analysis_file" _analysis.txt)
        echo -n "$test_name: "
        if grep -q "✓.*No packet loss" "$analysis_file"; then
            echo "Good (No packet loss)"
        elif grep -q "⚠.*packet loss" "$analysis_file"; then
            loss=$(grep "⚠.*packet loss" "$analysis_file" | grep -o "[0-9]*%")
            echo "Warning (Packet loss: $loss)"
        elif grep -q "✗.*packet loss" "$analysis_file"; then
            loss=$(grep "✗.*packet loss" "$analysis_file" | grep -o "[0-9]*%")
            echo "Alert (High packet loss: $loss)"
        else
            echo "Completed"
        fi
    fi
done
```
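A typical invocation; the second argument overrides the default 100 cycles:

```bash
chmod +x comprehensive_mtr_analysis.sh
./comprehensive_mtr_analysis.sh example.com 50
```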
Real-Time Network Monitoring
```bash
#!/bin/bash
# realtime_mtr_monitor.sh

TARGET="$1"
DURATION="${2:-3600}"     # Default: 1 hour
LOG_INTERVAL="${3:-300}"  # Log every 5 minutes

if [ -z "$TARGET" ]; then
    echo "Usage: $0 <target> [duration_seconds] [log_interval_seconds]"
    exit 1
fi

MONITOR_DIR="mtr_monitor_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$MONITOR_DIR"

LOG_FILE="$MONITOR_DIR/monitor.log"
CSV_FILE="$MONITOR_DIR/monitor.csv"
ALERT_FILE="$MONITOR_DIR/alerts.log"

# CSV header
echo "timestamp,avg_loss,avg_latency,max_latency,hop_count,worst_hop,worst_hop_loss" > "$CSV_FILE"

echo "Starting MTR real-time monitoring..."
echo "Target: $TARGET"
echo "Duration: $DURATION seconds"
echo "Log interval: $LOG_INTERVAL seconds"
echo "Monitor directory: $MONITOR_DIR"
echo ""

# Alert thresholds
LOSS_THRESHOLD=5        # 5% packet loss
LATENCY_THRESHOLD=200   # 200 ms latency

END_TIME=$(($(date +%s) + DURATION))
CYCLE_COUNT=0
# Analyze one MTR report and log/alert on its metrics
analyze_mtr_output() {
    local mtr_output="$1"
    local timestamp="$2"

    # Extract metrics
    local total_loss=0
    local total_latency=0
    local max_latency=0
    local hop_count=0
    local worst_hop=""
    local worst_hop_loss=0

    while read line; do
        if echo "$line" | grep -q "^ *[0-9]"; then
            hop_count=$((hop_count + 1))
            hop=$(echo "$line" | awk '{print $1}')
            host=$(echo "$line" | awk '{print $2}')
            # Keep integer part: mtr prints loss as e.g. "10.0%"
            loss=$(echo "$line" | awk '{print $3}' | tr -d '%' | cut -d. -f1)
            avg=$(echo "$line" | awk '{print $6}')

            # Accumulate statistics
            if [[ "$loss" =~ ^[0-9]+$ ]]; then
                total_loss=$((total_loss + loss))
                if [ "$loss" -gt "$worst_hop_loss" ]; then
                    worst_hop_loss=$loss
                    worst_hop="$hop ($host)"
                fi
            fi

            if [[ "$avg" =~ ^[0-9]+\.?[0-9]*$ ]]; then
                total_latency=$(echo "$total_latency + $avg" | bc)
                if (( $(echo "$avg > $max_latency" | bc -l) )); then
                    max_latency=$avg
                fi
            fi
        fi
    done <<< "$mtr_output"
    # Calculate averages
    local avg_loss=0
    local avg_latency=0
    if [ "$hop_count" -gt 0 ]; then
        avg_loss=$(echo "scale=2; $total_loss / $hop_count" | bc)
        avg_latency=$(echo "scale=2; $total_latency / $hop_count" | bc)
    fi

    # Log to CSV
    echo "$timestamp,$avg_loss,$avg_latency,$max_latency,$hop_count,$worst_hop,$worst_hop_loss" >> "$CSV_FILE"

    # Check for alerts
    local alerts=""
    if (( $(echo "$avg_loss > $LOSS_THRESHOLD" | bc -l) )); then
        alerts="$alerts HIGH_PACKET_LOSS(${avg_loss}%)"
    fi
    if (( $(echo "$avg_latency > $LATENCY_THRESHOLD" | bc -l) )); then
        alerts="$alerts HIGH_LATENCY(${avg_latency}ms)"
    fi

    if [ -n "$alerts" ]; then
        echo "[$timestamp] ALERT: $alerts" | tee -a "$ALERT_FILE"
        echo "  Worst hop: $worst_hop (${worst_hop_loss}% loss)" | tee -a "$ALERT_FILE"
    fi

    # Display current status
    echo "[$timestamp] Avg Loss: ${avg_loss}%, Avg Latency: ${avg_latency}ms, Max: ${max_latency}ms, Hops: $hop_count"
    if [ "$worst_hop_loss" -gt 0 ]; then
        echo "  Worst hop: $worst_hop (${worst_hop_loss}% loss)"
    fi
}
# Main monitoring loop
while [ $(date +%s) -lt $END_TIME ]; do
    CYCLE_COUNT=$((CYCLE_COUNT + 1))
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

    echo "Cycle $CYCLE_COUNT - $TIMESTAMP" | tee -a "$LOG_FILE"

    # Run MTR test
    MTR_OUTPUT=$(mtr --report --report-cycles 20 -n "$TARGET" 2>/dev/null)

    if [ $? -eq 0 ] && [ -n "$MTR_OUTPUT" ]; then
        # Save full output
        echo "=== Cycle $CYCLE_COUNT - $TIMESTAMP ===" >> "$MONITOR_DIR/full_output.log"
        echo "$MTR_OUTPUT" >> "$MONITOR_DIR/full_output.log"
        echo "" >> "$MONITOR_DIR/full_output.log"

        # Analyze output
        analyze_mtr_output "$MTR_OUTPUT" "$TIMESTAMP"
    else
        echo "  ERROR: MTR test failed" | tee -a "$LOG_FILE"
        echo "$TIMESTAMP,0,0,0,0,ERROR,0" >> "$CSV_FILE"
    fi

    echo "" | tee -a "$LOG_FILE"

    # Wait for next cycle
    sleep "$LOG_INTERVAL"
done

echo "Monitoring completed!"
echo "Results saved in: $MONITOR_DIR"
# Generate summary statistics
if command -v python3 >/dev/null 2>&1; then
    python3 << EOF
import csv
import statistics

# Read monitoring data
data = []
with open('$CSV_FILE', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        if row['avg_loss'] != '0' or row['avg_latency'] != '0':
            try:
                data.append({
                    'loss': float(row['avg_loss']),
                    'latency': float(row['avg_latency']),
                    'max_latency': float(row['max_latency']),
                    'hop_count': int(row['hop_count'])
                })
            except ValueError:
                continue

if data:
    losses = [d['loss'] for d in data]
    latencies = [d['latency'] for d in data]
    max_latencies = [d['max_latency'] for d in data]

    print("Monitoring Summary Statistics:")
    print("==============================")
    print(f"Total monitoring cycles: {len(data)}")
    print(f"Average packet loss: {statistics.mean(losses):.2f}%")
    print(f"Maximum packet loss: {max(losses):.2f}%")
    print(f"Average latency: {statistics.mean(latencies):.2f}ms")
    print(f"Maximum latency: {max(max_latencies):.2f}ms")
    if len(latencies) > 1:  # stdev needs at least two samples
        print(f"Latency std deviation: {statistics.stdev(latencies):.2f}ms")

    # Count alerts
    high_loss_count = sum(1 for loss in losses if loss > $LOSS_THRESHOLD)
    high_latency_count = sum(1 for lat in latencies if lat > $LATENCY_THRESHOLD)
    print(f"High packet loss alerts: {high_loss_count}")
    print(f"High latency alerts: {high_latency_count}")
else:
    print("No valid monitoring data collected")
EOF
fi
# Check if alerts were generated
if [ -f "$ALERT_FILE" ]; then
    echo ""
    echo "ALERTS GENERATED:"
    echo "================="
    cat "$ALERT_FILE"
fi
```
Performance Comparison Tool
```python
#!/usr/bin/env python3
# mtr_performance_comparison.py

import subprocess
import json
import time
import datetime
import argparse
import statistics
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd


class MTRComparison:
    def __init__(self, targets, cycles=50):
        self.targets = targets
        self.cycles = cycles
        self.results = {}
    def run_mtr_test(self, target):
        """Run MTR test and parse results"""
        try:
            # Run MTR in report mode
            cmd = ['mtr', '--report', '--report-cycles', str(self.cycles), '-n', target]
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
            if result.returncode == 0:
                return self.parse_mtr_output(result.stdout, target)
            else:
                print(f"MTR failed for {target}: {result.stderr}")
                return None
        except subprocess.TimeoutExpired:
            print(f"MTR timed out for {target}")
            return None
        except Exception as e:
            print(f"Error running MTR for {target}: {e}")
            return None
    def parse_mtr_output(self, output, target):
        """Parse MTR report output into structured data"""
        lines = output.strip().split('\n')
        hops = []

        for line in lines:
            # Skip header and empty lines
            if not line.strip() or 'HOST:' in line or 'Start:' in line:
                continue

            # Parse hop lines; mtr decorates the hop number as e.g. "1.|--"
            parts = line.split()
            hop_label = parts[0].rstrip('.|-?') if parts else ''
            if len(parts) >= 8 and hop_label.isdigit():
                try:
                    hop_data = {
                        'hop': int(hop_label),
                        'host': parts[1],
                        'loss_percent': float(parts[2].rstrip('%')),
                        'sent': int(parts[3]),
                        'last': float(parts[4]),
                        'avg': float(parts[5]),
                        'best': float(parts[6]),
                        'worst': float(parts[7]),
                        'stdev': float(parts[8]) if len(parts) > 8 else 0.0
                    }
                    hops.append(hop_data)
                except (ValueError, IndexError):
                    continue

        return {
            'target': target,
            'timestamp': datetime.datetime.now().isoformat(),
            'cycles': self.cycles,
            'hops': hops,
            'total_hops': len(hops)
        }
    def run_comparison(self):
        """Run MTR tests for all targets"""
        print(f"Running MTR comparison for {len(self.targets)} targets")
        print(f"Cycles per target: {self.cycles}")
        print("=" * 50)

        for i, target in enumerate(self.targets, 1):
            print(f"Testing {i}/{len(self.targets)}: {target}")
            result = self.run_mtr_test(target)

            if result:
                self.results[target] = result
                # Display summary
                if result['hops']:
                    final_hop = result['hops'][-1]
                    print(f"  Hops: {result['total_hops']}")
                    print(f"  Final hop loss: {final_hop['loss_percent']:.1f}%")
                    print(f"  Final hop avg latency: {final_hop['avg']:.1f}ms")
                else:
                    print("  No hop data available")
            else:
                print("  Test failed")

            print()
            # Small delay between tests
            if i < len(self.targets):
                time.sleep(2)

        return self.results
    def generate_comparison_report(self, output_dir="mtr_comparison"):
        """Generate comprehensive comparison report"""
        if not self.results:
            print("No results to compare")
            return

        output_path = Path(output_dir)
        output_path.mkdir(exist_ok=True)
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

        # Generate summary statistics
        summary_data = []
        for target, result in self.results.items():
            if result['hops']:
                final_hop = result['hops'][-1]
                summary_data.append({
                    'target': target,
                    'hops': result['total_hops'],
                    'final_loss': final_hop['loss_percent'],
                    'final_avg_latency': final_hop['avg'],
                    'final_best_latency': final_hop['best'],
                    'final_worst_latency': final_hop['worst'],
                    'final_stdev': final_hop['stdev']
                })

        # Save to CSV
        if summary_data:
            df = pd.DataFrame(summary_data)
            csv_file = output_path / f"mtr_comparison_{timestamp}.csv"
            df.to_csv(csv_file, index=False)
            print(f"Summary CSV saved: {csv_file}")

        # Generate visualizations
        self.create_visualizations(output_path, timestamp)

        # Generate HTML report
        html_file = output_path / f"mtr_comparison_{timestamp}.html"
        self.generate_html_report(html_file, summary_data)

        # Save raw JSON data
        json_file = output_path / f"mtr_raw_data_{timestamp}.json"
        with open(json_file, 'w') as f:
            json.dump(self.results, f, indent=2)

        print("Comparison report generated:")
        print(f"  HTML: {html_file}")
        print(f"  JSON: {json_file}")

        return str(html_file)
    def create_visualizations(self, output_path, timestamp):
        """Create comparison visualizations"""
        if not self.results:
            return

        # Prepare data for plotting
        targets = []
        avg_latencies = []
        loss_percentages = []
        hop_counts = []

        for target, result in self.results.items():
            if result['hops']:
                final_hop = result['hops'][-1]
                targets.append(target)
                avg_latencies.append(final_hop['avg'])
                loss_percentages.append(final_hop['loss_percent'])
                hop_counts.append(result['total_hops'])

        if not targets:
            return

        # Create subplots
        fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))
        fig.suptitle('MTR Network Performance Comparison', fontsize=16)

        # 1. Average latency comparison
        ax1.bar(range(len(targets)), avg_latencies, color='skyblue')
        ax1.set_title('Average Latency Comparison')
        ax1.set_ylabel('Latency (ms)')
        ax1.set_xticks(range(len(targets)))
        ax1.set_xticklabels(targets, rotation=45, ha='right')
        # Add value labels on bars
        for i, v in enumerate(avg_latencies):
            ax1.text(i, v + max(avg_latencies) * 0.01, f'{v:.1f}ms',
                     ha='center', va='bottom')

        # 2. Packet loss comparison
        colors = ['green' if loss == 0 else 'orange' if loss < 5 else 'red'
                  for loss in loss_percentages]
        ax2.bar(range(len(targets)), loss_percentages, color=colors)
        ax2.set_title('Packet Loss Comparison')
        ax2.set_ylabel('Packet Loss (%)')
        ax2.set_xticks(range(len(targets)))
        ax2.set_xticklabels(targets, rotation=45, ha='right')
        # Add value labels
        for i, v in enumerate(loss_percentages):
            ax2.text(i, v + max(loss_percentages + [1]) * 0.01, f'{v:.1f}%',
                     ha='center', va='bottom')

        # 3. Hop count comparison
        ax3.bar(range(len(targets)), hop_counts, color='lightcoral')
        ax3.set_title('Network Hop Count Comparison')
        ax3.set_ylabel('Number of Hops')
        ax3.set_xticks(range(len(targets)))
        ax3.set_xticklabels(targets, rotation=45, ha='right')
        # Add value labels
        for i, v in enumerate(hop_counts):
            ax3.text(i, v + max(hop_counts) * 0.01, str(v),
                     ha='center', va='bottom')

        # 4. Latency vs loss scatter plot
        ax4.scatter(avg_latencies, loss_percentages, s=100, alpha=0.7)
        ax4.set_title('Latency vs Packet Loss')
        ax4.set_xlabel('Average Latency (ms)')
        ax4.set_ylabel('Packet Loss (%)')
        # Add target labels to scatter points
        for i, target in enumerate(targets):
            ax4.annotate(target, (avg_latencies[i], loss_percentages[i]),
                         xytext=(5, 5), textcoords='offset points', fontsize=8)

        plt.tight_layout()

        # Save plot
        plot_file = output_path / f"mtr_comparison_plots_{timestamp}.png"
        plt.savefig(plot_file, dpi=300, bbox_inches='tight')
        plt.close()

        print(f"Visualization saved: {plot_file}")
    def generate_html_report(self, output_file, summary_data):
        """Generate HTML comparison report"""
        html = f"""<h1>MTR Network Performance Comparison Report</h1>
<h2>Test Summary</h2>
<ul>
<li>Targets Tested: {len(self.targets)}</li>
<li>Cycles per Target: {self.cycles}</li>
<li>Generated: {datetime.datetime.now()}</li>
</ul>
<h2>Detailed Comparison</h2>
<table border="1">
<tr><th>Target</th><th>Hops</th><th>Packet Loss (%)</th><th>Avg Latency (ms)</th>
<th>Best Latency (ms)</th><th>Worst Latency (ms)</th><th>Std Dev (ms)</th><th>Performance Rating</th></tr>
"""
        for data in summary_data:
            # Simple rating from final-hop loss and latency
            if data['final_loss'] == 0 and data['final_avg_latency'] < 100:
                rating = 'Good'
            elif data['final_loss'] < 5 and data['final_avg_latency'] < 200:
                rating = 'Acceptable'
            else:
                rating = 'Poor'
            html += (f"<tr><td>{data['target']}</td><td>{data['hops']}</td>"
                     f"<td>{data['final_loss']:.1f}</td><td>{data['final_avg_latency']:.1f}</td>"
                     f"<td>{data['final_best_latency']:.1f}</td><td>{data['final_worst_latency']:.1f}</td>"
                     f"<td>{data['final_stdev']:.1f}</td><td>{rating}</td></tr>\n")
        html += "</table>\n<h2>Detailed Hop Analysis</h2>\n"

        for target, result in self.results.items():
            html += f"<h3>{target}</h3>\n<table border=\"1\">\n"
            html += ("<tr><th>Hop</th><th>Host</th><th>Loss (%)</th><th>Packets Sent</th>"
                     "<th>Last (ms)</th><th>Avg (ms)</th><th>Best (ms)</th>"
                     "<th>Worst (ms)</th><th>StdDev (ms)</th></tr>\n")
            for hop in result['hops']:
                html += (f"<tr><td>{hop['hop']}</td><td>{hop['host']}</td>"
                         f"<td>{hop['loss_percent']:.1f}</td><td>{hop['sent']}</td>"
                         f"<td>{hop['last']:.1f}</td><td>{hop['avg']:.1f}</td>"
                         f"<td>{hop['best']:.1f}</td><td>{hop['worst']:.1f}</td>"
                         f"<td>{hop['stdev']:.1f}</td></tr>\n")
            html += "</table>\n"

        with open(output_file, 'w') as f:
            f.write(html)
def main():
    parser = argparse.ArgumentParser(description='MTR Performance Comparison Tool')
    parser.add_argument('targets', nargs='+',
                        help='Target hostnames or IPs to compare')
    parser.add_argument('--cycles', type=int, default=50,
                        help='Number of cycles per test (default: 50)')
    parser.add_argument('--output-dir', default='mtr_comparison',
                        help='Output directory for results')

    args = parser.parse_args()

    # Create comparison instance
    comparison = MTRComparison(args.targets, args.cycles)

    # Run comparison
    results = comparison.run_comparison()

    # Generate report
    if results:
        report_file = comparison.generate_comparison_report(args.output_dir)
        print(f"\nOpen the HTML report: {report_file}")
    else:
        print("No successful tests to compare")


if __name__ == "__main__":
    main()
```
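Assuming the script is saved as mtr_performance_comparison.py, a typical run (matplotlib and pandas must be installed):

```bash
pip install matplotlib pandas
python3 mtr_performance_comparison.py google.com cloudflare.com github.com --cycles 30
```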
This comprehensive MTR cheatsheet covers everything needed for professional network diagnostics, real-time monitoring, and comparative network analysis, from basic route tracing to advanced automation and visualization scenarios.