
# MTR Cheat Sheet


## Overview

MTR (My Traceroute) is a powerful network diagnostic tool that combines the functionality of ping and traceroute in a single utility. It continuously sends packets to a destination and displays real-time statistics on packet loss and latency for each hop along the route. MTR provides a more complete picture of network performance than traditional tools by showing ongoing statistics rather than single snapshots.
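
A typical report looks like the following (illustrative output; hosts and timings will vary):

```text
$ mtr --report --report-cycles 10 google.com
Start: 2024-01-15T10:30:00+0000
HOST: myhost                    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1              0.0%    10    0.4   0.5   0.3   1.1   0.2
  2.|-- 10.10.0.1                0.0%    10    8.2   9.1   7.8  12.4   1.4
  3.|-- 142.250.80.46            0.0%    10   11.3  11.9  10.9  14.2   1.0
```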

## Key Features

- **Real-time monitoring**: Continuous packet transmission with live statistics
- **Combined functionality**: Ping and traceroute in one tool
- **Packet loss detection**: Per-hop packet loss statistics
- **Latency analysis**: Minimum, maximum, average, and standard-deviation values
- **Multiple output formats**: Text, CSV, JSON, and XML output
- **IPv4 and IPv6 support**: Dual-stack network analysis
- **GUI and CLI modes**: Terminal and graphical interfaces
- **Customizable parameters**: Packet size, interval, and count options
- **Network path visualization**: Clear display of route topology

## Installation

### Linux Systems

```bash
# Ubuntu/Debian
sudo apt update
sudo apt install mtr mtr-tiny

# CentOS/RHEL/Fedora
sudo yum install mtr
# or
sudo dnf install mtr

# Arch Linux
sudo pacman -S mtr

# openSUSE
sudo zypper install mtr

# From source
git clone https://github.com/traviscross/mtr.git
cd mtr
./bootstrap.sh
./configure
make
sudo make install

# Verify installation
mtr --version
```
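
MTR sends probes over raw sockets, which normally requires elevated privileges. Packaged builds usually handle this for you; if you hit permission errors as a regular user, one common fix (a sketch, assuming your build uses the `mtr-packet` helper introduced in mtr 0.87) is:

```bash
# Grant the raw-socket capability to mtr's packet helper (path may vary by distro)
sudo setcap cap_net_raw+ep "$(command -v mtr-packet)"
```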

### Windows Systems

```bash
# WinMTR (Windows GUI version)
# Download from: https://sourceforge.net/projects/winmtr/

# Using Chocolatey
choco install winmtr

# Using Scoop
scoop install winmtr

# Manual installation
# 1. Download WinMTR from SourceForge
# 2. Extract to desired location
# 3. Run WinMTR.exe

# Command line version via WSL
wsl --install
wsl
sudo apt install mtr
```

### macOS Systems

```bash
# Using Homebrew
brew install mtr

# Using MacPorts
sudo port install mtr

# From source
git clone https://github.com/traviscross/mtr.git
cd mtr
./bootstrap.sh
./configure
make
sudo make install

# Note: May require additional permissions for raw sockets
sudo mtr google.com

# Verify installation
mtr --version
```

### Docker Installation

```bash
# Pull MTR image
docker pull alpine:latest

# Create custom MTR container
cat > Dockerfile << EOF
FROM alpine:latest
RUN apk add --no-cache mtr
ENTRYPOINT ["mtr"]
EOF

docker build -t mtr-container .

# Run MTR in container
docker run --rm -it mtr-container google.com

# One-liner with Alpine
docker run --rm -it alpine:latest sh -c "apk add --no-cache mtr && mtr google.com"
```
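
Stock Docker grants containers the NET_RAW capability by default, so the examples above work as-is. On runtimes that drop default capabilities, add it back explicitly (a sketch for a hypothetical hardened setup):

```bash
# Re-grant raw-socket access when the runtime strips default capabilities
docker run --rm -it --cap-add=NET_RAW mtr-container --report -c 5 google.com
```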

## Basic Usage

### Command-Line Interface

```bash
# Basic MTR to hostname
mtr google.com

# MTR to IP address
mtr 8.8.8.8

# Run for specific number of cycles
mtr -c 10 google.com

# Report mode (non-interactive)
mtr --report google.com

# Report with specific count
mtr --report --report-cycles 20 google.com

# No DNS resolution
mtr -n google.com

# IPv6 mode
mtr -6 google.com

# IPv4 mode (explicit)
mtr -4 google.com

# Specify interface
mtr -I eth0 google.com
```

### Interactive Mode

```bash
# Start interactive MTR
mtr google.com

# Interactive mode key bindings:
# q - quit
# r - reset statistics
# d - toggle display mode
# n - toggle DNS resolution
# p - pause/unpause
# space - pause/unpause
# h - help
# ? - help

# Display modes in interactive mode (press 'd' to cycle):
# 0 - statistics
# 1 - stripchart without latency
# 2 - stripchart with latency
```

### Report Generation

```bash
# Generate report with 50 cycles
mtr --report --report-cycles 50 google.com

# Wide report format
mtr --report-wide --report-cycles 30 google.com

# CSV output
mtr --csv --report-cycles 20 google.com

# JSON output
mtr --json --report-cycles 15 google.com

# XML output
mtr --xml --report-cycles 25 google.com

# Raw output format
mtr --raw --report-cycles 10 google.com
```
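
The JSON format is the easiest to post-process. A minimal sketch using jq (the key names `report.hubs`, `Loss%`, and `Avg` match recent mtr releases and may differ in older ones):

```bash
# Print host, loss, and average latency for every hop from the JSON report
mtr --json --report-cycles 10 google.com \
  | jq -r '.report.hubs[] | "\(.host)\t\(."Loss%")%\t\(.Avg)ms"'
```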

## Advanced Configuration

### Packet and Timing Options

```bash
# Custom packet size
mtr -s 1400 google.com

# Custom interval (seconds between packets)
mtr -i 2 google.com

# Timeout per probe in seconds
mtr -Z 5 google.com

# Maximum hops
mtr -m 20 google.com

# First hop to start from
mtr -f 3 google.com

# Specify source address
mtr -a 192.168.1.100 google.com

# Set Type of Service (ToS) field, decimal value (16 = 0x10)
mtr -Q 16 google.com

# Use TCP instead of ICMP
mtr --tcp google.com

# Specify TCP/UDP port
mtr --port 80 google.com

# UDP mode
mtr --udp google.com
```
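
Packet size is worth varying because MTU and fragmentation problems often surface only with large packets. A minimal sweep sketch (assumes the default report layout, where column 3 of the final hop line is Loss%):

```bash
# Compare final-hop loss across several packet sizes
for size in 64 512 1000 1400 1472; do
    loss=$(mtr --report -n -c 10 -s "$size" google.com | tail -1 | awk '{print $3}')
    echo "size=${size} bytes -> final-hop loss ${loss}"
done
```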

### Advanced Reporting Options

```bash
# Arrange output columns with field letters (see FIELDS in man mtr)
mtr --order "LS NABWV" --report google.com

# Show IP addresses and hostnames
mtr --show-ips --report google.com

# Display AS numbers
mtr --aslookup --report google.com

# Split-mode output (machine-parseable lines, intended for GUI frontends)
mtr --split --report google.com

# Jitter columns are selected via field order (J = current, M = mean, X = worst, I = interarrival)
mtr --order "LS NABWV JMXI" --report google.com

# Bitpattern for packets
mtr --bitpattern 0xFF --report google.com

# Grace period before starting
mtr --gracetime 5 --report google.com
```

### Output Customization

```bash
# Curses display modes (-d): 0 = statistics, 1 = stripchart without latency,
# 2 = stripchart with latency
mtr --displaymode 0 google.com
mtr --displaymode 1 google.com
mtr --displaymode 2 google.com

# Wide format with all statistics
mtr --report-wide --report-cycles 30 google.com

# Force the curses (terminal) interface for a fixed number of cycles
mtr --curses --report-cycles 20 google.com

# Trim the leading "Start:" header line from report output
mtr --no-dns --report google.com | tail -n +2
```
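
Report output also pipes cleanly into standard text tools. For example, a one-liner that flags any hop reporting loss (assuming the default report layout with a `Start:` line, a `HOST:` header, and Loss% in column 3):

```bash
# Print host and loss for every hop with non-zero packet loss
mtr --report -n -c 20 google.com | awk 'NR>2 && $3+0 > 0 {print $2, $3}'
```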

## Network Analysis and Troubleshooting

### Comprehensive Network Analysis Script

```bash
#!/bin/bash
# comprehensive_mtr_analysis.sh

TARGET="$1"
CYCLES="${2:-100}"
OUTPUT_DIR="mtr_analysis_$(date +%Y%m%d_%H%M%S)"

if [ -z "$TARGET" ]; then
    echo "Usage: $0 <target> [cycles]"
    echo "Example: $0 google.com 200"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

echo "Comprehensive MTR Network Analysis"
echo "=================================="
echo "Target: $TARGET"
echo "Cycles: $CYCLES"
echo "Output Directory: $OUTPUT_DIR"
echo ""

# Function to run MTR test and analyze
run_mtr_analysis() {
    local test_name=$1
    local description=$2
    local mtr_options=$3
    local output_file="$OUTPUT_DIR/${test_name}.txt"
    local analysis_file="$OUTPUT_DIR/${test_name}_analysis.txt"

    echo "Running: $test_name"
    echo "Description: $description"
    echo "Options: $mtr_options"

    # Run MTR test
    eval "mtr $mtr_options --report --report-cycles $CYCLES $TARGET" > "$output_file"

    # Analyze results
    echo "Analysis for: $test_name" > "$analysis_file"
    echo "Description: $description" >> "$analysis_file"
    echo "Timestamp: $(date)" >> "$analysis_file"
    echo "========================================" >> "$analysis_file"

    # Extract key metrics
    if [ -s "$output_file" ]; then
        # Count hops
        hop_count=$(grep -c "^ *[0-9]" "$output_file")
        echo "Total hops: $hop_count" >> "$analysis_file"

        # Find problematic hops
        echo "" >> "$analysis_file"
        echo "Hop Analysis:" >> "$analysis_file"
        echo "-------------" >> "$analysis_file"

        grep "^ *[0-9]" "$output_file" | while read line; do
            hop=$(echo "$line" | awk '{print $1}')
            host=$(echo "$line" | awk '{print $2}')
            loss=$(echo "$line" | awk '{print $3}' | tr -d '%')
            avg=$(echo "$line" | awk '{print $6}')

            # Check for issues
            issues=""
            if [[ "$loss" =~ ^[0-9]+$ ]] && [ "$loss" -gt 0 ]; then
                issues="$issues PACKET_LOSS(${loss}%)"
            fi

            if [[ "$avg" =~ ^[0-9]+\.?[0-9]*$ ]] && (( $(echo "$avg > 200" | bc -l) )); then
                issues="$issues HIGH_LATENCY(${avg}ms)"
            fi

            if [ -n "$issues" ]; then
                echo "Hop $hop ($host): $issues" >> "$analysis_file"
            fi
        done

        # Overall assessment
        echo "" >> "$analysis_file"
        echo "Overall Assessment:" >> "$analysis_file"
        echo "------------------" >> "$analysis_file"

        # Check final hop performance
        final_line=$(tail -1 "$output_file")
        if echo "$final_line" | grep -q "^ *[0-9]"; then
            final_loss=$(echo "$final_line" | awk '{print $3}' | tr -d '%')
            final_avg=$(echo "$final_line" | awk '{print $6}')

            if [[ "$final_loss" =~ ^[0-9]+$ ]]; then
                if [ "$final_loss" -eq 0 ]; then
                    echo "✓ No packet loss to destination" >> "$analysis_file"
                elif [ "$final_loss" -lt 5 ]; then
                    echo "⚠ Minor packet loss: ${final_loss}%" >> "$analysis_file"
                else
                    echo "✗ Significant packet loss: ${final_loss}%" >> "$analysis_file"
                fi
            fi

            if [[ "$final_avg" =~ ^[0-9]+\.?[0-9]*$ ]]; then
                if (( $(echo "$final_avg < 50" | bc -l) )); then
                    echo "✓ Good latency: ${final_avg}ms" >> "$analysis_file"
                elif (( $(echo "$final_avg < 150" | bc -l) )); then
                    echo "⚠ Acceptable latency: ${final_avg}ms" >> "$analysis_file"
                else
                    echo "✗ High latency: ${final_avg}ms" >> "$analysis_file"
                fi
            fi
        fi

        echo "  Results saved to: $output_file"
        echo "  Analysis saved to: $analysis_file"
    else
        echo "  Test failed - no results"
        echo "Test failed - no output generated" >> "$analysis_file"
    fi

    echo ""
    sleep 2
}

# 1. Standard ICMP test
echo "1. Standard Tests"
echo "================="
run_mtr_analysis "icmp_standard" \
    "Standard ICMP test" \
    ""

run_mtr_analysis "icmp_no_dns" \
    "ICMP test without DNS resolution" \
    "-n"

# 2. Protocol variations
echo "2. Protocol Tests"
echo "================="
run_mtr_analysis "tcp_test" \
    "TCP test (port 80)" \
    "--tcp --port 80"

run_mtr_analysis "udp_test" \
    "UDP test" \
    "--udp"

# 3. Packet size tests
echo "3. Packet Size Tests"
echo "==================="
for size in 64 512 1400; do
    run_mtr_analysis "packet_size_${size}" \
        "Test with ${size} byte packets" \
        "-s $size"
done

# 4. IPv6 test (if supported)
echo "4. IPv6 Test"
echo "============"
if ping6 -c 1 "$TARGET" >/dev/null 2>&1; then
    run_mtr_analysis "ipv6_test" \
        "IPv6 connectivity test" \
        "-6"
else
    echo "IPv6 not supported or target not reachable via IPv6"
fi

# 5. Generate comprehensive report
echo "5. Generating Comprehensive Report"
echo "=================================="

REPORT_FILE="$OUTPUT_DIR/comprehensive_report.html"

cat > "$REPORT_FILE" << EOF
<!DOCTYPE html>
<html>
<head>
    <title>MTR Network Analysis Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        table { border-collapse: collapse; width: 100%; margin: 20px 0; }
        th, td { border: 1px solid #ddd; padding: 12px; text-align: left; }
        th { background-color: #f2f2f2; }
        .summary { background-color: #e7f3ff; padding: 15px; border-radius: 5px; margin: 20px 0; }
        .good { background-color: #d4edda; }
        .warning { background-color: #fff3cd; }
        .alert { background-color: #f8d7da; }
        .test-section { margin: 30px 0; }
        pre { background-color: #f8f9fa; padding: 15px; border-radius: 5px; overflow-x: auto; }
    </style>
</head>
<body>
    <h1>MTR Network Analysis Report</h1>
    <div class="summary">
        <h3>Test Summary</h3>
        <p><strong>Target:</strong> $TARGET</p>
        <p><strong>Cycles per test:</strong> $CYCLES</p>
        <p><strong>Generated:</strong> $(date)</p>
        <p><strong>Test Directory:</strong> $OUTPUT_DIR</p>
    </div>
EOF

# Process each test result
for result_file in "$OUTPUT_DIR"/*.txt; do
    if [[ "$result_file" != *"_analysis.txt" ]] && [ -f "$result_file" ]; then
        test_name=$(basename "$result_file" .txt)
        analysis_file="$OUTPUT_DIR/${test_name}_analysis.txt"

        echo "    <div class=\"test-section\">" >> "$REPORT_FILE"
        echo "        <h2>$test_name</h2>" >> "$REPORT_FILE"

        if [ -f "$analysis_file" ]; then
            description=$(grep "Description:" "$analysis_file" | cut -d: -f2- | sed 's/^ *//')
            echo "        <p><strong>Description:</strong> $description</p>" >> "$REPORT_FILE"

            # Add analysis summary
            if grep -q "Overall Assessment:" "$analysis_file"; then
                echo "        <h3>Analysis Summary</h3>" >> "$REPORT_FILE"
                echo "        <ul>" >> "$REPORT_FILE"

                sed -n '/Overall Assessment:/,/^$/p' "$analysis_file" | tail -n +3 | while read line; do
                    if [ -n "$line" ]; then
                        echo "            <li>$line</li>" >> "$REPORT_FILE"
                    fi
                done

                echo "        </ul>" >> "$REPORT_FILE"
            fi
        fi

        # Add raw results
        echo "        <h3>Raw Results</h3>" >> "$REPORT_FILE"
        echo "        <pre>" >> "$REPORT_FILE"
        cat "$result_file" >> "$REPORT_FILE"
        echo "        </pre>" >> "$REPORT_FILE"
        echo "    </div>" >> "$REPORT_FILE"
    fi
done

cat >> "$REPORT_FILE" << EOF

    <div class="summary">
        <h3>Recommendations</h3>
        <ul>
            <li>Review hop-by-hop analysis for bottlenecks</li>
            <li>Compare different protocol results</li>
            <li>Monitor packet loss patterns over time</li>
            <li>Consider packet size impact on performance</li>
            <li>Use results for capacity planning and SLA monitoring</li>
        </ul>
    </div>
</body>
</html>
EOF

echo "Comprehensive analysis completed!"
echo "Results directory: $OUTPUT_DIR"
echo "HTML report: $REPORT_FILE"
echo ""

# Display summary
echo "Test Summary:"
echo "============="
for analysis_file in "$OUTPUT_DIR"/*_analysis.txt; do
    if [ -f "$analysis_file" ]; then
        test_name=$(basename "$analysis_file" _analysis.txt)
        echo -n "$test_name: "

        if grep -q "✓.*No packet loss" "$analysis_file"; then
            echo "Good (No packet loss)"
        elif grep -q "⚠.*packet loss" "$analysis_file"; then
            loss=$(grep "⚠.*packet loss" "$analysis_file" | grep -o "[0-9]*%")
            echo "Warning (Packet loss: $loss)"
        elif grep -q "✗.*packet loss" "$analysis_file"; then
            loss=$(grep "✗.*packet loss" "$analysis_file" | grep -o "[0-9]*%")
            echo "Alert (High packet loss: $loss)"
        else
            echo "Completed"
        fi
    fi
done
```
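
Example invocation (hypothetical target; pick a cycle count that matches how long you can wait):

```bash
chmod +x comprehensive_mtr_analysis.sh
./comprehensive_mtr_analysis.sh example.com 200

# The output directory name includes a timestamp
xdg-open mtr_analysis_*/comprehensive_report.html
```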

### Real-Time Network Monitoring

```bash
#!/bin/bash
# realtime_mtr_monitor.sh

TARGET="$1"
DURATION="${2:-3600}"  # Default 1 hour
LOG_INTERVAL="${3:-300}"  # Log every 5 minutes

if [ -z "$TARGET" ]; then
    echo "Usage: $0 <target> [duration_seconds] [log_interval_seconds]"
    echo "Example: $0 google.com 7200 180"
    exit 1
fi

MONITOR_DIR="mtr_monitor_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$MONITOR_DIR"

LOG_FILE="$MONITOR_DIR/monitor.log"
CSV_FILE="$MONITOR_DIR/monitor.csv"
ALERT_FILE="$MONITOR_DIR/alerts.log"

# CSV header
echo "timestamp,avg_loss,avg_latency,max_latency,hop_count,worst_hop,worst_hop_loss" > "$CSV_FILE"

echo "Starting MTR real-time monitoring..."
echo "Target: $TARGET"
echo "Duration: $DURATION seconds"
echo "Log interval: $LOG_INTERVAL seconds"
echo "Monitor directory: $MONITOR_DIR"
echo ""

# Alert thresholds
LOSS_THRESHOLD=5      # 5% packet loss
LATENCY_THRESHOLD=200 # 200ms latency

END_TIME=$(($(date +%s) + DURATION))
CYCLE_COUNT=0

# Function to analyze MTR output
analyze_mtr_output() {
    local mtr_output="$1"
    local timestamp="$2"

    # Extract metrics
    local total_loss=0
    local total_latency=0
    local max_latency=0
    local hop_count=0
    local worst_hop=""
    local worst_hop_loss=0

    while read line; do
        if echo "$line" | grep -q "^ *[0-9]"; then
            hop_count=$((hop_count + 1))

            hop=$(echo "$line" | awk '{print $1}')
            host=$(echo "$line" | awk '{print $2}')
            loss=$(echo "$line" | awk '{print $3}' | tr -d '%')
            avg=$(echo "$line" | awk '{print $6}')

            # Accumulate statistics
            if [[ "$loss" =~ ^[0-9]+$ ]]; then
                total_loss=$((total_loss + loss))

                if [ "$loss" -gt "$worst_hop_loss" ]; then
                    worst_hop_loss=$loss
                    worst_hop="$hop ($host)"
                fi
            fi

            if [[ "$avg" =~ ^[0-9]+\.?[0-9]*$ ]]; then
                total_latency=$(echo "$total_latency + $avg" | bc)

                if (( $(echo "$avg > $max_latency" | bc -l) )); then
                    max_latency=$avg
                fi
            fi
        fi
    done <<< "$mtr_output"

    # Calculate averages
    local avg_loss=0
    local avg_latency=0

    if [ "$hop_count" -gt 0 ]; then
        avg_loss=$(echo "scale=2; $total_loss / $hop_count" | bc)
        avg_latency=$(echo "scale=2; $total_latency / $hop_count" | bc)
    fi

    # Log to CSV
    echo "$timestamp,$avg_loss,$avg_latency,$max_latency,$hop_count,$worst_hop,$worst_hop_loss" >> "$CSV_FILE"

    # Check for alerts
    local alerts=""

    if (( $(echo "$avg_loss > $LOSS_THRESHOLD" | bc -l) )); then
        alerts="$alerts HIGH_PACKET_LOSS(${avg_loss}%)"
    fi

    if (( $(echo "$avg_latency > $LATENCY_THRESHOLD" | bc -l) )); then
        alerts="$alerts HIGH_LATENCY(${avg_latency}ms)"
    fi

    if [ -n "$alerts" ]; then
        echo "[$timestamp] ALERT: $alerts" | tee -a "$ALERT_FILE"
        echo "  Worst hop: $worst_hop (${worst_hop_loss}% loss)" | tee -a "$ALERT_FILE"
    fi

    # Display current status
    echo "[$timestamp] Avg Loss: ${avg_loss}%, Avg Latency: ${avg_latency}ms, Max: ${max_latency}ms, Hops: $hop_count"

    if [ "$worst_hop_loss" -gt 0 ]; then
        echo "  Worst hop: $worst_hop (${worst_hop_loss}% loss)"
    fi
}

# Main monitoring loop
while [ $(date +%s) -lt $END_TIME ]; do
    CYCLE_COUNT=$((CYCLE_COUNT + 1))
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

    echo "Cycle $CYCLE_COUNT - $TIMESTAMP" | tee -a "$LOG_FILE"

    # Run MTR test
    MTR_OUTPUT=$(mtr --report --report-cycles 20 -n "$TARGET" 2>/dev/null)

    if [ $? -eq 0 ] && [ -n "$MTR_OUTPUT" ]; then
        # Save full output
        echo "=== Cycle $CYCLE_COUNT - $TIMESTAMP ===" >> "$MONITOR_DIR/full_output.log"
        echo "$MTR_OUTPUT" >> "$MONITOR_DIR/full_output.log"
        echo "" >> "$MONITOR_DIR/full_output.log"

        # Analyze output
        analyze_mtr_output "$MTR_OUTPUT" "$TIMESTAMP"
    else
        echo "  ERROR: MTR test failed" | tee -a "$LOG_FILE"
        echo "$TIMESTAMP,0,0,0,0,ERROR,0" >> "$CSV_FILE"
    fi

    echo "" | tee -a "$LOG_FILE"

    # Wait for next cycle
    sleep $LOG_INTERVAL
done

echo "Monitoring completed!"
echo "Results saved in: $MONITOR_DIR"

# Generate summary statistics
if command -v python3 >/dev/null 2>&1; then
    python3 << EOF
import csv
import statistics

# Read monitoring data
data = []
with open('$CSV_FILE', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        if row['avg_loss'] != '0' or row['avg_latency'] != '0':
            try:
                data.append({
                    'loss': float(row['avg_loss']),
                    'latency': float(row['avg_latency']),
                    'max_latency': float(row['max_latency']),
                    'hop_count': int(row['hop_count'])
                })
            except ValueError:
                continue

if data:
    losses = [d['loss'] for d in data]
    latencies = [d['latency'] for d in data]
    max_latencies = [d['max_latency'] for d in data]

    print("Monitoring Summary Statistics:")
    print("==============================")
    print(f"Total monitoring cycles: {len(data)}")
    print(f"Average packet loss: {statistics.mean(losses):.2f}%")
    print(f"Maximum packet loss: {max(losses):.2f}%")
    print(f"Average latency: {statistics.mean(latencies):.2f}ms")
    print(f"Maximum latency: {max(max_latencies):.2f}ms")
    print(f"Latency std deviation: {statistics.stdev(latencies):.2f}ms")

    # Count alerts
    high_loss_count = sum(1 for loss in losses if loss > $LOSS_THRESHOLD)
    high_latency_count = sum(1 for lat in latencies if lat > $LATENCY_THRESHOLD)

    print(f"High packet loss alerts: {high_loss_count}")
    print(f"High latency alerts: {high_latency_count}")
else:
    print("No valid monitoring data collected")
EOF
fi

# Check if alerts were generated
if [ -f "$ALERT_FILE" ]; then
    echo ""
    echo "ALERTS GENERATED:"
    echo "=================="
    cat "$ALERT_FILE"
fi
```
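
Example: monitor a target for two hours, sampling every three minutes (hypothetical values):

```bash
chmod +x realtime_mtr_monitor.sh
./realtime_mtr_monitor.sh example.com 7200 180

# Afterwards, show the five samples with the worst max latency (CSV column 4)
sort -t, -k4 -rn mtr_monitor_*/monitor.csv | head -5
```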

### Performance Comparison Tool

```python
#!/usr/bin/env python3
# mtr_performance_comparison.py

import subprocess
import json
import time
import datetime
import argparse
import statistics
from pathlib import Path
import matplotlib.pyplot as plt
import pandas as pd

class MTRComparison:
    def __init__(self, targets, cycles=50):
        self.targets = targets
        self.cycles = cycles
        self.results = {}

    def run_mtr_test(self, target):
        """Run MTR test and parse results"""
        try:
            # Run MTR with report mode
            cmd = ['mtr', '--report', '--report-cycles', str(self.cycles), '-n', target]
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)

            if result.returncode == 0:
                return self.parse_mtr_output(result.stdout, target)
            else:
                print(f"MTR failed for {target}: {result.stderr}")
                return None

        except subprocess.TimeoutExpired:
            print(f"MTR timed out for {target}")
            return None
        except Exception as e:
            print(f"Error running MTR for {target}: {e}")
            return None

    def parse_mtr_output(self, output, target):
        """Parse MTR output into structured data"""
        lines = output.strip().split('\n')
        hops = []

        for line in lines:
            # Skip header and empty lines
            if not line.strip() or 'HOST:' in line or 'Start:' in line:
                continue

            # Parse hop lines
            parts = line.split()
            # Hop labels appear as "1.|--" in mtr report output; strip the decoration
            hop_label = parts[0].rstrip('.|-')
            if len(parts) >= 8 and hop_label.isdigit():
                try:
                    hop_data = {
                        'hop': int(hop_label),
                        'host': parts[1],
                        'loss_percent': float(parts[2].rstrip('%')),
                        'sent': int(parts[3]),
                        'last': float(parts[4]),
                        'avg': float(parts[5]),
                        'best': float(parts[6]),
                        'worst': float(parts[7]),
                        'stdev': float(parts[8]) if len(parts) > 8 else 0.0
                    }
                    hops.append(hop_data)
                except (ValueError, IndexError):
                    continue

        return {
            'target': target,
            'timestamp': datetime.datetime.now().isoformat(),
            'cycles': self.cycles,
            'hops': hops,
            'total_hops': len(hops)
        }

    def run_comparison(self):
        """Run MTR tests for all targets"""
        print(f"Running MTR comparison for {len(self.targets)} targets")
        print(f"Cycles per target: {self.cycles}")
        print("=" * 50)

        for i, target in enumerate(self.targets, 1):
            print(f"Testing {i}/{len(self.targets)}: {target}")

            result = self.run_mtr_test(target)
            if result:
                self.results[target] = result

                # Display summary
                if result['hops']:
                    final_hop = result['hops'][-1]
                    print(f"  Hops: {result['total_hops']}")
                    print(f"  Final hop loss: {final_hop['loss_percent']:.1f}%")
                    print(f"  Final hop avg latency: {final_hop['avg']:.1f}ms")
                else:
                    print("  No hop data available")
            else:
                print("  Test failed")

            print()

            # Small delay between tests
            if i < len(self.targets):
                time.sleep(2)

        return self.results

    def generate_comparison_report(self, output_dir="mtr_comparison"):
        """Generate comprehensive comparison report"""
        if not self.results:
            print("No results to compare")
            return

        output_path = Path(output_dir)
        output_path.mkdir(exist_ok=True)

        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

        # Generate summary statistics
        summary_data = []
        for target, result in self.results.items():
            if result['hops']:
                final_hop = result['hops'][-1]
                summary_data.append({
                    'target': target,
                    'hops': result['total_hops'],
                    'final_loss': final_hop['loss_percent'],
                    'final_avg_latency': final_hop['avg'],
                    'final_best_latency': final_hop['best'],
                    'final_worst_latency': final_hop['worst'],
                    'final_stdev': final_hop['stdev']
                })

        # Save to CSV
        if summary_data:
            df = pd.DataFrame(summary_data)
            csv_file = output_path / f"mtr_comparison_{timestamp}.csv"
            df.to_csv(csv_file, index=False)
            print(f"Summary CSV saved: {csv_file}")

        # Generate visualizations
        self.create_visualizations(output_path, timestamp)

        # Generate HTML report
        html_file = output_path / f"mtr_comparison_{timestamp}.html"
        self.generate_html_report(html_file, summary_data)

        # Save raw JSON data
        json_file = output_path / f"mtr_raw_data_{timestamp}.json"
        with open(json_file, 'w') as f:
            json.dump(self.results, f, indent=2)

        print(f"Comparison report generated:")
        print(f"  HTML: {html_file}")
        print(f"  JSON: {json_file}")

        return str(html_file)

    def create_visualizations(self, output_path, timestamp):
        """Create comparison visualizations"""
        if not self.results:
            return

        # Prepare data for plotting
        targets = []
        avg_latencies = []
        loss_percentages = []
        hop_counts = []

        for target, result in self.results.items():
            if result['hops']:
                final_hop = result['hops'][-1]
                targets.append(target)
                avg_latencies.append(final_hop['avg'])
                loss_percentages.append(final_hop['loss_percent'])
                hop_counts.append(result['total_hops'])

        if not targets:
            return

        # Create subplots
        fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))
        fig.suptitle('MTR Network Performance Comparison', fontsize=16)

        # 1. Average Latency Comparison
        ax1.bar(range(len(targets)), avg_latencies, color='skyblue')
        ax1.set_title('Average Latency Comparison')
        ax1.set_ylabel('Latency (ms)')
        ax1.set_xticks(range(len(targets)))
        ax1.set_xticklabels(targets, rotation=45, ha='right')

        # Add value labels on bars
        for i, v in enumerate(avg_latencies):
            ax1.text(i, v + max(avg_latencies) * 0.01, f'{v:.1f}ms', 
                    ha='center', va='bottom')

        # 2. Packet Loss Comparison
        colors = ['green' if loss == 0 else 'orange' if loss < 5 else 'red' 
                 for loss in loss_percentages]
        ax2.bar(range(len(targets)), loss_percentages, color=colors)
        ax2.set_title('Packet Loss Comparison')
        ax2.set_ylabel('Packet Loss (%)')
        ax2.set_xticks(range(len(targets)))
        ax2.set_xticklabels(targets, rotation=45, ha='right')

        # Add value labels
        for i, v in enumerate(loss_percentages):
            ax2.text(i, v + max(loss_percentages + [1]) * 0.01, f'{v:.1f}%', 
                    ha='center', va='bottom')

        # 3. Hop Count Comparison
        ax3.bar(range(len(targets)), hop_counts, color='lightcoral')
        ax3.set_title('Network Hop Count Comparison')
        ax3.set_ylabel('Number of Hops')
        ax3.set_xticks(range(len(targets)))
        ax3.set_xticklabels(targets, rotation=45, ha='right')

        # Add value labels
        for i, v in enumerate(hop_counts):
            ax3.text(i, v + max(hop_counts) * 0.01, str(v), 
                    ha='center', va='bottom')

        # 4. Latency vs Loss Scatter Plot
        ax4.scatter(avg_latencies, loss_percentages, s=100, alpha=0.7)
        ax4.set_title('Latency vs Packet Loss')
        ax4.set_xlabel('Average Latency (ms)')
        ax4.set_ylabel('Packet Loss (%)')

        # Add target labels to scatter points
        for i, target in enumerate(targets):
            ax4.annotate(target, (avg_latencies[i], loss_percentages[i]),
                        xytext=(5, 5), textcoords='offset points', fontsize=8)

        plt.tight_layout()

        # Save plot
        plot_file = output_path / f"mtr_comparison_plots_{timestamp}.png"
        plt.savefig(plot_file, dpi=300, bbox_inches='tight')
        plt.close()

        print(f"Visualization saved: {plot_file}")

    def generate_html_report(self, output_file, summary_data):
        """Generate HTML comparison report"""
        html_content = f"""
<!DOCTYPE html>
<html>
<head>
    <title>MTR Network Performance Comparison Report</title>
    <style>
        body {{ font-family: Arial, sans-serif; margin: 20px; }}
        table {{ border-collapse: collapse; width: 100%; margin: 20px 0; }}
        th, td {{ border: 1px solid #ddd; padding: 12px; text-align: left; }}
        th {{ background-color: #f2f2f2; font-weight: bold; }}
        .summary {{ background-color: #e7f3ff; padding: 15px; border-radius: 5px; margin: 20px 0; }}
        .good {{ background-color: #d4edda; }}
        .warning {{ background-color: #fff3cd; }}
        .alert {{ background-color: #f8d7da; }}
        .metric {{ display: inline-block; margin: 10px 20px; }}
        .best {{ font-weight: bold; color: #28a745; }}
        .worst {{ font-weight: bold; color: #dc3545; }}
    </style>
</head>
<body>
    <h1>MTR Network Performance Comparison Report</h1>
    <div class="summary">
        <h3>Test Summary</h3>
        <p><strong>Targets Tested:</strong> {len(self.targets)}</p>
        <p><strong>Cycles per Target:</strong> {self.cycles}</p>
        <p><strong>Generated:</strong> {datetime.datetime.now()}</p>
    </div>
"""

        if summary_data:
            # Find best and worst performers
            best_latency = min(summary_data, key=lambda x: x['final_avg_latency'])
            worst_latency = max(summary_data, key=lambda x: x['final_avg_latency'])
            best_loss = min(summary_data, key=lambda x: x['final_loss'])
            worst_loss = max(summary_data, key=lambda x: x['final_loss'])

            html_content += f"""
    <div class="summary">
        <h3>Performance Highlights</h3>
        <div class="metric"><strong>Best Latency:</strong> <span class="best">{best_latency['target']} ({best_latency['final_avg_latency']:.1f}ms)</span></div>
        <div class="metric"><strong>Worst Latency:</strong> <span class="worst">{worst_latency['target']} ({worst_latency['final_avg_latency']:.1f}ms)</span></div>
        <div class="metric"><strong>Best Loss:</strong> <span class="best">{best_loss['target']} ({best_loss['final_loss']:.1f}%)</span></div>
        <div class="metric"><strong>Worst Loss:</strong> <span class="worst">{worst_loss['target']} ({worst_loss['final_loss']:.1f}%)</span></div>
    </div>
"""

            # Comparison table
            html_content += """
    <h2>Detailed Comparison</h2>
    <table>
        <tr>
            <th>Target</th>
            <th>Hops</th>
            <th>Packet Loss (%)</th>
            <th>Avg Latency (ms)</th>
            <th>Best Latency (ms)</th>
            <th>Worst Latency (ms)</th>
            <th>Std Dev (ms)</th>
            <th>Performance Rating</th>
        </tr>
"""

            for data in summary_data:
                # Determine performance rating
                if data['final_loss'] == 0 and data['final_avg_latency'] < 50:
                    rating = "Excellent"
                    row_class = "good"
                elif data['final_loss'] < 1 and data['final_avg_latency'] < 100:
                    rating = "Good"
                    row_class = "good"
                elif data['final_loss'] < 5 and data['final_avg_latency'] < 200:
                    rating = "Acceptable"
                    row_class = "warning"
                else:
                    rating = "Poor"
                    row_class = "alert"

                html_content += f"""
        <tr class="{row_class}">
            <td>{data['target']}</td>
            <td>{data['hops']}</td>
            <td>{data['final_loss']:.1f}</td>
            <td>{data['final_avg_latency']:.1f}</td>
            <td>{data['final_best_latency']:.1f}</td>
            <td>{data['final_worst_latency']:.1f}</td>
            <td>{data['final_stdev']:.1f}</td>
            <td>{rating}</td>
        </tr>
"""

            html_content += "</table>"

        # Detailed results for each target
        html_content += "<h2>Detailed Hop Analysis</h2>"

        for target, result in self.results.items():
            html_content += f"""
    <h3>{target}</h3>
    <table>
        <tr>
            <th>Hop</th>
            <th>Host</th>
            <th>Loss (%)</th>
            <th>Packets Sent</th>
            <th>Last (ms)</th>
            <th>Avg (ms)</th>
            <th>Best (ms)</th>
            <th>Worst (ms)</th>
            <th>StdDev (ms)</th>
        </tr>
"""

            for hop in result['hops']:
                # Determine row class based on performance
                if hop['loss_percent'] == 0 and hop['avg'] < 100:
                    row_class = "good"
                elif hop['loss_percent'] < 5 and hop['avg'] < 200:
                    row_class = "warning"
                else:
                    row_class = "alert"

                html_content += f"""
        <tr class="{row_class}">
            <td>{hop['hop']}</td>
            <td>{hop['host']}</td>
            <td>{hop['loss_percent']:.1f}</td>
            <td>{hop['sent']}</td>
            <td>{hop['last']:.1f}</td>
            <td>{hop['avg']:.1f}</td>
            <td>{hop['best']:.1f}</td>
            <td>{hop['worst']:.1f}</td>
            <td>{hop['stdev']:.1f}</td>
        </tr>
"""

            html_content += "</table>"

        html_content += """
</body>
</html>
"""

        with open(output_file, 'w') as f:
            f.write(html_content)

def main():
    parser = argparse.ArgumentParser(description='MTR Performance Comparison Tool')
    parser.add_argument('targets', nargs='+', help='Target hostnames or IPs to compare')
    parser.add_argument('--cycles', type=int, default=50, help='Number of cycles per test (default: 50)')
    parser.add_argument('--output-dir', default='mtr_comparison', help='Output directory for results')

    args = parser.parse_args()

    # Create comparison instance
    comparison = MTRComparison(args.targets, args.cycles)

    # Run comparison
    results = comparison.run_comparison()

    # Generate report
    if results:
        report_file = comparison.generate_comparison_report(args.output_dir)
        print(f"\nOpen the HTML report: {report_file}")
    else:
        print("No successful tests to compare")

if __name__ == "__main__":
    main()
```
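
The script expects `mtr` on the PATH and depends on matplotlib and pandas. A sample run (hypothetical target list):

```bash
pip install matplotlib pandas
python3 mtr_performance_comparison.py google.com cloudflare.com example.com --cycles 30
```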

This comprehensive MTR cheat sheet provides everything needed for professional network diagnostics, real-time monitoring, and comparative network analysis, from basic route tracing to advanced automation and visualization scenarios.