# ZMap Cheat Sheet

## Overview

ZMap is a fast, single-packet network scanner designed for Internet-wide surveys and large-scale network discovery. Developed by researchers at the University of Michigan, ZMap can scan the entire IPv4 address space in under 45 minutes over a gigabit connection. Unlike traditional port scanners, which probe small networks in depth, ZMap is optimized for sweeping large address spaces quickly: it sends a single probe packet to each host and keeps almost no per-connection state. This makes it an invaluable tool for security researchers, network administrators, and penetration testers who need to perform large-scale network reconnaissance and Internet-wide security studies.
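A quick back-of-envelope check of the 45-minute figure above, assuming one probe per IPv4 address and a sustained rate of roughly 1.4M packets/second (about what ZMap reaches on gigabit hardware):

```shell
# Rough sweep-time estimate: one probe per IPv4 address at ~1.4M pps (assumed rate)
RATE=1400000                 # packets per second
ADDRS=$((2**32))             # size of the IPv4 address space
echo "$((ADDRS / RATE / 60)) minutes"   # → 51 minutes
```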

**Warning:** ZMap is a powerful network scanner that can generate significant traffic. Only run ZMap against networks you own or have explicit written permission to scan. Internet-wide scanning may violate terms of service and local laws. Always follow responsible disclosure practices and ethical scanning guidelines.

## Installation

### Ubuntu/Debian Installation

```bash
# Install from package repository
sudo apt update
sudo apt install zmap

# Verify installation
zmap --version

# Install additional dependencies
sudo apt install libpcap-dev libgmp-dev libssl-dev

# Install development tools if building from source
sudo apt install build-essential cmake libpcap-dev libgmp-dev libssl-dev
```

### CentOS/RHEL Installation

```bash
# Install EPEL repository
sudo yum install epel-release

# Install Zmap
sudo yum install zmap

# Install dependencies for building from source
sudo yum groupinstall "Development Tools"
sudo yum install cmake libpcap-devel gmp-devel openssl-devel
```

### Building from Source
```bash
# Clone Zmap repository
git clone https://github.com/zmap/zmap.git
cd zmap

# Create build directory
mkdir build
cd build

# Configure build
cmake ..

# Compile
make -j$(nproc)

# Install
sudo make install

# Verify installation
zmap --version
```

### Docker Installation
```bash
# Pull Zmap Docker image
docker pull zmap/zmap

# Run Zmap in Docker
docker run --rm --net=host zmap/zmap zmap --version

# Create alias for easier usage
echo 'alias zmap="docker run --rm --net=host zmap/zmap zmap"' >> ~/.bashrc
source ~/.bashrc

# Run with volume mount for output
docker run --rm --net=host -v $(pwd):/data zmap/zmap zmap -p 80 10.0.0.0/8 -o /data/scan_results.txt

```

### macOS Installation

```bash
# Install using Homebrew
brew install zmap

# Install dependencies
brew install libpcap gmp openssl cmake

# Verify installation
zmap --version

# If building from source on macOS
git clone https://github.com/zmap/zmap.git
cd zmap
mkdir build && cd build
cmake -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl ..
make -j$(sysctl -n hw.ncpu)
sudo make install
```
## Basic Usage

### Simple Port Scans

```bash
# Scan single port on subnet
zmap -p 80 192.168.1.0/24

# Scan multiple subnets
zmap -p 443 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16

# Scan with output to file
zmap -p 22 192.168.0.0/16 -o ssh_hosts.txt

# Scan with rate limiting (packets per second)
zmap -p 80 10.0.0.0/8 -r 1000

# Scan with bandwidth limiting
zmap -p 443 192.168.0.0/16 -B 10M

# Verbose output
zmap -p 80 192.168.1.0/24 -v
```

### Advanced Scan Options

```bash
# Scan with custom source port
zmap -p 80 192.168.1.0/24 -s 12345

# Scan on a specific interface
zmap -p 80 192.168.1.0/24 -i eth0

# Scan with custom gateway MAC
zmap -p 80 192.168.1.0/24 -G 00:11:22:33:44:55

# Scan with custom source IP
zmap -p 80 192.168.1.0/24 -S 192.168.1.100

# Scan with a specific probe module
zmap -p 80 192.168.1.0/24 -M tcp_synscan

# Scan with a specific output module
zmap -p 80 192.168.1.0/24 -O csv -o results.csv
```

### Probe Modules

```bash
# TCP SYN scan (default)
zmap -p 80 192.168.1.0/24 -M tcp_synscan

# ICMP echo scan
zmap 192.168.1.0/24 -M icmp_echoscan

# UDP scan
zmap -p 53 192.168.1.0/24 -M udp

# TCP ACK scan
zmap -p 80 192.168.1.0/24 -M tcp_ackscan

# NTP scan
zmap -p 123 192.168.1.0/24 -M ntp

# DNS scan
zmap -p 53 192.168.1.0/24 -M dns

# List available probe modules
zmap --list-probe-modules
```
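When scripting scans, it helps to pick the probe module from the port automatically. A minimal sketch (the port-to-module mapping below is a convention used later in this sheet, not something ZMap enforces):

```shell
# Map well-known UDP service ports to the udp probe module; default to TCP SYN
probe_for_port() {
    case "$1" in
        53|123|161|514) echo "udp" ;;
        *) echo "tcp_synscan" ;;
    esac
}

probe_for_port 53    # → udp
probe_for_port 443   # → tcp_synscan
```

This slots directly into a scan loop, e.g. `zmap -p "$p" -M "$(probe_for_port "$p")" …`.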

### Output Modules

```bash
# Default output (IP addresses)
zmap -p 80 192.168.1.0/24

# CSV output
zmap -p 80 192.168.1.0/24 -O csv -o results.csv

# JSON output
zmap -p 80 192.168.1.0/24 -O json -o results.json

# Extended output with additional fields
zmap -p 80 192.168.1.0/24 -O extended_file -o results.txt

# Redis output
zmap -p 80 192.168.1.0/24 -O redis --redis-server 127.0.0.1

# List available output modules
zmap --list-output-modules
```
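The CSV module writes a header row followed by one record per response, with the responding address in a `saddr` column; post-processing is plain shell. A sketch using a hypothetical `results.csv` (the sample lines below stand in for real scan output):

```shell
# Hypothetical stand-in for zmap CSV output: header row, then responding IPs
printf 'saddr\n192.168.1.5\n192.168.1.9\n192.168.1.5\n' > results.csv

# Strip the header, deduplicate, and count unique responsive hosts
tail -n +2 results.csv | sort -u > unique_hosts.txt
wc -l < unique_hosts.txt
```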

## Advanced Features

### Large-Scale Internet Scanning

```bash
# Scan entire IPv4 space for HTTP servers
zmap -p 80 0.0.0.0/0 -o http_servers.txt -r 10000

# Scan for HTTPS servers with rate limiting
zmap -p 443 0.0.0.0/0 -o https_servers.txt -r 5000 -B 100M

# Scan for SSH servers
zmap -p 22 0.0.0.0/0 -o ssh_servers.txt -r 2000

# Scan for DNS servers
zmap -p 53 0.0.0.0/0 -M udp -o dns_servers.txt -r 1000

# Scan with blacklist file
zmap -p 80 0.0.0.0/0 -b blacklist.txt -o results.txt

# Scan with whitelist file
zmap -p 80 -w whitelist.txt -o results.txt
```

### Custom Probe Configuration

```bash
# TCP SYN scan with custom TCP options
zmap -p 80 192.168.1.0/24 -M tcp_synscan --probe-args="tcp_window=1024"

# ICMP scan with custom payload
zmap 192.168.1.0/24 -M icmp_echoscan --probe-args="icmp_payload=deadbeef"

# UDP scan with custom payload
zmap -p 53 192.168.1.0/24 -M udp --probe-args="udp_payload_file=dns_query.bin"

# NTP scan with custom NTP packet
zmap -p 123 192.168.1.0/24 -M ntp --probe-args="ntp_version=3"

# DNS scan with custom query
zmap -p 53 192.168.1.0/24 -M dns --probe-args="dns_query=google.com"
```

### Performance Tuning

```bash
# High-speed scanning with multiple sender threads
zmap -p 80 10.0.0.0/8 -r 100000 -T 4

# Optimize for gigabit networks
zmap -p 80 0.0.0.0/0 -r 1400000 -B 1G

# Limit the number of targets probed in large scans
zmap -p 80 0.0.0.0/0 -r 10000 --max-targets 1000000

# Use one sender thread per CPU core
zmap -p 80 192.168.0.0/16 -T $(nproc)

# Tune sender threads and core pinning
zmap -p 80 192.168.1.0/24 --sender-threads 4 --cores 4
```
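`-r` and `-B` are two views of the same budget: at a fixed frame size, the bandwidth cap determines the achievable packet rate. A rough conversion, assuming ~84 bytes on the wire per SYN probe (an assumed but typical figure including Ethernet framing):

```shell
# Convert a bandwidth cap to an approximate packet rate
BITS_PER_SEC=1000000000       # 1 Gbit/s cap (-B 1G)
FRAME_BITS=$((84 * 8))        # ~84 bytes per SYN probe on the wire (assumption)
echo "$((BITS_PER_SEC / FRAME_BITS)) pps"   # → 1488095 pps
```

This lines up with the `-r 1400000` used in the gigabit example above.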

### Filtering and Targeting

```bash
# Exclude private networks
zmap -p 80 0.0.0.0/0 --exclude-file private_networks.txt

# Include only ranges listed in a file
zmap -p 80 --include-file target_asns.txt

# Use a fixed seed for reproducible address randomization
zmap -p 80 192.168.1.0/24 --seed 12345

# Scan a custom target list
zmap -p 80 --target-file targets.txt

# Exclude CIDR ranges on the command line
zmap -p 80 0.0.0.0/0 --exclude 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
```

## Automation Scripts

### Large-Scale Port Discovery

#!/bin/bash
# Large-scale port discovery using Zmap

TARGET_RANGE="$1"
OUTPUT_DIR="zmap_discovery_$(date +%Y%m%d_%H%M%S)"
RATE_LIMIT=10000
BANDWIDTH_LIMIT="100M"

if [ -z "$TARGET_RANGE" ]; then
    echo "Usage: $0 <target_range>"
    echo "Example: $0 '0.0.0.0/0'"
    echo "Example: $0 '10.0.0.0/8'"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

# Common ports to scan
COMMON_PORTS=(
    21 22 23 25 53 80 110 111 135 139 143 443 993 995 1723 3306 3389 5432 5900 8080
)

# Function to scan single port
scan_port() {
    local port="$1"
    local output_file="$OUTPUT_DIR/port_${port}_hosts.txt"
    local log_file="$OUTPUT_DIR/port_${port}_scan.log"

    echo "[+] Scanning port $port on $TARGET_RANGE"

    # Determine probe module based on port
    local probe_module="tcp_synscan"
    case "$port" in
        53|123|161|162|514) probe_module="udp" ;;
        *) probe_module="tcp_synscan" ;;
    esac

    # Perform scan
    zmap -p "$port" "$TARGET_RANGE" \
        -M "$probe_module" \
        -r "$RATE_LIMIT" \
        -B "$BANDWIDTH_LIMIT" \
        -o "$output_file" \
        -v 2> "$log_file"

    if [ $? -eq 0 ]; then
        local host_count=$(wc -l < "$output_file" 2>/dev/null||echo 0)
        echo "  [+] Port $port: $host_count hosts found"

        # Generate summary
        echo "Port: $port" >> "$OUTPUT_DIR/scan_summary.txt"
        echo "Hosts found: $host_count" >> "$OUTPUT_DIR/scan_summary.txt"
        echo "Probe module: $probe_module" >> "$OUTPUT_DIR/scan_summary.txt"
        echo "---" >> "$OUTPUT_DIR/scan_summary.txt"
    else
        echo "  [-] Port $port: Scan failed"
    fi
}

# Function to scan ports in parallel
scan_ports_parallel() {
    local max_jobs=5
    local job_count=0

    for port in "${COMMON_PORTS[@]}"; do
        # Limit concurrent jobs
        while [ $(jobs -r|wc -l) -ge $max_jobs ]; do
            sleep 1
        done

        # Start scan in background
        scan_port "$port" &

        job_count=$((job_count + 1))
        echo "[+] Started scan job $job_count for port $port"

        # Small delay between job starts
        sleep 2
    done

    # Wait for all jobs to complete
    wait
    echo "[+] All port scans completed"
}

# Function to analyze results
analyze_results() {
    echo "[+] Analyzing scan results"

    local analysis_file="$OUTPUT_DIR/analysis_report.txt"

    cat > "$analysis_file" << EOF
Zmap Port Discovery Analysis Report
==================================
Target Range: $TARGET_RANGE
Scan Date: $(date)
Output Directory: $OUTPUT_DIR

Port Scan Summary:
EOF

    # Analyze each port
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")
            echo "Port $port: $count hosts" >> "$analysis_file"
        fi
    done

    # Find most common open ports
    echo "" >> "$analysis_file"
    echo "Top 10 Most Common Open Ports:" >> "$analysis_file"
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")
            echo "$count $port"
        fi
    done|sort -nr|head -10 >> "$analysis_file"

    # Generate combined host list
    echo "" >> "$analysis_file"
    echo "Generating combined host list..." >> "$analysis_file"

    cat "$OUTPUT_DIR"/port_*_hosts.txt|sort -u > "$OUTPUT_DIR/all_responsive_hosts.txt"
    local total_hosts=$(wc -l < "$OUTPUT_DIR/all_responsive_hosts.txt")

    echo "Total unique responsive hosts: $total_hosts" >> "$analysis_file"

    echo "[+] Analysis completed: $analysis_file"
}

# Function to generate visualization data
generate_visualization() {
    echo "[+] Generating visualization data"

    local viz_file="$OUTPUT_DIR/visualization_data.json"

    cat > "$viz_file" << 'EOF'
{
    "scan_metadata": {
        "target_range": "TARGET_RANGE_PLACEHOLDER",
        "scan_date": "SCAN_DATE_PLACEHOLDER",
        "total_ports_scanned": TOTAL_PORTS_PLACEHOLDER
    },
    "port_data": [
    "port_data": [
EOF

    # Replace placeholders
    sed -i "s/TARGET_RANGE_PLACEHOLDER/$TARGET_RANGE/g" "$viz_file"
    sed -i "s/SCAN_DATE_PLACEHOLDER/$(date -Iseconds)/g" "$viz_file"
    sed -i "s/TOTAL_PORTS_PLACEHOLDER/${#COMMON_PORTS[@]}/g" "$viz_file"

    # Add port data
    local first=true
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")

            if [ "$first" = true ]; then
                first=false
            else
                echo "," >> "$viz_file"
            fi

            cat >> "$viz_file" << EOF
        {
            "port": $port,
            "host_count": $count,
            "service": "$(getent services $port/tcp 2>/dev/null|awk '{print $1}'||echo 'unknown')"
        }
EOF
        fi
    done

    echo "" >> "$viz_file"
    echo "    ]" >> "$viz_file"
    echo "}" >> "$viz_file"

    echo "[+] Visualization data generated: $viz_file"
}

# Function to create HTML report
create_html_report() {
    echo "[+] Creating HTML report"

    local html_file="$OUTPUT_DIR/scan_report.html"

    cat > "$html_file" << 'EOF'
<!DOCTYPE html>
<html>
<head>
    <title>Zmap Port Discovery Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
        .port-section { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
        table { border-collapse: collapse; width: 100%; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f2f2f2; }
        .chart { margin: 20px 0; }
    </style>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
    <div class="header">
        <h1>Zmap Port Discovery Report</h1>
        <p><strong>Target Range:</strong> TARGET_RANGE_PLACEHOLDER</p>
        <p><strong>Scan Date:</strong> SCAN_DATE_PLACEHOLDER</p>
        <p><strong>Total Ports Scanned:</strong> TOTAL_PORTS_PLACEHOLDER</p>
    </div>

    <div class="port-section">
        <h2>Port Scan Results</h2>
        <div class="chart">
            <canvas id="portChart" width="400" height="200"></canvas>
        </div>

        <table>
            <tr><th>Port</th><th>Service</th><th>Hosts Found</th><th>Percentage</th></tr>
EOF

    # Add port data to HTML
    local total_scanned_hosts=0
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")
            total_scanned_hosts=$((total_scanned_hosts + count))
        fi
    done

    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")
            local service=$(getent services $port/tcp 2>/dev/null|awk '{print $1}'||echo 'unknown')
            local percentage=0
            if [ $total_scanned_hosts -gt 0 ]; then
                percentage=$(echo "scale=2; $count * 100 / $total_scanned_hosts"|bc -l)
            fi

            echo "            <tr><td>$port</td><td>$service</td><td>$count</td><td>${percentage}%</td></tr>" >> "$html_file"
        fi
    done

    cat >> "$html_file" << 'EOF'
        </table>
    </div>

    <script>
        // Load visualization data and create chart
        fetch('visualization_data.json')
            .then(response => response.json())
            .then(data => {
                const ctx = document.getElementById('portChart').getContext('2d');
                const chart = new Chart(ctx, {
                    type: 'bar',
                    data: {
                        labels: data.port_data.map(p => `Port ${p.port}`),
                        datasets: [{
                            label: 'Hosts Found',
                            data: data.port_data.map(p => p.host_count),
                            backgroundColor: 'rgba(54, 162, 235, 0.6)',
                            borderColor: 'rgba(54, 162, 235, 1)',
                            borderWidth: 1
                        }]
                    },
                    options: {
                        responsive: true,
                        plugins: {
                            title: {
                                display: true,
                                text: 'Port Discovery Results'
                            }
                        },
                        scales: {
                            y: {
                                beginAtZero: true
                            }
                        }
                    }
                });
            });
    </script>
</body>
</html>
EOF

    # Replace placeholders
    sed -i "s/TARGET_RANGE_PLACEHOLDER/$TARGET_RANGE/g" "$html_file"
    sed -i "s/SCAN_DATE_PLACEHOLDER/$(date)/g" "$html_file"
    sed -i "s/TOTAL_PORTS_PLACEHOLDER/${#COMMON_PORTS[@]}/g" "$html_file"

    echo "[+] HTML report created: $html_file"
}

# Main execution
echo "[+] Starting large-scale port discovery"
echo "[+] Target range: $TARGET_RANGE"
echo "[+] Output directory: $OUTPUT_DIR"
echo "[+] Rate limit: $RATE_LIMIT packets/second"
echo "[+] Bandwidth limit: $BANDWIDTH_LIMIT"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

# Check if zmap is installed
if ! command -v zmap &> /dev/null; then
    echo "[-] Zmap not found. Please install zmap first."
    exit 1
fi

# Perform scans
scan_ports_parallel

# Analyze results
analyze_results

# Generate visualization data
generate_visualization

# Create HTML report
create_html_report

echo "[+] Large-scale port discovery completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $OUTPUT_DIR/scan_report.html for detailed report"

### Internet-Wide Service Discovery

#!/bin/bash
# Internet-wide service discovery using Zmap

SERVICE_TYPE="$1"
OUTPUT_DIR="internet_discovery_$(date +%Y%m%d_%H%M%S)"
RATE_LIMIT=50000
BANDWIDTH_LIMIT="500M"

if [ -z "$SERVICE_TYPE" ]; then
    echo "Usage: $0 <service_type>"
    echo "Service types: web, ssh, dns, mail, ftp, telnet, ntp, snmp"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

# Service configuration
declare -A SERVICE_CONFIG
SERVICE_CONFIG[web]="80,tcp_synscan"
SERVICE_CONFIG[web_ssl]="443,tcp_synscan"
SERVICE_CONFIG[ssh]="22,tcp_synscan"
SERVICE_CONFIG[dns]="53,udp"
SERVICE_CONFIG[mail_smtp]="25,tcp_synscan"
SERVICE_CONFIG[mail_pop3]="110,tcp_synscan"
SERVICE_CONFIG[mail_imap]="143,tcp_synscan"
SERVICE_CONFIG[ftp]="21,tcp_synscan"
SERVICE_CONFIG[telnet]="23,tcp_synscan"
SERVICE_CONFIG[ntp]="123,ntp"
SERVICE_CONFIG[snmp]="161,udp"

# Function to perform service discovery
discover_service() {
    local service="$1"
    local config="${SERVICE_CONFIG[$service]}"

    if [ -z "$config" ]; then
        echo "[-] Unknown service type: $service"
        return 1
    fi

    local port=$(echo "$config"|cut -d, -f1)
    local probe_module=$(echo "$config"|cut -d, -f2)
    local output_file="$OUTPUT_DIR/${service}_servers.txt"
    local log_file="$OUTPUT_DIR/${service}_scan.log"

    echo "[+] Discovering $service servers on port $port"
    echo "[+] Using probe module: $probe_module"
    echo "[+] Rate limit: $RATE_LIMIT packets/second"

    # Create blacklist for private networks
    cat > "$OUTPUT_DIR/blacklist.txt" << 'EOF'
0.0.0.0/8
10.0.0.0/8
100.64.0.0/10
127.0.0.0/8
169.254.0.0/16
172.16.0.0/12
192.0.0.0/24
192.0.2.0/24
192.88.99.0/24
192.168.0.0/16
198.18.0.0/15
198.51.100.0/24
203.0.113.0/24
224.0.0.0/4
240.0.0.0/4
255.255.255.255/32
EOF

    # Perform Internet-wide scan
    zmap -p "$port" 0.0.0.0/0 \
        -M "$probe_module" \
        -r "$RATE_LIMIT" \
        -B "$BANDWIDTH_LIMIT" \
        -b "$OUTPUT_DIR/blacklist.txt" \
        -o "$output_file" \
        -v 2> "$log_file"

    if [ $? -eq 0 ]; then
        local server_count=$(wc -l < "$output_file" 2>/dev/null||echo 0)
        echo "[+] Discovery completed: $server_count $service servers found"

        # Generate statistics
        generate_statistics "$service" "$output_file"

        return 0
    else
        echo "[-] Discovery failed for $service"
        return 1
    fi
}

# Function to generate statistics
generate_statistics() {
    local service="$1"
    local results_file="$2"
    local stats_file="$OUTPUT_DIR/${service}_statistics.txt"

    echo "[+] Generating statistics for $service"

    cat > "$stats_file" << EOF
Service Discovery Statistics: $service
======================================
Discovery Date: $(date)
Total Servers Found: $(wc -l < "$results_file")

Geographic Distribution:
EOF

    # Analyze geographic distribution using GeoIP
    if command -v geoiplookup &> /dev/null; then
        echo "Analyzing geographic distribution..." >> "$stats_file"

        # Sample first 1000 IPs for geographic analysis
        head -1000 "$results_file"|while read ip; do
            geoiplookup "$ip"|grep "GeoIP Country Edition"|cut -d: -f2|xargs
        done|sort|uniq -c|sort -nr|head -20 >> "$stats_file"
    else
        echo "GeoIP lookup not available" >> "$stats_file"
    fi

    # Analyze ASN distribution
    echo "" >> "$stats_file"
    echo "ASN Distribution (Top 20):" >> "$stats_file"

    if command -v whois &> /dev/null; then
        # Sample first 100 IPs for ASN analysis
        head -100 "$results_file"|while read ip; do
            whois "$ip"|grep -i "origin"|head -1|awk '{print $2}'
        done|sort|uniq -c|sort -nr|head -20 >> "$stats_file"
    else
        echo "Whois lookup not available" >> "$stats_file"
    fi

    echo "[+] Statistics generated: $stats_file"
}

# Function to perform follow-up analysis
followup_analysis() {
    local service="$1"
    local results_file="$OUTPUT_DIR/${service}_servers.txt"
    local analysis_file="$OUTPUT_DIR/${service}_analysis.txt"

    echo "[+] Performing follow-up analysis for $service"

    # Sample servers for detailed analysis
    local sample_size=100
    local sample_file="$OUTPUT_DIR/${service}_sample.txt"

    shuf -n "$sample_size" "$results_file" > "$sample_file"

    cat > "$analysis_file" << EOF
Follow-up Analysis: $service
===========================
Analysis Date: $(date)
Sample Size: $sample_size servers

Detailed Analysis Results:
EOF

    # Service-specific analysis
    case "$service" in
        "web"|"web_ssl")
            analyze_web_servers "$sample_file" "$analysis_file"
            ;;
        "ssh")
            analyze_ssh_servers "$sample_file" "$analysis_file"
            ;;
        "dns")
            analyze_dns_servers "$sample_file" "$analysis_file"
            ;;
        "ntp")
            analyze_ntp_servers "$sample_file" "$analysis_file"
            ;;
        *)
            echo "Generic analysis for $service" >> "$analysis_file"
            ;;
    esac

    echo "[+] Follow-up analysis completed: $analysis_file"
}

# Function to analyze web servers
analyze_web_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "Web Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Analyze HTTP headers
    while read ip; do
        echo "Analyzing $ip..." >> "$analysis_file"

        # Get HTTP headers
        timeout 10 curl -I "http://$ip" 2>/dev/null|head -10 >> "$analysis_file"
        echo "---" >> "$analysis_file"

        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze SSH servers
analyze_ssh_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "SSH Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Analyze SSH banners
    while read ip; do
        echo "Analyzing $ip..." >> "$analysis_file"

        # Get SSH banner
        timeout 5 nc "$ip" 22 < /dev/null 2>/dev/null|head -1 >> "$analysis_file"

        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze DNS servers
analyze_dns_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "DNS Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Test DNS resolution
    while read ip; do
        echo "Testing DNS server $ip..." >> "$analysis_file"

        # Test DNS query
        timeout 5 dig @"$ip" google.com +short 2>/dev/null >> "$analysis_file"
        echo "---" >> "$analysis_file"

        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze NTP servers
analyze_ntp_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "NTP Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Test NTP response
    while read ip; do
        echo "Testing NTP server $ip..." >> "$analysis_file"

        # Test NTP query
        timeout 5 ntpdate -q "$ip" 2>/dev/null >> "$analysis_file"
        echo "---" >> "$analysis_file"

        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to generate final report
generate_final_report() {
    local service="$1"
    local report_file="$OUTPUT_DIR/final_report.html"

    echo "[+] Generating final report"

    cat > "$report_file" << EOF
<!DOCTYPE html>
<html>
<head>
    <title>Internet-Wide $service Discovery Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
        .section { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
        .warning { background-color: #fff3cd; border-color: #ffeaa7; }
        table { border-collapse: collapse; width: 100%; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f2f2f2; }
        pre { background-color: #f5f5f5; padding: 10px; border-radius: 3px; overflow-x: auto; }
    </style>
</head>
<body>
    <div class="header">
        <h1>Internet-Wide $service Discovery Report</h1>
        <p><strong>Discovery Date:</strong> $(date)</p>
        <p><strong>Service Type:</strong> $service</p>
        <p><strong>Scan Rate:</strong> $RATE_LIMIT packets/second</p>
    </div>

    <div class="section warning">
        <h2>⚠️ Important Notice</h2>
        <p>This report contains results from Internet-wide scanning. Use this information responsibly and in accordance with applicable laws and ethical guidelines.</p>
    </div>

    <div class="section">
        <h2>Discovery Summary</h2>
        <table>
            <tr><th>Metric</th><th>Value</th></tr>
            <tr><td>Total Servers Found</td><td>$(wc -l < "$OUTPUT_DIR/${service}_servers.txt" 2>/dev/null||echo 0)</td></tr>
            <tr><td>Scan Duration</td><td>$(grep "completed" "$OUTPUT_DIR/${service}_scan.log" 2>/dev/null|tail -1||echo "Unknown")</td></tr>
            <tr><td>Output Files</td><td>$(ls -1 "$OUTPUT_DIR"|wc -l)</td></tr>
        </table>
    </div>

    <div class="section">
        <h2>Files Generated</h2>
        <ul>
            <li><strong>Server List:</strong> ${service}_servers.txt</li>
            <li><strong>Statistics:</strong> ${service}_statistics.txt</li>
            <li><strong>Analysis:</strong> ${service}_analysis.txt</li>
            <li><strong>Scan Log:</strong> ${service}_scan.log</li>
        </ul>
    </div>

    <div class="section">
        <h2>Next Steps</h2>
        <ol>
            <li>Review the server list and statistics</li>
            <li>Analyze the follow-up analysis results</li>
            <li>Consider responsible disclosure for any security issues found</li>
            <li>Implement appropriate security measures based on findings</li>
        </ol>
    </div>
</body>
</html>
EOF

    echo "[+] Final report generated: $report_file"
}

# Main execution
echo "[+] Starting Internet-wide $SERVICE_TYPE discovery"
echo "[+] Output directory: $OUTPUT_DIR"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

# Check if zmap is installed
if ! command -v zmap &> /dev/null; then
    echo "[-] Zmap not found. Please install zmap first."
    exit 1
fi

# Perform service discovery
if discover_service "$SERVICE_TYPE"; then
    # Perform follow-up analysis
    followup_analysis "$SERVICE_TYPE"

    # Generate final report
    generate_final_report "$SERVICE_TYPE"

    echo "[+] Internet-wide $SERVICE_TYPE discovery completed"
    echo "[+] Results saved in: $OUTPUT_DIR"
    echo "[+] Open $OUTPUT_DIR/final_report.html for detailed report"
else
    echo "[-] Service discovery failed"
    exit 1
fi

### Continuous Network Monitoring

#!/bin/bash
# Continuous network monitoring using Zmap

MONITOR_CONFIG="monitor.conf"
LOG_DIR="monitoring_logs"
ALERT_WEBHOOK="$1"
CHECK_INTERVAL=3600  # 1 hour

if [ -z "$ALERT_WEBHOOK" ]; then
    echo "Usage: $0 <alert_webhook_url>"
    echo "Example: $0 'https://hooks.slack.com/services/...'"
    exit 1
fi

mkdir -p "$LOG_DIR"

# Function to perform monitoring scan
perform_monitoring_scan() {
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local scan_output="$LOG_DIR/scan_$timestamp.txt"
    local baseline_file="$LOG_DIR/baseline.txt"

    echo "[+] Performing monitoring scan at $(date)"

    # Read monitoring configuration
    if [ ! -f "$MONITOR_CONFIG" ]; then
        create_default_config
    fi

    source "$MONITOR_CONFIG"

    # Perform scan
    zmap -p "$MONITOR_PORT" "$MONITOR_RANGE" \
        -M "$PROBE_MODULE" \
        -r "$SCAN_RATE" \
        -o "$scan_output" \
        -v 2> "$LOG_DIR/scan_$timestamp.log"

    if [ $? -ne 0 ]; then
        echo "[-] Monitoring scan failed"
        return 1
    fi

    # Compare with baseline
    if [ -f "$baseline_file" ]; then
        echo "  [+] Comparing with baseline"

        local changes_file="$LOG_DIR/changes_$timestamp.txt"
        compare_scans "$baseline_file" "$scan_output" "$changes_file"

        # Analyze changes
        analyze_changes "$changes_file" "$timestamp"
    else
        echo "  [+] Creating initial baseline"
        cp "$scan_output" "$baseline_file"
    fi

    # Update baseline if significant time has passed
    local baseline_age=$(stat -c %Y "$baseline_file" 2>/dev/null||echo 0)
    local current_time=$(date +%s)
    local age_hours=$(( (current_time - baseline_age) / 3600 ))

    if [ $age_hours -gt 168 ]; then  # 1 week
        echo "  [+] Updating baseline (age: ${age_hours} hours)"
        cp "$scan_output" "$baseline_file"
    fi

    return 0
}

# Function to compare scans
compare_scans() {
    local baseline="$1"
    local current="$2"
    local changes="$3"

    # Find new hosts
    comm -13 <(sort "$baseline") <(sort "$current") > "${changes}.new"

    # Find disappeared hosts
    comm -23 <(sort "$baseline") <(sort "$current") > "${changes}.gone"

    # Create summary
    cat > "$changes" << EOF
Scan Comparison Results
======================
Baseline: $baseline
Current: $current
Comparison Time: $(date)

New Hosts: $(wc -l < "${changes}.new")
Disappeared Hosts: $(wc -l < "${changes}.gone")

New Hosts List:
$(cat "${changes}.new")

Disappeared Hosts List:
$(cat "${changes}.gone")
EOF
}

# Function to analyze changes
analyze_changes() {
    local changes_file="$1"
    local timestamp="$2"

    local new_count=$(wc -l < "${changes_file}.new" 2>/dev/null||echo 0)
    local gone_count=$(wc -l < "${changes_file}.gone" 2>/dev/null||echo 0)

    echo "  [+] Changes detected: $new_count new, $gone_count disappeared"

    # Check thresholds
    if [ "$new_count" -gt "$NEW_HOST_THRESHOLD" ]; then
        echo "  [!] New host threshold exceeded: $new_count > $NEW_HOST_THRESHOLD"
        send_alert "NEW_HOSTS" "$new_count" "${changes_file}.new"
    fi

    if [ "$gone_count" -gt "$GONE_HOST_THRESHOLD" ]; then
        echo "  [!] Disappeared host threshold exceeded: $gone_count > $GONE_HOST_THRESHOLD"
        send_alert "DISAPPEARED_HOSTS" "$gone_count" "${changes_file}.gone"
    fi

    # Analyze new hosts for suspicious patterns
    if [ "$new_count" -gt 0 ]; then
        analyze_new_hosts "${changes_file}.new" "$timestamp"
    fi
}

# Function to analyze new hosts
analyze_new_hosts() {
    local new_hosts_file="$1"
    local timestamp="$2"
    local analysis_file="$LOG_DIR/new_host_analysis_$timestamp.txt"

    echo "  [+] Analyzing new hosts"

    cat > "$analysis_file" << EOF
New Host Analysis
================
Analysis Time: $(date)
New Hosts Count: $(wc -l < "$new_hosts_file")

Detailed Analysis:
EOF

    # Analyze IP ranges
    echo "IP Range Analysis:" >> "$analysis_file"
    cut -d. -f1-3 "$new_hosts_file" | sort | uniq -c | sort -nr | head -10 >> "$analysis_file"

    # Check for suspicious patterns
    echo "" >> "$analysis_file"
    echo "Suspicious Pattern Detection:" >> "$analysis_file"

    # Check for sequential IPs
    local sequential_count=$(sort -V "$new_hosts_file" | awk '
        BEGIN { prev = 0; seq_count = 0 }
        {
            split($1, ip, ".")
            current = ip[4]
            if (current == prev + 1) seq_count++
            prev = current
        }
        END { print seq_count }
    ')

    if [ "$sequential_count" -gt 10 ]; then
        echo "WARNING: $sequential_count sequential IP addresses detected" >> "$analysis_file"
        send_alert "SEQUENTIAL_IPS" "$sequential_count" "$new_hosts_file"
    fi

    # Perform reverse DNS lookups on sample
    echo "" >> "$analysis_file"
    echo "Reverse DNS Analysis (sample):" >> "$analysis_file"
    head -20 "$new_hosts_file" | while read -r ip; do
        local hostname=$(timeout 5 dig +short -x "$ip" 2>/dev/null | head -1)
        echo "$ip -> ${hostname:-No PTR record}" >> "$analysis_file"
    done
}

# Function to send alerts
send_alert() {
    local alert_type="$1"
    local count="$2"
    local details_file="$3"

    echo "[!] Sending alert: $alert_type"

    local message="🚨 Network Monitoring Alert: $alert_type detected ($count items) at $(date)"

    # Send to webhook
    if [ -n "$ALERT_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"$message\"}" \
            "$ALERT_WEBHOOK" 2>/dev/null || echo "Webhook alert failed"
    fi

    # Send email if configured
    if [ -n "$ALERT_EMAIL" ]; then
        echo "$message" | mail -s "Network Monitoring Alert: $alert_type" \
            -A "$details_file" "$ALERT_EMAIL" 2>/dev/null || echo "Email alert failed"
    fi

    # Log alert
    echo "$(date): $alert_type - $count items" >> "$LOG_DIR/alerts.log"
}

# Function to generate monitoring report
generate_monitoring_report() {
    echo "[+] Generating monitoring report"

    local report_file="$LOG_DIR/monitoring_report_$(date +%Y%m%d).html"

    cat > "$report_file" << EOF
<!DOCTYPE html>
<html>
<head>
    <title>Network Monitoring Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
        .alert { background-color: #ffebee; border: 1px solid #f44336; padding: 15px; border-radius: 5px; margin: 10px 0; }
        .section { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
        table { border-collapse: collapse; width: 100%; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f2f2f2; }
    </style>
</head>
<body>
    <div class="header">
        <h1>Network Monitoring Report</h1>
        <p>Generated: $(date)</p>
        <p>Monitoring Period: Last 24 hours</p>
    </div>
EOF

    # Add recent alerts
    if [ -f "$LOG_DIR/alerts.log" ]; then
        local recent_alerts=$(tail -10 "$LOG_DIR/alerts.log" 2>/dev/null)
        if [ -n "$recent_alerts" ]; then
            cat >> "$report_file" << EOF
    <div class="alert">
        <h2>⚠️ Recent Alerts</h2>
        <pre>$recent_alerts</pre>
    </div>
EOF
        fi
    fi

    # Add scan statistics
    cat >> "$report_file" << EOF
    <div class="section">
        <h2>Scan Statistics</h2>
        <table>
            <tr><th>Metric</th><th>Value</th></tr>
            <tr><td>Total Scans</td><td>$(ls -1 "$LOG_DIR"/scan_*.txt 2>/dev/null | wc -l)</td></tr>
            <tr><td>Current Baseline Hosts</td><td>$(wc -l < "$LOG_DIR/baseline.txt" 2>/dev/null || echo 0)</td></tr>
            <tr><td>Last Scan</td><td>$(ls -1t "$LOG_DIR"/scan_*.txt 2>/dev/null | head -1 | xargs stat -c %y 2>/dev/null || echo "None")</td></tr>
        </table>
    </div>
</body>
</html>
EOF

    echo "  [+] Monitoring report generated: $report_file"
}

# Function to cleanup old logs
cleanup_logs() {
    echo "[+] Cleaning up old monitoring logs"

    # Keep logs for 30 days
    find "$LOG_DIR" -name "scan_*.txt" -mtime +30 -delete
    find "$LOG_DIR" -name "scan_*.log" -mtime +30 -delete
    find "$LOG_DIR" -name "changes_*.txt*" -mtime +30 -delete
    find "$LOG_DIR" -name "new_host_analysis_*.txt" -mtime +30 -delete

    # Keep reports for 90 days
    find "$LOG_DIR" -name "monitoring_report_*.html" -mtime +90 -delete
}

# Function to create default configuration
create_default_config() {
    cat > "$MONITOR_CONFIG" << 'EOF'
# Network Monitoring Configuration

# Scan parameters
MONITOR_RANGE="192.168.0.0/16"
MONITOR_PORT="80"
PROBE_MODULE="tcp_synscan"
SCAN_RATE="1000"

# Alert thresholds
NEW_HOST_THRESHOLD=10
GONE_HOST_THRESHOLD=5

# Notification settings
ALERT_EMAIL=""
ALERT_WEBHOOK=""
EOF

    echo "Created default configuration: $MONITOR_CONFIG"
}

# Main monitoring loop
echo "[+] Starting continuous network monitoring"
echo "[+] Check interval: $((CHECK_INTERVAL / 60)) minutes"
echo "[+] Alert webhook: $ALERT_WEBHOOK"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

while true; do
    echo "[+] Starting monitoring cycle at $(date)"

    if perform_monitoring_scan; then
        echo "  [+] Monitoring scan completed successfully"
    else
        echo "  [-] Monitoring scan failed"
        send_alert "SCAN_FAILURE" "1" "/dev/null"
    fi

    # Generate daily report and cleanup
    if [ "$(date +%H)" = "06" ]; then  # 6 AM
        generate_monitoring_report
        cleanup_logs
    fi

    echo "[+] Monitoring cycle completed at $(date)"
    echo "[+] Next check in $((CHECK_INTERVAL / 60)) minutes"

    sleep "$CHECK_INTERVAL"
done

Integration with Other Tools

Nmap Integration

# Use Zmap for initial discovery, then Nmap for detailed scanning
zmap -p 80 192.168.1.0/24 -o web_hosts.txt
nmap -sV -p 80 -iL web_hosts.txt -oA detailed_scan

# Combine Zmap and Nmap in pipeline
zmap -p 22 10.0.0.0/8 | head -1000 | nmap -sV -p 22 -iL - -oA ssh_scan
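
When more than a bare IP list is wanted, Zmap's CSV output can be filtered before handing targets to Nmap. A minimal sketch: the sample CSV below stands in for real `zmap -O csv -f saddr,classification -o results.csv` output (the field names assume a recent Zmap build), and the awk filter keeps only hosts that answered with a SYN-ACK:

```shell
# Sample data standing in for: zmap -p 80 ... -O csv -f saddr,classification -o results.csv
cat > results.csv << 'EOF'
saddr,classification
192.168.1.10,synack
192.168.1.11,rst
192.168.1.12,synack
EOF

# Skip the header row and keep only SYN-ACK responders as an Nmap target list
awk -F, 'NR > 1 && $2 == "synack" { print $1 }' results.csv > nmap_targets.txt
cat nmap_targets.txt
```

The resulting `nmap_targets.txt` can then be passed to `nmap -sV -iL nmap_targets.txt`.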

Masscan Integration

# Compare Zmap and Masscan results
zmap -p 80 192.168.0.0/16 -o zmap_results.txt
masscan -p80 192.168.0.0/16 --rate=1000 -oL masscan_results.txt

# Masscan list output is "open tcp 80 <ip> <timestamp>", so extract the IP field
# before combining with Zmap's plain IP list
awk '/^open/ {print $4}' masscan_results.txt | sort -u > masscan_ips.txt
sort -u zmap_results.txt masscan_ips.txt > combined_results.txt
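
Beyond merging, `comm` can show which hosts only one scanner found. A self-contained sketch, with sample data standing in for real result files (one IP per line; `comm` requires sorted input):

```shell
# Sample host lists standing in for sorted Zmap / Masscan results
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > zmap_sorted.txt
printf '10.0.0.2\n10.0.0.3\n10.0.0.4\n' > masscan_sorted.txt

# comm column flags suppress lines: -23 keeps file1-only, -13 file2-only, -12 common
comm -23 zmap_sorted.txt masscan_sorted.txt > only_zmap.txt     # found by Zmap only
comm -13 zmap_sorted.txt masscan_sorted.txt > only_masscan.txt  # found by Masscan only
comm -12 zmap_sorted.txt masscan_sorted.txt > both.txt          # found by both
```

Hosts appearing in only one list often indicate rate-related packet loss in the other scan and are worth rescanning at a lower rate.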

Shodan Integration

# Use Zmap results to query Shodan
while read ip; do
    shodan host "$ip"
    sleep 1
done < zmap_results.txt > shodan_analysis.txt

Troubleshooting

Common Issues

Permission Denied

# Run as root for raw socket access
sudo zmap -p 80 192.168.1.0/24

# Check capabilities
getcap $(which zmap)

# Set capabilities (alternative to root)
sudo setcap cap_net_raw=eip $(which zmap)

High Packet Loss

# Reduce scan rate
zmap -p 80 192.168.1.0/24 -r 100

# Cap bandwidth to reduce loss
zmap -p 80 192.168.1.0/24 -B 10M

# Check network interface
zmap -p 80 192.168.1.0/24 -i eth0 -v

Memory Issues

# Limit target count
zmap -p 80 0.0.0.0/0 --max-targets 1000000

# Monitor memory usage
top -p $(pgrep zmap)

# Use output modules that don't buffer
zmap -p 80 192.168.1.0/24 -O csv -o results.csv

Network Configuration

# Check routing table
ip route show

# Check ARP table
arp -a

# Test connectivity
ping -c 1 192.168.1.1

# Check firewall rules
iptables -L

Performance Optimization

# Optimize for high-speed scanning
zmap -p 80 0.0.0.0/0 -r 1400000 -B 1G --sender-threads 4

# CPU optimization: pin Zmap threads to specific cores (comma-separated list)
zmap -p 80 192.168.0.0/16 --cores 0,1,2,3

# Memory optimization
zmap -p 80 0.0.0.0/0 --max-targets 10000000

# Network buffer optimization
echo 'net.core.rmem_max = 134217728' | sudo tee -a /etc/sysctl.conf
echo 'net.core.rmem_default = 134217728' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
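
To try the larger buffers without editing `/etc/sysctl.conf`, the values can be applied at runtime only (they last until reboot; 128 MiB here is an example, not a tuned recommendation):

```shell
# Apply receive-buffer sizes for the current boot only
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.rmem_default=134217728

# Confirm the active values
sysctl net.core.rmem_max net.core.rmem_default
```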

Resources

  • [Official Zmap Website] (LINK_7)
  • [Zmap GitHub Repository] (LINK_7)
  • [Zmap Research Papers] (LINK_7)
  • [Internet-Wide Scanning Best Practices] (LINK_7)
  • [Network Scanning Ethics] (LINK_7)
  • [Large-Scale Network Measurement] (LINK_7)
  • [ZGrab Banner Grabber] (LINK_7)

*This cheat sheet provides a comprehensive reference for using Zmap for large-scale network scanning and Internet-wide surveys. Always ensure you have proper authorization before scanning networks, and follow ethical scanning practices.*