Zmap Cheat Sheet
Overview
Zmap is a fast single-packet network scanner designed for Internet-wide network surveys and large-scale network discovery. Developed by researchers at the University of Michigan, Zmap can scan the entire IPv4 address space in under 45 minutes on a gigabit network connection. Unlike conventional port scanners built for small networks, Zmap is optimized for speed across large address spaces: it sends a single probe packet to each host and keeps almost no per-connection state. This makes it an invaluable tool for security researchers, network administrators, and penetration testers who need to perform large-scale network reconnaissance and Internet-wide security studies.
⚠️ Warning: Zmap is a powerful network scanning tool that can generate significant network traffic. Only use Zmap against networks you own or have explicit written permission to scan. Internet-wide scanning may violate terms of service and local laws. Always follow responsible disclosure practices and ethical scanning guidelines.
Installation
Ubuntu/Debian Installation
```bash
# Install from package repository
sudo apt update
sudo apt install zmap

# Verify installation
zmap --version

# Install additional dependencies
sudo apt install libpcap-dev libgmp-dev libssl-dev

# Install development tools if building from source
sudo apt install build-essential cmake libpcap-dev libgmp-dev libssl-dev
```
CentOS/RHEL Installation
```bash
# Install EPEL repository
sudo yum install epel-release

# Install Zmap
sudo yum install zmap

# Install dependencies for building from source
sudo yum groupinstall "Development Tools"
sudo yum install cmake libpcap-devel gmp-devel openssl-devel
```
Building from Source
```bash
# Clone Zmap repository
git clone https://github.com/zmap/zmap.git
cd zmap

# Create build directory
mkdir build
cd build

# Configure build
cmake ..

# Compile
make -j$(nproc)

# Install
sudo make install

# Verify installation
zmap --version
```
Docker Installation
```bash
# Pull Zmap Docker image
docker pull zmap/zmap

# Run Zmap in Docker
docker run --rm --net=host zmap/zmap zmap --version

# Create alias for easier usage
echo 'alias zmap="docker run --rm --net=host zmap/zmap zmap"' >> ~/.bashrc
source ~/.bashrc

# Run with volume mount for output
docker run --rm --net=host -v $(pwd):/data zmap/zmap zmap -p 80 10.0.0.0/8 -o /data/scan_results.txt
```
macOS Installation
```bash
# Install using Homebrew
brew install zmap

# Install dependencies
brew install libpcap gmp openssl cmake

# Verify installation
zmap --version

# If building from source on macOS
git clone https://github.com/zmap/zmap.git
cd zmap
mkdir build && cd build
cmake -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl ..
make -j$(sysctl -n hw.ncpu)
sudo make install
```
Basic Usage
Simple Port Scans
```bash
# Scan single port on a subnet
zmap -p 80 192.168.1.0/24

# Scan multiple subnets
zmap -p 443 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16

# Write results to a file
zmap -p 22 192.168.0.0/16 -o ssh_hosts.txt

# Limit scan rate (packets per second)
zmap -p 80 10.0.0.0/8 -r 1000

# Limit bandwidth
zmap -p 443 192.168.0.0/16 -B 10M

# Verbose output
zmap -p 80 192.168.1.0/24 -v
```
Advanced Scan Options
```bash
# Custom source port
zmap -p 80 192.168.1.0/24 -s 12345

# Custom network interface
zmap -p 80 192.168.1.0/24 -i eth0

# Custom gateway MAC address
zmap -p 80 192.168.1.0/24 -G 00:11:22:33:44:55

# Custom source IP
zmap -p 80 192.168.1.0/24 -S 192.168.1.100

# Explicit probe module
zmap -p 80 192.168.1.0/24 -M tcp_synscan

# Explicit output module
zmap -p 80 192.168.1.0/24 -O csv -o results.csv
```
Probe Modules
```bash
# TCP SYN scan (default)
zmap -p 80 192.168.1.0/24 -M tcp_synscan

# ICMP echo scan
zmap 192.168.1.0/24 -M icmp_echoscan

# UDP scan
zmap -p 53 192.168.1.0/24 -M udp

# TCP ACK scan
zmap -p 80 192.168.1.0/24 -M tcp_ackscan

# NTP scan
zmap -p 123 192.168.1.0/24 -M ntp

# DNS scan
zmap -p 53 192.168.1.0/24 -M dns

# List available probe modules
zmap --list-probe-modules
```
Output Modules
```bash
# Default output (IP addresses)
zmap -p 80 192.168.1.0/24

# CSV output
zmap -p 80 192.168.1.0/24 -O csv -o results.csv

# JSON output
zmap -p 80 192.168.1.0/24 -O json -o results.json

# Extended output with additional fields
zmap -p 80 192.168.1.0/24 -O extended_file -o results.txt

# Redis output
zmap -p 80 192.168.1.0/24 -O redis --redis-server 127.0.0.1

# List available output modules
zmap --list-output-modules
```
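The machine-readable output modules pair naturally with standard Unix tools. Below is a minimal post-processing sketch, assuming the common CSV layout of a header row followed by a `saddr` (responding source address) first column — check your version's actual fields before relying on this. The sample file created inline stands in for real scan output.

```shell
# Sample Zmap-style CSV results (header row + saddr column) for illustration;
# a real file would come from: zmap -p 80 192.168.1.0/24 -O csv -o results.csv
cat > results.csv << 'EOF'
saddr
192.168.1.10
192.168.1.12
192.168.1.10
EOF

# Skip the header, take the first column, deduplicate
tail -n +2 results.csv | cut -d, -f1 | sort -u > responders.txt
echo "$(wc -l < responders.txt) unique responders"
```

The deduplicated list can then be fed straight into a follow-up scanner such as Nmap via `-iL responders.txt`.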
Advanced Features
Internet-Wide Scanning
```bash
# Scan the entire IPv4 space for HTTP servers
zmap -p 80 0.0.0.0/0 -o http_servers.txt -r 10000

# Scan for HTTPS servers with rate and bandwidth limits
zmap -p 443 0.0.0.0/0 -o https_servers.txt -r 5000 -B 100M

# Scan for SSH servers
zmap -p 22 0.0.0.0/0 -o ssh_servers.txt -r 2000

# Scan for DNS servers
zmap -p 53 0.0.0.0/0 -M udp -o dns_servers.txt -r 1000

# Scan with a blacklist file
zmap -p 80 0.0.0.0/0 -b blacklist.txt -o results.txt

# Scan with a whitelist file
zmap -p 80 -w whitelist.txt -o results.txt
```
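Before launching a rate-limited scan it is worth estimating its duration, which is simply the number of target addresses divided by the probe rate. A quick sketch with illustrative numbers:

```shell
# Estimate scan duration: address count / packets per second
ADDRESSES=$((2 ** 24))      # a /8 network contains 2^24 addresses
RATE=10000                  # matches -r 10000 above
DURATION=$((ADDRESSES / RATE))
echo "Estimated duration: $((DURATION / 60)) minutes"
```

The same arithmetic for the full IPv4 space (2^32 addresses) at 10,000 packets/second gives roughly five days, which is why Internet-wide scans are usually run at much higher rates.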
Custom Probe Configuration
```bash
# TCP SYN scan with custom TCP options
zmap -p 80 192.168.1.0/24 -M tcp_synscan --probe-args="tcp_window=1024"

# ICMP scan with custom payload
zmap 192.168.1.0/24 -M icmp_echoscan --probe-args="icmp_payload=deadbeef"

# UDP scan with payload from file
zmap -p 53 192.168.1.0/24 -M udp --probe-args="udp_payload_file=dns_query.bin"

# NTP scan with a specific NTP version
zmap -p 123 192.168.1.0/24 -M ntp --probe-args="ntp_version=3"

# DNS scan with custom query
zmap -p 53 192.168.1.0/24 -M dns --probe-args="dns_query=google.com"
```
Performance Tuning
```bash
# High-speed scanning with multiple sender threads
zmap -p 80 10.0.0.0/8 -r 100000 -T 4

# Saturate a gigabit link
zmap -p 80 0.0.0.0/0 -r 1400000 -B 1G

# Cap the number of targets probed in very large scans
zmap -p 80 0.0.0.0/0 -r 10000 --max-targets 1000000

# Use one sender thread per CPU core
zmap -p 80 192.168.0.0/16 -T $(nproc)

# Tune sender threads and core pinning
zmap -p 80 192.168.1.0/24 --sender-threads 4 --cores 4
```
Filtering and Targeting
```bash
# Exclude private networks via a blacklist file
zmap -p 80 0.0.0.0/0 -b private_networks.txt

# Restrict scanning to ranges listed in a whitelist file
zmap -p 80 -w target_ranges.txt

# Use a fixed seed for reproducible address randomization
zmap -p 80 192.168.1.0/24 --seed 12345

# Scan a custom target list (single IPs also work in a whitelist file)
zmap -p 80 -w targets.txt

# Exclude multiple CIDR ranges
printf '10.0.0.0/8\n172.16.0.0/12\n192.168.0.0/16\n' > exclusions.txt
zmap -p 80 0.0.0.0/0 -b exclusions.txt
```
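Recent Zmap releases can also split one scan across several machines with the `--shards`/`--shard` options (availability depends on your version); every shard must use the same `--seed` so that each machine covers a disjoint slice of the same random permutation. A sketch that generates the per-machine command lines:

```shell
# Generate one command line per shard; run each on a separate machine
SEED=12345
SHARDS=3
for shard in $(seq 0 $((SHARDS - 1))); do
    echo "zmap -p 80 10.0.0.0/8 --seed $SEED --shards $SHARDS --shard $shard -o shard_${shard}.txt"
done | tee shard_commands.txt
```

Concatenating the per-shard output files afterwards yields the complete result set.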
Automation Scripts
Large-Scale Port Discovery
```bash
#!/bin/bash
# Large-scale port discovery using Zmap

TARGET_RANGE="$1"
OUTPUT_DIR="zmap_discovery_$(date +%Y%m%d_%H%M%S)"
RATE_LIMIT=10000
BANDWIDTH_LIMIT="100M"

if [ -z "$TARGET_RANGE" ]; then
    echo "Usage: $0 <target_range>"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"
# Common ports to scan
COMMON_PORTS=(21 22 23 25 53 80 110 111 135 139 143 443 993 995 1723 3306 3389 5432 5900 8080)
# Function to scan a single port
scan_port() {
    local port="$1"
    local output_file="$OUTPUT_DIR/port_${port}_hosts.txt"
    local log_file="$OUTPUT_DIR/port_${port}_scan.log"

    echo "[+] Scanning port $port on $TARGET_RANGE"

    # Determine probe module based on port
    local probe_module="tcp_synscan"
    case "$port" in
        53|123|161|162|514) probe_module="udp" ;;
        *) probe_module="tcp_synscan" ;;
    esac

    # Perform scan
    zmap -p "$port" "$TARGET_RANGE" \
        -M "$probe_module" \
        -r "$RATE_LIMIT" \
        -B "$BANDWIDTH_LIMIT" \
        -o "$output_file" \
        -v 2> "$log_file"

    if [ $? -eq 0 ]; then
        local host_count=$(wc -l < "$output_file" 2>/dev/null || echo 0)
        echo "  [+] Port $port: $host_count hosts found"

        # Generate summary
        {
            echo "Port: $port"
            echo "Hosts found: $host_count"
            echo "Probe module: $probe_module"
            echo "---"
        } >> "$OUTPUT_DIR/scan_summary.txt"
    else
        echo "  [-] Port $port: Scan failed"
    fi
}
# Function to scan ports in parallel
scan_ports_parallel() {
    local max_jobs=5
    local job_count=0

    for port in "${COMMON_PORTS[@]}"; do
        # Limit concurrent jobs
        while [ "$(jobs -r | wc -l)" -ge "$max_jobs" ]; do
            sleep 1
        done

        # Start scan in background
        scan_port "$port" &
        job_count=$((job_count + 1))
        echo "[+] Started scan job $job_count for port $port"

        # Small delay between job starts
        sleep 2
    done

    # Wait for all jobs to complete
    wait
    echo "[+] All port scans completed"
}
# Function to analyze results
analyze_results() {
    echo "[+] Analyzing scan results"

    local analysis_file="$OUTPUT_DIR/analysis_report.txt"

    cat > "$analysis_file" << EOF
Zmap Port Discovery Analysis Report

Target Range: $TARGET_RANGE
Scan Date: $(date)
Output Directory: $OUTPUT_DIR

Port Scan Summary:
EOF

    # Analyze each port
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            echo "Port $port: $(wc -l < "$port_file") hosts" >> "$analysis_file"
        fi
    done

    # Find most common open ports
    echo "" >> "$analysis_file"
    echo "Top 10 Most Common Open Ports:" >> "$analysis_file"
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            echo "$(wc -l < "$port_file") $port"
        fi
    done | sort -nr | head -10 >> "$analysis_file"

    # Generate combined host list
    cat "$OUTPUT_DIR"/port_*_hosts.txt | sort -u > "$OUTPUT_DIR/all_responsive_hosts.txt"
    echo "" >> "$analysis_file"
    echo "Total unique responsive hosts: $(wc -l < "$OUTPUT_DIR/all_responsive_hosts.txt")" >> "$analysis_file"

    echo "[+] Analysis completed: $analysis_file"
}
# Function to generate visualization data
generate_visualization() {
    echo "[+] Generating visualization data"

    local viz_file="$OUTPUT_DIR/visualization_data.json"

    cat > "$viz_file" << EOF
{
  "scan_metadata": {
    "target_range": "$TARGET_RANGE",
    "scan_date": "$(date -Iseconds)",
    "total_ports_scanned": ${#COMMON_PORTS[@]}
  },
  "port_data": [
EOF

    # Add per-port data
    local first=true
    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            local count=$(wc -l < "$port_file")
            if [ "$first" = true ]; then
                first=false
            else
                echo "," >> "$viz_file"
            fi
            local service=$(getent services "$port/tcp" 2>/dev/null | awk '{print $1}')
            cat >> "$viz_file" << EOF
    {
      "port": $port,
      "host_count": $count,
      "service": "${service:-unknown}"
    }
EOF
        fi
    done

    echo "  ]" >> "$viz_file"
    echo "}" >> "$viz_file"

    echo "[+] Visualization data generated: $viz_file"
}
# Function to create HTML report
create_html_report() {
    echo "[+] Creating HTML report"

    local html_file="$OUTPUT_DIR/scan_report.html"

    cat > "$html_file" << EOF
<html>
<head><title>Zmap Port Discovery Report</title></head>
<body>
<h1>Zmap Port Discovery Report</h1>
<p>Target Range: $TARGET_RANGE</p>
<p>Scan Date: $(date)</p>
<p>Total Ports Scanned: ${#COMMON_PORTS[@]}</p>
<h2>Port Scan Results</h2>
<table border="1">
<tr><th>Port</th><th>Hosts Found</th></tr>
EOF

    for port in "${COMMON_PORTS[@]}"; do
        local port_file="$OUTPUT_DIR/port_${port}_hosts.txt"
        if [ -f "$port_file" ]; then
            echo "<tr><td>$port</td><td>$(wc -l < "$port_file")</td></tr>" >> "$html_file"
        fi
    done

    echo "</table></body></html>" >> "$html_file"

    echo "[+] HTML report created: $html_file"
}
# Main execution
echo "[+] Starting large-scale port discovery"
echo "[+] Target range: $TARGET_RANGE"
echo "[+] Output directory: $OUTPUT_DIR"
echo "[+] Rate limit: $RATE_LIMIT packets/second"
echo "[+] Bandwidth limit: $BANDWIDTH_LIMIT"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

# Check if zmap is installed
if ! command -v zmap &> /dev/null; then
    echo "[-] Zmap not found. Please install zmap first."
    exit 1
fi

# Perform scans, analysis, and reporting
scan_ports_parallel
analyze_results
generate_visualization
create_html_report

echo "[+] Large-scale port discovery completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $OUTPUT_DIR/scan_report.html for detailed report"
```
Internet-Wide Service Discovery
```bash
#!/bin/bash
# Internet-wide service discovery using Zmap

SERVICE_TYPE="$1"
OUTPUT_DIR="internet_discovery_$(date +%Y%m%d_%H%M%S)"
RATE_LIMIT=50000
BANDWIDTH_LIMIT="500M"

if [ -z "$SERVICE_TYPE" ]; then
    echo "Usage: $0 <service_type>"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

# Service configuration: port,probe_module
declare -A SERVICE_CONFIG
SERVICE_CONFIG[web]="80,tcp_synscan"
SERVICE_CONFIG[web_ssl]="443,tcp_synscan"
SERVICE_CONFIG[ssh]="22,tcp_synscan"
SERVICE_CONFIG[dns]="53,udp"
SERVICE_CONFIG[mail_smtp]="25,tcp_synscan"
SERVICE_CONFIG[mail_pop3]="110,tcp_synscan"
SERVICE_CONFIG[mail_imap]="143,tcp_synscan"
SERVICE_CONFIG[ftp]="21,tcp_synscan"
SERVICE_CONFIG[telnet]="23,tcp_synscan"
SERVICE_CONFIG[ntp]="123,ntp"
SERVICE_CONFIG[snmp]="161,udp"
# Function to perform service discovery
discover_service() {
    local service="$1"
    local config="${SERVICE_CONFIG[$service]}"

    if [ -z "$config" ]; then
        echo "[-] Unknown service type: $service"
        return 1
    fi

    local port=$(echo "$config" | cut -d, -f1)
    local probe_module=$(echo "$config" | cut -d, -f2)
    local output_file="$OUTPUT_DIR/${service}_servers.txt"
    local log_file="$OUTPUT_DIR/${service}_scan.log"

    echo "[+] Discovering $service servers on port $port"
    echo "[+] Using probe module: $probe_module"
    echo "[+] Rate limit: $RATE_LIMIT packets/second"

    # Create blacklist of private and reserved networks
    cat > "$OUTPUT_DIR/blacklist.txt" << 'EOF'
0.0.0.0/8
10.0.0.0/8
100.64.0.0/10
127.0.0.0/8
169.254.0.0/16
172.16.0.0/12
192.0.0.0/24
192.0.2.0/24
192.88.99.0/24
192.168.0.0/16
198.18.0.0/15
198.51.100.0/24
203.0.113.0/24
224.0.0.0/4
240.0.0.0/4
255.255.255.255/32
EOF

    # Perform Internet-wide scan
    zmap -p "$port" 0.0.0.0/0 \
        -M "$probe_module" \
        -r "$RATE_LIMIT" \
        -B "$BANDWIDTH_LIMIT" \
        -b "$OUTPUT_DIR/blacklist.txt" \
        -o "$output_file" \
        -v 2> "$log_file"

    if [ $? -eq 0 ]; then
        local server_count=$(wc -l < "$output_file" 2>/dev/null || echo 0)
        echo "[+] Discovery completed: $server_count $service servers found"

        # Generate statistics
        generate_statistics "$service" "$output_file"
        return 0
    else
        echo "[-] Discovery failed for $service"
        return 1
    fi
}
# Function to generate statistics
generate_statistics() {
    local service="$1"
    local results_file="$2"
    local stats_file="$OUTPUT_DIR/${service}_statistics.txt"

    echo "[+] Generating statistics for $service"

    cat > "$stats_file" << EOF
Service Discovery Statistics: $service

Discovery Date: $(date)
Total Servers Found: $(wc -l < "$results_file")

Geographic Distribution:
EOF

    # Analyze geographic distribution using GeoIP
    if command -v geoiplookup &> /dev/null; then
        # Sample the first 1000 IPs for geographic analysis
        head -1000 "$results_file" | while read ip; do
            geoiplookup "$ip" | grep "GeoIP Country Edition" | cut -d: -f2 | xargs
        done | sort | uniq -c | sort -nr | head -20 >> "$stats_file"
    else
        echo "GeoIP lookup not available" >> "$stats_file"
    fi

    # Analyze ASN distribution
    echo "" >> "$stats_file"
    echo "ASN Distribution (Top 20):" >> "$stats_file"
    if command -v whois &> /dev/null; then
        # Sample the first 100 IPs for ASN analysis
        head -100 "$results_file" | while read ip; do
            whois "$ip" | grep -i "origin" | head -1 | awk '{print $2}'
        done | sort | uniq -c | sort -nr | head -20 >> "$stats_file"
    else
        echo "Whois lookup not available" >> "$stats_file"
    fi

    echo "[+] Statistics generated: $stats_file"
}
# Function to perform follow-up analysis
followup_analysis() {
    local service="$1"
    local results_file="$OUTPUT_DIR/${service}_servers.txt"
    local analysis_file="$OUTPUT_DIR/${service}_analysis.txt"

    echo "[+] Performing follow-up analysis for $service"

    # Sample servers for detailed analysis
    local sample_size=100
    local sample_file="$OUTPUT_DIR/${service}_sample.txt"
    shuf -n "$sample_size" "$results_file" > "$sample_file"

    cat > "$analysis_file" << EOF
Follow-up Analysis: $service

Analysis Date: $(date)
Sample Size: $sample_size servers

Detailed Analysis Results:
EOF

    # Service-specific analysis
    case "$service" in
        "web"|"web_ssl")
            analyze_web_servers "$sample_file" "$analysis_file"
            ;;
        "ssh")
            analyze_ssh_servers "$sample_file" "$analysis_file"
            ;;
        "dns")
            analyze_dns_servers "$sample_file" "$analysis_file"
            ;;
        "ntp")
            analyze_ntp_servers "$sample_file" "$analysis_file"
            ;;
        *)
            echo "Generic analysis for $service" >> "$analysis_file"
            ;;
    esac

    echo "[+] Follow-up analysis completed: $analysis_file"
}
# Function to analyze web servers
analyze_web_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "Web Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Analyze HTTP headers
    while read ip; do
        echo "Analyzing $ip..." >> "$analysis_file"
        timeout 10 curl -sI "http://$ip" 2>/dev/null | head -10 >> "$analysis_file"
        echo "---" >> "$analysis_file"
        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze SSH servers
analyze_ssh_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "SSH Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Grab SSH banners
    while read ip; do
        echo "Analyzing $ip..." >> "$analysis_file"
        timeout 5 nc "$ip" 22 < /dev/null 2>/dev/null | head -1 >> "$analysis_file"
        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze DNS servers
analyze_dns_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "DNS Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Test DNS resolution
    while read ip; do
        echo "Testing DNS server $ip..." >> "$analysis_file"
        timeout 5 dig @"$ip" google.com +short 2>/dev/null >> "$analysis_file"
        echo "---" >> "$analysis_file"
        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}

# Function to analyze NTP servers
analyze_ntp_servers() {
    local sample_file="$1"
    local analysis_file="$2"

    echo "NTP Server Analysis:" >> "$analysis_file"
    echo "===================" >> "$analysis_file"

    # Test NTP response
    while read ip; do
        echo "Testing NTP server $ip..." >> "$analysis_file"
        timeout 5 ntpdate -q "$ip" 2>/dev/null >> "$analysis_file"
        echo "---" >> "$analysis_file"
        # Rate limiting
        sleep 0.1
    done < "$sample_file"
}
# Function to generate final report
generate_final_report() {
    local service="$1"
    local report_file="$OUTPUT_DIR/final_report.html"

    echo "[+] Generating final report"

    local server_count=$(wc -l < "$OUTPUT_DIR/${service}_servers.txt" 2>/dev/null || echo 0)

    cat > "$report_file" << EOF
<html>
<head><title>Internet-Wide $service Discovery Report</title></head>
<body>
<h1>Internet-Wide $service Discovery Report</h1>
<p>Discovery Date: $(date)</p>
<p>Service Type: $service</p>
<p>Scan Rate: $RATE_LIMIT packets/second</p>

<h2>⚠️ Important Notice</h2>
<p>This report contains results from Internet-wide scanning. Use this information
responsibly and in accordance with applicable laws and ethical guidelines.</p>

<h2>Discovery Summary</h2>
<table border="1">
<tr><th>Metric</th><th>Value</th></tr>
<tr><td>Total Servers Found</td><td>$server_count</td></tr>
<tr><td>Output Files</td><td>$(ls -1 "$OUTPUT_DIR" | wc -l)</td></tr>
</table>

<h2>Files Generated</h2>
<ul>
<li>Server List: ${service}_servers.txt</li>
<li>Statistics: ${service}_statistics.txt</li>
<li>Analysis: ${service}_analysis.txt</li>
<li>Scan Log: ${service}_scan.log</li>
</ul>

<h2>Next Steps</h2>
<ul>
<li>Review the server list and statistics</li>
<li>Analyze the follow-up analysis results</li>
<li>Consider responsible disclosure for any security issues found</li>
<li>Implement appropriate security measures based on findings</li>
</ul>
</body>
</html>
EOF

    echo "[+] Final report generated: $report_file"
}
# Main execution
echo "[+] Starting Internet-wide $SERVICE_TYPE discovery"
echo "[+] Output directory: $OUTPUT_DIR"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

# Check if zmap is installed
if ! command -v zmap &> /dev/null; then
    echo "[-] Zmap not found. Please install zmap first."
    exit 1
fi

# Perform service discovery
if discover_service "$SERVICE_TYPE"; then
    followup_analysis "$SERVICE_TYPE"
    generate_final_report "$SERVICE_TYPE"

    echo "[+] Internet-wide $SERVICE_TYPE discovery completed"
    echo "[+] Results saved in: $OUTPUT_DIR"
    echo "[+] Open $OUTPUT_DIR/final_report.html for detailed report"
else
    echo "[-] Service discovery failed"
    exit 1
fi
```
Continuous Network Monitoring
```bash
#!/bin/bash
# Continuous network monitoring using Zmap

MONITOR_CONFIG="monitor.conf"
LOG_DIR="monitoring_logs"
ALERT_WEBHOOK="$1"
CHECK_INTERVAL=3600  # 1 hour

if [ -z "$ALERT_WEBHOOK" ]; then
    echo "Usage: $0 <alert_webhook_url>"
    exit 1
fi

mkdir -p "$LOG_DIR"
# Function to perform monitoring scan
perform_monitoring_scan() {
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local scan_output="$LOG_DIR/scan_$timestamp.txt"
    local baseline_file="$LOG_DIR/baseline.txt"

    echo "[+] Performing monitoring scan at $(date)"

    # Read monitoring configuration
    if [ ! -f "$MONITOR_CONFIG" ]; then
        create_default_config
    fi
    source "$MONITOR_CONFIG"

    # Perform scan
    zmap -p "$MONITOR_PORT" "$MONITOR_RANGE" \
        -M "$PROBE_MODULE" \
        -r "$SCAN_RATE" \
        -o "$scan_output" \
        -v 2> "$LOG_DIR/scan_$timestamp.log"

    if [ $? -ne 0 ]; then
        echo "[-] Monitoring scan failed"
        return 1
    fi

    # Compare with baseline
    if [ -f "$baseline_file" ]; then
        echo "  [+] Comparing with baseline"
        local changes_file="$LOG_DIR/changes_$timestamp.txt"
        compare_scans "$baseline_file" "$scan_output" "$changes_file"
        analyze_changes "$changes_file" "$timestamp"
    else
        echo "  [+] Creating initial baseline"
        cp "$scan_output" "$baseline_file"
    fi

    # Update baseline if significant time has passed
    local baseline_age=$(stat -c %Y "$baseline_file" 2>/dev/null || echo 0)
    local current_time=$(date +%s)
    local age_hours=$(( (current_time - baseline_age) / 3600 ))

    if [ $age_hours -gt 168 ]; then  # 1 week
        echo "  [+] Updating baseline (age: ${age_hours} hours)"
        cp "$scan_output" "$baseline_file"
    fi

    return 0
}
# Function to compare scans
compare_scans() {
    local baseline="$1"
    local current="$2"
    local changes="$3"

    # Find new hosts
    comm -13 <(sort "$baseline") <(sort "$current") > "${changes}.new"

    # Find disappeared hosts
    comm -23 <(sort "$baseline") <(sort "$current") > "${changes}.gone"

    # Create summary
    cat > "$changes" << EOF
Scan Comparison Results

Baseline: $baseline
Current: $current
Comparison Time: $(date)

New Hosts: $(wc -l < "${changes}.new")
Disappeared Hosts: $(wc -l < "${changes}.gone")

New Hosts List:
$(cat "${changes}.new")

Disappeared Hosts List:
$(cat "${changes}.gone")
EOF
}
# Function to analyze changes
analyze_changes() {
    local changes_file="$1"
    local timestamp="$2"

    local new_count=$(wc -l < "${changes_file}.new" 2>/dev/null || echo 0)
    local gone_count=$(wc -l < "${changes_file}.gone" 2>/dev/null || echo 0)

    echo "  [+] Changes detected: $new_count new, $gone_count disappeared"

    # Check thresholds
    if [ "$new_count" -gt "$NEW_HOST_THRESHOLD" ]; then
        echo "  [!] New host threshold exceeded: $new_count > $NEW_HOST_THRESHOLD"
        send_alert "NEW_HOSTS" "$new_count" "${changes_file}.new"
    fi

    if [ "$gone_count" -gt "$GONE_HOST_THRESHOLD" ]; then
        echo "  [!] Disappeared host threshold exceeded: $gone_count > $GONE_HOST_THRESHOLD"
        send_alert "DISAPPEARED_HOSTS" "$gone_count" "${changes_file}.gone"
    fi

    # Analyze new hosts for suspicious patterns
    if [ "$new_count" -gt 0 ]; then
        analyze_new_hosts "${changes_file}.new" "$timestamp"
    fi
}
# Function to analyze new hosts
analyze_new_hosts() {
    local new_hosts_file="$1"
    local timestamp="$2"
    local analysis_file="$LOG_DIR/new_host_analysis_$timestamp.txt"

    echo "  [+] Analyzing new hosts"

    cat > "$analysis_file" << EOF
New Host Analysis

Analysis Time: $(date)
New Hosts Count: $(wc -l < "$new_hosts_file")

Detailed Analysis:
EOF

    # Analyze IP ranges (/24 prefixes)
    echo "IP Range Analysis:" >> "$analysis_file"
    cut -d. -f1-3 "$new_hosts_file" | sort | uniq -c | sort -nr | head -10 >> "$analysis_file"

    # Check for sequential IPs
    echo "" >> "$analysis_file"
    echo "Suspicious Pattern Detection:" >> "$analysis_file"
    local sequential_count=$(sort -V "$new_hosts_file" | awk '
        BEGIN { prev = 0; seq_count = 0 }
        {
            split($1, ip, ".")
            current = ip[4]
            if (current == prev + 1) seq_count++
            prev = current
        }
        END { print seq_count }
    ')

    if [ "$sequential_count" -gt 10 ]; then
        echo "WARNING: $sequential_count sequential IP addresses detected" >> "$analysis_file"
        send_alert "SEQUENTIAL_IPS" "$sequential_count" "$new_hosts_file"
    fi

    # Perform reverse DNS lookups on a sample
    echo "" >> "$analysis_file"
    echo "Reverse DNS Analysis (sample):" >> "$analysis_file"
    head -20 "$new_hosts_file" | while read ip; do
        local hostname=$(timeout 5 dig +short -x "$ip" 2>/dev/null | head -1)
        echo "$ip -> ${hostname:-No PTR record}" >> "$analysis_file"
    done
}
# Function to send alerts
send_alert() {
    local alert_type="$1"
    local count="$2"
    local details_file="$3"

    echo "[!] Sending alert: $alert_type"

    local message="🚨 Network Monitoring Alert: $alert_type detected ($count items) at $(date)"

    # Send to webhook
    if [ -n "$ALERT_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"$message\"}" \
            "$ALERT_WEBHOOK" 2>/dev/null || echo "Webhook alert failed"
    fi

    # Send email if configured
    if [ -n "$ALERT_EMAIL" ]; then
        echo "$message" | mail -s "Network Monitoring Alert: $alert_type" \
            -A "$details_file" "$ALERT_EMAIL" 2>/dev/null || echo "Email alert failed"
    fi

    # Log alert
    echo "$(date): $alert_type - $count items" >> "$LOG_DIR/alerts.log"
}
# Function to generate monitoring report
generate_monitoring_report() {
    echo "[+] Generating monitoring report"

    local report_file="$LOG_DIR/monitoring_report_$(date +%Y%m%d).html"
    local recent_alerts=$(tail -20 "$LOG_DIR/alerts.log" 2>/dev/null)
    local total_scans=$(ls -1 "$LOG_DIR"/scan_*.txt 2>/dev/null | wc -l)
    local baseline_hosts=$(wc -l < "$LOG_DIR/baseline.txt" 2>/dev/null || echo 0)

    cat > "$report_file" << EOF
<html>
<head><title>Network Monitoring Report</title></head>
<body>
<h1>Network Monitoring Report</h1>
<p>Generated: $(date)</p>
<p>Monitoring Period: Last 24 hours</p>

<h2>⚠️ Recent Alerts</h2>
<pre>$recent_alerts</pre>

<h2>Scan Statistics</h2>
<table border="1">
<tr><th>Metric</th><th>Value</th></tr>
<tr><td>Total Scans</td><td>$total_scans</td></tr>
<tr><td>Current Baseline Hosts</td><td>$baseline_hosts</td></tr>
</table>
</body>
</html>
EOF

    echo "  [+] Monitoring report generated: $report_file"
}
# Function to clean up old logs
cleanup_logs() {
    echo "[+] Cleaning up old monitoring logs"

    # Keep scan logs for 30 days
    find "$LOG_DIR" -name "scan_*.txt" -mtime +30 -delete
    find "$LOG_DIR" -name "scan_*.log" -mtime +30 -delete
    find "$LOG_DIR" -name "changes_*.txt*" -mtime +30 -delete
    find "$LOG_DIR" -name "new_host_analysis_*.txt" -mtime +30 -delete

    # Keep reports for 90 days
    find "$LOG_DIR" -name "monitoring_report_*.html" -mtime +90 -delete
}
# Function to create default configuration
create_default_config() {
    cat > "$MONITOR_CONFIG" << 'EOF'
# Network Monitoring Configuration

# Scan parameters
MONITOR_RANGE="192.168.0.0/16"
MONITOR_PORT="80"
PROBE_MODULE="tcp_synscan"
SCAN_RATE="1000"

# Alert thresholds
NEW_HOST_THRESHOLD=10
GONE_HOST_THRESHOLD=5

# Notification settings
ALERT_EMAIL=""
EOF

    echo "Created default configuration: $MONITOR_CONFIG"
}
# Main monitoring loop
echo "[+] Starting continuous network monitoring"
echo "[+] Check interval: $((CHECK_INTERVAL / 60)) minutes"
echo "[+] Alert webhook: $ALERT_WEBHOOK"

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "[-] This script requires root privileges for raw socket access"
    exit 1
fi

while true; do
    echo "[+] Starting monitoring cycle at $(date)"

    if perform_monitoring_scan; then
        echo "  [+] Monitoring scan completed successfully"
    else
        echo "  [-] Monitoring scan failed"
        send_alert "SCAN_FAILURE" "1" "/dev/null"
    fi

    # Generate daily report and clean up at 6 AM
    if [ "$(date +%H)" = "06" ]; then
        generate_monitoring_report
        cleanup_logs
    fi

    echo "[+] Monitoring cycle completed at $(date)"
    echo "[+] Next check in $((CHECK_INTERVAL / 60)) minutes"
    sleep "$CHECK_INTERVAL"
done
```
Integration with Other Tools
Nmap Integration
```bash
# Use Zmap for initial discovery, then Nmap for detailed scanning
zmap -p 80 192.168.1.0/24 -o web_hosts.txt
nmap -sV -p 80 -iL web_hosts.txt -oA detailed_scan

# Combine Zmap and Nmap in a pipeline
zmap -p 22 10.0.0.0/8 | head -1000 | nmap -sV -p 22 -iL - -oA ssh_scan
```
Masscan Integration
```bash
# Compare Zmap and Masscan results
zmap -p 80 192.168.0.0/16 -o zmap_results.txt
masscan -p80 192.168.0.0/16 --rate=1000 -oL masscan_results.txt

# Combine results (extract bare IPs from the Masscan list format first)
awk '/open/ {print $4}' masscan_results.txt | cat - zmap_results.txt | sort -u > combined_results.txt
```
Shodan Integration
```bash
# Use Zmap results to query the Shodan CLI
while read ip; do
    shodan host "$ip"
    sleep 1
done < zmap_results.txt > shodan_analysis.txt
```
Troubleshooting
Common Issues
Permission Denied
```bash
# Run as root for raw socket access
sudo zmap -p 80 192.168.1.0/24

# Check capabilities
getcap $(which zmap)

# Set capabilities (alternative to running as root)
sudo setcap cap_net_raw=eip $(which zmap)
```
High Packet Loss
```bash
# Reduce scan rate
zmap -p 80 192.168.1.0/24 -r 100

# Cap bandwidth usage
zmap -p 80 192.168.1.0/24 -B 10M

# Check the network interface being used
zmap -p 80 192.168.1.0/24 -i eth0 -v
```
Memory Issues
```bash
# Limit target count
zmap -p 80 0.0.0.0/0 --max-targets 1000000

# Monitor memory usage
top -p $(pgrep zmap)

# Stream results to disk instead of buffering
zmap -p 80 192.168.1.0/24 -O csv -o results.csv
```
Network Configuration
```bash
# Check routing table
ip route show

# Check ARP table
arp -a

# Test connectivity
ping -c 1 192.168.1.1

# Check firewall rules
iptables -L
```
Performance Optimization
```bash
# Optimize for high-speed scanning
zmap -p 80 0.0.0.0/0 -r 1400000 -B 1G --sender-threads 4

# CPU optimization
zmap -p 80 192.168.0.0/16 --cores 8

# Limit scan size for memory-constrained hosts
zmap -p 80 0.0.0.0/0 --max-targets 10000000

# Increase kernel network buffers (run as root)
echo 'net.core.rmem_max = 134217728' >> /etc/sysctl.conf
echo 'net.core.rmem_default = 134217728' >> /etc/sysctl.conf
sysctl -p
```
Resources
- Zmap Official Website
- Zmap GitHub Repository
- Zmap Research Papers
- Network Scanning Ethics
- Large-Scale Network Measurement
- ZGrab Banner Grabber
---
*This cheat sheet provides a comprehensive reference for using Zmap for large-scale network scanning and Internet-wide surveys. Always ensure you have proper authorization before scanning networks, and follow ethical scanning practices.*