# iperf3 Cheatsheet
## Overview

iperf3 is a powerful network performance measurement tool that tests the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers, and protocols (TCP, UDP, and SCTP, over IPv4 and IPv6). For each test it reports bandwidth, loss, and other parameters, which makes it essential for network troubleshooting, capacity planning, and performance validation.
## Key Features

- **Bandwidth Testing**: Measure the maximum achievable bandwidth
- **Protocol Support**: TCP, UDP, and SCTP testing
- **Dual Stack**: IPv4 and IPv6 support
- **Bidirectional Testing**: Simultaneous send and receive tests
- **Multiple Streams**: Parallel connection testing
- **Real-Time Reporting**: Live bandwidth and statistics display
- **JSON Output**: Machine-readable results for automation
- **Cross-Platform**: Linux, Windows, macOS, and embedded systems
- **Client-Server Architecture**: Flexible test scenarios
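The workflow behind every recipe below is the same: start a server on one machine, then run a client against it from another. A minimal round trip looks like this:

```bash
# On the server host: listen on the default port 5201
iperf3 -s

# On the client host: run a 10-second TCP test against the server
iperf3 -c server.example.com -t 10
```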
## Installation

### Linux Systems
```bash
# Ubuntu/Debian
sudo apt update
sudo apt install iperf3

# CentOS/RHEL/Fedora
sudo yum install iperf3
# or
sudo dnf install iperf3

# Arch Linux
sudo pacman -S iperf3

# From source (latest version)
wget https://downloads.es.net/pub/iperf/iperf-3.15.tar.gz
tar -xzf iperf-3.15.tar.gz
cd iperf-3.15
./configure
make
sudo make install

# Verify installation
iperf3 --version
```
### Windows Systems
```powershell
# Download from the official website:
# https://iperf.fr/iperf-download.php

# Using Chocolatey
choco install iperf3

# Using Scoop
scoop install iperf3

# Using winget
winget install iperf3

# Verify installation
iperf3.exe --version
```
### macOS Systems
```bash
# Using Homebrew
brew install iperf3

# Using MacPorts
sudo port install iperf3

# From source
curl -O https://downloads.es.net/pub/iperf/iperf-3.15.tar.gz
tar -xzf iperf-3.15.tar.gz
cd iperf-3.15
./configure
make
sudo make install

# Verify installation
iperf3 --version
```
### Docker Installation
```bash
# Pull the official iperf3 image
docker pull networkstatic/iperf3

# Run a server
docker run -it --rm --name=iperf3-server -p 5201:5201 networkstatic/iperf3 -s

# Run a client (from another terminal; replace <server-ip> with the server's address)
docker run -it --rm networkstatic/iperf3 -c <server-ip>

# Custom Dockerfile
cat > Dockerfile << EOF
FROM alpine:latest
RUN apk add --no-cache iperf3
EXPOSE 5201
ENTRYPOINT ["iperf3"]
EOF

docker build -t custom-iperf3 .
```
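For a longer-lived server container, the same image can be managed with Docker Compose. A minimal sketch (the service name and file layout are assumptions, not part of the image):

```bash
# Write a minimal docker-compose.yml for a persistent iperf3 server
cat > docker-compose.yml << EOF
services:
  iperf3-server:
    image: networkstatic/iperf3
    command: -s
    ports:
      - "5201:5201"
      - "5201:5201/udp"
    restart: unless-stopped
EOF

docker compose up -d
```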
## Basic Usage

### Server Mode
```bash
# Start a basic server
iperf3 -s

# Server on a specific port
iperf3 -s -p 5202

# IPv6 server
iperf3 -s -6

# Bind the server to a specific interface address
iperf3 -s -B 192.168.1.100

# Server with authentication
iperf3 -s --rsa-private-key-path server.key --authorized-users-path users.csv

# Daemon mode (run in the background)
iperf3 -s -D

# Server with logging
iperf3 -s --logfile /var/log/iperf3.log

# One-off server (exit after a single test)
iperf3 -s -1
```
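On Linux hosts that should offer iperf3 permanently, a small systemd unit is a common pattern. A minimal sketch; the unit name and binary path are illustrative assumptions (check the path with `command -v iperf3`):

```bash
# Install a simple systemd service for a persistent iperf3 server
sudo tee /etc/systemd/system/iperf3-server.service > /dev/null << 'EOF'
[Unit]
Description=iperf3 server
After=network.target

[Service]
ExecStart=/usr/bin/iperf3 -s
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now iperf3-server
```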
### Client Mode
```bash
# Basic client test
iperf3 -c server.example.com

# Test for a specific duration
iperf3 -c server.example.com -t 30

# Test with a specific bandwidth target
iperf3 -c server.example.com -b 100M

# UDP test
iperf3 -c server.example.com -u

# IPv6 test
iperf3 -c server.example.com -6

# Reverse test (server sends to client)
iperf3 -c server.example.com -R

# Bidirectional test
iperf3 -c server.example.com --bidir

# Multiple parallel streams
iperf3 -c server.example.com -P 4
```
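A "connection refused" or timeout on the client usually means the server port is blocked; opening TCP and UDP 5201 (or your custom port) on the server is a common fix. A sketch for two common Linux firewalls:

```bash
# ufw (Debian/Ubuntu)
sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp

# firewalld (RHEL/Fedora)
sudo firewall-cmd --permanent --add-port=5201/tcp
sudo firewall-cmd --permanent --add-port=5201/udp
sudo firewall-cmd --reload
```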
## Advanced Testing Scenarios

### TCP Testing
```bash
# Basic TCP throughput test
iperf3 -c server.example.com -t 60

# TCP with a specific window size
iperf3 -c server.example.com -w 64K

# TCP with a custom MSS
iperf3 -c server.example.com -M 1460

# TCP with no delay (disable Nagle's algorithm)
iperf3 -c server.example.com -N

# TCP congestion control algorithm
iperf3 -c server.example.com -C cubic

# TCP with a specific read/write buffer size
iperf3 -c server.example.com -l 128K

# Multiple TCP streams
iperf3 -c server.example.com -P 8 -t 30

# TCP reverse test (server to client)
iperf3 -c server.example.com -R -t 30

# Bidirectional TCP test
iperf3 -c server.example.com --bidir -t 30
```
### UDP Testing
```bash
# Basic UDP test
iperf3 -c server.example.com -u

# UDP with a specific bandwidth
iperf3 -c server.example.com -u -b 50M

# UDP with a specific packet size
iperf3 -c server.example.com -u -l 1400

# UDP reverse test
iperf3 -c server.example.com -u -R

# UDP bidirectional test
iperf3 -c server.example.com -u --bidir

# UDP with zero copy (if supported)
iperf3 -c server.example.com -u -Z

# UDP with a specific target bandwidth and duration
iperf3 -c server.example.com -u -b 100M -t 60

# UDP jitter and loss testing with 1-second reports
iperf3 -c server.example.com -u -b 10M -t 30 -i 1
```
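Because UDP has no congestion control, the offered rate set with `-b` is sent regardless of what the path can carry, so the loss figure in the final report marks the capacity ceiling. One way to find it is to step the target rate upward and watch where loss appears; a simple sketch (the rate steps are arbitrary):

```bash
# Step the UDP target rate upward and note where packet loss begins
for bw in 10M 25M 50M 100M 200M; do
  echo "=== Offered load: $bw ==="
  iperf3 -c server.example.com -u -b "$bw" -t 10
  sleep 2
done
```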
### Advanced Configuration
```bash
# Custom port
iperf3 -c server.example.com -p 5202

# Bind to a specific local interface address
iperf3 -c server.example.com -B 192.168.1.100

# Set Type of Service (ToS)
iperf3 -c server.example.com -S 0x10

# Custom reporting interval
iperf3 -c server.example.com -i 5

# JSON output format
iperf3 -c server.example.com -J

# Omit the first n seconds from statistics
iperf3 -c server.example.com -O 5

# Set the socket buffer size
iperf3 -c server.example.com -w 1M

# Enable debug output
iperf3 -c server.example.com -d

# Verbose output
iperf3 -c server.example.com -V
```
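The `-J` output is the basis for the automation scripts later in this cheatsheet; combined with jq, individual metrics can be pulled out directly. A small sketch using the receive-side summary fields:

```bash
# Extract receive-side throughput (Mbps) and retransmits from JSON output
iperf3 -c server.example.com -t 10 -J > result.json
jq -r '.end.sum_received.bits_per_second / 1000000' result.json
jq -r '.end.sum_sent.retransmits' result.json
```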
## Performance Optimization

### Buffer and Window Tuning
```bash
# Optimal buffer size testing
for size in 64K 128K 256K 512K 1M 2M; do
  echo "Testing with buffer size: $size"
  iperf3 -c server.example.com -w $size -t 10
  sleep 2
done

# Socket buffer optimization
echo "Testing socket buffer sizes:"
for size in 87380 131072 262144 524288 1048576; do
  echo "Buffer size: $size bytes"
  iperf3 -c server.example.com -w $size -t 10
  sleep 2
done

# TCP window scaling test
iperf3 -c server.example.com -w 4M -t 30

# MSS optimization
for mss in 1460 1400 1200 1000; do
  echo "Testing MSS: $mss"
  iperf3 -c server.example.com -M $mss -t 10
  sleep 2
done
```
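When choosing window sizes, remember that a single TCP stream is capped by the bandwidth-delay product: throughput ≤ window / RTT. A 64 KB window over a 50 ms path therefore tops out around 10.5 Mbit/s no matter how fast the link is. A quick sketch for computing the window needed for a target rate:

```bash
# Required window (bytes) = target_rate_bits_per_s * RTT_s / 8
# Example: sustaining 1 Gbit/s over a 50 ms RTT path
echo "scale=0; 1000000000 * 0.050 / 8" | bc   # ~6250000 bytes, so e.g. -w 8M is a safe choice
```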
### Multi-Stream Testing
```bash
# Test with different stream counts
for streams in 1 2 4 8 16; do
  echo "Testing with $streams streams:"
  iperf3 -c server.example.com -P $streams -t 20
  echo ""
  sleep 5
done

# Parallel stream optimization script
#!/bin/bash

test_parallel_streams() {
    local server=$1
    local max_streams=$2

    echo "Parallel Stream Optimization Test"
    echo "================================="
    echo "Server: $server"
    echo "Max Streams: $max_streams"
    echo ""

    for streams in $(seq 1 $max_streams); do
        echo "Testing $streams stream(s)..."
        # Use JSON output so the rate can be extracted reliably for any stream count
        result=$(iperf3 -c "$server" -P $streams -t 10 -J | jq -r '.end.sum_received.bits_per_second / 1000000')
        echo "  $streams streams: $result Mbits/sec"
        # Small delay between tests
        sleep 2
    done
}

# Usage: test_parallel_streams server.example.com 8
```
### Congestion Control Testing
```bash
# Test different congestion control algorithms
algorithms=("reno" "cubic" "bbr" "vegas" "westwood")

for algo in "${algorithms[@]}"; do
  echo "Testing congestion control: $algo"
  iperf3 -c server.example.com -C "$algo" -t 15
  echo ""
  sleep 3
done

# BBR vs CUBIC comparison
echo "Comparing BBR vs CUBIC:"
echo "BBR:"
iperf3 -c server.example.com -C bbr -t 30
echo ""
echo "CUBIC:"
iperf3 -c server.example.com -C cubic -t 30
```
## Automation and Scripting

### Comprehensive Network Testing Script
```bash
#!/bin/bash
# comprehensive_network_test.sh

SERVER="$1"

if [ -z "$SERVER" ]; then
    echo "Usage: $0 <server>"
    exit 1
fi

RESULTS_DIR="network_test_results_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$RESULTS_DIR"

echo "Comprehensive Network Performance Test"
echo "====================================="
echo "Server: $SERVER"
echo "Results Directory: $RESULTS_DIR"
echo ""

# Function to run a test and save its results
run_test() {
    local test_name=$1
    local command=$2
    local output_file="$RESULTS_DIR/${test_name}.txt"

    echo "Running: $test_name"
    echo "Command: $command" > "$output_file"
    echo "Timestamp: $(date)" >> "$output_file"
    echo "===========================================" >> "$output_file"

    eval "$command" >> "$output_file" 2>&1

    # Extract key metrics
    if grep -q "receiver" "$output_file"; then
        bandwidth=$(grep "receiver" "$output_file" | tail -1 | awk '{print $7, $8}')
        echo "  Result: $bandwidth"
    else
        echo "  Result: Test failed or incomplete"
    fi

    sleep 2
}

# 1. Basic TCP tests
echo "1. Basic TCP Tests"
echo "=================="
run_test "tcp_basic_10s" "iperf3 -c $SERVER -t 10"
run_test "tcp_basic_30s" "iperf3 -c $SERVER -t 30"
run_test "tcp_reverse" "iperf3 -c $SERVER -R -t 20"
run_test "tcp_bidirectional" "iperf3 -c $SERVER --bidir -t 20"

# 2. Multi-stream TCP tests
echo ""
echo "2. Multi-stream TCP Tests"
echo "========================="
for streams in 2 4 8; do
    run_test "tcp_${streams}_streams" "iperf3 -c $SERVER -P $streams -t 15"
done

# 3. UDP tests
echo ""
echo "3. UDP Tests"
echo "============"
run_test "udp_10mbps" "iperf3 -c $SERVER -u -b 10M -t 15"
run_test "udp_50mbps" "iperf3 -c $SERVER -u -b 50M -t 15"
run_test "udp_100mbps" "iperf3 -c $SERVER -u -b 100M -t 15"
run_test "udp_reverse" "iperf3 -c $SERVER -u -b 50M -R -t 15"

# 4. Buffer size optimization
echo ""
echo "4. Buffer Size Tests"
echo "==================="
for size in 64K 256K 1M; do
    size_name=$(echo $size | tr '[:upper:]' '[:lower:]')
    run_test "tcp_buffer_${size_name}" "iperf3 -c $SERVER -w $size -t 15"
done

# 5. IPv6 tests (if supported)
echo ""
echo "5. IPv6 Tests"
echo "============="
if ping6 -c 1 "$SERVER" >/dev/null 2>&1; then
    run_test "tcp_ipv6" "iperf3 -c $SERVER -6 -t 15"
    run_test "udp_ipv6" "iperf3 -c $SERVER -6 -u -b 50M -t 15"
else
    echo "IPv6 not supported or server not reachable via IPv6"
fi

# Generate summary report
echo ""
echo "6. Generating Summary Report"
echo "============================"

SUMMARY_FILE="$RESULTS_DIR/summary_report.txt"
cat > "$SUMMARY_FILE" << EOF
Network Performance Test Summary
================================
Server: $SERVER
Test Date: $(date)
Test Duration: Approximately $(( $(date +%s) - $(stat -c %Y "$RESULTS_DIR") )) seconds

Test Results:
EOF

# Extract and summarize results
for result_file in "$RESULTS_DIR"/*.txt; do
    if [ "$result_file" != "$SUMMARY_FILE" ]; then
        test_name=$(basename "$result_file" .txt)
        if grep -q "receiver" "$result_file"; then
            bandwidth=$(grep "receiver" "$result_file" | tail -1 | awk '{print $7, $8}')
            echo "$test_name: $bandwidth" >> "$SUMMARY_FILE"
        else
            echo "$test_name: Failed or incomplete" >> "$SUMMARY_FILE"
        fi
    fi
done

echo ""
echo "Summary Report:"
cat "$SUMMARY_FILE"

echo ""
echo "All results saved in: $RESULTS_DIR"
echo "Test completed successfully!"
```
### Python Automation Framework
```python
#!/usr/bin/env python3
# iperf3_automation.py

import subprocess
import json
import time
import datetime
import argparse
import csv
from pathlib import Path


class IPerf3Tester:
    def __init__(self, server, port=5201):
        self.server = server
        self.port = port
        self.results = []

    def run_test(self, test_config):
        """Run a single iperf3 test with the given configuration."""
        cmd = ['iperf3', '-c', self.server, '-p', str(self.port), '-J']

        # Add test-specific parameters
        for param, value in test_config.items():
            if param == 'duration':
                cmd.extend(['-t', str(value)])
            elif param == 'parallel':
                cmd.extend(['-P', str(value)])
            elif param == 'udp' and value:
                cmd.append('-u')
            elif param == 'reverse' and value:
                cmd.append('-R')
            elif param == 'bidir' and value:
                cmd.append('--bidir')
            elif param == 'bandwidth':
                cmd.extend(['-b', str(value)])
            elif param == 'window_size':
                cmd.extend(['-w', str(value)])
            elif param == 'buffer_length':
                cmd.extend(['-l', str(value)])
            elif param == 'mss':
                cmd.extend(['-M', str(value)])
            elif param == 'tos':
                cmd.extend(['-S', str(value)])
            elif param == 'ipv6' and value:
                cmd.append('-6')

        try:
            print(f"Running test: {test_config.get('name', 'Unnamed')}")
            print(f"Command: {' '.join(cmd)}")
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
            if result.returncode == 0:
                data = json.loads(result.stdout)
                return self.parse_results(data, test_config)
            else:
                print(f"Test failed: {result.stderr}")
                return None
        except subprocess.TimeoutExpired:
            print("Test timed out")
            return None
        except json.JSONDecodeError as e:
            print(f"Failed to parse JSON output: {e}")
            return None
        except Exception as e:
            print(f"Test error: {e}")
            return None

    def parse_results(self, data, test_config):
        """Parse iperf3 JSON output."""
        result = {
            'timestamp': datetime.datetime.now().isoformat(),
            'test_name': test_config.get('name', 'Unnamed'),
            'server': self.server,
            'port': self.port,
            'config': test_config
        }

        # Extract end-of-test results
        if 'end' in data:
            end_data = data['end']

            # TCP results
            if 'sum_received' in end_data:
                result.update({
                    'protocol': 'TCP',
                    'bytes_sent': end_data.get('sum_sent', {}).get('bytes', 0),
                    'bytes_received': end_data.get('sum_received', {}).get('bytes', 0),
                    'bits_per_second_sent': end_data.get('sum_sent', {}).get('bits_per_second', 0),
                    'bits_per_second_received': end_data.get('sum_received', {}).get('bits_per_second', 0),
                    'retransmits': end_data.get('sum_sent', {}).get('retransmits', 0),
                    'mean_rtt': end_data.get('sum_sent', {}).get('mean_rtt', 0)
                })
            # UDP results
            elif 'sum' in end_data:
                sum_data = end_data['sum']
                result.update({
                    'protocol': 'UDP',
                    'bytes': sum_data.get('bytes', 0),
                    'bits_per_second': sum_data.get('bits_per_second', 0),
                    'jitter_ms': sum_data.get('jitter_ms', 0),
                    'lost_packets': sum_data.get('lost_packets', 0),
                    'packets': sum_data.get('packets', 0),
                    'lost_percent': sum_data.get('lost_percent', 0)
                })

            # CPU utilization
            if 'cpu_utilization_percent' in end_data:
                result['cpu_utilization'] = end_data['cpu_utilization_percent']

        return result

    def run_test_suite(self, test_suite):
        """Run a complete test suite."""
        print(f"Starting test suite with {len(test_suite)} tests")
        print(f"Target server: {self.server}:{self.port}")
        print("=" * 50)

        for i, test_config in enumerate(test_suite, 1):
            print(f"\nTest {i}/{len(test_suite)}: {test_config.get('name', 'Unnamed')}")
            result = self.run_test(test_config)

            if result:
                self.results.append(result)

                # Display key metrics
                if result.get('protocol') == 'TCP':
                    throughput = result['bits_per_second_received'] / 1e6  # Mbps
                    print(f"  Throughput: {throughput:.2f} Mbps")
                    if result.get('retransmits', 0) > 0:
                        print(f"  Retransmits: {result['retransmits']}")
                elif result.get('protocol') == 'UDP':
                    throughput = result['bits_per_second'] / 1e6  # Mbps
                    print(f"  Throughput: {throughput:.2f} Mbps")
                    print(f"  Packet Loss: {result['lost_percent']:.2f}%")
                    print(f"  Jitter: {result['jitter_ms']:.2f} ms")
            else:
                print("  Test failed")

            # Delay between tests
            if i < len(test_suite):
                time.sleep(2)

        print("\n" + "=" * 50)
        print("Test suite completed!")
        return self.results

    def generate_report(self, output_dir="iperf3_results"):
        """Generate a comprehensive test report."""
        if not self.results:
            print("No results to report")
            return

        # Create output directory
        output_path = Path(output_dir)
        output_path.mkdir(exist_ok=True)
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

        # Save raw JSON results
        json_file = output_path / f"iperf3_results_{timestamp}.json"
        with open(json_file, 'w') as f:
            json.dump(self.results, f, indent=2)

        # Generate CSV summary
        csv_file = output_path / f"iperf3_summary_{timestamp}.csv"
        with open(csv_file, 'w', newline='') as f:
            writer = csv.DictWriter(f, fieldnames=self.results[0].keys())
            writer.writeheader()
            writer.writerows(self.results)

        # Generate HTML report
        html_file = output_path / f"iperf3_report_{timestamp}.html"
        self.generate_html_report(html_file)

        print("Reports generated:")
        print(f"  JSON: {json_file}")
        print(f"  CSV: {csv_file}")
        print(f"  HTML: {html_file}")

        return str(html_file)

    def generate_html_report(self, output_file):
        """Generate a simple HTML report."""
        rows = []
        for result in self.results:
            if result.get('protocol') == 'TCP':
                throughput = result['bits_per_second_received'] / 1e6
                additional = f"Retransmits: {result.get('retransmits', 0)}"
            else:
                throughput = result.get('bits_per_second', 0) / 1e6
                additional = (f"Loss: {result.get('lost_percent', 0):.2f}%, "
                              f"Jitter: {result.get('jitter_ms', 0):.2f} ms")
            rows.append(
                f"<tr><td>{result['test_name']}</td><td>{result.get('protocol', '?')}</td>"
                f"<td>{throughput:.2f}</td><td>{additional}</td><td>{result['timestamp']}</td></tr>"
            )

        html_content = f"""<html>
<head><title>iperf3 Network Performance Test Report</title></head>
<body>
<h1>iperf3 Network Performance Test Report</h1>
<h2>Test Summary</h2>
<p>Server: {self.server}:{self.port}</p>
<p>Generated: {datetime.datetime.now()}</p>
<p>Total Tests: {len(self.results)}</p>
<h2>Detailed Test Results</h2>
<table border="1">
<tr><th>Test Name</th><th>Protocol</th><th>Throughput (Mbps)</th><th>Additional Metrics</th><th>Timestamp</th></tr>
{''.join(rows)}
</table>
</body>
</html>
"""
        with open(output_file, 'w') as f:
            f.write(html_content)


def create_standard_test_suite():
    """Create a standard comprehensive test suite."""
    return [
        {'name': 'TCP Baseline 30s', 'duration': 30},
        {'name': 'TCP Reverse 30s', 'duration': 30, 'reverse': True},
        {'name': 'TCP Bidirectional 30s', 'duration': 30, 'bidir': True},
        {'name': 'TCP 4 Streams', 'duration': 20, 'parallel': 4},
        {'name': 'TCP 8 Streams', 'duration': 20, 'parallel': 8},
        {'name': 'TCP Large Window', 'duration': 20, 'window_size': '1M'},
        {'name': 'UDP 10 Mbps', 'duration': 20, 'udp': True, 'bandwidth': '10M'},
        {'name': 'UDP 50 Mbps', 'duration': 20, 'udp': True, 'bandwidth': '50M'},
        {'name': 'UDP 100 Mbps', 'duration': 20, 'udp': True, 'bandwidth': '100M'},
        {'name': 'UDP Reverse 50 Mbps', 'duration': 20, 'udp': True, 'bandwidth': '50M', 'reverse': True}
    ]


def main():
    parser = argparse.ArgumentParser(description='iperf3 Automation Framework')
    parser.add_argument('server', help='iperf3 server hostname or IP')
    parser.add_argument('--port', type=int, default=5201, help='Server port (default: 5201)')
    parser.add_argument('--output-dir', default='iperf3_results', help='Output directory for results')
    parser.add_argument('--custom-tests', help='JSON file with custom test configurations')

    args = parser.parse_args()

    # Create tester instance
    tester = IPerf3Tester(args.server, args.port)

    # Load test suite
    if args.custom_tests:
        with open(args.custom_tests, 'r') as f:
            test_suite = json.load(f)
    else:
        test_suite = create_standard_test_suite()

    # Run tests
    results = tester.run_test_suite(test_suite)

    # Generate reports
    if results:
        report_file = tester.generate_report(args.output_dir)
        print(f"\nOpen the HTML report: {report_file}")
    else:
        print("No successful tests to report")


if __name__ == "__main__":
    main()
```
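Once an iperf3 server is reachable, the framework runs the standard suite and writes JSON, CSV, and HTML reports; for example:

```bash
# Run the standard test suite and collect reports in ./iperf3_results
python3 iperf3_automation.py server.example.com --port 5201 --output-dir iperf3_results
```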
## Monitoring and Analysis

### Real-Time Performance Monitoring
```bash
#!/bin/bash
# realtime_iperf3_monitor.sh

SERVER="$1"
DURATION="${2:-3600}"   # Default 1 hour
INTERVAL="${3:-60}"     # Default 60 seconds

if [ -z "$SERVER" ]; then
    echo "Usage: $0 <server> [duration_seconds] [interval_seconds]"
    exit 1
fi

LOGFILE="iperf3_monitor_$(date +%Y%m%d_%H%M%S).log"
CSVFILE="iperf3_monitor_$(date +%Y%m%d_%H%M%S).csv"

# CSV header
echo "timestamp,throughput_mbps,retransmits,rtt_ms,cpu_percent" > "$CSVFILE"

echo "Starting iperf3 monitoring..."
echo "Server: $SERVER"
echo "Duration: $DURATION seconds"
echo "Interval: $INTERVAL seconds"
echo "Log file: $LOGFILE"
echo "CSV file: $CSVFILE"
echo ""

END_TIME=$(( $(date +%s) + DURATION ))
TEST_COUNT=0

while [ $(date +%s) -lt $END_TIME ]; do
    TEST_COUNT=$((TEST_COUNT + 1))
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

    echo "[$TIMESTAMP] Running test $TEST_COUNT..." | tee -a "$LOGFILE"

    # Run an iperf3 test with JSON output
    RESULT=$(iperf3 -c "$SERVER" -t 10 -J 2>/dev/null)

    if [ $? -eq 0 ]; then
        # Parse JSON results
        THROUGHPUT=$(echo "$RESULT" | jq -r '.end.sum_received.bits_per_second // 0' | awk '{print $1/1000000}')
        RETRANSMITS=$(echo "$RESULT" | jq -r '.end.sum_sent.retransmits // 0')
        RTT=$(echo "$RESULT" | jq -r '.end.sum_sent.mean_rtt // 0')
        CPU=$(echo "$RESULT" | jq -r '.end.cpu_utilization_percent.host_total // 0')

        # Log results
        echo "  Throughput: ${THROUGHPUT} Mbps" | tee -a "$LOGFILE"
        echo "  Retransmits: $RETRANSMITS" | tee -a "$LOGFILE"
        echo "  RTT: ${RTT} ms" | tee -a "$LOGFILE"
        echo "  CPU: ${CPU}%" | tee -a "$LOGFILE"

        # Save to CSV
        echo "$(date '+%Y-%m-%d %H:%M:%S'),$THROUGHPUT,$RETRANSMITS,$RTT,$CPU" >> "$CSVFILE"

        # Check for performance issues
        if (( $(echo "$THROUGHPUT < 10" | bc -l) )); then
            echo "  WARNING: Low throughput detected!" | tee -a "$LOGFILE"
        fi

        if [ "$RETRANSMITS" -gt 100 ]; then
            echo "  WARNING: High retransmit count!" | tee -a "$LOGFILE"
        fi
    else
        echo "  ERROR: Test failed" | tee -a "$LOGFILE"
        echo "$(date '+%Y-%m-%d %H:%M:%S'),0,0,0,0" >> "$CSVFILE"
    fi

    echo "" | tee -a "$LOGFILE"

    # Wait for the next interval
    sleep $INTERVAL
done

echo "Monitoring completed. Results saved to:"
echo "  Log: $LOGFILE"
echo "  CSV: $CSVFILE"

# Generate basic statistics
if command -v python3 >/dev/null 2>&1; then
    python3 << EOF
import csv
import statistics

# Read CSV data
throughputs = []
with open('$CSVFILE', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        if float(row['throughput_mbps']) > 0:
            throughputs.append(float(row['throughput_mbps']))

if throughputs:
    print("Performance Statistics:")
    print(f"  Average throughput: {statistics.mean(throughputs):.2f} Mbps")
    print(f"  Maximum throughput: {max(throughputs):.2f} Mbps")
    print(f"  Minimum throughput: {min(throughputs):.2f} Mbps")
    if len(throughputs) > 1:
        print(f"  Standard deviation: {statistics.stdev(throughputs):.2f} Mbps")
    print(f"  Total tests: {len(throughputs)}")
else:
    print("No valid throughput data collected")
EOF
fi
```
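For unattended, recurring measurements, the monitor script can also be driven from cron; the schedule and paths below are illustrative:

```bash
# Edit the crontab, then add a line like the commented one below:
crontab -e

# Hourly 10-minute monitoring window (600 s duration, 60 s interval):
# 0 * * * * /usr/local/bin/realtime_iperf3_monitor.sh server.example.com 600 60 >> /var/log/iperf3_cron.log 2>&1
```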
### Performance Baseline Testing
```bash
#!/bin/bash
# performance_baseline.sh

SERVER="$1"

if [ -z "$SERVER" ]; then
    echo "Usage: $0 <server>"
    exit 1
fi

BASELINE_DIR="baseline_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BASELINE_DIR"

echo "Network Performance Baseline Test"
echo "================================="
echo "Server: $SERVER"
echo "Results Directory: $BASELINE_DIR"
echo ""

# Function to run a baseline test
run_baseline_test() {
    local test_name=$1
    local description=$2
    local command=$3

    echo "Running: $test_name"
    echo "Description: $description"
    echo "Command: $command"

    local summary_file="$BASELINE_DIR/${test_name}_summary.txt"

    # Run the test multiple times for consistency
    local total_throughput=0
    local test_count=3
    local successful_tests=0

    echo "Description: $description" > "$summary_file"
    echo "Running $test_count iterations..." >> "$summary_file"
    echo "=================================" >> "$summary_file"

    for i in $(seq 1 $test_count); do
        echo "  Iteration $i/$test_count..."

        if eval "$command" > "/tmp/iperf3_temp.json" 2>/dev/null; then
            # Parse results (TCP uses sum_received, UDP uses sum)
            throughput=$(jq -r '.end.sum_received.bits_per_second // .end.sum.bits_per_second // 0' "/tmp/iperf3_temp.json")

            if [ "$throughput" != "0" ] && [ "$throughput" != "null" ]; then
                throughput_mbps=$(echo "scale=2; $throughput / 1000000" | bc)
                echo "    Result: ${throughput_mbps} Mbps"
                echo "Iteration $i: ${throughput_mbps} Mbps" >> "$summary_file"
                total_throughput=$(echo "$total_throughput + $throughput_mbps" | bc)
                successful_tests=$((successful_tests + 1))

                # Save the individual result
                cp "/tmp/iperf3_temp.json" "$BASELINE_DIR/${test_name}_iter${i}.json"
            else
                echo "    Result: Failed"
                echo "Iteration $i: Failed" >> "$summary_file"
            fi
        else
            echo "    Result: Failed"
            echo "Iteration $i: Failed" >> "$summary_file"
        fi

        sleep 3
    done

    # Calculate the average
    if [ $successful_tests -gt 0 ]; then
        avg_throughput=$(echo "scale=2; $total_throughput / $successful_tests" | bc)
        echo "  Average: ${avg_throughput} Mbps ($successful_tests/$test_count successful)"
        echo "" >> "$summary_file"
        echo "Average: ${avg_throughput} Mbps" >> "$summary_file"
        echo "Successful tests: $successful_tests/$test_count" >> "$summary_file"
    else
        echo "  Average: No successful tests"
        echo "" >> "$summary_file"
        echo "Average: No successful tests" >> "$summary_file"
    fi

    echo ""
    rm -f "/tmp/iperf3_temp.json"
}

# 1. TCP Baseline Tests
echo "1. TCP Baseline Tests"
echo "===================="

run_baseline_test "tcp_single_stream" \
    "Single TCP stream, 30 seconds" \
    "iperf3 -c $SERVER -t 30 -J"

run_baseline_test "tcp_multiple_streams" \
    "4 parallel TCP streams, 30 seconds" \
    "iperf3 -c $SERVER -P 4 -t 30 -J"

run_baseline_test "tcp_reverse" \
    "TCP reverse test (server to client), 30 seconds" \
    "iperf3 -c $SERVER -R -t 30 -J"

run_baseline_test "tcp_bidirectional" \
    "TCP bidirectional test, 30 seconds" \
    "iperf3 -c $SERVER --bidir -t 30 -J"

# 2. UDP Baseline Tests
echo "2. UDP Baseline Tests"
echo "===================="

run_baseline_test "udp_10mbps" \
    "UDP test at 10 Mbps, 30 seconds" \
    "iperf3 -c $SERVER -u -b 10M -t 30 -J"

run_baseline_test "udp_50mbps" \
    "UDP test at 50 Mbps, 30 seconds" \
    "iperf3 -c $SERVER -u -b 50M -t 30 -J"

run_baseline_test "udp_100mbps" \
    "UDP test at 100 Mbps, 30 seconds" \
    "iperf3 -c $SERVER -u -b 100M -t 30 -J"

# 3. Buffer Size Tests
echo "3. Buffer Size Optimization Tests"
echo "================================="

for buffer_size in 64K 256K 1M 4M; do
    run_baseline_test "tcp_buffer_${buffer_size}" \
        "TCP with ${buffer_size} buffer, 20 seconds" \
        "iperf3 -c $SERVER -w $buffer_size -t 20 -J"
done

# Generate comprehensive baseline report
REPORT_FILE="$BASELINE_DIR/baseline_report.html"

cat > "$REPORT_FILE" << EOF
<html>
<head><title>Network Performance Baseline Report</title></head>
<body>
<h1>Network Performance Baseline Report</h1>
<h2>Test Summary</h2>
<p>Server: $SERVER</p>
<p>Generated: $(date)</p>
<p>Test Directory: $BASELINE_DIR</p>
<h2>Baseline Results</h2>
<table border="1">
<tr><th>Test Name</th><th>Description</th><th>Average Throughput (Mbps)</th><th>Success Rate</th></tr>
EOF

# Append one table row per test summary
for summary_file in "$BASELINE_DIR"/*_summary.txt; do
    [ -f "$summary_file" ] || continue
    test_name=$(basename "$summary_file" _summary.txt)
    description=$(grep "^Description:" "$summary_file" | head -1 | cut -d' ' -f2-)
    avg=$(grep "^Average:" "$summary_file" | awk '{print $2}' | head -1)
    success=$(grep "^Successful tests:" "$summary_file" | awk '{print $3}' | head -1)
    echo "<tr><td>$test_name</td><td>${description:-n/a}</td><td>${avg:-n/a}</td><td>${success:-n/a}</td></tr>" >> "$REPORT_FILE"
done

cat >> "$REPORT_FILE" << 'EOF'
</table>
<h2>Recommendations</h2>
<h3>Performance Analysis</h3>
<ul>
<li>Review TCP single stream performance as the baseline</li>
<li>Compare multi-stream results to identify optimal parallelism</li>
<li>Check UDP packet loss rates for capacity planning</li>
<li>Use buffer size results for application optimization</li>
<li>Monitor for consistency across test iterations</li>
</ul>
</body>
</html>
EOF

echo "Baseline testing completed!"
echo "Results saved in: $BASELINE_DIR"
echo "HTML report: $REPORT_FILE"
echo ""
echo "Summary of results:"
for summary_file in "$BASELINE_DIR"/*_summary.txt; do
    if [ -f "$summary_file" ]; then
        test_name=$(basename "$summary_file" _summary.txt)
        avg=$(grep "^Average:" "$summary_file" | awk '{print $2}' | head -1)
        echo "  $test_name: $avg"
    fi
done
```
This comprehensive iperf3 cheatsheet provides everything needed for professional network performance testing, bandwidth measurement, and automated network analysis, from basic throughput checks to advanced monitoring and automation scenarios.