Kube-hunter Cheatsheet¶
Overview
Kube-hunter is a penetration testing tool designed to hunt for security weaknesses in Kubernetes clusters. It simulates the techniques attackers use to compromise Kubernetes environments and provides detailed reports on discovered vulnerabilities and possible attack paths.
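For a first impression, a single passive remote scan is usually enough. A minimal sketch, assuming kube-hunter is already installed and `some.node.com` stands in for a reachable cluster node (installation options follow below):
# Passive remote scan against a single node; prints findings as plain text
kube-hunter --remote some.node.com --report plain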
Key Features¶
- **Active Vulnerability Hunting**: Discovers and exploits Kubernetes vulnerabilities
- **Multiple Scanning Modes**: Remote, internal, and network scanning capabilities
- **Attack Path Discovery**: Maps potential attack vectors and privilege escalation paths
- **Comprehensive Reporting**: Detailed vulnerability reports with remediation guidance
- **CI/CD Integration**: Easy integration into security pipelines
- **Custom Hunters**: Extensible framework for custom security checks
Installation
Binary Installation¶
# Download latest release
curl -L https://github.com/aquasecurity/kube-hunter/releases/latest/download/kube-hunter_Linux_x86_64 -o kube-hunter
chmod +x kube-hunter
sudo mv kube-hunter /usr/local/bin/
# Verify installation
kube-hunter --version
Python Installation¶
# Install via pip
pip install kube-hunter
# Install from source
git clone https://github.com/aquasecurity/kube-hunter.git
cd kube-hunter
pip install -r requirements.txt
python setup.py install
# Verify installation
kube-hunter --version
Container Installation¶
# Pull Docker image
docker pull aquasec/kube-hunter:latest
# Run as container
docker run --rm --network host aquasec/kube-hunter:latest
Kubernetes Job Installation¶
# kube-hunter-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    spec:
      containers:
      - name: kube-hunter
        image: aquasec/kube-hunter:latest
        command: ["kube-hunter"]
        args: ["--pod"]
      restartPolicy: Never
  backoffLimit: 4
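To run the Job above and collect its report, a sketch of the typical workflow (the job name matches the manifest; the timeout value is an assumption):
# Apply the Job, wait for it to finish, then read the report from the pod logs
kubectl apply -f kube-hunter-job.yaml
kubectl wait --for=condition=complete job/kube-hunter --timeout=300s
kubectl logs job/kube-hunter
# Clean up once the output has been saved
kubectl delete job kube-hunter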
Basic Usage
Scanning Modes¶
# Remote scanning (external perspective)
kube-hunter --remote some.node.com
# Internal scanning (from within cluster)
kube-hunter --internal
# Network scanning (discover and scan)
kube-hunter --cidr 192.168.1.0/24
# Pod scanning (from within a pod)
kube-hunter --pod
# Interface scanning
kube-hunter --interface eth0
Output Formats¶
# JSON output
kube-hunter --report json
# YAML output
kube-hunter --report yaml
# Plain text output (default)
kube-hunter --report plain
# Save to file
kube-hunter --report json --log /tmp/kube-hunter-report.json
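The JSON report contains a `vulnerabilities` list (the same structure the automation scripts later in this cheatsheet rely on). A quick sketch for summarizing it, assuming `jq` is installed:
# Count findings and list their IDs, severities, and categories
jq '.vulnerabilities | length' /tmp/kube-hunter-report.json
jq -r '.vulnerabilities[] | "\(.vid)\t\(.severity)\t\(.category)"' /tmp/kube-hunter-report.json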
Scan Options¶
# Quick scan (passive only)
kube-hunter --quick
# Active hunting (potentially disruptive)
kube-hunter --active
# Include statistics
kube-hunter --statistics
# Verbose output
kube-hunter --verbose
# Dispatch all hunters
kube-hunter --dispatch
Advanced Scanning Techniques
Network Discovery and Scanning¶
# Scan multiple CIDR ranges
kube-hunter --cidr 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
# Scan specific IP ranges
kube-hunter --remote 10.0.1.1-10.0.1.100
# Scan with custom ports
kube-hunter --remote target.com --port 8080,8443,10250
# Network interface scanning
kube-hunter --interface eth0,eth1
Active Hunting Configuration¶
# Enable active hunting with specific modules
kube-hunter --active --hunter-modules kubelet,api-server
# Active hunting with custom payloads
kube-hunter --active --payload-file custom-payloads.yaml
# Active hunting with timeout
kube-hunter --active --timeout 300
# Active hunting with rate limiting
kube-hunter --active --rate-limit 10
Custom Hunter Configuration¶
# custom_hunter.py
# Note: the exact module paths for the Event/Vulnerability base types vary
# between kube-hunter versions; the imports below are an assumption based on
# the layout used by recent releases.
from kube_hunter.core.types import Hunter, KubernetesCluster, InformationDisclosure
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability


class CustomVulnerability(Vulnerability, Event):
    """Custom vulnerability event"""

    def __init__(self):
        Vulnerability.__init__(
            self,
            component="Custom Component",
            name="Custom Vulnerability",
            category=InformationDisclosure,
        )
        self.evidence = "Custom evidence"


@handler.subscribe(KubernetesCluster)
class CustomHunter(Hunter):
    """Custom vulnerability hunter"""

    def __init__(self, event):
        self.event = event

    def execute(self):
        # Custom vulnerability detection logic
        self.publish_event(CustomVulnerability())
Cloud Provider-Specific Scanning
Amazon EKS¶
# EKS cluster scanning
kube-hunter --remote eks-cluster.region.eks.amazonaws.com
# EKS with IAM authentication
export AWS_PROFILE=eks-profile
kube-hunter --remote eks-cluster.region.eks.amazonaws.com --eks
# EKS worker node scanning
kube-hunter --cidr 10.0.0.0/16 --active
Google GKE¶
# GKE cluster scanning
kube-hunter --remote gke-cluster.googleapis.com
# GKE with service account
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
kube-hunter --remote gke-cluster.googleapis.com --gke
# GKE node pool scanning
kube-hunter --cidr 10.128.0.0/20 --active
Azure AKS¶
# AKS cluster scanning
kube-hunter --remote aks-cluster.region.azmk8s.io
# AKS with Azure CLI authentication
az login
kube-hunter --remote aks-cluster.region.azmk8s.io --aks
# AKS subnet scanning
kube-hunter --cidr 10.240.0.0/16 --active
CI/CD Integration
GitHub Actions¶
# .github/workflows/kube-hunter.yml
name: Kube-hunter Security Scan

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 3 * * *'  # Daily at 3 AM

jobs:
  kube-hunter-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install Kube-hunter
        run: |
          pip install kube-hunter

      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > kubeconfig
          # Exported variables do not persist between steps; use GITHUB_ENV instead
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"

      - name: Run Kube-hunter scan
        run: |
          # Remote scan
          kube-hunter --remote ${{ secrets.CLUSTER_ENDPOINT }} \
            --report json --log kube-hunter-remote.json

          # Internal scan via job
          kubectl apply -f - <<EOF
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: kube-hunter-internal
          spec:
            template:
              spec:
                containers:
                - name: kube-hunter
                  image: aquasec/kube-hunter:latest
                  command: ["kube-hunter"]
                  args: ["--pod", "--report", "json"]
                restartPolicy: Never
          EOF

          kubectl wait --for=condition=complete job/kube-hunter-internal --timeout=300s
          kubectl logs job/kube-hunter-internal > kube-hunter-internal.json

      - name: Parse results
        run: |
          python3 << 'EOF'
          import json
          import os

          def parse_results(file_path):
              try:
                  with open(file_path, 'r') as f:
                      return json.load(f)
              except:
                  return {"vulnerabilities": []}

          # Parse remote scan
          remote_data = parse_results('kube-hunter-remote.json')
          remote_vulns = len(remote_data.get('vulnerabilities', []))

          # Parse internal scan
          internal_data = parse_results('kube-hunter-internal.json')
          internal_vulns = len(internal_data.get('vulnerabilities', []))

          total_vulns = remote_vulns + internal_vulns
          print(f"Remote vulnerabilities: {remote_vulns}")
          print(f"Internal vulnerabilities: {internal_vulns}")
          print(f"Total vulnerabilities: {total_vulns}")

          # Expose counts to the following steps via the GITHUB_ENV file
          with open(os.environ['GITHUB_ENV'], 'a') as f:
              f.write(f"REMOTE_VULNS={remote_vulns}\n")
              f.write(f"INTERNAL_VULNS={internal_vulns}\n")
              f.write(f"TOTAL_VULNS={total_vulns}\n")
          EOF

      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: kube-hunter-results
          path: |
            kube-hunter-*.json

      - name: Comment PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v6
        with:
          script: |
            const remoteVulns = process.env.REMOTE_VULNS;
            const internalVulns = process.env.INTERNAL_VULNS;
            const totalVulns = process.env.TOTAL_VULNS;

            const comment = `## Kube-hunter Security Scan Results

            - **Remote vulnerabilities:** ${remoteVulns}
            - **Internal vulnerabilities:** ${internalVulns}
            - **Total vulnerabilities:** ${totalVulns}

            ${totalVulns > 0 ? '⚠️ Vulnerabilities detected! Please review the detailed results.' : '✅ No vulnerabilities detected.'}`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: comment
            });

      - name: Fail on vulnerabilities
        if: env.TOTAL_VULNS > 0
        run: |
          echo "❌ Kube-hunter found $TOTAL_VULNS vulnerabilities"
          exit 1
GitLab CI¶
# .gitlab-ci.yml
stages:
  - security

kube-hunter-scan:
  stage: security
  image: aquasec/kube-hunter:latest
  script:
    - kube-hunter --remote $CLUSTER_ENDPOINT --report json --log kube-hunter-results.json
  artifacts:
    reports:
      container_scanning: kube-hunter-results.json
    paths:
      - kube-hunter-results.json
    expire_in: 1 week
  only:
    - main
    - merge_requests

kube-hunter-internal:
  stage: security
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f kube-hunter-job.yaml
    - kubectl wait --for=condition=complete job/kube-hunter --timeout=300s
    - kubectl logs job/kube-hunter > kube-hunter-internal.json
  artifacts:
    paths:
      - kube-hunter-internal.json
    expire_in: 1 week
  only:
    - main
Jenkins Pipeline¶
// Jenkinsfile
pipeline {
    agent any

    environment {
        KUBECONFIG = credentials('kubeconfig')
        CLUSTER_ENDPOINT = credentials('cluster-endpoint')
    }

    stages {
        stage('Kube-hunter Scan') {
            parallel {
                stage('Remote Scan') {
                    steps {
                        script {
                            // Install kube-hunter
                            sh 'pip install kube-hunter'

                            // Run remote scan
                            sh '''
                                kube-hunter --remote $CLUSTER_ENDPOINT \
                                    --report json --log kube-hunter-remote.json
                            '''
                        }
                    }
                }
                stage('Internal Scan') {
                    steps {
                        script {
                            // Run internal scan via Kubernetes job
                            sh '''
                                kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter-internal-${BUILD_NUMBER}
spec:
  template:
    spec:
      containers:
      - name: kube-hunter
        image: aquasec/kube-hunter:latest
        command: ["kube-hunter"]
        args: ["--pod", "--report", "json"]
      restartPolicy: Never
EOF
                                kubectl wait --for=condition=complete job/kube-hunter-internal-${BUILD_NUMBER} --timeout=300s
                                kubectl logs job/kube-hunter-internal-${BUILD_NUMBER} > kube-hunter-internal.json
                                kubectl delete job kube-hunter-internal-${BUILD_NUMBER}
                            '''
                        }
                    }
                }
            }
        }

        stage('Analyze Results') {
            steps {
                script {
                    // Parse and analyze results
                    def remoteResults = readJSON file: 'kube-hunter-remote.json'
                    def internalResults = readJSON file: 'kube-hunter-internal.json'

                    def remoteVulns = remoteResults.vulnerabilities?.size() ?: 0
                    def internalVulns = internalResults.vulnerabilities?.size() ?: 0
                    def totalVulns = remoteVulns + internalVulns

                    echo "Remote vulnerabilities: ${remoteVulns}"
                    echo "Internal vulnerabilities: ${internalVulns}"
                    echo "Total vulnerabilities: ${totalVulns}"

                    // Set build description
                    currentBuild.description = "Vulnerabilities: ${totalVulns}"

                    // Fail build if vulnerabilities found
                    if (totalVulns > 0) {
                        currentBuild.result = 'UNSTABLE'
                        error("Kube-hunter found ${totalVulns} vulnerabilities")
                    }
                }
            }
        }
    }

    post {
        always {
            archiveArtifacts artifacts: 'kube-hunter-*.json', fingerprint: true
            publishHTML([
                allowMissing: false,
                alwaysLinkToLastBuild: true,
                keepAll: true,
                reportDir: '.',
                reportFiles: 'kube-hunter-*.json',
                reportName: 'Kube-hunter Security Report'
            ])
        }
    }
}
Automation Scripts
Comprehensive Scanning Script¶
#!/bin/bash
# comprehensive-kube-hunter.sh

set -e

# Configuration
RESULTS_DIR="kube-hunter-results"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CLUSTER_ENDPOINT="${CLUSTER_ENDPOINT:-}"
CIDR_RANGES="${CIDR_RANGES:-10.0.0.0/8,172.16.0.0/12,192.168.0.0/16}"

# Create results directory
mkdir -p "$RESULTS_DIR"

echo "Starting comprehensive Kube-hunter security scan..."

# Function to run scan
run_scan() {
    local scan_type="$1"
    local target="$2"
    local output_file="$3"
    local additional_args="$4"

    echo "Running $scan_type scan..."

    case "$scan_type" in
        "remote")
            kube-hunter --remote "$target" --report json --log "$output_file" $additional_args
            ;;
        "cidr")
            kube-hunter --cidr "$target" --report json --log "$output_file" $additional_args
            ;;
        "internal")
            kube-hunter --internal --report json --log "$output_file" $additional_args
            ;;
        "pod")
            kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter-pod-$TIMESTAMP
spec:
  template:
    spec:
      containers:
      - name: kube-hunter
        image: aquasec/kube-hunter:latest
        command: ["kube-hunter"]
        args: ["--pod", "--report", "json", $additional_args]
      restartPolicy: Never
EOF
            kubectl wait --for=condition=complete job/kube-hunter-pod-$TIMESTAMP --timeout=300s
            kubectl logs job/kube-hunter-pod-$TIMESTAMP > "$output_file"
            kubectl delete job kube-hunter-pod-$TIMESTAMP
            ;;
    esac
}

# Main scanning logic
main() {
    # Remote scanning
    if [ -n "$CLUSTER_ENDPOINT" ]; then
        run_scan "remote" "$CLUSTER_ENDPOINT" "$RESULTS_DIR/remote-scan-$TIMESTAMP.json" "--statistics"
        run_scan "remote" "$CLUSTER_ENDPOINT" "$RESULTS_DIR/remote-active-scan-$TIMESTAMP.json" "--active --statistics"
    fi

    # Network scanning
    run_scan "cidr" "$CIDR_RANGES" "$RESULTS_DIR/network-scan-$TIMESTAMP.json" "--statistics"
    run_scan "cidr" "$CIDR_RANGES" "$RESULTS_DIR/network-active-scan-$TIMESTAMP.json" "--active --statistics"

    # Internal scanning (if kubectl is configured)
    if kubectl cluster-info &>/dev/null; then
        run_scan "internal" "" "$RESULTS_DIR/internal-scan-$TIMESTAMP.json" "--statistics"
        run_scan "pod" "" "$RESULTS_DIR/pod-scan-$TIMESTAMP.json" "--statistics"
    fi

    # Generate comprehensive report
    python3 << 'EOF'
import json
import os
import glob
from datetime import datetime

def parse_kube_hunter_results(file_path):
    try:
        with open(file_path, 'r') as f:
            return json.load(f)
    except:
        return {"vulnerabilities": [], "hunter_statistics": []}

def generate_comprehensive_report():
    results_dir = "kube-hunter-results"
    timestamp = os.environ.get('TIMESTAMP', datetime.now().strftime('%Y%m%d_%H%M%S'))

    # Find all result files
    result_files = glob.glob(f"{results_dir}/*-{timestamp}.json")

    all_vulnerabilities = []
    all_statistics = []
    scan_summary = {}

    for file_path in result_files:
        scan_type = os.path.basename(file_path).split('-')[0]
        data = parse_kube_hunter_results(file_path)

        vulnerabilities = data.get('vulnerabilities', [])
        statistics = data.get('hunter_statistics', [])

        all_vulnerabilities.extend(vulnerabilities)
        all_statistics.extend(statistics)

        scan_summary[scan_type] = {
            'vulnerabilities': len(vulnerabilities),
            'file': file_path
        }

    # Generate summary report
    report = {
        'timestamp': datetime.now().isoformat(),
        'scan_summary': scan_summary,
        'total_vulnerabilities': len(all_vulnerabilities),
        'vulnerability_breakdown': {},
        'hunter_statistics': all_statistics,
        'vulnerabilities': all_vulnerabilities
    }

    # Categorize vulnerabilities
    for vuln in all_vulnerabilities:
        category = vuln.get('category', 'Unknown')
        if category not in report['vulnerability_breakdown']:
            report['vulnerability_breakdown'][category] = 0
        report['vulnerability_breakdown'][category] += 1

    # Save comprehensive report
    with open(f"{results_dir}/comprehensive-report-{timestamp}.json", 'w') as f:
        json.dump(report, f, indent=2)

    # Print summary
    print("\n=== Kube-hunter Comprehensive Scan Summary ===")
    print(f"Timestamp: {report['timestamp']}")
    print(f"Total Vulnerabilities: {report['total_vulnerabilities']}")

    print("\nScan Breakdown:")
    for scan_type, summary in scan_summary.items():
        print(f"  {scan_type}: {summary['vulnerabilities']} vulnerabilities")

    print("\nVulnerability Categories:")
    for category, count in report['vulnerability_breakdown'].items():
        print(f"  {category}: {count}")

    # Return exit code based on vulnerabilities
    return 1 if report['total_vulnerabilities'] > 0 else 0

exit_code = generate_comprehensive_report()
exit(exit_code)
EOF

    echo "Comprehensive scan completed. Results saved in $RESULTS_DIR/"
}

# Export timestamp for Python script
export TIMESTAMP

# Run main function
main
Automated Remediation Script¶
#!/bin/bash
# kube-hunter-remediation.sh

set -e

RESULTS_FILE="$1"
REMEDIATION_LOG="kube-hunter-remediation-$(date +%Y%m%d_%H%M%S).log"

if [ -z "$RESULTS_FILE" ]; then
    echo "Usage: $0 <kube-hunter-results.json>"
    exit 1
fi

echo "Starting automated remediation based on Kube-hunter results..."
echo "Results file: $RESULTS_FILE"
echo "Remediation log: $REMEDIATION_LOG"

# Function to apply remediation
apply_remediation() {
    local vuln_id="$1"
    local description="$2"
    local evidence="$3"

    echo "Applying remediation for vulnerability: $description" | tee -a "$REMEDIATION_LOG"
    echo "Evidence: $evidence" | tee -a "$REMEDIATION_LOG"

    case "$vuln_id" in
        "KHV002")
            # K8s Version Disclosure
            echo "Remediating K8s Version Disclosure..." | tee -a "$REMEDIATION_LOG"
            kubectl patch configmap kube-proxy -n kube-system --type='merge' -p='{"data":{"config.conf":"mode: iptables\nclusterCIDR: 10.244.0.0/16\n"}}' 2>&1 | tee -a "$REMEDIATION_LOG"
            ;;
        "KHV005")
            # Access to pod's secrets
            echo "Remediating pod secrets access..." | tee -a "$REMEDIATION_LOG"
            kubectl create rolebinding default-view --clusterrole=view --serviceaccount=default:default --namespace=default 2>&1 | tee -a "$REMEDIATION_LOG"
            ;;
        "KHV050")
            # Read access to pod's service account token
            echo "Remediating service account token access..." | tee -a "$REMEDIATION_LOG"
            kubectl patch serviceaccount default -p '{"automountServiceAccountToken":false}' 2>&1 | tee -a "$REMEDIATION_LOG"
            ;;
        "KHV053")
            # Pod Security Policy not enabled
            echo "Enabling Pod Security Policy..." | tee -a "$REMEDIATION_LOG"
            kubectl apply -f - <<EOF 2>&1 | tee -a "$REMEDIATION_LOG"
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF
            ;;
        *)
            echo "No automated remediation available for vulnerability: $description" | tee -a "$REMEDIATION_LOG"
            echo "Manual remediation required" | tee -a "$REMEDIATION_LOG"
            ;;
    esac
}

# Export the function and log path so the bash subshells spawned from Python can use them
export -f apply_remediation
export REMEDIATION_LOG

# Parse JSON results and apply remediations
python3 << EOF
import json
import subprocess

def run_remediation(vuln_id, description, evidence):
    try:
        subprocess.run(['bash', '-c', f'apply_remediation "{vuln_id}" "{description}" "{evidence}"'], check=True)
        return True
    except subprocess.CalledProcessError as e:
        print(f"Failed to apply remediation for {vuln_id}: {e}")
        return False

# Load results
with open('$RESULTS_FILE', 'r') as f:
    data = json.load(f)

remediation_count = 0
success_count = 0

for vuln in data.get('vulnerabilities', []):
    vuln_id = vuln.get('vid', 'Unknown')
    description = vuln.get('description', 'Unknown')
    evidence = vuln.get('evidence', 'No evidence')

    print(f"Attempting remediation for vulnerability {vuln_id}: {description}")
    remediation_count += 1

    if run_remediation(vuln_id, description, evidence):
        success_count += 1

print("\nRemediation Summary:")
print(f"Total remediations attempted: {remediation_count}")
print(f"Successful remediations: {success_count}")
print(f"Failed remediations: {remediation_count - success_count}")
EOF

echo "Remediation completed. Check $REMEDIATION_LOG for details."
echo "Remediation completed. Check $REMEDIATION_LOG for details."
Monitoring and Alerting Script¶
#!/bin/bash
# kube-hunter-monitor.sh

set -e

# Configuration
SLACK_WEBHOOK_URL="${SLACK_WEBHOOK_URL:-}"
EMAIL_RECIPIENTS="${EMAIL_RECIPIENTS:-}"
THRESHOLD_HIGH="${THRESHOLD_HIGH:-0}"
THRESHOLD_MEDIUM="${THRESHOLD_MEDIUM:-3}"

# Function to send Slack notification
send_slack_notification() {
    local message="$1"
    local color="$2"

    if [ -n "$SLACK_WEBHOOK_URL" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{
                \"attachments\": [{
                    \"color\": \"$color\",
                    \"title\": \"Kube-hunter Security Scan Results\",
                    \"text\": \"$message\",
                    \"footer\": \"Kube-hunter Monitor\",
                    \"ts\": $(date +%s)
                }]
            }" \
            "$SLACK_WEBHOOK_URL"
    fi
}

# Function to send email notification
send_email_notification() {
    local subject="$1"
    local body="$2"

    if [ -n "$EMAIL_RECIPIENTS" ]; then
        # printf '%b' interprets the \n escapes used in the message body
        printf '%b\n' "$body" | mail -s "$subject" "$EMAIL_RECIPIENTS"
    fi
}

# Run kube-hunter scan
echo "Running Kube-hunter security scan..."
kube-hunter --remote "${CLUSTER_ENDPOINT}" --report json --log scan-results.json

# Parse results
python3 << 'EOF'
import json

# Load results
with open('scan-results.json', 'r') as f:
    data = json.load(f)

# Count vulnerabilities by severity
high_count = 0
medium_count = 0
low_count = 0

for vuln in data.get('vulnerabilities', []):
    severity = vuln.get('severity', 'low').lower()
    if severity == 'high':
        high_count += 1
    elif severity == 'medium':
        medium_count += 1
    else:
        low_count += 1

# Write summary to file
with open('scan-summary.txt', 'w') as f:
    f.write(f"HIGH_COUNT={high_count}\n")
    f.write(f"MEDIUM_COUNT={medium_count}\n")
    f.write(f"LOW_COUNT={low_count}\n")

print(f"Scan completed: {high_count} high, {medium_count} medium, {low_count} low severity vulnerabilities")
EOF

# Load summary
source scan-summary.txt

# Determine alert level
if [ "$HIGH_COUNT" -gt "$THRESHOLD_HIGH" ]; then
    ALERT_LEVEL="critical"
    COLOR="danger"
elif [ "$MEDIUM_COUNT" -gt "$THRESHOLD_MEDIUM" ]; then
    ALERT_LEVEL="warning"
    COLOR="warning"
else
    ALERT_LEVEL="good"
    COLOR="good"
fi

# Create notification message (\n escapes keep the string valid inside the Slack JSON payload)
MESSAGE="Kube-hunter Security Scan Results:\n• High severity: $HIGH_COUNT\n• Medium severity: $MEDIUM_COUNT\n• Low severity: $LOW_COUNT\n• Alert level: $ALERT_LEVEL"

# Send notifications
send_slack_notification "$MESSAGE" "$COLOR"
send_email_notification "Kube-hunter Security Scan - $ALERT_LEVEL" "$MESSAGE"

# Exit with appropriate code
if [ "$HIGH_COUNT" -gt "$THRESHOLD_HIGH" ]; then
    echo "❌ Critical vulnerabilities found: $HIGH_COUNT high severity"
    exit 1
elif [ "$MEDIUM_COUNT" -gt "$THRESHOLD_MEDIUM" ]; then
    echo "⚠️ Warning threshold exceeded: $MEDIUM_COUNT medium severity"
    exit 2
else
    echo "✅ Security scan passed"
    exit 0
fi
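The monitor script pairs naturally with the regular-scanning practice described under Best Practices below; a sketch of scheduling it with cron, where the endpoint, webhook URL, and paths are placeholders:
# Example crontab entry: run the monitor every Monday at 03:00
0 3 * * 1 CLUSTER_ENDPOINT=api.example.com SLACK_WEBHOOK_URL=https://hooks.slack.com/services/XXX /usr/local/bin/kube-hunter-monitor.sh >> /var/log/kube-hunter-monitor.log 2>&1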
Troubleshooting
Common Issues¶
# Network connectivity issues
kube-hunter --remote target.com --timeout 60
# Permission issues
sudo kube-hunter --internal
# Kubernetes API access issues
export KUBECONFIG=/path/to/kubeconfig
kube-hunter --pod
# Docker socket access issues
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/kube-hunter:latest
# Debug mode
kube-hunter --verbose --remote target.com
Performance Optimization¶
# Reduce scan scope
kube-hunter --quick --remote target.com
# Limit concurrent hunters
kube-hunter --rate-limit 5 --remote target.com
# Increase timeout for slow networks
kube-hunter --timeout 300 --remote target.com
# Use specific hunters only
kube-hunter --hunter-modules api-server,kubelet --remote target.com
Configuration Validation¶
# Test connectivity
kube-hunter --remote target.com --quick
# Validate Kubernetes access
kubectl cluster-info
kube-hunter --pod
# Check hunter modules
kube-hunter --list-hunters
# Verify installation
kube-hunter --version
kube-hunter --help
Best Practices
Security Testing Strategy¶
1. **Regular Scanning**: Schedule weekly security scans
2. **Multi-Perspective Testing**: Use both remote and internal scans
3. **Active vs. Passive**: Balance thorough testing against operational safety
4. **Baseline Establishment**: Establish security baselines
5. **Trend Monitoring**: Track vulnerability trends over time (see the sketch below)
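A minimal sketch of baseline and trend tracking, comparing the latest JSON report against a stored baseline; file names are placeholders, the report fields are those used by the automation scripts above, and `jq` is assumed to be installed:
# Compare vulnerability counts and list findings that are new since the baseline
BASELINE=baseline-report.json
CURRENT=kube-hunter-results/latest.json

base_count=$(jq '.vulnerabilities | length' "$BASELINE")
curr_count=$(jq '.vulnerabilities | length' "$CURRENT")
echo "Baseline: $base_count vulnerabilities, current: $curr_count"

jq -r '.vulnerabilities[].vid' "$BASELINE" | sort -u > /tmp/baseline-vids.txt
jq -r '.vulnerabilities[].vid' "$CURRENT" | sort -u > /tmp/current-vids.txt
# Lines only present in the current report = newly introduced findings
comm -13 /tmp/baseline-vids.txt /tmp/current-vids.txt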
Scanning Guidelines¶
1. **Privilege Management**: Use least privilege for scans (see the sketch after this list)
2. **Network Segmentation**: Test from different network segments
3. **Timing Considerations**: Run scans during maintenance windows
4. **Documentation**: Document findings and remediation steps
5. **Validation**: Verify the effectiveness of remediations
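For the privilege-management point, a sketch of running the in-cluster Job under a dedicated ServiceAccount with no additional RoleBindings, so the scan only sees what an unprivileged workload would see; the namespace and resource names are placeholders:
# Dedicated namespace and ServiceAccount for least-privilege in-cluster scans
kubectl create namespace kube-hunter-scan
kubectl create serviceaccount kube-hunter -n kube-hunter-scan
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
  namespace: kube-hunter-scan
spec:
  template:
    spec:
      serviceAccountName: kube-hunter
      containers:
      - name: kube-hunter
        image: aquasec/kube-hunter:latest
        command: ["kube-hunter"]
        args: ["--pod"]
      restartPolicy: Never
  backoffLimit: 4
EOF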
Integration Best Practices¶
1. **CI/CD Integration**: Include scans in security pipelines
2. **Automated Remediation**: Implement safe auto-remediation
3. **Alert Management**: Set appropriate alert thresholds
4. **Reporting**: Generate actionable security reports
5. **Compliance**: Map findings to security and compliance requirements
This comprehensive Kube-hunter cheatsheet covers everything needed for professional Kubernetes security hunting and penetration testing, from basic usage through advanced automation and integration scenarios.