Kiterunner Cheat Sheet
Overview
Kiterunner is a fast, modular tool built for content discovery and API endpoint enumeration. It combines intelligent wordlist-based scanning with content analysis to uncover hidden API endpoints, directories, and files. Kiterunner is particularly effective against modern web applications and APIs, and offers advanced capabilities such as custom wordlists, response analysis, and integration with other security tools.
**Key Features:** Fast multi-threaded scanning, intelligent response analysis, custom wordlist support, API endpoint detection, content-based filtering, JSON/XML parsing, and comprehensive output formats.
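For orientation, a minimal quick-start is sketched below (assuming `kr` and the Assetnote wordlists are installed as described in the next section). Note that in the upstream tool, compiled `.kite` wordlists are normally driven with `kr scan`, while plain-text wordlists are used with `kr brute`:

```bash
# Minimal sketch: scan one host with a compiled .kite wordlist
kr scan https://example.com -w ~/.kiterunner/wordlists/routes-small.kite

# Brute-force the same host with a plain-text wordlist
kr brute https://example.com -w /path/to/wordlist.txt -t 50
```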
Installation and Setup
Binary Installation
```bash
# Download latest release from GitHub
wget https://github.com/assetnote/kiterunner/releases/latest/download/kiterunner_linux_amd64.tar.gz

# Extract the archive
tar -xzf kiterunner_linux_amd64.tar.gz

# Move to system path
sudo mv kr /usr/local/bin/

# Verify installation
kr --help

# Download wordlists
mkdir -p ~/.kiterunner/wordlists
cd ~/.kiterunner/wordlists

# Download common wordlists
wget https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/routes-large.kite
wget https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/routes-small.kite
wget https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/apiroutes-210228.kite

# Set permissions
chmod +x /usr/local/bin/kr
```
Docker Installation
```bash
# Pull Docker image
docker pull assetnote/kiterunner

# Create alias for easier usage
echo 'alias kr="docker run --rm -it -v $(pwd):/app assetnote/kiterunner"' >> ~/.bashrc
source ~/.bashrc

# Test installation
kr --help

# Run with volume mount for wordlists
docker run --rm -it -v $(pwd):/app -v ~/.kiterunner:/root/.kiterunner assetnote/kiterunner

# Create Docker wrapper script
cat > kiterunner-docker.sh << 'EOF'
#!/bin/bash
docker run --rm -it \
  -v $(pwd):/app \
  -v ~/.kiterunner:/root/.kiterunner \
  assetnote/kiterunner "$@"
EOF

chmod +x kiterunner-docker.sh
sudo mv kiterunner-docker.sh /usr/local/bin/kr-docker
```
Source Installation
```bash
# Install Go (if not already installed)
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.19.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# Clone repository
git clone https://github.com/assetnote/kiterunner.git
cd kiterunner

# Build from source
make build

# Install binary
sudo cp dist/kr /usr/local/bin/

# Verify installation
kr version

# Build with custom options
go build -ldflags="-s -w" -o kr cmd/kiterunner/main.go
```
Configuration and Setup
```bash
# Create configuration directory
mkdir -p ~/.kiterunner/{wordlists,output,config}

# Create default configuration
cat > ~/.kiterunner/config/config.yaml << 'EOF'
# Kiterunner Configuration
default:
  threads: 50
  timeout: 10
  delay: 0
  max_redirects: 3
  user_agent: "Kiterunner/1.0"

wordlists:
  default_path: "~/.kiterunner/wordlists"
  routes_large: "routes-large.kite"
  routes_small: "routes-small.kite"
  api_routes: "apiroutes-210228.kite"

output:
  default_format: "json"
  save_responses: false
  output_directory: "~/.kiterunner/output"

filters:
  status_codes:
    ignore: [404, 403, 400]
    interesting: [200, 201, 202, 301, 302, 500, 502, 503]
  content_length:
    min: 0
    max: 1048576  # 1MB
  response_time:
    max: 30000  # 30 seconds

proxy:
  enabled: false
  url: "http://127.0.0.1:8080"

headers:
  custom:
    - "X-Forwarded-For: 127.0.0.1"
    - "X-Real-IP: 127.0.0.1"
EOF

# Set environment variables
export KITERUNNER_CONFIG=~/.kiterunner/config/config.yaml
export KITERUNNER_WORDLISTS=~/.kiterunner/wordlists

# Create wordlist management script
cat > ~/.kiterunner/manage_wordlists.sh << 'EOF'
#!/bin/bash

WORDLIST_DIR="$HOME/.kiterunner/wordlists"

download_wordlists() {
    echo "Downloading Kiterunner wordlists..."

    # Official wordlists
    wget -O "$WORDLIST_DIR/routes-large.kite" \
        "https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/routes-large.kite"
    wget -O "$WORDLIST_DIR/routes-small.kite" \
        "https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/routes-small.kite"
    wget -O "$WORDLIST_DIR/apiroutes-210228.kite" \
        "https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/apiroutes-210228.kite"

    # Additional useful wordlists
    wget -O "$WORDLIST_DIR/common.txt" \
        "https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt"
    wget -O "$WORDLIST_DIR/api-endpoints.txt" \
        "https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/api/api-endpoints.txt"

    echo "Wordlists downloaded successfully"
}

update_wordlists() {
    echo "Updating wordlists..."
    download_wordlists
}

list_wordlists() {
    echo "Available wordlists:"
    ls -la "$WORDLIST_DIR"
}

case "$1" in
    download) download_wordlists ;;
    update) update_wordlists ;;
    list) list_wordlists ;;
    *) echo "Usage: $0 {download|update|list}" ;;
esac
EOF

chmod +x ~/.kiterunner/manage_wordlists.sh
```
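Once the management script is in place, wordlists can be fetched and verified like this:

```bash
# Download the wordlist set, then confirm it landed in ~/.kiterunner/wordlists
~/.kiterunner/manage_wordlists.sh download
~/.kiterunner/manage_wordlists.sh list
```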
Basic Usage and Commands
Simple Directory Discovery
```bash
# Basic directory discovery
kr brute -w ~/.kiterunner/wordlists/routes-small.kite -t 50 https://example.com

# Discovery with custom wordlist
kr brute -w /path/to/custom/wordlist.txt -t 100 https://example.com

# Multiple targets from file
echo -e "https://example.com\nhttps://test.com" > targets.txt
kr brute -w ~/.kiterunner/wordlists/routes-large.kite -A targets.txt

# Discovery with specific extensions
kr brute -w ~/.kiterunner/wordlists/routes-small.kite -x php,asp,aspx,jsp https://example.com

# Discovery with custom headers
kr brute -w ~/.kiterunner/wordlists/routes-small.kite -H "Authorization: Bearer token123" https://example.com

# Discovery with proxy
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --proxy http://127.0.0.1:8080 https://example.com
```
API Endpoint Discovery
```bash
# API endpoint discovery with specialized wordlist
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite -t 100 https://api.example.com

# API discovery with JSON output
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite -o json https://api.example.com

# API discovery with response analysis
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite --fail-status-codes 404,403 https://api.example.com

# API discovery with content length filtering
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite --filter-length 0 https://api.example.com

# API discovery with custom user agent
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite -H "User-Agent: Mobile App 1.0" https://api.example.com

# API versioning discovery
kr brute -w ~/.kiterunner/wordlists/apiroutes-210228.kite --prefixes /v1,/v2,/api/v1,/api/v2 https://example.com
```
Advanced Scan Options
```bash
# High-performance scanning
kr brute -w ~/.kiterunner/wordlists/routes-large.kite -t 200 --delay 0 https://example.com

# Scanning with custom timeout
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --timeout 30 https://example.com

# Scanning with retry logic
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --max-retries 3 https://example.com

# Scanning with rate limiting
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --delay 100 https://example.com

# Scanning with custom status code filtering
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --fail-status-codes 404,403,400,401 https://example.com

# Scanning with response size filtering
kr brute -w ~/.kiterunner/wordlists/routes-small.kite --filter-length 0,1,2,3 https://example.com
```
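These options can be combined in a single run; a hedged example that trades some speed for politeness against a rate-limited target:

```bash
# Moderate thread count, short timeout, small delay, and noise filtering combined
kr brute -w ~/.kiterunner/wordlists/routes-small.kite \
  -t 50 --timeout 5 --delay 50 \
  --fail-status-codes 404,403,400 https://example.com
```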
Advanced Discovery Techniques
Custom Wordlist Creation
```bash
#!/bin/bash
# Create custom wordlists for specific applications

create_api_wordlist() {
    local output_file="$1"

    cat > "$output_file" << 'EOF'
# API Endpoints
/api
/api/v1
/api/v2
/api/v3
/rest
/rest/v1
/rest/v2
/graphql
/graph
/gql

# Authentication endpoints
/auth
/login
/logout
/signin
/signout
/register
/signup
/oauth
/token
/refresh
/verify

# User management
/users
/user
/profile
/account
/accounts
/me
/settings
/preferences

# Data endpoints
/data
/export
/import
/backup
/restore
/sync
/upload
/download

# Admin endpoints
/admin
/administrator
/management
/manage
/dashboard
/panel
/control

# Common API patterns
/health
/status
/ping
/version
/info
/metrics
/stats
/analytics

# CRUD operations
/create
/read
/update
/delete
/list
/search
/find
/get
/post
/put
/patch

# File operations
/files
/documents
/images
/media
/assets
/static
/public
/private

# Configuration
/config
/configuration
/settings
/options
/parameters
/env
/environment
EOF

    echo "API wordlist created: $output_file"
}

create_technology_specific_wordlist() {
    local technology="$1"
    local output_file="$2"

    case "$technology" in
        "spring")
            cat > "$output_file" << 'EOF'
/actuator
/actuator/health
/actuator/info
/actuator/metrics
/actuator/env
/actuator/configprops
/actuator/mappings
/actuator/beans
/actuator/trace
/actuator/dump
/actuator/autoconfig
/management
/management/health
/management/info
EOF
            ;;
        "django")
            cat > "$output_file" << 'EOF'
/admin
/admin/
/django-admin
/debug
/static
/media
/api
/api-auth
/api-token-auth
/accounts
/accounts/login
/accounts/logout
/accounts/signup
EOF
            ;;
        "laravel")
            cat > "$output_file" << 'EOF'
/api
/admin
/dashboard
/telescope
/horizon
/nova
/storage
/public
/vendor
/artisan
/routes
/config
/cache
/session
EOF
            ;;
        "nodejs")
            cat > "$output_file" << 'EOF'
/api
/admin
/auth
/login
/logout
/register
/users
/user
/profile
/dashboard
/health
/status
/metrics
/debug
/console
/socket.io
/graphql
EOF
            ;;
    esac

    echo "$technology wordlist created: $output_file"
}

create_mobile_api_wordlist() {
    local output_file="$1"

    cat > "$output_file" << 'EOF'
# Mobile API endpoints
/mobile
/mobile/api
/mobile/v1
/mobile/v2
/app
/app/api
/app/v1
/app/v2

# Mobile-specific features
/push
/notifications
/fcm
/apns
/device
/devices
/registration
/unregister

# Mobile authentication
/mobile/auth
/mobile/login
/mobile/logout
/mobile/token
/mobile/refresh
/app/auth
/app/login
/app/token

# Mobile data sync
/sync
/mobile/sync
/app/sync
/offline
/cache
/local

# Mobile analytics
/analytics
/events
/tracking
/metrics
/crash
/crashes
/feedback

# Mobile updates
/update
/updates
/version
/versions
/download
/upgrade
EOF

    echo "Mobile API wordlist created: $output_file"
}

# Usage examples
create_api_wordlist "$HOME/.kiterunner/wordlists/custom-api.txt"
create_technology_specific_wordlist "spring" "$HOME/.kiterunner/wordlists/spring-endpoints.txt"
create_mobile_api_wordlist "$HOME/.kiterunner/wordlists/mobile-api.txt"
```
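The generated lists are plain-text wordlists, so they can be fed straight back into a brute-force run, for example:

```bash
# Use the freshly generated custom API wordlist against a target
kr brute -w ~/.kiterunner/wordlists/custom-api.txt -t 50 https://api.example.com
```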
Intelligent Response Analysis
```python
#!/usr/bin/env python3
# Advanced response analysis for Kiterunner results

import json
import re
import requests
import argparse
from urllib.parse import urljoin, urlparse
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

class KiterunnerAnalyzer:
    """Advanced analysis of Kiterunner discovery results"""
def __init__(self, results_file, target_base_url):
self.results_file = results_file
self.target_base_url = target_base_url
self.session = requests.Session()
self.session.headers.update({
'User-Agent': 'Kiterunner-Analyzer/1.0'
})
# Analysis results
self.interesting_endpoints = []
self.api_endpoints = []
self.admin_endpoints = []
self.sensitive_files = []
self.error_pages = []
def load_results(self):
"""Load Kiterunner results from JSON file"""
try:
with open(self.results_file, 'r') as f:
# Handle both single JSON objects and line-delimited JSON
content = f.read().strip()
if content.startswith('['):
# Standard JSON array
return json.loads(content)
else:
# Line-delimited JSON
results = []
for line in content.split('\n'):
if line.strip():
try:
results.append(json.loads(line))
except json.JSONDecodeError:
continue
return results
except Exception as e:
print(f"Error loading results: {e}")
return []
def analyze_results(self):
"""Analyze Kiterunner results for interesting findings"""
results = self.load_results()
print(f"Analyzing {len(results)} discovered endpoints...")
for result in results:
self._analyze_single_result(result)
# Generate analysis report
self._generate_analysis_report()
def _analyze_single_result(self, result):
"""Analyze a single discovery result"""
url = result.get('url', '')
status_code = result.get('status', 0)
content_length = result.get('length', 0)
response_time = result.get('time', 0)
# Categorize endpoints
if self._is_api_endpoint(url):
self.api_endpoints.append(result)
if self._is_admin_endpoint(url):
self.admin_endpoints.append(result)
if self._is_sensitive_file(url):
self.sensitive_files.append(result)
if self._is_error_page(result):
self.error_pages.append(result)
if self._is_interesting_endpoint(result):
self.interesting_endpoints.append(result)
def _is_api_endpoint(self, url):
"""Check if URL appears to be an API endpoint"""
api_patterns = [
r'/api/',
r'/rest/',
r'/graphql',
r'/v\d+/',
r'\.json$',
r'\.xml$',
r'/oauth',
r'/token'
]
return any(re.search(pattern, url, re.IGNORECASE) for pattern in api_patterns)
def _is_admin_endpoint(self, url):
"""Check if URL appears to be an admin endpoint"""
admin_patterns = [
r'/admin',
r'/administrator',
r'/management',
r'/dashboard',
r'/panel',
r'/control',
r'/console',
r'/manager'
]
return any(re.search(pattern, url, re.IGNORECASE) for pattern in admin_patterns)
def _is_sensitive_file(self, url):
"""Check if URL appears to be a sensitive file"""
sensitive_patterns = [
r'\.env$',
r'\.config$',
r'\.conf$',
r'\.ini$',
r'\.properties$',
r'\.yaml$',
r'\.yml$',
r'\.json$',
r'\.xml$',
r'\.sql$',
r'\.db$',
r'\.backup$',
r'\.bak$',
r'\.log$',
r'\.key$',
r'\.pem$',
r'\.crt$',
r'\.p12$',
r'\.pfx$'
]
return any(re.search(pattern, url, re.IGNORECASE) for pattern in sensitive_patterns)
def _is_error_page(self, result):
"""Check if result appears to be an error page with useful information"""
status_code = result.get('status', 0)
content_length = result.get('length', 0)
# Look for error pages that might leak information
if status_code in [500, 502, 503] and content_length > 100:
return True
return False
def _is_interesting_endpoint(self, result):
"""Check if endpoint is interesting for further investigation"""
url = result.get('url', '')
status_code = result.get('status', 0)
content_length = result.get('length', 0)
# Interesting status codes
if status_code in [200, 201, 202, 301, 302, 401, 403]:
return True
# Unusual content lengths
if content_length > 10000 or (content_length > 0 and content_length < 10):
return True
# Interesting URL patterns
interesting_patterns = [
r'/debug',
r'/test',
r'/dev',
r'/staging',
r'/backup',
r'/old',
r'/new',
r'/temp',
r'/tmp',
r'/upload',
r'/download',
r'/export',
r'/import'
]
if any(re.search(pattern, url, re.IGNORECASE) for pattern in interesting_patterns):
return True
return False
def deep_analysis(self, max_workers=10):
"""Perform deep analysis of interesting endpoints"""
print("Performing deep analysis of interesting endpoints...")
endpoints_to_analyze = (
self.interesting_endpoints +
self.api_endpoints +
self.admin_endpoints
)
with ThreadPoolExecutor(max_workers=max_workers) as executor:
future_to_endpoint = {
executor.submit(self._deep_analyze_endpoint, endpoint): endpoint
for endpoint in endpoints_to_analyze[:50] # Limit to first 50
}
for future in as_completed(future_to_endpoint):
endpoint = future_to_endpoint[future]
try:
analysis = future.result()
if analysis:
endpoint['deep_analysis'] = analysis
except Exception as e:
print(f"Error analyzing {endpoint.get('url', 'unknown')}: {e}")
def _deep_analyze_endpoint(self, endpoint):
"""Perform deep analysis of a single endpoint"""
url = endpoint.get('url', '')
try:
# Make request to endpoint
response = self.session.get(url, timeout=10, allow_redirects=True)
analysis = {
'final_url': response.url,
'status_code': response.status_code,
'headers': dict(response.headers),
'content_type': response.headers.get('content-type', ''),
'server': response.headers.get('server', ''),
'technologies': self._detect_technologies(response),
'security_headers': self._analyze_security_headers(response.headers),
'content_analysis': self._analyze_content(response.text),
'forms': self._extract_forms(response.text),
'links': self._extract_links(response.text, url)
}
return analysis
except Exception as e:
return {'error': str(e)}
def _detect_technologies(self, response):
"""Detect technologies used by the endpoint"""
technologies = []
# Check headers for technology indicators
server = response.headers.get('server', '').lower()
if 'apache' in server:
technologies.append('Apache')
if 'nginx' in server:
technologies.append('Nginx')
if 'iis' in server:
technologies.append('IIS')
# Check for framework-specific headers
if 'x-powered-by' in response.headers:
technologies.append(response.headers['x-powered-by'])
# Check content for technology indicators
content = response.text.lower()
tech_patterns = {
'React': r'react',
'Angular': r'angular',
'Vue.js': r'vue\.js',
'jQuery': r'jquery',
'Bootstrap': r'bootstrap',
'Django': r'django',
'Laravel': r'laravel',
'Spring': r'spring',
'Express': r'express',
'WordPress': r'wp-content|wordpress',
'Drupal': r'drupal',
'Joomla': r'joomla'
}
for tech, pattern in tech_patterns.items():
if re.search(pattern, content):
technologies.append(tech)
return list(set(technologies))
def _analyze_security_headers(self, headers):
"""Analyze security headers"""
security_headers = {
'Content-Security-Policy': headers.get('content-security-policy'),
'X-Frame-Options': headers.get('x-frame-options'),
'X-XSS-Protection': headers.get('x-xss-protection'),
'X-Content-Type-Options': headers.get('x-content-type-options'),
'Strict-Transport-Security': headers.get('strict-transport-security'),
'Referrer-Policy': headers.get('referrer-policy'),
'Permissions-Policy': headers.get('permissions-policy')
}
# Identify missing security headers
missing_headers = [name for name, value in security_headers.items() if not value]
return {
'present': {k: v for k, v in security_headers.items() if v},
'missing': missing_headers
}
def _analyze_content(self, content):
"""Analyze response content for interesting information"""
analysis = {
'length': len(content),
'contains_forms': '<form' in content.lower(),
'contains_javascript': '<script' in content.lower(),
'contains_comments': '<!--' in content,
'potential_secrets': [],
'error_messages': [],
'debug_info': []
}
# Look for potential secrets
secret_patterns = [
(r'api[_-]?key["\']?\s*[:=]\s*["\']([a-zA-Z0-9_-]{10,})["\']', 'API Key'),
(r'secret[_-]?key["\']?\s*[:=]\s*["\']([a-zA-Z0-9_-]{10,})["\']', 'Secret Key'),
(r'password["\']?\s*[:=]\s*["\']([a-zA-Z0-9_-]{8,})["\']', 'Password'),
(r'token["\']?\s*[:=]\s*["\']([a-zA-Z0-9_-]{10,})["\']', 'Token'),
(r'aws[_-]?access[_-]?key["\']?\s*[:=]\s*["\']([A-Z0-9]{20})["\']', 'AWS Access Key'),
(r'aws[_-]?secret[_-]?key["\']?\s*[:=]\s*["\']([a-zA-Z0-9/+=]{40})["\']', 'AWS Secret Key')
]
for pattern, secret_type in secret_patterns:
matches = re.findall(pattern, content, re.IGNORECASE)
for match in matches:
analysis['potential_secrets'].append({
'type': secret_type,
'value': match[:20] + '...' if len(match) > 20 else match
})
# Look for error messages
error_patterns = [
r'error[:\s]+([^\n\r]{10,100})',
r'exception[:\s]+([^\n\r]{10,100})',
r'stack trace[:\s]+([^\n\r]{10,100})',
r'fatal[:\s]+([^\n\r]{10,100})'
]
for pattern in error_patterns:
matches = re.findall(pattern, content, re.IGNORECASE)
analysis['error_messages'].extend(matches[:5]) # Limit to 5 matches
# Look for debug information
debug_patterns = [
r'debug[:\s]+([^\n\r]{10,100})',
r'console\.log\(["\']([^"\']{10,100})["\']',
r'var_dump\(["\']([^"\']{10,100})["\']',
r'print_r\(["\']([^"\']{10,100})["\']'
]
for pattern in debug_patterns:
matches = re.findall(pattern, content, re.IGNORECASE)
analysis['debug_info'].extend(matches[:5]) # Limit to 5 matches
return analysis
def _extract_forms(self, content):
"""Extract forms from HTML content"""
forms = []
form_pattern = r'<form[^>]*>(.*?)</form>'
for form_match in re.finditer(form_pattern, content, re.DOTALL | re.IGNORECASE):
form_html = form_match.group(0)
# Extract form attributes
action = re.search(r'action=["\']([^"\']*)["\']', form_html, re.IGNORECASE)
method = re.search(r'method=["\']([^"\']*)["\']', form_html, re.IGNORECASE)
# Extract input fields
inputs = []
input_pattern = r'<input[^>]*>'
for input_match in re.finditer(input_pattern, form_html, re.IGNORECASE):
input_html = input_match.group(0)
name = re.search(r'name=["\']([^"\']*)["\']', input_html, re.IGNORECASE)
input_type = re.search(r'type=["\']([^"\']*)["\']', input_html, re.IGNORECASE)
inputs.append({
'name': name.group(1) if name else '',
'type': input_type.group(1) if input_type else 'text'
})
forms.append({
'action': action.group(1) if action else '',
'method': method.group(1) if method else 'GET',
'inputs': inputs
})
return forms
def _extract_links(self, content, base_url):
"""Extract links from HTML content"""
links = []
link_pattern = r'href=["\']([^"\']*)["\']'
for link_match in re.finditer(link_pattern, content, re.IGNORECASE):
link = link_match.group(1)
# Convert relative URLs to absolute
if link.startswith('/'):
parsed_base = urlparse(base_url)
link = f"{parsed_base.scheme}://{parsed_base.netloc}{link}"
elif not link.startswith(('http://', 'https://')):
link = urljoin(base_url, link)
links.append(link)
# Remove duplicates and limit
return list(set(links))[:20]
def _generate_analysis_report(self):
"""Generate comprehensive analysis report"""
report = {
'summary': {
'total_endpoints': len(self.load_results()),
'api_endpoints': len(self.api_endpoints),
'admin_endpoints': len(self.admin_endpoints),
'sensitive_files': len(self.sensitive_files),
'error_pages': len(self.error_pages),
'interesting_endpoints': len(self.interesting_endpoints)
},
'findings': {
'api_endpoints': self.api_endpoints[:10], # Top 10
'admin_endpoints': self.admin_endpoints[:10],
'sensitive_files': self.sensitive_files[:10],
'error_pages': self.error_pages[:5],
'interesting_endpoints': self.interesting_endpoints[:15]
},
'recommendations': self._generate_recommendations()
}
# Save report
report_file = f"kiterunner_analysis_{int(time.time())}.json"
with open(report_file, 'w') as f:
json.dump(report, f, indent=2)
print(f"Analysis report saved: {report_file}")
# Print summary
self._print_summary(report)
def _generate_recommendations(self):
"""Generate security recommendations based on findings"""
recommendations = []
if self.admin_endpoints:
recommendations.append({
'priority': 'HIGH',
'category': 'Access Control',
'finding': f"Found {len(self.admin_endpoints)} admin endpoints",
'recommendation': "Review admin endpoints for proper authentication and authorization"
})
if self.sensitive_files:
recommendations.append({
'priority': 'HIGH',
'category': 'Information Disclosure',
'finding': f"Found {len(self.sensitive_files)} sensitive files",
'recommendation': "Remove or protect sensitive files from public access"
})
if self.error_pages:
recommendations.append({
'priority': 'MEDIUM',
'category': 'Information Disclosure',
'finding': f"Found {len(self.error_pages)} error pages",
'recommendation': "Review error pages for information leakage"
})
if self.api_endpoints:
recommendations.append({
'priority': 'MEDIUM',
'category': 'API Security',
'finding': f"Found {len(self.api_endpoints)} API endpoints",
'recommendation': "Test API endpoints for authentication, authorization, and input validation"
})
return recommendations
def _print_summary(self, report):
"""Print analysis summary"""
print("\n" + "="*60)
print("KITERUNNER ANALYSIS SUMMARY")
print("="*60)
summary = report['summary']
print(f"Total Endpoints Discovered: {summary['total_endpoints']}")
print(f"API Endpoints: {summary['api_endpoints']}")
print(f"Admin Endpoints: {summary['admin_endpoints']}")
print(f"Sensitive Files: {summary['sensitive_files']}")
print(f"Error Pages: {summary['error_pages']}")
print(f"Interesting Endpoints: {summary['interesting_endpoints']}")
print("\nTOP FINDINGS:")
print("-" * 40)
if self.admin_endpoints:
print("Admin Endpoints:")
for endpoint in self.admin_endpoints[:5]:
print(f" • {endpoint.get('url', 'N/A')} ({endpoint.get('status', 'N/A')})")
if self.sensitive_files:
print("\\nSensitive Files:")
for file_endpoint in self.sensitive_files[:5]:
print(f" • {file_endpoint.get('url', 'N/A')} ({file_endpoint.get('status', 'N/A')})")
print("\\nRECOMMENDATIONS:")
print("-" * 40)
for rec in report['recommendations']:
print(f"[{rec['priority']}] {rec['category']}: {rec['recommendation']}")
def main():
    parser = argparse.ArgumentParser(description='Analyze Kiterunner discovery results')
    parser.add_argument('results_file', help='Kiterunner results file (JSON)')
    parser.add_argument('target_url', help='Target base URL')
    parser.add_argument('--deep-analysis', action='store_true', help='Perform deep analysis')
    parser.add_argument('--threads', type=int, default=10, help='Number of threads for deep analysis')
args = parser.parse_args()
analyzer = KiterunnerAnalyzer(args.results_file, args.target_url)
analyzer.analyze_results()
if args.deep_analysis:
analyzer.deep_analysis(max_workers=args.threads)
if name == "main": main() ```_
Security Tool Integration
Burp Suite Integration
```python
#!/usr/bin/env python3
# Burp Suite integration for Kiterunner results

import json
import requests
import base64
from urllib.parse import urlparse

class BurpSuiteIntegration:
    """Integration with Burp Suite for discovered endpoints"""
def __init__(self, burp_api_url="http://127.0.0.1:1337", api_key=None):
self.burp_api_url = burp_api_url
self.api_key = api_key
self.session = requests.Session()
if api_key:
self.session.headers.update({'X-API-Key': api_key})
def import_kiterunner_results(self, results_file, target_scope=None):
"""Import Kiterunner results into Burp Suite"""
print("Importing Kiterunner results into Burp Suite...")
# Load results
with open(results_file, 'r') as f:
results = json.load(f)
imported_count = 0
for result in results:
url = result.get('url', '')
status_code = result.get('status', 0)
# Filter by scope if specified
if target_scope and not self._in_scope(url, target_scope):
continue
# Only import successful responses
if status_code in [200, 201, 202, 301, 302, 401, 403]:
if self._add_to_sitemap(url, result):
imported_count += 1
print(f"Imported {imported_count} endpoints into Burp Suite")
def _in_scope(self, url, scope_patterns):
"""Check if URL is in scope"""
for pattern in scope_patterns:
if pattern in url:
return True
return False
def _add_to_sitemap(self, url, result):
"""Add URL to Burp Suite sitemap"""
try:
# Create request data
request_data = {
'url': url,
'method': 'GET',
'headers': [],
'body': ''
}
# Create response data
response_data = {
'status_code': result.get('status', 200),
'headers': [],
'body': result.get('response', '')
}
# Add to sitemap via API
api_endpoint = f"{self.burp_api_url}/v0.1/sitemap"
payload = {
'request': request_data,
'response': response_data
}
response = self.session.post(api_endpoint, json=payload)
return response.status_code == 200
except Exception as e:
print(f"Error adding {url} to sitemap: {e}")
return False
def create_scan_configuration(self, results_file, output_file):
"""Create Burp Suite scan configuration from results"""
with open(results_file, 'r') as f:
results = json.load(f)
# Extract unique hosts and paths
hosts = set()
paths = []
for result in results:
url = result.get('url', '')
parsed = urlparse(url)
hosts.add(f"{parsed.scheme}://{parsed.netloc}")
paths.append(parsed.path)
# Create scan configuration
scan_config = {
'scan_configurations': [
{
'name': 'Kiterunner Discovery Scan',
'type': 'NamedConfiguration',
'built_in_configuration_name': 'Audit coverage - thorough'
}
],
'application_logins': [],
'resource_pool': {
'maximum_concurrent_scans': 10,
'maximum_requests_per_second': 100
},
'scan_targets': [
{
'urls': list(hosts),
'scope': {
'include': [
{
'rule': host,
'type': 'SimpleScopeRule'
} for host in hosts
]
}
}
]
}
# Save configuration
with open(output_file, 'w') as f:
json.dump(scan_config, f, indent=2)
print(f"Burp Suite scan configuration saved: {output_file}")
def create_burp_extension():
    """Create Burp Suite extension for Kiterunner integration"""
extension_code = '''
from burp import IBurpExtender, ITab, IHttpListener, IContextMenuFactory
from javax.swing import JPanel, JButton, JTextArea, JScrollPane, JLabel, JTextField
from java.awt import BorderLayout, GridLayout
import json
import subprocess
import threading
class BurpExtender(IBurpExtender, ITab, IHttpListener, IContextMenuFactory):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
callbacks.setExtensionName("Kiterunner Integration")
callbacks.registerHttpListener(self)
callbacks.registerContextMenuFactory(self)
# Create UI
self._create_ui()
callbacks.addSuiteTab(self)
print("Kiterunner Integration loaded successfully")
def _create_ui(self):
self._main_panel = JPanel(BorderLayout())
# Control panel
control_panel = JPanel(GridLayout(4, 2))
# Target URL input
control_panel.add(JLabel("Target URL:"))
self._target_url = JTextField("https://example.com")
control_panel.add(self._target_url)
# Wordlist selection
control_panel.add(JLabel("Wordlist:"))
self._wordlist_path = JTextField("/path/to/wordlist.kite")
control_panel.add(self._wordlist_path)
# Threads setting
control_panel.add(JLabel("Threads:"))
self._threads = JTextField("50")
control_panel.add(self._threads)
# Run button
self._run_button = JButton("Run Kiterunner", actionPerformed=self._run_kiterunner)
control_panel.add(self._run_button)
# Import button
self._import_button = JButton("Import Results", actionPerformed=self._import_results)
control_panel.add(self._import_button)
# Results area
self._results_area = JTextArea(20, 80)
self._results_area.setEditable(False)
results_scroll = JScrollPane(self._results_area)
# Add components
self._main_panel.add(control_panel, BorderLayout.NORTH)
self._main_panel.add(results_scroll, BorderLayout.CENTER)
def getTabCaption(self):
return "Kiterunner"
def getUiComponent(self):
return self._main_panel
def _run_kiterunner(self, event):
"""Run Kiterunner scan"""
target_url = self._target_url.getText()
wordlist_path = self._wordlist_path.getText()
threads = self._threads.getText()
if not target_url or not wordlist_path:
self._results_area.append("Please specify target URL and wordlist path\\n")
return
# Run in background thread
thread = threading.Thread(target=self._execute_kiterunner,
args=(target_url, wordlist_path, threads))
thread.daemon = True
thread.start()
def _execute_kiterunner(self, target_url, wordlist_path, threads):
"""Execute Kiterunner command"""
try:
self._results_area.append(f"Starting Kiterunner scan on {target_url}...\\n")
# Build command
cmd = [
"kr", "brute",
"-w", wordlist_path,
"-t", threads,
"-o", "json",
target_url
]
# Execute command
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, text=True)
# Read output
for line in process.stdout:
self._results_area.append(line)
self._results_area.setCaretPosition(self._results_area.getDocument().getLength())
# Wait for completion
process.wait()
if process.returncode == 0:
self._results_area.append("\\nKiterunner scan completed successfully\\n")
else:
error_output = process.stderr.read()
self._results_area.append(f"\\nKiterunner scan failed: {error_output}\\n")
except Exception as e:
self._results_area.append(f"\\nError running Kiterunner: {e}\\n")
def _import_results(self, event):
"""Import Kiterunner results into Burp Suite"""
# This would implement the import functionality
# For now, just show a message
self._results_area.append("Import functionality would be implemented here\\n")
'''
# Save extension code
with open('kiterunner_burp_extension.py', 'w') as f:
f.write(extension_code)
print("Burp Suite extension created: kiterunner_burp_extension.py")
def main():
    # Example usage
    burp_integration = BurpSuiteIntegration()
# Import results (example)
# burp_integration.import_kiterunner_results('kiterunner_results.json', ['example.com'])
# Create scan configuration
# burp_integration.create_scan_configuration('kiterunner_results.json', 'burp_scan_config.json')
# Create Burp extension
create_burp_extension()
if name == "main": main() ```_
Performance Optimization and Troubleshooting
Performance Tuning
```bash
#!/bin/bash
# Kiterunner performance optimization

optimize_kiterunner_performance() {
    echo "Optimizing Kiterunner performance..."
# 1. System-level optimizations
echo "Applying system optimizations..."
# Increase file descriptor limits
ulimit -n 65536
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
# Optimize network settings
echo 'net.core.somaxconn = 65536' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_max_syn_backlog = 65536' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_fin_timeout = 15' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_tw_reuse = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# 2. Kiterunner-specific optimizations
echo "Configuring Kiterunner optimizations..."
# Create optimized configuration
cat > ~/.kiterunner/performance_config.yaml << 'EOF'
# High-performance Kiterunner configuration
performance:
  threads: 200      # Increase threads for faster scanning
  timeout: 5        # Reduce timeout for faster responses
  delay: 0          # No delay between requests
  max_retries: 1    # Reduce retries to avoid slowdowns

network:
  keep_alive: true        # Reuse connections
  max_idle_conns: 100     # Increase connection pool
  max_conns_per_host: 50  # More connections per host

filtering:
  fail_status_codes: [404, 403, 400, 401, 500, 502, 503]
  filter_length: [0, 1, 2, 3]  # Filter out tiny responses

output:
  minimal: true           # Reduce output verbosity
  save_responses: false   # Don't save full responses
EOF
echo "Performance optimizations applied"
}
# Performance monitoring script
monitor_kiterunner_performance() {
    local target="$1"
    local wordlist="$2"
    local output_file="kiterunner_performance_$(date +%s).log"
echo "Monitoring Kiterunner performance for: $target"
echo "Wordlist: $wordlist"
# Start monitoring
{
echo "Timestamp,CPU%,Memory(MB),Network_Connections,Requests_Per_Second"
while true; do
if pgrep -f "kr" > /dev/null; then
local pid=$(pgrep -f "kr")
local cpu=$(ps -p $pid -o %cpu --no-headers 2>/dev/null || echo "0")
local mem=$(ps -p $pid -o rss --no-headers 2>/dev/null | awk '{print $1/1024}' || echo "0")
local connections=$(ss -tuln 2>/dev/null | wc -l || echo "0")
local timestamp=$(date +%s)
echo "$timestamp,$cpu,$mem,$connections,N/A"
fi
sleep 2
done
} > "$output_file" &
local monitor_pid=$!
# Run Kiterunner with timing
echo "Starting Kiterunner scan..."
time kr brute -w "$wordlist" -t 100 "$target"
# Stop monitoring
kill $monitor_pid 2>/dev/null
echo "Performance monitoring completed: $output_file"
}
# Benchmark different configurations
benchmark_kiterunner() {
    local target="$1"
    local wordlist="$2"
echo "Benchmarking Kiterunner configurations..."
# Test different thread counts
thread_counts=(10 25 50 100 200)
for threads in "${thread_counts[@]}"; do
echo "Testing with $threads threads..."
start_time=$(date +%s)
kr brute -w "$wordlist" -t "$threads" --timeout 5 "$target" > /dev/null 2>&1
end_time=$(date +%s)
duration=$((end_time - start_time))
echo "Threads: $threads, Duration: ${duration}s"
done
# Test different timeout values
timeouts=(1 3 5 10 15)
echo "Testing different timeout values..."
for timeout in "${timeouts[@]}"; do
echo "Testing with ${timeout}s timeout..."
start_time=$(date +%s)
kr brute -w "$wordlist" -t 50 --timeout "$timeout" "$target" > /dev/null 2>&1
end_time=$(date +%s)
duration=$((end_time - start_time))
echo "Timeout: ${timeout}s, Duration: ${duration}s"
done
}
# Memory optimization for large wordlists
optimize_memory_usage() {
    echo "Optimizing memory usage for large wordlists..."
# Split large wordlists into smaller chunks
split_wordlist() {
local input_wordlist="$1"
local chunk_size="${2:-1000}"
local output_dir="${3:-./wordlist_chunks}"
mkdir -p "$output_dir"
# Split wordlist
split -l "$chunk_size" "$input_wordlist" "$output_dir/chunk_"
echo "Wordlist split into chunks in: $output_dir"
}
# Process wordlist chunks sequentially
process_chunks() {
local target="$1"
local chunk_dir="$2"
local output_file="$3"
echo "Processing wordlist chunks for: $target"
for chunk in "$chunk_dir"/chunk_*; do
echo "Processing chunk: $(basename "$chunk")"
kr brute -w "$chunk" -t 50 "$target" >> "$output_file"
# Small delay to prevent overwhelming the target
sleep 1
done
echo "All chunks processed. Results in: $output_file"
}
# Example usage
# split_wordlist "large_wordlist.txt" 500 "./chunks"
# process_chunks "https://example.com" "./chunks" "results.json"
}
# Run optimizations
optimize_kiterunner_performance
```
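The monitoring and benchmarking helpers both take a target URL and a wordlist path as positional arguments, e.g.:

```bash
# Example invocations after sourcing the script above (paths are placeholders)
monitor_kiterunner_performance "https://example.com" ~/.kiterunner/wordlists/routes-small.kite
benchmark_kiterunner "https://example.com" ~/.kiterunner/wordlists/routes-small.kite
```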
Troubleshooting Common Issues
```bash
#!/bin/bash
# Kiterunner troubleshooting guide

troubleshoot_kiterunner() {
    echo "Kiterunner Troubleshooting Guide"
    echo "================================"
# Check if Kiterunner is installed
if ! command -v kr &> /dev/null; then
echo "❌ Kiterunner not found"
echo "Solution: Download and install Kiterunner from GitHub releases"
echo " wget https://github.com/assetnote/kiterunner/releases/latest/download/kiterunner_linux_amd64.tar.gz"
echo " tar -xzf kiterunner_linux_amd64.tar.gz"
echo " sudo mv kr /usr/local/bin/"
return 1
fi
| echo "✅ Kiterunner found: $(kr version 2>/dev/null | | echo 'Version unknown')" |
# Check wordlists
if [ ! -d ~/.kiterunner/wordlists ]; then
echo "❌ Wordlists directory not found"
echo "Solution: Create wordlists directory and download wordlists"
echo " mkdir -p ~/.kiterunner/wordlists"
echo " cd ~/.kiterunner/wordlists"
echo " wget https://raw.githubusercontent.com/assetnote/kiterunner/main/wordlists/routes-small.kite"
return 1
fi
echo "✅ Wordlists directory exists"
# Check for common wordlists
wordlists=(
"routes-small.kite"
"routes-large.kite"
"apiroutes-210228.kite"
)
for wordlist in "${wordlists[@]}"; do
if [ -f ~/.kiterunner/wordlists/"$wordlist" ]; then
echo "✅ Wordlist found: $wordlist"
else
echo "⚠️ Wordlist missing: $wordlist"
fi
done
# Check network connectivity
if ! curl -s --connect-timeout 5 https://httpbin.org/get > /dev/null; then
echo "❌ Network connectivity issues"
echo "Solution: Check internet connection and proxy settings"
return 1
fi
echo "✅ Network connectivity OK"
# Check system resources
available_memory=$(free -m | awk 'NR==2{printf "%.1f", $7/1024}')
if (( $(echo "$available_memory < 0.5" | bc -l) )); then
echo "⚠️ Low available memory: ${available_memory}GB"
echo "Solution: Free up memory or reduce thread count"
else
echo "✅ Available memory: ${available_memory}GB"
fi
# Check file descriptor limits
fd_limit=$(ulimit -n)
if [ "$fd_limit" -lt 1024 ]; then
echo "⚠️ Low file descriptor limit: $fd_limit"
echo "Solution: Increase file descriptor limit"
echo " ulimit -n 65536"
else
echo "✅ File descriptor limit: $fd_limit"
fi
echo "Troubleshooting completed"
}
# Common error solutions
fix_common_kiterunner_errors() {
    echo "Common Kiterunner Errors and Solutions"
    echo "======================================"
cat << 'EOF'
1. "kr: command not found"
   Solution:
   - Download Kiterunner binary from GitHub releases
   - Extract and move to /usr/local/bin/
   - Ensure /usr/local/bin/ is in your PATH

2. "permission denied" when running kr
   Solution:
   - Make the binary executable: chmod +x /usr/local/bin/kr
   - Check file ownership and permissions

3. "too many open files" error
   Solution:
   - Increase file descriptor limit: ulimit -n 65536
   - Add to ~/.bashrc for persistence
   - Reduce thread count if problem persists

4. "connection timeout" or "connection refused"
   Solution:
   - Check target URL is accessible
   - Verify firewall settings
   - Increase timeout value: --timeout 30
   - Check if target is rate limiting

5. "wordlist not found" error
   Solution:
   - Verify wordlist path is correct
   - Download official wordlists from GitHub
   - Use absolute paths for wordlists

6. Slow scanning performance
   Solution:
   - Increase thread count: -t 100
   - Reduce timeout: --timeout 5
   - Use smaller wordlists for testing
   - Check network latency to target

7. "out of memory" errors
   Solution:
   - Reduce thread count
   - Split large wordlists into chunks
   - Process targets sequentially
   - Monitor memory usage during scans

8. No results found (false negatives)
   Solution:
   - Try different wordlists
   - Adjust status code filtering
   - Check response length filtering
   - Verify target is responding correctly

9. SSL/TLS certificate errors
   Solution:
   - Use --insecure flag for self-signed certificates
   - Update system CA certificates
   - Check target SSL configuration

10. Rate limiting by target
    Solution:
    - Reduce thread count: -t 10
    - Add delay between requests: --delay 100
    - Use proxy rotation if available
    - Respect target's robots.txt and rate limits
EOF
}
# Test basic functionality
test_kiterunner_functionality() {
    echo "Testing Kiterunner Basic Functionality"
    echo "======================================"
# Test with httpbin.org (reliable test target)
test_target="https://httpbin.org"
echo "Testing basic discovery against: $test_target"
# Create minimal test wordlist
cat > /tmp/test_wordlist.txt << 'EOF'
get
post
put
delete
status
headers
ip
user-agent
EOF
# Run test scan
echo "Running test scan..."
if kr brute -w /tmp/test_wordlist.txt -t 5 --timeout 10 "$test_target" > /tmp/kr_test_output.txt 2>&1; then
echo "✅ Basic functionality test passed"
# Check if any endpoints were found
if [ -s /tmp/kr_test_output.txt ]; then
echo "✅ Endpoints discovered during test"
echo "Sample results:"
head -5 /tmp/kr_test_output.txt
else
echo "⚠️ No endpoints found (this might be normal)"
fi
else
echo "❌ Basic functionality test failed"
echo "Error output:"
cat /tmp/kr_test_output.txt
fi
# Clean up
rm -f /tmp/test_wordlist.txt /tmp/kr_test_output.txt
}
# Performance diagnostics
diagnose_performance_issues() {
    echo "Diagnosing Kiterunner Performance Issues"
    echo "========================================"
# Check system load
load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
echo "System load average: $load_avg"
# Check available CPU cores
cpu_cores=$(nproc)
echo "Available CPU cores: $cpu_cores"
# Check memory usage
memory_info=$(free -h | grep "Mem:")
echo "Memory info: $memory_info"
# Check disk I/O
if command -v iostat &> /dev/null; then
echo "Disk I/O statistics:"
iostat -x 1 1 | tail -n +4
fi
# Check network connectivity speed
echo "Testing network speed to common target..."
if command -v curl &> /dev/null; then
time_total=$(curl -o /dev/null -s -w "%{time_total}" https://httpbin.org/get)
echo "Network response time: ${time_total}s"
fi
# Recommendations based on findings
echo ""
echo "Performance Recommendations:"
echo "- Optimal thread count: $((cpu_cores * 10))"
echo "- Recommended timeout: 5-10 seconds"
echo "- Consider using delay if target is slow: --delay 50"
}
# Main troubleshooting function
main() {
    troubleshoot_kiterunner
    echo ""
    fix_common_kiterunner_errors
    echo ""
    test_kiterunner_functionality
    echo ""
    diagnose_performance_issues
}

# Run troubleshooting
main
```
Resources and Documentation
Official Resources
- Kiterunner GitHub Repository - Main repository and source code
- Assetnote Blog - Research and methodology articles
- Kiterunner Releases - Download the latest versions
Community Resources
- Bug Bounty Community - Discord community for discussions
- OWASP Testing Guide - Web application testing methodology
- API Security Testing - API security best practices
Integration Examples
- Reconnaissance Workflows - Comprehensive reconnaissance automation
- Bug Bounty Toolkit - Bug bounty hunting tools
- API Discovery Tools - Related API discovery tools
- Content Discovery - Web content discovery methodologies