SearchSploit Cheat Sheet
Overview
SearchSploit is a command-line search tool for Exploit-DB that lets you carry a copy of the Exploit Database with you wherever you go. It performs detailed offline searches through your locally checked-out copy of the repository. This is particularly useful during penetration testing engagements where internet connectivity is limited, or when you need to search thousands of exploits quickly without relying on web interfaces.
⚠️ Warning: SearchSploit provides access to real exploits that can cause damage to systems. Only use these exploits against systems you own or have explicit written permission to test. Unauthorized use of exploits may violate local laws and regulations.
Installation
Kali Linux Installation
bash
# SearchSploit is pre-installed on Kali Linux; confirm it runs
searchsploit -h
# Update the database
searchsploit -u
# Check the installation path
which searchsploit
# Verify the database location (on Kali the database lives here)
ls /usr/share/exploitdb
Ubuntu/Debian Installation
bash
# Install git if not already installed
sudo apt update
sudo apt install git
# Clone the ExploitDB repository (now hosted on GitLab)
sudo git clone https://gitlab.com/exploit-database/exploitdb.git /opt/exploitdb
# Create symbolic link
sudo ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit
# Update PATH (add to ~/.bashrc for persistence)
export PATH="$PATH:/opt/exploitdb"
# Verify the installation
searchsploit -h
Manual Installation
bash
# Download and extract (GitLab archive of the main branch)
wget https://gitlab.com/exploit-database/exploitdb/-/archive/main/exploitdb-main.zip
unzip exploitdb-main.zip
sudo mv exploitdb-main /opt/exploitdb
# Make searchsploit executable
chmod +x /opt/exploitdb/searchsploit
# Create symbolic link
sudo ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit
# Use the bundled .searchsploit_rc as your config
cp /opt/exploitdb/.searchsploit_rc ~/.searchsploit_rc
# Edit the path entries in ~/.searchsploit_rc if your checkout is not in
# the default location
Docker Installation
bash
# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y git
RUN git clone https://gitlab.com/exploit-database/exploitdb.git /opt/exploitdb
RUN ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit
WORKDIR /opt/exploitdb
ENTRYPOINT ["/opt/exploitdb/searchsploit"]
EOF
# Build Docker image
docker build -t searchsploit .
# Run SearchSploit in Docker
docker run --rm searchsploit apache
# Create an alias for easier usage (single quotes defer $(pwd) expansion)
echo "alias searchsploit='docker run --rm -v \"\$(pwd)\":/data searchsploit'" >> ~/.bashrc
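The image clones the database at build time, so it goes stale between rebuilds. One option, sketched here with assumed paths, is to bind-mount a host checkout so that `searchsploit -u` persists across container runs:

```shell
# Sketch (paths are assumptions): mount a host checkout so database
# updates survive container exits; the image then only supplies the
# searchsploit wrapper itself
# docker run --rm -v /opt/exploitdb:/opt/exploitdb searchsploit -u
# docker run --rm -v /opt/exploitdb:/opt/exploitdb searchsploit apache 2.4
```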
Basic Usage
Simple Searches
bash
# Basic search
searchsploit apache
# Search for multiple terms
searchsploit apache 2.4
# Quote a multi-word phrase so it is treated as a single search term
searchsploit "apache 2.4.7"
# Case-insensitive search (default)
searchsploit APACHE
# Case-sensitive search
searchsploit -c Apache
# Search in title only
searchsploit -t apache
# Search excluding specific terms
searchsploit apache --exclude="2.2"
Advanced Search Options
bash
# SearchSploit matches search terms against both the title and the file
# path, so platform and type directories can serve as filters
searchsploit apache windows
searchsploit linux kernel
searchsploit php webapps
# Search titles only, ignoring paths
searchsploit -t apache
# Exact, ordered match on the title
searchsploit -e "Apache Tomcat"
# Strict version matching ("1.1" will not match "1.0 < 1.3")
searchsploit -s apache 2.4
# Check every service found by an Nmap version scan (-sV, XML output)
searchsploit --nmap scan.xml
# Author and port have no search flags, but both are columns in the
# local CSV index; grep it directly (column layout varies by version)
grep -i metasploit /opt/exploitdb/files_exploits.csv | head
# Combine multiple path terms to narrow results
searchsploit linux local kernel
CVE and Vulnerability Searches
bash
# Search by CVE identifier
searchsploit --cve 2021-44228
searchsploit --cve CVE-2021-34527
# One CVE per invocation; loop over several
for cve in 2021-44228 2021-34527; do searchsploit --cve "$cve"; done
# Search by vulnerability type
searchsploit "buffer overflow"
searchsploit "sql injection"
searchsploit "privilege escalation"
searchsploit "remote code execution"
Date-based Searches
bash
# SearchSploit has no date flags, but the local CSV index includes a
# publication-date column that can be filtered directly
# (column layout varies between database versions)
grep -i apache /opt/exploitdb/files_exploits.csv | grep '2021-'
# Exploits published before a given year
grep -i linux /opt/exploitdb/files_exploits.csv | grep -E '201[0-8]-'
# Newest results first via JSON output (the date field is named
# Date_Published in newer databases, Date in older ones)
searchsploit -j kernel | jq -r '.RESULTS_EXPLOIT[] | "\(.Date_Published // .Date) \(.Title)"' | sort -r | head
Output Formatting and Display
Output Formats
bash
# Default table format
searchsploit apache
# Verbose output with full paths
searchsploit -v apache
# JSON output
searchsploit -j apache
# Examine an exploit by EDB-ID in your pager
searchsploit -x 50383
# Show exploit-db.com URLs instead of local paths
searchsploit -w apache
# CSV output (crude pipe-to-comma substitution)
searchsploit apache | sed 's/|/,/g' > results.csv
# Colour highlighting is on by default; --colour turns it off
searchsploit --colour apache
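The `sed` substitution keeps the table's box-drawing rows; a slightly more careful sketch (the function name `table_to_csv` is ours) keeps only rows containing the column separator before collapsing it:

```shell
# Keep only rows containing the column separator "|" (data and header
# rows), then collapse the padded separator into a single comma
table_to_csv() {
  grep '|' | sed 's/ *| */,/g'
}

# e.g. searchsploit apache | table_to_csv > results.csv
```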
Filtering and Limiting Results
bash
# Limit number of results
searchsploit apache | head -10
# Show EDB-IDs instead of local paths
searchsploit --id apache
# Filter by specific platforms
searchsploit apache | grep -i linux
searchsploit apache | grep -i windows
# Filter by exploit type
searchsploit apache | grep -i remote
searchsploit apache | grep -i local
# Sort by date, newest first (the date field is Date_Published in newer
# databases, Date in older ones)
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[] | "\(.Date_Published // .Date) \(.Title)"' | sort -r
Search Result Analysis
bash
# Count total results
searchsploit apache | wc -l
# Count by platform
searchsploit apache | grep -c linux
searchsploit apache | grep -c windows
# Count by type
searchsploit apache | grep -c remote
searchsploit apache | grep -c local
# Extract unique platforms
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[].Platform' | sort | uniq -c
# Extract unique authors
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[].Author' | sort | uniq -c
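The per-platform counts above need one `grep` per platform; a small awk sketch (the function name is ours) tallies every platform in one pass by reading the first component of the path column:

```shell
# Tally results by platform from the default table output; local paths
# in the second column look like platform/type/EDB-ID.ext
platform_counts() {
  grep '|' | awk -F'|' 'NF > 1 { gsub(/ /, "", $2); split($2, p, "/"); print p[1] }' \
    | sort | uniq -c | sort -rn
}

# e.g. searchsploit apache | platform_counts
```

Note that the table's header row contributes one stray `Path` entry to the tally.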
Exploit Management
Copying and Downloading Exploits
bash
# Copy exploit to current directory
searchsploit -m 50383
# Copy multiple exploits
searchsploit -m 50383 50384 50385
# -m always copies into the current directory; cd first to go elsewhere
(cd /tmp/exploits && searchsploit -m 50383)
# Copy by full path instead of EDB-ID
searchsploit -m exploits/linux/local/50383.c
# Copy the first few exploits from search results
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[]["EDB-ID"]' | head -5 | xargs searchsploit -m
# Batch copy exploits
echo "50383,50384,50385" | tr ',' '\n' | xargs -I {} searchsploit -m {}
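Since `-m` always copies into the current working directory, a helper like this hypothetical `mirror_results` function (which assumes `jq` is installed) works around that by changing directory in a subshell:

```shell
# Mirror the first N results of a search into a target directory.
# searchsploit -m copies into $PWD, hence the (cd ...) subshell.
mirror_results() {
  local term="$1" dest="$2" limit="${3:-5}"
  mkdir -p "$dest"
  searchsploit -j "$term" \
    | jq -r '.RESULTS_EXPLOIT[]["EDB-ID"]' \
    | head -n "$limit" \
    | while read -r id; do
        (cd "$dest" && searchsploit -m "$id")
      done
}

# e.g. mirror_results "apache 2.4" ./apache_exploits 3
```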
Viewing and Examining Exploits
bash
# View exploit content
searchsploit -x 50383
# Copy an exploit, then open it in your editor
searchsploit -m 50383 && ${EDITOR:-vi} 50383.c
# View an exploit with syntax highlighting (requires the highlight package)
searchsploit -m 50383 && highlight --syntax=c 50383.c
# View exploit metadata
searchsploit -j apache | jq '.RESULTS_EXPLOIT[] | select(.["EDB-ID"] == "50383")'
# Show the full local path for an EDB-ID (also copied to the clipboard
# when a clipboard tool such as xclip is available)
searchsploit -p 50383
Exploit Organization
bash
# Create organized directory structure
mkdir -p exploits/{windows,linux,web,mobile}
# Copy exploits by platform (path terms stand in for platform filters)
searchsploit -j apache windows | jq -r '.RESULTS_EXPLOIT[]["EDB-ID"]' | head -10 | xargs -I {} sh -c 'cd exploits/windows && searchsploit -m {}'
# Copy web application exploits
searchsploit -j php webapps | jq -r '.RESULTS_EXPLOIT[]["EDB-ID"]' | head -10 | xargs -I {} sh -c 'cd exploits/web && searchsploit -m {}'
# Create exploit inventory
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[] | "\(.["EDB-ID"]),\(.Title),\(.Platform),\(.Type)"' > apache_exploits.csv
Database Management
Database Updates
bash
# Update ExploitDB database
searchsploit -u
# Force update (overwrite local changes)
cd /opt/exploitdb && git reset --hard && git pull
# Check for updates without applying
cd /opt/exploitdb && git fetch && git status
# Update specific branch
cd /opt/exploitdb && git pull origin main
# Verify the update
cd /opt/exploitdb && git log -1 --format="%h %cd"
Database Information
bash
# Count database entries (exploits and shellcodes)
wc -l /opt/exploitdb/files_exploits.csv /opt/exploitdb/files_shellcodes.csv
# Show where the database lives
dirname "$(readlink -f "$(which searchsploit)")"
# Verify repository integrity
cd /opt/exploitdb && git fsck
# Show when the database was last updated
cd /opt/exploitdb && git log -1 --format=%cd
# Show help and the full option summary
searchsploit -h
Database Maintenance
bash
# Clean up temporary files
find /opt/exploitdb -name "*.tmp" -delete
# Check disk usage
du -sh /opt/exploitdb
# Backup database
tar -czf exploitdb_backup_$(date +%Y%m%d).tar.gz /opt/exploitdb
# Restore database from backup
tar -xzf exploitdb_backup_20231201.tar.gz -C /
# Verify the database after maintenance
cd /opt/exploitdb && git fsck && searchsploit -t apache | head
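Updates and backups can be scheduled rather than run by hand; a sketch of crontab entries (times and paths are assumptions) added via `crontab -e`:

```shell
# m h dom mon dow  command
# weekly database update, Sunday 03:00
# 0 3 * * 0   searchsploit -u >> /var/log/exploitdb_update.log 2>&1
# monthly backup (note: % must be escaped as \% in crontab commands)
# 0 4 1 * *   tar -czf /backup/exploitdb_$(date +\%Y\%m\%d).tar.gz /opt/exploitdb
```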
Automation Scripts
Automated Vulnerability Assessment
bash
#!/bin/bash
# Automated vulnerability assessment using SearchSploit
TARGET_LIST="$1"
OUTPUT_DIR="searchsploit_assessment_$(date +%Y%m%d_%H%M%S)"
REPORT_FILE="$OUTPUT_DIR/vulnerability_assessment_report.html"
if [ -z "$TARGET_LIST" ] || [ ! -f "$TARGET_LIST" ]; then
echo "Usage: $0 <target_list_file>"
echo "Target list file should contain one software/service per line"
echo "Example: 'Apache 2.4.7', 'Windows 10', 'PHP 7.4'"
exit 1
fi
mkdir -p "$OUTPUT_DIR"
# Function to assess single target
assess_target() {
local target="$1"
local target_dir="$OUTPUT_DIR/$(echo "$target" | tr ' /' '_')"
echo "[+] Assessing: $target"
mkdir -p "$target_dir"
# Search for exploits
searchsploit -j "$target" > "$target_dir/search_results.json"
if [ -s "$target_dir/search_results.json" ]; then
# Parse and analyze results
python3 << EOF
import json
import os
from collections import defaultdict
# Read search results
with open('$target_dir/search_results.json', 'r') as f:
data = json.load(f)
exploits = data.get('RESULTS_EXPLOIT', [])
shellcodes = data.get('RESULTS_SHELLCODE', [])
print(f" [+] Found {len(exploits)} exploits and {len(shellcodes)} shellcodes")
if not exploits and not shellcodes:
print(f" [-] No exploits found for: $target")
exit(0)
# Analyze exploits
analysis = {
'target': '$target',
'total_exploits': len(exploits),
'total_shellcodes': len(shellcodes),
'platforms': defaultdict(int),
'types': defaultdict(int),
'years': defaultdict(int),
'severity_assessment': 'Unknown',
'high_priority_exploits': []
}
for exploit in exploits:
platform = exploit.get('Platform', 'Unknown')
exploit_type = exploit.get('Type', 'Unknown')
date = exploit.get('Date', '')
title = exploit.get('Title', '').lower()
analysis['platforms'][platform] += 1
analysis['types'][exploit_type] += 1
if date:
year = date.split('-')[0]
analysis['years'][year] += 1
# Identify high-priority exploits
if any(keyword in title for keyword in ['remote', 'rce', 'privilege', 'escalation', 'buffer overflow']):
analysis['high_priority_exploits'].append(exploit)
# Assess severity
total_exploits = len(exploits)
high_priority_count = len(analysis['high_priority_exploits'])
remote_count = analysis['types'].get('remote', 0)
if high_priority_count > 5 or remote_count > 3:
analysis['severity_assessment'] = 'Critical'
elif high_priority_count > 2 or remote_count > 1:
analysis['severity_assessment'] = 'High'
elif total_exploits > 5:
analysis['severity_assessment'] = 'Medium'
else:
analysis['severity_assessment'] = 'Low'
# Save analysis
with open('$target_dir/analysis.json', 'w') as f:
json.dump(analysis, f, indent=2, default=str)
print(f" [+] Severity assessment: {analysis['severity_assessment']}")
print(f" [+] High-priority exploits: {high_priority_count}")
EOF
# Download high-priority exploits
if [ -f "$target_dir/analysis.json" ]; then
python3 << EOF
import json
with open('$target_dir/analysis.json', 'r') as f:
analysis = json.load(f)
high_priority = analysis.get('high_priority_exploits', [])[:10] # Limit to 10
if high_priority:
with open('$target_dir/priority_exploits.txt', 'w') as f:
for exploit in high_priority:
f.write(f"{exploit.get('EDB-ID', '')}\\n")
EOF
# Download priority exploits
if [ -f "$target_dir/priority_exploits.txt" ]; then
while read -r edb_id; do
if [ -n "$edb_id" ]; then
(cd "$target_dir" && searchsploit -m "$edb_id") 2>/dev/null || true
fi
done < "$target_dir/priority_exploits.txt"
fi
fi
return 0
else
echo " [-] No exploits found for: $target"
return 1
fi
}
# Function to generate comprehensive report
generate_report() {
echo "[+] Generating comprehensive assessment report"
python3 << EOF
import json
import os
import glob
from datetime import datetime
from collections import defaultdict
# Collect all analysis data
all_analyses = []
for analysis_file in glob.glob('$OUTPUT_DIR/*/analysis.json'):
try:
with open(analysis_file, 'r') as f:
data = json.load(f)
all_analyses.append(data)
except:
continue
# Calculate overall statistics
total_targets = len(all_analyses)
total_exploits = sum(a.get('total_exploits', 0) for a in all_analyses)
total_shellcodes = sum(a.get('total_shellcodes', 0) for a in all_analyses)
severity_counts = defaultdict(int)
platform_counts = defaultdict(int)
type_counts = defaultdict(int)
for analysis in all_analyses:
severity_counts[analysis.get('severity_assessment', 'Unknown')] += 1
for platform, count in analysis.get('platforms', {}).items():
platform_counts[platform] += count
for exploit_type, count in analysis.get('types', {}).items():
type_counts[exploit_type] += count
# Generate HTML report
html_content = f"""
<!DOCTYPE html>
<html>
<head>
<title>SearchSploit Vulnerability Assessment Report</title>
<style>
body {{ font-family: Arial, sans-serif; margin: 20px; }}
.header {{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; }}
.summary {{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }}
.target {{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }}
.critical {{ border-color: #f44336; background-color: #ffebee; }}
.high {{ border-color: #ff9800; background-color: #fff3e0; }}
.medium {{ border-color: #2196f3; background-color: #e3f2fd; }}
.low {{ border-color: #4caf50; background-color: #e8f5e8; }}
table {{ border-collapse: collapse; width: 100%; margin: 10px 0; }}
th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
th {{ background-color: #f2f2f2; }}
.chart {{ margin: 20px 0; }}
</style>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
<div class="header">
<h1>SearchSploit Vulnerability Assessment Report</h1>
<p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
</div>
<div class="summary">
<h2>Executive Summary</h2>
<table>
<tr><th>Metric</th><th>Value</th></tr>
<tr><td>Targets Assessed</td><td>{total_targets}</td></tr>
<tr><td>Total Exploits Found</td><td>{total_exploits}</td></tr>
<tr><td>Total Shellcodes Found</td><td>{total_shellcodes}</td></tr>
<tr><td>Critical Risk Targets</td><td>{severity_counts.get('Critical', 0)}</td></tr>
<tr><td>High Risk Targets</td><td>{severity_counts.get('High', 0)}</td></tr>
<tr><td>Medium Risk Targets</td><td>{severity_counts.get('Medium', 0)}</td></tr>
<tr><td>Low Risk Targets</td><td>{severity_counts.get('Low', 0)}</td></tr>
</table>
</div>
<div class="summary">
<h2>Risk Distribution</h2>
<div class="chart">
<canvas id="riskChart" width="400" height="200"></canvas>
</div>
</div>
<div class="summary">
<h2>Platform Distribution</h2>
<table>
<tr><th>Platform</th><th>Exploit Count</th></tr>
"""
for platform, count in sorted(platform_counts.items(), key=lambda x: x[1], reverse=True)[:10]:
html_content += f" <tr><td>{platform}</td><td>{count}</td></tr>\\n"
html_content += """
</table>
</div>
<h2>Individual Target Assessments</h2>
"""
# Add individual target details
for analysis in sorted(all_analyses, key=lambda x: x.get('total_exploits', 0), reverse=True):
severity = analysis.get('severity_assessment', 'Unknown').lower()
target = analysis.get('target', 'Unknown')
html_content += f"""
<div class="target {severity}">
<h3>{target}</h3>
<p><strong>Risk Level:</strong> {analysis.get('severity_assessment', 'Unknown')}</p>
<p><strong>Total Exploits:</strong> {analysis.get('total_exploits', 0)}</p>
<p><strong>High-Priority Exploits:</strong> {len(analysis.get('high_priority_exploits', []))}</p>
<h4>Platform Breakdown:</h4>
<table>
<tr><th>Platform</th><th>Count</th></tr>
"""
for platform, count in analysis.get('platforms', {}).items():
html_content += f" <tr><td>{platform}</td><td>{count}</td></tr>\\n"
html_content += """
</table>
</div>
"""
html_content += f"""
<script>
const ctx = document.getElementById('riskChart').getContext('2d');
const chart = new Chart(ctx, {{
type: 'doughnut',
data: {{
labels: ['Critical', 'High', 'Medium', 'Low'],
datasets: [{{
data: [{severity_counts.get('Critical', 0)}, {severity_counts.get('High', 0)}, {severity_counts.get('Medium', 0)}, {severity_counts.get('Low', 0)}],
backgroundColor: ['#f44336', '#ff9800', '#2196f3', '#4caf50']
}}]
}},
options: {{
responsive: true,
plugins: {{
title: {{
display: true,
text: 'Risk Level Distribution'
}}
}}
}}
}});
</script>
</body>
</html>
"""
with open('$REPORT_FILE', 'w') as f:
f.write(html_content)
print(f"[+] Comprehensive report generated: $REPORT_FILE")
EOF
}
# Function to generate CSV summary
generate_csv_summary() {
echo "[+] Generating CSV summary"
local csv_file="$OUTPUT_DIR/vulnerability_summary.csv"
echo "Target,Total_Exploits,Total_Shellcodes,Severity,High_Priority_Exploits,Top_Platform,Top_Type" > "$csv_file"
for analysis_file in "$OUTPUT_DIR"/*/analysis.json; do
if [ -f "$analysis_file" ]; then
python3 << EOF
import json
with open('$analysis_file', 'r') as f:
data = json.load(f)
target = data.get('target', 'Unknown').replace(',', ';')
total_exploits = data.get('total_exploits', 0)
total_shellcodes = data.get('total_shellcodes', 0)
severity = data.get('severity_assessment', 'Unknown')
high_priority = len(data.get('high_priority_exploits', []))
platforms = data.get('platforms', {})
top_platform = max(platforms.keys(), key=lambda k: platforms[k]) if platforms else 'Unknown'
types = data.get('types', {})
top_type = max(types.keys(), key=lambda k: types[k]) if types else 'Unknown'
print(f"{target},{total_exploits},{total_shellcodes},{severity},{high_priority},{top_platform},{top_type}")
EOF
fi
done >> "$csv_file"
echo "[+] CSV summary generated: $csv_file"
}
# Main execution
echo "[+] Starting automated vulnerability assessment"
echo "[+] Target list: $TARGET_LIST"
echo "[+] Output directory: $OUTPUT_DIR"
# Check dependencies
if ! command -v searchsploit &> /dev/null; then
echo "[-] SearchSploit not found. Please install ExploitDB first."
exit 1
fi
# Process each target
total_targets=0
successful_assessments=0
while read -r target; do
# Skip empty lines and comments
[[ -z "$target" || "$target" =~ ^#.*$ ]] && continue
total_targets=$((total_targets + 1))
if assess_target "$target"; then
successful_assessments=$((successful_assessments + 1))
fi
# Small delay to avoid overwhelming the system
sleep 1
done < "$TARGET_LIST"
echo "[+] Assessment completed"
echo " Total targets: $total_targets"
echo " Successful assessments: $successful_assessments"
# Generate reports
generate_report
generate_csv_summary
echo "[+] Vulnerability assessment completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $REPORT_FILE for detailed report"
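Assuming the script above is saved as assess.sh (our name), a run looks like this:

```shell
# Build a target list: one product per line, # lines are skipped
cat > targets.txt << 'EOF'
# software inventory from the engagement
Apache 2.4.7
PHP 7.4
EOF

# then make the script executable and run it against the list:
# chmod +x assess.sh
# ./assess.sh targets.txt
```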
Exploit Collection and Organization
bash
#!/bin/bash
# Automated exploit collection and organization
COLLECTION_NAME="$1"
SEARCH_TERMS="$2"
OUTPUT_DIR="exploit_collection_${COLLECTION_NAME}_$(date +%Y%m%d_%H%M%S)"
if [ -z "$COLLECTION_NAME" ] || [ -z "$SEARCH_TERMS" ]; then
echo "Usage: $0 <collection_name> <search_terms>"
echo "Example: $0 'web_exploits' 'php,apache,nginx,wordpress'"
exit 1
fi
mkdir -p "$OUTPUT_DIR"
# Function to collect exploits for search term
collect_exploits() {
local search_term="$1"
local term_dir="$OUTPUT_DIR/$(echo "$search_term" | tr ' /' '_')"
echo "[+] Collecting exploits for: $search_term"
mkdir -p "$term_dir"
# Search and save results
searchsploit -j "$search_term" > "$term_dir/search_results.json"
if [ ! -s "$term_dir/search_results.json" ]; then
echo " [-] No exploits found for: $search_term"
return 1
fi
# Parse and categorize exploits
python3 << EOF
import json
import os
from collections import defaultdict
# Read search results
with open('$term_dir/search_results.json', 'r') as f:
data = json.load(f)
exploits = data.get('RESULTS_EXPLOIT', [])
print(f" [+] Found {len(exploits)} exploits for $search_term")
# Categorize exploits
categories = {
'remote': [],
'local': [],
'webapps': [],
'dos': [],
'windows': [],
'linux': [],
'php': [],
'recent': [] # Last 2 years
}
for exploit in exploits:
exploit_type = exploit.get('Type', '').lower()
platform = exploit.get('Platform', '').lower()
title = exploit.get('Title', '').lower()
date = exploit.get('Date', '')
# Categorize by type
if 'remote' in exploit_type:
categories['remote'].append(exploit)
elif 'local' in exploit_type:
categories['local'].append(exploit)
elif 'webapps' in exploit_type:
categories['webapps'].append(exploit)
elif 'dos' in exploit_type:
categories['dos'].append(exploit)
# Categorize by platform
if 'windows' in platform:
categories['windows'].append(exploit)
elif 'linux' in platform:
categories['linux'].append(exploit)
elif 'php' in platform:
categories['php'].append(exploit)
# Check if recent (published within the last 2 years)
if date:
from datetime import datetime
year = int(date.split('-')[0])
if year >= datetime.now().year - 2:
categories['recent'].append(exploit)
# Save categorized data
for category, exploits_list in categories.items():
if exploits_list:
category_dir = f'$term_dir/{category}'
os.makedirs(category_dir, exist_ok=True)
with open(f'{category_dir}/exploits.json', 'w') as f:
json.dump(exploits_list, f, indent=2)
# Create download list
with open(f'{category_dir}/download_list.txt', 'w') as f:
for exploit in exploits_list[:20]: # Limit to 20 per category
f.write(f"{exploit.get('EDB-ID', '')}\\n")
print(f" [+] {category}: {len(exploits_list)} exploits")
print(f" [+] Categorization completed for $search_term")
EOF
# Download exploits by category
for category_dir in "$term_dir"/*; do
if [ -d "$category_dir" ] && [ -f "$category_dir/download_list.txt" ]; then
category_name=$(basename "$category_dir")
echo " [+] Downloading $category_name exploits"
while read -r edb_id; do
if [ -n "$edb_id" ]; then
(cd "$category_dir" && searchsploit -m "$edb_id") 2>/dev/null || true
fi
done < "$category_dir/download_list.txt"
fi
done
return 0
}
# Function to create collection index
create_collection_index() {
echo "[+] Creating collection index"
local index_file="$OUTPUT_DIR/collection_index.html"
python3 << EOF
import json
import os
import glob
from datetime import datetime
from collections import defaultdict
# Collect all exploit data
all_exploits = []
term_stats = defaultdict(lambda: {'total': 0, 'platforms': defaultdict(int), 'types': defaultdict(int)})
for results_file in glob.glob('$OUTPUT_DIR/*/search_results.json'):
term_name = os.path.basename(os.path.dirname(results_file))
try:
with open(results_file, 'r') as f:
data = json.load(f)
exploits = data.get('RESULTS_EXPLOIT', [])
all_exploits.extend(exploits)
# Calculate statistics
term_stats[term_name]['total'] = len(exploits)
for exploit in exploits:
platform = exploit.get('Platform', 'Unknown')
exploit_type = exploit.get('Type', 'Unknown')
term_stats[term_name]['platforms'][platform] += 1
term_stats[term_name]['types'][exploit_type] += 1
except:
continue
# Generate HTML index
html_content = f"""
<!DOCTYPE html>
<html>
<head>
<title>Exploit Collection: $COLLECTION_NAME</title>
<style>
body {{ font-family: Arial, sans-serif; margin: 20px; }}
.header {{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; }}
.term {{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }}
.category {{ margin: 10px 0; padding: 10px; background-color: #f9f9f9; border-radius: 3px; }}
table {{ border-collapse: collapse; width: 100%; margin: 10px 0; }}
th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
th {{ background-color: #f2f2f2; }}
.stats {{ display: flex; justify-content: space-between; }}
.stat-box {{ background-color: #e3f2fd; padding: 10px; border-radius: 5px; text-align: center; }}
</style>
</head>
<body>
<div class="header">
<h1>Exploit Collection: $COLLECTION_NAME</h1>
<p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
<p>Search Terms: $SEARCH_TERMS</p>
</div>
<div class="stats">
<div class="stat-box">
<h3>{len(all_exploits)}</h3>
<p>Total Exploits</p>
</div>
<div class="stat-box">
<h3>{len(term_stats)}</h3>
<p>Search Terms</p>
</div>
<div class="stat-box">
<h3>{len(set(e.get('Platform', 'Unknown') for e in all_exploits))}</h3>
<p>Platforms</p>
</div>
<div class="stat-box">
<h3>{len(set(e.get('Type', 'Unknown') for e in all_exploits))}</h3>
<p>Exploit Types</p>
</div>
</div>
<h2>Collection Contents</h2>
"""
# Add details for each search term
for term_name, stats in term_stats.items():
html_content += f"""
<div class="term">
<h3>{term_name.replace('_', ' ').title()}</h3>
<p><strong>Total Exploits:</strong> {stats['total']}</p>
<h4>Categories Available:</h4>
<div style="display: flex; flex-wrap: wrap; gap: 10px;">
"""
# List available categories
term_dir = f"$OUTPUT_DIR/{term_name}"
for category in ['remote', 'local', 'webapps', 'dos', 'windows', 'linux', 'php', 'recent']:
category_path = f"{term_dir}/{category}"
if os.path.exists(category_path):
exploit_count = 0
try:
with open(f"{category_path}/exploits.json", 'r') as f:
exploits = json.load(f)
exploit_count = len(exploits)
except:
pass
html_content += f"""
<div class="category">
<strong>{category.title()}</strong><br>
{exploit_count} exploits
</div>
"""
html_content += """
</div>
</div>
"""
html_content += """
<h2>Usage Instructions</h2>
<div class="term">
<h3>Directory Structure</h3>
<ul>
<li><strong>search_term/</strong> - Individual search term results</li>
<li><strong>search_term/category/</strong> - Exploits categorized by type/platform</li>
<li><strong>search_term/category/exploits.json</strong> - Exploit metadata</li>
<li><strong>search_term/category/[EDB-ID].*</strong> - Downloaded exploit files</li>
</ul>
<h3>Quick Access Commands</h3>
<pre>
# View all remote exploits
find . -name "remote" -type d
# List all downloaded exploit files
find . -name "*.c" -o -name "*.py" -o -name "*.rb" -o -name "*.pl"
# Search within collection
grep -r "buffer overflow" .
# Count exploits by type
find . -name "exploits.json" -exec jq -r '.[].Type' {} \; | sort | uniq -c
</pre>
</div>
</body>
</html>
"""
with open('$index_file', 'w') as f:
f.write(html_content)
print(f"[+] Collection index generated: $index_file")
EOF
}
# Function to create portable collection
create_portable_collection() {
echo "[+] Creating portable collection archive"
local archive_name="${COLLECTION_NAME}_exploit_collection_$(date +%Y%m%d).tar.gz"
# Create README
cat > "$OUTPUT_DIR/README.md" << EOF
# Exploit Collection: $COLLECTION_NAME
Generated: $(date)
Search Terms: $SEARCH_TERMS
## Directory Structure
- **search_term/**: Individual search term results
- **search_term/category/**: Exploits categorized by type/platform
- **search_term/category/exploits.json**: Exploit metadata
- **search_term/category/[EDB-ID].***: Downloaded exploit files
## Categories
- **remote**: Remote code execution exploits
- **local**: Local privilege escalation exploits
- **webapps**: Web application exploits
- **dos**: Denial of service exploits
- **windows**: Windows-specific exploits
- **linux**: Linux-specific exploits
- **php**: PHP-specific exploits
- **recent**: Recent exploits (last 2 years)
## Usage
1. Extract the archive to your desired location
2. Open collection_index.html for an overview
3. Navigate to specific categories for targeted exploits
4. Review exploit code before use
5. Ensure proper authorization before testing
## Legal Notice
These exploits are provided for educational and authorized testing purposes only.
Only use against systems you own or have explicit written permission to test.
Unauthorized use may violate local laws and regulations.
EOF
# Create archive
tar -czf "$archive_name" -C "$(dirname "$OUTPUT_DIR")" "$(basename "$OUTPUT_DIR")"
echo "[+] Portable collection created: $archive_name"
echo " Archive size: $(du -h "$archive_name" | cut -f1)"
}
# Main execution
echo "[+] Starting exploit collection and organization"
echo "[+] Collection name: $COLLECTION_NAME"
echo "[+] Search terms: $SEARCH_TERMS"
# Check dependencies
if ! command -v searchsploit &> /dev/null; then
echo "[-] SearchSploit not found. Please install ExploitDB first."
exit 1
fi
# Process each search term
IFS=',' read -ra TERMS <<< "$SEARCH_TERMS"
for term in "${TERMS[@]}"; do
# Trim whitespace
term=$(echo "$term" | xargs)
collect_exploits "$term"
done
# Create collection index and archive
create_collection_index
create_portable_collection
echo "[+] Exploit collection completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $OUTPUT_DIR/collection_index.html for overview"
Continuous Monitoring for New Exploits
bash
#!/bin/bash
# Continuous monitoring for new exploits
CONFIG_FILE="exploit_monitoring.conf"
LOG_DIR="exploit_monitoring_logs"
ALERT_EMAIL="security@company.com"
CHECK_INTERVAL=3600 # 1 hour
mkdir -p "$LOG_DIR"
# Create default configuration
if [ ! -f "$CONFIG_FILE" ]; then
cat > "$CONFIG_FILE" << 'EOF'
# Exploit Monitoring Configuration
# Monitoring targets (one per line)
MONITOR_TARGETS="
Apache
nginx
WordPress
PHP
Windows 10
Linux kernel
OpenSSL
"
# Alert settings
ALERT_ON_NEW_EXPLOITS=true
ALERT_ON_HIGH_SEVERITY=true
MINIMUM_SEVERITY_THRESHOLD=5
# Database settings
UPDATE_DATABASE=true
UPDATE_INTERVAL=86400 # 24 hours
# Notification settings
EMAIL_ALERTS=true
SLACK_WEBHOOK=""
DISCORD_WEBHOOK=""
EOF
echo "Created $CONFIG_FILE - please configure monitoring settings"
exit 1
fi
source "$CONFIG_FILE"
# Function to update ExploitDB
update_database() {
echo "[+] Updating ExploitDB database"
local update_log="$LOG_DIR/database_update_$(date +%Y%m%d_%H%M%S).log"
searchsploit -u > "$update_log" 2>&1
if [ $? -eq 0 ]; then
echo " [+] Database updated successfully"
return 0
else
echo " [-] Database update failed"
return 1
fi
}
# Function to check for new exploits
check_new_exploits() {
local target="$1"
local timestamp=$(date +%Y%m%d_%H%M%S)
local current_results="$LOG_DIR/${target}_${timestamp}.json"
local previous_results="$LOG_DIR/${target}_previous.json"
echo "[+] Checking for new exploits: $target"
# Get current exploits
searchsploit -j "$target" > "$current_results"
if [ ! -s "$current_results" ]; then
echo " [-] No exploits found for: $target"
return 1
fi
# Compare with previous results
if [ -f "$previous_results" ]; then
# Extract exploit IDs
local current_ids=$(jq -r '.RESULTS_EXPLOIT[]? | .["EDB-ID"]' "$current_results" 2>/dev/null | sort)
local previous_ids=$(jq -r '.RESULTS_EXPLOIT[]? | .["EDB-ID"]' "$previous_results" 2>/dev/null | sort)
# Find new exploits
local new_exploits=$(comm -23 <(echo "$current_ids") <(echo "$previous_ids"))
if [ -n "$new_exploits" ]; then
local new_count=$(echo "$new_exploits" | wc -l)
echo " [!] Found $new_count new exploits for: $target"
# Get details of new exploits
local new_exploits_details="$LOG_DIR/${target}_new_${timestamp}.json"
python3 << EOF
import json
# Read current results
with open('$current_results', 'r') as f:
data = json.load(f)
exploits = data.get('RESULTS_EXPLOIT', [])
new_ids = """$new_exploits""".strip().split('\n')
# Filter new exploits
new_exploits = [e for e in exploits if e.get('EDB-ID') in new_ids]
# Save new exploits
with open('$new_exploits_details', 'w') as f:
json.dump(new_exploits, f, indent=2)
print(f"New exploits saved: $new_exploits_details")
EOF
# Send alert
if [ "$ALERT_ON_NEW_EXPLOITS" = "true" ]; then
send_alert "NEW_EXPLOITS" "$target" "$new_count" "$new_exploits_details"
fi
return 0
else
echo " [+] No new exploits found for: $target"
fi
else
echo " [+] First scan for: $target"
fi
# Update previous results
cp "$current_results" "$previous_results"
return 0
}
# Function to assess exploit severity
assess_severity() {
    local exploits_file="$1"
    local target="$2"

    python3 << EOF
import json

try:
    with open('$exploits_file', 'r') as f:
        exploits = json.load(f)
    if not isinstance(exploits, list):
        exploits = exploits.get('RESULTS_EXPLOIT', [])

    # Severity scoring
    severity_score = 0
    high_severity_count = 0
    for exploit in exploits:
        title = exploit.get('Title', '').lower()
        exploit_type = exploit.get('Type', '').lower()
        # High severity indicators
        if any(keyword in title for keyword in ['remote', 'rce', 'buffer overflow', 'privilege escalation']):
            severity_score += 3
            high_severity_count += 1
        elif 'remote' in exploit_type:
            severity_score += 2
            high_severity_count += 1
        elif any(keyword in title for keyword in ['dos', 'denial of service']):
            severity_score += 1
        else:
            severity_score += 0.5

    print(f"Severity score: {severity_score}")
    print(f"High severity exploits: {high_severity_count}")

    # Check threshold
    if severity_score >= $MINIMUM_SEVERITY_THRESHOLD:
        print("ALERT_THRESHOLD_EXCEEDED")
except Exception as e:
    print(f"Error assessing severity: {e}")
EOF
}
# Function to send alerts
send_alert() {
    local alert_type="$1"
    local target="$2"
    local count="$3"
    local details_file="$4"
    local subject="[EXPLOIT ALERT] $alert_type: $target"
    local message="Alert: $count new exploits found for $target at $(date)"
    echo "[!] Sending alert: $subject"

    # Email alert
    if [ "$EMAIL_ALERTS" = "true" ] && [ -n "$ALERT_EMAIL" ]; then
        if [ -f "$details_file" ]; then
            echo "$message" | mail -s "$subject" -A "$details_file" "$ALERT_EMAIL" 2>/dev/null || \
                echo "Email alert failed"
        else
            echo "$message" | mail -s "$subject" "$ALERT_EMAIL" 2>/dev/null || \
                echo "Email alert failed"
        fi
    fi

    # Slack alert
    if [ -n "$SLACK_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"$subject: $message\"}" \
            "$SLACK_WEBHOOK" 2>/dev/null || echo "Slack alert failed"
    fi

    # Discord alert
    if [ -n "$DISCORD_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"content\":\"$subject: $message\"}" \
            "$DISCORD_WEBHOOK" 2>/dev/null || echo "Discord alert failed"
    fi
}
# Function to generate monitoring report
generate_monitoring_report() {
    echo "[+] Generating monitoring report"
    local report_file="$LOG_DIR/monitoring_report_$(date +%Y%m%d).html"

    python3 << EOF
import json
import glob
import os
from datetime import datetime, timedelta
from collections import defaultdict

# Collect monitoring data
monitoring_data = defaultdict(list)
total_new_exploits = 0

# Find all new-exploit files from the last 24 hours
cutoff_time = datetime.now() - timedelta(hours=24)
for new_file in glob.glob('$LOG_DIR/*_new_*.json'):
    try:
        # Extract timestamp from the filename: <target>_new_<YYYYmmdd_HHMMSS>.json
        filename = os.path.basename(new_file)
        timestamp_str = filename.split('_new_')[1].replace('.json', '')
        file_time = datetime.strptime(timestamp_str, '%Y%m%d_%H%M%S')
        if file_time >= cutoff_time:
            # Extract target name
            target = filename.split('_new_')[0]
            with open(new_file, 'r') as f:
                exploits = json.load(f)
            monitoring_data[target].extend(exploits)
            total_new_exploits += len(exploits)
    except (ValueError, IndexError, json.JSONDecodeError):
        continue

# Generate HTML report
html_content = f"""
<!DOCTYPE html>
<html>
<head>
<title>Exploit Monitoring Report</title>
<style>
body {{ font-family: Arial, sans-serif; margin: 20px; }}
.header {{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; }}
.alert {{ background-color: #ffebee; border: 1px solid #f44336; padding: 15px; border-radius: 5px; margin: 10px 0; }}
.target {{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }}
table {{ border-collapse: collapse; width: 100%; margin: 10px 0; }}
th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
th {{ background-color: #f2f2f2; }}
</style>
</head>
<body>
<div class="header">
<h1>Exploit Monitoring Report</h1>
<p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
<p>Monitoring Period: Last 24 hours</p>
</div>
<div class="alert">
<h2>⚠️ Alert Summary</h2>
<p><strong>Total New Exploits:</strong> {total_new_exploits}</p>
<p><strong>Affected Targets:</strong> {len(monitoring_data)}</p>
</div>
"""

if monitoring_data:
    html_content += "<h2>New Exploits by Target</h2>"
    for target, exploits in monitoring_data.items():
        html_content += f"""
<div class="target">
<h3>{target}</h3>
<p><strong>New Exploits:</strong> {len(exploits)}</p>
<table>
<tr><th>EDB-ID</th><th>Title</th><th>Platform</th><th>Type</th><th>Date</th></tr>
"""
        for exploit in exploits[:10]:  # Show at most 10 per target
            html_content += f"""
<tr>
<td><a href="https://www.exploit-db.com/exploits/{exploit.get('EDB-ID', '')}" target="_blank">{exploit.get('EDB-ID', '')}</a></td>
<td>{exploit.get('Title', '')}</td>
<td>{exploit.get('Platform', '')}</td>
<td>{exploit.get('Type', '')}</td>
<td>{exploit.get('Date', '')}</td>
</tr>
"""
        html_content += """
</table>
</div>
"""
else:
    html_content += """
<div class="target">
<h2>✅ No New Exploits</h2>
<p>No new exploits were detected in the last 24 hours for monitored targets.</p>
</div>
"""

html_content += """
</body>
</html>
"""

with open('$report_file', 'w') as f:
    f.write(html_content)
print("[+] Monitoring report generated: $report_file")
EOF
}
# Function to cleanup old logs
cleanup_logs() {
    echo "[+] Cleaning up old monitoring logs"
    # Keep JSON/log files for 30 days, HTML reports for 7
    find "$LOG_DIR" -name "*.json" -mtime +30 -delete
    find "$LOG_DIR" -name "*.log" -mtime +30 -delete
    find "$LOG_DIR" -name "*.html" -mtime +7 -delete
}
# Main monitoring loop
echo "[+] Starting continuous exploit monitoring"
echo "[+] Check interval: $((CHECK_INTERVAL / 60)) minutes"
last_update=0
while true; do
    echo "[+] Starting monitoring cycle at $(date)"

    # Update database if the refresh interval has elapsed
    current_time=$(date +%s)
    if [ "$UPDATE_DATABASE" = "true" ] && [ $((current_time - last_update)) -ge "$UPDATE_INTERVAL" ]; then
        if update_database; then
            last_update=$current_time
        fi
    fi

    # Check each monitored target
    echo "$MONITOR_TARGETS" | while read -r target; do
        # Skip empty lines
        [ -z "$target" ] && continue
        check_new_exploits "$target"
    done

    # Generate daily report and cleanup
    generate_monitoring_report
    cleanup_logs

    echo "[+] Monitoring cycle completed at $(date)"
    echo "[+] Next check in $((CHECK_INTERVAL / 60)) minutes"
    sleep "$CHECK_INTERVAL"
done
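The monitoring loop's change detection hinges on `comm -23`, which prints lines unique to its first (sorted) input. A minimal standalone sketch of that diff step, using made-up placeholder EDB-IDs:

```shell
# comm -23 prints lines unique to the first input; both must be sorted.
# The IDs below are made-up placeholders for two consecutive scans.
current_ids=$(printf '10001\n10002\n10003\n' | sort)
previous_ids=$(printf '10001\n10002\n' | sort)

new_exploits=$(comm -23 <(echo "$current_ids") <(echo "$previous_ids"))
new_count=$(echo "$new_exploits" | wc -l)
echo "New IDs: $new_exploits"    # prints: New IDs: 10003
echo "Count: $new_count"
```

Process substitution (`<(...)`) requires bash, which the monitoring script already assumes.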
Integration with Security Tools
Metasploit Integration
bash
# Find Apache exploits whose titles mention Metasploit
searchsploit metasploit apache
# Find exploits with Metasploit modules
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[] | select(.Title | contains("Metasploit")) | .["EDB-ID"]'
# Cross-reference with Metasploit database
msfconsole -q -x "search edb:12345; exit"
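The manual cross-reference above can be scripted: pull the EDB-IDs out of searchsploit's JSON and generate one msfconsole search per ID. This is a hedged sketch — `sample_json` is a made-up stand-in for real `searchsploit -j apache` output, and the msfconsole commands are only printed, not executed:

```shell
# Sketch: map EDB-IDs from searchsploit JSON to msfconsole searches.
# sample_json stands in for real `searchsploit -j apache` output.
sample_json='{"RESULTS_EXPLOIT":[{"Title":"Apache Foo","EDB-ID":"12345"},{"Title":"Apache Bar","EDB-ID":"67890"}]}'

edb_ids=$(echo "$sample_json" | python3 -c '
import json, sys
for e in json.load(sys.stdin).get("RESULTS_EXPLOIT", []):
    print(e.get("EDB-ID", ""))
')

for id in $edb_ids; do
    # Print the command; drop the echo to actually query Metasploit
    echo "msfconsole -q -x 'search edb:$id; exit'"
done
```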
Nmap Integration
bash
# Use SearchSploit with Nmap scan results
nmap -sV target.com | grep -E "^[0-9]+/tcp" | while read line; do
service=$(echo "$line" | awk '{print $3}')
version=$(echo "$line" | awk '{print $4" "$5}')
echo "Searching exploits for: $service $version"
searchsploit "$service $version"
done
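Instead of parsing Nmap's text output by hand, SearchSploit can read Nmap XML directly via its `--nmap` flag (this requires `xmllint` from libxml2). The sketch below uses a handcrafted minimal XML file so the flow works offline; in practice the file comes from `nmap -sV -oX scan.xml <target>`:

```shell
# Sketch: feed Nmap XML service-detection output to SearchSploit.
# The XML is a handcrafted stand-in for `nmap -sV -oX scan.xml <target>`.
cat > scan.xml << 'XML'
<?xml version="1.0"?>
<nmaprun>
  <host><ports>
    <port protocol="tcp" portid="80">
      <service name="http" product="Apache httpd" version="2.4.49"/>
    </port>
  </ports></host>
</nmaprun>
XML

# Skipped quietly if searchsploit is not on PATH
if command -v searchsploit >/dev/null 2>&1; then
    searchsploit --nmap scan.xml       # derive search terms from the XML
    searchsploit -v --nmap scan.xml    # -v tries more term combinations
fi
echo "scan.xml ready for searchsploit --nmap"
```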
# Create Nmap script using SearchSploit
cat > searchsploit.nse << 'EOF'
local shortport = require "shortport"
local stdnse = require "stdnse"

description = [[
Uses SearchSploit to find exploits for detected services.
]]

author = "Security Researcher"
license = "Same as Nmap--See https://nmap.org/book/man-legal.html"
categories = {"discovery", "safe"}

-- Run against any port where version detection produced a product name
portrule = function(host, port)
  return port.version ~= nil and port.version.product ~= nil
end

action = function(host, port)
  local product = port.version.product
  local version = port.version.version or ""
  local cmd = string.format("searchsploit '%s %s'", product, version)
  -- io.popen captures the command's output (os.execute only returns a status)
  local handle = io.popen(cmd)
  if not handle then
    return nil
  end
  local result = handle:read("*a")
  handle:close()
  return stdnse.format_output(true, result)
end
EOF
Burp Suite Integration
bash
# Export SearchSploit results for Burp Suite
searchsploit -j web | jq -r '.RESULTS_EXPLOIT[] | select(.Type | contains("webapps")) | .Title' > burp_payloads.txt
# Mirror webapps exploits into a working directory for Burp Suite
mkdir -p /tmp/burp_exploits && cd /tmp/burp_exploits
searchsploit -j web | jq -r '.RESULTS_EXPLOIT[] | select(.Type == "webapps") | .["EDB-ID"]' | while read -r id; do
    searchsploit -m "$id"
done
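The filtering step above can be illustrated offline with Python in place of jq. In this sketch `sample_json` is a made-up stand-in for real `searchsploit -j web` output (field names match searchsploit's JSON schema), and the result is a title list suitable as an Intruder payload file:

```shell
# Sketch: build a title list of webapps exploits from searchsploit JSON.
# sample_json is a made-up stand-in for `searchsploit -j web` output.
sample_json='{"RESULTS_EXPLOIT":[
 {"Title":"WebApp One - SQL Injection","Type":"webapps","EDB-ID":"11111"},
 {"Title":"Kernel Thing - Privilege Escalation","Type":"local","EDB-ID":"22222"},
 {"Title":"WebApp Two - XSS","Type":"webapps","EDB-ID":"33333"}]}'

echo "$sample_json" | python3 -c '
import json, sys
for e in json.load(sys.stdin).get("RESULTS_EXPLOIT", []):
    if e.get("Type") == "webapps":   # keep only web application exploits
        print(e["Title"])
' > burp_payloads.txt

cat burp_payloads.txt
```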
Troubleshooting
Common Issues
Database Problems
bash
# Database not found
searchsploit --path
ls -la /opt/exploitdb/
# Refresh the local copy of the database (re-pulls the repository)
searchsploit -u
# Fix permissions
sudo chown -R $USER:$USER /opt/exploitdb/
# Manual database update
cd /opt/exploitdb && git pull
Search Issues
bash
# No results found - adjust the query
searchsploit -t apache          # match the title only
searchsploit apache 2.4         # add a version term to narrow results
# Make sure the database is current
searchsploit -u
# Control matching behaviour
searchsploit -e "exact match"   # exact, as-phrased matching
searchsploit -c Apache          # case-sensitive (searches are case-insensitive by default)
File Access Problems
bash
# Permission denied
sudo chmod +x /opt/exploitdb/searchsploit
# File not found
searchsploit -p 12345
ls -la /opt/exploitdb/exploits/
# Copy issues - searchsploit -m mirrors into the current directory
cd /tmp && searchsploit -m 12345
ls -la /tmp/
Performance Issues
bash
# Slow searches - narrow the query
searchsploit apache linux       # extra terms narrow the result set
searchsploit -t apache          # title-only search
searchsploit apache | head -20  # limit displayed results
# Large database
du -sh /opt/exploitdb/
cd /opt/exploitdb && git gc --aggressive  # compact the git repository
# Memory issues
ulimit -v 1000000               # cap the shell's virtual memory (KB)
Resources
- SearchSploit GitHub Repository
- ExploitDB Website
- Offensive Security Documentation
- SearchSploit Manual
- Exploit Development Resources
- CVE Database
- National Vulnerability Database
This cheat sheet provides a comprehensive reference for using SearchSploit for exploit research and vulnerability assessment. Always ensure you have proper written authorization before running any exploit against any system.