SearchSploit Cheat Sheet

Overview

SearchSploit is a command-line search tool for Exploit-DB that lets you take a copy of the Exploit Database with you wherever you go. It gives you the power to perform detailed offline searches through your locally checked-out copy of the repository. This capability is particularly useful during penetration testing engagements where internet connectivity may be limited, or when you need to quickly search through thousands of exploits without relying on web interfaces.

⚠️ Warning: SearchSploit provides access to real exploits that can cause damage to systems. Only use these exploits against systems you own or have explicit written permission to test. Unauthorized use of exploits may violate local laws and regulations.

Installation

Kali Linux Installation

# SearchSploit is pre-installed on Kali Linux
searchsploit --version

# Update the database
searchsploit -u

# Check installation path
which searchsploit

# Verify database location
searchsploit --path

Ubuntu/Debian Installation

# Install git if not already installed
sudo apt update
sudo apt install git

# Clone exploitDB repository
sudo git clone https://github.com/offensive-security/exploitdb.git /opt/exploitdb

# Create symbolic link
sudo ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit

# Update PATH (add to ~/.bashrc for persistence)
export PATH="$PATH:/opt/exploitdb"

# Verify installation
searchsploit --help

Manual Installation

# Download and extract
wget https://github.com/offensive-security/exploitdb/archive/main.zip
unzip main.zip
mv exploitdb-main /opt/exploitdb

# Make searchsploit executable
chmod +x /opt/exploitdb/searchsploit

# Create symbolic link
sudo ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit

# Configure .searchsploit_rc file
echo 'papers_directory="/opt/exploitdb/papers"' > ~/.searchsploit_rc
echo 'exploits_directory="/opt/exploitdb/exploits"' >> ~/.searchsploit_rc

Docker Installation

# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/offensive-security/exploitdb.git /opt/exploitdb
RUN ln -sf /opt/exploitdb/searchsploit /usr/local/bin/searchsploit
WORKDIR /opt/exploitdb
ENTRYPOINT ["/opt/exploitdb/searchsploit"]
EOF

# Build Docker image
docker build -t searchsploit .

# Run SearchSploit in Docker
docker run --rm searchsploit apache

# Create alias for easier use (single quotes defer $(pwd) expansion until the alias runs)
echo "alias searchsploit='docker run --rm -v \$(pwd):/data searchsploit'" >> ~/.bashrc

Basic Usage

Simple Searches

# Basic search
searchsploit apache

# Search for multiple terms
searchsploit apache 2.4

# Search with quotes for exact phrase
searchsploit "apache 2.4.7"

# Case-insensitive search (default)
searchsploit APACHE

# Case-sensitive search
searchsploit -c Apache

# Search in title only
searchsploit -t apache

# Search excluding specific terms
searchsploit apache --exclude="2.2"

Advanced Search Options

# Search by platform
searchsploit --platform windows apache
searchsploit --platform linux kernel
searchsploit --platform php web

# Search by exploit type
searchsploit --type remote apache
searchsploit --type local windows
searchsploit --type webapps php

# Search by author
searchsploit --author "Metasploit"
searchsploit --author "exploit-db"

# Search by port
searchsploit --port 80
searchsploit --port 443

# Combine multiple filters
searchsploit --platform linux --type local kernel

CVE and Vulnerability Searches

# Search by CVE number
searchsploit --cve 2021-44228
searchsploit --cve CVE-2021-34527

# Search by multiple CVEs
searchsploit --cve 2021-44228,2021-34527

# Search for recent CVEs
searchsploit --cve 2023-

# Search by vulnerability type
searchsploit "buffer overflow"
searchsploit "SQL injection"
searchsploit "privilege escalation"
searchsploit "remote code execution"

Date-based Searches

# Search for exploits after specific date
searchsploit --after 2020 apache
searchsploit --after 2021-01-01 windows

# Search for exploits before specific date
searchsploit --before 2019 linux
searchsploit --before 2020-12-31 php

# Search within date range
searchsploit --after 2020 --before 2021 kernel

# Search for exploits from the last 30 days (requires GNU date for -d)
searchsploit --after $(date -d "30 days ago" +%Y-%m-%d)
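The `date -d "30 days ago"` idiom relies on GNU coreutils; BSD and macOS `date` do not accept `-d`. As a portable sketch, the same cutoff date can be computed in Python:

```python
from datetime import date, timedelta

# Date 30 days ago, in the YYYY-MM-DD format expected by --after
cutoff = (date.today() - timedelta(days=30)).isoformat()
print(cutoff)
```

The printed value can then be passed straight to `searchsploit --after`.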

Output Formatting and Display

Output Formats

# Default table format
searchsploit apache

# Verbose output with full paths
searchsploit -v apache

# JSON output
searchsploit -j apache

# XML output
searchsploit -x apache

# CSV output (pipe to file)
searchsploit apache | sed 's/ | /,/g' > results.csv

# Disable colour highlighting (colour is enabled by default)
searchsploit --colour apache
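The `sed` substitution above turns every ` | ` into a comma, which mangles exploit titles that themselves contain pipes. A hedged alternative sketch in Python splits each table row only on the rightmost column separator (the sample lines below are illustrative, not real output):

```python
# Convert searchsploit's two-column table output (Title | Path) to CSV,
# splitting on the rightmost " | " so titles containing pipes stay intact.
lines = [
    "Apache 2.4.x - Example | Bypass | exploits/linux/remote/12345.txt",
    "Apache HTTP Server - Example DoS | exploits/multiple/dos/67890.py",
]

rows = []
for line in lines:
    title, _, path = line.rpartition(" | ")
    rows.append((title.strip(), path.strip()))

for title, path in rows:
    print(f'"{title}",{path}')
```

In practice the `lines` list would come from reading the command's captured output.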

Filtering and Limiting Results

# Limit number of results
searchsploit apache | head -10

# Show only exploit IDs
searchsploit apache | awk '{print $1}' | grep -E '^[0-9]+'

Search Result Analysis

# Count total results
searchsploit apache | wc -l

# Count by platform
searchsploit apache | grep -c linux
searchsploit apache | grep -c windows

# Count by type
searchsploit apache | grep -c remote
searchsploit apache | grep -c local

# Extract unique platforms
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[].Platform' | sort | uniq -c

# Extract unique authors
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[].Author' | sort | uniq -c
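When `jq` is unavailable, the same counting can be sketched in Python. This assumes the top-level `RESULTS_EXPLOIT` list layout produced by `searchsploit -j`; the sample document below is illustrative, not real results:

```python
import json
from collections import Counter

# Illustrative sample of `searchsploit -j` output (not real results)
sample = '''{"RESULTS_EXPLOIT": [
  {"Title": "Example A", "Platform": "linux", "Type": "remote"},
  {"Title": "Example B", "Platform": "linux", "Type": "local"},
  {"Title": "Example C", "Platform": "windows", "Type": "remote"}
]}'''

data = json.loads(sample)
# Tally exploits per platform, defaulting missing fields to "Unknown"
platforms = Counter(e.get("Platform", "Unknown") for e in data.get("RESULTS_EXPLOIT", []))
print(platforms.most_common())
```

Replace `sample` with the captured JSON from a real search to get actual counts.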

Exploit Management

Copying and Downloading Exploits

# Copy exploit to current directory
searchsploit -m 50383

# Copy multiple exploits
searchsploit -m 50383,50384,50385

# Copy exploit to a specific directory (-m mirrors into the current directory)
(cd /tmp/exploits/ && searchsploit -m 50383)

# Copy with original filename preserved
searchsploit -m exploits/linux/local/50383.c

# Copy all exploits from search results
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[]."EDB-ID"' | head -5 | xargs searchsploit -m

# Batch copy exploits
echo "50383,50384,50385" | tr ',' '\n' | xargs -I {} searchsploit -m {}

Viewing and Examining Exploits

# View exploit content
searchsploit -x 50383

# Open exploit in default editor
searchsploit -e 50383

# View exploit with syntax highlighting
searchsploit -m 50383 && cat 50383.c | highlight --syntax=c

# View exploit metadata
searchsploit -j apache | jq '.RESULTS_EXPLOIT[] | select(."EDB-ID" == "50383")'

# Show the full path to an exploit
searchsploit -p 50383

Exploit Organization

# Create organized directory structure
mkdir -p exploits/{windows,linux,web,mobile}

# Copy exploits by platform
searchsploit --platform windows -j | jq -r '.RESULTS_EXPLOIT[]."EDB-ID"' | head -10 | \
    while read -r id; do (cd exploits/windows/ && searchsploit -m "$id"); done

# Copy exploits by type
searchsploit --type webapps -j | jq -r '.RESULTS_EXPLOIT[]."EDB-ID"' | head -10 | \
    while read -r id; do (cd exploits/web/ && searchsploit -m "$id"); done

# Create exploit inventory
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[] | "\(."EDB-ID"),\(.Title),\(.Platform),\(.Type)"' > apache_exploits.csv
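String interpolation like the jq command above produces broken CSV whenever a title contains a comma; Python's `csv` module quotes such fields automatically. A minimal sketch with an illustrative record:

```python
import csv
import io

# Illustrative sample of parsed `searchsploit -j` output (not real results)
sample = {"RESULTS_EXPLOIT": [
    {"EDB-ID": "50383", "Title": "Example, with comma", "Platform": "linux", "Type": "local"}
]}

buf = io.StringIO()
writer = csv.writer(buf)  # quotes fields containing commas automatically
writer.writerow(["EDB-ID", "Title", "Platform", "Type"])
for e in sample["RESULTS_EXPLOIT"]:
    writer.writerow([e.get("EDB-ID"), e.get("Title"), e.get("Platform"), e.get("Type")])

print(buf.getvalue())
```

Writing to a real file instead of `io.StringIO` gives the same inventory as `apache_exploits.csv`, but comma-safe.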

Database Management

Database Updates

# Update the Exploit-DB database
searchsploit -u

# Force update (overwrite local changes)
cd /opt/exploitdb && git reset --hard && git pull

# Check for updates without applying
cd /opt/exploitdb && git fetch && git status

# Update specific branch
cd /opt/exploitdb && git pull origin main

# Verify update
searchsploit --stats

Database Information

# Show database statistics
searchsploit --stats

# Show database path
searchsploit --path

# Check database integrity
searchsploit --check

# Rebuild database index
searchsploit --rebuild

# Show version information
searchsploit --version

# Show help information
searchsploit --help

Database Maintenance

# Clean up temporary files
find /opt/exploitdb -name "*.tmp" -delete

# Check disk usage
du -sh /opt/exploitdb

# Backup database
tar -czf exploitdb_backup_$(date +%Y%m%d).tar.gz /opt/exploitdb

# Restore database from backup
tar -xzf exploitdb_backup_20231201.tar.gz -C /

# Verify database after maintenance
searchsploit --check

Automation Scripts

Automated Vulnerability Assessment

#!/bin/bash
# Automated vulnerability assessment using SearchSploit

cible_LIST="$1"
OUTPUT_DIR="searchsploit_assessment_$(date +%Y%m%d_%H%M%S)"
REport_FILE="$OUTPUT_DIR/vulnerability_assessment_report.html"

if [ -z "$cible_LIST" ] || [ ! -f "$cible_LIST" ]; then
    echo "Usage: $0 <target_list_file>"
    echo "The target list file should contain one software/service per line"
    echo "Example: 'Apache 2.4.7', 'Windows 10', 'PHP 7.4'"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

# Function to assess a single target
assess_cible() {
    local cible="$1"
    local cible_dir="$OUTPUT_DIR/$(echo "$cible" | tr ' /' '_')"

    echo "[+] Assessing: $cible"

    mkdir -p "$cible_dir"

    # Search for exploits
    searchsploit -j "$cible" > "$cible_dir/search_results.json"

    if [ -s "$cible_dir/search_results.json" ]; then
        # Parse and analyze results
        python3 << EOF
import json
import os
from collections import defaultdict

# Read search results
with open('$cible_dir/search_results.json', 'r') as f:
    data = json.load(f)

exploits = data.get('RESULTS_EXPLOIT', [])
shellcodes = data.get('RESULTS_SHELLCODE', [])

print(f"  [+] Found {len(exploits)} exploits and {len(shellcodes)} shellcodes")

if not exploits and not shellcodes:
    print(f"  [-] No exploits found for: $cible")
    exit(0)

# Analyze exploits
analysis = {
    'cible': '$cible',
    'total_exploits': len(exploits),
    'total_shellcodes': len(shellcodes),
    'platforms': defaultdict(int),
    'types': defaultdict(int),
    'years': defaultdict(int),
    'severity_assessment': 'Unknown',
    'high_priority_exploits': []
}

for exploit in exploits:
    platform = exploit.get('Platform', 'Unknown')
    exploit_type = exploit.get('Type', 'Unknown')
    date = exploit.get('Date', '')
    title = exploit.get('Title', '').lower()

    analysis['platforms'][platform] += 1
    analysis['types'][exploit_type] += 1

    if date:
        year = date.split('-')[0]
        analysis['years'][year] += 1

    # Identify high-priority exploits
    if any(keyword in title for keyword in ['remote', 'rce', 'privilege', 'escalation', 'buffer overflow']):
        analysis['high_priority_exploits'].append(exploit)

# Assess severity
total_exploits = len(exploits)
high_priority_count = len(analysis['high_priority_exploits'])
remote_count = analysis['types'].get('remote', 0)

if high_priority_count > 5 or remote_count > 3:
    analysis['severity_assessment'] = 'Critical'
elif high_priority_count > 2 or remote_count > 1:
    analysis['severity_assessment'] = 'High'
elif total_exploits > 5:
    analysis['severity_assessment'] = 'Medium'
else:
    analysis['severity_assessment'] = 'Low'

# Save analysis
with open('$cible_dir/analysis.json', 'w') as f:
    json.dump(analysis, f, indent=2, default=str)

print(f"  [+] Severity assessment: {analysis['severity_assessment']}")
print(f"  [+] High-priority exploits: {high_priority_count}")
EOF

        # Download high-priority exploits
        if [ -f "$cible_dir/analysis.json" ]; then
            python3 << EOF
import json

with open('$cible_dir/analysis.json', 'r') as f:
    analysis = json.load(f)

high_priority = analysis.get('high_priority_exploits', [])[:10]  # Limit to 10

if high_priority:
    with open('$cible_dir/priority_exploits.txt', 'w') as f:
        for exploit in high_priority:
            f.write(f"{exploit.get('EDB-ID', '')}\\n")
EOF

            # Download priority exploits
            if [ -f "$cible_dir/priority_exploits.txt" ]; then
                while read -r edb_id; do
                    if [ -n "$edb_id" ]; then
                        (cd "$cible_dir" && searchsploit -m "$edb_id") >/dev/null 2>&1 || true
                    fi
                done < "$cible_dir/priority_exploits.txt"
            fi
        fi

        return 0
    else
        echo "  [-] No exploits found for: $cible"
        return 1
    fi
}

# Function to generate comprehensive report
generate_report() {
    echo "[+] Generating comprehensive assessment report"

    python3 << EOF
import json
import os
import glob
from datetime import datetime
from collections import defaultdict

# Collect all analysis data
all_analyses = []
for analysis_file in glob.glob('$OUTPUT_DIR/*/analysis.json'):
    try:
        with open(analysis_file, 'r') as f:
            data = json.load(f)
            all_analyses.append(data)
    except:
        continue

# Calculate overall statistics
total_cibles = len(all_analyses)
total_exploits = sum(a.get('total_exploits', 0) for a in all_analyses)
total_shellcodes = sum(a.get('total_shellcodes', 0) for a in all_analyses)

severity_counts = defaultdict(int)
platform_counts = defaultdict(int)
type_counts = defaultdict(int)

for analysis in all_analyses:
    severity_counts[analysis.get('severity_assessment', 'Unknown')] += 1

    for platform, count in analysis.get('platforms', {}).items():
        platform_counts[platform] += count

    for exploit_type, count in analysis.get('types', {}).items():
        type_counts[exploit_type] += count

# Generate HTML report
html_content = f"""
<!DOCTYPE html>
<html>
<head>
    <title>SearchSploit Vulnerability Assessment Report</title>


</head>
<body>
    <div class="header">
        <h1>SearchSploit Vulnerability Assessment Report</h1>
        <p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
    </div>

    <div class="summary">
        <h2>Executive Summary</h2>
        <table>
            <tr><th>Metric</th><th>Value</th></tr>
            <tr><td>Targets Assessed</td><td>{total_cibles}</td></tr>
            <tr><td>Total Exploits Found</td><td>{total_exploits}</td></tr>
            <tr><td>Total Shellcodes Found</td><td>{total_shellcodes}</td></tr>
            <tr><td>Critical Risk Targets</td><td>{severity_counts.get('Critical', 0)}</td></tr>
            <tr><td>High Risk Targets</td><td>{severity_counts.get('High', 0)}</td></tr>
            <tr><td>Medium Risk Targets</td><td>{severity_counts.get('Medium', 0)}</td></tr>
            <tr><td>Low Risk Targets</td><td>{severity_counts.get('Low', 0)}</td></tr>
        </table>
    </div>

    <div class="summary">
        <h2>Risk Distribution</h2>
        <div class="chart">
            <canvas id="riskChart" width="400" height="200"></canvas>
        </div>
    </div>

    <div class="summary">
        <h2>Platform Distribution</h2>
        <table>
            <tr><th>Platform</th><th>Exploit Count</th></tr>
"""

for platform, count in sorted(platform_counts.items(), key=lambda x: x[1], reverse=True)[:10]:
    html_content += f"            <tr><td>{platform}</td><td>{count}</td></tr>\\n"

html_content += """
        </table>
    </div>

    <h2>Individual Target Assessments</h2>
"""

# Add individual target details
for analysis in sorted(all_analyses, key=lambda x: x.get('total_exploits', 0), reverse=True):
    severity = analysis.get('severity_assessment', 'Unknown').lower()
    cible = analysis.get('cible', 'Unknown')

    html_content += f"""
    <div class="cible {severity}">
        <h3>{cible}</h3>
        <p><strong>Risk Level:</strong> {analysis.get('severity_assessment', 'Unknown')}</p>
        <p><strong>Total Exploits:</strong> {analysis.get('total_exploits', 0)}</p>
        <p><strong>High-Priority Exploits:</strong> {len(analysis.get('high_priority_exploits', []))}</p>

        <h4>Platform Breakdown:</h4>
        <table>
            <tr><th>Platform</th><th>Count</th></tr>
"""

    for platform, count in analysis.get('platforms', {}).items():
        html_content += f"            <tr><td>{platform}</td><td>{count}</td></tr>\\n"

    html_content += """
        </table>
    </div>
"""

html_content += f"""

</body>
</html>
"""

with open('$REport_FILE', 'w') as f:
    f.write(html_content)

print(f"[+] Comprehensive report generated: $REport_FILE")
EOF
}

# Function to generate CSV summary
generate_csv_summary() {
    echo "[+] Generating CSV summary"

    local csv_file="$OUTPUT_DIR/vulnerability_summary.csv"

    echo "Target,Total_Exploits,Total_Shellcodes,Severity,High_Priority_Exploits,Top_Platform,Top_Type" > "$csv_file"

    for analysis_file in "$OUTPUT_DIR"/*/analysis.json; do
        if [ -f "$analysis_file" ]; then
            python3 << EOF
import json

with open('$analysis_file', 'r') as f:
    data = json.load(f)

cible = data.get('cible', 'Unknown').replace(',', ';')
total_exploits = data.get('total_exploits', 0)
total_shellcodes = data.get('total_shellcodes', 0)
severity = data.get('severity_assessment', 'Unknown')
high_priority = len(data.get('high_priority_exploits', []))

platforms = data.get('platforms', {})
top_platform = max(platforms.keys(), key=lambda k: platforms[k]) if platforms else 'Unknown'

types = data.get('types', {})
top_type = max(types.keys(), key=lambda k: types[k]) if types else 'Unknown'

print(f"{cible},{total_exploits},{total_shellcodes},{severity},{high_priority},{top_platform},{top_type}")
EOF
        fi
    done >> "$csv_file"

    echo "[+] CSV summary generated: $csv_file"
}

# Main execution
echo "[+] Starting automated vulnerability assessment"
echo "[+] Target list: $cible_LIST"
echo "[+] Output directory: $OUTPUT_DIR"

# Check dependencies
if ! command -v searchsploit &> /dev/null; then
    echo "[-] SearchSploit not found. Please install Exploit-DB first."
    exit 1
fi

# Process each target
total_cibles=0
successful_assessments=0

while read -r cible; do
    # Skip empty lines and comments
    [[ -z "$cible" || "$cible" =~ ^#.*$ ]] && continue

    total_cibles=$((total_cibles + 1))

    if assess_cible "$cible"; then
        successful_assessments=$((successful_assessments + 1))
    fi

    # Small delay to avoid overwhelming the system
    sleep 1

done < "$cible_LIST"

echo "[+] Assessment completed"
echo "  Total targets: $total_cibles"
echo "  Successful assessments: $successful_assessments"

# Generate reports
generate_report
generate_csv_summary

echo "[+] Vulnerability assessment completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $REport_FILE for detailed report"

Exploit Collection and Organization

#!/bin/bash
# Automated exploit collection and organization

COLLECTION_NAME="$1"
SEARCH_TERMS="$2"
OUTPUT_DIR="exploit_collection_${COLLECTION_NAME}_$(date +%Y%m%d_%H%M%S)"

if [ -z "$COLLECTION_NAME" ] || [ -z "$SEARCH_TERMS" ]; then
    echo "Usage: $0 <collection_name> <search_terms>"
    echo "Example: $0 'web_exploits' 'php,apache,nginx,wordpress'"
    exit 1
fi

mkdir -p "$OUTPUT_DIR"

# Function to collect exploits for a search term
collect_exploits() {
    local search_term="$1"
    local term_dir="$OUTPUT_DIR/$(echo "$search_term" | tr ' /' '_')"

    echo "[+] Collecting exploits for: $search_term"

    mkdir -p "$term_dir"

    # Search and save results
    searchsploit -j "$search_term" > "$term_dir/search_results.json"

    if [ ! -s "$term_dir/search_results.json" ]; then
        echo "  [-] No exploits found for: $search_term"
        return 1
    fi

    # Parse and categorize exploits
    python3 << EOF
import json
import os
from collections import defaultdict

# Read search results
with open('$term_dir/search_results.json', 'r') as f:
    data = json.load(f)

exploits = data.get('RESULTS_EXPLOIT', [])
print(f"  [+] Found {len(exploits)} exploits for $search_term")

# Categorize exploits
categories = {
    'remote': [],
    'local': [],
    'webapps': [],
    'dos': [],
    'windows': [],
    'linux': [],
    'php': [],
    'recent': []  # Last 2 years
}

for exploit in exploits:
    exploit_type = exploit.get('Type', '').lower()
    platform = exploit.get('Platform', '').lower()
    title = exploit.get('Title', '').lower()
    date = exploit.get('Date', '')

    # Categorize by type
    if 'remote' in exploit_type:
        categories['remote'].append(exploit)
    elif 'local' in exploit_type:
        categories['local'].append(exploit)
    elif 'webapps' in exploit_type:
        categories['webapps'].append(exploit)
    elif 'dos' in exploit_type:
        categories['dos'].append(exploit)

    # Categorize by platform
    if 'windows' in platform:
        categories['windows'].append(exploit)
    elif 'linux' in platform:
        categories['linux'].append(exploit)
    elif 'php' in platform:
        categories['php'].append(exploit)

    # Check if recent (last 2 years)
    if date:
        year = int(date.split('-')[0])
        if year >= 2022:  # Adjust based on current year
            categories['recent'].append(exploit)

# Save categorized data
for category, exploits_list in categories.items():
    if exploits_list:
        category_dir = f'$term_dir/{category}'
        os.makedirs(category_dir, exist_ok=True)

        with open(f'{category_dir}/exploits.json', 'w') as f:
            json.dump(exploits_list, f, indent=2)

        # Create download list
        with open(f'{category_dir}/download_list.txt', 'w') as f:
            for exploit in exploits_list[:20]:  # Limit to 20 per category
                f.write(f"{exploit.get('EDB-ID', '')}\\n")

        print(f"    [+] {category}: {len(exploits_list)} exploits")

print(f"  [+] Categorization completed for $search_term")
EOF

    # Download exploits by category
    for category_dir in "$term_dir"/*; do
        if [ -d "$category_dir" ] && [ -f "$category_dir/download_list.txt" ]; then
            category_name=$(basename "$category_dir")
            echo "    [+] Downloading $category_name exploits"

            while read -r edb_id; do
                if [ -n "$edb_id" ]; then
                    (cd "$category_dir" && searchsploit -m "$edb_id") >/dev/null 2>&1 || true
                fi
            done < "$category_dir/download_list.txt"
        fi
    done

    return 0
}

# Function to create collection index
create_collection_index() {
    echo "[+] Creating collection index"

    local index_file="$OUTPUT_DIR/collection_index.html"

    python3 << EOF
import json
import os
import glob
from datetime import datetime
from collections import defaultdict

# Collect all exploit data
all_exploits = []
term_stats = defaultdict(lambda: {'total': 0, 'platforms': defaultdict(int), 'types': defaultdict(int)})

for results_file in glob.glob('$OUTPUT_DIR/*/search_results.json'):
    term_name = os.path.basename(os.path.dirname(results_file))

    try:
        with open(results_file, 'r') as f:
            data = json.load(f)

        exploits = data.get('RESULTS_EXPLOIT', [])
        all_exploits.extend(exploits)

        # Calculate statistics
        term_stats[term_name]['total'] = len(exploits)

        for exploit in exploits:
            platform = exploit.get('Platform', 'Unknown')
            exploit_type = exploit.get('Type', 'Unknown')

            term_stats[term_name]['platforms'][platform] += 1
            term_stats[term_name]['types'][exploit_type] += 1

    except:
        continue

# Generate HTML index
html_content = f"""
<!DOCTYPE html>
<html>
<head>
    <title>Exploit Collection: $COLLECTION_NAME</title>

</head>
<body>
    <div class="header">
        <h1>Exploit Collection: $COLLECTION_NAME</h1>
        <p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
        <p>Search Terms: $SEARCH_TERMS</p>
    </div>

    <div class="stats">
        <div class="stat-box">
            <h3>{len(all_exploits)}</h3>
            <p>Total Exploits</p>
        </div>
        <div class="stat-box">
            <h3>{len(term_stats)}</h3>
            <p>Search Terms</p>
        </div>
        <div class="stat-box">
            <h3>{len(set(e.get('Platform', 'Unknown') for e in all_exploits))}</h3>
            <p>Platforms</p>
        </div>
        <div class="stat-box">
            <h3>{len(set(e.get('Type', 'Unknown') for e in all_exploits))}</h3>
            <p>Exploit Types</p>
        </div>
    </div>

    <h2>Collection Contents</h2>
"""

# Add details for each search term
for term_name, stats in term_stats.items():
    html_content += f"""
    <div class="term">
        <h3>{term_name.replace('_', ' ').title()}</h3>
        <p><strong>Total Exploits:</strong> {stats['total']}</p>

        <h4>Categories Available:</h4>
        <div style="display: flex; flex-wrap: wrap; gap: 10px;">
"""

    # List available categories
    term_dir = f"$OUTPUT_DIR/{term_name}"
    for category in ['remote', 'local', 'webapps', 'dos', 'windows', 'linux', 'php', 'recent']:
        category_path = f"{term_dir}/{category}"
        if os.path.exists(category_path):
            exploit_count = 0
            try:
                with open(f"{category_path}/exploits.json", 'r') as f:
                    exploits = json.load(f)
                    exploit_count = len(exploits)
            except:
                pass

            html_content += f"""
            <div class="category">
                <strong>{category.title()}</strong><br>
                {exploit_count} exploits
            </div>
"""

    html_content += """
        </div>
    </div>
"""

html_content += """

    <h2>Usage Instructions</h2>
    <div class="term">
        <h3>Directory Structure</h3>
        <ul>
            <li><strong>search_term/</strong> - Individual search term results</li>
            <li><strong>search_term/category/</strong> - Exploits categorized by type/platform</li>
            <li><strong>search_term/category/exploits.json</strong> - Exploit metadata</li>
            <li><strong>search_term/category/[EDB-ID].*</strong> - Downloaded exploit files</li>
        </ul>

        <h3>Quick Access Commands</h3>
        <pre>
# View all remote exploits
find . -name "remote" -type d

# List all downloaded exploit files
find . -name "*.c" -o -name "*.py" -o -name "*.rb" -o -name "*.pl"

# Search within collection
grep -r "buffer overflow" .

# Count exploits by type
find . -name "exploits.json" -exec jq -r '.[].Type' {} \; | sort | uniq -c
        </pre>
    </div>
</body>
</html>
"""

with open('$index_file', 'w') as f:
    f.write(html_content)

print(f"[+] Collection index generated: $index_file")
EOF
}

# Function to create portable collection
create_portable_collection() {
    echo "[+] Creating portable collection archive"

    local archive_name="${COLLECTION_NAME}_exploit_collection_$(date +%Y%m%d).tar.gz"

    # Create README
    cat > "$OUTPUT_DIR/README.md" << EOF
# Exploit Collection: $COLLECTION_NAME

Generated: $(date)
Search Terms: $SEARCH_TERMS

## Directory Structure

- **search_term/**: Individual search term results
- **search_term/category/**: Exploits categorized by type/platform
- **search_term/category/exploits.json**: Exploit metadata
- **search_term/category/[EDB-ID].***: Downloaded exploit files

## Categories

- **remote**: Remote code execution exploits
- **local**: Local privilege escalation exploits
- **webapps**: Web application exploits
- **dos**: Denial of service exploits
- **windows**: Windows-specific exploits
- **linux**: Linux-specific exploits
- **php**: PHP-specific exploits
- **recent**: Recent exploits (last 2 years)

## Usage

1. Extract the archive to your desired location
2. Open collection_index.html for an overview
3. Navigate to specific categories for targeted exploits
4. Review exploit code before use
5. Ensure proper authorization before testing

## Legal Notice

These exploits are provided for educational and authorized testing purposes only.
Only use against systems you own or have explicit written permission to test.
Unauthorized use may violate local laws and regulations.
EOF

    # Create archive
    tar -czf "$archive_name" -C "$(dirname "$OUTPUT_DIR")" "$(basename "$OUTPUT_DIR")"

    echo "[+] Portable collection created: $archive_name"
    echo "  Archive size: $(du -h "$archive_name" | cut -f1)"
}

# Main execution
echo "[+] Starting exploit collection and organization"
echo "[+] Collection name: $COLLECTION_NAME"
echo "[+] Search terms: $SEARCH_TERMS"

# Check dependencies
if ! command -v searchsploit &> /dev/null; then
    echo "[-] SearchSploit not found. Please install Exploit-DB first."
    exit 1
fi

# Process each search term
IFS=',' read -ra TERMS <<< "$SEARCH_TERMS"
for term in "${TERMS[@]}"; do
    # Trim whitespace
    term=$(echo "$term" | xargs)
    collect_exploits "$term"
done

# Create collection index and archive
create_collection_index
create_portable_collection

echo "[+] Exploit collection completed"
echo "[+] Results saved in: $OUTPUT_DIR"
echo "[+] Open $OUTPUT_DIR/collection_index.html for overview"

Continuous Monitoring for New Exploits

#!/bin/bash
# Continuous monitoring for new exploits

CONFIG_FILE="exploit_monitoring.conf"
LOG_DIR="exploit_monitoring_logs"
ALERT_EMAIL="security@company.com"
CHECK_INTERVAL=3600  # 1 hour

mkdir -p "$LOG_DIR"

# Create default configuration
if [ ! -f "$CONFIG_FILE" ]; then
    cat > "$CONFIG_FILE" << 'EOF'
# Exploit Monitoring Configuration

# Monitoring targets (one per line)
MONITOR_cibleS="
Apache
nginx
WordPress
PHP
Windows 10
Linux kernel
OpenSSL
"

# Alert settings
ALERT_ON_NEW_exploitS=true
ALERT_ON_HIGH_SEVERITY=true
MINIMUM_SEVERITY_THRESHOLD=5

# Database settings
UPDATE_DATABASE=true
UPDATE_INTERVAL=86400  # 24 hours

# Notification settings
EMAIL_ALERTS=true
SLACK_WEBHOOK=""
DISCORD_WEBHOOK=""
EOF
    echo "Created $CONFIG_FILE - please configure monitoring settings"
    exit 1
fi

source "$CONFIG_FILE"

# Function to update Exploit-DB
update_database() {
    echo "[+] Updating the Exploit-DB database"

    local update_log="$LOG_DIR/database_update_$(date +%Y%m%d_%H%M%S).log"

    searchsploit -u > "$update_log" 2>&1

    if [ $? -eq 0 ]; then
        echo "  [+] Database updated successfully"
        return 0
    else
        echo "  [-] Database update failed"
        return 1
    fi
}

# Function to check for new exploits
check_new_exploits() {
    local cible="$1"
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local current_results="$LOG_DIR/${cible}_${timestamp}.json"
    local previous_results="$LOG_DIR/${cible}_previous.json"

    echo "[+] Checking for new exploits: $cible"

    # Get current exploits
    searchsploit -j "$cible" > "$current_results"

    if [ ! -s "$current_results" ]; then
        echo "  [-] No exploits found for: $cible"
        return 1
    fi

    # Compare with previous results
    if [ -f "$previous_results" ]; then
        # Extract exploit IDs
        local current_ids=$(jq -r '.RESULTS_EXPLOIT[]?."EDB-ID"' "$current_results" 2>/dev/null | sort)
        local previous_ids=$(jq -r '.RESULTS_EXPLOIT[]?."EDB-ID"' "$previous_results" 2>/dev/null | sort)

        # Find new exploits
        local new_exploits=$(comm -23 <(echo "$current_ids") <(echo "$previous_ids"))

        if [ -n "$new_exploits" ]; then
            local new_count=$(echo "$new_exploits" | wc -l)
            echo "  [!] Found $new_count new exploits for: $cible"

            # Get details of new exploits
            local new_exploits_details="$LOG_DIR/${cible}_new_${timestamp}.json"

            python3 << EOF
import json

# Read current results
with open('$current_results', 'r') as f:
    data = json.load(f)

exploits = data.get('RESULTS_EXPLOIT', [])
new_ids = """$new_exploits""".strip().split('\n')

# Filter new exploits
new_exploits = [e for e in exploits if e.get('EDB-ID') in new_ids]

# Save new exploits
with open('$new_exploits_details', 'w') as f:
    json.dump(new_exploits, f, indent=2)

print(f"New exploits saved: $new_exploits_details")
EOF

            # Send alert
            if [ "$ALERT_ON_NEW_exploitS" = "true" ]; then
                send_alert "NEW_exploitS" "$cible" "$new_count" "$new_exploits_details"
            fi

            return 0
        else
            echo "  [+] No new exploits found for: $cible"
        fi
    else
        echo "  [+] First scan for: $cible"
    fi

    # Update previous results
    cp "$current_results" "$previous_results"

    return 0
\}

# Function to assess exploit severity
assess_severity() {
    local exploits_file="$1"
    local cible="$2"

    python3 << EOF
import json

try:
    with open('$exploits_file', 'r') as f:
        exploits = json.load(f)

    if not isinstance(exploits, list):
        exploits = exploits.get('RESULTS_EXPLOIT', [])

    # Severity scoring
    severity_score = 0
    high_severity_count = 0

    for exploit in exploits:
        title = exploit.get('Title', '').lower()
        exploit_type = exploit.get('Type', '').lower()

        # High severity indicators
        if any(keyword in title for keyword in ['remote', 'rce', 'buffer overflow', 'privilege escalation']):
            severity_score += 3
            high_severity_count += 1
        elif 'remote' in exploit_type:
            severity_score += 2
            high_severity_count += 1
        elif any(keyword in title for keyword in ['dos', 'denial of service']):
            severity_score += 1
        else:
            severity_score += 0.5

    print(f"Severity score: {severity_score}")
    print(f"High severity exploits: {high_severity_count}")

    # Check threshold
    if severity_score >= $MINIMUM_SEVERITY_THRESHOLD:
        print("ALERT_THRESHOLD_EXCEEDED")

except Exception as e:
    print(f"Error assessing severity: {e}")
EOF
}

# Function to send alerts
send_alert() {
    local alert_type="$1"
    local cible="$2"
    local count="$3"
    local details_file="$4"

    local subject="[EXPLOIT ALERT] $alert_type: $cible"
    local message="Alert: $count new exploits found for $cible at $(date)"

    echo "[!] Sending alert: $subject"

    # Email alert
    if [ "$EMAIL_ALERTS" = "true" ] && [ -n "$ALERT_EMAIL" ]; then
        if [ -f "$details_file" ]; then
            echo "$message" | mail -s "$subject" -A "$details_file" "$ALERT_EMAIL" 2>/dev/null || \
                echo "Email alert failed"
        else
            echo "$message" | mail -s "$subject" "$ALERT_EMAIL" 2>/dev/null || \
                echo "Email alert failed"
        fi
    fi

    # Slack alert
    if [ -n "$SLACK_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"$subject: $message\"}" \
            "$SLACK_WEBHOOK" 2>/dev/null || echo "Slack alert failed"
    fi

    # Discord alert
    if [ -n "$DISCORD_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \
            --data "{\"content\":\"$subject: $message\"}" \
            "$DISCORD_WEBHOOK" 2>/dev/null || echo "Discord alert failed"
    fi
}

# Function to generate monitoring report
generate_monitoring_report() {
    echo "[+] Generating monitoring report"

    local report_file="$LOG_DIR/monitoring_report_$(date +%Y%m%d).html"

    python3 << EOF
import json
import glob
import os
from datetime import datetime, timedelta
from collections import defaultdict

# Collect monitoring data
monitoring_data = defaultdict(list)
total_new_exploits = 0

# Find all new exploit files from last 24 hours
cutoff_time = datetime.now() - timedelta(hours=24)

for new_file in glob.glob('$LOG_DIR/*_new_*.json'):
    try:
        # Extract timestamp from filename
        filename = os.path.basename(new_file)
        timestamp_str = filename.split('_new_')[1].replace('.json', '')
        file_time = datetime.strptime(timestamp_str, '%Y%m%d_%H%M%S')

        if file_time >= cutoff_time:
            # Extract cible name
            cible = filename.split('_new_')[0]

            with open(new_file, 'r') as f:
                exploits = json.load(f)

            monitoring_data[cible].extend(exploits)
            total_new_exploits += len(exploits)

    except Exception:
        continue

# Generate HTML report
html_content = f"""
<!DOCTYPE html>
<html>
<head>
    <title>Exploit Monitoring Report</title>

</head>
<body>
    <div class="header">
        <h1>Exploit Monitoring Report</h1>
        <p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
        <p>Monitoring Period: Last 24 hours</p>
    </div>

    <div class="alert">
        <h2>⚠️ Alert Summary</h2>
        <p><strong>Total New Exploits:</strong> {total_new_exploits}</p>
        <p><strong>Affected Targets:</strong> {len(monitoring_data)}</p>
    </div>
"""

if monitoring_data:
    html_content += "<h2>New Exploits by Target</h2>"

    for cible, exploits in monitoring_data.items():
        html_content += f"""
        <div class="cible">
            <h3>{cible}</h3>
            <p><strong>New Exploits:</strong> {len(exploits)}</p>

            <table>
                <tr><th>EDB-ID</th><th>Title</th><th>Platform</th><th>Type</th><th>Date</th></tr>
"""

        for exploit in exploits[:10]:  # Show top 10
            html_content += f"""
                <tr>
                    <td><a href="https://www.exploit-db.com/exploits/{exploit.get('EDB-ID', '')}" target="_blank">{exploit.get('EDB-ID', '')}</a></td>
                    <td>{exploit.get('Title', '')}</td>
                    <td>{exploit.get('Platform', '')}</td>
                    <td>{exploit.get('Type', '')}</td>
                    <td>{exploit.get('Date', '')}</td>
                </tr>
"""

        html_content += """
            </table>
        </div>
"""
else:
    html_content += """
    <div class="cible">
        <h2>✅ No New Exploits</h2>
        <p>No new exploits were detected in the last 24 hours for monitored targets.</p>
    </div>
"""

html_content += """
</body>
</html>
"""

with open('$report_file', 'w') as f:
    f.write(html_content)

print(f"[+] Monitoring report generated: $report_file")
EOF
}

# Function to cleanup old logs
cleanup_logs() {
    echo "[+] Cleaning up old monitoring logs"

    # Keep logs for 30 days
    find "$LOG_DIR" -name "*.json" -mtime +30 -delete
    find "$LOG_DIR" -name "*.log" -mtime +30 -delete
    find "$LOG_DIR" -name "*.html" -mtime +7 -delete
\}

# Main monitoring loop
echo "[+] Starting continuous exploit monitoring"
echo "[+] Check interval: $((CHECK_INTERVAL / 60)) minutes"

last_update=0

while true; do
    echo "[+] Starting monitoring cycle at $(date)"

    # Update database if needed
    current_time=$(date +%s)
    if [ "$UPDATE_DATABASE" = "true" ] && [ $((current_time - last_update)) -ge $UPDATE_INTERVAL ]; then
        if update_database; then
            last_update=$current_time
        fi
    fi

    # Check each monitored target
    echo "$MONITOR_cibleS" | while read -r cible; do
        # Skip empty lines
        [ -z "$cible" ] && continue

        check_new_exploits "$cible"
    done

    # Generate daily report and cleanup
    generate_monitoring_report
    cleanup_logs

    echo "[+] Monitoring cycle completed at $(date)"
    echo "[+] Next check in $((CHECK_INTERVAL / 60)) minutes"

    sleep "$CHECK_INTERVAL"
done
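The `comm`-based diff in the monitoring script can be hard to follow; the detection step reduces to a set difference over EDB-IDs. A minimal standalone Python sketch of the same logic (the scan data below is hypothetical, shaped like entries from `searchsploit -j`):

```python
def find_new_exploits(current, previous):
    """Return entries from `current` whose EDB-ID was absent in `previous`."""
    previous_ids = {e.get("EDB-ID") for e in previous}
    return [e for e in current if e.get("EDB-ID") not in previous_ids]

# Hypothetical scan results, shaped like RESULTS_EXPLOIT entries
previous_scan = [{"EDB-ID": "10001", "Title": "Example Service - DoS"}]
current_scan = [
    {"EDB-ID": "10001", "Title": "Example Service - DoS"},
    {"EDB-ID": "10002", "Title": "Example Service - Remote Code Execution"},
]

new = find_new_exploits(current_scan, previous_scan)
print([e["EDB-ID"] for e in new])  # ['10002']
```

Unlike `comm`, the set lookup does not require the ID lists to be sorted first.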

Integration with Security Tools

Metasploit Integration

# Search for Metasploit modules using SearchSploit
searchsploit metasploit apache

# Find exploits with Metasploit modules
searchsploit -j apache | jq -r '.RESULTS_EXPLOIT[] | select(.Title | contains("Metasploit")) | .["EDB-ID"]'

# Cross-reference with Metasploit database
msfconsole -q -x "search edb:12345; exit"
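Launching `msfconsole` once per EDB-ID is slow; the lookups can be chained into a single invocation instead. A small sketch that builds such a one-liner (the IDs are placeholders):

```python
def build_msf_search(edb_ids):
    """Chain `search edb:<id>` commands into one msfconsole one-liner."""
    searches = "; ".join(f"search edb:{edb_id}" for edb_id in edb_ids)
    return f'msfconsole -q -x "{searches}; exit"'

print(build_msf_search(["12345", "23456"]))
# msfconsole -q -x "search edb:12345; search edb:23456; exit"
```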

Nmap Integration

# Use SearchSploit with Nmap scan results
nmap -sV target.com | grep -E "^[0-9]+/tcp" | while read -r line; do
    service=$(echo "$line" | awk '{print $3}')
    version=$(echo "$line" | awk '{print $4" "$5}')
    echo "Searching exploits for: $service $version"
    searchsploit "$service $version"
done

# Create Nmap script using SearchSploit
cat > searchsploit.nse << 'EOF'
local nmap = require "nmap"
local shortport = require "shortport"
local stdnse = require "stdnse"

description = [[
Uses SearchSploit to find exploits for detected services.
]]

author = "Security Researcher"
license = "Same as Nmap--See https://nmap.org/book/man-legal.html"
categories = {"discovery", "safe"}

portrule = shortport.version_port_or_service()

action = function(host, port)
    local service = port.service
    local version = port.version

    if service and version then
        local cmd = string.format("searchsploit '%s %s'", service, version.version or "")
        local result = os.execute(cmd)
        return string.format("SearchSploit query: %s", cmd)
    end

    return nil
end
EOF
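The `awk` field extraction in the pipeline above assumes nmap's `PORT STATE SERVICE VERSION` column layout; the same parsing is easier to adjust and validate in Python (the sample line is illustrative):

```python
def parse_nmap_service_line(line):
    """Split an `nmap -sV` port line into (service, version) search terms."""
    fields = line.split()
    # Expect at least "PORT STATE SERVICE", with the port field like "80/tcp"
    if len(fields) < 3 or "/tcp" not in fields[0]:
        return None
    service = fields[2]
    version = " ".join(fields[3:5])  # mirrors awk '{print $4" "$5}'
    return service, version

print(parse_nmap_service_line("80/tcp open http Apache httpd 2.4.49"))
# ('http', 'Apache httpd')
```

The returned pair can be joined into a `searchsploit` query string, with non-port lines (banners, headers) filtered out as `None`.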

Burp Suite Integration

# Export SearchSploit results for Burp Suite
searchsploit -j web | jq -r '.RESULTS_EXPLOIT[] | select(.Type | contains("webapps")) | .Title' > burp_payloads.txt

# Create a Burp Suite payload collection by mirroring webapp exploits
mkdir -p /tmp/burp_exploits && cd /tmp/burp_exploits
searchsploit -j webapps | jq -r '.RESULTS_EXPLOIT[] | .["EDB-ID"]' | while read -r id; do
    searchsploit -m "$id"
done
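When jq is unavailable, the same webapps filter can be applied to saved `searchsploit -j` output with Python's json module; a sketch over hypothetical sample data:

```python
import json

def webapp_titles(raw_json):
    """Return titles of entries whose Type contains 'webapps'."""
    data = json.loads(raw_json)
    return [e["Title"] for e in data.get("RESULTS_EXPLOIT", [])
            if "webapps" in e.get("Type", "")]

# Hypothetical output shaped like `searchsploit -j web`
sample = '''{"RESULTS_EXPLOIT": [
  {"Title": "ExampleCMS 1.0 - SQL Injection", "Type": "webapps"},
  {"Title": "exampled 2.4 - Buffer Overflow", "Type": "remote"}
]}'''
print(webapp_titles(sample))  # ['ExampleCMS 1.0 - SQL Injection']
```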

Troubleshooting

Common Issues

Database Problems

# Database not found - verify install location
which searchsploit
ls -la /opt/exploitdb/

# Rebuild/refresh the database
searchsploit -u

# Fix permissions
sudo chown -R $USER:$USER /opt/exploitdb/

# Manual database update
cd /opt/exploitdb && git pull

Search Issues

# No results found - update the database and retry
searchsploit -u

# Check how the search term is matched
searchsploit -e "exact match"   # Exact title match
searchsploit -c Apache          # Case-sensitive search
searchsploit -t apache          # Search titles only

File Access Problems

# Permission denied
sudo chmod +x /opt/exploitdb/searchsploit

# File not found
searchsploit -p 12345
ls -la /opt/exploitdb/exploits/

# Copy issues (searchsploit -m copies into the current directory)
cd /tmp && searchsploit -m 12345
ls -la /tmp/

Performance Issues

# Slow searches
searchsploit apache linux       # Add a platform term to narrow results
searchsploit -t apache          # Title-only search is faster
searchsploit apache | head -20  # Limit displayed results

# Large database
du -sh /opt/exploitdb/
cd /opt/exploitdb && git gc --aggressive   # Compact the git repo

# Memory issues
ulimit -v 1000000                      # Limit virtual memory

Resources


This cheat sheet provides a comprehensive reference for using SearchSploit for exploit research and vulnerability assessment. Always ensure you have proper authorization before using any exploits in any environment.
