WordlistRaider

WordlistRaider is a sophisticated wordlist optimization and generation tool designed to create highly targeted, efficient wordlists for penetration testing. Rather than using generic wordlists, WordlistRaider intelligently generates custom wordlists based on reconnaissance data, significantly improving the success rate of password and directory brute-force attacks.

Key capabilities include:

  • Intelligent wordlist generation from target metadata
  • Multiple wordlist combination and deduplication
  • Custom rule-based wordlist manipulation
  • Wordlist filtering and optimization
  • Integration with reconnaissance data
# Quick start: install from source
git clone https://github.com/cakinney/wordlistraider.git
cd wordlistraider
pip install -r requirements.txt
python wordlistraider.py --help

# Or install via pip
pip install wordlistraider
wordlistraider --help
# Clone repository
git clone https://github.com/cakinney/wordlistraider.git

# Install Python 3.7+
python3 --version

# Install dependencies
cd wordlistraider
pip3 install -r requirements.txt

# Run directly
python3 wordlistraider.py
docker pull cakinney/wordlistraider
docker run -it cakinney/wordlistraider --help
Command                         Description
wordlistraider -u <url>         Generate wordlist from target URL
wordlistraider -f <file>        Process file for wordlist generation
wordlistraider -c <file>        Combine multiple wordlists
wordlistraider -m <wordlist>    Mutate wordlist with rules
wordlistraider -o <output>      Specify output file
wordlistraider --help           Show all available options
wordlistraider -u https://example.com -o custom_wordlist.txt

Analyzes the target website’s content and extracts relevant words, company names, and technical terms to create a highly targeted wordlist for password attacks.

wordlistraider -c wordlist1.txt wordlist2.txt wordlist3.txt -o combined_wordlist.txt

Combines multiple wordlist sources, removes duplicates, and creates a consolidated wordlist optimized for efficiency.
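The combine-and-deduplicate step can be approximated with standard shell tools, which is also a quick way to measure how much overlap your sources have. The file names and sample entries below are illustrative, not part of WordlistRaider:

```shell
# Sample source lists (stand-ins for your own wordlists)
printf 'admin\npassword\nroot\n' > wordlist1.txt
printf 'password\nletmein\n'     > wordlist2.txt

# Merge, deduplicate, and report the reduction
cat wordlist1.txt wordlist2.txt > merged_raw.txt
sort -u merged_raw.txt > merged_dedup.txt

echo "Before dedup: $(wc -l < merged_raw.txt) entries"
echo "After dedup:  $(wc -l < merged_dedup.txt) entries"
```

The difference between the two counts tells you how redundant your sources are; heavily overlapping lists gain little from being combined.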

wordlistraider -f base_wordlist.txt -m rules.txt -o mutated_wordlist.txt

Applies sophisticated mutation rules (capitalizations, number appending, special characters) to expand the wordlist intelligently.
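The kind of expansion these rules produce can be sketched with a plain bash function; `mutate_word` below is a hypothetical helper for illustration only, and WordlistRaider's actual rule engine is richer than this:

```shell
# Minimal sketch of rule-based mutation: for each base word, emit the
# original, a capitalized variant, an uppercased variant, and a few
# common digit/symbol suffixes. Requires bash 4+ for ${w^} / ${w^^}.
mutate_word() {
  local w=$1
  echo "$w"
  echo "${w^}"          # Capitalize first letter
  echo "${w^^}"         # UPPERCASE
  for suffix in 1 123 '!' 2024; do
    echo "${w}${suffix}"
  done
}

mutate_word password
```

Even this toy version turns one word into seven candidates; real rule sets multiply list size quickly, which is why the mutation-limit advice later in this guide matters.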

wordlistraider -u https://target.company.com --extract-text -o company_wordlist.txt

Scrapes and analyzes all text content from the website, extracting domain-relevant vocabulary for targeted attacks.

# Extract from target and apply mutations
wordlistraider -u https://example.com \
  --extract-text \
  --apply-mutations \
  --min-length 4 \
  --max-length 20 \
  -o example_advanced.txt

Extracts words from the target, applies intelligent mutations, and filters by length for better results.

# Extract company information
# Then generate wordlist from company name, domain, products, etc.

# For company: ACME Corp, domain: acme.com, products: SecureVault
cat > acme_terms.txt << 'EOF'
acme
corp
vault
secure
password
admin
test
ACME
SecureVault
EOF

# Generate mutations
wordlistraider -f acme_terms.txt -m mutations.txt -o acme_passwords.txt

# View results
head -20 acme_passwords.txt
#!/bin/bash
# Target organization: TechCorp
ORG="techcorp"

# Create base wordlist from reconnaissance
cat > ${ORG}_base.txt << 'EOF'
techcorp
technology
corporation
secure
cloud
server
database
password
admin
user
EOF

# Apply mutations
wordlistraider -f ${ORG}_base.txt -m standard_mutations.txt -o ${ORG}_mutations.txt

# Combine with common wordlists
wordlistraider -c ${ORG}_mutations.txt common_passwords.txt -o ${ORG}_final.txt

# Statistics
echo "Final wordlist size:"
wc -l ${ORG}_final.txt
# Combine multiple intelligence sources
wordlistraider \
  -c \
  industry_wordlist.txt \
  company_wordlist.txt \
  extracted_website_words.txt \
  common_tech_terms.txt \
  domain_names.txt \
  -o comprehensive_wordlist.txt

# Remove duplicates and sort
sort -u comprehensive_wordlist.txt > final_wordlist.txt
# Create mutations.txt with transformation rules
cat > mutations.txt << 'EOF'
# Capitalization
$U  # UPPERCASE
$L  # lowercase
$C  # Capitalize
$T  # Title

# Appending
$1, $2, $3, $4, $5
$!, $@, $#, $%

# Prefixing
^admin, ^test, ^user

# Common patterns
password123, admin123, letmein, welcome
EOF

# Apply mutations
wordlistraider -f base_wordlist.txt -m mutations.txt -o mutated.txt
# Input word: password
# After mutations generates:
password
PASSWORD
Password
PassWord
password1
password123
password!
admin_password
test_password
password@123
# ... and hundreds more depending on rules
# Generate wordlist, filter by length
wordlistraider -u https://example.com \
  --min-length 5 \
  --max-length 25 \
  -o filtered_wordlist.txt

Removes very short words (common false positives) and very long words (unlikely passwords).
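Before picking bounds, it can help to look at the actual length distribution of a candidate list; this uses only standard tools, and the sample list here is illustrative:

```shell
# Build a small sample list, then count how many words fall at each length.
printf 'ab\nadmin\nletmein\npassword\nsupersecretpassphrase\n' > sample.txt

# Output columns: count, word length
awk '{ print length($0) }' sample.txt | sort -n | uniq -c
```

Outliers at the extremes (2-character fragments, 20+ character scraped phrases) are usually safe to cut with the min/max length filters.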

# Clean wordlist
cat multiple_wordlists.txt | sort -u > deduplicated.txt

# Remove whitespace-only lines
grep -v '^[[:space:]]*$' deduplicated.txt > cleaned.txt

# Count unique entries
wc -l cleaned.txt
# Create frequency analysis
cat passwords.txt | sort | uniq -c | sort -rn > frequency.txt

# Extract top N most common
head -100 frequency.txt | awk '{print $NF}' > top_passwords.txt
#!/bin/bash
TARGET="https://example.com"

# Step 1: Extract all text from website
curl -s "$TARGET" | \
  sed 's/<[^>]*>//g' | \
  tr ' ' '\n' | \
  grep -E '^[a-zA-Z]{4,}$' | \
  sort -u > website_words.txt

# Step 2: Extract company info
# Manually add company name, products, domain terms
cat >> website_words.txt << 'EOF'
example
company
product1
product2
service
technology
EOF

# Step 3: Generate wordlist with mutations
wordlistraider -f website_words.txt -m mutations.txt -o final_wordlist.txt

# Step 4: Optimize final list
sort -u final_wordlist.txt | \
  awk 'length($0) >= 5 && length($0) <= 20' > optimized_wordlist.txt
#!/bin/bash
# Extract employee names from company LinkedIn
# Save to names.txt

# Common password patterns with names
while read -r name; do
  echo "$name"
  echo "${name}!"
  echo "${name}123"
  echo "${name}@123"
  echo "Welcome${name}"
done < names.txt > linkedin_passwords.txt

# Combine with other wordlists
wordlistraider -c linkedin_passwords.txt standard_wordlist.txt -o combined.txt
#!/bin/bash
# Clone target repository
git clone https://github.com/target/repo.git
cd repo

# Extract all identifiers, function names, variables
find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.java" \) | \
  xargs grep -oh '[a-zA-Z_][a-zA-Z0-9_]*' | \
  sort -u > code_identifiers.txt

# Generate wordlist
wordlistraider -f code_identifiers.txt -m mutations.txt -o code_wordlist.txt
# Check wordlist size
ls -lh wordlist.txt

# If too large, filter by frequency or length
# Top 50,000 most likely passwords
head -50000 wordlist.txt > reduced_wordlist.txt

# Optimize for specific tool (Hashcat, John, etc.)
wc -l wordlist.txt
# Split large wordlist for parallel brute-forcing
split -l 10000 large_wordlist.txt chunk_

# Process each chunk in parallel
for chunk in chunk_*; do
  hashcat -m 1000 hashes.txt "$chunk" &
done
wait
# Stream processing for very large wordlists
cat huge_wordlist.txt | \
  awk 'length($0) >= 8 && length($0) <= 20 { print }' | \
  sort -u | \
  head -100000 > filtered_wordlist.txt
# Generate custom wordlist
wordlistraider -u https://target.com -o target_wordlist.txt

# Use with Hashcat
hashcat -m 1000 hashes.txt target_wordlist.txt

# With mutations in Hashcat
hashcat -m 1000 hashes.txt target_wordlist.txt -r rules/best64.rule
# Generate wordlist
wordlistraider -f base_words.txt -o wordlist.txt

# Wordlist mode with the Single-crack ruleset
john --wordlist=wordlist.txt --rules=Single hashes.txt

# Plain wordlist mode (no rules)
john --wordlist=wordlist.txt hashes.txt
# Generate HTTP service wordlist
wordlistraider -u https://target.com/login -o login_wordlist.txt

# HTTP form brute-force
hydra -L users.txt -P login_wordlist.txt target.com http-post-form \
  "/login.php:user=^USER^&pass=^PASS^:F=incorrect"
# Generate common keyboard patterns
cat > keyboard_patterns.txt << 'EOF'
qwerty
asdfgh
zxcvbn
qweasd
12345
123456
EOF

# Combine with other wordlists
wordlistraider -c keyboard_patterns.txt other_wordlists.txt -o combined.txt
# Create l33t mutations
cat > leet_mutations.txt << 'EOF'
# Replace common letters
s -> $
a -> @
e -> 3
i -> 1
o -> 0
l -> 1
EOF

# Apply mutations
wordlistraider -f base_wordlist.txt -m leet_mutations.txt -o leet_wordlist.txt
# Generate time-based passwords (common default patterns)
for year in 2020 2021 2022 2023 2024 2025; do
  echo "$year"
  for month in 01 02 03 04 05 06 07 08 09 10 11 12; do
    echo "${year}${month}"
  done
done >> time_passwords.txt

# Combine with other wordlists
wordlistraider -c time_passwords.txt main_wordlist.txt -o final.txt
  1. Target-Specific: Always customize wordlists based on reconnaissance
  2. Organization: Keep separate wordlists for different organizations
  3. Mutation Limits: Don’t create excessively large wordlists (>100MB)
  4. Deduplication: Always remove duplicates before use
  5. Length Filtering: Filter words by realistic password length
  6. Combination Strategy: Combine multiple sources strategically
  7. Documentation: Track wordlist provenance and creation date
  8. Authorization: Only use for authorized security testing
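Point 7 (provenance) can be as simple as writing a companion metadata file whenever a list is generated; the field names and values below are one possible convention, not a WordlistRaider feature:

```shell
# Record where a wordlist came from and when it was built.
# The wordlist contents and target URL here are illustrative.
WORDLIST=target_wordlist.txt
printf 'admin\npassword\n' > "$WORDLIST"   # stand-in for a generated list

cat > "${WORDLIST}.meta" << EOF
created:  $(date -u +%Y-%m-%dT%H:%M:%SZ)
target:   https://example.com
sources:  website text extraction, common_passwords.txt
entries:  $(wc -l < "$WORDLIST")
EOF

cat "${WORDLIST}.meta"
```

A sidecar file like this costs nothing and answers the inevitable "where did this list come from?" question months later.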
# If wordlist is too large
wc -l huge_wordlist.txt

# Reduce by filtering
awk 'length($0) >= 8 && length($0) <= 16' huge_wordlist.txt > optimized.txt

# Or take most likely subset
head -50000 huge_wordlist.txt > reduced.txt
# Monitor memory during generation
time wordlistraider -u https://example.com -o wordlist.txt

# If running out of memory, process in chunks
# First extract words, then apply mutations separately
# Ensure UTF-8 encoding
file wordlist.txt

# Convert if needed
iconv -f ISO-8859-1 -t UTF-8 wordlist.txt > wordlist_utf8.txt
Strategy                   Size               Speed       Success Rate
Generic (rockyou.txt)      Large (100K+)      Slow        Medium
Organization-Specific      Medium (10K-50K)   Fast        High
Fully Customized           Small (1K-10K)     Very Fast   Very High
# Crunch - generates all permutations (enormous wordlists)
crunch 8 12 -o permutations.txt  # With the default charset, output can reach terabytes

# WordlistRaider - intelligently filters (targeted wordlists)
wordlistraider -u https://target.com -o wordlist.txt  # Likely <10MB

WordlistRaider is essential for creating highly targeted, efficient wordlists that significantly improve brute-force success rates. By intelligently combining reconnaissance data with sophisticated mutations, it enables authorized security professionals to conduct more effective password and directory enumeration testing with minimal resource overhead.