- Linux, macOS, or Windows with WSL2
- Docker 20.10+ and Docker Compose
- Python 3.12+
- Node.js 18+ (for the Web UI)
- 4GB+ RAM (8GB recommended)
# macOS & Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Verify installation
uv --version
# Ubuntu/Debian
sudo apt update && sudo apt install -y docker.io docker-compose
# macOS
brew install --cask docker
# Verify
docker --version && docker compose version
# Ubuntu/Debian - Node.js
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install -y nodejs
# macOS - Node.js
brew install node@18
# Install pnpm
npm install -g pnpm
# or
corepack enable && corepack prepare pnpm@latest --activate
# Clone repository
git clone https://github.com/yohannesgk/blacksmith.git
cd blacksmith
# Install Python dependencies
cd blacksmithAI/blacksmithAI
uv sync
# Build and start the mini-kali Docker container
docker compose up -d
# Optional: Install frontend dependencies
cd ../frontend && pnpm install
# Copy environment template
cp blacksmithAI/.env.example blacksmithAI/.env
# Edit .env file
nano blacksmithAI/.env
# Add to .env
OPENROUTER_API_KEY=your-openrouter-api-key-here
# Install VLLM
cd blacksmithAI/blacksmithAI
uv add vllm huggingface_hub
# Start VLLM server
vllm serve mistralai/Devstral-2-123B-Instruct-2512 \
--host 0.0.0.0 \
--port 8000 \
--max-model-len 8192 \
--gpu-memory-utilization 0.75
Edit blacksmithAI/config.json:
{
"defaults": {
"provider": "openrouter"
},
"providers": {
"openrouter": {
"base_url": "https://openrouter.ai/api/v1/chat/completions",
"default_model": "mistralai/devstral-2512:free",
"default_embedding_model": "openai/text-embedding-3-small",
"default_model_config": {
"context_size": 200000,
"max_retries": 3,
"max_tokens": null
}
},
"vllm": {
"base_url": "http://localhost:8000/v1/chat/completions",
"default_model": "mistralai/devstral-2512",
"default_embedding_model": "text-embedding-3-small",
"default_model_config": {
"context_size": 200000,
"max_retries": 3
}
}
}
}
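A minimal sketch of how a provider lookup against this config structure could work (this mirrors the `config.json` shape above but is illustrative only; the actual loading code in BlacksmithAI may differ):

```python
import json

# Trimmed-down config mirroring the structure of blacksmithAI/config.json
config = json.loads("""
{
  "defaults": {"provider": "openrouter"},
  "providers": {
    "openrouter": {
      "base_url": "https://openrouter.ai/api/v1/chat/completions",
      "default_model": "mistralai/devstral-2512:free"
    },
    "vllm": {
      "base_url": "http://localhost:8000/v1/chat/completions",
      "default_model": "mistralai/devstral-2512"
    }
  }
}
""")

def resolve_provider(cfg, name=None):
    """Return (name, settings) for the requested provider, falling back to defaults."""
    name = name or cfg["defaults"]["provider"]
    try:
        return name, cfg["providers"][name]
    except KeyError:
        raise ValueError(f"Provider {name!r} not defined in config") from None

name, settings = resolve_provider(config)
print(name, settings["base_url"])
```

Switching providers is then just a matter of changing `defaults.provider`, since every provider entry carries its own `base_url` and model defaults.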
# Add to config.json providers section
"openai": {
"base_url": "https://api.openai.com/v1/chat/completions",
"default_model": "gpt-4-turbo",
"default_embedding_model": "text-embedding-3-small"
}
# Add to .env
OPENAI_API_KEY=your-openai-key
# Update defaults
"defaults": { "provider": "openai" }
# Terminal 1: Start mini-kali container
cd blacksmithAI/blacksmithAI
docker compose up -d
# Terminal 2: Run BlacksmithAI CLI
cd blacksmithAI/blacksmithAI
uv run main.py
# Or use Makefile shortcut
make start-cli
# Terminal 1: Start mini-kali Docker container
cd blacksmithAI/blacksmithAI
docker compose up -d
# Terminal 2: Start frontend (from blacksmithAI/frontend directory)
pnpm build && pnpm start
# Terminal 3: Start LangGraph dev server (from blacksmithAI/blacksmithAI)
uv run langgraph dev
# Access at http://localhost:3000
# Start container
docker compose up -d
# View logs
docker logs mini-kali-slim
# Stop container
docker compose down
# Restart container
docker compose restart
# Remove all containers
docker compose down -v
- Role: Central command and control
- Function: Mission planning, task delegation, report generation
- Available tools: Planning tools, filesystem tools
- Purpose: Passive and active information gathering
- Tools: assetfinder, subfinder, whois, dig, nslookup, hping3, dnsrecon
- Output: DNS records, subdomains, network topology
- Purpose: User enumeration, API probing, version detection
- Tools: nmap, masscan, enum4linux-ng, nikto, whatweb, fingerprintx, gobuster, wpscan
- Output: Open ports, services, technologies, endpoints
- Purpose: Map services to CVEs, prioritize risks
- Tools: nuclei, sslscan
- Output: Vulnerability list with severity and exploitability ratings
- Purpose: Validate vulnerabilities with controlled exploits
- Tools: sqlmap, hydra, medusa, ncrack, custom scripts (Python/Go/Perl/Ruby)
- Output: Proof of compromise with supporting evidence
- Purpose: Assess blast radius and pivot opportunities
- Tools: netcat, socat, SSH tunneling, impacket
- Output: Lateral movement paths, credential inventory, business impact
# Typical workflow sequence
1. Target specification (domain/IP)
2. Orchestrator initiates recon phase
3. Recon Agent discovers: subdomains, DNS records, mail servers
4. Results fed to Scan/Enum agent
# Complete assessment workflow
Orchestrator (planning)
↓
Recon Agent (attack surface) → output: services & technologies
↓
Scan/Enum Agent (deep inspection) → output: ports & versions
↓
Vuln Analysis Agent (risk mapping) → output: CVE list with scores
↓
Exploit Agent (PoC validation) → output: working exploits
↓
Post-Exploit Agent (impact) → output: lateral movement paths
↓
Orchestrator (final report with remediation)
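The hand-off between phases can be sketched as plain function composition. The functions below are purely illustrative stand-ins for the agents (the real agents run LLM-driven tools), and the CVE ID is a placeholder:

```python
# Each "agent" takes the previous phase's state and enriches it,
# mirroring the sequential pipeline in the diagram above.
def recon(target):
    return {"target": target, "subdomains": ["www." + target, "api." + target]}

def scan_enum(state):
    return {**state, "services": [{"port": 443, "name": "https", "version": "nginx/1.18"}]}

def vuln_analysis(state):
    findings = [{"id": "CVE-XXXX-YYYY", "service": s["name"], "severity": "HIGH"}
                for s in state["services"]]
    return {**state, "findings": findings}

def build_report(state):
    # Orchestrator collapses the accumulated state into the final report
    return {"target": state["target"], "findings": state["findings"]}

report = build_report(vuln_analysis(scan_enum(recon("example.com"))))
```

The key design point is that each phase only consumes the structured output of the phase before it, which is what lets the orchestrator also skip phases (as in the direct-to-vuln-analysis flow below).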
# Feed identified services directly to vuln analysis
1. Provide service information (version, port)
2. Vuln Analysis Agent maps to CVEs
3. Exploit Agent tests high-priority vulnerabilities
| Tool | Command | Purpose |
|---|---|---|
| assetfinder | assetfinder target.com | Subdomain discovery |
| subfinder | subfinder -d target.com | Subdomain enumeration |
| whois | whois target.com | Domain info lookup |
| dig | dig target.com | DNS record queries |
| dnsrecon | dnsrecon -d target.com | DNS enumeration |
| hping3 | hping3 -p 80 target.com | Network scanning |
| Tool | Command | Purpose |
|---|---|---|
| nmap | nmap -sV target.com | Port scanning & version detection |
| masscan | masscan 0.0.0.0/0 -p80,443 | High-speed port scanning |
| nikto | nikto -h target.com | Web server vulnerability scanning |
| gobuster | gobuster dir -u http://target.com -w wordlist.txt | Directory/DNS brute-forcing |
| wpscan | wpscan --url target.com | WordPress vulnerability scanning |
| whatweb | whatweb target.com | Web technology identification |
| Tool | Command | Purpose |
|---|---|---|
| nuclei | nuclei -target target.com | Fast vulnerability scanning |
| sslscan | sslscan target.com:443 | SSL/TLS configuration analysis |
| Tool | Purpose |
|---|---|
| sqlmap | Automated SQL injection testing |
| hydra | Password brute-forcing |
| medusa | Parallel network login auditing |
| ncrack | Network authentication cracking |
| Custom Scripts | Python/Go/Perl/Ruby exploit development |
| Tool | Purpose |
|---|---|
| netcat | Network debugging & data transfer |
| socat | Multi-purpose relay |
| ssh -D | SOCKS proxy tunneling for pivoting |
| impacket | Windows protocol tools (psexec, secretsdump) |
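Agents typically need to parse tool output into structured data before handing it to the next phase. As an example, a single line of nmap's grepable output (`-oG`) can be reduced to `(port, service)` pairs; the sample line below is hardcoded input, not live scan output:

```python
import re

# One line of nmap "grepable" (-oG) output, used here as sample input.
sample = "Host: 192.0.2.10 ()\tPorts: 22/open/tcp//ssh//OpenSSH 8.9/, 80/open/tcp//http//nginx 1.18/"

def open_ports(grepable_line):
    """Extract (port, service) pairs for open ports from an -oG line."""
    return [(int(m.group(1)), m.group(2))
            for m in re.finditer(r"(\d+)/open/\w+//([\w-]*)", grepable_line)]

print(open_ports(sample))  # → [(22, 'ssh'), (80, 'http')]
```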
# Orchestrator generates structured reports containing:
- Executive Summary
- Findings with severity ratings
- Evidence and proof-of-concept details
- Affected systems and services
- Remediation guidance
- Business impact assessment
{
"mission_name": "Penetration Test - Example Corp",
"target": "example.com",
"findings": [
{
"vulnerability": "SQL Injection in login form",
"severity": "CRITICAL",
"affected_service": "Web Application v2.1",
"evidence": "SELECT version() returned database version",
"remediation": "Use prepared statements, input validation"
}
],
"timeline": "2025-03-10T14:30:00Z",
"tested_systems": ["web-server", "api-gateway", "database"]
}
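Because reports are plain JSON, they are easy to post-process. A small sketch that tallies findings by severity (the second finding is invented here for illustration):

```python
import json
from collections import Counter

# Trimmed-down report in the same shape as the sample above;
# the "Outdated TLS configuration" finding is illustrative only.
report_json = """
{
  "mission_name": "Penetration Test - Example Corp",
  "target": "example.com",
  "findings": [
    {"vulnerability": "SQL Injection in login form", "severity": "CRITICAL"},
    {"vulnerability": "Outdated TLS configuration", "severity": "MEDIUM"}
  ]
}
"""

report = json.loads(report_json)
by_severity = Counter(f["severity"] for f in report["findings"])
print(dict(by_severity))  # → {'CRITICAL': 1, 'MEDIUM': 1}
```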
{
"providers": {
"custom-provider": {
"base_url": "https://your-api-endpoint.com/v1/chat/completions",
"default_model": "your-model-name",
"default_embedding_model": "embedding-model",
"default_model_config": {
"context_size": 200000,
"max_retries": 3,
"max_tokens": null
}
}
}
}
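Adding a provider entry like the one above can also be done programmatically. This is a hypothetical helper, shown against a temporary file rather than the real `blacksmithAI/config.json`:

```python
import json, os, tempfile

def add_provider(path, name, settings):
    """Insert (or overwrite) a provider entry in a config.json-style file."""
    with open(path) as f:
        cfg = json.load(f)
    cfg.setdefault("providers", {})[name] = settings
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg

# Demo against a temporary copy so the real config is untouched
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"defaults": {"provider": "openrouter"}, "providers": {}}, tmp)
    path = tmp.name

cfg = add_provider(path, "custom-provider", {
    "base_url": "https://your-api-endpoint.com/v1/chat/completions",
    "default_model": "your-model-name",
})
os.remove(path)
```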
# Small model (7B) - faster, less memory
vllm serve mistralai/Mistral-7B-Instruct-v0.2 \
--host 0.0.0.0 --port 8000 --gpu-memory-utilization 0.75
# Large model (123B) - more capable
vllm serve mistralai/Devstral-2-123B-Instruct-2512 \
--host 0.0.0.0 --port 8000 --max-model-len 8192
# Custom model from HuggingFace
uv run hf auth login # Login to HF first
vllm serve meta-llama/Llama-2-7b-chat-hf \
--host 0.0.0.0 --port 8000
# Setup & Installation
make help # Show all commands
make setup # Complete initial setup
make install # Python dependencies only
make frontend-install # Frontend dependencies
# Docker Management
make docker-build # Build mini-kali image
make docker-up # Start container
make docker-down # Stop container
make docker-logs # View logs
# VLLM Local LLM
make vllm-install # Install VLLM
make vllm-serve # Start VLLM (123B)
make vllm-serve-small # Start VLLM (7B)
# Running BlacksmithAI
make start-cli # CLI mode
make start-ui # Web UI (shows setup)
make start-all # Quick start CLI
# Utilities
make status # Docker container status
make check-deps # Verify dependencies
make check-config # Verify configuration
make clean # Clean up containers
make stop # Stop all services
# Collect data with external tools
assetfinder -subs-only target.com > subdomains.txt
# Feed into BlacksmithAI
# Agents will perform targeted scans on discovered subdomains
- Nmap output → Fed to Vuln Analysis for service mapping
- Subdomain lists → Targeted scanning by Scan/Enum agent
- API endpoints → Direct vulnerability testing
- Authentication systems → Brute-force & exploitation attempts
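When merging output from several external tools (e.g. assetfinder and subfinder runs), hosts usually need normalizing and deduplicating before the agents consume them. A hypothetical helper:

```python
def merge_subdomain_lists(*lists):
    """Normalize, dedupe, and sort subdomain lists from multiple tools."""
    seen, merged = set(), []
    for lst in lists:
        for host in lst:
            host = host.strip().lower().rstrip(".")  # drop trailing dots, unify case
            if host and host not in seen:
                seen.add(host)
                merged.append(host)
    return sorted(merged)

hosts = merge_subdomain_lists(
    ["www.target.com", "API.target.com"],
    ["api.target.com", "mail.target.com."],
)
print(hosts)  # → ['api.target.com', 'mail.target.com', 'www.target.com']
```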
# Write custom exploitation scripts in agent directories
# Agents can execute Python/Go/Perl/Ruby scripts
# Example: Custom SQL injection payload
cat > custom_exploit.py << 'EOF'
import sys
target = sys.argv[1]
# Custom exploitation logic
EOF
# Agents invoke: python custom_exploit.py target.com
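As a concrete (and deliberately benign) sketch of the kind of logic such a script might contain, the hypothetical helper below checks an already-captured HTTP response body for common SQL error signatures — it performs no network activity itself:

```python
# Hypothetical detection helper in the style of the custom scripts above:
# given a saved response body, flag likely SQL injection error leakage.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",  # MySQL
    "unclosed quotation mark",               # MSSQL
    "pg_query(): query failed",              # PostgreSQL
    "sqlite3.operationalerror",              # SQLite
]

def looks_injectable(response_body: str) -> bool:
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

sample = "Warning: You have an error in your SQL syntax near ''1'''"
print(looks_injectable(sample))  # → True
```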
# Define scope in writing
- Target list (domains/IPs)
- Testing timeframe
- Authorized systems & attack vectors
- Reporting requirements
- Sensitive data handling
# Validate legal authorization
- Written permission from client/owner
- Rules of engagement (RoE)
- NDA and confidentiality agreements
# Recommendations
1. Use isolated test environments when possible
2. Start with reconnaissance only
3. Test vulnerabilities sequentially, not in parallel
4. Avoid destructive payloads without explicit approval
5. Monitor system behavior during exploitation
6. Maintain detailed logs of all activities
# Comprehensive findings should include:
- Clear vulnerability description
- CVSS/severity scoring
- Proof-of-concept evidence
- Affected services and versions
- Business impact assessment
- Prioritized remediation steps
- Timeline of exploitation
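For the severity scoring item above, the CVSS v3.1 qualitative severity scale maps numeric scores to the labels used in reports. A direct transcription:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "NONE"
    if score <= 3.9:
        return "LOW"
    if score <= 6.9:
        return "MEDIUM"
    if score <= 8.9:
        return "HIGH"
    return "CRITICAL"

print(cvss_severity(9.8))  # → CRITICAL
```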
# Model choice affects results:
- Faster models: Quick reconnaissance, basic scanning
- Larger models: Complex analysis, better decision-making
- Local VLLM: Privacy-focused, no API dependencies
- OpenRouter: Cost-effective, multiple model options
# Recommendation: Use Sonnet/GPT-4-class models for complex assessments
- Always obtain written authorization before testing
- Clearly define scope in the rules of engagement
- Only test systems you own or have explicit written permission to test
- Violations of laws (CFAA, GDPR, etc.) can lead to civil/criminal consequences
# After testing:
1. Document all findings with severity
2. Notify system owner of vulnerabilities
3. Allow reasonable time for patching (typically 90 days)
4. Maintain confidentiality of vulnerability details
5. Report to vendors if third-party products are affected
- Minimize data collection during testing
- Store all reports and evidence securely
- Use encryption for sensitive results
- Comply with data protection laws (GDPR, HIPAA, etc.)
- Delete data after the engagement ends
# Container won't start
docker ps # Check if running
docker logs mini-kali-slim # View errors
# Port conflicts
lsof -i :9756 # Check port usage
docker run -d --rm -p 9757:9756 mini-kali-slim  # Use a different host port
# OpenRouter issues
# - Verify API key: echo $OPENROUTER_API_KEY
# - Check status: https://status.openrouter.ai/
# VLLM issues
curl http://localhost:8000/v1/models # Verify VLLM running
# - Check memory: free -h
# - Ensure GPU available: nvidia-smi
# Slow responses
# - Switch to faster model in config.json
# - Check system resources (top, htop)
# Agent stuck in loop
# - Reduce task complexity
# - Check tool output for errors
# - Review agent logs for infinite loops
# Dependencies missing
cd blacksmithAI/blacksmithAI
uv sync # Reinstall
- Open Source: GPL-3.0-only (community use, modification, redistribution)
- Commercial License: Available for closed-source integration
- Contact: yohannesgk@kahanlabs.com