
VECTR

VECTR (Vectorized Engagement and Campaign Tracking for Reporting) is Security Risk Advisors’ open-source platform for purple team operations, enabling teams to document adversary emulation campaigns, align techniques to MITRE ATT&CK, and measure detection coverage gaps over time. It bridges red and blue teams by tracking both attack execution and detection outcomes in a unified interface.

VECTR is deployed via Docker Compose from the official repository:

# Clone VECTR repository
git clone https://github.com/SecurityRiskAdvisors/VECTR.git
cd VECTR

# Start Docker Compose (includes nginx, application, and postgres)
docker-compose up -d

# Verify containers are running
docker-compose ps
# Check logs for startup status
docker-compose logs -f app

# Access web interface
# http://localhost:8080 (default)
# or https://localhost:443 (if TLS enabled)

# Default credentials (CHANGE IMMEDIATELY)
# Username: admin
# Password: admin

# docker-compose.yml customization
environment:
  - NODE_ENV=production
  - DB_HOST=postgres
  - DB_PORT=5432
  - DB_USER=vectr
  - DB_PASSWORD=change_me
  - REDIS_HOST=redis
  - REDIS_PORT=6379

  1. Access Web UI

  2. Create First Assessment

    • Click “New Assessment”
    • Enter assessment name (e.g., “Q2 2026 Purple Team Campaign”)
    • Select MITRE ATT&CK version (default: latest)
    • Define assessment scope and objectives
    • Assign team members
  3. Invite Team Members

    • Navigate to Settings → Users
    • Add user email addresses
    • Assign roles: Admin, Red Team, Blue Team, Analyst
    • Send invitations

Component | Purpose
Campaigns | High-level purple team exercise containers
Assessments | Sub-campaigns with specific scope and timeline
Test Cases | Individual adversary emulation techniques and detections
Results | Outcome tracking (detected, alerted, blocked, etc.)
Heat Maps | Visual ATT&CK coverage analysis

Campaigns are top-level containers for purple team activities, representing organization-wide adversary emulation programs:

Campaign Structure:
  - Campaign Name: "2026 Annual Purple Team Program"
  - Duration: Start and end dates
  - Objectives: Measurable goals for coverage improvement
  - Phases: Grouped assessments by campaign phase
  - Participants: Cross-functional team roster

Assessments are scoped sub-campaigns with defined target systems, techniques, and timelines:

Assessment Properties:
  - Name: Specific assessment name
  - Campaign: Parent campaign
  - Target Systems: Scope (endpoints, servers, networks)
  - Start/End Date: Assessment window
  - MITRE Version: ATT&CK version used (v13, v14, etc.)
  - Status: Planning, Active, Complete

Test cases document individual adversary emulation executions:

  • Technique ID: MITRE ATT&CK technique (e.g., T1566.002)
  • Name: Descriptive test case name
  • Description: Attack scenario details
  • Procedure: Step-by-step execution instructions
  • Tool Used: Red team tool (Mimikatz, certutil, etc.)
  • Execution Date: When test was performed
  • Evidence: Screenshots, logs, artifacts
  • Detection Status: Outcome from blue team perspective

Outcomes track both attack execution and detection results:

Red Team Outcome | Blue Team Detection
Success | Detected / Alerted / Blocked
Success | Not Detected
Failure | N/A (technique didn’t execute)
N/A | Not Applicable (not targeted)

Every test case maps to MITRE ATT&CK techniques:

Campaign Heat Map:
  - Reconnaissance: 8/12 techniques covered (67%)
  - Resource Development: 5/10 techniques covered (50%)
  - Initial Access: 6/8 techniques covered (75%)
  - Execution: 12/15 techniques covered (80%)
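The per-tactic figures above reduce to a simple rounded percentage. A quick sketch (the dictionary layout is illustrative, not VECTR's data model):

```python
def tactic_coverage(covered: int, total: int) -> int:
    """Return coverage as a whole-number percentage."""
    return round(100 * covered / total)

# Counts mirror the heat map example above.
heat_map = {
    "Reconnaissance": (8, 12),
    "Resource Development": (5, 10),
    "Initial Access": (6, 8),
    "Execution": (12, 15),
}

for tactic, (covered, total) in heat_map.items():
    print(f"{tactic}: {covered}/{total} techniques covered "
          f"({tactic_coverage(covered, total)}%)")
```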
1. Dashboard → Create Campaign
2. Enter campaign metadata:
   - Campaign Name: "2026 Detection Engineering Program"
   - Campaign Manager: Select lead
   - Objective: "Improve detection coverage in EDR"
   - Start Date: 2026-04-01
   - End Date: 2026-12-31
   - Description: Campaign context and goals
3. Click Create
4. Add phases (e.g., "Phase 1: Initial Access", "Phase 2: Persistence")
# Scope Definition
Target Tactics:
  - Initial Access
  - Execution
  - Persistence
  - Privilege Escalation
  - Defense Evasion

Target Platforms:
  - Windows
  - Linux
  - macOS

Asset Groups:
  - Production Servers
  - Endpoint Devices
  - Network Infrastructure

  • Navigate to Campaign → Technique Selection
  • View full ATT&CK matrix
  • Filter by tactic, platform, or sub-technique
  • Select techniques to target in campaign
  • Export technique list for red team planning
Phase Management:
1. Create phase within campaign
   - Name: "Initial Access & Execution"
   - Duration: 2 weeks
   - Focus areas: Phishing, scripting techniques
2. Link assessments to phases
3. Schedule red team operations by phase
4. Track phase completion and coverage
Assessment → Create Test Case

Required Fields:
  - MITRE Technique ID: T1566.002 (Phishing: Spearphishing Link)
  - Test Case Name: "Phishing Link Campaign to Marketing"
  - Description: Description of attack scenario
  - Attack Procedure: Step-by-step attack execution
  - Tool Used: Browser, domain registrar info
  - Execution Date: When red team executed
  - Red Team Notes: Observations, success/failure details
# Example test case structure
Test Case: T1566.002
├── Tactic: Initial Access
├── Name: Spearphishing Link Delivery
├── Sub-technique: .002 (link-based; the attachment variant T1566.001 is not used)
├── Platform: Windows
├── Procedure:
│   1. Create malicious URL with payload
│   2. Spoof marketing sender email
│   3. Send to 100 marketing employees
│   4. Track link clicks and execution
└── Evidence: Email logs, URL visit records

Field | Example
Tool Used | Gophish + Custom payload
Procedure | Spearphishing URL in email body
Red Team Outcome | Success - 25 clicks, 5 executed
Blue Team Detection | Alerted on phishing link (Proofpoint)
Detection Status | Detected
Remediation | Updated email filter, user training
Evidence | Screenshots, alert logs, forensics
Test Case → Add Outcome

Red Team Perspective:
  ✓ Success: Attack achieved objective
  ✗ Failure: Attack did not execute
  ⊘ N/A: Not attempted/applicable

Blue Team Perspective:
  ✓ Detected: Security control identified attack
  ✓ Alerted: Alert/notification triggered
  ✓ Blocked: Attack blocked before success
  ✗ Not Detected: Attack completed undetected
  ⊘ Not Applicable: Technique not in scope
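The two-sided outcome model above can be sketched as a pair of enums with a consistency check. Type and function names are illustrative, not VECTR's data model:

```python
from enum import Enum

class RedOutcome(Enum):
    SUCCESS = "Success"          # attack achieved objective
    FAILURE = "Failure"          # attack did not execute
    NOT_APPLICABLE = "N/A"       # not attempted/applicable

class BlueDetection(Enum):
    DETECTED = "Detected"
    ALERTED = "Alerted"
    BLOCKED = "Blocked"
    NOT_DETECTED = "Not Detected"
    NOT_APPLICABLE = "Not Applicable"

def valid_outcome(red: RedOutcome, blue: BlueDetection) -> bool:
    """A failed or skipped attack cannot carry a real detection verdict,
    matching the outcome table above."""
    if red in (RedOutcome.FAILURE, RedOutcome.NOT_APPLICABLE):
        return blue is BlueDetection.NOT_APPLICABLE
    return True
```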

VECTR calculates coverage metrics:

Coverage Calculation:
  - Total Techniques Executed: 45
  - Total Techniques Detected: 38 (84%)
  - Detection Gap: 7 techniques (16%)
  
Trend Analysis:
  - Previous Campaign: 72% detected
  - Current Campaign: 84% detected
  - Improvement: +12 percentage points
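The same arithmetic, as a quick sketch (numbers taken from the example above):

```python
def detection_coverage(executed: int, detected: int) -> float:
    """Percentage of executed techniques that were detected."""
    return 100 * detected / executed

current = detection_coverage(45, 38)   # about 84.4%
previous = 72.0                        # prior campaign, from the example
improvement = round(current) - round(previous)
print(f"Improvement: +{improvement} percentage points")
```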
Campaign → Reports → Detection Coverage

Output Includes:
- Technique-by-technique detection status
- Detected vs. Not Detected breakdown
- Trend graphs (coverage over time)
- Tactics with highest/lowest detection
- Red team success rate by technique
- Blue team detection speed (time-to-detect)
Campaign Dashboard → ATT&CK Heat Map

Color Coding:
  🟢 Green: Technique tested, fully detected (100%)
  🟡 Yellow: Technique tested, partially detected (50-99%)
  🔴 Red: Technique tested, largely undetected (0-49%)
  ⚪ Gray: Technique not tested
Matrix View:
- X-axis: MITRE ATT&CK Techniques
- Y-axis: Detection Status
- Click technique to view all test cases for that technique
- Export heat map as PNG or JSON for presentations
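The legend thresholds above can be captured in one function (a sketch; the function name and color labels are mine, not VECTR's):

```python
def heat_color(tested: bool, detection_rate: float) -> str:
    """Map a technique's detection rate (0-100, the share of its test
    cases detected) to the heat map legend above."""
    if not tested:
        return "gray"
    if detection_rate >= 100:
        return "green"
    if detection_rate >= 50:
        return "yellow"
    return "red"
```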
# ATT&CK Navigator Integration
1. Navigate to Campaign Technique Selection
2. Open MITRE ATT&CK Navigator (embedded or external link)
3. Create technique layer in Navigator
4. Import layer into VECTR campaign
5. VECTR auto-populates campaign techniques
Campaign → Export as Navigator Layer

Output:
- JSON format compatible with ATT&CK Navigator
- Includes detection status and metadata
- Share with stakeholders and executives
- Upload to Navigator for visualization
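A minimal exporter might look like this. The layer keys follow the ATT&CK Navigator layer format, while the status-to-color mapping is an assumption rather than VECTR's exact output:

```python
import json

def to_navigator_layer(name: str, results: dict) -> str:
    """Serialize per-technique detection status as a minimal Navigator
    layer. `results` maps technique IDs to status strings."""
    colors = {"Detected": "#8ec843", "Not Detected": "#ff6666"}  # illustrative
    layer = {
        "name": name,
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": tid,
             "color": colors.get(status, "#ffffff"),
             "comment": status}
            for tid, status in results.items()
        ],
    }
    return json.dumps(layer, indent=2)
```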
Reports → Generate Campaign Report

Report Sections:
1. Executive Summary
   - Campaign overview and objectives
   - High-level metrics (% coverage, trends)
   - Key findings and recommendations

2. Detailed Findings
   - Technique-by-technique analysis
   - Detection gaps with remediation
   - Red team success rates

3. Appendix
   - Full test case listing
   - Evidence and screenshots
   - Timeline of executions
Gap Analysis Report:
- Not Detected Techniques:
  - T1547.001: Registry Run Keys (no EDR detection)
  - T1574.001: DLL Search Order Hijacking (bypasses defenses)
  - T1562.001: Disable or Modify System Firewall (insufficient logging)

- Recommendations:
  - Implement ETW-based detection for T1547.001
  - Deploy DLL hijacking behavioral detection
  - Enable advanced logging for firewall modifications
Metrics → Trend Analysis

Metrics Tracked:
- Detection coverage over time (%) 
- Techniques tested per month
- Average red team success rate
- Detection speed (TTD in hours)
- Top tactics for improvement
- Year-over-year improvement
# Export Options
Reports → Export

Formats:
- PDF: Full formatted report with branding
- CSV: Technique data for spreadsheet analysis
- JSON: Programmatic export for integrations
- PNG: Heat maps for presentations

Customization:
- Logo and branding
- Include/exclude sections
- Redact sensitive data
- Custom date ranges
Settings → Templates → Assessment Templates

Pre-built Templates:
- "Initial Access Focus" (phishing, watering hole, supply chain)
- "Persistence & Privilege Escalation" (scheduled tasks, registry, kernel)
- "Defense Evasion" (UAC bypass, AMSI evasion, LOLBins)
- "Lateral Movement" (pass-the-hash, Kerberos, SMB abuse)
Create from Existing Assessment:
1. Assessment → Save as Template
2. Strip sensitive data (client names, real targets)
3. Generalize procedures for reuse
4. Add tags for searching (phishing, Windows, EDR)
5. Share with team or organization

Use Template:
1. Create Assessment → Select Template
2. Review and customize procedures for target environment
3. Assign to red team
4. Execute and track outcomes
# Export template
Settings → Templates → Export Template
# Generates JSON file with all test cases and configurations

# Import template
Settings → Templates → Import Template
# Select JSON file
# Creates new assessment from template

# Share templates
# Send JSON file via secure channel
# Import in target VECTR instance

Role | Permissions
Admin | Full system access, user management, settings
Red Team Lead | Create/edit assessments, manage red team ops
Red Team | Execute test cases, submit outcomes
Blue Team Lead | Configure detections, analyze coverage gaps
Blue Team | View test cases, record detection outcomes
Analyst | Read-only access, generate reports
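The role table above can be modeled as a simple permission map; the action names here are illustrative, not VECTR's internal permission strings:

```python
# Role-to-permission mapping mirroring the table above (illustrative).
PERMISSIONS = {
    "Admin": {"manage_users", "edit_assessments", "execute_tests",
              "record_detections", "view_reports"},
    "Red Team Lead": {"edit_assessments", "execute_tests", "view_reports"},
    "Red Team": {"execute_tests", "view_reports"},
    "Blue Team Lead": {"configure_detections", "record_detections", "view_reports"},
    "Blue Team": {"record_detections", "view_reports"},
    "Analyst": {"view_reports"},
}

def can(role: str, action: str) -> bool:
    """True if the role is granted the action; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())
```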
Settings → Team Management

User Invite:
- Email: user@organization.com
- Role: Red Team, Blue Team, or Analyst
- Campaign Access: Specific campaigns or all
- Send invitation → User accepts → Account created

Concurrent Assessments:
- Multiple teams work on different assessments
- Real-time synchronization across users
- Comments and notes on test cases
- Activity log tracks all changes
Real-time Collaboration:
- Multiple red teamers execute test cases simultaneously
- Blue team updates detection outcomes in parallel
- Lock test case during active recording to prevent conflicts
- Merge comments and evidence from team members
# Authentication
curl -X POST http://localhost:8080/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin"}'
# Returns: { "token": "eyJhbGciOiJIUzI1NiIsInR5..." }
# Create assessment via API
curl -X POST http://localhost:8080/api/assessments \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Q2 2026 Initial Access Campaign",
    "campaignId": "camp_abc123",
    "startDate": "2026-04-01",
    "endDate": "2026-06-30",
    "mitre_version": "14"
  }'
# Add test case
curl -X POST http://localhost:8080/api/test-cases \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "assessmentId": "assess_xyz789",
    "techniqueId": "T1566.002",
    "name": "Spearphishing Link",
    "procedure": "Send malicious link via email",
    "toolUsed": "Gophish",
    "executionDate": "2026-04-15"
  }'

# Record outcome
curl -X POST http://localhost:8080/api/outcomes \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "testCaseId": "tc_123",
    "redTeamOutcome": "Success",
    "blueTeamDetection": "Detected",
    "notes": "Proofpoint alert triggered"
  }'
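The curl calls above can be wrapped in a small helper that assembles the request pieces. The endpoint path and field names are copied from the examples, not verified against a published VECTR API reference:

```python
import json

API = "http://localhost:8080/api"   # default address from the examples above

def outcome_request(token: str, test_case_id: str,
                    red: str, blue: str, notes: str = ""):
    """Build (url, headers, body) for recording an outcome, mirroring
    the curl example; nothing is sent on the wire here."""
    url = f"{API}/outcomes"
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    body = json.dumps({"testCaseId": test_case_id,
                       "redTeamOutcome": red,
                       "blueTeamDetection": blue,
                       "notes": notes})
    return url, headers, body
```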
# Export all assessments
curl http://localhost:8080/api/assessments \
  -H "Authorization: Bearer YOUR_TOKEN" | jq '.' > assessments.json

# Import test cases from CSV
python3 bulk_import.py \
  --token YOUR_TOKEN \
  --file test_cases.csv \
  --assessment assess_xyz789
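A bulk_import.py helper like the one invoked above would, at minimum, turn CSV rows into test-case payloads before posting them. The column names here are assumptions, not a documented VECTR CSV format:

```python
import csv
import io

def parse_test_cases(csv_text: str, assessment_id: str) -> list:
    """Turn CSV rows into test-case payload dicts for the API sketch
    above. Assumed columns: technique_id, name, procedure."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [{"assessmentId": assessment_id,
             "techniqueId": r["technique_id"],
             "name": r["name"],
             "procedure": r["procedure"]} for r in rows]
```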

Issue | Solution
Port 8080 already in use | Change port in docker-compose.yml, restart containers
Postgres connection error | Check DB credentials in environment, verify postgres container running
ATT&CK data not loading | Run database migration: docker-compose exec app npm run migrate
Slow heat map generation | Increase container memory, reduce technique count temporarily
Login failures | Clear browser cache, reset admin password via postgres CLI
# Access postgres container
docker-compose exec postgres psql -U vectr

# Check assessment count
SELECT COUNT(*) FROM assessments;

# Reset admin password (requires the pgcrypto extension; adjust to the
# application's actual hashing scheme)
UPDATE users SET password=crypt('newpassword', gen_salt('bf')) WHERE username='admin';

# Backup database (-T avoids TTY output corrupting the dump)
docker-compose exec -T postgres pg_dump -U vectr > backup.sql
# Increase container resources
# docker-compose.yml
services:
  app:
    mem_limit: 4g
    memswap_limit: 4g
  postgres:
    mem_limit: 2g

# Restart containers
docker-compose down && docker-compose up -d
  • Define clear objectives before campaign launch (detection gaps, remediation, training)
  • Map to adversary TTPs relevant to your threat landscape
  • Schedule phases strategically (avoid high-ops periods, coordinate with blue team)
  • Set realistic metrics (coverage targets, detection speed goals)
  • Document assumptions about tooling, network conditions, and defenses
  • Preserve evidence (screenshots, logs, artifacts) for audit trail
  • Document procedures precisely so findings are reproducible
  • Use realistic tools that threat actors employ in your vertical
  • Test detection evasion (UAC bypass, AMSI evasion, LOLBins) alongside technique execution
  • Coordinate with blue team to avoid unplanned business impact
  • Record detection method (EDR, IDS, SIEM, manual investigation)
  • Note detection time (immediate vs. delayed detection)
  • Identify false negatives quickly for remediation priority
  • Track false positives from test cases
  • Implement detections incrementally to avoid alert fatigue
  • Executive summaries focus on coverage improvement and business impact
  • Technical details support remediation prioritization
  • Trend analysis demonstrates program maturity and progress
  • Assign ownership for detection gap remediation
  • Schedule follow-up campaigns to verify detection improvements

Tool | Purpose
CALDERA | Automated adversary emulation platform (pairs with VECTR)
Atomic Red Team | Library of small, testable ATT&CK techniques
AttackIQ | Commercial breach-and-attack simulation platform
MITRE ATT&CK Navigator | Visualize and plan ATT&CK-based assessments
PlexTrac | Purple team reporting and engagement tracking
Incident Response Runbooks | Proceduralize detection and response
EDR Platforms | Endpoint Detection and Response (primary detection layer)