Tracecat
Overview
Tracecat is an open-source, AI-native Security Orchestration, Automation, and Response (SOAR) platform designed as a modern alternative to proprietary solutions like Tines and Splunk SOAR. It provides a visual workflow builder for automating security operations, case management for incident handling, and AI-powered alert triage to reduce mean time to respond (MTTR).
Built with Python and Docker, Tracecat enables security teams to orchestrate complex incident response playbooks, integrate with third-party security tools, and leverage AI models for intelligent alert correlation and enrichment—all within a self-hosted, fully open-source platform.
GitHub: TracecatHQ/tracecat
License: Open Source
Built With: Python, Docker, Web-based UI
Installation
Prerequisites
- Docker and Docker Compose
- Python 3.10+
- PostgreSQL 12+ (or use included container)
- 4GB+ RAM for comfortable operation
- Modern web browser
Quick Start with Docker Compose
# docker-compose.yml
version: '3.8'
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: tracecat
      POSTGRES_PASSWORD: secure_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  tracecat-api:
    image: tracecathq/tracecat-api:latest
    environment:
      DATABASE_URL: postgresql://postgres:secure_password@postgres:5432/tracecat
      REDIS_URL: redis://redis:6379
      SECRET_KEY: ${SECRET_KEY}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis
    command: uvicorn main:app --host 0.0.0.0 --port 8000
  tracecat-worker:
    image: tracecathq/tracecat-worker:latest
    environment:
      DATABASE_URL: postgresql://postgres:secure_password@postgres:5432/tracecat
      REDIS_URL: redis://redis:6379
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    depends_on:
      - postgres
      - redis
  tracecat-ui:
    image: tracecathq/tracecat-ui:latest
    environment:
      REACT_APP_API_URL: http://localhost:8000
    ports:
      - "3000:3000"
    depends_on:
      - tracecat-api
volumes:
  postgres_data:
Docker Installation
# Pull images
docker pull tracecathq/tracecat-api:latest
docker pull tracecathq/tracecat-ui:latest
docker pull tracecathq/tracecat-worker:latest
# Set environment variables
export SECRET_KEY=$(openssl rand -hex 32)
export OPENAI_API_KEY="sk-..." # Your OpenAI API key
# Run with docker-compose
docker-compose up -d
# Access UI
# http://localhost:3000
Kubernetes Deployment
apiVersion: v1
kind: Namespace
metadata:
  name: tracecat
---
# HelmChart CRD as shipped with k3s/RKE2 (helm.cattle.io); on other clusters,
# run `helm install` with the same values instead
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: tracecat
  namespace: tracecat
spec:
  chart: tracecat
  repo: https://charts.tracecat.dev
  targetNamespace: tracecat
  valuesContent: |-
    image:
      tag: latest
    replicaCount: 2
    postgresql:
      enabled: true
      postgresPassword: secure_password
    redis:
      enabled: true
    ingress:
      enabled: true
      hosts:
        - host: tracecat.example.com
          paths:
            - path: /
              pathType: Prefix
Installation from Source
# Clone repository
git clone https://github.com/TracecatHQ/tracecat.git
cd tracecat
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Set up environment
cp .env.example .env
# Edit .env with your configuration
# Initialize database
python -m alembic upgrade head
# Start development server
python -m uvicorn main:app --reload --port 8000
Configuration
Environment Variables
# Core settings
TRACECAT_ENV=production
SECRET_KEY=$(openssl rand -hex 32)
DEBUG=false
# Database
DATABASE_URL=postgresql://user:password@localhost:5432/tracecat
REDIS_URL=redis://localhost:6379
# API configuration
API_HOST=0.0.0.0
API_PORT=8000
API_LOG_LEVEL=info
# AI/LLM configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_TEMPERATURE=0.7
# UI configuration
UI_HOST=0.0.0.0
UI_PORT=3000
UI_API_URL=http://localhost:8000
# Security
ALLOW_ORIGINS=["http://localhost:3000"]
CORS_CREDENTIALS=true
SESSION_TIMEOUT=3600
# Integrations
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
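A deployment script can fail fast when any of these settings are missing rather than erroring deep inside a workflow run. A minimal sketch (the variable list and function are illustrative, not Tracecat's actual startup check):

```python
import os

# Illustrative set of settings the stack cannot run without;
# extend with the integrations your deployment actually uses.
REQUIRED_VARS = ["SECRET_KEY", "DATABASE_URL", "REDIS_URL"]

def missing_settings(env: dict) -> list[str]:
    """Return the names of required settings that are absent or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: missing_settings(dict(os.environ)) -> [] when fully configured
```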
Configuration File
# config.yaml
server:
  host: "0.0.0.0"
  port: 8000
  workers: 4
  log_level: "info"
database:
  url: "postgresql://user:password@localhost/tracecat"
  pool_size: 20
  max_overflow: 10
redis:
  url: "redis://localhost:6379"
  db: 0
security:
  secret_key: "your-secret-key"
  access_token_expire: 3600
  algorithm: "HS256"
ai:
  provider: "openai"
  model: "gpt-4"
  temperature: 0.7
  max_tokens: 2000
integrations:
  slack:
    enabled: true
    bot_token: "xoxb-..."
  pagerduty:
    enabled: true
    api_key: "..."
Core Features
Workflow Builder
The visual workflow builder enables creation of complex incident response automation:
┌─────────────────────────────────────────────────┐
│ Tracecat Workflow Editor │
├─────────────────────────────────────────────────┤
│ │
│ Trigger ─→ Extract ─→ Enrich ─→ Decide ─→ │
│ ↓ │
│ Send Alert ─→ Incident │
│ │
└─────────────────────────────────────────────────┘
Trigger Types
| Trigger | Purpose | Example |
|---|---|---|
| Webhook | External event input | Syslog, email alert |
| Scheduled | Time-based execution | Daily report generation |
| Manual | User-initiated | On-demand investigation |
| Stream | Real-time events | Kafka, Pub/Sub |
| Alert Feed | Ingest alerts from SIEM/monitoring tools | Splunk, Datadog, Elastic |
Action Types
| Action | Purpose | Example |
|---|---|---|
| HTTP | Call REST API | Query external systems |
| Database | Query/update DB | Store incident data |
| Slack | Send Slack messages | Notifications, alerts |
| Email | Send emails | Escalations, reports |
| Webhook | Call external webhook | Trigger other systems |
| Script | Execute Python | Custom logic |
| AI | Call LLM | Triage, summarization |
| Case | Create/update case | Incident management |
Workflow Examples
Simple Alert Triage Workflow
{
  "id": "alert-triage",
  "title": "Alert Triage",
  "description": "Automatic alert triage and enrichment",
  "triggers": [
    {
      "type": "webhook",
      "path": "/webhooks/alerts"
    }
  ],
  "actions": [
    {
      "id": "extract-alert",
      "type": "script",
      "code": "alert = input['alert']\nreturn {'severity': alert['severity'], 'source': alert['source'], 'timestamp': alert['timestamp']}"
    },
    {
      "id": "enrich-with-ai",
      "type": "ai",
      "prompt": "Analyze this security alert and determine if it's a true positive or false positive: {{ extract-alert.output }}",
      "model": "gpt-4"
    },
    {
      "id": "create-case",
      "type": "case",
      "action": "create",
      "mapping": {
        "title": "{{ extract-alert.source }} Alert",
        "severity": "{{ extract-alert.severity }}",
        "description": "{{ enrich-with-ai.output }}",
        "status": "open"
      }
    },
    {
      "id": "notify-slack",
      "type": "slack",
      "action": "send_message",
      "channel": "#security-alerts",
      "text": "New {{ extract-alert.severity }} alert from {{ extract-alert.source }}. Case ID: {{ create-case.case_id }}"
    }
  ]
}
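Outside the workflow engine, the `extract-alert` step is plain field projection. A Python sketch of the same logic (independent of Tracecat's script runtime, which supplies `input` itself):

```python
def extract_alert(payload: dict) -> dict:
    """Mirror of the extract-alert step: keep only the fields
    that downstream actions template against."""
    alert = payload["alert"]
    return {
        "severity": alert["severity"],
        "source": alert["source"],
        "timestamp": alert["timestamp"],
    }
```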
Complex Incident Response Workflow
workflow:
  id: incident-response
  title: "Automated Incident Response"
  triggers:
    - type: alert_feed
      source: splunk
      query: "alert_type=security"
  actions:
    # Step 1: Extract alert details
    - id: parse_alert
      type: script
      input: "{{ trigger.payload }}"
      script: |
        alert = input
        return {
            'source_ip': alert['src_ip'],
            'target_ip': alert['dest_ip'],
            'event_type': alert['event_type'],
            'timestamp': alert['timestamp'],
        }
    # Step 2: Enrich with threat intelligence
    - id: threat_intel_lookup
      type: http
      method: POST
      url: "https://api.abuse.ch/query"
      body: "ip={{ parse_alert.source_ip }}"
    # Step 3: AI-powered analysis
    - id: ai_analysis
      type: ai
      prompt: |
        Analyze this security event:
        - Source IP: {{ parse_alert.source_ip }}
        - Threat Intel: {{ threat_intel_lookup.reputation }}
        - Event Type: {{ parse_alert.event_type }}
        Determine severity and recommended actions.
    # Step 4: Decision logic
    - id: severity_decision
      type: condition
      condition: "{{ ai_analysis.severity }} == 'critical'"
      then_action: escalate
      else_action: queue_for_review
    # Step 5: Create incident case
    - id: create_incident
      type: case
      action: create
      mapping:
        title: "{{ ai_analysis.incident_type }}"
        severity: "{{ ai_analysis.severity }}"
        description: "{{ ai_analysis.summary }}"
        tags: ["{{ parse_alert.event_type }}"]
    # Step 6: Notify team
    - id: send_notification
      type: slack
      channel: "#incident-response"
      blocks:
        - type: section
          text:
            type: mrkdwn
            text: "*New Incident*\nCase: {{ create_incident.case_id }}\nSeverity: {{ ai_analysis.severity }}"
    # Step 7: Escalate if critical
    - id: escalate_critical
      type: http
      method: POST
      url: "https://api.pagerduty.com/incidents"
      condition: "{{ severity_decision }} == 'escalate'"
      body:
        title: "{{ create_incident.title }}"
        urgency: "high"
        service_id: "{{ env.PAGERDUTY_SERVICE_ID }}"
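The branch in step 4 reduces to a one-line routing function: critical alerts page on-call, everything else waits for analyst review. A sketch (the function name and labels are ours, not Tracecat's API):

```python
def route_alert(severity: str) -> str:
    """Illustrative version of the severity_decision step:
    only 'critical' escalates immediately."""
    return "escalate" if severity == "critical" else "queue_for_review"
```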
Case Management
Creating Cases
# Via API
curl -X POST http://localhost:8000/api/cases \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "Suspicious Login Detected",
"severity": "high",
"status": "open",
"description": "Unusual login from unknown location",
"tags": ["authentication", "suspicious-activity"],
"assignee_id": "user123"
}'
# Via UI: Cases → New Case → Fill form
Case Fields
case:
  id: case_12345
  title: "Suspicious Login Detected"
  description: "Detailed description of the incident"
  severity: high   # critical, high, medium, low
  status: open     # open, investigating, resolved, closed
  assignee_id: user123
  tags:
    - authentication
    - suspicious-activity
  created_at: "2024-01-15T10:30:00Z"
  updated_at: "2024-01-15T11:45:00Z"
  events:
    - timestamp: "2024-01-15T10:31:00Z"
      action: "case_created"
      user: "automation"
    - timestamp: "2024-01-15T10:35:00Z"
      action: "case_assigned"
      user: "automation"
      details: "Assigned to John Doe"
Case Workflows
# Update case status
curl -X PATCH http://localhost:8000/api/cases/case_12345 \
-H "Authorization: Bearer $TOKEN" \
-d '{"status": "investigating"}'
# Add case comment
curl -X POST http://localhost:8000/api/cases/case_12345/comments \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"text": "Found malicious IP in logs"}'
# Assign case
curl -X PATCH http://localhost:8000/api/cases/case_12345 \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"assignee_id": "user456"}'
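If you script these transitions, it helps to reject illegal ones before calling the API. A sketch with a hypothetical lifecycle map (Tracecat itself may permit other transitions; adjust the map to your process):

```python
# Hypothetical case lifecycle, not Tracecat's enforced state machine.
ALLOWED_TRANSITIONS = {
    "open": {"investigating", "closed"},
    "investigating": {"resolved", "open"},
    "resolved": {"closed", "investigating"},
    "closed": set(),  # closed cases are terminal in this sketch
}

def can_transition(current: str, new: str) -> bool:
    """True if moving a case from `current` to `new` is allowed."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```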
Integration Examples
Slack Integration
actions:
  - id: send_slack_alert
    type: slack
    config:
      bot_token: "{{ env.SLACK_BOT_TOKEN }}"
    action: send_message
    channel: "#security-alerts"
    text: "Security Alert: {{ alert.type }}"
    blocks:
      - type: section
        text:
          type: mrkdwn
          text: "*{{ alert.title }}*\nSeverity: {{ alert.severity }}"
      - type: actions
        elements:
          - type: button
            text: "View Case"
            action_id: "view_case"
            value: "{{ case.id }}"
          - type: button
            text: "Acknowledge"
            action_id: "acknowledge"
            value: "{{ alert.id }}"
PagerDuty Integration
actions:
  - id: create_pagerduty_incident
    type: http
    method: POST
    url: "https://api.pagerduty.com/incidents"
    headers:
      Authorization: "Token token={{ env.PAGERDUTY_TOKEN }}"
      Content-Type: "application/json"
    body:
      title: "{{ incident.title }}"
      urgency: "{{ incident.severity }}"
      service_id: "{{ env.PAGERDUTY_SERVICE_ID }}"
      body:
        type: incident_body
        details: "{{ incident.description }}"
Splunk Integration
triggers:
  - id: splunk_alerts
    type: alert_feed
    source: splunk
    config:
      url: "{{ env.SPLUNK_URL }}"
      username: "{{ env.SPLUNK_USER }}"
      password: "{{ env.SPLUNK_PASSWORD }}"
      search: "alert_name=security_* earliest=-1h"
HTTP Integration
actions:
  - id: query_threat_intelligence
    type: http
    method: GET
    url: "https://api.abuseipdb.com/api/v2/check"
    headers:
      Key: "{{ env.ABUSEIPDB_API_KEY }}"
    params:
      ipAddress: "{{ extracted_ip }}"
      maxAgeInDays: 90
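Once the response comes back, the abuse score has to be turned into a verdict. AbuseIPDB's check endpoint reports `data.abuseConfidenceScore` (0-100); the thresholds below are illustrative, not an official recommendation:

```python
def classify_reputation(response: dict, block_threshold: int = 75) -> str:
    """Map an AbuseIPDB-style check response to a coarse verdict.
    Thresholds are example values; tune to your false-positive tolerance."""
    score = response.get("data", {}).get("abuseConfidenceScore", 0)
    if score >= block_threshold:
        return "malicious"
    if score >= 25:
        return "suspicious"
    return "clean"
```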
AI-Powered Features
Alert Triage
actions:
  - id: ai_triage
    type: ai
    model: gpt-4
    prompt: |
      You are a security analyst. Analyze this alert and determine:
      1. Is this a true positive or false positive?
      2. What is the severity (critical/high/medium/low)?
      3. What is the recommended action?
      Alert details:
      - Source IP: {{ alert.source_ip }}
      - Event: {{ alert.event_type }}
      - Timestamp: {{ alert.timestamp }}
      - Context: {{ alert.context }}
      Respond in JSON format with fields: is_true_positive, severity, recommended_action
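Because model output is not guaranteed to be valid JSON, the consuming step should parse defensively. A sketch (the fallback policy is an assumption, not built-in Tracecat behavior):

```python
import json

# Conservative default: treat unparseable output as worth a human look.
FALLBACK = {"is_true_positive": True, "severity": "medium",
            "recommended_action": "manual_review"}

def parse_triage(raw: str) -> dict:
    """Parse the model's JSON reply; anything unparseable or with an
    unexpected severity falls back to manual review."""
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        return dict(FALLBACK)
    if not isinstance(result, dict) or \
            result.get("severity") not in {"critical", "high", "medium", "low"}:
        return dict(FALLBACK)
    return result
```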
Incident Summarization
actions:
  - id: summarize_incident
    type: ai
    model: gpt-4
    prompt: |
      Summarize this security incident for executive briefing:
      Case ID: {{ case.id }}
      Title: {{ case.title }}
      Events: {{ case.events | join }}
      Provide a 2-3 sentence professional summary.
Monitoring & Observability
Health Check
# Check API health
curl http://localhost:8000/health
# Response
{
  "status": "healthy",
  "database": "connected",
  "redis": "connected",
  "timestamp": "2024-01-15T10:30:00Z"
}
Metrics
# Prometheus metrics endpoint
curl http://localhost:8000/metrics
# Includes:
# tracecat_workflow_executions_total
# tracecat_workflow_execution_duration_seconds
# tracecat_cases_open
# tracecat_ai_requests_total
Logging
logging:
  level: INFO
  format: json
  outputs:
    - stdout
    - file: /var/log/tracecat.log
  # Log filtering
  filters:
    - module: "workflow"
      level: DEBUG
    - module: "integrations"
      level: WARNING
Performance Tuning
Database Optimization
-- Create indexes for common queries
CREATE INDEX idx_cases_status ON cases(status);
CREATE INDEX idx_cases_severity ON cases(severity);
CREATE INDEX idx_cases_created_at ON cases(created_at);
CREATE INDEX idx_workflow_executions_workflow_id ON workflow_executions(workflow_id);
CREATE INDEX idx_workflow_executions_status ON workflow_executions(status);
Redis Caching
cache:
  backend: redis
  ttl: 3600
  # Cache strategies
  strategies:
    - key: "workflow:*"
      ttl: 1800
    - key: "case:*"
      ttl: 3600
    - key: "alert:*"
      ttl: 300
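Resolving which TTL applies to a given key is a first-match lookup over the glob patterns. A sketch mirroring the strategies above (first-match ordering and the global default are assumptions about how such a config is interpreted):

```python
from fnmatch import fnmatch

# Mirrors the strategies block above; first matching pattern wins.
STRATEGIES = [("workflow:*", 1800), ("case:*", 3600), ("alert:*", 300)]
DEFAULT_TTL = 3600  # global `ttl` for keys no strategy matches

def ttl_for(key: str) -> int:
    """Return the cache TTL (seconds) for a given cache key."""
    for pattern, ttl in STRATEGIES:
        if fnmatch(key, pattern):
            return ttl
    return DEFAULT_TTL
```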
Worker Scaling
workers:
  api:
    replicas: 2
    workers_per_replica: 4
  background:
    replicas: 3
    concurrency: 10
    timeout: 300
  ai:
    replicas: 2
    concurrency: 5
    timeout: 600
Best Practices
Workflow Design
- Keep workflows modular: reuse actions across workflows
- Add error handling: use try-catch and fallback actions
- Test thoroughly: use dry-run mode before deployment
- Monitor performance: track execution times and failures
- Document workflows: add descriptions and comments
Security Operations
# Security best practices
# 1. Least privilege for integrations
slack_token: "{{ env.SLACK_BOT_TOKEN }}"  # Limited-scope bot token
# 2. Audit logging
audit:
  enabled: true
  log_all_case_changes: true
  log_workflow_executions: true
# 3. Encryption
database:
  ssl_mode: require
# 4. Access control
rbac:
  enabled: true
  default_role: analyst
Troubleshooting
Workflow Execution Issues
# Check workflow execution logs
curl http://localhost:8000/api/workflows/workflow_id/executions \
-H "Authorization: Bearer $TOKEN"
# Debug specific execution
curl http://localhost:8000/api/executions/exec_id \
-H "Authorization: Bearer $TOKEN"
Integration Connection Issues
# Test Slack integration
curl -X POST http://localhost:8000/api/integrations/slack/test \
-H "Authorization: Bearer $TOKEN" \
-d '{"channel": "#test"}'
# Test HTTP integration
curl -X POST http://localhost:8000/api/integrations/http/test \
-H "Authorization: Bearer $TOKEN" \
-d '{"url": "https://api.example.com/test"}'
Performance Issues
# Check database connection pool
curl http://localhost:8000/api/diagnostics/database
# Monitor worker queue
curl http://localhost:8000/api/diagnostics/queue
# Check AI token usage
curl http://localhost:8000/api/diagnostics/ai-usage
Deployment
Production Deployment
# Use environment variables for secrets
export SECRET_KEY=$(openssl rand -hex 32)
export OPENAI_API_KEY="sk-..."
export DATABASE_URL="postgresql://user:pass@db.example.com/tracecat"
# Deploy with docker-compose
docker-compose -f docker-compose.prod.yml up -d
# Run migrations
docker-compose exec tracecat-api alembic upgrade head
High Availability Setup
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secure_pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis-primary:
    image: redis:7
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
  tracecat-api:
    image: tracecathq/tracecat-api:latest
    deploy:
      replicas: 3
    environment:
      DATABASE_URL: postgresql://postgres:secure_pass@postgres/tracecat
      REDIS_URL: redis://redis-primary:6379
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  postgres_data:
  redis_data:
Resources
- GitHub Repository: https://github.com/TracecatHQ/tracecat
- Documentation: https://docs.tracecat.dev
- Community: https://discord.gg/tracecat
- Issue Tracker: https://github.com/TracecatHQ/tracecat/issues
Related Tools
- Tines (commercial)
- Splunk SOAR (commercial)
- Demisto/Cortex XSOAR (commercial)
- n8n (general automation)
- Zapier (cloud automation)