Tracecat

Tracecat is an open-source, AI-native Security Orchestration, Automation, and Response (SOAR) platform designed as a modern alternative to proprietary solutions like Tines and Splunk SOAR. It provides a visual workflow builder for automating security operations, case management for incident handling, and AI-powered alert triage to reduce mean time to response (MTTR).

Built with Python and Docker, Tracecat enables security teams to orchestrate complex incident response playbooks, integrate with third-party security tools, and leverage AI models for intelligent alert correlation and enrichment—all within a self-hosted, fully open-source platform.

GitHub: TracecatHQ/tracecat
License: Open Source
Built With: Python, Docker, Web-based UI

Prerequisites:

  • Docker and Docker Compose
  • Python 3.10+
  • PostgreSQL 12+ (or use included container)
  • 4GB+ RAM for comfortable operation
  • Modern web browser
# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: tracecat
      POSTGRES_PASSWORD: secure_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  tracecat-api:
    image: tracecathq/tracecat-api:latest
    environment:
      DATABASE_URL: postgresql://postgres:secure_password@postgres:5432/tracecat
      REDIS_URL: redis://redis:6379
      SECRET_KEY: ${SECRET_KEY}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis
    command: uvicorn main:app --host 0.0.0.0 --port 8000

  tracecat-worker:
    image: tracecathq/tracecat-worker:latest
    environment:
      DATABASE_URL: postgresql://postgres:secure_password@postgres:5432/tracecat
      REDIS_URL: redis://redis:6379
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    depends_on:
      - postgres
      - redis

  tracecat-ui:
    image: tracecathq/tracecat-ui:latest
    environment:
      REACT_APP_API_URL: http://localhost:8000
    ports:
      - "3000:3000"
    depends_on:
      - tracecat-api

volumes:
  postgres_data:
# Pull images
docker pull tracecathq/tracecat-api:latest
docker pull tracecathq/tracecat-ui:latest
docker pull tracecathq/tracecat-worker:latest

# Set environment variables
export SECRET_KEY=$(openssl rand -hex 32)
export OPENAI_API_KEY="sk-..."  # Your OpenAI API key

# Run with docker-compose
docker-compose up -d

# Access UI
# http://localhost:3000
apiVersion: v1
kind: Namespace
metadata:
  name: tracecat

---
apiVersion: helm.cattle.io/v1  # HelmChart CRD from the k3s/Rancher Helm controller
kind: HelmChart
metadata:
  name: tracecat
  namespace: tracecat
spec:
  chart: tracecat
  repo: https://charts.tracecat.dev
  targetNamespace: tracecat
  values:
    image:
      tag: latest
    replicaCount: 2
    postgresql:
      enabled: true
      postgresPassword: secure_password
    redis:
      enabled: true
    ingress:
      enabled: true
      hosts:
        - host: tracecat.example.com
          paths:
            - path: /
              pathType: Prefix
# Clone repository
git clone https://github.com/TracecatHQ/tracecat.git
cd tracecat

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up environment
cp .env.example .env
# Edit .env with your configuration

# Initialize database
python -m alembic upgrade head

# Start development server
python -m uvicorn main:app --reload --port 8000
# Core settings
TRACECAT_ENV=production
# Generate once with `openssl rand -hex 32` and paste the literal value;
# plain .env files do not evaluate $( ) command substitution.
SECRET_KEY=<64-char-hex-value>
DEBUG=false

# Database
DATABASE_URL=postgresql://user:password@localhost:5432/tracecat
REDIS_URL=redis://localhost:6379

# API configuration
API_HOST=0.0.0.0
API_PORT=8000
API_LOG_LEVEL=info

# AI/LLM configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_TEMPERATURE=0.7

# UI configuration
UI_HOST=0.0.0.0
UI_PORT=3000
UI_API_URL=http://localhost:8000

# Security
ALLOW_ORIGINS=["http://localhost:3000"]
CORS_CREDENTIALS=true
SESSION_TIMEOUT=3600

# Integrations
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
# config.yaml
server:
  host: "0.0.0.0"
  port: 8000
  workers: 4
  log_level: "info"

database:
  url: "postgresql://user:password@localhost/tracecat"
  pool_size: 20
  max_overflow: 10

redis:
  url: "redis://localhost:6379"
  db: 0

security:
  secret_key: "your-secret-key"
  access_token_expire: 3600
  algorithm: "HS256"

ai:
  provider: "openai"
  model: "gpt-4"
  temperature: 0.7
  max_tokens: 2000

integrations:
  slack:
    enabled: true
    bot_token: "xoxb-..."
  
  pagerduty:
    enabled: true
    api_key: "..."

The visual workflow builder enables creation of complex incident response automation:

┌─────────────────────────────────────────────────┐
│           Tracecat Workflow Editor              │
├─────────────────────────────────────────────────┤
│                                                 │
│  Trigger ─→ Extract ─→ Enrich ─→ Decide        │
│                                   ↓             │
│                          Send Alert ─→ Incident │
│                                                 │
└─────────────────────────────────────────────────┘
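The linear flow sketched above can be expressed as plain Python functions chained together. Everything here is an illustrative sketch of the pattern, not Tracecat's API: the function names, field names, and the reputation threshold are invented for the example.

```python
# Illustrative sketch of the Trigger -> Extract -> Enrich -> Decide flow.
# Function and field names are invented for the example, not Tracecat APIs.

def extract(alert: dict) -> dict:
    """Pull out the fields later steps need from a raw alert payload."""
    return {
        "severity": alert.get("severity", "low"),
        "source": alert.get("source", "unknown"),
    }

def enrich(event: dict) -> dict:
    """Attach context; a real step would call a threat-intel API here."""
    reputation = 85 if event["source"] == "splunk" else 10
    return {**event, "reputation": reputation}

def decide(event: dict) -> str:
    """Route suspicious events to an incident, the rest to review."""
    return "create_incident" if event["reputation"] > 50 else "queue_for_review"

def run_workflow(alert: dict) -> str:
    return decide(enrich(extract(alert)))
```

Each step receives the previous step's output, mirroring how the editor wires action outputs into the next action's input.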
Trigger      Purpose                Example
Webhook      External event input   Syslog, email alert
Scheduled    Time-based execution   Daily report generation
Manual       User-initiated         On-demand investigation
Stream       Real-time events       Kafka, Pub/Sub
Alert Feed   SIEM/monitoring tool   Splunk, Datadog, Elastic
Action     Purpose                 Example
HTTP       Call REST API           Query external systems
Database   Query/update DB         Store incident data
Slack      Send Slack messages     Notifications, alerts
Email      Send emails             Escalations, reports
Webhook    Call external webhook   Trigger other systems
Script     Execute Python          Custom logic
AI         Call LLM                Triage, summarization
Case       Create/update case      Incident management
{
  "id": "alert-triage",
  "title": "Alert Triage",
  "description": "Automatic alert triage and enrichment",
  "triggers": [
    {
      "type": "webhook",
      "path": "/webhooks/alerts"
    }
  ],
  "actions": [
    {
      "id": "extract-alert",
      "type": "script",
      "code": "alert = input.alert; return { severity: alert.severity, source: alert.source, timestamp: alert.timestamp }"
    },
    {
      "id": "enrich-with-ai",
      "type": "ai",
      "prompt": "Analyze this security alert and determine if it's a true positive or false positive: {{ extract-alert.output }}",
      "model": "gpt-4"
    },
    {
      "id": "create-case",
      "type": "case",
      "action": "create",
      "mapping": {
        "title": "{{ extract-alert.source }} Alert",
        "severity": "{{ extract-alert.severity }}",
        "description": "{{ enrich-with-ai.output }}",
        "status": "open"
      }
    },
    {
      "id": "notify-slack",
      "type": "slack",
      "action": "send_message",
      "channel": "#security-alerts",
      "text": "New {{ extract-alert.severity }} alert from {{ extract-alert.source }}. Case ID: {{ create-case.case_id }}"
    }
  ]
}
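Assuming the local docker-compose deployment above (API on port 8000) and the webhook path declared in this workflow's trigger, an alert could be submitted with nothing more than the standard library. The payload shape mirrors what the `extract-alert` step reads; adjust both to your deployment.

```python
import json
from urllib.request import Request

def build_alert_request(base_url: str, alert: dict) -> Request:
    """Build a POST to the workflow's webhook trigger path."""
    return Request(
        f"{base_url}/webhooks/alerts",
        data=json.dumps({"alert": alert}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_alert_request(
    "http://localhost:8000",
    {"severity": "high", "source": "splunk", "timestamp": "2024-01-15T10:30:00Z"},
)
# from urllib.request import urlopen; urlopen(req)  # against a running instance
```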
workflow:
  id: incident-response
  title: "Automated Incident Response"
  triggers:
    - type: alert_feed
      source: splunk
      query: "alert_type=security"
  
  actions:
    # Step 1: Extract alert details
    - id: parse_alert
      type: script
      input: "{{ trigger.payload }}"
      script: |
        alert = input
        return {
          source_ip: alert.src_ip,
          target_ip: alert.dest_ip,
          event_type: alert.event_type,
          timestamp: alert.timestamp
        }
    
    # Step 2: Enrich with threat intelligence
    - id: threat_intel_lookup
      type: http
      method: POST
      url: "https://api.abuse.ch/query"
      body: "ip={{ parse_alert.source_ip }}"
    
    # Step 3: AI-powered analysis
    - id: ai_analysis
      type: ai
      prompt: |
        Analyze this security event:
        - Source IP: {{ parse_alert.source_ip }}
        - Threat Intel: {{ threat_intel_lookup.reputation }}
        - Event Type: {{ parse_alert.event_type }}
        Determine severity and recommended actions.
    
    # Step 4: Decision logic
    - id: severity_decision
      type: condition
      condition: "{{ ai_analysis.severity }} == 'critical'"
      then_action: escalate
      else_action: queue_for_review
    
    # Step 5: Create incident case
    - id: create_incident
      type: case
      action: create
      mapping:
        title: "{{ ai_analysis.incident_type }}"
        severity: "{{ ai_analysis.severity }}"
        description: "{{ ai_analysis.summary }}"
        tags: ["{{ parse_alert.event_type }}"]
    
    # Step 6: Notify team
    - id: send_notification
      type: slack
      channel: "#incident-response"
      blocks:
        - type: section
          text:
            type: mrkdwn
            text: "*New Incident*\nCase: {{ create_incident.case_id }}\nSeverity: {{ ai_analysis.severity }}"
    
    # Step 7: Escalate if critical
    - id: escalate_critical
      type: http
      method: POST
      url: "https://api.pagerduty.com/incidents"
      condition: "{{ severity_decision }} == 'escalate'"
      body:
        title: "{{ create_incident.title }}"
        urgency: "high"
        service_id: "{{ env.PAGERDUTY_SERVICE_ID }}"
# Via API
curl -X POST http://localhost:8000/api/cases \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Suspicious Login Detected",
    "severity": "high",
    "status": "open",
    "description": "Unusual login from unknown location",
    "tags": ["authentication", "suspicious-activity"],
    "assignee_id": "user123"
  }'

# Via UI: Cases → New Case → Fill form
case:
  id: case_12345
  title: "Suspicious Login Detected"
  description: "Detailed description of the incident"
  severity: high  # critical, high, medium, low
  status: open    # open, investigating, resolved, closed
  assignee_id: user123
  tags:
    - authentication
    - suspicious-activity
  created_at: "2024-01-15T10:30:00Z"
  updated_at: "2024-01-15T11:45:00Z"
  events:
    - timestamp: "2024-01-15T10:31:00Z"
      action: "case_created"
      user: "automation"
    - timestamp: "2024-01-15T10:35:00Z"
      action: "case_assigned"
      user: "automation"
      details: "Assigned to John Doe"
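Before pushing a case through the API, the allowed field values noted in the comments above (severity and status) can be checked client-side. This validator is a sketch against those example values, not an official schema:

```python
# Illustrative client-side validation of the case fields shown above.
# Allowed values mirror the comments in the example, not an official schema.

ALLOWED_SEVERITIES = {"critical", "high", "medium", "low"}
ALLOWED_STATUSES = {"open", "investigating", "resolved", "closed"}

def validate_case(case: dict) -> list[str]:
    """Return a list of problems; an empty list means the case looks well-formed."""
    problems = []
    if not case.get("title"):
        problems.append("title is required")
    if case.get("severity") not in ALLOWED_SEVERITIES:
        problems.append(f"severity must be one of {sorted(ALLOWED_SEVERITIES)}")
    if case.get("status") not in ALLOWED_STATUSES:
        problems.append(f"status must be one of {sorted(ALLOWED_STATUSES)}")
    return problems
```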
# Update case status
curl -X PATCH http://localhost:8000/api/cases/case_12345 \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"status": "investigating"}'

# Add case comment
curl -X POST http://localhost:8000/api/cases/case_12345/comments \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"text": "Found malicious IP in logs"}'

# Assign case
curl -X PATCH http://localhost:8000/api/cases/case_12345 \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"assignee_id": "user456"}'
actions:
  - id: send_slack_alert
    type: slack
    config:
      bot_token: "{{ env.SLACK_BOT_TOKEN }}"
    action: send_message
    channel: "#security-alerts"
    text: "Security Alert: {{ alert.type }}"
    blocks:
      - type: section
        text:
          type: mrkdwn
          text: "*{{ alert.title }}*\nSeverity: {{ alert.severity }}"
      - type: actions
        elements:
          - type: button
            text: "View Case"
            action_id: "view_case"
            value: "{{ case.id }}"
          - type: button
            text: "Acknowledge"
            action_id: "acknowledge"
            value: "{{ alert.id }}"
actions:
  - id: create_pagerduty_incident
    type: http
    method: POST
    url: "https://api.pagerduty.com/incidents"
    headers:
      Authorization: "Token token={{ env.PAGERDUTY_TOKEN }}"
      Content-Type: "application/json"
    body:
      title: "{{ incident.title }}"
      urgency: "{{ incident.severity }}"
      service_id: "{{ env.PAGERDUTY_SERVICE_ID }}"
      body:
        type: incident_body
        details: "{{ incident.description }}"
triggers:
  - id: splunk_alerts
    type: alert_feed
    source: splunk
    config:
      url: "{{ env.SPLUNK_URL }}"
      username: "{{ env.SPLUNK_USER }}"
      password: "{{ env.SPLUNK_PASSWORD }}"
      search: "alert_name=security_* earliest=-1h"
actions:
  - id: query_threat_intelligence
    type: http
    method: GET
    url: "https://api.abuseipdb.com/api/v2/check"
    headers:
      Key: "{{ env.ABUSEIPDB_API_KEY }}"
    params:
      ipAddress: "{{ extracted_ip }}"
      maxAgeInDays: 90
actions:
  - id: ai_triage
    type: ai
    model: gpt-4
    prompt: |
      You are a security analyst. Analyze this alert and determine:
      1. Is this a true positive or false positive?
      2. What is the severity (critical/high/medium/low)?
      3. What is the recommended action?
      
      Alert details:
      - Source IP: {{ alert.source_ip }}
      - Event: {{ alert.event_type }}
      - Timestamp: {{ alert.timestamp }}
      - Context: {{ alert.context }}
      
      Respond in JSON format with fields: is_true_positive, severity, recommended_action
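The prompt asks the model to respond in JSON, but LLM output is not guaranteed to be clean JSON. A defensive parse along these lines (an illustrative helper, not part of Tracecat) keeps the workflow from breaking on extra prose around the object:

```python
import json

def parse_triage(raw: str) -> dict:
    """Extract the triage JSON from an LLM reply, tolerating surrounding prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    data = json.loads(raw[start : end + 1])
    # These field names come from the prompt above.
    missing = {"is_true_positive", "severity", "recommended_action"} - data.keys()
    if missing:
        raise ValueError(f"triage response missing fields: {sorted(missing)}")
    return data
```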
actions:
  - id: summarize_incident
    type: ai
    model: gpt-4
    prompt: |
      Summarize this security incident for executive briefing:
      
      Case ID: {{ case.id }}
      Title: {{ case.title }}
      Events: {{ case.events | join }}
      
      Provide a 2-3 sentence professional summary.
# Check API health
curl http://localhost:8000/health

# Response
{
  "status": "healthy",
  "database": "connected",
  "redis": "connected",
  "timestamp": "2024-01-15T10:30:00Z"
}
# Prometheus metrics endpoint
curl http://localhost:8000/metrics

# Includes:
# tracecat_workflow_executions_total
# tracecat_workflow_execution_duration_seconds
# tracecat_cases_open
# tracecat_ai_requests_total
logging:
  level: INFO
  format: json
  outputs:
    - stdout
    - file: /var/log/tracecat.log
  
  # Log filtering
  filters:
    - module: "workflow"
      level: DEBUG
    - module: "integrations"
      level: WARNING
-- Create indexes for common queries
CREATE INDEX idx_cases_status ON cases(status);
CREATE INDEX idx_cases_severity ON cases(severity);
CREATE INDEX idx_cases_created_at ON cases(created_at);
CREATE INDEX idx_workflow_executions_workflow_id ON workflow_executions(workflow_id);
CREATE INDEX idx_workflow_executions_status ON workflow_executions(status);
cache:
  backend: redis
  ttl: 3600
  
  # Cache strategies
  strategies:
    - key: "workflow:*"
      ttl: 1800
    - key: "case:*"
      ttl: 3600
    - key: "alert:*"
      ttl: 300
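One way to read that strategy table: match the cache key against each pattern in order and fall back to the global TTL. A sketch of that lookup, with the patterns and TTLs copied from the config above (the real cache layer is internal to Tracecat):

```python
from fnmatch import fnmatch

# Patterns and TTLs copied from the cache config above; lookup logic is
# an illustrative sketch, not Tracecat's implementation.
STRATEGIES = [("workflow:*", 1800), ("case:*", 3600), ("alert:*", 300)]
DEFAULT_TTL = 3600

def ttl_for(key: str) -> int:
    """Return the TTL for the first matching pattern, else the default."""
    for pattern, ttl in STRATEGIES:
        if fnmatch(key, pattern):
            return ttl
    return DEFAULT_TTL
```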
workers:
  api:
    replicas: 2
    workers_per_replica: 4
  
  background:
    replicas: 3
    concurrency: 10
    timeout: 300
  
  ai:
    replicas: 2
    concurrency: 5
    timeout: 600
  1. Keep workflows modular - Reuse actions across workflows
  2. Add error handling - Use try-catch and fallback actions
  3. Test thoroughly - Use dry-run mode before deployment
  4. Monitor performance - Track execution times and failures
  5. Document workflows - Add descriptions and comments
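Point 2 in practice: wrap a fragile action in retries with a fallback, so a flaky integration degrades gracefully instead of failing the whole workflow. A minimal sketch with invented names, not a Tracecat API:

```python
import time

def run_with_fallback(action, fallback, retries: int = 3, delay: float = 0.0):
    """Try `action` up to `retries` times; if all attempts fail, run `fallback`."""
    last_error = None
    for _ in range(retries):
        try:
            return action()
        except Exception as exc:  # in real use, catch specific exceptions
            last_error = exc
            time.sleep(delay)
    return fallback(last_error)
```

For example, the `action` could post to Slack and the `fallback` could queue the notification for later delivery.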
# Security best practices

# 1. Least privilege for integrations
slack_token: "{{ env.SLACK_BOT_TOKEN }}"  # Limited-scope bot token

# 2. Audit logging
audit:
  enabled: true
  log_all_case_changes: true
  log_workflow_executions: true

# 3. Encryption
database:
  ssl_mode: require

# 4. Access control
rbac:
  enabled: true
  default_role: analyst
# Check workflow execution logs
curl http://localhost:8000/api/workflows/workflow_id/executions \
  -H "Authorization: Bearer $TOKEN"

# Debug specific execution
curl http://localhost:8000/api/executions/exec_id \
  -H "Authorization: Bearer $TOKEN"
# Test Slack integration
curl -X POST http://localhost:8000/api/integrations/slack/test \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"channel": "#test"}'

# Test HTTP integration
curl -X POST http://localhost:8000/api/integrations/http/test \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"url": "https://api.example.com/test"}'
# Check database connection pool
curl http://localhost:8000/api/diagnostics/database

# Monitor worker queue
curl http://localhost:8000/api/diagnostics/queue

# Check AI token usage
curl http://localhost:8000/api/diagnostics/ai-usage
# Use environment variables for secrets
export SECRET_KEY=$(openssl rand -hex 32)
export OPENAI_API_KEY="sk-..."
export DATABASE_URL="postgresql://user:pass@db.example.com/tracecat"

# Deploy with docker-compose
docker-compose -f docker-compose.prod.yml up -d

# Run migrations
docker-compose exec tracecat-api alembic upgrade head
# docker-compose.prod.yml
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secure_pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis-primary:
    image: redis:7
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

  tracecat-api:
    image: tracecathq/tracecat-api:latest
    deploy:
      replicas: 3
    environment:
      DATABASE_URL: postgresql://postgres:secure_pass@postgres/tracecat
      REDIS_URL: redis://redis-primary:6379

  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Alternatives and related tools:

  • Tines (commercial)
  • Splunk SOAR (commercial)
  • Demisto/Cortex XSOAR (commercial)
  • n8n (general automation)
  • Zapier (cloud automation)