
ShipSec Studio

ShipSec Studio is an open-source no-code security workflow automation platform that enables security teams to orchestrate reconnaissance, scanning, and security operations with a visual pipeline builder backed by Temporal.io. Build complex security workflows without writing code and deploy them in your own infrastructure.

ShipSec Studio uses Docker for deployment. The quickest way to get started is the one-line installer.

Prerequisites:

  • Docker (v20.10+) and Docker Compose (v2.0+)
  • 4GB RAM minimum, 8GB recommended
  • Port access: 8000 (web UI), 7233 (Temporal server), 8233 (Temporal web UI)
  • Linux/macOS/Windows with WSL2
Run the installer:

curl https://install.shipsec.ai | bash

This downloads and runs the installer, which handles all initial configuration and starts the services.

To install manually instead, create a docker-compose.yml file:

version: '3.8'

services:
  temporal:
    image: temporalio/auto-setup:latest
    ports:
      - "7233:7233"
      - "8233:8233"
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal
      - POSTGRES_SEEDS=postgres
    depends_on:
      - postgres
    healthcheck:
      test: ["CMD", "tctl", "workflow", "list"]
      interval: 10s
      timeout: 5s
      retries: 5

  postgres:
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=temporal
      - POSTGRES_PASSWORD=temporal
      - POSTGRES_DB=temporal
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  management-plane:
    image: shipsecai/studio-management:latest
    ports:
      - "8000:3000"
    environment:
      - NODE_ENV=production
      - TEMPORAL_ADDRESS=temporal:7233
      - DATABASE_URL=postgresql://temporal:temporal@postgres:5432/temporal
      - JWT_SECRET=your-secret-key-here
    depends_on:
      temporal:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 5s
      retries: 3

  worker:
    image: shipsecai/studio-worker:latest
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - WORKER_POOL_SIZE=5
      - LOG_LEVEL=info
    depends_on:
      - temporal
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '1'
          memory: 2G

volumes:
  postgres-data:

Start the services:

docker-compose up -d

Access the web UI at http://localhost:8000

Once services are running, open your browser and navigate to http://localhost:8000. You’ll see the dashboard with your workflows, execution history, and status overview.

Default login uses the initial admin credentials (change these on first login for security).

To create your first workflow:

  1. Click New Workflow on the dashboard
  2. Name it (e.g., “Domain Reconnaissance”)
  3. Select Visual Builder to enter the drag-and-drop editor
  4. Add nodes by dragging tools from the left panel:
    • Subfinder for subdomain enumeration
    • DNSX for DNS resolution
    • HTTPx for web probing
  5. Connect nodes by clicking output ports and dragging to input ports
  6. Configure each node with:
    • Input sources (previous node output, manual values, secrets)
    • Parameters (threads, filters, output format)
    • Error handling (retry, skip, fail)
  7. Click Compile to generate the DSL
  8. Click Save & Deploy to make it available
  9. Run the workflow manually or set a schedule

ShipSec Studio uses a three-plane distributed architecture:

Management Plane (NestJS)

  • REST API and web UI
  • Workflow definition storage
  • User authentication and authorization
  • Scheduling and trigger management
  • Runs on port 3000 (mapped to 8000 externally)

Orchestration Plane (Temporal.io)

  • Workflow state management
  • Task routing and retry logic
  • Execution history and durability
  • Handles millions of concurrent workflows
  • Server on port 7233, UI on port 8233

Worker Plane (Stateless Containers)

  • Ephemeral task execution
  • Tool integration plugins
  • Isolated security context per task
  • Scales horizontally with load
  • Pulls tasks from Temporal queue

All three planes communicate via gRPC and HTTP APIs. Workers are stateless and disposable—failed workers don’t impact overall system state because Temporal retains the workflow history.

The visual editor is the primary way to design workflows without code.

Start with a blank canvas and add nodes from the sidebar:

  • Input nodes — manual triggers, webhook triggers, scheduled inputs
  • Tool nodes — Subfinder, Naabu, custom integrations
  • Decision nodes — conditional branches, filters
  • Output nodes — store results, send alerts, trigger downstream workflows

Drag from the output port of one node to the input port of another. The system validates:

  • Data type compatibility (e.g., domain list → subdomain tool)
  • Required vs. optional inputs
  • Circular dependencies (prevented)
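The circular-dependency check can be pictured as a depth-first walk over each step's dependency edges. A minimal Python sketch of the idea (illustrative only, not ShipSec's actual validator):

```python
def find_cycle(steps):
    """Return True if the step dependency graph contains a cycle.

    steps: dict mapping step id -> list of step ids it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / finished
    color = {s: WHITE for s in steps}

    def visit(s):
        color[s] = GRAY
        for dep in steps.get(s, []):
            if color.get(dep) == GRAY:      # back edge: we looped around
                return True
            if color.get(dep) == WHITE and visit(dep):
                return True
        color[s] = BLACK
        return False

    return any(visit(s) for s in steps if color[s] == WHITE)

# A valid pipeline: subfinder -> dnsx -> httpx
print(find_cycle({"subfinder": [], "dnsx": ["subfinder"], "httpx": ["dnsx"]}))  # False
# An invalid one: a depends on b, b depends on a
print(find_cycle({"a": ["b"], "b": ["a"]}))  # True
```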

Each connection shows the data flowing through it. Hover to see the schema.

After designing visually, click Compile to generate the underlying Workflow Definition Language (a YAML-based DSL). This DSL is version-controlled and can be imported back into the visual builder or stored in Git.

# Generated DSL example
workflow:
  name: domain-recon-workflow
  version: 1
  steps:
    - id: subfinder
      tool: subfinder
      input:
        domain: "{{ input.target_domain }}"
        silent: true
      output: subdomains
    - id: dnsx
      tool: dnsx
      input:
        domains: "{{ subfinder.subdomains }}"
        threads: 100
      output: resolved_ips
    - id: httpx
      tool: httpx
      input:
        urls: "{{ dnsx.resolved_ips }}"
        status_code: true
      output: live_hosts
  output:
    result: "{{ httpx.live_hosts }}"

ShipSec Studio ships with first-class integrations for the most common security tools.

| Tool | Purpose | Key Params |
| --- | --- | --- |
| Subfinder | Subdomain enumeration | domain, silent, all-sources, threads |
| DNSX | DNS resolution & validation | domains, threads, resolver-list |
| Naabu | Port scanning | hosts, ports, threads, rate |
| HTTPx | HTTP probing & fingerprinting | urls, status-code, title, tech-detect |
| Nuclei | Vulnerability scanning | targets, templates, severity, tags |

Each tool node accepts input from previous nodes or manual values:

nodes:
  - id: subfinder_step
    tool: subfinder
    config:
      domain: "{{ trigger.domain }}"  # From workflow trigger
      silent: false
      all_sources: true
      max_retries: 3

Map tool output fields to next step inputs via the Output Mapping panel. Define transforms like:

transforms:
  - source: subfinder.subdomains
    target: dnsx.domains
    filter: "unique_domains"
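The DSL names the filter but not its behavior; conceptually, a transform like unique_domains just normalizes and deduplicates the list flowing between steps. An illustrative Python sketch (the real filter is internal to ShipSec and may differ):

```python
def unique_domains(values):
    """Deduplicate a list of domains, normalizing case, stray whitespace,
    and trailing dots, while preserving first-seen order: roughly what a
    'unique_domains' transform would do between subfinder and dnsx."""
    seen = set()
    out = []
    for v in values:
        d = v.strip().lower().rstrip(".")
        if d and d not in seen:
            seen.add(d)
            out.append(d)
    return out

print(unique_domains(["a.example.org", "A.example.org ", "b.example.org", "a.example.org."]))
# ['a.example.org', 'b.example.org']
```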

The Workflow Definition Language is a declarative format for defining workflows programmatically.

version: "1.0"
name: "advanced-recon"
description: "Multi-stage reconnaissance workflow"

triggers:
  - type: webhook
    path: /workflows/advanced-recon
  - type: schedule
    cron: "0 0 * * 0"  # Weekly

inputs:
  target_domain:
    type: string
    required: true
  include_passive:
    type: boolean
    default: true

steps:
  - id: stage_1_subdomains
    tool: subfinder
    parallel: false
    input:
      domain: "{{ inputs.target_domain }}"
      all_sources: "{{ inputs.include_passive }}"
    retry:
      max_attempts: 3
      backoff: exponential
    on_error: continue

  - id: stage_2_resolve
    tool: dnsx
    depends_on: stage_1_subdomains
    input:
      domains: "{{ stage_1_subdomains.output.subdomains }}"
      threads: 100
    timeout: 300

  - id: stage_3_probe
    tool: httpx
    depends_on: stage_2_resolve
    input:
      urls: "{{ stage_2_resolve.output.ips }}"
      status_code: true
      title: true
    parallel: true
    workers: 5

outputs:
  live_hosts:
    path: "{{ stage_3_probe.output.results }}"
    format: json

notifications:
  - type: webhook
    url: "{{ secrets.slack_webhook }}"
    on: workflow_complete
    payload:
      message: "Recon complete: {{ outputs.live_hosts }}"

Use {{ }} syntax to reference:

  • Trigger inputs: {{ trigger.param }}
  • Step outputs: {{ step_id.output.field }}
  • Secrets: {{ secrets.api_key }}
  • Workflow inputs: {{ inputs.var }}
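Each reference is essentially a dotted path into the execution context. A toy resolver in Python makes the lookup concrete (illustrative; ShipSec's actual template engine may behave differently around whitespace, missing keys, and non-string values):

```python
import re

def resolve(template, context):
    """Replace {{ dotted.path }} references with values from a nested dict."""
    def lookup(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

ctx = {
    "inputs": {"target_domain": "example.org"},
    "secrets": {"api_key": "***"},
    "stage_1_subdomains": {"output": {"subdomains": "www.example.org"}},
}
print(resolve("{{ inputs.target_domain }}", ctx))                  # example.org
print(resolve("{{ stage_1_subdomains.output.subdomains }}", ctx))  # www.example.org
```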

ShipSec Studio encrypts all secrets using AES-256-GCM at rest and in transit.

Add secrets via the web UI under Settings → Secrets:

  1. Click Add Secret
  2. Enter name (alphanumeric, underscores)
  3. Paste the value (API key, token, password)
  4. Select scope: Workflow, Team, or Personal
  5. Click Save

Secrets are encrypted immediately and never logged or displayed in plaintext.

Reference via template syntax:

steps:
  - id: shodan_scan
    tool: shodan_integration
    input:
      api_key: "{{ secrets.shodan_api_key }}"
      query: "{{ inputs.query }}"

Rotate keys with minimal disruption:

  1. Create a new secret with a suffix such as _v2
  2. Update workflow references to use the new secret name
  3. Delete the old secret after verification

Workers execute tasks in ephemeral containers, pulling from the Temporal queue.

Define worker capacity in docker-compose.yml:

worker:
  image: shipsecai/studio-worker:latest
  environment:
    - TEMPORAL_ADDRESS=temporal:7233
    - WORKER_POOL_SIZE=10
    - MAX_CONCURRENT_TASKS=5
    - TASK_TIMEOUT=600
    - LOG_LEVEL=debug
  deploy:
    replicas: 3
    resources:
      limits:
        cpus: '2'
        memory: 4G

Increase replicas for higher throughput:

docker-compose up -d --scale worker=5

Monitor worker health:

docker-compose logs -f worker | grep ERROR

Each task runs in an isolated container with:

  • Separate filesystem (no shared state)
  • Resource limits (CPU, memory)
  • Network isolation (outbound only)
  • Unique environment variables per execution

Workers are stateless and replaced frequently—no persistent data should be stored on workers.

Trigger workflows manually, via schedule, or from external events.

Enable webhook trigger in workflow definition:

triggers:
  - type: webhook
    path: /workflows/my-workflow
    method: POST

Invoke via HTTP:

curl -X POST http://localhost:8000/workflows/my-workflow \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "target_domain": "example.org",
    "include_passive": true
  }'

Response includes execution ID for tracking.

Schedule recurring workflows with standard cron syntax:

triggers:
  - type: schedule
    cron: "0 9 * * 1-5"  # 9 AM Mon-Fri
    timezone: "UTC"

Common patterns:

  • 0 0 * * * — Daily at midnight
  • 0 */6 * * * — Every 6 hours
  • 30 2 * * 0 — Weekly Sunday 2:30 AM
  • 0 0 1 * * — Monthly on the 1st

Trigger based on external events via webhooks:

triggers:
  - type: event
    source: github
    event: push
    repository: "your-org/repo"

Extend ShipSec Studio with your own tools.

Create a plugin by implementing the ShipSec Tool Interface:

mkdir plugins/my-custom-tool
cd plugins/my-custom-tool

Describe the tool in a JSON manifest:

{
  "name": "my-custom-tool",
  "version": "1.0.0",
  "author": "your-org",
  "description": "Custom security tool integration",
  "inputs": [
    {
      "name": "target",
      "type": "string",
      "required": true
    }
  ],
  "outputs": [
    {
      "name": "results",
      "type": "array"
    }
  ],
  "executable": "./run.sh"
}

The executable (shell script, Python, Go, etc.) processes input and outputs JSON:

#!/bin/bash

# Read input from stdin
INPUT=$(cat)
TARGET=$(echo "$INPUT" | jq -r '.target')

# Run your tool logic
RESULTS=$(my-tool scan "$TARGET")

# Output as JSON (emit results as an array to match the manifest)
jq -n --arg results "$RESULTS" '{results: ($results | split("\n"))}'

Place the plugin directory in ./plugins/ and restart workers:

docker-compose restart worker

The tool is now available in the Visual Builder under Custom Tools.
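Since the executable can be any language, the same stdin-JSON-in, JSON-out contract can be written in Python. A sketch in which scan() is a placeholder for real tool logic; in a real plugin the input would come from sys.stdin:

```python
import json

def scan(target):
    # Placeholder for real tool logic (e.g. shelling out to a scanner binary).
    return [f"{target}: ok"]

def run(raw_input: str) -> str:
    """Handle one plugin invocation: JSON object in, JSON object out,
    with a 'results' array matching the manifest's declared output."""
    params = json.loads(raw_input)
    return json.dumps({"results": scan(params["target"])})

# In a real plugin this would be: print(run(sys.stdin.read()))
print(run('{"target": "example.org"}'))  # {"results": ["example.org: ok"]}
```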

ShipSec Studio exposes a REST API for programmatic workflow management.

All API requests require a Bearer token in the Authorization header:

Authorization: Bearer YOUR_API_TOKEN

Generate tokens in Settings → API Tokens.

| Method | Endpoint | Purpose |
| --- | --- | --- |
| POST | /api/v1/workflows | Create workflow |
| GET | /api/v1/workflows | List workflows |
| GET | /api/v1/workflows/{id} | Get workflow details |
| PUT | /api/v1/workflows/{id} | Update workflow |
| DELETE | /api/v1/workflows/{id} | Delete workflow |
| POST | /api/v1/workflows/{id}/execute | Run workflow |
| GET | /api/v1/executions/{id} | Get execution status |
| GET | /api/v1/executions/{id}/logs | Stream execution logs |
Create a workflow:

curl -X POST http://localhost:8000/api/v1/workflows \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "api-workflow",
    "description": "Created via API",
    "definition": {
      "version": "1.0",
      "steps": [
        {
          "id": "subfinder",
          "tool": "subfinder",
          "input": {
            "domain": "{{ inputs.target }}"
          }
        }
      ]
    }
  }'
Execute a workflow:

curl -X POST http://localhost:8000/api/v1/workflows/abc123/execute \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "target": "example.org",
    "include_passive": true
  }'

Returns execution ID and status URL.
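A client typically extracts the execution ID from the response and polls GET /api/v1/executions/{id} until the run finishes. A small Python sketch of the polling decision; the field names execution_id and status and the terminal status values are assumptions about the response shape, so check them against your deployment:

```python
import json

TERMINAL = {"completed", "failed"}  # statuses shown in the Executions tab

def next_action(response_body: str) -> str:
    """Decide whether to keep polling GET /api/v1/executions/{id}
    based on a status response body."""
    status = json.loads(response_body)["status"]
    return "done" if status in TERMINAL else "poll"

print(next_action('{"execution_id": "exec-12345", "status": "running"}'))    # poll
print(next_action('{"execution_id": "exec-12345", "status": "completed"}'))  # done
```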

Track workflow execution and diagnose issues via logs.

The Executions tab shows all runs with:

  • Execution ID
  • Start/end time
  • Status (running, completed, failed)
  • Input parameters
  • Output summary

Click an execution to see detailed logs.

Stream logs for a running workflow:

curl -H "Authorization: Bearer YOUR_TOKEN" \
  "http://localhost:8000/api/v1/executions/exec-12345/logs?follow=true"

Logs include:

  • Timestamp
  • Step ID
  • Log level (DEBUG, INFO, WARN, ERROR)
  • Message

Access Temporal’s native UI at http://localhost:8233 to inspect:

  • Workflow execution history
  • Task queue depth
  • Worker status
  • Detailed event logs

Deploy ShipSec Studio in your own infrastructure for data residency and control.

  • Docker and Docker Compose
  • PostgreSQL 14+ (or use Docker service)
  • Temporal server (v1.20+)
  • 4 vCPU, 8GB RAM minimum
  • Network access for tool integrations (Shodan API, etc.)

For networks without internet access:

  1. Pull all Docker images on an internet-connected machine, then save them for transfer:
docker pull shipsecai/studio-management:latest
docker pull shipsecai/studio-worker:latest
docker pull temporalio/auto-setup:latest
docker pull postgres:14-alpine

# Save and transfer to target network
docker save -o studio-images.tar \
  shipsecai/studio-management:latest \
  shipsecai/studio-worker:latest \
  temporalio/auto-setup:latest \
  postgres:14-alpine
  2. Load the images on the target network:
docker load -i studio-images.tar
  3. Configure worker integrations to use local tool mirrors (Subfinder, Nuclei, etc.)

For production deployments, use environment-specific overrides:

# docker-compose.prod.yml
version: '3.8'

services:
  management-plane:
    image: shipsecai/studio-management:latest
    environment:
      - NODE_ENV=production
      - TEMPORAL_ADDRESS=temporal:7233
      - DATABASE_URL=postgresql://user:pass@postgres-host:5432/studio
      - JWT_SECRET=${JWT_SECRET}
      - REDIS_URL=redis://redis:6379
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  worker:
    image: shipsecai/studio-worker:latest
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - WORKER_POOL_SIZE=20
      - LOG_LEVEL=info
    restart: always
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '1'
          memory: 2G

Deploy:

docker-compose -f docker-compose.prod.yml up -d

Back up the PostgreSQL database regularly (the database name matches POSTGRES_DB in your compose file; temporal in the quickstart configuration):

docker-compose exec postgres pg_dump -U temporal temporal > backup.sql

Restore from backup:

docker-compose exec -T postgres psql -U temporal temporal < backup.sql

Temporal maintains workflow history independently—no additional backup needed.
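Scheduled backups fit naturally in cron. An illustrative entry (the install path, backup directory, and schedule are assumptions; note that % must be escaped in crontab entries):

```
# /etc/cron.d/shipsec-backup (illustrative; adjust paths and schedule)
# Nightly dump at 02:00.
0 2 * * * root cd /opt/shipsec && docker-compose exec -T postgres pg_dump -U temporal temporal > /var/backups/shipsec/studio-$(date +\%F).sql
```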

Error: InvalidWorkflowDefinition

Check for:

  • Undefined variable references (typo in step IDs or input names)
  • Circular dependencies (step A depends on B, B depends on A)
  • Missing required tool parameters

Fix: Review the workflow in the Visual Builder and validate connections.

Error: Worker queue depth increasing, tasks not executing

Check worker status:

docker-compose logs worker | grep -i error

Verify the worker can reach Temporal. Port 7233 speaks gRPC, so a plain HTTP request won't get a meaningful reply; a TCP connectivity check is enough (assumes nc is available in the worker image):

docker-compose exec worker nc -z temporal 7233 && echo "Connected" || echo "Connection failed"

Restart workers:

docker-compose restart worker

If workers run out of memory, decrease WORKER_POOL_SIZE and increase memory limits:

environment:
  - WORKER_POOL_SIZE=5  # Reduce from 10
resources:
  limits:
    memory: 8G  # Increase

Or scale to more worker instances with smaller pools.

For authentication failures, regenerate tokens in Settings → API Tokens. Note that old tokens remain valid until explicitly revoked.

For database connection errors, ensure PostgreSQL is running and accessible:

docker-compose logs postgres

Check connection string in DATABASE_URL.

Workflow design

  • Modular steps — Keep each step focused on one tool. Chain steps for complex logic.
  • Error handling — Set on_error: continue for optional steps, on_error: fail for critical ones.
  • Timeouts — Add explicit timeouts to prevent runaway tasks (default 600s).
  • Parallel execution — Use parallel: true for independent steps to reduce runtime.

Security

  • Rotate secrets — Update API keys quarterly and rotate immediately if exposed.
  • Least privilege — Limit worker permissions to only required network destinations.
  • Audit logs — Enable audit logging in production and review regularly.
  • Network isolation — Run workers on a dedicated subnet with egress firewall rules.

Performance

  • Batch inputs — Send domains in batches to tools rather than one-by-one.
  • Cache results — Use workflow outputs to avoid redundant scans.
  • Scale workers — Monitor queue depth and scale workers before they become a bottleneck.
  • Adjust threading — Tool-specific threading (e.g., threads: 200 in Subfinder) improves throughput.

Maintenance

  • Version workflows — Use semantic versioning for workflow definitions.
  • Test changes — Deploy updated workflows to a staging environment first.
  • Monitor logs — Set up alerts on ERROR and WARN logs in production.
  • Update tools — Regularly pull the latest tool container images.
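Several of these practices come together in a single step definition. An illustrative DSL fragment using the fields documented above:

```yaml
steps:
  - id: optional_enrichment
    tool: httpx
    parallel: true        # independent of sibling steps, safe to run concurrently
    timeout: 300          # explicit cap, well under the 600s default
    retry:
      max_attempts: 3
      backoff: exponential
    on_error: continue    # optional step: a failure shouldn't sink the workflow
```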
| Tool | Purpose | Comparison |
| --- | --- | --- |
| Tines | Low-code security automation platform | Commercial, managed cloud service; ShipSec is open-source and self-hosted |
| Shuffle SOAR | Open-source security orchestration platform | Similar architecture; Shuffle focuses on enterprise SOAR, ShipSec on reconnaissance/scanning |
| n8n | General-purpose workflow automation | Broader use cases (CRM, marketing, etc.); ShipSec is security-focused with built-in tool integrations |
| StackStorm | Open-source event-driven automation | Broader IT automation; ShipSec has tighter security tool integration |

For more information, visit the official docs at docs.shipsec.ai or the GitHub repository at github.com/shipsecai/studio.