ShipSec Studio
ShipSec Studio is an open-source no-code security workflow automation platform that enables security teams to orchestrate reconnaissance, scanning, and security operations with a visual pipeline builder backed by Temporal.io. Build complex security workflows without writing code and deploy them in your own infrastructure.
Installation
ShipSec Studio uses Docker for deployment. The quickest way to get started is the one-line installer.
Prerequisites
- Docker (v20.10+) and Docker Compose (v2.0+)
- 4GB RAM minimum, 8GB recommended
- Port access: 8000 (web UI), 7233 (Temporal server), 8233 (Temporal UI)
- Linux/macOS/Windows with WSL2
One-Line Installer
curl https://install.shipsec.ai | bash
This downloads and runs the installer, which handles all initial configuration and starts the services.
Manual Docker Compose Setup
Create a docker-compose.yml file:
version: '3.8'
services:
  temporal:
    image: temporalio/auto-setup:latest
    ports:
      - "7233:7233"
      - "8233:8233"
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal
      - POSTGRES_SEEDS=postgres
    depends_on:
      - postgres
    healthcheck:
      test: ["CMD", "tctl", "workflow", "list"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgres:
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=temporal
      - POSTGRES_PASSWORD=temporal
      - POSTGRES_DB=temporal
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  management-plane:
    image: shipsecai/studio-management:latest
    ports:
      - "8000:3000"
    environment:
      - NODE_ENV=production
      - TEMPORAL_ADDRESS=temporal:7233
      - DATABASE_URL=postgresql://temporal:temporal@postgres:5432/temporal
      - JWT_SECRET=your-secret-key-here
    depends_on:
      temporal:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
  worker:
    image: shipsecai/studio-worker:latest
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - WORKER_POOL_SIZE=5
      - LOG_LEVEL=info
    depends_on:
      - temporal
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '1'
          memory: 2G
volumes:
  postgres-data:
Start the services:
docker-compose up -d
Access the web UI at http://localhost:8000
Quick Start
Accessing the Web UI
Once services are running, open your browser and navigate to http://localhost:8000. You’ll see the dashboard with your workflows, execution history, and status overview.
Default login uses the initial admin credentials (change these on first login for security).
Creating Your First Workflow
- Click New Workflow on the dashboard
- Name it (e.g., “Domain Reconnaissance”)
- Select Visual Builder to enter the drag-and-drop editor
- Add nodes by dragging tools from the left panel:
- Subfinder for subdomain enumeration
- DNSX for DNS resolution
- HTTPx for web probing
- Connect nodes by clicking output ports and dragging to input ports
- Configure each node with:
- Input sources (previous node output, manual values, secrets)
- Parameters (threads, filters, output format)
- Error handling (retry, skip, fail)
- Click Compile to generate the DSL
- Click Save & Deploy to make it available
- Run the workflow manually or set a schedule
Architecture
ShipSec Studio uses a three-plane distributed architecture:
Management Plane (NestJS)
- REST API and web UI
- Workflow definition storage
- User authentication and authorization
- Scheduling and trigger management
- Runs on port 3000 (mapped to 8000 externally)
Orchestration Plane (Temporal.io)
- Workflow state management
- Task routing and retry logic
- Execution history and durability
- Handles millions of concurrent workflows
- Server on port 7233, UI on port 8233
Worker Plane (Stateless Containers)
- Ephemeral task execution
- Tool integration plugins
- Isolated security context per task
- Scales horizontally with load
- Pulls tasks from Temporal queue
All three planes communicate via gRPC and HTTP APIs. Workers are stateless and disposable—failed workers don’t impact overall system state because Temporal retains the workflow history.
Visual Workflow Builder
The visual editor is the primary way to design workflows without code.
Creating Pipelines
Start with a blank canvas and add nodes from the sidebar:
- Input nodes — manual triggers, webhook triggers, scheduled inputs
- Tool nodes — Subfinder, Naabu, custom integrations
- Decision nodes — conditional branches, filters
- Output nodes — store results, send alerts, trigger downstream workflows
Connecting Nodes
Drag from the output port of one node to the input port of another. The system validates:
- Data type compatibility (e.g., domain list → subdomain tool)
- Required vs. optional inputs
- Circular dependencies (prevented)
Each connection shows the data flowing through it. Hover to see the schema.
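The checks the editor runs when you drag a connection can be sketched in a few lines. The type names and compatibility table below are illustrative assumptions, not ShipSec Studio's internal schema:

```python
# Sketch of the connection checks described above: type compatibility plus
# cycle prevention. The type names and COMPATIBLE table are assumptions.

# Which upstream output types each input type accepts (assumed).
COMPATIBLE = {
    "domain_list": {"domain_list"},
    "url_list": {"url_list", "domain_list"},  # domains can be probed as URLs
}

def can_connect(edges, src, src_type, dst, dst_type):
    """True if src -> dst is type-compatible and would not close a cycle."""
    if src_type not in COMPATIBLE.get(dst_type, set()):
        return False
    # If dst can already reach src, adding src -> dst would create a cycle.
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return False
        if node not in seen:
            seen.add(node)
            stack.extend(t for s, t in edges if s == node)
    return True

edges = [("subfinder", "dnsx")]
print(can_connect(edges, "dnsx", "domain_list", "httpx", "url_list"))         # True
print(can_connect(edges, "dnsx", "domain_list", "subfinder", "domain_list"))  # False: cycle
```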
Compiling to DSL
After designing visually, click Compile to generate the underlying Workflow Definition Language (a YAML-based DSL). This DSL is version-controlled and can be imported back into the visual builder or stored in Git.
# Generated DSL example
workflow:
  name: domain-recon-workflow
  version: 1
  steps:
    - id: subfinder
      tool: subfinder
      input:
        domain: "{{ inputs.target_domain }}"
        silent: true
      output: subdomains
    - id: dnsx
      tool: dnsx
      input:
        domains: "{{ subfinder.subdomains }}"
        threads: 100
      output: resolved_ips
    - id: httpx
      tool: httpx
      input:
        urls: "{{ dnsx.resolved_ips }}"
        status_code: true
      output: live_hosts
  output:
    result: "{{ httpx.live_hosts }}"
Built-in Integrations
ShipSec Studio ships with first-class integrations for the most common security tools.
Reconnaissance Tools
| Tool | Purpose | Key Params |
|---|---|---|
| Subfinder | Subdomain enumeration | domain, silent, all-sources, threads |
| DNSX | DNS resolution & validation | domains, threads, resolver-list |
| Naabu | Port scanning | hosts, ports, threads, rate |
| HTTPx | HTTP probing & fingerprinting | urls, status-code, title, tech-detect |
| Nuclei | Vulnerability scanning | targets, templates, severity, tags |
Parameters
Each tool node accepts input from previous nodes or manual values:
nodes:
  - id: subfinder_step
    tool: subfinder
    config:
      domain: "{{ trigger.domain }}"  # From workflow trigger
      silent: false
      all_sources: true
      max_retries: 3
Custom Tool Mappings
Map tool output fields to next step inputs via the Output Mapping panel. Define transforms like:
transforms:
  - source: subfinder.subdomains
    target: dnsx.domains
    filter: "unique_domains"
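A transform like the one above can be thought of as a lookup into a filter registry. The sketch below assumes a hypothetical `unique_domains` dedupe filter and mirrors the YAML shape; it is not the actual implementation:

```python
# Sketch of applying an output-mapping transform such as "unique_domains".
# The filter registry and transform shape are assumptions, not internals.

FILTERS = {
    # Deduplicate while preserving first-seen order, normalising case.
    "unique_domains": lambda items: list(dict.fromkeys(d.strip().lower() for d in items)),
}

def apply_transform(transform: dict, step_outputs: dict) -> dict:
    step, field = transform["source"].split(".", 1)
    value = step_outputs[step][field]
    if "filter" in transform:
        value = FILTERS[transform["filter"]](value)
    return {transform["target"]: value}

outputs = {"subfinder": {"subdomains": ["a.example.org", "A.example.org", "b.example.org"]}}
mapping = {"source": "subfinder.subdomains", "target": "dnsx.domains", "filter": "unique_domains"}
print(apply_transform(mapping, outputs))  # {'dnsx.domains': ['a.example.org', 'b.example.org']}
```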
Workflow DSL
The Workflow Definition Language is a declarative format for defining workflows programmatically.
Format
version: "1.0"
name: "advanced-recon"
description: "Multi-stage reconnaissance workflow"
triggers:
  - type: webhook
    path: /workflows/advanced-recon
  - type: schedule
    cron: "0 0 * * 0"  # Weekly
inputs:
  target_domain:
    type: string
    required: true
  include_passive:
    type: boolean
    default: true
steps:
  - id: stage_1_subdomains
    tool: subfinder
    parallel: false
    input:
      domain: "{{ inputs.target_domain }}"
      all_sources: "{{ inputs.include_passive }}"
    retry:
      max_attempts: 3
      backoff: exponential
    on_error: continue
  - id: stage_2_resolve
    tool: dnsx
    depends_on: stage_1_subdomains
    input:
      domains: "{{ stage_1_subdomains.output.subdomains }}"
      threads: 100
    timeout: 300
  - id: stage_3_probe
    tool: httpx
    depends_on: stage_2_resolve
    input:
      urls: "{{ stage_2_resolve.output.ips }}"
      status_code: true
      title: true
    parallel: true
    workers: 5
outputs:
  live_hosts:
    path: "{{ stage_3_probe.output.results }}"
    format: json
notifications:
  - type: webhook
    url: "{{ secrets.slack_webhook }}"
    on: workflow_complete
    payload:
      message: "Recon complete: {{ outputs.live_hosts }}"
Variables and Interpolation
Use {{ }} syntax to reference:
- Trigger inputs: {{ trigger.param }}
- Step outputs: {{ step_id.output.field }}
- Secrets: {{ secrets.api_key }}
- Workflow inputs: {{ inputs.var }}
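Conceptually, interpolation walks a dotted path through a context object built from triggers, step outputs, and secrets. A minimal sketch, assuming the resolver does a simple dotted-path lookup:

```python
import re

# Minimal sketch of resolving {{ ... }} references. The dotted-path lookup
# is an assumption about the real resolver's behaviour, not its actual code.

def interpolate(template: str, context: dict) -> str:
    def resolve(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # raises KeyError for undefined references
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", resolve, template)

context = {
    "inputs": {"target_domain": "example.org"},
    "secrets": {"api_key": "***"},
    "subfinder": {"output": {"subdomains": ["a.example.org"]}},
}
print(interpolate("scan {{ inputs.target_domain }}", context))  # scan example.org
```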
Secrets Management
ShipSec Studio encrypts all secrets using AES-256-GCM at rest and in transit.
Adding Secrets
Via the web UI under Settings → Secrets:
- Click Add Secret
- Enter name (alphanumeric, underscores)
- Paste the value (API key, token, password)
- Select scope: Workflow, Team, or Personal
- Click Save
Secrets are encrypted immediately and never logged or displayed in plaintext.
Using Secrets in Workflows
Reference via template syntax:
steps:
  - id: shodan_scan
    tool: shodan_integration
    input:
      api_key: "{{ secrets.shodan_api_key }}"
      query: "{{ inputs.query }}"
API Key Management
Rotate keys without updating workflows:
- Create a new secret with the suffix _v2
- Update workflows/integrations to use the new key
- Delete the old secret after verification
Worker Configuration
Workers execute tasks in ephemeral containers, pulling from the Temporal queue.
Worker Pool Configuration
Define worker capacity in docker-compose.yml:
worker:
  image: shipsecai/studio-worker:latest
  environment:
    - TEMPORAL_ADDRESS=temporal:7233
    - WORKER_POOL_SIZE=10
    - MAX_CONCURRENT_TASKS=5
    - TASK_TIMEOUT=600
    - LOG_LEVEL=debug
  deploy:
    replicas: 3
    resources:
      limits:
        cpus: '2'
        memory: 4G
Scaling Workers
Increase replicas for higher throughput:
docker-compose up -d --scale worker=5
Monitor worker health:
docker-compose logs -f worker | grep ERROR
Task Isolation
Each task runs in an isolated container with:
- Separate filesystem (no shared state)
- Resource limits (CPU, memory)
- Network isolation (outbound only)
- Unique environment variables per execution
Workers are stateless and replaced frequently—no persistent data should be stored on workers.
Scheduling and Triggers
Trigger workflows manually, via schedule, or from external events.
Webhook Triggers
Enable a webhook trigger in the workflow definition:
triggers:
  - type: webhook
    path: /workflows/my-workflow
    method: POST
Invoke via HTTP:
curl -X POST http://localhost:8000/workflows/my-workflow \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "target_domain": "example.org",
    "include_passive": true
  }'
Response includes execution ID for tracking.
Cron Scheduling
Schedule recurring workflows with standard cron syntax:
triggers:
  - type: schedule
    cron: "0 9 * * 1-5"  # 9 AM Mon-Fri
    timezone: "UTC"
Common patterns:
- 0 0 * * * — Daily at midnight
- 0 */6 * * * — Every 6 hours
- 30 2 * * 0 — Weekly on Sunday at 2:30 AM
- 0 0 1 * * — Monthly on the 1st
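To sanity-check a schedule before deploying it, the cron patterns above can be evaluated with a few lines of Python. This matcher is a simplified sketch, not ShipSec Studio's scheduler:

```python
from datetime import datetime

# Tiny matcher for the 5-field cron syntax shown above (minute, hour,
# day-of-month, month, day-of-week). Supports *, */n, a-b ranges, and plain
# numbers -- enough for the listed patterns; real schedulers also handle
# lists, names, and day-of-month/day-of-week OR semantics.

def field_matches(expr: str, value: int) -> bool:
    if expr == "*":
        return True
    if expr.startswith("*/"):
        return value % int(expr[2:]) == 0
    if "-" in expr:
        lo, hi = map(int, expr.split("-"))
        return lo <= value <= hi
    return value == int(expr)

def cron_matches(cron: str, dt: datetime) -> bool:
    minute, hour, dom, month, dow = cron.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # cron uses 0 = Sunday

print(cron_matches("0 9 * * 1-5", datetime(2024, 6, 3, 9, 0)))  # Monday 09:00 -> True
print(cron_matches("0 9 * * 1-5", datetime(2024, 6, 2, 9, 0)))  # Sunday 09:00 -> False
```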
Event-Driven Triggers
Trigger based on external events via webhooks:
triggers:
  - type: event
    source: github
    event: push
    repository: "your-org/repo"
Custom Integrations
Extend ShipSec Studio with your own tools.
Plugin Architecture
Create a plugin by implementing the ShipSec Tool Interface:
mkdir plugins/my-custom-tool
cd plugins/my-custom-tool
{
  "name": "my-custom-tool",
  "version": "1.0.0",
  "author": "your-org",
  "description": "Custom security tool integration",
  "inputs": [
    {
      "name": "target",
      "type": "string",
      "required": true
    }
  ],
  "outputs": [
    {
      "name": "results",
      "type": "array"
    }
  ],
  "executable": "./run.sh"
}
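Before restarting workers, it can help to sanity-check the manifest. A sketch of such a validator follows; the required keys mirror the example above, but the checker itself is illustrative, not ShipSec Studio's actual loader:

```python
import json

# Illustrative manifest validator; the required keys mirror the example
# manifest above, but this is an assumption about the loader, not its code.

REQUIRED_KEYS = {"name", "version", "inputs", "outputs", "executable"}

def validate_manifest(raw: str) -> dict:
    manifest = json.loads(raw)
    missing = REQUIRED_KEYS - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing keys: {sorted(missing)}")
    for port in manifest["inputs"] + manifest["outputs"]:
        if "name" not in port or "type" not in port:
            raise ValueError(f"port entry needs 'name' and 'type': {port}")
    return manifest

manifest = validate_manifest('''{
  "name": "my-custom-tool",
  "version": "1.0.0",
  "inputs": [{"name": "target", "type": "string", "required": true}],
  "outputs": [{"name": "results", "type": "array"}],
  "executable": "./run.sh"
}''')
print(manifest["name"])  # my-custom-tool
```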
Integration Script
The executable (shell script, Python, Go, etc.) processes input and outputs JSON:
#!/bin/bash
# Read input from stdin
INPUT=$(cat)
TARGET=$(echo "$INPUT" | jq -r '.target')
# Run your tool logic
RESULTS=$(my-tool scan "$TARGET")
# Output as JSON
jq -n --arg results "$RESULTS" '{results: $results}'
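The same contract works in any language. Here is a Python equivalent of the script above, with the scan logic stubbed out (the `scanned:` result is a placeholder) so the stdin/stdout JSON shape is the focus:

```python
import json
import sys

# Same plugin contract in Python: read one JSON object on stdin, emit one
# JSON object with the declared outputs on stdout. The scan is a placeholder.

def handle(params: dict) -> dict:
    target = params["target"]
    results = [f"scanned:{target}"]  # placeholder for real tool output
    return {"results": results}

if __name__ == "__main__":
    # In a real plugin run this would be: handle(json.load(sys.stdin))
    print(json.dumps(handle({"target": "example.org"})))
```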
Registering the Plugin
Place the plugin directory in ./plugins/ and restart workers:
docker-compose restart worker
The tool is now available in the Visual Builder under Custom Tools.
API Reference
ShipSec Studio exposes a REST API for programmatic workflow management.
Authentication
All API requests require a Bearer token in the Authorization header:
Authorization: Bearer YOUR_API_TOKEN
Generate tokens in Settings → API Tokens.
Core Endpoints
| Method | Endpoint | Purpose |
|---|---|---|
| POST | /api/v1/workflows | Create workflow |
| GET | /api/v1/workflows | List workflows |
| GET | /api/v1/workflows/{id} | Get workflow details |
| PUT | /api/v1/workflows/{id} | Update workflow |
| DELETE | /api/v1/workflows/{id} | Delete workflow |
| POST | /api/v1/workflows/{id}/execute | Run workflow |
| GET | /api/v1/executions/{id} | Get execution status |
| GET | /api/v1/executions/{id}/logs | Stream execution logs |
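The endpoints above map naturally onto a thin client wrapper. The sketch below only builds the authenticated requests with the standard library (no network call is made), so the URL and header wiring stays visible; send one with `urllib.request.urlopen(req)`:

```python
import json
import urllib.request

# Thin client sketch for the endpoints above. It only constructs requests;
# actually sending them requires a running ShipSec Studio instance.

class StudioClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _request(self, method: str, path: str, body=None) -> urllib.request.Request:
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(self.base_url + path, data=data, method=method)
        req.add_header("Authorization", f"Bearer {self.token}")
        req.add_header("Content-Type", "application/json")
        return req

    def execute_workflow(self, workflow_id: str, inputs: dict):
        return self._request("POST", f"/api/v1/workflows/{workflow_id}/execute", inputs)

    def execution_status(self, execution_id: str):
        return self._request("GET", f"/api/v1/executions/{execution_id}")

client = StudioClient("http://localhost:8000", "YOUR_TOKEN")
req = client.execute_workflow("abc123", {"target": "example.org"})
print(req.get_method(), req.full_url)
# POST http://localhost:8000/api/v1/workflows/abc123/execute
```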
Creating Workflow via API
curl -X POST http://localhost:8000/api/v1/workflows \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "api-workflow",
    "description": "Created via API",
    "definition": {
      "version": "1.0",
      "steps": [
        {
          "id": "subfinder",
          "tool": "subfinder",
          "input": {
            "domain": "{{ inputs.target }}"
          }
        }
      ]
    }
  }'
Executing Workflow
섹션 제목: “Executing Workflow”curl -X POST http://localhost:8000/api/v1/workflows/abc123/execute \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"target": "example.org",
"include_passive": true
}'
Returns execution ID and status URL.
Monitoring and Logging
Track workflow execution and diagnose issues via logs.
Workflow Execution Dashboard
The Executions tab shows all runs with:
- Execution ID
- Start/end time
- Status (running, completed, failed)
- Input parameters
- Output summary
Click an execution to see detailed logs.
Execution Logs
Stream logs for a running workflow:
curl -H "Authorization: Bearer YOUR_TOKEN" \
  "http://localhost:8000/api/v1/executions/exec-12345/logs?follow=true"
Logs include:
- Timestamp
- Step ID
- Log level (DEBUG, INFO, WARN, ERROR)
- Message
Temporal UI
Access Temporal’s native UI at http://localhost:8233 to inspect:
- Workflow execution history
- Task queue depth
- Worker status
- Detailed event logs
Self-Hosted Deployment
Deploy ShipSec Studio in your own infrastructure for data residency and control.
Requirements
- Docker and Docker Compose
- PostgreSQL 14+ (or use Docker service)
- Temporal server (v1.20+)
- 4 vCPU, 8GB RAM minimum
- Network access for tool integrations (Shodan API, etc.)
Air-Gapped Deployment
For networks without internet access:
- On a machine with internet access, pull and save the images:
docker pull shipsecai/studio-management:latest
docker pull shipsecai/studio-worker:latest
docker pull temporalio/auto-setup:latest
docker pull postgres:14-alpine
# Save and transfer to the target network
docker save -o studio-images.tar \
  shipsecai/studio-management:latest \
  shipsecai/studio-worker:latest \
  temporalio/auto-setup:latest \
  postgres:14-alpine
- Load on target network:
docker load -i studio-images.tar
- Configure worker integrations to use local tool mirrors (Subfinder, Nuclei, etc.)
Production Configuration
For production deployments, use environment-specific overrides:
# docker-compose.prod.yml
version: '3.8'
services:
  management-plane:
    image: shipsecai/studio-management:latest
    environment:
      - NODE_ENV=production
      - TEMPORAL_ADDRESS=temporal:7233
      - DATABASE_URL=postgresql://user:pass@postgres-host:5432/studio
      - JWT_SECRET=${JWT_SECRET}
      - REDIS_URL=redis://redis:6379
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  worker:
    image: shipsecai/studio-worker:latest
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - WORKER_POOL_SIZE=20
      - LOG_LEVEL=info
    restart: always
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '1'
          memory: 2G
Deploy:
docker-compose -f docker-compose.prod.yml up -d
Backup and Recovery
Backup the PostgreSQL database regularly:
docker-compose exec postgres pg_dump -U temporal temporal > backup.sql
Restore from backup:
docker-compose exec -T postgres psql -U temporal temporal < backup.sql
In the default setup, Temporal persists its workflow history to the same PostgreSQL instance, so this backup covers execution history as well.
Troubleshooting
Workflow Fails to Compile
Error: InvalidWorkflowDefinition
Check for:
- Undefined variable references (typo in step IDs or input names)
- Circular dependencies (step A depends on B, B depends on A)
- Missing required tool parameters
Fix: Review the workflow in the Visual Builder and validate connections.
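The circular-dependency case can also be caught programmatically before deploying, by running a depth-first search over the depends_on graph. A sketch, with step dicts shaped like the DSL examples:

```python
# Detect the "step A depends on B, B depends on A" error described above.
# Step shape follows the DSL examples; the checker itself is a sketch.

def has_cycle(steps: list) -> bool:
    deps = {}
    for step in steps:
        d = step.get("depends_on", [])
        deps[step["id"]] = d if isinstance(d, list) else [d]
    state = {}  # 1 = on current DFS path, 2 = fully explored

    def visit(node) -> bool:
        state[node] = 1
        for dep in deps.get(node, []):
            if state.get(dep) == 1:  # back-edge: cycle found
                return True
            if state.get(dep) is None and visit(dep):
                return True
        state[node] = 2
        return False

    return any(state.get(n) is None and visit(n) for n in deps)

print(has_cycle([{"id": "a"}, {"id": "b", "depends_on": "a"}]))  # False
print(has_cycle([{"id": "a", "depends_on": "b"},
                 {"id": "b", "depends_on": "a"}]))               # True
```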
Worker Not Processing Tasks
Error: Worker queue depth increasing, tasks not executing
Check worker status:
docker-compose logs worker | grep -i error
Verify the worker can reach Temporal (7233 is a gRPC port, so test the TCP connection rather than HTTP):
docker-compose exec worker nc -z temporal 7233 || echo "Connection failed"
Restart workers:
docker-compose restart worker
Out of Memory During Large Scans
Decrease WORKER_POOL_SIZE and increase memory limits:
environment:
  - WORKER_POOL_SIZE=5  # Reduce from 10
resources:
  limits:
    memory: 8G  # Increase
Or scale to more worker instances with smaller pools.
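A quick way to reason about these limits is to estimate aggregate memory demand as replicas × tasks per worker × peak memory per task. The 0.8 GB per-task figure below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope sizing for the tuning above. All numbers are examples.

def worker_memory_gb(replicas: int, pool_size: int, peak_task_gb: float) -> float:
    """Aggregate peak memory demand across all worker replicas, in GB."""
    return replicas * pool_size * peak_task_gb

print(worker_memory_gb(2, 10, 0.8))  # 16.0 across 2 replicas (8 GB each vs a 2 GB limit)
print(worker_memory_gb(2, 5, 0.8))   # 8.0, i.e. 4 GB per replica, fits an 8 GB limit
```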
API Token Expired
Generate a new token in Settings → API Tokens and update any clients. Note that unexpired tokens remain valid until explicitly revoked, so revoke superseded tokens once the migration is verified.
Temporal Server Won’t Start
Ensure PostgreSQL is running and accessible:
docker-compose logs postgres
Check the database connection settings (the POSTGRES_* variables for Temporal, DATABASE_URL for the management plane).
Best Practices
Workflow Design
- Modular steps — Keep each step focused on one tool. Chain steps for complex logic.
- Error handling — Set on_error: continue for optional steps, on_error: fail for critical ones.
- Timeouts — Add explicit timeouts to prevent runaway tasks (default 600s).
- Parallel execution — Use parallel: true for independent steps to reduce runtime.
Security
- Rotate secrets — Update API keys quarterly and rotate immediately if exposed.
- Least privilege — Limit worker permissions to only required network destinations.
- Audit logs — Enable audit logging in production and review regularly.
- Network isolation — Run workers on a dedicated subnet with egress firewall rules.
Performance
- Batch inputs — Send domains in batches to tools rather than one-by-one.
- Cache results — Use workflow outputs to avoid redundant scans.
- Scale workers — Monitor queue depth and scale workers before they become a bottleneck.
- Adjust threading — Tool-specific threading (e.g., threads: 200 in Subfinder) improves throughput.
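The batching advice can be as simple as chunking the target list before fanning it out to tool nodes:

```python
# Group targets into fixed-size chunks so each tool invocation receives a
# batch of domains instead of a single host.

def batched(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

domains = [f"host{i}.example.org" for i in range(250)]
print([len(chunk) for chunk in batched(domains, 100)])  # [100, 100, 50]
```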
Maintenance
- Version workflows — Use semantic versioning for workflow definitions.
- Test changes — Deploy updated workflows to a staging environment first.
- Monitor logs — Set up alerts on ERROR and WARN logs in production.
- Update tools — Regularly pull latest tool container images.
Related Tools
| Tool | Purpose | Comparison |
|---|---|---|
| Tines | Low-code security automation platform | Commercial, managed cloud service; ShipSec is open-source and self-hosted |
| Shuffle SOAR | Open-source security orchestration platform | Similar architecture; Shuffle focuses on enterprise SOAR, ShipSec on reconnaissance/scanning |
| n8n | General-purpose workflow automation | Broader use cases (CRM, marketing, etc.); ShipSec is security-focused with built-in tool integrations |
| StackStorm | Open-source event-driven automation | Broader IT automation; ShipSec has tighter security tool integration |
For more information, visit the official docs at docs.shipsec.ai or the GitHub repository at github.com/shipsecai/studio.