Self-hosted uptime monitoring tool with support for HTTP, TCP, DNS, Docker, and 90+ notification services including Slack, Discord, and Telegram.
```sh
# Run Uptime Kuma with persistent storage
docker run -d \
  --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:2

# Access at http://localhost:3001
```
```sh
# Bind to localhost only (no external access)
docker run -d \
  --restart=always \
  -p 127.0.0.1:3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:2
```
```sh
# Download the official compose file
mkdir uptime-kuma && cd uptime-kuma
curl -o compose.yaml \
  https://raw.githubusercontent.com/louislam/uptime-kuma/master/compose.yaml

# Start the service
docker compose up -d

# Access at http://localhost:3001
```
```sh
# Mount the Docker socket for container health monitoring
docker run -d \
  --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --name uptime-kuma \
  louislam/uptime-kuma:2
```
```sh
# Requirements: Node.js 20.4+, Git

# Clone and set up
git clone https://github.com/louislam/uptime-kuma.git
cd uptime-kuma
npm run setup

# Start directly
node server/server.js

# Or use PM2 for background operation
npm install -g pm2
pm2 start server/server.js --name uptime-kuma
```
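When PM2 manages the process, it can also bring Uptime Kuma back up after a reboot. These are standard PM2 commands, not specific to Uptime Kuma:

```sh
# Persist the current process list, then install a boot script;
# pm2 startup prints a command to run with elevated privileges
pm2 save
pm2 startup
```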
| Command | Description |
|---|---|
| docker run -d -p 3001:3001 louislam/uptime-kuma:2 | Start Uptime Kuma container |
| docker compose up -d | Start with Docker Compose |
| docker compose down | Stop all services |
| docker compose logs -f | Follow container logs |
| docker restart uptime-kuma | Restart the container |
| docker stop uptime-kuma | Stop the container |
| docker start uptime-kuma | Start stopped container |
| node server/server.js | Start without Docker |
| pm2 start server/server.js --name uptime-kuma | Start with PM2 |
| pm2 restart uptime-kuma | Restart with PM2 |
| pm2 stop uptime-kuma | Stop with PM2 |
| pm2 logs uptime-kuma | View PM2 logs |
```sh
# Pull latest image
docker pull louislam/uptime-kuma:2

# Stop and remove current container
docker stop uptime-kuma
docker rm uptime-kuma

# Recreate with new image (data persists in volume)
docker run -d \
  --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:2
```
```sh
# Pull and recreate
docker compose pull
docker compose up -d
```
```sh
cd uptime-kuma
git fetch --all
git checkout 2.1.3 --force
npm install --production
npm run download-dist
pm2 restart uptime-kuma
```
| Type | Description |
|---|---|
| HTTP(S) | Monitor website availability and response time |
| HTTP(S) Keyword | Check for specific text in response body |
| HTTP(S) JSON Query | Validate JSON response values |
| TCP Port | Check if a TCP port is open and responding |
| Ping | ICMP ping for host availability |
| DNS | Monitor DNS record resolution |
| Docker Container | Monitor container health status |
| Steam Game Server | Check game server availability |
| MQTT | Monitor MQTT broker connectivity |
| gRPC | Monitor gRPC service health |
| Radius | RADIUS authentication server check |
| GameDig | Game server monitoring (multiple protocols) |
| Push | Passive monitoring via heartbeat endpoint |
| Group | Logical grouping of monitors |
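The Push type inverts the usual model: Uptime Kuma exposes a heartbeat URL and marks the monitor down when heartbeats stop arriving, which suits hosts behind firewalls. A sketch of a cron entry on the monitored host, assuming a placeholder hostname and PUSH_TOKEN copied from the monitor's settings:

```sh
# Send a heartbeat every minute to the Push monitor's URL
* * * * * curl -fsS "https://status.example.com/api/push/PUSH_TOKEN?status=up&msg=OK" > /dev/null
```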
| Service | Description |
|---|---|
| Slack | Channel webhooks and bot notifications |
| Discord | Webhook-based channel notifications |
| Telegram | Bot API notifications |
| Microsoft Teams | Incoming webhook notifications |
| Mattermost | Self-hosted chat integration |
| Rocket.Chat | Webhook notifications |
| Google Chat | Space webhook notifications |
| Matrix | Decentralized chat notifications |
| Service | Description |
|---|---|
| Pushover | Mobile push notifications |
| Gotify | Self-hosted push server |
| ntfy | HTTP-based pub/sub notifications |
| Pushbullet | Cross-device push notifications |
| Signal | Secure messaging notifications |
| LINE | LINE Notify integration |
| Service | Description |
|---|---|
| PagerDuty | Incident alerting and on-call |
| Opsgenie | Alert management and escalation |
| Squadcast | Incident management |
| Splunk On-Call | VictorOps alert routing |
| Better Stack | Uptime and incident management |
| Service | Description |
|---|---|
| SMTP Email | Custom email notifications |
| Webhook | Custom HTTP endpoint calls |
| Home Assistant | Smart home automation triggers |
| Apprise | Universal notification gateway |
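Many of these channels can be exercised outside Uptime Kuma first, which separates delivery problems from monitor problems. For example, ntfy topics accept plain HTTP POSTs; the topic name below is a placeholder:

```sh
# Publish a test message to an ntfy topic, then subscribe to the
# same topic in the ntfy app or web UI to confirm delivery
curl -d "Uptime Kuma notification test" https://ntfy.sh/my-uptime-test-topic
```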
| Variable | Description | Default |
|---|---|---|
| UPTIME_KUMA_PORT | Server port | 3001 |
| UPTIME_KUMA_HOST | Bind address | :: |
| DATA_DIR | Data storage directory | ./data |
| TZ | Timezone | UTC |
| UMASK | File permission mask | 0000 |
| NODE_EXTRA_CA_CERTS | Custom CA certificates path | — |
| SSL_CERT | SSL certificate path | — |
| SSL_KEY | SSL private key path | — |
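These variables are read at startup, so with Docker they are passed as -e flags. A sketch with illustrative values (note the -p mapping must match the changed port):

```sh
# Run on port 8443 with a non-default timezone
docker run -d \
  --restart=always \
  -p 8443:8443 \
  -v uptime-kuma:/app/data \
  -e UPTIME_KUMA_PORT=8443 \
  -e TZ=Europe/Berlin \
  --name uptime-kuma \
  louislam/uptime-kuma:2
```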
| Feature | Description |
|---|---|
| Multiple Pages | Create separate status pages for different services |
| Custom Domains | Map status pages to custom domain names |
| Custom CSS | Style status pages with custom CSS |
| Incident Posts | Create incident reports visible on status page |
| Maintenance | Schedule maintenance windows |
| Monitor Groups | Organize monitors into groups on status page |
```sh
# Install the Python API client
pip install uptime-kuma-api
```
| Operation | Description |
|---|---|
| Add monitor | Programmatically create monitors |
| Edit monitor | Update monitor configuration |
| Delete monitor | Remove monitors |
| Pause/Resume | Toggle monitor state |
| Get status | Retrieve current monitor status |
| Add notification | Configure notification providers |
| Get uptime | Query uptime statistics |
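With the uptime-kuma-api client installed, the operations above map onto method calls. A minimal sketch, where the URL and credentials are placeholders; note the client targets the v1 socket API, so check compatibility with your server version:

```python
from uptime_kuma_api import UptimeKumaApi, MonitorType

# Placeholder URL and credentials for an existing instance
api = UptimeKumaApi("http://localhost:3001")
api.login("admin", "secret")

# Programmatically create an HTTP monitor
api.add_monitor(type=MonitorType.HTTP, name="Example", url="https://example.com")

# Retrieve current monitor configuration
for monitor in api.get_monitors():
    print(monitor["name"])

api.disconnect()
```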
```yaml
# compose.yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:2
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
      # Optional: Docker container monitoring
      # - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=UTC
```
| Feature | Description |
|---|---|
| 2FA | Two-factor authentication for login |
| Reverse Proxy | Deploy behind Nginx, Caddy, or Traefik |
| SSL/TLS | Built-in SSL or reverse proxy termination |
| Login Rate Limiting | Brute-force protection |
| API Keys | Token-based API access |
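API keys are supplied as the password in HTTP basic auth with an empty username, for example against the Prometheus metrics endpoint. The key below is a placeholder for one created under Settings → API Keys:

```sh
# Scrape metrics with an API key (empty username, key as password)
curl -u ":uk1_placeholder_key" http://localhost:3001/metrics
```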
```nginx
server {
    listen 443 ssl;
    server_name status.example.com;

    # Placeholder certificate paths (required for "listen 443 ssl")
    ssl_certificate     /etc/nginx/ssl/status.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/status.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        # WebSocket upgrade headers, required for the live dashboard
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
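Caddy, also listed above, needs far less configuration because it provisions TLS certificates and proxies WebSockets automatically. An equivalent Caddyfile sketch, with a placeholder hostname:

```
status.example.com {
    reverse_proxy 127.0.0.1:3001
}
```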
| Command | Description |
|---|---|
| docker cp uptime-kuma:/app/data ./backup | Back up the data directory |
| docker cp ./backup/. uptime-kuma:/app/data | Restore from backup |
| docker volume inspect uptime-kuma | Find volume mount point |
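The docker cp approach can be wrapped in a small script that keeps dated copies; paths here are illustrative:

```sh
#!/bin/sh
# Copy the data directory out of the running container into a dated folder
BACKUP_DIR="./backups/uptime-kuma-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
docker cp uptime-kuma:/app/data "$BACKUP_DIR"
echo "Backup written to $BACKUP_DIR"
```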
| Issue | Solution |
|---|---|
| Cannot access UI | Check port binding: docker port uptime-kuma |
| NFS volume errors | Use local directories, not network file systems |
| WebSocket errors | Configure reverse proxy for WebSocket upgrade |
| Container not starting | Check logs: docker logs uptime-kuma |
| Data persistence lost | Ensure -v uptime-kuma:/app/data volume is used |
| Docker monitoring fails | Mount docker.sock: -v /var/run/docker.sock:/var/run/docker.sock:ro |
| Platform | Status |
|---|---|
| Linux (x64, ARM) | Supported |
| Windows 10+ (x64) | Supported |
| macOS | Supported |
| Docker Desktop | Supported |
| Kubernetes | Via Docker image |
| FreeBSD/OpenBSD | Not supported |
| Replit/Heroku | Not supported |
- Use Docker volumes (not bind mounts to NFS) for data persistence
- Deploy behind a reverse proxy with SSL termination for production use
- Enable 2FA for the admin account immediately after first login
- Set monitoring intervals to 60 seconds or more for external services
- Use the Push monitor type for services behind firewalls
- Configure at least 2 notification channels for redundancy
- Create separate status pages for internal and external audiences
- Back up the data directory regularly — it contains all configuration
- Set the timezone via the TZ environment variable to match your operations center