
# Sumo Logic Cheat Sheet


Sumo Logic is a cloud-native machine data analytics platform that provides real-time insights into application, infrastructure, and security data. As a Software-as-a-Service (SaaS) solution, Sumo Logic lets organizations collect, search, and analyze massive volumes of structured and unstructured data from across their technology stack, delivering end-to-end visibility for operational intelligence, security monitoring, and business analytics.

## Platform Overview

### Core Architecture

Sumo Logic runs on a multi-tenant, cloud-native architecture designed for massive, real-time data processing. The platform consists of several key components that work together to deliver comprehensive data analytics capabilities.

The data collection layer uses lightweight collectors, deployed either as installed collectors running on individual systems or as hosted collectors that receive data through HTTP endpoints. These collectors support a wide variety of data sources, including log files, metrics, traces, and custom applications via APIs and webhooks.

The data processing engine performs real-time parsing, enrichment, and indexing of incoming data streams. Sumo Logic's proprietary search technology delivers sub-second query performance across petabytes of data, while machine learning algorithms automatically detect patterns, anomalies, and trends in the data.

### Key Features

```
# Core Platform Capabilities
- Real-time log analytics and search
- Metrics monitoring and alerting
- Security information and event management (SIEM)
- Application performance monitoring (APM)
- Infrastructure monitoring
- Compliance and audit reporting
- Machine learning and predictive analytics
- Custom dashboards and visualizations
```

## Data Collection and Sources

### Installed Collectors

```bash
# Download and install collector (Linux)
wget https://collectors.sumologic.com/rest/download/linux/64 -O SumoCollector.sh
sudo bash SumoCollector.sh -q -Vsumo.accessid=<ACCESS_ID> -Vsumo.accesskey=<ACCESS_KEY>

# Install as service
sudo /opt/SumoCollector/collector install
sudo /opt/SumoCollector/collector start

# Check collector status
sudo /opt/SumoCollector/collector status

# View collector logs
tail -f /opt/SumoCollector/logs/collector.log
```

### Hosted Collectors

```bash
# Create HTTP source endpoint
curl -X POST https://api.sumologic.com/api/v1/collectors/<COLLECTOR_ID>/sources \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "source": {
      "name": "HTTP Source",
      "category": "prod/web/access",
      "hostName": "web-server-01",
      "sourceType": "HTTP"
    }
  }'

# Send data to HTTP endpoint
curl -X POST https://endpoint.collection.sumologic.com/receiver/v1/http/<UNIQUE_ID> \
  -H "Content-Type: application/json" \
  -d '{"timestamp": "2023-01-01T12:00:00Z", "level": "INFO", "message": "Application started"}'
```
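Sending to a hosted HTTP source can also be scripted. Below is a minimal Python sketch that builds the same JSON event as the curl example and posts it with the standard library; the `<UNIQUE_ID>` placeholder must be replaced with your source's real URL token before the network call will work.

```python
import json
import urllib.request

# Placeholder endpoint: substitute the unique token from your HTTP source URL
ENDPOINT = "https://endpoint.collection.sumologic.com/receiver/v1/http/<UNIQUE_ID>"

def build_log_event(level, message, timestamp):
    """Serialize one log event in the same shape as the curl example."""
    return json.dumps({"timestamp": timestamp, "level": level, "message": message})

def send_log_event(payload, endpoint=ENDPOINT):
    """POST a JSON event to a hosted HTTP source (requires a real endpoint)."""
    req = urllib.request.Request(
        endpoint,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_log_event("INFO", "Application started", "2023-01-01T12:00:00Z")
# send_log_event(payload)  # uncomment once <UNIQUE_ID> is filled in
```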

### Log File Collection

```json
# Configure local file source
{
  "source": {
    "name": "Application Logs",
    "category": "prod/app/logs",
    "pathExpression": "/var/log/myapp/*.log",
    "sourceType": "LocalFile",
    "multilineProcessingEnabled": true,
    "useAutolineMatching": true
  }
}

# Configure remote file source
{
  "source": {
    "name": "Remote Syslog",
    "category": "prod/system/syslog",
    "protocol": "UDP",
    "port": 514,
    "sourceType": "Syslog"
  }
}
```

## Search Language and Queries

### Basic Search Syntax

```
# Simple keyword search
error

# Field-based search
_sourceCategory=prod/web/access

# Time range search
_sourceCategory=prod/web/access|where _messageTime > now() - 1h

# Boolean operators
error AND (database OR connection)
error NOT timeout
(status_code=500 OR status_code=404)

# Wildcard searches
error*
*connection*
user_id=12345*
```

### Advanced Search Operations

```
# Parse and extract fields
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * * \"*\" \"*\"" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size, referer, user_agent

# Regular expression parsing
_sourceCategory=prod/app/logs
|parse regex "(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?<level>\w+)\] (?<message>.*)"

# JSON parsing
_sourceCategory=prod/api/logs
|json field=_raw "user_id" as user_id
|json field=_raw "action" as action
|json field=_raw "timestamp" as event_time

# CSV parsing
_sourceCategory=prod/data/csv
|csv _raw extract 1 as user_id, 2 as action, 3 as timestamp
```
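Outside of Sumo Logic, the `parse regex` extraction can be prototyped with an ordinary regular expression before committing it to a query. The sketch below mirrors the named-group pattern above; the sample log line is made up for illustration.

```python
import re

# Named groups mirror the Sumo parse regex above
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] (?P<message>.*)"
)

line = "2023-01-01 12:00:00 [ERROR] database connection refused"
match = LOG_PATTERN.match(line)
fields = match.groupdict()  # {'timestamp': ..., 'level': ..., 'message': ...}
```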

### Aggregation and Statistics

```
# Count operations
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|count by status_code

# Sum and average
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|sum(size) as total_bytes, avg(size) as avg_bytes by src_ip

# Timeslice aggregation
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|timeslice 1m
|count by _timeslice, status_code

# Percentile calculations
_sourceCategory=prod/app/performance
|parse "response_time=*" as response_time
|pct(response_time, 50, 90, 95, 99) by service_name
```
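For intuition, here is roughly what `count by`, `sum`/`avg`, and a percentile reduce to, replayed over a tiny in-memory sample (the records are invented for illustration, and the percentile here uses a simple nearest-rank rule rather than Sumo's exact estimator):

```python
from collections import Counter

records = [
    {"status_code": "200", "size": 512},
    {"status_code": "200", "size": 2048},
    {"status_code": "500", "size": 128},
    {"status_code": "404", "size": 256},
]

# count by status_code
count_by_status = Counter(r["status_code"] for r in records)

# sum(size) and avg(size)
total_bytes = sum(r["size"] for r in records)
avg_bytes = total_bytes / len(records)

def pct(values, p):
    """Nearest-rank percentile: a rough stand-in for pct()."""
    ordered = sorted(values)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

p50 = pct([r["size"] for r in records], 50)
```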

### Data Transformation

```
# Field manipulation
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|if(status_code matches "2*", "success", "error") as result_type
|if(size > 1000000, "large", "normal") as file_size_category

# String operations
_sourceCategory=prod/app/logs
|parse "user=*" as user_id
|toUpperCase(user_id) as user_id_upper
|toLowerCase(user_id) as user_id_lower
|substring(user_id, 0, 3) as user_prefix

# Date and time operations
_sourceCategory=prod/app/logs
|parse "timestamp=*" as event_time
|parseDate(event_time, "yyyy-MM-dd HH:mm:ss") as parsed_time
|formatDate(parsed_time, "yyyy-MM-dd") as date_only
|formatDate(parsed_time, "HH:mm:ss") as time_only
```
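`parseDate`/`formatDate` take Java-style patterns (`yyyy-MM-dd`), which correspond closely to strptime/strftime directives. A quick local equivalent of the date operations above, with an illustrative timestamp:

```python
from datetime import datetime

event_time = "2023-01-01 12:34:56"

# parseDate(event_time, "yyyy-MM-dd HH:mm:ss") equivalent
parsed_time = datetime.strptime(event_time, "%Y-%m-%d %H:%M:%S")

# formatDate(parsed_time, "yyyy-MM-dd") and ("HH:mm:ss") equivalents
date_only = parsed_time.strftime("%Y-%m-%d")
time_only = parsed_time.strftime("%H:%M:%S")
```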

## Metrics and Monitoring

### Metrics Collection

```bash
# Host metrics collection
{
  "source": {
    "name": "Host Metrics",
    "category": "prod/infrastructure/metrics",
    "sourceType": "SystemStats",
    "interval": 60000,
    "hostName": "web-server-01"
  }
}

# Custom metrics via HTTP
curl -X POST https://endpoint.collection.sumologic.com/receiver/v1/http/<UNIQUE_ID> \
  -H "Content-Type: application/vnd.sumologic.carbon2" \
  -d "metric=cpu.usage.percent host=web-01 service=nginx 85.2 1640995200"

# Application metrics
curl -X POST https://endpoint.collection.sumologic.com/receiver/v1/http/<UNIQUE_ID> \
  -H "Content-Type: application/vnd.sumologic.prometheus" \
  -d "# HELP http_requests_total Total HTTP requests
# TYPE http_requests_total counter
http_requests_total{method=\"GET\",status=\"200\"} 1234
http_requests_total{method=\"POST\",status=\"201\"} 567"
```
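The Carbon 2.0 payload in the curl example is a space-separated line of `key=value` tags followed by a value and a Unix timestamp. A small helper to build such lines (the tag set is illustrative, and this sketch glosses over Carbon 2.0's distinction between intrinsic and meta tags):

```python
def carbon2_line(metric, value, timestamp, **tags):
    """Build a metric line matching the curl payload above:
    metric=<name> <tag>=<v> ... <value> <epoch-seconds>"""
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    return f"metric={metric} {tag_str} {value} {timestamp}"

line = carbon2_line("cpu.usage.percent", 85.2, 1640995200,
                    host="web-01", service="nginx")
```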

### Metrics Queries

```
# Basic metrics query
metric=cpu.usage.percent host=web-01|avg by host

# Time series aggregation
metric=memory.usage.percent
|avg by host
|timeslice 5m

# Multiple metrics correlation
(metric=cpu.usage.percent OR metric=memory.usage.percent) host=web-01
|avg by metric, host
|timeslice 1m

# Metrics with thresholds
metric=disk.usage.percent
|where %"disk.usage.percent" > 80
|max by host, mount_point
```

### Alerts and Notifications

```json
# Create scheduled search alert
{
  "searchName": "High Error Rate Alert",
  "searchDescription": "Alert when error rate exceeds 5%",
  "searchQuery": "_sourceCategory=prod/web/access|parse \"* * * [*] \\\"* * *\\\" * *\" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size|where status_code matches \"5*\"|count as error_count|if(error_count > 100, \"CRITICAL\", \"OK\") as alert_level|where alert_level = \"CRITICAL\"",
  "searchSchedule": {
    "cronExpression": "0 */5 * * * ? *",
    "displayableTimeRange": "-5m",
    "parseableTimeRange": {
      "type": "BeginBoundedTimeRange",
      "from": {
        "type": "RelativeTimeRangeBoundary",
        "relativeTime": "-5m"
      }
    }
  },
  "searchNotification": {
    "taskType": "EmailSearchNotificationSyncDefinition",
    "toList": ["admin@company.com"],
    "subject": "High Error Rate Detected",
    "includeQuery": true,
    "includeResultSet": true,
    "includeHistogram": true
  }
}
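The alert condition buried in `searchQuery` reduces to counting 5xx responses and comparing against a threshold. In plain code (the 100-error threshold comes from the query above; the status list is invented):

```python
def alert_level(statuses, threshold=100):
    """Mirror the query's count-5xx then CRITICAL/OK decision."""
    error_count = sum(1 for s in statuses if s.startswith("5"))
    return error_count, ("CRITICAL" if error_count > threshold else "OK")

count, level = alert_level(["200"] * 10 + ["500"] * 150)
```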

## Security and SIEM Capabilities

### Security Event Analysis

```
# Failed login detection
_sourceCategory=prod/security/auth
|parse "user=* action=* result=* src_ip=*" as user, action, result, src_ip
|where action = "login" and result = "failed"
|count by user, src_ip
|where _count > 5

# Suspicious network activity
_sourceCategory=prod/network/firewall
|parse "src=* dst=* port=* action=*" as src_ip, dst_ip, dst_port, action
|where action = "blocked"
|count by src_ip, dst_port
|sort by _count desc

# Malware detection
_sourceCategory=prod/security/antivirus
|parse "file=* threat=* action=*" as file_path, threat_name, action
|where action = "quarantined"
|count by threat_name
|sort by _count desc
```

### Threat Intelligence Integration

```
# IP reputation lookup
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|lookup type="ip" input="src_ip" output="reputation", "country", "organization"
|where reputation = "malicious"

# Domain reputation analysis
_sourceCategory=prod/dns/logs
|parse "query=* response=*" as domain, ip_address
|lookup type="domain" input="domain" output="category", "reputation"
|where category contains "malware" or reputation = "suspicious"

# File hash analysis
_sourceCategory=prod/security/endpoint
|parse "file_hash=* file_name=*" as file_hash, file_name
|lookup type="hash" input="file_hash" output="malware_family", "first_seen"
|where isNotNull(malware_family)
```

### Compliance and Auditing

```
# PCI DSS compliance monitoring
_sourceCategory=prod/payment/logs
|parse "card_number=* transaction_id=* amount=*" as card_number, transaction_id, amount
|where card_number matches "*****"
|timeslice 1h
|count by _timeslice

# GDPR data access logging
_sourceCategory=prod/app/audit
|parse "user=* action=* data_type=* record_id=*" as user, action, data_type, record_id
|where data_type = "personal_data" and action = "access"
|count by user, data_type

# SOX financial controls
_sourceCategory=prod/financial/system
|parse "user=* action=* amount=* approval_status=*" as user, action, amount, approval_status
|where amount > 10000 and approval_status != "approved"
|count by user, action
```

## Dashboards and Visualizations

### Dashboard Creation

```bash
# Create dashboard via API
curl -X POST https://api.sumologic.com/api/v1/dashboards \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Web Application Performance",
    "description": "Real-time monitoring of web application metrics",
    "folderId": "000000000000000A",
    "topologyLabelMap": {
      "data": {}
    },
    "domain": "app",
    "panels": [
      {
        "id": "panel1",
        "key": "panel1",
        "title": "Request Rate",
        "visualSettings": "{\"general\":{\"mode\":\"timeSeries\",\"type\":\"line\"}}",
        "keepVisualSettingsConsistentWithParent": true,
        "panelType": "SumoSearchPanel",
        "queries": [
          {
            "queryString": "_sourceCategory=prod/web/access|timeslice 1m|count by _timeslice",
            "queryType": "Logs",
            "queryKey": "A",
            "metricsQueryMode": null,
            "metricsQueryData": null,
            "tracesQueryData": null,
            "parseMode": "Manual",
            "timeSource": "Message"
          }
        ]
      }
    ]
  }'
```

### Chart Types and Configurations

```json
# Time series chart
{
  "visualSettings": {
    "general": {
      "mode": "timeSeries",
      "type": "line"
    },
    "series": {
      "A": {
        "color": "#1f77b4"
      }
    }
  }
}

# Bar chart
{
  "visualSettings": {
    "general": {
      "mode": "distribution",
      "type": "bar"
    }
  }
}

# Pie chart
{
  "visualSettings": {
    "general": {
      "mode": "distribution",
      "type": "pie"
    }
  }
}

# Single value display
{
  "visualSettings": {
    "general": {
      "mode": "singleValue",
      "type": "svp"
    }
  }
}
```

## API Integration and Automation

### REST API Authentication

```bash
# Generate access credentials
curl -X POST https://api.sumologic.com/api/v1/accessKeys \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "label": "API Integration Key",
    "corsHeaders": ["*"]
  }'

# Use access key for authentication
ACCESS_ID="your_access_id"
ACCESS_KEY="your_access_key"
CREDENTIALS=$(echo -n "$ACCESS_ID:$ACCESS_KEY"|base64)

# Test API connection
curl -X GET https://api.sumologic.com/api/v1/collectors \
  -H "Authorization: Basic $CREDENTIALS"
```
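The `<BASE64_CREDENTIALS>` used throughout is simply `accessId:accessKey` base64-encoded, as the shell snippet shows. The same construction in Python (the credential values are placeholders):

```python
import base64

# Placeholder credentials: substitute your real access ID and key
access_id = "your_access_id"
access_key = "your_access_key"

# Equivalent of: echo -n "$ACCESS_ID:$ACCESS_KEY" | base64
credentials = base64.b64encode(f"{access_id}:{access_key}".encode()).decode()
auth_header = {"Authorization": f"Basic {credentials}"}
```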

### Search Job Management

```bash
# Create search job
curl -X POST https://api.sumologic.com/api/v1/search/jobs \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "_sourceCategory=prod/web/access|count by status_code",
    "from": "2023-01-01T00:00:00Z",
    "to": "2023-01-01T23:59:59Z",
    "timeZone": "UTC"
  }'

# Check search job status
curl -X GET https://api.sumologic.com/api/v1/search/jobs/<JOB_ID> \
  -H "Authorization: Basic <BASE64_CREDENTIALS>"

# Get search results
curl -X GET https://api.sumologic.com/api/v1/search/jobs/<JOB_ID>/records \
  -H "Authorization: Basic <BASE64_CREDENTIALS>"

# Delete search job
curl -X DELETE https://api.sumologic.com/api/v1/search/jobs/<JOB_ID> \
  -H "Authorization: Basic <BASE64_CREDENTIALS>"
```
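Search jobs are asynchronous: create the job, poll its status until the state reaches `DONE GATHERING RESULTS`, fetch the records, then delete the job. A transport-agnostic polling sketch; `fetch_status` stands in for the GET status call above, so it can be stubbed for testing or wired to urllib with the auth header:

```python
import time

def wait_for_job(fetch_status, poll_seconds=1, max_polls=60):
    """Poll a search job until it finishes, is cancelled, or times out.

    fetch_status() should return the job-status JSON, e.g.
    {"state": "DONE GATHERING RESULTS", "recordCount": 42}.
    """
    for _ in range(max_polls):
        status = fetch_status()
        state = status.get("state", "")
        if state == "DONE GATHERING RESULTS":
            return status
        if state == "CANCELLED":
            raise RuntimeError("search job was cancelled")
        time.sleep(poll_seconds)
    raise TimeoutError("search job did not finish in time")

# Stubbed example: the job finishes on the third poll
responses = iter([{"state": "GATHERING RESULTS"}] * 2
                 + [{"state": "DONE GATHERING RESULTS", "recordCount": 3}])
final = wait_for_job(lambda: next(responses), poll_seconds=0)
```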

### Content Management

```bash
# Export content
curl -X POST https://api.sumologic.com/api/v2/content/<CONTENT_ID>/export \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "isAdminMode": false
  }'

# Import content
curl -X POST https://api.sumologic.com/api/v2/content/folders/<FOLDER_ID>/import \
  -H "Authorization: Basic <BASE64_CREDENTIALS>" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "<EXPORTED_CONTENT>",
    "overwrite": false
  }'

# List folder contents
curl -X GET https://api.sumologic.com/api/v2/content/folders/<FOLDER_ID> \
  -H "Authorization: Basic <BASE64_CREDENTIALS>"
```

## Performance Optimization

### Query Optimization

```
# Use specific source categories
_sourceCategory=prod/web/access  # Good
*  # Avoid - searches all data

# Limit time ranges
_sourceCategory=prod/web/access|where _messageTime > now() - 1h  # Good
_sourceCategory=prod/web/access  # Avoid - searches all time

# Use early filtering
_sourceCategory=prod/web/access
|where status_code = "500"  # Good - filter early
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size

# Optimize parsing
_sourceCategory=prod/web/access
|parse "* * * [*] \"* * *\" * *" as src_ip, ident, user, timestamp, method, url, protocol, status_code, size
|where status_code = "500"  # Less efficient - parse then filter
```

### Data Volume Management

```
# Monitor data volume
_index=sumologic_volume
|where _sourceCategory matches "*"
|sum(sizeInBytes) as totalBytes by _sourceCategory
|sort by totalBytes desc

# Set up data volume alerts
_index=sumologic_volume
|where _sourceCategory = "prod/web/access"
|sum(sizeInBytes) as dailyBytes
|where dailyBytes > 10000000000  # 10GB threshold
```

```json
# Optimize collection
{
  "source": {
    "name": "Optimized Log Source",
    "category": "prod/app/logs",
    "pathExpression": "/var/log/myapp/*.log",
    "sourceType": "LocalFile",
    "filters": [
      {
        "filterType": "Exclude",
        "name": "Exclude Debug Logs",
        "regexp": ".*DEBUG.*"
      }
    ]
  }
}
```

## Troubleshooting and Best Practices

### Common Issues

```
# Check collector connectivity
curl -v https://collectors.sumologic.com/receiver/v1/http/<UNIQUE_ID>

# Verify data ingestion
_sourceCategory=<YOUR_CATEGORY>
|count by _sourceHost, _sourceCategory
|sort by _count desc

# Debug parsing issues
_sourceCategory=prod/app/logs
|limit 10
|parse "timestamp=*" as event_time
|where isNull(event_time)

# Monitor search performance
_index=sumologic_search_usage
|where query_user = "your_username"
|avg(scan_bytes), avg(execution_time_ms) by query_user
```

### Security Best Practices

```json
# Implement role-based access control
{
  "roleName": "Security Analyst",
  "description": "Read-only access to security logs",
  "filterPredicate": "_sourceCategory=prod/security/*",
  "capabilities": [
    "viewCollectors",
    "searchAuditIndex"
  ]
}
```

```
# Set up audit logging
_index=sumologic_audit
|where event_name = "SearchQueryExecuted"
|count by user_name, source_ip
|sort by _count desc

# Monitor privileged access
_index=sumologic_audit
|where event_name matches "*Admin*"
|count by user_name, event_name
|sort by _count desc
```

### Performance Monitoring

```
# Monitor search performance
_index=sumologic_search_usage
|avg(scan_bytes) as avg_scan_bytes, avg(execution_time_ms) as avg_execution_time
|sort by avg_execution_time desc

# Track data ingestion rates
_index=sumologic_volume
|timeslice 1h
|sum(messageCount) as messages_per_hour by _timeslice
|sort by _timeslice desc

# Monitor collector health
_sourceCategory=sumo/collector/health
|parse "status=*" as collector_status
|count by collector_status, _sourceHost
|where collector_status != "healthy"
```

## Resources