ScoutSuite Cheat Sheet


Overview

ScoutSuite is an open-source multi-cloud security auditing tool for assessing the security posture of cloud environments. It gathers configuration data for manual inspection and highlights areas of risk. ScoutSuite supports Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Alibaba Cloud, and Oracle Cloud Infrastructure (OCI). The tool produces detailed HTML reports with findings organized by service and includes remediation guidance.

**Key Features:** multi-cloud support, comprehensive security checks, detailed HTML reports, rule customization, compliance mapping, API-driven scanning, and extensive configuration analysis across 100+ services.
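
A minimal first run looks like this; all of the options are covered in detail in the sections below:

```bash
# Quick start: install ScoutSuite and audit the default AWS profile
pip3 install scoutsuite
scout aws --report-dir ./scoutsuite-report --no-browser
# Then open the generated HTML file in the report directory with a browser
```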

Installation and Setup

Python Package Installation

```bash

# Install Python 3.7+ (required)
python3 --version

# Install ScoutSuite via pip
pip3 install scoutsuite

# Alternative: install with all dependencies
pip3 install scoutsuite[all]

# Verify the installation
scout --version

# Update ScoutSuite
pip3 install --upgrade scoutsuite

# Install a specific version
pip3 install scoutsuite==5.13.0

# Install the development version
pip3 install git+https://github.com/nccgroup/ScoutSuite.git
```

Docker Installation

```bash

# Pull the ScoutSuite Docker image
docker pull nccgroup/scoutsuite:latest

# Run ScoutSuite in Docker
docker run --rm -it \
  -v ~/.aws:/root/.aws \
  -v ~/.azure:/root/.azure \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -v $(pwd)/scoutsuite-results:/opt/scoutsuite/scoutsuite-results \
  nccgroup/scoutsuite:latest \
  aws

# Create a Docker alias for easier usage
echo 'alias scoutsuite="docker run --rm -it -v ~/.aws:/root/.aws -v ~/.azure:/root/.azure -v ~/.config/gcloud:/root/.config/gcloud -v $(pwd)/scoutsuite-results:/opt/scoutsuite/scoutsuite-results nccgroup/scoutsuite:latest"' >> ~/.bashrc
source ~/.bashrc

# Create a Docker Compose file
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  scoutsuite:
    image: nccgroup/scoutsuite:latest
    volumes:
      - ~/.aws:/root/.aws
      - ~/.azure:/root/.azure
      - ~/.config/gcloud:/root/.config/gcloud
      - ./scoutsuite-results:/opt/scoutsuite/scoutsuite-results
    environment:
      - AWS_PROFILE=default
      - AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
      - GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json
EOF

# Run with Docker Compose
docker-compose run scoutsuite aws
```
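
With the alias in place, the container can be invoked like the native CLI; results land in `./scoutsuite-results`:

```bash
# Run an AWS scan through the Docker alias defined above
scoutsuite aws --no-browser
```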

Installation from Source

```bash

# Clone the ScoutSuite repository
git clone https://github.com/nccgroup/ScoutSuite.git
cd ScoutSuite

# Install dependencies
pip3 install -r requirements.txt

# Install in development mode
pip3 install -e .

# Verify the installation
python3 -m ScoutSuite --version

# Create a symbolic link
sudo ln -sf $(pwd)/scout.py /usr/local/bin/scout

# Set up a virtual environment (recommended)
python3 -m venv scoutsuite-env
source scoutsuite-env/bin/activate
pip3 install -r requirements.txt
pip3 install -e .
```

Cloud Provider Configuration

AWS Configuration

```bash

# Install the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Configure AWS credentials
aws configure
# Enter: Access Key ID, Secret Access Key, Region, Output format

# Alternative: use environment variables
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Alternative: use IAM roles (recommended for EC2)
# Attach an IAM role with the required permissions to the EC2 instance

# Test the AWS configuration
aws sts get-caller-identity

# Configure multiple profiles
aws configure --profile production
aws configure --profile development
aws configure --profile staging

# List configured profiles
aws configure list-profiles

# Set the default profile
export AWS_PROFILE=production

# Required AWS permissions for ScoutSuite
cat > scoutsuite-aws-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "access-analyzer:List*",
        "account:Get*",
        "acm:Describe*",
        "acm:List*",
        "apigateway:GET",
        "application-autoscaling:Describe*",
        "autoscaling:Describe*",
        "backup:List*",
        "cloudformation:Describe*",
        "cloudformation:Get*",
        "cloudformation:List*",
        "cloudfront:Get*",
        "cloudfront:List*",
        "cloudtrail:Describe*",
        "cloudtrail:Get*",
        "cloudtrail:List*",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "cloudwatch:List*",
        "config:Describe*",
        "config:Get*",
        "config:List*",
        "directconnect:Describe*",
        "dms:Describe*",
        "dms:List*",
        "ds:Describe*",
        "ds:Get*",
        "ds:List*",
        "dynamodb:Describe*",
        "dynamodb:List*",
        "ec2:Describe*",
        "ec2:Get*",
        "ecr:Describe*",
        "ecr:Get*",
        "ecr:List*",
        "ecs:Describe*",
        "ecs:List*",
        "efs:Describe*",
        "eks:Describe*",
        "eks:List*",
        "elasticache:Describe*",
        "elasticbeanstalk:Describe*",
        "elasticfilesystem:Describe*",
        "elasticloadbalancing:Describe*",
        "elasticmapreduce:Describe*",
        "elasticmapreduce:List*",
        "es:Describe*",
        "es:List*",
        "events:Describe*",
        "events:List*",
        "firehose:Describe*",
        "firehose:List*",
        "guardduty:Get*",
        "guardduty:List*",
        "iam:Generate*",
        "iam:Get*",
        "iam:List*",
        "iam:Simulate*",
        "inspector:Describe*",
        "inspector:Get*",
        "inspector:List*",
        "kinesis:Describe*",
        "kinesis:List*",
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "lambda:Get*",
        "lambda:List*",
        "logs:Describe*",
        "logs:Get*",
        "logs:List*",
        "organizations:Describe*",
        "organizations:List*",
        "rds:Describe*",
        "rds:List*",
        "redshift:Describe*",
        "route53:Get*",
        "route53:List*",
        "route53domains:Get*",
        "route53domains:List*",
        "s3:Get*",
        "s3:List*",
        "secretsmanager:Describe*",
        "secretsmanager:Get*",
        "secretsmanager:List*",
        "securityhub:Describe*",
        "securityhub:Get*",
        "securityhub:List*",
        "ses:Get*",
        "ses:List*",
        "shield:Describe*",
        "shield:Get*",
        "shield:List*",
        "sns:Get*",
        "sns:List*",
        "sqs:Get*",
        "sqs:List*",
        "ssm:Describe*",
        "ssm:Get*",
        "ssm:List*",
        "support:Describe*",
        "trustedadvisor:Describe*",
        "waf:Get*",
        "waf:List*",
        "wafv2:Get*",
        "wafv2:List*"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the IAM policy
aws iam create-policy \
  --policy-name ScoutSuitePolicy \
  --policy-document file://scoutsuite-aws-policy.json

# Attach the policy to a user
aws iam attach-user-policy \
  --user-name your-username \
  --policy-arn arn:aws:iam::ACCOUNT-ID:policy/ScoutSuitePolicy
```
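
If scans run from an EC2 instance with an instance role (as recommended above), attach the policy to that role instead of a user; a sketch with a placeholder role name:

```bash
# Attach the ScoutSuite policy to an instance role (role name is illustrative)
aws iam attach-role-policy \
  --role-name scoutsuite-scan-role \
  --policy-arn arn:aws:iam::ACCOUNT-ID:policy/ScoutSuitePolicy
```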

Azure Configuration

```bash

# Install the Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Log in to Azure
az login

# List subscriptions
az account list --output table

# Set the default subscription
az account set --subscription "subscription-id"

# Create a service principal for ScoutSuite
az ad sp create-for-rbac \
  --name "ScoutSuite" \
  --role "Reader" \
  --scopes "/subscriptions/subscription-id"

# Alternative: use environment variables
export AZURE_CLIENT_ID="your-client-id"
export AZURE_CLIENT_SECRET="your-client-secret"
export AZURE_TENANT_ID="your-tenant-id"
export AZURE_SUBSCRIPTION_ID="your-subscription-id"

# Test the Azure configuration
az account show

# Required Azure permissions for ScoutSuite:
# the Reader role is sufficient for most checks;
# additional permissions may be needed for specific services

# Create a custom role for ScoutSuite (optional)
cat > scoutsuite-azure-role.json << 'EOF'
{
  "Name": "ScoutSuite Reader",
  "Description": "Custom role for ScoutSuite security scanning",
  "Actions": [
    "*/read",
    "Microsoft.Authorization/*/read",
    "Microsoft.Security/*/read",
    "Microsoft.PolicyInsights/*/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/subscription-id"
  ]
}
EOF

# Create the custom role
az role definition create --role-definition scoutsuite-azure-role.json

# Assign the custom role to the service principal
az role assignment create \
  --assignee "service-principal-id" \
  --role "ScoutSuite Reader" \
  --scope "/subscriptions/subscription-id"
```
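
To confirm the service principal works end to end, a narrow scan can be run with the credentials created above (flag names follow this document's examples; check `scout azure --help` on your version):

```bash
# Smoke test: scan a single Azure service with the service principal
scout azure \
  --client-id "client-id" \
  --client-secret "client-secret" \
  --tenant-id "tenant-id" \
  --services keyvault \
  --no-browser
```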

Google Cloud Platform Configuration

```bash

# Install the Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# Initialize gcloud
gcloud init

# Log in to GCP
gcloud auth login

# Set the default project
gcloud config set project your-project-id

# List projects
gcloud projects list

# Create a service account for ScoutSuite
gcloud iam service-accounts create scoutsuite \
  --display-name="ScoutSuite Service Account" \
  --description="Service account for ScoutSuite security scanning"

# Grant the required roles to the service account
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:scoutsuite@your-project-id.iam.gserviceaccount.com" \
  --role="roles/viewer"

gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:scoutsuite@your-project-id.iam.gserviceaccount.com" \
  --role="roles/iam.securityReviewer"

gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:scoutsuite@your-project-id.iam.gserviceaccount.com" \
  --role="roles/cloudsql.viewer"

# Create and download a service account key
gcloud iam service-accounts keys create scoutsuite-key.json \
  --iam-account=scoutsuite@your-project-id.iam.gserviceaccount.com

# Set the environment variable for authentication
export GOOGLE_APPLICATION_CREDENTIALS="$(pwd)/scoutsuite-key.json"

# Alternative: use application default credentials
gcloud auth application-default login

# Test the GCP configuration
gcloud auth list
gcloud config list

# Enable the required APIs
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable container.googleapis.com
```
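
A quick way to confirm the service account works is a narrow scan using the key created above:

```bash
# Smoke test: scan a single GCP service with the service account key
scout gcp \
  --project-id "your-project-id" \
  --service-account-key-file scoutsuite-key.json \
  --services iam \
  --no-browser
```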

Basic Usage and Scanning

AWS Scanning

```bash

# Basic AWS scan
scout aws

# Scan with a specific profile
scout aws --profile production

# Scan specific regions
scout aws --regions us-east-1
scout aws --regions us-east-1,us-west-2,eu-west-1

# Scan all regions
scout aws --all-regions

# Scan with a custom report name
scout aws --report-name "aws-security-audit-2024"

# Scan with a custom output directory
scout aws --report-dir ./security-reports

# Scan specific services
scout aws --services s3
scout aws --services s3,ec2,iam

# List available services
scout aws --list-services

# Scan with a custom ruleset
scout aws --ruleset-name custom-rules

# Scan with excluded rules
scout aws --skip-rules s3-bucket-world-readable

# Scan with custom filters
scout aws --filters filters.json

# Scan with debug output
scout aws --debug

# Scan in quiet mode
scout aws --quiet

# Scan without opening a browser (headless)
scout aws --no-browser

# Scan with a custom thread count
scout aws --max-workers 10

# Scan with a timeout
scout aws --timeout 3600
```
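
These options compose. As a sketch, a headless audit of a production profile scoped to a few regions and services might look like this (all values illustrative):

```bash
# Headless, scoped AWS audit combining the flags shown above
scout aws \
  --profile production \
  --regions us-east-1,eu-west-1 \
  --services s3,ec2,iam \
  --report-name "prod-audit-$(date +%Y%m%d)" \
  --report-dir ./security-reports \
  --no-browser \
  --quiet
```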

Azure Scanning

```bash

# Basic Azure scan
scout azure

# Scan a specific subscription
scout azure --subscription-id "subscription-id"

# Scan with a service principal
scout azure \
  --client-id "client-id" \
  --client-secret "client-secret" \
  --tenant-id "tenant-id"

# Scan with a custom report name
scout azure --report-name "azure-security-audit-2024"

# Scan specific services
scout azure --services keyvault
scout azure --services keyvault,storage,network

# List available services
scout azure --list-services

# Scan with a custom ruleset
scout azure --ruleset-name custom-azure-rules

# Scan with excluded rules
scout azure --skip-rules storage-account-https-only

# Scan all subscriptions (if permissions allow)
scout azure --all-subscriptions

# Scan with debug output
scout azure --debug

# Scan with a custom output directory
scout azure --report-dir ./azure-reports

# Scan without opening a browser
scout azure --no-browser
```

Google Cloud Platform Scanning

```bash

# Basic GCP scan
scout gcp

# Scan a specific project
scout gcp --project-id "your-project-id"

# Scan with a service account key
scout gcp --service-account-key-file scoutsuite-key.json

# Scan with a custom report name
scout gcp --report-name "gcp-security-audit-2024"

# Scan specific services
scout gcp --services compute
scout gcp --services compute,storage,iam

# List available services
scout gcp --list-services

# Scan with a custom ruleset
scout gcp --ruleset-name custom-gcp-rules

# Scan with excluded rules
scout gcp --skip-rules compute-instance-public-ip

# Scan all projects (if permissions allow)
scout gcp --all-projects

# Scan with debug output
scout gcp --debug

# Scan with a custom output directory
scout gcp --report-dir ./gcp-reports

# Scan without opening a browser
scout gcp --no-browser

# Scan a specific folder or organization
scout gcp --folder-id "folder-id"
scout gcp --organization-id "organization-id"
```

Multi-Cloud Scanning

```bash

# Scan multiple cloud providers sequentially
scout aws --report-name "multi-cloud-aws"
scout azure --report-name "multi-cloud-azure"
scout gcp --report-name "multi-cloud-gcp"

# Create a multi-cloud scanning script
cat > multi_cloud_scan.sh << 'EOF'
#!/bin/bash
# Multi-cloud ScoutSuite scanning

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
REPORT_DIR="multi_cloud_scan_$TIMESTAMP"

mkdir -p "$REPORT_DIR"

echo "Starting multi-cloud security scan..."

# AWS scan
echo "Scanning AWS..."
scout aws \
    --report-name "aws_scan_$TIMESTAMP" \
    --report-dir "$REPORT_DIR" \
    --no-browser \
    --quiet

# Azure scan
echo "Scanning Azure..."
scout azure \
    --report-name "azure_scan_$TIMESTAMP" \
    --report-dir "$REPORT_DIR" \
    --no-browser \
    --quiet

# GCP scan
echo "Scanning GCP..."
scout gcp \
    --report-name "gcp_scan_$TIMESTAMP" \
    --report-dir "$REPORT_DIR" \
    --no-browser \
    --quiet

echo "Multi-cloud scan completed. Reports in: $REPORT_DIR"

# Generate a summary; the heredoc is deliberately unquoted so that
# $TIMESTAMP and $REPORT_DIR expand inside the embedded Python
python3 << PYTHON
import json
import glob
import os

def generate_multi_cloud_summary(report_dir):
    summary = {
        "timestamp": "$TIMESTAMP",
        "providers": {},
        "total_findings": 0,
        "critical_findings": 0,
        "high_findings": 0
    }

    # Process each provider's results
    for provider in ["aws", "azure", "gcp"]:
        provider_file = f"{report_dir}/{provider}_scan_$TIMESTAMP.js"

        if os.path.exists(provider_file):
            try:
                # ScoutSuite generates JavaScript files; extract the JSON payload
                with open(provider_file, 'r') as f:
                    content = f.read()

                start_marker = "scoutsuite_results = "
                end_marker = ";"

                start_idx = content.find(start_marker)
                if start_idx != -1:
                    start_idx += len(start_marker)
                    end_idx = content.rfind(end_marker)

                    if end_idx != -1:
                        json_str = content[start_idx:end_idx]
                        data = json.loads(json_str)

                        # Count findings
                        findings = 0
                        critical = 0
                        high = 0

                        # Navigate the ScoutSuite data structure
                        if 'services' in data:
                            for service_name, service_data in data['services'].items():
                                if 'findings' in service_data:
                                    for finding_key, finding_data in service_data['findings'].items():
                                        if 'items' in finding_data:
                                            finding_count = len(finding_data['items'])
                                            findings += finding_count

                                            # Categorize by severity
                                            level = finding_data.get('level', 'unknown')
                                            if level == 'danger':
                                                critical += finding_count
                                            elif level == 'warning':
                                                high += finding_count

                        summary["providers"][provider] = {
                            "findings": findings,
                            "critical": critical,
                            "high": high
                        }

                        summary["total_findings"] += findings
                        summary["critical_findings"] += critical
                        summary["high_findings"] += high

            except Exception as e:
                print(f"Error processing {provider} results: {e}")
                summary["providers"][provider] = {"error": str(e)}

    # Save the summary
    with open(f"{report_dir}/multi_cloud_summary.json", 'w') as f:
        json.dump(summary, f, indent=2)

    print(f"Multi-cloud summary generated: {report_dir}/multi_cloud_summary.json")
    print(f"Total findings across all providers: {summary['total_findings']}")
    print(f"Critical findings: {summary['critical_findings']}")
    print(f"High findings: {summary['high_findings']}")

generate_multi_cloud_summary("$REPORT_DIR")
PYTHON

EOF

chmod +x multi_cloud_scan.sh
./multi_cloud_scan.sh
```
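
For recurring assessments, the script can be scheduled with cron; a minimal sketch (the installation path and schedule are illustrative):

```bash
# Run the multi-cloud scan every Monday at 02:00 (illustrative path/schedule)
(crontab -l 2>/dev/null; echo '0 2 * * 1 /opt/scans/multi_cloud_scan.sh >> /var/log/multi_cloud_scan.log 2>&1') | crontab -
```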

Advanced Configuration and Customization

Custom Rules and Rulesets

```python

#!/usr/bin/env python3
# Custom ScoutSuite rules
"""
Custom rule example: check for S3 buckets with public read access
"""

import json

from ScoutSuite.core.console import print_exception
from ScoutSuite.providers.aws.facade.base import AWSFacade
from ScoutSuite.providers.aws.resources.base import AWSResources


class S3BucketPublicReadRule:
    """Custom rule to check for S3 buckets with public read access"""

    def __init__(self):
        self.rule_name = "s3-bucket-public-read-custom"
        self.rule_description = "S3 bucket allows public read access"
        self.rule_level = "danger"  # danger, warning, info
        self.rule_path = "s3.buckets.id.bucket_policy"

    def check_bucket_policy(self, bucket_policy):
        """Check whether a bucket policy allows public read access"""
        if not bucket_policy:
            return False

        try:
            policy = json.loads(bucket_policy)

            for statement in policy.get('Statement', []):
                # Check for public read permissions
                if (statement.get('Effect') == 'Allow' and
                        statement.get('Principal') == '*' and
                        any(action in ['s3:GetObject', 's3:GetObjectVersion', 's3:*']
                            for action in statement.get('Action', []))):
                    return True

        except (json.JSONDecodeError, TypeError):
            pass

        return False

    def run(self, aws_config):
        """Run the custom rule against an AWS configuration"""
        findings = []

        for region_name, region_data in aws_config['services']['s3']['regions'].items():
            for bucket_name, bucket_data in region_data.get('buckets', {}).items():
                bucket_policy = bucket_data.get('bucket_policy')

                if self.check_bucket_policy(bucket_policy):
                    findings.append({
                        'bucket_name': bucket_name,
                        'region': region_name,
                        'description': f"Bucket {bucket_name} allows public read access",
                        'risk': "High",
                        'remediation': "Review and restrict the bucket policy to prevent public access"
                    })

        return findings


# Custom ruleset configuration
custom_ruleset = {
    "about": "Custom ScoutSuite ruleset for enhanced security checks",
    "rules": {
        "s3-bucket-public-read-custom": {
            "description": "S3 bucket allows public read access",
            "path": "s3.buckets.id.bucket_policy",
            "conditions": [
                "and",
                ["s3.buckets.id.bucket_policy", "containsString", "\"Principal\": \"*\""],
                ["s3.buckets.id.bucket_policy", "containsString", "s3:GetObject"]
            ],
            "level": "danger"
        },
        "ec2-instance-public-ip-custom": {
            "description": "EC2 instance has a public IP address",
            "path": "ec2.regions.id.vpcs.id.instances.id.public_ip_address",
            "conditions": ["ec2.regions.id.vpcs.id.instances.id.public_ip_address", "notNull", ""],
            "level": "warning"
        },
        "iam-user-no-mfa-custom": {
            "description": "IAM user does not have MFA enabled",
            "path": "iam.users.id.mfa_devices",
            "conditions": ["iam.users.id.mfa_devices", "empty", ""],
            "level": "warning"
        },
        "rds-instance-public-custom": {
            "description": "RDS instance is publicly accessible",
            "path": "rds.regions.id.vpcs.id.instances.id.publicly_accessible",
            "conditions": ["rds.regions.id.vpcs.id.instances.id.publicly_accessible", "true", ""],
            "level": "danger"
        },
        "cloudtrail-not-encrypted-custom": {
            "description": "CloudTrail is not encrypted with KMS",
            "path": "cloudtrail.regions.id.trails.id.kms_key_id",
            "conditions": ["cloudtrail.regions.id.trails.id.kms_key_id", "null", ""],
            "level": "warning"
        }
    }
}

# Save the custom ruleset
with open('custom_ruleset.json', 'w') as f:
    json.dump(custom_ruleset, f, indent=2)

print("Custom ruleset created: custom_ruleset.json")
```

Advanced Filtering and Configuration

```json
{
  "filters": {
    "description": "Custom filters for ScoutSuite scanning",
    "filters": [
      {
        "description": "Exclude test environments",
        "path": "*.*.*.*.tags.Environment",
        "operator": "notEqual",
        "value": "test"
      },
      {
        "description": "Include only production resources",
        "path": "*.*.*.*.tags.Environment",
        "operator": "equal",
        "value": "production"
      },
      {
        "description": "Exclude specific S3 buckets",
        "path": "s3.buckets.*.name",
        "operator": "notIn",
        "value": ["test-bucket", "dev-bucket", "temp-bucket"]
      },
      {
        "description": "Include only specific regions",
        "path": "*.regions.*",
        "operator": "in",
        "value": ["us-east-1", "us-west-2", "eu-west-1"]
      },
      {
        "description": "Exclude terminated EC2 instances",
        "path": "ec2.regions.*.vpcs.*.instances.*.state",
        "operator": "notEqual",
        "value": "terminated"
      }
    ]
  }
}
```
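
Saved as `filters.json`, the filter file is applied with the `--filters` option shown in the scanning sections above:

```bash
# Apply the custom filter file defined above to an AWS scan
scout aws --filters filters.json --no-browser
```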

Custom Report Templates

```html

<!DOCTYPE html>
<html>
<head>
  <title>Custom ScoutSuite Security Report</title>
</head>
<body>
  <h1>🔒 Custom Security Assessment Report</h1>
  <p>Generated by ScoutSuite on: {{ timestamp }}</p>
  <p>Cloud Provider: {{ provider }}</p>
  <p>Account/Subscription: {{ account_id }}</p>

  <h2>Executive Summary</h2>
  <table>
    <tr><th>Metric</th><th>Count</th></tr>
    <tr><td>Total Findings</td><td>{{ total_findings }}</td></tr>
    <tr><td>Critical (Danger)</td><td>{{ danger_count }}</td></tr>
    <tr><td>High (Warning)</td><td>{{ warning_count }}</td></tr>
    <tr><td>Info</td><td>{{ info_count }}</td></tr>
  </table>

  {% for service_name, service_data in services.items() %}
  <h2>{{ service_name.upper() }} Service</h2>

    {% for finding_name, finding_data in service_data.findings.items() %}
    <h3>{{ finding_data.description }}</h3>
    <p>Severity: {{ finding_data.level.title() }}</p>
    <p>Affected Resources: {{ finding_data.items|length }}</p>

    {% if finding_data.items %}
    <details>
      <summary>Show affected resources ({{ finding_data.items|length }})</summary>
      <ul>
        {% for item in finding_data.items[:10] %}
        <li>{{ item }}</li>
        {% endfor %}
        {% if finding_data.items|length > 10 %}
        <li>... and {{ finding_data.items|length - 10 }} more</li>
        {% endif %}
      </ul>
    </details>
    {% endif %}
    {% endfor %}
  {% endfor %}

  <h2>Recommendations</h2>
  <ol>
    <li>Address all critical (danger) findings immediately</li>
    <li>Review and remediate high (warning) findings</li>
    <li>Implement security monitoring and alerting</li>
    <li>Run regular security assessments with ScoutSuite</li>
    <li>Follow cloud provider security best practices</li>
  </ol>
</body>
</html>
```
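
The placeholders use Jinja2 syntax, so the template can be rendered outside of ScoutSuite as well. A minimal rendering sketch, assuming the template above is saved as `report_template.html` and that you extract the counts and the `services` dictionary from the ScoutSuite results yourself (all input values below are illustrative):

```python
# Minimal Jinja2 rendering sketch for the custom template above.
# All values passed to render() are illustrative placeholders.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('.'))
template = env.get_template('report_template.html')

html = template.render(
    timestamp='2024-06-01 12:00:00',
    provider='aws',
    account_id='123456789012',
    total_findings=42,
    danger_count=3,
    warning_count=14,
    info_count=25,
    services={},  # populate from the ScoutSuite 'services' data structure
)

with open('custom_report.html', 'w') as f:
    f.write(html)
```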

Report Analysis and Automation

Report Processing and Analysis

```python

#!/usr/bin/env python3
# ScoutSuite report analysis and processing

import json
import re
import os
import glob
from datetime import datetime

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns


class ScoutSuiteAnalyzer:
    """Advanced analysis for ScoutSuite reports"""

    def __init__(self):
        self.timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')

    def extract_data_from_js(self, js_file_path):
        """Extract the JSON data from a ScoutSuite JavaScript report file"""
        try:
            with open(js_file_path, 'r') as f:
                content = f.read()

            # Extract the JSON data from the JavaScript wrapper
            start_marker = "scoutsuite_results = "
            end_marker = ";"

            start_idx = content.find(start_marker)
            if start_idx != -1:
                start_idx += len(start_marker)
                end_idx = content.rfind(end_marker)

                if end_idx != -1:
                    json_str = content[start_idx:end_idx]
                    return json.loads(json_str)

        except Exception as e:
            print(f"Error extracting data from {js_file_path}: {e}")

        return None

    def analyze_findings(self, scoutsuite_data):
        """Analyze ScoutSuite findings and generate statistics"""
        analysis = {
            'total_findings': 0,
            'severity_breakdown': {'danger': 0, 'warning': 0, 'info': 0},
            'service_breakdown': {},
            'top_findings': [],
            'compliance_issues': [],
            'remediation_priorities': []
        }

        if 'services' not in scoutsuite_data:
            return analysis

        for service_name, service_data in scoutsuite_data['services'].items():
            if 'findings' not in service_data:
                continue

            service_findings = 0
            service_critical = 0

            for finding_key, finding_data in service_data['findings'].items():
                if 'items' not in finding_data:
                    continue

                finding_count = len(finding_data['items'])
                service_findings += finding_count
                analysis['total_findings'] += finding_count

                # Count by severity
                level = finding_data.get('level', 'info')
                if level in analysis['severity_breakdown']:
                    analysis['severity_breakdown'][level] += finding_count

                if level == 'danger':
                    service_critical += finding_count

                # Collect top findings
                if finding_count > 0:
                    analysis['top_findings'].append({
                        'service': service_name,
                        'finding': finding_data.get('description', finding_key),
                        'count': finding_count,
                        'severity': level,
                        'items': finding_data['items'][:5]  # Sample items
                    })

            # Service breakdown
            analysis['service_breakdown'][service_name] = {
                'total_findings': service_findings,
                'critical_findings': service_critical
            }

        # Sort top findings by severity and count
        severity_order = {'danger': 3, 'warning': 2, 'info': 1}
        analysis['top_findings'].sort(
            key=lambda x: (severity_order.get(x['severity'], 0), x['count']),
            reverse=True
        )

        # Generate remediation priorities
        analysis['remediation_priorities'] = self._generate_remediation_priorities(analysis)

        return analysis

    def _generate_remediation_priorities(self, analysis):
        """Generate remediation priorities based on the findings"""
        priorities = []

        # Critical findings first
        critical_count = analysis['severity_breakdown']['danger']
        if critical_count > 0:
            priorities.append({
                'priority': 'IMMEDIATE',
                'title': 'Address Critical Security Issues',
                'description': f"Immediately remediate {critical_count} critical security findings",
                'timeline': '24-48 hours'
            })

        # High-impact services
        for service, stats in analysis['service_breakdown'].items():
            if stats['critical_findings'] > 5:
                priorities.append({
                    'priority': 'HIGH',
                    'title': f'Secure {service.upper()} Service',
                    'description': f"Address {stats['critical_findings']} critical issues in {service}",
                    'timeline': '1 week'
                })

        # Warning-level issues
        warning_count = analysis['severity_breakdown']['warning']
        if warning_count > 10:
            priorities.append({
                'priority': 'MEDIUM',
                'title': 'Reduce Warning-Level Findings',
                'description': f"Address {warning_count} warning-level security findings",
                'timeline': '2-4 weeks'
            })

        return priorities[:5]  # Top 5 priorities

    def generate_executive_report(self, analysis, provider, account_id):
        """Generate an executive summary report"""
        total_findings = analysis['total_findings']
        critical_findings = analysis['severity_breakdown']['danger']

        # Calculate the risk score
        risk_score = self._calculate_risk_score(analysis)

        executive_summary = {
            'provider': provider,
            'account_id': account_id,
            'scan_date': self.timestamp,
            'total_findings': total_findings,
            'critical_findings': critical_findings,
            'risk_score': risk_score,
            'risk_level': self._get_risk_level(risk_score),
            'top_services_at_risk': self._get_top_services_at_risk(analysis['service_breakdown']),
            'remediation_priorities': analysis['remediation_priorities'],
            'compliance_status': self._assess_compliance_status(analysis)
        }

        return executive_summary

    def _calculate_risk_score(self, analysis):
        """Calculate an overall risk score (0-100)"""
        total = analysis['total_findings']
        if total == 0:
            return 0

        # Weight by severity
        critical_weight = 10
        warning_weight = 3
        info_weight = 1

        weighted_score = (
            analysis['severity_breakdown']['danger'] * critical_weight +
            analysis['severity_breakdown']['warning'] * warning_weight +
            analysis['severity_breakdown']['info'] * info_weight
        )

        # Normalize to a 0-100 scale
        max_possible_score = total * critical_weight
        risk_score = (weighted_score / max_possible_score * 100) if max_possible_score > 0 else 0

        return min(100, round(risk_score, 1))

    def _get_risk_level(self, risk_score):
        """Determine the risk level based on the score"""
        if risk_score >= 80:
            return "CRITICAL"
        elif risk_score >= 60:
            return "HIGH"
        elif risk_score >= 40:
            return "MEDIUM"
        elif risk_score >= 20:
            return "LOW"
        else:
            return "MINIMAL"

    def _get_top_services_at_risk(self, service_breakdown):
        """Get the services with the highest risk"""
        services_at_risk = []
        for service, stats in service_breakdown.items():
            if stats['total_findings'] > 0:
                risk_ratio = stats['critical_findings'] / stats['total_findings']
                services_at_risk.append({
                    'service': service,
                    'total_findings': stats['total_findings'],
                    'critical_findings': stats['critical_findings'],
                    'risk_ratio': round(risk_ratio, 2)
                })

        return sorted(services_at_risk, key=lambda x: x['risk_ratio'], reverse=True)[:5]

    def _assess_compliance_status(self, analysis):
        """Assess the compliance status based on the findings"""
        critical_count = analysis['severity_breakdown']['danger']
        warning_count = analysis['severity_breakdown']['warning']

        if critical_count == 0 and warning_count <= 5:
            return "COMPLIANT"
        elif critical_count <= 2 and warning_count <= 15:
            return "MOSTLY_COMPLIANT"
        elif critical_count <= 5:
            return "NON_COMPLIANT"
        else:
            return "SEVERELY_NON_COMPLIANT"

    def generate_charts(self, analysis, output_dir):
        """Generate charts and visualizations"""
        os.makedirs(output_dir, exist_ok=True)

        # Set the plot style
        plt.style.use('seaborn-v0_8')

        # Create a figure with subplots
        fig, axes = plt.subplots(2, 3, figsize=(18, 12))

        # 1. Severity breakdown pie chart
        severity_data = {k: v for k, v in analysis['severity_breakdown'].items() if v > 0}
        if severity_data:
            colors = ['#e74c3c', '#f39c12', '#3498db']
            axes[0, 0].pie(severity_data.values(), labels=severity_data.keys(),
                           autopct='%1.1f%%', colors=colors)
            axes[0, 0].set_title('Findings by Severity')

        # 2. Service breakdown bar chart
        services = list(analysis['service_breakdown'].keys())[:10]
        service_counts = [analysis['service_breakdown'][s]['total_findings'] for s in services]
        axes[0, 1].barh(services, service_counts, color='#e74c3c')
        axes[0, 1].set_title('Findings by Service')
        axes[0, 1].set_xlabel('Number of Findings')

        # 3. Critical findings by service
        critical_counts = [analysis['service_breakdown'][s]['critical_findings'] for s in services]
        axes[0, 2].bar(services, critical_counts, color='#c0392b')
        axes[0, 2].set_title('Critical Findings by Service')
        axes[0, 2].set_ylabel('Critical Findings')
        axes[0, 2].tick_params(axis='x', rotation=45)

        # 4. Top findings
        top_findings = analysis['top_findings'][:10]
        finding_names = [f['finding'][:30] + '...' if len(f['finding']) > 30 else f['finding']
                         for f in top_findings]
        finding_counts = [f['count'] for f in top_findings]
        axes[1, 0].barh(finding_names, finding_counts, color='#f39c12')
        axes[1, 0].set_title('Top 10 Findings')
        axes[1, 0].set_xlabel('Count')

        # 5. Risk distribution
        risk_levels = ['Critical', 'Warning', 'Info']
        risk_counts = [analysis['severity_breakdown']['danger'],
                       analysis['severity_breakdown']['warning'],
                       analysis['severity_breakdown']['info']]
        axes[1, 1].bar(risk_levels, risk_counts, color=['#e74c3c', '#f39c12', '#3498db'])
        axes[1, 1].set_title('Risk Distribution')
        axes[1, 1].set_ylabel('Number of Findings')

        # 6. Compliance trend (placeholder)
        months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun']
        compliance_scores = [65, 70, 75, 80, 85, 90]  # Example data
        axes[1, 2].plot(months, compliance_scores, marker='o', color='#27ae60')
        axes[1, 2].set_title('Compliance Score Trend')
        axes[1, 2].set_ylabel('Compliance Score (%)')
        axes[1, 2].set_ylim(0, 100)

        plt.tight_layout()
        chart_file = f"{output_dir}/scoutsuite_analysis_{self.timestamp}.png"
        plt.savefig(chart_file, dpi=300, bbox_inches='tight')
        plt.close()

        print(f"Charts generated: {chart_file}")
        return chart_file

    def process_report(self, report_file, provider, account_id, output_dir):
        """Process a ScoutSuite report and generate the analysis"""
        # Extract the data from the JavaScript report
        scoutsuite_data = self.extract_data_from_js(report_file)
        if not scoutsuite_data:
            print(f"Failed to extract data from {report_file}")
            return None

        # Analyze findings
        analysis = self.analyze_findings(scoutsuite_data)

        # Generate the executive report
        executive_summary = self.generate_executive_report(analysis, provider, account_id)

        # Create the output directory
        os.makedirs(output_dir, exist_ok=True)

        # Save the analysis results
        analysis_file = f"{output_dir}/scoutsuite_analysis_{self.timestamp}.json"
        with open(analysis_file, 'w') as f:
            json.dump({
                'executive_summary': executive_summary,
                'detailed_analysis': analysis
            }, f, indent=2)

        # Generate charts
        chart_file = self.generate_charts(analysis, output_dir)

        # Generate the CSV report
        csv_file = self._generate_csv_report(analysis, output_dir)

        print("Analysis completed:")
        print(f"  - Analysis file: {analysis_file}")
        print(f"  - Charts: {chart_file}")
        print(f"  - CSV report: {csv_file}")

        return {
            'analysis_file': analysis_file,
            'chart_file': chart_file,
            'csv_file': csv_file,
            'executive_summary': executive_summary
        }

    def _generate_csv_report(self, analysis, output_dir):
        """Generate a CSV report of the findings"""
        findings_data = []

        for finding in analysis['top_findings']:
            findings_data.append({
                'Service': finding['service'],
                'Finding': finding['finding'],
                'Severity': finding['severity'],
                'Count': finding['count'],
                'Sample_Resources': ', '.join(str(i) for i in finding['items'][:3])
            })

        df = pd.DataFrame(findings_data)
        csv_file = f"{output_dir}/scoutsuite_findings_{self.timestamp}.csv"
        df.to_csv(csv_file, index=False)

        return csv_file


def main():
    """Main entry point for the ScoutSuite analysis"""
    import argparse

    parser = argparse.ArgumentParser(description='ScoutSuite Report Analyzer')
    parser.add_argument('report_file', help='ScoutSuite JavaScript report file')
    parser.add_argument('--provider', default='aws', help='Cloud provider (aws, azure, gcp)')
    parser.add_argument('--account-id', default='unknown', help='Account/Subscription ID')
    parser.add_argument('--output-dir', default='analysis_results', help='Output directory')

    args = parser.parse_args()

    analyzer = ScoutSuiteAnalyzer()
    results = analyzer.process_report(
        args.report_file,
        args.provider,
        args.account_id,
        args.output_dir
    )

    if results:
        print("\nExecutive Summary:")
        summary = results['executive_summary']
        print(f"Risk Level: {summary['risk_level']}")
        print(f"Risk Score: {summary['risk_score']}/100")
        print(f"Total Findings: {summary['total_findings']}")
        print(f"Critical Findings: {summary['critical_findings']}")
        print(f"Compliance Status: {summary['compliance_status']}")


if __name__ == "__main__":
    main()
```
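
Assuming the script above is saved as `scripts/scoutsuite_analyzer.py` (the CI workflow below uses that path), a typical invocation looks like this; the exact name of the results `.js` file depends on the report name and ScoutSuite version, so adjust the path accordingly:

```bash
# Analyze a generated report (the .js path is an example; locate the results
# file inside your report directory first)
python3 scripts/scoutsuite_analyzer.py \
  scoutsuite-results/scoutsuite_results_aws-default.js \
  --provider aws \
  --account-id 123456789012 \
  --output-dir analysis_results
```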

CI/CD Integration and Automation

GitHub Actions Integration

```yaml

# .github/workflows/scoutsuite-security-scan.yml
name: ScoutSuite Multi-Cloud Security Scan

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
  schedule:
    # Run daily at 3 AM UTC
    - cron: '0 3 * * *'
  workflow_dispatch:
    inputs:
      cloud_provider:
        description: 'Cloud provider to scan'
        required: false
        default: 'all'
        type: choice
        options:
          - all
          - aws
          - azure
          - gcp

jobs:
  scoutsuite-scan:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        provider: [aws, azure, gcp]
        include:
          - provider: aws
            setup_script: setup_aws.sh
          - provider: azure
            setup_script: setup_azure.sh
          - provider: gcp
            setup_script: setup_gcp.sh

    steps:
    - name: Checkout code
      uses: actions/checkout@v3

    - name: Setup Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.9'

    - name: Install ScoutSuite
      run: |
        pip install scoutsuite
        scout --version

    - name: Setup AWS credentials
      if: matrix.provider == 'aws'
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1

    - name: Setup Azure credentials
      if: matrix.provider == 'azure'
      uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    - name: Setup GCP credentials
      if: matrix.provider == 'gcp'
      uses: google-github-actions/auth@v1
      with:
        credentials_json: ${{ secrets.GCP_SA_KEY }}

    - name: Run ScoutSuite scan
      run: |
        mkdir -p scoutsuite-results

        # Skip if this provider was not selected in a manual trigger
        if [ "${{ github.event.inputs.cloud_provider }}" != "all" ] && [ "${{ github.event.inputs.cloud_provider }}" != "${{ matrix.provider }}" ]; then
          echo "Skipping ${{ matrix.provider }} scan"
          exit 0
        fi

        # Run the scan based on the provider
        case "${{ matrix.provider }}" in
          aws)
            scout aws \
              --report-name "aws_security_scan_$(date +%Y%m%d_%H%M%S)" \
              --report-dir scoutsuite-results \
              --no-browser \
              --max-workers 5
            ;;
          azure)
            scout azure \
              --report-name "azure_security_scan_$(date +%Y%m%d_%H%M%S)" \
              --report-dir scoutsuite-results \
              --no-browser \
              --max-workers 5
            ;;
          gcp)
            scout gcp \
              --report-name "gcp_security_scan_$(date +%Y%m%d_%H%M%S)" \
              --report-dir scoutsuite-results \
              --no-browser \
              --max-workers 5
            ;;
        esac

    - name: Analyze results
      run: |
        pip install pandas matplotlib seaborn

        # Find the latest report file
        REPORT_FILE=$(find scoutsuite-results -name "*.js" -type f | head -1)

        if [ -n "$REPORT_FILE" ]; then
          python scripts/scoutsuite_analyzer.py "$REPORT_FILE" \
            --provider "${{ matrix.provider }}" \
            --account-id "github-actions" \
            --output-dir scoutsuite-results
        fi

    - name: Security gate check
      run: |
        python << 'EOF'
        import json
        import sys
        import glob

        # Find the analysis file
        analysis_files = glob.glob('scoutsuite-results/scoutsuite_analysis_*.json')

        if not analysis_files:
            print("No analysis file found")
            sys.exit(0)

        with open(analysis_files[0], 'r') as f:
            data = json.load(f)

        executive_summary = data['executive_summary']
        critical_findings = executive_summary['critical_findings']
        risk_level = executive_summary['risk_level']

        print(f"Security Assessment Results for ${{ matrix.provider }}:")
        print(f"Critical findings: {critical_findings}")
        print(f"Risk level: {risk_level}")

        # Security gate logic
        if risk_level in ['CRITICAL'] and critical_findings > 5:
            print("❌ CRITICAL SECURITY ISSUES FOUND!")
            print("Build failed due to critical security issues.")
            sys.exit(1)

        if risk_level in ['HIGH'] and critical_findings > 10:
            print("⚠️ WARNING: High number of critical issues found!")
            sys.exit(1)

        print("✅ Security gate passed")
        EOF

    - name: Upload scan results
      uses: actions/upload-artifact@v3
      with:
        name: scoutsuite-results-${{ matrix.provider }}
        path: scoutsuite-results/

    - name: Comment PR with results
      if: github.event_name == 'pull_request'
      uses: actions/github-script@v6
      with:
        script: |
          const fs = require('fs');
          const glob = require('glob');

          // Find the analysis file
          const analysisFiles = glob.sync('scoutsuite-results/scoutsuite_analysis_*.json');

          if (analysisFiles.length === 0) {
            console.log('No analysis file found');
            return;
          }

          const data = JSON.parse(fs.readFileSync(analysisFiles[0], 'utf8'));
          const summary = data.executive_summary;

          const comment = `## 🔒 ScoutSuite Security Scan Results (${{ matrix.provider }})

          **Risk Level:** ${summary.risk_level}
          **Risk Score:** ${summary.risk_score}/100

          **Summary:**
          - 🔴 Critical: ${summary.critical_findings}
          - 📊 Total Findings: ${summary.total_findings}
          - 📋 Compliance: ${summary.compliance_status}

          **Top Services at Risk:**
          ${summary.top_services_at_risk.slice(0, 3).map(s =>
            `- ${s.service}: ${s.critical_findings} critical findings`
          ).join('\n')}

          ${summary.risk_level === 'CRITICAL' ? '⚠️ **Critical security issues found! Please review and remediate.**' : '✅ No critical security issues found.'}

          [View detailed report](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})`;

          github.rest.issues.createComment({
            issue_number: context.issue.number,
            owner: context.repo.owner,
            repo: context.repo.repo,
            body: comment
          });

  consolidate-results:
    needs: scoutsuite-scan
    runs-on: ubuntu-latest
    if: always()

    steps:
    - name: Download all artifacts
      uses: actions/download-artifact@v3

    - name: Consolidate multi-cloud results
      run: |
        mkdir -p consolidated_results

        # Combine results from all providers; the heredoc is unquoted
        # so the $(date ...) substitution expands
        python << EOF
        import json
        import glob
        import os

        consolidated_summary = {
            "scan_timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
            "providers": {},
            "overall_risk_score": 0,
            "total_critical_findings": 0,
            "total_findings": 0
        }

        providers = ["aws", "azure", "gcp"]
        total_risk_score = 0
        provider_count = 0

        for provider in providers:
            analysis_pattern = f"scoutsuite-results-{provider}/scoutsuite_analysis_*.json"
            analysis_files = glob.glob(analysis_pattern)

            if analysis_files:
                with open(analysis_files[0], 'r') as f:
                    data = json.load(f)

                summary = data['executive_summary']
                consolidated_summary["providers"][provider] = summary
                consolidated_summary["total_critical_findings"] += summary['critical_findings']
                consolidated_summary["total_findings"] += summary['total_findings']

                total_risk_score += summary['risk_score']
                provider_count += 1

        if provider_count > 0:
            consolidated_summary["overall_risk_score"] = round(total_risk_score / provider_count, 1)

        # Save the consolidated results
        with open('consolidated_results/multi_cloud_security_summary.json', 'w') as f:
            json.dump(consolidated_summary, f, indent=2)

        print("Multi-cloud security summary generated")
        print(f"Overall risk score: {consolidated_summary['overall_risk_score']}")
        print(f"Total critical findings: {consolidated_summary['total_critical_findings']}")
        EOF

    - name: Upload consolidated results
      uses: actions/upload-artifact@v3
      with:
        name: scoutsuite-consolidated-results
        path: consolidated_results/
```
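
The workflow expects the referenced credential secrets to exist in the repository; one way to register them is the GitHub CLI (secret names as used above, values and file names illustrative):

```bash
# Register the secrets the workflow references
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-key"
gh secret set AZURE_CREDENTIALS < azure-credentials.json
gh secret set GCP_SA_KEY < scoutsuite-key.json
```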

Performance Optimization and Troubleshooting

Performance Optimization

```bash

#!/bin/bash
# ScoutSuite performance optimization

optimize_scoutsuite_performance() {
    echo "Optimizing ScoutSuite performance..."

    # 1. Create a performance configuration
    cat > scoutsuite_performance_config.json << 'EOF'
{
  "performance": {
    "max_workers": 10,
    "timeout": 300,
    "retry_attempts": 3,
    "retry_delay": 5,
    "rate_limiting": {
      "enabled": true,
      "requests_per_second": 10
    }
  },
  "scanning": {
    "skip_empty_regions": true,
    "parallel_service_scanning": true,
    "cache_results": true,
    "cache_duration": 3600
  },
  "output": {
    "compress_reports": true,
    "minimal_output": false,
    "include_raw_data": false
  }
}
EOF

# 2. Create optimized scanning script
cat > optimized_scoutsuite_scan.sh << 'EOF'

#!/bin/bash
# Optimized ScoutSuite scanning script

# Performance environment variables
export PYTHONUNBUFFERED=1
export PYTHONDONTWRITEBYTECODE=1

# Function to run an optimized scan
run_optimized_scan() {
    local provider="$1"
    local regions="$2"
    local services="$3"
    local output_dir="$4"

echo "Running optimized scan for $provider..."

case "$provider" in
    aws)
        # Use timeout to prevent hanging
        timeout 3600 scout aws \
            --regions "$regions" \
            --services "$services" \
            --report-dir "$output_dir" \
            --report-name "optimized_${provider}_scan" \
            --no-browser \
            --max-workers 8 \
            --quiet
        ;;
    azure)
        timeout 3600 scout azure \
            --services "$services" \
            --report-dir "$output_dir" \
            --report-name "optimized_${provider}_scan" \
            --no-browser \
            --max-workers 8 \
            --quiet
        ;;
    gcp)
        timeout 3600 scout gcp \
            --services "$services" \
            --report-dir "$output_dir" \
            --report-name "optimized_${provider}_scan" \
            --no-browser \
            --max-workers 8 \
            --quiet
        ;;
esac

local exit_code=$?

if [ $exit_code -eq 124 ]; then
    echo "Warning: Scan for $provider timed out"
elif [ $exit_code -ne 0 ]; then
    echo "Error: Scan for $provider failed with exit code $exit_code"
else
    echo "Completed scan for $provider"
fi

return $exit_code

}

# Main optimization function
main() {
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local output_dir="optimized_scan_$timestamp"

mkdir -p "$output_dir"

# Essential services for quick scan
local aws_services="iam,s3,ec2,vpc"
local azure_services="keyvault,storage,network"
local gcp_services="iam,storage,compute"

# Essential regions
local aws_regions="us-east-1,us-west-2,eu-west-1"

echo "Starting optimized ScoutSuite scan..."

# Run scans in parallel
run_optimized_scan "aws" "$aws_regions" "$aws_services" "$output_dir" &
AWS_PID=$!

run_optimized_scan "azure" "" "$azure_services" "$output_dir" &
AZURE_PID=$!

run_optimized_scan "gcp" "" "$gcp_services" "$output_dir" &
GCP_PID=$!

# Wait for all scans to complete
wait $AWS_PID
wait $AZURE_PID
wait $GCP_PID

echo "Optimized scan completed. Results in: $output_dir"

}

# Run the optimization
main "$@"
EOF

chmod +x optimized_scoutsuite_scan.sh

echo "Performance optimization setup complete"

}

# Memory optimization
optimize_memory_usage() {
    echo "Optimizing ScoutSuite memory usage..."

# Create memory monitoring script
cat > monitor_scoutsuite_memory.sh << 'EOF'

#!/bin/bash
# Monitor ScoutSuite memory usage

SCOUTSUITE_PID=""
MEMORY_LOG="scoutsuite_memory.log"

# Function to monitor resources
monitor_resources() {
    echo "Timestamp,Memory_MB,CPU_Percent" > "$MEMORY_LOG"

while true; do
    if [ -n "$SCOUTSUITE_PID" ] && kill -0 "$SCOUTSUITE_PID" 2>/dev/null; then
        timestamp=$(date '+%Y-%m-%d %H:%M:%S')

        # Memory usage in MB
        memory_kb=$(ps -p "$SCOUTSUITE_PID" -o rss --no-headers 2>/dev/null)
        if [ -n "$memory_kb" ]; then
            memory_mb=$((memory_kb / 1024))

            # CPU usage
            cpu_percent=$(ps -p "$SCOUTSUITE_PID" -o %cpu --no-headers 2>/dev/null)

            echo "$timestamp,$memory_mb,$cpu_percent" >> "$MEMORY_LOG"
        fi
    else
        break
    fi

    sleep 10
done

}

# Start monitoring in the background
monitor_resources &
MONITOR_PID=$!

# Run ScoutSuite with memory optimization
export PYTHONUNBUFFERED=1
export PYTHONDONTWRITEBYTECODE=1

# Start ScoutSuite and capture its PID
scout aws --regions us-east-1 --services iam --no-browser &
SCOUTSUITE_PID=$!

# Wait for ScoutSuite to complete
wait $SCOUTSUITE_PID

# Stop monitoring
kill $MONITOR_PID 2>/dev/null

# Analyze memory usage
python3 << 'PYTHON'
import pandas as pd
import matplotlib.pyplot as plt

try:
    # Read the memory data
    memory_df = pd.read_csv('scoutsuite_memory.log')
    memory_df['Timestamp'] = pd.to_datetime(memory_df['Timestamp'])

    # Create a resource usage chart
    plt.figure(figsize=(12, 8))

    plt.subplot(2, 1, 1)
    plt.plot(memory_df['Timestamp'], memory_df['Memory_MB'])
    plt.title('ScoutSuite Memory Usage Over Time')
    plt.ylabel('Memory Usage (MB)')
    plt.grid(True)

    plt.subplot(2, 1, 2)
    plt.plot(memory_df['Timestamp'], memory_df['CPU_Percent'])
    plt.title('ScoutSuite CPU Usage Over Time')
    plt.xlabel('Time')
    plt.ylabel('CPU Usage (%)')
    plt.grid(True)

    plt.tight_layout()
    plt.savefig('scoutsuite_resource_usage.png')

    # Print statistics
    print(f"Average Memory Usage: {memory_df['Memory_MB'].mean():.1f} MB")
    print(f"Peak Memory Usage: {memory_df['Memory_MB'].max():.1f} MB")
    print(f"Average CPU Usage: {memory_df['CPU_Percent'].mean():.1f}%")
    print(f"Peak CPU Usage: {memory_df['CPU_Percent'].max():.1f}%")

except Exception as e:
    print(f"Error analyzing resource usage: {e}")
PYTHON

echo "Memory monitoring completed"
EOF

chmod +x monitor_scoutsuite_memory.sh

echo "Memory optimization complete"
}

# Run the optimizations
optimize_scoutsuite_performance
optimize_memory_usage
```

Troubleshooting Guide

```bash

#!/bin/bash
# ScoutSuite troubleshooting guide

troubleshoot_scoutsuite() {
    echo "ScoutSuite Troubleshooting Guide"
    echo "================================"

# Check Python installation
if ! command -v python3 &> /dev/null; then
    echo "❌ Python 3 not found"
    echo "Solution: Install Python 3.7 or later"
    echo "  sudo apt update && sudo apt install python3 python3-pip"
    return 1
fi

python_version=$(python3 --version | cut -d' ' -f2)
echo "✅ Python found: $python_version"

# Check ScoutSuite installation
if ! command -v scout &> /dev/null; then
    echo "❌ ScoutSuite not found"
    echo "Solution: Install ScoutSuite"
    echo "  pip3 install scoutsuite"
    return 1
fi

scout_version=$(scout --version 2>&1 | head -n1)
echo "✅ ScoutSuite found: $scout_version"

# Check cloud CLI tools
echo ""
echo "Checking cloud CLI tools..."

# AWS CLI
if command -v aws &> /dev/null; then
    aws_version=$(aws --version 2>&1)
    echo "✅ AWS CLI found: $aws_version"

    if aws sts get-caller-identity > /dev/null 2>&1; then
        account_id=$(aws sts get-caller-identity --query Account --output text)
        echo "✅ AWS credentials configured (Account: $account_id)"
    else
        echo "⚠️  AWS credentials not configured"
    fi
else
    echo "⚠️  AWS CLI not found"
fi

# Azure CLI
if command -v az &> /dev/null; then
    azure_version=$(az --version | head -n1)
    echo "✅ Azure CLI found: $azure_version"

    if az account show > /dev/null 2>&1; then
        subscription_id=$(az account show --query id --output tsv)
        echo "✅ Azure credentials configured (Subscription: $subscription_id)"
    else
        echo "⚠️  Azure credentials not configured"
    fi
else
    echo "⚠️  Azure CLI not found"
fi

# Google Cloud SDK
if command -v gcloud &> /dev/null; then
    gcloud_version=$(gcloud --version | head -n1)
    echo "✅ Google Cloud SDK found: $gcloud_version"

    if gcloud auth list --filter=status:ACTIVE --format="value(account)" | head -n1 > /dev/null 2>&1; then
        project_id=$(gcloud config get-value project 2>/dev/null)
        echo "✅ GCP credentials configured (Project: $project_id)"
    else
        echo "⚠️  GCP credentials not configured"
    fi
else
    echo "⚠️  Google Cloud SDK not found"
fi

# Test basic functionality
echo ""
echo "Testing basic functionality..."

if scout --help > /dev/null 2>&1; then
    echo "✅ ScoutSuite help command works"
else
    echo "❌ ScoutSuite help command failed"
    echo "Solution: Reinstall ScoutSuite"
    echo "  pip3 uninstall scoutsuite && pip3 install scoutsuite"
fi

# Check system resources
echo ""
echo "Checking system resources..."

available_memory=$(free -m | awk 'NR==2{printf "%.1f", $7/1024}')
if (( $(echo "$available_memory < 2.0" | bc -l) )); then
    echo "⚠️  Low available memory: ${available_memory}GB"
    echo "Recommendation: Ensure at least 4GB available memory for large scans"
else
    echo "✅ Available memory: ${available_memory}GB"
fi

# Check disk space

    disk_usage=$(df . | tail -1 | awk '{print $5}' | sed 's/%//')
    if [ "$disk_usage" -gt 90 ]; then
        echo "⚠️  High disk usage: ${disk_usage}%"
        echo "Solution: Free up disk space"
    else
        echo "✅ Disk usage: ${disk_usage}%"
    fi

echo ""
echo "Troubleshooting completed"

}

# Common error solutions
fix_common_errors() {
    echo "Common ScoutSuite Errors and Solutions"
    echo "======================================"

    cat << 'EOF'
1. "ModuleNotFoundError: No module named 'ScoutSuite'"
   Solution:
   - Install ScoutSuite: pip3 install scoutsuite
   - Check the Python path: python3 -c "import sys; print(sys.path)"

2. "AWS credentials not configured"
   Solution:
   - Run: aws configure
   - Or set environment variables:
     export AWS_ACCESS_KEY_ID=your-key
     export AWS_SECRET_ACCESS_KEY=your-secret

3. "Azure credentials not configured"
   Solution:
   - Run: az login
   - Or use a service principal:
     export AZURE_CLIENT_ID=your-client-id
     export AZURE_CLIENT_SECRET=your-client-secret
     export AZURE_TENANT_ID=your-tenant-id

4. "GCP credentials not configured"
   Solution:
   - Run: gcloud auth login
   - Or use a service account:
     export GOOGLE_APPLICATION_CREDENTIALS=path/to/key.json

5. "AccessDenied" or "Forbidden" errors
   Solution:
   - Check IAM permissions
   - Ensure the user/role has the required policies attached
   - Use the ScoutSuite IAM policies from the documentation

6. "Timeout" or "Connection timeout"
   Solution:
   - Increase the timeout: --timeout 3600
   - Check internet connectivity
   - Reduce parallel requests: --max-workers 5

7. "Memory allocation failed" or "Out of memory"
   Solution:
   - Scan fewer regions: --regions us-east-1
   - Scan specific services: --services s3,iam
   - Increase system memory

8. "Report generation failed"
   Solution:
   - Check disk space
   - Ensure write permissions in the output directory
   - Use a different output directory: --report-dir /tmp/scoutsuite

9. "SSL/TLS certificate errors"
   Solution:
   - Update certificates: sudo apt update && sudo apt install ca-certificates
   - Check the system time: timedatectl status
   - Update Python requests: pip3 install --upgrade requests

10. "Scan takes too long" or hangs
    Solution:
    - Use region filtering: --regions us-east-1
    - Scan specific services: --services iam,s3
    - Run with a timeout: timeout 3600 scout aws
    - Use fewer workers: --max-workers 3
EOF
}

# Performance diagnostics
diagnose_performance() {
    echo "Diagnosing ScoutSuite Performance"
    echo "================================="

# Test scan performance
echo "Running performance test..."

start_time=$(date +%s.%N)

# Run a simple scan
timeout 300 scout aws --regions us-east-1 --services iam --no-browser > /dev/null 2>&1
exit_code=$?

end_time=$(date +%s.%N)
duration=$(echo "$end_time - $start_time" | bc)

if [ $exit_code -eq 0 ]; then
    echo "✅ Performance test completed in ${duration}s"
elif [ $exit_code -eq 124 ]; then
    echo "⚠️  Performance test timed out (>300s)"
    echo "Recommendation: Check network connectivity and cloud API performance"
else
    echo "❌ Performance test failed"
    echo "Recommendation: Check configuration and credentials"
fi

# Check Python performance

    python_startup_time=$(python3 -c "import time; start=time.time(); import ScoutSuite; print(f'{time.time()-start:.2f}')" 2>/dev/null || echo "N/A")
    echo "Python import time: ${python_startup_time}s"

# System load

    load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
    echo "System load average: $load_avg"

# Network connectivity test
echo "Testing cloud API connectivity..."

# AWS API test
if command -v aws &> /dev/null; then

        aws_api_time=$({ time aws sts get-caller-identity > /dev/null 2>&1; } 2>&1 | grep real | awk '{print $2}')
        echo "AWS API response time: $aws_api_time"
    fi

# Azure API test
if command -v az &> /dev/null; then

        azure_api_time=$({ time az account show > /dev/null 2>&1; } 2>&1 | grep real | awk '{print $2}')
        echo "Azure API response time: $azure_api_time"
    fi

# GCP API test
if command -v gcloud &> /dev/null; then

        gcp_api_time=$({ time gcloud auth list > /dev/null 2>&1; } 2>&1 | grep real | awk '{print $2}')
        echo "GCP API response time: $gcp_api_time"
    fi

# Recommendations
echo ""
echo "Performance Recommendations:"
echo "- Use region filtering for faster scans"
echo "- Scan specific services instead of all services"
echo "- Use --max-workers to control parallelism"
echo "- Increase system memory for large environments"
echo "- Use SSD storage for better I/O performance"
echo "- Monitor cloud API rate limits"
echo "- Use --no-browser for headless environments"

}

# Main troubleshooting function
main() {
    troubleshoot_scoutsuite
    echo ""
    fix_common_errors
    echo ""
    diagnose_performance
}

# Run the troubleshooting
main
```

Resources and Documentation

Official Resources

Cloud Provider Resources

Integration Examples