PacBot Cheat Sheet
Overview
PacBot (Policy as Code Bot) is an open-source platform for continuous compliance monitoring, alerting, and reporting. Originally developed by T-Mobile, PacBot provides a comprehensive solution for cloud governance and security posture management. The platform automates policy enforcement, tracks compliance violations, and delivers detailed reporting across AWS environments. PacBot lets organizations implement policy-as-code practices to enforce consistent security standards and regulatory compliance.
**Key features:** continuous compliance monitoring, automated policy enforcement, real-time alerting, comprehensive reporting, multi-account support, a custom rule engine, an API-driven architecture, and integration with existing DevOps workflows.
Installation and Setup
Prerequisites and Environment Setup
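To make the policy-as-code idea concrete before diving into installation, here is a minimal illustrative sketch: a policy is plain data, and evaluation is a pure function over resource state. The policy and resource shapes below are hypothetical, not PacBot's actual schema or rule engine.

```python
def evaluate_policy(policy, resource):
    """Return a violation dict if the resource breaks the policy, else None."""
    for key, forbidden in policy["forbidden_values"].items():
        if resource.get(key) == forbidden:
            return {
                "policyId": policy["id"],
                "resourceId": resource["id"],
                "severity": policy["severity"],
                "issue": f"{key} must not be {forbidden!r}",
            }
    return None

# Hypothetical policy resembling PacBot's S3 public-access check
s3_public_policy = {
    "id": "S3BucketPublicAccess",
    "severity": "high",
    "forbidden_values": {"public_access": True},
}

bucket = {"id": "my-bucket", "public_access": True}
violation = evaluate_policy(s3_public_policy, bucket)
print(violation["issue"])  # public_access must not be True
```

Because policies are data, they can be versioned in Git, reviewed in pull requests, and applied uniformly across accounts, which is exactly what PacBot automates at scale.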
```bash
# System requirements check
echo "Checking system requirements for PacBot..."

# Check Java installation (Java 8 or higher required)
if command -v java &> /dev/null; then
    java_version=$(java -version 2>&1 | head -n1 | awk -F '"' '{print $2}')
    echo "✅ Java found: $java_version"
else
    echo "❌ Java not found. Installing OpenJDK 11..."
    sudo apt update
    sudo apt install -y openjdk-11-jdk
fi

# Check Maven installation
if command -v mvn &> /dev/null; then
    maven_version=$(mvn --version | head -n1)
    echo "✅ Maven found: $maven_version"
else
    echo "❌ Maven not found. Installing Maven..."
    sudo apt install -y maven
fi

# Check Node.js installation (for UI components)
if command -v node &> /dev/null; then
    node_version=$(node --version)
    echo "✅ Node.js found: $node_version"
else
    echo "❌ Node.js not found. Installing Node.js..."
    curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
    sudo apt install -y nodejs
fi

# Check Docker installation
if command -v docker &> /dev/null; then
    docker_version=$(docker --version)
    echo "✅ Docker found: $docker_version"
else
    echo "❌ Docker not found. Installing Docker..."
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    sudo usermod -aG docker $USER
fi

# Check AWS CLI installation
if command -v aws &> /dev/null; then
    aws_version=$(aws --version)
    echo "✅ AWS CLI found: $aws_version"
else
    echo "❌ AWS CLI not found. Installing AWS CLI..."
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install
fi

# Verify installations
echo "Verifying installations..."
java -version
mvn --version
node --version
docker --version
aws --version

echo "Prerequisites check completed"
```
Installation from Source
```bash
# Clone PacBot repository
git clone https://github.com/tmobile/pacbot.git
cd pacbot

# Check repository structure
ls -la
echo "Repository structure:"
find . -maxdepth 2 -type d | sort

# Set up environment variables
export PACBOT_HOME=$(pwd)
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH

# Create configuration directories
mkdir -p $HOME/.pacbot/config $HOME/.pacbot/logs

# Set up AWS credentials for PacBot
aws configure
# Enter: Access Key ID, Secret Access Key, Region (us-east-1), Output format (json)

# Verify AWS configuration
aws sts get-caller-identity

# Create PacBot configuration file (unquoted heredoc so $HOME expands)
cat > $HOME/.pacbot/config/application.properties << EOF
# PacBot Configuration
pacbot.env=dev
pacbot.auto.fix.enabled=false
pacbot.auto.fix.orphan.enabled=false

# Database configuration
spring.datasource.url=jdbc:mysql://localhost:3306/pacbot
spring.datasource.username=pacbot
spring.datasource.password=pacbot123
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# Elasticsearch configuration
elastic.host=localhost
elastic.port=9200
elastic.cluster=pacbot

# AWS configuration
aws.region=us-east-1
aws.role.cross.account.enabled=false

# Email configuration
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=your-email@gmail.com
spring.mail.password=your-app-password

# Logging configuration
logging.level.com.tmobile.pacbot=DEBUG
logging.file.path=$HOME/.pacbot/logs/
EOF

echo "PacBot source installation completed"
```
Docker Installation
```bash
# Create Docker Compose setup for PacBot
mkdir -p pacbot-docker
cd pacbot-docker

# Create Docker Compose file
cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  # MySQL Database
  mysql:
    image: mysql:8.0
    container_name: pacbot-mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: pacbot
      MYSQL_USER: pacbot
      MYSQL_PASSWORD: pacbot123
    ports:
      - "3306:3306"
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init-scripts:/docker-entrypoint-initdb.d
    networks:
      - pacbot-network

  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: pacbot-elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - pacbot-network

  # Redis
  redis:
    image: redis:6.2-alpine
    container_name: pacbot-redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    networks:
      - pacbot-network

  # PacBot API
  pacbot-api:
    image: tmobile/pacbot-api:latest
    container_name: pacbot-api
    depends_on:
      - mysql
      - elasticsearch
      - redis
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=pacbot
      - DB_USERNAME=pacbot
      - DB_PASSWORD=pacbot123
      - ELASTIC_HOST=elasticsearch
      - ELASTIC_PORT=9200
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - AWS_REGION=us-east-1
    ports:
      - "8080:8080"
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    networks:
      - pacbot-network

  # PacBot UI
  pacbot-ui:
    image: tmobile/pacbot-ui:latest
    container_name: pacbot-ui
    depends_on:
      - pacbot-api
    environment:
      - API_BASE_URL=http://pacbot-api:8080
    ports:
      - "4200:80"
    networks:
      - pacbot-network

  # PacBot Jobs
  pacbot-jobs:
    image: tmobile/pacbot-jobs:latest
    container_name: pacbot-jobs
    depends_on:
      - mysql
      - elasticsearch
      - redis
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=pacbot
      - DB_USERNAME=pacbot
      - DB_PASSWORD=pacbot123
      - ELASTIC_HOST=elasticsearch
      - ELASTIC_PORT=9200
      - AWS_REGION=us-east-1
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
    networks:
      - pacbot-network

volumes:
  mysql_data:
  elasticsearch_data:
  redis_data:

networks:
  pacbot-network:
    driver: bridge
EOF
# Create configuration directories
mkdir -p config logs init-scripts

# Create MySQL initialization script
cat > init-scripts/01-init-pacbot.sql << 'EOF'
-- PacBot Database Initialization
USE pacbot;

-- Create tables for PacBot
CREATE TABLE IF NOT EXISTS cf_Accounts (
    accountId VARCHAR(50) PRIMARY KEY,
    accountName VARCHAR(100),
    accountStatus VARCHAR(20),
    createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS cf_AssetGroupDetails (
    groupId VARCHAR(50) PRIMARY KEY,
    groupName VARCHAR(100),
    dataSource VARCHAR(50),
    targetType VARCHAR(50),
    groupType VARCHAR(20),
    createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS cf_PolicyTable (
    policyId VARCHAR(50) PRIMARY KEY,
    policyName VARCHAR(200),
    policyDesc TEXT,
    resolution TEXT,
    policyUrl VARCHAR(500),
    policyVersion VARCHAR(10),
    policyParams TEXT,
    dataSource VARCHAR(50),
    targetType VARCHAR(50),
    assetGroup VARCHAR(50),
    alexaKeyword VARCHAR(100),
    policyCategory VARCHAR(50),
    policyType VARCHAR(20),
    severity VARCHAR(20),
    status VARCHAR(20),
    userId VARCHAR(50),
    createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    modifiedDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS cf_RuleInstance (
    ruleId VARCHAR(50) PRIMARY KEY,
    ruleName VARCHAR(200),
    targetType VARCHAR(50),
    assetGroup VARCHAR(50),
    ruleParams TEXT,
    ruleFrequency VARCHAR(20),
    ruleExecutable VARCHAR(500),
    ruleRestUrl VARCHAR(500),
    ruleType VARCHAR(20),
    status VARCHAR(20),
    userId VARCHAR(50),
    displayName VARCHAR(200),
    createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    modifiedDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- Insert sample data
INSERT INTO cf_Accounts (accountId, accountName, accountStatus) VALUES
('123456789012', 'Production Account', 'active'),
('123456789013', 'Development Account', 'active'),
('123456789014', 'Staging Account', 'active');

INSERT INTO cf_AssetGroupDetails (groupId, groupName, dataSource, targetType, groupType) VALUES
('aws-all', 'AWS All Resources', 'aws', 'ec2,s3,iam,rds', 'user'),
('aws-prod', 'AWS Production', 'aws', 'ec2,s3,iam', 'user'),
('aws-dev', 'AWS Development', 'aws', 'ec2,s3', 'user');

-- Insert sample policies
INSERT INTO cf_PolicyTable (
    policyId, policyName, policyDesc, resolution, policyUrl, policyVersion,
    dataSource, targetType, assetGroup, policyCategory, policyType, severity, status, userId
) VALUES
('PacBot_S3BucketPublicAccess_version-1', 'S3 Bucket Public Access Check',
 'Checks if S3 buckets have public read or write access',
 'Remove public access from S3 bucket by updating bucket policy',
 'https://docs.aws.amazon.com/s3/latest/userguide/access-control-block-public-access.html',
 '1.0', 'aws', 's3', 'aws-all', 'security', 'Mandatory', 'high', 'ENABLED', 'admin'),
('PacBot_EC2SecurityGroupOpenToWorld_version-1', 'EC2 Security Group Open to World',
 'Checks if EC2 security groups allow unrestricted access from internet',
 'Restrict security group rules to specific IP ranges or security groups',
 'https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html',
 '1.0', 'aws', 'ec2', 'aws-all', 'security', 'Mandatory', 'critical', 'ENABLED', 'admin'),
('PacBot_IAMUserWithoutMFA_version-1', 'IAM User Without MFA',
 'Checks if IAM users have MFA enabled',
 'Enable MFA for all IAM users with console access',
 'https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html',
 '1.0', 'aws', 'iam', 'aws-all', 'security', 'Mandatory', 'medium', 'ENABLED', 'admin');

-- Insert sample rules
INSERT INTO cf_RuleInstance (
    ruleId, ruleName, targetType, assetGroup, ruleParams, ruleFrequency,
    ruleExecutable, ruleType, status, userId, displayName
) VALUES
('PacBot_S3BucketPublicAccess_version-1_S3Bucket_aws-all', 'S3 Bucket Public Access Check',
 's3', 'aws-all', '{"severity":"high","category":"security"}', 'daily',
 'com.tmobile.pacbot.aws.s3.S3BucketPublicAccessRule', 'Mandatory', 'ENABLED', 'admin',
 'S3 Public Access Rule'),
('PacBot_EC2SecurityGroupOpenToWorld_version-1_SecurityGroup_aws-all', 'EC2 Security Group Open to World',
 'ec2', 'aws-all', '{"severity":"critical","category":"security"}', 'daily',
 'com.tmobile.pacbot.aws.ec2.SecurityGroupOpenToWorldRule', 'Mandatory', 'ENABLED', 'admin',
 'Security Group Open Rule'),
('PacBot_IAMUserWithoutMFA_version-1_IAMUser_aws-all', 'IAM User Without MFA',
 'iam', 'aws-all', '{"severity":"medium","category":"security"}', 'daily',
 'com.tmobile.pacbot.aws.iam.IAMUserWithoutMFARule', 'Mandatory', 'ENABLED', 'admin',
 'IAM MFA Rule');

COMMIT;
EOF
# Create application configuration for Docker
cat > config/application-docker.properties << 'EOF'
# PacBot Docker Configuration
pacbot.env=docker
pacbot.auto.fix.enabled=false

# Database configuration
spring.datasource.url=jdbc:mysql://mysql:3306/pacbot?useSSL=false&allowPublicKeyRetrieval=true
spring.datasource.username=pacbot
spring.datasource.password=pacbot123
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# Elasticsearch configuration
elastic.host=elasticsearch
elastic.port=9200
elastic.cluster=pacbot

# Redis configuration
spring.redis.host=redis
spring.redis.port=6379

# AWS configuration
aws.region=us-east-1
aws.role.cross.account.enabled=false

# Logging configuration
logging.level.com.tmobile.pacbot=INFO
logging.file.path=/app/logs/
EOF
# Start PacBot with Docker Compose
echo "Starting PacBot with Docker Compose..."
docker-compose up -d

# Wait for services to start
echo "Waiting for services to start..."
sleep 60

# Check service status
docker-compose ps

# Test connectivity
echo "Testing service connectivity..."
curl -f http://localhost:9200/_cluster/health || echo "Elasticsearch not ready"
curl -f http://localhost:8080/health || echo "PacBot API not ready"
curl -f http://localhost:4200 || echo "PacBot UI not ready"

echo "PacBot Docker installation completed"
echo "Access PacBot UI at: http://localhost:4200"
echo "Access PacBot API at: http://localhost:8080"
```
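The fixed `sleep 60` above is fragile: on slow machines Elasticsearch may need longer, and on fast ones you wait needlessly. A small sketch of an alternative is to poll the published ports until they accept TCP connections (host and ports taken from the compose file above; the helper itself is ours, not part of PacBot):

```python
import socket
import time

def wait_for_port(host, port, timeout=180.0, interval=2.0):
    """Poll host:port until a TCP connection succeeds; return False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Usage against the compose services:
# for name, port in [("Elasticsearch", 9200), ("PacBot API", 8080), ("PacBot UI", 4200)]:
#     print(name, "ready" if wait_for_port("localhost", port) else "not ready")
```

A port accepting connections only proves the process is listening, not that it is healthy; for Elasticsearch and the API, following up with the `curl` health checks shown above is still worthwhile.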
AWS Infrastructure Setup
```bash
# Create AWS infrastructure for PacBot using CloudFormation
mkdir -p pacbot-aws-setup
cd pacbot-aws-setup

# Create CloudFormation template for PacBot infrastructure
cat > pacbot-infrastructure.yaml << 'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: 'PacBot Infrastructure Setup'

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: [dev, staging, prod]
    Description: Environment name

  VpcCidr:
    Type: String
    Default: 10.0.0.0/16
    Description: CIDR block for VPC

  KeyPairName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: EC2 Key Pair for instances
Resources:
  # VPC and Networking
  PacBotVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCidr
      EnableDnsHostnames: true
      EnableDnsSupport: true
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-vpc-${Environment}'
        - Key: Environment
          Value: !Ref Environment

  PacBotInternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-igw-${Environment}'

  PacBotVPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref PacBotVPC
      InternetGatewayId: !Ref PacBotInternetGateway

  # Public Subnets
  PacBotPublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref PacBotVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-public-subnet-1-${Environment}'

  PacBotPublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref PacBotVPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [1, !GetAZs '']
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-public-subnet-2-${Environment}'

  # Private Subnets
  PacBotPrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref PacBotVPC
      CidrBlock: 10.0.3.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-private-subnet-1-${Environment}'

  PacBotPrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref PacBotVPC
      CidrBlock: 10.0.4.0/24
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-private-subnet-2-${Environment}'

  # Route Tables
  PacBotPublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref PacBotVPC
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-public-rt-${Environment}'

  PacBotPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: PacBotVPCGatewayAttachment
    Properties:
      RouteTableId: !Ref PacBotPublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref PacBotInternetGateway

  PacBotPublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PacBotPublicSubnet1
      RouteTableId: !Ref PacBotPublicRouteTable

  PacBotPublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PacBotPublicSubnet2
      RouteTableId: !Ref PacBotPublicRouteTable

  # Security Groups
  PacBotWebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for PacBot web tier
      VpcId: !Ref PacBotVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-web-sg-${Environment}'

  PacBotAppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for PacBot application tier
      VpcId: !Ref PacBotVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 8080
          ToPort: 8080
          SourceSecurityGroupId: !Ref PacBotWebSecurityGroup
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-app-sg-${Environment}'

  PacBotDBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for PacBot database tier
      VpcId: !Ref PacBotVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref PacBotAppSecurityGroup
        - IpProtocol: tcp
          FromPort: 9200
          ToPort: 9200
          SourceSecurityGroupId: !Ref PacBotAppSecurityGroup
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-db-sg-${Environment}'

  # RDS Subnet Group
  PacBotDBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnet group for PacBot RDS
      SubnetIds:
        - !Ref PacBotPrivateSubnet1
        - !Ref PacBotPrivateSubnet2
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-db-subnet-group-${Environment}'

  # RDS Instance
  PacBotDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceIdentifier: !Sub 'pacbot-db-${Environment}'
      DBInstanceClass: db.t3.micro
      Engine: mysql
      EngineVersion: '8.0'
      MasterUsername: pacbot
      MasterUserPassword: !Sub '{{resolve:secretsmanager:pacbot-db-password-${Environment}:SecretString:password}}'
      AllocatedStorage: 20
      StorageType: gp2
      VPCSecurityGroups:
        - !Ref PacBotDBSecurityGroup
      DBSubnetGroupName: !Ref PacBotDBSubnetGroup
      BackupRetentionPeriod: 7
      MultiAZ: false
      PubliclyAccessible: false
      StorageEncrypted: true
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-database-${Environment}'
        - Key: Environment
          Value: !Ref Environment

  # Elasticsearch Domain
  PacBotElasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      DomainName: !Sub 'pacbot-es-${Environment}'
      ElasticsearchVersion: '7.10'
      ElasticsearchClusterConfig:
        InstanceType: t3.small.elasticsearch
        InstanceCount: 1
        DedicatedMasterEnabled: false
      EBSOptions:
        EBSEnabled: true
        VolumeType: gp2
        VolumeSize: 20
      VPCOptions:
        SecurityGroupIds:
          - !Ref PacBotDBSecurityGroup
        SubnetIds:
          - !Ref PacBotPrivateSubnet1
      AccessPolicies:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: '*'
            Action: 'es:*'
            Resource: !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/pacbot-es-${Environment}/*'
      Tags:
        - Key: Name
          Value: !Sub 'pacbot-elasticsearch-${Environment}'
        - Key: Environment
          Value: !Ref Environment

  # IAM Role for PacBot
  PacBotRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub 'PacBot-Role-${Environment}'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess
        - arn:aws:iam::aws:policy/SecurityAudit
      Policies:
        - PolicyName: PacBotAdditionalPermissions
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - es:ESHttpGet
                  - es:ESHttpPost
                  - es:ESHttpPut
                  - es:ESHttpDelete
                  - rds:DescribeDBInstances
                  - rds:DescribeDBClusters
                  - rds:DescribeDBSnapshots
                  - secretsmanager:GetSecretValue
                  - secretsmanager:DescribeSecret
                Resource: '*'

  PacBotInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref PacBotRole
  # Launch Template for PacBot instances
  PacBotLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: !Sub 'pacbot-launch-template-${Environment}'
      LaunchTemplateData:
        ImageId: ami-0c02fb55956c7d316  # Amazon Linux 2 AMI
        InstanceType: t3.medium
        KeyName: !Ref KeyPairName
        IamInstanceProfile:
          Arn: !GetAtt PacBotInstanceProfile.Arn
        SecurityGroupIds:
          - !Ref PacBotAppSecurityGroup
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            yum update -y
            yum install -y docker git java-11-openjdk-devel

            # Start Docker
            systemctl start docker
            systemctl enable docker
            usermod -a -G docker ec2-user

            # Install Docker Compose
            curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
            chmod +x /usr/local/bin/docker-compose

            # Clone PacBot
            cd /opt
            git clone https://github.com/tmobile/pacbot.git
            chown -R ec2-user:ec2-user pacbot

            # Configure PacBot
            mkdir -p /opt/pacbot/config
            cat > /opt/pacbot/config/application.properties << 'EOFCONFIG'
            pacbot.env=${Environment}
            spring.datasource.url=jdbc:mysql://${PacBotDatabase.Endpoint.Address}:3306/pacbot
            spring.datasource.username=pacbot
            spring.datasource.password=pacbot123
            elastic.host=${PacBotElasticsearch.DomainEndpoint}
            elastic.port=443
            elastic.protocol=https
            aws.region=${AWS::Region}
            EOFCONFIG

            # Signal completion (note: this expects an Auto Scaling group named
            # PacBotAutoScalingGroup with a CreationPolicy, which this template
            # does not define; add one or remove the signal)
            /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource PacBotAutoScalingGroup --region ${AWS::Region}
Outputs:
  VPCId:
    Description: VPC ID
    Value: !Ref PacBotVPC
    Export:
      Name: !Sub '${AWS::StackName}-VPC-ID'

  DatabaseEndpoint:
    Description: RDS Database Endpoint
    Value: !GetAtt PacBotDatabase.Endpoint.Address
    Export:
      Name: !Sub '${AWS::StackName}-DB-Endpoint'

  ElasticsearchEndpoint:
    Description: Elasticsearch Domain Endpoint
    Value: !GetAtt PacBotElasticsearch.DomainEndpoint
    Export:
      Name: !Sub '${AWS::StackName}-ES-Endpoint'

  PacBotRoleArn:
    Description: PacBot IAM Role ARN
    Value: !GetAtt PacBotRole.Arn
    Export:
      Name: !Sub '${AWS::StackName}-Role-ARN'
EOF
# Create database password secret
aws secretsmanager create-secret \
  --name "pacbot-db-password-dev" \
  --description "PacBot database password" \
  --secret-string '{"password":"PacBot123!@#"}'

# Deploy CloudFormation stack
aws cloudformation create-stack \
  --stack-name pacbot-infrastructure-dev \
  --template-body file://pacbot-infrastructure.yaml \
  --parameters ParameterKey=Environment,ParameterValue=dev \
               ParameterKey=KeyPairName,ParameterValue=my-key-pair \
  --capabilities CAPABILITY_NAMED_IAM

# Wait for stack creation
aws cloudformation wait stack-create-complete \
  --stack-name pacbot-infrastructure-dev

# Get stack outputs
aws cloudformation describe-stacks \
  --stack-name pacbot-infrastructure-dev \
  --query 'Stacks[0].Outputs'

echo "AWS infrastructure setup completed"
```
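The outputs returned by `describe-stacks` arrive as a list of `{"OutputKey": ..., "OutputValue": ...}` dicts; flattening them into a plain mapping makes them easier to feed into PacBot's configuration files. A small sketch (the boto3 call is real but commented out so the pure parsing logic stands alone; the sample endpoint values are made up):

```python
def outputs_to_dict(outputs):
    """Flatten CloudFormation stack outputs into a {key: value} dict."""
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}

# Fetching the real outputs would look like:
# import boto3
# cfn = boto3.client("cloudformation")
# stack = cfn.describe_stacks(StackName="pacbot-infrastructure-dev")["Stacks"][0]
# config = outputs_to_dict(stack.get("Outputs", []))

# Hypothetical sample matching the Outputs section of the template above
sample = [
    {"OutputKey": "DatabaseEndpoint", "OutputValue": "pacbot-db-dev.example.rds.amazonaws.com"},
    {"OutputKey": "ElasticsearchEndpoint", "OutputValue": "vpc-pacbot-es-dev.example.es.amazonaws.com"},
]
print(outputs_to_dict(sample)["DatabaseEndpoint"])  # pacbot-db-dev.example.rds.amazonaws.com
```

The resulting dict maps directly onto `spring.datasource.url` and `elastic.host` in the properties files generated later.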
Configuration and Policy Management
Basic Configuration
```bash
# Create PacBot configuration management script
cat > configure_pacbot.sh << 'EOF'
#!/bin/bash
# PacBot Configuration Management

PACBOT_HOME=${PACBOT_HOME:-/opt/pacbot}
CONFIG_DIR="$PACBOT_HOME/config"
LOGS_DIR="$PACBOT_HOME/logs"

# Create directories
mkdir -p "$CONFIG_DIR" "$LOGS_DIR"

# Function to configure database
configure_database() {
    echo "Configuring PacBot database..."

    cat > "$CONFIG_DIR/database.properties" << 'DBCONFIG'
# Database Configuration
spring.datasource.url=jdbc:mysql://localhost:3306/pacbot?useSSL=false&allowPublicKeyRetrieval=true
spring.datasource.username=pacbot
spring.datasource.password=pacbot123
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# Connection pool settings
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=600000
spring.datasource.hikari.connection-timeout=30000

# JPA settings
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
DBCONFIG

    echo "Database configuration completed"
}
# Function to configure Elasticsearch
configure_elasticsearch() {
    echo "Configuring Elasticsearch..."

    cat > "$CONFIG_DIR/elasticsearch.properties" << 'ESCONFIG'
# Elasticsearch Configuration
elastic.host=localhost
elastic.port=9200
elastic.protocol=http
elastic.cluster=pacbot
elastic.index.prefix=pacbot

# Connection settings
elastic.connection.timeout=30000
elastic.socket.timeout=60000
elastic.max.retry.timeout=120000

# Bulk settings
elastic.bulk.size=1000
elastic.bulk.timeout=60000
ESCONFIG

    echo "Elasticsearch configuration completed"
}

# Function to configure AWS
configure_aws() {
    echo "Configuring AWS settings..."

    cat > "$CONFIG_DIR/aws.properties" << 'AWSCONFIG'
# AWS Configuration
aws.region=us-east-1
aws.role.cross.account.enabled=true
aws.role.external.id=pacbot-external-id

# S3 Configuration
aws.s3.bucket.name=pacbot-data-bucket
aws.s3.region=us-east-1

# SQS Configuration
aws.sqs.queue.url=https://sqs.us-east-1.amazonaws.com/123456789012/pacbot-queue

# SNS Configuration
aws.sns.topic.arn=arn:aws:sns:us-east-1:123456789012:pacbot-notifications

# Lambda Configuration
aws.lambda.function.prefix=pacbot
aws.lambda.region=us-east-1
AWSCONFIG

    echo "AWS configuration completed"
}
# Function to configure application
configure_application() {
    echo "Configuring application settings..."

    # Unquoted heredoc so ${LOGS_DIR} expands at script runtime
    cat > "$CONFIG_DIR/application.properties" << APPCONFIG
# Application Configuration
pacbot.env=dev
pacbot.auto.fix.enabled=false
pacbot.auto.fix.orphan.enabled=false

# Server Configuration
server.port=8080
server.servlet.context-path=/api

# Security Configuration
security.oauth2.enabled=false
security.basic.enabled=false

# Email Configuration
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=pacbot@company.com
spring.mail.password=app-password
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true

# Notification Configuration
notification.email.enabled=true
notification.slack.enabled=false
notification.slack.webhook.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK

# Logging Configuration
logging.level.com.tmobile.pacbot=INFO
logging.level.org.springframework=WARN
logging.file.path=${LOGS_DIR}/
logging.file.name=pacbot.log
logging.file.max-size=100MB
logging.file.max-history=30

# Actuator Configuration
management.endpoints.web.exposure.include=health,info,metrics
management.endpoint.health.show-details=when-authorized
APPCONFIG

    echo "Application configuration completed"
}
# Function to configure rules
configure_rules() {
    echo "Configuring PacBot rules..."

    mkdir -p "$CONFIG_DIR/rules"

    cat > "$CONFIG_DIR/rules/security-rules.json" << 'RULESCONFIG'
{
  "rules": [
    {
      "ruleId": "PacBot_S3BucketPublicAccess_version-1",
      "ruleName": "S3 Bucket Public Access Check",
      "description": "Checks if S3 buckets have public read or write access",
      "severity": "high",
      "category": "security",
      "targetType": "s3",
      "assetGroup": "aws-all",
      "ruleParams": {
        "checkPublicRead": true,
        "checkPublicWrite": true,
        "excludeBuckets": ["public-website-bucket"]
      },
      "ruleFrequency": "daily",
      "autoFix": false,
      "enabled": true
    },
    {
      "ruleId": "PacBot_EC2SecurityGroupOpenToWorld_version-1",
      "ruleName": "EC2 Security Group Open to World",
      "description": "Checks if EC2 security groups allow unrestricted access",
      "severity": "critical",
      "category": "security",
      "targetType": "ec2",
      "assetGroup": "aws-all",
      "ruleParams": {
        "checkInboundRules": true,
        "allowedPorts": [80, 443],
        "excludeSecurityGroups": ["web-tier-sg"]
      },
      "ruleFrequency": "daily",
      "autoFix": false,
      "enabled": true
    },
    {
      "ruleId": "PacBot_IAMUserWithoutMFA_version-1",
      "ruleName": "IAM User Without MFA",
      "description": "Checks if IAM users have MFA enabled",
      "severity": "medium",
      "category": "security",
      "targetType": "iam",
      "assetGroup": "aws-all",
      "ruleParams": {
        "excludeServiceAccounts": true,
        "excludeUsers": ["emergency-access-user"]
      },
      "ruleFrequency": "daily",
      "autoFix": false,
      "enabled": true
    },
    {
      "ruleId": "PacBot_RDSInstancePublic_version-1",
      "ruleName": "RDS Instance Public Access",
      "description": "Checks if RDS instances are publicly accessible",
      "severity": "high",
      "category": "security",
      "targetType": "rds",
      "assetGroup": "aws-all",
      "ruleParams": {
        "checkPubliclyAccessible": true
      },
      "ruleFrequency": "daily",
      "autoFix": false,
      "enabled": true
    },
    {
      "ruleId": "PacBot_CloudTrailNotEnabled_version-1",
      "ruleName": "CloudTrail Not Enabled",
      "description": "Checks if CloudTrail is enabled in all regions",
      "severity": "high",
      "category": "compliance",
      "targetType": "cloudtrail",
      "assetGroup": "aws-all",
      "ruleParams": {
        "checkAllRegions": true,
        "checkLogFileValidation": true
      },
      "ruleFrequency": "daily",
      "autoFix": false,
      "enabled": true
    }
  ]
}
RULESCONFIG

    echo "Rules configuration completed"
}
# Function to configure asset groups
configure_asset_groups() {
    echo "Configuring asset groups..."

    cat > "$CONFIG_DIR/asset-groups.json" << 'ASSETCONFIG'
{
  "assetGroups": [
    {
      "groupId": "aws-all",
      "groupName": "AWS All Resources",
      "description": "All AWS resources across all accounts",
      "dataSource": "aws",
      "targetTypes": ["ec2", "s3", "iam", "rds", "cloudtrail", "vpc", "elb"],
      "accounts": ["*"],
      "regions": ["*"],
      "tags": {},
      "groupType": "system"
    },
    {
      "groupId": "aws-prod",
      "groupName": "AWS Production",
      "description": "Production AWS resources",
      "dataSource": "aws",
      "targetTypes": ["ec2", "s3", "iam", "rds"],
      "accounts": ["123456789012"],
      "regions": ["us-east-1", "us-west-2"],
      "tags": { "Environment": "production" },
      "groupType": "user"
    },
    {
      "groupId": "aws-dev",
      "groupName": "AWS Development",
      "description": "Development AWS resources",
      "dataSource": "aws",
      "targetTypes": ["ec2", "s3"],
      "accounts": ["123456789013"],
      "regions": ["us-east-1"],
      "tags": { "Environment": "development" },
      "groupType": "user"
    },
    {
      "groupId": "aws-security-critical",
      "groupName": "Security Critical Resources",
      "description": "Security-critical AWS resources",
      "dataSource": "aws",
      "targetTypes": ["iam", "cloudtrail", "kms"],
      "accounts": ["*"],
      "regions": ["*"],
      "tags": { "SecurityLevel": "critical" },
      "groupType": "user"
    }
  ]
}
ASSETCONFIG

    echo "Asset groups configuration completed"
}
# Main configuration function
main() {
    echo "Starting PacBot configuration..."

    configure_database
    configure_elasticsearch
    configure_aws
    configure_application
    configure_rules
    configure_asset_groups

    # Set permissions
    chmod -R 755 "$CONFIG_DIR"

    echo "PacBot configuration completed successfully"
    echo "Configuration files created in: $CONFIG_DIR"
    echo "Log files will be created in: $LOGS_DIR"
}

# Run configuration
main "$@"
EOF

chmod +x configure_pacbot.sh
./configure_pacbot.sh
```
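Since rule files like the `security-rules.json` written above are hand-edited JSON, a sanity check before loading them into PacBot catches typos early. A minimal sketch; the field names follow the JSON above, while the validation logic and severity list are our own assumptions, not a PacBot API:

```python
REQUIRED = {"ruleId", "ruleName", "severity", "targetType", "assetGroup", "ruleFrequency"}
SEVERITIES = {"low", "medium", "high", "critical"}

def validate_rules(doc):
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for i, rule in enumerate(doc.get("rules", [])):
        missing = REQUIRED - rule.keys()
        if missing:
            problems.append(f"rule {i}: missing {sorted(missing)}")
        if rule.get("severity") not in SEVERITIES:
            problems.append(f"rule {i}: bad severity {rule.get('severity')!r}")
    return problems

# Example: a rule with an unknown severity is flagged
doc = {"rules": [{"ruleId": "r1", "ruleName": "n", "severity": "urgent",
                  "targetType": "s3", "assetGroup": "aws-all", "ruleFrequency": "daily"}]}
print(validate_rules(doc))  # ["rule 0: bad severity 'urgent'"]
```

Running such a check in CI (e.g., after `json.load` on the generated file) turns malformed rule definitions into build failures instead of silent scan gaps.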
Policy as Code Implementation
```python
#!/usr/bin/env python3
# PacBot Policy as Code Implementation

import json
import logging
from datetime import datetime

import boto3
import requests
import yaml
class PacBotPolicyManager:
    """Manage PacBot policies as code"""

    def __init__(self, config_file='pacbot_config.yaml'):
        self.config = self._load_config(config_file)
        self.api_base_url = self.config.get('api_base_url', 'http://localhost:8080/api')
        self.setup_logging()

    def _load_config(self, config_file):
        """Load configuration from YAML file"""
        try:
            with open(config_file, 'r') as f:
                return yaml.safe_load(f)
        except FileNotFoundError:
            return self._create_default_config(config_file)

    def _create_default_config(self, config_file):
        """Create default configuration file"""
        default_config = {
            'api_base_url': 'http://localhost:8080/api',
            'database': {
                'host': 'localhost',
                'port': 3306,
                'database': 'pacbot',
                'username': 'pacbot',
                'password': 'pacbot123'
            },
            'aws': {
                'region': 'us-east-1',
                'accounts': ['123456789012']
            },
            'notifications': {
                'email': {
                    'enabled': True,
                    'smtp_host': 'smtp.gmail.com',
                    'smtp_port': 587,
                    'username': 'pacbot@company.com',
                    'password': 'app-password'
                },
                'slack': {
                    'enabled': False,
                    'webhook_url': 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
                }
            }
        }
        with open(config_file, 'w') as f:
            yaml.dump(default_config, f, default_flow_style=False)
        return default_config

    def setup_logging(self):
        """Setup logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('pacbot_policy_manager.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger(__name__)
    def create_policy(self, policy_definition):
        """Create a new policy in PacBot"""
        policy_data = {
            'policyId': policy_definition['id'],
            'policyName': policy_definition['name'],
            'policyDesc': policy_definition['description'],
            'resolution': policy_definition.get('resolution', ''),
            'policyUrl': policy_definition.get('documentation_url', ''),
            'policyVersion': policy_definition.get('version', '1.0'),
            'policyParams': json.dumps(policy_definition.get('parameters', {})),
            'dataSource': policy_definition.get('data_source', 'aws'),
            'targetType': policy_definition['target_type'],
            'assetGroup': policy_definition['asset_group'],
            'alexaKeyword': policy_definition.get('alexa_keyword', ''),
            'policyCategory': policy_definition.get('category', 'security'),
            'policyType': policy_definition.get('type', 'Mandatory'),
            'severity': policy_definition.get('severity', 'medium'),
            'status': 'ENABLED',
            'userId': 'policy-manager',
            'createdDate': datetime.now().isoformat(),
            'modifiedDate': datetime.now().isoformat()
        }

        try:
            response = requests.post(
                f"{self.api_base_url}/policies",
                json=policy_data,
                headers={'Content-Type': 'application/json'}
            )
            if response.status_code == 201:
                self.logger.info(f"Policy created successfully: {policy_definition['id']}")
                return True
            else:
                self.logger.error(f"Failed to create policy: {response.text}")
                return False
        except Exception as e:
            self.logger.error(f"Error creating policy: {e}")
            return False

    def create_rule(self, rule_definition):
        """Create a rule instance for a policy"""
        rule_data = {
            'ruleId': f"{rule_definition['policy_id']}_{rule_definition['target_type']}_{rule_definition['asset_group']}",
            'ruleName': rule_definition['name'],
            'targetType': rule_definition['target_type'],
            'assetGroup': rule_definition['asset_group'],
            'ruleParams': json.dumps(rule_definition.get('parameters', {})),
            'ruleFrequency': rule_definition.get('frequency', 'daily'),
            'ruleExecutable': rule_definition.get('executable', ''),
            'ruleRestUrl': rule_definition.get('rest_url', ''),
            'ruleType': rule_definition.get('type', 'Mandatory'),
            'status': 'ENABLED',
            'userId': 'policy-manager',
            'displayName': rule_definition.get('display_name', rule_definition['name']),
            'createdDate': datetime.now().isoformat(),
            'modifiedDate': datetime.now().isoformat()
        }

        try:
            response = requests.post(
                f"{self.api_base_url}/rules",
                json=rule_data,
                headers={'Content-Type': 'application/json'}
            )
            if response.status_code == 201:
                self.logger.info(f"Rule created successfully: {rule_data['ruleId']}")
                return True
            else:
                self.logger.error(f"Failed to create rule: {response.text}")
                return False
        except Exception as e:
            self.logger.error(f"Error creating rule: {e}")
            return False
def load_policies_from_file(self, policies_file):
"""Load policies from YAML file"""
try:
with open(policies_file, 'r') as f:
policies_data = yaml.safe_load(f)
policies_created = 0
rules_created = 0
for policy_def in policies_data.get('policies', []):
# Create policy
if self.create_policy(policy_def):
policies_created += 1
# Create associated rules
for rule_def in policy_def.get('rules', []):
rule_def['policy_id'] = policy_def['id']
if self.create_rule(rule_def):
rules_created += 1
self.logger.info(f"Loaded {policies_created} policies and {rules_created} rules")
return True
except Exception as e:
self.logger.error(f"Error loading policies from file: {e}")
return False
def validate_policy(self, policy_definition):
"""Validate policy definition"""
required_fields = ['id', 'name', 'description', 'target_type', 'asset_group']
for field in required_fields:
if field not in policy_definition:
self.logger.error(f"Missing required field: {field}")
return False
# Validate severity
valid_severities = ['low', 'medium', 'high', 'critical']
if policy_definition.get('severity', 'medium') not in valid_severities:
self.logger.error(f"Invalid severity: {policy_definition.get('severity')}")
return False
# Validate target type
valid_target_types = ['ec2', 's3', 'iam', 'rds', 'vpc', 'elb', 'cloudtrail', 'kms']
if policy_definition['target_type'] not in valid_target_types:
self.logger.error(f"Invalid target type: {policy_definition['target_type']}")
return False
return True
def generate_policy_template(self, policy_type='security'):
"""Generate policy template"""
template = {
'policies': [
{
'id': 'PacBot_ExamplePolicy_version-1',
'name': 'Example Security Policy',
'description': 'Example policy description',
'resolution': 'Steps to resolve the issue',
'documentation_url': 'https://docs.aws.amazon.com/example',
'version': '1.0',
'data_source': 'aws',
'target_type': 's3',
'asset_group': 'aws-all',
'category': 'security',
'type': 'Mandatory',
'severity': 'high',
'parameters': {
'checkPublicAccess': True,
'excludeBuckets': []
},
'rules': [
{
'name': 'S3 Bucket Security Check',
'target_type': 's3',
'asset_group': 'aws-all',
'frequency': 'daily',
'type': 'Mandatory',
'executable': 'com.tmobile.pacbot.aws.s3.S3BucketSecurityRule',
'parameters': {
'severity': 'high',
'category': 'security'
}
}
]
}
]
}
return template
def export_policies(self, output_file='pacbot_policies_export.yaml'):
"""Export existing policies to YAML file"""
try:
# Get policies from API
response = requests.get(f"{self.api_base_url}/policies")
if response.status_code == 200:
policies_data = response.json()
# Convert to YAML format
export_data = {
'export_date': datetime.now().isoformat(),
'policies': []
}
for policy in policies_data:
policy_export = {
'id': policy['policyId'],
'name': policy['policyName'],
'description': policy['policyDesc'],
'resolution': policy.get('resolution', ''),
'documentation_url': policy.get('policyUrl', ''),
'version': policy.get('policyVersion', '1.0'),
'data_source': policy.get('dataSource', 'aws'),
'target_type': policy['targetType'],
'asset_group': policy['assetGroup'],
'category': policy.get('policyCategory', 'security'),
'type': policy.get('policyType', 'Mandatory'),
'severity': policy.get('severity', 'medium'),
'status': policy.get('status', 'ENABLED'),
'parameters': json.loads(policy.get('policyParams', '{}'))
}
export_data['policies'].append(policy_export)
with open(output_file, 'w') as f:
yaml.dump(export_data, f, default_flow_style=False)
self.logger.info(f"Policies exported to: {output_file}")
return True
else:
self.logger.error(f"Failed to get policies: {response.text}")
return False
except Exception as e:
self.logger.error(f"Error exporting policies: {e}")
return False
def sync_policies_with_git(self, git_repo_url, branch='main'):
"""Sync policies with Git repository"""
import git
import tempfile
import shutil
try:
# Clone repository to temporary directory
with tempfile.TemporaryDirectory() as temp_dir:
repo = git.Repo.clone_from(git_repo_url, temp_dir)
repo.git.checkout(branch)
# Load policies from repository
policies_file = f"{temp_dir}/policies.yaml"
if os.path.exists(policies_file):
self.load_policies_from_file(policies_file)
self.logger.info(f"Synced policies from Git repository: {git_repo_url}")
return True
else:
self.logger.error(f"Policies file not found in repository: {policies_file}")
return False
except Exception as e:
self.logger.error(f"Error syncing with Git repository: {e}")
return False
def main(): """Main function for policy management"""
import argparse
parser = argparse.ArgumentParser(description='PacBot Policy Manager')
parser.add_argument('--action', choices=['create', 'load', 'export', 'template', 'sync'],
required=True, help='Action to perform')
parser.add_argument('--file', help='Policy file path')
parser.add_argument('--output', help='Output file path')
parser.add_argument('--git-repo', help='Git repository URL for sync')
parser.add_argument('--config', default='pacbot_config.yaml', help='Configuration file')
args = parser.parse_args()
manager = PacBotPolicyManager(args.config)
if args.action == 'template':
template = manager.generate_policy_template()
output_file = args.output or 'policy_template.yaml'
with open(output_file, 'w') as f:
yaml.dump(template, f, default_flow_style=False)
print(f"Policy template generated: {output_file}")
elif args.action == 'load' and args.file:
if manager.load_policies_from_file(args.file):
print(f"Policies loaded successfully from: {args.file}")
else:
print(f"Failed to load policies from: {args.file}")
elif args.action == 'export':
output_file = args.output or 'pacbot_policies_export.yaml'
if manager.export_policies(output_file):
print(f"Policies exported to: {output_file}")
else:
print("Failed to export policies")
elif args.action == 'sync' and args.git_repo:
if manager.sync_policies_with_git(args.git_repo):
print(f"Policies synced from Git repository: {args.git_repo}")
else:
print(f"Failed to sync policies from Git repository: {args.git_repo}")
else:
parser.print_help()
if __name__ == "__main__":
    main()
```
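The `validate_policy` checks above (required fields, allowed severities, allowed target types) can be exercised standalone. A minimal sketch mirroring that logic — the function name `validate_policy_definition` is illustrative, not part of PacBot's API:

```python
# Standalone sketch of the validate_policy checks; field names and
# allowed values are taken from the script above.
VALID_SEVERITIES = {'low', 'medium', 'high', 'critical'}
VALID_TARGET_TYPES = {'ec2', 's3', 'iam', 'rds', 'vpc', 'elb', 'cloudtrail', 'kms'}
REQUIRED_FIELDS = ('id', 'name', 'description', 'target_type', 'asset_group')

def validate_policy_definition(policy):
    """Return (ok, error_message) for a policy definition dict."""
    for field in REQUIRED_FIELDS:
        if field not in policy:
            return False, f"Missing required field: {field}"
    if policy.get('severity', 'medium') not in VALID_SEVERITIES:
        return False, f"Invalid severity: {policy.get('severity')}"
    if policy['target_type'] not in VALID_TARGET_TYPES:
        return False, f"Invalid target type: {policy['target_type']}"
    return True, None

ok, err = validate_policy_definition({
    'id': 'PacBot_ExamplePolicy_version-1',
    'name': 'Example Security Policy',
    'description': 'Example policy description',
    'target_type': 's3',
    'asset_group': 'aws-all',
    'severity': 'high',
})
print(ok, err)  # → True None
```

Running this kind of validation before `create_policy` avoids round-trips to the API for definitions that would be rejected anyway.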
Monitoring and Reporting
Compliance Monitoring
```python
#!/usr/bin/env python3
# PacBot Compliance Monitoring and Reporting
import json
import logging
import smtplib
from datetime import datetime, timedelta
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import elasticsearch
import matplotlib.pyplot as plt
import mysql.connector
import pandas as pd
import seaborn as sns
class PacBotComplianceMonitor:
    """Monitor and report on compliance status"""
def __init__(self, config):
self.config = config
self.setup_logging()
self.setup_connections()
def setup_logging(self):
"""Setup logging configuration"""
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('pacbot_compliance_monitor.log'),
logging.StreamHandler()
]
)
self.logger = logging.getLogger(__name__)
def setup_connections(self):
"""Setup database and Elasticsearch connections"""
# MySQL connection
try:
self.db_connection = mysql.connector.connect(
host=self.config['database']['host'],
port=self.config['database']['port'],
database=self.config['database']['database'],
user=self.config['database']['username'],
password=self.config['database']['password']
)
self.logger.info("Database connection established")
except Exception as e:
self.logger.error(f"Failed to connect to database: {e}")
self.db_connection = None
# Elasticsearch connection
try:
self.es_client = elasticsearch.Elasticsearch([
f"http://{self.config['elasticsearch']['host']}:{self.config['elasticsearch']['port']}"
])
self.logger.info("Elasticsearch connection established")
except Exception as e:
self.logger.error(f"Failed to connect to Elasticsearch: {e}")
self.es_client = None
def get_compliance_summary(self, asset_group='aws-all', days=30):
"""Get compliance summary for asset group"""
if not self.db_connection:
return None
try:
cursor = self.db_connection.cursor(dictionary=True)
# Get total assets
cursor.execute("""
SELECT COUNT(*) as total_assets
FROM cf_AssetGroupDetails
WHERE groupId = %s
""", (asset_group,))
total_assets = cursor.fetchone()['total_assets']
# Get compliance violations
end_date = datetime.now()
start_date = end_date - timedelta(days=days)
cursor.execute("""
SELECT
p.severity,
COUNT(*) as violation_count,
COUNT(DISTINCT r.resourceId) as affected_resources
FROM cf_PolicyTable p
JOIN cf_RuleInstance ri ON p.policyId = ri.ruleId
JOIN cf_PolicyViolations pv ON ri.ruleId = pv.ruleId
JOIN cf_Resources r ON pv.resourceId = r.resourceId
WHERE ri.assetGroup = %s
AND pv.createdDate BETWEEN %s AND %s
GROUP BY p.severity
""", (asset_group, start_date, end_date))
violations_by_severity = cursor.fetchall()
# Calculate compliance score
total_violations = sum(v['violation_count'] for v in violations_by_severity)
compliance_score = max(0, 100 - (total_violations / max(total_assets, 1) * 100))
summary = {
'asset_group': asset_group,
'period_days': days,
'total_assets': total_assets,
'total_violations': total_violations,
'compliance_score': round(compliance_score, 2),
'violations_by_severity': violations_by_severity,
'generated_at': datetime.now().isoformat()
}
cursor.close()
return summary
except Exception as e:
self.logger.error(f"Error getting compliance summary: {e}")
return None
def get_policy_compliance_details(self, asset_group='aws-all', days=30):
"""Get detailed compliance information by policy"""
if not self.db_connection:
return None
try:
cursor = self.db_connection.cursor(dictionary=True)
end_date = datetime.now()
start_date = end_date - timedelta(days=days)
cursor.execute("""
SELECT
p.policyId,
p.policyName,
p.policyCategory,
p.severity,
COUNT(pv.violationId) as violation_count,
COUNT(DISTINCT pv.resourceId) as affected_resources,
AVG(CASE WHEN pv.status = 'open' THEN 1 ELSE 0 END) * 100 as open_percentage
FROM cf_PolicyTable p
JOIN cf_RuleInstance ri ON p.policyId = ri.ruleId
LEFT JOIN cf_PolicyViolations pv ON ri.ruleId = pv.ruleId
AND pv.createdDate BETWEEN %s AND %s
WHERE ri.assetGroup = %s
GROUP BY p.policyId, p.policyName, p.policyCategory, p.severity
ORDER BY violation_count DESC
""", (start_date, end_date, asset_group))
policy_details = cursor.fetchall()
cursor.close()
return policy_details
except Exception as e:
self.logger.error(f"Error getting policy compliance details: {e}")
return None
def get_trend_analysis(self, asset_group='aws-all', days=90):
"""Get compliance trend analysis"""
if not self.db_connection:
return None
try:
cursor = self.db_connection.cursor(dictionary=True)
end_date = datetime.now()
start_date = end_date - timedelta(days=days)
# Get daily violation counts
cursor.execute("""
SELECT
DATE(pv.createdDate) as violation_date,
p.severity,
COUNT(*) as violation_count
FROM cf_PolicyViolations pv
JOIN cf_RuleInstance ri ON pv.ruleId = ri.ruleId
JOIN cf_PolicyTable p ON ri.ruleId = p.policyId
WHERE ri.assetGroup = %s
AND pv.createdDate BETWEEN %s AND %s
GROUP BY DATE(pv.createdDate), p.severity
ORDER BY violation_date
""", (asset_group, start_date, end_date))
trend_data = cursor.fetchall()
cursor.close()
# Process trend data
df = pd.DataFrame(trend_data)
if not df.empty:
df['violation_date'] = pd.to_datetime(df['violation_date'])
trend_summary = df.pivot_table(
index='violation_date',
columns='severity',
values='violation_count',
fill_value=0
)
return trend_summary.to_dict('index')
return {}
except Exception as e:
self.logger.error(f"Error getting trend analysis: {e}")
return None
def generate_compliance_report(self, asset_group='aws-all', days=30):
"""Generate comprehensive compliance report"""
# Get compliance data
summary = self.get_compliance_summary(asset_group, days)
policy_details = self.get_policy_compliance_details(asset_group, days)
trend_data = self.get_trend_analysis(asset_group, days)
if not summary:
self.logger.error("Failed to generate compliance report")
return None
# Generate report
report = {
'report_metadata': {
'generated_at': datetime.now().isoformat(),
'asset_group': asset_group,
'period_days': days,
'report_type': 'compliance_summary'
},
'executive_summary': summary,
'policy_details': policy_details or [],
'trend_analysis': trend_data or {},
'recommendations': self._generate_recommendations(summary, policy_details)
}
return report
def _generate_recommendations(self, summary, policy_details):
"""Generate recommendations based on compliance data"""
recommendations = []
# High-level recommendations based on compliance score
compliance_score = summary.get('compliance_score', 0)
if compliance_score < 70:
recommendations.append({
'priority': 'HIGH',
'title': 'Critical Compliance Issues',
'description': f'Compliance score is {compliance_score}%. Immediate action required.',
'action': 'Review and remediate critical and high severity violations'
})
elif compliance_score < 85:
recommendations.append({
'priority': 'MEDIUM',
'title': 'Compliance Improvement Needed',
'description': f'Compliance score is {compliance_score}%. Room for improvement.',
'action': 'Focus on high and medium severity violations'
})
# Policy-specific recommendations
if policy_details:
# Find top violating policies
top_violations = sorted(policy_details, key=lambda x: x['violation_count'], reverse=True)[:5]
for policy in top_violations:
if policy['violation_count'] > 10:
recommendations.append({
'priority': 'HIGH' if policy['severity'] in ['critical', 'high'] else 'MEDIUM',
'title': f"Address {policy['policyName']} Violations",
'description': f"{policy['violation_count']} violations affecting {policy['affected_resources']} resources",
'action': f"Review and remediate {policy['policyName']} policy violations"
})
return recommendations
def create_compliance_dashboard(self, report_data, output_file='compliance_dashboard.png'):
"""Create compliance dashboard visualization"""
try:
# Set up the plotting style
plt.style.use('seaborn-v0_8')
fig, axes = plt.subplots(2, 3, figsize=(18, 12))
summary = report_data['executive_summary']
policy_details = report_data['policy_details']
# 1. Compliance Score Gauge
compliance_score = summary['compliance_score']
colors = ['#e74c3c' if compliance_score < 70 else '#f39c12' if compliance_score < 85 else '#27ae60']
axes[0, 0].pie([compliance_score, 100-compliance_score],
labels=[f'{compliance_score}%', ''],
colors=[colors[0], '#ecf0f1'],
startangle=90)
axes[0, 0].set_title('Compliance Score')
# 2. Violations by Severity
if summary['violations_by_severity']:
severities = [v['severity'] for v in summary['violations_by_severity']]
counts = [v['violation_count'] for v in summary['violations_by_severity']]
severity_colors = {'critical': '#e74c3c', 'high': '#f39c12', 'medium': '#f1c40f', 'low': '#3498db'}
colors = [severity_colors.get(s, '#95a5a6') for s in severities]
axes[0, 1].bar(severities, counts, color=colors)
axes[0, 1].set_title('Violations by Severity')
axes[0, 1].set_ylabel('Number of Violations')
# 3. Top Violating Policies
if policy_details:
top_policies = sorted(policy_details, key=lambda x: x['violation_count'], reverse=True)[:10]
policy_names = [p['policyName'][:30] + '...' if len(p['policyName']) > 30 else p['policyName']
for p in top_policies]
violation_counts = [p['violation_count'] for p in top_policies]
axes[0, 2].barh(policy_names, violation_counts, color='#e74c3c')
axes[0, 2].set_title('Top 10 Violating Policies')
axes[0, 2].set_xlabel('Violation Count')
# 4. Policy Categories
if policy_details:
category_counts = {}
for policy in policy_details:
category = policy['policyCategory']
category_counts[category] = category_counts.get(category, 0) + policy['violation_count']
if category_counts:
axes[1, 0].pie(category_counts.values(), labels=category_counts.keys(), autopct='%1.1f%%')
axes[1, 0].set_title('Violations by Policy Category')
# 5. Severity Distribution
if policy_details:
severity_dist = {}
for policy in policy_details:
severity = policy['severity']
severity_dist[severity] = severity_dist.get(severity, 0) + policy['violation_count']
if severity_dist:
severities = list(severity_dist.keys())
counts = list(severity_dist.values())
severity_colors = {'critical': '#e74c3c', 'high': '#f39c12', 'medium': '#f1c40f', 'low': '#3498db'}  # redefined here so this block does not depend on the earlier chart having run
colors = [severity_colors.get(s, '#95a5a6') for s in severities]
axes[1, 1].bar(severities, counts, color=colors)
axes[1, 1].set_title('Total Violations by Severity')
axes[1, 1].set_ylabel('Total Violations')
# 6. Compliance Metrics Summary
metrics_text = f"""
Asset Group: {summary['asset_group']}
Total Assets: {summary['total_assets']}
Total Violations: {summary['total_violations']}
Compliance Score: {summary['compliance_score']}%
Period: {summary['period_days']} days
Generated: {summary['generated_at'][:10]}
"""
axes[1, 2].text(0.1, 0.5, metrics_text, fontsize=12, verticalalignment='center')
axes[1, 2].set_xlim(0, 1)
axes[1, 2].set_ylim(0, 1)
axes[1, 2].axis('off')
axes[1, 2].set_title('Compliance Metrics')
plt.tight_layout()
plt.savefig(output_file, dpi=300, bbox_inches='tight')
plt.close()
self.logger.info(f"Compliance dashboard created: {output_file}")
return output_file
except Exception as e:
self.logger.error(f"Error creating compliance dashboard: {e}")
return None
def send_compliance_report(self, report_data, dashboard_file=None):
"""Send compliance report via email"""
if not self.config.get('notifications', {}).get('email', {}).get('enabled', False):
self.logger.info("Email notifications disabled")
return False
try:
email_config = self.config['notifications']['email']
# Create email message
msg = MIMEMultipart()
msg['From'] = email_config['username']
msg['To'] = email_config.get('recipients', 'admin@company.com')
msg['Subject'] = f"PacBot Compliance Report - {report_data['report_metadata']['asset_group']}"
# Create email body
summary = report_data['executive_summary']
body = f"""
PacBot Compliance Report
Asset Group: {summary['asset_group']}
Compliance Score: {summary['compliance_score']}%
Total Violations: {summary['total_violations']}
Period: {summary['period_days']} days
Recommendations:
"""
for rec in report_data['recommendations'][:5]:
body += f"\n- {rec['title']}: {rec['description']}"
body += f"\n\nGenerated at: {report_data['report_metadata']['generated_at']}"
msg.attach(MIMEText(body, 'plain'))
# Attach dashboard if available
if dashboard_file:
with open(dashboard_file, 'rb') as attachment:
part = MIMEBase('application', 'octet-stream')
part.set_payload(attachment.read())
encoders.encode_base64(part)
part.add_header(
'Content-Disposition',
f'attachment; filename={dashboard_file}'
)
msg.attach(part)
# Send email
server = smtplib.SMTP(email_config['smtp_host'], email_config['smtp_port'])
server.starttls()
server.login(email_config['username'], email_config['password'])
server.send_message(msg)
server.quit()
self.logger.info("Compliance report sent via email")
return True
except Exception as e:
self.logger.error(f"Error sending compliance report: {e}")
return False
def run_compliance_monitoring(self, asset_groups=None, days=30):
"""Run complete compliance monitoring workflow"""
if not asset_groups:
asset_groups = ['aws-all', 'aws-prod', 'aws-dev']
for asset_group in asset_groups:
self.logger.info(f"Generating compliance report for: {asset_group}")
# Generate report
report = self.generate_compliance_report(asset_group, days)
if not report:
continue
# Save report to file
report_file = f"compliance_report_{asset_group}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
with open(report_file, 'w') as f:
json.dump(report, f, indent=2)
# Create dashboard
dashboard_file = f"compliance_dashboard_{asset_group}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png"
self.create_compliance_dashboard(report, dashboard_file)
# Send report
self.send_compliance_report(report, dashboard_file)
self.logger.info(f"Compliance monitoring completed for: {asset_group}")
def main(): """Main function for compliance monitoring"""
import argparse
parser = argparse.ArgumentParser(description='PacBot Compliance Monitor')
parser.add_argument('--config', default='pacbot_config.yaml', help='Configuration file')
parser.add_argument('--asset-groups', nargs='+', help='Asset groups to monitor')
parser.add_argument('--days', type=int, default=30, help='Number of days to analyze')
parser.add_argument('--output-dir', default='.', help='Output directory for reports')
args = parser.parse_args()
# Load configuration
import yaml
with open(args.config, 'r') as f:
config = yaml.safe_load(f)
# Run compliance monitoring
monitor = PacBotComplianceMonitor(config)
monitor.run_compliance_monitoring(args.asset_groups, args.days)
if __name__ == "__main__":
    main()
```
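The compliance score used by `get_compliance_summary` is a simple violations-per-asset penalty, clamped at zero. The formula in isolation, with made-up numbers:

```python
# Compliance score as computed in get_compliance_summary above:
# 100 minus violations-per-asset (as a percentage), floored at 0.
def compliance_score(total_violations, total_assets):
    return round(max(0, 100 - (total_violations / max(total_assets, 1) * 100)), 2)

# 40 violations across 500 assets -> 8% penalty -> score 92.0
print(compliance_score(40, 500))   # → 92.0
# More violations than assets clamps to 0 rather than going negative
print(compliance_score(700, 500))  # → 0
```

Note that the `max(total_assets, 1)` guard prevents a division-by-zero for empty asset groups, and the outer `max(0, ...)` keeps the score in the 0-100 range.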
Automation and Integration
CI/CD Integration
```yaml
# .github/workflows/pacbot-compliance-check.yml
name: PacBot Compliance Check
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
  schedule:
    # Run daily at 6 AM UTC
    - cron: '0 6 * * *'
  workflow_dispatch:
    inputs:
      asset_group:
        description: 'Asset group to check'
        required: false
        default: 'aws-all'
        type: choice
        options:
          - aws-all
          - aws-prod
          - aws-dev
jobs:
  pacbot-compliance:
    runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Install dependencies
run: |
pip install requests pyyaml mysql-connector-python elasticsearch pandas matplotlib seaborn
- name: Setup AWS credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Run PacBot compliance check
env:
PACBOT_API_URL: ${{ secrets.PACBOT_API_URL }}
PACBOT_DB_HOST: ${{ secrets.PACBOT_DB_HOST }}
PACBOT_DB_PASSWORD: ${{ secrets.PACBOT_DB_PASSWORD }}
run: |
# Create configuration file
cat > pacbot_config.yaml << EOF
api_base_url: ${PACBOT_API_URL}
database:
host: ${PACBOT_DB_HOST}
port: 3306
database: pacbot
username: pacbot
password: ${PACBOT_DB_PASSWORD}
elasticsearch:
host: ${PACBOT_DB_HOST}
port: 9200
notifications:
email:
enabled: false
EOF
# Run compliance check
python scripts/pacbot_compliance_monitor.py \
--config pacbot_config.yaml \
--asset-groups ${{ github.event.inputs.asset_group || 'aws-all' }} \
--days 7 \
--output-dir compliance-results
- name: Evaluate compliance gate
run: |
python << 'EOF'
import json
import sys
import glob
# Find compliance report
report_files = glob.glob('compliance-results/compliance_report_*.json')
if not report_files:
print("No compliance report found")
sys.exit(0)
with open(report_files[0], 'r') as f:
report = json.load(f)
summary = report['executive_summary']
compliance_score = summary['compliance_score']
total_violations = summary['total_violations']
print(f"Compliance Assessment Results: ")
print(f"Compliance Score: {compliance_score}%")
print(f"Total Violations: {total_violations}")
# Compliance gate logic
if compliance_score < 70:
print("❌ COMPLIANCE FAILURE!")
print("Compliance score below acceptable threshold (70%)")
sys.exit(1)
if total_violations > 50:
print("⚠️ WARNING: High number of violations!")
sys.exit(1)
print("✅ Compliance gate passed")
EOF
- name: Upload compliance results
uses: actions/upload-artifact@v3
with:
name: pacbot-compliance-results
path: compliance-results/
- name: Comment PR with compliance status
if: github.event_name == 'pull_request'
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');
// Find compliance report (the 'glob' npm package is not installed by
// default in github-script, so list the directory instead)
const reportFiles = fs.existsSync('compliance-results')
  ? fs.readdirSync('compliance-results')
      .filter(f => f.startsWith('compliance_report_') && f.endsWith('.json'))
      .map(f => `compliance-results/${f}`)
  : [];
if (reportFiles.length === 0) {
console.log('No compliance report found');
return;
}
const report = JSON.parse(fs.readFileSync(reportFiles[0], 'utf8'));
const summary = report.executive_summary;
const comment = `## 🔒 PacBot Compliance Check Results
**Compliance Score:** ${summary.compliance_score}%
**Total Violations:** ${summary.total_violations}
**Asset Group:** ${summary.asset_group}
**Violations by Severity:**
${summary.violations_by_severity.map(v =>
`- ${v.severity}: ${v.violation_count} violations`
).join('\n')}
**Top Recommendations:**
${report.recommendations.slice(0, 3).map(r =>
`- **${r.title}**: ${r.description}`
).join('\n')}
${summary.compliance_score < 70 ? '⚠️ **Compliance score below threshold! Please review and remediate violations.**' : '✅ Compliance check passed.'}
[View detailed report](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
```
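The compliance gate in the workflow fails the build when the score drops below 70% or the violation count exceeds 50. The same decision can be factored into a small helper for local use — thresholds are copied from the workflow step; the function name is illustrative:

```python
# Compliance-gate decision mirroring the workflow step above:
# fail below a 70% score or above 50 total violations.
def compliance_gate(score, violations, min_score=70, max_violations=50):
    """Return True if the build should pass the compliance gate."""
    if score < min_score:
        return False
    if violations > max_violations:
        return False
    return True

print(compliance_gate(92.0, 10))  # → True
print(compliance_gate(65.0, 5))   # → False (score below threshold)
print(compliance_gate(90.0, 60))  # → False (too many violations)
```

Keeping the thresholds as parameters makes it easy to run a stricter gate on `main` than on feature branches.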
Resources and Documentation
Official Resources
- PacBot GitHub Repository - Source code and documentation
- PacBot Wiki - Comprehensive setup and configuration guides
- T-Mobile Open Source - T-Mobile's open source initiatives
- PacBot Architecture Guide - System architecture documentation
AWS Integration Resources
- AWS Config Rules - AWS Config integration
- AWS Security Best Practices - AWS security guidelines
- AWS Well-Architected Framework - AWS architecture best practices
- AWS Compliance Programs - AWS compliance frameworks
Policy and Compliance Resources
- NIST Cybersecurity Framework - NIST security framework
- CIS Controls - Center for Internet Security controls
- SOC 2 Compliance - SOC 2 compliance framework
- PCI DSS Requirements - Payment Card Industry security standards