PlexTrac Cheat Sheet

Overview

PlexTrac is a comprehensive penetration testing reporting and security management platform that streamlines security assessment workflows, finding prioritization, and remediation tracking. It gives security teams centralized management to collaborate on assessments, track remediation progress, and produce professional reports for stakeholders.

Note: Commercial SaaS platform. A free trial is available; contact PlexTrac for enterprise pricing.

Getting Started

Account Setup and Initial Configuration

```bash
# Access the PlexTrac platform:
# 1. Navigate to https://app.plextrac.com
# 2. Sign up for an account or log in
# 3. Complete the organization setup
# 4. Configure user roles and permissions
# 5. Set up the initial client and project structure

# Initial organization configuration:
# - Organization name and details
# - Default report templates
# - User roles and permissions
# - Integration settings
# - Notification preferences
```

User Management and Roles

```bash
# Default user roles in PlexTrac:
# - Admin:   Full system access and configuration
# - Manager: Project management and team oversight
# - Analyst: Create and edit findings, generate reports
# - Viewer:  Read-only access to assigned projects
# - Client:  Limited access to assigned reports and findings

# Role-based permissions cover:
# - Project creation and management
# - Finding creation and editing
# - Report generation and customization
# - Client communication and collaboration
# - System configuration and settings
```
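The role model above can be sketched as a simple lookup table. The permission names below are illustrative placeholders, not PlexTrac's internal identifiers:

```python
# Hypothetical sketch of the role/permission model described above;
# the actual permission identifiers used by PlexTrac may differ.
ROLE_PERMISSIONS = {
    "admin":   {"projects", "findings", "reports", "clients", "settings"},
    "manager": {"projects", "findings", "reports", "clients"},
    "analyst": {"findings", "reports"},
    "viewer":  {"read"},
    "client":  {"read"},
}

def can(role: str, permission: str) -> bool:
    """Return True if the given role grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```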

API Access and Authentication

```bash
# Generate an API key:
# 1. Navigate to User Settings > API Keys
# 2. Click "Generate New API Key"
# 3. Set expiration and permissions
# 4. Copy and securely store the key

# API authentication
export PLEXTRAC_API_KEY="your_api_key_here"
export PLEXTRAC_BASE_URL="https://app.plextrac.com/api/v1"

# Test API connectivity
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/user/me"
```
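For scripted access, the same credentials can be read from the environment variables exported above. A minimal Python sketch for building the request headers:

```python
import os

def plextrac_headers() -> dict:
    """Build PlexTrac API request headers from the PLEXTRAC_API_KEY
    environment variable, as exported in the shell snippet above."""
    api_key = os.environ["PLEXTRAC_API_KEY"]
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```

These headers can then be passed to any HTTP client (e.g. `requests.get(url, headers=plextrac_headers())`).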

Project and Client Management

Client Setup and Configuration

```bash
# Create a new client via the API
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Example Corporation",
    "description": "Fortune 500 technology company",
    "industry": "Technology",
    "contact_email": "security@example.com",
    "contact_phone": "+1-555-123-4567",
    "address": {
      "street": "123 Business Ave",
      "city": "San Francisco",
      "state": "CA",
      "zip": "94105",
      "country": "USA"
    },
    "tags": ["enterprise", "technology", "public"]
  }' \
  "$PLEXTRAC_BASE_URL/clients"

# Update client information
curl -X PUT \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Example Corporation Inc.",
    "description": "Updated company description",
    "contact_email": "newsecurity@example.com"
  }' \
  "$PLEXTRAC_BASE_URL/clients/12345"

# List all clients
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/clients"
```

Project Setup and Management

```bash
# Create a new penetration testing project
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Q1 2024 External Penetration Test",
    "description": "Quarterly external network and web application assessment",
    "client_id": "12345",
    "project_type": "penetration_test",
    "methodology": "OWASP",
    "start_date": "2024-01-15",
    "end_date": "2024-01-26",
    "scope": {
      "in_scope": [
        "example.com",
        "app.example.com",
        "api.example.com",
        "192.168.1.0/24"
      ],
      "out_of_scope": [
        "internal.example.com",
        "dev.example.com"
      ]
    },
    "team_members": [
      "analyst1@company.com",
      "analyst2@company.com"
    ],
    "tags": ["external", "quarterly", "web-app", "network"]
  }' \
  "$PLEXTRAC_BASE_URL/projects"

# Update project status
curl -X PATCH \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "in_progress",
    "completion_percentage": 45
  }' \
  "$PLEXTRAC_BASE_URL/projects/67890"

# Add a team member to the project
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_email": "newanalyst@company.com",
    "role": "analyst"
  }' \
  "$PLEXTRAC_BASE_URL/projects/67890/team"
```

Project Templates and Methodologies

```json
{
  "project_templates": {
    "external_pentest": {
      "name": "External Penetration Test",
      "phases": [
        "reconnaissance",
        "scanning_enumeration",
        "vulnerability_assessment",
        "exploitation",
        "post_exploitation",
        "reporting"
      ],
      "default_findings": [
        "information_disclosure",
        "ssl_tls_configuration",
        "security_headers",
        "default_credentials"
      ],
      "report_template": "external_pentest_template"
    },
    "web_app_assessment": {
      "name": "Web Application Security Assessment",
      "phases": [
        "application_mapping",
        "authentication_testing",
        "session_management",
        "input_validation",
        "business_logic",
        "client_side_testing"
      ],
      "methodology": "OWASP_WSTG",
      "report_template": "web_app_template"
    },
    "cloud_security_review": {
      "name": "Cloud Security Assessment",
      "phases": [
        "configuration_review",
        "identity_access_management",
        "network_security",
        "data_protection",
        "monitoring_logging",
        "compliance_validation"
      ],
      "frameworks": ["CIS", "NIST", "CSA_CCM"],
      "report_template": "cloud_security_template"
    }
  }
}
```
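A script can pick the default report template for a project type from this configuration. A minimal sketch using a trimmed copy of the mapping above (the `generic_template` fallback name is an assumption):

```python
import json

# Trimmed-down copy of the project template configuration above.
TEMPLATES = json.loads("""
{
  "external_pentest": {"report_template": "external_pentest_template"},
  "web_app_assessment": {"report_template": "web_app_template"},
  "cloud_security_review": {"report_template": "cloud_security_template"}
}
""")

def report_template_for(project_type: str) -> str:
    """Look up the default report template for a project type,
    falling back to a generic template (assumed name) if unknown."""
    entry = TEMPLATES.get(project_type)
    return entry["report_template"] if entry else "generic_template"
```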

Findings Management

Creating and Managing Findings

```bash
# Create a new security finding
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "SQL Injection in Login Form",
    "description": "The login form is vulnerable to SQL injection attacks, allowing attackers to bypass authentication and potentially extract sensitive data from the database.",
    "severity": "high",
    "cvss_score": 8.1,
    "cvss_vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N",
    "affected_assets": [
      "https://app.example.com/login.php",
      "Database server (10.0.1.100)"
    ],
    "vulnerability_type": "injection",
    "cwe_id": "CWE-89",
    "owasp_category": "A03:2021 - Injection",
    "proof_of_concept": {
      "steps": [
        "Navigate to https://app.example.com/login.php",
        "Enter the following in the username field: admin'\''--",
        "Enter any value in the password field",
        "Click Login button",
        "Observe successful authentication bypass"
      ],
      "screenshots": ["poc_screenshot_1.png", "poc_screenshot_2.png"],
      "request_response": "POST /login.php HTTP/1.1..."
    },
    "impact": "An attacker could bypass authentication, access unauthorized data, modify database contents, or potentially gain administrative access to the application.",
    "remediation": {
      "short_term": "Implement input validation and parameterized queries",
      "long_term": "Conduct comprehensive code review and implement secure coding practices",
      "references": [
        "https://owasp.org/www-community/attacks/SQL_Injection",
        "https://cwe.mitre.org/data/definitions/89.html"
      ]
    },
    "status": "open",
    "assigned_to": "dev-team@example.com",
    "due_date": "2024-02-15"
  }' \
  "$PLEXTRAC_BASE_URL/projects/67890/findings"

# Update finding status
curl -X PATCH \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "in_remediation",
    "remediation_notes": "Development team has implemented parameterized queries. Testing in progress.",
    "updated_by": "analyst@company.com"
  }' \
  "$PLEXTRAC_BASE_URL/findings/12345"

# Add evidence to a finding
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -F "file=@screenshot.png" \
  -F "description=SQL injection proof of concept" \
  -F "evidence_type=screenshot" \
  "$PLEXTRAC_BASE_URL/findings/12345/evidence"
```
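When setting the `severity` and `cvss_score` fields together, the CVSS v3.1 qualitative rating bands can keep them consistent. A small helper; the lowercase severity strings match those used in the requests above:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative rating band,
    per the CVSS v3.1 specification (0.0 None, 0.1-3.9 Low,
    4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical)."""
    if score == 0.0:
        return "none"
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    if score < 9.0:
        return "high"
    return "critical"
```

For example, the 8.1 score in the finding above falls in the "high" band, matching its `severity` field.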

Finding Templates and Standardization

```json
{
  "finding_templates": {
    "sql_injection": {
      "title": "SQL Injection Vulnerability",
      "description_template": "The {affected_component} is vulnerable to SQL injection attacks through the {parameter} parameter. This vulnerability allows attackers to {impact_description}.",
      "severity_guidelines": {
        "critical": "Direct database access with admin privileges",
        "high": "Data extraction or authentication bypass possible",
        "medium": "Limited data exposure or functionality impact",
        "low": "Minimal impact or requires complex exploitation"
      },
      "remediation_template": {
        "immediate": "Implement input validation and parameterized queries",
        "short_term": "Code review of all database interactions",
        "long_term": "Security training and secure coding standards"
      },
      "references": [
        "https://owasp.org/www-community/attacks/SQL_Injection",
        "https://cwe.mitre.org/data/definitions/89.html"
      ]
    },
    "xss_reflected": {
      "title": "Reflected Cross-Site Scripting (XSS)",
      "description_template": "The {affected_page} reflects user input without proper sanitization in the {parameter} parameter, allowing execution of malicious JavaScript code.",
      "severity_guidelines": {
        "high": "Session hijacking or sensitive data theft possible",
        "medium": "Limited XSS with authentication required",
        "low": "Self-XSS or minimal impact scenarios"
      },
      "remediation_template": {
        "immediate": "Implement output encoding and input validation",
        "short_term": "Content Security Policy (CSP) implementation",
        "long_term": "Secure development lifecycle integration"
      }
    }
  }
}
```
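The `{placeholder}` slots in a `description_template` can be filled with Python's built-in `str.format`. A minimal sketch:

```python
def render_description(template: str, **fields) -> str:
    """Fill the {placeholder} slots of a finding description template,
    mirroring the description_template fields above."""
    return template.format(**fields)

# Example with the SQL injection template's placeholders
tmpl = ("The {affected_component} is vulnerable to SQL injection "
        "attacks through the {parameter} parameter.")
print(render_description(tmpl,
                         affected_component="login form",
                         parameter="username"))
```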

Bulk Finding Operations

```python
# Python script for bulk finding management

import csv
from typing import Dict, List

import requests


class PlexTracAPI:
    def __init__(self, api_key: str, base_url: str = "https://app.plextrac.com/api/v1"):
        self.api_key = api_key
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def bulk_create_findings(self, project_id: str, findings_data: List[Dict]) -> List[str]:
        """Create multiple findings from a list"""
        created_findings = []

        for finding in findings_data:
            response = requests.post(
                f"{self.base_url}/projects/{project_id}/findings",
                headers=self.headers,
                json=finding
            )

            if response.status_code == 201:
                finding_id = response.json()["id"]
                created_findings.append(finding_id)
                print(f"Created finding: {finding['title']} (ID: {finding_id})")
            else:
                print(f"Failed to create finding: {finding['title']} - {response.text}")

        return created_findings

    def import_findings_from_csv(self, project_id: str, csv_file: str) -> List[str]:
        """Import findings from a CSV file"""
        findings_data = []

        with open(csv_file, 'r', newline='', encoding='utf-8') as file:
            reader = csv.DictReader(file)

            for row in reader:
                finding = {
                    "title": row["title"],
                    "description": row["description"],
                    "severity": row["severity"].lower(),
                    "cvss_score": float(row["cvss_score"]) if row["cvss_score"] else None,
                    "affected_assets": row["affected_assets"].split(";") if row["affected_assets"] else [],
                    "vulnerability_type": row["vulnerability_type"],
                    "impact": row["impact"],
                    "remediation": {
                        "short_term": row["remediation_short"],
                        "long_term": row["remediation_long"]
                    },
                    "status": "open"
                }
                findings_data.append(finding)

        return self.bulk_create_findings(project_id, findings_data)

    def bulk_update_findings_status(self, finding_ids: List[str], status: str, notes: str = "") -> Dict:
        """Update the status of multiple findings"""
        results = {"success": [], "failed": []}

        for finding_id in finding_ids:
            update_data = {
                "status": status,
                "remediation_notes": notes
            }

            response = requests.patch(
                f"{self.base_url}/findings/{finding_id}",
                headers=self.headers,
                json=update_data
            )

            if response.status_code == 200:
                results["success"].append(finding_id)
            else:
                results["failed"].append(finding_id)

        return results

    def export_findings_to_csv(self, project_id: str, output_file: str):
        """Export project findings to CSV"""
        response = requests.get(
            f"{self.base_url}/projects/{project_id}/findings",
            headers=self.headers
        )

        if response.status_code == 200:
            findings = response.json()

            with open(output_file, 'w', newline='', encoding='utf-8') as file:
                if findings:
                    fieldnames = [
                        "id", "title", "severity", "cvss_score", "status",
                        "vulnerability_type", "affected_assets", "created_date"
                    ]
                    writer = csv.DictWriter(file, fieldnames=fieldnames)
                    writer.writeheader()

                    for finding in findings:
                        writer.writerow({
                            "id": finding["id"],
                            "title": finding["title"],
                            "severity": finding["severity"],
                            "cvss_score": finding.get("cvss_score", ""),
                            "status": finding["status"],
                            "vulnerability_type": finding.get("vulnerability_type", ""),
                            "affected_assets": ";".join(finding.get("affected_assets", [])),
                            "created_date": finding["created_date"]
                        })

            print(f"Exported {len(findings)} findings to {output_file}")
        else:
            print(f"Failed to export findings: {response.text}")


# Usage example
api = PlexTracAPI("your_api_key_here")

# Import findings from CSV
finding_ids = api.import_findings_from_csv("project_123", "findings_import.csv")
print(f"Imported {len(finding_ids)} findings")

# Bulk update status
results = api.bulk_update_findings_status(
    finding_ids[:5],
    "in_remediation",
    "Development team working on fixes"
)
print(f"Updated {len(results['success'])} findings successfully")

# Export findings
api.export_findings_to_csv("project_123", "findings_export.csv")
```

Reporting and Documentation

Report Generation and Customization

```bash
# Generate a standard penetration test report
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "67890",
    "template_id": "pentest_executive",
    "format": "pdf",
    "include_sections": [
      "executive_summary",
      "methodology",
      "findings_summary",
      "detailed_findings",
      "recommendations",
      "appendices"
    ],
    "customizations": {
      "company_logo": true,
      "custom_branding": true,
      "executive_summary_length": "detailed",
      "technical_detail_level": "high"
    }
  }' \
  "$PLEXTRAC_BASE_URL/reports/generate"

# Generate a compliance report
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "67890",
    "template_id": "compliance_assessment",
    "format": "docx",
    "compliance_framework": "PCI_DSS",
    "include_sections": [
      "compliance_overview",
      "control_assessment",
      "gap_analysis",
      "remediation_roadmap"
    ]
  }' \
  "$PLEXTRAC_BASE_URL/reports/generate"

# Check report generation status
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/reports/12345/status"

# Download the completed report
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/reports/12345/download" \
  -o "pentest_report.pdf"
```

Custom Report Templates

```json
{
  "custom_report_template": {
    "name": "Executive Security Assessment Report",
    "description": "High-level security assessment report for executives",
    "sections": [
      {
        "name": "executive_summary",
        "title": "Executive Summary",
        "content_type": "narrative",
        "include_charts": true,
        "max_length": 500
      },
      {
        "name": "risk_overview",
        "title": "Risk Overview",
        "content_type": "dashboard",
        "charts": [
          "risk_distribution",
          "severity_breakdown",
          "remediation_timeline"
        ]
      },
      {
        "name": "key_findings",
        "title": "Key Security Findings",
        "content_type": "findings_summary",
        "severity_filter": ["critical", "high"],
        "max_findings": 10,
        "include_remediation": true
      },
      {
        "name": "compliance_status",
        "title": "Compliance Status",
        "content_type": "compliance_matrix",
        "frameworks": ["SOC2", "ISO27001", "NIST"],
        "show_gaps": true
      },
      {
        "name": "recommendations",
        "title": "Strategic Recommendations",
        "content_type": "narrative",
        "prioritization": "business_impact",
        "timeline": "quarterly"
      }
    ],
    "styling": {
      "color_scheme": "corporate_blue",
      "font_family": "Arial",
      "include_company_branding": true,
      "page_layout": "professional"
    }
  }
}
```
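The `key_findings` section logic above (a `severity_filter` plus a `max_findings` cap) can be sketched in a few lines. The field names follow the template; the function itself is illustrative, not part of PlexTrac:

```python
def key_findings(findings, severity_filter=("critical", "high"), max_findings=10):
    """Select findings for the key_findings report section:
    keep only the listed severities, order them by the filter's
    order (critical before high), and cap the list length."""
    order = {sev: i for i, sev in enumerate(severity_filter)}
    selected = [f for f in findings if f["severity"] in order]
    selected.sort(key=lambda f: order[f["severity"]])
    return selected[:max_findings]
```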

Automated Report Distribution

```python
# Automated report generation and distribution

import smtplib
import time
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import requests


class PlexTracReporting:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://app.plextrac.com/api/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def generate_and_distribute_report(self, project_id: str, distribution_list: list):
        """Generate reports and distribute them to stakeholders"""

        # Generate executive report
        exec_report_id = self.generate_report(
            project_id,
            "executive_template",
            "pdf"
        )

        # Generate technical report
        tech_report_id = self.generate_report(
            project_id,
            "technical_template",
            "pdf"
        )

        # Wait for reports to complete
        self.wait_for_report_completion([exec_report_id, tech_report_id])

        # Download reports
        exec_report_path = self.download_report(exec_report_id, "executive_report.pdf")
        tech_report_path = self.download_report(tech_report_id, "technical_report.pdf")

        # Distribute reports based on recipient roles
        for recipient in distribution_list:
            if recipient["role"] in ["executive", "manager"]:
                self.send_report_email(
                    recipient["email"],
                    "Executive Security Assessment Report",
                    exec_report_path,
                    "executive"
                )

            if recipient["role"] in ["technical", "developer", "admin"]:
                self.send_report_email(
                    recipient["email"],
                    "Technical Security Assessment Report",
                    tech_report_path,
                    "technical"
                )

    def generate_report(self, project_id: str, template: str, format: str) -> str:
        """Generate a report and return its report ID"""
        report_data = {
            "project_id": project_id,
            "template_id": template,
            "format": format,
            "customizations": {
                "company_logo": True,
                "custom_branding": True
            }
        }

        response = requests.post(
            f"{self.base_url}/reports/generate",
            headers=self.headers,
            json=report_data
        )

        if response.status_code == 202:
            return response.json()["report_id"]
        raise Exception(f"Failed to generate report: {response.text}")

    def wait_for_report_completion(self, report_ids: list, timeout: int = 600):
        """Wait for all reports to finish generating"""
        start_time = time.time()
        completed_reports = set()

        while len(completed_reports) < len(report_ids) and time.time() - start_time < timeout:
            for report_id in report_ids:
                if report_id not in completed_reports:
                    response = requests.get(
                        f"{self.base_url}/reports/{report_id}/status",
                        headers=self.headers
                    )

                    if response.status_code == 200:
                        status = response.json()["status"]
                        if status == "completed":
                            completed_reports.add(report_id)
                            print(f"Report {report_id} completed")
                        elif status == "failed":
                            print(f"Report {report_id} failed to generate")
                            completed_reports.add(report_id)  # Mark as done to avoid an infinite loop

            time.sleep(10)  # Check every 10 seconds

        if len(completed_reports) < len(report_ids):
            raise TimeoutError("Some reports did not complete within timeout")

    def download_report(self, report_id: str, filename: str) -> str:
        """Download a completed report"""
        response = requests.get(
            f"{self.base_url}/reports/{report_id}/download",
            headers=self.headers
        )

        if response.status_code == 200:
            with open(filename, 'wb') as f:
                f.write(response.content)
            return filename
        raise Exception(f"Failed to download report: {response.text}")

    def send_report_email(self, recipient: str, subject: str, attachment_path: str, report_type: str):
        """Send a report via email"""

        # Email configuration (customize as needed)
        smtp_server = "smtp.company.com"
        smtp_port = 587
        sender_email = "security@company.com"
        sender_password = "email_password"

        # Create message
        msg = MIMEMultipart()
        msg['From'] = sender_email
        msg['To'] = recipient
        msg['Subject'] = subject

        # Email body based on report type
        if report_type == "executive":
            body = """
            Dear Executive Team,

            Please find attached the executive summary of our recent security assessment.
            This report provides a high-level overview of our security posture and key recommendations.

            Key highlights will be discussed in our upcoming security review meeting.

            Best regards,
            Security Team
            """
        else:
            body = """
            Dear Technical Team,

            Please find attached the detailed technical security assessment report.
            This report contains specific vulnerabilities, proof of concepts, and remediation guidance.

            Please review the findings and begin remediation planning according to the priority levels indicated.

            Best regards,
            Security Team
            """

        msg.attach(MIMEText(body, 'plain'))

        # Attach report
        with open(attachment_path, "rb") as attachment:
            part = MIMEBase('application', 'octet-stream')
            part.set_payload(attachment.read())

        encoders.encode_base64(part)
        part.add_header(
            'Content-Disposition',
            f'attachment; filename={attachment_path}'
        )
        msg.attach(part)

        # Send email
        try:
            server = smtplib.SMTP(smtp_server, smtp_port)
            server.starttls()
            server.login(sender_email, sender_password)
            server.sendmail(sender_email, recipient, msg.as_string())
            server.quit()
            print(f"Report sent to {recipient}")
        except Exception as e:
            print(f"Failed to send email to {recipient}: {str(e)}")


# Usage example
reporting = PlexTracReporting("your_api_key")

distribution_list = [
    {"email": "ceo@company.com", "role": "executive"},
    {"email": "ciso@company.com", "role": "executive"},
    {"email": "dev-lead@company.com", "role": "technical"},
    {"email": "sysadmin@company.com", "role": "technical"}
]

reporting.generate_and_distribute_report("project_123", distribution_list)
```

Remediation Tracking and Workflow

Workflow Management

```bash
# Create a remediation ticket
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "finding_id": "finding_123",
    "assigned_to": "dev-team@company.com",
    "priority": "high",
    "due_date": "2024-02-15",
    "remediation_plan": {
      "steps": [
        "Review affected code sections",
        "Implement parameterized queries",
        "Update input validation functions",
        "Conduct code review",
        "Deploy to staging environment",
        "Perform security testing",
        "Deploy to production"
      ],
      "estimated_effort": "16 hours",
      "resources_required": ["Senior Developer", "Security Analyst"]
    },
    "acceptance_criteria": [
      "All SQL queries use parameterized statements",
      "Input validation implemented for all user inputs",
      "Security testing confirms vulnerability is resolved",
      "Code review completed and approved"
    ]
  }' \
  "$PLEXTRAC_BASE_URL/remediation/tickets"

# Update remediation progress
curl -X PATCH \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "in_progress",
    "progress_percentage": 60,
    "completed_steps": [
      "Review affected code sections",
      "Implement parameterized queries",
      "Update input validation functions"
    ],
    "notes": "Parameterized queries implemented. Currently working on input validation updates.",
    "updated_by": "developer@company.com"
  }' \
  "$PLEXTRAC_BASE_URL/remediation/tickets/ticket_456"

# Request remediation validation
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "ticket_id": "ticket_456",
    "validation_request": {
      "environment": "staging",
      "test_cases": [
        "Attempt SQL injection on login form",
        "Test input validation on all form fields",
        "Verify error handling does not leak information"
      ],
      "requested_by": "developer@company.com",
      "requested_date": "2024-02-10"
    }
  }' \
  "$PLEXTRAC_BASE_URL/remediation/validation-requests"
```
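The `progress_percentage` sent in the PATCH above can be derived from the step lists rather than maintained by hand. A small sketch; deriving it this way is an assumed convention, not a PlexTrac feature:

```python
def progress_percentage(plan_steps, completed_steps):
    """Derive a remediation ticket's progress percentage from the
    fraction of remediation_plan steps that appear in completed_steps."""
    if not plan_steps:
        return 0
    done = sum(1 for step in plan_steps if step in set(completed_steps))
    return round(100 * done / len(plan_steps))
```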

Integration with Issue Tracking Systems

```python
# Integration with Jira for remediation tracking

from typing import Dict, List, Optional

import requests


class PlexTracJiraIntegration:
    def __init__(self, plextrac_api_key: str, jira_config: Dict):
        self.plextrac_api_key = plextrac_api_key
        self.plextrac_base_url = "https://app.plextrac.com/api/v1"
        self.jira_config = jira_config

        self.plextrac_headers = {
            "Authorization": f"Bearer {plextrac_api_key}",
            "Content-Type": "application/json"
        }

    def sync_findings_to_jira(self, project_id: str) -> List[str]:
        """Sync PlexTrac findings to Jira tickets"""

        # Get findings from PlexTrac
        response = requests.get(
            f"{self.plextrac_base_url}/projects/{project_id}/findings",
            headers=self.plextrac_headers,
            params={"status": "open", "severity": "high,critical"}
        )

        findings = response.json()
        created_tickets = []

        for finding in findings:
            # Check whether a Jira ticket already exists
            if not finding.get("jira_ticket_id"):
                ticket_id = self.create_jira_ticket(finding)
                if ticket_id:
                    # Update the PlexTrac finding with the Jira ticket ID
                    self.update_finding_jira_reference(finding["id"], ticket_id)
                    created_tickets.append(ticket_id)

        return created_tickets

    def create_jira_ticket(self, finding: Dict) -> Optional[str]:
        """Create a Jira ticket for a PlexTrac finding"""

        # Map PlexTrac severity to Jira priority
        priority_mapping = {
            "critical": "Highest",
            "high": "High",
            "medium": "Medium",
            "low": "Low"
        }

        ticket_data = {
            "fields": {
                "project": {"key": self.jira_config["project_key"]},
                "summary": f"Security Finding: {finding['title']}",
                "description": f"""
                *Security Vulnerability Details*

                *Severity:* {finding['severity'].upper()}
                *CVSS Score:* {finding.get('cvss_score', 'N/A')}
                *Vulnerability Type:* {finding.get('vulnerability_type', 'N/A')}

                *Description:*
                {finding['description']}

                *Affected Assets:*
                {chr(10).join(f'• {asset}' for asset in finding.get('affected_assets', []))}

                *Impact:*
                {finding.get('impact', 'N/A')}

                *Remediation:*
                {finding.get('remediation', {}).get('short_term', 'See PlexTrac for details')}

                *PlexTrac Finding ID:* {finding['id']}
                """,
                "issuetype": {"name": "Bug"},
                "priority": {"name": priority_mapping.get(finding['severity'], "Medium")},
                "labels": [
                    "security",
                    "plextrac",
                    finding['severity'],
                    finding.get('vulnerability_type', '').replace(' ', '-').lower()
                ],
                "components": [{"name": "Security"}],
                "customfield_10001": finding['id']  # PlexTrac finding ID custom field
            }
        }

        response = requests.post(
            f"{self.jira_config['base_url']}/rest/api/2/issue",
            auth=(self.jira_config["username"], self.jira_config["password"]),
            headers={"Content-Type": "application/json"},
            json=ticket_data
        )

        if response.status_code == 201:
            return response.json()["key"]
        print(f"Failed to create Jira ticket: {response.text}")
        return None

    def update_finding_jira_reference(self, finding_id: str, jira_ticket_id: str):
        """Update a PlexTrac finding with its Jira ticket reference"""

        update_data = {
            "jira_ticket_id": jira_ticket_id,
            "external_references": [
                {
                    "type": "jira_ticket",
                    "id": jira_ticket_id,
                    "url": f"{self.jira_config['base_url']}/browse/{jira_ticket_id}"
                }
            ]
        }

        requests.patch(
            f"{self.plextrac_base_url}/findings/{finding_id}",
            headers=self.plextrac_headers,
            json=update_data
        )

    def sync_jira_status_to_plextrac(self, project_id: str):
        """Sync Jira ticket status back to PlexTrac findings"""

        # Get findings with Jira references
        response = requests.get(
            f"{self.plextrac_base_url}/projects/{project_id}/findings",
            headers=self.plextrac_headers,
            params={"has_jira_ticket": "true"}
        )

        findings = response.json()

        for finding in findings:
            jira_ticket_id = finding.get("jira_ticket_id")
            if jira_ticket_id:
                # Get the Jira ticket status
                jira_response = requests.get(
                    f"{self.jira_config['base_url']}/rest/api/2/issue/{jira_ticket_id}",
                    auth=(self.jira_config["username"], self.jira_config["password"])
                )

                if jira_response.status_code == 200:
                    jira_ticket = jira_response.json()
                    jira_status = jira_ticket["fields"]["status"]["name"]

                    # Map Jira status to PlexTrac status
                    status_mapping = {
                        "To Do": "open",
                        "In Progress": "in_remediation",
                        "Done": "resolved",
                        "Closed": "closed"
                    }

                    plextrac_status = status_mapping.get(jira_status)
                    if plextrac_status and plextrac_status != finding["status"]:
                        # Update the PlexTrac finding status
                        update_data = {
                            "status": plextrac_status,
                            "remediation_notes": f"Status synced from Jira ticket {jira_ticket_id}"
                        }

                        requests.patch(
                            f"{self.plextrac_base_url}/findings/{finding['id']}",
                            headers=self.plextrac_headers,
                            json=update_data
                        )

                        print(f"Updated finding {finding['id']} status to {plextrac_status}")


# Usage example
jira_config = {
    "base_url": "https://company.atlassian.net",
    "username": "jira_user@company.com",
    "password": "jira_api_token",
    "project_key": "SEC"
}

integration = PlexTracJiraIntegration("plextrac_api_key", jira_config)

# Sync findings to Jira
created_tickets = integration.sync_findings_to_jira("project_123")
print(f"Created {len(created_tickets)} Jira tickets")

# Sync status back from Jira
integration.sync_jira_status_to_plextrac("project_123")
```

Collaboration and Communication

Client Portal and Communication

```bash

# Create client portal access
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "client_id": "client_123",
    "project_id": "project_456",
    "access_level": "findings_view",
    "permissions": [
      "view_findings",
      "comment_on_findings",
      "view_reports",
      "download_reports"
    ],
    "expiration_date": "2024-06-30",
    "notification_preferences": {
      "email_updates": true,
      "finding_status_changes": true,
      "new_reports": true
    }
  }' \
  "$PLEXTRAC_BASE_URL/client-portal/access"

# Send client notification
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "recipient": "client@example.com",
    "subject": "Security Assessment Update",
    "message": "We have completed the initial assessment phase and identified several findings that require attention. Please review the updated findings in your client portal.",
    "include_summary": true,
    "priority": "normal"
  }' \
  "$PLEXTRAC_BASE_URL/notifications/send"

# Add comment to finding (client collaboration)
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "finding_id": "finding_789",
    "comment": "We have reviewed this finding and plan to implement the recommended fix during our next maintenance window scheduled for February 20th.",
    "author": "client@example.com",
    "visibility": "all_stakeholders",
    "comment_type": "remediation_plan"
  }' \
  "$PLEXTRAC_BASE_URL/findings/finding_789/comments"
```
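When the same remediation update applies to many findings, the comment endpoint above can be driven from a short script instead of repeated curl calls. A minimal sketch, assuming the payload fields shown in the curl example; the helper names (`build_comment`, `comment_on_findings`) are illustrative, not part of the PlexTrac API:

```python
import requests

PLEXTRAC_BASE_URL = "https://app.plextrac.com/api/v1"  # as configured earlier

def build_comment(finding_id: str, comment: str, author: str) -> dict:
    """Assemble the comment payload used by the endpoint above."""
    return {
        "finding_id": finding_id,
        "comment": comment,
        "author": author,
        "visibility": "all_stakeholders",
        "comment_type": "remediation_plan",
    }

def comment_on_findings(api_key: str, finding_ids: list, comment: str, author: str):
    """Post the same comment to every finding in the list."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    for finding_id in finding_ids:
        requests.post(
            f"{PLEXTRAC_BASE_URL}/findings/{finding_id}/comments",
            headers=headers,
            json=build_comment(finding_id, comment, author),
        )
```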

Team Collaboration Features

```bash

# Assign finding to team member
curl -X PATCH \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "assigned_to": "analyst@company.com",
    "assignment_notes": "Please review and validate this SQL injection finding. Client has questions about the impact assessment.",
    "due_date": "2024-02-12",
    "priority": "high"
  }' \
  "$PLEXTRAC_BASE_URL/findings/finding_789/assign"

# Create team discussion thread
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "project_456",
    "title": "Remediation Strategy Discussion",
    "description": "Discuss prioritization and timeline for critical findings",
    "participants": [
      "lead@company.com",
      "analyst1@company.com",
      "analyst2@company.com"
    ],
    "related_findings": ["finding_123", "finding_456", "finding_789"]
  }' \
  "$PLEXTRAC_BASE_URL/discussions"

# Schedule team review meeting
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "project_456",
    "meeting_type": "findings_review",
    "title": "Weekly Findings Review",
    "scheduled_date": "2024-02-15T14:00:00Z",
    "duration_minutes": 60,
    "attendees": [
      "lead@company.com",
      "analyst1@company.com",
      "client@example.com"
    ],
    "agenda": [
      "Review new critical findings",
      "Discuss remediation timelines",
      "Client questions and concerns"
    ]
  }' \
  "$PLEXTRAC_BASE_URL/meetings/schedule"
```

Analytics and Metrics

Security Metrics and KPIs

```bash

# Get project security metrics
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/projects/project_456/metrics" \
  | jq '{
    total_findings: .findings.total,
    critical_findings: .findings.by_severity.critical,
    high_findings: .findings.by_severity.high,
    remediation_rate: .remediation.completion_rate,
    avg_remediation_time: .remediation.average_time_days,
    risk_score: .risk.overall_score
  }'

# Get organizational security trends
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/analytics/trends?period=6months" \
  | jq '{
    vulnerability_trends: .vulnerabilities.monthly_counts,
    remediation_trends: .remediation.monthly_completion_rates,
    risk_trends: .risk.monthly_scores
  }'

# Generate compliance dashboard data
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/compliance/dashboard?frameworks=PCI_DSS,SOC2,ISO27001"
```
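The per-project metrics call can also be scripted to collect a flat CSV across several engagements. A minimal sketch, assuming the response shape shown in the jq filter above; `flatten_metrics` and `export_metrics_csv` are illustrative helper names:

```python
import csv
import requests

PLEXTRAC_BASE_URL = "https://app.plextrac.com/api/v1"  # as configured earlier

def flatten_metrics(project_id: str, metrics: dict) -> dict:
    """Flatten the nested metrics payload into a single CSV row."""
    return {
        "project_id": project_id,
        "total_findings": metrics["findings"]["total"],
        "critical": metrics["findings"]["by_severity"]["critical"],
        "high": metrics["findings"]["by_severity"]["high"],
        "remediation_rate": metrics["remediation"]["completion_rate"],
        "risk_score": metrics["risk"]["overall_score"],
    }

def export_metrics_csv(api_key: str, project_ids: list, path: str = "metrics.csv"):
    """Fetch metrics for each project and write them to one CSV file."""
    headers = {"Authorization": f"Bearer {api_key}"}
    rows = []
    for project_id in project_ids:
        response = requests.get(
            f"{PLEXTRAC_BASE_URL}/projects/{project_id}/metrics",
            headers=headers,
        )
        response.raise_for_status()
        rows.append(flatten_metrics(project_id, response.json()))
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```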

Custom Analysis and Reporting

```python

# Advanced analytics and custom metrics
import requests
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, timedelta

class PlexTracAnalytics:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://app.plextrac.com/api/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def get_findings_dataframe(self, project_ids: list = None) -> pd.DataFrame:
        """Get findings data as pandas DataFrame for analysis"""

        if project_ids:
            all_findings = []
            for project_id in project_ids:
                response = requests.get(
                    f"{self.base_url}/projects/{project_id}/findings",
                    headers=self.headers
                )
                if response.status_code == 200:
                    all_findings.extend(response.json())
        else:
            response = requests.get(
                f"{self.base_url}/findings",
                headers=self.headers
            )
            all_findings = response.json()

        # Convert to DataFrame
        df = pd.DataFrame(all_findings)

        # Data preprocessing
        df['created_date'] = pd.to_datetime(df['created_date'])
        df['severity_numeric'] = df['severity'].map({
            'critical': 4,
            'high': 3,
            'medium': 2,
            'low': 1
        })

        return df

    def analyze_vulnerability_trends(self, df: pd.DataFrame) -> dict:
        """Analyze vulnerability trends over time"""

        # Group by month and severity
        monthly_trends = df.groupby([
            df['created_date'].dt.to_period('M'),
            'severity'
        ]).size().unstack(fill_value=0)

        # Calculate trend metrics
        total_by_month = df.groupby(df['created_date'].dt.to_period('M')).size()

        # Calculate month-over-month change
        mom_change = total_by_month.pct_change() * 100

        return {
            'monthly_counts': monthly_trends.to_dict(),
            'total_by_month': total_by_month.to_dict(),
            'month_over_month_change': mom_change.to_dict(),
            'average_monthly_findings': total_by_month.mean()
        }

    def calculate_remediation_metrics(self, df: pd.DataFrame) -> dict:
        """Calculate remediation performance metrics"""

        # Filter resolved findings (copy so we can add columns without
        # mutating a view of the original DataFrame)
        resolved_findings = df[df['status'].isin(['resolved', 'closed'])].copy()

        if len(resolved_findings) == 0:
            return {"error": "No resolved findings found"}

        # Calculate remediation time
        resolved_findings['remediation_time'] = (
            pd.to_datetime(resolved_findings['resolved_date']) -
            resolved_findings['created_date']
        ).dt.days

        # Metrics by severity
        remediation_by_severity = resolved_findings.groupby('severity')['remediation_time'].agg([
            'mean', 'median', 'std', 'min', 'max'
        ]).round(2)

        # SLA compliance (example SLAs)
        sla_targets = {'critical': 7, 'high': 14, 'medium': 30, 'low': 90}

        sla_compliance = {}
        for severity, target in sla_targets.items():
            severity_findings = resolved_findings[resolved_findings['severity'] == severity]
            if len(severity_findings) > 0:
                compliant = (severity_findings['remediation_time'] <= target).sum()
                total = len(severity_findings)
                sla_compliance[severity] = {
                    'compliance_rate': round((compliant / total) * 100, 2),
                    'compliant_count': compliant,
                    'total_count': total,
                    'sla_target_days': target
                }

        return {
            'remediation_by_severity': remediation_by_severity.to_dict(),
            'sla_compliance': sla_compliance,
            'overall_avg_remediation_time': resolved_findings['remediation_time'].mean()
        }

    def generate_risk_dashboard(self, df: pd.DataFrame) -> dict:
        """Generate risk dashboard metrics"""

        # Risk scoring based on severity and count
        risk_weights = {'critical': 10, 'high': 5, 'medium': 2, 'low': 1}

        current_risk_score = sum(
            df[df['status'] == 'open']['severity'].map(risk_weights).fillna(0)
        )

        # Risk by category
        risk_by_category = df[df['status'] == 'open'].groupby('vulnerability_type').apply(
            lambda x: sum(x['severity'].map(risk_weights).fillna(0))
        ).sort_values(ascending=False)

        # Risk trends over time
        monthly_risk = df.groupby([
            df['created_date'].dt.to_period('M'),
            'status'
        ]).apply(
            lambda x: sum(x['severity'].map(risk_weights).fillna(0))
        ).unstack(fill_value=0)

        return {
            'current_risk_score': current_risk_score,
            'risk_by_category': risk_by_category.to_dict(),
            'monthly_risk_trends': monthly_risk.to_dict(),
            'risk_distribution': df[df['status'] == 'open']['severity'].value_counts().to_dict()
        }

    def create_executive_dashboard(self, project_ids: list = None):
        """Create comprehensive executive dashboard"""

        df = self.get_findings_dataframe(project_ids)

        # Generate all metrics
        vulnerability_trends = self.analyze_vulnerability_trends(df)
        remediation_metrics = self.calculate_remediation_metrics(df)
        risk_dashboard = self.generate_risk_dashboard(df)

        # Create visualizations
        fig, axes = plt.subplots(2, 2, figsize=(15, 10))

        # Vulnerability trends
        monthly_counts = pd.DataFrame(vulnerability_trends['monthly_counts'])
        monthly_counts.plot(kind='bar', stacked=True, ax=axes[0, 0])
        axes[0, 0].set_title('Monthly Vulnerability Trends')
        axes[0, 0].set_xlabel('Month')
        axes[0, 0].set_ylabel('Number of Findings')

        # Risk distribution
        risk_dist = pd.Series(risk_dashboard['risk_distribution'])
        risk_dist.plot(kind='pie', ax=axes[0, 1], autopct='%1.1f%%')
        axes[0, 1].set_title('Current Risk Distribution')

        # Remediation SLA compliance
        if 'sla_compliance' in remediation_metrics:
            sla_data = {k: v['compliance_rate'] for k, v in remediation_metrics['sla_compliance'].items()}
            pd.Series(sla_data).plot(kind='bar', ax=axes[1, 0])
            axes[1, 0].set_title('SLA Compliance by Severity')
            axes[1, 0].set_ylabel('Compliance Rate (%)')

        # Risk by category
        risk_by_cat = pd.Series(risk_dashboard['risk_by_category']).head(10)
        risk_by_cat.plot(kind='barh', ax=axes[1, 1])
        axes[1, 1].set_title('Top Risk Categories')
        axes[1, 1].set_xlabel('Risk Score')

        plt.tight_layout()
        plt.savefig('executive_dashboard.png', dpi=300, bbox_inches='tight')

        return {
            'vulnerability_trends': vulnerability_trends,
            'remediation_metrics': remediation_metrics,
            'risk_dashboard': risk_dashboard,
            'dashboard_image': 'executive_dashboard.png'
        }

# Usage example
analytics = PlexTracAnalytics("your_api_key")

# Generate comprehensive dashboard
dashboard_data = analytics.create_executive_dashboard(['project_123', 'project_456'])

print("Executive Dashboard Generated:")
print(f"Current Risk Score: {dashboard_data['risk_dashboard']['current_risk_score']}")
print(f"Average Remediation Time: {dashboard_data['remediation_metrics']['overall_avg_remediation_time']:.1f} days")
print(f"Dashboard saved as: {dashboard_data['dashboard_image']}")
```

Best Practices and Optimization

Security and Access Control

```bash

# Configure role-based access control
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "role_name": "Senior Analyst",
    "permissions": [
      "create_projects",
      "edit_all_findings",
      "generate_reports",
      "manage_team_members",
      "view_analytics"
    ],
    "restrictions": [
      "cannot_delete_projects",
      "cannot_modify_billing",
      "cannot_access_admin_settings"
    ]
  }' \
  "$PLEXTRAC_BASE_URL/roles"

# Enable two-factor authentication
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_123",
    "mfa_method": "totp",
    "require_mfa": true
  }' \
  "$PLEXTRAC_BASE_URL/users/user_123/mfa"

# Configure audit logging
curl -X POST \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "audit_settings": {
      "log_all_actions": true,
      "log_api_access": true,
      "log_report_downloads": true,
      "retention_days": 365,
      "export_format": "json"
    }
  }' \
  "$PLEXTRAC_BASE_URL/audit/configure"
```

Performance Optimization

```bash

# Optimize API usage with pagination
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/findings?page=1&limit=50&sort=severity&order=desc"

# Use filtering to reduce data transfer
curl -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  "$PLEXTRAC_BASE_URL/findings?severity=critical,high&status=open&created_after=2024-01-01"

# Bulk operations for efficiency
curl -X PATCH \
  -H "Authorization: Bearer $PLEXTRAC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "finding_ids": ["finding_123", "finding_456", "finding_789"],
    "updates": {
      "status": "in_remediation",
      "assigned_to": "dev-team@company.com"
    }
  }' \
  "$PLEXTRAC_BASE_URL/findings/bulk-update"
```
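For large result sets, the pagination parameters shown above are easiest to consume through a generator that fetches pages lazily, so callers never hold more than one page in memory. A sketch, assuming the `page`/`limit` parameters from the example and that the endpoint returns a plain JSON array per page:

```python
import requests

def iter_findings(api_key: str, base_url: str, page_size: int = 50, **filters):
    """Yield findings one by one, requesting successive pages on demand."""
    headers = {"Authorization": f"Bearer {api_key}"}
    page = 1
    while True:
        params = {"page": page, "limit": page_size, **filters}
        response = requests.get(f"{base_url}/findings", headers=headers, params=params)
        response.raise_for_status()
        batch = response.json()
        if not batch:
            return
        yield from batch
        if len(batch) < page_size:
            return  # a short page means this was the last one
        page += 1
```

Filters pass straight through, e.g. `iter_findings(key, base_url, severity="critical,high", status="open")`.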

Workflow Automation

```python

# Automated workflow for new findings
import requests
import schedule
import time
from datetime import datetime, timedelta

class PlexTracAutomation:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://app.plextrac.com/api/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def auto_assign_findings(self):
        """Automatically assign new findings based on rules"""

        # Get unassigned critical/high findings
        response = requests.get(
            f"{self.base_url}/findings",
            headers=self.headers,
            params={
                "status": "open",
                "severity": "critical,high",
                "assigned_to": "null"
            }
        )

        findings = response.json()

        # Assignment rules
        assignment_rules = {
            "web_application": "webapp-team@company.com",
            "network": "network-team@company.com",
            "database": "dba-team@company.com",
            "default": "security-team@company.com"
        }

        for finding in findings:
            # Determine assignment based on vulnerability type
            vuln_type = finding.get("vulnerability_type", "").lower()
            assigned_to = assignment_rules.get(vuln_type, assignment_rules["default"])

            # Calculate due date based on severity
            due_days = 7 if finding["severity"] == "critical" else 14
            due_date = (datetime.now() + timedelta(days=due_days)).isoformat()

            # Update finding
            update_data = {
                "assigned_to": assigned_to,
                "due_date": due_date,
                "assignment_notes": f"Auto-assigned based on vulnerability type: {vuln_type}"
            }

            requests.patch(
                f"{self.base_url}/findings/{finding['id']}",
                headers=self.headers,
                json=update_data
            )

            print(f"Assigned finding {finding['id']} to {assigned_to}")

    def send_overdue_notifications(self):
        """Send notifications for overdue findings"""

        # Get overdue findings
        response = requests.get(
            f"{self.base_url}/findings",
            headers=self.headers,
            params={
                "status": "open,in_remediation",
                "overdue": "true"
            }
        )

        overdue_findings = response.json()

        # Group by assignee
        by_assignee = {}
        for finding in overdue_findings:
            assignee = finding.get("assigned_to")
            if assignee:
                by_assignee.setdefault(assignee, []).append(finding)

        # Send notifications
        for assignee, findings in by_assignee.items():
            notification_data = {
                "recipient": assignee,
                "subject": "Overdue Security Findings - Action Required",
                "message": f"You have {len(findings)} overdue security findings that require immediate attention.",
                "findings": [f["id"] for f in findings],
                "priority": "high"
            }

            requests.post(
                f"{self.base_url}/notifications/send",
                headers=self.headers,
                json=notification_data
            )

    def auto_escalate_critical_findings(self):
        """Escalate critical findings that haven't been addressed"""

        # Get critical findings older than 24 hours
        cutoff_date = (datetime.now() - timedelta(hours=24)).isoformat()

        response = requests.get(
            f"{self.base_url}/findings",
            headers=self.headers,
            params={
                "severity": "critical",
                "status": "open",
                "created_before": cutoff_date
            }
        )

        critical_findings = response.json()

        for finding in critical_findings:
            # Escalate to management
            escalation_data = {
                "finding_id": finding["id"],
                "escalated_to": "security-manager@company.com",
                "escalation_reason": "Critical finding not addressed within 24 hours",
                "escalation_level": "management"
            }

            requests.post(
                f"{self.base_url}/escalations",
                headers=self.headers,
                json=escalation_data
            )

            print(f"Escalated critical finding {finding['id']} to management")

# Set up automated workflows
automation = PlexTracAutomation("your_api_key")

# Schedule automated tasks
schedule.every(1).hours.do(automation.auto_assign_findings)
schedule.every().day.at("09:00").do(automation.send_overdue_notifications)
schedule.every(1).hours.do(automation.auto_escalate_critical_findings)

# Run scheduler
while True:
    schedule.run_pending()
    time.sleep(60)
```
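Since these jobs run unattended, a transient network error should not kill the scheduler loop. One approach is a small retry decorator around each scheduled job; this is an illustrative sketch, not part of the PlexTrac API or the schedule library:

```python
import time
import requests

def with_retries(func, attempts: int = 3, backoff: float = 2.0):
    """Wrap a job so transient HTTP errors are retried with exponential backoff."""
    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return func(*args, **kwargs)
            except requests.RequestException:
                if attempt == attempts - 1:
                    raise  # give up after the final attempt
                time.sleep(backoff * (2 ** attempt))
    return wrapper
```

Scheduled calls then become, for example, `schedule.every(1).hours.do(with_retries(automation.auto_assign_findings))`.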

Resources

Documentation and Support

Community and Training

  • PlexTrac Community
  • Webinar Series

Integration Resources

  • Integration Marketplace
  • Integration Examples
  • Webhook Documentation
  • Third-Party Connectors