JSParser Cheat Sheet

Overview

JSParser is a Python tool for parsing JavaScript files and extracting useful information such as endpoints, secrets, and other sensitive data. It is especially useful for bug bounty hunters and penetration testers who need to analyze JavaScript files for potential security vulnerabilities and hidden endpoints that may not be documented in the main application.

Key features: endpoint extraction, secret detection, URL parsing, domain enumeration, parameter discovery, and comprehensive JavaScript analysis for security testing.
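
Under the hood, tools like JSParser boil down to running a battery of regular expressions over JavaScript source and de-duplicating the matches. The following minimal Python sketch illustrates that core idea; the patterns are illustrative stand-ins, not JSParser's actual defaults:

```python
import re

# Illustrative patterns in the spirit of JSParser's defaults (not its exact ones)
PATTERNS = {
    "endpoints": r'["\'](/[a-zA-Z0-9_\-/.]+)["\']',
    "secrets": r'(?i)(api[_-]?key|secret|token)["\'\s]*[:=]["\'\s]*["\']([a-zA-Z0-9_\-]{8,})["\']',
}

def extract(js_source: str) -> dict:
    """Run each pattern over the source and de-duplicate the matches."""
    findings = {}
    for name, pattern in PATTERNS.items():
        matches = re.findall(pattern, js_source)
        # findall returns tuples when a pattern has several groups
        flat = [m if isinstance(m, str) else ": ".join(m) for m in matches]
        findings[name] = sorted(set(flat))
    return findings

if __name__ == "__main__":
    sample = 'var apiKey = "abc123def456"; fetch("/api/v1/users");'
    print(extract(sample))
```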

Installation and Setup

Python Installation

```bash

# Clone the repository
git clone https://github.com/nahamsec/JSParser.git
cd JSParser

# Install Python dependencies
pip install -r requirements.txt

# Make the script executable
chmod +x jsparser.py

# Create symbolic link for global access
sudo ln -s $(pwd)/jsparser.py /usr/local/bin/jsparser

# Verify installation
jsparser --help
python3 jsparser.py --help
```

Virtual Environment Setup

```bash

# Create virtual environment
python3 -m venv jsparser-env
source jsparser-env/bin/activate

# Install dependencies
pip install requests beautifulsoup4 argparse urllib3

# Clone and set up JSParser
git clone https://github.com/nahamsec/JSParser.git
cd JSParser
pip install -r requirements.txt

# Test installation
python3 jsparser.py --help

# Create activation script
cat > activate_jsparser.sh << 'EOF'
#!/bin/bash
cd /path/to/JSParser
source jsparser-env/bin/activate
python3 jsparser.py "$@"
EOF

chmod +x activate_jsparser.sh
```

Docker Installation

```bash

# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM python:3.9-alpine

RUN apk add --no-cache git

WORKDIR /app

RUN git clone https://github.com/nahamsec/JSParser.git .
RUN pip install -r requirements.txt

ENTRYPOINT ["python3", "jsparser.py"]
EOF

# Build Docker image
docker build -t jsparser .

# Run JSParser in Docker
docker run --rm jsparser --help

# Run with volume mount for output
docker run --rm -v $(pwd):/output jsparser -f /output/script.js -o /output/results.txt

# Create alias for easier usage
echo 'alias jsparser="docker run --rm -v $(pwd):/output jsparser"' >> ~/.bashrc
source ~/.bashrc
```

Dependencies and Requirements

```bash

# Install required Python packages
pip install requests beautifulsoup4 argparse urllib3 colorama

# Install additional useful packages
pip install tldextract validators python-magic

# Create requirements file
cat > requirements.txt << 'EOF'
requests>=2.25.1
beautifulsoup4>=4.9.3
argparse>=1.4.0
urllib3>=1.26.5
colorama>=0.4.4
tldextract>=3.1.0
validators>=0.18.2
python-magic>=0.4.24
EOF

# Install from requirements
pip install -r requirements.txt

# Verify all dependencies
python3 -c "import requests, bs4, argparse, urllib3; print('All dependencies installed successfully')"
```

Configuration and Setup

```bash

# Create configuration directory
mkdir -p ~/.jsparser

# Create configuration file
cat > ~/.jsparser/config.json << 'EOF'
{
  "default_options": {
    "timeout": 30,
    "user_agent": "JSParser/1.0",
    "max_retries": 3,
    "output_format": "txt"
  },
  "patterns": {
    "endpoints": [
      "(?:\"|')([a-zA-Z0-9_\\-\\/\\.]+\\.(?:php|asp|aspx|jsp|json|action|html|js|txt|xml))(?:\"|')",
      "(?:\"|')(\\/[a-zA-Z0-9_\\-\\/\\.]+)(?:\"|')",
      "(?:api|endpoint|url)[\"'\\s]*[:=][\"'\\s]*([^\"'\\s]+)"
    ],
    "secrets": [
      "(?i)(api[_\\-]?key|apikey)[\"'\\s]*[:=][\"'\\s]*([a-zA-Z0-9_\\-]{10,})",
      "(?i)(secret|password|pwd)[\"'\\s]*[:=][\"'\\s]*([a-zA-Z0-9_\\-]{8,})",
      "(?i)(token|auth)[\"'\\s]*[:=][\"'\\s]*([a-zA-Z0-9_\\-]{10,})"
    ],
    "domains": [
      "(?:https?:\\/\\/)?([a-zA-Z0-9][a-zA-Z0-9-]{1,61}[a-zA-Z0-9]\\.[a-zA-Z]{2,})",
      "(?:\"|')([a-zA-Z0-9][a-zA-Z0-9-]{1,61}[a-zA-Z0-9]\\.[a-zA-Z]{2,})(?:\"|')"
    ]
  },
  "output": {
    "save_raw": true,
    "save_parsed": true,
    "create_reports": true
  }
}
EOF

# Set environment variables
export JSPARSER_CONFIG=~/.jsparser/config.json
export JSPARSER_OUTPUT=~/.jsparser/output
```
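
If you build your own tooling around a config file like this, the pattern lists can be loaded and compiled once up front. A minimal sketch, assuming the schema shown above; the loader itself is an assumption, not JSParser's own behavior:

```python
import json
import os
import re

def load_patterns(config_path=os.path.expanduser("~/.jsparser/config.json")):
    """Load the pattern lists from config.json and compile them once."""
    with open(config_path) as f:
        config = json.load(f)

    compiled = {}
    for category, patterns in config.get("patterns", {}).items():
        compiled[category] = []
        for pattern in patterns:
            try:
                compiled[category].append(re.compile(pattern, re.IGNORECASE))
            except re.error as exc:
                print(f"Skipping invalid pattern in '{category}': {exc}")
    return compiled

if __name__ == "__main__":
    patterns = load_patterns()
    for category, regexes in patterns.items():
        print(f"{category}: {len(regexes)} compiled patterns")
```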

Basic Usage and Commands

File Analysis

```bash

# Basic JavaScript file parsing
jsparser -f script.js

# Parse with output file
jsparser -f script.js -o results.txt

# Parse multiple files
jsparser -f script1.js script2.js script3.js

# Parse with custom output directory
jsparser -f script.js -o /path/to/output/

# Parse with verbose output
jsparser -f script.js -v

# Parse with specific patterns only
jsparser -f script.js --endpoints-only
jsparser -f script.js --secrets-only
jsparser -f script.js --domains-only
```

URL-Based Analysis

```bash

# Parse JavaScript from URL
jsparser -u https://example.com/script.js

# Parse multiple URLs
jsparser -u https://example.com/script1.js https://example.com/script2.js

# Parse with custom headers
jsparser -u https://example.com/script.js --header "Authorization: Bearer token123"

# Parse with custom user agent
jsparser -u https://example.com/script.js --user-agent "Custom Parser 1.0"

# Parse with proxy
jsparser -u https://example.com/script.js --proxy http://127.0.0.1:8080

# Parse with timeout
jsparser -u https://example.com/script.js --timeout 60

# Parse and save raw content
jsparser -u https://example.com/script.js --save-raw
```

Bulk Analysis

```bash

# Parse from file list
echo -e "https://example.com/script1.js\nhttps://example.com/script2.js" > js_urls.txt
jsparser -l js_urls.txt

# Parse all JS files in directory
find /path/to/js/files -name "*.js" | while read file; do
    jsparser -f "$file" -o "results_$(basename "$file" .js).txt"
done

# Parse with threading
jsparser -l js_urls.txt --threads 10

# Parse with rate limiting
jsparser -l js_urls.txt --delay 2

# Parse with output format
jsparser -l js_urls.txt --output-format json
jsparser -l js_urls.txt --output-format csv
```
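
The bulk options shown here (`-l`, `--threads`, `--delay`) may vary between JSParser versions and forks; if your build lacks them, a short wrapper gives the same effect. A minimal rate-limited fetch loop, assuming the `js_urls.txt` list from above (the wrapper is illustrative, not part of JSParser):

```python
import time
import requests

def parse_url_list(list_file="js_urls.txt", delay=2.0):
    """Fetch each JS URL with a fixed delay between requests (simple rate limiting)."""
    with open(list_file) as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        try:
            response = requests.get(url, timeout=30)
            print(f"{url}: {response.status_code}, {len(response.text)} bytes")
            # Hand the body to whatever parser you use, e.g. a local jsparser run
        except requests.RequestException as exc:
            print(f"{url}: failed ({exc})")
        time.sleep(delay)  # be polite between requests

if __name__ == "__main__":
    parse_url_list()
```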

Advanced JavaScript Analysis

Custom Pattern Extraction

```python

#!/usr/bin/env python3
# Enhanced JSParser with custom patterns and analysis

import re
import json
import requests
import argparse
import threading
import time
from urllib.parse import urljoin, urlparse
from concurrent.futures import ThreadPoolExecutor, as_completed
import os

class AdvancedJSParser:
    def __init__(self, config_file=None):
        self.results = []
        self.lock = threading.Lock()
        self.session = requests.Session()

        # Default patterns
        self.patterns = {
            'endpoints': [
                r'(?:"|\')([a-zA-Z0-9_\-/.]+\.(?:php|asp|aspx|jsp|json|action|html|js|txt|xml))(?:"|\')',
                r'(?:"|\')(/[a-zA-Z0-9_\-/.]+)(?:"|\')',
                r'(?:api|endpoint|url)["\'\s]*[:=]["\'\s]*([^"\'\s]+)',
                r'(?:fetch|axios|ajax)\s*\(["\']([^"\']+)["\']'
            ],
            'secrets': [
                r'(?i)(api[_\-]?key|apikey)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})',
                r'(?i)(secret|password|pwd)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{8,})',
                r'(?i)(token|auth)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})'
            ],
            'domains': [
                r'(?:https?://)?([a-zA-Z0-9][a-zA-Z0-9-]{1,61}[a-zA-Z0-9]\.[a-zA-Z]{2,})',
                r'(?:subdomain|host|domain)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9.\-]+)'
            ],
            'parameters': [
                r'(?:param|parameter|arg)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]+)',
                r'(?:\?|&)([a-zA-Z0-9_\-]+)='
            ],
            'base64': [
                r'(?:"|\')([A-Za-z0-9+/]{20,}={0,2})(?:"|\')',
                r'(?:btoa|atob)\s*\(["\']([^"\']+)["\']\)'
            ],
            'urls': [
                r'(?:url|link|href)["\'\s]*[:=]["\'\s]*(https?://[^"\'\s]+)',
                r'(?:"|\')(https?://[^"\'\s]+)(?:"|\')'
            ]
        }

        # Load custom configuration if provided
        if config_file and os.path.exists(config_file):
            with open(config_file, 'r') as f:
                config = json.load(f)
                if 'patterns' in config:
                    self.patterns.update(config['patterns'])

    def parse_javascript_content(self, content, source_url=None):
        """Parse JavaScript content and extract various patterns"""

        results = {
            'source': source_url or 'local_file',
            'size': len(content),
            'endpoints': [],
            'secrets': [],
            'domains': [],
            'parameters': [],
            'functions': [],
            'comments': [],
            'base64': [],
            'urls': [],
            'analysis': {
                'obfuscated': False,
                'minified': False,
                'webpack_bundle': False,
                'react_app': False,
                'angular_app': False,
                'vue_app': False,
                'jquery_usage': False,
                'ajax_calls': 0,
                'external_requests': 0
            }
        }

        # Extract patterns
        for pattern_type, patterns in self.patterns.items():
            if pattern_type in results:
                for pattern in patterns:
                    matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
                    if matches:
                        # Handle tuple results from groups
                        for match in matches:
                            if isinstance(match, tuple) and pattern_type != 'secrets':
                                # Take the last non-empty group (secrets keep their
                                # (key, value) tuple for the cleaning step)
                                match = next((m for m in reversed(match) if m), match[0])

                            if match and match not in results[pattern_type]:
                                results[pattern_type].append(match)

        # Perform content analysis
        results['analysis'] = self.analyze_javascript_content(content)

        # Clean and validate results
        results = self.clean_and_validate_results(results, source_url)

        return results

    def analyze_javascript_content(self, content):
        """Analyze JavaScript content for various characteristics"""

        analysis = {
            'obfuscated': False,
            'minified': False,
            'webpack_bundle': False,
            'react_app': False,
            'angular_app': False,
            'vue_app': False,
            'jquery_usage': False,
            'ajax_calls': 0,
            'external_requests': 0,
            'eval_usage': 0,
            'document_write': 0,
            'local_storage': 0,
            'session_storage': 0,
            'cookies': 0
        }

        # Check for obfuscation (hex escapes or a low identifier-to-size ratio)
        if re.search(r'\\x[0-9a-fA-F]{2}', content) or len(re.findall(r'[a-zA-Z_$][a-zA-Z0-9_$]*', content)) < len(content) / 20:
            analysis['obfuscated'] = True

        # Check for minification (no newlines in the first 1000 characters)
        if '\n' not in content[:1000] and len(content) > 1000:
            analysis['minified'] = True

        # Check for frameworks and libraries
        if 'webpack' in content.lower() or '__webpack_require__' in content:
            analysis['webpack_bundle'] = True

        if 'React' in content or 'react' in content.lower():
            analysis['react_app'] = True

        if 'angular' in content.lower() or 'ng-' in content:
            analysis['angular_app'] = True

        if 'Vue' in content or 'vue' in content.lower():
            analysis['vue_app'] = True

        if 'jQuery' in content or '$(' in content:
            analysis['jquery_usage'] = True

        # Count specific patterns
        analysis['ajax_calls'] = len(re.findall(r'(?:ajax|fetch|axios|XMLHttpRequest)', content, re.IGNORECASE))
        analysis['external_requests'] = len(re.findall(r'https?://(?!(?:localhost|127\.0\.0\.1|0\.0\.0\.0))', content))
        analysis['eval_usage'] = len(re.findall(r'\beval\s*\(', content))
        analysis['document_write'] = len(re.findall(r'document\.write', content))
        analysis['local_storage'] = len(re.findall(r'localStorage', content))
        analysis['session_storage'] = len(re.findall(r'sessionStorage', content))
        analysis['cookies'] = len(re.findall(r'document\.cookie', content))

        return analysis

    def clean_and_validate_results(self, results, source_url):
        """Clean and validate extracted results"""

        # Clean endpoints
        cleaned_endpoints = []
        for endpoint in results['endpoints']:
            # Remove quotes and clean up
            endpoint = endpoint.strip('\'"')

            # Skip if too short or contains invalid characters
            if len(endpoint) < 2 or any(char in endpoint for char in ['<', '>', '{', '}', '\n', '\t']):
                continue

            # Convert relative URLs to absolute if source URL is provided
            if source_url and not endpoint.startswith(('http://', 'https://')):
                if endpoint.startswith('/'):
                    base_url = f"{urlparse(source_url).scheme}://{urlparse(source_url).netloc}"
                    endpoint = urljoin(base_url, endpoint)
                else:
                    endpoint = urljoin(source_url, endpoint)

            if endpoint not in cleaned_endpoints:
                cleaned_endpoints.append(endpoint)

        results['endpoints'] = cleaned_endpoints

        # Clean secrets (remove obvious false positives)
        cleaned_secrets = []
        for secret in results['secrets']:
            if isinstance(secret, tuple) and len(secret) >= 2:
                key, value = secret[0], secret[1]
                # Skip common false positives
                if value.lower() not in ['password', 'secret', 'token', 'key', 'null', 'undefined', 'true', 'false']:
                    cleaned_secrets.append({'type': key, 'value': value})

        results['secrets'] = cleaned_secrets

        # Clean domains
        cleaned_domains = []
        for domain in results['domains']:
            domain = domain.strip('\'"')
            # Basic domain validation
            if '.' in domain and len(domain) > 3 and not any(char in domain for char in [' ', '<', '>', '{', '}']):
                if domain not in cleaned_domains:
                    cleaned_domains.append(domain)

        results['domains'] = cleaned_domains

        # Clean parameters (allow alphanumerics, underscore, and hyphen)
        cleaned_parameters = []
        for param in results['parameters']:
            param = param.strip('\'"')
            if param and all(c.isalnum() or c in '_-' for c in param):
                if param not in cleaned_parameters:
                    cleaned_parameters.append(param)

        results['parameters'] = cleaned_parameters

        return results

    def parse_from_url(self, url, headers=None, timeout=30):
        """Parse JavaScript from URL"""

        try:
            if headers:
                self.session.headers.update(headers)

            response = self.session.get(url, timeout=timeout)
            response.raise_for_status()

            content = response.text
            results = self.parse_javascript_content(content, url)
            results['http_status'] = response.status_code
            results['content_type'] = response.headers.get('content-type', '')
            results['content_length'] = len(content)

            return results

        except Exception as e:
            return {
                'source': url,
                'error': str(e),
                'endpoints': [],
                'secrets': [],
                'domains': [],
                'parameters': [],
                'functions': [],
                'analysis': {}
            }

    def parse_from_file(self, file_path):
        """Parse JavaScript from local file"""

        try:
            with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
                content = f.read()

            results = self.parse_javascript_content(content, file_path)
            results['file_size'] = os.path.getsize(file_path)

            return results

        except Exception as e:
            return {
                'source': file_path,
                'error': str(e),
                'endpoints': [],
                'secrets': [],
                'domains': [],
                'parameters': [],
                'functions': [],
                'analysis': {}
            }

    def bulk_parse(self, targets, max_workers=10, headers=None, timeout=30):
        """Parse multiple JavaScript files/URLs in parallel"""

        print(f"Starting bulk parsing of {len(targets)} targets")
        print(f"Max workers: {max_workers}")

        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            # Submit all tasks
            future_to_target = {}

            for target in targets:
                if target.startswith(('http://', 'https://')):
                    future = executor.submit(self.parse_from_url, target, headers, timeout)
                else:
                    future = executor.submit(self.parse_from_file, target)

                future_to_target[future] = target

            # Process completed tasks
            for future in as_completed(future_to_target):
                target = future_to_target[future]
                try:
                    result = future.result()

                    with self.lock:
                        self.results.append(result)

                    if 'error' not in result:
                        endpoint_count = len(result['endpoints'])
                        secret_count = len(result['secrets'])
                        domain_count = len(result['domains'])
                        print(f"✓ {target}: {endpoint_count} endpoints, {secret_count} secrets, {domain_count} domains")
                    else:
                        print(f"✗ {target}: {result['error']}")

                except Exception as e:
                    print(f"✗ Error parsing {target}: {e}")

        return self.results

    def generate_report(self, output_file='jsparser_analysis_report.json'):
        """Generate comprehensive analysis report"""

        # Calculate statistics
        total_targets = len(self.results)
        successful_parses = sum(1 for r in self.results if 'error' not in r)

        # Aggregate results
        all_endpoints = []
        all_secrets = []
        all_domains = []
        all_parameters = []
        framework_stats = {
            'react': 0,
            'angular': 0,
            'vue': 0,
            'jquery': 0,
            'webpack': 0
        }

        for result in self.results:
            if 'error' not in result:
                all_endpoints.extend(result['endpoints'])
                all_secrets.extend(result['secrets'])
                all_domains.extend(result['domains'])
                all_parameters.extend(result['parameters'])

                # Count frameworks
                analysis = result.get('analysis', {})
                if analysis.get('react_app'):
                    framework_stats['react'] += 1
                if analysis.get('angular_app'):
                    framework_stats['angular'] += 1
                if analysis.get('vue_app'):
                    framework_stats['vue'] += 1
                if analysis.get('jquery_usage'):
                    framework_stats['jquery'] += 1
                if analysis.get('webpack_bundle'):
                    framework_stats['webpack'] += 1

        # Remove duplicates and sort
        unique_endpoints = sorted(set(all_endpoints))
        unique_domains = sorted(set(all_domains))
        unique_parameters = sorted(set(all_parameters))

        # Count secret types
        secret_types = {}
        for secret in all_secrets:
            if isinstance(secret, dict):
                secret_type = secret.get('type', 'unknown')
                if secret_type not in secret_types:
                    secret_types[secret_type] = 0
                secret_types[secret_type] += 1

        report = {
            'scan_summary': {
                'total_targets': total_targets,
                'successful_parses': successful_parses,
                'success_rate': (successful_parses / total_targets * 100) if total_targets > 0 else 0,
                'scan_date': time.strftime('%Y-%m-%d %H:%M:%S')
            },
            'findings_summary': {
                'total_endpoints': len(unique_endpoints),
                'total_secrets': len(all_secrets),
                'total_domains': len(unique_domains),
                'total_parameters': len(unique_parameters),
                'secret_types': secret_types
            },
            'framework_analysis': framework_stats,
            'unique_findings': {
                'endpoints': unique_endpoints[:50],  # Top 50 endpoints
                'domains': unique_domains,
                'parameters': unique_parameters[:50]  # Top 50 parameters
            },
            'detailed_results': self.results
        }

        # Save report
        with open(output_file, 'w') as f:
            json.dump(report, f, indent=2)

        print("\nJSParser Analysis Report:")
        print(f"Total targets parsed: {total_targets}")
        print(f"Successful parses: {successful_parses}")
        print(f"Success rate: {report['scan_summary']['success_rate']:.1f}%")
        print(f"Unique endpoints found: {len(unique_endpoints)}")
        print(f"Secrets found: {len(all_secrets)}")
        print(f"Unique domains found: {len(unique_domains)}")
        print(f"Report saved to: {output_file}")

        return report

# Usage example
if __name__ == "__main__":
    # Create parser instance
    parser = AdvancedJSParser()

    # Parse single file
    result = parser.parse_from_file('example.js')
    print(json.dumps(result, indent=2))

    # Parse from URL
    result = parser.parse_from_url('https://example.com/script.js')
    print(json.dumps(result, indent=2))

    # Bulk parsing
    targets = [
        'https://example.com/script1.js',
        'https://example.com/script2.js',
        'local_script.js'
    ]

    results = parser.bulk_parse(targets, max_workers=5)
    report = parser.generate_report('comprehensive_jsparser_report.json')
```

Endpoint Discovery Automation

```bash

#!/bin/bash
# Automated JavaScript endpoint discovery script

TARGET_DOMAIN="$1"
OUTPUT_DIR="$2"

if [ -z "$TARGET_DOMAIN" ] || [ -z "$OUTPUT_DIR" ]; then
    echo "Usage: $0 <target_domain> <output_dir>"
    exit 1
fi

echo "Starting JavaScript endpoint discovery for: $TARGET_DOMAIN"
echo "Output directory: $OUTPUT_DIR"

mkdir -p "$OUTPUT_DIR"

# Step 1: Discover JavaScript files
echo "Step 1: Discovering JavaScript files..."

# Use various methods to find JS files
echo "Finding JS files with waybackurls..."
echo "$TARGET_DOMAIN" | waybackurls | grep -E "\.js$" | sort -u > "$OUTPUT_DIR/wayback_js_files.txt"

echo "Finding JS files with gau..."
echo "$TARGET_DOMAIN" | gau | grep -E "\.js$" | sort -u > "$OUTPUT_DIR/gau_js_files.txt"

echo "Finding JS files with hakrawler..."
echo "https://$TARGET_DOMAIN" | hakrawler -js | sort -u > "$OUTPUT_DIR/hakrawler_js_files.txt"

# Combine all JS file lists
cat "$OUTPUT_DIR"/*_js_files.txt | sort -u > "$OUTPUT_DIR/all_js_files.txt"
JS_COUNT=$(wc -l < "$OUTPUT_DIR/all_js_files.txt")

echo "Found $JS_COUNT JavaScript files"

# Step 2: Parse JavaScript files
echo "Step 2: Parsing JavaScript files..."

mkdir -p "$OUTPUT_DIR/parsed_results"

# Parse each JS file
while read -r js_url; do
    if [ -n "$js_url" ]; then
        echo "Parsing: $js_url"

        # Create safe filename
        safe_filename=$(echo "$js_url" | sed 's/[^a-zA-Z0-9]/_/g')

        # Parse with JSParser
        jsparser -u "$js_url" -o "$OUTPUT_DIR/parsed_results/${safe_filename}.txt" 2>/dev/null

        # Also try with curl and local parsing
        curl -s -L "$js_url" -o "$OUTPUT_DIR/parsed_results/${safe_filename}.js" 2>/dev/null
        if [ -f "$OUTPUT_DIR/parsed_results/${safe_filename}.js" ]; then
            jsparser -f "$OUTPUT_DIR/parsed_results/${safe_filename}.js" -o "$OUTPUT_DIR/parsed_results/${safe_filename}_local.txt" 2>/dev/null
        fi
    fi
done < "$OUTPUT_DIR/all_js_files.txt"

# Step 3: Aggregate results
echo "Step 3: Aggregating results..."

# Combine all endpoints
find "$OUTPUT_DIR/parsed_results" -name "*.txt" -exec cat {} \; | grep -E "^https?://" | sort -u > "$OUTPUT_DIR/all_endpoints.txt"

# Extract relative endpoints
find "$OUTPUT_DIR/parsed_results" -name "*.txt" -exec cat {} \; | grep -E "^/" | sort -u > "$OUTPUT_DIR/relative_endpoints.txt"

# Extract API endpoints
grep -E "(api|endpoint|/v[0-9])" "$OUTPUT_DIR/all_endpoints.txt" > "$OUTPUT_DIR/api_endpoints.txt"

# Extract parameters
find "$OUTPUT_DIR/parsed_results" -name "*.txt" -exec cat {} \; | grep -E "\?.*=" | sed 's/.*?//' | sed 's/&/\n/g' | cut -d'=' -f1 | sort -u > "$OUTPUT_DIR/parameters.txt"

# Step 4: Generate summary
echo "Step 4: Generating summary..."

ENDPOINT_COUNT=$(wc -l < "$OUTPUT_DIR/all_endpoints.txt")
RELATIVE_COUNT=$(wc -l < "$OUTPUT_DIR/relative_endpoints.txt")
API_COUNT=$(wc -l < "$OUTPUT_DIR/api_endpoints.txt")
PARAM_COUNT=$(wc -l < "$OUTPUT_DIR/parameters.txt")

cat > "$OUTPUT_DIR/discovery_summary.txt" << EOF
JavaScript Endpoint Discovery Summary
=====================================
Target Domain: $TARGET_DOMAIN
Discovery Date: $(date)

Files Analyzed: $JS_COUNT JavaScript files
Endpoints Found: $ENDPOINT_COUNT absolute URLs
Relative Endpoints: $RELATIVE_COUNT paths
API Endpoints: $API_COUNT endpoints
Parameters: $PARAM_COUNT unique parameters

Top 10 Endpoints:
$(head -10 "$OUTPUT_DIR/all_endpoints.txt")

Top 10 Parameters:
$(head -10 "$OUTPUT_DIR/parameters.txt")
EOF

echo "Discovery completed!"
echo "Summary saved to: $OUTPUT_DIR/discovery_summary.txt"
echo "All endpoints: $OUTPUT_DIR/all_endpoints.txt"
echo "API endpoints: $OUTPUT_DIR/api_endpoints.txt"
echo "Parameters: $OUTPUT_DIR/parameters.txt"
```

Integration with Other Tools

Burp Suite Integration

```python

#!/usr/bin/env python3
# Burp Suite extension for JSParser integration

from burp import IBurpExtender, IHttpListener, ITab
from javax.swing import JPanel, JButton, JTextArea, JScrollPane, JLabel
from java.awt import BorderLayout, GridLayout
import json
import re

class BurpExtender(IBurpExtender, IHttpListener, ITab):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()

        callbacks.setExtensionName("JSParser Integration")
        callbacks.registerHttpListener(self)

        # Create UI
        self._create_ui()
        callbacks.addSuiteTab(self)

        print("JSParser Integration loaded successfully")

    def _create_ui(self):
        self._main_panel = JPanel(BorderLayout())

        # Control panel
        control_panel = JPanel(GridLayout(2, 1))

        self._parse_button = JButton("Parse JavaScript", actionPerformed=self._parse_js)
        self._clear_button = JButton("Clear Results", actionPerformed=self._clear_results)

        control_panel.add(self._parse_button)
        control_panel.add(self._clear_button)

        # Results area
        self._results_area = JTextArea(20, 80)
        self._results_area.setEditable(False)
        results_scroll = JScrollPane(self._results_area)

        # Add components
        self._main_panel.add(control_panel, BorderLayout.NORTH)
        self._main_panel.add(results_scroll, BorderLayout.CENTER)

    def getTabCaption(self):
        return "JSParser"

    def getUiComponent(self):
        return self._main_panel

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        # Auto-parse JavaScript responses
        if not messageIsRequest:
            response = messageInfo.getResponse()
            if response:
                response_info = self._helpers.analyzeResponse(response)
                headers = response_info.getHeaders()

                # Check if response is JavaScript
                content_type = ""
                for header in headers:
                    if header.lower().startswith("content-type:"):
                        content_type = header.lower()
                        break

                if "javascript" in content_type or "application/js" in content_type:
                    body_offset = response_info.getBodyOffset()
                    body = response[body_offset:].tostring()

                    # Parse JavaScript content
                    results = self._parse_javascript_content(body)

                    if results['endpoints'] or results['secrets']:
                        url = str(messageInfo.getUrl())
                        self._add_results("Auto-parsed: %s" % url, results)

    def _parse_js(self, event):
        # Get selected request/response
        selected_messages = self._callbacks.getSelectedMessages()

        if not selected_messages:
            self._results_area.append("No request/response selected\n")
            return

        for message in selected_messages:
            response = message.getResponse()
            if response:
                response_info = self._helpers.analyzeResponse(response)
                body_offset = response_info.getBodyOffset()
                body = response[body_offset:].tostring()

                # Parse JavaScript content
                results = self._parse_javascript_content(body)
                url = str(message.getUrl())

                self._add_results("Manual parse: %s" % url, results)

    def _parse_javascript_content(self, content):
        """Parse JavaScript content using JSParser patterns"""

        results = {
            'endpoints': [],
            'secrets': [],
            'domains': [],
            'parameters': []
        }

        # Endpoint patterns
        endpoint_patterns = [
            r'(?:"|\')([a-zA-Z0-9_\-/.]+\.(?:php|asp|aspx|jsp|json|action|html|js|txt|xml))(?:"|\')',
            r'(?:"|\')(/[a-zA-Z0-9_\-/.]+)(?:"|\')',
            r'(?:api|endpoint|url)["\'\s]*[:=]["\'\s]*([^"\'\s]+)'
        ]

        # Secret patterns
        secret_patterns = [
            r'(?i)(api[_\-]?key|apikey)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})',
            r'(?i)(secret|password|pwd)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{8,})',
            r'(?i)(token|auth)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})'
        ]

        # Domain patterns
        domain_patterns = [
            r'(?:https?://)?([a-zA-Z0-9][a-zA-Z0-9-]{1,61}[a-zA-Z0-9]\.[a-zA-Z]{2,})',
            r'(?:"|\')([a-zA-Z0-9][a-zA-Z0-9-]{1,61}[a-zA-Z0-9]\.[a-zA-Z]{2,})(?:"|\')'
        ]

        # Parameter patterns
        parameter_patterns = [
            r'(?:\?|&)([a-zA-Z0-9_\-]+)=',
            r'(?:param|parameter|arg)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]+)'
        ]

        # Extract endpoints
        for pattern in endpoint_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for match in matches:
                if match and match not in results['endpoints']:
                    results['endpoints'].append(match)

        # Extract secrets
        for pattern in secret_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for match in matches:
                if isinstance(match, tuple) and len(match) >= 2:
                    secret_info = "%s: %s" % (match[0], match[1])
                    if secret_info not in results['secrets']:
                        results['secrets'].append(secret_info)

        # Extract domains
        for pattern in domain_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for match in matches:
                if match and match not in results['domains']:
                    results['domains'].append(match)

        # Extract parameters
        for pattern in parameter_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for match in matches:
                if match and match not in results['parameters']:
                    results['parameters'].append(match)

        return results

    def _add_results(self, source, results):
        """Add parsing results to the UI"""

        self._results_area.append("\n=== %s ===\n" % source)

        if results['endpoints']:
            self._results_area.append("Endpoints (%d):\n" % len(results['endpoints']))
            for endpoint in results['endpoints'][:10]:  # Show first 10
                self._results_area.append("  - %s\n" % endpoint)
            if len(results['endpoints']) > 10:
                self._results_area.append("  ... and %d more\n" % (len(results['endpoints']) - 10))

        if results['secrets']:
            self._results_area.append("Secrets (%d):\n" % len(results['secrets']))
            for secret in results['secrets'][:5]:  # Show first 5
                self._results_area.append("  - %s\n" % secret)
            if len(results['secrets']) > 5:
                self._results_area.append("  ... and %d more\n" % (len(results['secrets']) - 5))

        if results['domains']:
            self._results_area.append("Domains (%d):\n" % len(results['domains']))
            for domain in results['domains'][:10]:  # Show first 10
                self._results_area.append("  - %s\n" % domain)

        if results['parameters']:
            self._results_area.append("Parameters (%d):\n" % len(results['parameters']))
            for param in results['parameters'][:10]:  # Show first 10
                self._results_area.append("  - %s\n" % param)

        self._results_area.append("\n" + "=" * 50 + "\n")

    def _clear_results(self, event):
        self._results_area.setText("")

```

OWASP ZAP Integration

```python

#!/usr/bin/env python3
# OWASP ZAP script for JSParser integration

import json
import re
import requests
from zapv2 import ZAPv2

class ZAPJSParser:
    def __init__(self, zap_proxy='http://127.0.0.1:8080'):
        self.zap = ZAPv2(proxies={'http': zap_proxy, 'https': zap_proxy})

    def get_js_responses(self, target_url):
        """Get all JavaScript responses from ZAP history"""

        js_responses = []

        # Get all messages from history, filtered by base URL
        messages = self.zap.core.messages(baseurl=target_url)

        for message in messages:
            msg_detail = self.zap.core.message(message['id'])

            # Check if response is JavaScript
            response_header = msg_detail.get('responseHeader', '')
            if 'javascript' in response_header.lower() or 'application/js' in response_header.lower():
                js_responses.append({
                    'url': msg_detail.get('requestHeader', '').split('\n')[0].split(' ')[1],
                    'response_body': msg_detail.get('responseBody', ''),
                    'message_id': message['id']
                })

        return js_responses

    def parse_js_responses(self, js_responses):
        """Parse JavaScript responses for endpoints and secrets"""

        all_results = []

        for response in js_responses:
            content = response['response_body']
            url = response['url']

            results = self.parse_javascript_content(content)
            results['source_url'] = url
            results['message_id'] = response['message_id']

            all_results.append(results)

        return all_results

    def parse_javascript_content(self, content):
        """Parse JavaScript content for various patterns"""

        results = {
            'endpoints': [],
            'secrets': [],
            'domains': [],
            'parameters': []
        }

        # Endpoint patterns
        endpoint_patterns = [
            r'(?:"|\')([a-zA-Z0-9_\-/.]+\.(?:php|asp|aspx|jsp|json|action|html|js|txt|xml))(?:"|\')',
            r'(?:"|\')(/[a-zA-Z0-9_\-/.]+)(?:"|\')',
            r'(?:fetch|axios|ajax)\s*\(["\']([^"\']+)["\']'
        ]

        # Secret patterns
        secret_patterns = [
            r'(?i)(api[_\-]?key|apikey)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})',
            r'(?i)(secret|password|pwd)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{8,})',
            r'(?i)(token|auth)["\'\s]*[:=]["\'\s]*([a-zA-Z0-9_\-]{10,})'
        ]

        # Extract patterns
        for pattern in endpoint_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            results['endpoints'].extend([m for m in matches if m not in results['endpoints']])

        for pattern in secret_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for match in matches:
                if isinstance(match, tuple) and len(match) >= 2:
                    secret_info = {'type': match[0], 'value': match[1]}
                    if secret_info not in results['secrets']:
                        results['secrets'].append(secret_info)

        return results

    def create_zap_alerts(self, parsed_results):
        """Create ZAP alerts for discovered endpoints and secrets"""

        for result in parsed_results:
            source_url = result['source_url']
            message_id = result['message_id']

            # Create alerts for secrets
            for secret in result['secrets']:
                alert_data = {
                    'alert': 'Sensitive Information in JavaScript',
                    'description': f"Potential secret found: {secret['type']} = {secret['value']}",
                    'risk': 'Medium',
                    'confidence': 'Medium',
                    'url': source_url,
                    'param': secret['type'],
                    'evidence': secret['value']
                }

                # Add alert to ZAP
                self.zap.core.add_alert(
                    messageId=message_id,
                    name=alert_data['alert'],
                    riskId=2,  # Medium risk
                    confidenceId=2,  # Medium confidence
                    description=alert_data['description'],
                    param=alert_data['param'],
                    evidence=alert_data['evidence']
                )

            # Create alerts for interesting endpoints
            for endpoint in result['endpoints']:
                if any(keyword in endpoint.lower() for keyword in ['admin', 'api', 'secret', 'config', 'debug']):
                    alert_data = {
                        'alert': 'Interesting Endpoint in JavaScript',
                        'description': f"Potentially interesting endpoint found: {endpoint}",
                        'risk': 'Low',
                        'confidence': 'Medium',
                        'url': source_url,
                        'evidence': endpoint
                    }

                    self.zap.core.add_alert(
                        messageId=message_id,
                        name=alert_data['alert'],
                        riskId=1,  # Low risk
                        confidenceId=2,  # Medium confidence
                        description=alert_data['description'],
                        evidence=alert_data['evidence']
                    )

# Usage example
if __name__ == "__main__":
    # Initialize ZAP JSParser
    zap_parser = ZAPJSParser()

    # Get JavaScript responses from ZAP history
    js_responses = zap_parser.get_js_responses("https://example.com")

    # Parse responses
    parsed_results = zap_parser.parse_js_responses(js_responses)

    # Create ZAP alerts
    zap_parser.create_zap_alerts(parsed_results)

    # Print summary
    total_endpoints = sum(len(r['endpoints']) for r in parsed_results)
    total_secrets = sum(len(r['secrets']) for r in parsed_results)

    print(f"Parsed {len(js_responses)} JavaScript files")
    print(f"Found {total_endpoints} endpoints and {total_secrets} secrets")
    print("Alerts created in ZAP")
```

Performance Optimization and Troubleshooting

Performance Tuning

```bash

# Optimize JSParser for different scenarios

# Fast parsing with minimal patterns
jsparser -f script.js --endpoints-only

# Memory-efficient parsing for large files
jsparser -f large_script.js --max-size 10MB

# Parallel processing for multiple files
find /path/to/js/files -name "*.js" | xargs -P 10 -I {} jsparser -f {}

# Network-optimized parsing
jsparser -u https://example.com/script.js --timeout 30 --retries 3

# Performance monitoring script
#!/bin/bash

monitor_jsparser_performance() {
    local target="$1"
    local output_file="jsparser-performance-$(date +%s).log"

    echo "Monitoring JSParser performance for: $target"

# Start monitoring
{
    echo "Timestamp,CPU%,Memory(MB),Status"
    while true; do
        if pgrep -f "jsparser" > /dev/null; then
            local cpu=$(ps -p $(pgrep -f "jsparser") -o %cpu --no-headers)
            local mem=$(ps -p $(pgrep -f "jsparser") -o rss --no-headers | awk '{print $1/1024}')
            echo "$(date +%s),$cpu,$mem,running"
        fi
        sleep 1
    done
} > "$output_file" &

local monitor_pid=$!

# Run JSParser
time jsparser -f "$target"

# Stop monitoring
kill $monitor_pid 2>/dev/null

echo "Performance monitoring completed: $output_file"

}

# Usage
monitor_jsparser_performance "large_script.js"
```
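
The monitor above samples the process from the outside; for per-run numbers from inside Python, `time.perf_counter` and `tracemalloc` work well. A sketch where the regex pass is a stand-in for the actual parsing, not JSParser's internals:

```python
import re
import time
import tracemalloc

def profile_parse(file_path):
    """Measure wall-clock time and peak memory for a simple pattern pass."""
    tracemalloc.start()
    start = time.perf_counter()

    with open(file_path, encoding="utf-8", errors="ignore") as f:
        content = f.read()
    endpoints = re.findall(r'["\'](/[a-zA-Z0-9_\-/.]+)["\']', content)

    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"{file_path}: {len(endpoints)} endpoints, "
          f"{elapsed:.2f}s, peak memory {peak / 1024 / 1024:.1f} MB")

if __name__ == "__main__":
    profile_parse("large_script.js")
```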

Troubleshooting Common Issues

```bash

# Troubleshooting script for JSParser
troubleshoot_jsparser() {
    echo "JSParser Troubleshooting Guide"
    echo "=============================="

# Check if Python is installed
if ! command -v python3 &> /dev/null; then
    echo "❌ Python 3 not found"
    echo "Solution: Install Python 3"
    return 1
fi

echo "✅ Python 3 found: $(python3 --version)"

# Check if JSParser is available
if ! command -v jsparser &> /dev/null && [ ! -f "jsparser.py" ]; then
    echo "❌ JSParser not found"
    echo "Solution: Clone JSParser repository or install from source"
    return 1
fi

echo "✅ JSParser found"

# Check Python dependencies
    local required_modules=("requests" "bs4" "argparse")
for module in "${required_modules[@]}"; do
    if python3 -c "import $module" 2>/dev/null; then
        echo "✅ Python module '$module': OK"
    else
        echo "❌ Python module '$module': Missing"
        echo "Solution: pip install $module"
    fi
done

# Check network connectivity
if ! curl -s --connect-timeout 5 https://httpbin.org/get > /dev/null; then
    echo "❌ Network connectivity issues"
    echo "Solution: Check internet connection"
    return 1
fi

echo "✅ Network connectivity OK"

# Test basic functionality
echo "Testing basic JSParser functionality..."

# Create test JavaScript file
cat > test_script.js << 'EOF'

var apiEndpoint = "/api/users";
var secretKey = "abc123def456";
var domain = "example.com";

function testFunction() {
    fetch("/api/data");
}
EOF

if jsparser -f test_script.js > /dev/null 2>&1; then
    echo "✅ Basic functionality test passed"
else
    echo "❌ Basic functionality test failed"
    echo "Solution: Check JSParser installation and dependencies"
fi

# Clean up
rm -f test_script.js

echo "Troubleshooting completed"

}

# Common error solutions
fix_common_jsparser_errors() {
    echo "Common JSParser Error Solutions"
    echo "==============================="

echo "1. 'ModuleNotFoundError: No module named ...'"
echo "   Solution: pip install -r requirements.txt"
echo "   Alternative: pip install requests beautifulsoup4"
echo ""

echo "2. 'Permission denied' when running jsparser"
echo "   Solution: chmod +x jsparser.py"
echo "   Alternative: python3 jsparser.py [options]"
echo ""

echo "3. 'Connection timeout' or network errors"
echo "   Solution: Increase timeout with --timeout option"
echo "   Example: jsparser -u <url> --timeout 60"
echo ""

echo "4. 'No results found' (false negatives)"
echo "   Solution: Check if file contains JavaScript code"
echo "   Alternative: Use custom patterns or manual inspection"
echo ""

echo "5. 'Memory error' for large files"
echo "   Solution: Process files in chunks or increase system memory"
echo "   Alternative: Use streaming processing"
echo ""

echo "6. 'SSL certificate error'"
echo "   Solution: Use --no-ssl-verify option (not recommended for production)"
echo ""

echo "7. 'Too many false positives'"
echo "   Solution: Use more specific patterns or post-processing filters"
echo "   Example: Filter out common false positives"

}

# Run troubleshooting
troubleshoot_jsparser
fix_common_jsparser_errors
```
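
For error 5 above ("Memory error" on large files), scanning in overlapping chunks keeps memory usage flat; the overlap catches matches that straddle chunk boundaries. A sketch of that streaming approach, with arbitrary default chunk and overlap sizes:

```python
import re

def scan_in_chunks(file_path, pattern, chunk_size=1024 * 1024, overlap=4096):
    """Scan a large file chunk by chunk; the overlap catches matches
    that would otherwise straddle a chunk boundary."""
    regex = re.compile(pattern)
    matches = set()
    tail = ""

    with open(file_path, encoding="utf-8", errors="ignore") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            window = tail + chunk
            matches.update(regex.findall(window))
            tail = window[-overlap:]  # keep the end for boundary matches

    return sorted(matches)

if __name__ == "__main__":
    found = scan_in_chunks("large_script.js", r'["\'](/[a-zA-Z0-9_\-/.]+)["\']')
    print(f"{len(found)} unique endpoints")
```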

Resources and Documentation

Official Resources

Community Resources

Integration Examples