# Cutter Cheat Sheet

## Overview

Cutter is a modern, free, and open-source reverse engineering platform powered by the Rizin framework, designed to provide an intuitive and powerful graphical interface for binary analysis and reverse engineering tasks. Developed as a Qt-based GUI for Rizin (a fork of radare2), Cutter combines the analysis capabilities of command-line reverse engineering tools with a user-friendly graphical interface that makes advanced binary analysis accessible to beginners and experienced reverse engineers alike. The platform has seen significant adoption in the cybersecurity community as a viable alternative to expensive commercial reverse engineering tools, offering comparable functionality without licensing costs.

Cutter's core strength is its comprehensive analysis engine, which handles multiple architectures including x86, x64, ARM, MIPS, PowerPC, and many others, making it suitable for analyzing binaries from a wide range of platforms, including Windows, Linux, macOS, Android, and embedded systems. Cutter offers advanced features such as automatic function detection, control-flow graph generation, cross-reference analysis, string analysis, and identification of cryptographic routines. The platform's modular architecture allows extensive customization through plugins and scripts, letting users extend its functionality for specialized analysis tasks.

Cutter's modern interface design focuses on workflow efficiency, with multiple synchronized views, including disassembly, hexdump, graph view, and decompiler output, that can be arranged and customized to the user's preferences. The platform integrates debugging capabilities, supports a variety of file formats including PE, ELF, Mach-O, and raw binaries, and offers collaborative features for team reverse engineering projects. Its active development community and extensive documentation make it an excellent choice for malware analysis, vulnerability research, software security assessment, and educational use in reverse engineering and binary analysis.

### Windows Installation

Installing Cutter on Windows:

```bash
# Download from GitHub releases
# Visit: https://github.com/rizinorg/cutter/releases

# Download and run the Windows installer (as administrator)
cutter-v2.3.4-Windows-x86_64.exe

# Alternative: portable version
# Download cutter-v2.3.4-Windows-x86_64.zip, extract to the desired location,
# and run cutter.exe

# Install via Chocolatey
choco install cutter

# Install via Scoop
scoop bucket add extras
scoop install cutter

# Verify installation
cutter --version
```

### Linux Installation

Installing Cutter on Linux distributions:

```bash
# Ubuntu/Debian installation
sudo apt update
sudo apt install cutter

# Alternative: Download AppImage
wget https://github.com/rizinorg/cutter/releases/download/v2.3.4/Cutter-v2.3.4-Linux-x86_64.AppImage
chmod +x Cutter-v2.3.4-Linux-x86_64.AppImage
./Cutter-v2.3.4-Linux-x86_64.AppImage

# Arch Linux installation
sudo pacman -S cutter

# Fedora installation
sudo dnf install cutter

# Build from source
git clone https://github.com/rizinorg/cutter.git
cd cutter
git submodule update --init --recursive

# Install dependencies
sudo apt install qt5-default libqt5svg5-dev qttools5-dev qttools5-dev-tools

# Build
mkdir build && cd build
cmake ..
make -j$(nproc)
sudo make install
```

### macOS Installation

```bash
# Install via Homebrew
brew install --cask cutter

# Alternative: Download DMG
# Visit: https://github.com/rizinorg/cutter/releases
# Download: Cutter-v2.3.4-macOS-x86_64.dmg
# Install by dragging to Applications

# Build from source
git clone https://github.com/rizinorg/cutter.git
cd cutter
git submodule update --init --recursive

# Install dependencies
brew install qt5 cmake

# Build
mkdir build && cd build
cmake ..
make -j$(sysctl -n hw.ncpu)
```

### Docker Installation

```bash
# Create Cutter Docker environment
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    cutter \
    xvfb x11vnc fluxbox \
    wget curl

# Setup VNC for GUI access
EXPOSE 5900

# Start script
COPY start.sh /start.sh
RUN chmod +x /start.sh

CMD ["/start.sh"]
EOF

# Create start script
cat > start.sh << 'EOF'
#!/bin/bash
Xvfb :1 -screen 0 1024x768x16 &
export DISPLAY=:1
fluxbox &
x11vnc -display :1 -nopw -listen localhost -xkb &
cutter
EOF

# Build and run
docker build -t cutter-re .
docker run -p 5900:5900 -v $(pwd)/samples:/samples cutter-re
```

## Basic Usage

### Opening and Loading Files

Basic file operations in Cutter:

```bash
# Launch Cutter
cutter

# Open file via command line
cutter /path/to/binary

# Open file with specific options
cutter -A /path/to/binary  # Auto-analysis
cutter -e bin.cache=true /path/to/binary  # Enable caching

# Load file in Cutter GUI
# File -> Open File
# Select binary file
# Choose analysis options:
#   - Auto-analysis level (0-4)
#   - Architecture (if not auto-detected)
#   - Bits (32/64)
#   - Endianness
#   - Base address

# Load raw binary
# File -> Open File
# Select "Raw binary" format
# Specify architecture and base address

# Load from URL
# File -> Open URL
# Enter URL to binary file
```

### Basic Navigation

Navigating through the binary:

```bash
# Navigation shortcuts
# G - Go to address/function
# Space - Switch between graph and linear view
# Tab - Switch between panels
# Ctrl+F - Search
# Ctrl+G - Go to address

# Address navigation
# Click on addresses in disassembly
# Use address bar at top
# Right-click -> "Go to" options

# Function navigation
# Functions panel (left sidebar)
# Click function name to navigate
# Use function list dropdown

# Cross-references
# Right-click on instruction
# "Show X-Refs" to see references
# "Show X-Refs to" to see what references this

# Bookmarks
# Right-click -> "Add bookmark"
# View -> Bookmarks panel
# Navigate to saved locations
```
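
Navigation can also be driven from the integrated Python console (View -> Console); a minimal sketch using rizin's seek command, where the absolute address is a hypothetical example:

```python
import cutter

cutter.cmd("s main")          # seek to the main function (if the symbol exists)
print(cutter.cmd("s"))        # print the current seek address
cutter.cmd("s 0x00401000")    # seek to an absolute address (example value)
```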

### Basic Analysis

Performing basic binary analysis:

```bash
# Automatic analysis
# Analysis -> Auto Analysis
# Choose analysis level:
#   - Level 0: Basic (fast)
#   - Level 1: Advanced (recommended)
#   - Level 2: Expert (slow but thorough)

# Manual analysis commands (in console)
# View -> Console to open Rizin console

# Basic information
i          # File information
ii         # Imports
ie         # Exports
is         # Symbols
iz         # Strings
iS         # Sections

# Function analysis
afl        # List functions
af         # Analyze function at current address
afi        # Function information
afv        # Function variables

# String analysis
izz        # All strings
iz~password  # Search for strings containing "password"

# Cross-references
axt        # Cross-references to current address
axf        # Cross-references from current address
```
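
The information commands above have JSON variants (suffix `j`) that are convenient to combine from the Python console; a small sketch summarizing a loaded binary:

```python
import cutter

cutter.cmd("aaa")              # run auto-analysis first
info    = cutter.cmdj("ij")    # file information
imports = cutter.cmdj("iij")   # imports
strings = cutter.cmdj("izzj")  # all strings
funcs   = cutter.cmdj("aflj")  # functions

print(f"{len(funcs)} functions, {len(imports)} imports, {len(strings)} strings")
for s in strings:
    if "password" in s.get("string", "").lower():
        print(hex(s["vaddr"]), s["string"])
```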

## Advanced Features

### Graph View Analysis

Using the graph view for control flow analysis:

```bash
# Switch to graph view
# Press Space or View -> Graph

# Graph navigation
# Mouse wheel - Zoom in/out
# Middle mouse drag - Pan
# Click nodes to navigate
# Double-click to enter function

# Graph layout options
# Right-click in graph area
# Layout options:
#   - Hierarchical
#   - Radial
#   - Force-directed

# Minimap
# View -> Show Minimap
# Navigate large graphs quickly

# Graph analysis features
# Highlight paths between nodes
# Identify loops and branches
# Analyze function complexity
# Export graph as image

# Custom graph views
# Create custom graphs for specific analysis
# Filter nodes by criteria
# Focus on specific code paths
```
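
Graphs can also be produced from the console; a hedged sketch using rizin's `agf`/`agfd` graph commands (availability and exact output depend on the rizin version):

```python
import cutter

cutter.cmd("aaa")                     # ensure functions are analyzed
print(cutter.cmd("agf @ main"))       # ASCII control-flow graph of main

dot = cutter.cmd("agfd @ main")       # Graphviz dot output
with open("main_cfg.dot", "w") as f:  # render later with: dot -Tpng main_cfg.dot
    f.write(dot)
```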

### Decompiler Integration

Using the built-in decompiler:

```bash
# Enable decompiler view
# View -> Decompiler
# Or press F5 in function

# Decompiler options
# Right-click in decompiler view
# Options:
#   - Rename variables
#   - Change variable types
#   - Add comments
#   - Export decompiled code

# Decompiler backends
# Preferences -> Decompiler
# Available backends:
#   - Ghidra decompiler (r2ghidra)
#   - RetDec
#   - Snowman

# Synchronization
# Decompiler view syncs with disassembly
# Click in decompiler to highlight assembly
# Modifications reflect in both views

# Export decompiled code
# Right-click -> Export
# Save as C source file
# Include comments and annotations
```
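
With the rz-ghidra plugin installed, decompiled pseudo-C can also be retrieved from the console via the `pdg` command; a minimal sketch:

```python
import cutter

# Decompile a named function with the Ghidra backend (requires rz-ghidra)
pseudo_c = cutter.cmd("pdg @ main")
print(pseudo_c)

# Keep a copy next to the project for later comparison
with open("main_decompiled.c", "w") as f:
    f.write(pseudo_c)
```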

### Debugging Integration

Debugging capabilities in Cutter:

```bash
# Start debugging session
# Debug -> Start Debug Session
# Choose debugger backend:
#   - Native debugger
#   - GDB
#   - WinDbg (Windows)

# Set breakpoints
# Click on line number in disassembly
# Right-click -> Toggle breakpoint
# Conditional breakpoints available

# Debug controls
# F9 - Continue
# F10 - Step over
# F11 - Step into
# Shift+F11 - Step out
# Ctrl+F2 - Restart

# Watch variables
# Debug -> Registers panel
# Debug -> Stack panel
# Debug -> Memory panel
# Add custom watches

# Memory examination
# View -> Memory Map
# Examine memory regions
# Modify memory values
# Search memory patterns
```
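
The debugger can likewise be scripted from the console with rizin's `d` command family; a hedged sketch assuming a local native debugging target:

```python
import cutter

cutter.cmd("ood")          # reopen the current file in debug mode
cutter.cmd("db main")      # set a breakpoint on main
cutter.cmd("dc")           # continue execution until the breakpoint
print(cutter.cmd("dr"))    # dump register values
cutter.cmd("ds")           # single-step one instruction
```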

### Plugin System

Extending Cutter with plugins:

```python
# Plugin management
# Edit -> Preferences -> Plugins
# Enable/disable plugins
# Install new plugins

# Popular plugins
# r2ghidra - Ghidra decompiler integration
# r2dec - Alternative decompiler
# r2pipe - Python scripting
# r2yara - YARA rule integration

# Python scripting
# Tools -> Python Console
# Write custom analysis scripts
# Automate repetitive tasks

# Example Python script
import cutter

# Get current function
func = cutter.cmdj("afij")
print(f"Function: {func[0]['name']}")

# Get strings
strings = cutter.cmdj("izj")
for s in strings:
    print(f"String: {s['string']}")

# Custom analysis
# Create custom analysis plugins
# Extend Cutter functionality
# Share plugins with community
```
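
Beyond console scripts, Cutter can load Python plugins at startup; a minimal skeleton following the plugin interface exposed by the `cutter` module (treat the class and hook names as a sketch to adapt to your Cutter version):

```python
import cutter

class ExampleAnalysisPlugin(cutter.CutterPlugin):
    name = "Example Analysis Plugin"
    description = "Skeleton plugin that runs a custom analysis task"
    version = "1.0"
    author = "example"

    def setupPlugin(self):
        # One-time initialization when the plugin is loaded
        pass

    def setupInterface(self, main):
        # Register dock widgets, menu entries, etc. on the main window here
        pass

    def terminate(self):
        # Cleanup when Cutter shuts down
        pass

# Factory function Cutter looks for when loading the plugin file
def create_cutter_plugin():
    return ExampleAnalysisPlugin()
```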

## Automation Scripts

### Automated Malware Analysis

```python
#!/usr/bin/env python3
# Automated malware analysis with Cutter

import cutter
import json
import os
import hashlib
from datetime import datetime

class CutterMalwareAnalyzer:
    def __init__(self, sample_path):
        self.sample_path = sample_path
        self.analysis_results = {}
        self.indicators = []

        # Calculate file hash
        with open(sample_path, 'rb') as f:
            self.file_hash = hashlib.sha256(f.read()).hexdigest()

    def basic_analysis(self):
        """Perform basic static analysis"""
        print("Performing basic analysis...")

        # File information
        file_info = cutter.cmdj("ij")
        self.analysis_results["file_info"] = file_info

        # Sections
        sections = cutter.cmdj("iSj")
        self.analysis_results["sections"] = sections

        # Imports
        imports = cutter.cmdj("iij")
        self.analysis_results["imports"] = imports

        # Exports
        exports = cutter.cmdj("iej")
        self.analysis_results["exports"] = exports

        # Strings
        strings = cutter.cmdj("izzj")
        self.analysis_results["strings"] = strings

        print(f"Found {len(imports)} imports, {len(exports)} exports, {len(strings)} strings")

    def function_analysis(self):
        """Analyze functions in the binary"""
        print("Analyzing functions...")

        # Auto-analyze functions
        cutter.cmd("aaa")

        # Get function list
        functions = cutter.cmdj("aflj")
        self.analysis_results["functions"] = functions

        # Analyze suspicious functions
        suspicious_functions = []

        for func in functions:
            func_name = func.get("name", "")

            # Check for suspicious function names
            suspicious_keywords = [
                "crypt", "encode", "decode", "obfus", "pack",
                "inject", "hook", "patch", "shell", "exec",
                "download", "upload", "connect", "socket"
            ]

            if any(keyword in func_name.lower() for keyword in suspicious_keywords):
                suspicious_functions.append(func)
                self.indicators.append({
                    "type": "suspicious_function",
                    "value": func_name,
                    "address": func.get("offset"),
                    "description": f"Suspicious function name: {func_name}"
                })

        self.analysis_results["suspicious_functions"] = suspicious_functions
        print(f"Found {len(suspicious_functions)} suspicious functions")

    def string_analysis(self):
        """Analyze strings for indicators"""
        print("Analyzing strings...")

        strings = self.analysis_results.get("strings", [])

        # Suspicious string patterns
        suspicious_patterns = [
            r"http[s]?://",  # URLs
            r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b",  # IP addresses
            r"[A-Za-z0-9+/]{20,}={0,2}",  # Base64
            r"\\x[0-9a-fA-F]{2}",  # Hex encoded
            r"cmd\.exe|powershell|bash|sh",  # Shell commands
            r"CreateProcess|ShellExecute|WinExec",  # Process creation
            r"RegOpenKey|RegSetValue|RegDeleteKey",  # Registry operations
            r"CreateFile|WriteFile|ReadFile",  # File operations
            r"socket|connect|send|recv",  # Network operations
        ]

        import re

        suspicious_strings = []

        for string_obj in strings:
            string_value = string_obj.get("string", "")

            for pattern in suspicious_patterns:
                if re.search(pattern, string_value, re.IGNORECASE):
                    suspicious_strings.append(string_obj)
                    self.indicators.append({
                        "type": "suspicious_string",
                        "value": string_value,
                        "address": string_obj.get("vaddr"),
                        "pattern": pattern,
                        "description": f"Suspicious string matching pattern: {pattern}"
                    })
                    break

        self.analysis_results["suspicious_strings"] = suspicious_strings
        print(f"Found {len(suspicious_strings)} suspicious strings")

    def import_analysis(self):
        """Analyze imports for suspicious APIs"""
        print("Analyzing imports...")

        imports = self.analysis_results.get("imports", [])

        # Suspicious API categories
        suspicious_apis = {
            "process_injection": [
                "CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx",
                "OpenProcess", "NtCreateThreadEx", "RtlCreateUserThread"
            ],
            "persistence": [
                "RegSetValueEx", "RegCreateKeyEx", "CreateService",
                "SetWindowsHookEx", "SetTimer"
            ],
            "evasion": [
                "IsDebuggerPresent", "CheckRemoteDebuggerPresent",
                "GetTickCount", "QueryPerformanceCounter", "Sleep"
            ],
            "network": [
                "WSAStartup", "socket", "connect", "send", "recv",
                "InternetOpen", "HttpOpenRequest", "HttpSendRequest"
            ],
            "crypto": [
                "CryptAcquireContext", "CryptCreateHash", "CryptEncrypt",
                "CryptDecrypt", "CryptGenKey"
            ]
        }

        suspicious_imports = []

        for import_obj in imports:
            import_name = import_obj.get("name", "")

            for category, apis in suspicious_apis.items():
                if import_name in apis:
                    suspicious_imports.append({
                        "import": import_obj,
                        "category": category,
                        "api": import_name
                    })

                    self.indicators.append({
                        "type": "suspicious_import",
                        "value": import_name,
                        "category": category,
                        "description": f"Suspicious API import: {import_name} ({category})"
                    })

        self.analysis_results["suspicious_imports"] = suspicious_imports
        print(f"Found {len(suspicious_imports)} suspicious imports")

    def entropy_analysis(self):
        """Analyze entropy of sections"""
        print("Analyzing entropy...")

        sections = self.analysis_results.get("sections", [])
        high_entropy_sections = []

        for section in sections:
            # Get section data
            section_name = section.get("name", "")
            section_addr = section.get("vaddr", 0)
            section_size = section.get("vsize", 0)

            if section_size > 0:
                # Calculate entropy (simplified)
                try:
                    data = cutter.cmd(f"p8 {section_size} @ {section_addr}")
                    if data:
                        entropy = self.calculate_entropy(bytes.fromhex(data))
                        section["entropy"] = entropy

                        # High entropy might indicate packed/encrypted data
                        if entropy > 7.0:
                            high_entropy_sections.append(section)
                            self.indicators.append({
                                "type": "high_entropy_section",
                                "value": section_name,
                                "entropy": entropy,
                                "description": f"High entropy section: {section_name} (entropy: {entropy:.2f})"
                            })
                except:
                    pass

        self.analysis_results["high_entropy_sections"] = high_entropy_sections
        print(f"Found {len(high_entropy_sections)} high entropy sections")

    def calculate_entropy(self, data):
        """Calculate Shannon entropy of data"""
        import math
        from collections import Counter

        if not data:
            return 0

        # Count byte frequencies
        byte_counts = Counter(data)
        data_len = len(data)

        # Calculate entropy
        entropy = 0
        for count in byte_counts.values():
            probability = count / data_len
            entropy -= probability * math.log2(probability)

        return entropy

    def generate_report(self, output_file=None):
        """Generate analysis report"""

        if not output_file:
            output_file = f"malware_analysis_{self.file_hash[:8]}.json"

        report = {
            "analysis_info": {
                "file_path": self.sample_path,
                "file_hash": self.file_hash,
                "timestamp": datetime.now().isoformat(),
                "total_indicators": len(self.indicators)
            },
            "analysis_results": self.analysis_results,
            "indicators": self.indicators,
            "summary": {
                "suspicious_functions": len(self.analysis_results.get("suspicious_functions", [])),
                "suspicious_strings": len(self.analysis_results.get("suspicious_strings", [])),
                "suspicious_imports": len(self.analysis_results.get("suspicious_imports", [])),
                "high_entropy_sections": len(self.analysis_results.get("high_entropy_sections", []))
            }
        }

        with open(output_file, 'w') as f:
            json.dump(report, f, indent=2)

        print(f"Analysis report saved: {output_file}")
        return report

    def run_full_analysis(self):
        """Run complete malware analysis"""
        print(f"Starting malware analysis of: {self.sample_path}")

        self.basic_analysis()
        self.function_analysis()
        self.string_analysis()
        self.import_analysis()
        self.entropy_analysis()

        report = self.generate_report()

        print(f"Analysis completed. Found {len(self.indicators)} indicators.")
        return report

# Usage in Cutter
if __name__ == "__main__":
    # This script should be run within Cutter's Python console
    sample_path = "/path/to/malware/sample"

    analyzer = CutterMalwareAnalyzer(sample_path)
    report = analyzer.run_full_analysis()
```

### Batch Binary Analysis

```python
#!/usr/bin/env python3
# Batch binary analysis script

import os
import json
import subprocess
import hashlib
from datetime import datetime
from pathlib import Path

class CutterBatchAnalyzer:
    def __init__(self, input_dir, output_dir):
        self.input_dir = Path(input_dir)
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True)
        self.results = []

    def analyze_binary(self, binary_path):
        """Analyze single binary with Cutter"""

        print(f"Analyzing: {binary_path}")

        # Calculate file hash
        with open(binary_path, 'rb') as f:
            file_hash = hashlib.sha256(f.read()).hexdigest()

        # Create Cutter script for analysis
        script_content = f"""
import cutter
import json

# Basic analysis
cutter.cmd("aaa")

# Collect information
results = {{
    "file_info": cutter.cmdj("ij"),
    "functions": cutter.cmdj("aflj"),
    "imports": cutter.cmdj("iij"),
    "exports": cutter.cmdj("iej"),
    "strings": cutter.cmdj("izzj"),
    "sections": cutter.cmdj("iSj")
}}

# Save results
with open("/tmp/cutter_results_{file_hash}.json", "w") as f:
    json.dump(results, f, indent=2)

# Exit Cutter
cutter.cmd("q")
"""

        script_path = f"/tmp/cutter_script_{file_hash}.py"
        with open(script_path, 'w') as f:
            f.write(script_content)

        try:
            # Run Cutter with script
            cmd = [
                "cutter",
                "-A",  # Auto-analysis
                "-i", script_path,  # Run script
                str(binary_path)
            ]

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=300  # 5 minute timeout
            )

            # Load results
            results_file = f"/tmp/cutter_results_{file_hash}.json"
            if os.path.exists(results_file):
                with open(results_file, 'r') as f:
                    analysis_results = json.load(f)

                # Clean up temporary files
                os.remove(script_path)
                os.remove(results_file)

                return {
                    "file_path": str(binary_path),
                    "file_hash": file_hash,
                    "status": "success",
                    "analysis_results": analysis_results,
                    "timestamp": datetime.now().isoformat()
                }
            else:
                return {
                    "file_path": str(binary_path),
                    "file_hash": file_hash,
                    "status": "failed",
                    "error": "No results file generated",
                    "timestamp": datetime.now().isoformat()
                }

        except subprocess.TimeoutExpired:
            return {
                "file_path": str(binary_path),
                "file_hash": file_hash,
                "status": "timeout",
                "error": "Analysis timed out",
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            return {
                "file_path": str(binary_path),
                "file_hash": file_hash,
                "status": "error",
                "error": str(e),
                "timestamp": datetime.now().isoformat()
            }

    def find_binaries(self):
        """Find binary files in input directory"""

        binary_extensions = ['.exe', '.dll', '.so', '.dylib', '.bin']
        binaries = []

        for file_path in self.input_dir.rglob('*'):
            if file_path.is_file():
                # Check by extension
                if file_path.suffix.lower() in binary_extensions:
                    binaries.append(file_path)
                # Check by file command
                elif self.is_binary_file(file_path):
                    binaries.append(file_path)

        return binaries

    def is_binary_file(self, file_path):
        """Check if file is binary using file command"""
        try:
            result = subprocess.run(
                ['file', str(file_path)],
                capture_output=True,
                text=True
            )

            binary_indicators = [
                'executable', 'ELF', 'PE32', 'Mach-O',
                'shared object', 'dynamic library'
            ]

            return any(indicator in result.stdout for indicator in binary_indicators)

        except:
            return False

    def run_batch_analysis(self):
        """Run analysis on all binaries"""

        binaries = self.find_binaries()
        print(f"Found {len(binaries)} binary files to analyze")

        for i, binary_path in enumerate(binaries, 1):
            print(f"Progress: {i}/{len(binaries)}")

            result = self.analyze_binary(binary_path)
            self.results.append(result)

            # Save individual result
            result_file = self.output_dir / f"result_{result['file_hash'][:8]}.json"
            with open(result_file, 'w') as f:
                json.dump(result, f, indent=2)

        # Generate summary report
        self.generate_summary_report()

        print(f"Batch analysis completed. Results saved in: {self.output_dir}")

    def generate_summary_report(self):
        """Generate summary report"""

        successful = len([r for r in self.results if r['status'] == 'success'])
        failed = len([r for r in self.results if r['status'] == 'failed'])
        timeout = len([r for r in self.results if r['status'] == 'timeout'])
        error = len([r for r in self.results if r['status'] == 'error'])

        summary = {
            "batch_analysis_summary": {
                "total_files": len(self.results),
                "successful": successful,
                "failed": failed,
                "timeout": timeout,
                "error": error,
                "success_rate": (successful / len(self.results)) * 100 if self.results else 0
            },
            "results": self.results,
            "timestamp": datetime.now().isoformat()
        }

        summary_file = self.output_dir / "batch_analysis_summary.json"
        with open(summary_file, 'w') as f:
            json.dump(summary, f, indent=2)

        # Generate HTML report
        self.generate_html_report(summary)

    def generate_html_report(self, summary):
        """Generate HTML summary report"""

        html_template = """
<!DOCTYPE html>
<html>
<head>
    <title>Cutter Batch Analysis Report</title>
    <style>
        body {{ font-family: Arial, sans-serif; margin: 20px; }}
        .header {{ background-color: #f0f0f0; padding: 20px; }}
        .summary {{ background-color: #e6f3ff; padding: 15px; margin: 20px 0; }}
        .result {{ margin: 10px 0; padding: 10px; border-left: 4px solid #ccc; }}
        .success {{ border-left-color: #4caf50; }}
        .failed {{ border-left-color: #f44336; }}
        .timeout {{ border-left-color: #ff9800; }}
        .error {{ border-left-color: #9c27b0; }}
    </style>
</head>
<body>
    <div class="header">
        <h1>Cutter Batch Analysis Report</h1>
        <p>Generated: {timestamp}</p>
    </div>

    <div class="summary">
        <h2>Summary</h2>
        <p>Total Files: {total_files}</p>
        <p>Successful: {successful}</p>
        <p>Failed: {failed}</p>
        <p>Timeout: {timeout}</p>
        <p>Error: {error}</p>
        <p>Success Rate: {success_rate:.1f}%</p>
    </div>

    <h2>Results</h2>
    {results_html}
</body>
</html>
        """

        results_html = ""
        for result in summary["results"]:
            status_class = result["status"]
            results_html += f"""
            <div class="result {status_class}">
                <h3>{os.path.basename(result['file_path'])}</h3>
                <p>Status: {result['status'].upper()}</p>
                <p>Hash: {result['file_hash']}</p>
                <p>Timestamp: {result['timestamp']}</p>
                {f"<p>Error: {result.get('error', '')}</p>" if 'error' in result else ""}
            </div>
            """

        html_content = html_template.format(
            timestamp=summary["timestamp"],
            total_files=summary["batch_analysis_summary"]["total_files"],
            successful=summary["batch_analysis_summary"]["successful"],
            failed=summary["batch_analysis_summary"]["failed"],
            timeout=summary["batch_analysis_summary"]["timeout"],
            error=summary["batch_analysis_summary"]["error"],
            success_rate=summary["batch_analysis_summary"]["success_rate"],
            results_html=results_html
        )

        html_file = self.output_dir / "batch_analysis_report.html"
        with open(html_file, 'w') as f:
            f.write(html_content)

# Usage
if __name__ == "__main__":
    input_directory = "/path/to/binaries"
    output_directory = "/path/to/results"

    analyzer = CutterBatchAnalyzer(input_directory, output_directory)
    analyzer.run_batch_analysis()
```

### Function Signature Analysis

```python
#!/usr/bin/env python3
# Function signature analysis and matching

import cutter
import json
import hashlib

class FunctionSignatureAnalyzer:
    def __init__(self):
        self.function_signatures = {}
        self.known_signatures = self.load_known_signatures()

    def load_known_signatures(self):
        """Load known function signatures database"""

        # This would typically load from a database or file
        # For demo purposes, we'll use a small set
        return {
            "crypto_functions": {
                "md5_init": {
                    "pattern": "mov.*0x67452301",
                    "description": "MD5 initialization constant"
                },
                "sha1_init": {
                    "pattern": "mov.*0x67452301.*0xefcdab89",
                    "description": "SHA1 initialization constants"
                },
                "aes_sbox": {
                    "pattern": "0x63.*0x7c.*0x77.*0x7b",
                    "description": "AES S-box constants"
                }
            },
            "compression": {
                "zlib_header": {
                    "pattern": "0x78.*0x9c",
                    "description": "ZLIB header magic"
                }
            },
            "network": {
                "socket_init": {
                    "pattern": "WSAStartup.*0x0202",
                    "description": "Winsock initialization"
                }
            }
        }

    def extract_function_signature(self, func_addr):
        """Extract signature from function"""

        # Get function information
        func_info = cutter.cmdj(f"afij @ {func_addr}")
        if not func_info:
            return None

        func_info = func_info[0]
        func_size = func_info.get("size", 0)

        if func_size == 0:
            return None

        # Get function bytes
        func_bytes = cutter.cmd(f"p8 {func_size} @ {func_addr}")

        # Calculate hash
        func_hash = hashlib.md5(bytes.fromhex(func_bytes)).hexdigest()

        # Get disassembly
        disasm = cutter.cmd(f"pdf @ {func_addr}")

        # Extract constants and patterns
        constants = self.extract_constants(disasm)
        patterns = self.extract_patterns(disasm)

        signature = {
            "address": func_addr,
            "name": func_info.get("name", f"fcn.{func_addr:08x}"),
            "size": func_size,
            "hash": func_hash,
            "constants": constants,
            "patterns": patterns,
            "disassembly": disasm
        }

        return signature

    def extract_constants(self, disassembly):
        """Extract constants from disassembly"""
        import re

        constants = []

        # Look for immediate values
        const_patterns = [
            r'0x[0-9a-fA-F]+',  # Hex constants
            r'\b\d+\b',         # Decimal constants
        ]

        for pattern in const_patterns:
            matches = re.findall(pattern, disassembly)
            constants.extend(matches)

        # Remove duplicates and sort
        return sorted(list(set(constants)))

    def extract_patterns(self, disassembly):
        """Extract instruction patterns from disassembly"""

        lines = disassembly.split('\n')
        patterns = []

        for line in lines:
            # Extract instruction mnemonic
            parts = line.strip().split()
            if len(parts) >= 2:
                instruction = parts[1]  # Skip address
                patterns.append(instruction)

        return patterns

    def match_signature(self, signature):
        """Match signature against known signatures"""

        matches = []

        for category, signatures in self.known_signatures.items():
            for sig_name, sig_data in signatures.items():
                pattern = sig_data["pattern"]
                description = sig_data["description"]

                # Check if pattern matches in disassembly
                if pattern in signature["disassembly"]:
                    matches.append({
                        "category": category,
                        "name": sig_name,
                        "description": description,
                        "confidence": "high"
                    })

                # Check constants
                for const in signature["constants"]:
                    if const in pattern:
                        matches.append({
                            "category": category,
                            "name": sig_name,
                            "description": f"Constant match: {const}",
                            "confidence": "medium"
                        })

        return matches

    def analyze_all_functions(self):
        """Analyze all functions in the binary"""

        print("Analyzing function signatures...")

        # Get all functions
        functions = cutter.cmdj("aflj")

        results = []

        for func in functions:
            func_addr = func.get("offset")
            func_name = func.get("name", "")

            print(f"Analyzing function: {func_name} @ 0x{func_addr:x}")

            # Extract signature
            signature = self.extract_function_signature(func_addr)

            if signature:
                # Match against known signatures
                matches = self.match_signature(signature)

                result = {
                    "function": signature,
                    "matches": matches
                }

                results.append(result)

                if matches:
                    print(f"  Found {len(matches)} signature matches")

        return results

    def generate_signature_report(self, results, output_file="signature_analysis.json"):
        """Generate signature analysis report"""

        # Count matches by category
        category_counts = {}
        total_matches = 0

        for result in results:
            for match in result["matches"]:
                category = match["category"]
                category_counts[category] = category_counts.get(category, 0) + 1
                total_matches += 1

        report = {
            "signature_analysis": {
                "total_functions": len(results),
                "total_matches": total_matches,
                "category_counts": category_counts
            },
            "results": results
        }

        with open(output_file, 'w') as f:
            json.dump(report, f, indent=2)

        print(f"Signature analysis report saved: {output_file}")
        print(f"Total functions analyzed: {len(results)}")
        print(f"Total signature matches: {total_matches}")

        return report

# Usage in Cutter
if __name__ == "__main__":
    analyzer = FunctionSignatureAnalyzer()
    results = analyzer.analyze_all_functions()
    report = analyzer.generate_signature_report(results)
```

## Integration Examples

### IDA Pro Migration

```python
#!/usr/bin/env python3
# IDA Pro to Cutter migration helper

import cutter
import json

class IDACutterMigration:
    def __init__(self):
        self.ida_commands = {
            # IDA command -> Cutter equivalent
            "MakeCode": "af",
            "MakeFunction": "af",
            "MakeName": "afn",
            "MakeComm": "CC",
            "Jump": "s",
            "GetFunctionName": "afi~name",
            "GetString": "ps",
            "FindBinary": "/x",
            "GetBytes": "p8",
            "PatchByte": "wx",
            "ScreenEA": "s",
            "here": "s",
            "BADADDR": "0xffffffff"
        }

    def convert_ida_script(self, ida_script):
        """Convert IDA Python script to Cutter"""

        # Basic conversion patterns
        conversions = [
            ("idc.MakeCode", "cutter.cmd('af')"),
            ("idc.MakeFunction", "cutter.cmd('af')"),
            ("idc.GetFunctionName", "cutter.cmdj('afi')['name']"),
            ("idc.Jump", "cutter.cmd('s')"),
            ("idaapi.get_bytes", "cutter.cmd('p8')"),
            ("idc.here()", "cutter.cmd('s')"),
            ("print", "print")  # Keep print statements
        ]

        converted_script = ida_script

        for ida_pattern, cutter_pattern in conversions:
            converted_script = converted_script.replace(ida_pattern, cutter_pattern)

        return converted_script

    def export_ida_database(self, output_file="ida_export.json"):
        """Export IDA-like database information"""

        # Collect information similar to IDA database
        database = {
            "functions": cutter.cmdj("aflj"),
            "segments": cutter.cmdj("iSj"),
            "imports": cutter.cmdj("iij"),
            "exports": cutter.cmdj("iej"),
            "strings": cutter.cmdj("izzj"),
            "comments": self.get_all_comments(),
            "names": self.get_all_names()
        }

        with open(output_file, 'w') as f:
            json.dump(database, f, indent=2)

        print(f"Database exported to: {output_file}")
        return database

    def get_all_comments(self):
        """Get all comments in the binary"""
        # This would collect all comments
        # Implementation depends on Cutter's comment system
        return []

    def get_all_names(self):
        """Get all named locations"""
        # This would collect all named locations
        # Implementation depends on Cutter's naming system
        return []

# Usage
migration = IDACutterMigration()
database = migration.export_ida_database()
```

### Ghidra Integration

```python
#!/usr/bin/env python3
# Ghidra and Cutter integration

import cutter
import json
import subprocess
import tempfile

class GhidraCutterIntegration:
    def __init__(self, ghidra_path="/opt/ghidra"):
        self.ghidra_path = ghidra_path

    def export_to_ghidra(self, binary_path, project_name):
        """Export binary to Ghidra project"""

        # Create Ghidra headless script
        script_content = f"""
import ghidra.app.util.importer.MessageLog;
import ghidra.app.util.Option;
import ghidra.app.util.bin.format.pe.PortableExecutable;
import ghidra.program.model.listing.Program;
import ghidra.util.task.TaskMonitor;

// Import binary
File binaryFile = new File("{binary_path}");
Program program = importProgram(binaryFile);

// Auto-analyze
analyzeProgram(program, TaskMonitor.DUMMY);

// Export analysis results
exportAnalysisResults(program, "{project_name}_analysis.json");
"""

        # Run Ghidra headless
        with tempfile.NamedTemporaryFile(mode='w', suffix='.java', delete=False) as f:
            f.write(script_content)
            script_path = f.name

        cmd = [
            f"{self.ghidra_path}/support/analyzeHeadless",
            "/tmp/ghidra_projects",
            project_name,
            "-import", binary_path,
            "-postScript", script_path
        ]

        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            return result.returncode == 0
        except Exception as e:
            print(f"Error running Ghidra: {e}")
            return False

    def import_ghidra_analysis(self, analysis_file):
        """Import Ghidra analysis results into Cutter"""

        try:
            with open(analysis_file, 'r') as f:
                ghidra_data = json.load(f)

            # Import functions
            if "functions" in ghidra_data:
                for func in ghidra_data["functions"]:
                    addr = func.get("address")
                    name = func.get("name")

                    if addr and name:
                        cutter.cmd(f"af @ {addr}")
                        cutter.cmd(f"afn {name} @ {addr}")

            # Import comments
            if "comments" in ghidra_data:
                for comment in ghidra_data["comments"]:
                    addr = comment.get("address")
                    text = comment.get("text")

                    if addr and text:
                        cutter.cmd(f"CC {text} @ {addr}")

            print("Ghidra analysis imported successfully")
            return True

        except Exception as e:
            print(f"Error importing Ghidra analysis: {e}")
            return False

# Usage
ghidra_integration = GhidraCutterIntegration()
ghidra_integration.export_to_ghidra("/path/to/binary", "analysis_project")
```

## Troubleshooting

### Common Issues

Installation Problems:

```bash
# Qt dependency issues
sudo apt install qt5-default libqt5svg5-dev

# Build dependency issues
sudo apt install cmake build-essential git

# Python plugin issues
pip install r2pipe

# AppImage execution issues
chmod +x Cutter-*.AppImage
./Cutter-*.AppImage --appimage-extract-and-run
```

Performance Issues:

```bash
# Large binary analysis
# Disable auto-analysis for large files
cutter -A 0 large_binary.exe

# Memory usage optimization
# Limit analysis depth
# Use project files to save state
# Close unused views

# Graph rendering issues
# Reduce graph complexity
# Use linear view for large functions
# Adjust graph layout settings
```

Analysis Issues:

```bash
# Function detection problems
# Manual function creation: af @ address
# Adjust analysis settings
# Use different analysis levels

# Decompiler issues
# Try different decompiler backends
# Check function boundaries
# Verify architecture detection

# Import/export problems
# Check file format support
# Verify file permissions
# Use appropriate import options
```

### Debugging

Enabling debugging and troubleshooting:

```bash
# Verbose output
cutter -v binary_file

# Debug mode
cutter -d binary_file

# Console debugging
# View -> Console
# Use Rizin commands for debugging

# Log file analysis
# Check ~/.local/share/RadareOrg/Cutter/
# Review log files for errors

# Plugin debugging
# Check plugin loading in preferences
# Verify plugin compatibility
# Review plugin logs
```

## Security Considerations

### Safe Analysis Practices

Malware Analysis Safety:

  • Use isolated virtual machines for malware analysis
  • Disable network connectivity when analyzing malware
  • Use snapshots to restore a clean state
  • Implement appropriate containment measures
  • Monitor system behavior during analysis

Data Protection:

  • Encrypt sensitive analysis results
  • Store binary samples securely
  • Implement access controls
  • Back up analysis data regularly
  • Dispose of temporary files securely

### Legal and Ethical Considerations

Reverse Engineering Ethics:

  • Respect software licenses and terms of service
  • Comply with applicable laws and regulations
  • Use reverse engineering for legitimate purposes
  • Avoid copyright infringement
  • Follow responsible disclosure practices

Best Practices:

  • Document your analysis methodology
  • Maintain chain of custody for evidence
  • Implement quality assurance processes
  • Pursue ongoing training and skills development
  • Stay current with legal requirements

## References

  • Cutter Official Website: https://cutter.re
  • Cutter GitHub Repository: https://github.com/rizinorg/cutter
  • Practical Reverse Engineering (book): https://www.amazon.com/Practical-Reverse-Engineering-Reversing-Obfuscation/dp/1118787315
  • Rizin Documentation: https://rizin.re