
# The Sleuth Kit Cheat Sheet


Overview

The Sleuth Kit (TSK) is a comprehensive collection of command-line digital forensics tools that allows investigators to analyze disk images and file systems to recover digital evidence. Developed by Brian Carrier, TSK serves as the foundation for many digital forensics platforms, including Autopsy, and provides low-level access to file system structures and metadata. The toolkit supports multiple file systems, including NTFS, FAT, ext2/3/4, HFS+, and UFS, making it versatile for analyzing evidence from a wide range of operating systems and storage devices.

TSK's strength lies in its modular architecture and command-line interface, which allow precise control over forensic analysis and enable automation through scripting. The toolkit includes tools for file system analysis, timeline creation, metadata extraction, deleted file recovery, and hash calculation. Its ability to work directly with raw disk images and file system structures makes it invaluable for detailed examinations where GUI tools cannot provide sufficiently granular control.

The Sleuth Kit has become the de facto standard for command-line digital forensics, widely adopted by law enforcement agencies, corporate security teams, and incident response professionals. Its open-source nature and extensive documentation have made it a cornerstone of digital forensics education and research. The toolkit's integration with other forensic tools and its support for various output formats make it an essential component of comprehensive digital forensics workflows.
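
As a quick orientation, the commands below string a few core TSK tools into a minimal triage workflow. This is only a sketch: it assumes a raw image named disk_image.dd with a file system starting at sector offset 2048, and the inode 5678 is just a placeholder; adjust offsets and addresses for your own evidence.

# Identify partitions, then inspect the file system inside one of them
mmls disk_image.dd
fsstat -o 2048 disk_image.dd

# List files (including deleted entries) and extract one by inode
fls -r -p -o 2048 disk_image.dd
icat -o 2048 disk_image.dd 5678 > extracted_file.bin

# Build a timeline of file activity
fls -r -m / -o 2048 disk_image.dd > timeline.body
mactime -b timeline.body -d > timeline.csv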

Installation

Package Manager Installation

Installing TSK through system package managers:

# Ubuntu/Debian installation
sudo apt update
sudo apt install sleuthkit

# Kali Linux (pre-installed)
tsk_recover --help

# CentOS/RHEL installation
sudo yum install epel-release
sudo yum install sleuthkit

# Arch Linux installation
sudo pacman -S sleuthkit

# macOS installation
brew install sleuthkit

# Verify installation
mmls -V
fls -V

Source Compilation

Compiling TSK from source code:

# Install dependencies
sudo apt install build-essential autoconf automake libtool
sudo apt install libafflib-dev libewf-dev zlib1g-dev

# Download source code
wget https://github.com/sleuthkit/sleuthkit/releases/download/sleuthkit-4.12.0/sleuthkit-4.12.0.tar.gz
tar -xzf sleuthkit-4.12.0.tar.gz
cd sleuthkit-4.12.0

# Configure build
./configure --enable-java --with-afflib --with-libewf

# Compile and install
make
sudo make install

# Update library cache
sudo ldconfig

# Verify installation
mmls -V

Docker Installation

# Create TSK Docker environment
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    sleuthkit libewf-tools afflib-tools \
    python3 python3-pip file bsdmainutils

WORKDIR /evidence
CMD ["/bin/bash"]
EOF

# Build container
docker build -t sleuthkit-forensics .

# Run with evidence mounted
docker run -it -v $(pwd)/evidence:/evidence sleuthkit-forensics

# Example usage in container
docker run -it -v $(pwd)/evidence:/evidence sleuthkit-forensics mmls /evidence/disk_image.dd

Basic Usage

Disk Image Analysis

Analyzing disk images and partition structures:

# Display partition table
mmls disk_image.dd

# Display detailed partition information
mmls -t dos disk_image.dd
mmls -t gpt disk_image.dd
mmls -t mac disk_image.dd

# Display only allocated volumes
mmls -a disk_image.dd

# Parse a volume system that starts at a sector offset
mmls -o 2048 disk_image.dd

# Display file system information
fsstat -o 2048 disk_image.dd

File System Analysis

Analyzing file systems and directory structures:

# List files in root directory
fls -o 2048 disk_image.dd

# List files recursively
fls -r -o 2048 disk_image.dd

# List deleted files
fls -d -o 2048 disk_image.dd

# List files with full paths
fls -p -o 2048 disk_image.dd

# List files with metadata
fls -l -o 2048 disk_image.dd

# List files in specific directory (inode)
fls -o 2048 disk_image.dd 1234

File Recovery

Recovering files and analyzing file content:

# Extract file by inode
icat -o 2048 disk_image.dd 5678 > recovered_file.txt

# Extract file content including slack space
icat -s -o 2048 disk_image.dd 5678 > recovered_file_slack.txt

# Display file metadata
istat -o 2048 disk_image.dd 5678

# Find the file name(s) that point to an inode
ffind -o 2048 disk_image.dd 5678

# Find the inode allocated to a given file name
ifind -n filename -o 2048 disk_image.dd
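
To recover a file by name rather than by inode, resolve the name to a metadata address with ifind and then extract the content with icat. A minimal sketch, assuming the same image layout as above and a hypothetical path Documents/report.docx:

# Resolve a file name to its inode, then extract the content
inode=$(ifind -n "Documents/report.docx" -o 2048 disk_image.dd)
icat -o 2048 disk_image.dd "$inode" > report.docx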

Advanced Features

Timeline Analysis

Creating and analyzing file system timelines:

# Create timeline in body format
fls -r -m / -o 2048 disk_image.dd > timeline.body

# Create timeline with deleted files
fls -r -d -m / -o 2048 disk_image.dd >> timeline.body

# Convert body file to timeline
mactime -b timeline.body -d > timeline.csv

# Create timeline for specific date range
mactime -b timeline.body -d 2024-01-01..2024-01-31 > january_timeline.csv

# Create timeline with time zone
mactime -b timeline.body -d -z EST5EDT > timeline_est.csv

# Display the month as a number in output dates
mactime -b timeline.body -d -m > timeline_numeric_dates.csv
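
Once the CSV timeline exists, ordinary text tools are usually enough for a first pass. The filters below are an illustrative sketch (not TSK tools), assuming the comma-delimited timeline.csv produced above:

# Show activity for a specific day (mactime -d prints dates like "Mon Jan 15 2024")
grep "Jan 15 2024" timeline.csv

# Count events per day to spot activity spikes
awk -F, '{split($1, d, " "); print d[2], d[3], d[4]}' timeline.csv | sort | uniq -c | sort -rn | head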

Deleted File Recovery

Advanced techniques for recovering deleted files:

# List all deleted files
fls -d -r -o 2048 disk_image.dd

# Find deleted files matching a pattern
fls -d -r -o 2048 disk_image.dd | grep "\.doc$"

# Recover deleted file content
icat -o 2048 disk_image.dd 1234-128-1 > deleted_file.doc

# Extract unallocated space to a file for carving and searching
blkls -o 2048 disk_image.dd > unallocated.raw

# Search for JPEG header signatures in unallocated space
sigfind ffd8ffe0 unallocated.raw

# Carve files from unallocated space
foremost -t all -i unallocated.raw -o carved_files

# Search for specific strings in unallocated space
strings unallocated.raw | grep "password"

Hash Analysis

Calculating and verifying file hashes:

# Calculate MD5 hashes for all files
fls -r -o 2048 disk_image.dd | while read line; do
    inode=$(echo $line | awk '{print $2}' | cut -d: -f1)
    filename=$(echo $line | awk '{print $3}')
    if [[ $inode =~ ^[0-9]+$ ]]; then
        hash=$(icat -o 2048 disk_image.dd $inode | md5sum | cut -d' ' -f1)
        echo "$hash  $filename"
    fi
done > file_hashes.md5

# Calculate SHA-256 hashes
fls -r -o 2048 disk_image.dd | while read line; do
    inode=$(echo $line | awk '{print $2}' | cut -d: -f1)
    filename=$(echo $line | awk '{print $3}')
    if [[ $inode =~ ^[0-9]+$ ]]; then
        hash=$(icat -o 2048 disk_image.dd $inode | sha256sum | cut -d' ' -f1)
        echo "$hash  $filename"
    fi
done > file_hashes.sha256

# Compare computed hashes against a known hash database
awk '{print $1}' known_hashes.txt | grep -F -f - file_hashes.sha256

Metadata Analysis

Analyzing file and file system metadata:

# Display detailed file metadata
istat -o 2048 disk_image.dd 5678

# Display file system metadata
fsstat -o 2048 disk_image.dd

# Display journal information (ext3/4)
jls -o 2048 disk_image.dd

# Display journal entries
jcat -o 2048 disk_image.dd 1234

# Display NTFS MFT entries
istat -o 2048 disk_image.dd 0

# List entries including "." and ".." (NTFS alternate data streams appear as name:stream)
fls -a -o 2048 disk_image.dd

Automation Scripts

Comprehensive Disk Analysis

#!/bin/bash
# Comprehensive TSK disk analysis script

DISK_IMAGE="$1"
OUTPUT_DIR="tsk_analysis_$(date +%Y%m%d_%H%M%S)"

if [ -z "$DISK_IMAGE" ]; then
    echo "Usage: $0 <disk_image>"
    exit 1
fi

if [ ! -f "$DISK_IMAGE" ]; then
    echo "Error: Disk image file not found: $DISK_IMAGE"
    exit 1
fi

echo "Starting comprehensive TSK analysis of: $DISK_IMAGE"
echo "Output directory: $OUTPUT_DIR"

# Create output directory
mkdir -p "$OUTPUT_DIR"

# Step 1: Analyze partition table
echo "Step 1: Analyzing partition table..."
mmls "$DISK_IMAGE" > "$OUTPUT_DIR/partition_table.txt" 2>&1

# Extract partition information
PARTITIONS=$(mmls "$DISK_IMAGE" 2>/dev/null | grep -E "^[0-9]" | awk '{print $3}')

if [ -z "$PARTITIONS" ]; then
    echo "No partitions found, analyzing as single file system..."
    PARTITIONS="0"
fi

# Step 2: Analyze each partition
for OFFSET in $PARTITIONS; do
    echo "Step 2: Analyzing partition at offset $OFFSET..."

    PART_DIR="$OUTPUT_DIR/partition_$OFFSET"
    mkdir -p "$PART_DIR"

    # File system information
    echo "Getting file system information..."
    fsstat -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/filesystem_info.txt" 2>&1

    # File listing
    echo "Creating file listing..."
    fls -r -p -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/file_listing.txt" 2>&1

    # Deleted files
    echo "Listing deleted files..."
    fls -r -d -p -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/deleted_files.txt" 2>&1

    # Timeline creation
    echo "Creating timeline..."
    fls -r -m "/" -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/timeline.body" 2>&1
    fls -r -d -m "/" -o "$OFFSET" "$DISK_IMAGE" >> "$PART_DIR/timeline.body" 2>&1

    if [ -s "$PART_DIR/timeline.body" ]; then
        mactime -b "$PART_DIR/timeline.body" -d > "$PART_DIR/timeline.csv" 2>&1
    fi

    # Hash calculation for active files
    echo "Calculating file hashes..."
    fls -r -o "$OFFSET" "$DISK_IMAGE" | while IFS= read -r line; do
        if [[ $line =~ ^r/r ]]; then
            inode=$(echo "$line" | awk '{print $2}' | cut -d: -f1)
            filename=$(echo "$line" | awk '{print $3}')

            if [[ $inode =~ ^[0-9]+$ ]] && [ "$inode" != "0" ]; then
                hash=$(icat -o "$OFFSET" "$DISK_IMAGE" "$inode" 2>/dev/null|md5sum 2>/dev/null|cut -d' ' -f1)
                if [ ! -z "$hash" ]; then
                    echo "$hash  $filename"
                fi
            fi
        fi
    done > "$PART_DIR/file_hashes.md5"

    # Unallocated space analysis
    echo "Analyzing unallocated space..."
    blkls -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/unallocated_space.raw" 2>/dev/null

    # String extraction from unallocated space
    if [ -s "$PART_DIR/unallocated_space.raw" ]; then
        strings "$PART_DIR/unallocated_space.raw" > "$PART_DIR/unallocated_strings.txt"

        # Search for common patterns
        grep -i "password\|username\|email\|http\|ftp" "$PART_DIR/unallocated_strings.txt" > "$PART_DIR/interesting_strings.txt"
    fi

    echo "Completed analysis of partition at offset $OFFSET"
done

# Step 3: Generate summary report
echo "Step 3: Generating summary report..."
cat > "$OUTPUT_DIR/analysis_summary.txt" << EOF
TSK Analysis Summary
===================
Image File: $DISK_IMAGE
Analysis Date: $(date)
Output Directory: $OUTPUT_DIR

Partition Analysis:
EOF

for OFFSET in $PARTITIONS; do
    PART_DIR="$OUTPUT_DIR/partition_$OFFSET"

    if [ -d "$PART_DIR" ]; then
        echo "Partition Offset: $OFFSET" >> "$OUTPUT_DIR/analysis_summary.txt"

        if [ -f "$PART_DIR/file_listing.txt" ]; then
            file_count=$(wc -l < "$PART_DIR/file_listing.txt")
            echo "  Total Files: $file_count" >> "$OUTPUT_DIR/analysis_summary.txt"
        fi

        if [ -f "$PART_DIR/deleted_files.txt" ]; then
            deleted_count=$(wc -l < "$PART_DIR/deleted_files.txt")
            echo "  Deleted Files: $deleted_count" >> "$OUTPUT_DIR/analysis_summary.txt"
        fi

        if [ -f "$PART_DIR/file_hashes.md5" ]; then
            hash_count=$(wc -l < "$PART_DIR/file_hashes.md5")
            echo "  Files Hashed: $hash_count" >> "$OUTPUT_DIR/analysis_summary.txt"
        fi

        echo "" >> "$OUTPUT_DIR/analysis_summary.txt"
    fi
done

echo "Analysis completed successfully!"
echo "Results saved in: $OUTPUT_DIR"
echo "Summary report: $OUTPUT_DIR/analysis_summary.txt"

Automated File Recovery

#!/usr/bin/env python3
# Automated file recovery using TSK

import subprocess
import os
import re
import csv
from datetime import datetime

class TSKFileRecovery:
    def __init__(self, image_path, output_dir):
        self.image_path = image_path
        self.output_dir = output_dir
        self.partitions = []
        self.recovered_files = []

    def discover_partitions(self):
        """Discover partitions in disk image"""
        try:
            result = subprocess.run(['mmls', self.image_path],
                                  capture_output=True, text=True)

            for line in result.stdout.split('\n'):
                if re.match(r'^\d+:', line):
                    parts = line.split()
                    if len(parts) >= 4:
                        offset = parts[2]
                        self.partitions.append(offset)

            if not self.partitions:
                self.partitions = ['0']  # Single file system

        except Exception as e:
            print(f"Error discovering partitions: {e}")
            self.partitions = ['0']

    def list_deleted_files(self, offset):
        """List deleted files in partition"""
        deleted_files = []

        try:
            cmd = ['fls', '-d', '-r', '-p', '-o', offset, self.image_path]
            result = subprocess.run(cmd, capture_output=True, text=True)

            for line in result.stdout.split('\n'):
                if line.strip() and not line.startswith('d/d'):
                    parts = line.split('\t')
                    if len(parts) >= 2:
                        inode_info = parts[0].split()
                        if len(inode_info) >= 2:
                            # Deleted entries look like "r/r * 1234-128-1:", so take
                            # the last token and strip the trailing colon
                            inode = inode_info[-1].rstrip(':')
                            filename = parts[1] if len(parts) > 1 else 'unknown'

                            deleted_files.append({
                                'inode': inode,
                                'filename': filename,
                                'full_line': line.strip()
                            })

        except Exception as e:
            print(f"Error listing deleted files: {e}")

        return deleted_files

    def recover_file(self, offset, inode, output_filename):
        """Recover individual file by inode"""
        try:
            output_path = os.path.join(self.output_dir, output_filename)

            # Create output directory if needed
            os.makedirs(os.path.dirname(output_path), exist_ok=True)

            cmd = ['icat', '-o', offset, self.image_path, inode]

            with open(output_path, 'wb') as f:
                result = subprocess.run(cmd, stdout=f, stderr=subprocess.PIPE)

            if result.returncode == 0 and os.path.getsize(output_path) > 0:
                return {
                    'status': 'success',
                    'output_path': output_path,
                    'size': os.path.getsize(output_path)
                }
            else:
                os.remove(output_path)
                return {
                    'status': 'failed',
                    'error': result.stderr.decode()
                }

        except Exception as e:
            return {
                'status': 'error',
                'error': str(e)
            }

    def get_file_metadata(self, offset, inode):
        """Get file metadata using istat"""
        try:
            cmd = ['istat', '-o', offset, self.image_path, inode]
            result = subprocess.run(cmd, capture_output=True, text=True)

            if result.returncode == 0:
                return result.stdout
            else:
                return None

        except Exception as e:
            print(f"Error getting metadata for inode {inode}: {e}")
            return None

    def recover_files_by_extension(self, extensions, max_files=100):
        """Recover files by file extension"""

        print(f"Starting file recovery for extensions: {extensions}")

        # Discover partitions
        self.discover_partitions()
        print(f"Found partitions at offsets: {self.partitions}")

        total_recovered = 0

        for offset in self.partitions:
            print(f"Processing partition at offset {offset}")

            # List deleted files
            deleted_files = self.list_deleted_files(offset)
            print(f"Found {len(deleted_files)} deleted files")

            # Filter by extension
            target_files = []
            for file_info in deleted_files:
                filename = file_info['filename'].lower()
                for ext in extensions:
                    if filename.endswith(f'.{ext.lower()}'):
                        target_files.append(file_info)
                        break

            print(f"Found {len(target_files)} files matching target extensions")

            # Recover files
            for i, file_info in enumerate(target_files[:max_files]):
                if total_recovered >= max_files:
                    break

                inode = file_info['inode']
                original_filename = file_info['filename']

                # Create safe filename
                safe_filename = re.sub(r'[^\w\-_\.]', '_', original_filename)
                output_filename = f"partition_{offset}/recovered_{i:04d}_{safe_filename}"

                print(f"Recovering file {i+1}/{len(target_files)}: {original_filename}")

                # Recover file
                recovery_result = self.recover_file(offset, inode, output_filename)

                # Get metadata
                metadata = self.get_file_metadata(offset, inode)

                # Record recovery result
                recovery_record = {
                    'partition_offset': offset,
                    'inode': inode,
                    'original_filename': original_filename,
                    'recovered_filename': output_filename,
                    'recovery_status': recovery_result['status'],
                    'file_size': recovery_result.get('size', 0),
                    'recovery_time': datetime.now().isoformat(),
                    'metadata': metadata
                }

                if recovery_result['status'] == 'success':
                    recovery_record['output_path'] = recovery_result['output_path']
                    total_recovered += 1
                else:
                    recovery_record['error'] = recovery_result.get('error', 'Unknown error')

                self.recovered_files.append(recovery_record)

        print(f"Recovery completed. Total files recovered: {total_recovered}")
        return self.recovered_files

    def generate_recovery_report(self):
        """Generate recovery report"""

        report_file = os.path.join(self.output_dir, 'recovery_report.csv')

        if not self.recovered_files:
            print("No recovery data to report")
            return

        # Write CSV report
        fieldnames = [
            'partition_offset', 'inode', 'original_filename',
            'recovered_filename', 'recovery_status', 'file_size',
            'recovery_time', 'output_path', 'error'
        ]

        with open(report_file, 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
            writer.writeheader()

            for record in self.recovered_files:
                # Remove metadata from CSV (too large)
                csv_record = {k: v for k, v in record.items() if k != 'metadata'}
                writer.writerow(csv_record)

        # Generate summary
        successful_recoveries = len([r for r in self.recovered_files if r['recovery_status'] == 'success'])
        total_attempts = len(self.recovered_files)

        summary = f"""
File Recovery Summary
====================
Image: {self.image_path}
Output Directory: {self.output_dir}
Recovery Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}

Results:
- Total Recovery Attempts: {total_attempts}
- Successful Recoveries: {successful_recoveries}
- Failed Recoveries: {total_attempts - successful_recoveries}
- Success Rate: {(successful_recoveries/total_attempts*100):.1f}%

Detailed report saved to: {report_file}
"""

        summary_file = os.path.join(self.output_dir, 'recovery_summary.txt')
        with open(summary_file, 'w') as f:
            f.write(summary)

        print(summary)

# Usage
if __name__ == "__main__":
    image_path = "/evidence/disk_image.dd"
    output_dir = "/recovered_files"

    recovery = TSKFileRecovery(image_path, output_dir)

    # Recover common document types
    extensions = ['doc', 'docx', 'pdf', 'txt', 'jpg', 'png']
    recovered_files = recovery.recover_files_by_extension(extensions, max_files=50)

    # Generate report
    recovery.generate_recovery_report()

Timeline Analysis

#!/usr/bin/env python3
# TSK timeline analysis script

import subprocess
import csv
import json
from datetime import datetime, timedelta
from collections import defaultdict

class TSKTimelineAnalyzer:
    def __init__(self, image_path):
        self.image_path = image_path
        self.timeline_data = []
        self.analysis_results = {}

    def create_timeline(self, offset='0', output_file='timeline.csv'):
        """Create timeline using TSK tools"""

        print("Creating timeline from disk image...")

        # Create body file
        body_file = 'timeline.body'

        try:
            # Generate body file for active files
            cmd1 = ['fls', '-r', '-m', '/', '-o', offset, self.image_path]
            with open(body_file, 'w') as f:
                subprocess.run(cmd1, stdout=f, check=True)

            # Add deleted files to body file
            cmd2 = ['fls', '-r', '-d', '-m', '/', '-o', offset, self.image_path]
            with open(body_file, 'a') as f:
                subprocess.run(cmd2, stdout=f, check=True)

            # Convert body file to timeline
            cmd3 = ['mactime', '-b', body_file, '-d']
            with open(output_file, 'w') as f:
                result = subprocess.run(cmd3, stdout=f, text=True)

            if result.returncode == 0:
                print(f"Timeline created successfully: {output_file}")
                return output_file
            else:
                print("Error creating timeline")
                return None

        except Exception as e:
            print(f"Error creating timeline: {e}")
            return None

    def parse_timeline(self, timeline_file):
        """Parse timeline CSV file"""

        timeline_data = []

        try:
            with open(timeline_file, 'r') as f:
                # Skip header lines
                lines = f.readlines()

                for line in lines:
                    if line.strip() and not line.startswith('Date'):
                        parts = line.strip().split(',')

                        if len(parts) >= 5:
                            timeline_entry = {
                                'date': parts[0],
                                'size': parts[1],
                                'type': parts[2],
                                'mode': parts[3],
                                'uid': parts[4],
                                'gid': parts[5] if len(parts) > 5 else '',
                                'meta': parts[6] if len(parts) > 6 else '',
                                'filename': ','.join(parts[7:]) if len(parts) > 7 else ''
                            }
                            timeline_data.append(timeline_entry)

            self.timeline_data = timeline_data
            print(f"Parsed {len(timeline_data)} timeline entries")
            return timeline_data

        except Exception as e:
            print(f"Error parsing timeline: {e}")
            return []

    def analyze_activity_patterns(self):
        """Analyze activity patterns in timeline"""

        if not self.timeline_data:
            print("No timeline data available for analysis")
            return {}

        # Activity by hour
        hourly_activity = defaultdict(int)
        daily_activity = defaultdict(int)
        file_types = defaultdict(int)

        for entry in self.timeline_data:
            try:
                # Parse date
                date_str = entry['date']
                if date_str and date_str != '0':
                    dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')

                    # Count by hour
                    hour_key = dt.strftime('%H:00')
                    hourly_activity[hour_key] += 1

                    # Count by day
                    day_key = dt.strftime('%Y-%m-%d')
                    daily_activity[day_key] += 1

                # Count file types
                filename = entry.get('filename', '')
                if '.' in filename:
                    ext = filename.split('.')[-1].lower()
                    file_types[ext] += 1

            except Exception as e:
                continue

        analysis = {
            'total_entries': len(self.timeline_data),
            'hourly_activity': dict(hourly_activity),
            'daily_activity': dict(daily_activity),
            'file_types': dict(file_types),
            'peak_hour': max(hourly_activity, key=hourly_activity.get) if hourly_activity else None,
            'peak_day': max(daily_activity, key=daily_activity.get) if daily_activity else None
        }

        self.analysis_results = analysis
        return analysis

    def find_suspicious_activity(self):
        """Identify potentially suspicious activity"""

        suspicious_indicators = []

        if not self.timeline_data:
            return suspicious_indicators

        # Look for activity during unusual hours (late night/early morning)
        unusual_hours = ['00', '01', '02', '03', '04', '05']
        unusual_activity = 0

        for entry in self.timeline_data:
            try:
                date_str = entry['date']
                if date_str and date_str != '0':
                    dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')

                    if dt.strftime('%H') in unusual_hours:
                        unusual_activity += 1

            except:
                continue

        if unusual_activity > 10:  # Threshold for suspicious activity
            suspicious_indicators.append({
                'type': 'unusual_hours',
                'description': f'High activity during unusual hours: {unusual_activity} events',
                'severity': 'medium'
            })

        # Look for rapid file creation/deletion
        rapid_activity_threshold = 100  # files per minute

        # Group by minute
        minute_activity = defaultdict(int)
        for entry in self.timeline_data:
            try:
                date_str = entry['date']
                if date_str and date_str != '0':
                    dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')
                    minute_key = dt.strftime('%Y-%m-%d %H:%M')
                    minute_activity[minute_key] += 1
            except:
                continue

        for minute, count in minute_activity.items():
            if count > rapid_activity_threshold:
                suspicious_indicators.append({
                    'type': 'rapid_activity',
                    'description': f'Rapid file activity at {minute}: {count} events',
                    'severity': 'high'
                })

        # Look for suspicious file extensions
        suspicious_extensions = ['exe', 'bat', 'cmd', 'scr', 'pif', 'com']

        for entry in self.timeline_data:
            filename = entry.get('filename', '').lower()
            for ext in suspicious_extensions:
                if filename.endswith(f'.{ext}'):
                    suspicious_indicators.append({
                        'type': 'suspicious_file',
                        'description': f'Suspicious file: {entry.get("filename", "")}',
                        'severity': 'medium',
                        'timestamp': entry.get('date', ''),
                        'filename': entry.get('filename', '')
                    })
                    break

        return suspicious_indicators

    def generate_analysis_report(self, output_file='timeline_analysis.json'):
        """Generate comprehensive timeline analysis report"""

        # Perform analysis
        patterns = self.analyze_activity_patterns()
        suspicious = self.find_suspicious_activity()

        report = {
            'analysis_timestamp': datetime.now().isoformat(),
            'image_file': self.image_path,
            'timeline_statistics': patterns,
            'suspicious_indicators': suspicious,
            'summary': {
                'total_timeline_entries': patterns.get('total_entries', 0),
                'suspicious_events': len(suspicious),
                'peak_activity_hour': patterns.get('peak_hour'),
                'peak_activity_day': patterns.get('peak_day')
            }
        }

        # Save report
        with open(output_file, 'w') as f:
            json.dump(report, f, indent=2)

        print(f"Timeline analysis report saved: {output_file}")

        # Print summary
        print("\nTimeline Analysis Summary:")
        print(f"Total timeline entries: {patterns.get('total_entries', 0)}")
        print(f"Suspicious indicators found: {len(suspicious)}")
        print(f"Peak activity hour: {patterns.get('peak_hour', 'Unknown')}")
        print(f"Peak activity day: {patterns.get('peak_day', 'Unknown')}")

        if suspicious:
            print("\nSuspicious Activity Detected:")
            for indicator in suspicious[:5]:  # Show first 5
                print(f"- {indicator['type']}: {indicator['description']}")

        return report

# Usage
if __name__ == "__main__":
    image_path = "/evidence/disk_image.dd"

    analyzer = TSKTimelineAnalyzer(image_path)

    # Create timeline
    timeline_file = analyzer.create_timeline()

    if timeline_file:
        # Parse timeline
        analyzer.parse_timeline(timeline_file)

        # Generate analysis report
        analyzer.generate_analysis_report()

Integration Examples

Autopsy Integration

#!/bin/bash
# TSK and Autopsy integration script

IMAGE_PATH="$1"
CASE_NAME="$2"
CASE_DIR="/cases/$CASE_NAME"

if [ -z "$IMAGE_PATH" ]||[ -z "$CASE_NAME" ]; then
    echo "Usage: $0 <image_path> <case_name>"
    exit 1
fi

echo "Creating integrated TSK/Autopsy analysis for: $IMAGE_PATH"

# Create case directory
mkdir -p "$CASE_DIR"

# Step 1: TSK preliminary analysis
echo "Step 1: Running TSK preliminary analysis..."
mmls "$IMAGE_PATH" > "$CASE_DIR/partition_table.txt"
fsstat "$IMAGE_PATH" > "$CASE_DIR/filesystem_info.txt"

# Step 2: Create timeline with TSK
echo "Step 2: Creating timeline with TSK..."
fls -r -m "/" "$IMAGE_PATH" > "$CASE_DIR/timeline.body"
mactime -b "$CASE_DIR/timeline.body" -d > "$CASE_DIR/timeline.csv"

# Step 3: Extract key files with TSK
echo "Step 3: Extracting key files..."
mkdir -p "$CASE_DIR/extracted_files"

# Extract registry files (Windows)
fls "$IMAGE_PATH" | grep -i "system\|software\|sam\|security" | while read line; do
    inode=$(echo "$line" | awk '{print $2}' | cut -d: -f1)
    filename=$(echo "$line" | awk '{print $3}')

    if [[ $inode =~ ^[0-9]+$ ]]; then
        icat "$IMAGE_PATH" "$inode" > "$CASE_DIR/extracted_files/$filename"
    fi
done

# Step 4: Import into Autopsy (if available)
if command -v autopsy &> /dev/null; then
    echo "Step 4: Importing into Autopsy..."
    # Autopsy command-line import would go here
    # This depends on Autopsy version and configuration
fi

echo "Integrated analysis completed. Results in: $CASE_DIR"

YARA Integration

#!/usr/bin/env python3
# TSK and YARA integration for malware detection

import subprocess
import yara
import os
import tempfile

class TSKYaraScanner:
    def __init__(self, image_path, yara_rules_path):
        self.image_path = image_path
        self.yara_rules = yara.compile(yara_rules_path)
        self.matches = []

    def scan_files(self, offset='0'):
        """Scan files in disk image with YARA rules"""

        # Get file listing
        cmd = ['fls', '-r', '-o', offset, self.image_path]
        result = subprocess.run(cmd, capture_output=True, text=True)

        for line in result.stdout.split('\n'):
            if line.strip() and line.startswith('r/r'):
                parts = line.split()
                if len(parts) >= 3:
                    inode = parts[1].split(':')[0]
                    filename = parts[2]

                    # Extract file content
                    try:
                        with tempfile.NamedTemporaryFile() as temp_file:
                            extract_cmd = ['icat', '-o', offset, self.image_path, inode]
                            subprocess.run(extract_cmd, stdout=temp_file, check=True)

                            # Scan with YARA
                            matches = self.yara_rules.match(temp_file.name)

                            if matches:
                                self.matches.append({
                                    'filename': filename,
                                    'inode': inode,
                                    'matches': [str(match) for match in matches]
                                })

                                print(f"YARA match in {filename}: {matches}")

                    except Exception as e:
                        print(f"Error scanning {filename}: {e}")

        return self.matches

# Usage
scanner = TSKYaraScanner("/evidence/disk_image.dd", "/rules/malware.yar")
matches = scanner.scan_files()

Troubleshooting

Common Issues

**Image Format Issues:**
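
A typical first step when TSK does not recognize an image is to identify the format and pass it explicitly. The sketch below assumes an EWF (E01) image and a TSK build with libewf support:

# Identify the image format
img_stat disk_image.E01

# Pass the image type explicitly when autodetection fails
mmls -i ewf disk_image.E01
fls -i ewf -o 2048 disk_image.E01

# List the image formats supported by this TSK build
mmls -i list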

**Partition Detection Issues:**
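
When mmls reports no partitions or an unknown volume system, forcing the volume system type, or treating the image as a single file system, usually narrows the problem down:

# Force a volume system type (dos, gpt, mac, bsd, sun)
mmls -t dos disk_image.dd
mmls -t gpt disk_image.dd

# List supported volume system types
mmls -t list

# Logical images with no partition table can be analyzed directly at offset 0
fsstat disk_image.dd
fls disk_image.dd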

**File System Issues:**
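
For file system detection problems, forcing the type with -f is the usual workaround; the offset and inode below are placeholders:

# Force a file system type when autodetection fails
fls -f ntfs -o 2048 disk_image.dd
istat -f ext4 -o 2048 disk_image.dd 1234

# List the file system types supported by this TSK build
fls -f list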

Debugging

Enabling detailed debug output and error reporting:

# Verbose output
fls -v disk_image.dd

# Debug mode (if available)
TSK_DEBUG=1 fls disk_image.dd

# Check TSK version and capabilities
mmls -V
fls -V

# Monitor system calls
strace fls disk_image.dd

# Check for library dependencies
ldd $(which fls)

Security Considerations

Evidence Integrity

**Write Protection:**
- Always work with read-only copies of the evidence (see the sketch below)
- Use hardware write blockers whenever possible
- Verify image integrity with cryptographic hashes
- Document all analysis procedures
- Maintain chain-of-custody records
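
A minimal sketch of the first and third points for a working copy of a raw image (file permissions are not a substitute for a hardware write blocker on original media):

# Make the working copy read-only at the file system level
chmod 444 disk_image.dd

# Record a baseline hash before analysis begins
sha256sum disk_image.dd > disk_image.dd.sha256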

**Hash Verification:**
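
Re-hashing the image during and after analysis confirms that no tool modified the evidence. A simple check against the baseline recorded above:

# Verify the image still matches its baseline hash
sha256sum -c disk_image.dd.sha256

# Optionally hash every analysis artifact as well
find tsk_analysis_*/ -type f -exec sha256sum {} \; > analysis_artifacts.sha256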

Legal and Compliance

**Documentation Requirements:**
- Maintain detailed logs of all TSK commands executed (a simple approach is sketched below)
- Document analysis methodology and procedures
- Record all findings and their significance
- Preserve original evidence and analysis results
- Follow applicable legal and regulatory requirements
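
One low-effort way to satisfy the first point is the standard script utility, which records an entire shell session; a sketch assuming a case_notes directory:

# Record the analysis session (commands and output) into the case file
mkdir -p case_notes
script -a case_notes/tsk_session_$(date +%Y%m%d_%H%M%S).log

# ... run TSK commands as usual ...

# Stop recording
exit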

**Best Practices:**
- Use standardized forensic procedures
- Validate tools and techniques regularly
- Keep knowledge of legal requirements current
- Implement quality assurance processes
- Pursue regular training and certification updates

References

  1. The Sleuth Kit Official Website
  2. TSK Documentation and Wiki
  3. Digital Forensics with The Sleuth Kit
  4. File System Forensic Analysis
  5. NIST Computer Forensics Tool Testing