Sleuth Kit Cheat Sheet
"Clase de la hoja" id="copy-btn" class="copy-btn" onclick="copyAllCommands()" Copiar todos los comandos id="pdf-btn" class="pdf-btn" onclick="generatePDF()" Generar PDF seleccionado/button ■/div titulada
Overview
The Sleuth Kit (TSK) is a comprehensive collection of command-line digital forensics tools that lets investigators analyze disk images and file systems to recover digital evidence. Developed by Brian Carrier, TSK serves as the foundation for many digital forensics platforms, including Autopsy, and provides low-level access to file system structures and metadata. The toolkit supports multiple file systems, including NTFS, FAT, ext2/3/4, HFS+, and UFS, making it versatile for analyzing evidence from a variety of operating systems and storage devices.
TSK's strength lies in its modular architecture and command-line interface, which allow precise control over forensic analysis and enable automation through scripting. The toolkit includes utilities for file system analysis, timeline creation, metadata extraction, deleted file recovery, and hash calculation. Its ability to work directly with raw disk images and file system structures makes it invaluable for detailed examinations where GUI tools do not provide sufficiently granular control.
The Sleuth Kit has become the de facto standard for command-line digital forensics, widely adopted by law enforcement agencies, enterprise security teams, and incident response professionals. Its open-source nature and extensive documentation have made it a cornerstone of digital forensics education and research. Its integration with other forensic tools and support for multiple output formats make it an essential component of complete digital forensics workflows.
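As a quick orientation, the sketch below chains the core tools into one minimal workflow; the sector offset 2048, inode 5678, and file names are placeholder values for illustration:
# Minimal TSK workflow sketch (offsets, inodes, and file names are examples)
mmls disk_image.dd                                # 1. locate partitions
fsstat -o 2048 disk_image.dd                      # 2. identify the file system
fls -r -p -o 2048 disk_image.dd                   # 3. list files recursively with paths
icat -o 2048 disk_image.dd 5678 > extracted.bin   # 4. extract a file by inode
fls -r -m / -o 2048 disk_image.dd > timeline.body # 5. collect timestamps
mactime -b timeline.body -d > timeline.csv        # 6. render the timeline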
Installation
Package Manager Installation
Install TSK through system package managers:
# Ubuntu/Debian installation
sudo apt update
sudo apt install sleuthkit
# Kali Linux (pre-installed)
tsk_recover -V
# CentOS/RHEL installation
sudo yum install epel-release
sudo yum install sleuthkit
# Arch Linux installation
sudo pacman -S sleuthkit
# macOS installation
brew install sleuthkit
# Verify installation
mmls -V
fls -V
Building from Source
Compiling TSK from source code:
# Install dependencies
sudo apt install build-essential autoconf automake libtool
sudo apt install libafflib-dev libewf-dev zlib1g-dev
# Download source code
wget https://github.com/sleuthkit/sleuthkit/releases/download/sleuthkit-4.12.0/sleuthkit-4.12.0.tar.gz
tar -xzf sleuthkit-4.12.0.tar.gz
cd sleuthkit-4.12.0
# Configure build
./configure --enable-java --with-afflib --with-libewf
# Compile and install
make
sudo make install
# Update library cache
sudo ldconfig
# Verify installation
mmls -V
Docker Installation
# Create TSK Docker environment
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
sleuthkit ewf-tools afflib-tools \
python3 python3-pip file bsdmainutils
WORKDIR /evidence
CMD ["/bin/bash"]
EOF
# Build container
docker build -t sleuthkit-forensics .
# Run with evidence mounted
docker run -it -v $(pwd)/evidence:/evidence sleuthkit-forensics
# Example usage in container
docker run -it sleuthkit-forensics mmls /evidence/disk_image.dd
Basic Usage
Disk Image Analysis
Analyze disk images and partition structures:
# Display partition table
mmls disk_image.dd
# Display detailed partition information
mmls -t dos disk_image.dd
mmls -t gpt disk_image.dd
mmls -t mac disk_image.dd
# Display only allocated volumes
mmls -a disk_image.dd
# Look for a partition table at a given sector offset (e.g., a nested table)
mmls -o 2048 disk_image.dd
# Display file system information
fsstat -o 2048 disk_image.dd
File System Analysis
Analyze file systems and directory structures:
# List files in root directory
fls -o 2048 disk_image.dd
# List files recursively
fls -r -o 2048 disk_image.dd
# List deleted files
fls -d -o 2048 disk_image.dd
# List files with full paths
fls -p -o 2048 disk_image.dd
# List files with metadata
fls -l -o 2048 disk_image.dd
# List files in specific directory (inode)
fls -o 2048 disk_image.dd 1234
File Recovery
Recover files and analyze file content:
# Extract file by inode
icat -o 2048 disk_image.dd 5678 > recovered_file.txt
# Extract file including slack space
icat -s -o 2048 disk_image.dd 5678 > recovered_file.txt
# Display file metadata
istat -o 2048 disk_image.dd 5678
# Find the file name(s) that point to an inode
ffind -o 2048 disk_image.dd 5678
# Find the inode for a given file name
ifind -o 2048 -n filename disk_image.dd
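To recover a file when you know its path rather than its inode, ifind and icat can be combined; the path below is a hypothetical example:
# Resolve a path to an inode with ifind, then extract it with icat
INODE=$(ifind -o 2048 -n "Documents/report.docx" disk_image.dd)
icat -o 2048 disk_image.dd "$INODE" > report.docx
istat -o 2048 disk_image.dd "$INODE"   # confirm size and timestamps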
Advanced Features
Timeline Analysis
Create and analyze file system timelines:
# Create timeline in body format
fls -r -m / -o 2048 disk_image.dd > timeline.body
# Create timeline with deleted files
fls -r -d -m / -o 2048 disk_image.dd >> timeline.body
# Convert body file to timeline
mactime -b timeline.body -d > timeline.csv
# Create timeline for specific date range
mactime -b timeline.body -d 2024-01-01..2024-01-31 > january_timeline.csv
# Create timeline with time zone
mactime -b timeline.body -d -z EST5EDT > timeline_est.csv
# Display timeline dates with the month as a number
mactime -b timeline.body -d -m > timeline_numeric.csv
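For multi-partition images, a short loop builds one combined timeline; this is a sketch that reuses the mmls parsing shown in the automation script below (column 3 of mmls output is the start sector, and meta/unallocated entries may need filtering):
# Build a combined timeline across every partition reported by mmls
for OFFSET in $(mmls disk_image.dd | grep -E "^[0-9]" | awk '{print $3}'); do
    fls -r -m / -o "$OFFSET" disk_image.dd >> full_timeline.body
done
mactime -b full_timeline.body -d > full_timeline.csv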
Deleted File Recovery
Advanced deleted file recovery techniques:
# List all deleted files
fls -d -r -o 2048 disk_image.dd
# Recover deleted files by pattern
fls -d -r -o 2048 disk_image.dd|grep "\.doc$"
# Recover deleted file content
icat -o 2048 disk_image.dd 1234-128-1 > deleted_file.doc
# Extract unallocated space to a file for signature searches and carving
blkls -o 2048 disk_image.dd > unallocated.raw
# Search for JPEG headers (0xFFD8FF) in unallocated space
sigfind ffd8ff unallocated.raw
# Carve files from unallocated space
foremost -t all -i unallocated.raw -o carved_files
# Search for specific strings in unallocated space
blkls -o 2048 disk_image.dd|strings|grep "password"
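For bulk recovery of deleted files matching an extension, a simplified sketch (offset, extension, and output directory are example values; the parsing is intentionally loose and may need tuning for your file system):
# Recover every deleted .doc file that fls can still name
mkdir -p recovered_docs
fls -d -r -p -o 2048 disk_image.dd | grep -i "\.doc$" | while IFS= read -r line; do
    # Deleted entries look like: "r/r * 12345-128-1:<TAB>path/file.doc"
    inode=$(echo "$line" | grep -oE '[0-9]+(-[0-9]+-[0-9]+)?:' | head -n1 | tr -d ':')
    name=$(basename "$(echo "$line" | cut -f2-)")
    [ -n "$inode" ] && icat -o 2048 disk_image.dd "$inode" > "recovered_docs/${inode}_${name}" 2>/dev/null
done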
Hash Analysis
Calculate and verify file hashes:
# Calculate MD5 hashes for all files
fls -r -o 2048 disk_image.dd|while read line; do
inode=$(echo $line|awk '{print $2}'|cut -d: -f1)
filename=$(echo $line|awk '{print $3}')
if [[ $inode =~ ^[0-9]+$ ]]; then
hash=$(icat -o 2048 disk_image.dd $inode|md5sum|cut -d' ' -f1)
echo "$hash $filename"
fi
done > file_hashes.md5
# Calculate SHA-256 hashes
fls -r -o 2048 disk_image.dd|while read line; do
inode=$(echo $line|awk '{print $2}'|cut -d: -f1)
filename=$(echo $line|awk '{print $3}')
if [[ $inode =~ ^[0-9]+$ ]]; then
hash=$(icat -o 2048 disk_image.dd $inode|sha256sum|cut -d' ' -f1)
echo "$hash $filename"
fi
done > file_hashes.sha256
# Compare against known hash database
hashdeep -c sha256 -k known_hashes.txt file_hashes.sha256
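One quick use of the hash list is spotting duplicate content; this sketch assumes the two-column "hash filename" format produced above and GNU coreutils:
# Group files with identical MD5 hashes (first 32 characters of each line)
sort file_hashes.md5 | uniq -w32 --all-repeated=separate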
Metadata Analysis
Analyze file system and file metadata:
# Display detailed file metadata
istat -o 2048 disk_image.dd 5678
# Display file system metadata
fsstat -o 2048 disk_image.dd
# Display journal information (ext3/4)
jls -o 2048 disk_image.dd
# Display the contents of a journal block
jcat -o 2048 disk_image.dd 1234
# Display NTFS MFT entries
istat -o 2048 disk_image.dd 0
# Include "." and ".." entries (NTFS alternate data streams appear as extra named entries in fls output)
fls -a -o 2048 disk_image.dd
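Named NTFS streams can be pulled out with icat using the full metadata address (type-id) reported by istat; the address 1234-128-5 below is a placeholder:
# istat lists every $DATA attribute of an MFT entry; 128 is the $DATA type, 5 an example attribute id
istat -o 2048 disk_image.dd 1234
icat -o 2048 disk_image.dd 1234-128-5 > hidden_stream.bin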
Automation Scripts
Comprehensive Disk Analysis
#!/bin/bash
# Comprehensive TSK disk analysis script
DISK_IMAGE="$1"
OUTPUT_DIR="tsk_analysis_$(date +%Y%m%d_%H%M%S)"
if [ -z "$DISK_IMAGE" ]; then
echo "Usage: $0 <disk_image>"
exit 1
fi
if [ ! -f "$DISK_IMAGE" ]; then
echo "Error: Disk image file not found: $DISK_IMAGE"
exit 1
fi
echo "Starting comprehensive TSK analysis of: $DISK_IMAGE"
echo "Output directory: $OUTPUT_DIR"
# Create output directory
mkdir -p "$OUTPUT_DIR"
# Step 1: Analyze partition table
echo "Step 1: Analyzing partition table..."
mmls "$DISK_IMAGE" > "$OUTPUT_DIR/partition_table.txt" 2>&1
# Extract partition information
PARTITIONS=$(mmls "$DISK_IMAGE" 2>/dev/null|grep -E "^[0-9]"|awk '{print $3}')
if [ -z "$PARTITIONS" ]; then
echo "No partitions found, analyzing as single file system..."
PARTITIONS="0"
fi
# Step 2: Analyze each partition
for OFFSET in $PARTITIONS; do
echo "Step 2: Analyzing partition at offset $OFFSET..."
PART_DIR="$OUTPUT_DIR/partition_$OFFSET"
mkdir -p "$PART_DIR"
# File system information
echo "Getting file system information..."
fsstat -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/filesystem_info.txt" 2>&1
# File listing
echo "Creating file listing..."
fls -r -p -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/file_listing.txt" 2>&1
# Deleted files
echo "Listing deleted files..."
fls -r -d -p -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/deleted_files.txt" 2>&1
# Timeline creation
echo "Creating timeline..."
fls -r -m "/" -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/timeline.body" 2>&1
fls -r -d -m "/" -o "$OFFSET" "$DISK_IMAGE" >> "$PART_DIR/timeline.body" 2>&1
if [ -s "$PART_DIR/timeline.body" ]; then
mactime -b "$PART_DIR/timeline.body" -d > "$PART_DIR/timeline.csv" 2>&1
fi
# Hash calculation for active files
echo "Calculating file hashes..."
fls -r -o "$OFFSET" "$DISK_IMAGE"|while IFS= read -r line; do
if [[ $line =~ ^r/r ]]; then
inode=$(echo "$line"|awk '\\\\{print $2\\\\}'|cut -d: -f1)
filename=$(echo "$line"|awk '\\\\{print $3\\\\}')
if [[ $inode =~ ^[0-9]+$ ]] && [ "$inode" != "0" ]; then
hash=$(icat -o "$OFFSET" "$DISK_IMAGE" "$inode" 2>/dev/null|md5sum 2>/dev/null|cut -d' ' -f1)
if [ ! -z "$hash" ]; then
echo "$hash $filename"
fi
fi
fi
done > "$PART_DIR/file_hashes.md5"
# Unallocated space analysis
echo "Analyzing unallocated space..."
blkls -o "$OFFSET" "$DISK_IMAGE" > "$PART_DIR/unallocated_space.raw" 2>&1
# String extraction from unallocated space
if [ -s "$PART_DIR/unallocated_space.raw" ]; then
strings "$PART_DIR/unallocated_space.raw" > "$PART_DIR/unallocated_strings.txt"
# Search for common patterns
grep -i "password\|username\|email\|http\|ftp" "$PART_DIR/unallocated_strings.txt" > "$PART_DIR/interesting_strings.txt"
fi
echo "Completed analysis of partition at offset $OFFSET"
done
# Step 3: Generate summary report
echo "Step 3: Generating summary report..."
cat > "$OUTPUT_DIR/analysis_summary.txt" << EOF
TSK Analysis Summary
===================
Image File: $DISK_IMAGE
Analysis Date: $(date)
Output Directory: $OUTPUT_DIR
Partition Analysis:
EOF
for OFFSET in $PARTITIONS; do
PART_DIR="$OUTPUT_DIR/partition_$OFFSET"
if [ -d "$PART_DIR" ]; then
echo "Partition Offset: $OFFSET" >> "$OUTPUT_DIR/analysis_summary.txt"
if [ -f "$PART_DIR/file_listing.txt" ]; then
file_count=$(wc -l < "$PART_DIR/file_listing.txt")
echo " Total Files: $file_count" >> "$OUTPUT_DIR/analysis_summary.txt"
fi
if [ -f "$PART_DIR/deleted_files.txt" ]; then
deleted_count=$(wc -l < "$PART_DIR/deleted_files.txt")
echo " Deleted Files: $deleted_count" >> "$OUTPUT_DIR/analysis_summary.txt"
fi
if [ -f "$PART_DIR/file_hashes.md5" ]; then
hash_count=$(wc -l < "$PART_DIR/file_hashes.md5")
echo " Files Hashed: $hash_count" >> "$OUTPUT_DIR/analysis_summary.txt"
fi
echo "" >> "$OUTPUT_DIR/analysis_summary.txt"
fi
done
echo "Analysis completed successfully!"
echo "Results saved in: $OUTPUT_DIR"
echo "Summary report: $OUTPUT_DIR/analysis_summary.txt"
Automated File Recovery
#!/usr/bin/env python3
# Automated file recovery using TSK
import subprocess
import os
import re
import csv
from datetime import datetime
class TSKFileRecovery:
def __init__(self, image_path, output_dir):
self.image_path = image_path
self.output_dir = output_dir
self.partitions = []
self.recovered_files = []
def discover_partitions(self):
"""Discover partitions in disk image"""
try:
result = subprocess.run(['mmls', self.image_path],
capture_output=True, text=True)
for line in result.stdout.split('\n'):
if re.match(r'^\d+:', line):
parts = line.split()
if len(parts) >= 4:
offset = parts[2]
self.partitions.append(offset)
if not self.partitions:
self.partitions = ['0'] # Single file system
except Exception as e:
print(f"Error discovering partitions: \\\\{e\\\\}")
self.partitions = ['0']
def list_deleted_files(self, offset):
"""List deleted files in partition"""
deleted_files = []
try:
cmd = ['fls', '-d', '-r', '-p', '-o', offset, self.image_path]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.split('\n'):
if line.strip() and not line.startswith('d/d'):
parts = line.split('\t')
if len(parts) >= 2:
inode_info = parts[0].split()
if len(inode_info) >= 2:
# Deleted entries look like "r/r * 16-128-2:", so the address is the last token
inode = inode_info[-1].rstrip(':')
filename = parts[1] if len(parts) > 1 else 'unknown'
deleted_files.append({
'inode': inode,
'filename': filename,
'full_line': line.strip()
})
except Exception as e:
print(f"Error listing deleted files: {e}")
return deleted_files
def recover_file(self, offset, inode, output_filename):
"""Recover individual file by inode"""
try:
output_path = os.path.join(self.output_dir, output_filename)
# Create output directory if needed
os.makedirs(os.path.dirname(output_path), exist_ok=True)
cmd = ['icat', '-o', offset, self.image_path, inode]
with open(output_path, 'wb') as f:
result = subprocess.run(cmd, stdout=f, stderr=subprocess.PIPE)
if result.returncode == 0 and os.path.getsize(output_path) > 0:
return {
'status': 'success',
'output_path': output_path,
'size': os.path.getsize(output_path)
}
else:
os.remove(output_path)
return {
'status': 'failed',
'error': result.stderr.decode()
}
except Exception as e:
return {
'status': 'error',
'error': str(e)
}
def get_file_metadata(self, offset, inode):
"""Get file metadata using istat"""
try:
cmd = ['istat', '-o', offset, self.image_path, inode]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode == 0:
return result.stdout
else:
return None
except Exception as e:
print(f"Error getting metadata for inode \\\\{inode\\\\}: \\\\{e\\\\}")
return None
def recover_files_by_extension(self, extensions, max_files=100):
"""Recover files by file extension"""
print(f"Starting file recovery for extensions: \\\\{extensions\\\\}")
# Discover partitions
self.discover_partitions()
print(f"Found partitions at offsets: \\\\{self.partitions\\\\}")
total_recovered = 0
for offset in self.partitions:
print(f"Processing partition at offset \\\\{offset\\\\}")
# List deleted files
deleted_files = self.list_deleted_files(offset)
print(f"Found \\\\{len(deleted_files)\\\\} deleted files")
# Filter by extension
target_files = []
for file_info in deleted_files:
filename = file_info['filename'].lower()
for ext in extensions:
if filename.endswith(f'.{ext.lower()}'):
target_files.append(file_info)
break
print(f"Found {len(target_files)} files matching target extensions")
# Recover files
for i, file_info in enumerate(target_files[:max_files]):
if total_recovered >= max_files:
break
inode = file_info['inode']
original_filename = file_info['filename']
# Create safe filename
safe_filename = re.sub(r'[^\w\-_\.]', '_', original_filename)
output_filename = f"partition_\\\\{offset\\\\}/recovered_\\\\{i:04d\\\\}_\\\\{safe_filename\\\\}"
print(f"Recovering file \\\\{i+1\\\\}/\\\\{len(target_files)\\\\}: \\\\{original_filename\\\\}")
# Recover file
recovery_result = self.recover_file(offset, inode, output_filename)
# Get metadata
metadata = self.get_file_metadata(offset, inode)
# Record recovery result
recovery_record = {
'partition_offset': offset,
'inode': inode,
'original_filename': original_filename,
'recovered_filename': output_filename,
'recovery_status': recovery_result['status'],
'file_size': recovery_result.get('size', 0),
'recovery_time': datetime.now().isoformat(),
'metadata': metadata
}
if recovery_result['status'] == 'success':
recovery_record['output_path'] = recovery_result['output_path']
total_recovered += 1
else:
recovery_record['error'] = recovery_result.get('error', 'Unknown error')
self.recovered_files.append(recovery_record)
print(f"Recovery completed. Total files recovered: \\\\{total_recovered\\\\}")
return self.recovered_files
def generate_recovery_report(self):
"""Generate recovery report"""
report_file = os.path.join(self.output_dir, 'recovery_report.csv')
if not self.recovered_files:
print("No recovery data to report")
return
# Write CSV report
fieldnames = [
'partition_offset', 'inode', 'original_filename',
'recovered_filename', 'recovery_status', 'file_size',
'recovery_time', 'output_path', 'error'
]
with open(report_file, 'w', newline='') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for record in self.recovered_files:
# Remove metadata from CSV (too large)
csv_record = {k: v for k, v in record.items() if k != 'metadata'}
writer.writerow(csv_record)
# Generate summary
successful_recoveries = len([r for r in self.recovered_files if r['recovery_status'] == 'success'])
total_attempts = len(self.recovered_files)
summary = f"""
File Recovery Summary
====================
Image: {self.image_path}
Output Directory: {self.output_dir}
Recovery Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
Results:
- Total Recovery Attempts: {total_attempts}
- Successful Recoveries: {successful_recoveries}
- Failed Recoveries: {total_attempts - successful_recoveries}
- Success Rate: {(successful_recoveries/total_attempts*100):.1f}%
Detailed report saved to: {report_file}
"""
summary_file = os.path.join(self.output_dir, 'recovery_summary.txt')
with open(summary_file, 'w') as f:
f.write(summary)
print(summary)
# Usage
if __name__ == "__main__":
image_path = "/evidence/disk_image.dd"
output_dir = "/recovered_files"
recovery = TSKFileRecovery(image_path, output_dir)
# Recover common document types
extensions = ['doc', 'docx', 'pdf', 'txt', 'jpg', 'png']
recovered_files = recovery.recover_files_by_extension(extensions, max_files=50)
# Generate report
recovery.generate_recovery_report()
Timeline Analysis
#!/usr/bin/env python3
# TSK timeline analysis script
import subprocess
import csv
import json
from datetime import datetime, timedelta
from collections import defaultdict
class TSKTimelineAnalyzer:
def __init__(self, image_path):
self.image_path = image_path
self.timeline_data = []
self.analysis_results = {}
def create_timeline(self, offset='0', output_file='timeline.csv'):
"""Create timeline using TSK tools"""
print("Creating timeline from disk image...")
# Create body file
body_file = 'timeline.body'
try:
# Generate body file for active files
cmd1 = ['fls', '-r', '-m', '/', '-o', offset, self.image_path]
with open(body_file, 'w') as f:
subprocess.run(cmd1, stdout=f, check=True)
# Add deleted files to body file
cmd2 = ['fls', '-r', '-d', '-m', '/', '-o', offset, self.image_path]
with open(body_file, 'a') as f:
subprocess.run(cmd2, stdout=f, check=True)
# Convert body file to timeline
cmd3 = ['mactime', '-b', body_file, '-d']
with open(output_file, 'w') as f:
result = subprocess.run(cmd3, stdout=f, text=True)
if result.returncode == 0:
print(f"Timeline created successfully: \\\\{output_file\\\\}")
return output_file
else:
print("Error creating timeline")
return None
except Exception as e:
print(f"Error creating timeline: \\\\{e\\\\}")
return None
def parse_timeline(self, timeline_file):
"""Parse timeline CSV file"""
timeline_data = []
try:
with open(timeline_file, 'r') as f:
# Skip header lines
lines = f.readlines()
for line in lines:
if line.strip() and not line.startswith('Date'):
parts = line.strip().split(',')
if len(parts) >= 5:
timeline_entry = {
'date': parts[0],
'size': parts[1],
'type': parts[2],
'mode': parts[3],
'uid': parts[4],
'gid': parts[5] if len(parts) > 5 else '',
'meta': parts[6] if len(parts) > 6 else '',
'filename': ','.join(parts[7:]) if len(parts) > 7 else ''
}
timeline_data.append(timeline_entry)
self.timeline_data = timeline_data
print(f"Parsed \\\\{len(timeline_data)\\\\} timeline entries")
return timeline_data
except Exception as e:
print(f"Error parsing timeline: \\\\{e\\\\}")
return []
def analyze_activity_patterns(self):
"""Analyze activity patterns in timeline"""
if not self.timeline_data:
print("No timeline data available for analysis")
return {}
# Activity by hour
hourly_activity = defaultdict(int)
daily_activity = defaultdict(int)
file_types = defaultdict(int)
for entry in self.timeline_data:
try:
# Parse date
date_str = entry['date']
if date_str and date_str != '0':
dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')  # mactime -d date format
# Count by hour
hour_key = dt.strftime('%H:00')
hourly_activity[hour_key] += 1
# Count by day
day_key = dt.strftime('%Y-%m-%d')
daily_activity[day_key] += 1
# Count file types
filename = entry.get('filename', '')
if '.' in filename:
ext = filename.split('.')[-1].lower()
file_types[ext] += 1
except Exception as e:
continue
analysis = {
'total_entries': len(self.timeline_data),
'hourly_activity': dict(hourly_activity),
'daily_activity': dict(daily_activity),
'file_types': dict(file_types),
'peak_hour': max(hourly_activity, key=hourly_activity.get) if hourly_activity else None,
'peak_day': max(daily_activity, key=daily_activity.get) if daily_activity else None
}
self.analysis_results = analysis
return analysis
def find_suspicious_activity(self):
"""Identify potentially suspicious activity"""
suspicious_indicators = []
if not self.timeline_data:
return suspicious_indicators
# Look for activity during unusual hours (late night/early morning)
unusual_hours = ['00', '01', '02', '03', '04', '05']
unusual_activity = 0
for entry in self.timeline_data:
try:
date_str = entry['date']
if date_str and date_str != '0':
dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')
if dt.strftime('%H') in unusual_hours:
unusual_activity += 1
except:
continue
if unusual_activity > 10: # Threshold for suspicious activity
suspicious_indicators.append({
'type': 'unusual_hours',
'description': f'High activity during unusual hours: {unusual_activity} events',
'severity': 'medium'
})
# Look for rapid file creation/deletion
rapid_activity_threshold = 100 # files per minute
# Group by minute
minute_activity = defaultdict(int)
for entry in self.timeline_data:
try:
date_str = entry['date']
if date_str and date_str != '0':
dt = datetime.strptime(date_str, '%a %b %d %Y %H:%M:%S')
minute_key = dt.strftime('%Y-%m-%d %H:%M')
minute_activity[minute_key] += 1
except:
continue
for minute, count in minute_activity.items():
if count > rapid_activity_threshold:
suspicious_indicators.append({
'type': 'rapid_activity',
'description': f'Rapid file activity at {minute}: {count} events',
'severity': 'high'
})
# Look for suspicious file extensions
suspicious_extensions = ['exe', 'bat', 'cmd', 'scr', 'pif', 'com']
for entry in self.timeline_data:
filename = entry.get('filename', '').lower()
for ext in suspicious_extensions:
if filename.endswith(f'.{ext}'):
suspicious_indicators.append({
'type': 'suspicious_file',
'description': f'Suspicious file: {entry.get("filename", "")}',
'severity': 'medium',
'timestamp': entry.get('date', ''),
'filename': entry.get('filename', '')
})
break
return suspicious_indicators
def generate_analysis_report(self, output_file='timeline_analysis.json'):
"""Generate comprehensive timeline analysis report"""
# Perform analysis
patterns = self.analyze_activity_patterns()
suspicious = self.find_suspicious_activity()
report = {
'analysis_timestamp': datetime.now().isoformat(),
'image_file': self.image_path,
'timeline_statistics': patterns,
'suspicious_indicators': suspicious,
'summary': {
'total_timeline_entries': patterns.get('total_entries', 0),
'suspicious_events': len(suspicious),
'peak_activity_hour': patterns.get('peak_hour'),
'peak_activity_day': patterns.get('peak_day')
}
}
# Save report
with open(output_file, 'w') as f:
json.dump(report, f, indent=2)
print(f"Timeline analysis report saved: \\\\{output_file\\\\}")
# Print summary
print("\nTimeline Analysis Summary:")
print(f"Total timeline entries: \\\\{patterns.get('total_entries', 0)\\\\}")
print(f"Suspicious indicators found: \\\\{len(suspicious)\\\\}")
print(f"Peak activity hour: \\\\{patterns.get('peak_hour', 'Unknown')\\\\}")
print(f"Peak activity day: \\\\{patterns.get('peak_day', 'Unknown')\\\\}")
if suspicious:
print("\nSuspicious Activity Detected:")
for indicator in suspicious[:5]: # Show first 5
print(f"- \\\\{indicator['type']\\\\}: \\\\{indicator['description']\\\\}")
return report
# Usage
if __name__ == "__main__":
image_path = "/evidence/disk_image.dd"
analyzer = TSKTimelineAnalyzer(image_path)
# Create timeline
timeline_file = analyzer.create_timeline()
if timeline_file:
# Parse timeline
analyzer.parse_timeline(timeline_file)
# Generate analysis report
analyzer.generate_analysis_report()
Integration Examples
Autopsy Integration
#!/bin/bash
# TSK and Autopsy integration script
IMAGE_PATH="$1"
CASE_NAME="$2"
CASE_DIR="/cases/$CASE_NAME"
if [ -z "$IMAGE_PATH" ]||[ -z "$CASE_NAME" ]; then
echo "Usage: $0 <image_path> <case_name>"
exit 1
fi
echo "Creating integrated TSK/Autopsy analysis for: $IMAGE_PATH"
# Create case directory
mkdir -p "$CASE_DIR"
# Step 1: TSK preliminary analysis
echo "Step 1: Running TSK preliminary analysis..."
mmls "$IMAGE_PATH" > "$CASE_DIR/partition_table.txt"
fsstat "$IMAGE_PATH" > "$CASE_DIR/filesystem_info.txt"
# Step 2: Create timeline with TSK
echo "Step 2: Creating timeline with TSK..."
fls -r -m "/" "$IMAGE_PATH" > "$CASE_DIR/timeline.body"
mactime -b "$CASE_DIR/timeline.body" -d > "$CASE_DIR/timeline.csv"
# Step 3: Extract key files with TSK
echo "Step 3: Extracting key files..."
mkdir -p "$CASE_DIR/extracted_files"
# Extract registry files (Windows)
fls "$IMAGE_PATH"|grep -i "system\|software\|sam\|security"|while read line; do
inode=$(echo "$line"|awk '\\\\{print $2\\\\}'|cut -d: -f1)
filename=$(echo "$line"|awk '\\\\{print $3\\\\}')
if [[ $inode =~ ^[0-9]+$ ]]; then
icat "$IMAGE_PATH" "$inode" > "$CASE_DIR/extracted_files/$filename"
fi
done
# Step 4: Import into Autopsy (if available)
if command -v autopsy &> /dev/null; then
echo "Step 4: Importing into Autopsy..."
# Autopsy command-line import would go here
# This depends on Autopsy version and configuration
fi
echo "Integrated analysis completed. Results in: $CASE_DIR"
YARA Integration
#!/usr/bin/env python3
# TSK and YARA integration for malware detection
import subprocess
import yara
import os
import tempfile
class TSKYaraScanner:
def __init__(self, image_path, yara_rules_path):
self.image_path = image_path
self.yara_rules = yara.compile(filepath=yara_rules_path)
self.matches = []
def scan_files(self, offset='0'):
"""Scan files in disk image with YARA rules"""
# Get file listing
cmd = ['fls', '-r', '-o', offset, self.image_path]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.split('\n'):
if line.strip() and line.startswith('r/r'):
parts = line.split()
if len(parts) >= 3:
inode = parts[1].split(':')[0]
filename = parts[2]
# Extract file content
try:
with tempfile.NamedTemporaryFile() as temp_file:
extract_cmd = ['icat', '-o', offset, self.image_path, inode]
subprocess.run(extract_cmd, stdout=temp_file, check=True)
# Scan with YARA
matches = self.yara_rules.match(temp_file.name)
if matches:
self.matches.append({
'filename': filename,
'inode': inode,
'matches': [str(match) for match in matches]
})
print(f"YARA match in {filename}: {matches}")
except Exception as e:
print(f"Error scanning \\\\{filename\\\\}: \\\\{e\\\\}")
return self.matches
# Usage
scanner = TSKYaraScanner("/evidence/disk_image.dd", "/rules/malware.yar")
matches = scanner.scan_files()
Troubleshooting
Common Issues
**Image Format Issues:**
# Check image format
file disk_image.dd
# Copy a raw image to a .dd file (raw and dd are the same format)
dd if=disk_image.raw of=disk_image.dd bs=512
# Handle E01 images
ewfmount disk_image.E01 /mnt/ewf
# Then use /mnt/ewf/ewf1 as image path
# Handle split images
cat disk_image.001 disk_image.002 > disk_image.dd
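A hedged end-to-end E01 example (mount point and image name are placeholders; ewfmount and fusermount come from the libewf and FUSE packages):
# Expose an E01 image as a raw device, analyze it, then unmount
mkdir -p /mnt/ewf
ewfmount disk_image.E01 /mnt/ewf
mmls /mnt/ewf/ewf1
fls -r -o 2048 /mnt/ewf/ewf1
fusermount -u /mnt/ewf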
**Partition Detection Issues:**
# Force partition table type
mmls -t dos disk_image.dd
mmls -t gpt disk_image.dd
# Manual offset calculation
fdisk -l disk_image.dd
# Check for damaged partition table
testdisk disk_image.dd
# Use hexdump to examine boot sector
hexdump -C disk_image.dd|head -20
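When a partition needs to be examined with non-TSK tools, the start sector from mmls can be converted to a byte offset for a read-only loop mount (512-byte sectors assumed; 2048 is an example start sector):
# Mount a partition read-only at byte offset = start_sector * sector_size
SECTOR=2048
mkdir -p /mnt/part1
sudo mount -o ro,loop,offset=$((SECTOR * 512)) disk_image.dd /mnt/part1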
**File System Issues:**
# Check file system type
fsstat disk_image.dd
# Force file system type
fls -f ntfs disk_image.dd
fls -f ext3 disk_image.dd
# Check for file system damage
fsck.ext4 -n disk_image.dd
# Use alternative tools
debugfs disk_image.dd
Debugging
Enable detailed debugging and error reporting:
# Verbose output
fls -v disk_image.dd
# Debug mode (if available)
TSK_DEBUG=1 fls disk_image.dd
# Check TSK version and capabilities
mmls -V
fls -V
# Monitor system calls
strace fls disk_image.dd
# Check for library dependencies
ldd $(which fls)
Security Considerations
Evidence Integrity
**Evidence Protection:**
- Always work with read-only copies of evidence
- Use hardware write blockers whenever possible
- Verify image integrity with cryptographic hashes
- Document all analysis procedures
- Maintain chain-of-custody records
**Hash Verification:**
# Calculate image hash before analysis
md5sum disk_image.dd > disk_image.md5
sha256sum disk_image.dd > disk_image.sha256
# Verify hash after analysis
md5sum -c disk_image.md5
sha256sum -c disk_image.sha256
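For E01 evidence, the acquisition hash stored in the container can also be checked; ewfverify ships with the libewf tools referenced above:
# Verify the data in an E01 container against its stored acquisition hash
ewfverify disk_image.E01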
Legal and Compliance
**Documentation Requirements:**
- Maintain detailed logs of all TSK commands executed
- Document analysis methodology and procedures
- Record all findings and their significance
- Preserve original evidence and analysis results
- Follow applicable legal and regulatory requirements
**Best Practices:**
- Use standardized forensic procedures
- Validate tools and techniques regularly
- Keep up to date with legal requirements
- Implement quality assurance processes
- Pursue regular training and certification updates