# Autopsy Reference Sheet
Autopsy is a complete digital forensics platform that provides a graphical interface to The Sleuth Kit (TSK) and other digital forensics tools. Developed by Basis Technology, Autopsy is widely regarded as the de facto standard for digital forensic investigations in law enforcement, corporate security, and incident response scenarios. The platform combines powerful forensic analysis capabilities with an intuitive user interface, making advanced digital forensics techniques accessible to investigators of varying skill levels.

Autopsy's core strength lies in its ability to process and analyze many types of digital evidence, including disk images, memory dumps, mobile device extractions, and network packet captures. The platform supports multiple file systems (NTFS, FAT, ext2/3/4, HFS+) and can recover deleted files, analyze file metadata, extract application artifacts, and perform timeline analysis. Autopsy's modular architecture allows extension through plugins, letting investigators tailor the platform to the specific needs of an investigation.

Autopsy has evolved considerably from its command-line origins into a sophisticated forensic workstation capable of handling complex investigations. The platform includes advanced features such as keyword search, hash analysis for known-file identification, email analysis, web artifact extraction, and comprehensive reporting. Its integration with other forensic tools and databases makes it an essential component of modern digital forensics laboratories and incident response teams.

## Installation

### Windows Installation

Installing Autopsy on Windows systems:

```bash
# Download the Autopsy installer
# Visit: https://www.autopsy.com/download/

# Run the installer as administrator
# autopsy-4.20.0-64bit.msi

# Verify the installation
"C:\Program Files\Autopsy-4.20.0\bin\autopsy64.exe" --version

# Additional dependencies
# Java 8+ (bundled with the installer)
# Microsoft Visual C++ Redistributable

# Configure Autopsy
# Launch Autopsy, set the case directory, and configure user preferences
```

### Linux Installation

Installing Autopsy on Linux distributions:

```bash
# Ubuntu/Debian installation
sudo apt update
# Note: the "autopsy" package in the distribution repositories is the legacy
# 2.x web interface; Autopsy 4.x is installed from the GitHub release below
sudo apt install sleuthkit
# Install dependencies (photorec is provided by the testdisk package)
sudo apt install openjdk-8-jdk testdisk
# Download latest Autopsy
wget https://github.com/sleuthkit/autopsy/releases/download/autopsy-4.20.0/autopsy-4.20.0.zip
# Extract and install
unzip autopsy-4.20.0.zip
cd autopsy-4.20.0
# Run installation script
sudo ./unix_setup.sh
# Start Autopsy
./bin/autopsy
```

### Docker Installation

Running Autopsy in a Docker container. Note that the apt `autopsy` package used in this image provides the legacy 2.x web interface, which is what listens on port 9999:
```bash
# Create Autopsy Docker environment
cat > Dockerfile << 'EOF'
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# photorec is provided by the testdisk package
RUN apt-get update && apt-get install -y \
    autopsy sleuthkit openjdk-8-jdk \
    testdisk ewf-tools \
    libewf-dev python3 python3-pip
WORKDIR /cases
EXPOSE 9999
CMD ["autopsy"]
EOF
# Build container
docker build -t autopsy-forensics .
# Run with case directory mounted
docker run -it -p 9999:9999 -v $(pwd)/cases:/cases autopsy-forensics
# Access web interface
# http://localhost:9999/autopsy
```

### Virtual Machine Configuration

Using a prebuilt forensics VM (such as the SANS SIFT Workstation) that ships with Autopsy:
```bash
# Download forensics VM with Autopsy
# SANS SIFT Workstation
wget https://digital-forensics.sans.org/community/downloads
# Import VM
VBoxManage import SIFT-Workstation.ova
# Configure VM resources
VBoxManage modifyvm "SIFT" --memory 8192 --cpus 4
# Start VM
VBoxManage startvm "SIFT"
# Access Autopsy
autopsy &
```

## Basic Usage

### Case Creation

Creating and managing forensic cases:
```bash
# Start Autopsy
autopsy
# Create new case via web interface
# Navigate to http://localhost:9999/autopsy
# Case creation parameters
Case Name: "Investigation_2024_001"
Case Directory: "/cases/investigation_001"
Investigator: "John Doe"
Description: "Malware incident investigation"
# Add data source
# Select image file or physical device
# Choose processing options
```
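
The case parameters above are entered through the GUI; for record keeping outside Autopsy, a small script can pre-create a working folder layout and a metadata file. A minimal sketch, assuming `/cases` as the case root; the layout and file names are illustrative and are not Autopsy's internal case format:

```python
#!/usr/bin/env python3
# Minimal case bootstrap sketch (illustrative; not Autopsy's internal case format)
import json
import os
from datetime import datetime

def bootstrap_case(case_root, case_name, investigator, description):
    """Create a case directory skeleton and a metadata record."""
    case_dir = os.path.join(case_root, case_name)
    for sub in ("evidence", "exports", "reports"):
        os.makedirs(os.path.join(case_dir, sub), exist_ok=True)

    metadata = {
        "case_name": case_name,
        "investigator": investigator,
        "description": description,
        "created": datetime.now().isoformat(),
    }
    with open(os.path.join(case_dir, "case_metadata.json"), "w") as f:
        json.dump(metadata, f, indent=2)
    return case_dir

if __name__ == "__main__":
    print(bootstrap_case("/cases", "Investigation_2024_001", "John Doe",
                         "Malware incident investigation"))
```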

### Data Source Analysis

Adding and analyzing data sources:

```bash
# Add disk image
# File -> Add Data Source
# Select disk image file (.dd, .raw, .E01)
# Add logical files
# Select directory or individual files
# Useful for targeted analysis
# Add unallocated space
# Analyze free space and deleted files
# Recover deleted data
# Configure ingest modules
# Enable relevant analysis modules
# Hash calculation, keyword search, etc.
```
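
When a disk image is added as a data source, Autopsy parses the volume system and file systems automatically. The sketch below shows roughly what that first pass looks like using pytsk3 (The Sleuth Kit's Python bindings, assumed installed via `pip install pytsk3`), with an assumed image path:

```python
#!/usr/bin/env python3
# Sketch: inspect a raw disk image with pytsk3 before adding it to Autopsy
# Assumes: pip install pytsk3, and a raw/dd image at the path below
import pytsk3

IMAGE_PATH = "/evidence/disk_image.dd"  # hypothetical path

img = pytsk3.Img_Info(IMAGE_PATH)
try:
    volume = pytsk3.Volume_Info(img)
    for part in volume:
        print(f"Partition {part.addr}: {part.desc.decode()} "
              f"start={part.start} len={part.len} sectors")
except IOError:
    # No partition table: try to open the image as a single file system
    print("No volume system found; opening as a file system at offset 0")
    fs = pytsk3.FS_Info(img, offset=0)
    for entry in fs.open_dir(path="/"):
        print(entry.info.name.name.decode(errors="replace"))
```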

### File System Analysis

Analyzing file systems and recovering data:
```bash
# Browse file system
# Navigate directory structure
# View file metadata and properties
# Recover deleted files
# Check "Deleted Files" node
# Analyze file signatures
# Recover based on file headers
# Timeline analysis
# Generate timeline of file activity
# Correlate events across time
# Identify suspicious patterns
# File type analysis
# Analyze files by type
# Identify misnamed files
# Check file signatures
```
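
The file type identification step above flags files whose extension does not match their content signature. A minimal standalone sketch of the same idea, using a deliberately small magic-byte table and an assumed export directory:

```python
#!/usr/bin/env python3
# Sketch: flag extension/signature mismatches in exported files (simplified signature table)
import os

# A few well-known magic bytes; a real tool would use a much larger table
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",
    b"\xff\xd8\xff": ".jpg",
    b"%PDF": ".pdf",
    b"PK\x03\x04": ".zip",
    b"MZ": ".exe",
}

def detect_extension(path):
    """Return the extension suggested by the file header, or None if unknown."""
    with open(path, "rb") as f:
        header = f.read(16)
    for magic, ext in SIGNATURES.items():
        if header.startswith(magic):
            return ext
    return None

def find_misnamed(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            expected = detect_extension(path)
            actual = os.path.splitext(name)[1].lower()
            if expected and actual != expected:
                print(f"Possible misnamed file: {path} (header suggests {expected})")

if __name__ == "__main__":
    find_misnamed("/cases/investigation_001/exports")  # hypothetical export directory
```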

## Advanced Features

### Keyword Search

Running comprehensive keyword searches:
```bash
# Configure keyword lists
# Tools -> Options -> Keyword Search
# Create custom keyword lists
# Add specific terms related to investigation
# Include regular expressions
# Search configuration
Search Type: "Exact Match" or "Regular Expression"
Encoding: "UTF-8", "UTF-16", "ASCII"
Language: "English" (for indexing)
# Advanced search options
# Case sensitive search
# Whole word matching
# Search in slack space
# Search unallocated space
# Search results analysis
# Review keyword hits
# Examine context around matches
# Export search results
```
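
Autopsy indexes content with Solr and runs keyword lists against that index. For a quick scripted pass over files already exported from a case, a sketch like the following (keyword list, encodings, and directory are assumptions) covers exact-match and regular-expression searches:

```python
#!/usr/bin/env python3
# Sketch: grep-style keyword/regex search over exported files in several encodings
import os
import re

KEYWORDS = [r"invoice", r"bitcoin", r"\b\d{3}-\d{2}-\d{4}\b"]  # plain terms and a regex (illustrative)
ENCODINGS = ("utf-8", "utf-16", "ascii")

def search_file(path, patterns):
    """Return (encoding, pattern, match) tuples for every hit in one file."""
    with open(path, "rb") as f:
        raw = f.read()
    hits = []
    for encoding in ENCODINGS:
        text = raw.decode(encoding, errors="ignore")
        for pattern in patterns:
            for match in pattern.finditer(text):
                hits.append((encoding, pattern.pattern, match.group(0)))
    return hits

if __name__ == "__main__":
    compiled = [re.compile(k, re.IGNORECASE) for k in KEYWORDS]
    for dirpath, _, filenames in os.walk("/cases/investigation_001/exports"):  # hypothetical
        for name in filenames:
            path = os.path.join(dirpath, name)
            for encoding, pattern, matched in search_file(path, compiled):
                print(f"{path} [{encoding}] {pattern} -> {matched!r}")
```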

### Hash Analysis

Performing hash analysis to identify known files:
```bash
# Configure hash databases
# Tools -> Options -> Hash Database
# Import NSRL database
# Download from https://www.nist.gov/itl/ssd/software-quality-group/nsrl-download
# Import hash sets for known good files
# Import custom hash sets
# Create hash sets for known bad files
# Import malware hash databases
# Add organization-specific hash sets
# Hash calculation
# Enable "Hash Lookup" ingest module
# Calculate MD5, SHA-1, SHA-256 hashes
# Compare against known hash databases
# Notable files identification
# Identify unknown files
# Flag potentially malicious files
# Prioritize analysis efforts
```
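
The Hash Lookup ingest module compares file hashes against sets such as the NSRL. The sketch below applies the same idea outside Autopsy, checking files against a plain-text hash set containing one lowercase MD5 or SHA-256 digest per line (both paths are assumptions):

```python
#!/usr/bin/env python3
# Sketch: hash files and compare against a known-bad hash set (one hex digest per line)
import hashlib
import os

HASH_SET = "/opt/hashsets/known_bad.txt"        # hypothetical hash set file
TARGET_DIR = "/cases/investigation_001/exports"  # hypothetical export directory

def file_digests(path, chunk_size=1024 * 1024):
    """Return (md5, sha256) of a file, read in chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

if __name__ == "__main__":
    with open(HASH_SET) as f:
        known_bad = {line.strip().lower() for line in f if line.strip()}

    for dirpath, _, filenames in os.walk(TARGET_DIR):
        for name in filenames:
            path = os.path.join(dirpath, name)
            md5, sha256 = file_digests(path)
            if md5 in known_bad or sha256 in known_bad:
                print(f"NOTABLE: {path} (md5={md5})")
```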

### Email Analysis

Analyzing email artifacts and communications:
```bash
# Email artifact extraction
# Enable "Email Parser" ingest module
# Support for PST, OST, MBOX formats
# Extract email metadata and content
# Email analysis features
# View email headers and routing
# Analyze attachments
# Extract embedded images
# Timeline email communications
# Advanced email analysis
# Keyword search in email content
# Identify email patterns
# Analyze sender/recipient relationships
# Export email evidence
# Webmail analysis
# Extract webmail artifacts from browsers
# Analyze cached email content
# Recover deleted webmail messages
```
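
The Email Parser module extracts messages from PST, OST, and MBOX sources. For a quick look at an MBOX export, Python's standard `mailbox` module is sufficient; a minimal sketch, assuming an mbox file exported from the case:

```python
#!/usr/bin/env python3
# Sketch: list sender/recipient/subject/date and attachments from an MBOX file
import mailbox

MBOX_PATH = "/cases/investigation_001/exports/mail.mbox"  # hypothetical path

mbox = mailbox.mbox(MBOX_PATH)
for message in mbox:
    print(f"From:    {message.get('From', '')}")
    print(f"To:      {message.get('To', '')}")
    print(f"Subject: {message.get('Subject', '')}")
    print(f"Date:    {message.get('Date', '')}")
    # List attachment names, if any
    for part in message.walk():
        filename = part.get_filename()
        if filename:
            print(f"  Attachment: {filename}")
    print("-" * 60)
```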

### Web Artifact Analysis

Extracting and analyzing web browsing artifacts:
```bash
# Web artifact extraction
# Enable "Recent Activity" ingest module
# Extract browser history, cookies, downloads
# Analyze cached web content
# Browser support
# Chrome, Firefox, Internet Explorer
# Safari, Edge browsers
# Mobile browser artifacts
# Analysis capabilities
# Timeline web activity
# Identify visited websites
# Analyze search queries
# Extract form data
# Advanced web analysis
# Recover deleted browser history
# Analyze private browsing artifacts
# Extract stored passwords
# Identify malicious websites
```
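
The Recent Activity module extracts these browser artifacts automatically. To inspect a Chrome/Chromium `History` database by hand, it can be queried as an ordinary SQLite file; the sketch below assumes a copy has been exported from the image (always work on a copy, never the original evidence) and converts Chrome's microseconds-since-1601 timestamps:

```python
#!/usr/bin/env python3
# Sketch: read recent URLs from an exported Chrome/Chromium "History" SQLite database
import sqlite3
from datetime import datetime, timedelta

HISTORY_DB = "/cases/investigation_001/exports/History"  # hypothetical exported copy

def webkit_to_datetime(webkit_us):
    """Convert Chrome's microseconds-since-1601 timestamps to datetime."""
    if not webkit_us:
        return None
    return datetime(1601, 1, 1) + timedelta(microseconds=webkit_us)

conn = sqlite3.connect(HISTORY_DB)
cursor = conn.cursor()
cursor.execute(
    "SELECT url, title, visit_count, last_visit_time "
    "FROM urls ORDER BY last_visit_time DESC LIMIT 20"
)
for url, title, visits, last_visit in cursor.fetchall():
    print(f"{webkit_to_datetime(last_visit)}  ({visits} visits)  {title or ''}")
    print(f"  {url}")
conn.close()
```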

## Automation Scripts

### Batch Case Processing
```python
#!/usr/bin/env python3
# Autopsy batch case processing script
import os
import subprocess
import json
import time
from datetime import datetime
class AutopsyBatchProcessor:
def __init__(self, autopsy_path="/opt/autopsy/bin/autopsy"):
self.autopsy_path = autopsy_path
self.cases_dir = "/cases"
        self.results = {}
def create_case(self, case_name, evidence_file, investigator="Automated"):
"""Create new Autopsy case"""
case_dir = os.path.join(self.cases_dir, case_name)
# Create case directory
os.makedirs(case_dir, exist_ok=True)
# Case configuration
        case_config = {
            "case_name": case_name,
            "case_dir": case_dir,
            "investigator": investigator,
            "created": datetime.now().isoformat(),
            "evidence_file": evidence_file
        }
# Save case configuration
with open(os.path.join(case_dir, "case_config.json"), "w") as f:
json.dump(case_config, f, indent=2)
return case_config
def run_autopsy_analysis(self, case_config):
"""Run Autopsy analysis on case"""
# Create Autopsy command-line script
script_content = f"""
# Autopsy batch processing script
import org.sleuthkit.autopsy.casemodule.Case
import org.sleuthkit.autopsy.coreutils.Logger
import org.sleuthkit.autopsy.ingest.IngestManager
# Create case
case = Case.createAsCurrentCase(
Case.CaseType.SINGLE_USER_CASE,
"\\\\{case_config['case_name']\\\\}",
"\\\\{case_config['case_dir']\\\\}",
Case.CaseDetails("\\\\{case_config['case_name']\\\\}", "\\\\{case_config['investigator']\\\\}", "", "", "")
)
# Add data source
dataSource = case.addDataSource("{case_config['evidence_file']}")
# Configure ingest modules
ingestJobSettings = IngestJobSettings()
ingestJobSettings.setProcessUnallocatedSpace(True)
ingestJobSettings.setProcessKnownFilesFilter(True)
# Start ingest
ingestManager = IngestManager.getInstance()
ingestJob = ingestManager.beginIngestJob(dataSource, ingestJobSettings)
# Wait for completion
while ingestJob.getStatus() != IngestJob.Status.COMPLETED:
time.sleep(30)
print("Analysis completed for case: \\\\{case_config['case_name']\\\\}")
"""
# Save script
script_file = os.path.join(case_config['case_dir'], "analysis_script.py")
with open(script_file, "w") as f:
f.write(script_content)
# Run Autopsy with script
cmd = [
self.autopsy_path,
"--script", script_file,
"--case-dir", case_config['case_dir']
]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=7200)
            if result.returncode == 0:
                return {
                    "status": "success",
                    "output": result.stdout,
                    "completion_time": datetime.now().isoformat()
                }
            else:
                return {
                    "status": "failed",
                    "error": result.stderr,
                    "completion_time": datetime.now().isoformat()
                }
        except subprocess.TimeoutExpired:
            return {
                "status": "timeout",
                "completion_time": datetime.now().isoformat()
            }
        except Exception as e:
            return {
                "status": "error",
                "error": str(e),
                "completion_time": datetime.now().isoformat()
            }
def extract_artifacts(self, case_dir):
"""Extract key artifacts from completed case"""
        artifacts = {}
        # Define artifact paths
        artifact_paths = {
            "timeline": "timeline.csv",
            "keyword_hits": "keyword_hits.csv",
            "hash_analysis": "hash_analysis.csv",
            "web_artifacts": "web_artifacts.csv",
            "email_artifacts": "email_artifacts.csv"
        }
        for artifact_type, filename in artifact_paths.items():
            artifact_file = os.path.join(case_dir, "Reports", filename)
            if os.path.exists(artifact_file):
                artifacts[artifact_type] = artifact_file
                print(f"Found {artifact_type}: {artifact_file}")
return artifacts
def generate_summary_report(self, case_config, analysis_result, artifacts):
"""Generate case summary report"""
        report = {
            "case_info": case_config,
            "analysis_result": analysis_result,
            "artifacts_found": list(artifacts.keys()),
            "report_generated": datetime.now().isoformat()
        }
        # Add artifact statistics
        for artifact_type, artifact_file in artifacts.items():
            try:
                with open(artifact_file, 'r') as f:
                    lines = f.readlines()
                report[f"{artifact_type}_count"] = len(lines) - 1  # Exclude header
            except Exception:
                report[f"{artifact_type}_count"] = 0
# Save report
report_file = os.path.join(case_config['case_dir'], "summary_report.json")
with open(report_file, "w") as f:
json.dump(report, f, indent=2)
return report
def process_evidence_batch(self, evidence_list):
"""Process multiple evidence files"""
for i, evidence_file in enumerate(evidence_list):
case_name = f"batch_case_\\\\{i+1:03d\\\\}"
print(f"Processing case \\\\{i+1\\\\}/\\\\{len(evidence_list)\\\\}: \\\\{case_name\\\\}")
# Create case
case_config = self.create_case(case_name, evidence_file)
# Run analysis
analysis_result = self.run_autopsy_analysis(case_config)
# Extract artifacts
artifacts = self.extract_artifacts(case_config['case_dir'])
# Generate report
summary = self.generate_summary_report(case_config, analysis_result, artifacts)
# Store results
self.results[case_name] = summary
print(f"Completed case: \\\\{case_name\\\\}")
# Generate batch summary
self.generate_batch_summary()
def generate_batch_summary(self):
"""Generate summary of all processed cases"""
        batch_summary = {
            "total_cases": len(self.results),
            "successful_cases": len([r for r in self.results.values() if r['analysis_result']['status'] == 'success']),
            "failed_cases": len([r for r in self.results.values() if r['analysis_result']['status'] != 'success']),
            "processing_time": datetime.now().isoformat(),
            "cases": self.results
        }
        with open(os.path.join(self.cases_dir, "batch_summary.json"), "w") as f:
            json.dump(batch_summary, f, indent=2)
        print(f"Batch processing completed: {batch_summary['successful_cases']}/{batch_summary['total_cases']} successful")
# Usage
if __name__ == "__main__":
processor = AutopsyBatchProcessor()
evidence_files = [
"/evidence/disk_image_1.dd",
"/evidence/disk_image_2.E01",
"/evidence/memory_dump.raw"
]
processor.process_evidence_batch(evidence_files)
```

### Automated Artifact Extraction

Extracting artifacts directly from the Autopsy case database:
```python
#!/usr/bin/env python3
# Automated artifact extraction from Autopsy cases
import sqlite3
import csv
import json
import os
from datetime import datetime
class AutopsyArtifactExtractor:
def __init__(self, case_db_path):
self.case_db_path = case_db_path
        self.artifacts = {}
def connect_to_case_db(self):
"""Connect to Autopsy case database"""
try:
conn = sqlite3.connect(self.case_db_path)
return conn
except Exception as e:
print(f"Error connecting to case database: \\\\{e\\\\}")
return None
def extract_timeline_artifacts(self):
"""Extract timeline artifacts"""
conn = self.connect_to_case_db()
if not conn:
return []
query = """
SELECT
tsk_files.name,
tsk_files.crtime,
tsk_files.mtime,
tsk_files.atime,
tsk_files.ctime,
tsk_files.size,
tsk_files.parent_path
FROM tsk_files
WHERE tsk_files.meta_type = 1
ORDER BY tsk_files.crtime
"""
cursor = conn.cursor()
cursor.execute(query)
timeline_data = []
for row in cursor.fetchall():
            timeline_data.append({
                "filename": row[0],
                "created": row[1],
                "modified": row[2],
                "accessed": row[3],
                "changed": row[4],
                "size": row[5],
                "path": row[6]
            })
conn.close()
return timeline_data
def extract_web_artifacts(self):
"""Extract web browsing artifacts"""
conn = self.connect_to_case_db()
if not conn:
return []
query = """
SELECT
blackboard_artifacts.artifact_id,
blackboard_attributes.value_text,
blackboard_attributes.value_int32,
blackboard_attributes.value_int64,
blackboard_attribute_types.display_name
FROM blackboard_artifacts
JOIN blackboard_attributes ON blackboard_artifacts.artifact_id = blackboard_attributes.artifact_id
JOIN blackboard_attribute_types ON blackboard_attributes.attribute_type_id = blackboard_attribute_types.attribute_type_id
WHERE blackboard_artifacts.artifact_type_id IN (1, 2, 3, 4, 5)
"""
cursor = conn.cursor()
cursor.execute(query)
web_artifacts = []
for row in cursor.fetchall():
            web_artifacts.append({
                "artifact_id": row[0],
                "value_text": row[1],
                "value_int32": row[2],
                "value_int64": row[3],
                "attribute_type": row[4]
            })
conn.close()
return web_artifacts
def extract_email_artifacts(self):
"""Extract email artifacts"""
conn = self.connect_to_case_db()
if not conn:
return []
query = """
SELECT
blackboard_artifacts.artifact_id,
blackboard_attributes.value_text,
blackboard_attribute_types.display_name
FROM blackboard_artifacts
JOIN blackboard_attributes ON blackboard_artifacts.artifact_id = blackboard_attributes.artifact_id
JOIN blackboard_attribute_types ON blackboard_attributes.attribute_type_id = blackboard_attribute_types.attribute_type_id
WHERE blackboard_artifacts.artifact_type_id = 12
"""
cursor = conn.cursor()
cursor.execute(query)
email_artifacts = []
for row in cursor.fetchall():
            email_artifacts.append({
                "artifact_id": row[0],
                "content": row[1],
                "attribute_type": row[2]
            })
conn.close()
return email_artifacts
def extract_keyword_hits(self):
"""Extract keyword search hits"""
conn = self.connect_to_case_db()
if not conn:
return []
query = """
SELECT
blackboard_artifacts.artifact_id,
blackboard_attributes.value_text,
tsk_files.name,
tsk_files.parent_path
FROM blackboard_artifacts
JOIN blackboard_attributes ON blackboard_artifacts.artifact_id = blackboard_attributes.artifact_id
JOIN tsk_files ON blackboard_artifacts.obj_id = tsk_files.obj_id
WHERE blackboard_artifacts.artifact_type_id = 9
"""
cursor = conn.cursor()
cursor.execute(query)
keyword_hits = []
for row in cursor.fetchall():
            keyword_hits.append({
                "artifact_id": row[0],
                "keyword": row[1],
                "filename": row[2],
                "file_path": row[3]
            })
conn.close()
return keyword_hits
def extract_hash_hits(self):
"""Extract hash analysis results"""
conn = self.connect_to_case_db()
if not conn:
return []
query = """
SELECT
tsk_files.name,
tsk_files.md5,
tsk_files.sha256,
tsk_files.parent_path,
tsk_files.size
FROM tsk_files
WHERE tsk_files.known = 2
"""
cursor = conn.cursor()
cursor.execute(query)
hash_hits = []
for row in cursor.fetchall():
            hash_hits.append({
                "filename": row[0],
                "md5": row[1],
                "sha256": row[2],
                "path": row[3],
                "size": row[4]
            })
conn.close()
return hash_hits
def export_artifacts_to_csv(self, output_dir):
"""Export all artifacts to CSV files"""
os.makedirs(output_dir, exist_ok=True)
# Extract all artifact types
        artifacts = {
            "timeline": self.extract_timeline_artifacts(),
            "web_artifacts": self.extract_web_artifacts(),
            "email_artifacts": self.extract_email_artifacts(),
            "keyword_hits": self.extract_keyword_hits(),
            "hash_hits": self.extract_hash_hits()
        }
# Export to CSV
        for artifact_type, data in artifacts.items():
            if data:
                csv_file = os.path.join(output_dir, f"{artifact_type}.csv")
                with open(csv_file, 'w', newline='') as f:
                    writer = csv.DictWriter(f, fieldnames=data[0].keys())
                    writer.writeheader()
                    writer.writerows(data)
                print(f"Exported {len(data)} {artifact_type} to {csv_file}")
return artifacts
def generate_artifact_summary(self, artifacts, output_file):
"""Generate summary of extracted artifacts"""
        summary = {
            "extraction_time": datetime.now().isoformat(),
            "case_database": self.case_db_path,
            "artifact_counts": {
                artifact_type: len(data) for artifact_type, data in artifacts.items()
            },
            "total_artifacts": sum(len(data) for data in artifacts.values())
        }
        with open(output_file, 'w') as f:
            json.dump(summary, f, indent=2)
        print(f"Artifact summary saved to {output_file}")
return summary
# Usage
if __name__ == "__main__":
case_db = "/cases/investigation_001/case.db"
output_dir = "/cases/investigation_001/extracted_artifacts"
extractor = AutopsyArtifactExtractor(case_db)
artifacts = extractor.export_artifacts_to_csv(output_dir)
summary = extractor.generate_artifact_summary(artifacts, os.path.join(output_dir, "summary.json"))
```

### Report Generation

Generating an HTML analysis report from the extracted artifacts:
```python
#!/usr/bin/env python3
# Autopsy report generation script
import os
import json
import csv
from datetime import datetime
from jinja2 import Template
class AutopsyReportGenerator:
def __init__(self, case_dir):
self.case_dir = case_dir
self.artifacts_dir = os.path.join(case_dir, "extracted_artifacts")
        self.report_data = {}
def load_artifact_data(self):
"""Load extracted artifact data"""
        artifact_files = {
            "timeline": "timeline.csv",
            "web_artifacts": "web_artifacts.csv",
            "email_artifacts": "email_artifacts.csv",
            "keyword_hits": "keyword_hits.csv",
            "hash_hits": "hash_hits.csv"
        }
for artifact_type, filename in artifact_files.items():
file_path = os.path.join(self.artifacts_dir, filename)
if os.path.exists(file_path):
with open(file_path, 'r') as f:
reader = csv.DictReader(f)
self.report_data[artifact_type] = list(reader)
else:
self.report_data[artifact_type] = []
def analyze_timeline_data(self):
"""Analyze timeline data for patterns"""
timeline_data = self.report_data.get("timeline", [])
if not timeline_data:
            return {}
# Analyze file creation patterns
creation_times = [item["created"] for item in timeline_data if item["created"]]
# Group by hour
        hourly_activity = {}
for timestamp in creation_times:
try:
hour = datetime.fromisoformat(timestamp).hour
hourly_activity[hour] = hourly_activity.get(hour, 0) + 1
except:
continue
        return {
            "total_files": len(timeline_data),
            "files_with_timestamps": len(creation_times),
            "peak_activity_hour": max(hourly_activity, key=hourly_activity.get) if hourly_activity else None,
            "hourly_distribution": hourly_activity
        }
def analyze_web_activity(self):
"""Analyze web browsing activity"""
web_data = self.report_data.get("web_artifacts", [])
if not web_data:
            return {}
# Extract URLs and domains
urls = []
domains = set()
for artifact in web_data:
if artifact.get("attribute_type") == "TSK_URL":
url = artifact.get("value_text", "")
if url:
urls.append(url)
try:
domain = url.split("//")[1].split("/")[0]
domains.add(domain)
except:
continue
        return {
            "total_web_artifacts": len(web_data),
            "unique_urls": len(set(urls)),
            "unique_domains": len(domains),
            "top_domains": list(domains)[:10]
        }
def analyze_keyword_hits(self):
"""Analyze keyword search results"""
keyword_data = self.report_data.get("keyword_hits", [])
if not keyword_data:
            return {}
# Group by keyword
        keyword_counts = {}
for hit in keyword_data:
keyword = hit.get("keyword", "")
keyword_counts[keyword] = keyword_counts.get(keyword, 0) + 1
        return {
            "total_keyword_hits": len(keyword_data),
            "unique_keywords": len(keyword_counts),
            "top_keywords": sorted(keyword_counts.items(), key=lambda x: x[1], reverse=True)[:10]
        }
def generate_html_report(self, output_file):
"""Generate comprehensive HTML report"""
# Load artifact data
self.load_artifact_data()
# Perform analysis
timeline_analysis = self.analyze_timeline_data()
web_analysis = self.analyze_web_activity()
keyword_analysis = self.analyze_keyword_hits()
# HTML template
html_template = """
<!DOCTYPE html>
<html>
<head>
<title>Autopsy Forensic Analysis Report</title>
<style>
body \\\\{ font-family: Arial, sans-serif; margin: 20px; \\\\}
.header \\\\{ background-color: #f0f0f0; padding: 20px; border-radius: 5px; \\\\}
.section \\\\{ margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; \\\\}
.artifact-count \\\\{ font-weight: bold; color: #2c5aa0; \\\\}
table \\\\{ width: 100%; border-collapse: collapse; margin: 10px 0; \\\\}
th, td \\\\{ border: 1px solid #ddd; padding: 8px; text-align: left; \\\\}
th \\\\{ background-color: #f2f2f2; \\\\}
.summary-stats \\\\{ display: flex; justify-content: space-around; margin: 20px 0; \\\\}
.stat-box \\\\{ text-align: center; padding: 15px; background-color: #e8f4f8; border-radius: 5px; \\\\}
</style>
</head>
<body>
<div class="header">
<h1>Digital Forensic Analysis Report</h1>
<p><strong>Case Directory:</strong> \\\\{\\\\{ case_dir \\\\}\\\\}</p>
<p><strong>Report Generated:</strong> \\\\{\\\\{ report_time \\\\}\\\\}</p>
<p><strong>Analysis Tool:</strong> Autopsy Digital Forensics Platform</p>
</div>
<div class="summary-stats">
<div class="stat-box">
<h3>\\\\{\\\\{ timeline_analysis.total_files \\\\}\\\\}</h3>
<p>Total Files Analyzed</p>
</div>
<div class="stat-box">
<h3>\\\\{\\\\{ web_analysis.unique_urls \\\\}\\\\}</h3>
<p>Unique URLs Found</p>
</div>
<div class="stat-box">
<h3>\\\\{\\\\{ keyword_analysis.total_keyword_hits \\\\}\\\\}</h3>
<p>Keyword Hits</p>
</div>
</div>
<div class="section">
<h2>Timeline Analysis</h2>
<p>Total files with timestamps: <span class="artifact-count">\\\\{\\\\{ timeline_analysis.files_with_timestamps \\\\}\\\\}</span></p>
\\\\{% if timeline_analysis.peak_activity_hour %\\\\}
<p>Peak activity hour: <span class="artifact-count">\\\\{\\\\{ timeline_analysis.peak_activity_hour \\\\}\\\\}:00</span></p>
\\\\{% endif %\\\\}
</div>
<div class="section">
<h2>Web Activity Analysis</h2>
<p>Total web artifacts: <span class="artifact-count">\\\\{\\\\{ web_analysis.total_web_artifacts \\\\}\\\\}</span></p>
<p>Unique domains visited: <span class="artifact-count">\\\\{\\\\{ web_analysis.unique_domains \\\\}\\\\}</span></p>
\\\\{% if web_analysis.top_domains %\\\\}
<h3>Top Visited Domains</h3>
<ul>
\\\\{% for domain in web_analysis.top_domains %\\\\}
<li>\\\\{\\\\{ domain \\\\}\\\\}</li>
\\\\{% endfor %\\\\}
</ul>
\\\\{% endif %\\\\}
</div>
<div class="section">
<h2>Keyword Analysis</h2>
<p>Total keyword hits: <span class="artifact-count">\\\\{\\\\{ keyword_analysis.total_keyword_hits \\\\}\\\\}</span></p>
<p>Unique keywords: <span class="artifact-count">\\\\{\\\\{ keyword_analysis.unique_keywords \\\\}\\\\}</span></p>
\\\\{% if keyword_analysis.top_keywords %\\\\}
<h3>Top Keywords</h3>
<table>
<tr><th>Keyword</th><th>Occurrences</th></tr>
\\\\{% for keyword, count in keyword_analysis.top_keywords %\\\\}
<tr><td>\\\\{\\\\{ keyword \\\\}\\\\}</td><td>\\\\{\\\\{ count \\\\}\\\\}</td></tr>
\\\\{% endfor %\\\\}
</table>
\\\\{% endif %\\\\}
</div>
<div class="section">
<h2>Artifact Summary</h2>
<table>
<tr><th>Artifact Type</th><th>Count</th></tr>
<tr><td>Timeline Events</td><td>\\\\{\\\\{ timeline_analysis.total_files \\\\}\\\\}</td></tr>
<tr><td>Web Artifacts</td><td>\\\\{\\\\{ web_analysis.total_web_artifacts \\\\}\\\\}</td></tr>
<tr><td>Email Artifacts</td><td>\\\\{\\\\{ report_data.email_artifacts|length \\\\}\\\\}</td></tr>
<tr><td>Keyword Hits</td><td>\\\\{\\\\{ keyword_analysis.total_keyword_hits \\\\}\\\\}</td></tr>
<tr><td>Hash Hits</td><td>\\\\{\\\\{ report_data.hash_hits|length \\\\}\\\\}</td></tr>
</table>
</div>
</body>
</html>
"""
# Render template
template = Template(html_template)
html_content = template.render(
case_dir=self.case_dir,
report_time=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
timeline_analysis=timeline_analysis,
web_analysis=web_analysis,
keyword_analysis=keyword_analysis,
report_data=self.report_data
)
# Save report
with open(output_file, 'w') as f:
f.write(html_content)
print(f"HTML report generated: \\\\{output_file\\\\}")
# Usage
if __name__ == "__main__":
case_dir = "/cases/investigation_001"
output_file = os.path.join(case_dir, "forensic_report.html")
generator = AutopsyReportGenerator(case_dir)
generator.generate_html_report(output_file)
```

## Integration Examples

### SIEM Integration

Forwarding extracted artifacts to a SIEM platform:
```python
#!/usr/bin/env python3
# Autopsy SIEM integration
import json
import requests
from datetime import datetime
class AutopsySIEMIntegration:
def __init__(self, siem_endpoint, api_key):
self.siem_endpoint = siem_endpoint
self.api_key = api_key
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }
def send_artifacts_to_siem(self, artifacts):
"""Send Autopsy artifacts to SIEM"""
for artifact_type, data in artifacts.items():
for item in data:
siem_event = self.format_for_siem(artifact_type, item)
self.send_event(siem_event)
def format_for_siem(self, artifact_type, artifact_data):
"""Format artifact data for SIEM ingestion"""
        base_event = {
            "timestamp": datetime.now().isoformat(),
            "source": "autopsy",
            "artifact_type": artifact_type,
            "event_type": "forensic_artifact"
        }
# Add artifact-specific data
base_event.update(artifact_data)
return base_event
def send_event(self, event_data):
"""Send event to SIEM"""
try:
response = requests.post(
f"\\\\{self.siem_endpoint\\\\}/events",
headers=self.headers,
json=event_data
)
            if response.status_code == 200:
                print(f"Event sent successfully: {event_data['artifact_type']}")
            else:
                print(f"Failed to send event: {response.status_code}")
        except Exception as e:
            print(f"Error sending event to SIEM: {e}")
# Usage
siem_integration = AutopsySIEMIntegration("https://siem.company.com/api", "api_key")
# siem_integration.send_artifacts_to_siem(extracted_artifacts)
```

## Troubleshooting

### Common Issues

**Database Connection Issues:**
```bash
# Check case database integrity
sqlite3 /cases/case.db "PRAGMA integrity_check;"
# Repair corrupted database
sqlite3 /cases/case.db ".recover"|sqlite3 /cases/case_recovered.db
# Check database permissions
ls -la /cases/case.db
chmod 644 /cases/case.db
```

**Memory and Performance Issues:**
```bash
# Increase Java heap size
export JAVA_OPTS="-Xmx8g -Xms4g"
# Monitor memory usage
top -p $(pgrep java)
# Check disk space
df -h /cases
# Optimize case database
sqlite3 /cases/case.db "VACUUM;"
```

**Module Loading Issues:**
```bash
# Check module dependencies
autopsy --check-modules
# Verify Python modules
python3 -c "import autopsy_modules"
# Check log files
tail -f /var/log/autopsy/autopsy.log
# Reset module configuration
rm -rf ~/.autopsy/modules
```

### Debugging

Enable detailed debugging and logging:

```bash
# Enable debug logging
autopsy --debug --log-level DEBUG
# Monitor case processing
tail -f /cases/case.log
# Check ingest module status
autopsy --status --case /cases/investigation_001
# Verify evidence integrity
md5sum /evidence/disk_image.dd
```

## Security Considerations

### Evidence Integrity

**Chain of Custody:**
- Document all evidence-handling procedures
- Maintain detailed logs of access and modifications
- Use cryptographic hashes to verify integrity (see the sketch after this list)
- Implement appropriate evidence-storage protocols
- Run regular integrity-verification procedures
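
As referenced above, hashing is the practical anchor of chain-of-custody documentation. Below is a minimal sketch that builds and re-checks a SHA-256 manifest for an evidence directory; the paths are assumptions and the manifest format is illustrative, not an Autopsy feature.

```python
#!/usr/bin/env python3
# Sketch: build and re-check a SHA-256 manifest for an evidence directory
import hashlib
import json
import os
from datetime import datetime

def sha256_of(path, chunk_size=1024 * 1024):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir, manifest_path):
    manifest = {"created": datetime.now().isoformat(), "files": {}}
    for dirpath, _, filenames in os.walk(evidence_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            manifest["files"][path] = sha256_of(path)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

def verify_manifest(manifest_path):
    with open(manifest_path) as f:
        manifest = json.load(f)
    for path, recorded in manifest["files"].items():
        status = "OK" if sha256_of(path) == recorded else "MODIFIED"
        print(f"{status}  {path}")

if __name__ == "__main__":
    build_manifest("/evidence", "/cases/evidence_manifest.json")  # hypothetical paths
    verify_manifest("/cases/evidence_manifest.json")
```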

**Data Protection:**
- Encrypt case databases and evidence files
- Implement access controls and authentication
- Use secure backup and recovery procedures
- Monitor for unauthorized access attempts
- Run regular security assessments of the forensic infrastructure

### Legal and Compliance Aspects

**Legal Requirements:**
- Follow applicable laws and regulations
- Maintain proper documentation and records
- Implement defensible forensic procedures
- Ensure the admissibility of digital evidence
- Provide regular training on legal requirements

**Privacy Considerations:**
- Respect privacy rights and regulations
- Apply data-minimization principles
- Handle personal information securely
- Retain and dispose of data appropriately
- Comply with data protection laws