
# DBeaver Cheat Sheet

DBeaver - Universal Database Tool

DBeaver is a free, open-source, universal database tool for developers, database administrators, analysts, and anyone else who needs to work with databases. It supports all major databases, including MySQL, PostgreSQL, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Apache Hive, Phoenix, and Presto.

## Installation

### Windows Installation

```powershell
# Download from the official website
# https://dbeaver.io/download/

# Install via Chocolatey
choco install dbeaver

# Install via winget
winget install dbeaver.dbeaver

# Install a specific version
choco install dbeaver --version 23.2.5

# Silent installation
dbeaver-ce-23.2.5-x86_64-setup.exe /S

# Portable version
# Download the portable ZIP and extract it
# No installation required
```

- To install on Windows, download the installer from the official DBeaver website
- The 64-bit build is recommended
- Run the installer and follow the on-screen prompts to complete the setup

### macOS Installation

```bash
# Download from official website
# https://dbeaver.io/download/

# Install via Homebrew
brew install --cask dbeaver-community

# Install a specific version (only if a versioned cask is published)
brew install --cask dbeaver-community@23.2.5

# Manual installation
curl -O https://dbeaver.io/files/dbeaver-ce-23.2.5-macos-x86_64.dmg
open dbeaver-ce-23.2.5-macos-x86_64.dmg
```
- Install via Homebrew: `brew install --cask dbeaver-community`
- Or download the dmg file from the official website and install from it
- After opening the dmg, drag and drop the app into the Applications folder

### Linux Installation

```bash
# Ubuntu/Debian
wget https://dbeaver.io/files/dbeaver-ce_23.2.5_amd64.deb
sudo dpkg -i dbeaver-ce_23.2.5_amd64.deb

# Fix dependencies if needed
sudo apt-get install -f

# CentOS/RHEL/Fedora
wget https://dbeaver.io/files/dbeaver-ce-23.2.5.x86_64.rpm
sudo rpm -i dbeaver-ce-23.2.5.x86_64.rpm

# Or using yum/dnf
sudo yum install dbeaver-ce-23.2.5.x86_64.rpm
sudo dnf install dbeaver-ce-23.2.5.x86_64.rpm

# Snap package
sudo snap install dbeaver-ce

# Flatpak
flatpak install flathub io.dbeaver.DBeaverCommunity

# Portable archive (tar.gz)
wget https://dbeaver.io/files/dbeaver-ce-23.2.5-linux.gtk.x86_64.tar.gz
tar -xzf dbeaver-ce-23.2.5-linux.gtk.x86_64.tar.gz
cd dbeaver
./dbeaver
```
- On Ubuntu: `sudo snap install dbeaver-ce`
- On Fedora: install the downloaded RPM, e.g. `sudo dnf install dbeaver-ce-23.2.5.x86_64.rpm`
- Can also be installed from the Snap package or the official repositories

### Docker Installation

```bash
# Run DBeaver in Docker (with X11 forwarding)
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $HOME/.dbeaver:/opt/dbeaver-data \
  --net=host \
  dbeaver/cloudbeaver

# CloudBeaver (web-based version)
docker run -d --name cloudbeaver \
  -p 8978:8978 \
  -v cloudbeaver-data:/opt/cloudbeaver/workspace \
  dbeaver/cloudbeaver:latest

# Access CloudBeaver at http://localhost:8978
```
- Pull the CloudBeaver image from Docker Hub: `docker pull dbeaver/cloudbeaver`
- Run the container: `docker run -d -p 8978:8978 dbeaver/cloudbeaver`
- Docker Compose can be used as well

### System Requirements

Minimum:

- OS: Windows 7+, macOS 10.12+, Linux (any modern distribution)
- RAM: 512 MB
- Disk space: 200 MB
- Java: 11+ (bundled with the installer)

Recommended:

- RAM: 2 GB or more
- Disk space: 1 GB or more
- Java: 17+ for best performance
- SSD for better performance with large databases
- 64-bit operating system

## Database Connections

### Connection Settings

```sql
-- Connection parameters for different databases

-- MySQL
Host: localhost
Port: 3306
Database: myapp
Username: dbuser
Password: password
Driver: MySQL JDBC Driver

-- PostgreSQL
Host: localhost
Port: 5432
Database: myapp
Username: postgres
Password: password
Driver: PostgreSQL JDBC Driver

-- SQL Server
Host: localhost
Port: 1433
Database: myapp
Username: sa
Password: password
Driver: Microsoft SQL Server JDBC Driver

-- Oracle
Host: localhost
Port: 1521
SID/Service: XE
Username: hr
Password: password
Driver: Oracle JDBC Driver

-- SQLite
Database Path: /path/to/database.db
Driver: SQLite JDBC Driver

-- MongoDB
Host: localhost
Port: 27017
Database: myapp
Username: user
Password: password
Authentication Database: admin
Driver: MongoDB JDBC Driver
```
- Click the "New Connection" button
- Select the database type
- Enter the connection parameters (host, port, username, password)
- Verify connectivity with "Test Connection"
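
Once the connection is saved, a quick query confirms that the driver and credentials actually work. A minimal sanity check (version queries vary by database):

```sql
-- Verify the connection with a trivial query
SELECT 1;

-- Check the server version (dialect-specific)
SELECT VERSION();            -- MySQL / PostgreSQL
-- SELECT @@VERSION;         -- SQL Server
-- SELECT * FROM v$version;  -- Oracle
```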

### Advanced Connection Settings

```properties
# Connection properties file
# MySQL example
driver=com.mysql.cj.jdbc.Driver
url=jdbc:mysql://localhost:3306/myapp
user=dbuser
password=password

# Connection pool settings
initialSize=5
maxActive=20
maxIdle=10
minIdle=2
maxWait=10000

# SSL settings
useSSL=true
requireSSL=true
verifyServerCertificate=true
trustCertificateKeyStoreUrl=/path/to/truststore.jks
trustCertificateKeyStorePassword=password

# Connection timeout settings
connectTimeout=30000
socketTimeout=60000
loginTimeout=30

# Additional parameters
characterEncoding=UTF-8
useUnicode=true
allowMultiQueries=true
rewriteBatchedStatements=true
```
- Configure SSL connections
- Tune timeout values
- Configure connection pooling
- Choose the character encoding
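
The same properties can also be appended directly to the JDBC URL. A sketch for MySQL Connector/J (host and database names are illustrative; the parameters mirror the properties above):

```sql
-- jdbc:mysql://db.example.com:3306/myapp
--     ?useSSL=true&requireSSL=true&connectTimeout=30000
--     &socketTimeout=60000&characterEncoding=UTF-8
```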

### SSH Tunnel Configuration

```bash
# SSH tunnel settings in DBeaver
SSH Host: ssh-server.example.com
SSH Port: 22
SSH Username: ssh-user
SSH Authentication: Password/Key

# Local port forwarding
Local Host: localhost
Local Port: 3307
Remote Host: database-server
Remote Port: 3306

# SSH key authentication
Private Key: /path/to/private-key.pem
Passphrase: key-passphrase

# SSH tunnel command equivalent
ssh -L 3307:database-server:3306 ssh-user@ssh-server.example.com

# Test SSH connection
ssh -T ssh-user@ssh-server.example.com
```
- Enter the SSH host information
- Choose the authentication method (password or key)
- Configure port forwarding

### Connection Templates

```json
{
  "templates": [
    {
      "name": "Local MySQL Development",
      "driver": "mysql8",
      "host": "localhost",
      "port": 3306,
      "database": "dev_db",
      "properties": {
        "useSSL": "false",
        "allowPublicKeyRetrieval": "true"
      }
    },
    {
      "name": "Production PostgreSQL",
      "driver": "postgresql",
      "host": "prod-db.example.com",
      "port": 5432,
      "database": "production",
      "properties": {
        "ssl": "true",
        "sslmode": "require"
      }
    },
    {
      "name": "AWS RDS MySQL",
      "driver": "mysql8",
      "host": "mydb.cluster-xyz.us-east-1.rds.amazonaws.com",
      "port": 3306,
      "database": "myapp",
      "properties": {
        "useSSL": "true",
        "requireSSL": "true"
      }
    }
  ]
}
```
- Save frequently used connections
- Create new connections from a template in a single step
- Preset connection parameters

## SQL Editor

### Basic SQL Operations

```sql
-- Data Definition Language (DDL)
CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_created_at ON users(created_at);

-- Data Manipulation Language (DML)
INSERT INTO users (username, email, password_hash) VALUES
('john_doe', 'john@example.com', 'hashed_password_1'),
('jane_smith', 'jane@example.com', 'hashed_password_2'),
('bob_wilson', 'bob@example.com', 'hashed_password_3');

UPDATE users 
SET email = 'john.doe@example.com' 
WHERE username = 'john_doe';

DELETE FROM users 
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 YEAR);

-- Data Query Language (DQL)
SELECT u.username, u.email, u.created_at
FROM users u
WHERE u.created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY)
ORDER BY u.created_at DESC
LIMIT 10;

-- Complex queries with joins
SELECT 
    u.username,
    u.email,
    COUNT(o.id) as order_count,
    SUM(o.total_amount) as total_spent,
    AVG(o.total_amount) as avg_order_value
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username, u.email
HAVING COUNT(o.id) > 0
ORDER BY total_spent DESC;
```
- Create tables; insert, update, and delete data
- SELECT, INSERT, UPDATE, and DELETE queries
- Simple JOIN operations

### SQL Editor Features

```sql
-- Auto-completion and syntax highlighting
-- DBeaver provides intelligent code completion

-- Code formatting (Ctrl+Shift+F)
SELECT u.username,u.email,o.total FROM users u JOIN orders o ON u.id=o.user_id;

-- Formatted result:
SELECT 
    u.username,
    u.email,
    o.total
FROM users u
JOIN orders o ON u.id = o.user_id;

-- Code templates and snippets
-- Type 'sel' and press Tab for SELECT template
-- Type 'ins' and press Tab for INSERT template
-- Type 'upd' and press Tab for UPDATE template

-- SQL validation and error highlighting
-- DBeaver highlights syntax errors in real-time

-- Multiple query execution
-- Separate queries with semicolons
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM products;

-- Execute the statement at the cursor (Ctrl+Enter)
-- Execute the statement in a new tab (Ctrl+\)
-- Execute the whole script (Alt+X)
```
- Syntax highlighting
- Auto-completion
- Error detection
- Query execution plan display

### Query History and Bookmarks

```sql
-- Query history is automatically saved
-- Access via Window -> Show View -> Query History

-- Bookmark frequently used queries
-- Right-click in SQL editor -> Add Bookmark

-- Saved queries examples:
-- Name: "Active Users Count"
SELECT COUNT(*) as active_users 
FROM users 
WHERE last_login >= DATE_SUB(NOW(), INTERVAL 30 DAY);

-- Name: "Top Products by Sales"
SELECT 
    p.name,
    SUM(oi.quantity) as total_sold,
    SUM(oi.quantity * oi.price) as total_revenue
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
GROUP BY p.id, p.name
ORDER BY total_revenue DESC
LIMIT 10;

-- Name: "Database Size Analysis"
SELECT 
    table_name,
    ROUND(((data_length + index_length) / 1024 / 1024), 2) as size_mb,
    table_rows
FROM information_schema.tables 
WHERE table_schema = DATABASE()
ORDER BY (data_length + index_length) DESC;
```
- History of executed queries
- Query bookmarks
- Saving and reusing queries

### SQL Script Execution

```sql
-- Execute SQL scripts from files
-- File -> Open SQL Script
-- Or drag and drop .sql files

-- Script execution options:
-- 1. Execute entire script
-- 2. Execute selected statements
-- 3. Execute statement at cursor
-- 4. Execute with transaction control

-- Transaction control in scripts
START TRANSACTION;

INSERT INTO users (username, email, password_hash) 
VALUES ('test_user', 'test@example.com', 'hash');

UPDATE users 
SET email = 'updated@example.com' 
WHERE username = 'test_user';

-- Commit or rollback based on results
COMMIT;
-- ROLLBACK;

-- Script with error handling
DELIMITER //
CREATE PROCEDURE safe_user_update(
    IN p_user_id INT,
    IN p_email VARCHAR(100)
)
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;
    
    START TRANSACTION;
    
    UPDATE users 
    SET email = p_email, updated_at = NOW()
    WHERE id = p_user_id;
    
    COMMIT;
END //
DELIMITER ;
```
- Run multiple queries at once
- Batch processing
- Script debugging

## Data Viewer

### Grid View Features

```sql
-- Data viewer provides multiple view modes:
-- 1. Grid view (default)
-- 2. Form view (record-by-record)
-- 3. JSON view (for JSON data)
-- 4. XML view (for XML data)
-- 5. Image view (for binary image data)

-- Grid view features:
-- - Sort by clicking column headers
-- - Filter data using column filters
-- - Resize columns
-- - Freeze columns
-- - Show/hide columns
-- - Export selected data

-- Column operations:
-- Right-click column header for options:
-- - Sort Ascending/Descending
-- - Filter
-- - Hide Column
-- - Resize to Fit
-- - Copy Column Name
-- - Set as Primary Key (for editing)

-- Row operations:
-- Right-click row for options:
-- - Edit Row
-- - Duplicate Row
-- - Delete Row
-- - Copy Row
-- - Export Row
```
- Tabular display of data
- Column sorting and reordering
- Column width adjustment

### Data Filtering

```sql
-- Column filters (click filter icon in column header)
-- Text filters:
-- - Contains
-- - Starts with
-- - Ends with
-- - Equals
-- - Not equals
-- - Regular expression

-- Numeric filters:
-- - Equals
-- - Not equals
-- - Greater than
-- - Less than
-- - Between
-- - Is null
-- - Is not null

-- Date filters:
-- - Equals
-- - Before
-- - After
-- - Between
-- - Today
-- - This week
-- - This month
-- - This year

-- Advanced filtering with SQL WHERE clause
-- Use the filter panel at bottom of data viewer
WHERE created_at >= '2023-01-01' 
  AND status = 'active' 
  AND email LIKE '%@gmail.com'

-- Multiple column filters are combined with AND
-- Use custom WHERE clause for OR conditions
WHERE (status = 'active' OR status = 'pending')
  AND created_at >= '2023-01-01'
```
- Advanced search conditions
- Saving filters
- Dynamic filtering

### Data Editing

```sql
-- Enable data editing:
-- 1. Ensure table has primary key
-- 2. Enable edit mode (pencil icon in toolbar)
-- 3. Double-click cell to edit

-- Inline editing features:
-- - Edit cells directly in grid
-- - Tab to move between cells
-- - Enter to confirm changes
-- - Escape to cancel changes
-- - Auto-save or manual save options
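
-- Example: a single-cell edit is applied as an UPDATE keyed on the
-- primary key; the statement below is an illustrative sketch
UPDATE users SET email = 'new@example.com' WHERE id = 42;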

-- Bulk operations:
-- - Select multiple rows
-- - Delete selected rows
-- - Copy/paste between tables
-- - Duplicate rows

-- Data validation:
-- - Type checking (numeric, date, etc.)
-- - Length validation
-- - Foreign key constraints
-- - Custom validation rules

-- Transaction control:
-- - Auto-commit mode
-- - Manual transaction mode
-- - Rollback changes
-- - Save changes

-- Example: Bulk update using SQL
UPDATE users 
SET status = 'inactive' 
WHERE last_login < DATE_SUB(NOW(), INTERVAL 6 MONTH);

-- Example: Safe delete with confirmation
DELETE FROM temp_data 
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 DAY)
LIMIT 1000;
```
- Edit cells in place
- Bulk updates
- Undo and redo of changes

### Data Navigation

```sql
-- Navigation features:
-- - Page through large result sets
-- - Jump to specific page
-- - Configure page size (100, 500, 1000, etc.)
-- - Fetch all data (for smaller result sets)
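
-- Example: equivalent SQL paging (a sketch; DBeaver actually pages via
-- driver fetch calls rather than literal LIMIT/OFFSET)
SELECT id, username FROM users ORDER BY id LIMIT 100 OFFSET 200;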

-- Keyboard shortcuts:
-- Ctrl+Home: Go to first row
-- Ctrl+End: Go to last row
-- Page Up/Down: Navigate pages
-- Ctrl+F: Find in data
-- F3: Find next
-- Shift+F3: Find previous

-- Search and find:
-- - Find text in current column
-- - Find text in all columns
-- - Case-sensitive search
-- - Regular expression search
-- - Replace functionality

-- Bookmarks and favorites:
-- - Bookmark specific rows
-- - Save filtered views
-- - Quick access to bookmarked data

-- Data refresh:
-- - Manual refresh (F5)
-- - Auto-refresh intervals
-- - Refresh on focus
-- - Show refresh timestamp
```

- Pagination
- Scrolling through large data sets
- Export/import functions

## Schema Management

### Schema Browser

```sql
-- Schema browser shows database structure:
-- - Databases/Schemas
-- - Tables
-- - Views
-- - Stored Procedures
-- - Functions
-- - Triggers
-- - Indexes
-- - Sequences
-- - User-defined types

-- Table information:
-- - Column definitions
-- - Primary keys
-- - Foreign keys
-- - Indexes
-- - Triggers
-- - Constraints
-- - Table statistics

-- View table DDL:
-- Right-click table -> View DDL
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_email (email),
    INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- View table properties:
-- - Row count
-- - Table size
-- - Index size
-- - Creation date
-- - Last modified date
-- - Engine type (MySQL)
-- - Character set
-- - Collation

```

### Object Operations

```sql

-- Table operations (right-click table):
-- - Open in new tab
-- - View data
-- - Edit data
-- - Generate SQL
-- - Export data
-- - Import data
-- - Truncate table
-- - Drop table
-- - Rename table
-- - Copy table structure

-- Generate SQL scripts:
-- - CREATE TABLE statement
-- - INSERT statements (with data)
-- - SELECT statement
-- - UPDATE statement template
-- - DELETE statement template

-- Example generated INSERT:
INSERT INTO users (username, email, password_hash) VALUES
('john_doe', 'john@example.com', 'hash1'),
('jane_smith', 'jane@example.com', 'hash2');

-- Column operations:
-- - Add column
-- - Modify column
-- - Drop column
-- - Rename column
-- - Set as primary key
-- - Create index
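
-- Example ALTER statements behind these actions (illustrative names;
-- MODIFY COLUMN is MySQL syntax)
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
ALTER TABLE users MODIFY COLUMN email VARCHAR(150) NOT NULL;
ALTER TABLE users RENAME COLUMN phone TO phone_number;
ALTER TABLE users DROP COLUMN phone_number;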

-- Index operations:
-- - Create index
-- - Drop index
-- - Rebuild index
-- - View index usage statistics

CREATE INDEX idx_users_status_created 
ON users(status, created_at);

-- Constraint operations:
-- - Add foreign key
-- - Drop constraint
-- - Check constraint
-- - Unique constraint

ALTER TABLE orders 
ADD CONSTRAINT fk_orders_user_id 
FOREIGN KEY (user_id) REFERENCES users(id);

```

### Schema Comparison

```sql

-- Compare database schemas:
-- Tools -> Compare -> Schemas

-- Comparison features:
-- - Structure differences
-- - Data differences
-- - Missing objects
-- - Different object definitions

-- Generate synchronization script:
-- - CREATE statements for missing objects
-- - ALTER statements for differences
-- - DROP statements for extra objects

-- Example sync script:
-- Add missing table
CREATE TABLE new_feature (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Modify existing table
ALTER TABLE users 
ADD COLUMN phone VARCHAR(20),
ADD INDEX idx_phone (phone);

-- Update stored procedure
DROP PROCEDURE IF EXISTS get_user_stats;
DELIMITER //
CREATE PROCEDURE get_user_stats()
BEGIN
    SELECT 
        COUNT(*) as total_users,
        COUNT(CASE WHEN status = 'active' THEN 1 END) as active_users,
        COUNT(CASE WHEN created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY) THEN 1 END) as new_users
    FROM users;
END //
DELIMITER ;

```

### ER Diagrams

```sql

-- Generate Entity-Relationship diagrams:
-- Right-click database -> Generate ER Diagram

-- ER diagram features:
-- - Visual table relationships
-- - Primary/foreign key connections
-- - Column data types
-- - Index visualization
-- - Constraint display

-- Diagram customization:
-- - Layout algorithms (hierarchical, circular, etc.)
-- - Show/hide columns
-- - Show/hide data types
-- - Color coding
-- - Export as image (PNG, SVG, PDF)

-- Reverse engineering:
-- - Generate diagrams from existing database
-- - Include/exclude specific tables
-- - Filter by schema
-- - Group related tables

-- Forward engineering:
-- - Design database visually
-- - Generate DDL from diagram
-- - Validate relationships
-- - Export schema script

```

## Query Execution

### Execution Plans

```sql

-- View query execution plans:
-- Enable "Show execution plan" in SQL editor
-- Or use EXPLAIN statement

-- MySQL execution plan:
EXPLAIN SELECT 
    u.username, 
    COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username;

-- PostgreSQL execution plan:
EXPLAIN (ANALYZE, BUFFERS) 
SELECT 
    u.username, 
    COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username;

-- SQL Server execution plan:
SET SHOWPLAN_ALL ON;
SELECT 
    u.username, 
    COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username;

-- Execution plan analysis:
-- - Table scans vs index seeks
-- - Join algorithms (nested loop, hash, merge)
-- - Sort operations
-- - Filter operations
-- - Cost estimates
-- - Row count estimates

```

### Performance Monitoring

```sql

-- Query performance metrics:
-- - Execution time
-- - Rows affected
-- - Rows fetched
-- - Memory usage
-- - CPU usage

-- Enable query profiling:
-- Preferences -> Database -> SQL Editor -> Show query info

-- Performance optimization tips:
-- 1. Use appropriate indexes
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);

-- 2. Limit result sets
SELECT * FROM large_table LIMIT 1000;

-- 3. Use EXISTS instead of IN for subqueries
SELECT * FROM users u
WHERE EXISTS (
    SELECT 1 FROM orders o 
    WHERE o.user_id = u.id
);

-- 4. Avoid SELECT *
SELECT id, username, email FROM users;

-- 5. Use UNION ALL instead of UNION when duplicates are acceptable
SELECT username FROM active_users
UNION ALL
SELECT username FROM inactive_users;

-- Query timeout settings:
-- Set query timeout in connection properties
-- Default: 20 seconds
-- Long-running queries: 300+ seconds
-- Batch operations: 3600+ seconds

```

### Batch Operations

```sql

-- Execute multiple statements:
-- Separate with semicolons
-- Use transaction control

START TRANSACTION;

-- Batch insert
INSERT INTO users (username, email, password_hash) VALUES
('user1', 'user1@example.com', 'hash1'),
('user2', 'user2@example.com', 'hash2'),
('user3', 'user3@example.com', 'hash3');

-- Batch update
UPDATE products 
SET price = price * 1.1 
WHERE category = 'electronics';

-- Batch delete
DELETE FROM logs 
WHERE created_at < DATE_SUB(NOW(), INTERVAL 30 DAY);

COMMIT;

-- Bulk data operations:
-- Use LOAD DATA for large imports (MySQL)
LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE temp_import
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

-- Use COPY for bulk operations (PostgreSQL)
COPY users(username, email, password_hash)
FROM '/path/to/users.csv'
DELIMITER ','
CSV HEADER;

-- Use BULK INSERT for SQL Server
BULK INSERT temp_import
FROM '/path/to/data.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);

```

### Query Debugging

```sql

-- Debug complex queries step by step:

-- Step 1: Test base query
SELECT COUNT(*) FROM users WHERE created_at >= '2023-01-01';

-- Step 2: Add joins one by one
SELECT COUNT(*) 
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01';

-- Step 3: Add grouping and aggregation
SELECT 
    u.username,
    COUNT(o.id) as order_count
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username;

-- Step 4: Add filtering and sorting
SELECT 
    u.username,
    COUNT(o.id) as order_count,
    SUM(o.total_amount) as total_spent
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01'
GROUP BY u.id, u.username
HAVING COUNT(o.id) >= 5
ORDER BY total_spent DESC;

-- Use LIMIT for testing large queries
SELECT * FROM large_table LIMIT 10;

-- Check intermediate results
SELECT DISTINCT status FROM orders; -- Check available values
SELECT MIN(created_at), MAX(created_at) FROM orders; -- Check date range

```

## Data Export/Import

### Export Formats

```sql

-- Supported export formats:
-- - CSV (Comma-separated values)
-- - TSV (Tab-separated values)
-- - Excel (XLSX)
-- - JSON
-- - XML
-- - SQL INSERT statements
-- - HTML
-- - PDF (via print)

-- Export configuration:
-- File -> Export Data
-- Or right-click table -> Export Data

-- CSV export options:
{
  "delimiter": ",",
  "quote_char": "\"",
  "escape_char": "\\",
  "line_separator": "\n",
  "encoding": "UTF-8",
  "include_header": true,
  "quote_all_values": false,
  "null_value": "",
  "date_format": "yyyy-MM-dd",
  "timestamp_format": "yyyy-MM-dd HH:mm:ss"
}

-- JSON export options:
{
  "format": "array",  // array or object
  "pretty_print": true,
  "include_metadata": false,
  "date_format": "ISO",
  "encoding": "UTF-8"
}

-- SQL export options:
{
  "include_ddl": true,
  "include_data": true,
  "batch_size": 1000,
  "use_qualified_names": true,
  "include_comments": true
}

```

### Import Data

```sql

-- Import data from various sources:
-- File -> Import Data
-- Or right-click table -> Import Data

-- CSV import configuration:
{
  "file_path": "/path/to/data.csv",
  "delimiter": ",",
  "quote_char": "\"",
  "escape_char": "\\",
  "encoding": "UTF-8",
  "header_row": 1,
  "skip_rows": 0,
  "null_values": ["", "NULL", "null"],
  "trim_values": true,
  "empty_string_as_null": true
}

-- Column mapping:
{
  "mappings": [
    {"source": "user_name", "target": "username"},
    {"source": "email_address", "target": "email"},
    {"source": "registration_date", "target": "created_at"}
  ],
  "type_conversions": [
    {"column": "created_at", "type": "timestamp", "format": "MM/dd/yyyy"},
    {"column": "age", "type": "integer"},
    {"column": "salary", "type": "decimal", "precision": 10, "scale": 2}
  ]
}

-- Import modes:
-- - INSERT: Add new records
-- - UPDATE: Update existing records
-- - UPSERT: Insert or update based on key
-- - REPLACE: Replace existing records
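
-- Example: UPSERT expressed as plain SQL (MySQL dialect; table and key
-- columns are illustrative)
INSERT INTO users (id, username, email)
VALUES (42, 'john_doe', 'john@example.com')
ON DUPLICATE KEY UPDATE email = VALUES(email);
-- PostgreSQL equivalent:
-- INSERT ... ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email;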

-- Error handling:
{
  "on_error": "skip",  // skip, stop, or log
  "max_errors": 100,
  "log_errors": true,
  "error_log_file": "/path/to/errors.log"
}

```

### Bulk Data Operations

```sql

-- Large dataset import strategies:

-- 1. Batch processing
{
  "batch_size": 1000,
  "commit_frequency": 10000,
  "use_transactions": true,
  "parallel_threads": 4
}

-- 2. Disable constraints during import
ALTER TABLE target_table DISABLE KEYS;
-- Import data
ALTER TABLE target_table ENABLE KEYS;

-- 3. Use staging tables
CREATE TABLE staging_users LIKE users;
-- Import to staging table first
-- Validate and clean data
-- Transfer to production table
INSERT INTO users SELECT * FROM staging_users WHERE validation_check(); -- validation_check() is a placeholder for your own quality predicate

-- 4. Optimize for bulk operations
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- Perform bulk operations
SET foreign_key_checks = 1;
SET unique_checks = 1;
SET autocommit = 1;

-- Progress monitoring:
-- DBeaver shows import progress
-- - Records processed
-- - Processing speed (records/second)
-- - Estimated time remaining
-- - Error count
-- - Success rate

```

### Data Synchronization

```sql

-- Synchronize data between databases:
-- Tools -> Database -> Compare/Synchronize

-- Sync configuration:
{
  "source": {
    "connection": "source_db",
    "schema": "production",
    "tables": ["users", "orders", "products"]
  },
  "target": {
    "connection": "target_db", 
    "schema": "staging",
    "tables": ["users", "orders", "products"]
  },
  "options": {
    "compare_data": true,
    "compare_structure": true,
    "generate_script": true,
    "execute_immediately": false
  }
}

-- Sync strategies:
-- 1. Full synchronization
-- 2. Incremental synchronization
-- 3. Bidirectional synchronization
-- 4. One-way synchronization
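
-- Example: incremental (watermark-based) one-way sync, a sketch that
-- assumes an indexed updated_at column and a recorded last sync time
INSERT INTO target.users (id, username, email, created_at)
SELECT s.id, s.username, s.email, s.created_at
FROM source.users s
WHERE s.updated_at > '2023-06-01 00:00:00'  -- last successful sync
ON DUPLICATE KEY UPDATE
    username = VALUES(username),
    email = VALUES(email);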

-- Generated sync script example:
-- Update existing records
UPDATE target.users t
JOIN source.users s ON t.id = s.id
SET t.email = s.email,
    t.updated_at = s.updated_at
WHERE t.email != s.email;

-- Insert new records
INSERT INTO target.users (id, username, email, created_at)
SELECT s.id, s.username, s.email, s.created_at
FROM source.users s
LEFT JOIN target.users t ON s.id = t.id
WHERE t.id IS NULL;

-- Delete removed records
DELETE t FROM target.users t
LEFT JOIN source.users s ON t.id = s.id
WHERE s.id IS NULL;

```

## Database Administration

### User Management

```sql

-- User and privilege management:
-- Navigate to Security -> Users in schema browser

-- Create new user (MySQL):
CREATE USER 'app_user'@'localhost' IDENTIFIED BY 'secure_password';
CREATE USER 'app_user'@'%' IDENTIFIED BY 'secure_password';

-- Grant privileges:
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app_user'@'localhost';
GRANT ALL PRIVILEGES ON myapp.* TO 'admin_user'@'localhost';

-- Create role and assign to user:
CREATE ROLE 'app_role';
GRANT SELECT, INSERT, UPDATE ON myapp.* TO 'app_role';
GRANT 'app_role' TO 'app_user'@'localhost';

-- PostgreSQL user management:
CREATE USER app_user WITH PASSWORD 'secure_password';
CREATE ROLE app_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_role;
GRANT app_role TO app_user;

-- SQL Server user management:
CREATE LOGIN app_user WITH PASSWORD = 'SecurePassword123!';
USE myapp;
CREATE USER app_user FOR LOGIN app_user;
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;

-- View user privileges:
SHOW GRANTS FOR 'app_user'@'localhost';

-- Revoke privileges:
REVOKE INSERT, UPDATE ON myapp.* FROM 'app_user'@'localhost';

-- Drop user:
DROP USER 'app_user'@'localhost';

```

### Backup and Restore

```sql

-- Database backup using DBeaver:
-- Right-click database -> Tools -> Backup

-- Backup options:
{
  "backup_type": "full",  // full, incremental, differential
  "include_data": true,
  "include_structure": true,
  "include_procedures": true,
  "include_triggers": true,
  "include_views": true,
  "compression": "gzip",
  "output_format": "sql"  // sql, binary, custom
}

-- MySQL backup (mysqldump):
mysqldump -u username -p --single-transaction --routines --triggers myapp > backup.sql

-- PostgreSQL backup (pg_dump):
pg_dump -U username -h localhost -d myapp -f backup.sql

-- SQL Server backup:
BACKUP DATABASE myapp 
TO DISK = 'C:\Backups\myapp_backup.bak'
WITH FORMAT, COMPRESSION;

-- Restore database:
-- Right-click database -> Tools -> Restore

-- MySQL restore:
mysql -u username -p myapp < backup.sql

-- PostgreSQL restore:
psql -U username -h localhost -d myapp -f backup.sql

-- SQL Server restore:
RESTORE DATABASE myapp 
FROM DISK = 'C:\Backups\myapp_backup.bak'
WITH REPLACE;

-- Automated backup script:
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups"
DB_NAME="myapp"

mysqldump -u backup_user -p$BACKUP_PASSWORD \
  --single-transaction \
  --routines \
  --triggers \
  $DB_NAME | gzip > $BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz

# Keep only last 7 days of backups
find $BACKUP_DIR -name "${DB_NAME}_*.sql.gz" -mtime +7 -delete

```

### Performance Monitoring

```sql

-- Monitor database performance:
-- Window -> Show View -> Database Monitor

-- Key performance metrics:
-- - Active connections
-- - Query execution time
-- - Lock waits
-- - Buffer cache hit ratio
-- - Disk I/O statistics
-- - Memory usage

-- MySQL performance queries:
-- Show processlist
SHOW PROCESSLIST;

-- Show status variables
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Slow_queries';
SHOW STATUS LIKE 'Questions';

-- Show engine status
SHOW ENGINE INNODB STATUS;

-- Performance schema queries
SELECT 
    event_name,
    count_star,
    sum_timer_wait/1000000000 as total_time_sec,
    avg_timer_wait/1000000000 as avg_time_sec
FROM performance_schema.events_statements_summary_global_by_event_name
WHERE count_star > 0
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- PostgreSQL performance queries:
-- Active queries
SELECT 
    pid,
    now() - pg_stat_activity.query_start AS duration,
    query,
    state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes';

-- Database statistics
SELECT 
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit,
    tup_returned,
    tup_fetched,
    tup_inserted,
    tup_updated,
    tup_deleted
FROM pg_stat_database;

-- Table statistics
SELECT 
    schemaname,
    tablename,
    seq_scan,
    seq_tup_read,
    idx_scan,
    idx_tup_fetch,
    n_tup_ins,
    n_tup_upd,
    n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

```

### Index Management

```sql

-- Index analysis and optimization:
-- Navigate to table -> Indexes tab

-- Create indexes:
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);
CREATE UNIQUE INDEX idx_users_username ON users(username);

-- Composite indexes:
CREATE INDEX idx_products_category_price ON products(category, price);
CREATE INDEX idx_orders_status_date ON orders(status, created_at DESC);

-- Partial indexes (PostgreSQL):
CREATE INDEX idx_active_users ON users(email) WHERE status = 'active';

-- Functional indexes:
CREATE INDEX idx_users_lower_email ON users(LOWER(email)); -- PostgreSQL
CREATE INDEX idx_orders_year ON orders((YEAR(created_at))); -- MySQL 8.0.13+ requires the extra parentheses

-- Index usage analysis (MySQL):
SELECT 
    t.table_name,
    s.index_name,
    s.column_name,
    s.cardinality,
    ROUND(((s.cardinality / t.table_rows) * 100), 2) AS selectivity
FROM information_schema.statistics s
JOIN information_schema.tables t ON s.table_name = t.table_name
WHERE s.table_schema = DATABASE()
  AND t.table_schema = DATABASE()
ORDER BY selectivity DESC;

-- Unused indexes (PostgreSQL):
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, tablename;

-- Index size analysis:
SELECT 
    table_name,
    index_name,
    ROUND(stat_value * @@innodb_page_size / 1024 / 1024, 2) AS size_mb
FROM mysql.innodb_index_stats
WHERE stat_name = 'size'
ORDER BY stat_value DESC;

-- Drop unused indexes:
DROP INDEX idx_unused_index ON table_name;

-- Rebuild indexes:
ALTER TABLE table_name ENGINE=InnoDB; -- MySQL
REINDEX TABLE table_name; -- PostgreSQL

```

## Visual Query Builder

### Query Builder Interface

```sql

-- Access Visual Query Builder:
-- SQL Editor -> Open Visual Query Builder
-- Or click "Design" tab in SQL Editor

-- Query builder components:
-- 1. Table/View selector
-- 2. Join configuration
-- 3. Column selection
-- 4. WHERE conditions
-- 5. GROUP BY clauses
-- 6. HAVING conditions
-- 7. ORDER BY specification
-- 8. LIMIT/OFFSET settings

-- Example query built visually:
-- Tables: users, orders, order_items, products
-- Joins: 
--   users.id = orders.user_id
--   orders.id = order_items.order_id
--   order_items.product_id = products.id

-- Generated SQL:
SELECT 
    u.username,
    u.email,
    COUNT(DISTINCT o.id) as order_count,
    SUM(oi.quantity * oi.price) as total_spent,
    AVG(oi.price) as avg_item_price
FROM users u
INNER JOIN orders o ON u.id = o.user_id
INNER JOIN order_items oi ON o.id = oi.order_id
INNER JOIN products p ON oi.product_id = p.id
WHERE o.created_at >= '2023-01-01'
  AND o.status = 'completed'
GROUP BY u.id, u.username, u.email
HAVING COUNT(DISTINCT o.id) >= 5
ORDER BY total_spent DESC
LIMIT 100;

```

### Join Configuration

```sql

-- Join types available in visual builder:
-- 1. INNER JOIN
-- 2. LEFT JOIN (LEFT OUTER JOIN)
-- 3. RIGHT JOIN (RIGHT OUTER JOIN)
-- 4. FULL JOIN (FULL OUTER JOIN)
-- 5. CROSS JOIN

-- Join condition types:
-- 1. Equality (=)
-- 2. Inequality (!=, <>, <, >, <=, >=)
-- 3. LIKE pattern matching
-- 4. IN/NOT IN
-- 5. EXISTS/NOT EXISTS

-- Complex join example:
SELECT 
    u.username,
    p.name as product_name,
    oi.quantity,
    oi.price
FROM users u
INNER JOIN orders o ON u.id = o.user_id
INNER JOIN order_items oi ON o.id = oi.order_id
INNER JOIN products p ON oi.product_id = p.id
LEFT JOIN product_reviews pr ON p.id = pr.product_id AND pr.user_id = u.id
WHERE o.status = 'completed'
  AND o.created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY);

-- Self-join example:
SELECT 
    e1.name as employee,
    e2.name as manager
FROM employees e1
LEFT JOIN employees e2 ON e1.manager_id = e2.id;

-- Multiple table join with aggregation:
SELECT 
    c.name as category,
    COUNT(p.id) as product_count,
    AVG(p.price) as avg_price,
    SUM(oi.quantity) as total_sold
FROM categories c
LEFT JOIN products p ON c.id = p.category_id
LEFT JOIN order_items oi ON p.id = oi.product_id
LEFT JOIN orders o ON oi.order_id = o.id
WHERE o.status = 'completed' OR o.status IS NULL
GROUP BY c.id, c.name
ORDER BY total_sold DESC NULLS LAST;

```

### Condition Builder

```sql

-- WHERE condition builder supports:
-- 1. Simple conditions
-- 2. Complex conditions with AND/OR
-- 3. Nested conditions with parentheses
-- 4. Subqueries
-- 5. Functions and expressions

-- Simple conditions:
WHERE users.status = 'active'
  AND users.created_at >= '2023-01-01'
  AND users.email LIKE '%@gmail.com'

-- Complex conditions with grouping:
WHERE (users.status = 'active' OR users.status = 'premium')
  AND users.created_at >= '2023-01-01'
  AND (users.country = 'US' OR users.country = 'CA')

-- Subquery conditions:
WHERE users.id IN (
    SELECT DISTINCT user_id 
    FROM orders 
    WHERE total_amount > 1000
)

-- EXISTS conditions:
WHERE EXISTS (
    SELECT 1 
    FROM orders o 
    WHERE o.user_id = users.id 
      AND o.status = 'completed'
)

-- Function-based conditions:
WHERE YEAR(users.created_at) = 2023
  AND MONTH(users.created_at) >= 6
  AND LENGTH(users.username) > 5

-- NULL handling:
WHERE users.phone IS NOT NULL
  AND users.last_login IS NOT NULL
  AND COALESCE(users.middle_name, '') != ''

-- Date range conditions:
WHERE orders.created_at BETWEEN '2023-01-01' AND '2023-12-31'
  AND orders.updated_at >= DATE_SUB(NOW(), INTERVAL 7 DAY)

-- Pattern matching:
WHERE products.name LIKE '%laptop%'
  AND products.description REGEXP 'gaming|performance'
  AND products.sku NOT LIKE 'DISC-%'

```

### Aggregation and Grouping

```sql

-- GROUP BY builder supports:
-- 1. Single column grouping
-- 2. Multiple column grouping
-- 3. Expression-based grouping
-- 4. Date/time grouping functions

-- Basic grouping:
SELECT 
    category,
    COUNT(*) as product_count,
    AVG(price) as avg_price,
    MIN(price) as min_price,
    MAX(price) as max_price
FROM products
GROUP BY category;

-- Multiple column grouping:
SELECT 
    category,
    brand,
    COUNT(*) as product_count,
    SUM(stock_quantity) as total_stock
FROM products
GROUP BY category, brand
ORDER BY category, brand;

-- Date-based grouping:
SELECT 
    YEAR(created_at) as year,
    MONTH(created_at) as month,
    COUNT(*) as order_count,
    SUM(total_amount) as monthly_revenue
FROM orders
WHERE created_at >= '2023-01-01'
GROUP BY YEAR(created_at), MONTH(created_at)
ORDER BY year, month;

-- Expression-based grouping:
SELECT 
    CASE 
        WHEN price < 100 THEN 'Budget'
        WHEN price < 500 THEN 'Mid-range'
        ELSE 'Premium'
    END as price_category,
    COUNT(*) as product_count,
    AVG(price) as avg_price
FROM products
GROUP BY CASE 
    WHEN price < 100 THEN 'Budget'
    WHEN price < 500 THEN 'Mid-range'
    ELSE 'Premium'
END;

-- HAVING conditions:
SELECT 
    user_id,
    COUNT(*) as order_count,
    SUM(total_amount) as total_spent
FROM orders
WHERE status = 'completed'
GROUP BY user_id
HAVING COUNT(*) >= 5
   AND SUM(total_amount) > 1000
ORDER BY total_spent DESC;

-- Window functions (advanced):
SELECT 
    user_id,
    order_date,
    total_amount,
    ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY order_date) as order_sequence,
    SUM(total_amount) OVER (PARTITION BY user_id ORDER BY order_date) as running_total,
    LAG(total_amount) OVER (PARTITION BY user_id ORDER BY order_date) as previous_order_amount
FROM orders
WHERE status = 'completed'
ORDER BY user_id, order_date;

```

## Data Transfer

### Database Migration

```sql

-- Database-to-database transfer:
-- Tools -> Database -> Transfer Data

-- Transfer configuration:
{
  "source": {
    "connection": "source_mysql",
    "database": "production",
    "tables": ["users", "orders", "products"]
  },
  "target": {
    "connection": "target_postgresql",
    "database": "staging",
    "create_tables": true,
    "drop_existing": false
  },
  "options": {
    "transfer_data": true,
    "transfer_structure": true,
    "batch_size": 1000,
    "use_transactions": true,
    "parallel_jobs": 4
  }
}

-- Data type mapping (MySQL to PostgreSQL):
{
  "type_mappings": [
    {"source": "INT AUTO_INCREMENT", "target": "SERIAL"},
    {"source": "VARCHAR(255)", "target": "VARCHAR(255)"},
    {"source": "TEXT", "target": "TEXT"},
    {"source": "DATETIME", "target": "TIMESTAMP"},
    {"source": "TINYINT(1)", "target": "BOOLEAN"},
    {"source": "DECIMAL(10,2)", "target": "NUMERIC(10,2)"}
  ]
}

-- Generated migration script:
-- Create target tables
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Transfer data with transformation
INSERT INTO target.users (username, email, created_at)
SELECT 
    username,
    email,
    created_at
FROM source.users
WHERE status = 'active';

-- Handle data conflicts:
INSERT INTO target.users (username, email, created_at)
SELECT 
    CASE 
        WHEN EXISTS (SELECT 1 FROM target.users WHERE username = s.username)
        THEN CONCAT(s.username, '_migrated')
        ELSE s.username
    END,
    s.email,
    s.created_at
FROM source.users s;
```

### Cross-Platform Data Transfer

```sql
-- Transfer between different database systems:

-- Oracle to MySQL:
{
  "source": "Oracle 19c",
  "target": "MySQL 8.0",
  "challenges": [
    "Date format differences",
    "Sequence to AUTO_INCREMENT",
    "PL/SQL to MySQL procedures",
    "Data type compatibility"
  ],
  "solutions": [
    "Use DATE_FORMAT() for date conversion",
    "Convert sequences to AUTO_INCREMENT",
    "Rewrite stored procedures",
    "Map Oracle types to MySQL equivalents"
  ]
}

-- SQL Server to PostgreSQL:
{
  "source": "SQL Server 2019",
  "target": "PostgreSQL 15",
  "type_mappings": [
    {"source": "NVARCHAR(MAX)", "target": "TEXT"},
    {"source": "UNIQUEIDENTIFIER", "target": "UUID"},
    {"source": "DATETIME2", "target": "TIMESTAMP"},
    {"source": "BIT", "target": "BOOLEAN"},
    {"source": "MONEY", "target": "NUMERIC(19,4)"}
  ]
}

-- Data transformation during transfer:
-- Source (SQL Server):
SELECT 
    Id,
    FirstName + ' ' + LastName as FullName,
    CONVERT(VARCHAR, BirthDate, 120) as BirthDate,
    CASE WHEN IsActive = 1 THEN 'true' ELSE 'false' END as IsActive
FROM Users;

-- Target (PostgreSQL):
INSERT INTO users (id, full_name, birth_date, is_active)
VALUES (?, ?, ?::timestamp, ?::boolean);

-- Handle encoding issues:
{
  "encoding": {
    "source": "latin1",
    "target": "utf8",
    "conversion": "iconv -f latin1 -t utf8"
  }
}
```

### ETL Operations

```sql
-- Extract, Transform, Load operations:

-- Extract phase:
SELECT 
    customer_id,
    order_date,
    product_id,
    quantity,
    unit_price,
    quantity * unit_price as line_total
FROM raw_orders
WHERE order_date >= '2023-01-01';

-- Transform phase:
SELECT 
    o.customer_id,
    c.customer_name,
    o.order_date,
    p.product_name,
    p.category,
    o.quantity,
    o.unit_price,
    o.line_total,
    CASE 
        WHEN o.line_total > 1000 THEN 'High Value'
        WHEN o.line_total > 100 THEN 'Medium Value'
        ELSE 'Low Value'
    END as order_category,
    ROW_NUMBER() OVER (PARTITION BY o.customer_id ORDER BY o.order_date) as order_sequence
FROM (
    SELECT 
        customer_id,
        order_date,
        product_id,
        quantity,
        unit_price,
        quantity * unit_price as line_total
    FROM raw_orders
    WHERE order_date >= '2023-01-01'
) o
JOIN customers c ON o.customer_id = c.id
JOIN products p ON o.product_id = p.id;

-- Load phase:
INSERT INTO analytics.order_facts (
    customer_id,
    customer_name,
    order_date,
    product_name,
    category,
    quantity,
    unit_price,
    line_total,
    order_category,
    order_sequence
)
SELECT 
    customer_id,
    customer_name,
    order_date,
    product_name,
    category,
    quantity,
    unit_price,
    line_total,
    order_category,
    order_sequence
FROM transformed_orders;

-- Data quality checks:
-- Check for duplicates
SELECT customer_id, order_date, product_id, COUNT(*)
FROM order_facts
GROUP BY customer_id, order_date, product_id
HAVING COUNT(*) > 1;

-- Check for missing references
SELECT DISTINCT customer_id
FROM order_facts
WHERE customer_id NOT IN (SELECT id FROM customers);

-- Check data ranges
SELECT 
    MIN(order_date) as earliest_date,
    MAX(order_date) as latest_date,
    MIN(line_total) as min_total,
    MAX(line_total) as max_total,
    COUNT(*) as total_records
FROM order_facts;
```

## Extensions and Plugins

### Database Driver Management

```sql
-- DBeaver supports numerous database drivers:
-- - MySQL/MariaDB
-- - PostgreSQL
-- - SQLite
-- - Oracle
-- - SQL Server
-- - DB2
-- - Sybase
-- - Firebird
-- - H2
-- - HSQLDB
-- - Derby
-- - MongoDB
-- - Cassandra
-- - Redis
-- - ClickHouse
-- - Snowflake
-- - BigQuery
-- - And many more...

-- Driver installation:
-- Database -> Driver Manager
-- Download/Update drivers automatically
-- Or manually add JAR files

-- Custom driver configuration:
{
  "driver_name": "Custom MySQL",
  "driver_class": "com.mysql.cj.jdbc.Driver",
  "jar_files": [
    "/path/to/mysql-connector-java-8.0.33.jar"
  ],
  "url_template": "jdbc:mysql://{host}[:{port}]/[{database}]",
  "default_port": 3306,
  "supports_transactions": true,
  "supports_multiple_results": true
}
```

### Plugin Ecosystem

```sql
-- Available DBeaver plugins:
-- 1. Git integration
-- 2. ERD (Entity Relationship Diagrams)
-- 3. Data visualization
-- 4. Query profiling
-- 5. Database documentation
-- 6. Code formatting
-- 7. Schema comparison
-- 8. Data masking
-- 9. Backup/restore tools
-- 10. Performance monitoring

-- Install plugins:
-- Help -> Install New Software
-- Or download from DBeaver marketplace

-- Popular plugins:

-- 1. DBeaver PRO features:
-- - NoSQL databases support
-- - Cloud databases integration
-- - Advanced data visualization
-- - Query profiling and optimization
-- - Data masking and anonymization
-- - Advanced export/import formats

-- 2. Git integration:
-- - Version control for SQL scripts
-- - Collaborative development
-- - Change tracking
-- - Branch management

-- 3. ERD plugin:
-- - Visual database design
-- - Automatic diagram generation
-- - Relationship visualization
-- - Export to various formats

-- 4. Documentation generator:
-- - Automatic schema documentation
-- - HTML/PDF reports
-- - Table and column descriptions
-- - Relationship documentation
```

### Custom Extensions

```java
// Create custom DBeaver extension:
// 1. Set up Eclipse plugin development environment
// 2. Create new plugin project
// 3. Implement DBeaver extension points

// Example: Custom data export extension
public class CustomExportExtension implements IDataExporter {
    
    @Override
    public void exportData(IDataExportSite site) throws DBException {
        // Custom export logic
        IResultSet resultSet = site.getResultSet();
        
        while (resultSet.nextRow()) {
            Object[] row = resultSet.getRowData();
            // Process row data
            processRow(row);
        }
    }
    
    private void processRow(Object[] row) {
        // Custom row processing
        for (Object value : row) {
            // Transform and export value
            String transformedValue = transformValue(value);
            writeToOutput(transformedValue);
        }
    }
}

// Plugin manifest (plugin.xml):
<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <extension point="org.jkiss.dbeaver.dataExporter">
      <exporter
            class="com.example.CustomExportExtension"
            description="Custom data export format"
            id="custom.exporter"
            label="Custom Export"
            order="100">
      </exporter>
   </extension>
</plugin>
```

### Configuration and Customization

```sql
-- DBeaver configuration files:
-- Windows: %APPDATA%\DBeaverData\workspace6\.metadata
-- macOS: ~/Library/DBeaverData/workspace6/.metadata  
-- Linux: ~/.local/share/DBeaverData/workspace6/.metadata

-- Key configuration files:
-- - .metadata/.plugins/org.eclipse.core.runtime/.settings/
-- - connections.xml (connection configurations)
-- - drivers.xml (driver configurations)
-- - preferences.properties (user preferences)

-- Workspace customization:
{
  "sql_editor": {
    "auto_completion": true,
    "syntax_highlighting": true,
    "line_numbers": true,
    "word_wrap": false,
    "tab_size": 4,
    "font_size": 12,
    "theme": "dark"
  },
  "data_viewer": {
    "max_rows": 200,
    "auto_refresh": false,
    "show_row_numbers": true,
    "null_value_display": "<NULL>",
    "date_format": "yyyy-MM-dd HH:mm:ss"
  },
  "performance": {
    "result_set_max_size": 100000,
    "query_timeout": 20000,
    "connection_pool_size": 10,
    "memory_limit": "2G"
  }
}

-- Export/import configuration:
-- File -> Export -> General -> Preferences
-- File -> Import -> General -> Preferences

-- Team configuration sharing:
-- Export workspace settings
-- Share connection templates
-- Standardize formatting rules
-- Common driver configurations
```

## Best Practices

### Performance Optimization

```sql
-- 1. Connection management:
-- Use connection pooling
-- Close unused connections
-- Configure appropriate timeouts
-- Monitor connection usage

-- Connection pool configuration:
{
  "initial_size": 2,
  "max_active": 10,
  "max_idle": 5,
  "min_idle": 1,
  "max_wait": 10000,
  "validation_query": "SELECT 1",
  "test_on_borrow": true,
  "test_while_idle": true
}

-- 2. Query optimization:
-- Use LIMIT for large result sets
SELECT * FROM large_table LIMIT 1000;

-- Use appropriate indexes
CREATE INDEX idx_orders_date_status ON orders(order_date, status);

-- Avoid SELECT * in production queries
SELECT id, name, email FROM users WHERE status = 'active';

-- Use EXISTS instead of IN for subqueries
SELECT * FROM customers c
WHERE EXISTS (
    SELECT 1 FROM orders o 
    WHERE o.customer_id = c.id 
      AND o.status = 'completed'
);

-- 3. Memory management:
-- Configure appropriate result set limits
-- Use streaming for large exports
-- Monitor memory usage
-- Close result sets properly

-- 4. Network optimization:
-- Use compression for remote connections
-- Minimize data transfer
-- Use appropriate fetch sizes
-- Enable connection keep-alive
```

### Security Best Practices

```sql
-- 1. Connection security:
-- Use SSL/TLS for remote connections
-- Implement proper authentication
-- Use least privilege principle
-- Regular password rotation

-- SSL connection configuration:
{
  "ssl_mode": "require",
  "ssl_cert": "/path/to/client-cert.pem",
  "ssl_key": "/path/to/client-key.pem",
  "ssl_ca": "/path/to/ca-cert.pem",
  "ssl_verify_server_cert": true
}

-- 2. Access control:
-- Create application-specific users
-- Grant minimal required privileges
-- Use roles for permission management
-- Regular access reviews

-- Example secure user setup:
CREATE USER 'app_readonly'@'%' IDENTIFIED BY 'SecurePassword123!';
GRANT SELECT ON myapp.* TO 'app_readonly'@'%';

CREATE USER 'app_writer'@'%' IDENTIFIED BY 'SecurePassword456!';
GRANT SELECT, INSERT, UPDATE ON myapp.* TO 'app_writer'@'%';

-- 3. Data protection:
-- Mask sensitive data in development
-- Use parameterized queries
-- Implement audit logging
-- Regular security updates

-- Data masking example:
SELECT 
    id,
    username,
    CONCAT(LEFT(email, 3), '***@', SUBSTRING_INDEX(email, '@', -1)) as masked_email,
    CONCAT('***-***-', RIGHT(phone, 4)) as masked_phone
FROM users;
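
-- Parameterized query example (server-side prepared statement, MySQL
-- dialect; the email value is illustrative)
PREPARE stmt FROM 'SELECT id, username FROM users WHERE email = ?';
SET @email = 'john@example.com';
EXECUTE stmt USING @email;
DEALLOCATE PREPARE stmt;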

-- 4. Backup security:
-- Encrypt backup files
-- Secure backup storage
-- Test restore procedures
-- Document recovery processes
```

### Development Workflow

```sql
-- 1. Environment management:
-- Separate development, staging, production
-- Use consistent naming conventions
-- Version control for database schemas
-- Automated deployment processes

-- Environment configuration:
{
  "development": {
    "host": "dev-db.internal",
    "database": "myapp_dev",
    "auto_commit": true,
    "query_timeout": 30
  },
  "staging": {
    "host": "staging-db.internal", 
    "database": "myapp_staging",
    "auto_commit": false,
    "query_timeout": 60
  },
  "production": {
    "host": "prod-db.internal",
    "database": "myapp_prod",
    "auto_commit": false,
    "query_timeout": 120,
    "read_only": true
  }
}

-- 2. Code organization:
-- Use folders for script organization
-- Consistent naming conventions
-- Document complex queries
-- Version control integration

-- Script organization:
/sql-scripts/
  /migrations/
    001_create_users_table.sql
    002_add_email_index.sql
    003_create_orders_table.sql
  /queries/
    /reports/
      monthly_sales_report.sql
      user_activity_report.sql
    /maintenance/
      cleanup_old_logs.sql
      rebuild_indexes.sql
  /procedures/
    calculate_user_stats.sql
    process_orders.sql

-- 3. Testing practices:
-- Test queries on sample data
-- Validate performance on production-size data
-- Use transactions for testing
-- Document test cases

-- Testing template:
START TRANSACTION;

-- Test data setup
INSERT INTO test_users (username, email) VALUES
('test_user_1', 'test1@example.com'),
('test_user_2', 'test2@example.com');

-- Execute test query
SELECT COUNT(*) FROM test_users WHERE email LIKE '%@example.com';

-- Verify results
-- Expected: 2 rows

-- Cleanup
ROLLBACK;

-- 4. Documentation:
-- Document database schema
-- Explain complex business logic
-- Maintain change logs
-- Create user guides

-- Schema documentation template:
/*
Table: users
Purpose: Store user account information
Created: 2023-01-01
Last Modified: 2023-12-01

Columns:
- id: Primary key, auto-increment
- username: Unique username, 3-50 characters
- email: User email address, must be valid format
- password_hash: Bcrypt hashed password
- status: Account status (active, inactive, suspended)
- created_at: Account creation timestamp
- updated_at: Last modification timestamp

Indexes:
- PRIMARY KEY (id)
- UNIQUE KEY uk_username (username)
- KEY idx_email (email)
- KEY idx_status_created (status, created_at)

Related Tables:
- orders (user_id references users.id)
- user_profiles (user_id references users.id)
*/
```

### Maintenance and Monitoring

```sql
-- 1. Regular maintenance tasks:
-- Monitor database growth
-- Analyze query performance
-- Update statistics
-- Rebuild indexes when needed
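
-- Example routine maintenance statements (MySQL; table names illustrative)
ANALYZE TABLE users, orders;   -- refresh optimizer statistics
OPTIMIZE TABLE logs;           -- reclaim space / rebuild a fragmented table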

-- Database size monitoring:
SELECT 
    table_schema as database_name,
    ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) as size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;

-- Table growth monitoring:
SELECT 
    table_name,
    table_rows,
    ROUND(((data_length + index_length) / 1024 / 1024), 2) as size_mb,
    ROUND((data_length / 1024 / 1024), 2) as data_mb,
    ROUND((index_length / 1024 / 1024), 2) as index_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY (data_length + index_length) DESC;

-- 2. Performance monitoring:
-- Track slow queries
-- Monitor connection usage
-- Analyze index effectiveness
-- Review execution plans

-- Slow query analysis:
SELECT 
    query_time,
    lock_time,
    rows_sent,
    rows_examined,
    sql_text
FROM mysql.slow_log
WHERE start_time >= DATE_SUB(NOW(), INTERVAL 1 DAY)
ORDER BY query_time DESC
LIMIT 10;

-- 3. Backup verification:
-- Test restore procedures
-- Verify backup integrity
-- Monitor backup completion
-- Document recovery procedures

-- Backup verification script:
#!/bin/bash
BACKUP_FILE="/backups/myapp_$(date +%Y%m%d).sql"
TEST_DB="myapp_restore_test"

# Create test database
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS $TEST_DB;"

# Restore backup to test database
mysql -u root -p $TEST_DB < $BACKUP_FILE

# Verify restore
TABLES=$(mysql -u root -p -e "USE $TEST_DB; SHOW TABLES;" | wc -l)
echo "Restored $TABLES tables"

# Cleanup
mysql -u root -p -e "DROP DATABASE $TEST_DB;"

-- 4. Capacity planning:
-- Monitor growth trends
-- Plan for scaling
-- Optimize resource usage
-- Prepare for peak loads

-- Growth trend analysis:
SELECT 
    DATE(created_at) as date,
    COUNT(*) as daily_records,
    SUM(COUNT(*)) OVER (ORDER BY DATE(created_at)) as cumulative_records
FROM large_table
WHERE created_at >= DATE_SUB(NOW(), INTERVAL 30 DAY)
GROUP BY DATE(created_at)
ORDER BY date;
```