# DataGrip Cheatsheet
DataGrip - Database IDE by JetBrains
DataGrip is a powerful database IDE from JetBrains that provides an intelligent query console, efficient schema navigation, and a full set of database development tools. It supports many database systems and offers advanced features for database professionals.
## Table of Contents
- [Installation & Setup](LINK_0)
- [Database Connections](LINK_0)
- [Query Console](LINK_0)
- [Schema Navigation](LINK_0)
- [Data Editor](LINK_0)
- [Code Generation](LINK_0)
- [Version Control](LINK_0)
- [Debugging](LINK_0)
- [Performance Analysis](LINK_0)
- [Plugins and Extensions](LINK_0)
- [Keyboard Shortcuts](LINK_0)
- [Best Practices](LINK_0)
## Installation & Setup
### System Requirements
```bash
# Minimum system requirements
# RAM: 2 GB (8 GB recommended)
# CPU: Any modern CPU
# Disk: 3.5 GB
# OS: Windows 10+, macOS 10.14+, Linux with glibc 2.17+
# Check Java version (required for DataGrip)
java -version
# DataGrip requires Java 11 or higher
# If not installed, download from:
# https://adoptium.net/
```
### Installation Methods
```bash
# Windows - Download from JetBrains website
# https://www.jetbrains.com/datagrip/download/
# macOS - Using Homebrew
brew install --cask datagrip
# Linux - Using Snap
sudo snap install datagrip --classic
# Linux - Using Flatpak
flatpak install flathub com.jetbrains.DataGrip
# Linux - Manual installation
wget https://download.jetbrains.com/datagrip/datagrip-2023.3.tar.gz
tar -xzf datagrip-2023.3.tar.gz
cd DataGrip-2023.3/bin
./datagrip.sh
# Toolbox App (recommended for managing JetBrains products)
# Download JetBrains Toolbox App
# Install DataGrip through Toolbox for easy updates
```
### Initial Configuration
```bash
# Configuration directories
# Windows: %APPDATA%\JetBrains\DataGrip2023.3
# macOS: ~/Library/Application Support/JetBrains/DataGrip2023.3
# Linux: ~/.config/JetBrains/DataGrip2023.3
# Import settings from previous version
# File → Import Settings → Select settings.zip
# Configure IDE settings
# File → Settings (Ctrl+Alt+S)
```
### License Configuration
```bash
# License options:
# 1. 30-day free trial
# 2. Individual license ($199/year)
# 3. Commercial license ($199/year per user)
# 4. Educational license (free for students/teachers)
# 5. Open Source license (free for qualifying projects)
# Activate license
# Help → Register → Enter license key or login to JetBrains account
# Offline activation
# Help → Register → Activation code
```
## Database Connections
### Connection Setup
-- Supported database systems:
-- MySQL, PostgreSQL, SQLite, Oracle, SQL Server,
-- DB2, Sybase, H2, HSQLDB, Derby, Exasol,
-- ClickHouse, MongoDB, Cassandra, Redis, and more
-- MySQL connection example
Host: localhost
Port: 3306
Database: myapp
User: dbuser
Password: password
URL: jdbc:mysql://localhost:3306/myapp
-- PostgreSQL connection
Host: localhost
Port: 5432
Database: myapp
User: postgres
Password: password
URL: jdbc:postgresql://localhost:5432/myapp
-- SQL Server connection
Host: localhost
Port: 1433
Database: myapp
User: sa
Password: password
URL: jdbc:sqlserver://localhost:1433;databaseName=myapp
-- Oracle connection
Host: localhost
Port: 1521
SID: XE
User: hr
Password: password
URL: jdbc:oracle:thin:@localhost:1521:XE
-- SQLite connection
File: /path/to/database.sqlite
URL: jdbc:sqlite:/path/to/database.sqlite
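A quick way to confirm that a newly configured data source actually works is to open a console on it and run a version query. A minimal sketch, one statement per DBMS (run only the one matching your driver):
```sql
-- MySQL / MariaDB
SELECT VERSION();
-- PostgreSQL
SELECT version();
-- SQL Server
SELECT @@VERSION;
-- Oracle
SELECT banner FROM v$version;
-- SQLite
SELECT sqlite_version();
```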
### SSL/TLS Configuration
# MySQL SSL configuration
useSSL=true
requireSSL=true
verifyServerCertificate=true
trustCertificateKeyStoreUrl=file:///path/to/truststore.jks
trustCertificateKeyStorePassword=password
clientCertificateKeyStoreUrl=file:///path/to/keystore.jks
clientCertificateKeyStorePassword=password
# PostgreSQL SSL configuration
ssl=true
sslmode=require
sslcert=/path/to/client.crt
sslkey=/path/to/client.key
sslrootcert=/path/to/ca.crt
# SQL Server SSL configuration
encrypt=true
trustServerCertificate=false
trustStore=/path/to/truststore.jks
trustStorePassword=password
### Connection Pooling
# HikariCP connection pool settings (default in DataGrip)
maximumPoolSize=10
minimumIdle=5
connectionTimeout=30000
idleTimeout=600000
maxLifetime=1800000
leakDetectionThreshold=60000
# Custom connection pool configuration
# File → Settings → Database → Advanced
# Connection pool settings:
# - Initial connections: 1
# - Maximum connections: 10
# - Connection timeout: 30s
# - Idle timeout: 10m
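Whatever pool size you configure on the client side should stay well below the server's own connection limit. A small sketch, assuming a MySQL data source, to compare the limit with actual usage:
```sql
-- Server-side connection limit
SHOW VARIABLES LIKE 'max_connections';
-- Currently open connections and the historical peak
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';
```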
### Data Source Templates
<!-- Custom data source template -->
<!-- File → Settings → Database → Data Sources → Templates -->
<dataSource>
<name>MySQL Production Template</name>
<driver>mysql</driver>
<url>jdbc:mysql://{host}:{port}/{database}</url>
<properties>
<property name="useSSL" value="true"/>
<property name="requireSSL" value="true"/>
<property name="useUnicode" value="true"/>
<property name="characterEncoding" value="UTF-8"/>
<property name="zeroDateTimeBehavior" value="convertToNull"/>
<property name="autoReconnect" value="true"/>
<property name="failOverReadOnly" value="false"/>
<property name="maxReconnects" value="3"/>
</properties>
</dataSource>
## Query Console
### Basic Query Operations
-- Execute query: Ctrl+Enter (current statement) or Ctrl+Shift+Enter (all)
SELECT * FROM users WHERE created_at >= '2023-01-01';
-- Execute selection: Select text and press Ctrl+Enter
SELECT id, username, email FROM users;
-- Multiple queries (separate with semicolons)
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM products;
-- Query with parameters
SELECT * FROM users WHERE id = ?;
-- DataGrip will prompt for parameter values
-- Named parameters
SELECT * FROM users WHERE status = :status AND created_at >= :start_date;
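When the console runs in manual transaction mode, data changes are only persisted once you commit them explicitly. A sketch of wrapping an update in an explicit transaction so it can be reviewed and rolled back (table and column names are illustrative):
```sql
START TRANSACTION;

UPDATE users
SET status = 'inactive'
WHERE last_login_at < '2022-01-01';

-- Inspect the effect before deciding
SELECT COUNT(*) AS affected FROM users WHERE status = 'inactive';

COMMIT;      -- keep the change
-- ROLLBACK; -- or discard it instead
```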
### Query History and Favorites
-- Access query history: Ctrl+Shift+H
-- Recent queries are automatically saved
-- Save query as favorite
-- Right-click in console → Add to Favorites
-- Or use Ctrl+Shift+F
-- Organize favorites in folders
-- Database tool window → Favorites → Create folder
-- Search in query history
-- Ctrl+Shift+H → Type to search
-- Export query history
-- Database tool window → Query History → Export
### Query Execution Plans
-- View execution plan: Ctrl+Shift+E
EXPLAIN SELECT
u.username,
COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.username;
-- PostgreSQL execution plan with analysis
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT * FROM large_table WHERE indexed_column = 'value';
-- SQL Server execution plan
SET SHOWPLAN_ALL ON;
SELECT * FROM users WHERE email = 'user@example.com';
-- Oracle execution plan
EXPLAIN PLAN FOR
SELECT * FROM users WHERE email = 'user@example.com';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- MySQL execution plan
EXPLAIN FORMAT=JSON
SELECT * FROM users WHERE email = 'user@example.com';
### Query Profiling
-- Enable query profiling in MySQL
SET profiling = 1;
-- Execute query
SELECT * FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.created_at >= '2023-01-01';
-- Show profiles
SHOW PROFILES;
-- Show detailed profile for specific query
SHOW PROFILE FOR QUERY 1;
-- Show CPU usage
SHOW PROFILE CPU FOR QUERY 1;
-- Show memory usage
SHOW PROFILE MEMORY FOR QUERY 1;
-- PostgreSQL query profiling with pg_stat_statements
SELECT
query,
calls,
total_time,
mean_time,
rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
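Note that pg_stat_statements is an extension: the view above only exists once the extension is installed and `shared_preload_libraries` includes `pg_stat_statements` (server restart required). On PostgreSQL 13+ the timing columns are named `total_exec_time`/`mean_exec_time` rather than `total_time`/`mean_time`. A minimal setup sketch:
```sql
-- Run once per database (requires sufficient privileges)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Reset collected statistics when starting a new measurement window
SELECT pg_stat_statements_reset();
```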
### Advanced Query Features
-- Live templates (code snippets)
-- Type abbreviation and press Tab
-- sel → SELECT * FROM table_name;
-- ins → INSERT INTO table_name (columns) VALUES (values);
-- upd → UPDATE table_name SET column = value WHERE condition;
-- Custom live templates
-- File → Settings → Editor → Live Templates → SQL
-- Query parameters with default values
SELECT * FROM users
WHERE created_at >= COALESCE(:start_date, '2023-01-01')
AND status = COALESCE(:status, 'active');
-- Conditional queries using DataGrip variables
-- #if ($condition)
SELECT * FROM users WHERE active = true;
-- #else
SELECT * FROM users;
-- #end
-- Query with file input
-- Use $FILE$ placeholder to read from file
SELECT * FROM users WHERE id IN ($FILE$/path/to/ids.txt$);
-- Batch operations
-- Execute query for each line in a file
-- Tools → Database → Run SQL Script
## Schema Navigation
### Database Explorer
-- Navigate database structure
-- Database tool window (Alt+1)
-- Expand data sources to see:
-- - Schemas/Databases
-- - Tables
-- - Views
-- - Stored Procedures
-- - Functions
-- - Triggers
-- - Indexes
-- Quick navigation shortcuts:
-- Ctrl+N: Navigate to table/view
-- Ctrl+Shift+N: Navigate to any database object
-- Ctrl+Alt+Shift+N: Navigate to symbol
-- Search in database objects
-- Ctrl+Shift+F: Find in database
-- Search by name, content, or SQL
-- Filter objects
-- Use filter in Database tool window
-- Filter by name pattern or object type
### Table and Schema Information
-- View table structure
-- Double-click table → DDL tab
-- Or right-click → Show DDL
-- Table information query
SELECT
column_name,
data_type,
is_nullable,
column_default,
character_maximum_length
FROM information_schema.columns
WHERE table_name = 'users'
ORDER BY ordinal_position;
-- Index information
SELECT
index_name,
column_name,
non_unique,
seq_in_index
FROM information_schema.statistics
WHERE table_name = 'users'
ORDER BY index_name, seq_in_index;
-- Foreign key relationships
SELECT
constraint_name,
column_name,
referenced_table_name,
referenced_column_name
FROM information_schema.key_column_usage
WHERE table_name = 'orders'
AND referenced_table_name IS NOT NULL;
-- Table size and row count
SELECT
table_name,
table_rows,
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS size_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY size_mb DESC;
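The catalog queries above use MySQL's information_schema columns. On PostgreSQL, sizes are easier to read from the system catalogs; a rough equivalent of the size query:
```sql
-- PostgreSQL: table sizes including indexes and TOAST data
SELECT c.relname AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY pg_total_relation_size(c.oid) DESC;
```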
### Diagrams and Relationships
-- Generate ER diagram
-- Right-click database → Diagrams → Show Visualization
-- Or Tools → Database → ER Diagram
-- Customize diagram
-- Show/hide columns, indexes, foreign keys
-- Arrange tables manually or auto-layout
-- Export diagram as image
-- Dependency analysis
-- Right-click object → Analyze → Dependencies
-- Shows what depends on the object and what it depends on
-- Find usages
-- Right-click object → Find Usages (Alt+F7)
-- Shows where the object is referenced
### Schema Comparison
-- Compare schemas
-- Tools → Database → Compare Schemas
-- Select source and target schemas
-- View differences in structure and data
-- Generate migration script
-- After comparison, click "Generate Script"
-- Review and execute migration
-- Synchronize schemas
-- Tools → Database → Synchronize Schemas
-- Apply changes from one schema to another
-- Export schema comparison
-- Save comparison results as HTML or text
## Data Editor
### Viewing and Editing Data
-- Open table data
-- Double-click table → Data tab
-- Or right-click table → Edit Data
-- Navigation shortcuts:
-- Ctrl+Page Up/Down: Navigate between tabs
-- F4: Edit cell
-- Ctrl+Enter: Commit changes
-- Escape: Cancel edit
-- Filtering data
-- Click filter icon in column header
-- Use quick filters or custom conditions
-- Combine multiple filters
-- Sorting data
-- Click column header to sort
-- Ctrl+click for multi-column sort
-- Right-click for sort options
-- Search in data
-- Ctrl+F: Find in current view
-- Ctrl+R: Find and replace
-- Use regular expressions for advanced search
### Data Modification
-- Insert new row
-- Click "+" button or press Alt+Insert
-- Fill in values and press Ctrl+Enter to commit
-- Update existing data
-- Double-click cell or press F4
-- Edit value and press Enter
-- Commit with Ctrl+Enter
-- Delete rows
-- Select rows and press Delete
-- Confirm deletion
-- Bulk operations
-- Select multiple rows for bulk delete
-- Use clipboard for bulk insert/update
-- Import data from CSV/Excel
-- Undo/Redo changes
-- Ctrl+Z: Undo
-- Ctrl+Shift+Z: Redo
-- View pending changes before commit
### Data Import/Export
-- Import data
-- Right-click table → Import Data from File
-- Supported formats: CSV, TSV, Excel, JSON, XML
-- Configure import mapping and options
-- Export data
-- Right-click table → Export Data to File
-- Choose format and destination
-- Configure export options
-- Copy data
-- Select rows → Ctrl+C
-- Paste in external applications
-- Various copy formats available
-- Generate INSERT statements
-- Select rows → Copy as INSERT statements
-- Useful for data migration scripts
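For very large datasets it is often faster to let the server do the export/import rather than the IDE. A sketch using PostgreSQL's COPY (file paths are illustrative and refer to the server's filesystem; MySQL offers `SELECT ... INTO OUTFILE` and `LOAD DATA INFILE` for the same purpose):
```sql
-- Export a query result to CSV on the database server
COPY (SELECT id, username, email FROM users WHERE is_active)
TO '/tmp/active_users.csv' WITH (FORMAT csv, HEADER true);

-- Load a CSV file back into a table
COPY user_import (id, username, email)
FROM '/tmp/user_import.csv' WITH (FORMAT csv, HEADER true);
```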
### Advanced Data Features
-- Aggregation view
-- Right-click column → Show Aggregation
-- View COUNT, SUM, AVG, MIN, MAX
-- Transpose view
-- Tools → Database → Transpose
-- Switch rows and columns view
-- Data comparison
-- Select two rows → Compare
-- Highlight differences between records
-- Value editor
-- F4 on large text/binary fields
-- Specialized editors for JSON, XML, images
-- Calculated columns
-- Add virtual columns with expressions
-- Useful for data analysis without modifying schema
## Code Generation
### SQL Generation
-- Generate SELECT statement
-- Right-click table → SQL Generator → SELECT
-- Customize columns, conditions, joins
-- Generate INSERT statement
-- Right-click table → SQL Generator → INSERT
-- Include all columns or select specific ones
-- Generate UPDATE statement
-- Right-click table → SQL Generator → UPDATE
-- Specify SET clauses and WHERE conditions
-- Generate DELETE statement
-- Right-click table → SQL Generator → DELETE
-- Add WHERE conditions for safe deletion
-- Generate DDL
-- Right-click object → SQL Scripts → DDL
-- Create table, index, or constraint definitions
### Code Templates
-- Create custom templates
-- File → Settings → Editor → Live Templates → SQL
-- Common templates:
-- Table creation template
CREATE TABLE ${TABLE_NAME} (
id ${ID_TYPE} PRIMARY KEY,
${COLUMN_NAME} ${COLUMN_TYPE},
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
-- Stored procedure template
DELIMITER //
CREATE PROCEDURE ${PROCEDURE_NAME}(${PARAMETERS})
BEGIN
${BODY}
END //
DELIMITER ;
-- Function template
CREATE FUNCTION ${FUNCTION_NAME}(${PARAMETERS})
RETURNS ${RETURN_TYPE}
READS SQL DATA
DETERMINISTIC
BEGIN
${BODY}
RETURN ${RETURN_VALUE};
END;
-- Trigger template
CREATE TRIGGER ${TRIGGER_NAME}
${TIMING} ${EVENT} ON ${TABLE_NAME}
FOR EACH ROW
BEGIN
${BODY}
END;
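As a concrete instance of the trigger template, a minimal MySQL trigger that keeps `updated_at` current on every change (assumes a `users` table with an `updated_at` column; in MySQL this is often handled by `ON UPDATE CURRENT_TIMESTAMP` instead, as in the table template above):
```sql
CREATE TRIGGER trg_users_touch_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
SET NEW.updated_at = CURRENT_TIMESTAMP;
```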
### Documentation Generation
-- Generate database documentation
-- Tools → Database → Generate Documentation
-- Include tables, columns, relationships, indexes
-- Custom documentation templates
-- Modify templates in DataGrip settings
-- Add custom sections and formatting
-- Export documentation
-- HTML format with navigation
-- Include diagrams and statistics
-- Customizable styling and layout
### Migration Scripts
-- Generate migration scripts
-- Compare schemas and generate differences
-- Version control integration for migrations
-- Migration script template
-- Migration: Add user preferences table
-- Date: 2023-12-01
-- Author: Developer Name
-- Up migration
CREATE TABLE user_preferences (
id INT AUTO_INCREMENT PRIMARY KEY,
user_id INT NOT NULL,
preference_key VARCHAR(100) NOT NULL,
preference_value TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
UNIQUE KEY uk_user_preference (user_id, preference_key)
);
-- Down migration
DROP TABLE IF EXISTS user_preferences;
-- Data migration example
INSERT INTO user_preferences (user_id, preference_key, preference_value)
SELECT
id,
'theme',
CASE
WHEN profile_data->>'$.theme' IS NOT NULL
THEN profile_data->>'$.theme'
ELSE 'default'
END
FROM users
WHERE profile_data IS NOT NULL;
## Version Control
### Git Integration
# Initialize Git repository
git init
git add .
git commit -m "Initial database schema"
# DataGrip Git integration
# VCS → Enable Version Control Integration → Git
# VCS → Git → Add to VCS
# Commit changes
# Ctrl+K: Commit changes
# Review changes before commit
# Add commit message and commit
# View history
# VCS → Git → Show History
# See file changes over time
# Compare versions
# Branching
# VCS → Git → Branches
# Create, switch, merge branches
# Useful for feature development
### Database Versioning
-- Schema versioning table
CREATE TABLE schema_versions (
version VARCHAR(50) PRIMARY KEY,
description TEXT,
applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
applied_by VARCHAR(100),
checksum VARCHAR(64)
);
-- Track schema changes
INSERT INTO schema_versions (version, description, applied_by, checksum)
VALUES ('1.0.0', 'Initial schema', USER(), SHA2('schema_content', 256));
-- Migration tracking
CREATE TABLE migrations (
id INT AUTO_INCREMENT PRIMARY KEY,
migration_name VARCHAR(255) NOT NULL,
batch INT NOT NULL,
executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Version control best practices
-- 1. Always version control schema changes
-- 2. Use descriptive commit messages
-- 3. Include rollback scripts
-- 4. Test migrations on staging first
-- 5. Keep migrations atomic and reversible
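With the tracking tables above in place, a couple of simple queries answer the usual deployment questions (which version is live, what ran in the latest batch):
```sql
-- Most recently applied schema version
SELECT version, description, applied_at, applied_by
FROM schema_versions
ORDER BY applied_at DESC
LIMIT 1;

-- Everything executed in the latest migration batch
SELECT migration_name, executed_at
FROM migrations
WHERE batch = (SELECT MAX(batch) FROM migrations)
ORDER BY executed_at;
```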
### Change Tracking
-- Enable change tracking in DataGrip
-- File → Settings → Database → General → Track changes
-- View local changes
-- Database tool window → Local Changes
-- See modified objects and SQL
-- Revert changes
-- Right-click object → Local History → Revert
-- Or use VCS → Revert
-- Compare with repository
-- Right-click object → Compare with Repository
-- See differences between local and committed versions
-- Shelve changes
-- VCS → Shelve Changes
-- Temporarily save changes without committing
## Debugging
### SQL Debugging
-- Debug stored procedures (MySQL, PostgreSQL, Oracle)
-- Set breakpoints in procedure code
-- Step through execution line by line
-- Inspect variable values
-- MySQL debugging example
DELIMITER //
CREATE PROCEDURE debug_example(IN user_id INT)
BEGIN
DECLARE user_count INT DEFAULT 0;
DECLARE order_total DECIMAL(10,2) DEFAULT 0;
-- Breakpoint here
SELECT COUNT(*) INTO user_count FROM users WHERE id = user_id;
IF user_count > 0 THEN
-- Breakpoint here
SELECT SUM(total_amount) INTO order_total
FROM orders WHERE user_id = user_id;
SELECT user_count, order_total;
ELSE
SELECT 'User not found' as message;
END IF;
END //
DELIMITER ;
-- Debug execution
-- Right-click procedure → Debug
-- Set breakpoints and start debugging
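To hit the breakpoints you still need a call that exercises the procedure; a minimal invocation (the id value is illustrative):
```sql
CALL debug_example(42);
```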
### Query Performance Debugging
-- Analyze slow queries
-- Tools → Database → Performance Monitor
-- View real-time query performance
-- Query execution statistics
SELECT
digest_text,
count_star AS executions,
avg_timer_wait/1000000000000 AS avg_time_seconds,
sum_timer_wait/1000000000000 AS total_time_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 10;
-- Index usage analysis
SELECT
object_schema,
object_name,
index_name,
count_read,
count_write,
count_fetch,
count_insert,
count_update,
count_delete
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE object_schema = 'myapp'
ORDER BY count_read DESC;
-- Lock analysis
SELECT
r.trx_id waiting_trx_id,
r.trx_mysql_thread_id waiting_thread,
r.trx_query waiting_query,
b.trx_id blocking_trx_id,
b.trx_mysql_thread_id blocking_thread,
b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
### Error Diagnosis
-- Common error patterns and solutions
-- 1. Deadlock detection
-- Error: Deadlock found when trying to get lock
-- Solution: Retry transaction, optimize query order
-- 2. Lock timeout
-- Error: Lock wait timeout exceeded
-- Solution: Optimize queries, reduce transaction time
-- 3. Connection issues
-- Error: Too many connections
-- Solution: Optimize connection pooling, increase max_connections
-- 4. Memory issues
-- Error: Out of memory
-- Solution: Optimize queries, increase memory limits
-- 5. Syntax errors
-- Use DataGrip's syntax highlighting and error detection
-- Real-time error highlighting in query console
-- Error log analysis
SELECT
logged,
thread_id,
prio,
error_code,
subsystem,
data
FROM performance_schema.error_log
WHERE logged >= DATE_SUB(NOW(), INTERVAL 1 HOUR)
ORDER BY logged DESC;
## Performance Analysis
### Query Performance Monitoring
-- Enable performance monitoring
-- Tools → Database → Performance Monitor
-- Real-time monitoring of database performance
-- Query execution time analysis
SELECT
digest_text,
count_star as executions,
avg_timer_wait/1000000000000 as avg_seconds,
max_timer_wait/1000000000000 as max_seconds,
sum_timer_wait/1000000000000 as total_seconds,
sum_rows_examined/count_star as avg_rows_examined,
sum_rows_sent/count_star as avg_rows_sent
FROM performance_schema.events_statements_summary_by_digest
WHERE digest_text IS NOT NULL
ORDER BY sum_timer_wait DESC
LIMIT 20;
-- Index efficiency analysis
SELECT
table_schema,
table_name,
index_name,
cardinality,
ROUND(cardinality/table_rows*100, 2) as selectivity_percent
FROM information_schema.statistics s
JOIN information_schema.tables t ON s.table_schema = t.table_schema
AND s.table_name = t.table_name
WHERE s.table_schema = 'myapp'
AND t.table_rows > 0
ORDER BY selectivity_percent DESC;
-- Buffer pool analysis (MySQL)
SELECT
pool_id,
pool_size,
free_buffers,
database_pages,
old_database_pages,
modified_database_pages,
pending_decompress,
pending_reads,
pending_flush_lru,
pending_flush_list
FROM information_schema.innodb_buffer_pool_stats;
### Resource Usage Analysis
-- Connection analysis
SELECT
user,
host,
db,
command,
time,
state,
info
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;
-- Table space usage
SELECT
table_schema,
table_name,
table_rows,
data_length,
index_length,
data_length + index_length as total_size,
ROUND((data_length + index_length)/1024/1024, 2) as size_mb
FROM information_schema.tables
WHERE table_schema = 'myapp'
ORDER BY total_size DESC;
-- Temporary table usage
SELECT
digest_text,
count_star,
sum_created_tmp_tables,
sum_created_tmp_disk_tables,
ROUND(sum_created_tmp_disk_tables/sum_created_tmp_tables*100, 2) as disk_tmp_percent
FROM performance_schema.events_statements_summary_by_digest
WHERE sum_created_tmp_tables > 0
ORDER BY sum_created_tmp_disk_tables DESC;
-- File I/O analysis
SELECT
file_name,
event_name,
count_read,
count_write,
sum_number_of_bytes_read,
sum_number_of_bytes_write,
ROUND(sum_number_of_bytes_read/1024/1024, 2) as read_mb,
ROUND(sum_number_of_bytes_write/1024/1024, 2) as write_mb
FROM performance_schema.file_summary_by_instance
WHERE event_name LIKE 'wait/io/file/%'
ORDER BY sum_number_of_bytes_read + sum_number_of_bytes_write DESC;
### Performance Optimization
-- Query optimization recommendations
-- 1. Use EXPLAIN to analyze query execution plans
-- 2. Add appropriate indexes for WHERE, JOIN, ORDER BY clauses
-- 3. Avoid SELECT * in production queries
-- 4. Use LIMIT for large result sets
-- 5. Optimize JOIN order and conditions
-- Index optimization
-- Create covering indexes for frequently used queries
CREATE INDEX idx_orders_user_status_date ON orders(user_id, status, created_at);
-- Partial indexes for filtered queries (PostgreSQL syntax)
CREATE INDEX idx_active_users ON users(email) WHERE is_active = true;
-- Composite indexes for multi-column searches
CREATE INDEX idx_products_category_price ON products(category_id, price);
-- Query rewriting for better performance
-- Original slow query
SELECT * FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.created_at >= '2023-01-01'
AND u.status = 'active';
-- Optimized version
SELECT o.id, o.order_number, o.total_amount, u.username
FROM orders o
INNER JOIN users u ON o.user_id = u.id
WHERE o.created_at >= '2023-01-01'
AND u.status = 'active'
AND o.status != 'cancelled'
ORDER BY o.created_at DESC
LIMIT 1000;
-- Batch processing for large operations
-- Instead of single large transaction
UPDATE large_table SET status = 'processed' WHERE status = 'pending';
-- Use batched approach
UPDATE large_table
SET status = 'processed'
WHERE status = 'pending'
AND id BETWEEN 1 AND 10000;
-- Repeat for next batch
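The range-based batching above assumes dense, known ids. An alternative sketch that needs no knowledge of the id range is to repeat a LIMIT-ed update until nothing is left to process (MySQL syntax; `ROW_COUNT()` reports how many rows the previous statement changed):
```sql
-- Run repeatedly (e.g. from a script) until the affected row count is 0
UPDATE large_table
SET status = 'processed'
WHERE status = 'pending'
LIMIT 10000;

SELECT ROW_COUNT() AS rows_just_processed;
```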
## Plugins and Extensions
### Essential Plugins
# Install plugins
# File → Settings → Plugins → Marketplace
# Recommended plugins:
# 1. Database Navigator - Enhanced database browsing
# 2. SQL Query Plugin - Advanced SQL features
# 3. Rainbow Brackets - Better code readability
# 4. String Manipulation - Text processing tools
# 5. CSV Plugin - CSV file editing
# 6. JSON Parser - JSON formatting and validation
# 7. Grep Console - Enhanced console output
# 8. GitToolBox - Advanced Git integration
# 9. Key Promoter X - Learn keyboard shortcuts
# 10. Presentation Assistant - Show shortcuts during demos
### Custom Plugin Development
// Basic plugin structure for DataGrip
// plugin.xml
<idea-plugin>
<id>com.example.datagrip.plugin</id>
<name>Custom DataGrip Plugin</name>
<version>1.0</version>
<vendor>Your Company</vendor>
<description>Custom functionality for DataGrip</description>
<depends>com.intellij.modules.platform</depends>
<depends>com.intellij.database</depends>
<extensions defaultExtensionNs="com.intellij">
<toolWindow id="CustomTool"
secondary="true"
anchor="right"
factoryClass="com.example.CustomToolWindowFactory"/>
</extensions>
<actions>
<action id="CustomAction"
class="com.example.CustomAction"
text="Custom Action"
description="Performs custom database operation">
<add-to-group group-id="DatabaseViewPopupMenu" anchor="last"/>
</action>
</actions>
</idea-plugin>
// Custom action implementation
public class CustomAction extends AnAction {
@Override
public void actionPerformed(AnActionEvent e) {
// Get database context
DatabaseView databaseView = DatabaseView.DATABASE_VIEW_KEY.getData(e.getDataContext());
if (databaseView != null) {
// Perform custom database operation
performCustomOperation(databaseView);
}
}
private void performCustomOperation(DatabaseView view) {
// Custom logic here
// Access database connections, execute queries, etc.
}
}
### Script Integration
# External tool integration
# File → Settings → Tools → External Tools
# Add custom database scripts
# Name: Backup Database
# Program: mysqldump
# Arguments: -u $USER$ -p$PASSWORD$ $DATABASE$ > backup_$DATE$.sql
# Working directory: $ProjectFileDir$
# Python script integration
# Name: Data Analysis
# Program: python
# Arguments: scripts/analyze_data.py $SelectedText$
# Working directory: $ProjectFileDir$
# Shell script for maintenance
# Name: Database Maintenance
# Program: bash
# Arguments: scripts/db_maintenance.sh $DATABASE$
# Working directory: $ProjectFileDir$
## Keyboard Shortcuts
### Essential Shortcuts
# Navigation
Ctrl+N # Navigate to table/view
Ctrl+Shift+N # Navigate to any database object
Ctrl+Alt+Shift+N # Navigate to symbol
Alt+F1 # Select in Database view
Ctrl+E # Recent files
Ctrl+Shift+E # Recent locations
# Query Console
Ctrl+Enter # Execute current statement
Ctrl+Shift+Enter # Execute all statements
Ctrl+Shift+F10 # Execute selection
Ctrl+/ # Comment/uncomment line
Ctrl+Shift+/ # Block comment
Ctrl+D # Duplicate line
Ctrl+Y # Delete line
# Data Editor
F4 # Edit cell
Ctrl+Enter # Commit changes
Escape # Cancel edit
Alt+Insert # Insert new row
Delete # Delete selected rows
Ctrl+F # Find in data
Ctrl+R # Find and replace
# Code Editing
Ctrl+Space # Code completion
Ctrl+Shift+Space # Smart code completion
Ctrl+P # Parameter info
Ctrl+Q # Quick documentation
Alt+Enter # Show intention actions
Ctrl+Alt+L # Reformat code
Ctrl+Alt+O # Optimize imports
# Debugging
F8 # Step over
F7 # Step into
Shift+F8 # Step out
F9 # Resume program
Ctrl+F8 # Toggle breakpoint
Ctrl+Shift+F8 # View breakpoints
# Version Control
Ctrl+K # Commit changes
Ctrl+T # Update project
Alt+` # VCS operations popup
Ctrl+Alt+Z # Rollback changes
Ctrl+Shift+K # Push changes
# General
Ctrl+Alt+S # Settings
Ctrl+Shift+A # Find action
Ctrl+Shift+F12 # Hide all tool windows
Alt+1 # Database tool window
Alt+4 # Run tool window
Alt+9 # Version control tool window
### Custom Shortcuts
# Create custom shortcuts
# File → Settings → Keymap
# Useful custom shortcuts to add:
# Execute EXPLAIN: Ctrl+Shift+E
# Show table DDL: Ctrl+Shift+D
# Export data: Ctrl+Shift+X
# Import data: Ctrl+Shift+I
# Generate SQL: Ctrl+Shift+G
# Compare schemas: Ctrl+Shift+C
# Database console: Ctrl+Shift+Q
# Refresh database: F5
# New query console: Ctrl+Shift+N
# Close query console: Ctrl+W
# Macro recording
# Edit → Macros → Start Macro Recording
# Perform actions
# Edit → Macros → Stop Macro Recording
# Assign shortcut to macro
## Best Practices
### Database Development
-- 1. Environment Setup
-- Use separate databases for development, staging, production
-- Configure connection profiles for each environment
-- Use version control for all database changes
-- 2. Schema Design
-- Follow naming conventions consistently
-- Use appropriate data types and constraints
-- Document tables and columns
-- Plan for scalability and performance
-- 3. Query Development
-- Start with simple queries and build complexity gradually
-- Use EXPLAIN to analyze query performance
-- Test queries with realistic data volumes
-- Optimize before deploying to production
-- 4. Code Organization
-- Use folders to organize queries by feature/module
-- Create reusable query templates
-- Document complex queries with comments
-- Version control query files
-- 5. Testing Strategy
-- Test schema changes on sample data
-- Validate data integrity after migrations
-- Performance test with production-like data
-- Have rollback plans for all changes
### Security Best Practices
-- 1. Connection Security
-- Use SSL/TLS for database connections
-- Implement proper authentication
-- Use connection pooling appropriately
-- Limit connection privileges
-- 2. Access Control
-- Follow principle of least privilege
-- Use role-based access control
-- Regularly audit user permissions
-- Remove unused accounts
-- 3. Data Protection
-- Encrypt sensitive data at rest
-- Use parameterized queries to prevent SQL injection
-- Implement audit logging
-- Regular security assessments
-- 4. Development Security
-- Don't store credentials in code
-- Use environment variables for configuration
-- Implement proper error handling
-- Regular security updates
-- Example secure connection configuration
-- Use environment variables for credentials
DB_HOST=${DB_HOST}
DB_PORT=${DB_PORT}
DB_NAME=${DB_NAME}
DB_USER=${DB_USER}
DB_PASSWORD=${DB_PASSWORD}
DB_SSL_MODE=require
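The least-privilege point above translates directly into dedicated accounts per role. A MySQL sketch (user names, host mask, and password placeholders are illustrative):
```sql
-- Read-only reporting account
CREATE USER 'app_report'@'10.0.%' IDENTIFIED BY '<strong-password>';
GRANT SELECT ON myapp.* TO 'app_report'@'10.0.%';

-- Application account limited to DML, no DDL
CREATE USER 'app_rw'@'10.0.%' IDENTIFIED BY '<strong-password>';
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app_rw'@'10.0.%';
```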
### Performance Best Practices
-- 1. Query Optimization
-- Use appropriate indexes
-- Avoid SELECT * in production
-- Use LIMIT for large result sets
-- Optimize JOIN operations
-- 2. Index Strategy
-- Create indexes for WHERE clauses
-- Use composite indexes for multi-column searches
-- Monitor index usage and remove unused indexes
-- Consider partial indexes for filtered queries
-- 3. Connection Management
-- Use connection pooling
-- Set appropriate timeout values
-- Monitor connection usage
-- Close connections properly
-- 4. Monitoring and Maintenance
-- Regular performance monitoring
-- Analyze slow query logs
-- Update table statistics
-- Regular maintenance tasks
-- Example performance monitoring query
SELECT
table_name,
table_rows,
avg_row_length,
data_length,
index_length,
ROUND((index_length/data_length)*100, 2) as index_ratio
FROM information_schema.tables
WHERE table_schema = DATABASE()
AND table_rows > 0
ORDER BY data_length DESC;
### Collaboration Best Practices
-- 1. Team Workflow
-- Use shared connection configurations
-- Standardize naming conventions
-- Document database changes
-- Code review for schema changes
-- 2. Version Control
-- Track all schema changes
-- Use meaningful commit messages
-- Branch for feature development
-- Tag releases appropriately
-- 3. Documentation
-- Document database schema
-- Maintain query documentation
-- Keep migration scripts
-- Document deployment procedures
-- 4. Communication
-- Share query templates and snippets
-- Communicate schema changes early
-- Use issue tracking for database tasks
-- Regular team sync on database changes
-- Example documentation template
/*
Query: User Order Summary
Purpose: Generate summary of user orders for reporting
Author: Developer Name
Created: 2023-12-01
Last Modified: 2023-12-01
Dependencies: users, orders tables
Performance: ~500ms for 100K users
Notes: Requires index on orders(user_id, created_at)
*/
SELECT
u.id,
u.username,
u.email,
COUNT(o.id) as total_orders,
SUM(o.total_amount) as total_spent,
MAX(o.created_at) as last_order_date
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.is_active = TRUE
AND u.created_at >= DATE_SUB(NOW(), INTERVAL 1 YEAR)
GROUP BY u.id, u.username, u.email
HAVING total_orders > 0
ORDER BY total_spent DESC
LIMIT 1000;
## Summary
DataGrip is a comprehensive database IDE that provides powerful tools for database development, administration, and analysis. This cheatsheet covers the essential features and best practices for working with databases efficiently.
**Key strengths:**
- **Multi-database support**: works with all major database systems
- **Intelligent coding assistance**: advanced SQL completion and analysis
- **Powerful query console**: execute, analyze, and optimize queries
- **Schema navigation**: comprehensive database exploration tools
- **Version control integration**: track and manage database changes
- **Performance analysis**: built-in monitoring and optimization tools
**Best use cases:**
- Database development and administration
- Query development and optimization
- Schema design and migration
- Performance analysis and tuning
- Team collaboration on database projects
- Data analysis and reporting
**Important considerations:**
- Requires a JetBrains license for commercial use
- Resource-intensive application
- Learning curve for advanced features
- Regular updates recommended
- Proper configuration needed for optimal performance
By following the practices and techniques outlined in this cheatsheet, you can use DataGrip effectively to manage databases, develop efficient queries, and maintain high-quality database applications while collaborating smoothly with your team.