
PostgreSQL Cheat Sheet

PostgreSQL - The World's Most Advanced Open Source Database

PostgreSQL is a powerful open-source database system with over 30 years of active development. Known for its reliability, robustness, and performance, it supports both SQL and JSON querying.


Contents

  • Installation
  • Basic Commands
  • Database Operations
  • Table Operations
  • Data Types
  • CRUD Operations
  • Indexes
  • Constraints
  • Advanced SQL
  • Functions and Procedures
  • Triggers
  • Views and Materialized Views
  • User Management
  • Backup and Restore
  • Performance Optimization
  • Replication
  • Extensions
  • JSON and JSONB
  • Full-Text Search
  • Monitoring
  • Best Practices

Installation

Ubuntu/Debian

# Update package index
sudo apt update

# Install PostgreSQL
sudo apt install postgresql postgresql-contrib

# Install specific version
sudo apt install postgresql-14 postgresql-contrib-14

# Start PostgreSQL service
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Check status
sudo systemctl status postgresql

# Switch to postgres user
sudo -i -u postgres

# Access PostgreSQL prompt
psql

CentOS/RHEL/Fedora

# Install PostgreSQL server and contrib packages
sudo dnf install postgresql-server postgresql-contrib

# Initialize database
sudo postgresql-setup --initdb

# Start and enable PostgreSQL
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Switch to postgres user
sudo -i -u postgres
psql

macOS

# Using Homebrew
brew install postgresql

# Start PostgreSQL
brew services start postgresql

# Create database
createdb mydatabase

# Connect to PostgreSQL
psql mydatabase

# Using PostgreSQL.app
# Download from https://postgresapp.com/

Windows

# Download installer from https://www.postgresql.org/download/windows/
# Run the installer and follow the setup wizard

# Connect using psql
psql -U postgres -h localhost

# Or use pgAdmin GUI tool

Docker

# Pull PostgreSQL image
docker pull postgres:15

# Run PostgreSQL container
docker run --name postgres-container \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydatabase \
  -p 5432:5432 \
  -d postgres:15

# Connect to PostgreSQL in container
docker exec -it postgres-container psql -U postgres -d mydatabase

# Run with persistent data
docker run --name postgres-container \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydatabase \
  -p 5432:5432 \
  -v postgres-data:/var/lib/postgresql/data \
  -d postgres:15

Commandes de base

Connecting to PostgreSQL

# Connect to PostgreSQL
psql -U username -d database_name

# Connect to remote server
psql -h hostname -p 5432 -U username -d database_name

# Connect with SSL
psql "host=hostname port=5432 dbname=database_name user=username sslmode=require"

# Connect and execute a command
psql -U username -d database_name -c "SELECT version();"

# Run commands from a file
psql -U username -d database_name -f script.sql

Basic Information

-- Show PostgreSQL version
SELECT version();

-- Show current user
SELECT current_user;

-- Show current database
SELECT current_database();

-- Show current date and time
SELECT now();

-- Show server settings
SHOW ALL;

-- Show specific setting
SHOW shared_buffers;

-- Show active connections
SELECT * FROM pg_stat_activity;

-- Show database size
SELECT pg_size_pretty(pg_database_size(current_database()));

psql Meta-Commands

-- List databases
\l

-- Connect to database
\c database_name

-- List tables
\dt

-- List all relations (tables, views, sequences)
\d

-- Describe table
\d table_name

-- List schemas
\dn

-- List users/roles
\du

-- List functions
\df

-- Show table sizes
\dt+

-- Execute system command
\! ls -la

-- Quit psql
\q

-- Help
\?

-- SQL help
\h SELECT

Database Operations

Creating Databases

-- Create database
CREATE DATABASE mydatabase;

-- Create database with options
CREATE DATABASE mydatabase
    WITH OWNER = myuser
    ENCODING = 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TEMPLATE = template0;

-- Create database with specific tablespace
CREATE DATABASE mydatabase
    WITH TABLESPACE = my_tablespace;

-- PostgreSQL has no CREATE DATABASE IF NOT EXISTS; emulate it from psql
SELECT 'CREATE DATABASE mydatabase'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydatabase')\gexec

Managing Databases

-- List all databases
\l
SELECT datname FROM pg_database;

-- Connect to database
\c mydatabase

-- Show current database
SELECT current_database();

-- Rename database
ALTER DATABASE old_name RENAME TO new_name;

-- Change database owner
ALTER DATABASE mydatabase OWNER TO new_owner;

-- Drop database
DROP DATABASE mydatabase;

-- Drop database if exists
DROP DATABASE IF EXISTS mydatabase;

-- Terminate connections to database
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'mydatabase'
  AND pid <> pg_backend_pid();

Database Information

-- Show database size
SELECT 
    datname as database_name,
    pg_size_pretty(pg_database_size(datname)) as size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Show table sizes in current database
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size,
    pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) as table_size,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) as index_size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Show database connections
SELECT 
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit
FROM pg_stat_database;

Table Operations

Creating Tables

-- Basic table creation
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL UNIQUE,
    email VARCHAR(100) NOT NULL,
    password VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Table with various data types
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    description TEXT,
    price DECIMAL(10,2) NOT NULL CHECK (price > 0),
    category_id INTEGER REFERENCES categories(id),
    tags TEXT[],
    metadata JSONB,
    is_active BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Table with constraints
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    order_date DATE NOT NULL DEFAULT CURRENT_DATE,
    total_amount DECIMAL(10,2) NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'pending',
    CONSTRAINT valid_status CHECK (status IN ('pending', 'processing', 'shipped', 'delivered', 'cancelled'))
);

-- Temporary table
CREATE TEMPORARY TABLE temp_data (
    id INTEGER,
    value TEXT
);

-- Table with inheritance
CREATE TABLE vehicles (
    id SERIAL PRIMARY KEY,
    make VARCHAR(50),
    model VARCHAR(50),
    year INTEGER
);

CREATE TABLE cars (
    doors INTEGER
) INHERITS (vehicles);

Altering Tables

-- Add column
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- Add column with default value
ALTER TABLE users ADD COLUMN middle_name VARCHAR(50) DEFAULT '';

-- Modify column type
ALTER TABLE users ALTER COLUMN email TYPE VARCHAR(150);

-- Set column default
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT NOW();

-- Drop column default
ALTER TABLE users ALTER COLUMN created_at DROP DEFAULT;

-- Rename column
ALTER TABLE users RENAME COLUMN username TO user_name;

-- Drop column
ALTER TABLE users DROP COLUMN phone;

-- Add constraint
ALTER TABLE users ADD CONSTRAINT unique_email UNIQUE (email);

-- Drop constraint
ALTER TABLE users DROP CONSTRAINT unique_email;

-- Add foreign key
ALTER TABLE posts ADD CONSTRAINT fk_user_id 
FOREIGN KEY (user_id) REFERENCES users(id);

-- Rename table
ALTER TABLE old_table_name RENAME TO new_table_name;

-- Change table owner
ALTER TABLE users OWNER TO new_owner;

-- Set table schema
ALTER TABLE users SET SCHEMA new_schema;

Table Information

-- Describe table structure
\d users
\d+ users  -- with additional info

-- Show table columns
SELECT 
    column_name,
    data_type,
    is_nullable,
    column_default
FROM information_schema.columns
WHERE table_name = 'users';

-- Show table constraints
SELECT 
    constraint_name,
    constraint_type,
    column_name
FROM information_schema.table_constraints tc
JOIN information_schema.constraint_column_usage ccu
    ON tc.constraint_name = ccu.constraint_name
WHERE tc.table_name = 'users';

-- Show table indexes
SELECT 
    indexname,
    indexdef
FROM pg_indexes
WHERE tablename = 'users';

-- Show foreign keys
SELECT
    tc.constraint_name,
    tc.table_name,
    kcu.column_name,
    ccu.table_name AS foreign_table_name,
    ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
    ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
    ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
    AND tc.table_name = 'users';

Dropping Tables

-- Drop table
DROP TABLE table_name;

-- Drop table if exists
DROP TABLE IF EXISTS table_name;

-- Drop multiple tables
DROP TABLE table1, table2, table3;

-- Drop table cascade (remove dependent objects)
DROP TABLE table_name CASCADE;

-- Truncate table (delete all data, keep structure)
TRUNCATE TABLE table_name;

-- Truncate with cascade
TRUNCATE TABLE table_name CASCADE;

-- Truncate and restart identity
TRUNCATE TABLE table_name RESTART IDENTITY;

Data Types

Numeric Types

-- Integer types
SMALLINT        -- 2 bytes, -32,768 to 32,767
INTEGER         -- 4 bytes, -2,147,483,648 to 2,147,483,647
BIGINT          -- 8 bytes, -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

-- Auto-incrementing integers
SMALLSERIAL     -- 2 bytes, 1 to 32,767
SERIAL          -- 4 bytes, 1 to 2,147,483,647
BIGSERIAL       -- 8 bytes, 1 to 9,223,372,036,854,775,807

-- Decimal types
DECIMAL(precision, scale)   -- Exact numeric
NUMERIC(precision, scale)   -- Exact numeric (same as DECIMAL)
REAL                        -- 4 bytes, single precision
DOUBLE PRECISION            -- 8 bytes, double precision

-- Examples
CREATE TABLE numeric_examples (
    id SERIAL PRIMARY KEY,
    small_int SMALLINT,
    regular_int INTEGER,
    big_int BIGINT,
    price DECIMAL(10,2),
    percentage REAL,
    scientific DOUBLE PRECISION
);
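
Since PostgreSQL 10, standard-SQL identity columns are the usual alternative to SERIAL; a minimal sketch (the table name is illustrative):

-- Identity column instead of SERIAL (PostgreSQL 10+)
CREATE TABLE identity_examples (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    label TEXT
);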

Character Types

-- Character types
CHAR(n)             -- Fixed-length character string
VARCHAR(n)          -- Variable-length character string
TEXT                -- Variable-length character string (unlimited)

-- Examples
CREATE TABLE text_examples (
    id SERIAL PRIMARY KEY,
    code CHAR(5),           -- Fixed 5 characters
    name VARCHAR(100),      -- Up to 100 characters
    description TEXT        -- Unlimited length
);

Date and Time Types

-- Date and time types
DATE                        -- Date only (YYYY-MM-DD)
TIME                        -- Time only (HH:MM:SS)
TIME WITH TIME ZONE         -- Time with timezone
TIMESTAMP                   -- Date and time
TIMESTAMP WITH TIME ZONE    -- Date and time with timezone
INTERVAL                    -- Time interval

-- Examples
CREATE TABLE datetime_examples (
    id SERIAL PRIMARY KEY,
    birth_date DATE,
    meeting_time TIME,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    duration INTERVAL
);

-- Insert examples
INSERT INTO datetime_examples (birth_date, meeting_time, duration) VALUES
('1990-05-15', '14:30:00', '2 hours 30 minutes');

Boolean Type

-- Boolean type
BOOLEAN             -- TRUE, FALSE, or NULL

CREATE TABLE boolean_examples (
    id SERIAL PRIMARY KEY,
    is_active BOOLEAN DEFAULT TRUE,
    is_verified BOOLEAN DEFAULT FALSE
);

Array Types

-- Array types
INTEGER[]           -- Array of integers
TEXT[]              -- Array of text
VARCHAR(50)[]       -- Array of varchar

CREATE TABLE array_examples (
    id SERIAL PRIMARY KEY,
    tags TEXT[],
    scores INTEGER[],
    emails VARCHAR(100)[]
);

-- Insert array data
INSERT INTO array_examples (tags, scores, emails) VALUES
(ARRAY['tag1', 'tag2', 'tag3'], ARRAY[85, 92, 78], ARRAY['email1@example.com', 'email2@example.com']);

-- Alternative syntax
INSERT INTO array_examples (tags, scores) VALUES
('{"tag1", "tag2", "tag3"}', '{85, 92, 78}');

-- Query array data
SELECT * FROM array_examples WHERE 'tag1' = ANY(tags);
SELECT * FROM array_examples WHERE tags @> ARRAY['tag1'];
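
Arrays can also be expanded into rows or measured; two common helpers, shown against the table above:

-- Expand an array into one row per element
SELECT id, unnest(tags) AS tag FROM array_examples;

-- Number of elements in the first array dimension
SELECT id, array_length(tags, 1) AS tag_count FROM array_examples;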

JSON and JSONB Types

-- JSON types
JSON                -- JSON data (stored as text)
JSONB               -- Binary JSON (more efficient)

CREATE TABLE json_examples (
    id SERIAL PRIMARY KEY,
    data JSON,
    metadata JSONB
);

-- Insert JSON data
INSERT INTO json_examples (data, metadata) VALUES
('{"name": "John", "age": 30}', '{"preferences": {"theme": "dark", "language": "en"}}');

-- Query JSON data
SELECT data->>'name' as name FROM json_examples;
SELECT * FROM json_examples WHERE metadata @> '{"preferences": {"theme": "dark"}}';
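
The extraction operators differ mainly in return type; a short sketch against the table above:

-- -> returns JSON/JSONB, ->> returns text
SELECT data->'age' AS age_json, data->>'age' AS age_text FROM json_examples;

-- #>> extracts a nested value as text by path
SELECT metadata#>>'{preferences,theme}' AS theme FROM json_examples;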

UUID Type

-- UUID type
UUID                -- Universally Unique Identifier

-- Enable uuid extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE uuid_examples (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    name VARCHAR(100)
);

Geometric Types

-- Geometric types
POINT               -- Point on a plane
LINE                -- Infinite line
LSEG                -- Line segment
BOX                 -- Rectangular box
PATH                -- Geometric path
POLYGON             -- Closed geometric path
CIRCLE              -- Circle

CREATE TABLE geometric_examples (
    id SERIAL PRIMARY KEY,
    location POINT,
    area BOX,
    boundary POLYGON
);

-- Insert geometric data
INSERT INTO geometric_examples (location, area) VALUES
(POINT(1, 2), BOX(POINT(0, 0), POINT(10, 10)));
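
Geometric types come with distance and containment operators; a brief sketch using the table above:

-- Distance between two points
SELECT location <-> POINT(5, 5) AS distance FROM geometric_examples;

-- Test whether a point lies inside a box
SELECT POINT(3, 4) <@ area AS inside_area FROM geometric_examples;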

CRUD Operations

INSERT Operations

-- Basic insert
INSERT INTO users (username, email, password) 
VALUES ('john_doe', 'john@example.com', 'hashed_password');

-- Insert multiple rows
INSERT INTO users (username, email, password) VALUES
('alice', 'alice@example.com', 'password1'),
('bob', 'bob@example.com', 'password2'),
('charlie', 'charlie@example.com', 'password3');

-- Insert with returning
INSERT INTO users (username, email, password) 
VALUES ('dave', 'dave@example.com', 'password4')
RETURNING id, username, created_at;

-- Insert from another table
INSERT INTO users_backup (username, email)
SELECT username, email FROM users WHERE created_at < '2023-01-01';

-- Insert with ON CONFLICT (upsert)
INSERT INTO users (id, username, email) 
VALUES (1, 'john_doe', 'newemail@example.com')
ON CONFLICT (id) 
DO UPDATE SET email = EXCLUDED.email, updated_at = NOW();

-- Insert with DO NOTHING on conflict
INSERT INTO users (username, email, password) 
VALUES ('existing_user', 'existing@example.com', 'password')
ON CONFLICT (username) DO NOTHING;

-- Insert array data
INSERT INTO products (name, tags) 
VALUES ('Product 1', ARRAY['electronics', 'gadget', 'new']);

-- Insert JSON data
INSERT INTO user_preferences (user_id, preferences) 
VALUES (1, '{"theme": "dark", "notifications": true}');
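
For bulk loading, COPY is usually much faster than many single-row INSERTs; a minimal sketch (the file paths are hypothetical):

-- Server-side bulk load from CSV
COPY users (username, email, password)
FROM '/tmp/users.csv'
WITH (FORMAT csv, HEADER true);

-- Client-side equivalent from psql
\copy users (username, email, password) FROM 'users.csv' WITH (FORMAT csv, HEADER true)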

SELECT Operations

-- Basic select
SELECT * FROM users;

-- Select specific columns
SELECT username, email FROM users;

-- Select with alias
SELECT username AS user_name, email AS user_email FROM users;

-- Select with WHERE clause
SELECT * FROM users WHERE id = 1;
SELECT * FROM users WHERE username = 'john_doe';
SELECT * FROM users WHERE created_at > '2023-01-01';

-- Select with multiple conditions
SELECT * FROM users WHERE username = 'john_doe' AND email = 'john@example.com';
SELECT * FROM users WHERE id IN (1, 2, 3);
SELECT * FROM users WHERE username LIKE 'john%';
SELECT * FROM users WHERE email IS NOT NULL;

-- Select with ORDER BY
SELECT * FROM users ORDER BY created_at DESC;
SELECT * FROM users ORDER BY username ASC, created_at DESC;

-- Select with LIMIT and OFFSET
SELECT * FROM users LIMIT 10;
SELECT * FROM users LIMIT 10 OFFSET 20;
SELECT * FROM users ORDER BY id LIMIT 5;

-- Select with GROUP BY
SELECT COUNT(*) as user_count FROM users;
SELECT DATE(created_at) as date, COUNT(*) as daily_users 
FROM users GROUP BY DATE(created_at);

-- Select with HAVING
SELECT DATE(created_at) as date, COUNT(*) as daily_users 
FROM users 
GROUP BY DATE(created_at) 
HAVING COUNT(*) > 5;

-- Select with window functions
SELECT 
    username,
    created_at,
    ROW_NUMBER() OVER (ORDER BY created_at) as row_num,
    RANK() OVER (ORDER BY created_at) as rank,
    LAG(username) OVER (ORDER BY created_at) as prev_user
FROM users;

-- Select with CTE (Common Table Expression)
WITH recent_users AS (
    SELECT * FROM users WHERE created_at > '2023-01-01'
)
SELECT COUNT(*) FROM recent_users;

-- Recursive CTE
WITH RECURSIVE employee_hierarchy AS (
    SELECT id, name, manager_id, 1 as level
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    SELECT e.id, e.name, e.manager_id, eh.level + 1
    FROM employees e
    JOIN employee_hierarchy eh ON e.manager_id = eh.id
)
SELECT * FROM employee_hierarchy;
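
Joins combine rows from related tables; a short sketch assuming the posts table used elsewhere in this sheet:

-- Inner join: only users that have posts
SELECT u.username, p.title
FROM users u
JOIN posts p ON p.user_id = u.id;

-- Left join: all users, with NULLs where no post exists
SELECT u.username, p.title
FROM users u
LEFT JOIN posts p ON p.user_id = u.id;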

UPDATE Operations

-- Basic update
UPDATE users SET email = 'newemail@example.com' WHERE id = 1;

-- Update multiple columns
UPDATE users SET 
    email = 'newemail@example.com',
    updated_at = NOW()
WHERE id = 1;

-- Update with conditions
UPDATE users SET password = 'new_password' WHERE username = 'john_doe';

-- Update multiple rows
UPDATE users SET is_active = FALSE WHERE created_at < '2022-01-01';

-- Update with FROM clause
UPDATE users SET status = 'premium'
FROM subscriptions s
WHERE users.id = s.user_id AND s.plan = 'premium';

-- Update with subquery
UPDATE users SET status = 'premium'
WHERE id IN (SELECT user_id FROM subscriptions WHERE plan = 'premium');

-- Update with CASE
UPDATE users SET 
    status = CASE 
        WHEN created_at > '2023-01-01' THEN 'new'
        WHEN created_at > '2022-01-01' THEN 'regular'
        ELSE 'old'
    END;

-- Update with RETURNING
UPDATE users SET email = 'updated@example.com' 
WHERE id = 1 
RETURNING id, username, email, updated_at;

-- Update array elements
UPDATE products SET tags = array_append(tags, 'featured') WHERE id = 1;
UPDATE products SET tags = array_remove(tags, 'old') WHERE id = 1;

-- Update JSON data
UPDATE user_preferences 
SET preferences = preferences || '{"new_setting": true}'
WHERE user_id = 1;
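
For targeted changes inside a JSONB document, jsonb_set is often preferable to the || merge shown above; a sketch:

-- Update a single nested key without replacing the whole document
UPDATE user_preferences
SET preferences = jsonb_set(preferences, '{theme}', '"light"')
WHERE user_id = 1;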

DELETE Operations

-- Basic delete
DELETE FROM users WHERE id = 1;

-- Delete with conditions
DELETE FROM users WHERE username = 'john_doe';
DELETE FROM users WHERE created_at < '2022-01-01';

-- Delete multiple rows
DELETE FROM users WHERE id IN (1, 2, 3);

-- Delete with USING clause
DELETE FROM users
USING user_profiles p
WHERE users.id = p.user_id AND p.is_deleted = TRUE;

-- Delete with subquery
DELETE FROM users 
WHERE id IN (SELECT user_id FROM inactive_users);

-- Delete with RETURNING
DELETE FROM users WHERE id = 1 
RETURNING id, username, email;

-- Delete all rows (keep table structure)
DELETE FROM users;

-- Truncate table (faster for deleting all rows)
TRUNCATE TABLE users;
TRUNCATE TABLE users RESTART IDENTITY;

Indexes

Creating Indexes

-- Create index on single column
CREATE INDEX idx_users_username ON users(username);

-- Create index on multiple columns
CREATE INDEX idx_users_name_email ON users(username, email);

-- Create unique index
CREATE UNIQUE INDEX idx_users_email ON users(email);

-- Create partial index
CREATE INDEX idx_active_users ON users(username) WHERE is_active = TRUE;

-- Create expression index
CREATE INDEX idx_users_lower_email ON users(LOWER(email));

-- Create index with specific method
CREATE INDEX idx_users_username_hash ON users USING HASH (username);

-- Create index concurrently (doesn't block writes)
CREATE INDEX CONCURRENTLY idx_users_created_at ON users(created_at);

-- Create GIN index for arrays
CREATE INDEX idx_products_tags ON products USING GIN (tags);

-- Create GiST index for full-text search
CREATE INDEX idx_posts_content ON posts USING GiST (to_tsvector('english', content));

-- Create BRIN index for large tables
CREATE INDEX idx_logs_timestamp ON logs USING BRIN (timestamp);

Managing Indexes

-- List indexes for table
\d users
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'users';

-- Show index usage statistics
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
WHERE tablename = 'users';

-- Show index size
SELECT 
    indexname,
    pg_size_pretty(pg_relation_size(indexname::regclass)) as size
FROM pg_indexes
WHERE tablename = 'users';

-- Drop index
DROP INDEX idx_users_username;

-- Drop index concurrently
DROP INDEX CONCURRENTLY idx_users_username;

-- Reindex
REINDEX INDEX idx_users_username;
REINDEX TABLE users;
REINDEX DATABASE mydatabase;

-- Analyze index usage
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM users WHERE username = 'john_doe';

Index Types

-- B-tree index (default)
CREATE INDEX idx_btree ON users(username);

-- Hash index (for equality comparisons)
CREATE INDEX idx_hash ON users USING HASH (id);

-- GIN index (for arrays, JSONB, full-text search)
CREATE INDEX idx_gin_tags ON products USING GIN (tags);
CREATE INDEX idx_gin_jsonb ON products USING GIN (metadata);

-- GiST index (for geometric data, full-text search)
CREATE INDEX idx_gist_location ON locations USING GiST (coordinates);

-- SP-GiST index (for non-balanced data structures)
CREATE INDEX idx_spgist ON table_name USING SPGIST (column_name);

-- BRIN index (for very large tables with natural ordering)
CREATE INDEX idx_brin_timestamp ON large_table USING BRIN (timestamp);

-- Covering index (includes additional columns)
CREATE INDEX idx_covering ON users(username) INCLUDE (email, created_at);

Constraints

Primary Key

-- Add primary key during table creation
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50)
);

-- Add primary key to existing table
ALTER TABLE users ADD PRIMARY KEY (id);

-- Composite primary key
CREATE TABLE order_items (
    order_id INTEGER,
    product_id INTEGER,
    quantity INTEGER,
    PRIMARY KEY (order_id, product_id)
);

-- Named primary key constraint
CREATE TABLE users (
    id SERIAL,
    username VARCHAR(50),
    CONSTRAINT pk_users PRIMARY KEY (id)
);

-- Drop primary key
ALTER TABLE users DROP CONSTRAINT users_pkey;

Foreign Key

-- Add foreign key during table creation
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title VARCHAR(200)
);

-- Add foreign key to existing table
ALTER TABLE posts ADD CONSTRAINT fk_posts_user_id 
FOREIGN KEY (user_id) REFERENCES users(id);

-- Foreign key with actions
ALTER TABLE posts ADD CONSTRAINT fk_posts_user_id 
FOREIGN KEY (user_id) REFERENCES users(id)
ON DELETE CASCADE
ON UPDATE CASCADE;

-- Composite foreign key
ALTER TABLE order_items ADD CONSTRAINT fk_order_items_order
FOREIGN KEY (order_id, customer_id) REFERENCES orders(id, customer_id);

-- Drop foreign key
ALTER TABLE posts DROP CONSTRAINT fk_posts_user_id;

-- Show foreign keys
SELECT
    tc.constraint_name,
    tc.table_name,
    kcu.column_name,
    ccu.table_name AS foreign_table_name,
    ccu.column_name AS foreign_column_name,
    rc.update_rule,
    rc.delete_rule
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
    ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
    ON ccu.constraint_name = tc.constraint_name
JOIN information_schema.referential_constraints AS rc
    ON tc.constraint_name = rc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY';

Unique Constraint

-- Add unique constraint during table creation
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE,
    email VARCHAR(100) UNIQUE
);

-- Add unique constraint to existing table
ALTER TABLE users ADD CONSTRAINT uk_users_email UNIQUE (email);

-- Composite unique constraint
ALTER TABLE users ADD CONSTRAINT uk_users_username_email UNIQUE (username, email);

-- Partial unique constraint
CREATE UNIQUE INDEX uk_active_users_email ON users(email) WHERE is_active = TRUE;

-- Drop unique constraint
ALTER TABLE users DROP CONSTRAINT uk_users_email;

Check Constraint

-- Add check constraint during table creation
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    price DECIMAL(10,2) CHECK (price > 0),
    stock_quantity INTEGER CHECK (stock_quantity >= 0)
);

-- Add check constraint to existing table
ALTER TABLE products ADD CONSTRAINT chk_products_price CHECK (price > 0);

-- Complex check constraint
ALTER TABLE users ADD CONSTRAINT chk_users_email 
CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

-- Check constraint with multiple conditions
ALTER TABLE orders ADD CONSTRAINT chk_orders_status 
CHECK (status IN ('pending', 'processing', 'shipped', 'delivered', 'cancelled'));

-- Drop check constraint
ALTER TABLE products DROP CONSTRAINT chk_products_price;

-- Disable/Enable triggers (CHECK constraints themselves cannot be disabled)
ALTER TABLE products DISABLE TRIGGER ALL;
ALTER TABLE products ENABLE TRIGGER ALL;

NOT NULL Constraint

-- Add NOT NULL during table creation
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    email VARCHAR(100) NOT NULL
);

-- Add NOT NULL to existing column
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- Remove NOT NULL constraint
ALTER TABLE users ALTER COLUMN phone DROP NOT NULL;

-- Add column with NOT NULL and default
ALTER TABLE users ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'active';

Exclusion Constraint

-- Exclusion constraint (PostgreSQL specific)
-- Requires the btree_gist extension for the = operator on room_id
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE reservations (
    id SERIAL PRIMARY KEY,
    room_id INTEGER,
    during TSRANGE,
    EXCLUDE USING GIST (room_id WITH =, during WITH &&)
);

-- Exclusion constraint with condition
ALTER TABLE reservations ADD CONSTRAINT exclude_overlapping_reservations
EXCLUDE USING GIST (room_id WITH =, during WITH &&) WHERE (status = 'confirmed');

Advanced SQL

Window Functions

-- ROW_NUMBER, RANK, DENSE_RANK
SELECT 
    username,
    created_at,
    ROW_NUMBER() OVER (ORDER BY created_at) as row_num,
    RANK() OVER (ORDER BY created_at) as rank,
    DENSE_RANK() OVER (ORDER BY created_at) as dense_rank
FROM users;

-- Partition by
SELECT 
    username,
    department,
    salary,
    ROW_NUMBER() OVER (PARTITION BY department ORDER BY salary DESC) as dept_rank,
    AVG(salary) OVER (PARTITION BY department) as dept_avg_salary
FROM employees;

-- LAG and LEAD
SELECT 
    username,
    created_at,
    LAG(created_at) OVER (ORDER BY created_at) as prev_created_at,
    LEAD(created_at) OVER (ORDER BY created_at) as next_created_at
FROM users;

-- FIRST_VALUE and LAST_VALUE
SELECT 
    username,
    salary,
    FIRST_VALUE(salary) OVER (ORDER BY salary DESC) as highest_salary,
    LAST_VALUE(salary) OVER (ORDER BY salary DESC ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as lowest_salary
FROM employees;

-- NTILE (divide into buckets)
SELECT 
    username,
    salary,
    NTILE(4) OVER (ORDER BY salary) as salary_quartile
FROM employees;

-- Cumulative sum
SELECT 
    order_date,
    amount,
    SUM(amount) OVER (ORDER BY order_date) as running_total
FROM orders;
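
Frame clauses control how many rows each window aggregate sees; a moving-average sketch over the same orders table:

-- Moving average over the current row and the 6 preceding rows
SELECT
    order_date,
    amount,
    AVG(amount) OVER (
        ORDER BY order_date
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS moving_avg
FROM orders;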

Common Table Expressions (CTEs)

-- Basic CTE
WITH recent_users AS (
    SELECT * FROM users WHERE created_at > '2023-01-01'
)
SELECT COUNT(*) FROM recent_users;

-- Multiple CTEs
WITH 
active_users AS (
    SELECT * FROM users WHERE is_active = TRUE
),
user_stats AS (
    SELECT 
        COUNT(*) as total_users,
        AVG(EXTRACT(YEAR FROM AGE(birth_date))) as avg_age
    FROM active_users
)
SELECT * FROM user_stats;

-- Recursive CTE
WITH RECURSIVE employee_hierarchy AS (
    -- Base case
    SELECT id, name, manager_id, 1 as level, name as path
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    -- Recursive case
    SELECT 
        e.id, 
        e.name, 
        e.manager_id, 
        eh.level + 1,
        eh.path || ' -> ' || e.name
    FROM employees e
    JOIN employee_hierarchy eh ON e.manager_id = eh.id
    WHERE eh.level < 10  -- Prevent infinite recursion
)
SELECT * FROM employee_hierarchy ORDER BY level, path;

-- CTE for data modification
WITH deleted_users AS (
    DELETE FROM users 
    WHERE last_login < '2022-01-01' 
    RETURNING *
)
INSERT INTO archived_users SELECT * FROM deleted_users;

LATERAL Joins

-- LATERAL join (correlated subquery in FROM clause)
SELECT 
    u.username,
    recent_posts.title,
    recent_posts.created_at
FROM users u
CROSS JOIN LATERAL (
    SELECT title, created_at
    FROM posts p
    WHERE p.user_id = u.id
    ORDER BY created_at DESC
    LIMIT 3
) recent_posts;

-- LATERAL with function
SELECT 
    u.username,
    user_stats.post_count,
    user_stats.avg_post_length
FROM users u
CROSS JOIN LATERAL (
    SELECT 
        COUNT(*) as post_count,
        AVG(LENGTH(content)) as avg_post_length
    FROM posts p
    WHERE p.user_id = u.id
) user_stats;

CASE Expressions

-- Simple CASE
SELECT 
    username,
    CASE status
        WHEN 'active' THEN 'User is active'
        WHEN 'inactive' THEN 'User is inactive'
        WHEN 'banned' THEN 'User is banned'
        ELSE 'Unknown status'
    END as status_description
FROM users;

-- Searched CASE
SELECT 
    username,
    salary,
    CASE 
        WHEN salary < 30000 THEN 'Low'
        WHEN salary < 60000 THEN 'Medium'
        WHEN salary < 100000 THEN 'High'
        ELSE 'Very High'
    END as salary_category
FROM employees;

-- CASE in aggregate
SELECT 
    department,
    COUNT(*) as total_employees,
    COUNT(CASE WHEN salary > 50000 THEN 1 END) as high_earners,
    COUNT(CASE WHEN salary <= 50000 THEN 1 END) as low_earners
FROM employees
GROUP BY department;

Subqueries

-- Scalar subquery
SELECT 
    username,
    (SELECT COUNT(*) FROM posts WHERE user_id = users.id) as post_count
FROM users;

-- EXISTS subquery
SELECT * FROM users u
WHERE EXISTS (SELECT 1 FROM posts p WHERE p.user_id = u.id);

-- NOT EXISTS subquery
SELECT * FROM users u
WHERE NOT EXISTS (SELECT 1 FROM posts p WHERE p.user_id = u.id);

-- IN subquery
SELECT * FROM users 
WHERE id IN (SELECT user_id FROM posts WHERE created_at > '2023-01-01');

-- ANY/ALL subqueries
SELECT * FROM products 
WHERE price > ANY (SELECT price FROM products WHERE category = 'electronics');

SELECT * FROM products 
WHERE price > ALL (SELECT price FROM products WHERE category = 'books');

Functions and Procedures

Built-in Functions

-- String functions
SELECT 
    CONCAT('Hello', ' ', 'World'),                    -- Hello World
    LENGTH('Hello World'),                            -- 11
    UPPER('hello world'),                             -- HELLO WORLD
    LOWER('HELLO WORLD'),                             -- hello world
    SUBSTRING('Hello World', 1, 5),                   -- Hello
    LEFT('Hello World', 5),                           -- Hello
    RIGHT('Hello World', 5),                          -- World
    TRIM('  Hello World  '),                          -- Hello World
    REPLACE('Hello World', 'World', 'PostgreSQL'),    -- Hello PostgreSQL
    REVERSE('Hello'),                                 -- olleH
    REPEAT('Ha', 3),                                  -- HaHaHa
    LPAD('123', 5, '0'),                             -- 00123
    RPAD('123', 5, '0');                             -- 12300

-- String search and pattern matching
SELECT 
    POSITION('World' IN 'Hello World'),               -- 7
    STRPOS('Hello World', 'World'),                   -- 7
    'Hello World' LIKE 'Hello%',                      -- true
    'Hello World' ILIKE 'hello%',                     -- true (case insensitive)
    'Hello World' ~ '^Hello',                         -- true (regex)
    'Hello World' ~* '^hello';                        -- true (case insensitive regex)

-- Numeric functions
SELECT 
    ABS(-15),                                         -- 15
    CEIL(4.3),                                        -- 5
    FLOOR(4.7),                                       -- 4
    ROUND(4.567, 2),                                  -- 4.57
    TRUNC(4.567, 2),                                  -- 4.56
    MOD(10, 3),                                       -- 1
    POWER(2, 3),                                      -- 8
    SQRT(16),                                         -- 4
    RANDOM(),                                         -- Random number 0-1
    GREATEST(1, 5, 3, 9, 2),                         -- 9
    LEAST(1, 5, 3, 9, 2);                            -- 1

-- Date and time functions
SELECT 
    NOW(),                                            -- Current timestamp
    CURRENT_DATE,                                     -- Current date
    CURRENT_TIME,                                     -- Current time
    EXTRACT(YEAR FROM NOW()),                         -- Current year
    EXTRACT(MONTH FROM NOW()),                        -- Current month
    EXTRACT(DAY FROM NOW()),                          -- Current day
    DATE_PART('year', NOW()),                         -- Current year
    AGE('1990-05-15'),                               -- Age from birth date
    DATE_TRUNC('month', NOW()),                       -- First day of current month
    NOW() + INTERVAL '1 day',                         -- Tomorrow
    NOW() - INTERVAL '1 week';                        -- Last week

-- Aggregate functions
SELECT 
    COUNT(*),
    COUNT(DISTINCT username),
    SUM(salary),
    AVG(salary),
    MIN(salary),
    MAX(salary),
    STDDEV(salary),
    VARIANCE(salary),
    STRING_AGG(username, ', ' ORDER BY username)      -- Concatenate strings
FROM employees;
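
Conversion and formatting functions round out the built-ins; a few common ones:

-- Casting and formatting
SELECT 
    CAST('123' AS INTEGER),                    -- 123
    '2023-12-25'::DATE,                        -- text cast to date
    TO_CHAR(NOW(), 'YYYY-MM-DD HH24:MI'),      -- format timestamp as text
    TO_DATE('25/12/2023', 'DD/MM/YYYY');       -- parse text as date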

User-Defined Functions

-- Simple function
CREATE OR REPLACE FUNCTION get_user_count()
RETURNS INTEGER AS $$
BEGIN
    RETURN (SELECT COUNT(*) FROM users);
END;
$$ LANGUAGE plpgsql;

-- Function with parameters
CREATE OR REPLACE FUNCTION get_user_by_id(user_id INTEGER)
RETURNS TABLE(id INTEGER, username VARCHAR, email VARCHAR) AS $$
BEGIN
    RETURN QUERY
    SELECT u.id, u.username, u.email
    FROM users u
    WHERE u.id = user_id;
END;
$$ LANGUAGE plpgsql;

-- Function with default parameters
CREATE OR REPLACE FUNCTION get_users_by_status(
    user_status VARCHAR DEFAULT 'active',
    limit_count INTEGER DEFAULT 10
)
RETURNS TABLE(id INTEGER, username VARCHAR, status VARCHAR) AS $$
BEGIN
    RETURN QUERY
    SELECT u.id, u.username, u.status
    FROM users u
    WHERE u.status = user_status
    LIMIT limit_count;
END;
$$ LANGUAGE plpgsql;

-- Function with OUT parameters
CREATE OR REPLACE FUNCTION get_user_stats(
    OUT total_users INTEGER,
    OUT active_users INTEGER,
    OUT inactive_users INTEGER
) AS $$
BEGIN
    SELECT COUNT(*) INTO total_users FROM users;
    SELECT COUNT(*) INTO active_users FROM users WHERE is_active = TRUE;
    SELECT COUNT(*) INTO inactive_users FROM users WHERE is_active = FALSE;
END;
$$ LANGUAGE plpgsql;

-- Function with exception handling
CREATE OR REPLACE FUNCTION safe_divide(a NUMERIC, b NUMERIC)
RETURNS NUMERIC AS $$
BEGIN
    IF b = 0 THEN
        RAISE EXCEPTION 'Division by zero';
    END IF;
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN
        RAISE NOTICE 'Cannot divide by zero, returning NULL';
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;
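
Simple functions do not need PL/pgSQL at all; a plain SQL-language sketch (the function name is illustrative):

-- SQL-language function (parameters referenced by name)
CREATE OR REPLACE FUNCTION add_numbers(a INTEGER, b INTEGER)
RETURNS INTEGER AS $$
    SELECT a + b;
$$ LANGUAGE sql IMMUTABLE;

SELECT add_numbers(2, 3);  -- 5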

Stored Procedures

-- Simple procedure
CREATE OR REPLACE PROCEDURE update_user_status(
    user_id INTEGER,
    new_status VARCHAR
) AS $$
BEGIN
    UPDATE users SET status = new_status WHERE id = user_id;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'User with id % not found', user_id;
    END IF;

    COMMIT;
END;
$$ LANGUAGE plpgsql;

-- Procedure with transaction control
CREATE OR REPLACE PROCEDURE transfer_funds(
    from_account INTEGER,
    to_account INTEGER,
    amount DECIMAL
) AS $$
DECLARE
    from_balance DECIMAL;
BEGIN
    -- Start transaction
    BEGIN
        -- Check balance
        SELECT balance INTO from_balance 
        FROM accounts 
        WHERE id = from_account 
        FOR UPDATE;

        IF from_balance < amount THEN
            RAISE EXCEPTION 'Insufficient funds';
        END IF;

        -- Debit from source account
        UPDATE accounts 
        SET balance = balance - amount 
        WHERE id = from_account;

        -- Credit to destination account
        UPDATE accounts 
        SET balance = balance + amount 
        WHERE id = to_account;

        -- Log transaction
        INSERT INTO transactions (from_account, to_account, amount, timestamp)
        VALUES (from_account, to_account, amount, NOW());

        RAISE NOTICE 'Transfer completed successfully';

    EXCEPTION
        WHEN OTHERS THEN
            -- Changes made in this block are rolled back automatically;
            -- COMMIT/ROLLBACK are not allowed in a block with an EXCEPTION clause
            RAISE EXCEPTION 'Transfer failed: %', SQLERRM;
    END;
END;
$$ LANGUAGE plpgsql;

-- Call procedures
CALL update_user_status(1, 'inactive');
CALL transfer_funds(100, 200, 500.00);

Managing Functions and Procedures

-- List functions
\df
SELECT routine_name, routine_type 
FROM information_schema.routines 
WHERE routine_schema = 'public';

-- Show function definition
\df+ function_name
SELECT pg_get_functiondef(oid) 
FROM pg_proc 
WHERE proname = 'function_name';

-- Drop function
DROP FUNCTION IF EXISTS get_user_count();

-- Drop procedure
DROP PROCEDURE IF EXISTS update_user_status(INTEGER, VARCHAR);

-- Call function
SELECT get_user_count();
SELECT * FROM get_user_by_id(1);

-- Call function with OUT parameters
SELECT * FROM get_user_stats();

Triggers

Creating Triggers

-- Create trigger function
CREATE OR REPLACE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Create trigger
CREATE TRIGGER update_users_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION update_modified_column();

-- BEFORE INSERT trigger
CREATE OR REPLACE FUNCTION before_user_insert()
RETURNS TRIGGER AS $$
BEGIN
    NEW.created_at = NOW();
    NEW.username = LOWER(NEW.username);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER before_user_insert_trigger
    BEFORE INSERT ON users
    FOR EACH ROW
    EXECUTE FUNCTION before_user_insert();

-- AFTER INSERT trigger
CREATE OR REPLACE FUNCTION after_user_insert()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO user_audit (user_id, action, timestamp)
    VALUES (NEW.id, 'INSERT', NOW());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER after_user_insert_trigger
    AFTER INSERT ON users
    FOR EACH ROW
    EXECUTE FUNCTION after_user_insert();

-- BEFORE UPDATE trigger with conditions
CREATE OR REPLACE FUNCTION before_user_update()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();

    -- Check if email changed
    IF NEW.email != OLD.email THEN
        NEW.email_verified = FALSE;
    END IF;

    -- Prevent username changes
    IF NEW.username != OLD.username THEN
        RAISE EXCEPTION 'Username cannot be changed';
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER before_user_update_trigger
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION before_user_update();

-- AFTER UPDATE trigger
CREATE OR REPLACE FUNCTION after_user_update()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO user_audit (user_id, action, old_values, new_values, timestamp)
    VALUES (
        NEW.id, 
        'UPDATE', 
        row_to_json(OLD),
        row_to_json(NEW),
        NOW()
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER after_user_update_trigger
    AFTER UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION after_user_update();

-- BEFORE DELETE trigger
CREATE OR REPLACE FUNCTION before_user_delete()
RETURNS TRIGGER AS $$
BEGIN
    -- Archive user before deletion
    INSERT INTO deleted_users (original_id, username, email, deleted_at)
    VALUES (OLD.id, OLD.username, OLD.email, NOW());
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER before_user_delete_trigger
    BEFORE DELETE ON users
    FOR EACH ROW
    EXECUTE FUNCTION before_user_delete();

Advanced Triggers

-- Statement-level trigger
CREATE OR REPLACE FUNCTION log_table_changes()
RETURNS TRIGGER AS $$
BEGIN
    -- TG_OP is 'INSERT', 'UPDATE' or 'DELETE'; statement-level triggers have no
    -- per-row data (use transition tables via REFERENCING if row counts are needed)
    INSERT INTO table_audit (table_name, operation, timestamp)
    VALUES (TG_TABLE_NAME, TG_OP, NOW());
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_users_changes
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_table_changes();

-- Conditional trigger
CREATE OR REPLACE FUNCTION update_product_stock()
RETURNS TRIGGER AS $$
DECLARE
    current_stock INTEGER;
BEGIN
    SELECT stock_quantity INTO current_stock 
    FROM products 
    WHERE id = NEW.product_id;

    IF current_stock >= NEW.quantity THEN
        UPDATE products 
        SET stock_quantity = stock_quantity - NEW.quantity 
        WHERE id = NEW.product_id;
    ELSE
        RAISE EXCEPTION 'Insufficient stock. Available: %, Requested: %', 
                       current_stock, NEW.quantity;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_product_stock_trigger
    AFTER INSERT ON order_items
    FOR EACH ROW
    EXECUTE FUNCTION update_product_stock();

-- Trigger with WHEN clause
CREATE TRIGGER update_expensive_products_only
    BEFORE UPDATE ON products
    FOR EACH ROW
    WHEN (NEW.price > 100)
    EXECUTE FUNCTION log_expensive_product_changes();

Managing Triggers

-- List triggers (no dedicated psql meta-command; query the information schema)
SELECT trigger_name, event_manipulation, event_object_table
FROM information_schema.triggers
WHERE trigger_schema = 'public';

-- Show trigger definition
\d+ table_name

-- Disable trigger
ALTER TABLE users DISABLE TRIGGER update_users_updated_at;

-- Enable trigger
ALTER TABLE users ENABLE TRIGGER update_users_updated_at;

-- Disable all triggers on table
ALTER TABLE users DISABLE TRIGGER ALL;

-- Enable all triggers on table
ALTER TABLE users ENABLE TRIGGER ALL;

-- Drop trigger
DROP TRIGGER IF EXISTS update_users_updated_at ON users;

-- Drop trigger function
DROP FUNCTION IF EXISTS update_modified_column();

Views and Materialized Views

Creating Views

-- Simple view
CREATE VIEW active_users AS
SELECT id, username, email, created_at
FROM users
WHERE is_active = TRUE;

-- View with JOIN
CREATE VIEW user_posts AS
SELECT 
    u.username,
    u.email,
    p.title,
    p.content,
    p.created_at as post_date
FROM users u
JOIN posts p ON u.id = p.user_id;

-- View with aggregation
CREATE VIEW user_stats AS
SELECT 
    u.id,
    u.username,
    COUNT(p.id) as post_count,
    MAX(p.created_at) as last_post_date,
    AVG(LENGTH(p.content)) as avg_post_length
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.id, u.username;

-- View with window functions
CREATE VIEW user_rankings AS
SELECT 
    username,
    post_count,
    RANK() OVER (ORDER BY post_count DESC) as rank,
    PERCENT_RANK() OVER (ORDER BY post_count DESC) as percentile
FROM (
    SELECT 
        u.username,
        COUNT(p.id) as post_count
    FROM users u
    LEFT JOIN posts p ON u.id = p.user_id
    GROUP BY u.username
) user_post_counts;

-- Recursive view (the column list is required)
CREATE RECURSIVE VIEW employee_hierarchy (id, name, manager_id, level) AS
    SELECT id, name, manager_id, 1 as level
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    SELECT e.id, e.name, e.manager_id, eh.level + 1
    FROM employees e
    JOIN employee_hierarchy eh ON e.manager_id = eh.id;

Materialized Views

-- Create materialized view
CREATE MATERIALIZED VIEW user_summary AS
SELECT 
    DATE_TRUNC('month', created_at) as month,
    COUNT(*) as user_count,
    COUNT(*) FILTER (WHERE is_active = TRUE) as active_count
FROM users
GROUP BY DATE_TRUNC('month', created_at)
ORDER BY month;

-- Create materialized view with index
CREATE MATERIALIZED VIEW product_sales_summary AS
SELECT 
    p.id,
    p.name,
    SUM(oi.quantity) as total_sold,
    SUM(oi.quantity * p.price) as total_revenue
FROM products p
JOIN order_items oi ON p.id = oi.product_id
GROUP BY p.id, p.name;

CREATE INDEX idx_product_sales_summary_revenue 
ON product_sales_summary(total_revenue DESC);

-- Materialized view with data
CREATE MATERIALIZED VIEW monthly_sales AS
SELECT 
    DATE_TRUNC('month', order_date) as month,
    SUM(total_amount) as total_sales,
    COUNT(*) as order_count,
    AVG(total_amount) as avg_order_value
FROM orders
WHERE order_date >= '2023-01-01'
GROUP BY DATE_TRUNC('month', order_date)
WITH DATA;

-- Materialized view without data (populate later)
CREATE MATERIALIZED VIEW large_summary AS
SELECT * FROM very_large_table
WITH NO DATA;

Managing Views

-- List views
\dv
SELECT table_name, view_definition
FROM information_schema.views
WHERE table_schema = 'public';

-- List materialized views
\dm
SELECT schemaname, matviewname, ispopulated
FROM pg_matviews;

-- Show view definition
\d+ view_name

-- Query views
SELECT * FROM active_users;
SELECT * FROM user_stats WHERE post_count > 5;

-- Update view (if updatable)
UPDATE active_users SET email = 'new@example.com' WHERE id = 1;

-- Refresh materialized view
REFRESH MATERIALIZED VIEW user_summary;

-- Refresh materialized view concurrently (requires unique index)
REFRESH MATERIALIZED VIEW CONCURRENTLY product_sales_summary;

-- Populate materialized view
REFRESH MATERIALIZED VIEW large_summary;

-- Drop view
DROP VIEW IF EXISTS active_users;

-- Drop materialized view
DROP MATERIALIZED VIEW IF EXISTS user_summary;

-- Alter view
CREATE OR REPLACE VIEW active_users AS
SELECT id, username, email, created_at, last_login
FROM users
WHERE is_active = TRUE;

Updatable Views

-- Create updatable view
CREATE VIEW user_profiles AS
SELECT id, username, email, first_name, last_name
FROM users
WHERE is_active = TRUE;

-- Insert through view
INSERT INTO user_profiles (username, email, first_name, last_name)
VALUES ('newuser', 'new@example.com', 'New', 'User');

-- Update through view
UPDATE user_profiles 
SET email = 'updated@example.com' 
WHERE id = 1;

-- Delete through view
DELETE FROM user_profiles WHERE id = 1;

-- View with INSTEAD OF triggers for complex updates
CREATE OR REPLACE FUNCTION update_user_profile()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO users (username, email, first_name, last_name, is_active)
        VALUES (NEW.username, NEW.email, NEW.first_name, NEW.last_name, TRUE);
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE users SET
            username = NEW.username,
            email = NEW.email,
            first_name = NEW.first_name,
            last_name = NEW.last_name
        WHERE id = NEW.id;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE users SET is_active = FALSE WHERE id = OLD.id;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_profiles_trigger
    INSTEAD OF INSERT OR UPDATE OR DELETE ON user_profiles
    FOR EACH ROW
    EXECUTE FUNCTION update_user_profile();

User Management

Creating Users and Roles

-- Create user
CREATE USER username WITH PASSWORD 'password';

-- Create user with options
CREATE USER username WITH 
    PASSWORD 'password'
    CREATEDB
    CREATEROLE
    LOGIN
    VALID UNTIL '2024-12-31';

-- Create role (cannot login by default)
CREATE ROLE role_name;

-- Create role with login capability
CREATE ROLE role_name WITH LOGIN PASSWORD 'password';

-- Create role with specific attributes
CREATE ROLE app_role WITH
    NOLOGIN
    CREATEDB
    CREATEROLE
    CONNECTION LIMIT 10;

Granting Privileges

-- Grant database privileges
GRANT CONNECT ON DATABASE mydatabase TO username;
GRANT CREATE ON DATABASE mydatabase TO username;
GRANT ALL PRIVILEGES ON DATABASE mydatabase TO username;

-- Grant schema privileges
GRANT USAGE ON SCHEMA public TO username;
GRANT CREATE ON SCHEMA public TO username;
GRANT ALL ON SCHEMA public TO username;

-- Grant table privileges
GRANT SELECT ON users TO username;
GRANT INSERT, UPDATE, DELETE ON users TO username;
GRANT ALL PRIVILEGES ON users TO username;

-- Grant privileges on all tables in schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO username;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO username;

-- Grant privileges on future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public 
GRANT SELECT ON TABLES TO username;

-- Grant sequence privileges
GRANT USAGE, SELECT ON SEQUENCE users_id_seq TO username;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO username;

-- Grant function privileges
GRANT EXECUTE ON FUNCTION function_name TO username;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO username;

-- Grant column-level privileges
GRANT SELECT (username, email), UPDATE (email) ON users TO username;

Role Management

-- Grant role to user
GRANT role_name TO username;

-- Grant role with admin option
GRANT role_name TO username WITH ADMIN OPTION;

-- Revoke role from user
REVOKE role_name FROM username;

-- Set default role
ALTER USER username SET ROLE role_name;

-- Create role hierarchy
CREATE ROLE read_only;
CREATE ROLE read_write;
CREATE ROLE admin;

GRANT read_only TO read_write;
GRANT read_write TO admin;

-- Grant privileges to roles
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only;
GRANT INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO read_write;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO admin;

Managing User Privileges

-- Show current user
SELECT current_user;

-- Show current role
SELECT current_role;

-- Show user privileges
\du
SELECT * FROM pg_user;
SELECT * FROM pg_roles;

-- Show table privileges
\dp table_name
SELECT grantee, privilege_type 
FROM information_schema.role_table_grants 
WHERE table_name = 'users';

-- Show database privileges
SELECT datname, datacl FROM pg_database WHERE datname = 'mydatabase';

-- Revoke privileges
REVOKE SELECT ON users FROM username;
REVOKE ALL PRIVILEGES ON users FROM username;
REVOKE CONNECT ON DATABASE mydatabase FROM username;

-- Change user password
ALTER USER username WITH PASSWORD 'new_password';

-- Change user attributes
ALTER USER username WITH CREATEDB;
ALTER USER username WITH NOCREATEDB;
ALTER USER username WITH CREATEROLE;
ALTER USER username WITH NOCREATEROLE;

-- Set connection limit
ALTER USER username WITH CONNECTION LIMIT 5;

-- Set password expiration
ALTER USER username WITH VALID UNTIL '2024-12-31';

-- Lock user account
ALTER USER username WITH NOLOGIN;

-- Unlock user account
ALTER USER username WITH LOGIN;

-- Drop user
DROP USER username;

-- Drop role
DROP ROLE role_name;

Row-Level Security

-- Enable RLS on table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Create policy
CREATE POLICY user_policy ON users
    FOR ALL
    TO application_role
    USING (user_id = current_setting('app.current_user_id')::INTEGER);

-- Create policy for SELECT
CREATE POLICY select_own_posts ON posts
    FOR SELECT
    TO application_role
    USING (user_id = current_setting('app.current_user_id')::INTEGER);

-- Create policy for INSERT
CREATE POLICY insert_own_posts ON posts
    FOR INSERT
    TO application_role
    WITH CHECK (user_id = current_setting('app.current_user_id')::INTEGER);

-- Create policy for UPDATE
CREATE POLICY update_own_posts ON posts
    FOR UPDATE
    TO application_role
    USING (user_id = current_setting('app.current_user_id')::INTEGER)
    WITH CHECK (user_id = current_setting('app.current_user_id')::INTEGER);

-- Create policy for DELETE
CREATE POLICY delete_own_posts ON posts
    FOR DELETE
    TO application_role
    USING (user_id = current_setting('app.current_user_id')::INTEGER);

-- Show policies
\d+ table_name
SELECT * FROM pg_policies WHERE tablename = 'users';

-- Disable RLS
ALTER TABLE users DISABLE ROW LEVEL SECURITY;

-- Drop policy
DROP POLICY user_policy ON users;

-- Force RLS for table owners
ALTER TABLE users FORCE ROW LEVEL SECURITY;
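
These policies rely on a custom setting supplied by the application; a sketch of how a session might provide it (the value 42 is illustrative):

-- Set the custom setting for the current session
SET app.current_user_id = '42';

-- Or scope it to the current transaction only
SELECT set_config('app.current_user_id', '42', true);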

Backup and Restore

pg_dump Backups

# Backup single database
pg_dump -U username -h hostname database_name > backup.sql

# Backup with specific options
pg_dump -U username -h hostname \
  --verbose \
  --clean \
  --no-owner \
  --no-privileges \
  database_name > backup.sql

# Backup in custom format (compressed)
pg_dump -U username -h hostname -Fc database_name > backup.dump

# Backup in directory format
pg_dump -U username -h hostname -Fd database_name -f backup_dir

# Backup specific tables
pg_dump -U username -h hostname -t users -t posts database_name > tables_backup.sql

# Backup specific schema
pg_dump -U username -h hostname -n public database_name > schema_backup.sql

# Backup structure only (no data)
pg_dump -U username -h hostname --schema-only database_name > structure.sql

# Backup data only (no structure)
pg_dump -U username -h hostname --data-only database_name > data.sql

# Backup with parallel jobs
pg_dump -U username -h hostname -Fd -j 4 database_name -f backup_dir

# Exclude specific tables
pg_dump -U username -h hostname --exclude-table=logs database_name > backup.sql

# Backup all databases
pg_dumpall -U username -h hostname > all_databases.sql

# Backup only global objects (roles, tablespaces)
pg_dumpall -U username -h hostname --globals-only > globals.sql

Restoring Backups

# Restore from SQL file
psql -U username -h hostname -d database_name < backup.sql

# Restore and create database
createdb -U username -h hostname database_name
psql -U username -h hostname -d database_name < backup.sql

# Restore from custom format
pg_restore -U username -h hostname -d database_name backup.dump

# Restore with specific options
pg_restore -U username -h hostname \
  --verbose \
  --clean \
  --no-owner \
  --no-privileges \
  -d database_name backup.dump

# Restore specific tables
pg_restore -U username -h hostname -d database_name -t users backup.dump

# Restore with parallel jobs
pg_restore -U username -h hostname -d database_name -j 4 backup.dump

# List contents of backup file
pg_restore --list backup.dump

# Restore from directory format
pg_restore -U username -h hostname -d database_name backup_dir

# Restore all databases
psql -U username -h hostname < all_databases.sql

Point-in-Time Recovery (PITR)

-- Enable WAL archiving (in postgresql.conf)
-- wal_level = replica
-- archive_mode = on
-- archive_command = 'cp %p /path/to/archive/%f'

-- Create base backup
SELECT pg_start_backup('backup_label');
-- Copy data directory
SELECT pg_stop_backup();

-- Or use pg_basebackup
# Create base backup using pg_basebackup
pg_basebackup -U username -h hostname -D /backup/base -Ft -z -P

# Point-in-time recovery
# 1. Stop PostgreSQL
# 2. Replace data directory with base backup
# 3. Create recovery.conf
# 4. Start PostgreSQL

# recovery.conf example (on PostgreSQL 12+, put these settings in postgresql.conf
# and create an empty recovery.signal file instead)
restore_command = 'cp /path/to/archive/%f %p'
recovery_target_time = '2023-12-25 14:30:00'

Continuous Archiving

# Configure WAL archiving (postgresql.conf)
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /backup/archive/%f && cp %p /backup/archive/%f'
archive_timeout = 300

# Create base backup
pg_basebackup -U replication_user -h hostname -D /backup/base -Ft -z -P -W

# Automated backup script
#!/bin/bash
BACKUP_DIR="/backup/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
DB_NAME="mydatabase"

# Create backup directory
mkdir -p $BACKUP_DIR/$DATE

# Perform base backup
pg_basebackup -U postgres -D $BACKUP_DIR/$DATE -Ft -z -P

# Clean old backups (keep last 7 days)
find $BACKUP_DIR -type d -mtime +7 -exec rm -rf {} \;

echo "Backup completed: $BACKUP_DIR/$DATE"

Performance Optimization

Query Optimization

-- Use EXPLAIN to analyze queries
EXPLAIN SELECT * FROM users WHERE username = 'john_doe';

-- Use EXPLAIN ANALYZE for actual execution stats
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM users WHERE username = 'john_doe';

-- Use EXPLAIN with different options
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, FORMAT JSON) 
SELECT u.username, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.username;

-- Optimize WHERE clauses
-- Bad: Function on column prevents index usage
SELECT * FROM users WHERE UPPER(username) = 'JOHN_DOE';

-- Good: Use functional index or rewrite query
CREATE INDEX idx_users_upper_username ON users(UPPER(username));
-- Or
SELECT * FROM users WHERE username = 'john_doe';

-- Use appropriate JOIN types
-- Use EXISTS instead of IN for better performance
SELECT * FROM users u
WHERE EXISTS (SELECT 1 FROM posts p WHERE p.user_id = u.id);

-- Use LIMIT for large result sets
SELECT * FROM users ORDER BY created_at DESC LIMIT 10;

-- Use partial indexes for filtered queries
CREATE INDEX idx_active_users ON users(username) WHERE is_active = TRUE;

Index Optimization

-- Analyze table statistics
ANALYZE users;

-- Update all table statistics
ANALYZE;

-- Check index usage
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_tup_read,
    idx_tup_fetch,
    idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- Find unused indexes
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan,
    pg_size_pretty(pg_relation_size(indexrelid)) as size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

-- Check index bloat (avg_leaf_density requires the pgstattuple extension)
-- CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT 
    schemaname,
    tablename,
    indexname,
    pg_size_pretty(pg_relation_size(indexrelid)) as size,
    CASE 
        WHEN pg_relation_size(indexrelid) = 0 THEN 0
        ELSE (pgstatindex(indexrelid)).avg_leaf_density
    END as avg_leaf_density
FROM pg_stat_user_indexes;

-- Reindex to reduce bloat
REINDEX INDEX idx_users_username;
REINDEX TABLE users;
REINDEX DATABASE mydatabase;

-- Create covering indexes
CREATE INDEX idx_users_covering ON users(username) INCLUDE (email, created_at);

Configuration Tuning

-- Show current configuration
SHOW ALL;

-- Important parameters to tune
SHOW shared_buffers;          -- Should be 25% of RAM
SHOW effective_cache_size;    -- Should be 50-75% of RAM
SHOW work_mem;                -- Per-operation memory
SHOW maintenance_work_mem;    -- For maintenance operations
SHOW max_connections;         -- Based on application needs
SHOW max_wal_size;            -- For write-heavy workloads (replaces checkpoint_segments, removed in 9.5)
SHOW wal_buffers;            -- WAL buffer size

-- Set parameters for session
SET work_mem = '256MB';
SET enable_seqscan = off;
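
-- Persist settings across restarts with ALTER SYSTEM (written to postgresql.auto.conf).
-- A minimal sketch; the values below are illustrative, not recommendations:
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET effective_cache_size = '12GB';
SELECT pg_reload_conf();   -- parameters such as shared_buffers still require a restart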

-- Show query execution statistics (requires the pg_stat_statements extension)
SELECT 
    query,
    calls,
    total_exec_time,   -- total_time on PostgreSQL 12 and older
    mean_exec_time,    -- mean_time on PostgreSQL 12 and older
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Reset statistics
SELECT pg_stat_reset();
SELECT pg_stat_statements_reset();

Performance Monitoring

-- Monitor active queries
SELECT 
    pid,
    now() - pg_stat_activity.query_start AS duration,
    query,
    state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes'
ORDER BY duration DESC;

-- Monitor locks
SELECT 
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_statement,
    blocking_activity.query AS current_statement_in_blocking_process
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks 
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
    AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
    AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
    AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
    AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
    AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.GRANTED;

-- Monitor table and index sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as total_size,
    pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) as table_size,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) as index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Monitor database connections
SELECT 
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit,
    temp_files,
    temp_bytes
FROM pg_stat_database;

Vacuum and Maintenance

-- Manual vacuum
VACUUM users;
VACUUM ANALYZE users;
VACUUM FULL users;  -- Reclaims space but locks table

-- Show vacuum statistics
SELECT 
    schemaname,
    tablename,
    n_tup_ins,
    n_tup_upd,
    n_tup_del,
    n_dead_tup,
    last_vacuum,
    last_autovacuum,
    last_analyze,
    last_autoanalyze
FROM pg_stat_user_tables;

-- Configure autovacuum (postgresql.conf)
-- autovacuum = on
-- autovacuum_max_workers = 3
-- autovacuum_naptime = 1min
-- autovacuum_vacuum_threshold = 50
-- autovacuum_analyze_threshold = 50
-- autovacuum_vacuum_scale_factor = 0.2
-- autovacuum_analyze_scale_factor = 0.1

-- Per-table autovacuum settings
ALTER TABLE large_table SET (
    autovacuum_vacuum_scale_factor = 0.1,
    autovacuum_analyze_scale_factor = 0.05
);

-- Check table bloat
SELECT 
    schemaname,
    tablename,
    n_dead_tup,
    n_live_tup,
    ROUND(n_dead_tup * 100.0 / GREATEST(n_live_tup + n_dead_tup, 1), 2) as dead_tuple_percent
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY dead_tuple_percent DESC;

Replication

Streaming Replication Setup

-- Master configuration (postgresql.conf)
-- wal_level = replica
-- max_wal_senders = 3
-- wal_keep_segments = 64          (wal_keep_size on PostgreSQL 13+)
-- archive_mode = on
-- archive_command = 'cp %p /path/to/archive/%f'

-- Create replication user on master
CREATE USER replication_user WITH REPLICATION LOGIN PASSWORD 'replication_password';

-- Configure pg_hba.conf on master
-- host replication replication_user slave_ip/32 md5

-- Get master status
SELECT pg_current_wal_lsn();
# Create base backup for slave
pg_basebackup -h master_ip -D /var/lib/postgresql/data -U replication_user -P -W

# Slave configuration (postgresql.conf)
# hot_standby = on

# Create recovery.conf on slave (PostgreSQL 11 or older; see the sketch after this block for 12+)
standby_mode = 'on'
primary_conninfo = 'host=master_ip port=5432 user=replication_user password=replication_password'
trigger_file = '/tmp/postgresql.trigger'

# Start slave
systemctl start postgresql
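
On PostgreSQL 12 and newer there is no recovery.conf on the standby: the same settings go into postgresql.conf and a standby.signal file marks the server as a replica. A minimal sketch (host and credentials are placeholders):

# postgresql.conf on the standby (PostgreSQL 12+)
primary_conninfo = 'host=master_ip port=5432 user=replication_user password=replication_password'
hot_standby = on

# Mark the data directory as a standby and start it
touch /var/lib/postgresql/data/standby.signal
systemctl start postgresql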

Replication Monitoring

-- On master: Check replication status
SELECT 
    client_addr,
    state,
    sent_lsn,
    write_lsn,
    flush_lsn,
    replay_lsn,
    sync_state
FROM pg_stat_replication;

-- On slave: Check replication lag
SELECT 
    now() - pg_last_xact_replay_timestamp() AS replication_lag;

-- Check if in recovery mode
SELECT pg_is_in_recovery();

-- Show recovery status
SELECT 
    pg_last_wal_receive_lsn(),
    pg_last_wal_replay_lsn(),
    pg_last_xact_replay_timestamp();

Logical Replication (PostgreSQL 10+)

-- On publisher: Create publication
CREATE PUBLICATION my_publication FOR ALL TABLES;
-- Or for specific tables
CREATE PUBLICATION my_publication FOR TABLE users, posts;

-- On subscriber: Create subscription
CREATE SUBSCRIPTION my_subscription 
CONNECTION 'host=publisher_ip port=5432 user=replication_user password=password dbname=mydatabase'
PUBLICATION my_publication;

-- Monitor logical replication
SELECT * FROM pg_publication;
SELECT * FROM pg_subscription;
SELECT * FROM pg_stat_subscription;

-- Add table to publication
ALTER PUBLICATION my_publication ADD TABLE new_table;

-- Remove table from publication
ALTER PUBLICATION my_publication DROP TABLE old_table;

-- Refresh subscription
ALTER SUBSCRIPTION my_subscription REFRESH PUBLICATION;

-- Drop subscription
DROP SUBSCRIPTION my_subscription;

-- Drop publication
DROP PUBLICATION my_publication;

Failover and Switchover

# Promote slave to master
pg_ctl promote -D /var/lib/postgresql/data

# Or create trigger file
touch /tmp/postgresql.trigger

# Planned switchover
# 1. Stop writes to master
# 2. Wait for slave to catch up
# 3. Promote slave
# 4. Reconfigure applications
# 5. Set up old master as new slave
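
Step 5 can often be done without re-copying the whole cluster by rewinding the old primary to the new timeline. A minimal sketch, assuming the old primary is cleanly stopped and wal_log_hints (or data checksums) was enabled; the host name is a placeholder:

# Rewind the old primary against the new primary, then restart it as a standby
pg_rewind --target-pgdata=/var/lib/postgresql/data \
          --source-server="host=new_master_ip port=5432 user=replication_user password=replication_password"
touch /var/lib/postgresql/data/standby.signal
systemctl start postgresql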

Extensions

Installing Extensions

-- List available extensions
SELECT * FROM pg_available_extensions ORDER BY name;

-- List installed extensions
\dx
SELECT * FROM pg_extension;

-- Install extension
CREATE EXTENSION IF NOT EXISTS extension_name;

-- Install extension in specific schema
CREATE EXTENSION extension_name SCHEMA schema_name;

-- Update extension
ALTER EXTENSION extension_name UPDATE;

-- Drop extension
DROP EXTENSION extension_name;

Popular Extensions

-- UUID generation
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();

-- Cryptographic functions
CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT crypt('password', gen_salt('bf'));
SELECT digest('text', 'sha256');

-- Full-text search
CREATE EXTENSION IF NOT EXISTS unaccent;
SELECT unaccent('Café');

-- PostGIS for geographic data
CREATE EXTENSION IF NOT EXISTS postgis;

-- Foreign data wrappers
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Additional data types
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION IF NOT EXISTS ltree;
CREATE EXTENSION IF NOT EXISTS citext;

-- Statistics and monitoring
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Fuzzy string matching
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
SELECT soundex('Smith');
SELECT levenshtein('kitten', 'sitting');

Using Extensions

-- hstore examples
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    attributes HSTORE
);

INSERT INTO products (name, attributes) VALUES
('Laptop', 'brand=>Dell, ram=>16GB, storage=>"512GB SSD"');

SELECT name, attributes->'brand' as brand FROM products;
SELECT * FROM products WHERE attributes @> 'brand=>Dell';

-- ltree examples (hierarchical data)
CREATE TABLE categories (
    id SERIAL PRIMARY KEY,
    path LTREE,
    name VARCHAR(100)
);

INSERT INTO categories (path, name) VALUES
('electronics', 'Electronics'),
('electronics.computers', 'Computers'),
('electronics.computers.laptops', 'Laptops');

SELECT * FROM categories WHERE path ~ '*.computers.*';
SELECT * FROM categories WHERE path <@ 'electronics';

-- citext examples (case-insensitive text)
CREATE TABLE users_ci (
    id SERIAL PRIMARY KEY,
    username CITEXT UNIQUE,
    email CITEXT
);

INSERT INTO users_ci (username, email) VALUES
('JohnDoe', 'JOHN@EXAMPLE.COM');

SELECT * FROM users_ci WHERE username = 'johndoe';  -- Case insensitive

JSON and JSONB

JSON vs JSONB

-- JSON: stored as text, preserves formatting
-- JSONB: binary format, more efficient, supports indexing

CREATE TABLE json_test (
    id SERIAL PRIMARY KEY,
    data JSON,
    metadata JSONB
);

-- Insert JSON data
INSERT INTO json_test (data, metadata) VALUES
('{"name": "John", "age": 30, "city": "New York"}', 
 '{"preferences": {"theme": "dark", "language": "en"}, "tags": ["user", "premium"]}');

JSON Operations

-- Extract JSON field
SELECT data->>'name' as name FROM json_test;
SELECT data->'age' as age FROM json_test;

-- Extract nested JSON
SELECT metadata->'preferences'->>'theme' as theme FROM json_test;

-- Extract array element
SELECT metadata->'tags'->0 as first_tag FROM json_test;

-- Check if key exists
SELECT * FROM json_test WHERE data ? 'name';
SELECT * FROM json_test WHERE metadata ? 'preferences';

-- Check if any array element exists
SELECT * FROM json_test WHERE metadata->'tags' ? 'premium';

-- Contains operator (JSONB only)
SELECT * FROM json_test WHERE metadata @> '{"preferences": {"theme": "dark"}}';

-- Contained by operator (JSONB only)
SELECT * FROM json_test WHERE '{"theme": "dark"}' <@ metadata->'preferences';

-- Path exists
SELECT * FROM json_test WHERE metadata @? '$.preferences.theme';

-- Path match
SELECT * FROM json_test WHERE metadata @@ '$.tags[*] == "premium"';

JSON Functions

-- JSON creation functions
SELECT json_build_object('name', 'John', 'age', 30);
SELECT json_build_array('apple', 'banana', 'orange');
SELECT to_json(row(1, 'John', 'john@example.com'));

-- JSON aggregation
SELECT json_agg(username) FROM users;
SELECT json_object_agg(username, email) FROM users;

-- JSONB aggregation
SELECT jsonb_agg(username) FROM users;
SELECT jsonb_object_agg(username, email) FROM users;

-- JSON modification (JSONB only)
SELECT metadata || '{"new_field": "value"}' FROM json_test;
SELECT metadata - 'preferences' FROM json_test;
SELECT metadata #- '{preferences,theme}' FROM json_test;

-- Set value
SELECT jsonb_set(metadata, '{preferences,theme}', '"light"') FROM json_test;

-- Insert value
SELECT jsonb_insert(metadata, '{tags,1}', '"new_tag"') FROM json_test;

-- Pretty print
SELECT jsonb_pretty(metadata) FROM json_test;

JSON Indexing

-- GIN index on JSONB column
CREATE INDEX idx_metadata_gin ON json_test USING GIN (metadata);

-- GIN index on specific JSON path
CREATE INDEX idx_metadata_preferences ON json_test USING GIN ((metadata->'preferences'));

-- B-tree index on extracted value
CREATE INDEX idx_metadata_theme ON json_test ((metadata->'preferences'->>'theme'));

-- Partial index
CREATE INDEX idx_premium_users ON json_test USING GIN (metadata) 
WHERE metadata->'tags' ? 'premium';

-- GIN index with jsonb_path_ops (smaller index, supports only containment and path operators)
CREATE INDEX idx_metadata_path_ops ON json_test USING GIN (metadata jsonb_path_ops);

JSON Validation

-- Check if valid JSON (IS JSON predicate)
SELECT '{"name": "John"}' IS JSON;  -- PostgreSQL 16+

-- Custom validation function
CREATE OR REPLACE FUNCTION is_valid_email_json(data JSONB)
RETURNS BOOLEAN AS $$
BEGIN
    RETURN data->>'email' ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$';
END;
$$ LANGUAGE plpgsql;

-- Add check constraint
ALTER TABLE json_test ADD CONSTRAINT valid_email_in_data 
CHECK (is_valid_email_json(metadata));

Full-Text Search

Basic Full-Text Search

-- Create table with text data
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    title VARCHAR(200),
    content TEXT
);

INSERT INTO documents (title, content) VALUES
('PostgreSQL Tutorial', 'Learn PostgreSQL database management system'),
('Advanced SQL', 'Master complex SQL queries and optimization'),
('Database Design', 'Best practices for designing relational databases');

-- Basic text search
SELECT * FROM documents 
WHERE to_tsvector('english', title || ' ' || content) @@ to_tsquery('english', 'PostgreSQL');

-- Search with ranking
SELECT 
    title,
    ts_rank(to_tsvector('english', title || ' ' || content), to_tsquery('english', 'database')) as rank
FROM documents
WHERE to_tsvector('english', title || ' ' || content) @@ to_tsquery('english', 'database')
ORDER BY rank DESC;

Text Search Configuration

-- Show available text search configurations
SELECT cfgname FROM pg_ts_config;

-- Show available dictionaries
SELECT dictname FROM pg_ts_dict;

-- Show available parsers
SELECT prsname FROM pg_ts_parser;

-- Create custom text search configuration
CREATE TEXT SEARCH CONFIGURATION my_config (COPY = english);

-- Modify configuration
ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR word WITH english_stem, simple;

-- Set default configuration
SET default_text_search_config = 'my_config';

Advanced Text Search

-- Add tsvector column for better performance
ALTER TABLE documents ADD COLUMN search_vector TSVECTOR;

-- Update search vector
UPDATE documents SET search_vector = 
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''));

-- Create GIN index
CREATE INDEX idx_documents_search ON documents USING GIN (search_vector);

-- Create trigger to maintain search vector
CREATE OR REPLACE FUNCTION update_search_vector()
RETURNS TRIGGER AS $$
BEGIN
    NEW.search_vector := to_tsvector('english', coalesce(NEW.title, '') || ' ' || coalesce(NEW.content, ''));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_documents_search_vector
    BEFORE INSERT OR UPDATE ON documents
    FOR EACH ROW
    EXECUTE FUNCTION update_search_vector();

-- Complex queries
SELECT * FROM documents 
WHERE search_vector @@ to_tsquery('english', 'PostgreSQL & database');

SELECT * FROM documents 
WHERE search_vector @@ to_tsquery('english', 'PostgreSQL | MySQL');

SELECT * FROM documents 
WHERE search_vector @@ to_tsquery('english', 'database & !Oracle');

-- Phrase search
SELECT * FROM documents 
WHERE search_vector @@ phraseto_tsquery('english', 'database management system');

-- Fuzzy search with similarity
CREATE EXTENSION IF NOT EXISTS pg_trgm;

SELECT * FROM documents 
WHERE similarity(title, 'PostgreSQL') > 0.3
ORDER BY similarity(title, 'PostgreSQL') DESC;

Text Search Functions

-- Highlighting search results
SELECT 
    title,
    ts_headline('english', content, to_tsquery('english', 'database'), 
                'StartSel=<b>, StopSel=</b>, MaxWords=35, MinWords=15') as snippet
FROM documents
WHERE search_vector @@ to_tsquery('english', 'database');

-- Get search statistics
SELECT 
    word,
    ndoc,
    nentry
FROM ts_stat('SELECT search_vector FROM documents');

-- Parse text into tokens
SELECT * FROM ts_parse('default', 'PostgreSQL is a powerful database system');

-- Convert text to tsvector with positions
SELECT to_tsvector('english', 'PostgreSQL is a powerful database system');

-- Show lexemes
SELECT unnest(tsvector_to_array(to_tsvector('english', 'PostgreSQL databases')));

Monitoring

System Monitoring

-- Current activity
SELECT 
    pid,
    usename,
    application_name,
    client_addr,
    state,
    query_start,
    now() - query_start as duration,
    query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;

-- Database statistics
SELECT 
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit,
    tup_returned,
    tup_fetched,
    tup_inserted,
    tup_updated,
    tup_deleted
FROM pg_stat_database;

-- Table statistics
SELECT 
    schemaname,
    tablename,
    seq_scan,
    seq_tup_read,
    idx_scan,
    idx_tup_fetch,
    n_tup_ins,
    n_tup_upd,
    n_tup_del,
    n_live_tup,
    n_dead_tup
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- Index statistics
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

Performance Monitoring

-- Long running queries
SELECT 
    pid,
    now() - pg_stat_activity.query_start AS duration,
    query,
    state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes'
ORDER BY duration DESC;

-- Blocking queries
SELECT 
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_statement,
    blocking_activity.query AS current_statement_in_blocking_process
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks 
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;

-- Cache hit ratio
SELECT 
    'index hit rate' as name,
    (sum(idx_blks_hit)) / nullif(sum(idx_blks_hit + idx_blks_read),0) as ratio
FROM pg_stat_user_indexes
UNION ALL
SELECT 
    'table hit rate' as name,
    sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read),0) as ratio
FROM pg_stat_user_tables;

-- Connection count by state
SELECT 
    state,
    count(*)
FROM pg_stat_activity
GROUP BY state;

Resource Monitoring

-- Database sizes
SELECT 
    datname,
    pg_size_pretty(pg_database_size(datname)) as size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Table sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as total_size,
    pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) as table_size,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) as index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- WAL statistics
SELECT 
    pg_current_wal_lsn(),
    pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0') as wal_bytes;

-- Checkpoint statistics
SELECT 
    checkpoints_timed,
    checkpoints_req,
    checkpoint_write_time,
    checkpoint_sync_time,
    buffers_checkpoint,
    buffers_clean,
    buffers_backend
FROM pg_stat_bgwriter;

Automated Monitoring Script

#!/bin/bash
# postgresql_monitor.sh

DB_HOST="localhost"
DB_PORT="5432"
DB_USER="monitor_user"
DB_NAME="postgres"

# Check connection count
CONNECTIONS=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity;")

if [ $CONNECTIONS -gt 80 ]; then
    echo "WARNING: High connection count: $CONNECTIONS"
fi

# Check for long running queries
LONG_QUERIES=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "SELECT count(*) FROM pg_stat_activity WHERE (now() - query_start) > interval '10 minutes';")

if [ $LONG_QUERIES -gt 0 ]; then
    echo "WARNING: $LONG_QUERIES long running queries detected"
fi

# Check replication lag (if applicable)
LAG=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()));")

if [ ! -z "$LAG" ] && [ $(echo "$LAG > 60" | bc) -eq 1 ]; then
    echo "WARNING: Replication lag: $LAG seconds"
fi

echo "PostgreSQL monitoring completed at $(date)"

Best Practices

Database Design

-- Use appropriate data types
-- Bad: Using TEXT for everything
CREATE TABLE bad_design (
    id TEXT,
    age TEXT,
    price TEXT,
    is_active TEXT
);

-- Good: Specific data types
CREATE TABLE good_design (
    id SERIAL PRIMARY KEY,
    age SMALLINT CHECK (age >= 0 AND age <= 150),
    price DECIMAL(10,2) CHECK (price >= 0),
    is_active BOOLEAN DEFAULT TRUE
);

-- Normalize your database
-- Avoid redundant data
-- Use foreign keys to maintain referential integrity
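
-- For example (a minimal normalization sketch): store customer data once and
-- reference it, instead of repeating customer columns on every order row.
CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    ordered_at TIMESTAMP DEFAULT NOW()
);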

-- Use meaningful names
-- Bad
CREATE TABLE u (
    i INTEGER,
    n VARCHAR(50),
    e VARCHAR(100)
);

-- Good
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL UNIQUE,
    email VARCHAR(100) NOT NULL UNIQUE
);

-- Add appropriate constraints
ALTER TABLE posts ADD CONSTRAINT fk_posts_user_id 
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;

ALTER TABLE products ADD CONSTRAINT chk_products_price 
CHECK (price > 0);

-- Use partial indexes for filtered queries
CREATE INDEX idx_active_users ON users(username) WHERE is_active = TRUE;

-- Use covering indexes to avoid table lookups
CREATE INDEX idx_users_covering ON users(username) INCLUDE (email, created_at);

Query Best Practices

-- Use prepared statements to prevent SQL injection
PREPARE get_user AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(1);
DEALLOCATE get_user;

-- Use specific columns instead of SELECT *
-- Bad
SELECT * FROM users;

-- Good
SELECT id, username, email FROM users;

-- Use EXISTS instead of IN for better performance
-- Bad
SELECT * FROM users WHERE id IN (SELECT user_id FROM posts);

-- Good
SELECT * FROM users u WHERE EXISTS (SELECT 1 FROM posts p WHERE p.user_id = u.id);

-- Use LIMIT for large result sets
SELECT * FROM users ORDER BY created_at DESC LIMIT 10;

-- Use proper JOIN syntax
SELECT u.username, p.title
FROM users u
INNER JOIN posts p ON u.id = p.user_id
WHERE u.is_active = TRUE;

-- Avoid correlated subqueries; prefer a join with aggregation (or a window function)
-- Bad
SELECT 
    username,
    (SELECT COUNT(*) FROM posts WHERE user_id = users.id) as post_count
FROM users;

-- Good
SELECT 
    u.username,
    COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.username;

-- Use CTEs for complex queries
WITH recent_users AS (
    SELECT * FROM users WHERE created_at > '2023-01-01'
),
user_stats AS (
    SELECT 
        u.id,
        u.username,
        COUNT(p.id) as post_count
    FROM recent_users u
    LEFT JOIN posts p ON u.id = p.user_id
    GROUP BY u.id, u.username
)
SELECT * FROM user_stats WHERE post_count > 5;

Performance Best Practices

-- Regular maintenance
ANALYZE;
VACUUM;
REINDEX DATABASE mydatabase;

-- Monitor query performance
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM users WHERE username = 'john_doe';

-- Use connection pooling in applications
-- Configure appropriate connection limits
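
-- Pooling itself runs outside the server (e.g. PgBouncer); inside PostgreSQL you
-- can at least cap connections per role or database. A sketch with placeholder names:
ALTER ROLE app_user CONNECTION LIMIT 50;
ALTER DATABASE mydatabase CONNECTION LIMIT 200;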

-- Partition large tables
CREATE TABLE sales (
    id SERIAL,
    sale_date DATE,
    amount DECIMAL(10,2)
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2023 PARTITION OF sales
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

-- Use read replicas for read-heavy workloads
-- Separate read and write operations

-- Configure autovacuum appropriately
ALTER TABLE large_table SET (
    autovacuum_vacuum_scale_factor = 0.1,
    autovacuum_analyze_scale_factor = 0.05
);

-- Use materialized views for expensive queries
CREATE MATERIALIZED VIEW user_summary AS
SELECT 
    DATE_TRUNC('month', created_at) as month,
    COUNT(*) as user_count
FROM users
GROUP BY DATE_TRUNC('month', created_at);

-- Refresh materialized views regularly
REFRESH MATERIALIZED VIEW user_summary;

Security Best Practices

-- Use least privilege principle
-- Grant only necessary permissions
GRANT SELECT, INSERT, UPDATE ON users TO app_user;

-- Use row level security
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_policy ON users
    FOR ALL
    TO app_role
    USING (user_id = current_setting('app.current_user_id')::INTEGER);

-- Use SSL connections
-- Configure pg_hba.conf to require SSL
-- hostssl all all 0.0.0.0/0 md5

-- Encrypt sensitive data
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Hash passwords
INSERT INTO users (username, password_hash) 
VALUES ('john', crypt('password', gen_salt('bf')));

-- Validate passwords
SELECT * FROM users 
WHERE username = 'john' 
AND password_hash = crypt('password', password_hash);

-- Use audit logging
CREATE TABLE audit_log (
    id SERIAL PRIMARY KEY,
    table_name VARCHAR(50),
    operation VARCHAR(10),
    user_name VARCHAR(50),
    timestamp TIMESTAMP DEFAULT NOW(),
    old_values JSONB,
    new_values JSONB
);
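
-- A generic trigger sketch that fills the audit_log table defined above:
CREATE OR REPLACE FUNCTION audit_changes()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation, user_name, old_values, new_values)
    VALUES (TG_TABLE_NAME, TG_OP, current_user,
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_users
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW EXECUTE FUNCTION audit_changes();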

-- Regular security updates
-- Monitor for suspicious activity
-- Backup encryption keys securely

Backup Best Practices

# Regular automated backups
# Full backup daily
pg_dump -U postgres -Fc mydatabase > backup_$(date +%Y%m%d).dump

# Point-in-time recovery setup
# Configure WAL archiving
# Take regular base backups

# Test restore procedures regularly
pg_restore -U postgres -d test_database backup.dump

# Store backups in multiple locations
# Encrypt backup files
# Document recovery procedures
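
# Encryption sketch: pipe the dump through gpg (key/passphrase handling is up to you)
pg_dump -U postgres -Fc mydatabase | gpg --symmetric --cipher-algo AES256 \
    -o backup_$(date +%Y%m%d).dump.gpg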

# Monitor backup success
# Set up alerts for backup failures
# Verify backup integrity
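
# Minimal verification sketch: a custom-format dump that pg_restore can list
# is at least structurally readable (the file name is a placeholder)
if pg_restore --list backup_$(date +%Y%m%d).dump > /dev/null 2>&1; then
    echo "Backup archive verified"
else
    echo "ALERT: backup archive failed verification" >&2
fi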

Summary

PostgreSQL is an advanced, feature-rich relational database system that offers excellent performance, reliability and extensibility. This cheat sheet covers the essential PostgreSQL operations, from basic installation through advanced topics such as replication, full-text search and performance tuning.

Key strengths:
- ACID compliance: full transaction support with strong consistency
- Extensibility: a rich extension ecosystem and custom data types
- Advanced features: JSON/JSONB, full-text search, window functions, CTEs
- Performance: a sophisticated query planner and optimization features
- Standards compliance: excellent SQL standard conformance

Best use cases:
- Complex web applications that need advanced SQL features
- Data warehousing and analytics workloads
- Geospatial applications (with the PostGIS extension)
- Applications that need JSON document storage
- Enterprise applications with complex business logic

Important considerations:
- Regular maintenance (VACUUM, ANALYZE) is crucial for good performance
- A sound indexing strategy is essential for query performance
- Configuration tuning should be based on workload characteristics
- Monitoring and alerting should be in place for production systems

By following the practices and techniques described in this cheat sheet, you can design, implement and manage PostgreSQL databases that are secure, performant and reliable at any scale.