
Redis - In-Memory Data Structure Store

Redis (Remote Dictionary Server) is one of the world's most popular in-memory data structure stores, serving as a database, cache, message broker, and streaming engine. Created by Salvatore Sanfilippo in 2009, it has evolved into a versatile, high-performance solution that powers some of the world's most demanding applications. Unlike traditional disk-based databases, Redis keeps its dataset in memory, enabling sub-millisecond response times and millions of operations per second. Its rich set of data structures (strings, hashes, lists, sets, sorted sets, bitmaps, HyperLogLogs, and streams) makes it flexible enough for use cases ranging from simple caching to real-time analytics, session management, leaderboards, and pub/sub messaging.

Installation and Setup

Redis Installation

bash
# Ubuntu/Debian installation
sudo apt update
sudo apt install redis-server

# CentOS/RHEL/Fedora installation
sudo dnf install redis

# macOS installation using Homebrew
brew install redis

# Windows installation (using WSL or Docker recommended)
# Docker approach:
docker run --name redis-server -p 6379:6379 -d redis:latest

# Compile from source
wget https://download.redis.io/redis-stable.tar.gz
tar xzf redis-stable.tar.gz
cd redis-stable
make
sudo make install

# Start Redis server
redis-server

# Start with configuration file
redis-server /etc/redis/redis.conf

# Start as daemon
redis-server --daemonize yes

# Start with custom port
redis-server --port 6380

# Connect to Redis
redis-cli

# Connect to specific host and port
redis-cli -h localhost -p 6379

# Connect with authentication
redis-cli -a password

# Connect to specific database
redis-cli -n 1

# Execute command directly
redis-cli SET mykey "Hello World"
redis-cli GET mykey

Redis Configuration

bash
# Redis configuration file (redis.conf)
# Location: /etc/redis/redis.conf

# Basic configuration
bind 127.0.0.1 ::1          # Bind to specific interfaces
port 6379                   # Default port
timeout 0                   # Client idle timeout (0 = disabled)
tcp-keepalive 300          # TCP keepalive

# Memory configuration
maxmemory 2gb              # Maximum memory usage
maxmemory-policy allkeys-lru # Eviction policy when memory limit reached

# Persistence configuration
save 900 1                 # Save if at least 1 key changed in 900 seconds
save 300 10                # Save if at least 10 keys changed in 300 seconds
save 60 10000              # Save if at least 10000 keys changed in 60 seconds

# RDB configuration
dbfilename dump.rdb        # RDB file name
dir /var/lib/redis         # Working directory

# AOF configuration
appendonly yes             # Enable AOF persistence
appendfilename "appendonly.aof"
appendfsync everysec       # AOF sync policy (always/everysec/no)

# Security configuration
requirepass mypassword     # Set password
rename-command FLUSHDB ""  # Disable dangerous commands
rename-command FLUSHALL "" 
rename-command CONFIG "CONFIG_9a8b7c6d5e4f"

# Logging configuration
loglevel notice           # Log level (debug/verbose/notice/warning)
logfile /var/log/redis/redis-server.log

# Client configuration
maxclients 10000          # Maximum number of clients

# Slow log configuration
slowlog-log-slower-than 10000  # Log queries slower than 10ms
slowlog-max-len 128            # Maximum slow log entries

# Lua scripting configuration
lua-time-limit 5000       # Lua script timeout (ms)

# Cluster configuration (for Redis Cluster)
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000

# Replication configuration (for master-slave setup)
replicaof 192.168.1.100 6379  # Replica of master
masterauth mypassword          # Master password

Redis CLI Basics

bash
# Connect and basic commands
redis-cli

# Test connection
PING
# Response: PONG

# Authentication
AUTH password

# Select database (0-15 by default)
SELECT 1

# Get server information
INFO
INFO memory
INFO replication
INFO stats

# Monitor commands in real-time
MONITOR

# Get configuration
CONFIG GET "*"
CONFIG GET maxmemory

# Set configuration
CONFIG SET maxmemory 1gb

# Save configuration to file
CONFIG REWRITE

# Database operations
FLUSHDB        # Clear current database
FLUSHALL       # Clear all databases
DBSIZE         # Number of keys in current database
LASTSAVE       # Last save timestamp

# Key operations
KEYS *         # List all keys (use with caution in production)
SCAN 0         # Iterate through keys safely
EXISTS mykey   # Check if key exists
TYPE mykey     # Get key type
TTL mykey      # Get time to live
EXPIRE mykey 60 # Set expiration (seconds)
PERSIST mykey  # Remove expiration
DEL mykey      # Delete key
RENAME oldkey newkey # Rename key

# Server operations
SHUTDOWN       # Shutdown server
SAVE          # Force save to disk
BGSAVE        # Background save
BGREWRITEAOF  # Background AOF rewrite
CLIENT LIST   # List connected clients
CLIENT KILL ip:port # Kill client connection

Data Types and Operations

Strings

bash
# String operations - most basic Redis data type
# Can store text, numbers, or binary data up to 512MB

# Set and get
SET mykey "Hello World"
GET mykey

# Set with expiration
SET session:123 "user_data" EX 3600  # Expires in 1 hour
SETEX session:123 3600 "user_data"   # Same effect (deprecated in favor of SET ... EX)

# Set if not exists
SET mykey "value" NX
SETNX mykey "value"                  # Deprecated in favor of SET ... NX

# Set if exists
SET mykey "value" XX

# Set multiple keys
MSET key1 "value1" key2 "value2" key3 "value3"
MGET key1 key2 key3

# Append to string
APPEND mykey " - Redis"

# Get string length
STRLEN mykey

# Get substring
GETRANGE mykey 0 4    # Get characters 0-4
SETRANGE mykey 6 "Redis"  # Replace from position 6

# Numeric operations
SET counter 10
INCR counter          # Increment by 1
INCRBY counter 5      # Increment by 5
DECR counter          # Decrement by 1
DECRBY counter 3      # Decrement by 3
INCRBYFLOAT price 0.1 # Increment float

# Bit operations
SETBIT mykey 7 1      # Set bit at position 7
GETBIT mykey 7        # Get bit at position 7
BITCOUNT mykey        # Count set bits
BITOP AND result key1 key2  # Bitwise AND

# Use cases for strings:
# - Caching web pages or API responses
# - Session storage
# - Counters and metrics
# - Feature flags
# - Rate limiting tokens
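
The counter commands above are the basis of the classic rate-limiting pattern: INCR a per-window key and let EXPIRE clean old windows up. Below is a hedged sketch of that logic in plain Python, where a dict stands in for Redis keys with TTLs; the `FixedWindowRateLimiter` class and its key scheme are illustrative, not a client API.

```python
import time

class FixedWindowRateLimiter:
    """Models the Redis INCR + EXPIRE rate-limiting pattern in memory.
    In Redis, each window would be a key like rate:<client>:<window_id>."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (client, window_id) -> count; stands in for Redis keys

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        window_id = int(now // self.window)
        key = (client, window_id)          # Redis: INCR rate:{client}:{window_id}
        self.counters[key] = self.counters.get(key, 0) + 1
        # Redis: EXPIRE the key so old windows clean themselves up automatically
        return self.counters[key] <= self.limit

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("user:1", now=1000.0) for _ in range(5)]
```

In Redis the INCR and EXPIRE would usually run inside a Lua script or MULTI block so the counter and its TTL are set atomically.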

Hashes

bash
# Hash operations - field-value pairs, like objects/dictionaries
# Perfect for representing objects

# Set and get hash fields
HSET user:1000 name "John Doe"
HSET user:1000 email "john@example.com"
HSET user:1000 age 30

# Set multiple fields at once (HMSET is deprecated since Redis 4.0; prefer multi-field HSET)
HSET user:1000 name "John Doe" email "john@example.com" age 30

# Get single field
HGET user:1000 name

# Get multiple fields
HMGET user:1000 name email age

# Get all fields and values
HGETALL user:1000

# Get all field names
HKEYS user:1000

# Get all values
HVALS user:1000

# Check if field exists
HEXISTS user:1000 name

# Delete field
HDEL user:1000 age

# Get number of fields
HLEN user:1000

# Increment numeric field
HINCRBY user:1000 login_count 1
HINCRBYFLOAT user:1000 balance 10.50

# Set field if not exists
HSETNX user:1000 created_at "2024-01-15"

# Scan hash fields
HSCAN user:1000 0 MATCH "email*"

# Use cases for hashes:
# - User profiles and objects
# - Shopping carts
# - Configuration settings
# - Metrics and counters grouped by category
# - Rate limiting per user/IP

Lists

bash
# List operations - ordered collections, can contain duplicates
# Historically linked lists; modern Redis uses quicklists, but pushes and
# pops at either end remain O(1)

# Push elements to list
LPUSH mylist "first"      # Push to left (beginning)
RPUSH mylist "last"       # Push to right (end)
LPUSH mylist "a" "b" "c"  # Push multiple elements

# Pop elements from list
LPOP mylist               # Pop from left
RPOP mylist               # Pop from right
BLPOP mylist 10           # Blocking pop (wait up to 10 seconds)
BRPOP mylist 10           # Blocking pop from right

# Get elements by index
LINDEX mylist 0           # Get first element
LINDEX mylist -1          # Get last element

# Get range of elements
LRANGE mylist 0 -1        # Get all elements
LRANGE mylist 0 2         # Get first 3 elements
LRANGE mylist -3 -1       # Get last 3 elements

# Set element at index
LSET mylist 0 "new_first"

# Insert element
LINSERT mylist BEFORE "existing" "new"
LINSERT mylist AFTER "existing" "new"

# Remove elements
LREM mylist 2 "value"     # Remove first 2 occurrences of "value"
LREM mylist -1 "value"    # Remove last occurrence
LREM mylist 0 "value"     # Remove all occurrences

# Trim list to range
LTRIM mylist 0 99         # Keep only first 100 elements

# Get list length
LLEN mylist

# Move element between lists
RPOPLPUSH source destination          # Deprecated since Redis 6.2
BRPOPLPUSH source destination 10      # Blocking version (also deprecated)
LMOVE source destination RIGHT LEFT   # Preferred replacement
BLMOVE source destination RIGHT LEFT 10

# Use cases for lists:
# - Message queues
# - Activity feeds
# - Recent items/history
# - Task queues
# - Undo operations
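
The FIFO queue use case (producer does RPUSH, worker does LPOP or BLPOP) can be modeled with `collections.deque`, which mirrors a Redis list's cheap pushes and pops at both ends. This is an illustrative stand-in only; a real worker would hold a client connection and block on BLPOP.

```python
from collections import deque

# A deque models a Redis list: append = RPUSH tasks ..., popleft = LPOP tasks
queue = deque()
queue.append("task1")    # RPUSH tasks "task1"  (producer)
queue.append("task2")    # RPUSH tasks "task2"
first = queue.popleft()  # LPOP tasks           (worker takes the oldest task)
```

Pushing and popping from the same end instead (LPUSH + LPOP) would give a stack; mixing ends, as here, gives the queue semantics most task systems want.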

Sets

bash
# Set operations - unordered collections of unique strings
# Fast membership testing and set operations

# Add members to set
SADD myset "member1"
SADD myset "member1" "member2" "member3"

# Remove members
SREM myset "member1"

# Check membership
SISMEMBER myset "member1"

# Get all members
SMEMBERS myset

# Get random member
SRANDMEMBER myset
SRANDMEMBER myset 3       # Get 3 random members

# Pop random member
SPOP myset
SPOP myset 2              # Pop 2 random members

# Get set size
SCARD myset

# Move member between sets
SMOVE source destination "member"

# Set operations
SINTER set1 set2          # Intersection
SUNION set1 set2          # Union
SDIFF set1 set2           # Difference (in set1 but not set2)

# Store set operation results
SINTERSTORE result set1 set2
SUNIONSTORE result set1 set2
SDIFFSTORE result set1 set2

# Scan set members
SSCAN myset 0 MATCH "prefix*"

# Use cases for sets:
# - Tags and categories
# - Unique visitors tracking
# - Friend lists
# - Permissions and roles
# - Blacklists/whitelists
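
Python's built-in `set` operators map one-to-one onto SINTER, SUNION, and SDIFF, which makes them a convenient way to reason about the semantics before issuing the commands (the tag values here are illustrative):

```python
# Two sets of tags, as SADD would build them
tags_a = {"python", "redis", "cache"}
tags_b = {"redis", "database"}

inter = tags_a & tags_b   # SINTER tags_a tags_b
union = tags_a | tags_b   # SUNION tags_a tags_b
diff = tags_a - tags_b    # SDIFF tags_a tags_b (in tags_a but not tags_b)
```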

Sorted Sets (ZSets)

bash
# Sorted set operations - sets ordered by score
# Members are unique, but scores can be repeated

# Add members with scores
ZADD leaderboard 100 "player1"
ZADD leaderboard 200 "player2" 150 "player3"

# Get member score
ZSCORE leaderboard "player1"

# Increment member score
ZINCRBY leaderboard 10 "player1"

# Get member rank (0-based, lowest score first)
ZRANK leaderboard "player1"
ZREVRANK leaderboard "player1"  # Highest score first

# Get members by rank
ZRANGE leaderboard 0 -1         # All members (lowest to highest)
ZRANGE leaderboard 0 -1 WITHSCORES
ZREVRANGE leaderboard 0 9       # Top 10 (highest to lowest)
ZREVRANGE leaderboard 0 9 WITHSCORES

# Get members by score
ZRANGEBYSCORE leaderboard 100 200
ZRANGEBYSCORE leaderboard 100 200 WITHSCORES LIMIT 0 10
ZREVRANGEBYSCORE leaderboard 200 100

# Count members in score range
ZCOUNT leaderboard 100 200

# Remove members
ZREM leaderboard "player1"
ZREMRANGEBYRANK leaderboard 0 2    # Remove by rank
ZREMRANGEBYSCORE leaderboard 0 100 # Remove by score

# Get sorted set size
ZCARD leaderboard

# Lexicographical operations (meaningful when all members share the same score)
ZRANGEBYLEX myset "[a" "[z"
ZLEXCOUNT myset "[a" "[z"
ZREMRANGEBYLEX myset "[a" "[c"

# Set operations
ZINTERSTORE result 2 set1 set2 WEIGHTS 1 2
ZUNIONSTORE result 2 set1 set2 AGGREGATE MAX

# Scan sorted set
ZSCAN leaderboard 0 MATCH "player*"

# Use cases for sorted sets:
# - Leaderboards and rankings
# - Priority queues
# - Time-based data (timestamps as scores)
# - Rate limiting with sliding windows
# - Auto-complete suggestions
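
The leaderboard use case can be sketched without Redis: a dict of member-to-score plus a sort reproduces the ZADD, ZINCRBY, and ZREVRANGE semantics shown above. The `zadd`/`zincrby`/`zrevrange` helpers below are illustrative stand-ins, not a client API, and Redis breaks score ties by member ordering rather than as sketched here.

```python
scores = {}  # member -> score, mimicking one sorted set

def zadd(member, score):
    scores[member] = score

def zincrby(member, delta):
    scores[member] = scores.get(member, 0) + delta

def zrevrange(start, stop):
    # Highest score first; ties broken by member name (Redis uses reverse-lex)
    ranked = sorted(scores, key=lambda m: (-scores[m], m))
    return ranked[start:stop + 1]

zadd("player1", 100)
zadd("player2", 200)
zadd("player3", 150)
zincrby("player1", 120)        # player1 now has 220
top2 = zrevrange(0, 1)         # ZREVRANGE leaderboard 0 1
```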

Bitmaps

bash
# Bitmap operations - bit arrays, memory efficient for boolean data
# Useful for tracking binary states for large numbers of items

# Set bit
SETBIT users:online 123 1    # User 123 is online
SETBIT users:online 456 1    # User 456 is online

# Get bit
GETBIT users:online 123      # Check if user 123 is online

# Count set bits
BITCOUNT users:online        # Count online users
BITCOUNT users:online 0 100  # Count in byte range

# Find first set/unset bit
BITPOS users:online 1        # First online user
BITPOS users:online 0        # First offline user

# Bitwise operations
BITOP AND result users:online users:premium
BITOP OR result users:today users:yesterday
BITOP XOR result users:today users:yesterday
BITOP NOT result users:online

# Use cases for bitmaps:
# - User activity tracking
# - Feature flags per user
# - Real-time analytics
# - A/B testing participation
# - Daily/monthly active users
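
A Python integer behaves as an unbounded bit array, so the SETBIT/GETBIT/BITCOUNT semantics above can be sketched with shifts and masks (illustrative only; the variable names mirror the `users:online` example):

```python
# A Python int models a Redis bitmap key
bitmap = 0

def setbit(bm, offset, value):
    """SETBIT: set or clear the bit at the given offset."""
    return bm | (1 << offset) if value else bm & ~(1 << offset)

bitmap = setbit(bitmap, 123, 1)   # SETBIT users:online 123 1
bitmap = setbit(bitmap, 456, 1)   # SETBIT users:online 456 1
online = (bitmap >> 123) & 1      # GETBIT users:online 123
count = bin(bitmap).count("1")    # BITCOUNT users:online
```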

HyperLogLog

bash
# HyperLogLog operations - probabilistic data structure for cardinality estimation
# Estimates unique count with very low memory usage

# Add elements
PFADD unique_visitors "user1" "user2" "user3"
PFADD unique_visitors "user1" "user4"  # user1 already exists

# Get estimated count
PFCOUNT unique_visitors

# Merge HyperLogLogs
PFMERGE result hll1 hll2 hll3

# Use cases for HyperLogLog:
# - Unique visitors counting
# - Distinct IP addresses
# - Unique search queries
# - Cardinality of large datasets
# - Real-time analytics with memory constraints
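
To make the "low memory, approximate count" trade-off concrete, here is a minimal HyperLogLog sketch built only on the published algorithm: hash each item, use the first bits to pick a register, and keep the maximum leading-zero rank per register. This is illustrative and does not reproduce Redis's dense/sparse encodings or bias corrections, so its error is somewhat larger than PFCOUNT's.

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog (illustrative sketch, not Redis's implementation)."""
    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p                       # number of registers (1024)
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):                      # PFADD
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)              # first p bits choose a register
        rest = h & ((1 << (64 - self.p)) - 1)
        # rank = 1-based position of the leftmost set bit in the remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self):                          # PFCOUNT
        est = self.alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:     # small-range (linear counting) correction
            est = self.m * math.log(self.m / zeros)
        return int(est)

hll = HyperLogLog()
for i in range(5000):
    hll.add(f"user{i}")
```

With 1024 registers the expected standard error is roughly 1.04/sqrt(1024), about 3%, while the whole structure fits in about a kilobyte; Redis caps each HyperLogLog at 12 KB for a 0.81% standard error.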

Streams

bash
# Stream operations - append-only log data structure
# Perfect for event sourcing and message queues

# Add entry to stream
XADD mystream * field1 value1 field2 value2
XADD mystream 1609459200000-0 temperature 20.5 humidity 65

# Read from stream
XREAD STREAMS mystream 0        # Read all entries
XREAD STREAMS mystream $        # $ = only new entries (returns nothing without BLOCK)
XREAD BLOCK 1000 STREAMS mystream $  # Block for new entries

# Read range
XRANGE mystream - +             # All entries
XRANGE mystream 1609459200000 1609459300000  # Time range

# Get stream length
XLEN mystream

# Trim stream
XTRIM mystream MAXLEN 1000      # Keep last 1000 entries
XTRIM mystream MINID 1609459200000-0  # Remove entries before ID

# Consumer groups
XGROUP CREATE mystream mygroup $ MKSTREAM
XREADGROUP GROUP mygroup consumer1 STREAMS mystream >

# Acknowledge message processing
XACK mystream mygroup 1609459200000-0

# Get pending messages
XPENDING mystream mygroup

# Use cases for streams:
# - Event sourcing
# - Activity feeds
# - IoT sensor data
# - Chat applications
# - Audit logs
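
The consumer-group flow above (XREADGROUP delivers an entry, XACK confirms it, and anything in between sits in the Pending Entries List) can be modeled in a few lines. The `MiniConsumerGroup` class is a hypothetical in-memory stand-in for one group on one stream, not a client API.

```python
from collections import deque

class MiniConsumerGroup:
    """Models XREADGROUP / XACK: delivered-but-unacked entries sit in the
    Pending Entries List (PEL) until the consumer acknowledges them."""
    def __init__(self):
        self.undelivered = deque()   # entries no consumer has seen (the '>' position)
        self.pending = {}            # entry_id -> consumer name (the PEL)

    def xadd(self, entry_id, fields):
        self.undelivered.append((entry_id, fields))

    def xreadgroup(self, consumer):
        if not self.undelivered:
            return None
        entry_id, fields = self.undelivered.popleft()
        self.pending[entry_id] = consumer    # delivered, awaiting XACK
        return entry_id, fields

    def xack(self, entry_id):
        return 1 if self.pending.pop(entry_id, None) else 0

group = MiniConsumerGroup()
group.xadd("1-0", {"temperature": "20.5"})
entry = group.xreadgroup("consumer1")        # XREADGROUP ... STREAMS mystream >
acked = group.xack("1-0")                    # XACK mystream mygroup 1-0
```

In real Redis, XPENDING inspects the PEL and XCLAIM reassigns stuck entries to another consumer, which is how crashed workers are recovered.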

Advanced Operations and Patterns

Transactions and Pipelining

bash
# Transactions with MULTI/EXEC
MULTI                    # Start transaction
SET key1 "value1"
SET key2 "value2"
INCR counter
EXEC                     # Execute all commands atomically

# Watch keys for changes (optimistic locking)
WATCH mykey
MULTI
SET mykey "new_value"
EXEC                     # Will fail if mykey was modified

# Discard transaction
MULTI
SET key1 "value1"
DISCARD                  # Cancel transaction

# Pipelining (batch commands for better performance)
# In redis-cli, use --pipe option or client libraries
echo -e "SET key1 value1\nSET key2 value2\nGET key1" | redis-cli --pipe

# Lua scripting for atomic operations
EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 mykey myvalue

# Load and execute script
SCRIPT LOAD "return redis.call('GET', KEYS[1])"
# Returns SHA1 hash
EVALSHA sha1_hash 1 mykey

# Check if script exists
SCRIPT EXISTS sha1_hash

# Flush all scripts
SCRIPT FLUSH
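
The WATCH/MULTI/EXEC pattern is easiest to understand as versioned check-and-set: EXEC applies the queued commands only if no watched key changed after WATCH. The `MiniStore` below is a hypothetical in-memory model of that behavior, with per-key version counters standing in for Redis's modification tracking.

```python
class MiniStore:
    """Models optimistic locking: EXEC aborts (returns None, like Redis's nil
    reply) if a watched key was modified after the WATCH snapshot."""
    def __init__(self):
        self.data = {}
        self.versions = {}

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, key):
        return self.versions.get(key, 0)     # snapshot taken at WATCH time

    def exec(self, key, watched_version, commands):
        if self.versions.get(key, 0) != watched_version:
            return None                      # another client touched the key: abort
        for k, v in commands:                # otherwise apply the queued commands
            self.set(k, v)
        return "OK"

store = MiniStore()
store.set("mykey", "old")
snapshot = store.watch("mykey")                       # WATCH mykey
store.set("mykey", "changed-by-other-client")         # concurrent write wins
result = store.exec("mykey", snapshot, [("mykey", "new_value")])  # EXEC fails
```

The usual application response to an aborted EXEC is simply to retry the WATCH/read/MULTI/EXEC loop.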

Pub/Sub Messaging

bash
# Publisher commands
PUBLISH channel1 "Hello World"
PUBLISH news:sports "Team wins championship"
PUBLISH user:123:notifications "New message"

# Subscriber commands (in separate client)
SUBSCRIBE channel1 channel2
PSUBSCRIBE news:*           # Pattern subscription
PSUBSCRIBE user:*:notifications

# Unsubscribe
UNSUBSCRIBE channel1
PUNSUBSCRIBE news:*

# Check active channels
PUBSUB CHANNELS            # List active channels
PUBSUB CHANNELS news:*     # Pattern match
PUBSUB NUMSUB channel1     # Number of subscribers
PUBSUB NUMPAT              # Number of pattern subscriptions

# Use cases for pub/sub:
# - Real-time notifications
# - Chat applications
# - Live updates
# - Event broadcasting
# - Microservice communication
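
The channel and pattern fan-out above can be sketched with `fnmatch` standing in for Redis's glob matching. The `MiniPubSub` class is illustrative: delivery here is synchronous and in-process, whereas real subscribers hold open connections and receive messages asynchronously.

```python
import fnmatch

class MiniPubSub:
    """Models PUBLISH / SUBSCRIBE / PSUBSCRIBE fan-out."""
    def __init__(self):
        self.channels = {}   # channel name -> list of subscriber inboxes
        self.patterns = {}   # glob pattern -> list of subscriber inboxes

    def subscribe(self, channel, inbox):
        self.channels.setdefault(channel, []).append(inbox)

    def psubscribe(self, pattern, inbox):
        self.patterns.setdefault(pattern, []).append(inbox)

    def publish(self, channel, message):
        receivers = 0
        for inbox in self.channels.get(channel, []):
            inbox.append((channel, message))
            receivers += 1
        for pattern, inboxes in self.patterns.items():
            if fnmatch.fnmatch(channel, pattern):   # Redis uses its own glob rules
                for inbox in inboxes:
                    inbox.append((channel, message))
                    receivers += 1
        return receivers     # PUBLISH returns the number of clients that received it

bus, box1, box2 = MiniPubSub(), [], []
bus.subscribe("news:sports", box1)        # SUBSCRIBE news:sports
bus.psubscribe("news:*", box2)            # PSUBSCRIBE news:*
n = bus.publish("news:sports", "Team wins championship")
```

Note that, as in real Redis pub/sub, a message published while nobody is subscribed is simply dropped; use streams when delivery must survive disconnects.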

Geospatial Operations

bash
# Add geospatial data
GEOADD locations 13.361389 38.115556 "Palermo"
GEOADD locations 15.087269 37.502669 "Catania"
GEOADD locations -122.27652 37.805186 "San Francisco"

# Get coordinates
GEOPOS locations "Palermo" "Catania"

# Calculate distance
GEODIST locations "Palermo" "Catania" km

# Find nearby locations (GEORADIUS* are deprecated since Redis 6.2; prefer GEOSEARCH)
GEORADIUS locations 15 37 200 km WITHDIST WITHCOORD
GEORADIUSBYMEMBER locations "Palermo" 200 km
GEOSEARCH locations FROMMEMBER "Palermo" BYRADIUS 200 km ASC

# Get geohash
GEOHASH locations "Palermo" "Catania"

# Use cases for geospatial:
# - Location-based services
# - Nearby search
# - Delivery tracking
# - Geofencing
# - Store locators
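
GEODIST is essentially a great-circle (haversine) calculation over the stored coordinates. The sketch below reproduces that math for the Palermo and Catania coordinates used above; note Redis uses an Earth radius of about 6372.8 km while this example uses the common 6371 km, so results differ slightly.

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km, the same kind of math GEODIST performs."""
    R = 6371.0  # mean Earth radius in km (Redis uses ~6372.8)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# GEODIST locations "Palermo" "Catania" km  (coordinates from the GEOADD above)
d = haversine_km(13.361389, 38.115556, 15.087269, 37.502669)
```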

Memory Optimization and Persistence

bash
# Memory analysis
MEMORY USAGE mykey         # Memory used by key
MEMORY STATS              # Memory statistics
INFO memory               # Memory information

# Key expiration
EXPIRE mykey 60           # Expire in 60 seconds
EXPIREAT mykey 1609459200 # Expire at timestamp
PEXPIRE mykey 60000       # Expire in 60000 milliseconds
TTL mykey                 # Time to live in seconds
PTTL mykey                # Time to live in milliseconds

# Persistence commands
SAVE                      # Synchronous save
BGSAVE                    # Background save
LASTSAVE                  # Last save timestamp
BGREWRITEAOF             # Rewrite AOF file

# Configuration for memory optimization
CONFIG SET maxmemory-policy allkeys-lru
CONFIG SET maxmemory 1gb

# Memory policies:
# noeviction - return errors when memory limit reached
# allkeys-lru - evict least recently used keys
# volatile-lru - evict least recently used keys with expire set
# allkeys-random - evict random keys
# volatile-random - evict random keys with expire set
# volatile-ttl - evict keys with shortest TTL
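
The allkeys-lru policy is easy to picture with a small model: when the store exceeds its capacity, the least recently used key is dropped. The sketch below uses an `OrderedDict` for exact LRU tracking; real Redis approximates LRU by sampling `maxmemory-samples` keys rather than keeping a full ordering.

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of allkeys-lru eviction (exact LRU; Redis samples instead)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # a read refreshes recency
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)           # a write refreshes recency too
        if len(self.data) > self.capacity:   # over maxmemory: evict coldest key
            self.data.popitem(last=False)

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")        # "a" is now the most recently used key
cache.set("c", 3)     # capacity exceeded: "b" (least recently used) is evicted
```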

Performance Optimization and Monitoring

Performance Monitoring

bash
# Server information
INFO                      # All server info
INFO server              # Server info
INFO memory              # Memory usage
INFO stats               # Statistics
INFO replication         # Replication info
INFO cpu                 # CPU usage
INFO commandstats        # Command statistics
INFO keyspace            # Database info

# Real-time monitoring
MONITOR                  # Monitor all commands (use carefully)

# Slow log
SLOWLOG GET 10           # Get last 10 slow queries
SLOWLOG LEN              # Number of slow queries
SLOWLOG RESET            # Clear slow log

# Client information
CLIENT LIST              # List connected clients
CLIENT INFO              # Current client info
CLIENT KILL ip:port      # Kill client
CLIENT SETNAME "myapp"   # Set client name
CLIENT GETNAME           # Get client name

# Latency monitoring
LATENCY LATEST           # Latest latency samples
LATENCY HISTORY event    # Latency history for event
LATENCY RESET            # Reset latency data

# Memory profiling
MEMORY DOCTOR            # Memory usage advice
MEMORY MALLOC-STATS      # Memory allocator stats

Performance Optimization Techniques

bash
# Use appropriate data structures
# Strings for simple values
# Hashes for objects with many fields
# Lists for ordered data
# Sets for unique collections
# Sorted sets for ranked data

# Optimize key naming
# Use consistent naming patterns
# Keep key names short but descriptive
# Use namespaces: user:1000:profile

# Batch operations
MSET key1 val1 key2 val2 key3 val3  # Better than multiple SET
MGET key1 key2 key3                 # Better than multiple GET

# Use pipelining for multiple commands
# Reduces network round trips

# Set appropriate expiration times
EXPIRE session:123 3600             # Session expires in 1 hour
EXPIRE cache:page:home 300          # Cache expires in 5 minutes

# Use Lua scripts for complex atomic operations
EVAL "
  local current = redis.call('GET', KEYS[1])
  if current == false then
    return redis.call('SET', KEYS[1], ARGV[1])
  else
    return current
  end
" 1 mykey myvalue

# Connection pooling in applications
# Reuse connections instead of creating new ones

# Use read replicas for read-heavy workloads
# Configure read preference in application

# Monitor and tune memory usage
CONFIG SET maxmemory-policy allkeys-lru
CONFIG SET maxmemory 2gb

# Use compression for large values
# Compress data before storing in Redis

# Partition data across multiple Redis instances
# Use consistent hashing for distribution

Redis Configuration Tuning

bash
# Memory optimization
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5

# Network optimization
tcp-keepalive 300
timeout 0
tcp-backlog 511

# Persistence optimization
# For cache-only usage
save ""
appendonly no

# For durability
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec

# Performance tuning (the *-ziplist-* options were renamed *-listpack-* in Redis 7)
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Client optimization
maxclients 10000
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Logging optimization
loglevel notice
syslog-enabled yes
syslog-ident redis

Redis Clustering and High Availability

Redis Sentinel (High Availability)

bash
# Sentinel configuration (sentinel.conf)
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel auth-pass mymaster password
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

# Start Sentinel
redis-sentinel /etc/redis/sentinel.conf

# Sentinel commands
SENTINEL masters
SENTINEL replicas mymaster        # (SENTINEL slaves is the deprecated form)
SENTINEL sentinels mymaster
SENTINEL get-master-addr-by-name mymaster
SENTINEL failover mymaster
SENTINEL reset mymaster

Redis Cluster

bash
# Cluster configuration
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
cluster-announce-ip 192.168.1.100
cluster-announce-port 6379

# Create cluster
redis-cli --cluster create \
  192.168.1.100:6379 192.168.1.101:6379 192.168.1.102:6379 \
  192.168.1.103:6379 192.168.1.104:6379 192.168.1.105:6379 \
  --cluster-replicas 1

# Cluster management
redis-cli --cluster info 192.168.1.100:6379
redis-cli --cluster check 192.168.1.100:6379
redis-cli --cluster fix 192.168.1.100:6379
redis-cli --cluster reshard 192.168.1.100:6379
redis-cli --cluster rebalance 192.168.1.100:6379

# Add node to cluster
redis-cli --cluster add-node 192.168.1.106:6379 192.168.1.100:6379

# Remove node from cluster
redis-cli --cluster del-node 192.168.1.100:6379 node-id

# Cluster commands
CLUSTER NODES
CLUSTER INFO
CLUSTER SLOTS
CLUSTER KEYSLOT mykey
CLUSTER COUNTKEYSINSLOT 12345
CLUSTER GETKEYSINSLOT 12345 10
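
CLUSTER KEYSLOT can be replicated client-side: Redis Cluster assigns each key to slot CRC16(key) mod 16384, hashing only the substring inside the first `{...}` hash tag when one is present (which is how multi-key operations are forced onto one node). A sketch under those documented rules:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Replicates CLUSTER KEYSLOT, including non-empty {hash tag} extraction."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Because `{user1000}.following` and `{user1000}.followers` share the tag `user1000`, they land in the same slot, so commands spanning both keys can execute on one cluster node.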

Replication

bash
# Master-slave replication configuration
# On slave/replica server
replicaof 192.168.1.100 6379
masterauth password

# Replication commands
INFO replication
REPLICAOF 192.168.1.100 6379  # Become replica
REPLICAOF NO ONE              # Stop replication (become master)

# Read-only replica
replica-read-only yes

# Replica priority (for Sentinel)
replica-priority 100

Redis's exceptional performance, rich data structures, and versatile functionality make it an indispensable tool in modern application architectures. Whether used as a cache to accelerate database queries, a session store for web applications, a message broker for real-time communication, or a primary database for specific use cases, Redis provides the speed and flexibility needed to build responsive, scalable applications. Its active community, comprehensive documentation, and extensive ecosystem of tools and integrations ensure that Redis remains at the forefront of in-memory data storage solutions.