Redis Cheatsheet
Redis - In-Memory Data Structure Store
Redis is an open-source, in-memory data structure store used as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, geospatial indexes, and streams.
Table of Contents
- Installation
- Basic Commands
- Key Operations
- String Operations
- Hash Operations
- List Operations
- Set Operations
- Sorted Set Operations
- HyperLogLog Operations
- Geospatial Operations
- Stream Operations
- Bit Operations
- Pub/Sub
- Transactions
- Scripting (Lua)
- Persistence
- Replication
- Sentinel
- Cluster
- Security
- Performance Optimization
- Monitoring
- Best Practices
Installation
Ubuntu/Debian
# Install Redis
sudo apt-get update
sudo apt-get install redis-server
# Start Redis
sudo systemctl start redis-server
sudo systemctl enable redis-server
# Check status
sudo systemctl status redis-server
# Connect to Redis
redis-cli
CentOS/RHEL/Fedora
# Install Redis
sudo yum install epel-release
sudo yum install redis
# Start Redis
sudo systemctl start redis
sudo systemctl enable redis
# Connect to Redis
redis-cli
macOS
# Using Homebrew
brew install redis
# Start Redis
brew services start redis
# Connect to Redis
redis-cli
Windows
# Native Windows builds at https://github.com/microsoftarchive/redis/releases are archived and unmaintained
# Run the .msi installer, or install the legacy build via Chocolatey
choco install redis-64
# For current Redis versions on Windows, prefer WSL2 or Docker
Docker
# Pull Redis image
docker pull redis:7
# Run Redis container
docker run --name redis-container -p 6379:6379 -d redis:7
# Connect to Redis in container
docker exec -it redis-container redis-cli
# Run with persistence
docker run --name redis-container -p 6379:6379 -v redis-data:/data -d redis:7 redis-server --appendonly yes
# Docker Compose
cat > docker-compose.yml << EOF
version: '3.8'
services:
  redis:
    image: redis:7
    container_name: redis
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
volumes:
  redis_data:
EOF
docker-compose up -d
Basic Commands
Connecting to Redis
# Connect to local Redis
redis-cli
# Connect to remote Redis
redis-cli -h hostname -p 6379
# Connect with password
redis-cli -h hostname -p 6379 -a "password"
# Ping server
PING
# Expected response: PONG
# Authenticate
AUTH password
# Select database
SELECT 0 # Default database is 0
# Quit
QUIT
Server Information
# Get server info
INFO
INFO server
INFO clients
INFO memory
INFO persistence
INFO stats
INFO replication
INFO cpu
INFO cluster
INFO keyspace
# Get configuration
CONFIG GET *
CONFIG GET maxmemory
# Set configuration
CONFIG SET maxmemory 1gb
# Get database size
DBSIZE
# Get last save time
LASTSAVE
# Get server time
TIME
# Monitor commands
MONITOR
# Flush database
FLUSHDB # Flush current database
FLUSHALL # Flush all databases
Key Operations
Basic Key Commands
# Set key
SET mykey "Hello"
# Get key
GET mykey
# Check if key exists
EXISTS mykey
# Delete key
DEL mykey
# Get key type
TYPE mykey
# Rename key
RENAME mykey newkey
# Rename key if newkey doesn't exist
RENAMENX mykey newkey
# Get random key
RANDOMKEY
# Find keys by pattern
KEYS * # All keys (use with caution in production)
KEYS user:*
KEYS *name*
Key Expiration
# Set key with expiration (in seconds)
SETEX mykey 60 "Hello"
# Set key with expiration (in milliseconds)
PSETEX mykey 60000 "Hello"
# Set expiration on existing key (in seconds)
EXPIRE mykey 60
# Set expiration on existing key (in milliseconds)
PEXPIRE mykey 60000
# Set expiration with timestamp (in seconds)
EXPIREAT mykey 1672531199
# Set expiration with timestamp (in milliseconds)
PEXPIREAT mykey 1672531199000
# Get TTL (Time To Live) in seconds
TTL mykey
# Get TTL in milliseconds
PTTL mykey
# Remove expiration
PERSIST mykey
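The expiration commands above follow a few conventions worth internalizing: TTL returns -2 for a missing key and -1 for a key with no expiry, and expired keys simply disappear. Below is a hedged Python sketch of those semantics (a dict standing in for Redis, with a pluggable clock so the behavior is deterministic; this is a toy model, not Redis itself):

```python
import time

class ExpiringStore:
    """Toy model of Redis key expiration: SETEX, GET, TTL, PERSIST."""

    def __init__(self, clock=time.monotonic):
        self._data = {}
        self._expires = {}  # key -> absolute expiry time
        self._clock = clock

    def _purge(self, key):
        # Lazily drop the key once its expiry has passed (Redis also expires lazily on access)
        if key in self._expires and self._clock() >= self._expires[key]:
            self._data.pop(key, None)
            self._expires.pop(key, None)

    def setex(self, key, seconds, value):
        self._data[key] = value
        self._expires[key] = self._clock() + seconds

    def get(self, key):
        self._purge(key)
        return self._data.get(key)

    def ttl(self, key):
        # Redis convention: -2 if the key is gone, -1 if it never expires
        self._purge(key)
        if key not in self._data:
            return -2
        if key not in self._expires:
            return -1
        return round(self._expires[key] - self._clock())

    def persist(self, key):
        # PERSIST removes the expiry, making the key permanent
        self._expires.pop(key, None)

now = [0.0]
store = ExpiringStore(clock=lambda: now[0])
store.setex("session", 60, "abc")
print(store.ttl("session"))   # 60
now[0] = 30.0
print(store.ttl("session"))   # 30
store.persist("session")
print(store.ttl("session"))   # -1 (no expiry)
now[0] = 1000.0
print(store.get("session"))   # abc (persisted, never expires)
```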
Advanced Key Operations
# Move key to another database
MOVE mykey 1
# Dump key in serialized format
DUMP mykey
# Restore key from dump
RESTORE newkey 0 "\x00\x05Hello\x06\x00\x83\xbf\x0e\x8a\x8f\x8e\x01\x00"
# Scan keys iteratively
SCAN 0 MATCH user:* COUNT 100
# Get object information
OBJECT ENCODING mykey
OBJECT FREQ mykey
OBJECT IDLETIME mykey
OBJECT REFCOUNT mykey
String Operations
Basic String Commands
# Set string value
SET name "John Doe"
# Get string value
GET name
# Get multiple keys
MGET key1 key2 key3
# Set multiple keys
MSET key1 "value1" key2 "value2" key3 "value3"
# Set if not exists
SETNX mykey "value"
# Get old value and set new one (GETSET is deprecated since Redis 6.2; use SET mykey "newvalue" GET)
GETSET mykey "newvalue"
# Get substring
GETRANGE mykey 0 4
# Set substring
SETRANGE mykey 6 "World"
# Get string length
STRLEN mykey
# Append to string
APPEND mykey "!!!"
Integer Operations
# Increment value
INCR counter
# Increment by value
INCRBY counter 10
# Decrement value
DECR counter
# Decrement by value
DECRBY counter 10
# Increment float value
INCRBYFLOAT amount 10.5
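A detail worth knowing: Redis counters are still strings. INCR parses the stored string as an integer, increments it atomically, and writes the result back as a string; a missing key starts at 0, and a non-integer value raises an error. A hedged Python sketch of that behavior over a plain dict:

```python
def incrby(store, key, amount=1):
    """Toy model of Redis INCR/INCRBY semantics over a dict.

    Values are strings parsed as integers; missing keys start at 0;
    non-integer values raise an error, as Redis does.
    """
    raw = store.get(key, "0")
    try:
        value = int(raw)
    except ValueError:
        raise TypeError("value is not an integer or out of range")
    value += amount
    store[key] = str(value)  # Redis stores the result back as a string
    return value

counters = {}
print(incrby(counters, "page:views"))      # 1  (missing key starts at 0)
print(incrby(counters, "page:views", 10))  # 11
print(incrby(counters, "page:views", -1))  # 10 (DECR is just INCRBY -1)
```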
Hash Operations
Basic Hash Commands
# Set field in hash
HSET user:1 name "John Doe"
HSET user:1 email "john@example.com"
# Get field from hash
HGET user:1 name
# Set multiple fields in hash (HMSET is deprecated since Redis 4.0; HSET accepts multiple pairs)
HSET user:1 name "John Doe" email "john@example.com" age 30
# Get multiple fields from hash
HMGET user:1 name email age
# Get all fields and values from hash
HGETALL user:1
# Get all keys from hash
HKEYS user:1
# Get all values from hash
HVALS user:1
# Get number of fields in hash
HLEN user:1
# Check if field exists in hash
HEXISTS user:1 email
# Delete field from hash
HDEL user:1 age
Advanced Hash Operations
# Set field if not exists
HSETNX user:1 name "John Doe"
# Increment integer field
HINCRBY user:1 age 1
# Increment float field
HINCRBYFLOAT user:1 balance 10.5
# Get string length of field value
HSTRLEN user:1 name
# Scan hash fields iteratively
HSCAN user:1 0 MATCH field* COUNT 10
List Operations
Basic List Commands
# Push to left (prepend)
LPUSH mylist "world"
LPUSH mylist "hello"
# Push to right (append)
RPUSH mylist "!"
# Pop from left (remove and get first element)
LPOP mylist
# Pop from right (remove and get last element)
RPOP mylist
# Get list length
LLEN mylist
# Get range of elements
LRANGE mylist 0 -1 # Get all elements
LRANGE mylist 0 4
# Get element by index
LINDEX mylist 0
# Set element by index
LSET mylist 0 "new value"
# Insert element before/after pivot
LINSERT mylist BEFORE "world" "hello"
LINSERT mylist AFTER "world" "!"
# Remove elements by value
LREM mylist 2 "hello" # Remove 2 occurrences of "hello"
# Trim list to range
LTRIM mylist 0 4
Blocking List Operations
# Blocking pop from left
BLPOP mylist 10 # Timeout 10 seconds
# Blocking pop from right
BRPOP mylist 10
# Blocking pop from multiple lists
BLPOP list1 list2 10
# Pop from right of one list, push to left of another
RPOPLPUSH source destination # Deprecated since Redis 6.2; use LMOVE source destination RIGHT LEFT
# Blocking version of RPOPLPUSH
BRPOPLPUSH source destination 10 # Deprecated; use BLMOVE source destination RIGHT LEFT 10
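The common pattern behind these commands is a work queue: producers LPUSH jobs, consumers RPOP (or BRPOP to wait) from the other end, giving FIFO order. A hedged Python sketch of that shape using a deque as a stand-in for the Redis list:

```python
from collections import deque

class RedisListQueue:
    """Toy model of a Redis list used as a FIFO work queue (LPUSH + RPOP)."""

    def __init__(self):
        self._items = deque()

    def lpush(self, *values):
        # LPUSH prepends; combined with RPOP on the other end, this is FIFO
        for v in values:
            self._items.appendleft(v)
        return len(self._items)  # LPUSH returns the new list length

    def rpop(self):
        # RPOP returns nil (None here) when the list is empty;
        # BRPOP would instead block up to a timeout
        return self._items.pop() if self._items else None

    def llen(self):
        return len(self._items)

q = RedisListQueue()
q.lpush("job1")
q.lpush("job2")
print(q.rpop())  # job1 (first in, first out)
print(q.rpop())  # job2
```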
Set Operations
Basic Set Commands
# Add members to set
SADD myset "a" "b" "c"
# Get all members of set
SMEMBERS myset
# Check if member exists in set
SISMEMBER myset "a"
# Get number of members in set
SCARD myset
# Remove members from set
SREM myset "c"
# Pop random member from set
SPOP myset
# Get random members from set
SRANDMEMBER myset 2
Set Operations
# Set difference
SDIFF set1 set2
# Store set difference
SDIFFSTORE destset set1 set2
# Set intersection
SINTER set1 set2
# Store set intersection
SINTERSTORE destset set1 set2
# Set union
SUNION set1 set2
# Store set union
SUNIONSTORE destset set1 set2
# Move member from one set to another
SMOVE sourceset destset "member"
# Scan set members iteratively
SSCAN myset 0 MATCH a* COUNT 10
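SDIFF, SINTER, and SUNION are exactly mathematical set algebra, which Python's set operators mirror one-for-one. A quick analogy to make the semantics concrete (Python sets, not Redis):

```python
# Redis set algebra maps directly onto Python set operators
set1 = {"a", "b", "c"}
set2 = {"b", "c", "d"}

print(sorted(set1 - set2))  # ['a']                  (SDIFF set1 set2)
print(sorted(set1 & set2))  # ['b', 'c']             (SINTER set1 set2)
print(sorted(set1 | set2))  # ['a', 'b', 'c', 'd']   (SUNION set1 set2)
```

The *STORE variants compute the same result but save it under a destination key instead of returning it, which avoids shipping a large result to the client.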
Sorted Set Operations
Basic Sorted Set Commands
# Add members with scores
ZADD myzset 1 "one" 2 "two" 3 "three"
# Get members by rank (ascending)
ZRANGE myzset 0 -1 WITHSCORES
# Get members by rank (descending)
ZREVRANGE myzset 0 -1 WITHSCORES
# Get members by score range
ZRANGEBYSCORE myzset 1 2 WITHSCORES
ZRANGEBYSCORE myzset (1 2 # "(" makes a bound exclusive: 1 < score <= 2
# Get members by score range (descending)
ZREVRANGEBYSCORE myzset 2 1 WITHSCORES
# Get score of member
ZSCORE myzset "one"
# Get number of members in sorted set
ZCARD myzset
# Get number of members in score range
ZCOUNT myzset 1 2
# Get rank of member (ascending)
ZRANK myzset "two"
# Get rank of member (descending)
ZREVRANK myzset "two"
# Remove members
ZREM myzset "one"
# Remove members by rank
ZREMRANGEBYRANK myzset 0 0
# Remove members by score
ZREMRANGEBYSCORE myzset 1 2
# Increment score of member
ZINCRBY myzset 2 "one"
Advanced Sorted Set Operations
# Lexicographical range queries
ZADD mylexset 0 a 0 b 0 c 0 d 0 e
ZRANGEBYLEX mylexset [b (d
# Pop members with lowest scores
ZPOPMIN myzset 2
# Pop members with highest scores
ZPOPMAX myzset 2
# Blocking pop from sorted set
BZPOPMIN myzset 10
BZPOPMAX myzset 10
# Sorted set intersection
ZINTERSTORE destset 2 set1 set2 WEIGHTS 2 3 AGGREGATE SUM
# Sorted set union
ZUNIONSTORE destset 2 set1 set2 WEIGHTS 2 3 AGGREGATE MAX
# Scan sorted set members iteratively
ZSCAN myzset 0 MATCH a* COUNT 10
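The classic sorted-set use case is a leaderboard: ZINCRBY bumps a player's score, ZREVRANGE reads the top N, ZREVRANK gives a player's position. A hedged Python sketch of those semantics over a plain dict (a toy model; real Redis keeps this sorted incrementally in a skiplist rather than re-sorting):

```python
class Leaderboard:
    """Toy model of a sorted-set leaderboard: ZADD, ZINCRBY, ZREVRANGE, ZREVRANK."""

    def __init__(self):
        self._scores = {}  # member -> score

    def zadd(self, member, score):
        self._scores[member] = score

    def zincrby(self, member, delta):
        # ZINCRBY treats a missing member as score 0
        self._scores[member] = self._scores.get(member, 0) + delta
        return self._scores[member]

    def zrevrange(self, start, stop):
        # Highest score first; stop is inclusive, as in Redis
        ranked = sorted(self._scores.items(), key=lambda kv: (-kv[1], kv[0]))
        return ranked[start:stop + 1]

    def zrevrank(self, member):
        ranked = [m for m, _ in self.zrevrange(0, len(self._scores) - 1)]
        return ranked.index(member) if member in ranked else None

lb = Leaderboard()
lb.zadd("alice", 300)
lb.zadd("bob", 150)
lb.zincrby("bob", 200)       # bob -> 350
print(lb.zrevrange(0, 1))    # [('bob', 350), ('alice', 300)]
print(lb.zrevrank("alice"))  # 1
```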
HyperLogLog Operations
Basic HyperLogLog Commands
# Add items to HyperLogLog
PFADD myhll "a" "b" "c"
# Get approximate cardinality
PFCOUNT myhll
# Merge multiple HyperLogLogs
PFMERGE dest_hll hll1 hll2
Geospatial Operations
Basic Geospatial Commands
# Add geospatial items
GEOADD locations 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
# Get position of members
GEOPOS locations "Palermo" "Catania"
# Get distance between members
GEODIST locations "Palermo" "Catania" km
# Find members within radius (GEORADIUS is deprecated since Redis 6.2)
GEORADIUS locations 15 37 200 km WITHDIST WITHCOORD COUNT 5 ASC
# Find members within radius by member
GEORADIUSBYMEMBER locations "Palermo" 100 km
# GEOSEARCH equivalent (Redis 6.2+)
GEOSEARCH locations FROMLONLAT 15 37 BYRADIUS 200 km ASC COUNT 5 WITHCOORD WITHDIST
# Get geohash of members
GEOHASH locations "Palermo" "Catania"
Stream Operations
Basic Stream Commands
# Add entry to stream
XADD mystream * field1 "value1" field2 "value2"
# Get entries from stream
XRANGE mystream - + COUNT 2
# Get entries from stream (reverse)
XREVRANGE mystream + - COUNT 2
# Read from stream
XREAD COUNT 2 STREAMS mystream 0
# Read from stream (block up to 5s waiting for new entries)
XREAD BLOCK 5000 STREAMS mystream $
# Get stream length
XLEN mystream
# Delete entries from stream
XDEL mystream 1672531199000-0
# Trim stream
XTRIM mystream MAXLEN 1000
Consumer Groups
# Create consumer group
XGROUP CREATE mystream mygroup $
# Read from stream as consumer
XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS mystream >
# Acknowledge message
XACK mystream mygroup 1672531199000-0
# Get pending messages
XPENDING mystream mygroup
# Claim pending message
XCLAIM mystream mygroup consumer2 3600000 1672531199000-0
# Get consumer group info
XINFO GROUPS mystream
# Get consumer info
XINFO CONSUMERS mystream mygroup
# Destroy consumer group
XGROUP DESTROY mystream mygroup
Bit Operations
Basic Bit Commands
# Set bit
SETBIT mykey 7 1
# Get bit
GETBIT mykey 7
# Count set bits
BITCOUNT mykey
BITCOUNT mykey 0 0 # In first byte
# Bitwise operations
BITOP AND destkey key1 key2
BITOP OR destkey key1 key2
BITOP XOR destkey key1 key2
BITOP NOT destkey key1
# Find first set/unset bit
BITPOS mykey 1
BITPOS mykey 0
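A classic bitmap use case is tracking daily active users with one bit per user ID: SETBIT marks a user active, BITCOUNT counts them. A hedged Python sketch of those semantics over a bytearray (note that Redis numbers bits from the most significant bit of each byte, which this model reproduces):

```python
class Bitmap:
    """Toy model of SETBIT/GETBIT/BITCOUNT over a bytearray.

    Example use: one bit per user ID to track daily active users.
    """

    def __init__(self):
        self._bytes = bytearray()

    def setbit(self, offset, value):
        byte_i, bit_i = divmod(offset, 8)
        if byte_i >= len(self._bytes):
            # Redis zero-extends the string when setting a bit past the end
            self._bytes.extend(b"\x00" * (byte_i - len(self._bytes) + 1))
        mask = 1 << (7 - bit_i)  # bit 0 is the most significant bit, as in Redis
        if value:
            self._bytes[byte_i] |= mask
        else:
            self._bytes[byte_i] &= ~mask

    def getbit(self, offset):
        byte_i, bit_i = divmod(offset, 8)
        if byte_i >= len(self._bytes):
            return 0  # bits past the end read as 0
        return (self._bytes[byte_i] >> (7 - bit_i)) & 1

    def bitcount(self):
        return sum(bin(b).count("1") for b in self._bytes)

active = Bitmap()
for user_id in (7, 42, 1000):  # mark users 7, 42, 1000 active today
    active.setbit(user_id, 1)
print(active.getbit(42))   # 1
print(active.getbit(43))   # 0
print(active.bitcount())   # 3 active users
```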
Pub/Sub
Basic Pub/Sub Commands
# Subscribe to channels
SUBSCRIBE channel1 channel2
# Unsubscribe from channels
UNSUBSCRIBE channel1
# Publish message to channel
PUBLISH channel1 "Hello, world!"
# Subscribe to channels by pattern
PSUBSCRIBE news.*
# Unsubscribe from channels by pattern
PUNSUBSCRIBE news.*
# Get pub/sub info
PUBSUB CHANNELS
PUBSUB NUMSUB channel1
PUBSUB NUMPAT
Transactions
Basic Transaction Commands
# Start transaction
MULTI
# Queue commands
SET a 1
SET b 2
INCR a
GET a
# Execute transaction
EXEC
# Discard transaction
DISCARD
# Watch keys for changes
WATCH mykey
# Unwatch keys
UNWATCH
# Example with WATCH (optimistic locking; val is held in client code)
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey val
EXEC # Returns nil (transaction aborted) if mykey was changed by another client
Scripting (Lua)
Basic Scripting Commands
# Evaluate Lua script
EVAL "return redis.call('GET', KEYS[1])" 1 mykey
# Evaluate script with arguments
EVAL "return KEYS[1] .. ARGV[1]" 1 mykey " world"
# Load script into cache
SCRIPT LOAD "return redis.call('GET', KEYS[1])"
# Evaluate cached script by SHA1
EVALSHA <sha1> 1 mykey
# Check if scripts exist in cache
SCRIPT EXISTS <sha1> <sha2>
# Flush script cache
SCRIPT FLUSH
# Kill running script
SCRIPT KILL
Lua Script Example
-- atomic_incr_with_limit.lua
local current = redis.call("GET", KEYS[1])
if not current then
current = 0
end
local limit = tonumber(ARGV[1])
if tonumber(current) < limit then
return redis.call("INCR", KEYS[1])
else
return tonumber(current)
end
-- Execute script (inline, or via: redis-cli --eval atomic_incr_with_limit.lua mycounter , 100)
EVAL "...script content..." 1 mycounter 100
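For reference, here is the Lua script's logic translated to Python over a plain dict. The entire point of the Lua version is that Redis executes it atomically in a single step; this translation only illustrates the semantics and has no such guarantee:

```python
def incr_with_limit(store, key, limit):
    """Same logic as the Lua script: increment key unless it has hit limit.

    In Redis the Lua script runs atomically; this dict-based sketch does not.
    """
    current = int(store.get(key, 0))  # missing key counts as 0, as in the script
    if current < limit:
        current += 1
        store[key] = current
    return current

counters = {}
for _ in range(5):
    incr_with_limit(counters, "mycounter", 3)
print(counters["mycounter"])  # 3 (capped at the limit)
```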
Persistence
RDB (Redis Database)
# Configuration (redis.conf)
save 900 1 # Save if 1 key changed in 900s
save 300 10 # Save if 10 keys changed in 300s
save 60 10000 # Save if 10000 keys changed in 60s
dbfilename dump.rdb
dir /var/lib/redis
# Manual save
SAVE # Blocking save
BGSAVE # Background save
AOF (Append Only File)
# Configuration (redis.conf)
appendonly yes
appendfilename "appendonly.aof"
# AOF fsync policy
# appendfsync always # Slowest, most durable
# appendfsync everysec # Default, good balance
# appendfsync no # Fastest, least durable
# Rewrite AOF file
BGREWRITEAOF
Replication
Master-Replica Replication
# On replica (redis.conf)
replicaof <master_ip> <master_port>
# or the REPLICAOF command at runtime (SLAVEOF is the deprecated alias)
# On master (redis.conf)
# No specific configuration needed
# Check replication status
INFO replication
# Promote replica to master
REPLICAOF NO ONE
# or SLAVEOF NO ONE
Sentinel
Sentinel Configuration
# sentinel.conf
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
# Start Sentinel
redis-sentinel /path/to/sentinel.conf
Sentinel Commands
# Connect to Sentinel
redis-cli -p 26379
# Get master info
SENTINEL get-master-addr-by-name mymaster
# Get master status
SENTINEL master mymaster
# Get replicas status
SENTINEL replicas mymaster # SENTINEL slaves is the older alias
# Force failover
SENTINEL failover mymaster
Cluster
Cluster Setup
# redis.conf for cluster nodes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
# Create cluster
redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
Cluster Commands
# Connect to cluster
redis-cli -c -p 7000
# Check cluster status
CLUSTER INFO
CLUSTER NODES
# Add node to cluster
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
# Add replica to master
redis-cli --cluster add-node 127.0.0.1:7007 127.0.0.1:7000 --cluster-slave --cluster-master-id <master_id>
# Reshard cluster
redis-cli --cluster reshard 127.0.0.1:7000
# Delete node from cluster
redis-cli --cluster del-node 127.0.0.1:7006 <node_id>
# Get keys in slot
CLUSTER GETKEYSINSLOT <slot> <count>
# Get slot for key
CLUSTER KEYSLOT <key>
Security
Password Protection
# redis.conf
requirepass your_strong_password
# Authenticate
AUTH your_strong_password
Command Renaming
# redis.conf
rename-command CONFIG ""
rename-command FLUSHALL ""
rename-command DEBUG ""
Network Security
# redis.conf
bind 127.0.0.1 # Bind to localhost
protected-mode yes
Performance Optimization
Memory Optimization
# Use appropriate data structures
# Hashes for objects, sets for unique items, etc.
# Use memory-efficient encodings
# ziplist for small lists/hashes, intset for small sets of integers
# Configure maxmemory policy
# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru
# Policies: volatile-lru, allkeys-lru, volatile-random, allkeys-random, volatile-ttl, noeviction
# Monitor memory usage
INFO memory
MEMORY USAGE mykey
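Under allkeys-lru, once maxmemory is reached Redis evicts the least recently used keys to make room. A hedged Python sketch of that policy using an OrderedDict (a toy model: it caps entry count and evicts exactly, whereas Redis tracks bytes and uses approximate LRU sampling):

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of the allkeys-lru idea: evict the least recently used key."""

    def __init__(self, max_entries):
        self._max = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        while len(self._data) > self._max:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(max_entries=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch "a" so "b" becomes the LRU entry
cache.set("c", 3)      # evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```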
Latency Optimization
# Use pipelining
(printf "PING\r\nSET key value\r\nGET key\r\n"; sleep 1) | nc localhost 6379 # or redis-cli --pipe
# Use Lua scripts for complex atomic operations
# Avoid slow commands in production
# KEYS, FLUSHALL, FLUSHDB, DEBUG, MONITOR
# Monitor slow log
SLOWLOG GET 10
SLOWLOG LEN
SLOWLOG RESET
# redis.conf
slowlog-log-slower-than 10000 # in microseconds
slowlog-max-len 128
Monitoring
Redis Monitoring Tools
# redis-cli
MONITOR
INFO
SLOWLOG GET
# Redis Live
# Web-based monitoring tool
# Prometheus + Grafana
# Use redis_exporter
# Datadog, New Relic, etc.
# Use their Redis integrations
Best Practices
General Best Practices
- Use appropriate data structures for your use case.
- Set expirations on keys to manage memory.
- Use pipelining for multiple commands to reduce latency.
- Use Lua scripts for complex atomic operations.
- Avoid slow commands in production environments.
- Configure persistence (RDB/AOF) based on your durability needs.
- Use replication for high availability.
- Use Sentinel for automatic failover.
- Use Cluster for horizontal scaling.
- Secure your Redis instance with passwords and network binding.
- Monitor your Redis instance for performance and health.
Caching Best Practices
- Cache-Aside Pattern: Application code checks cache first, then database.
- Read-Through/Write-Through: The application talks only to the cache layer, which loads data from (or writes it to) the database on its behalf.
- Write-Back (Write-Behind): Writes go to Redis, then asynchronously to database.
- Cache Eviction Policies: Choose the right maxmemory-policy.
- Time-To-Live (TTL): Set appropriate TTLs for cached data.
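The cache-aside pattern can be sketched in a few lines. Below, a dict stands in for Redis and another for the database (a toy model; a real implementation would also SETEX the cached entry with a TTL):

```python
def get_user(user_id, cache, db):
    """Cache-aside: check the cache first, fall back to the database, then
    populate the cache. Returns (value, source) so the flow is visible.
    """
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is not None:
        return value, "cache"    # cache hit
    value = db[user_id]          # cache miss: query the database
    cache[key] = value           # populate the cache for next time
    return value, "db"

db = {1: {"name": "John Doe"}}
cache = {}
print(get_user(1, cache, db))  # first call misses and reads the database
print(get_user(1, cache, db))  # second call is served from the cache
```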
Session Store Best Practices
- Use SETEX to set session data with expiration.
- Use Hashes to store session attributes.
- Use EXPIRE to update session TTL on activity.
Message Broker Best Practices
- Use Lists for simple message queues.
- Use Pub/Sub for fan-out messaging.
- Use Streams for persistent, multi-consumer message queues.
Summary
Redis is a versatile and high-performance in-memory data store that can be used for a wide range of applications. This cheatsheet provides a comprehensive overview of Redis commands and best practices, from basic key-value operations to advanced features like streams, clustering, and scripting.
Key Strengths:
- Performance: In-memory storage provides extremely fast read and write operations.
- Data Structures: Rich set of data structures beyond simple key-value pairs.
- Versatility: Can be used as a database, cache, message broker, and more.
- Scalability: Supports replication, Sentinel for high availability, and Cluster for horizontal scaling.
Best Use Cases:
- Caching (web pages, database queries, API responses)
- Session management
- Real-time analytics (leaderboards, counters)
- Message queues and job processing
- Pub/Sub messaging systems
- Geospatial applications
Important Considerations:
- Being in-memory, data size is limited by available RAM.
- Persistence configuration is crucial to prevent data loss.
- Security must be configured properly to prevent unauthorized access.
- Understanding data structures is key to effective use.
By leveraging the commands and patterns in this cheatsheet, you can build powerful, scalable, and high-performance applications with Redis.