
Redis Tutorial: Caching Patterns & Real-World Use Cases

A hands-on Next.js tutorial app for learning Redis database caching patterns and real-world use cases. The app uses an in-memory SQLite database as the primary data store and Redis as the caching layer, letting you observe cache hits, misses, and invalidation in real time.


Complete Tutorial Code

Follow along with the complete source code for this Redis tutorial. The app has four interactive tabs: a live database view, basic caching, advanced caching techniques, and real-world production patterns.


Why Redis?

Redis is one of the most widely adopted tools in distributed systems. It shows up everywhere — from startup side projects to systems handling millions of requests per second at Twitter, GitHub, Snapchat, and Stack Overflow. Understanding Redis is a core skill for any backend engineer.

Blazing Fast

Redis stores all data in memory, making reads and writes orders of magnitude faster than a traditional disk-based database. Sub-millisecond response times are the norm.

Shared Cache Across Servers

In a distributed system with dozens of application servers behind a load balancer, Redis acts as a single shared cache that every server reads from and writes to — keeping data consistent across the fleet.

Reduces Database Load

Databases are often the bottleneck in high-traffic systems. By serving repeated reads from Redis instead of hitting the database every time, you can handle far more traffic without scaling up your database.

Atomic Operations

Redis executes every command atomically. In a distributed system where dozens of servers hit the cache simultaneously, atomicity guarantees that operations like INCR, SETNX, and ZADD complete as a single, indivisible step — no race conditions.

Beyond simple key-value strings, Redis supports Hashes, Lists, Sets, Sorted Sets, and more — making it useful for caching, session storage, leaderboards, pub/sub messaging, rate limiting, and queues, all in one tool. Redis also has built-in TTL (Time To Live), so cached data automatically disappears after a set time without manual cleanup.
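To illustrate the TTL behavior, here is a minimal in-memory sketch in TypeScript — a plain Map with an injectable clock stands in for Redis, so nothing here is the real client API. With an actual client such as ioredis, the equivalent command is `SET key value EX 60`.

```typescript
// Sketch of TTL semantics. A Map with an expiry timestamp per entry stands in
// for Redis; the injectable clock makes expiry easy to observe in a test.
type Entry = { value: string; expiresAt: number };

class TtlStore {
  private entries = new Map<string, Entry>();
  constructor(private now: () => number = Date.now) {}

  // SET key value EX seconds
  set(key: string, value: string, ttlSeconds: number): void {
    this.entries.set(key, { value, expiresAt: this.now() + ttlSeconds * 1000 });
  }

  // GET key — returns null once the TTL has elapsed, like an expired Redis key
  get(key: string): string | null {
    const e = this.entries.get(key);
    if (!e) return null;
    if (this.now() >= e.expiresAt) {
      this.entries.delete(key);
      return null;
    }
    return e.value;
  }
}
```

In Redis itself the expiry is handled server-side; your application never has to run the cleanup logic shown here.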

Prerequisites

You only need two things to run this tutorial:

  • Node.js (v18 or later)
  • Docker — to run Redis as a container (no local Redis installation required)

Getting Started

  1. Start Redis Stack as a Docker container:
     docker run -d --name redis-tutorial -p 6379:6379 -p 8001:8001 redis/redis-stack:latest

     This bundles Redis with Redis Insight (a visual browser UI) in a single container. Open http://localhost:8001 to browse keys, inspect values, and watch TTLs count down in real time.

  2. Clone the repository:
     git clone https://github.com/audoir/redis-tutorial.git
  3. Install dependencies:
     npm install
  4. Run the development server:
     npm run dev

     Open http://localhost:3000 in your browser.

Project Structure

The project is organized around four tabs, each backed by dedicated API routes and UI components:

redis-tutorial/
├── app/
│   ├── api/
│   │   ├── database/          # Serves raw SQLite data to the View Database tab
│   │   ├── basic-cache/       # Cache-Aside and Write-Through API routes
│   │   ├── advanced-cache/    # Advanced caching technique API routes
│   │   │   └── techniques/    # One file per caching technique
│   │   ├── real-world/        # Real-world use case API routes
│   │   │   └── use-cases/     # One file per use case (GET + POST handlers)
│   │   └── redis-status/      # Redis connection health check
│   ├── components/
│   │   ├── PageHeader.tsx               # App header with Redis connection status
│   │   ├── TabNavigation.tsx            # Top-level tab bar
│   │   ├── DatabaseView.tsx             # Tab 1 — View Database
│   │   ├── BasicDatabaseCaching/        # Tab 2 — Basic Caching
│   │   ├── AdvancedDatabaseCaching/     # Tab 3 — Advanced Caching
│   │   ├── RealWorldUseCases/           # Tab 4 — Real World Use Cases
│   │   │   └── panels/                 # Per-use-case control panels
│   │   └── shared/CachingShared.tsx     # Shared UI components
│   └── page.tsx               # Root page with tab navigation
├── lib/
│   ├── db.ts                  # SQLite (better-sqlite3) setup & seed data
│   ├── redis.ts               # Redis (ioredis) client
│   └── types.ts               # Shared TypeScript types
└── README.md

Tutorial Overview

The app has four tabs. Work through them in order for the best learning experience — each one builds on the concepts introduced before it.

🗄️ Tab 1 — View Database

A live view of the underlying SQLite database. Browse the inventory (10 products), customers (8 records), and sales (20 records with joins) tables. The SQL query being executed is shown above each table — this is the exact query that Redis will cache in the next tabs.

SQLite · better-sqlite3 · Live Data

⚡ Tab 2 — Basic Database Caching

The two most fundamental Redis caching patterns: Cache-Aside (lazy loading) and Write-Through (proactive caching). Watch cache hits and misses in the Activity Log and observe the key difference between reactive and proactive caching.

Cache-Aside · Write-Through · TTL · Cache Invalidation

🚀 Tab 3 — Advanced Database Caching

Six advanced caching techniques including the four relational database caching strategies from the AWS Database Caching Strategies whitepaper, plus two bonus aggregate-query patterns. Each technique uses a different Redis data structure and cache key strategy.

SQL ResultSet · JSON Fields · Redis Hash · Serialized Object · Aggregation · JOIN + GROUP BY

🌍 Tab 4 — Real World Use Cases

Seven production-grade Redis patterns that power real applications. Each use case highlights a different Redis data structure and command set — from API caching and session management to distributed locking and batch write buffers.

API Caching · Sessions · Rate Limiting · Leaderboard · Pub/Sub · Distributed Lock · Batch Write

Tab 2 — Basic Database Caching

🔵 Pattern A — Cache-Aside (Lazy Loading)

Cache-Aside is a reactive pattern. The cache is only populated when data is actually requested. The app checks Redis first; on a cache miss it queries SQLite, stores the result in Redis with a 60-second TTL, then returns the data.

// Cache-Aside flow
1. Check Redis: GET query:inventory
2. Cache HIT  → return data immediately (fast ⚡)
3. Cache MISS → query SQLite
              → SET query:inventory <result> EX 60
              → return data
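The flow above can be sketched in TypeScript. This is an illustrative sketch, not the tutorial's actual code: `cache` stands in for the Redis client (with ioredis these calls would be awaited), and `queryDb` stands in for the SQLite query.

```typescript
// Cache-Aside: check the cache first; on a miss, query the database,
// populate the cache with a TTL, and return the fresh result.
type Cache = {
  get(key: string): string | null;
  set(key: string, value: string, ttlSeconds: number): void;
};

function getInventory(
  cache: Cache,
  queryDb: () => object[],
): { data: object[]; hit: boolean } {
  const key = "query:inventory";
  const cached = cache.get(key);              // 1. check Redis first
  if (cached !== null) {
    return { data: JSON.parse(cached), hit: true }; // 2. cache HIT
  }
  const rows = queryDb();                     // 3. cache MISS → query SQLite
  cache.set(key, JSON.stringify(rows), 60);   //    SET query:inventory <result> EX 60
  return { data: rows, hit: false };
}
```

Note that only reads populate the cache — a write invalidates or simply waits for the TTL, which is exactly why the first read after a change is always a miss.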

🟣 Pattern B — Write-Through

Write-Through is a proactive pattern. The cache is updated at the same time as the database, so reads almost always find data in the cache. With Write-Through, the first read after a write is a cache hit; with Cache-Aside, the first read after data changes (or after the TTL expires) is a miss that falls through to the database.

// Write-Through flow
1. Write to SQLite (UPDATE / INSERT)
2. Immediately refresh Redis: SET query:inventory <result> EX 60
3. Next read → Cache HIT (cache was pre-populated by the write)
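A corresponding Write-Through sketch, again with in-memory Maps standing in for SQLite and Redis (all names are illustrative):

```typescript
// Write-Through: the write path updates the database AND refreshes the cache
// in the same operation, so the next read is already a hit.
type Product = { id: number; stock: number };

function writeThroughUpdate(
  db: Map<number, Product>,
  cache: Map<string, string>,
  id: number,
  stock: number,
): void {
  db.set(id, { id, stock });                                      // 1. write to SQLite
  cache.set("query:inventory", JSON.stringify([...db.values()])); // 2. refresh Redis immediately
}

function readInventory(cache: Map<string, string>): { hit: boolean; data: unknown } {
  const cached = cache.get("query:inventory");                    // 3. next read → HIT
  return cached !== undefined
    ? { hit: true, data: JSON.parse(cached) }
    : { hit: false, data: null };
}
```

The trade-off: every write pays the cost of refreshing the cache, even for data that may never be read again.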

Tab 3 — Advanced Database Caching

This tab explores six advanced caching techniques. All use a 60-second TTL. Select a technique, run a query, and watch the Activity Log.

📋 Technique 1 — Cache SQL ResultSet

Redis type: String · Key: sql:<base64(SQL)>

The SQL query string itself is used as the cache key (base64-encoded). The full result set is serialized and stored as a single Redis string. Ideal when the same query is repeated frequently with the same parameters.
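Deriving such a key in Node might look like the following sketch (the tutorial's actual key-building code may differ — this just shows the base64 idea):

```typescript
// Build a cache key from the SQL text itself. Identical queries map to the
// same key; any change to the SQL (even whitespace) produces a new key.
function sqlCacheKey(sql: string): string {
  return `sql:${Buffer.from(sql).toString("base64")}`;
}
```

Base64-encoding keeps the key a single token with no spaces or quotes, which makes it safe to use directly as a Redis key and easy to inspect in Redis Insight.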

📄 Technique 2 — Cache Fields as JSON

Redis type: String (JSON) · Key: customer:id:<id>

Only a selected subset of customer fields is serialized as a JSON string and stored in Redis — not the entire row. Useful when your app only needs specific fields and you want to minimize cache size.

🗂️ Technique 3 — Cache into Redis Hash

Redis type: Hash (HSET / HGETALL) · Key: customer:hash:<id>

Each customer field is stored as a separate field in a Redis Hash. This allows you to read or update individual fields without deserializing the whole object — a key advantage over the JSON string approach.
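A minimal sketch of the hash-per-field idea — a Map stands in for the Redis Hash, and the comments show the corresponding Redis commands (the key `customer:hash:42` is an illustrative example):

```typescript
// One hash field per customer field: individual fields can be read or updated
// without serializing/deserializing the whole object.
type Hash = Map<string, string>;
const customerHash: Hash = new Map();

// HSET customer:hash:42 name "Ada" city "London"
customerHash.set("name", "Ada");
customerHash.set("city", "London");

// HGET customer:hash:42 city — read one field, no JSON.parse of a whole object
const cityBefore = customerHash.get("city");

// HSET customer:hash:42 city "Paris" — update one field in place
customerHash.set("city", "Paris");
```

With the JSON-string approach (Technique 2), the same update would require GET, JSON.parse, mutate, JSON.stringify, SET — five steps instead of one.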

🧩 Technique 4 — Cache Serialized Object

Redis type: String (JSON) · Key: customer:object:<id>

A rich application-level object is constructed (including computed fields like totalSpent and recentPurchases) and serialized as a single JSON string. The most feature-rich caching approach — derived fields are computed once at cache-population time.

🏆 Technique 5 — Cache Aggregated Query (Leaderboard)

Redis type: String (JSON array) · Key: leaderboard:top-products

An expensive GROUP BY aggregation query (top-selling products by revenue) is cached as a single key shared by all users. Ideal for dashboards and leaderboards where the same expensive query is run by many users.

🌆 Technique 6 — Cache JOIN + GROUP BY

Redis type: String (JSON array) · Key: aggregate:city-summary

The most expensive query type — a multi-table JOIN combined with a GROUP BY — is cached as a single result set. This technique shows the biggest performance difference between a cache miss and a cache hit.

Tab 4 — Real World Use Cases

This tab goes beyond database caching and demonstrates seven production-grade Redis patterns. Each use case highlights a different Redis data structure and command set.

🌐 Use Case 1 — API Response Caching

Redis type: String (JSON blob) · Key: api-cache:<endpoint>

External API calls can be slow (100–500ms) and expensive (rate-limited or billed per request). Redis caches the response so subsequent callers get sub-millisecond reads. Choose from three simulated endpoints and configurable TTLs.

🔐 Use Case 2 — Session Management

Redis type: Hash (HSET / HGETALL) · Key: session:<sessionId>

Sessions need fast reads on every authenticated request. Redis Hashes store structured session data with per-field updates and automatic TTL-based expiry — no cron job or database polling needed. Demonstrates login, refresh, and logout flows.
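A sketch of the login / refresh / logout lifecycle. An in-memory Map stands in for Redis, and the 30-minute TTL is an assumed value — the comments map each step to the Redis commands involved:

```typescript
// Session store with sliding expiry: every authenticated request refreshes
// the TTL, so active sessions stay alive and idle ones expire on their own.
type Session = { userId: string; expiresAt: number };
const sessions = new Map<string, Session>();
const TTL_MS = 30 * 60 * 1000; // assumed 30-minute session TTL

function login(sessionId: string, userId: string, now: number): void {
  sessions.set(sessionId, { userId, expiresAt: now + TTL_MS }); // HSET + EXPIRE
}

function touch(sessionId: string, now: number): Session | null {
  const s = sessions.get(sessionId);
  if (!s || now >= s.expiresAt) { sessions.delete(sessionId); return null; } // expired
  s.expiresAt = now + TTL_MS; // sliding expiry: EXPIRE session:<id> 1800
  return s;
}

function logout(sessionId: string): void {
  sessions.delete(sessionId); // DEL session:<id>
}
```

In Redis the expiry side is free: the key simply disappears when the TTL runs out, so no sweeper job is needed.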

🚦 Use Case 3 — Rate Limiting

String (INCR) · Sorted Set (ZADD) · Token Bucket

Redis is ideal for rate limiting because its operations are atomic — no race conditions. Three algorithms are demonstrated: Fixed Window (simple, fast), Sliding Window (accurate, no boundary bursts), and Token Bucket (allows short bursts up to capacity).
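The simplest of the three, Fixed Window, can be sketched as follows. A Map of counters stands in for Redis; in Redis itself the increment is a single atomic INCR (plus EXPIRE on the first hit), and old windows expire on their own:

```typescript
// Fixed Window rate limiter: one counter per (client, window). The window
// index is derived from the timestamp, so counters reset at each boundary.
const counters = new Map<string, number>();

function fixedWindowAllow(
  client: string,
  limit: number,
  nowMs: number,
  windowMs = 60_000, // one-minute windows
): boolean {
  const window = Math.floor(nowMs / windowMs);
  const key = `rate:${client}:${window}`;      // old windows simply expire in Redis
  const count = (counters.get(key) ?? 0) + 1;  // INCR rate:<client>:<window>
  counters.set(key, count);
  return count <= limit;
}
```

The known weakness — a client can burst at a window boundary (up to 2× the limit across two adjacent windows) — is exactly what the Sliding Window variant in the tab fixes.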

Use Case 4 — Real-Time Leaderboard

Redis type: Sorted Set (ZADD / ZREVRANGE / ZINCRBY) · Key: leaderboard:game-scores

Redis Sorted Sets maintain a ranked list automatically. ZADD updates a score in O(log N); ZREVRANGE retrieves the top M entries in O(log N + M). No SQL GROUP BY or ORDER BY needed. Seed players, update scores, and watch the leaderboard update instantly.
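The Sorted Set operations can be sketched like this. A score Map with sort-on-read stands in for the real structure — Redis keeps members ordered incrementally, which is what makes its updates O(log N) rather than a full re-sort:

```typescript
// Leaderboard sketch mirroring ZADD / ZINCRBY / ZREVRANGE over a score map.
const scores = new Map<string, number>();

// ZADD leaderboard:game-scores <score> <member>
function zadd(member: string, score: number): void {
  scores.set(member, score);
}

// ZINCRBY leaderboard:game-scores <delta> <member> — returns the new score
function zincrby(member: string, delta: number): number {
  const next = (scores.get(member) ?? 0) + delta;
  scores.set(member, next);
  return next;
}

// ZREVRANGE leaderboard:game-scores 0 n-1 WITHSCORES — top N, highest first
function topN(n: number): [string, number][] {
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}
```

The sort here is O(N log N) on every read; the whole point of the Sorted Set is that Redis avoids that by keeping the ranking as writes happen.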

Use Case 5 — Pub/Sub & Event Log

Pub/Sub (PUBLISH) · List (LPUSH / LRANGE) · Key: pubsub:event-log

Redis PUBLISH broadcasts messages to all subscribers in real time. A Redis List acts as a ring buffer for the event log — LPUSH prepends new events, LTRIM caps the list at 50 entries, and LRANGE reads them back. Publish events across four channels and multiple event types.
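A sketch of the ring-buffer side, with PUBLISH modeled as a list of subscriber callbacks and an array standing in for the Redis List (the 50-entry cap matches the LTRIM described above):

```typescript
// Event log as a capped ring buffer: LPUSH prepends the newest event,
// LTRIM 0 49 keeps only the 50 most recent, LRANGE reads them back.
const subscribers: ((msg: string) => void)[] = [];
const eventLog: string[] = [];
const MAX_EVENTS = 50;

function publish(msg: string): void {
  for (const fn of subscribers) fn(msg);  // PUBLISH <channel> <msg> — fan-out
  eventLog.unshift(msg);                  // LPUSH pubsub:event-log <msg>
  eventLog.length = Math.min(eventLog.length, MAX_EVENTS); // LTRIM pubsub:event-log 0 49
}

function recent(n: number): string[] {
  return eventLog.slice(0, n);            // LRANGE pubsub:event-log 0 n-1
}
```

The two halves are complementary: Pub/Sub is fire-and-forget (a subscriber that is offline misses the message), while the List gives late joiners a replayable recent history.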

Use Case 6 — Distributed Locking

Redis type: String (SET NX EX) · Key: lock:<resource>

Multiple services may try to modify the same resource simultaneously, causing race conditions. Redis SET NX EX is atomic — only one caller gets OK. The lock value stores the owner ID so only the holder can release it, and the TTL guarantees automatic expiry if the holder crashes (deadlock prevention).
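The acquire/release logic can be sketched as follows. An in-memory map stands in for Redis; note that in production the owner check plus delete in `release` must run as one Lua script (or a transaction) so it stays atomic, whereas here single-threaded JavaScript makes it trivially so:

```typescript
// Distributed-lock sketch: SET lock:<resource> <owner> NX EX <ttl> acquires;
// release succeeds only if the stored owner matches the caller.
const locks = new Map<string, { owner: string; expiresAt: number }>();

function acquire(resource: string, owner: string, ttlMs: number, now: number): boolean {
  const held = locks.get(resource);
  if (held && now < held.expiresAt) return false;         // NX: key already set
  locks.set(resource, { owner, expiresAt: now + ttlMs }); // SET ... NX EX — only one caller wins
  return true;
}

function release(resource: string, owner: string): boolean {
  const held = locks.get(resource);
  if (!held || held.owner !== owner) return false; // only the holder may release
  locks.delete(resource);                          // DEL lock:<resource>
  return true;
}
```

The TTL is the deadlock safety net: if the holder crashes, the key expires and the lock becomes acquirable again without any operator intervention.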

📦 Use Case 7 — Batch Write Buffer

Redis type: List (RPUSH / LRANGE / DEL) · Key: batch-write:inventory-updates

High-frequency writes (stock updates, click events, sensor readings) can overwhelm a relational database if each event triggers an individual UPDATE. Redis acts as a write buffer — updates are appended to a List with RPUSH (microseconds), then flushed to the database in a single transaction. Multiple updates to the same row are aggregated before the flush, dramatically reducing DB writes.
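A sketch of the buffer-and-flush cycle. An array stands in for the Redis List and `applyBatch` for the SQL transaction; the aggregation step is what collapses many events into few row updates:

```typescript
// Batch write buffer: events are appended cheaply, then a periodic flush
// drains the buffer, merges per-row deltas, and applies one batched write.
type StockUpdate = { productId: number; delta: number };
const buffer: StockUpdate[] = [];

function recordUpdate(u: StockUpdate): void {
  buffer.push(u); // RPUSH batch-write:inventory-updates <update> — microseconds
}

function flush(applyBatch: (updates: StockUpdate[]) => void): number {
  const pending = buffer.splice(0);      // LRANGE 0 -1 then DEL — drain the list
  const byProduct = new Map<number, number>();
  for (const u of pending) {
    byProduct.set(u.productId, (byProduct.get(u.productId) ?? 0) + u.delta);
  }
  const merged = [...byProduct.entries()].map(([productId, delta]) => ({ productId, delta }));
  applyBatch(merged);                    // one transaction instead of N UPDATEs
  return merged.length;
}
```

With Redis across multiple servers, the drain step should be made atomic (e.g. LRANGE + DEL inside MULTI/EXEC, or LPOP with a count) so two flushers never double-apply the same events.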

Key Concepts Demonstrated

Cache-Aside vs. Write-Through

The two fundamental caching strategies — reactive lazy loading vs. proactive cache population — and when to use each.

Redis Data Structures

Hands-on use of Strings, Hashes, Lists, and Sorted Sets — each suited to different caching and data management patterns.

TTL & Cache Invalidation

Setting expiry on cache keys, observing automatic TTL-based expiry, and manually invalidating cache entries to force a fresh database read.

Atomic Operations

Using Redis atomic commands (INCR, SETNX, ZADD) for race-condition-free rate limiting, distributed locking, and leaderboard updates.

Pub/Sub Messaging

Broadcasting events to subscribers in real time using Redis Pub/Sub, combined with a List-based ring buffer for persistent event logging.

Production Patterns

Real-world patterns used in production systems: session management, API response caching, batch write buffering, and distributed locking with deadlock prevention.

Stopping Redis

When you're done with the tutorial, stop and remove the Redis container:

# Stop the container
docker stop redis-tutorial

# Remove the container (optional)
docker rm redis-tutorial

# To start it again later, re-run the docker run command from Step 1
