
Redis Caching in Production: Patterns, Pitfalls, and Practical Code

A deep technical guide to Redis caching patterns for production — cache-aside, write-through, invalidation strategies, and ioredis examples you can copy.


RaidFrame Team

January 16, 2026 · 7 min read

TL;DR — Redis caching cuts database load by 90%+ when done right. Use cache-aside for reads, TTL-based invalidation for simplicity, and never cache without a TTL. This guide covers the patterns, the code, and the mistakes to avoid.

Why do you need a cache?

Three things are slow in every web application: database queries, external API calls, and expensive computation. A cache stores results so you don't repeat work.

A typical PostgreSQL query takes 2-20ms. A Redis lookup takes 0.1-0.5ms. If you're scaling PostgreSQL and hitting read bottlenecks, a Redis cache is often the fastest fix before you reach for read replicas.

Cache-aside pattern (the one you should use first)

Cache-aside (lazy-loading) is the most common pattern. Check cache, fall back to source on miss, populate cache for next time.

import Redis from "ioredis";
const redis = new Redis(process.env.REDIS_URL);
 
async function cacheAside<T>(
  key: string, ttlSeconds: number, fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached) as T;
 
  const data = await fetchFn();
  await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
  return data;
}
 
// Usage
const user = await cacheAside(`user:${id}`, 300, () =>
  db.query("SELECT * FROM users WHERE id = $1", [id])
);

This is the right default. It only caches data that's actually requested and handles misses gracefully.

What about write-through caching?

Write-through updates the cache together with the database on every write, so the cache never holds stale data for that key. Useful for data that's read far more often than it's written — user profiles, product details, configuration.

async function updateUser(id: string, data: Partial<User>) {
  const updated = await db.query(
    "UPDATE users SET name = $1, email = $2 WHERE id = $3 RETURNING *",
    [data.name, data.email, id]
  );
  await redis.set(`user:${id}`, JSON.stringify(updated), "EX", 300);
  return updated;
}

How should you handle cache invalidation?

TTL-based invalidation

Set an expiration and let Redis handle it. Good enough for 80% of use cases.

await redis.set("rate:user:123", count, "EX", 60);       // Volatile: 60s
await redis.set("api:weather:nyc", data, "EX", 300);      // API responses: 5min
await redis.set("config:feature-flags", flags, "EX", 3600); // Static: 1hr+

Event-driven invalidation

Delete the cache key when underlying data changes. More complex but fresher.

async function updateProduct(id: string, data: Partial<Product>) {
  await db.query("UPDATE products SET ... WHERE id = $1", [id]);
  await redis.del(`product:${id}`);
  await redis.del(`product-list:category:${data.categoryId}`);
}

Versioned keys

Append a version to cache keys. Bump to invalidate everything at once.

const version = await redis.get("cache:version:products") || "1";
const key = `products:v${version}:list`;
// Invalidate all product caches:
await redis.incr("cache:version:products");


What should you cache?

Good candidates: session data, API responses from third-party services, computed aggregates (dashboard stats, leaderboard rankings), rate limit counters, and feature flags. These are read-heavy and either slow to compute or expensive to fetch.

What should you NOT cache?

Avoid caching user-specific mutable data that changes every request, real-time financial balances, or anything requiring strong consistency. If stale data causes a bug, don't cache it.

Which Redis data structures should you use?

Strings — GET/SET with JSON. The default for most caching needs.

Hashes — store object fields without serialization:

await redis.hset("user:123", { name: "Alice", plan: "pro", loginCount: "42" });
const plan = await redis.hget("user:123", "plan");

Sorted sets — leaderboards and ranked data:

await redis.zadd("leaderboard:weekly", score, `user:${id}`);
const top10 = await redis.zrevrange("leaderboard:weekly", 0, 9, "WITHSCORES");

Lists — lightweight queues for background jobs or activity feeds.
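For example, a minimal job-queue sketch built on LPUSH/BRPOP. The QueueClient type and the enqueueJob/dequeueJob helpers are illustrative (not part of ioredis); the client is injected so the same functions work against a real ioredis instance or a stub:

```typescript
// Minimal queue sketch: producers LPUSH onto the head,
// a worker BRPOPs from the tail (FIFO order).
type QueueClient = {
  lpush(key: string, value: string): Promise<number>;
  brpop(key: string, timeoutSec: number): Promise<[string, string] | null>;
};

// Push a job onto the queue; returns the new queue length.
async function enqueueJob(client: QueueClient, queue: string, job: object): Promise<number> {
  return client.lpush(queue, JSON.stringify(job));
}

// Block up to 5s waiting for a job; null means the queue stayed empty.
async function dequeueJob(client: QueueClient, queue: string): Promise<object | null> {
  const res = await client.brpop(queue, 5);
  return res ? JSON.parse(res[1]) : null;
}
```

With ioredis, `enqueueJob(redis, "jobs:email", { to: "..." })` works as-is, since ioredis exposes lpush/brpop with these shapes.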

How do you configure ioredis for production?

const redis = new Redis(process.env.REDIS_URL, {
  maxRetriesPerRequest: 3,
  retryStrategy: (times) => Math.min(times * 200, 2000),
  reconnectOnError: (err) => err.message.includes("READONLY"),
  lazyConnect: true,
});
redis.on("error", (err) => console.error("Redis error:", err));

Never let a cache failure crash your application. Redis is an optimization layer. If it's down, fall through to the database.
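A defensive wrapper makes that concrete. This is a sketch of cache-aside that degrades to the source when Redis is unreachable — safeCacheAside and the CacheClient type are illustrative names, not ioredis APIs; the client is injected so the fallback path is easy to exercise:

```typescript
// Cache-aside that treats Redis as optional: any cache error is
// logged and the request falls through to fetchFn.
type CacheClient = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, mode: "EX", ttl: number): Promise<unknown>;
};

async function safeCacheAside<T>(
  client: CacheClient, key: string, ttlSeconds: number, fetchFn: () => Promise<T>
): Promise<T> {
  try {
    const cached = await client.get(key);
    if (cached !== null) return JSON.parse(cached) as T;
  } catch (err) {
    console.error("cache read failed, falling through:", err);
  }
  const data = await fetchFn(); // source of truth always wins
  try {
    await client.set(key, JSON.stringify(data), "EX", ttlSeconds);
  } catch (err) {
    console.error("cache write failed (non-fatal):", err);
  }
  return data;
}
```

The key design choice: fetchFn sits outside both try blocks, so a dead cache costs you latency, never correctness.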

How should you handle cache stampedes?

A cache stampede happens when a popular key expires and hundreds of requests simultaneously hit the database. Use a mutex lock — only the first request fetches, others wait:

async function cacheAsideWithLock<T>(key: string, ttl: number, fetchFn: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached) as T;
 
  const acquired = await redis.set(`lock:${key}`, "1", "EX", 5, "NX");
  if (acquired) {
    try {
      const data = await fetchFn();
      await redis.set(key, JSON.stringify(data), "EX", ttl);
      return data;
    } finally {
      await redis.del(`lock:${key}`); // release even if fetchFn throws
    }
  }
  await new Promise((r) => setTimeout(r, 100));
  return cacheAsideWithLock(key, ttl, fetchFn);
}

Memory management and eviction

allkeys-lru is the recommended eviction policy for caching. It evicts the least recently used key across all keys. volatile-lru only evicts keys with a TTL set, which breaks if you forget TTL on some keys.

maxmemory 512mb
maxmemory-policy allkeys-lru

Always set a TTL. Keys without one accumulate, consume memory, and serve stale data indefinitely.
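To audit an existing keyspace for TTL-less keys, you can walk it with SCAN and check each key's TTL (-1 means the key exists but has no expiry). A sketch, assuming an ioredis-style scan/ttl client — findKeysWithoutTtl is an illustrative helper, and since it touches every matching key, run it sparingly against production:

```typescript
// Walk the keyspace with SCAN (non-blocking, unlike KEYS) and
// collect keys whose TTL is -1, i.e. keys that will never expire.
type ScanClient = {
  scan(cursor: string, ...args: (string | number)[]): Promise<[string, string[]]>;
  ttl(key: string): Promise<number>;
};

async function findKeysWithoutTtl(client: ScanClient, pattern = "*"): Promise<string[]> {
  const missing: string[] = [];
  let cursor = "0";
  do {
    const [next, keys] = await client.scan(cursor, "MATCH", pattern, "COUNT", 100);
    cursor = next;
    for (const key of keys) {
      if ((await client.ttl(key)) === -1) missing.push(key); // -1 = exists, no TTL
    }
  } while (cursor !== "0"); // SCAN is done when the cursor returns to "0"
  return missing;
}
```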

What should you monitor?

Four metrics:

  • Hit rate — keyspace_hits / (keyspace_hits + keyspace_misses); below 80% means you're caching the wrong things
  • Memory usage — used_memory vs maxmemory
  • Eviction count — a rising count means you need more memory
  • Command latency — should be sub-millisecond
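Redis doesn't expose hit rate as a single number, but you can derive it from the INFO stats output. A small sketch — hitRate is an illustrative helper, not a Redis command:

```typescript
// Parse keyspace_hits / keyspace_misses out of INFO stats text
// (lines like "keyspace_hits:90\r\n") and compute the hit rate.
function hitRate(infoStats: string): number {
  const num = (field: string) =>
    Number(infoStats.match(new RegExp(`${field}:(\\d+)`))?.[1] ?? 0);
  const hits = num("keyspace_hits");
  const misses = num("keyspace_misses");
  return hits + misses === 0 ? 0 : hits / (hits + misses);
}

// Usage with ioredis: const rate = hitRate(await redis.info("stats"));
```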

Common mistakes that will burn you

  • Caching everything. If data is only read once, caching adds overhead with no benefit.
  • No TTL on keys. Stale data is worse than slow data. Always set a TTL.
  • Not handling Redis failures. Wrap cache calls in try/catch and fall through to the source.
  • Storing huge values. Cache specific fields, not 10MB JSON blobs.

Practical example: rate limiter

async function rateLimit(userId: string, limit: number, windowSec: number): Promise<boolean> {
  const key = `rate:${userId}`;
  const current = await redis.incr(key);
  if (current === 1) await redis.expire(key, windowSec); // first hit starts the window
  return current <= limit;
}
 
const allowed = await rateLimit("user:123", 100, 60);
if (!allowed) return res.status(429).json({ error: "Too many requests" });

Deploy Redis on RaidFrame

Adding Redis to your RaidFrame app takes one command:

# Add a managed Redis instance to your project
rf add redis

That's it. RaidFrame provisions Redis on the same private network as your application — no external connections, no VPC peering, no latency overhead. Your REDIS_URL environment variable is auto-injected:

# Verify the connection string is set
rf env list
# REDIS_URL=redis://default:****@redis-xxxxx.internal:6379
 
# Or set it manually if you need a custom variable name
rf env set CACHE_URL $(rf env get REDIS_URL)

What you get out of the box

  • Sub-millisecond latency — Redis runs on the same private network as your app, not across the internet
  • allkeys-lru eviction — configured for caching workloads by default
  • Persistence — RDB snapshots so your cache survives restarts
  • Monitoring — memory usage, hit rate, and command latency visible in rf logs and the dashboard
  • No connection limits — no per-connection charges or artificial caps

Full stack example

# Deploy your app with Postgres and Redis in three commands
rf deploy
rf add postgres
rf add redis
# REDIS_URL and DATABASE_URL are auto-injected. Start building.

RaidFrame manages eviction policies, persistence, and monitoring so you don't have to tune Redis configs in production. Whether you're deploying a Next.js app, building a real-time WebSocket service, or running infrastructure that needs auto-scaling, Redis is one command away. Start free on RaidFrame.

FAQ

How much memory do I need for Redis caching?

Start with 256MB. A million cached JSON objects averaging 200 bytes is roughly 200MB of payload, before Redis's per-key overhead. Most applications never need more than 1GB for caching.

Should I use Redis or Memcached?

Redis. More data structures, built-in persistence, pub/sub, Lua scripting. Unless you have a specific reason for Memcached, Redis wins.

Can I use Redis as my primary database?

You can, but you shouldn't. Use it for caching, sessions, rate limiting, and real-time features. Use PostgreSQL for primary data.

How do I handle cache warming on deploy?

Let cache-aside populate naturally. Warm critical hot data (feature flags, config) on startup. A cold cache fills within minutes under normal traffic.

What's a good cache hit rate?

Above 90% is excellent. 80-90% is good. Below 80% means wrong data or short TTLs.

Should I use Redis Cluster?

Only if you need 25GB+ of cache. A single instance handles 100,000+ ops/sec. Start simple.

How do I test caching locally?

Run docker run -p 6379:6379 redis:7-alpine. Same ioredis code, different REDIS_URL.

