Redis Caching Patterns Every MERN Developer Should Know
I added Redis to both Linkite and NeoGPT, and each time I learned something new about when and how to cache. These are the patterns I wish I'd read before I started, explained with actual code from my projects, not toy examples.
Cache-aside (lazy loading)
Cache-aside is the pattern you reach for most often. The application checks the cache before hitting the database. On a miss, it reads from the database and writes the result to the cache. Simple, effective, and easy to reason about.
💡 Cache-aside is read-optimised. Writes still go directly to the database. If you write a lot and consistency is critical, you need a different pattern.
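Here's a minimal sketch of the cache-aside flow in Node. To keep it self-contained, a plain Map stands in for the Redis client and fetchUserFromDb is a hypothetical placeholder for your Mongoose query; in a real app you'd swap them for redis.get / redis.setEx and a real find:

```javascript
// Stand-in for a Redis client: a Map with get/set.
// Swap for redis.get / redis.setEx in a real app.
const cache = new Map();

// Hypothetical DB read; replace with your Mongoose query.
async function fetchUserFromDb(id) {
  return { id, name: `user-${id}` };
}

async function getUser(id) {
  const key = `user:${id}`;
  const hit = cache.get(key);
  if (hit !== undefined) return hit;      // cache hit: skip the DB entirely

  const user = await fetchUserFromDb(id); // cache miss: read the DB...
  cache.set(key, user);                   // ...and populate the cache for next time
  return user;
}
```

The database is only touched on a miss; every subsequent read for the same key is served from the cache until the entry expires or is evicted.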
Write-through
Write-through keeps the cache in sync with every write. When you update the database, you also update the cache. More writes, but you never serve stale data. I use this in NeoGPT for thread objects: when a new message comes in, I update MongoDB and the Redis cache at the same time.
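A write-through update can be sketched like this. Both stores are stand-in Maps here (saveThread and getThread are illustrative names, not from my actual codebase), but the ordering is the point: persist to the database first, then mirror into the cache, so a read never finds a cached value the database doesn't have:

```javascript
const cache = new Map(); // stand-in for a Redis client
const db = new Map();    // stand-in for MongoDB

// Write-through: every write goes to the DB *and* the cache.
async function saveThread(threadId, thread) {
  db.set(threadId, thread);                // 1. persist to the database
  cache.set(`thread:${threadId}`, thread); // 2. mirror into the cache
}

// Reads prefer the cache but can always fall back to the DB.
async function getThread(threadId) {
  return cache.get(`thread:${threadId}`) ?? db.get(threadId);
}
```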
TTL strategy: not all data ages equally
One mistake I made early on: giving everything the same TTL. A URL that gets clicked 1000 times a day deserves a much longer TTL than one that was created and never touched again. Think about access patterns when you set TTLs:
- Short-lived sessions (auth tokens): 15-60 minutes
- User profile data: 1-6 hours
- URL redirect targets: 24 hours (they rarely change)
- Chat thread context: 1 hour after last message
- Analytics aggregates: 5-15 minutes
Eviction policies
Redis has several eviction policies that kick in when memory is full. The one I recommend for most MERN apps is allkeys-lru: evict the least recently used key from anywhere. It means your hottest data stays in cache automatically without you having to manually manage what to keep.
Set it in your Redis config with: maxmemory-policy allkeys-lru. And always set a maxmemory limit, because without one Redis will happily eat all available RAM.
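Concretely, the two lines in redis.conf look like this (256mb is just an example cap, size it for your box):

```conf
# redis.conf: cap memory, then evict least-recently-used keys
# from the whole keyspace once the cap is hit.
maxmemory 256mb
maxmemory-policy allkeys-lru
```

You can also apply both at runtime with CONFIG SET via redis-cli if you don't want a restart.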
One thing to watch out for: cache stampede
If a cached item expires and you get a burst of requests at that exact moment, all of them miss the cache and hit the database simultaneously, and suddenly your database is handling 50 identical concurrent queries. This is called a cache stampede.
The simplest fix: jitter your TTLs. Instead of setex(key, 3600, value), use setex(key, 3600 + Math.floor(Math.random() * 300), value) (SETEX only accepts whole seconds, so floor the jitter). Stagger expirations so not everything expires at once. For high-traffic keys, you can also implement a 'lock' pattern where only one process regenerates the cache while others serve the stale value.
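Both ideas fit in a few lines. This is a sketch, not production code: the cache and lock set are in-process Maps standing in for Redis (a real multi-process lock would use SET NX with an expiry), and getWithLock is a hypothetical helper name:

```javascript
const cache = new Map(); // stand-in for a Redis client
const locks = new Set(); // stand-in for a distributed lock (SET NX in real Redis)

// Jittered TTL: base seconds plus up to `spread` seconds of random offset,
// so keys written together don't all expire together. Floored because
// SETEX only accepts whole seconds.
function jitteredTtl(base = 3600, spread = 300) {
  return base + Math.floor(Math.random() * spread);
}

// Stampede guard: on a miss, only the first caller regenerates the value;
// concurrent callers see the lock and get null (serve stale / fall back).
async function getWithLock(key, regenerate) {
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  if (locks.has(key)) return null; // someone else is already regenerating
  locks.add(key);
  try {
    const fresh = await regenerate();
    cache.set(key, fresh); // in real Redis: setex(key, jitteredTtl(), fresh)
    return fresh;
  } finally {
    locks.delete(key);
  }
}
```

The null return is where you'd plug in "serve the stale copy": keep the old value around (e.g. under a shadow key) instead of deleting it on expiry.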
Final thought
Caching is one of those things where a little bit goes a long way. Even a simple cache-aside layer in front of MongoDB makes a measurable difference in response times. Start simple, measure, then tune your TTLs and patterns based on real access data.