Designing a Distributed Cache

Learn how to design a distributed caching layer using Redis or Memcached — covering eviction policies, replication, and cache patterns.

A distributed cache stores frequently accessed data in memory across multiple nodes to reduce database load and latency.

Why Cache?

  • Database queries: ~10ms
  • Cache hits: ~0.1ms
  • 100x faster reads

Cache Patterns

Cache-Aside (Lazy Loading)

1. App checks cache
2. Miss → Query DB
3. Store result in cache
4. Return data
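The four steps above can be sketched in Python; plain dicts stand in for the cache (Redis/Memcached) and the database:

```python
# Cache-aside sketch: the application manages the cache explicitly.
# `cache` stands in for Redis/Memcached and `db` for the database.
cache = {}
db = {"user:1": {"name": "Alice"}}

def get_user(key):
    if key in cache:          # 1. app checks cache
        return cache[key]
    value = db.get(key)       # 2. miss -> query DB
    if value is not None:
        cache[key] = value    # 3. store result in cache
    return value              # 4. return data
```

Note that only data that is actually read ever enters the cache, which is why this pattern is also called lazy loading.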

Write-Through

Write to cache and DB simultaneously.

  • Pros: cache always consistent with the DB
  • Cons: extra write latency on every request
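A minimal write-through sketch, again with dicts standing in for the cache and the database:

```python
cache = {}
db = {}

def put(key, value):
    db[key] = value     # write to the database...
    cache[key] = value  # ...and the cache in the same operation,
                        # so a subsequent read never sees stale data
```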

Write-Behind

Write to cache immediately; the DB write happens asynchronously.

  • Pros: fast writes
  • Cons: risk of data loss if the cache node fails before the DB write completes
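A write-behind sketch; the `pending` queue and `flush` function are illustrative stand-ins for a real background-worker pipeline:

```python
from collections import deque

cache = {}
db = {}
pending = deque()  # writes queued for asynchronous persistence

def put(key, value):
    cache[key] = value           # fast path: only the cache is updated
    pending.append((key, value))

def flush():
    # In production this runs on a background worker. Anything queued
    # but not yet flushed is lost if the cache node crashes -- the
    # data-loss risk noted above.
    while pending:
        key, value = pending.popleft()
        db[key] = value
```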

Read-Through

Cache layer automatically loads from DB on miss.
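Read-through can be sketched as a cache that owns the loading logic itself, so callers never touch the database directly (the `loader` callback here is a stand-in for a DB query function):

```python
class ReadThroughCache:
    """The cache layer loads from the backing store on a miss;
    callers only ever talk to the cache."""

    def __init__(self, loader):
        self._store = {}
        self._loader = loader  # e.g. a function that queries the DB

    def get(self, key):
        if key not in self._store:
            # Miss: the cache itself fetches and stores the value.
            self._store[key] = self._loader(key)
        return self._store[key]
```

The difference from cache-aside is who does the loading: here it is the cache layer, not the application.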

Eviction Policies

  • LRU (Least Recently Used): Most common
  • LFU (Least Frequently Used): Frequency-based
  • TTL (Time To Live): Expire after a fixed duration
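LRU, the most common policy, can be sketched with an `OrderedDict` (a toy single-node version; Redis's own LRU is approximate, based on sampling):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order = recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```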

Cache Invalidation

Strategy 1: TTL-based expiry
Strategy 2: Event-driven invalidation (Kafka)
Strategy 3: Write-through consistency
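Strategy 1 can be sketched as a cache that stamps each entry with an expiry time and treats expired entries as misses (this is roughly what Redis's `SETEX`/`EXPIRE` do server-side):

```python
import time

class TTLCache:
    # TTL-based expiry sketch: each entry expires a fixed duration
    # after it is written.
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired: invalidate lazily on read
            return None
        return value
```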

Redis Cluster

  • 16,384 hash slots distributed across nodes
  • Each node handles a subset of slots
  • Automatic failover with replicas
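A key's slot is `CRC16(key) mod 16384`, where Redis uses the CRC16/XMODEM variant. A sketch of the computation (real cluster clients also extract `{hash tag}` substrings first, which is omitted here):

```python
def crc16_xmodem(data: bytes) -> int:
    # Bitwise CRC16/XMODEM (polynomial 0x1021, initial value 0),
    # the checksum Redis Cluster uses for key hashing.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Map every key to one of the cluster's 16,384 hash slots.
    return crc16_xmodem(key.encode()) % 16384
```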

Cache Stampede

When a hot cache entry expires, many requests miss at once and all hit the database simultaneously (a thundering herd).

Solution: a mutex lock so only one request recomputes the value, plus probabilistic early expiration so entries are refreshed before they actually expire rather than all at once.
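A single-process sketch combining both ideas, loosely following the "XFetch" probabilistic-early-expiration scheme (the function names and parameters here are illustrative, and a real system would keep the entry in Redis rather than a global):

```python
import math
import random
import threading
import time

_lock = threading.Lock()
_entry = None  # (value, recompute_duration, expiry_timestamp)

def fetch(recompute, ttl, beta=1.0):
    # Probabilistic early expiration: each reader may decide to
    # recompute slightly *before* the real TTL, with probability
    # rising as expiry approaches, so regenerations are spread out
    # instead of all landing at the expiry instant.
    global _entry
    now = time.monotonic()
    if _entry is not None:
        value, delta, expiry = _entry
        # -log(random()) is positive, so this shifts "effective now"
        # forward in proportion to how long recomputation takes.
        if now - delta * beta * math.log(random.random()) < expiry:
            return value
    with _lock:  # mutex: only one caller recomputes at a time
        start = time.monotonic()
        value = recompute()
        delta = time.monotonic() - start
        _entry = (value, delta, start + delta + ttl)
        return value
```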

Conclusion

Cache-aside with LRU eviction and TTL is the most practical pattern for most production systems.
