Designing a Distributed Lock Manager

Learn how to implement distributed locks using Redis SETNX, Redlock, and Zookeeper to coordinate access to shared resources.

srikanthtelkalapally888@gmail.com

Distributed locks ensure that only one process or node at a time can access a shared resource across a distributed system.

Use Cases

  • Prevent duplicate cron job execution
  • Coordinate inventory deduction
  • Leader election
  • Mutual exclusion for shared resources

Redis SETNX Lock

SET lock:{resource} {token} NX PX 30000
  • NX: Only set if not exists
  • PX 30000: Expire in 30 seconds (prevent deadlock if process dies)
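To make the acquire semantics concrete, here is a minimal sketch in Python. It uses a toy in-memory store (the `InMemoryRedis` class is an illustration, not a real client) that mimics `SET ... NX PX`: the write succeeds only if the key is absent or already expired, and each caller generates a random token that later proves ownership.

```python
import time
import uuid

class InMemoryRedis:
    """Toy store mimicking Redis SET key value NX PX semantics."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set_nx_px(self, key, value, ttl_ms):
        now = time.monotonic()
        entry = self._data.get(key)
        # An expired entry counts as absent (Redis also expires keys lazily).
        if entry is not None and entry[1] > now:
            return False  # key exists -> NX fails
        self._data[key] = (value, now + ttl_ms / 1000.0)
        return True

def acquire_lock(store, resource, ttl_ms=30_000):
    """Try to acquire the lock; return a unique token on success, else None."""
    token = str(uuid.uuid4())  # random token identifies this holder at release
    if store.set_nx_px(f"lock:{resource}", token, ttl_ms):
        return token
    return None

store = InMemoryRedis()
t1 = acquire_lock(store, "orders")   # first caller gets a token
t2 = acquire_lock(store, "orders")   # second caller is locked out -> None
```

Against a real Redis, the same acquire is a single command (`SET lock:{resource} {token} NX PX 30000`), so no extra coordination is needed.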

Release lock:

-- Atomic check-and-delete with Lua
if redis.call('GET', KEYS[1]) == ARGV[1] then
  return redis.call('DEL', KEYS[1])
else
  return 0
end
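The release logic can be sketched in Python as well. A plain dict stands in for Redis here; the important point, which the Lua script above guarantees, is that the GET and DEL happen atomically so another client cannot acquire the lock between the check and the delete.

```python
def release_lock(store, resource, token):
    """Compare-and-delete, mirroring the Lua script above.

    store is a plain dict standing in for Redis. Against a real server
    these two steps must run inside Lua (or a transaction), otherwise a
    lock that expires between GET and DEL could be deleted out from
    under its new owner."""
    key = f"lock:{resource}"
    if store.get(key) == token:
        del store[key]
        return 1   # like DEL: one key removed
    return 0       # token mismatch: lock expired and was re-acquired

locks = {"lock:orders": "token-A"}
release_lock(locks, "orders", "token-B")  # returns 0: not our lock
release_lock(locks, "orders", "token-A")  # returns 1: owner releases
```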

Fencing Tokens

Problem: Process pauses (GC), lock expires, another process acquires lock, first process resumes and corrupts data.

Solution: Monotonically increasing fencing token.

Lock acquired: token=33
Storage service rejects writes with token < 33
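A sketch of the storage-side check, assuming a hypothetical `FencedStorage` service: it remembers the highest fencing token it has seen and rejects any write carrying an older one, so a zombie holder that resumes after a GC pause cannot corrupt data.

```python
class FencedStorage:
    """Hypothetical storage service enforcing fencing tokens: writes
    with a token older than the highest seen so far are rejected."""
    def __init__(self):
        self.max_token = 0
        self.data = {}

    def write(self, key, value, token):
        if token < self.max_token:
            return False            # stale holder (e.g. resumed after GC pause)
        self.max_token = token
        self.data[key] = value
        return True

storage = FencedStorage()
storage.write("inventory", "v-new", 33)  # current holder, token=33: accepted
storage.write("inventory", "v-old", 32)  # zombie with older lock: rejected
```

The token must come from the lock service itself (e.g. a counter incremented on every acquisition) so it is guaranteed monotonic.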

Redlock (Multi-Node)

For fault tolerance, acquire lock on N Redis nodes (e.g., 5):

1. Record start time
2. Try to acquire lock on all 5 nodes
3. If a majority (3+) acquired AND elapsed time < lock TTL → lock acquired
4. Otherwise release all and retry
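The steps above can be sketched as follows. The `FakeNode` class and its `try_lock`/`unlock` methods are stand-ins for independent Redis instances, not a real client API; a production Redlock also subtracts clock drift from the validity time and uses the compare-and-delete release so it never frees a lock it does not hold.

```python
import time

def redlock_acquire(nodes, resource, token, ttl_ms):
    """Redlock sketch: acquire on a majority of nodes within the TTL."""
    start = time.monotonic()
    acquired = [n for n in nodes if n.try_lock(resource, token, ttl_ms)]
    elapsed_ms = (time.monotonic() - start) * 1000
    validity_ms = ttl_ms - elapsed_ms   # real algorithm also subtracts drift
    if len(acquired) >= len(nodes) // 2 + 1 and validity_ms > 0:
        return True
    for n in nodes:                     # failed: release everywhere, retry later
        n.unlock(resource, token)
    return False

class FakeNode:
    """Stand-in for one independent Redis instance."""
    def __init__(self, free=True):
        self.free = free
    def try_lock(self, resource, token, ttl_ms):
        if self.free:
            self.free = False
            return True
        return False
    def unlock(self, resource, token):
        self.free = True

# 3 of 5 nodes are free -> majority -> lock acquired
nodes = [FakeNode(), FakeNode(), FakeNode(),
         FakeNode(free=False), FakeNode(free=False)]
ok = redlock_acquire(nodes, "orders", "tok-1", 30_000)
```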

Zookeeper Lock

Creates ephemeral sequential znodes:

/locks/resource-0000001
/locks/resource-0000002  ← I watch the one before me

When lowest-numbered node = mine, I have the lock.
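A toy model of that recipe, under the assumption that the coordinator assigns monotonically increasing sequence numbers and drops a client's node when its session ends (which is what ephemeral znodes give you). The `ToyLockQueue` class is an illustration; real clients use a library recipe such as Kazoo's `Lock` rather than hand-rolling this.

```python
class ToyLockQueue:
    """Toy model of ZooKeeper's lock recipe: each client creates an
    ephemeral sequential node and holds the lock while its node is
    the lowest-numbered one."""
    def __init__(self):
        self._seq = 0
        self._nodes = []  # ordered by sequence number

    def create_sequential(self):
        self._seq += 1
        name = f"resource-{self._seq:07d}"
        self._nodes.append(name)
        return name

    def holds_lock(self, name):
        return bool(self._nodes) and self._nodes[0] == name

    def predecessor(self, name):
        # The node this client should watch: the one just before its own.
        i = self._nodes.index(name)
        return self._nodes[i - 1] if i > 0 else None

    def delete(self, name):
        # Explicit release, or the session dies and the ephemeral node vanishes.
        self._nodes.remove(name)

q = ToyLockQueue()
a = q.create_sequential()   # resource-0000001: holds the lock
b = q.create_sequential()   # resource-0000002: watches resource-0000001
```

Watching only the immediate predecessor (rather than the whole directory) avoids a herd effect: when a lock is released, exactly one waiter is notified.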

Comparison

                   Redis            Zookeeper
Simplicity         High             Medium
Fault tolerance    Redlock needed   Built-in
Performance        High             Lower

Conclusion

Redis SETNX is simple and fast for single-node locking. Use Redlock when you need fault tolerance across multiple nodes. Choose Zookeeper for stronger guarantees when performance is secondary.