Concurrency in LLD
Concurrency is the most commonly underestimated aspect of LLD interviews. Almost every real system — parking lots, booking platforms, rate limiters, inventory systems — has multiple users acting simultaneously. This guide covers race conditions, every major synchronization mechanism, deadlock prevention, and lock-free approaches with the depth needed to ace any LLD interview.
Why Concurrency is Unavoidable in LLD
Every LLD interview question implicitly involves concurrency. Interviewers never ask you to design a single-user system. Consider:
- Parking Lot — two cars arrive simultaneously and try to book the last spot
- Movie Ticket Booking — two users select the same seat at the same time
- Rate Limiter — thousands of requests per second hitting a shared counter
- Inventory Management — two orders simultaneously reduce the same item count to zero
- Bank Account — concurrent deposits and withdrawals on the same account
Ignoring concurrency in these designs produces a solution that is fundamentally broken in production. Demonstrating that you think about it proactively is one of the clearest signals of seniority.
Understanding Race Conditions
A race condition occurs when the correctness of a program depends on the relative timing of operations by multiple threads. It is a bug that only manifests under specific timing conditions — often extremely hard to reproduce and debug.
The Classic Read-Modify-Write Race
class UnsafeSeatBooking {
private boolean[] seats = new boolean[100]; // false = available
// BUG: Thread A and Thread B can both pass the check simultaneously
public boolean bookSeat(int seatId) {
if (!seats[seatId]) { // Step 1: READ — both threads read 'false'
seats[seatId] = true; // Steps 2-3: MODIFY and WRITE — both threads set 'true'
return true; // both threads report success
}
return false;
}
}
// Result: two users think they booked the same seat
Why this happens: Steps 1, 2, and 3 are not atomic. The JVM can switch between threads between any two steps. If Thread A completes step 1, then the scheduler switches to Thread B, which completes all three steps, then switches back to Thread A — Thread A saw false, but the seat is now taken.
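The unlucky interleaving can be made deterministic with a latch placed between the read and the write (the latch and the harness below are our instrumentation, not part of the original class): both threads pass the availability check before either writes, so both report success.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static final boolean[] seats = new boolean[100];
    static final CountDownLatch bothChecked = new CountDownLatch(2);
    static final AtomicInteger successes = new AtomicInteger();

    static void bookSeat(int seatId) throws InterruptedException {
        if (!seats[seatId]) {            // Step 1: READ — both threads see 'false'
            bothChecked.countDown();
            bothChecked.await();         // hold both threads here until both have passed the check
            seats[seatId] = true;        // Steps 2-3: both threads write 'true'
            successes.incrementAndGet(); // both report success
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable book = () -> {
            try { bookSeat(0); } catch (InterruptedException ignored) { }
        };
        Thread a = new Thread(book), b = new Thread(book);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("bookings that succeeded for seat 0: " + successes.get());
    }
}
```

The latch only widens a window that already exists; without it, the same double booking happens occasionally and unpredictably, which is what makes race conditions so hard to debug.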
Visibility Problems
Even without race conditions, threads can see stale values due to CPU caches and compiler reordering:
class UnsafeFlag {
private boolean done = false; // Not volatile!
public void producer() {
done = true; // Written to CPU cache, may not be visible to other CPUs
}
public void consumer() {
while (!done) { /* may loop forever if stale value in cache */ }
}
}
The volatile keyword forces reads and writes to go to main memory, ensuring visibility across threads.
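A minimal sketch of the fix: with done declared volatile, the consumer's spin loop is guaranteed to observe the producer's write and terminate.

```java
public class VolatileFlagDemo {
    // volatile guarantees the consumer thread sees the producer's write
    private static volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!done) { } // busy-wait; terminates because 'done' is volatile
            System.out.println("consumer observed done=true");
        });
        consumer.start();
        Thread.sleep(100);    // let the consumer start spinning
        done = true;          // producer: the write is made visible to the consumer
        consumer.join();
        System.out.println("finished");
    }
}
```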
Synchronization Mechanism 1: synchronized
The synchronized keyword is Java's built-in locking mechanism. It uses the object's intrinsic monitor lock.
Method-Level Synchronization
class SafeSeatBooking {
private boolean[] seats = new boolean[100];
// Only one thread can execute this method at a time
public synchronized boolean bookSeat(int seatId) {
if (!seats[seatId]) {
seats[seatId] = true;
return true;
}
return false;
}
public synchronized boolean cancelBooking(int seatId) {
if (seats[seatId]) {
seats[seatId] = false;
return true;
}
return false;
}
}
Block-Level Synchronization (Preferred for Fine-Grained Locking)
class TheatreBookingSystem {
private final boolean[] seats = new boolean[100];
private final Object[] seatLocks = new Object[100];
public TheatreBookingSystem() {
// One lock per seat — threads booking different seats don't block each other
for (int i = 0; i < 100; i++) {
seatLocks[i] = new Object();
}
}
public boolean bookSeat(int seatId) {
synchronized (seatLocks[seatId]) { // Lock only the specific seat
if (!seats[seatId]) {
seats[seatId] = true;
return true;
}
return false;
}
}
}
Why per-seat locks matter: If you lock the entire booking system, threads trying to book seat 1 and seat 50 block each other unnecessarily. Per-seat locks allow 100 concurrent bookings for different seats with zero contention.
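A quick harness for the per-seat design (the class is compacted here so the snippet runs standalone): ten threads race for seat 7 and exactly one wins, while seat 8 remains bookable throughout.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PerSeatLockDemo {
    // compact copy of the per-seat-lock booking system above
    static class TheatreBookingSystem {
        private final boolean[] seats = new boolean[100];
        private final Object[] seatLocks = new Object[100];
        TheatreBookingSystem() {
            for (int i = 0; i < 100; i++) seatLocks[i] = new Object();
        }
        boolean bookSeat(int seatId) {
            synchronized (seatLocks[seatId]) {
                if (!seats[seatId]) { seats[seatId] = true; return true; }
                return false;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TheatreBookingSystem theatre = new TheatreBookingSystem();
        AtomicInteger wins = new AtomicInteger();
        Thread[] racers = new Thread[10];
        for (int i = 0; i < 10; i++) {
            racers[i] = new Thread(() -> {
                if (theatre.bookSeat(7)) wins.incrementAndGet(); // all race for seat 7
            });
            racers[i].start();
        }
        for (Thread t : racers) t.join();
        System.out.println("winners for seat 7: " + wins.get());        // exactly one
        System.out.println("seat 8 still free: " + theatre.bookSeat(8)); // unaffected
    }
}
```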
Synchronization Mechanism 2: ReentrantLock
ReentrantLock provides more flexibility than synchronized. Like synchronized, it is reentrant: a thread already holding the lock can acquire it again without deadlocking. Unlike synchronized, it adds timeouts, interruptibility, and optional fairness.
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.TimeUnit;
class ParkingLot {
private final int[] slots; // 0 = empty, vehicleId otherwise
private final ReentrantLock[] slotLocks;
public ParkingLot(int capacity) {
slots = new int[capacity];
slotLocks = new ReentrantLock[capacity];
for (int i = 0; i < capacity; i++) {
slotLocks[i] = new ReentrantLock(true); // fair=true: FIFO ordering
}
}
public boolean parkVehicle(int slotId, int vehicleId) {
// tryLock with timeout — don't wait forever
try {
if (slotLocks[slotId].tryLock(100, TimeUnit.MILLISECONDS)) {
try {
if (slots[slotId] == 0) {
slots[slotId] = vehicleId;
return true;
}
return false;
} finally {
slotLocks[slotId].unlock(); // Always unlock in finally
}
}
return false; // Could not acquire lock in time
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return false;
}
}
}
ReentrantLock advantages over synchronized:
- tryLock(timeout) — attempt to acquire, give up after a timeout (prevents indefinite blocking)
- lockInterruptibly() — can be interrupted while waiting (useful for responsive UIs)
- new ReentrantLock(true) — fair lock: threads acquire in FIFO order, preventing starvation
- Ability to lock and unlock in different methods (dangerous but sometimes necessary)
- getQueueLength() — inspect how many threads are waiting
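Reentrancy itself is easy to demonstrate: the same thread can acquire the lock twice, and getHoldCount() reports the nesting depth.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                 // first acquisition
        lock.lock();                 // same thread re-acquires without deadlocking
        System.out.println("hold count: " + lock.getHoldCount());
        lock.unlock();
        lock.unlock();               // must unlock once per lock()
        System.out.println("held after unlocks: " + lock.isHeldByCurrentThread());
    }
}
```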
Synchronization Mechanism 3: ReadWriteLock
Many data structures have an asymmetric access pattern: reads are frequent, writes are rare. A standard lock treats reads and writes identically — only one thread at a time. A ReadWriteLock allows concurrent reads while still protecting writes.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
class ConfigurationStore {
private final Map<String, String> config = new HashMap<>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
// Multiple threads can read simultaneously
public String get(String key) {
rwLock.readLock().lock();
try {
return config.get(key);
} finally {
rwLock.readLock().unlock();
}
}
// Only one thread can write; all reads blocked during write
public void set(String key, String value) {
rwLock.writeLock().lock();
try {
config.put(key, value);
} finally {
rwLock.writeLock().unlock();
}
}
}
When ReadWriteLock shines:
- In-memory caches (thousands of reads, occasional cache invalidation)
- Configuration stores (read on every request, update rarely)
- Route tables, feature flags, A/B test configs
When it doesn't help: If writes are as frequent as reads, the coordination overhead of ReadWriteLock is worse than a simple mutex.
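The asymmetry is directly observable: while one thread holds the read lock, a second reader gets in immediately, but a writer cannot (its tryLock returns false). A sketch, assuming Java's ReentrantReadWriteLock, which also refuses to upgrade a read lock to a write lock in the same thread:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        rwLock.readLock().lock(); // main thread holds a read lock
        boolean[] results = new boolean[2];
        Thread other = new Thread(() -> {
            results[0] = rwLock.readLock().tryLock();  // second reader: allowed
            if (results[0]) rwLock.readLock().unlock();
            results[1] = rwLock.writeLock().tryLock(); // writer: refused while any read lock is held
            if (results[1]) rwLock.writeLock().unlock();
        });
        other.start();
        other.join(); // join gives us visibility of the results array
        rwLock.readLock().unlock();
        System.out.println("concurrent read allowed: " + results[0]);
        System.out.println("write allowed during read: " + results[1]);
    }
}
```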
Synchronization Mechanism 4: Atomic Classes
For simple counter and reference operations, atomic classes provide lock-free thread safety using CPU-level compare-and-swap (CAS) instructions — faster than locks for single-value operations.
import java.util.concurrent.atomic.*;
class RateLimiter {
private final int maxRequests;
private final AtomicInteger counter = new AtomicInteger(0);
private volatile long windowStart = System.currentTimeMillis();
public RateLimiter(int maxRequestsPerSecond) {
this.maxRequests = maxRequestsPerSecond;
}
public boolean allowRequest() {
long now = System.currentTimeMillis();
if (now - windowStart >= 1000) {
// Benign race: two threads may both reset the window, allowing a brief burst
// at the boundary; acceptable for a sketch, synchronize here for strict limits
counter.set(0);
windowStart = now;
}
// incrementAndGet is atomic — no race condition
return counter.incrementAndGet() <= maxRequests;
}
}
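A quick run of the limiter above (the class is repeated inside the demo so the snippet runs standalone): with a limit of 3 per second, the first three requests pass and the rest of the burst is rejected.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimiterDemo {
    // compact copy of the fixed-window limiter above
    static class RateLimiter {
        private final int maxRequests;
        private final AtomicInteger counter = new AtomicInteger(0);
        private volatile long windowStart = System.currentTimeMillis();
        RateLimiter(int maxRequestsPerSecond) { this.maxRequests = maxRequestsPerSecond; }
        boolean allowRequest() {
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) { counter.set(0); windowStart = now; }
            return counter.incrementAndGet() <= maxRequests;
        }
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(3);
        for (int i = 1; i <= 5; i++) {
            // all five calls land in the same one-second window
            System.out.println("request " + i + ": " + limiter.allowRequest());
        }
    }
}
```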
// Other useful atomic classes:
AtomicLong sequenceGenerator = new AtomicLong(0);
long nextId = sequenceGenerator.incrementAndGet(); // Thread-safe ID generation
AtomicReference<CacheEntry> cache = new AtomicReference<>();
cache.compareAndSet(null, new CacheEntry(value)); // Only sets if currently null
AtomicBoolean initialized = new AtomicBoolean(false);
if (initialized.compareAndSet(false, true)) {
// Only one thread enters this block
expensiveInitialization();
}
Deadlock: Detection, Prevention, and Avoidance
A deadlock occurs when two or more threads each hold a lock the other needs, causing all to wait forever.
Classic Deadlock Example
class DeadlockExample {
private final Object lockA = new Object();
private final Object lockB = new Object();
// Thread 1 calls this
public void method1() throws InterruptedException {
synchronized (lockA) { // Acquires lockA
Thread.sleep(100); // Meanwhile, Thread 2 acquires lockB
synchronized (lockB) { // Waits for lockB — DEADLOCK
System.out.println("Method 1");
}
}
}
// Thread 2 calls this simultaneously
public void method2() throws InterruptedException {
synchronized (lockB) { // Acquires lockB
Thread.sleep(100); // Meanwhile, Thread 1 acquires lockA
synchronized (lockA) { // Waits for lockA — DEADLOCK
System.out.println("Method 2");
}
}
}
}
Four Coffman Conditions (all must hold for deadlock)
1. Mutual exclusion — only one thread can hold a lock
2. Hold and wait — threads hold existing locks while waiting for new ones
3. No preemption — locks cannot be forcibly taken from a thread
4. Circular wait — Thread A waits for a lock held by Thread B, while Thread B waits for a lock held by Thread A
Break any one condition to prevent deadlock.
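Detection, the other half of this section's title, is built into the JVM: ThreadMXBean can report deadlocked threads at runtime. The harness below manufactures the deadlock from the example above deterministically (the latch is our instrumentation) and then detects it; daemon threads let the JVM exit afterwards.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetectionDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lockA = new Object(), lockB = new Object();
        CountDownLatch bothHeld = new CountDownLatch(2); // both threads must hold their first lock
        Runnable[] tasks = {
            () -> { synchronized (lockA) { await(bothHeld); synchronized (lockB) { } } },
            () -> { synchronized (lockB) { await(bothHeld); synchronized (lockA) { } } }
        };
        for (Runnable task : tasks) {
            Thread t = new Thread(task);
            t.setDaemon(true); // don't keep the JVM alive once main exits
            t.start();
        }
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = null;
        for (int i = 0; i < 50 && ids == null; i++) { // poll until the deadlock is visible
            Thread.sleep(100);
            ids = mx.findDeadlockedThreads();
        }
        System.out.println("deadlocked threads found: " + (ids == null ? 0 : ids.length));
    }

    static void await(CountDownLatch latch) {
        latch.countDown();
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

In production, the same API is what tools like jstack and JConsole use under the hood; detection tells you a deadlock exists, but recovery usually means restarting, which is why prevention is the focus below.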
Prevention Strategies
Strategy 1: Lock Ordering (break circular wait)
// Always acquire locks in the same global order
// Order by System.identityHashCode for a consistent global ordering
// (on the rare hash collision, fall back to a dedicated tie-breaker lock)
public void transfer(Account from, Account to, double amount) {
Account first = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
Account second = first == from ? to : from;
synchronized (first) {
synchronized (second) {
from.debit(amount);
to.credit(amount);
}
}
}
Strategy 2: tryLock with timeout (break hold and wait)
public boolean transfer(Account from, Account to, double amount) throws InterruptedException {
while (true) {
if (from.lock.tryLock(50, TimeUnit.MILLISECONDS)) {
try {
if (to.lock.tryLock(50, TimeUnit.MILLISECONDS)) {
try {
from.debit(amount);
to.credit(amount);
return true;
} finally {
to.lock.unlock();
}
}
} finally {
from.lock.unlock();
}
}
// Back off and retry — prevents livelock with random sleep
Thread.sleep((long)(Math.random() * 10));
}
}
Strategy 3: Single global lock (eliminate multiple locks)
For simpler systems, use one coarse-grained lock. You trade concurrency for simplicity. Acceptable when contention is low.
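The ordered-locking transfer from Strategy 1 can be exercised end to end (the Account class here is a hypothetical minimal version): two threads hammer transfers in opposite directions, the exact pattern that deadlocks with naive lock acquisition, yet the run completes and the total balance is conserved.

```java
public class LockOrderingDemo {
    // hypothetical minimal account; real designs would use BigDecimal for money
    static class Account {
        double balance;
        Account(double balance) { this.balance = balance; }
        void debit(double amount) { balance -= amount; }
        void credit(double amount) { balance += amount; }
    }

    static void transfer(Account from, Account to, double amount) {
        // Always acquire locks in the same global order
        Account first = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
        Account second = first == from ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.debit(amount);
                to.credit(amount);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1000), b = new Account(1000);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join(); // completes: no deadlock despite opposite transfer directions
        System.out.println("total balance: " + (a.balance + b.balance));
    }
}
```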
Concurrent Data Structures
Java's java.util.concurrent package provides high-performance thread-safe collections:
// ConcurrentHashMap — fine-grained per-bin locking (lock striping pre-Java 8), far better than a synchronized HashMap
ConcurrentHashMap<String, User> userCache = new ConcurrentHashMap<>();
userCache.putIfAbsent(userId, user); // Atomic: only inserts if key absent
// CopyOnWriteArrayList — writes create a new copy, reads need no lock
// Best for: lists read frequently, written rarely (event listeners, whitelist)
CopyOnWriteArrayList<EventListener> listeners = new CopyOnWriteArrayList<>();
// LinkedBlockingQueue — thread-safe queue for producer-consumer patterns
BlockingQueue<Task> taskQueue = new LinkedBlockingQueue<>(1000);
taskQueue.put(task); // Blocks if queue full
Task t = taskQueue.take(); // Blocks if queue empty
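The blocking queue drives the producer-consumer pattern directly; a runnable sketch (the poison-pill shutdown convention is ours, not part of the API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    private static final String POISON_PILL = "STOP"; // sentinel telling the consumer to quit

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> taskQueue = new LinkedBlockingQueue<>(1000);
        Thread consumer = new Thread(() -> {
            int processed = 0;
            try {
                while (true) {
                    String task = taskQueue.take();      // blocks while the queue is empty
                    if (task.equals(POISON_PILL)) break;
                    processed++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("tasks processed: " + processed);
        });
        consumer.start();
        for (int i = 0; i < 5; i++) {
            taskQueue.put("task-" + i);                  // blocks if the queue is full
        }
        taskQueue.put(POISON_PILL);                      // signal shutdown
        consumer.join();
    }
}
```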
// Semaphore — limit concurrent access to a resource pool
Semaphore dbPoolSemaphore = new Semaphore(10); // Max 10 concurrent DB connections
dbPoolSemaphore.acquire(); // Blocks if 10 threads already holding permits
try { useDatabase(); } finally { dbPoolSemaphore.release(); }
Interview Application: Thread-Safe Parking Lot
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
class ParkingLot {
private final int capacity;
private final AtomicInteger availableSpots;
private final ConcurrentHashMap<String, Integer> vehicleToSpot;
private final boolean[] spots; // protected by spot-level locks
private final ReentrantLock[] spotLocks;
public ParkingLot(int capacity) {
this.capacity = capacity;
this.availableSpots = new AtomicInteger(capacity);
this.vehicleToSpot = new ConcurrentHashMap<>();
this.spots = new boolean[capacity];
this.spotLocks = new ReentrantLock[capacity];
for (int i = 0; i < capacity; i++) spotLocks[i] = new ReentrantLock();
}
public Optional<Integer> park(String vehicleId) {
// Quick check without acquiring any lock
if (availableSpots.get() == 0) return Optional.empty();
for (int i = 0; i < capacity; i++) {
if (!spots[i] && spotLocks[i].tryLock()) { // unlocked read is only a hint; re-checked under the lock
try {
if (!spots[i]) { // Double-check after acquiring lock
spots[i] = true;
availableSpots.decrementAndGet();
vehicleToSpot.put(vehicleId, i);
return Optional.of(i);
}
} finally {
spotLocks[i].unlock();
}
}
}
return Optional.empty();
}
public boolean exit(String vehicleId) {
Integer spotId = vehicleToSpot.remove(vehicleId);
if (spotId == null) return false;
spotLocks[spotId].lock();
try {
spots[spotId] = false;
availableSpots.incrementAndGet();
} finally {
spotLocks[spotId].unlock();
}
return true;
}
}
Interview Tip
In any LLD interview, after designing the class structure, proactively say: "Let me think about thread safety. Two vehicles could try to book the same spot simultaneously — that's a race condition on the spot's availability flag. I'd use per-spot ReentrantLocks rather than a single global lock, so that threads competing for different spots don't block each other. For the overall available count, I'd use an AtomicInteger so I can do a fast check before acquiring any lock."
This shows you understand: (1) where the concurrency problem is, (2) the right granularity of locking, and (3) lock-free optimisations. That's a senior-level answer.