Java concurrency questions separate candidates who've read documentation from those who've debugged a race condition at 2am. The questions aren't really about whether you know the API — they're about whether you understand why the API exists, what can go wrong, and what the tradeoffs are between approaches.
These ten questions come up repeatedly at senior and lead-level Java interviews. Each answer goes beyond the surface-level response and into the details that actually matter.
## What's the difference between `synchronized` and `ReentrantLock`?

Both protect a shared resource from concurrent access, but they solve different problems.
synchronized is simpler — built into the language, automatically released when the block exits (even if an exception is thrown), and carries no explicit overhead.
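As a minimal sketch of that simplicity, here is a counter guarded entirely by `synchronized` methods (the class name is illustrative):

```java
// The lock is the instance itself and is released automatically when a
// method returns, even if an exception is thrown mid-way.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() {
        count++; // safe here: only one thread at a time holds the monitor
    }

    public synchronized int get() {
        return count;
    }
}
```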
ReentrantLock from java.util.concurrent.locks gives you control that synchronized doesn't:
```java
ReentrantLock lock = new ReentrantLock();

// Try acquiring with a timeout instead of blocking indefinitely
if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
    try {
        // critical section
    } finally {
        lock.unlock(); // must always be in finally
    }
} else {
    // handle lock not acquired
}
```
Key differences:
| Feature | synchronized | ReentrantLock |
|---|---|---|
| Interruptible lock wait | No | Yes (lockInterruptibly()) |
| Timed lock attempt | No | Yes (tryLock(timeout)) |
| Fairness policy | No | Yes (constructor arg) |
| Multiple conditions | No | Yes (newCondition()) |
| Manual unlock required | No | Yes (in finally) |
The biggest gotcha with ReentrantLock: forgetting unlock() in the finally block. Unlike synchronized, if you throw an exception inside the critical section without a finally, the lock is never released and every subsequent thread hangs forever.
When to use ReentrantLock: when you need tryLock (deadlock avoidance), interruptible waiting, or fairness guarantees. Otherwise, synchronized is less code and harder to misuse.
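One of those capabilities, interruptible waiting, looks like this in a minimal sketch (class and field names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWorker {
    private final ReentrantLock lock = new ReentrantLock();
    private int completed = 0;

    // Waits for the lock, but the wait can be cancelled via
    // Thread.interrupt() — something a synchronized block cannot do.
    public int doWork() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            completed++; // stand-in for the real critical section
            return completed;
        } finally {
            lock.unlock();
        }
    }
}
```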
## What does the `volatile` keyword do, and when isn't it enough?

`volatile` has two effects:
- **Visibility**: writes to a `volatile` variable are immediately visible to all threads (no stale cached values)
- **Ordering**: reads and writes of a `volatile` variable cannot be reordered with surrounding memory operations

```java
// Without volatile: other threads may keep seeing a stale cached value
private volatile boolean running = true;

public void stop() {
    running = false; // visible to all threads immediately
}

public void run() {
    while (running) {
        // process work
    }
}
```
But volatile does not make compound operations atomic. This is a common source of bugs:
```java
private volatile int counter = 0;

// This is NOT thread-safe even with volatile:
// read-modify-write is three operations, not one
counter++;
```
For increment, you need AtomicInteger:
```java
private AtomicInteger counter = new AtomicInteger(0);

counter.incrementAndGet(); // atomic read-modify-write
```
Rule of thumb: use volatile for a single boolean flag or a reference that's written by one thread and read by many. As soon as you need read-modify-write atomicity, reach for AtomicXxx or a lock.
## What is the happens-before relationship?

This is one of those questions that sounds academic but has practical consequences. "Happens-before" is the Java Memory Model's guarantee that if operation A happens-before operation B, then A's effects are visible when B executes.
Key happens-before rules:
- A write to a `volatile` field happens-before every subsequent read of that field
- `Thread.start()` happens-before any action in the started thread
- All actions in a thread happen-before another thread's `Thread.join()` on it returns
- `lock.unlock()` happens-before a subsequent `lock.lock()`

Why does this matter? Without a happens-before relationship, the JVM and CPU can reorder instructions for performance. This is legal. The double-checked locking bug from pre-Java 5 code was caused by exactly this:
```java
// Broken without volatile (pre-Java 5, or without volatile)
class Singleton {
    private static Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton(); // not atomic!
                    // The CPU/JIT can reorder: allocate memory, assign the
                    // reference, THEN run the constructor. Another thread
                    // can see a non-null but uninitialized instance.
                }
            }
        }
        return instance;
    }
}

// Fixed: volatile prevents the reordering
private static volatile Singleton instance;
```
## What's the difference between `wait()`/`notify()` and `Condition`?

Both are for inter-thread coordination — one thread waits for a condition, another signals it. But `Condition` (from `java.util.concurrent.locks`) is more flexible.
```java
// Old way: Object.wait() / notify()
synchronized (lock) {
    while (!conditionMet) {
        lock.wait(); // releases the lock and waits
    }
    // do work
}

// Somewhere else
synchronized (lock) {
    conditionMet = true;
    lock.notifyAll();
}
```
```java
// Modern way: ReentrantLock + Condition
ReentrantLock lock = new ReentrantLock();
Condition notFull = lock.newCondition();
Condition notEmpty = lock.newCondition();

// Producer
lock.lock();
try {
    while (queue.isFull()) {
        notFull.await();
    }
    queue.add(item);
    notEmpty.signal(); // signal only consumers, not producers
} finally {
    lock.unlock();
}
```
The key advantage: a single ReentrantLock can have multiple conditions. With synchronized, you only have one wait set per object, so notifyAll() wakes all waiting threads — producers and consumers — even if only one type needs to wake up. That's wasted CPU and extra synchronization. Multiple Condition objects let you signal only the right threads.
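Putting both sides together, here is a minimal bounded-buffer sketch (the class is illustrative, not a production queue):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Producers wait on notFull, consumers wait on notEmpty,
// and each side signals only the other — no wasted wakeups.
public class BoundedBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) notFull.await();
            queue.add(item);
            notEmpty.signal(); // wake one consumer, no producers
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) notEmpty.await();
            T item = queue.remove();
            notFull.signal(); // wake one producer, no consumers
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```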
## How does `CompletableFuture` work, and what are the threading rules?

`CompletableFuture` lets you compose async operations without blocking:
```java
CompletableFuture<UserProfile> profile = CompletableFuture
    .supplyAsync(() -> userService.fetch(userId))       // runs on the common ForkJoinPool
    .thenApply(user -> enrichmentService.enrich(user))  // runs on the completing thread
    .thenCompose(user -> preferencesService.fetch(user.id())); // chains another CompletableFuture
```
Threading rules — this is where candidates stumble:
- `supplyAsync` without an executor uses the common `ForkJoinPool`
- `thenApply`, `thenAccept`, `thenRun` run on the thread that completed the previous stage — which could be the calling thread or the async thread
- `thenApplyAsync` explicitly dispatches to the common pool (or a provided executor)

```java
// Potential issue: thenApply runs on the completing thread.
// If that's a ForkJoinPool worker doing CPU work, you're fine.
// If that's your HTTP server's IO thread, you've just blocked it.
CompletableFuture<String> result = longRunningIO()
    .thenApply(data -> heavyCpuWork(data)); // runs on the IO thread — bad

// Explicit async dispatch:
CompletableFuture<String> result2 = longRunningIO()
    .thenApplyAsync(data -> heavyCpuWork(data), cpuExecutor);
```
The other common mistake: using get() (which throws checked exceptions) inside a stream or lambda. Use join() instead — same blocking semantics but throws unchecked CompletionException.
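For example, a common fan-out pattern where `join()` keeps the lambda clean (`awaitAll` is a hypothetical helper, not a library method):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class JoinDemo {
    // Wait for several independent futures and collect their results.
    // join() throws unchecked CompletionException, so it composes cleanly
    // inside the stream, unlike get() with its checked exceptions.
    public static List<Integer> awaitAll(List<CompletableFuture<Integer>> futures) {
        return CompletableFuture
            .allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(v -> futures.stream().map(CompletableFuture::join).toList())
            .join();
    }
}
```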
📚 Related: CompletableFuture Deep Dive — covers pipelines, error handling, and timeouts.
## How do virtual threads change the thread-per-request model?

The classic thread pool model: you create a pool of N threads, and under high concurrency, all N get blocked waiting on IO (database queries, HTTP calls). The 101st request waits in a queue even though no CPU work is happening — you're just waiting.
```java
// Traditional: 200 threads, 200 concurrent requests max
ExecutorService pool = Executors.newFixedThreadPool(200);

// Each request blocks a real OS thread during IO
pool.submit(() -> {
    String result = database.query("SELECT ..."); // thread blocked here
    return result;
});
```
Virtual threads (Java 21+) are JVM-managed, not OS-managed. You can create millions of them. When a virtual thread blocks on IO, the JVM unmounts it from its carrier thread — the carrier thread is free to run another virtual thread. When the IO completes, the virtual thread is remounted.
```java
// Virtual threads: create one per task, millions if needed
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (Request req : incomingRequests) {
        executor.submit(() -> {
            String result = database.query("SELECT ...");
            // the virtual thread blocks; its carrier thread moves on
            return result;
        });
    }
}
```
The practical result: for IO-heavy workloads (which most web services are), virtual threads let you write blocking, readable code while getting the throughput of reactive/async code — without callback hell.
Where virtual threads don't help: CPU-bound work. If your thread is grinding numbers, it can't be unmounted. The carrier thread is genuinely busy. For CPU-bound work, you still need a bounded pool sized to your core count.
## What's the difference between `ConcurrentHashMap` and `Collections.synchronizedMap()`?

Both are thread-safe maps, but with very different performance characteristics.
Collections.synchronizedMap() wraps a regular HashMap with a single lock on every operation. Every get, put, remove, size — all serialized through one lock. Under high concurrency, this becomes a bottleneck.
ConcurrentHashMap used segment-level locking before Java 8; since Java 8 it uses CAS plus per-bucket locking. Reads require no locking at all, and writes lock only the specific bucket being modified. This gives you dramatically higher throughput under concurrent access.
```java
// synchronizedMap: every operation takes a single lock —
// even reads block each other!
Map<String, Integer> syncMap = Collections.synchronizedMap(new HashMap<>());

// ConcurrentHashMap: reads never block; writes lock only the affected bucket
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
map.computeIfAbsent("key", k -> expensiveComputation(k)); // atomic, no race
```
The gotcha: even with Collections.synchronizedMap(), compound operations are not atomic:
```java
Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());

// Check-then-act is two separate operations — without an explicit
// synchronized block around both, another thread can interleave
synchronized (map) { // you must synchronize the compound operation yourself
    if (!map.containsKey("key")) {
        map.put("key", value);
    }
}

// ConcurrentHashMap does this atomically
concurrentMap.putIfAbsent("key", value);
concurrentMap.computeIfAbsent("key", k -> computeValue(k));
```
Use ConcurrentHashMap for any shared, mutable map. Use synchronizedMap only when wrapping a pre-existing non-thread-safe map that you can't replace.
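Another compound operation `ConcurrentHashMap` handles atomically is counting: `merge()` avoids the lost-update race without any external locking (`WordCounter` is an illustrative sketch):

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCounter {
    private final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

    // merge() is atomic per key: no lost updates even when many
    // threads record the same word concurrently.
    public void record(String word) {
        counts.merge(word, 1, Integer::sum);
    }

    public int count(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```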
## How do deadlocks happen, and how do you prevent them?

A deadlock needs four conditions to occur simultaneously: mutual exclusion, hold-and-wait, no preemption, and circular wait. Breaking any one prevents it.
The most common Java deadlock: two threads acquiring the same two locks in opposite order.
```java
// Thread 1: acquires lockA, then tries lockB
synchronized (lockA) {
    synchronized (lockB) { /* ... */ }
}

// Thread 2: acquires lockB, then tries lockA
synchronized (lockB) {
    synchronized (lockA) { /* ... */ } // deadlock if Thread 1 holds lockA
}
```
Prevention strategies:
**Lock ordering** — always acquire locks in the same global order:

```java
// Both threads lock in a consistent order
Object first = System.identityHashCode(lockA) < System.identityHashCode(lockB) ? lockA : lockB;
Object second = (first == lockA) ? lockB : lockA;

synchronized (first) {
    synchronized (second) { /* ... */ }
}
```
**tryLock with a timeout** — don't wait forever:

```java
if (lockA.tryLock(500, TimeUnit.MILLISECONDS)) {
    try {
        if (lockB.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // critical section
            } finally {
                lockB.unlock();
            }
        }
    } finally {
        lockA.unlock();
    }
}
```
**Reduce lock scope** — the less time you hold a lock, the smaller the window for deadlock.

**Prefer higher-level concurrency utilities** — `ConcurrentHashMap`, `BlockingQueue`, and `Semaphore` handle locking internally and are well tested against deadlock.
To detect deadlocks in production: `jstack <pid>` prints a thread dump and explicitly identifies deadlocked threads.
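The same information is available programmatically via `ThreadMXBean`, for example in a health check (a sketch; the report format is made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockMonitor {
    // Returns a short report of deadlocked threads, or null if none —
    // the same detection jstack performs, usable from inside the JVM.
    public static String detect() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();
        if (ids == null) return null;
        StringBuilder sb = new StringBuilder("Deadlocked threads:");
        for (ThreadInfo info : mx.getThreadInfo(ids)) {
            sb.append(' ').append(info.getThreadName());
        }
        return sb.toString();
    }
}
```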
## What is `ThreadLocal`, and when does it cause memory leaks?

`ThreadLocal` gives each thread its own copy of a variable — useful for storing per-request state (user context, transaction info, locale) without passing it through every method call.
```java
public class RequestContext {
    private static final ThreadLocal<String> userId = new ThreadLocal<>();

    public static void set(String id) { userId.set(id); }
    public static String get() { return userId.get(); }
    public static void clear() { userId.remove(); } // critical
}

// In a Spring filter:
@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    RequestContext.set(extractUserId(req));
    try {
        chain.doFilter(req, res);
    } finally {
        RequestContext.clear(); // if you forget this, you have a leak
    }
}
```
The memory leak happens in thread pool environments (which is every web server). Threads are reused across requests. If you call ThreadLocal.set() but never ThreadLocal.remove(), the value sits in the thread's ThreadLocalMap indefinitely — across all future requests that reuse that thread. Worse, if the value holds a reference to a class loaded by a webapp classloader, you get a classloader leak that causes OutOfMemoryError: Metaspace on repeated redeployments.
Rule: always call remove() in a finally block. If you're using ThreadLocal in a Spring app, consider RequestScope beans instead — Spring cleans them up automatically.
## What's the difference between platform threads and virtual threads?

Platform threads are Java wrappers around operating system threads. They are relatively expensive, so applications usually reuse them through a bounded thread pool.
Virtual threads, finalized in Java 21, are lightweight threads managed by the JVM. They let you write normal blocking code for I/O-heavy workloads without tying up an OS thread for every request.
```java
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    Future<User> user = executor.submit(() -> userService.fetch(userId));
    Future<List<Order>> orders = executor.submit(() -> orderService.fetch(userId));
    return new UserProfile(user.get(), orders.get());
}
```
The big win is scalability for blocking I/O: HTTP calls, database queries, file reads, queue consumers, and similar work. When a virtual thread blocks, the JVM can unmount it from its carrier thread and run something else.
They don't make CPU-bound code faster. If the task burns CPU, it still needs CPU cores. For CPU-heavy work, use a fixed-size executor sized around available processors.
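A minimal sketch of such an executor, sized from `availableProcessors()` (`CpuPool` is an illustrative name):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CpuPool {
    // A bounded pool for CPU-bound tasks: more threads than cores just
    // adds context-switch overhead, so size the pool to the core count.
    public static ExecutorService create() {
        int cores = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(cores);
    }
}
```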
Don't pool virtual threads. Create one per task and limit the real bottleneck instead: database connections, API rate limits, queue partitions, or memory. A service can spawn 50,000 virtual threads, but it still won't get more than 20 concurrent database queries through a 20-connection pool.
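One common way to bound that real bottleneck is a `Semaphore` sized to the connection pool. This sketch assumes a hypothetical `runQuery` stub in place of a real JDBC call:

```java
import java.util.concurrent.Semaphore;

public class BoundedQueries {
    // Thousands of virtual threads may call query(), but at most 20 are
    // inside the database section at once, matching a 20-connection pool.
    private static final Semaphore dbPermits = new Semaphore(20);

    public static String query(String sql) throws InterruptedException {
        dbPermits.acquire();
        try {
            return runQuery(sql); // placeholder for the real JDBC call
        } finally {
            dbPermits.release();
        }
    }

    private static String runQuery(String sql) {
        return "result"; // stand-in so the sketch compiles
    }
}
```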
📚 Related: Spring Boot Concurrency · CompletableFuture Deep Dive
| Question area | Key concept | Common gotcha |
|---|---|---|
| `synchronized` vs `ReentrantLock` | `ReentrantLock` adds `tryLock`, fairness, multiple conditions | Must `unlock()` in `finally` |
| `volatile` | Visibility + ordering, not atomicity | `counter++` is still broken |
| Happens-before | JMM ordering guarantees | Double-checked locking without `volatile` |
| `wait`/`notify` vs `Condition` | Multiple conditions per lock | `notifyAll` wakes the wrong threads |
| `CompletableFuture` threading | `thenApply` runs on the completing thread | Use `thenApplyAsync` for CPU work |
| Virtual threads | JVM-scheduled, millions possible | Don't help CPU-bound work |
| `ConcurrentHashMap` | Lock-free reads, CAS writes | `synchronizedMap` still needs external sync for compound ops |
| Deadlock prevention | Lock ordering or `tryLock` | `jstack` to diagnose in production |
| `ThreadLocal` leaks | Always `remove()` in `finally` | Thread pool threads are reused |
📚 Related: Process vs Thread — Deep Dive · Spring Boot Concurrency · CompletableFuture Guide