The Java Memory Model (JMM) defines the rules by which threads read and write shared variables. Without synchronization, compilers and CPUs are free to reorder instructions and cache values in registers, so threads can observe stale or inconsistent data. The JMM provides happens-before guarantees — formal rules that tell you exactly when a write by one thread is guaranteed to be visible to a read in another thread.

// The JMM defines the rules for how threads interact through memory.
// Key problem: modern hardware and compilers reorder and cache memory accesses.
// Without synchronization, this code is BROKEN:
class BrokenFlag {
    boolean ready = false;  // no visibility guarantee
    int value = 0;

    // Thread 1:
    void writer() {
        value = 42;
        ready = true;  // may be reordered BEFORE value=42 by the compiler
    }

    // Thread 2 may see ready==true but value==0 — reordering happened!
    void reader() {
        while (!ready) { /* spin */ }
        System.out.println(value);  // might print 0
    }
}

// Three properties to reason about:
// VISIBILITY — does a read see the most recent write?
// ATOMICITY — does the operation complete without interference?
// ORDERING — do actions happen in the expected sequence?

// long and double reads/writes are NOT guaranteed atomic (notably on 32-bit JVMs)!
// Only reads/writes of reference types and int-or-smaller primitives are atomic.
// long and double may be torn (upper/lower 32 bits written separately).

The JMM does NOT say that threads always work with private copies of variables. It says that without proper synchronization, reads are permitted to observe stale values — the actual mechanism (CPU cache, register, reorder buffer) is an implementation detail.

Happens-before is the JMM's guarantee that if action A happens-before action B, every write performed by A (and everything before it) is visible to B. This relationship is established by several mechanisms: exiting and entering a synchronized block on the same monitor, a volatile write followed by a volatile read of the same variable, a Thread.start() call, and a Thread.join() return. Without one of these edges in the chain, there is no visibility guarantee between threads.

// Happens-Before (HB): if action A happens-before action B,
// then ALL effects of A (and everything before A) are visible to B.

// Built-in happens-before rules:
// 1. Program order rule — within one thread, each action HB the next
// 2. Monitor lock rule — unlock HB every subsequent lock on the SAME monitor
// 3. Volatile variable rule — write to volatile HB every subsequent read
// 4. Thread start rule — Thread.start() HB any action in the started thread
// 5. Thread join rule — all actions in a thread HB Thread.join() returning
// 6. Transitivity — if A HB B and B HB C, then A HB C

// Practical example showing rule 2 (monitor lock):
class Counter {
    private int count = 0;

    synchronized void increment() {
        count++;  // write inside synchronized
    }  // unlock happens here

    synchronized int get() {
        return count;  // lock happens here — guaranteed to see the write
    }
}
// unlock of increment() HB lock of get()
// → get() always sees the incremented count

// Practical example showing rule 5 (join):
int[] result = new int[1];
Thread t = new Thread(() -> result[0] = compute());
t.start();
t.join();
// HB guarantee: after join() returns, result[0] is visible
System.out.println(result[0]);  // safe — no synchronization needed here

Happens-before is a guarantee, not a physical ordering. The JVM may still reorder instructions as long as the HB rules are satisfied. Only actions connected by an HB chain are guaranteed visible.

A synchronized method uses the instance (this) as the monitor lock — or the Class object for static methods. A synchronized(object) block lets you choose an explicit lock object, which allows finer granularity: you can protect different sections of a class with different locks, reducing contention. Only one thread can hold a given monitor at a time; all other threads that attempt to acquire it are blocked until the current holder exits the synchronized region.

import java.util.HashMap;
import java.util.Map;

// synchronized METHOD — locks on 'this' (instance) or the Class object (static)
class BankAccount {
    private static int count;  // backing field for totalAccounts()
    private int balance;

    public synchronized void deposit(int amount) {
        balance += amount;  // read-modify-write is safe inside synchronized
    }

    public synchronized void withdraw(int amount) {
        if (balance >= amount) balance -= amount;
    }

    public synchronized int getBalance() {
        return balance;
    }

    // static synchronized — locks on BankAccount.class
    public static synchronized int totalAccounts() {
        return count;
    }
}

// synchronized BLOCK — explicit lock object; smaller critical section
class Cache {
    private final Object lock = new Object();  // dedicated lock
    private final Map<String, String> map = new HashMap<>();

    public String get(String key) {
        synchronized (lock) {  // only lock for the map operation
            return map.get(key);
        }
    }

    public void put(String key, String value) {
        // Validate outside lock — validation is thread-safe (immutable args)
        if (key == null) throw new IllegalArgumentException();
        synchronized (lock) {
            map.put(key, value);
        }
    }
}

// synchronized gives you THREE things at once:
// 1. Mutual exclusion (only one thread in the block at a time)
// 2. Visibility (unlock flushes writes; lock reads fresh values)
// 3. Ordering (actions inside blocks are not reordered across the lock boundary)

Synchronizing on this exposes the lock to callers who can also synchronize on your object, potentially causing deadlocks. Prefer a private final Object lock = new Object() that only your class knows about.

Every Java object carries a built-in intrinsic lock, also called a monitor. The synchronized keyword automatically acquires the monitor on entry and releases it on exit — whether the exit is normal or via exception. Intrinsic locks are reentrant: if a thread already holds the lock, it can enter another synchronized block on the same object without deadlocking, and the lock is only fully released once the outermost synchronized region exits.

import java.util.LinkedList;
import java.util.Queue;

// Every Java object has a built-in monitor lock (intrinsic lock).
// synchronized uses this monitor lock automatically.

// Monitor state machine:
// BLOCKED — waiting to acquire the lock (held by another thread)
// WAITING — inside lock, called wait() — released lock, waiting for notify
// TIMED_WAITING — like WAITING but with a timeout (wait(ms), sleep(ms))

class ProducerConsumer {
    private final Queue<Integer> queue = new LinkedList<>();
    private final int CAPACITY = 5;

    // Producer thread
    synchronized void produce(int item) throws InterruptedException {
        while (queue.size() == CAPACITY) {
            wait();  // releases the intrinsic lock; thread goes WAITING
        }
        queue.add(item);
        notifyAll();  // wake all threads waiting on THIS object's monitor
    }

    // Consumer thread
    synchronized int consume() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();  // releases lock while waiting
        }
        int item = queue.poll();
        notifyAll();
        return item;
    }
}

// Object.wait() / notify() / notifyAll() must be called from within
// a synchronized block on the same object — otherwise IllegalMonitorStateException
// notify() wakes ONE arbitrary waiting thread — can cause starvation
// notifyAll() wakes ALL waiting threads — safer, they re-check the condition
// Always use notifyAll() unless you are certain all waiters are interchangeable

// Thread states visible in jstack / jconsole:
// BLOCKED — waiting for synchronized lock
// WAITING — called Object.wait() or LockSupport.park()
// TIMED_WAITING — Thread.sleep() or Object.wait(timeout)
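Reentrancy is easy to demonstrate in isolation. A minimal sketch (the class name ReentrantDemo is illustrative, not from the notes above): a synchronized method calls another synchronized method on the same object, which would self-deadlock if intrinsic locks were not reentrant.

```java
// Reentrancy sketch: outer() holds the monitor and calls inner(), which
// acquires the SAME monitor. Because intrinsic locks are reentrant, the
// thread proceeds instead of blocking against itself; the lock is fully
// released only when outer() exits.
class ReentrantDemo {
    private int depth = 0;

    synchronized int outer() {
        depth++;         // monitor hold count: 1
        return inner();  // re-enters the same monitor, no self-deadlock
    }

    synchronized int inner() {
        depth++;         // monitor hold count: 2
        return depth;
    }
}
```

Calling `new ReentrantDemo().outer()` returns 2; with a non-reentrant lock, the call to `inner()` would block forever while waiting for a lock its own thread already holds.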

A useful mental model for volatile is that every read comes straight from main memory and every write goes straight back, with no stale cached copy; formally, the JMM guarantees that a read of a volatile field always sees the most recent write to it. However, volatile does not make compound operations atomic: an increment like i++ is a read-modify-write sequence of three operations, and another thread can interleave between them even on a volatile field.

// volatile provides TWO guarantees:
// 1. VISIBILITY — every read of a volatile variable sees the most recent write
//    (no CPU-cache staleness)
// 2. ORDERING — writes/reads of volatile are not reordered with surrounding code
//    (acts as a memory fence)

// Classic safe flag:
class Runner {
    private volatile boolean stop = false;  // volatile — visible across threads

    void run() {
        while (!stop) {  // reads fresh value each iteration
            doWork();
        }
    }

    void shutdown() {
        stop = true;  // write is immediately visible to reader thread
    }
}

// volatile for a status field:
class TaskManager {
    private volatile String status = "IDLE";

    void start()  { status = "RUNNING"; }
    void finish() { status = "DONE"; }
    String getStatus() { return status; }  // always sees current value
}

// volatile long/double — guaranteed atomic read/write (unlike non-volatile)
private volatile long timestamp = 0L;

// Limitation — volatile does NOT make compound actions atomic:
private volatile int counter = 0;

void increment() {
    counter++;  // BUG: read-modify-write is THREE operations — NOT atomic!
    // Even though each individual read/write is visible,
    // another thread can interleave between the read and the write.
}
// Use AtomicInteger for this pattern instead.

// volatile read/write ordering (happens-before):
// Thread A writes volatile x = 1
// Thread B reads volatile x and sees 1
// → ALL writes by Thread A before "x = 1" are visible to Thread B after reading x

volatile is cheaper than synchronized because it does not acquire a lock — multiple threads can read simultaneously. Use it for simple flags and status fields where only a single write/read is needed per operation.

volatile is lighter than synchronized because it never blocks threads — multiple readers can proceed simultaneously. It is correct for single-read or single-write operations such as a stop flag written by one thread and read by others. synchronized is required when you need both visibility and atomicity together, for example any read-modify-write operation (counter increment, conditional update) or when multiple separate operations must be treated as a single atomic unit.

// VISIBILITY: both guarantee visibility of writes
// ATOMICITY: only synchronized guarantees compound atomicity
// MUTUAL EXCLUSION: only synchronized allows only one thread at a time
// BLOCKING: synchronized can block threads; volatile never blocks
// REENTRANCY: synchronized is reentrant; volatile is not a lock

// When to use volatile:
// - Simple flag or status variable written by one thread, read by many
// - Singleton initialization with double-checked locking (see next section)
// - Publishing an immutable object reference (write once, read many)

// When to use synchronized:
// - Read-modify-write operations (counter++, list.add+size check)
// - Multiple operations that must be atomic together
// - Wait/notify communication between threads

// Side-by-side comparison:

// volatile — correct for a simple flag
private volatile boolean initialized = false;
void init() { /* do work */ initialized = true; }
boolean isReady() { return initialized; }  // OK — single read

// volatile — INCORRECT for a counter
private volatile int hits = 0;
void recordHit() { hits++; }  // BROKEN — not atomic

// synchronized — correct for a counter
private int hits = 0;
synchronized void recordHit() { hits++; }  // OK
synchronized int getHits() { return hits; }  // OK

// Concurrent alternative — AtomicInteger (lock-free, preferred for counters)
private final AtomicInteger hits = new AtomicInteger(0);
void recordHit() { hits.incrementAndGet(); }  // atomic, no lock
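For the conditional-update case, AtomicInteger also offers a lock-free middle ground: compareAndSet in a retry loop makes a check-then-act atomic without a monitor. A sketch (BoundedCounter is a hypothetical example class, not part of the JDK):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free check-then-act: increment only while below a bound.
// The CAS retry loop makes the read plus conditional write atomic
// without acquiring any lock.
class BoundedCounter {
    private final AtomicInteger count = new AtomicInteger(0);
    private final int max;

    BoundedCounter(int max) { this.max = max; }

    boolean tryIncrement() {
        for (;;) {
            int current = count.get();
            if (current >= max) return false;  // bound reached; give up
            if (count.compareAndSet(current, current + 1)) return true;
            // CAS failed: another thread updated count, so loop and retry
        }
    }

    int get() { return count.get(); }
}
```

compareAndSet only writes if the value is still the one this thread read, so an interleaving update by another thread makes the CAS fail and the loop re-read a fresh value; this is the same pattern incrementAndGet uses internally.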

Double-checked locking lazily initializes a singleton while avoiding synchronization on every access after the first. The field must be declared volatile so that the constructor's writes are guaranteed visible before any thread can observe the stored reference; without volatile, the compiler and CPU may reorder the field writes inside the constructor with the assignment of the reference, letting another thread see a partially initialized object. The outer null check provides the fast path for the common case where the instance already exists.

// Double-Checked Locking (DCL) — lazy singleton initialization

// WITHOUT volatile — BROKEN before Java 5, and still broken today without volatile
class BrokenSingleton {
    private static BrokenSingleton instance;

    public static BrokenSingleton getInstance() {
        if (instance == null) {  // first check — no lock
            synchronized (BrokenSingleton.class) {
                if (instance == null) {  // second check — inside lock
                    instance = new BrokenSingleton();
                    // reference may be published before constructor completes!
                }
            }
        }
        return instance;  // could return partially initialized object
    }
}

// CORRECT DCL — volatile prevents reordering of construction and assignment
class Singleton {
    private static volatile Singleton instance;  // volatile is REQUIRED

    private Singleton() { /* expensive init */ }

    public static Singleton getInstance() {
        if (instance == null) {  // fast path — no lock needed
            synchronized (Singleton.class) {
                if (instance == null) {  // re-check inside lock
                    instance = new Singleton();  // write to volatile
                    // volatile write happens-before any subsequent volatile read
                    // so other threads will see fully constructed object
                }
            }
        }
        return instance;
    }
}

// BETTER alternative — initialization-on-demand holder (no volatile/synchronized needed)
class BetterSingleton {
    private BetterSingleton() {}

    private static class Holder {
        // Class loading is synchronized by the JVM; init happens once
        static final BetterSingleton INSTANCE = new BetterSingleton();
    }

    public static BetterSingleton getInstance() {
        return Holder.INSTANCE;  // triggers class load on first call
    }
}

The initialization-on-demand holder idiom is simpler and more readable than DCL. Prefer it for lazy singletons. DCL is mostly needed when you want to reassign a volatile reference after the first initialization (e.g. config reload).
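The config-reload case can often be handled without DCL at all when each reload builds a fresh object: publish an immutable snapshot through a single volatile reference. A sketch, assuming hypothetical ConfigHolder/Config names:

```java
// Volatile publication of an immutable snapshot: because the snapshot is
// fully built BEFORE the single volatile write, the volatile-write HB
// volatile-read rule guarantees readers never see a half-built Config.
class ConfigHolder {
    // Immutable snapshot — all fields final, safe to share once published
    static final class Config {
        final int timeoutMs;
        Config(int timeoutMs) { this.timeoutMs = timeoutMs; }
    }

    private volatile Config current = new Config(1000);

    void reload(int newTimeoutMs) {
        current = new Config(newTimeoutMs);  // one volatile write publishes it
    }

    int timeoutMs() {
        return current.timeoutMs;  // one volatile read — no lock needed
    }
}
```

Readers pay only a volatile read per access, and a reload swaps the whole snapshot atomically, so a reader never observes a mix of old and new settings.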

The most common concurrency bugs fall into four categories: race conditions (a check-then-act sequence such as check-for-null then assign is not atomic, so another thread can interfere between the two steps), visibility bugs (a thread reads a stale cached value because volatile or synchronization is missing), deadlocks (two threads each hold a lock the other needs, so both wait forever), and livelocks (threads keep responding to each other and changing state but make no forward progress). Recognizing these patterns is the first step to writing correct concurrent code.

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// --- BUG 1: VISIBILITY (stale read) ---
class Visibility {
    private boolean flag = false;  // missing volatile

    void set() { flag = true; }  // Thread A

    void check() {
        while (!flag) {}  // Thread B may spin forever — sees cached false
        System.out.println("done");
    }
    // Fix: declare flag as volatile
}

// --- BUG 2: ATOMICITY (check-then-act race) ---
class AtomicBug {
    private final Map<String, Integer> map = new HashMap<>();

    // BROKEN — two operations not atomic together
    void putIfAbsent(String key, int value) {
        if (!map.containsKey(key)) {  // Thread A checks — key absent
            // Thread B also checks — key still absent
            map.put(key, value);  // Both threads put — second put overwrites
        }
    }

    // Fix: use ConcurrentHashMap.putIfAbsent() or a synchronized block
    synchronized void putIfAbsentFixed(String key, int value) {
        if (!map.containsKey(key)) map.put(key, value);
    }
}

// --- BUG 3: ORDERING (publication without synchronization) ---
class Publication {
    int x;
    boolean ready;  // not volatile

    // Thread A
    void publish() {
        x = 42;
        ready = true;  // may be reordered BEFORE x = 42 by the CPU/compiler
    }

    // Thread B may see ready=true but x=0
    void consume() {
        if (ready) System.out.println(x);  // might print 0!
    }
    // Fix: declare ready as volatile OR use synchronized on both methods
}

// --- BUG 4: DEADLOCK ---
class Deadlock {
    final Object lockA = new Object();
    final Object lockB = new Object();

    // Thread 1: acquires lockA then tries lockB
    void method1() {
        synchronized (lockA) {
            synchronized (lockB) { doWork(); }
        }
    }

    // Thread 2: acquires lockB then tries lockA — DEADLOCK with method1
    void method2() {
        synchronized (lockB) {                  // T2 holds lockB
            synchronized (lockA) { doWork(); }  // T2 waits for lockA (held by T1)
        }  // T1 waits for lockB (held by T2) — DEADLOCK
    }

    // Fix: always acquire locks in the same global order
    void method2Fixed() {
        synchronized (lockA) {  // same order as method1
            synchronized (lockB) { doWork(); }
        }
    }
}

// --- BUG 5: SPURIOUS WAKEUP (using if instead of while) ---
class SpuriousWakeup {
    private final Queue<String> queue = new LinkedList<>();

    // BROKEN — uses if, not while
    synchronized String takeBroken() throws InterruptedException {
        if (queue.isEmpty()) wait();  // thread wakes spuriously — queue still empty!
        return queue.poll();  // NPE or empty poll
    }

    // CORRECT — while loop re-checks condition after every wakeup
    synchronized String takeFixed() throws InterruptedException {
        while (queue.isEmpty()) wait();
        return queue.poll();
    }
}

Concurrency bugs are notoriously hard to reproduce. They often appear only under high load or on specific hardware. Write tests with tools like jcstress or use thread-safe data structures from java.util.concurrent whenever possible.