
CopyOnWriteArrayList makes a fresh copy of the underlying array on every write. Reads are lock-free and always see a consistent snapshot. Writes are expensive (O(n) copy), so this is only suitable for read-heavy lists with rare writes.

```java
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CopyOnWriteArraySet;

// CopyOnWriteArrayList — ideal for event listener lists
// (EventListener, LoggingListener, MetricsListener, and event are illustrative types)
CopyOnWriteArrayList<EventListener> listeners = new CopyOnWriteArrayList<>();

// Add listeners from any thread — thread-safe
listeners.add(new LoggingListener());
listeners.add(new MetricsListener());

// Iterate — snapshot of the list at the time the iterator was created
// Never throws ConcurrentModificationException
// Even if listeners are added/removed during iteration, the loop is unaffected
for (EventListener listener : listeners) {
    listener.onEvent(event); // listeners modified elsewhere won't affect this loop
}

// Add if not present (atomic check-and-add)
// Note: addIfAbsent compares with equals(), so a fresh instance only counts
// as a duplicate if the listener class overrides equals()
listeners.addIfAbsent(new LoggingListener());

// removeIf — thread-safe
listeners.removeIf(l -> l instanceof LoggingListener);

// CopyOnWriteArraySet — a Set backed by CopyOnWriteArrayList
// Same trade-off: lock-free reads, O(n) writes
CopyOnWriteArraySet<String> tags = new CopyOnWriteArraySet<>();
tags.add("java");
tags.add("concurrency");
tags.add("java");                 // duplicate ignored
System.out.println(tags.size());  // 2

// When to use CopyOnWrite:
// ✓ Reads vastly outnumber writes (e.g., listener registries)
// ✓ Iteration happens concurrently with occasional writes
// ✓ Small collections (copying 10 elements is cheap; 100,000 is not)
// ✗ Frequent writes — use ConcurrentHashMap or synchronized collections instead
```
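The snapshot guarantee is easy to verify: even a single thread modifying the list mid-loop shows that the iterator keeps the array it was created over. A minimal sketch:

```java
import java.util.concurrent.CopyOnWriteArrayList;

CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
list.add("a");
list.add("b");

StringBuilder seen = new StringBuilder();
for (String s : list) {
    seen.append(s);
    list.add("c"); // modifying mid-iteration: no ConcurrentModificationException
}
System.out.println(seen);        // ab (the iterator kept the two-element snapshot)
System.out.println(list.size()); // 4 (both "c" adds landed in the live list)
```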

ConcurrentHashMap achieves high concurrency through fine-grained synchronization: lock-striped segments in Java 7, and CAS plus per-bin synchronized blocks since Java 8. It also supports atomic compound operations that a plain HashMap cannot provide safely:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

ConcurrentHashMap<String, Integer> wordCount = new ConcurrentHashMap<>();

// putIfAbsent — atomic check-then-act
wordCount.putIfAbsent("hello", 0);

// merge — atomically update or insert
// merge(key, value, remappingFunction): if absent, put value; else apply function
wordCount.merge("hello", 1, Integer::sum); // hello → 1
wordCount.merge("hello", 1, Integer::sum); // hello → 2

// compute — atomically update based on key and current value
wordCount.compute("world", (key, val) -> (val == null) ? 1 : val + 1);

// computeIfAbsent — lazy initialization (very common pattern)
ConcurrentHashMap<String, List<String>> groups = new ConcurrentHashMap<>();
groups.computeIfAbsent("fruits", k -> new CopyOnWriteArrayList<>()).add("apple");
groups.computeIfAbsent("fruits", k -> new CopyOnWriteArrayList<>()).add("banana");

// computeIfPresent — update only if the key exists
wordCount.computeIfPresent("hello", (k, v) -> v * 2); // 4

// Bulk operations (Java 8+) — parallel, with a parallelism threshold
ConcurrentHashMap<String, Long> scores = new ConcurrentHashMap<>();

// forEach — a threshold of 1 means "parallelize as much as possible";
// Long.MAX_VALUE would force sequential execution
scores.forEach(1, (k, v) -> System.out.println(k + ": " + v));

// reduce — parallel aggregation (returns null if the map is empty)
Long total = scores.reduceValues(1, Long::sum);

// search — returns the first non-null result, or null if none matches
String richUser = scores.search(1, (k, v) -> v > 1000 ? k : null);

// mappingCount() — use instead of size() to avoid int overflow for very large maps
long count = wordCount.mappingCount();
```

Unlike HashMap, ConcurrentHashMap does not allow null keys or values. This is intentional: null would be ambiguous (missing key vs. explicitly mapped to null), and in a concurrent map that ambiguity cannot be resolved safely, because a containsKey() check after get() is not atomic.
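The null restriction is easy to demonstrate; a minimal sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

boolean nullValueRejected = false;
try {
    map.put("key", null); // throws NullPointerException
} catch (NullPointerException e) {
    nullValueRejected = true;
}

boolean nullKeyRejected = false;
try {
    map.put(null, 1); // throws NullPointerException
} catch (NullPointerException e) {
    nullKeyRejected = true;
}

System.out.println(nullValueRejected && nullKeyRejected); // true

// The unambiguous way to test for absence:
System.out.println(map.containsKey("missing")); // false
```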

ConcurrentSkipListMap is a concurrent, sorted map based on a skip list. It provides O(log n) operations and supports range queries, while being fully thread-safe without locking the whole structure.

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ConcurrentSkipListSet;

// ConcurrentSkipListMap — sorted, concurrent alternative to TreeMap
ConcurrentSkipListMap<Integer, String> scores = new ConcurrentSkipListMap<>();
scores.put(90, "Alice");
scores.put(75, "Bob");
scores.put(88, "Charlie");
scores.put(95, "Dave");

// Sorted navigation — all thread-safe
System.out.println(scores.firstKey());      // 75
System.out.println(scores.lastKey());       // 95
System.out.println(scores.floorKey(89));    // 88
System.out.println(scores.ceilingKey(89));  // 90

// Range view — live, concurrent submap
// (note: tailMap returns ConcurrentNavigableMap, not ConcurrentSkipListMap)
ConcurrentNavigableMap<Integer, String> topStudents = scores.tailMap(88, true); // keys >= 88
topStudents.forEach((score, name) ->
    System.out.println(name + ": " + score)); // Charlie: 88, Alice: 90, Dave: 95

// headMap, subMap — also work
scores.subMap(80, true, 95, false); // keys in [80, 95)

// ConcurrentSkipListSet — concurrent, sorted Set (backed by ConcurrentSkipListMap)
ConcurrentSkipListSet<String> sortedNames = new ConcurrentSkipListSet<>();
sortedNames.add("Charlie");
sortedNames.add("Alice");
sortedNames.add("Bob");
System.out.println(sortedNames.first());             // Alice (sorted order)
System.out.println(sortedNames.headSet("Charlie"));  // [Alice, Bob]
```

ConcurrentLinkedQueue is an unbounded, lock-free FIFO queue based on the Michael-Scott algorithm. offer() and poll() never block — poll() returns null on an empty queue. Use it when producers and consumers run concurrently and blocking is undesirable:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentLinkedQueue;

// ConcurrentLinkedQueue — non-blocking, lock-free FIFO queue (Michael-Scott algorithm)
ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

// Producers
queue.offer("task1"); // always returns true (unbounded)
queue.offer("task2");
queue.offer("task3");

// Consumers
String task = queue.poll(); // null if empty (non-blocking)
String peek = queue.peek(); // look without removing

// Iterate — weakly consistent: never throws ConcurrentModificationException,
// but may or may not reflect concurrent additions and removals
for (String t : queue) {
    System.out.println(t);
}

// ConcurrentLinkedDeque — double-ended deque (FIFO or LIFO)
ConcurrentLinkedDeque<String> deque = new ConcurrentLinkedDeque<>();
deque.offerFirst("first"); // add to head
deque.offerLast("last");   // add to tail
String head = deque.pollFirst();
String tail = deque.pollLast();

// size() is O(n) for ConcurrentLinkedQueue/Deque — avoid in hot paths
// Use isEmpty() instead
if (!queue.isEmpty()) {
    process(queue.poll()); // process() is an illustrative placeholder
}
```
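Because poll() returns null rather than blocking, a consumer loop must treat null as "empty right now" and decide for itself whether to retry, back off, or stop. A minimal sketch of the drain pattern:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

ConcurrentLinkedQueue<String> work = new ConcurrentLinkedQueue<>();
work.offer("a");
work.offer("b");

int processed = 0;
String item;
while ((item = work.poll()) != null) { // null means "empty right now", not "closed"
    processed++;
}
System.out.println(processed);   // 2
System.out.println(work.poll()); // null (an empty queue never throws)
```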

Each concurrent collection targets a specific access pattern. Matching the collection to the pattern avoids unnecessary overhead from locking or copying. This guide maps access patterns to the appropriate concurrent collection:

```java
// Quick decision guide:
//
// Need a thread-safe List?
//   - Rare writes, many reads, iteration:    CopyOnWriteArrayList
//   - Frequent writes:                       Collections.synchronizedList() or explicit locks
//
// Need a thread-safe Set?
//   - Rare writes:                           CopyOnWriteArraySet
//   - Sorted + concurrent:                   ConcurrentSkipListSet
//   - Keys of ConcurrentHashMap:             ConcurrentHashMap.newKeySet()
//
// Need a thread-safe Map?
//   - General purpose:                       ConcurrentHashMap (best choice in most cases)
//   - Sorted + concurrent + range queries:   ConcurrentSkipListMap
//   - Enum keys:                             EnumMap + explicit synchronization (EnumMap is not thread-safe)
//
// Need a thread-safe Queue?
//   - Unbounded, non-blocking:               ConcurrentLinkedQueue
//   - Bounded, blocking (producer-consumer): ArrayBlockingQueue / LinkedBlockingQueue
//   - Priority queue:                        PriorityBlockingQueue
//   - Delay queue:                           DelayQueue

// Example: producer-consumer with a blocking queue
// (generateJob() and process() are illustrative placeholders;
//  Thread.ofVirtual() requires Java 21+)
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

BlockingQueue<String> jobs = new LinkedBlockingQueue<>(100); // capacity 100

// Producer thread
Thread producer = Thread.ofVirtual().start(() -> {
    try {
        while (true) {
            jobs.put(generateJob()); // blocks when full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // put() throws a checked exception
    }
});

// Consumer thread
Thread consumer = Thread.ofVirtual().start(() -> {
    try {
        while (true) {
            String job = jobs.take(); // blocks when empty
            process(job);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
```

Synchronized wrappers (Collections.synchronizedList(), Collections.synchronizedMap()) only make individual method calls atomic; compound operations like check-then-act still need external synchronization on the wrapper itself. The java.util.concurrent collections provide atomic compound operations (putIfAbsent, computeIfAbsent, etc.) without this limitation.
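The check-then-act limitation can be made concrete: with a synchronized wrapper, the compound operation needs an explicit lock on the wrapper, while ConcurrentHashMap makes it a single atomic call. A minimal sketch contrasting the two:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Synchronized wrapper: each call is atomic, but the pair is not,
// so the compound operation needs an external lock on the wrapper itself.
Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());
synchronized (wrapped) {
    if (!wrapped.containsKey("id")) {
        wrapped.put("id", 1);
    }
}

// ConcurrentHashMap: the compound operation is atomic by itself.
ConcurrentHashMap<String, Integer> concurrent = new ConcurrentHashMap<>();
concurrent.putIfAbsent("id", 1); // no external lock needed
concurrent.putIfAbsent("id", 2); // no-op: key already present
System.out.println(concurrent.get("id")); // 1
```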