
A Stream<T> is a sequence of elements that supports aggregate operations. Unlike a Collection, a stream does not store data — it pulls elements from a source (collection, array, generator, I/O channel) and pushes them through a pipeline of transformations. Streams are lazy: intermediate operations are not executed until a terminal operation is invoked, and they are single-use — once consumed, a stream cannot be replayed.

| Feature     | Collection                     | Stream                                          |
|-------------|--------------------------------|-------------------------------------------------|
| Storage     | Stores elements in memory      | Does not store data — computes on demand        |
| Iteration   | External (for loop, Iterator)  | Internal — the pipeline drives iteration        |
| Laziness    | Elements exist eagerly         | Intermediate ops are deferred until terminal op |
| Consumption | Can be iterated many times     | Single-use — consumed after terminal op         |
| Mutation    | Elements can be added/removed  | Does not modify the source                      |
| Infinite    | Always finite                  | Can represent infinite sequences                |

The pipeline model is: source (collection, array, generator) → zero or more intermediate operations (filter, map, sorted) → one terminal operation (collect, forEach, reduce). Nothing happens until the terminal operation triggers the pipeline.
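Laziness can be made visible with `peek()`. In this sketch (the class and method names are our own), a log records what `peek()` observes: without a terminal operation nothing runs, and adding one drives the whole pipeline.

```java
import java.util.ArrayList;
import java.util.List;

public class LazinessDemo {
    // Record what peek() observes; run a terminal op only when asked.
    // (peek's side-effect is used here purely to observe laziness.)
    static List<String> observed(boolean runTerminalOp) {
        List<String> log = new ArrayList<>();
        var pipeline = List.of(1, 2, 3).stream()
                .peek(n -> log.add("saw " + n)); // nothing happens yet
        if (runTerminalOp) {
            pipeline.toList(); // the terminal op triggers the pipeline
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(observed(false)); // []
        System.out.println(observed(true));  // [saw 1, saw 2, saw 3]
    }
}
```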

```java
import java.util.*;
import java.util.stream.*;

// --- Creating streams ---

// 1. From a Collection
List<String> names = List.of("Alice", "Bob", "Carol", "Diana");
Stream<String> fromList = names.stream();

// 2. Stream.of() — from explicit values
Stream<String> fromValues = Stream.of("one", "two", "three");

// 3. Arrays.stream() — from an array
int[] numbers = {10, 20, 30, 40};
IntStream fromArray = Arrays.stream(numbers);

// 4. Stream.empty() — an empty stream (useful as a return value)
Stream<String> empty = Stream.empty();

// 5. IntStream.range() / rangeClosed()
IntStream zeroToNine = IntStream.range(0, 10);      // 0..9
IntStream oneToTen = IntStream.rangeClosed(1, 10);  // 1..10

// 6. Stream.iterate() — infinite sequence with a seed
Stream<Integer> powersOfTwo = Stream.iterate(1, n -> n * 2);
List<Integer> firstEight = powersOfTwo.limit(8).toList();
System.out.println(firstEight); // [1, 2, 4, 8, 16, 32, 64, 128]

// Java 9+ bounded iterate (hasNext predicate)
Stream<Integer> bounded = Stream.iterate(1, n -> n <= 100, n -> n * 2);
System.out.println(bounded.toList()); // [1, 2, 4, 8, 16, 32, 64]

// 7. Stream.generate() — infinite stream from a Supplier
Stream<Double> randoms = Stream.generate(Math::random);
List<Double> fiveRandoms = randoms.limit(5).toList();
System.out.println(fiveRandoms); // five random doubles

// 8. From a String — chars() or codePoints()
"Hello".chars().forEach(c -> System.out.print((char) c + " ")); // H e l l o
```

Use Stream.iterate(seed, hasNext, next) (Java 9+) instead of Stream.iterate(seed, next).limit(n) when the bound depends on a condition rather than a fixed count — it is clearer and does not require calculating the count upfront.

filter(Predicate<T>) retains only the elements that match the predicate — it narrows the stream. map(Function<T, R>) transforms each element into a new value — it changes the stream's type or shape. Chaining them is the bread and butter of stream processing.

```java
import java.util.*;
import java.util.stream.*;

record Employee(String name, String department, int salary, boolean active) {}

List<Employee> employees = List.of(
    new Employee("Alice", "Engineering", 92_000, true),
    new Employee("Bob", "Marketing", 65_000, true),
    new Employee("Carol", "Engineering", 81_000, false),
    new Employee("Diana", "HR", 57_000, true),
    new Employee("Ethan", "Marketing", 73_000, true),
    new Employee("Fiona", "Engineering", 98_000, true),
    new Employee("George", "HR", 51_000, false)
);

// filter — keep only active engineers
List<String> activeEngineers = employees.stream()
    .filter(Employee::active)
    .filter(e -> e.department().equals("Engineering"))
    .map(Employee::name)
    .toList();
System.out.println(activeEngineers); // [Alice, Fiona]

// map — transform to a different type
List<String> summaries = employees.stream()
    .map(e -> e.name() + " (" + e.department() + ")")
    .toList();
// [Alice (Engineering), Bob (Marketing), Carol (Engineering), ...]

// Chaining filter + map for a salary report
employees.stream()
    .filter(e -> e.salary() > 70_000)
    .map(e -> String.format("%-8s $%,d", e.name(), e.salary()))
    .forEach(System.out::println);
// Alice    $92,000
// Carol    $81,000
// Ethan    $73,000
// Fiona    $98,000

// mapToInt — avoid boxing when you need primitive int values
int totalSalary = employees.stream()
    .filter(Employee::active)
    .mapToInt(Employee::salary)
    .sum();
System.out.println("Total active salary: $" + totalSalary); // $385,000

// mapToDouble for double results
double avgSalary = employees.stream()
    .mapToDouble(Employee::salary)
    .average()
    .orElse(0.0);
System.out.printf("Average salary: $%,.0f%n", avgSalary); // $73,857
```

Use mapToInt(), mapToLong(), and mapToDouble() to convert to primitive streams whenever you plan to call sum(), average(), or max(). This avoids boxing overhead and gives you specialised terminal operations for free.

reduce() combines all elements of a stream into a single result by repeatedly applying a binary operator. The two-argument form takes an identity value (the starting/default value) and an accumulator (how to merge the next element). The one-argument form returns an Optional since there may be no elements.

```java
import java.util.*;
import java.util.stream.*;

List<Integer> numbers = List.of(3, 7, 2, 9, 5, 1, 8);

// Two-argument reduce: identity + accumulator
int sum = numbers.stream().reduce(0, Integer::sum);
System.out.println("Sum: " + sum); // 35

int product = numbers.stream().reduce(1, (a, b) -> a * b);
System.out.println("Product: " + product); // 15120

// One-argument reduce — returns Optional (empty stream = empty Optional)
Optional<Integer> max = numbers.stream().reduce(Integer::max);
max.ifPresent(m -> System.out.println("Max: " + m)); // Max: 9

// String concatenation
List<String> words = List.of("Stream", "API", "is", "powerful");
String sentence = words.stream()
    .reduce("", (a, b) -> a.isEmpty() ? b : a + " " + b);
System.out.println(sentence); // Stream API is powerful

// --- Practical: build a composite result ---
record OrderLine(String product, int quantity, double price) {
    double subtotal() { return quantity * price; }
}

List<OrderLine> cart = List.of(
    new OrderLine("Keyboard", 2, 49.99),
    new OrderLine("Mouse", 3, 29.99),
    new OrderLine("Monitor", 1, 349.00)
);

// Sum of subtotals using reduce
double total = cart.stream()
    .map(OrderLine::subtotal)
    .reduce(0.0, Double::sum);
System.out.printf("Cart total: $%.2f%n", total); // $538.95

// Three-argument reduce (identity, accumulator, combiner)
// The combiner is only used in parallel streams to merge partial results
int totalQuantity = cart.parallelStream()
    .reduce(
        0,                                              // identity
        (subtotal, line) -> subtotal + line.quantity(), // accumulator
        Integer::sum                                    // combiner (merges two partial sums)
    );
System.out.println("Total items: " + totalQuantity); // 6
```

The three-argument reduce(identity, accumulator, combiner) exists primarily for parallel streams. The combiner is never called in a sequential stream, so bugs in the combiner only surface under parallel execution. Always test your three-argument reduce with parallelStream() to verify the combiner is correct.
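To see why the combiner matters, the sketch below (class and method names are our own) runs the same three-argument reduce sequentially and in parallel. With a correct combiner both modes agree; a broken combiner would only be exposed by the parallel run, which is exactly why the tip says to test both.

```java
import java.util.List;
import java.util.stream.IntStream;

public class CombinerDemo {
    static final List<String> WORDS = IntStream.rangeClosed(1, 1_000)
            .mapToObj(i -> "w" + i)
            .toList();

    // Sum the lengths of all words with the three-argument reduce
    static int totalLength(boolean parallel) {
        var stream = parallel ? WORDS.parallelStream() : WORDS.stream();
        return stream.reduce(
                0,                              // identity
                (len, w) -> len + w.length(),   // accumulator: fold one element in
                Integer::sum);                  // combiner: merge partial sums (parallel only)
    }

    public static void main(String[] args) {
        // A correct combiner makes sequential and parallel results identical
        System.out.println(totalLength(false)); // 3893
        System.out.println(totalLength(true));  // 3893
    }
}
```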

collect() is the most versatile terminal operation. It uses a Collector to accumulate stream elements into a container — a List, Set, Map, String, or any custom structure. The Collectors utility class provides a rich set of factory methods. For advanced grouping and partitioning, see the dedicated groupingBy & partitioningBy article.

```java
import java.util.*;
import java.util.stream.*;

record Product(String name, String category, double price) {}

List<Product> products = List.of(
    new Product("Laptop", "Electronics", 1299.99),
    new Product("Headphones", "Electronics", 89.99),
    new Product("Desk Chair", "Furniture", 349.00),
    new Product("Keyboard", "Electronics", 59.99),
    new Product("Bookshelf", "Furniture", 179.00),
    new Product("Webcam", "Electronics", 74.99)
);

// toList() — collects into a List (an ArrayList in practice,
// though the spec makes no mutability guarantee)
List<String> names = products.stream()
    .map(Product::name)
    .collect(Collectors.toList());

// toUnmodifiableList() — immutable list (Java 10+)
List<String> immutableNames = products.stream()
    .map(Product::name)
    .collect(Collectors.toUnmodifiableList());

// toSet() — removes duplicates
Set<String> categories = products.stream()
    .map(Product::category)
    .collect(Collectors.toSet());
System.out.println(categories); // [Electronics, Furniture]

// joining() — concatenate strings with delimiter
String productList = products.stream()
    .map(Product::name)
    .collect(Collectors.joining(", "));
System.out.println(productList);
// Laptop, Headphones, Desk Chair, Keyboard, Bookshelf, Webcam

// joining with prefix and suffix
String formatted = products.stream()
    .map(Product::name)
    .collect(Collectors.joining(", ", "[", "]"));
// [Laptop, Headphones, Desk Chair, Keyboard, Bookshelf, Webcam]

// counting()
long count = products.stream()
    .filter(p -> p.price() > 100)
    .collect(Collectors.counting());
System.out.println("Expensive products: " + count); // 3

// summarizingDouble — all stats in one pass
DoubleSummaryStatistics stats = products.stream()
    .collect(Collectors.summarizingDouble(Product::price));
System.out.printf("Count: %d, Min: $%.2f, Max: $%.2f, Avg: $%.2f%n",
    stats.getCount(), stats.getMin(), stats.getMax(), stats.getAverage());

// toMap — name as key, price as value
Map<String, Double> priceByName = products.stream()
    .collect(Collectors.toMap(Product::name, Product::price));
// {Laptop=1299.99, Headphones=89.99, ...}

// toMap with merge function — handle duplicate keys
// Sum prices by category
Map<String, Double> totalByCategory = products.stream()
    .collect(Collectors.toMap(
        Product::category,
        Product::price,
        Double::sum)); // merge: sum prices for same category
System.out.println(totalByCategory); // {Electronics=1524.96, Furniture=528.0}

// toMap with merge + map factory (sorted map)
Map<String, Double> sortedTotals = products.stream()
    .collect(Collectors.toMap(
        Product::category,
        Product::price,
        Double::sum,
        TreeMap::new));
```

Java 16 added Stream.toList() as a shorthand for collect(Collectors.toList()). The returned list is unmodifiable. If you need a mutable list, continue to use collect(Collectors.toList()) or collect(Collectors.toCollection(ArrayList::new)).
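The difference the note describes is easy to verify. This sketch (the class and helper names are our own) shows Stream.toList() rejecting mutation while toCollection(ArrayList::new) guarantees a mutable result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ToListDemo {
    // Returns true if the list rejects structural modification
    static boolean rejectsAdd(List<String> list) {
        try {
            list.add("z");
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Stream.toList() (Java 16+) returns an unmodifiable list
        System.out.println(rejectsAdd(Stream.of("x", "y").toList())); // true

        // toCollection(ArrayList::new) guarantees mutability
        List<String> mutable = Stream.of("x", "y")
                .collect(Collectors.toCollection(ArrayList::new));
        System.out.println(rejectsAdd(mutable)); // false ("z" was appended)
    }
}
```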

These intermediate operations control ordering, uniqueness, and the size of the stream. sorted() and distinct() are stateful: sorted() must buffer every element before emitting anything, and distinct() must remember everything it has already seen. limit() is short-circuiting, and combined with skip() it enables pagination patterns.

```java
import java.util.*;
import java.util.stream.*;

record Employee(String name, String department, int salary) {}

List<Employee> employees = List.of(
    new Employee("Alice", "Engineering", 92_000),
    new Employee("Bob", "Marketing", 65_000),
    new Employee("Carol", "Engineering", 81_000),
    new Employee("Diana", "HR", 57_000),
    new Employee("Ethan", "Marketing", 73_000),
    new Employee("Fiona", "Engineering", 98_000)
);

// sorted() — natural order (requires Comparable)
List<String> sortedNames = employees.stream()
    .map(Employee::name)
    .sorted()
    .toList();
System.out.println(sortedNames); // [Alice, Bob, Carol, Diana, Ethan, Fiona]

// sorted(Comparator) — custom sort by salary descending
employees.stream()
    .sorted(Comparator.comparingInt(Employee::salary).reversed())
    .map(e -> e.name() + " $" + e.salary())
    .forEach(System.out::println);
// Fiona $98000, Alice $92000, Carol $81000, ...

// Multi-key sort: department asc, then salary desc
employees.stream()
    .sorted(Comparator.comparing(Employee::department)
        .thenComparing(Comparator.comparingInt(Employee::salary).reversed()))
    .forEach(e -> System.out.printf("%-12s %-12s $%,d%n",
        e.name(), e.department(), e.salary()));

// distinct() — removes duplicates (uses equals/hashCode)
List<String> depts = employees.stream()
    .map(Employee::department)
    .distinct()
    .toList();
System.out.println(depts); // [Engineering, Marketing, HR]

// limit(n) and skip(n) — pagination
int pageSize = 2;
int pageNumber = 1; // zero-based
List<Employee> page = employees.stream()
    .sorted(Comparator.comparing(Employee::name))
    .skip((long) pageNumber * pageSize)
    .limit(pageSize)
    .toList();
// Page 1 (of size 2, sorted by name): [Carol, Diana]

// takeWhile() (Java 9+) — take elements while predicate is true, then stop
List<Integer> nums = List.of(2, 4, 6, 7, 8, 10);
List<Integer> evenPrefix = nums.stream()
    .takeWhile(n -> n % 2 == 0)
    .toList();
System.out.println(evenPrefix); // [2, 4, 6]

// dropWhile() (Java 9+) — skip elements while predicate is true, then take the rest
List<Integer> afterEvens = nums.stream()
    .dropWhile(n -> n % 2 == 0)
    .toList();
System.out.println(afterEvens); // [7, 8, 10]
```

distinct() relies on equals() and hashCode(). If your stream contains objects that do not override these methods, distinct() will use identity comparison and may not remove logical duplicates. Always verify your domain classes implement equals/hashCode correctly, or extract a comparable field first with map().
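The tip above can be demonstrated directly. In this hypothetical sketch (Tag and the method names are our own), the class deliberately omits equals()/hashCode(), so distinct() falls back to identity comparison and keeps both "java" tags.

```java
import java.util.List;

public class DistinctDemo {
    // A plain class WITHOUT equals()/hashCode(): identity comparison only
    static class Tag {
        final String name;
        Tag(String name) { this.name = name; }
    }

    static final List<Tag> TAGS =
            List.of(new Tag("java"), new Tag("java"), new Tag("sql"));

    static long distinctObjects() {
        // Both "java" tags survive: they are distinct object identities
        return TAGS.stream().distinct().count();
    }

    static long distinctNames() {
        // Mapping to the String field first gives logical deduplication
        return TAGS.stream().map(t -> t.name).distinct().count();
    }

    public static void main(String[] args) {
        System.out.println(distinctObjects()); // 3
        System.out.println(distinctNames());   // 2
    }
}
```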

peek() is an intermediate operation designed for debugging — it lets you observe each element as it passes through the pipeline without altering it. forEach() is a terminal operation that consumes the stream. Both execute side-effects, but their intended uses differ significantly.

```java
import java.util.*;
import java.util.stream.*;

record Order(String id, double amount, String status) {}

List<Order> orders = List.of(
    new Order("O1", 250.00, "SHIPPED"),
    new Order("O2", 45.00, "PENDING"),
    new Order("O3", 180.00, "SHIPPED"),
    new Order("O4", 99.00, "CANCELLED"),
    new Order("O5", 320.00, "PENDING")
);

// peek() for debugging — see what passes through each stage
List<String> shippedIds = orders.stream()
    .filter(o -> o.status().equals("SHIPPED"))
    .peek(o -> System.out.println("After filter: " + o.id()))
    .map(Order::id)
    .peek(id -> System.out.println("After map: " + id))
    .toList();
// After filter: O1
// After map: O1
// After filter: O3
// After map: O3

// forEach() — terminal operation, consumes the stream
orders.stream()
    .filter(o -> o.amount() > 100)
    .forEach(o -> System.out.println(o.id() + ": $" + o.amount()));
// O1: $250.0
// O3: $180.0
// O5: $320.0

// forEachOrdered() — preserves encounter order even in parallel streams
orders.parallelStream()
    .filter(o -> o.status().equals("PENDING"))
    .forEachOrdered(o -> System.out.println(o.id()));
// Always prints O2 then O5, regardless of parallel execution order

// forEach on a parallel stream — order is NOT guaranteed
orders.parallelStream()
    .forEach(o -> System.out.print(o.id() + " "));
// Might print: O3 O1 O5 O2 O4 (non-deterministic order)
```

peek() is intended for debugging only. Do not use it to modify state: the stream implementation is free to skip peek when it can produce the result without traversing the elements (for example, count() on a sized stream in Java 9+). Also avoid forEach with accumulation into an external collection; use collect() instead, which is safe for parallel execution.

If you need ordered output from a parallel stream, use forEachOrdered(). It preserves the encounter order at the cost of some parallelism, but the upstream computation still runs in parallel; only the final consumption is serialised.

Several terminal operations return Optional because the stream may be empty. findFirst(), findAny(), min(), and max() return Optional<T>, while the match operations (anyMatch, allMatch, noneMatch) return boolean. Understanding when and how to combine these with Optional methods is essential for clean stream code.

```java
import java.util.*;
import java.util.stream.*;

record Employee(String name, String department, int salary, boolean active) {}

List<Employee> employees = List.of(
    new Employee("Alice", "Engineering", 92_000, true),
    new Employee("Bob", "Marketing", 65_000, true),
    new Employee("Carol", "Engineering", 81_000, false),
    new Employee("Diana", "HR", 57_000, true),
    new Employee("Ethan", "Marketing", 73_000, true),
    new Employee("Fiona", "Engineering", 98_000, true)
);

// findFirst() — first element matching a condition
Optional<Employee> firstEngineer = employees.stream()
    .filter(e -> e.department().equals("Engineering"))
    .findFirst();
firstEngineer.ifPresent(e -> System.out.println("First engineer: " + e.name()));
// First engineer: Alice

// findAny() — any matching element (non-deterministic in parallel)
Optional<Employee> anyActive = employees.parallelStream()
    .filter(Employee::active)
    .findAny();
anyActive.ifPresent(e -> System.out.println("Found: " + e.name()));

// min() and max() with a Comparator
Optional<Employee> topEarner = employees.stream()
    .max(Comparator.comparingInt(Employee::salary));
String topName = topEarner.map(Employee::name).orElse("No employees");
System.out.println("Top earner: " + topName); // Fiona

Optional<Employee> lowestPaid = employees.stream()
    .filter(Employee::active)
    .min(Comparator.comparingInt(Employee::salary));
lowestPaid.ifPresent(e ->
    System.out.println("Lowest active salary: " + e.name() + " $" + e.salary()));

// anyMatch, allMatch, noneMatch — return boolean, not Optional
boolean hasHR = employees.stream()
    .anyMatch(e -> e.department().equals("HR"));
System.out.println("Has HR employees: " + hasHR); // true

boolean allActive = employees.stream()
    .allMatch(Employee::active);
System.out.println("All active: " + allActive); // false

boolean noneOverMillion = employees.stream()
    .noneMatch(e -> e.salary() > 1_000_000);
System.out.println("None over $1M: " + noneOverMillion); // true

// Combining Optional methods with stream results
String result = employees.stream()
    .filter(e -> e.department().equals("Legal"))
    .findFirst()
    .map(Employee::name)
    .orElse("No one in Legal");
System.out.println(result); // No one in Legal

// orElseThrow for mandatory lookups
Employee cto = employees.stream()
    .filter(e -> e.name().equals("Fiona"))
    .findFirst()
    .orElseThrow(() -> new NoSuchElementException("Employee not found"));
```

findFirst() returns the first element in encounter order, while findAny() may return any matching element and is faster in parallel streams because it does not need to synchronise on ordering. Use findFirst() when ordering matters and findAny() when you just need any match.

Java provides three specialised stream types — IntStream, LongStream, and DoubleStream — to avoid the cost of boxing primitives into wrapper objects. They offer additional terminal operations like sum(), average(), and summaryStatistics() that are not available on Stream<T>.

```java
import java.util.*;
import java.util.stream.*;

// IntStream.range() / rangeClosed()
IntStream.range(1, 6).forEach(i -> System.out.print(i + " "));       // 1 2 3 4 5
IntStream.rangeClosed(1, 5).forEach(i -> System.out.print(i + " ")); // 1 2 3 4 5

// sum(), average(), min(), max()
int sum = IntStream.rangeClosed(1, 100).sum();
System.out.println("Sum 1..100: " + sum); // 5050

double avg = IntStream.of(85, 90, 78, 92, 88).average().orElse(0);
System.out.printf("Average score: %.1f%n", avg); // 86.6

// summaryStatistics() — all stats in one pass
IntSummaryStatistics stats = IntStream.of(23, 45, 12, 67, 34, 89, 56)
    .summaryStatistics();
System.out.println("Count: " + stats.getCount()); // 7
System.out.println("Sum: " + stats.getSum());     // 326
System.out.println("Min: " + stats.getMin());     // 12
System.out.println("Max: " + stats.getMax());     // 89
System.out.printf("Avg: %.2f%n", stats.getAverage()); // 46.57

// Boxing: IntStream → Stream<Integer> with boxed()
List<Integer> boxed = IntStream.rangeClosed(1, 5)
    .boxed()
    .toList();
System.out.println(boxed); // [1, 2, 3, 4, 5]

// mapToObj: IntStream → Stream<T>
List<String> hexValues = IntStream.of(10, 255, 128)
    .mapToObj(n -> String.format("0x%02X", n))
    .toList();
System.out.println(hexValues); // [0x0A, 0xFF, 0x80]

// Converting between primitive stream types
LongStream longStream = IntStream.rangeClosed(1, 5).asLongStream();
DoubleStream doubleStream = IntStream.rangeClosed(1, 5).asDoubleStream();

// Practical: ASCII character histogram
String text = "hello world";
text.chars()
    .filter(c -> c != ' ')
    .mapToObj(c -> (char) c)
    .collect(Collectors.groupingBy(c -> c, Collectors.counting()))
    .forEach((ch, count) -> System.out.println(ch + ": " + count));
// counts: h=1, e=1, l=3, o=2, w=1, r=1, d=1 (HashMap iteration order may vary)
```

Primitive streams do not have collect(Collectors.toList()). You must call boxed() first to convert to Stream<Integer>, or use toArray() to get a primitive array. Forgetting boxed() is a common compilation error when working with IntStream.

Parallel streams split the source into multiple chunks and process them concurrently using the common ForkJoinPool. A stream can be made parallel with .parallelStream() on a collection or .parallel() on an existing stream. Parallelism is not always faster — it depends on data size, operation cost, and source splittability.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

// Creating parallel streams
List<Integer> numbers = IntStream.rangeClosed(1, 1_000_000).boxed().toList();

// Option 1: parallelStream() from a collection
long count1 = numbers.parallelStream()
    .filter(n -> n % 2 == 0)
    .count();

// Option 2: .parallel() on an existing stream
long count2 = numbers.stream()
    .parallel()
    .filter(n -> n % 2 == 0)
    .count();

// When parallel HELPS — CPU-bound operations on large datasets
long start = System.nanoTime();
double result = LongStream.rangeClosed(1, 10_000_000)
    .parallel()
    .mapToDouble(n -> Math.sqrt(n) * Math.log(n))
    .sum();
long elapsed = System.nanoTime() - start;
System.out.printf("Parallel: %.2f in %d ms%n", result, elapsed / 1_000_000);

// Ordering: forEachOrdered preserves encounter order
List<String> names = List.of("Alice", "Bob", "Carol", "Diana", "Ethan");
names.parallelStream()
    .map(String::toUpperCase)
    .forEachOrdered(System.out::println);
// Always: ALICE, BOB, CAROL, DIANA, ETHAN

// collect() is order-preserving for ordered sources
List<String> upperNames = names.parallelStream()
    .map(String::toUpperCase)
    .toList(); // order is preserved
System.out.println(upperNames); // [ALICE, BOB, CAROL, DIANA, ETHAN]

// Custom ForkJoinPool — avoid starving the common pool
ForkJoinPool customPool = new ForkJoinPool(4);
try {
    List<Integer> result2 = customPool.submit(() ->
        numbers.parallelStream()
            .filter(n -> n % 3 == 0)
            .collect(Collectors.toList())
    ).get();
    System.out.println("Multiples of 3: " + result2.size());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    throw new RuntimeException(e.getCause());
} finally {
    customPool.shutdown();
}
```

Parallel streams share the common ForkJoinPool with the rest of your application. A slow parallel pipeline (e.g. one that does I/O or blocks on locks) can starve other tasks. Never use parallel streams for I/O-bound work.

For CPU-bound work on small datasets (fewer than ~10,000 elements), the thread coordination overhead typically exceeds the gains — always measure before committing to parallel.
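As a rough illustration of "always measure", the sketch below (all names are our own) times one sequential and one parallel run of the same pipeline. It is deliberately crude: a single timed run after a warm-up pass, not a proper benchmark, so treat the numbers as indicative only and reach for a harness like JMH for real decisions.

```java
import java.util.stream.LongStream;

public class MeasureDemo {
    // Time one run of a CPU-bound pipeline, in milliseconds
    static long timeMs(boolean parallel) {
        long start = System.nanoTime();
        LongStream s = LongStream.rangeClosed(1, 5_000_000);
        if (parallel) {
            s = s.parallel();
        }
        double sum = s.mapToDouble(Math::sqrt).sum();
        if (sum < 0) throw new IllegalStateException(); // keep the result live
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // One warm-up pass each, then compare
        timeMs(false);
        timeMs(true);
        System.out.println("sequential: " + timeMs(false) + " ms");
        System.out.println("parallel:   " + timeMs(true) + " ms");
    }
}
```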

When to use parallel streams:

- CPU-bound transformations over large in-memory datasets (tens of thousands of elements or more)
- Cheaply splittable sources such as ArrayList, arrays, and IntStream.range()
- Stateless, non-interfering operations with no shared mutable state

When to avoid parallel streams:

- I/O-bound or blocking work, which can starve the common ForkJoinPool
- Small datasets, where thread-coordination overhead outweighs the gains
- Pipelines that rely on side-effects or shared mutable state
- Poorly splittable sources such as LinkedList or Stream.iterate()

Streams are a powerful tool, but they come with sharp edges. These guidelines will help you write stream code that is correct, readable, and performant.

Intermediate vs Terminal Operations
| Intermediate (lazy, return Stream) | Terminal (eager, produce result)    |
|------------------------------------|-------------------------------------|
| filter(), map(), flatMap()         | collect(), toList()                 |
| sorted(), distinct()               | forEach(), forEachOrdered()         |
| limit(), skip()                    | reduce(), count()                   |
| peek()                             | min(), max(), sum()                 |
| takeWhile(), dropWhile()           | findFirst(), findAny()              |
| mapToInt(), mapToDouble()          | anyMatch(), allMatch(), noneMatch() |
| mapMulti() (Java 16+)              | toArray()                           |
```java
import java.util.*;
import java.util.stream.*;

// --- Pitfall 1: reusing a stream → IllegalStateException ---
Stream<String> stream = Stream.of("a", "b", "c");
stream.forEach(System.out::print); // abc
// stream.count(); // throws IllegalStateException: stream has already been operated upon

// Fix: create a new stream each time, or store the source
List<String> source = List.of("a", "b", "c");
source.stream().forEach(System.out::print);
long count = source.stream().count();

// --- Pitfall 2: side-effects in lambdas ---
// BAD — accumulating into an external list is not thread-safe
List<String> results = new ArrayList<>();
Stream.of("a", "b", "c").forEach(results::add); // fragile in parallel

// GOOD — use collect
List<String> safe = Stream.of("a", "b", "c").collect(Collectors.toList());

// --- Pitfall 3: streams that do nothing (missing terminal op) ---
// This does NOTHING — no terminal operation to trigger the pipeline
List.of(1, 2, 3).stream().filter(n -> n > 1).map(n -> n * 2);

// Fix: add a terminal operation
List<Integer> doubled = List.of(1, 2, 3).stream()
    .filter(n -> n > 1)
    .map(n -> n * 2)
    .toList(); // [4, 6]

// --- Best practice: prefer method references for readability ---
// (given a List<Employee> employees with a getName() accessor)
// Lambda
employees.stream().map(e -> e.getName()).toList();
// Method reference — cleaner
employees.stream().map(Employee::getName).toList();

// --- Best practice: toList() (Java 16+) vs collect(toList()) ---
// Java 16+: concise, returns an unmodifiable list
List<String> modern = Stream.of("x", "y", "z").toList();
// Pre-Java 16, or when you need a mutable list
List<String> mutable = Stream.of("x", "y", "z")
    .collect(Collectors.toList()); // an ArrayList in practice (mutability not guaranteed by spec)

// --- Best practice: short-circuiting for early exit ---
// anyMatch, findFirst, limit are short-circuiting — they stop processing early
// (given a List<Product> products with a price() accessor)
boolean hasExpensive = products.stream()
    .anyMatch(p -> p.price() > 1000); // stops at the first match

// --- When NOT to use streams ---
// Simple loop with index — a for loop is clearer
String[] arr = {"a", "b", "c"};
for (int i = 0; i < arr.length; i++) {
    arr[i] = arr[i].toUpperCase(); // mutating in place — not a stream's job
}
// Complex stateful logic with break/continue — use a loop
// Deeply nested stream chains that are harder to debug than a plain loop
```

A stream without a terminal operation does absolutely nothing. The pipeline is lazy — filter(), map(), and peek() are not executed until a terminal operation like collect(), forEach(), or count() triggers the pipeline. This is a common source of bugs when converting imperative code to streams.

When a stream pipeline grows beyond three or four chained operations, consider breaking it into named steps or extracting the predicate/function into a well-named variable. Readability is more important than fitting everything into a single expression. If a plain for loop with early exit or index-based access is clearer, use the loop.
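The advice on extracting predicates into named variables can be sketched like this (the class, record, and variable names are our own): each predicate carries its intent in its name, and Predicate.and() composes them.

```java
import java.util.List;
import java.util.function.Predicate;

public class ReadablePipeline {
    record Employee(String name, String department, int salary, boolean active) {}

    // Named predicates document intent better than one long inline lambda
    static List<String> activeEngineers(List<Employee> employees) {
        Predicate<Employee> isActive = Employee::active;
        Predicate<Employee> isEngineer = e -> e.department().equals("Engineering");
        return employees.stream()
                .filter(isActive.and(isEngineer))
                .map(Employee::name)
                .toList();
    }

    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("Alice", "Engineering", 92_000, true),
                new Employee("Carol", "Engineering", 81_000, false));
        System.out.println(activeEngineers(employees)); // [Alice]
    }
}
```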

Related articles: For nested collection flattening see flatMap. For grouping and partitioning see groupingBy & partitioningBy.