
Every read or write call on an unbuffered stream translates directly into an OS system call. System calls are orders of magnitude slower than memory operations, so reading a file character-by-character is catastrophically inefficient. A buffered stream solves this by accumulating data in an in-memory buffer — by default 8192 bytes — and issuing a single system call to fill or flush the whole buffer at once. BufferedReader and BufferedWriter wrap any Reader or Writer; BufferedInputStream and BufferedOutputStream wrap byte streams. Unless you need raw, unbuffered access, every file stream should be wrapped in its buffered counterpart.

// Each read/write call on an unbuffered stream = one system call
// System calls are expensive (~1000x slower than memory operations)

// SLOW: unbuffered — one system call per character
try (FileReader fr = new FileReader("data.txt")) {
    int ch;
    while ((ch = fr.read()) != -1) {  // one syscall per char
        process((char) ch);
    }
}

// FAST: buffered — fills 8192-byte buffer with one syscall, then serves from memory
try (BufferedReader br = new BufferedReader(new FileReader("data.txt"))) {
    int ch;
    while ((ch = br.read()) != -1) {  // serves from in-memory buffer
        process((char) ch);
    }
}

// The difference on a large file:
//   Unbuffered: millions of system calls
//   Buffered:   (file size / buffer size) system calls — orders of magnitude fewer
// Default buffer size is 8192 bytes/chars

// Custom buffer size for very large files:
BufferedReader br = new BufferedReader(new FileReader("big.csv"), 65536); // 64KB buffer

As a rule, always wrap file streams in a Buffered* wrapper unless you have a specific reason not to. The performance difference for sequential file reading is often 10x–100x for large files.
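The gap is easy to observe with a quick timing sketch. This is not a rigorous benchmark (no JIT warmup, single run), and the `timeRead` helper and temp file are purely illustrative, but the buffered reader should come out dramatically faster on most systems:

```java
import java.io.*;
import java.nio.file.*;

public class BufferDemo {
    // Illustrative helper: drain a Reader one char at a time and time it.
    static long timeRead(Reader r) throws IOException {
        long start = System.nanoTime();
        while (r.read() != -1) { /* consume each char */ }
        r.close();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("bench", ".txt");
        Files.writeString(tmp, "x".repeat(1_000_000)); // ~1 MB of test data (Java 11+)
        long raw = timeRead(new FileReader(tmp.toFile()));
        long buffered = timeRead(new BufferedReader(new FileReader(tmp.toFile())));
        System.out.printf("unbuffered: %d ms, buffered: %d ms%n",
                raw / 1_000_000, buffered / 1_000_000);
        Files.delete(tmp);
    }
}
```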

Wrap a FileReader (or any Reader) in a BufferedReader to gain the readLine() method, which reads one complete line and strips the line terminator. readLine() returns null when the end of stream is reached — not an empty string — so the standard while-loop idiom checks for null. Always use try-with-resources to guarantee the stream is closed. In Java 8 and later, lines() returns a lazy Stream<String> that integrates with the stream API and is the preferred modern alternative to the while loop.

import java.io.*;
import java.nio.charset.StandardCharsets;

// readLine() — reads one line at a time, strips the line terminator
// Returns null at end of stream
try (BufferedReader br = new BufferedReader(
        new InputStreamReader(new FileInputStream("log.txt"), StandardCharsets.UTF_8))) {
    String line;
    int lineNum = 0;
    while ((line = br.readLine()) != null) {
        lineNum++;
        if (line.contains("ERROR")) {
            System.out.println("Line " + lineNum + ": " + line);
        }
    }
}

// lines() — Java 8+ Stream<String> of lines (lazy)
try (BufferedReader br = new BufferedReader(new FileReader("data.csv"))) {
    long errorCount = br.lines()
                        .filter(line -> line.startsWith("ERROR"))
                        .count();
    System.out.println("Errors: " + errorCount);
}

// Reading from stdin (e.g., competitive programming)
BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
String input = stdin.readLine();
int n = Integer.parseInt(input.trim());

// StringReader — read from a String using BufferedReader API
String text = "line one\nline two\nline three";
try (BufferedReader br = new BufferedReader(new StringReader(text))) {
    br.lines().forEach(System.out::println);
}

// skip() — skip characters; mark()/reset() — mark position and reset
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    br.mark(1000);                // mark current position (read-ahead limit = 1000 chars)
    String first = br.readLine();
    br.reset();                   // go back to marked position
    String again = br.readLine(); // same as first
}

Wrap a FileWriter in a BufferedWriter to batch small writes into larger I/O operations. newLine() writes the platform-specific line separator (\n on Unix, \r\n on Windows), which is safer than hardcoding a character. flush() pushes any buffered data to the underlying stream without closing it. close() automatically flushes first, so try-with-resources guarantees all data is written — but if the JVM crashes before close, buffered data is lost.

// BufferedWriter — accumulates writes; flushes when buffer full or flush() called
try (BufferedWriter bw = new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream("output.txt"), StandardCharsets.UTF_8))) {
    bw.write("First line");
    bw.newLine();                 // platform-correct line separator (\n or \r\n)
    bw.write("Second line");
    bw.newLine();
    bw.write("Third line");
    // BufferedWriter flushes automatically when closed (try-with-resources)
}

// PrintWriter — adds print/println/printf on top of BufferedWriter
try (PrintWriter pw = new PrintWriter(new BufferedWriter(
        new FileWriter("report.txt")))) {
    pw.printf("Name: %s, Score: %.2f%n", "Alice", 95.5);
    pw.println("Status: PASS");
}

// FileWriter shortcut: append mode
try (BufferedWriter bw = new BufferedWriter(new FileWriter("log.txt", true))) { // append
    bw.write("[INFO] Server started");
    bw.newLine();
}

// Writing multiple lines efficiently
List<String> lines = List.of("alpha", "beta", "gamma");
try (BufferedWriter bw = new BufferedWriter(new FileWriter("list.txt"))) {
    for (String line : lines) {
        bw.write(line);
        bw.newLine();
    }
    // IMPORTANT: data in the buffer is NOT written to disk until flush() or close()
    bw.flush(); // force write without closing (must be called while bw is still open)
}

Always close or flush a BufferedWriter before the file is read elsewhere. Data sitting in the buffer is lost if the JVM crashes or the stream is not closed. Use try-with-resources — it calls close() (which calls flush()) automatically.
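The flush behavior can be verified directly: before flush() the file on disk is still empty because the data sits in the 8192-char buffer. A minimal self-contained sketch (temp file used only so the example cleans up after itself):

```java
import java.io.*;
import java.nio.file.*;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("flush", ".txt");
        try (BufferedWriter bw = new BufferedWriter(new FileWriter(tmp.toFile()))) {
            bw.write("buffered data");
            // The 13 chars are still in the in-memory buffer, not on disk:
            System.out.println("before flush: " + Files.size(tmp)); // 0
            bw.flush(); // push the buffer to the OS without closing the stream
            System.out.println("after flush:  " + Files.size(tmp)); // 13
        }
        Files.delete(tmp);
    }
}
```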

BufferedInputStream and BufferedOutputStream provide the same buffering benefit as their character-stream counterparts but operate on raw bytes. They are essential when copying files or processing binary data such as images, audio, or serialized objects. Wrap a FileInputStream or FileOutputStream in the buffered version to reduce system calls; combine with a byte buffer in the read loop for maximum throughput. Other decorators such as DataInputStream and DataOutputStream can be layered on top when you need to read and write Java primitives in binary format.

// BufferedInputStream / BufferedOutputStream — for binary data (images, audio, etc.)

// Copy a file efficiently with buffered binary streams
void copyFile(String src, String dst) throws IOException {
    try (InputStream in = new BufferedInputStream(new FileInputStream(src), 65536);
         OutputStream out = new BufferedOutputStream(new FileOutputStream(dst), 65536)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    } // out.flush() called automatically on close
}

// DataInputStream/DataOutputStream — read/write Java primitives in binary format
try (DataOutputStream dos = new DataOutputStream(
        new BufferedOutputStream(new FileOutputStream("data.bin")))) {
    dos.writeInt(42);
    dos.writeDouble(3.14);
    dos.writeUTF("hello");
}

try (DataInputStream dis = new DataInputStream(
        new BufferedInputStream(new FileInputStream("data.bin")))) {
    int i = dis.readInt();       // 42
    double d = dis.readDouble(); // 3.14
    String s = dis.readUTF();    // "hello"
    System.out.println(i + " " + d + " " + s);
}

// Check how many bytes can be read without blocking
try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream("file"))) {
    int available = bis.available(); // bytes available without blocking
}
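Since Java 9, InputStream.transferTo() can replace the manual read/write loop in the copy above; wrapping both endpoints in their buffered counterparts still cuts down system calls. A small sketch (temp files used so it is self-contained):

```java
import java.io.*;
import java.nio.file.*;

public class TransferDemo {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Path dst = Files.createTempFile("dst", ".bin");
        Files.write(src, new byte[]{1, 2, 3, 4});
        // Java 9+: transferTo() copies everything from in to out internally
        try (InputStream in = new BufferedInputStream(new FileInputStream(src.toFile()));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(dst.toFile()))) {
            in.transferTo(out);
        } // close() flushes the buffered output
        System.out.println(Files.size(dst)); // 4
        Files.delete(src);
        Files.delete(dst);
    }
}
```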

Files.newBufferedReader() and Files.newBufferedWriter() are convenience factory methods that return a properly buffered BufferedReader or BufferedWriter with UTF-8 encoding by default, removing the boilerplate of stacking FileInputStream, InputStreamReader, and BufferedReader manually. For even simpler use cases, Files.readString() and Files.writeString() (Java 11+) load or write an entire file in one call. Files.lines() provides a lazy stream for large files without loading the whole content into memory.

import java.io.*;
import java.nio.file.*;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Stream;

// java.nio.file.Files — convenience methods; internally uses buffering
Path path = Path.of("data.txt");

// Read all lines at once — fine for small files
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

// Read as a stream — lazy, good for large files
try (Stream<String> stream = Files.lines(path, StandardCharsets.UTF_8)) {
    stream.filter(l -> !l.isBlank())
          .map(String::trim)
          .forEach(System.out::println);
}

// Write all lines at once
Files.write(path, List.of("line1", "line2"), StandardCharsets.UTF_8,
        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);

// Get a BufferedReader/Writer from NIO (close these with try-with-resources too)
BufferedReader br = Files.newBufferedReader(path, StandardCharsets.UTF_8);
BufferedWriter bw = Files.newBufferedWriter(path, StandardCharsets.UTF_8,
        StandardOpenOption.APPEND);

// Read/write entire file as byte array
byte[] bytes = Files.readAllBytes(path);
Files.write(path, bytes);

// Copy, move, delete
Files.copy(Path.of("src.txt"), Path.of("dst.txt"), StandardCopyOption.REPLACE_EXISTING);
Files.move(Path.of("old.txt"), Path.of("new.txt"));
Files.deleteIfExists(path);

For most file operations, the NIO Files API is more convenient and just as efficient as wrapping streams manually. Use Files.lines() for large files (lazy stream) and Files.readAllLines() for small ones (it loads everything into memory).
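The Files.readString() and Files.writeString() methods mentioned above (Java 11+) handle a whole file in one call, UTF-8 by default. A minimal sketch, using a temp file so it runs standalone:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OneCallIO {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        // Whole-file write and read in single calls (UTF-8 by default)
        Files.writeString(tmp, "first\nsecond\n");
        String content = Files.readString(tmp);
        System.out.println(content.lines().count()); // 2
        Files.delete(tmp);
    }
}
```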