Contents
- Buffer Properties: capacity, limit, position, mark
- flip(), clear(), rewind(), compact()
- get() and put() Operations
- Byte Order
- Reading Files with FileChannel
- Direct vs Heap Buffers
Buffer Properties: capacity, limit, position, mark

Every ByteBuffer has four properties that control how data flows in and out:
- capacity — total size of the buffer (fixed at creation, never changes).
- limit — the boundary beyond which data should not be read or written.
- position — the index of the next byte to read or write (advances automatically).
- mark — a saved position; calling reset() restores position to mark.
Invariant: 0 ≤ mark ≤ position ≤ limit ≤ capacity
import java.nio.ByteBuffer;
// Allocate a heap buffer of 8 bytes
ByteBuffer buf = ByteBuffer.allocate(8);
// capacity=8, limit=8, position=0, mark=-1
System.out.println("capacity: " + buf.capacity()); // 8
System.out.println("limit: " + buf.limit()); // 8
System.out.println("position: " + buf.position()); // 0
System.out.println("remaining: " + buf.remaining()); // limit - position = 8
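The mark property from the list above deserves its own demonstration, since none of the later examples use it. A minimal sketch: read a few bytes, mark() a spot, read ahead, then reset() to backtrack.

```java
import java.nio.ByteBuffer;

ByteBuffer buf = ByteBuffer.allocate(8);
buf.put((byte) 1).put((byte) 2).put((byte) 3).put((byte) 4);
buf.flip();                    // position=0, limit=4

buf.get();                     // read 1 → position=1
buf.mark();                    // mark=1
buf.get();                     // read 2 → position=2
buf.get();                     // read 3 → position=3

buf.reset();                   // position restored to mark (1)
System.out.println(buf.get()); // 2 — re-reads from the marked position
```

Note that flip(), clear(), and rewind() all discard the mark; calling reset() afterwards throws InvalidMarkException.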
flip(), clear(), rewind(), compact()

The write→read transition is the most common source of bugs with ByteBuffer. You write data into the buffer, then call flip() to prepare it for reading:
ByteBuffer buf = ByteBuffer.allocate(8);
// --- Write phase ---
buf.put((byte) 10);
buf.put((byte) 20);
buf.put((byte) 30);
// position=3, limit=8
// Prepare for reading: sets limit=position, position=0
buf.flip();
// position=0, limit=3 — "3 bytes available to read"
// --- Read phase ---
while (buf.hasRemaining()) {
System.out.println(buf.get()); // 10, 20, 30
}
// position=3, limit=3
// ---- Reset for reuse ----
// rewind() — re-read the same data from the start
buf.rewind();
// position=0, limit=3 (unchanged) — the same 3 bytes can be read again
// clear() — reset for a fresh write (discards any unread data)
buf.clear();
// position=0, limit=8 (capacity)
// compact() — shift unread bytes to the start, prepare for more writing
buf.put((byte) 40).put((byte) 50); // write two bytes
buf.flip();    // read mode: position=0, limit=2
buf.get();     // consume one byte (40); one unread byte remains
buf.compact();
// The unread byte (50) moves to index 0; position=1, limit=8 (capacity)
Forgetting flip() before reading is the most common ByteBuffer bug — you'll read stale or incorrect data. Always: write → flip → read → clear/compact → write again.
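The failure mode is easy to reproduce. A minimal sketch of the forgotten-flip bug — the loop below reads the untouched tail of the buffer instead of the bytes just written:

```java
import java.nio.ByteBuffer;

ByteBuffer buf = ByteBuffer.allocate(8);
buf.put((byte) 10).put((byte) 20).put((byte) 30);
// position=3, limit=8 — still in write mode

// BUG: without flip(), reading starts at position 3, not 0,
// and hasRemaining() is true because limit is still 8 (capacity)
while (buf.hasRemaining()) {
    System.out.println(buf.get()); // prints 0 five times — never 10, 20, 30
}
```

A freshly allocated buffer is zero-filled, so this bug often manifests as silent zeros rather than an exception.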
get() and put() Operations

ByteBuffer offers two families of access methods. Relative get/put operations use and advance the internal position pointer automatically — each call moves position forward by the size of the type written or read. Absolute get/put operations take an explicit byte index and leave position unchanged, which is useful for random-access reads or for writing a header field after the fact.

After filling a buffer with put() calls, you must call flip() before reading: it sets limit = position and resets position = 0, switching the buffer from write mode to read mode. rewind() resets position to zero without changing the limit, letting you re-read data you have already flipped. Bulk put(byte[]) and get(byte[]) copy arrays in a single call and are considerably faster than looping byte by byte.
ByteBuffer buf = ByteBuffer.allocate(32);
// Relative put — advances position
buf.put((byte) 1);
buf.putInt(42); // writes 4 bytes
buf.putLong(123456789L); // writes 8 bytes
buf.putDouble(3.14); // writes 8 bytes
buf.putShort((short) 999);
buf.flip();
// Relative get — advances position
byte b = buf.get(); // 1
int i = buf.getInt(); // 42
long l = buf.getLong(); // 123456789
double d = buf.getDouble(); // 3.14
short s = buf.getShort(); // 999
// Absolute get/put — does NOT advance position
ByteBuffer buf2 = ByteBuffer.allocate(8);
buf2.putInt(0, 0xDEADBEEF); // write at index 0
buf2.putInt(4, 0xCAFEBABE); // write at index 4
System.out.printf("%08X%n", buf2.getInt(0)); // DEADBEEF
System.out.printf("%08X%n", buf2.getInt(4)); // CAFEBABE
System.out.println(buf2.position()); // still 0
// Bulk put/get
byte[] data = {1, 2, 3, 4, 5};
ByteBuffer buf3 = ByteBuffer.allocate(10);
buf3.put(data);
buf3.flip();
byte[] out = new byte[5];
buf3.get(out);
System.out.println(java.util.Arrays.toString(out)); // [1, 2, 3, 4, 5]
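The "header field after the fact" use case mentioned above can be sketched as follows: reserve space for a length prefix, encode the payload, then patch the prefix with an absolute putInt. The length-prefixed frame layout here is a hypothetical example, not a real protocol.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

ByteBuffer frame = ByteBuffer.allocate(64);
frame.putInt(0);                     // placeholder for the length field
byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
frame.put(payload);                  // relative put advances position past the payload

// Absolute putInt patches index 0 without disturbing position
frame.putInt(0, payload.length);

frame.flip();
System.out.println(frame.getInt()); // 5 — the patched length prefix
```

This pattern avoids either pre-computing the payload size or copying the payload into a second buffer.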
Byte Order

Multi-byte values (int, long, double, etc.) can be stored in big-endian (network byte order) or little-endian order. A new ByteBuffer always defaults to BIG_ENDIAN, regardless of the platform's native order.
import java.nio.ByteOrder;
ByteBuffer buf = ByteBuffer.allocate(4);
// Default: BIG_ENDIAN (most significant byte first)
buf.putInt(0x01020304);
buf.flip();
System.out.printf("%02X %02X %02X %02X%n",
buf.get(), buf.get(), buf.get(), buf.get()); // 01 02 03 04
// Switch to LITTLE_ENDIAN (least significant byte first)
buf.clear();
buf.order(ByteOrder.LITTLE_ENDIAN);
buf.putInt(0x01020304);
buf.flip();
System.out.printf("%02X %02X %02X %02X%n",
buf.get(), buf.get(), buf.get(), buf.get()); // 04 03 02 01
// Practical: reading a binary file with mixed endianness
ByteBuffer header = ByteBuffer.allocate(16);
header.order(ByteOrder.LITTLE_ENDIAN); // many common formats (e.g., ZIP, BMP) are little-endian
// ... read from channel ...
int magic = header.getInt(); // 4-byte magic number identifying the format
short type = header.getShort();
short arch = header.getShort();
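An endianness mismatch produces wildly wrong values rather than an error, which is worth seeing once. A small sketch: write an int in the big-endian default, then misread the same bytes as little-endian, and query the platform's native order.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

ByteBuffer buf = ByteBuffer.allocate(4);
buf.putInt(1);                      // BIG_ENDIAN default: bytes 00 00 00 01
buf.flip();

buf.order(ByteOrder.LITTLE_ENDIAN); // reinterpret the same four bytes
System.out.println(buf.getInt());   // 16777216 (0x01000000) — not 1

// The CPU's native order; most desktop/server CPUs (x86, common ARM
// configurations) are little-endian
System.out.println(ByteOrder.nativeOrder());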
Reading Files with FileChannel

FileChannel paired with ByteBuffer is the NIO approach to file I/O, offering better throughput than stream-based I/O for large files because it can transfer data with fewer copies between the OS page cache and user space. channel.read(buf) fills the buffer starting at the current position and returns the number of bytes actually read, or -1 at end-of-file; after each read you call flip() to switch to read mode, process the bytes, then clear() to reset for the next fill. MappedByteBuffer, obtained via channel.map(), goes further by mapping the file directly into the process's virtual address space — the OS pages data in on demand, and dirty pages are written back without explicit write calls, making it extremely fast for large files under both sequential and random access patterns.
import java.nio.channels.FileChannel;
import java.nio.file.*;
// Reading a binary file with FileChannel + ByteBuffer
Path path = Path.of("data.bin");
try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
ByteBuffer buf = ByteBuffer.allocate(4096);
while (channel.read(buf) != -1) {
buf.flip();
while (buf.hasRemaining()) {
byte b = buf.get();
// process byte
}
buf.clear();
}
}
// Writing with FileChannel
try (FileChannel out = FileChannel.open(
Path.of("output.bin"),
StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
ByteBuffer buf = ByteBuffer.allocate(8);
buf.putInt(42);
buf.putInt(100);
buf.flip();
out.write(buf); // writes exactly the bytes between position and limit
}
import java.nio.MappedByteBuffer;
// Memory-mapped file — entire file mapped into virtual address space
try (FileChannel channel = FileChannel.open(path)) {
MappedByteBuffer mapped = channel.map(
FileChannel.MapMode.READ_ONLY, 0, channel.size());
// Read directly as if it were a ByteBuffer — OS handles paging
int firstInt = mapped.getInt(0);
}
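The read loop above processes single bytes, but real files usually hold multi-byte records that can straddle a fill. This is where compact() from earlier pays off: consume only whole records, then compact() to carry a partial record into the next fill. A sketch that extracts 4-byte ints (data.bin is assumed to exist and to contain whole ints):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

try (FileChannel ch = FileChannel.open(Path.of("data.bin"), StandardOpenOption.READ)) {
    ByteBuffer buf = ByteBuffer.allocate(4096);
    while (ch.read(buf) != -1) {
        buf.flip();
        while (buf.remaining() >= Integer.BYTES) {
            int value = buf.getInt(); // consume one whole record
            // process value
        }
        buf.compact(); // keep any trailing partial int for the next fill
    }
}
```

Using clear() here instead of compact() would silently drop the leading bytes of any record split across two fills.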
Memory-mapped files (MappedByteBuffer) are extremely fast for large files — the OS maps the file into virtual memory and loads pages on demand, without copying data to the JVM heap.
Direct vs Heap Buffers

- Heap buffer (ByteBuffer.allocate(n)) — backed by a byte[] on the JVM heap. Cheap to allocate, subject to GC.
- Direct buffer (ByteBuffer.allocateDirect(n)) — backed by native memory outside the JVM heap. More expensive to allocate, but I/O operations skip an extra copy when working with native I/O (OS can DMA directly).
// Direct buffer — allocate once and reuse for high-throughput I/O
ByteBuffer directBuf = ByteBuffer.allocateDirect(64 * 1024); // 64 KB
System.out.println(directBuf.isDirect()); // true
// Heap buffer — good for in-memory processing
ByteBuffer heapBuf = ByteBuffer.allocate(1024);
System.out.println(heapBuf.isDirect()); // false
// When to use which:
// - DirectByteBuffer: repeated I/O operations, network buffers, file channels
// - HeapByteBuffer: in-memory packet parsing, message encoding/decoding,
// short-lived buffers, when allocation cost matters
// CAUTION: a DirectByteBuffer's native memory is NOT freed promptly —
// it is released by a Cleaner only when the buffer object is garbage
// collected, which may lag behind native-memory pressure.
// For explicit, deterministic control, consider MemorySegment with Arena
// from the Foreign Function & Memory API (finalized in Java 22).
Do not create a new ByteBuffer.allocateDirect() per request — direct buffers are expensive to allocate and can exhaust native memory. Pool them or use a single large buffer.
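One common way to follow that advice is a per-thread reusable direct buffer. A minimal sketch — the ThreadLocal approach shown here is one pooling strategy among several, not the only option:

```java
import java.nio.ByteBuffer;

// One 64 KB direct buffer per thread, allocated once and reused
ThreadLocal<ByteBuffer> ioBuf =
    ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(64 * 1024));

// Per request: reuse the thread's buffer instead of allocating a new one
ByteBuffer buf = ioBuf.get();
buf.clear(); // reset position/limit; old contents are stale, not zeroed
// ... channel.read(buf); buf.flip(); process; ...
System.out.println(buf.isDirect()); // true
```

Because each thread owns its buffer, no synchronization is needed; the trade-off is one buffer's worth of native memory per live thread, so this fits bounded thread pools better than unbounded ones.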