Every new Connection starts with autoCommit = true, meaning each statement is its own transaction. This is fine for single reads but wrong for multi-step writes.
```java
Connection conn = DriverManager.getConnection(URL, USER, PASS);
System.out.println("Auto-commit: " + conn.getAutoCommit()); // true by default
conn.setAutoCommit(false); // switch to manual transaction control
```
After setAutoCommit(false), every transaction must end with an explicit conn.commit() on success or conn.rollback() on failure (typically in the catch block). Skipping either leaves the transaction open, which holds database locks and can block other sessions.
The classic pattern: disable auto-commit, do the work, commit on success, rollback on failure:
```java
try (Connection conn = DriverManager.getConnection(URL, USER, PASS)) {
    conn.setAutoCommit(false);
    try (PreparedStatement debit = conn.prepareStatement(
             "UPDATE accounts SET balance = balance - ? WHERE id = ?");
         PreparedStatement credit = conn.prepareStatement(
             "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
        debit.setBigDecimal(1, new BigDecimal("250.00"));
        debit.setLong(2, fromAccountId);
        debit.executeUpdate();
        credit.setBigDecimal(1, new BigDecimal("250.00"));
        credit.setLong(2, toAccountId);
        credit.executeUpdate();
        conn.commit(); // ✅ both updates visible atomically
        System.out.println("Transfer complete");
    } catch (SQLException ex) {
        conn.rollback(); // ❌ neither update persists
        throw ex;
    }
}
```
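This pattern is easy to get subtly wrong when repeated across a codebase, so it can be worth centralizing it in a helper. The sketch below also restores the previous auto-commit setting in a finally block, which matters when the connection comes from a pool and will be reused. The inTransaction name and Work interface are mine, not part of JDBC:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class Tx {
    @FunctionalInterface
    public interface Work {
        void run(Connection conn) throws SQLException;
    }

    // Commit on success, roll back on any failure, and always restore the
    // previous auto-commit setting so a pooled connection goes back clean.
    public static void inTransaction(Connection conn, Work work) throws SQLException {
        boolean previous = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            work.run(conn);
            conn.commit();
        } catch (SQLException | RuntimeException ex) {
            conn.rollback();
            throw ex;
        } finally {
            conn.setAutoCommit(previous);
        }
    }
}
```

The transfer above would then become a single `Tx.inTransaction(conn, c -> { ... })` call, with commit and rollback handled in one place.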
Isolation controls what concurrent transactions can see of each other's uncommitted data. Higher isolation prevents more anomalies but holds more locks:
| Level | Dirty Read | Non-Repeatable Read | Phantom Read | JDBC Constant |
|---|---|---|---|---|
| READ UNCOMMITTED | ✅ Possible | ✅ Possible | ✅ Possible | TRANSACTION_READ_UNCOMMITTED |
| READ COMMITTED | ❌ Prevented | ✅ Possible | ✅ Possible | TRANSACTION_READ_COMMITTED |
| REPEATABLE READ | ❌ Prevented | ❌ Prevented | ✅ Possible | TRANSACTION_REPEATABLE_READ |
| SERIALIZABLE | ❌ Prevented | ❌ Prevented | ❌ Prevented | TRANSACTION_SERIALIZABLE |
```java
conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
// ... do work ...
conn.commit();
```
Most databases default to READ COMMITTED (MySQL's InnoDB is a notable exception, defaulting to REPEATABLE READ). PostgreSQL implements REPEATABLE READ with snapshot isolation (MVCC), so phantom reads are also prevented at that level in practice.
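Not every database supports every level, and a driver may reject setTransactionIsolation for an unsupported constant. DatabaseMetaData lets you probe before setting. The helper below is a sketch; the pickIsolation name is mine:

```java
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public class IsolationPicker {
    // Return the first isolation level the database supports, falling back
    // to the database's default if none of the preferences match.
    public static int pickIsolation(DatabaseMetaData meta, int... preferred)
            throws SQLException {
        for (int level : preferred) {
            if (meta.supportsTransactionIsolationLevel(level)) {
                return level;
            }
        }
        return meta.getDefaultTransactionIsolation();
    }
}
```

Typical use: `conn.setTransactionIsolation(IsolationPicker.pickIsolation(conn.getMetaData(), Connection.TRANSACTION_SERIALIZABLE, Connection.TRANSACTION_REPEATABLE_READ));`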
A Savepoint marks a point within a transaction that you can roll back to without abandoning the entire transaction. Useful when a batch has independent sub-operations:
```java
try (Connection conn = DriverManager.getConnection(URL, USER, PASS)) {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("INSERT INTO audit_log (msg) VALUES ('batch started')");
        Savepoint sp1 = conn.setSavepoint("after_audit");
        try {
            stmt.executeUpdate("INSERT INTO orders (id, amount) VALUES (1, 500)");
            stmt.executeUpdate("INSERT INTO orders (id, amount) VALUES (2, 300)");
            // imagine order 2 violates a constraint...
        } catch (SQLException ex) {
            System.out.println("Orders failed — rolling back to savepoint");
            conn.rollback(sp1); // audit_log INSERT is still intact
        }
        stmt.executeUpdate("INSERT INTO audit_log (msg) VALUES ('batch ended')");
        conn.commit(); // audit rows committed; failed orders rolled back
    }
}
```
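JDBC also defines Connection.releaseSavepoint for discarding a savepoint once the risky section has succeeded; some databases limit how many savepoints a transaction can hold, and a few drivers throw SQLFeatureNotSupportedException for release, so check your driver. A sketch of wrapping the pattern (the SavepointScope name and Work interface are mine, not JDBC API):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;

public class SavepointScope {
    @FunctionalInterface
    public interface Work {
        void run() throws SQLException;
    }

    // Run a risky sub-operation inside an already-open transaction.
    // On failure, roll back to the savepoint only; on success, release it
    // so the database can free whatever resources it holds.
    public static boolean tryWithin(Connection conn, String name, Work work)
            throws SQLException {
        Savepoint sp = conn.setSavepoint(name);
        try {
            work.run();
            conn.releaseSavepoint(sp);
            return true;
        } catch (SQLException ex) {
            conn.rollback(sp);
            return false;
        }
    }
}
```

The orders block above would become `SavepointScope.tryWithin(conn, "after_audit", () -> { ... })`, with the boolean result telling the caller whether the sub-operation survived.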
Sending many statements to the database in one network round-trip can be orders of magnitude faster than executing them one at a time. Both Statement and PreparedStatement support batching:
```java
// PreparedStatement batch (preferred)
String sql = "UPDATE products SET price = price * ? WHERE category = ?";
try (Connection conn = DriverManager.getConnection(URL, USER, PASS);
     PreparedStatement ps = conn.prepareStatement(sql)) {
    conn.setAutoCommit(false);
    Map<String, Double> priceAdjustments = Map.of(
        "electronics", 1.05,
        "clothing", 0.90,
        "books", 1.02
    );
    for (Map.Entry<String, Double> entry : priceAdjustments.entrySet()) {
        ps.setDouble(1, entry.getValue());
        ps.setString(2, entry.getKey());
        ps.addBatch();
    }
    int[] results = ps.executeBatch(); // sends all in one trip
    conn.commit();
    for (int i = 0; i < results.length; i++) {
        System.out.println("Update " + i + ": " + results[i] + " rows affected");
    }
}
```
executeBatch() returns an int[] — one count per batched statement. A value of Statement.SUCCESS_NO_INFO (-2) means success but row count unknown (driver-dependent).
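Interpreting that array correctly means not adding SUCCESS_NO_INFO values into a total as if they were real row counts. A small sketch of a summary helper (the summarize name is mine):

```java
import java.sql.Statement;

public class BatchResults {
    // Sum the row counts from executeBatch(), treating SUCCESS_NO_INFO (-2)
    // as "succeeded, count unknown" rather than as a real row count.
    public static String summarize(int[] counts) {
        int rows = 0, unknown = 0;
        for (int c : counts) {
            if (c == Statement.SUCCESS_NO_INFO) {
                unknown++;
            } else {
                rows += c;
            }
        }
        return counts.length + " statements: " + rows
                + " rows counted, " + unknown + " without a count";
    }

    public static void main(String[] args) {
        System.out.println(summarize(new int[]{1, 1, Statement.SUCCESS_NO_INFO}));
        // prints: 3 statements: 2 rows counted, 1 without a count
    }
}
```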
Accumulating millions of rows in memory before calling executeBatch() can exhaust the heap and trigger an OutOfMemoryError. Flush every N rows:
```java
final int BATCH_SIZE = 1000;
int count = 0;
try (Connection conn = DriverManager.getConnection(URL, USER, PASS);
     PreparedStatement ps = conn.prepareStatement(
         "INSERT INTO events (type, payload, ts) VALUES (?, ?, ?)")) {
    conn.setAutoCommit(false);
    for (Event event : hugeEventList) {
        ps.setString(1, event.getType());
        ps.setString(2, event.getPayload());
        ps.setTimestamp(3, Timestamp.from(event.getTime()));
        ps.addBatch();
        if (++count % BATCH_SIZE == 0) {
            ps.executeBatch();
            conn.commit(); // commit each chunk so locks don't pile up
        }
    }
    ps.executeBatch(); // flush remaining rows
    conn.commit();
    System.out.println("Inserted " + count + " events");
}
```
When one statement in a batch fails, JDBC throws BatchUpdateException (a subclass of SQLException). Call getUpdateCounts() to inspect which statements succeeded:
```java
try {
    int[] results = ps.executeBatch();
} catch (BatchUpdateException ex) {
    int[] counts = ex.getUpdateCounts();
    for (int i = 0; i < counts.length; i++) {
        if (counts[i] == Statement.EXECUTE_FAILED) {
            System.out.println("Statement " + i + " failed");
        }
    }
    conn.rollback();
}
```
Driver behaviour on batch failure varies. Some drivers stop at the first error; others continue and report individual failures in the counts array. Check your driver documentation.
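One way to stay portable across both behaviours is to compare the length of getUpdateCounts() against the number of statements you added: if the array is shorter, the driver halted early, and statements at or past its end either failed or never ran. A sketch (the BatchFailureReport class and classify helper are mine):

```java
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class BatchFailureReport {
    // Classify each statement of an attempted batch after a BatchUpdateException.
    // A halt-at-first-error driver returns a counts array shorter than the batch;
    // a continue-on-error driver returns a full array with EXECUTE_FAILED entries.
    public static List<String> classify(int attempted, int[] counts) {
        List<String> report = new ArrayList<>();
        for (int i = 0; i < attempted; i++) {
            if (i >= counts.length) {
                report.add("statement " + i + ": failed or skipped");
            } else if (counts[i] == Statement.EXECUTE_FAILED) {
                report.add("statement " + i + ": failed");
            } else {
                report.add("statement " + i + ": ok");
            }
        }
        return report;
    }
}
```

Inside the catch block above, `BatchFailureReport.classify(batchSize, ex.getUpdateCounts())` yields one line per statement regardless of which behaviour the driver exhibits.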