
Testcontainers is a Java library that manages lightweight, throwaway Docker containers for integration testing. Each test class can spin up a real PostgreSQL database, a Kafka broker, or a Redis instance — and tear it down automatically when the tests finish.

Traditional approaches to integration testing have significant drawbacks:

- In-memory substitutes such as H2 do not behave exactly like the production database, so tests can pass against H2 and fail against PostgreSQL.
- A shared, long-lived test environment leaks state between runs and between developers, making tests flaky and order-dependent.
- Mocking the infrastructure away means real SQL, serialization, and driver behavior are never exercised.

Testcontainers solves these problems by giving each test run its own isolated, real container. The container starts before the tests, your Spring context connects to it, and it is destroyed after the tests complete.

Testcontainers requires a Docker-compatible runtime on the machine running the tests. Docker Desktop, Colima, Rancher Desktop, and Podman are all supported.

Spring Boot 3.1+ ships a dedicated spring-boot-testcontainers module. Add the Testcontainers BOM and the modules you need to your pom.xml:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.19.7</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- Core Testcontainers -->
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- PostgreSQL module -->
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>postgresql</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- Kafka module -->
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>kafka</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- Spring Boot Testcontainers support -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-testcontainers</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```

For Gradle, add the equivalent entries in your build.gradle:

```groovy
testImplementation platform('org.testcontainers:testcontainers-bom:1.19.7')
testImplementation 'org.testcontainers:junit-jupiter'
testImplementation 'org.testcontainers:postgresql'
testImplementation 'org.testcontainers:kafka'
testImplementation 'org.springframework.boot:spring-boot-testcontainers'
```

The Testcontainers BOM keeps all module versions in sync: you declare the version once in the BOM, and the individual modules inherit it automatically.

The classic Testcontainers approach uses @Container to manage the container lifecycle and @DynamicPropertySource to inject connection properties into the Spring context.

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
class ProductRepositoryIntegrationTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine")
            .withDatabaseName("testdb")
            .withUsername("test")
            .withPassword("test");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
        registry.add("spring.jpa.hibernate.ddl-auto", () -> "create-drop");
    }

    @Autowired
    private ProductRepository productRepository;

    @Test
    void shouldSaveAndFindProduct() {
        Product product = new Product(null, "Laptop", 999.99);
        Product saved = productRepository.save(product);

        assertThat(saved.getId()).isNotNull();
        Product found = productRepository.findById(saved.getId()).orElseThrow();
        assertThat(found.getName()).isEqualTo("Laptop");
        assertThat(found.getPrice()).isEqualTo(999.99);
    }

    @Test
    void shouldDeleteProduct() {
        Product product = productRepository.save(new Product(null, "Mouse", 29.99));

        productRepository.deleteById(product.getId());

        assertThat(productRepository.findById(product.getId())).isEmpty();
    }

    @Test
    void shouldFindByName() {
        productRepository.save(new Product(null, "Keyboard", 79.99));
        productRepository.save(new Product(null, "Monitor", 399.99));

        List<Product> results = productRepository.findByNameContaining("Key");

        assertThat(results).hasSize(1);
        assertThat(results.get(0).getPrice()).isEqualTo(79.99);
    }
}
```

Here is what happens at runtime:

1. @Testcontainers registers a JUnit Jupiter extension that scans the class for @Container fields.
2. Because the field is static, a single container starts before any test runs and is shared by all tests in the class.
3. @DynamicPropertySource runs before the Spring context is created, so the datasource properties point at the container's randomly mapped host port.
4. After the last test, the extension stops the container; the Ryuk sidecar removes it even if the JVM exits abruptly.

Use a specific version tag such as postgres:16-alpine instead of latest. Pinning the image version ensures reproducible builds and avoids unexpected breakage when the upstream image changes.
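If you pull the pinned image from a private mirror rather than Docker Hub, Testcontainers will refuse it unless you declare it compatible with the image the module expects. A minimal sketch using DockerImageName.asCompatibleSubstituteFor (registry.example.com is a placeholder, not from this article):

```java
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;

class PinnedImageExample {

    // Pin an exact tag, and declare the mirrored image compatible with
    // the "postgres" image that PostgreSQLContainer expects by default.
    static final DockerImageName POSTGRES_IMAGE =
            DockerImageName.parse("registry.example.com/mirror/postgres:16-alpine")
                    .asCompatibleSubstituteFor("postgres");

    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>(POSTGRES_IMAGE);
}
```

Without the asCompatibleSubstituteFor call, the module throws at startup because the image name does not match the one it was written for.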

Spring Boot 3.1 introduced @ServiceConnection, which eliminates the need for @DynamicPropertySource entirely. Spring Boot detects the container type and auto-configures the matching connection properties.

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
class OrderServiceIntegrationTest {

    @Container
    @ServiceConnection
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");

    @Autowired
    private OrderService orderService;

    @Autowired
    private OrderRepository orderRepository;

    @Test
    void shouldPlaceOrder() {
        OrderRequest request = new OrderRequest("product-42", 3, 15.00);

        Order placed = orderService.placeOrder(request);

        assertThat(placed.getId()).isNotNull();
        assertThat(placed.getStatus()).isEqualTo(OrderStatus.PLACED);
        assertThat(placed.getTotalPrice()).isEqualTo(45.00);

        // Verify it was persisted in the real PostgreSQL database
        Order fromDb = orderRepository.findById(placed.getId()).orElseThrow();
        assertThat(fromDb.getProductId()).isEqualTo("product-42");
    }
}
```

With @ServiceConnection, you no longer need to manually map spring.datasource.url, username, or password. Spring Boot inspects the container class (PostgreSQLContainer, KafkaContainer, etc.) and wires the properties automatically.

| Container | Auto-configured properties |
| --- | --- |
| PostgreSQLContainer | spring.datasource.url, username, password |
| MySQLContainer | spring.datasource.url, username, password |
| MongoDBContainer | spring.data.mongodb.uri |
| KafkaContainer | spring.kafka.bootstrap-servers |
| GenericContainer (Redis) | spring.data.redis.host, spring.data.redis.port |
Prefer @ServiceConnection over @DynamicPropertySource whenever you are on Spring Boot 3.1 or later. It is less code, less error-prone, and automatically stays in sync with the properties Spring Boot expects.
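When several test classes need the same containers, Spring Boot 3.1 also lets you declare them as beans in a @TestConfiguration class and put @ServiceConnection on the bean method. A sketch under the same postgres:16-alpine image used above (the class name is arbitrary):

```java
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.PostgreSQLContainer;

@TestConfiguration(proxyBeanMethods = false)
public class TestcontainersConfiguration {

    // Spring Boot manages the container lifecycle as a bean and
    // auto-configures the datasource from it via @ServiceConnection.
    @Bean
    @ServiceConnection
    PostgreSQLContainer<?> postgresContainer() {
        return new PostgreSQLContainer<>("postgres:16-alpine");
    }
}
```

Test classes then pull the shared container in with @Import(TestcontainersConfiguration.class) instead of each declaring its own @Container field.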

Testcontainers provides a dedicated KafkaContainer that runs a single-node Kafka broker (using the Confluent Platform image) in either KRaft or ZooKeeper mode. This lets you test producers and consumers against a real broker.

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.kafka.core.KafkaTemplate;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class OrderEventProducerTest {

    @Container
    @ServiceConnection
    static KafkaContainer kafka = new KafkaContainer(
            DockerImageName.parse("confluentinc/cp-kafka:7.6.0")
    );

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    private OrderEventConsumer orderEventConsumer;

    @Test
    void shouldPublishAndConsumeOrderEvent() throws Exception {
        // Arrange
        String topic = "order-events";
        String message = "{\"orderId\":\"123\",\"status\":\"PLACED\"}";

        // Act — send a message to the real Kafka broker
        kafkaTemplate.send(topic, "order-123", message).get(10, TimeUnit.SECONDS);

        // Assert — verify the consumer received the message
        ConsumerRecord<String, String> received = orderEventConsumer.getReceivedMessages()
                .poll(30, TimeUnit.SECONDS);

        assertThat(received).isNotNull();
        assertThat(received.key()).isEqualTo("order-123");
        assertThat(received.value()).contains("PLACED");
    }
}
```

The consumer class under test uses @KafkaListener and stores received records in a queue for assertions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderEventConsumer {

    private final BlockingQueue<ConsumerRecord<String, String>> receivedMessages =
            new LinkedBlockingQueue<>();

    @KafkaListener(topics = "order-events", groupId = "test-group")
    public void listen(ConsumerRecord<String, String> record) {
        receivedMessages.add(record);
    }

    public BlockingQueue<ConsumerRecord<String, String>> getReceivedMessages() {
        return receivedMessages;
    }
}
```

Kafka containers take longer to start than database containers (typically 15-30 seconds). Use reusable containers (covered below) to avoid paying this startup cost on every test run during local development.
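If you would rather not add a listener component just for the test, spring-kafka-test's KafkaTestUtils can poll the broker directly with a throwaway consumer. A sketch of the assertion side only, assuming the same kafka container and order-events topic as above (pollOrderEvent is a hypothetical helper name):

```java
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.test.utils.KafkaTestUtils;

class PollDirectlyExample {

    static ConsumerRecord<String, String> pollOrderEvent(String bootstrapServers) {
        // Build consumer properties pointing at the Testcontainers broker,
        // e.g. kafka.getBootstrapServers() from the test class above.
        Map<String, Object> props =
                KafkaTestUtils.consumerProps(bootstrapServers, "probe-group", "true");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(
                props, new StringDeserializer(), new StringDeserializer())) {
            consumer.subscribe(List.of("order-events"));
            // Blocks until a single record arrives on the topic, or times out.
            return KafkaTestUtils.getSingleRecord(consumer, "order-events");
        }
    }
}
```

This keeps the test self-contained at the cost of a little more setup per test method; the @KafkaListener approach better exercises your real consumer configuration.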

Redis does not have a dedicated Testcontainers module. Instead, you use GenericContainer with the official Redis image. On Spring Boot 3.1+, you can annotate it with @ServiceConnection using the name attribute to tell Spring Boot which connection factory to configure.

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.time.Duration;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class CacheServiceIntegrationTest {

    @Container
    @ServiceConnection(name = "redis")
    static GenericContainer<?> redis =
            new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                    .withExposedPorts(6379);

    @Autowired
    private CacheService cacheService;

    @Test
    void shouldCacheAndRetrieveValue() {
        cacheService.put("user:42", "Alice");

        String cached = cacheService.get("user:42");

        assertThat(cached).isEqualTo("Alice");
    }

    @Test
    void shouldExpireCachedValue() throws InterruptedException {
        cacheService.putWithTtl("session:99", "token-xyz", Duration.ofSeconds(1));

        // Value should be present immediately
        assertThat(cacheService.get("session:99")).isEqualTo("token-xyz");

        // Wait for TTL to expire
        Thread.sleep(1500);
        assertThat(cacheService.get("session:99")).isNull();
    }

    @Test
    void shouldDeleteCachedValue() {
        cacheService.put("temp:1", "value");

        cacheService.delete("temp:1");

        assertThat(cacheService.get("temp:1")).isNull();
    }
}
```

If you are on a Spring Boot version before 3.1 (without @ServiceConnection support for Redis), use @DynamicPropertySource instead:

```java
@DynamicPropertySource
static void configureRedis(DynamicPropertyRegistry registry) {
    registry.add("spring.data.redis.host", redis::getHost);
    registry.add("spring.data.redis.port", () -> redis.getMappedPort(6379));
}
```

Starting Docker containers adds time to every test run. For databases this is a few seconds, but Kafka or Elasticsearch can take 20-30 seconds. Testcontainers supports reusable containers that persist across test runs during local development.

Enable reuse on a container with .withReuse(true):

```java
@Container
@ServiceConnection
static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine")
        .withReuse(true);
```

You must also opt in globally by creating a .testcontainers.properties file in your home directory:

```properties
# ~/.testcontainers.properties
testcontainers.reuse.enable=true
```

When reuse is enabled, the first run starts the container normally. On subsequent runs, Testcontainers detects the existing container (matched by image and configuration hash) and reuses it instead of creating a new one. This can reduce test startup from 20+ seconds to under 1 second.

| Aspect | Without reuse | With reuse |
| --- | --- | --- |
| Container lifecycle | Start and stop per test class | Start once, reuse across runs |
| Startup time (Postgres) | ~3-5 seconds | <1 second (after first run) |
| Startup time (Kafka) | ~15-30 seconds | <1 second (after first run) |
| Data isolation | Guaranteed clean state | You must handle cleanup yourself |
| CI/CD | Recommended | Not recommended (use fresh containers) |
Reusable containers are a local development optimization. In CI/CD pipelines, always use fresh containers to ensure test isolation and reproducibility. The .withReuse(true) flag is ignored when the global property is not set, so CI runs are unaffected.
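Because a reused container keeps its data between runs, tests that assume an empty database need an explicit reset. One common sketch is a @BeforeEach that clears state through the repositories under test (ProductRepository here stands in for whatever repositories your tests touch):

```java
import org.junit.jupiter.api.BeforeEach;
import org.springframework.beans.factory.annotation.Autowired;

class ReusableContainerCleanupExample {

    @Autowired
    private ProductRepository productRepository; // repository from the tests above

    // With .withReuse(true) the database outlives the test run,
    // so wipe the tables each test depends on before every test.
    @BeforeEach
    void resetDatabase() {
        productRepository.deleteAll();
    }
}
```

For larger schemas, a single TRUNCATE script run via JdbcTemplate is faster than per-repository deleteAll calls, but the principle is the same: cleanup moves from the container lifecycle into your test code.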

Running Testcontainers in CI/CD requires a Docker daemon to be available in the build environment. Most CI providers support this out of the box or with minimal configuration.

A GitHub Actions workflow that runs Testcontainers-based tests:

```yaml
name: Integration Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up JDK 21
        uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'

      - name: Cache Maven packages
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}

      - name: Run integration tests
        run: mvn verify -Pfailsafe
        env:
          TESTCONTAINERS_RYUK_DISABLED: false
```

Key considerations for CI/CD environments:

- The build agent needs access to a Docker daemon; GitHub's ubuntu-latest hosted runners ship with one.
- Leave Ryuk (the Testcontainers cleanup container) enabled so orphaned containers are removed even when a build is cancelled mid-run.
- Image pulls dominate startup time on fresh runners; pre-pull or cache frequently used images.
- Do not enable container reuse in CI; each build should get fresh containers.

To pre-pull commonly used images and speed up subsequent steps:

```yaml
- name: Pre-pull test container images
  run: |
    docker pull postgres:16-alpine
    docker pull confluentinc/cp-kafka:7.6.0
    docker pull redis:7-alpine
```

If your CI environment does not support Docker (some cloud-hosted agents), consider using Testcontainers Cloud. It runs containers on remote infrastructure and connects via a lightweight agent, so no local Docker daemon is needed.