Contents
- Dependency & Credentials
- Uploading Objects (PUT)
- Downloading Objects (GET)
- Streaming Large Files
- Polling a Bucket for New Objects
- Other Operations
- Key URI Options
Add camel-aws2-s3-starter (or plain camel-aws2-s3 outside Spring Boot). With useDefaultCredentialsProvider=true, AWS credentials are resolved by the SDK's default chain (system properties → env vars → ~/.aws/credentials → container/instance IAM role); otherwise configure explicit keys as shown below.
<!-- Spring Boot -->
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-aws2-s3-starter</artifactId>
</dependency>
<!-- Standalone Camel -->
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-aws2-s3</artifactId>
</dependency>
# application.properties — override default credential chain
camel.component.aws2-s3.access-key=${AWS_ACCESS_KEY_ID}
camel.component.aws2-s3.secret-key=${AWS_SECRET_ACCESS_KEY}
camel.component.aws2-s3.region=us-east-1
On EC2 / ECS / Lambda, set useDefaultCredentialsProvider=true and leave the access/secret keys unset — the component then picks up the IAM role automatically through the instance metadata service. Hardcoding credentials in application.properties is an anti-pattern.
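When the property-based setup is not enough (custom endpoint, retry policy, testing against LocalStack), you can supply your own SDK client instead: recent Camel 3.x releases autowire a registry bean of type S3Client into the component. A minimal sketch, assuming the AWS SDK v2 is on the classpath; the bean name and endpoint are illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

@Configuration
public class S3ClientConfig {

    // Camel's aws2-s3 component picks this bean up automatically
    // (component autowiring is enabled by default in Camel 3.x).
    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                // .endpointOverride(URI.create("http://localhost:4566")) // e.g. LocalStack
                .build();
    }
}
```

With a client bean in place, the access-key/secret-key/region properties shown above are no longer needed.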
Set the exchange body to the content to upload (byte[], String, InputStream, or File). Set AWS2S3Constants.KEY to the S3 object key and optionally AWS2S3Constants.CONTENT_TYPE for the MIME type.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.aws2.s3.AWS2S3Constants;
import org.springframework.stereotype.Component;
@Component
public class S3UploadRoute extends RouteBuilder {
@Override
public void configure() {
from("direct:upload")
// body = file content as byte[] or InputStream
.setHeader(AWS2S3Constants.KEY, simple("uploads/${date:now:yyyyMMdd}/${header.filename}"))
.setHeader(AWS2S3Constants.CONTENT_TYPE, constant("application/octet-stream"))
.to("aws2-s3://my-bucket?operation=putObject")
.log("Uploaded to S3: ${header." + AWS2S3Constants.KEY + "}");
}
}
// Uploading a file from disk
from("file:/tmp/upload?noop=true")
.setHeader(AWS2S3Constants.KEY, simple("data/${file:name}"))
.to("aws2-s3://my-bucket?operation=putObject");
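The simple expression above produces keys of the form uploads/<yyyyMMdd>/<filename>. A plain-Java equivalent for reference (a sketch — in the route itself, Camel's simple language performs this evaluation):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class S3KeyBuilder {

    // Mirrors simple("uploads/${date:now:yyyyMMdd}/${header.filename}")
    public static String uploadKey(String filename) {
        String day = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd"));
        return "uploads/" + day + "/" + filename;
    }

    public static void main(String[] args) {
        System.out.println(uploadKey("report.pdf")); // e.g. uploads/20240101/report.pdf
    }
}
```

Date-based prefixes like this keep related uploads grouped and make lifecycle rules easy to scope to a prefix.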
Set AWS2S3Constants.KEY and use operation=getObject. The response body is a ResponseInputStream<GetObjectResponse> — read it, or convert it to bytes/String in a subsequent step.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.aws2.s3.AWS2S3Constants;
import org.springframework.stereotype.Component;
@Component
public class S3DownloadRoute extends RouteBuilder {
@Override
public void configure() {
from("direct:download")
.setHeader(AWS2S3Constants.KEY, simple("uploads/${header.filename}"))
.to("aws2-s3://my-bucket?operation=getObject")
// Convert InputStream → byte[]
.convertBodyTo(byte[].class)
.log("Downloaded ${header." + AWS2S3Constants.KEY + "}: ${body.length} bytes");
}
}
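convertBodyTo(byte[].class) reads the whole stream into memory; conceptually it does something like the following (a sketch, not the converter's actual implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class StreamToBytes {

    // Roughly what converting the S3 response stream to byte[] involves:
    // drain the stream fully and close it.
    public static byte[] readFully(InputStream in) {
        try (InputStream s = in) {
            return s.readAllBytes(); // Java 9+; buffers the entire object in memory
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = readFully(new ByteArrayInputStream("hello".getBytes()));
        System.out.println(data.length); // 5
    }
}
```

Because the entire object ends up on the heap, this conversion is only appropriate for small objects — the streaming section that follows avoids it.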
For multi-gigabyte files use streamingUploadMode=true, which internally uses the S3 multipart upload API (5 MB minimum part size). On the download side, keep the body as a stream and pipe it directly to the output — avoid convertBodyTo(byte[].class), which buffers the entire object in memory.
// Streaming upload — Camel chunks the InputStream automatically
from("direct:streamUpload")
.setHeader(AWS2S3Constants.KEY, constant("large/big-file.bin"))
.setHeader(AWS2S3Constants.CONTENT_TYPE, constant("application/octet-stream"))
.to("aws2-s3://my-bucket"
+ "?operation=putObject"
+ "&streamingUploadMode=true"
+ "&streamingUploadPartSize=10485760"); // 10 MB parts
// Streaming download — pipe S3 InputStream directly to a file
from("direct:streamDownload")
.setHeader(AWS2S3Constants.KEY, constant("large/big-file.bin"))
.to("aws2-s3://my-bucket?operation=getObject")
// Write stream to disk without loading it into memory
.to("file:/tmp/downloads?fileName=big-file.bin");
When streaming to file, Camel's File component accepts an InputStream body directly and writes it to disk in chunks. Pair this with streamingUploadMode=true on upload to handle files larger than available heap without OOM errors.
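When choosing streamingUploadPartSize, note that the S3 multipart upload API allows at most 10,000 parts per upload, so the part size caps the maximum object size. A quick sanity check (the 10,000-part and 5 MB minimums are S3 API limits; the sizes below are illustrative):

```java
public class MultipartMath {

    static final long MAX_PARTS = 10_000; // S3 multipart upload limit

    // Number of parts needed for a file of the given size (ceiling division)
    public static long partCount(long fileSize, long partSize) {
        return (fileSize + partSize - 1) / partSize;
    }

    public static void main(String[] args) {
        long tenMiB = 10L * 1024 * 1024;           // part size from the route above
        long fiftyGiB = 50L * 1024 * 1024 * 1024;
        System.out.println(partCount(fiftyGiB, tenMiB));              // 5120
        System.out.println(partCount(fiftyGiB, tenMiB) <= MAX_PARTS); // true
    }
}
```

With 10 MB parts the largest uploadable object is roughly 100 GB; raise the part size if you expect bigger files.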
Use from("aws2-s3://...") to poll a bucket. Each new object found becomes a Camel exchange. Combine with moveAfterRead=true to move processed objects to an archive prefix, preventing re-processing.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.aws2.s3.AWS2S3Constants;
import org.springframework.stereotype.Component;
@Component
public class S3PollRoute extends RouteBuilder {
@Override
public void configure() {
from("aws2-s3://incoming-bucket"
+ "?prefix=uploads/" // only watch this prefix
+ "&delay=60000" // poll every 60 s
+ "&maxMessagesPerPoll=10" // at most 10 objects per poll
+ "&moveAfterRead=true" // move to archive prefix after processing
+ "&destinationBucket=incoming-bucket"
+ "&destinationBucketPrefix=processed/")
.routeId("s3-poller")
.log("Processing S3 object: ${header." + AWS2S3Constants.KEY + "}")
.convertBodyTo(String.class)
.to("direct:handleS3Object");
}
}
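If the move after read fails, or the route crashes mid-exchange, the same object can be delivered again on the next poll. One way to guard against duplicates is Camel's idempotent consumer keyed on the object key — a sketch, with an in-memory repository that is illustrative only (use a persistent repository such as a JDBC-backed one in production):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.aws2.s3.AWS2S3Constants;
import org.apache.camel.support.processor.idempotent.MemoryIdempotentRepository;

public class DedupedS3PollRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("aws2-s3://incoming-bucket?prefix=uploads/&delay=60000")
            // Skip any object whose key has already been processed
            .idempotentConsumer(
                header(AWS2S3Constants.KEY),
                MemoryIdempotentRepository.memoryIdempotentRepository(1000))
            .to("direct:handleS3Object");
    }
}
```

The in-memory repository resets on restart, which is exactly why a persistent one matters when exactly-once processing is required.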
The S3 component supports a broad set of operations via the operation URI parameter:
| Operation | Description |
| --- | --- |
| putObject | Upload an object (default for producers). |
| getObject | Download an object by key. |
| deleteObject | Delete a single object by key. |
| listObjects | List objects in a bucket (returns metadata list). |
| copyObject | Copy an object between keys or buckets. |
| createBucket | Create a new S3 bucket. |
| deleteBucket | Delete an empty S3 bucket. |
| getObjectTagging / setObjectTagging | Read or write S3 object tags. |
// Delete an object
from("direct:deleteFile")
.setHeader(AWS2S3Constants.KEY, simple("uploads/${header.filename}"))
.to("aws2-s3://my-bucket?operation=deleteObject");
// List all objects with a given prefix
from("direct:listFiles")
.setHeader(AWS2S3Constants.PREFIX, constant("reports/"))
.to("aws2-s3://my-bucket?operation=listObjects")
.log("Objects: ${body}");
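copyObject takes the source key in AWS2S3Constants.KEY and the destination via additional headers. A sketch — the destination-header constants below exist in current camel-aws2-s3 releases, but verify the names against your Camel version:

```java
// Copy an object to another bucket/key
from("direct:copyFile")
    .setHeader(AWS2S3Constants.KEY, constant("reports/2024.csv"))
    .setHeader(AWS2S3Constants.DESTINATION_KEY, constant("archive/2024.csv"))
    .setHeader(AWS2S3Constants.BUCKET_DESTINATION_NAME, constant("backup-bucket"))
    .to("aws2-s3://my-bucket?operation=copyObject");
```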
| Option | Default | Description |
| --- | --- | --- |
| region | — | AWS region (e.g. us-east-1). Required. |
| operation | putObject | S3 operation to execute (producer). |
| prefix | — | Object key prefix filter (consumer). |
| delay | 500 | Polling interval in milliseconds (consumer). |
| maxMessagesPerPoll | 10 | Max objects fetched per poll cycle (consumer). |
| moveAfterRead | false | Move object to another prefix/bucket after consuming. |
| deleteAfterRead | true | Delete the object from S3 after consuming. |
| streamingUploadMode | false | Enable multipart upload for large streaming payloads. |
| streamingUploadPartSize | 26214400 (25 MB) | Part size in bytes for multipart upload. |
| useDefaultCredentialsProvider | false | Use AWS default credential chain instead of explicit keys. |