How about an S3 Bucket and Localstack?

Marcos · 5 min read · Nov 2, 2022

A few days ago I wrote about S3 with Spring Boot (you can find it here). In this text, we will build a similar small service, but using Localstack instead of AWS.

Running Localstack

First, you need to run the Localstack Docker image; in this case, I used docker-compose:

version: "3.8"

services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack:0.14.2
network_mode: bridge
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:53:53" #
- "127.0.0.1:53:53/udp" #
- "127.0.0.1:443:443" #
- "127.0.0.1:4510-4530:4510-4530" # ext services port range
- "127.0.0.1:4571:4571" #
environment:
- DEBUG=${DEBUG-}
- SERVICES=${SERVICES-}
- DATA_DIR=${DATA_DIR-}
- LAMBDA_EXECUTOR=local
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}
- HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack
- DOCKER_HOST=unix:///var/run/docker.sock
- DISABLE_CORS_CHECKS=1
volumes:
- "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"

Once the container is running, it's time to create a bucket.

Do you have AWS CLI installed? If yes, you can use this command:

aws --endpoint-url=http://127.0.0.1:4566 s3api create-bucket --bucket bucket-example

If not, you can open a shell inside the container using this command (replace container_id with your container's ID):

docker exec -it container_id /bin/bash

Localstack already has the AWS CLI installed inside the container.

Now, you can use this command (the container also ships the awslocal wrapper, which targets the local endpoint automatically):

aws s3api create-bucket --bucket bucket-example

In both cases, the response is something like this (exact output can vary with the CLI version):
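{
    "Location": "/bucket-example"
}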

Ok, bucket created. Let’s make an upload. Here is the command:

aws --endpoint-url=http://localhost:4566 s3 cp cafezin.png s3://bucket-example

The CLI prints an upload confirmation, something like: upload: ./cafezin.png to s3://bucket-example/cafezin.png

Command to list the bucket content:

aws --endpoint-url=http://localhost:4566 s3 ls s3://bucket-example/

The listing shows the uploaded file with its last-modified timestamp and size.

Fine. Let’s do some code.

Creating a Spring Boot application

In application.properties we have this:

server.port=8089
springdoc.swagger-ui.path=/swagger-ui.html
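This example uses the AWS SDK for Java v1 and springdoc-openapi (for the Swagger UI configured above). Assuming Maven, the relevant dependencies look something like this (versions are illustrative):

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.12.300</version>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.6.12</version>
</dependency>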

Creating a config class:

package com.s3example.demo.config;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AWSConfig {

    // Localstack does not validate credentials, so dummy values are enough.
    public AWSCredentials credentials() {
        AWSCredentials credentials = new BasicAWSCredentials(
                "accesskey",
                "secretkey"
        );
        return credentials;
    }

    @Bean
    public AmazonS3 amazonS3() {
        AmazonS3 s3client = AmazonS3ClientBuilder
                .standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials()))
                // Points the client at Localstack instead of the real AWS endpoint.
                .withEndpointConfiguration(getEndpointConfiguration("http://s3.localhost.localstack.cloud:4566"))
                .build();
        return s3client;
    }

    private AwsClientBuilder.EndpointConfiguration getEndpointConfiguration(String url) {
        return new AwsClientBuilder.EndpointConfiguration(url, Regions.US_EAST_1.getName());
    }

}
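If the s3.localhost.localstack.cloud hostname doesn't resolve in your environment, an alternative (a sketch, not part of the original project) is to target http://localhost:4566 directly and enable path-style access, which avoids bucket-name subdomains:

AmazonS3 s3client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials()))
        .withEndpointConfiguration(getEndpointConfiguration("http://localhost:4566"))
        .withPathStyleAccessEnabled(true) // use path-style URLs: http://localhost:4566/bucket-name
        .build();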

Creating a service class:

package com.s3example.demo.adapters.service;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;
import com.s3example.demo.adapters.representation.BucketObjectRepresentaion;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.io.FileUtils;
import org.springframework.stereotype.Component;

import java.io.*;
import java.util.List;

@Slf4j
@Component
@RequiredArgsConstructor
public class S3Service {

    private final AmazonS3 amazonS3Client;

    // Bucket level operations

    public void createS3Bucket(String bucketName, boolean publicBucket) {
        // doesBucketExistV2 is the non-deprecated existence check.
        if (amazonS3Client.doesBucketExistV2(bucketName)) {
            log.info("Bucket name already in use. Try another name.");
            return;
        }
        if (publicBucket) {
            amazonS3Client.createBucket(new CreateBucketRequest(bucketName).withCannedAcl(CannedAccessControlList.PublicRead));
        } else {
            amazonS3Client.createBucket(new CreateBucketRequest(bucketName).withCannedAcl(CannedAccessControlList.Private));
        }
    }

    public List<Bucket> listBuckets() {
        return amazonS3Client.listBuckets();
    }

    public void deleteBucket(String bucketName) {
        try {
            amazonS3Client.deleteBucket(bucketName);
        } catch (AmazonServiceException e) {
            log.error(e.getErrorMessage());
        }
    }

    // Object level operations

    public void putObject(String bucketName, BucketObjectRepresentaion representation, boolean publicObject) throws IOException {

        String objectName = representation.getObjectName();
        String objectValue = representation.getText();

        // Write the text content to a local file before uploading it.
        File file = new File("." + File.separator + objectName);
        try (PrintWriter printWriter = new PrintWriter(new FileWriter(file, false))) {
            printWriter.println(objectValue);
        }

        try {
            if (publicObject) {
                var putObjectRequest = new PutObjectRequest(bucketName, objectName, file).withCannedAcl(CannedAccessControlList.PublicRead);
                amazonS3Client.putObject(putObjectRequest);
            } else {
                var putObjectRequest = new PutObjectRequest(bucketName, objectName, file).withCannedAcl(CannedAccessControlList.Private);
                amazonS3Client.putObject(putObjectRequest);
            }
        } catch (Exception e) {
            log.error("An error occurred while uploading the object.", e);
        }

    }

    public List<S3ObjectSummary> listObjects(String bucketName) {
        ObjectListing objectListing = amazonS3Client.listObjects(bucketName);
        return objectListing.getObjectSummaries();
    }

    public void downloadObject(String bucketName, String objectName) {
        S3Object s3object = amazonS3Client.getObject(bucketName, objectName);
        S3ObjectInputStream inputStream = s3object.getObjectContent();
        try {
            FileUtils.copyInputStreamToFile(inputStream, new File("." + File.separator + objectName));
        } catch (IOException e) {
            log.error(e.getMessage());
        }
    }

    public void deleteObject(String bucketName, String objectName) {
        amazonS3Client.deleteObject(bucketName, objectName);
    }

    public void deleteMultipleObjects(String bucketName, List<String> objects) {
        DeleteObjectsRequest delObjectsRequests = new DeleteObjectsRequest(bucketName)
                .withKeys(objects.toArray(new String[0]));
        amazonS3Client.deleteObjects(delObjectsRequests);
    }

    public void moveObject(String bucketSourceName, String objectName, String bucketTargetName) {
        // Note: this copies the object to the target bucket; the source object is kept.
        amazonS3Client.copyObject(
                bucketSourceName,
                objectName,
                bucketTargetName,
                objectName
        );
    }

}
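The BucketObjectRepresentaion class referenced above isn't shown in the original post; a minimal sketch of it, assuming Lombok, would be:

package com.s3example.demo.adapters.representation;

import lombok.Data;

// Simple DTO carrying the object's key and its text content.
@Data
public class BucketObjectRepresentaion {
    private String objectName;
    private String text;
}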

And finally, creating a controller class:

package com.s3example.demo.adapters.controller;

import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.s3example.demo.adapters.representation.BucketObjectRepresentaion;
import com.s3example.demo.adapters.service.S3Service;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;

import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.stream.Collectors;

@RestController
@RequestMapping(value = "/buckets/")
@RequiredArgsConstructor
public class ControllerTests {

    private final S3Service s3Service;

    @PostMapping(value = "/{bucketName}")
    public void createBucket(@PathVariable String bucketName, @RequestParam boolean publicBucket) {
        s3Service.createS3Bucket(bucketName, publicBucket);
    }

    @GetMapping
    public List<String> listBuckets() {
        var buckets = s3Service.listBuckets();
        return buckets.stream().map(Bucket::getName).collect(Collectors.toList());
    }

    @DeleteMapping(value = "/{bucketName}")
    public void deleteBucket(@PathVariable String bucketName) {
        s3Service.deleteBucket(bucketName);
    }

    @PostMapping(value = "/{bucketName}/objects")
    public void createObject(@PathVariable String bucketName, @RequestBody BucketObjectRepresentaion representation, @RequestParam boolean publicObject) throws IOException {
        s3Service.putObject(bucketName, representation, publicObject);
    }

    @GetMapping(value = "/{bucketName}/objects/{objectName}")
    public File downloadObject(@PathVariable String bucketName, @PathVariable String objectName) {
        // Downloads the object to the local disk and returns the file reference.
        s3Service.downloadObject(bucketName, objectName);
        return new File("./" + objectName);
    }

    @PatchMapping(value = "/{bucketSourceName}/objects/{objectName}/{bucketTargetName}")
    public void moveObject(@PathVariable String bucketSourceName, @PathVariable String objectName, @PathVariable String bucketTargetName) throws IOException {
        s3Service.moveObject(bucketSourceName, objectName, bucketTargetName);
    }

    @GetMapping(value = "/{bucketName}/objects")
    public List<String> listObjects(@PathVariable String bucketName) throws IOException {
        return s3Service.listObjects(bucketName).stream().map(S3ObjectSummary::getKey).collect(Collectors.toList());
    }

    @DeleteMapping(value = "/{bucketName}/objects/{objectName}")
    public void deleteObject(@PathVariable String bucketName, @PathVariable String objectName) {
        s3Service.deleteObject(bucketName, objectName);
    }

    @DeleteMapping(value = "/{bucketName}/objects")
    public void deleteObjects(@PathVariable String bucketName, @RequestBody List<String> objects) {
        s3Service.deleteMultipleObjects(bucketName, objects);
    }

}
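With the application running, you can also exercise the API from the command line; for example (the bucket name is just an illustration):

curl -X POST "http://localhost:8089/buckets/bucket-example?publicBucket=true"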

Now, accessing the URL http://localhost:8089/buckets/ returns the list of existing buckets.

Tests

To make testing easier, I configured OpenAPI in this project, so by accessing http://localhost:8089/swagger-ui.html you can execute requests from the Swagger UI.

So far we have the following operations:

POST: http://localhost:8089/buckets/bucket-name?publicBucket=true to create a bucket.

DELETE: http://localhost:8089/buckets/bucket-name to delete a bucket.

GET: http://localhost:8089/buckets/ to list all buckets.

POST: http://localhost:8089/buckets/bucket-name/objects?publicObject=true with the following body to create an object:

{
    "objectName": "object-name.txt",
    "text": "value of object"
}

GET: http://localhost:8089/buckets/bucket-name/objects/object-name.txt to fetch an object by name and download it.

GET: http://localhost:8089/buckets/bucket-name/objects to list existing objects.

DELETE: http://localhost:8089/buckets/bucket-name/objects/object-name to delete an object.

DELETE: http://localhost:8089/buckets/bucket-name/objects with the following body to delete multiple objects at once (see the curl example after this list):

["object-name-1.txt", "object-name-2.txt"]

PATCH: http://localhost:8089/buckets/bucket-name/objects/object-name.txt/bucket-name2 to move an object between two buckets.
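A minimal curl call for the multi-delete endpoint, assuming the objects exist in bucket-example:

curl -X DELETE "http://localhost:8089/buckets/bucket-example/objects" \
  -H "Content-Type: application/json" \
  -d '["object-name-1.txt", "object-name-2.txt"]'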

Public vs Private

By now you may have noticed that, in the examples, both the bucket and the objects were created with public access, right?

But it is also possible to create them with private access. The CannedAccessControlList enum is used for this:

public enum CannedAccessControlList {
    Private("private"),
    PublicRead("public-read"),
    PublicReadWrite("public-read-write"),
    AuthenticatedRead("authenticated-read"),
    LogDeliveryWrite("log-delivery-write"),
    BucketOwnerRead("bucket-owner-read"),
    BucketOwnerFullControl("bucket-owner-full-control"),
    AwsExecRead("aws-exec-read");

    private final String cannedAclHeader;

    private CannedAccessControlList(String cannedAclHeader) {
        this.cannedAclHeader = cannedAclHeader;
    }

    public String toString() {
        return this.cannedAclHeader;
    }
}
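For example, to change the ACL of an object that already exists, the SDK offers setObjectAcl (a usage sketch; the bucket and key names are illustrative):

// Makes an existing object private after the fact.
amazonS3Client.setObjectAcl("bucket-example", "object-name.txt", CannedAccessControlList.Private);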

This code is a functional proof of concept and is available for reference here: https://github.com/mmarcosab/s3-example, on the demo-Localstack branch.
