Simulating, configuring and uploading to S3 locally using LocalStack and Elixir

Long story short, I have been trying to find an alternative to AWS S3 so I can develop my project WorldLink's features locally without having to pay anything, and today I stumbled upon some interesting articles on setting up S3 locally.

I'll focus only on S3 and the AWS CLI, so if you want more details, check out the LocalStack documentation.

Table of contents:

  1. Setting up
    1. Setting up Docker
    2. Setting up AWS CLI
    3. Setting up LocalStack
  2. Configuring AWS CLI, LocalStack and the Elixir project
  3. Uploading files to LocalStack with ExAws S3 and Elixir

1. Setting up

1.1 Setting up Docker

I'm currently using WSL2 Ubuntu 22.04 and native Linux Mint 21.3. To keep my development environment completely isolated, I do NOT install Docker Desktop on my Windows partition.

Yes, you can install Docker locally under WSL2. Since Linux Mint and Ubuntu share the same apt package manager, the commands below should work fine on both. If you're using another distro such as Arch or openSUSE, you can find an alternative installation method in Docker's documentation.

If you want to keep using Docker Desktop, sure, you can skip this section.

But if you don't want to, make sure to uninstall it before proceeding.

# Add Docker's official GPG key:
  sudo apt-get update
  sudo apt-get install ca-certificates curl gnupg
  sudo install -m 0755 -d /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  sudo chmod a+r /etc/apt/keyrings/docker.gpg

NOTE: Switch to bash temporarily IF you're using fish

# Add the repository to Apt sources:
  echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
# Install docker engine and docker compose
  sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Create group for docker
  sudo groupadd docker
  sudo usermod -aG docker $USER

Then log out and back in for the change to take effect (so you don't have to prefix every docker command with sudo).
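To confirm the group change took effect, you can try running a container without sudo; `hello-world` is Docker's standard smoke-test image:

```shell
# In a fresh session (or after `newgrp docker` in the current shell),
# this should run without sudo and print a greeting message
docker run --rm hello-world
```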

1.2 Setting up AWS CLI

Run these commands to update the package index and install Python 3, pip, and venv

sudo apt-get update && \
  sudo apt-get install python3 && \
  sudo apt install python3-pip python3-venv

Run these commands to create a virtual environment using Python

  # Go to your home directory
  cd ~
  # Create a folder named awsenv
  mkdir awsenv
  # Go into awsenv
  cd awsenv
  # Create the actual virtual environment under the s3env folder
  python3 -m venv s3env

Enter the virtual environment

source ~/awsenv/s3env/bin/activate.fish # I'm using fish

Inspecting ~/awsenv/s3env/bin, there are 2 important files: activate and activate.fish

  • If you're NOT using fish as the default shell, use activate
  • If you ARE using fish as the default shell, use activate.fish
  • You NEED to source the appropriate activate file whenever you want to use the AWS CLI
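One step worth making explicit before checking the version: the AWS CLI itself has to be installed inside the virtual environment. A minimal sketch, assuming you want the pip-packaged v1 CLI (AWS CLI v2 ships as a separate bundled installer instead):

```shell
# With the s3env virtual environment activated,
# install the pip-packaged AWS CLI into it
pip install awscli
```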

Check whether the AWS CLI is working

aws --version

And voila! Let's go to the next step.

1.3 Setting up LocalStack

You can check out LocalStack's AWS feature coverage in the LocalStack documentation.

Copy and paste this line at the top of activate or activate.fish (depending on which shell you're using), or simply run it in a separate terminal.

docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack
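To check that the container actually came up, you can hit LocalStack's health endpoint (recent LocalStack versions expose it at `/_localstack/health`; older releases used `/health`):

```shell
# Query LocalStack's health endpoint on the edge port;
# it returns a JSON summary of which services are running
curl -s http://localhost:4566/_localstack/health
```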

We're done with the setup at this point.

2. Configuring AWS cli, LocalStack and Elixir project

Since I'm focusing only on S3: LocalStack exposes all its services, S3 included, on the single edge port 4566.

Copy and paste this line at the top of activate or activate.fish, depending on which shell you're using.

alias aws="aws --endpoint-url=http://localhost:4566"

Enter the virtual environment by running this command

source ~/awsenv/s3env/bin/activate.fish # I'm using fish

And then run this line to configure AWS

aws configure

Here are my setup credentials (any dummy values will do, since LocalStack doesn't validate them):

AWS Access Key ID [****************5566]: 112233445566
AWS Secret Access Key [****************5566]: 112233445566
Default region name [us-east-1]: us-east-1
Default output format [json]: json
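With the alias from above in effect (so `aws` targets http://localhost:4566), you can already create the "test-bucket" that the rest of this post uses:

```shell
# Create the bucket the upload examples will target
aws s3 mb s3://test-bucket
# List buckets to verify it exists
aws s3 ls
```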

Let's configure the project with LocalStack and ExAws


# In mix.exs, add these dependencies
  # The versions of these dependencies may vary in the future
  defp deps do
    [
      {:finch, "~> 0.16"},
      {:ex_aws, "~> 2.1"},
      {:ex_aws_s3, "~> 2.0"},
      {:hackney, "~> 1.9"},
      {:sweet_xml, "~> 0.6"}
    ]
  end


# Configuration for AWS (in config/config.exs or config/dev.exs)
  config :ex_aws,
    debug_requests: true,
    region: "us-east-1"

  config :ex_aws, :s3,
    scheme: "http://",
    host: "localhost",
    port: 4566,
    access_key_id: "112233445566",
    secret_access_key: "112233445566",
    region: "us-east-1"

  config :ex_aws, :hackney_opts,
    follow_redirect: true,
    recv_timeout: 30_000


# In application.ex, add Finch to the supervision tree
  def start(_type, _args) do
    children = [
      # Finch
      {Finch, name: MyApp.Finch}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

Fetch and compile all the dependencies

mix deps.get && mix deps.compile

And that's all for the configuration.
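As a quick smoke test (assuming the config above is in place and LocalStack is running), you can list buckets from `iex -S mix`:

```elixir
# ExAws reads the :ex_aws config, so this request goes to
# http://localhost:4566 instead of real AWS
ExAws.S3.list_buckets() |> ExAws.request()
```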

3. Uploading files to LocalStack with ExAws S3 and Elixir

For my use cases, I came up with 2 approaches for uploading. Given that I have a bucket called "test-bucket" and the absolute path of a photo: /home/phamd/downloads/test-image.png

a. Upload directly with ExAws.S3

There are 2 ways to do this

The downside of this approach is that I can't preserve the original Content-Type (or maybe I just haven't figured out how yet). Nevertheless, it comes in handy when you want to deal with large files, because it performs a multipart upload.

"/home/phamd/downloads/test-image.png"
|> ExAws.S3.Upload.stream_file()
|> ExAws.S3.upload("test-bucket", "test-folder/test-image.png")
|> ExAws.request()

I can preserve the original Content-Type of the image this way.

{:ok, binary_data} = File.read("/home/phamd/downloads/asdf.jpg")

ExAws.S3.put_object("test-bucket", "test-folder/new_image.jpg", binary_data, [{:content_type, "image/jpeg"}])
|> ExAws.request()

b. Get a presigned URL and upload via Finch

# Generate a presigned PUT URL for the target object
{:ok, presigned_url} =
  ExAws.Config.new(:s3)
  |> ExAws.S3.presigned_url(:put, "test-bucket", "test-folder/new_image.jpg")

# Read the file and PUT it to the presigned URL with Finch
{:ok, bin_data} = File.read("/home/phamd/downloads/asdf.jpg")

Finch.build(:put, presigned_url, [{"Content-Type", "image/jpeg"}], bin_data)
|> Finch.request(MyApp.Finch)

