Docker Aws Configure




You can now build your serverless workloads in a Docker container with FaaS.

Among all the new features and services that AWS announced during re:Invent 2020, my favorites were definitely the AWS Lambda updates. And there were many! For example, your code execution is no longer rounded up to the nearest 100ms of duration for billing: you are now billed per millisecond. On top of that, AWS increased Lambda's memory capacity to 10 GB and, correspondingly, the CPU capacity up to 6 vCPUs [3]. But today, I want to dig deeper into something even more exciting for me. Namely, from now on, AWS Lambda doesn't require packaging your code and dependencies into a zip file. Instead, you can use a Docker container image that can be up to 10 GB in size.

Personally, I consider this a game-changer for many serverless use cases. And here’s why.

The easiest way to package code for deployment is to use a (Docker) container

Until recently, the only way of creating a serverless function on AWS was to select a specific language and runtime (e.g., Python 3.8), make sure that you install all your custom dependencies inside your project directory (or add site-packages from a Python virtual environment to your zip package), and finally compress all of that into a zip package. If your zip file was bigger than 50 MB, you would also have to upload the code to S3 and reference it in your function definition. All that is doable, and many developers (me included) created their own methods to make it easier, such as using Lambda layers, site-packages from a virtual environment, and shell scripts for deployment.

On the surface, it seems like not much changes — instead of zipping your code, you now define your dependencies inside a Dockerfile. But there is more to it, as defining your runtime environment in a container image gives you much more control over your environment compared to what you get with predefined runtimes and zipping dependencies.

A zip file with a predefined runtime environment has its limits: what if you would like to use a specific Python environment that has been reviewed by your company’s security team? Or what if you need some additional OS-level package? With the container image support, you can do that since a Docker container has no restrictions in the base image and packages you choose to install. This makes “serverless” accessible to a wider audience, and the development of FaaS (Function as a Service) becomes much easier.

In theory, it’s even possible to create custom images for other programming languages, although this requires implementing a custom runtime and is more involved.


The interface of AWS Lambda now looks as follows:

You can now use your own custom environment packaged as a container image.

Note: at the time of writing, only Linux containers are supported.

Container image deployment with a simple example

Let’s build a simple ETL example. Here is a project structure that we will use:
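The original project tree is not reproduced here; based on the files referenced throughout this post (a Dockerfile, a requirements.txt, and an etl.py with the handler), it likely looked like this:

```
lambda-docker-etl/
├── Dockerfile
├── requirements.txt
└── etl.py
```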

My requirements.txt contains only: pandas==1.1.0.

The actual code, demonstrated below, is just a simple ETL example counting exam scores of Harry Potter’s characters, but you can use it as a scaffold for your use case:
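The author's original code is not reproduced here; below is a minimal sketch of what such a handler could look like. The file name (etl.py), the function name (handler), and the sample data are assumptions for illustration, not the original code:

```python
# etl.py - a minimal ETL-style Lambda handler (illustrative sketch)
import pandas as pd


def handler(event, context):
    # Extract: in a real pipeline this data could be read from S3;
    # here it is inlined to keep the example self-contained
    scores = pd.DataFrame({
        "student": ["Harry", "Hermione", "Ron", "Hermione", "Harry"],
        "exam": ["Potions", "Potions", "Potions", "Charms", "Charms"],
        "score": [70, 100, 65, 98, 80],
    })
    # Transform: aggregate exam scores per student
    totals = scores.groupby("student")["score"].sum().to_dict()
    # Load: return the result (a real job might write it back to S3)
    return {"statusCode": 200, "body": totals}
```

You can swap the inlined DataFrame for `pd.read_csv("s3://...")` once the function has the required S3 permissions.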

Now to the fun part: the Dockerfile that will define all our code dependencies so that we don’t need to zip our code!💪🏻
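The original Dockerfile is not reproduced here; a minimal sketch, assuming the function code lives in etl.py and the handler function is named handler:

```dockerfile
# AWS-provided base image that already includes the Lambda Runtime API
FROM public.ecr.aws/lambda/python:3.8

# Install Python dependencies into the Lambda task root
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the function code
COPY etl.py ${LAMBDA_TASK_ROOT}

# Set the handler in module.function form
CMD ["etl.handler"]
```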

Usually, your base image for Python 3.8 would start with FROM python:3.8 in order to use the official Python image from Docker Hub. However, to make it usable with AWS Lambda, your base image must include the Lambda Runtime API. To make it easier for us, AWS prepared many base images that we can use, such as the Python 3.8 base image referenced in the Dockerfile above. You can find all AWS Lambda base images in the public ECR repository as well as in the Docker Hub registry:


Base images for AWS Lambda.

Let’s test our dockerized Lambda function

The best part of developing your Lambda functions with a container image is the dev/prod environment parity. You can easily test your code locally with Docker before deploying your code to AWS. Your local containerized environment is identical to the one you will be using later in production. This is possible due to a web-server environment called Lambda Runtime Interface Emulator (RIE) (you can find out more about it here), which has been open-sourced by AWS. This emulator is already baked into all Lambda images (amazon/aws-lambda-*) that you can find on Dockerhub or in the ECR public image repository.

Run the following commands from the project directory that contains Dockerfile:
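The commands themselves are not shown above; assuming an image tag of hello-lambda (an arbitrary name), the build-and-run step would look roughly like:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t hello-lambda .

# Start the container; the Runtime Interface Emulator inside the
# image listens on port 8080, mapped here to 9000 on the host
docker run -p 9000:8080 hello-lambda
```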

Then, in a new terminal window, run:
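The invocation command is missing above; the Runtime Interface Emulator exposes the standard Lambda invocation path, so a test call looks like:

```shell
# Invoke the locally running function through the emulator
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```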

Here is what I’m getting as output:

Local execution looks good. Let’s deploy it to AWS.

Pushing your container image to ECR & Creating a Lambda function

We can now run the following commands to create an ECR repository and push our container image to ECR:
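The commands are not reproduced above; a sketch of the typical sequence follows. The repository name hello-lambda and the account ID 123456789012 are placeholders you must replace with your own values:

```shell
# Create the ECR repository (name is a placeholder)
aws ecr create-repository --repository-name hello-lambda --region us-east-1

# Authenticate Docker against your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image to match the repository name and push it
docker tag hello-lambda:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-lambda:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-lambda:latest
```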

Now that our image is deployed, we can use it in our Lambda function:

Deploying a Lambda function with a container image from ECR — GIF made by the author
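The same deployment can also be done from the CLI instead of the console; a sketch, with the function name, image URI, and IAM role ARN as placeholders:

```shell
# Create the function from the container image in ECR
aws lambda create-function \
  --function-name hello-lambda \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-lambda:latest \
  --role arn:aws:iam::123456789012:role/lambda-execution-role
```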

Note that we didn’t have to select the runtime environment since it’s all already defined in our container image. We tested the function from the AWS management console and saw that we got the same result as when tested locally.

Monitoring your Dockerized Workloads with Lambda

By now, you may be convinced that running containerized workloads with AWS Lambda has a myriad of advantages, and you may want to use it now much more extensively. However, I encourage you to think ahead about observability and approach the serverless workloads with an architect’s foresight.

Imagine that you migrated several data pipelines from a container orchestration solution to AWS Lambda. How do you know which of those pipelines failed and why? Sure, AWS offers native support for logging and alerting via Amazon CloudWatch. Still, to be completely honest, AWS services for monitoring and observability require some extra work to set up proper alerting, configure log groups, and set up everything to ensure tracing with X-Ray. Then, we also need to decide on metrics to track and build CloudWatch dashboards to visualize this data.


You can considerably improve the developer experience by using tools such as Dashbird, which allows you to easily add observability to your existing serverless workloads without any changes to your code or infrastructure. All you need to do is to create an IAM role that will grant Dashbird cross-account permission to communicate with your AWS resources. Once that’s configured, you can immediately start enjoying all benefits of the platform, such as automated alert notifications, visualizations of your metrics, and actionable insights based on the AWS Well-Architected Framework to improve performance, save costs, and enhance the security of your cloud resources.

Actionable insights gathered by leveraging Dashbird.

Recap: the benefits of Container Image as opposed to a zip deployment

When using a container image rather than a zip package for your serverless function deployments, you’ll get the following benefits:

  • Support for any programming language you want (as long as you use a base image that implements the Lambda Runtime API),
  • Ability to easily work with additional dependencies that can be baked into a container image such as additional Python modules or config files,
  • Flexibility and independence from any platform — you can easily move the same jobs to a K8s cluster or any platform supporting containerized applications. In our example, we would only have to change the base image back to python:3.8 and the entry point command to CMD ["python", "etl.py"] within the Dockerfile.
  • You have much more control over your packaged environment — with a traditional Lambda runtime environment, you use what you get from AWS. In contrast, with a container image, you have many options to customize the environment to your needs. Imagine that you want to use a smaller and more lightweight Python image for performance and cost optimization, or an image that has been approved by the security team to meet your company's specific compliance requirements.
  • Your code can run anywhere — a containerized application minimizes any surprises when moving your code from your local machine to the development, testing, or production environment. Your code can run anywhere without side effects.
  • Run event-driven containers — while container orchestration platforms such as Kubernetes are great, some use cases may be better served by a simple FaaS, for instance, when you want your code to run every time a new file arrives in S3 or somebody makes a request to your API. AWS Lambda is perfect for such use cases.

Conclusion

I’m quite happy about all the new AWS Lambda features. As a huge proponent of containerized applications, I prefer that option over zipping the code for a serverless deployment. These days, developing self-contained microservices has become easier than ever before due to the existence of so many platforms and services to run containers at scale. And if you want to ensure observability and enterprise-grade monitoring of your serverless containers, Dashbird is a great option to consider: https://dashbird.io/.


LocalStack is a test/mocking framework for developing cloud applications that combines tools such as kinesalite/dynalite, moto, ElasticMQ, and others.

At the moment, the project focuses primarily on supporting the AWS cloud stack. You can run it in your local environment without even having an AWS account and start testing AWS services locally.

In this post, you will learn how to:

  • Create an AWS profile using the AWS CLI.
  • Run LocalStack into a Docker Container.
  • Access the LocalStack web UI.
  • Run some AWS CLI commands against LocalStack.

LocalStack services

LocalStack comes in two flavors: A free, open source Base Edition, and a Pro Edition with extended features and support.

The first is free, and you can run it on your local machine; for the second, however, you must pay a monthly subscription and set a license key in your installation to use it.

Let's identify which services LocalStack provides in each edition.

For more information, create an account at https://app.localstack.cloud/ (no credit card is required); on the website you will find, for example, information about how to configure the Pro Edition. You can also join https://localstack-community.slack.com/ if you wish.

Prerequisites

LocalStack can be installed using Docker or Python; however, the recommended way is Docker. This walkthrough assumes that LocalStack will be installed on a Windows machine:

  • Windows 10 Pro, Enterprise or Educational (Build 15063 or later).
  • Hyper-V and Containers Windows features must be enabled.

Startup

Creating an AWS profile

  1. First, download the AWS CLI from here.
  2. Next, create an AWS profile from a PowerShell terminal. Set the region to us-east-1 (this is important when using some services such as SQS or SNS).
  3. After creating the profile, the credentials file (at C:\Users\yourUserName\.aws\credentials) will look similar to:
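The file contents are not shown above; a typical credentials file for LocalStack uses dummy values, since LocalStack does not validate them:

```ini
[default]
aws_access_key_id = test
aws_secret_access_key = test
```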

Preparing the LocalStack container

  1. Start by setting up Docker: download and install it from here.
  2. After installing it, check the Docker installation by running docker --version in PowerShell.
  3. Once Docker is running, pull the LocalStack image with docker pull localstack/localstack. The download is almost 500 MB, and the image is around 1 GB uncompressed.
  4. To avoid issues when the container starts, the best option is to create a folder with the following structure:
  5. Create the docker-compose.yml. It holds the configuration for creating the container from a LocalStack image: the services to start, the port mappings between the container and the host, and the path where data is saved so the container retains its state across restarts. For a fully detailed definition of the environment parameters, check the official documentation here.
  6. Then, to run the LocalStack container, change to the directory containing the docker-compose.yml file and execute docker-compose up.
  7. Stop the LocalStack container with docker-compose down when you are not using it.
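The compose file itself is not reproduced above. A minimal sketch for the community edition follows; the service list, port ranges, and data directory are assumptions to adapt to your needs (the UI mapping matches the http://localhost:8081/ address used later):

```yaml
version: "3"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"   # per-service endpoints (S3 is 4572)
      - "8081:8080"             # web UI exposed on the host as 8081
    environment:
      - SERVICES=s3,sqs,sns     # services to start
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - ./data:/tmp/localstack/data  # persisted state across restarts
```

With the file in place, run `docker-compose up -d` from the same directory to start the stack and `docker-compose down` to stop it.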

Interacting with LocalStack

LocalStack Base Edition provides a simple web UI, which you can check out at http://localhost:8081/. If everything is correct, you will see something like this.


On this page, you will see more information when starting to interact with some services, for example, S3, SQS, and so on.


After this, it is possible to start testing your application, using LocalStack or the AWS CLI to interact with the services.

Make sure to specify the endpoint and profile according to each service that you use.

Testing LocalStack and S3 Service

Most noteworthy, it is possible to interact with LocalStack using the AWS CLI; here are some commands for using S3.

Creating a bucket
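The command is missing above; assuming S3 is exposed on port 4572 as noted later, and using yourbucket as a placeholder name:

```shell
# Create a bucket in LocalStack (note the local endpoint override)
aws --endpoint-url=http://localhost:4572 s3 mb s3://yourbucket
```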


To list the recently created bucket:
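The listing command is not shown above; against the same local endpoint it would be:

```shell
# List all buckets in LocalStack
aws --endpoint-url=http://localhost:4572 s3 ls
```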

Listing the files in a bucket (you can also use http://localhost:4572/yourbucket):
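The command is missing above; with yourbucket as the placeholder bucket name:

```shell
# List the objects stored in the bucket
aws --endpoint-url=http://localhost:4572 s3 ls s3://yourbucket
```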

Conclusions

As a result of all these steps, you can start testing: for example, running an integration test of a Web API that uploads and downloads files using the S3 service, or sending and reading messages from a queue. Here you will find how to run LocalStack using .NET instead of a Docker Compose file.

Additional resources