Deploying Containerized AWS Lambda functions with Terraform

In Simple AWS Lambda Deployment with Terraform, I demonstrated how to deploy a class of AWS Lambda functions I call the simple lambda function. Simple lambda functions are deployed by packaging source code as a zip archive and uploading it directly to the AWS Lambda service. These functions tend to be small (<50MB), use only the built-in modules for the programming languages they are written in, and optionally also include the AWS SDK, which is available in all AWS Lambda runtimes.

Developing and deploying containerized applications has become commonplace in the software industry, and for good reason (e.g. ease and consistency of deployment, portability)! In this article, I demonstrate how to deploy containerized AWS Lambda functions using Terraform.

The complete source code used in this post is available on GitHub.

The Python code below is for a lambda function that, when invoked, produces fake profile information as a JSON response.
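A minimal sketch of what `main.py` might look like. The real app presumably generates values with a library such as Faker (hence `requirements.txt`); this sketch draws from hardcoded samples instead so it runs standalone, and the sample values are illustrative.

```python
import json
import random

# Stand-in data; the real app presumably generates these with Faker.
_NAMES = ["Jennifer Hawkins", "Marcus Lee"]
_ADDRESSES = ["294 Dominic Coves Apt. 336\nSmithchester, WI 70267"]


def handler(event, context):
    """Lambda entry point: return a fake profile as a JSON API response."""
    profile = {
        "name": random.choice(_NAMES),
        "address": random.choice(_ADDRESSES),
    }
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"profile": profile}),
    }
```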


Input event

{}

Output (response)

{
  "statusCode": 200,
  "headers": { "Content-Type": "application/json" },
  "body": "{\"profile\": {\"name\": \"Jennifer Hawkins\", \"address\": \"294 Dominic Coves Apt. 336\\nSmithchester, WI 70267\"}}"
}

We begin by configuring the terraform provider for AWS. The code below assumes that your AWS credentials are available in the environment.
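A minimal provider configuration along these lines (the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  # Must match the region of the ECR repository hosting the image.
  region = "us-east-2"
}
```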

Notice that the AWS provider is configured to use the us-east-2 region. This is important because (at the time of this writing) AWS Lambda functions must be deployed in the same region where their container images are hosted in Amazon ECR. (Here is a GitHub issue tracking a discussion about making it so that AWS Lambda functions can use images hosted in any region, in any AWS account. AWS has so far delivered on the cross-account part of the feature request.)

The following terraform code creates a lambda function by using a container image hosted in an existing AWS ECR repository.
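A sketch of what this file might contain; the resource names, the `env_name` variable, and the convention of tagging images with the environment name are illustrative.

```hcl
# Environment name, used for naming resources.
variable "env_name" {
  type = string
}

# Look up the existing ECR repository that hosts the app's image.
data "aws_ecr_repository" "profile_faker" {
  name = "profile-faker"
}

# Deploy the lambda function from the container image.
resource "aws_lambda_function" "profile_faker" {
  function_name = "profile-faker-${var.env_name}"
  package_type  = "Image"
  image_uri     = "${data.aws_ecr_repository.profile_faker.repository_url}:${var.env_name}"
  role          = aws_iam_role.profile_faker.arn
}

# Execution role that the Lambda service can assume.
resource "aws_iam_role" "profile_faker" {
  name = "profile-faker-${var.env_name}"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}
```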

The file begins with a line declaring a variable for the environment name, which is used for naming resources. The last few lines of the file create an aws_iam_role for the lambda function. But the core of this code is the aws_lambda_function resource and the aws_ecr_repository data source.

The aws_lambda_function resource is used to provision (as the name suggests) the AWS Lambda function using information from the aws_iam_role resource and aws_ecr_repository data source.

The aws_ecr_repository data source allows us to access the URI for the app's container image. You'll need to create an AWS ECR repository called profile-faker for this to work. (See these step-by-step instructions for creating an AWS ECR repository.)

The app’s source code is located in the ./aws_lambda_functions/profile_faker directory. The directory structure below shows how the essential files in this example are organized.

├── aws_lambda_functions
│   └── profile_faker
│       ├── Dockerfile
│       ├── Makefile
│       ├── main.py
│       └── requirements.txt

The Dockerfile for creating an image of the profile faker app is very basic as shown below.


FROM public.ecr.aws/lambda/python:3.9

# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy function code
COPY main.py ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "main.handler" ]

Once this Dockerfile exists, we merely need to create a container image for the app and push it to an AWS ECR repository.

I like using Makefiles to help with these things. (You will need to set environment variable values for AWS_ECR_ACCOUNT_ID and AWS_ECR_REGION.)

APP_NAME = profile-faker
APP_VERSION ?= latest
TAG ?= $(APP_VERSION)
AWS_ECR_REPO ?= $(APP_NAME)
AWS_ECR_URI = $(AWS_ECR_ACCOUNT_ID).dkr.ecr.$(AWS_ECR_REGION).amazonaws.com

.PHONY : docker/build docker/push docker/run docker/test

docker/build :
	docker build -t $(APP_NAME):$(APP_VERSION) .

docker/push : docker/build
	aws ecr get-login-password --region $(AWS_ECR_REGION) | docker login --username AWS --password-stdin $(AWS_ECR_URI)
	docker tag $(APP_NAME):$(APP_VERSION) $(AWS_ECR_URI)/$(AWS_ECR_REPO):$(TAG)
	docker push $(AWS_ECR_URI)/$(AWS_ECR_REPO):$(TAG)

docker/run :
	docker run -p 9000:8080 $(APP_NAME):$(APP_VERSION)

docker/test :
	curl -XPOST 'http://localhost:9000/2015-03-31/functions/function/invocations' -d '{}'

Now I can do this to push the image to AWS ECR.

make docker/push TAG=dev

And do this to test the lambda function locally.

make docker/run
make docker/test

The output should look similar to this.

{
  "statusCode": 200,
  "headers": { "Content-Type": "application/json" },
  "body": "{\"profile\": {\"name\": \"Jennifer Hawkins\", \"address\": \"294 Dominic Coves Apt. 336\\nSmithchester, WI 70267\"}}"
}

I encourage you to init, plan, and apply the Terraform code used in this post and experiment with deploying containerized lambda functions using Terraform.

terraform init
terraform plan -var="env_name=dev"
terraform apply -var="env_name=dev"