Proper use of AWS Lambda layers
What are AWS Lambda layers? As we know, AWS Lambda functions allow you to execute code in the cloud according to the serverless paradigm. A serverless cloud application is normally made up of multiple independent Lambda functions, each responding to specific events (such as REST API calls, scheduled executions or other triggers). Each Lambda function is defined by its own deployment package, which contains its source code and any requirements, such as additional libraries, dependencies and middleware.
In this type of architecture, AWS Lambda layers introduce code and dependency reusability, making it possible to share modules among different functions: layers are simply packages that Lambda functions can reuse and that effectively extend the base runtime. Let’s see how to use them.
AWS Lambda layers 101
How do you prepare an AWS Lambda layer? To show how, let’s consider this example: a Lambda function written in Python that needs to run a binary application not included in the standard AWS runtime. In our example, the application is a simple bash script.
#!/bin/bash
# This is version.sh script
echo "Hello from layer!"
To create the Layer we need to prepare a ZIP archive with the following structure:
layer.zip
└ bin/version.sh
From the AWS Lambda console we just need to create the layer, providing the source ZIP archive and the names of the compatible runtimes.
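The same can be done from the command line with the AWS CLI. Here is a minimal sketch, assuming the bin/version.sh structure above; the layer name, description and runtime are just examples:
# Make the script executable, package it with the expected layout and publish the layer
chmod +x bin/version.sh
zip -r layer.zip bin
aws lambda publish-layer-version \
    --layer-name my-bash-layer \
    --description "Provides version.sh in /opt/bin" \
    --zip-file fileb://layer.zip \
    --compatible-runtimes python3.8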
Now suppose we create a Lambda function that uses this layer. The code might look like this:
import json
import os

def lambda_handler(event, context):
    # Run the script provided by the layer: /opt/bin is already on the PATH
    stream = os.popen('version.sh')
    output = stream.read()
    return {
        'statusCode': 200,
        'body': json.dumps('Message from script: {}'.format(output))
    }
Its output will be:
{
"statusCode": 200,
"body": "\"Message from script: Hello from layer!\\n\""
}
Our AWS Lambda function correctly executes the bash script included in the layer. This happens because the contents of the layer are extracted into the /opt folder. Since we used the directory structure recommended by AWS when building the layer ZIP archive, our bash script already ends up on the default PATH (/opt/bin). Great!
Let’s consider a more complete example: a Python project I mentioned in another post, Using Chromium and Selenium in an AWS Lambda function.
To use Chromium in a Lambda function, you need to include the binaries and related libraries in the deployment package, as AWS obviously doesn’t include them in the standard Python runtime. My first approach was to not use layers at all, which resulted in a single ZIP package of more than 80 MB. Whenever I wanted to update my Lambda function code, I was forced to upload the entire package, resulting in a long wait. Considering the number of times I repeated the operation during the development phase of the project, and that the source of the function was a very small part of the whole package (a few lines of code), I realized how much time I wasted!
The second, much smarter approach was to use an AWS Lambda layer to include the Chromium binaries and all the required Python packages, in a similar way to what we saw above. This is the structure:
layer.zip
├ bin
│  ├ chromium
│  ├ chromedriver
│  ├ fonts.conf
│  └ lib
│     └ ...
└ python
   ├ selenium
   ├ selenium-3.14.0.dist-info
   └ ...
To install the Python packages I used the usual pip command:
pip3 install -r requirements.txt -t python
Once the layer was created, the time required to deploy the function was significantly reduced, to the benefit of productivity.
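For completeness, here is a minimal sketch of how the function code can then use what the layer provides. The exact paths follow the layout above, while the headless Chromium flags are assumptions and may need adjusting (see the post mentioned earlier for the full details):
import json
from selenium import webdriver

def lambda_handler(event, context):
    # Binaries and Python packages come from the layer, extracted under /opt
    options = webdriver.ChromeOptions()
    options.binary_location = '/opt/bin/chromium'
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')
    options.add_argument('--single-process')
    driver = webdriver.Chrome('/opt/bin/chromedriver', options=options)
    driver.get('https://www.example.com')
    title = driver.title
    driver.quit()
    return {'statusCode': 200, 'body': json.dumps(title)}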
Some more information on AWS Lambda layers:
- Can be used by multiple Lambdas
- Can be updated and a new version is created each time
- Versions are automatically numbered from 1 up
- Can be shared with other AWS Accounts and made public
- Are specific to an AWS Region
- If there are multiple layers in a Lambda, they are “merged” together in the specified order, overwriting any files already present
- A function can use up to 5 layers at a time
- Do not allow you to exceed the AWS Lambda deployment package size limit (250 MB unzipped, function and all its layers combined)
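A quick way to verify some of the points above (the automatic version numbering and the region-specific ARNs) is the AWS CLI; the layer name below is the example one used earlier:
aws lambda list-layer-versions \
    --layer-name my-bash-layer \
    --query 'LayerVersions[].[Version,LayerVersionArn]' \
    --output table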
When to Use AWS Lambda Layers
In my specific case, using a layer brought great benefits by reducing deployment times. So I wondered whether it’s always a good idea to use AWS Lambda layers. Spoiler alert: the answer is no!
There are two main reasons for using layers:
- the reduction of the size of AWS Lambda deployment packages
- the reusability of code, middleware and binaries
This last point is the most critical: what happens to Lambda functions during the lifecycle of the layers they depend on?
Layers can be deleted: removing a layer does not cause problems for the functions that already use it. You can still modify the function’s code but, if you need to change the layers it depends on, the reference to the layer that is no longer available must be removed.
Layers can be upgraded: creating a new version of a layer does not cause problems for functions that use previous versions. However, the Lambda update process is not automatic: if needed, the new layer version must be specified in the Lambda definition, removing the previous one first. So although layers can be used to distribute fixes and security patches for common Lambda components, keep in mind that this process is not completely automated.
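For example, pointing an existing function to a new layer version can be done with the AWS CLI; note that --layers replaces the whole list of layers attached to the function. The function name, account ID and layer ARN below are placeholders:
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:eu-west-1:123456789012:layer:my-bash-layer:2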
AWS Lambda layers: more complex testing?
In addition to what has already been highlighted in the previous paragraph, using layers brings new challenges, especially when it comes to testing.
The first aspect to consider is that a layer introduces dependencies that are only available at runtime, making it more difficult to debug your code locally. The solution is to download the content of the layers from AWS and include it during the build process. Not very practical, however.
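A minimal sketch of that workaround with the AWS CLI, assuming the example layer name used above; the target folder is arbitrary:
# Get a temporary download URL for the layer content and unpack it locally
URL=$(aws lambda get-layer-version \
    --layer-name my-bash-layer \
    --version-number 1 \
    --query Content.Location \
    --output text)
curl -s "$URL" -o layer.zip
unzip -o layer.zip -d ./layer   # now available for local debugging and tests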
Similarly, unit tests and integration tests become more complex: as with local debugging, the content of the layers must be available during their execution.
The second aspect concerns statically compiled languages such as Java or C#, which require all dependencies to be available in order to build the JAR or DLL. Obviously, even in this case there are more or less elegant solutions, such as loading them at runtime.
Security & Performance
In general, the introduction of AWS Lambda layers does not involve any security drawbacks: on the contrary, it makes it possible to deploy new versions of existing layers to release security patches. As seen above, though, remember that the update process is not automatic.
Particular attention should be paid to third-party layers: there are many layers made publicly available, dedicated to various purposes. Although it is certainly convenient to use a layer already configured for a very specific purpose, it is obviously better to create your own layers, so as not to fall victim to malicious code. Alternatively, it is always advisable to first check the repository of the layer you intend to use.
Performance: using layers instead of an all-in-one package has no impact on performance, not even in the case of a cold start.
CloudFormation
Creating AWS Lambda layers in CloudFormation is very simple. Layers are resources of type AWS::Lambda::LayerVersion. In the Lambda function definition, the Layers parameter lets you specify a list of up to 5 layer dependencies.
Here’s an example:
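Since the full template is not reproduced here, the following is a minimal sketch of what it can look like; the resource names, S3 locations, role ARN and runtime are placeholders:
Resources:
  MyLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: my-bash-layer
      Description: Provides version.sh in /opt/bin
      Content:
        S3Bucket: my-deployment-bucket
        S3Key: layers/layer.zip
      CompatibleRuntimes:
        - python3.8

  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: my-function
      Runtime: python3.8
      Handler: lambda_function.lambda_handler
      Role: arn:aws:iam::123456789012:role/my-lambda-role   # placeholder execution role
      Code:
        S3Bucket: my-deployment-bucket
        S3Key: functions/function.zip
      Layers:
        - !Ref MyLayer   # the reference resolves to the layer version ARN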
Conclusions
My two cents: using AWS Lambda layers certainly brings benefits in the presence of large dependencies that do not need to be updated very frequently. Moving these dependencies into a layer significantly reduces the deployment time of your Lambda function.
What about sharing source code? In this case it is worth weighing the benefits against the complexity introduced into the application’s debugging and testing processes: it is likely that the effort required is not justified by the benefits obtained by introducing layers.
Did we have fun? See you next time!