Antonio Feregrino

Dockerising a lambda – Tweeting from a lambda

The reasoning behind using lambdas in AWS is that the code is not constantly running on a server (hence the paradigm is known as serverless); lambdas are executed on demand and for a short time.

Creating a lambda is trivial when our code has no third-party dependencies; but we already know that this is not our case – we depend on several packages: pandas, matplotlib, seaborn, geopandas and twython, plus a few files that contain the map.

You may ask yourself: if "there is no server", where do I install these dependencies? The answer AWS gives us comes in three flavours; I will list them and tell you why I chose one of them:

  • You can include your dependencies inside a .zip file along with your code – in our case, the dependencies and files needed exceed the maximum supported size.
  • You can create something known as layers, which can contain dependencies and other files necessary for the execution of the functions – I did not opt for this because of the size of the dependencies (the same limitation as the .zip from the previous point); that said, layers are a good option for sharing dependencies among multiple lambdas.
  • You can create a container and run your lambda from it – this is the option I went for: besides making the deployment easier because the size limitation is no longer a problem, it allows me to test the lambda locally (you know I love being able to run things locally).

Dependencies

My idea is to keep the container as light as possible, so before packaging the Python dependencies I need to export the pipenv dependencies to the popular requirements.txt format – I don't want to install pipenv in the container (as I said at the beginning of this series, it is not necessary to use pipenv to manage your dependencies).

I created an instruction in the Makefile to generate the requirements.txt file:

requirements.txt:
    pipenv lock -r > requirements.txt
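Note that newer versions of pipenv have removed the -r flag; if the command above fails for you, pipenv requirements > requirements.txt should produce the same file.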

You also have to download the shapefiles – remember that I already created an instruction in the Makefile to download them:

shapefiles:
    wget https://data.london.gov.uk/download/statistical-gis-boundary-files-london/9ba8c833-6370-4b11-abdc-314aa020d5e0/statistical-gis-boundaries-london.zip
    unzip statistical-gis-boundaries-london.zip
    mkdir -p shapefiles
    mv statistical-gis-boundaries-london/ESRI/London_Borough_Excluding_MHW* shapefiles/
    rm -rf statistical-gis-boundaries-london statistical-gis-boundaries-london.zip

App's entry point

We previously created a file called app.py containing a method, execute, that ties together the functions we built in earlier posts. It is in this file that we will add the entry point of our lambda – it can have any name, but it must always take two arguments. In this case I will follow the naming convention and call it handler:

def handler(event, context):
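    # AWS invokes this function with the trigger payload in `event` and
    # runtime metadata in `context`; neither is needed here, so both are ignored.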
    execute()
    return {"success": True}

The return value must be a serialisable object; in this case, we return a dictionary.
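Since the handler is just a regular Python function, you can sanity-check it before building anything – a minimal sketch, bearing in mind that calling it will actually run the whole pipeline (and tweet!), so your Twitter credentials need to be set as environment variables:

# Call the handler directly – no AWS involved.
from app import handler

print(handler(event={}, context=None))  # should print {'success': True}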

Dockerfile

To create the lambda container I will use Docker, and as you know, Dockerfiles are the recipes we use to build images. This is the file I will use:

FROM public.ecr.aws/lambda/python:3.8

COPY requirements.txt .

COPY shapefiles/ ./shapefiles/

RUN pip3 install -r requirements.txt

COPY *.py ./

CMD ["app.handler"]

Let's go step by step:

  1. FROM ...: Although we can use almost any image as a base, AWS offers some with a guaranteed level of support; public.ecr.aws/lambda/python:3.8 is one of them.
  2. COPY requirements.txt ...: We copy the file with our dependencies; copying it before the application code lets Docker cache the installed packages between builds.
  3. COPY shapefiles/ ...: We copy our shapefiles folder.
  4. RUN pip3 insta...: Installs the dependencies listed in the requirements file.
  5. COPY *.py ...: Copies the application files into the container.
  6. CMD ["app...: Tells the container which function is the entry point of the lambda – the handler function in the app module.
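One detail worth knowing: the AWS base image sets its working directory to LAMBDA_TASK_ROOT (/var/task), which is why the relative paths in the COPY instructions work. It also means the application code can open the shapefiles with a relative path – a minimal sketch, assuming the plotting code loads the boroughs with geopandas and that the .shp file name matches the Makefile's wildcard above:

import geopandas as gpd

# Inside the container this resolves to /var/task/shapefiles/...
boroughs = gpd.read_file("shapefiles/London_Borough_Excluding_MHW.shp")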

To build the image we just need to execute a command such as the following:

docker build -t lambda-cycles .

Likewise, I created an instruction in the Makefile:

container: shapefiles requirements.txt
    docker build -t lambda-cycles .
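With this in place, running make container takes care of everything: because shapefiles and requirements.txt are listed as prerequisites, make will generate them first if they are missing and then build the image.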

Testing locally

The big advantage of using containers is that we can run the lambda locally; once you have built an image with the above Dockerfile, you can run it as follows.

It is important that you use the -p and -e flags: the first one maps port 9000 on the host to port 8080 inside the container; the second one passes the secrets we previously got from Twitter as environment variables.

docker run \
    -p 9000:8080 \
    -e API_KEY="ACTUAL VALUE FOR API_KEY" \
    -e API_SECRET="ACTUAL VALUE FOR API_SECRET" \
    -e ACCESS_TOKEN="ACTUAL VALUE FOR ACCESS_TOKEN" \
    -e ACCESS_TOKEN_SECRET="ACTUAL VALUE FOR ACCESS_TOKEN_SECRET" \
        lambda-cycles

From a separate terminal you can execute the lambda using curl:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
        -d '{}'
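The emulator responds with the serialised return value of the handler. If you prefer to script the invocation, here is an equivalent sketch in Python, assuming you have the requests package installed:

import requests

# Same endpoint the curl command above hits; the empty dict is the event.
url = "http://localhost:9000/2015-03-31/functions/function/invocations"
response = requests.post(url, json={})
print(response.json())  # {'success': True}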

This is how the repository looks by the end of this post.

Remember that you can find me on Twitter at @feregri_no to ask me about this post – if something is not clear or you found a typo. The final code for this series is on GitHub, and the account tweeting the status of the bike network is @CyclesLondon.
