
The situation is that I have a load of AWS Lambda functions (using Node.js 8.10) that all do something very different, and they're all deployed using CloudFormation.

They all share a few functions which are very complex.

At the moment, if the common code changes, which happens fairly frequently, I replicate the common code between each of the projects (including in source control) and then redeploy each of the functions. This has always felt wrong.

Now we have Lambda layers - yay! Or... yay?

Great, so now I can maintain the code in a single repo - tick. But the rest of the process is not really any better, and possibly worse...

If I put the layer in a CloudFormation template and export the ARN for import into the Lambda function templates, then the exported ARN is only ever for version 1 of the layer.
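For illustration, the export pattern I mean looks something like this (resource, bucket, and export names are just examples):

Resources:
  SharedLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: shared-code
      CompatibleRuntimes:
        - nodejs8.10
      Content:
        S3Bucket: my-artifacts-bucket   # illustrative
        S3Key: shared-code.zip

Outputs:
  SharedLayerArn:
    Value: !Ref SharedLayer   # Ref on a LayerVersion returns the versioned ARN, e.g. ...:layer:shared-code:1
    Export:
      Name: shared-layer-arn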

I could form the ARN without the version using the Fn::Sub function and then append the version in the Lambda function CloudFormation templates. But whenever there's a change to the common code, I'd still need to update all downstream Lambda function CloudFormation templates to add the latest version.
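Roughly like this (names are just examples) - the layer stack exports an unversioned ARN:

Outputs:
  SharedLayerBaseArn:
    Value: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:shared-code'
    Export:
      Name: shared-layer-base-arn

and each function template appends a version that has to be bumped by hand on every change:

Parameters:
  SharedLayerVersion:
    Type: String
    Default: '1'   # must be bumped whenever the common code changes

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: my-function
      Runtime: nodejs8.10
      Handler: index.handler
      Role: arn:aws:iam::123456789012:role/service-role/role-name
      Code:
        S3Bucket: my-artifacts-bucket
        S3Key: my-function.zip
      Layers:
        - !Sub
          - '${BaseArn}:${SharedLayerVersion}'
          - BaseArn: !ImportValue shared-layer-base-arn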

I could script it, but it's still a massive PITA and doesn't really save much effort. I'd need to get the latest of each Lambda function project, update the version number, commit back to the repo, PR, merge, blah blah blah.

Is there no other way of always using the latest version of a layer?

Russell Keane
  • What language are you using? – Noel Llevares Dec 13 '18 at 22:48
  • using node js 8.10 – Russell Keane Dec 14 '18 at 07:48
  • You can use Mappings in the CloudFormation template. It makes it easy to maintain a variable value across the template. For example, you can add the Lambda layer version in the Mappings and use it in multiple Lambda functions. If the layer version changes then you just need to update the value in the Mappings (one place) and not the whole document (see the sketch after these comments). – Diksha Tekriwal Jan 03 '19 at 09:01
  • Thanks Tekriwal. This definitely makes the template clearer, but would this help with the deployment? I.e. if the variable were updated to point to a new version of the layer, would CloudFormation see it as a change and update the Lambda accordingly? Something for me to try, I think... – Russell Keane Jan 07 '19 at 08:54
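A minimal sketch of the Mappings approach from the comment above (all names and versions are illustrative):

Mappings:
  LayerVersions:
    SharedCode:
      Version: '3'

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: my-function
      Runtime: nodejs8.10
      Handler: index.handler
      Role: arn:aws:iam::123456789012:role/service-role/role-name
      Code:
        S3Bucket: my-artifacts-bucket
        S3Key: my-function.zip
      Layers:
        - !Sub
          - 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:shared-code:${Version}'
          - Version: !FindInMap [LayerVersions, SharedCode, Version]

Since the mapping value feeds a resource property, changing it should register as a change to the function on the next stack update.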

3 Answers


Using the Serverless Framework to deploy, together with CloudFormation outputs, can help with this situation.

  1. Define your layer in its own service and create an Output resource (but don't create an export name). A minimal layer service is sketched below.
resources:
  Outputs:
    MYOUTPUTNAME:
      Value:
        Ref: MYLAYERLambdaLayer # LambdaLayer is a required suffix
  2. Reference the output as the layer for whatever function requires it:
functions:
  ...other required function keys in serverless
  layers:
    - ${cf:NAME_OF_STACK.MYOUTPUTNAME}
  3. Any time you redeploy the layer, you must force-redeploy the entire stack of functions that reference the layer (sls deploy --force). Redeploying just the function will not update the output reference.

Note: if you use Output export names, you will run into an error when redeploying the layer service, because the current version is still referenced by the importing stacks. It's therefore better to use a reference to the stack output, which doesn't cause this error.
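For reference, a minimal serverless.yml for the layer service in step 1 might look like this (service name and path are illustrative):

service: shared-layer

provider:
  name: aws
  runtime: nodejs8.10

layers:
  MYLAYER:
    path: layer   # directory containing the shared code

resources:
  Outputs:
    MYOUTPUTNAME:
      Value:
        Ref: MYLAYERLambdaLayer

The consuming service would then use ${cf:shared-layer-dev.MYOUTPUTNAME}; note that the Serverless stack name includes the stage suffix.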

knappsacks

You can create, alongside your Lambda layer, a parameter in Parameter Store:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

This parameter tracks the ARN along with the version; I created both resources (the Lambda layer and the parameter) in the same stack.
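A sketch of that idea, with illustrative names - the layer stack writes the versioned ARN into the parameter on every change:

Resources:
  SharedLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: shared-code
      CompatibleRuntimes:
        - nodejs8.10
      Content:
        S3Bucket: my-artifacts-bucket
        S3Key: shared-code.zip

  SharedLayerArnParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /lambda/layers/shared-code/arn
      Type: String
      Value: !Ref SharedLayer   # Ref returns the versioned layer ARN

Downstream function stacks can then read it via an SSM-backed parameter instead of hard-coding a version:

Parameters:
  SharedLayerArn:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /lambda/layers/shared-code/arn

then use !Ref SharedLayerArn in the function's Layers list. Note the value is only resolved when the consuming stack is created or updated, so the function stacks still need a redeploy to pick up a new version.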


Sounds like a similar situation to what I'm dealing with atm.

  • CloudFormation
  • shared complex code destined for change
  • Lambdas and Layers
  • python3.9
  • x86_64

My solution includes the following:

  • I'm using YAML.
  • I use the Parameters option to specify some user values.
  • I use the Mappings option to specify things like DB endpoints (dev, test, prod, etc.) based on the user parameters. (Both are sketched just below.)
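A rough sketch of those two sections (all values are illustrative):

Parameters:
  UserRegion:
    Type: String
    Default: us-east-1

  Environment:
    Type: String
    AllowedValues: [dev, test, prod]
    Default: dev

Mappings:
  EnvironmentMap:
    dev:
      DBEndpoint: dev-db.example.com
    test:
      DBEndpoint: test-db.example.com
    prod:
      DBEndpoint: prod-db.example.com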

Now, within the Resources option, I add a LayerVersion resource. This specifies where to get the zip file for the layer and its reference name, PandasLayer, within CF. (The Join constructs a name matching the resource already on S3.)

  PandasLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleArchitectures:
        - x86_64
      CompatibleRuntimes:
        - python3.9
      Content:
        S3Bucket: !Join ['',[!Ref UserRegion,'-','python-dependencies-bucket-stem']]
        S3Key: PandasLayersPython3_9.zip
      Description: This layer contains the Pandas stuff.
      LayerName: PandasLayersPython3_9
      LicenseInfo: MIT

Still within the Resources option, you define your Lambda with its information. Here you use the !Ref function to assign the layer to your Lambda:

  LambdaFunctionUsingTheLayer:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: DistinctFunctionName
      Description: "This is going to use the layer in processing."
      Handler: lambda_function.lambda_handler
      Role: arn:aws:iam::123456789012:role/service-role/role-name
      CodeUri: CodePathInProject/
      Runtime: python3.9
      Layers:
        - !Ref PandasLayer

Now, every time you deploy, the layer is created from the code in S3 (which you update), and the layer is referenced by its logical ID, so there's no need to handle ARNs.

You can attach up to 5 layers to a Lambda, but performance is generally better if you put everything together in one zip. In my case, my zipped file has the following structure:

  • /PandasLayersPython3_9/python/shared-code-file.py
  • /PandasLayersPython3_9/python/lib/python3.9/etc.

The Lambda will unpack your layer code and store it under the /opt directory. This can be verified in your function:

# list the layer contents that Lambda unpacked under /opt
import os
print(os.listdir('/opt/python'))
ECrow