
The following is in the context of Node.js and a monorepo (based on Lerna).

I have an AWS stack with several AWS Lambda functions inside, deployed by means of AWS CloudFormation. Some of the Lambdas are simple (a single small module) and can be inlined:

https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-lambda.Code.html#static-from-wbr-inlinecode

import * as fs from 'fs';
import { Code, Function, Runtime } from '@aws-cdk/aws-lambda';

const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromInline(fs.readFileSync(require.resolve(<relative path to lambda module>), 'utf-8')),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});

Some have no dependencies and are packaged as follows:

const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromAsset(<relative path to folder with lambda>),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});

But in the case of relatively large Lambdas with dependencies, as I understand it, the only way to package them (proposed by the API) is @aws-cdk/aws-lambda-nodejs:

import * as cdk from "@aws-cdk/core";
import * as lambdaNJS from "@aws-cdk/aws-lambda-nodejs";
import { NodejsFunctionProps } from "@aws-cdk/aws-lambda-nodejs";

export function createNodeJSFunction(
  scope: cdk.Construct, id: string, nodejsFunctionProps: Partial<NodejsFunctionProps>
) {
  const params: NodejsFunctionProps = Object.assign({
    parcelEnvironment: { NODE_ENV: 'production' },
  }, nodejsFunctionProps);

  return new lambdaNJS.NodejsFunction(scope, id, params);
}

For standalone packages it works well, but in the case of the monorepo it just hangs on synth of the stack. I am looking for alternatives, because I believe it is not a good idea to bundle (Parcel) back-end sources.

3 Answers


I've created the following primitive library to zip only the required node_modules despite package hoisting.

https://github.com/redneckz/slice-node-modules

Usage (from monorepo root):

$ npx @redneckz/slice-node-modules \
  -e packages/some-lambda/lib/index.js \
  --exclude 'aws-*' \
  --zip some-lambda.zip

--exclude 'aws-*': the AWS SDK is provided by the Lambda runtime by default, so there is no need to package it.


Here is an example of using CloudFormation and a template.yaml.

Create a Makefile with the following targets:

# Application
APPLICATION=application-name

# AWS
PROFILE=your-profile
REGION=us-east-1
S3_BUCKET=${APPLICATION}-deploy

install:
    rm -rf node_modules
    npm install

clean:
    rm -rf build

build: clean
    mkdir build
    zip -qr build/package.zip src node_modules
    ls -lah build/package.*

deploy:
    sam package \
        --profile ${PROFILE} \
        --region ${REGION} \
        --template-file template.yaml \
        --s3-bucket ${S3_BUCKET} \
        --output-template-file ./build/package.yaml

    sam deploy \
    --profile ${PROFILE} \
    --region ${REGION} \
    --template-file ./build/package.yaml \
    --stack-name ${APPLICATION}-lambda \
    --capabilities CAPABILITY_NAMED_IAM

Make sure the S3 bucket is created; you could add this step as another target in the Makefile.
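For instance, a hypothetical `bucket` target (the target name is mine) could create it with the AWS CLI before the first deploy:

```make
bucket:
	aws s3 mb s3://${S3_BUCKET} --profile ${PROFILE} --region ${REGION}
```

Run `make bucket` once per account/region; `aws s3 mb` fails if the bucket name is already taken.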

To build and deploy on AWS:

make build
make deploy
Traycho Ivanov
  • In our case, the size of the node_modules folder is ~1 GB (monorepo), so just zipping all dependencies will not work. That's why I propose "slicing" node_modules before zipping. And do not forget about hoisting to the monorepo root. – Alexander Alexandrov Aug 18 '20 at 07:15
  • Having 1 GB for a Lambda is obviously not workable; you have to split it into multiple Lambdas or heavily reduce the dependencies. Try to see what the size is after zipping them. I would target 20 MB max after zipping; if your Lambda is bigger, your cold start will be slower. – Traycho Ivanov Aug 18 '20 at 07:37
  • We are still talking about a monorepo. These are not only the Lambda's deps but those of all the monorepo packages. – Alexander Alexandrov Aug 18 '20 at 09:01
  • I would say every function has its own `node_modules`. Here is an example of a monorepo with Serverless, https://github.com/zotoio/serverless-central; it is not required to split anything. – Traycho Ivanov Aug 18 '20 at 09:12

I have struggled with this as well, and I was using your slice-node-modules successfully for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.

I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.

For example, from the README, something like this might be in your Lambda function's package.json:

"scripts": {
  ...
  "clean": "rimraf build lambda",
  "compile": "tsc -p tsconfig.build.json",
  "package": "l2l -i build -o lambda",
  "build": "yarn run clean && yarn run compile && yarn run package"
},

In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.

It's likely that your solution is still working fine for your needs, but I thought I would share this here as an alternative in case others are in a similar situation and have trouble using slice-node-modules.

Joe Lafiosca