
I'd ideally like a way of testing lambdas etc before deploying for quick iterative development.

I have tried to use https://aws.amazon.com/blogs/compute/better-together-aws-sam-and-aws-cdk/ but it appears that this does not yet work with CDK pipelines, as sam local can't see the nested CDK stacks generated by the pipeline.

Are there any good tricks or tools to solve this?

Am I barking up the wrong tree here? Should I just be thinking about developing lambdas directly in AWS/Cloud9?

Jon Duffy

2 Answers


TL;DR: Yes, you can shorten CDK development cycles. Keep your pipeline for prod deploys. For development, deploy a copy of the app to a "sandbox" account using the CDK CLI. This sandbox non-pipeline app deploys faster to the cloud and works with the local SAM lambda testing you mention.

Deploy #1: Pipeline for Prod Deploys

As it seems you do, we deploy to our production account via a CDK pipeline setup. The pipeline runs in a test account.

The pipeline's cdk.Stage calls a makeAppStacks function, which encapsulates our stack definitions. The function makes a second appearance below in Deploy #2 when we define our non-pipeline sandbox app. We write the stack code once, but deploy it as a pipeline and standalone app.

// DeployStage.ts
// The stage gets added to the pipeline for deploys to test, prod, etc.

import * as cdk from '@aws-cdk/core'; // CDK v1; in v2, import from 'aws-cdk-lib' and 'constructs'
import { makeAppStacks } from './makeAppStacks'; // path is illustrative

export class DeployStage extends cdk.Stage {
  constructor(scope: cdk.Construct, id: string, props: DeployStageProps) {
    super(scope, id, props);

    // actually adds the stack constructs to the app
    makeAppStacks(
      this,
      props.appName,
      props.env.account,
      props.env.region,
    );
  }
}

Deploy #2: cdk deploy to a Sandbox Account for Iterative Development

As you say, the pipeline deploy is too slow for iterative development - you don't want to wait 15 minutes for CodePipeline to pull from the repo, build, and deploy for every minor change in a feature branch.

So for faster development, we deploy the same stacks to a sandbox account via the CLI, which deploys to the cloud faster and can use local SAM debugging.

# deploy the app to a sandbox account for fast(er) iterations
# app.ts uses the AWS_ACCOUNT env var to dynamically deploy
AWS_ACCOUNT=123456789000 npx cdk deploy '*' -a 'node ./bin/app' --profile my-sandbox
// bin/app.ts
// called from the cli, deploys to the sandbox account

import { App } from '@aws-cdk/core'; // CDK v1; in v2, import from 'aws-cdk-lib'
import { makeAppStacks } from '../lib/makeAppStacks'; // path is illustrative

const app = new App();

const account = process.env.AWS_ACCOUNT;
if (!account) throw new Error('Set the AWS_ACCOUNT env var to the sandbox account id');

// reused stack definitions!
makeAppStacks(app, 'UnicornApp', account, 'us-east-1');
fedonev
  • Is this problematic for repos that contain both application code and infrastructure code? I have two top-level folders, one `infrastructure` and one `client`. Infrastructure contains my pipeline code and `cdk deploy` deploys that pipeline which then deploys the stack (and the application code inside an S3 bucket). In this manner I would think I would have to maintain two codebases if I were to deploy a copy of it in a dev account with no pipeline. Am I thinking about it wrong? – anondev May 27 '22 at 16:47
  • @DillonHarless Not problematic at all. The pattern is a _great_ fit for one-repo-for-infra-and-code setups. One codebase gets deployed in multiple, DRY ways. Feel free to detail your concern in a new question. I'll be sure to look at it. – fedonev May 28 '22 at 08:43

Unless you are specifically using the Context object in your lambdas, you can test the lambda_handler function like any other function. If, for instance, your lambdas are in Python, a unit test that calls your handler function will act just like any other function call. You will need to mock up your event - there are several libraries that help provide mock events for various AWS services - and provide a blank JSON-like object (a dict in Python) for the context.

Of course, this is just unit testing. At a higher level (integration testing) you will need some kind of deployment, but generally those should be done only every so often anyway. With well-written, testable code you can test 90% of the behavior before ever having to deploy; the other 10% can be covered via the CLI, the console, or other structures you build around it.
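One way to keep that 90% unit-testable is to inject the AWS client instead of creating it inside the handler, so the test never touches the cloud. A minimal sketch, assuming a DynamoDB-backed handler; the table, key, and field names here are made up for illustration:

```python
from unittest.mock import MagicMock

def lambda_handler(event: dict, context, dynamodb=None):
    """Handler with an injectable client; real invocations pass no client."""
    # In the cloud you would lazily create the real client here, e.g.:
    # dynamodb = dynamodb or boto3.resource("dynamodb")
    table = dynamodb.Table("users")  # hypothetical table name
    item = table.get_item(Key={"id": event["userId"]})["Item"]
    return {"statusCode": 200, "body": item["name"]}

def test_lambda_handler_reads_user():
    # the fake client stands in for boto3; no network, no deploy
    fake_db = MagicMock()
    fake_db.Table.return_value.get_item.return_value = {"Item": {"name": "Ada"}}

    response = lambda_handler({"userId": "42"}, {}, dynamodb=fake_db)

    assert response == {"statusCode": 200, "body": "Ada"}
    fake_db.Table.assert_called_once_with("users")
```

The default argument keeps the production call signature unchanged; only tests pass the extra parameter.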

As asked for in a comment:

# my_lambda_handler_tests.py
from unittest.mock import MagicMock  # "mock" is part of the stdlib as unittest.mock

def lambda_handler(event: dict, context):
    # your lambda code goes here, in a Clean Code, unit-testable way
    return result

def test_lambda_handler():
    test_event = {
        "MyEventKeys": "my_event Values"
    }
    mock_context_object = MagicMock()
    mock_context_object.function_name = "TestFunction"
    mock_context_object.memory_limit_in_mb = "2048"
    mock_context_object.invoked_function_arn = "Arn"
    mock_context_object.aws_request_id = "RequestId"

    response = lambda_handler(test_event, mock_context_object)

    assert response  # whatever you are going to test against

# command line, or whatever tool you're using for running tests:
$ pytest my_lambda_handler_tests.py

This works just fine without any need for SAM deployments or the cloud. This is because of the way a Lambda is initialized: the SAM/CloudFormation template or the CDK stack tells the Lambda service on the AWS backend where to find this lambda_handler function (note: lambda_handler is just a conventional name - you can name it whatever you want). When the lambda is invoked, the code on the backend simply calls the function it is pointed at, after setting up its log stream and a few other maintenance tasks.

There is nothing special about the lambda_handler function. It is just another function, and as such it can be tested like any other function in your system.

If you aren't specifically using the context object, you can literally just pass a blank {} to it in the test as well: response = lambda_handler(test_event, {})

From here, with good Clean Code and an understanding of writing testable code, you can easily unit test everything before ever going to the cloud. With a little extra work and some contract/integration tests set up in the right manner, you can even test an entire state machine without ever going to the cloud - understanding that every lambda invocation is just a call to the lambda_handler function, just as it would be in a single monolith app; it's just distributed across different container instances and connected via backend APIs (the 'events' passed between them).
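The state-machine point above can be sketched locally by piping one handler's output dict into the next as its event - no Step Functions deployment needed. The handlers and the order-processing flow here are hypothetical, purely for illustration:

```python
def validate_handler(event: dict, context) -> dict:
    # first "state": reject empty orders
    if not event.get("items"):
        raise ValueError("empty order")
    return {**event, "valid": True}

def price_handler(event: dict, context) -> dict:
    # second "state": total up the order
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {**event, "total": total}

def run_state_machine(event: dict) -> dict:
    # each "state transition" is just a function call with a plain dict,
    # mirroring how Step Functions passes one lambda's output to the next
    for handler in (validate_handler, price_handler):
        event = handler(event, {})
    return event

result = run_state_machine({"items": [{"price": 5, "qty": 2}, {"price": 1, "qty": 3}]})
assert result["valid"] is True and result["total"] == 13
```

A real state machine adds retries, branching, and error states, but the core data flow stays this simple: dict in, dict out.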

lynkfox
  • Can you elaborate on the first part? To clarify, you make it sound like `sam local-invoke` does indeed work with CDK Pipelines, however I'm also under the impression that it does not. How does the use of the Context Object play into that? Thanks in advance – anondev May 25 '22 at 17:58
  • `sam` does not. I mean that in Python, `def lambda_handler(event, context)` is just another function. It doesn't need the entire backend that SAM does to test... if you write your code in a unit-testable way, then you can just use pytest to test the lambda_handler as you would any other function. Forget that it's a lambda handler; it's just a function call. If you are using the context object (e.g. using aws_powertools_logger and inject context), then you can mock the context object with MagicMock. I'll add a code example to the above. – lynkfox May 26 '22 at 13:48
  • @DillonHarless I updated the above with some examples - I hope that helps. – lynkfox May 26 '22 at 13:58
  • Thanks @lynkfox. I understand what you meant now. I guess after reading your post, it seems that mocking responses will be essential to test my Lambda Resolvers (I'm using AppSync for my API). I'm pretty much hitting a database every time I use Lambda since I'm mostly using Lambda as data sources. – anondev May 27 '22 at 16:31