
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in a cache while the function is warm to keep load times down. When files change in S3, a trigger runs the Lambda, but not all running instances of the Lambda restart and pull from S3.

Is there a way to bring down all instances of the Lambda, forcing a full cold start?

Also, I don't want to use Python.

Marc
  • Depending on the size of the S3 object you're caching, maybe you can verify that the ETag still matches to determine whether you should pull the object again. If the objects themselves aren't much larger than ETags, then this strategy doesn't make sense, of course. – RaGe Jun 15 '18 at 03:41
  • There are multiple folders and files that are looped through to create the object. Is the ETag for the whole bucket or for the individual files? It might make sense to do an async check on the ETag. – Marc Jun 15 '18 at 17:30
  • The ETag is per S3 object. – RaGe Jun 15 '18 at 18:20
  • Wait a sec, now I'm confused. Your Lambda triggers on S3 update events? That means every time your Lambda runs, S3 has changed and you need to re-pull. What's the point of caching? – RaGe Jun 15 '18 at 18:25
  • The Lambda is a router that redirects traffic. The routes are updated and stored in S3. To keep downtime low, the object is cached so that the majority of the time, when the Lambda runs, the routes are already mapped in an object. S3 is only checked on a cold start of the Lambda. S3 has a trigger for the Lambda to be called, but it doesn't restart all instances of the Lambda, so some of them have outdated routes. – Marc Jun 15 '18 at 20:51
  • OK, so there is another event source for the Lambda besides the S3 bucket that contains routes; that's what I was missing. – RaGe Jun 16 '18 at 12:26
  • The question you're really asking is: how and when do I invalidate my cache? Spitballing: have a separate Lambda react to S3 update events and set a "dirty" flag; whether that's a value in DynamoDB or an object in S3 is up to you. Your router Lambda, on being invoked, checks the dirty flag (a cheap operation) to determine whether or not to use its cached route list (see the sketch after these comments). – RaGe Jun 16 '18 at 12:29
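
A minimal sketch of that dirty-flag idea in Node.js, assuming the flag is a tiny S3 object that the S3-triggered Lambda re-uploads whenever the routes change, so its ETag moves on every update (the bucket, key, and function names here are hypothetical):

import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

// Hypothetical names for the flag object kept next to the route data.
const FLAG_BUCKET = 'my-routes-bucket';
const FLAG_KEY = 'routes.dirty';

const s3 = new S3Client({});

// Cached per warm container.
let cachedRoutes = null;
let cachedFlagETag = null;

export async function getRoutes(loadRoutesFromS3) {
    // One cheap HEAD request instead of re-reading every route file.
    const { ETag } = await s3.send(
        new HeadObjectCommand({ Bucket: FLAG_BUCKET, Key: FLAG_KEY })
    );
    if (cachedRoutes === null || ETag !== cachedFlagETag) {
        cachedRoutes = await loadRoutesFromS3(); // the expensive full pull
        cachedFlagETag = ETag;
    }
    return cachedRoutes;
}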

10 Answers

I made this answer based on my comment and verification from @DejanVasic:

aws lambda update-function-configuration --function-name "myLambda" --description "foo"

This will force the next invocation of the Lambda to "cold start".

To verify, run this CloudWatch Logs Insights query:

fields @timestamp, @message | sort @timestamp desc | limit 1000 | filter @message like "cold_start:true"
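
Note that the query assumes your function writes its own cold-start marker to the logs. A minimal sketch of emitting one in Node.js (the exact message format is your choice; the query above just needs "cold_start:true" to appear in the message):

// A module-scope flag is initialized once per container, so only the
// first invocation in a fresh container logs cold_start:true.
let coldStart = true;

export const handler = async event => {
    console.log(`cold_start:${coldStart}`);
    coldStart = false;
    // ... rest of the handler
};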
Baked Inhalf
  • Holy moly, this is madness. Why is it not refreshing hot containers on a new code deploy... – Somebody Oct 05 '20 at 12:52
  • Because a file being updated in S3 isn't a "new code deploy"; it's a completely separate service. – Nathanael Feb 08 '22 at 15:53
  • It seems a warm start reuses the same JVM, so if you create a static random UUID, it will stay the same across subsequent warm starts, which is, IMO, counter-intuitive and very dangerous! You may be working with stale objects/data inadvertently and never know it. – TriCore Jun 24 '22 at 00:15

Use the UpdateFunctionCode API endpoint to force a refresh of all containers. AWS SDKs wrap this up to make it easier for you to call the API using your preferred language.
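
For instance, a sketch with the JavaScript SDK v3, assuming the deployment package already sits in S3 (the function, bucket, and key names are placeholders; re-pointing the function at the same package still counts as a code update):

import { LambdaClient, UpdateFunctionCodeCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

// Updating the code, even to an identical package, retires existing
// containers, so subsequent invocations cold start with an empty cache.
await client.send(new UpdateFunctionCodeCommand({
    FunctionName: 'myLambda',      // placeholder
    S3Bucket: 'my-deploy-bucket',  // placeholder
    S3Key: 'myLambda/package.zip', // placeholder
}));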

Renato Byrro
  • Observe that the update will be applied only to new Lambda invocations. If a container is already serving a request, it will still use what's in its cache. Nevertheless, all subsequent requests will invoke a brand-new container with a cold start, and thus a cleared cache. – Renato Byrro Jun 18 '18 at 16:55
  • I came across this answer because we are starting to use provisioned concurrency. With provisioned concurrency it seems we have to version our Lambdas (since you can't provision based on $LATEST). But according to the documentation you can't modify the code of a published version, only the unpublished version. So I guess this solution won't help me (I would have to publish a new version of the Lambda). – mojoken Mar 04 '21 at 12:59

The easiest way I found was changing something in Basic Settings, like the timeout.

I upped the timeout by one second, saved, and the function got refreshed.

user3041539

Simply add a new environment variable and/or change an existing one. I created one named BOGUS and gave it a number that I increment whenever I want to force a cold start.
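
For example, from the CLI (a sketch; be aware that the --environment shorthand replaces the function's entire set of variables, so include any others you need to keep):

aws lambda update-function-configuration --function-name "myLambda" --environment "Variables={BOGUS=42}"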

johncurrier

Currently, there is no way to force restarts on running Lambda containers.

You can, however, redeploy the function so that it will start using new containers from that point onwards.

Noel Llevares
  • Is there a way to automate the redeploy? – Marc Jun 15 '18 at 17:26
  • How about `aws lambda update-function-configuration --function-name "myLambda" --description "foo"`? This will force the next invocation of the Lambda to "cold start". – Baked Inhalf Feb 22 '19 at 11:52
  • @BakedInhalf I can confirm your solution works perfectly. Running this CloudWatch query: `fields @timestamp, @message | sort @timestamp desc | limit 1000 | filter @message like "cold_start:true"` will show cold starts appearing in the Lambda logs after running update-function-configuration. Thank you – Dejan Vasic Mar 26 '20 at 22:53
  • @DejanVasic I made an answer based on our comments – Baked Inhalf Mar 27 '20 at 08:34

If you are using the Lambda versioning system, another way to do this is by publishing a new version and using an alias to direct all traffic to it.

Here's an example:

Publish version: aws lambda publish-version --function-name your-function-name-here

Update the alias pointing to the new version: aws lambda update-alias --function-name your-function-name-here --name alias-name-here --function-version 123 (use the function version in the output message from the first command above)
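
A sketch of scripting both steps together in a bash-like shell; --query Version extracts the new version number from the publish-version response:

VERSION=$(aws lambda publish-version --function-name your-function-name-here --query Version --output text)
aws lambda update-alias --function-name your-function-name-here --name alias-name-here --function-version "$VERSION"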

Ludicrous

The only way to force Lambda to discard existing containers is to redeploy the function with something different.

Check out my answer here: Force Discard AWS Lambda Container

Good luck, Moe

Moe

In addition to some of the valid answers above: I happened to run an experiment on the average AWS Lambda instance lifetime. I could not find instances that ran, on average, for much longer than two hours: https://xebia.com/blog/til-that-aws-lambda-terminates-instances-preemptively/.

TL;DR: AWS Lambda is preemptively terminating instances (even those handling traffic) after two hours, with a standard deviation of 30 minutes.

Jochem Schulenklopper

The simplest answer I found for this question: make a small change to the function, like adding a comment line or removing some white space, and then redeploy it.

The deployment clears the cache.

Nilesh

Following Renato Byrro's answer, I made a Lambda function that uses the JavaScript AWS SDK to restart another Lambda function by updating its description:

import { LambdaClient, UpdateFunctionConfigurationCommand } from '@aws-sdk/client-lambda';

// Exported as the Lambda handler; invoke it with { "functionName": "..." }.
export const handler = async event => {
    try {
        // When this runs inside Lambda, the client picks up the region and
        // credentials from the execution role automatically; the explicit
        // values below are only needed when running outside Lambda.
        const client = new LambdaClient({
            region: 'your region here',
            credentials: {
                accessKeyId: 'your access key id',
                secretAccessKey: 'your secret access key',
            },
        });

        // Touching the description counts as a configuration update, so
        // existing containers are retired and the next invocations cold start.
        const command = new UpdateFunctionConfigurationCommand({
            FunctionName: event.functionName,
            Description: `forced update ${Date.now()}`,
        });

        const data = await client.send(command);

        console.log(data);
        return data;
    } catch (error) {
        console.error(error);
        return error;
    }
};

It seems that is enough to restart the Lambda and clear its in-memory cache.

Paul Şular