2

Following the best practice "Take advantage of execution environment reuse to improve the performance of your function", I am investigating whether caching the boto3 client has any negative effects when using Lambda Provisioned Concurrency. The boto3 client is cached through the @lru_cache decorator and is lazily initialized. The concern is that the underlying credentials of the boto3 client are never refreshed, because Provisioned Concurrency keeps the execution environment alive for an unknown amount of time - potentially longer than the validity of the temporary credentials that the Lambda environment injected.
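For reference, here is a minimal sketch of the caching pattern I am describing (the helper name and the S3 service are just illustrative):

from functools import lru_cache

import boto3

@lru_cache(maxsize=1)
def get_client():
    # Created lazily on first use, then reused for the lifetime of the
    # execution environment - the pattern this question is about.
    return boto3.client("s3")

def lambda_handler(event, context):
    # Every invocation served by this environment reuses the cached client,
    # and therefore the credentials it resolved when it was created.
    return {"buckets": [b["Name"] for b in get_client().list_buckets()["Buckets"]]}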

I couldn't find any documentation explaining how this case is handled. Does anyone know how the Lambda environment handles refreshing these credentials?

shimo
Giulio Micheloni

2 Answers

2

If you're using hardcoded credentials:

You have a bigger security issue than "re-used" credentials and should remove them immediately.

From the documentation:

Do NOT put literal access keys in your application files. If you do, you create a risk of accidentally exposing your credentials if, for example, you upload the project to a public repository.

Do NOT include files that contain credentials in your project area.

Replace them with an execution role.
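
For example, instead of constructing the client with literal keys, let the default credential chain resolve the role's credentials (a minimal sketch; the S3 service is illustrative):

import boto3

# Don't do this - literal keys shipped with the function:
# s3 = boto3.client(
#     "s3",
#     aws_access_key_id="AKIA...",
#     aws_secret_access_key="...",
# )

# Do this - no explicit credentials; the default credential chain picks up
# the temporary credentials of the Lambda execution role automatically:
s3 = boto3.client("s3")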


If you're using an execution role:

You're not providing any credentials manually for any AWS SDK calls. The credentials for the SDK come automatically from the Lambda function's execution role.

Even if Boto3 role credentials are shared across invocations under the hood for provisioned concurrency (nobody outside AWS knows for sure), what would the issue be?

Let Amazon deal with role credentials - it's not your responsibility to manage that at all.


I would worry more about security flaws in the application code than about Amazon automatically authenticating SDK requests with execution role credentials.

Ermiya Eskandary
  • +1 on the security concern. In fact, I do use the execution role; I wanted to get confirmation of what you said. Are we sure that the Lambda system takes care of killing and recreating execution environments whose credentials have expired? – Giulio Micheloni Oct 15 '21 at 14:15
  • If they expire, they must have *something* internally in the runtime to refresh them as otherwise, how would they work once expired? Their implementation will be as secure as you can possibly get as they ensure the security *of* the cloud. Use roles and you won't need to worry about any form of credentials - it's an unnecessary headache to try to think of internal implementations :) – Ermiya Eskandary Oct 15 '21 at 14:17
  • It is actually important to know how they rotate these temporary credentials, in order to avoid "surprises" in production. Do you have any reference you can share about what you said? – Giulio Micheloni Oct 18 '21 at 12:46
  • There are no surprises when using execution roles as recommended by AWS - I don't, no; try AWS support. Everything I've said is factual according to the AWS docs – Ermiya Eskandary Oct 18 '21 at 12:56
2

They aren't.

The documentation for Boto3 doesn't do a very good job of describing the credential chain, but the CLI documentation shows the various sources for credentials (and since the CLI is written in Python, it provides authoritative documentation).

Unlike EC2 and ECS, which retrieve role-based credentials from instance metadata, Lambda is provided with credentials in environment variables. The Lambda runtime sets those environment variables when it starts, and every invocation of that Lambda runtime uses the same values.
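
A quick way to confirm this from inside a handler (a minimal sketch, assuming the standard AWS_* variables that the Lambda runtime injects):

import os

import boto3

def lambda_handler(event, context):
    # The credentials that boto3 resolves should be exactly the ones the
    # runtime injected via environment variables when the environment started.
    creds = boto3.session.Session().get_credentials().get_frozen_credentials()
    return {
        "credentials_come_from_env": creds.access_key == os.environ.get("AWS_ACCESS_KEY_ID")
        and creds.token == os.environ.get("AWS_SESSION_TOKEN")
    }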

Concurrent Lambdas receive separate sets of credentials, just like you would if you made concurrent explicit calls to STS AssumeRole.

Provisioned concurrency is a little trickier. You might think that the same Lambda runtime lives "forever," but in fact it does not: if you repeatedly invoke a Lambda with provisioned concurrency, you'll see that at some point it creates a new CloudWatch log stream. This is an indication that Lambda has started a new runtime. Lambda will finish initializing the new runtime before it stops sending requests to the old runtime, so you don't get a cold start delay.
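
If you want to watch this happen, list the function's log streams ordered by last event time; a new stream appearing means a new execution environment (a sketch; the log group name is illustrative):

import boto3

logs = boto3.client("logs")

# Most recently active streams first; a new stream name showing up here
# means Lambda has started a fresh execution environment for the function.
resp = logs.describe_log_streams(
    logGroupName="/aws/lambda/InvocationExplorer",
    orderBy="LastEventTime",
    descending=True,
    limit=5,
)
for stream in resp["logStreams"]:
    print(stream["logStreamName"], stream.get("lastEventTimestamp"))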


Update:

Here's a Python Lambda that demonstrates what I've said above. As part of its initialization code (outside the handler) it records when it was first initialized, and then it reports that timestamp whenever it's invoked. It also logs the current contents of the "AWS" environment variables, so that you can see if any of them change.

import json
import os
from datetime import datetime

print("initializing environment")
init_timestamp = datetime.utcnow()

def lambda_handler(event, context):
    print(f"environment was initialized at {init_timestamp.isoformat()}")
    print("")
    print("**** env ****")
    keys = list(os.environ.keys())
    keys.sort()
    for k in keys:
        if k.startswith("AWS_"):
            print(f"{k}: {os.environ[k]}")

Configure it for provisioned concurrency, then use this shell command to invoke it every 45 seconds:

while true ; do date ; aws lambda invoke --function-name InvocationExplorer:2 --invocation-type Event --payload '{"foo": "irrelevant"}' /tmp/$$ ; sleep 45 ; done

Leave it running for an hour or more, and you'll get two log streams. The first stream looks like this (showing start and end with several hundred messages omitted):

2021-10-19T16:19:32.699-04:00   initializing environment
2021-10-19T16:30:57.240-04:00   START RequestId: a27f6802-c7e6-4f70-b890-2e0172d46780 Version: 2
2021-10-19T16:30:57.243-04:00   environment was initialized at 2021-10-19T16:19:32.699455 
...
2021-10-19T17:07:24.853-04:00   END RequestId: dd9a356f-7928-4bf9-be56-86f4c5e1bb64
2021-10-19T17:07:24.853-04:00   REPORT RequestId: dd9a356f-7928-4bf9-be56-86f4c5e1bb64 Duration: 1.00 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 39 MB 

As you can see, the Lambda was initialized at 16:19:32, which was when I enabled provisioned concurrency. The first request was handled at 16:30:57.

But what I want to call out is the last request in this log stream, at 17:07:24, or approximately 48 minutes after the Lambda was initialized.

The second log stream starts like this:

2021-10-19T17:04:08.739-04:00   initializing environment
2021-10-19T17:08:10.276-04:00   START RequestId: 6b15ba7c-91e2-4f91-bb6c-99b9877f1ebf Version: 2
2021-10-19T17:08:10.279-04:00   environment was initialized at 2021-10-19T17:04:08.739398 

So as you can see, the second environment was initialized several minutes before the final request in the first stream, yet it only started handling invocations after the first stream had stopped receiving them.

This is, of course, not guaranteed behavior. It's how Lambda works today, and it may change in the future. But change is unlikely: the current implementation behaves as documented, and any change runs the risk of breaking customer code.

Parsifal
  • Can you share any reference on what you said? – Giulio Micheloni Oct 18 '21 at 12:43
  • @GiulioMicheloni - For the use of environment variables or the restart behavior? The former is described in the documentation that I linked to. I don't have authoritative (AWS) documentation for the latter, but have run extensive tests to convince myself that it is so. I've edited my answer to show you the code that I used to run these tests so that you can reproduce them. – Parsifal Oct 19 '21 at 12:24
  • So based on what you said, do you know of any way to get a "new" set of temporary credentials? Is lambda changing the environment variables out from underneath you? Or is the lambda execution environment being forcibly stopped (new cold start) when the credentials expire? Or.... some other thing? – Carrot Nov 29 '22 at 23:21