Edited to add more info.
I am having major problems with any library that relies on "request" when run in AWS Lambda. Running the same code on the same version of Node locally works fine.
I have tried all the things suggested in this issue https://github.com/request/request/issues/2047
I have also tried the AWS Lambda forum, Amazon Support, the Slack channel and the request-promise repo. Request itself just directs queries here.
I can't post a full example here, as my function has 18 files and 8 packages, all with their own sub-packages. However, it is well below Lambda's maximum code size. This is exactly the code I am running within the function itself:
'PodcastInvocationIntent': function () {
    feedparser.parse("http://feeds.soundcloud.com/users/soundcloud:users:238643239/sounds.rss").then(items => {
        const now = new Date();
        console.log(`within feedparser and got item1 of ${items[0].title} at ${now}`);
    }).catch(err => {
        console.log(`Caught an error in checking - ${err}`);
    });
},
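For comparison, this is roughly the standalone script I use to confirm that the same call works locally on Node 8.10 (a sketch; it assumes feedparser-promised is installed and uses the same feed URL as the handler above):

// local-test.js - run with `node local-test.js`, outside Lambda.
// Same call as the Lambda handler above.
const feedparser = require('feedparser-promised');

const url = 'http://feeds.soundcloud.com/users/soundcloud:users:238643239/sounds.rss';

feedparser.parse(url).then(items => {
    const now = new Date();
    console.log(`within feedparser and got item1 of ${items[0].title} at ${now}`);
}).catch(err => {
    console.log(`Caught an error in checking - ${err}`);
});

Run locally, this logs the first episode title within a few hundred milliseconds, every time.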
I invoke the function, which simply calls a valid, fast-responding podcast feed. Note: I can ensure what amounts to a "cold start" by doing ask deploy -t lambda.

Here is the log from that first (cold-start) invocation:
START RequestId: dcae5bc2-4883-11e8-a386-853cdfb729cc Version: $LATEST
2018-04-25T12:26:22.593Z dcae5bc2-4883-11e8-a386-853cdfb729cc in _MAINMENU_MODE and caught PodcastInvocationIntent
END RequestId: dcae5bc2-4883-11e8-a386-853cdfb729cc
REPORT RequestId: dcae5bc2-4883-11e8-a386-853cdfb729cc Duration: 129.95 ms Billed Duration: 200 ms Memory Size: 1024 MB Max Memory Used: 48 MB
If I invoke the function again later, I get the rest of the first request. Notice that the RequestId on the feedparser line is the one from the first invocation (ending 853cdfb729cc), "brought forward" into a new invocation nearly a minute later whose own RequestId ends 171032bc3aa8:
START RequestId: f6dcc8de-4883-11e8-be3e-171032bc3aa8 Version: $LATEST
2018-04-25T12:27:05.605Z dcae5bc2-4883-11e8-a386-853cdfb729cc within feedparser and got item1 of Episode 020 - Building Community in the Era of Voice at Wed Apr 25 2018 12:27:05 GMT+0000 (UTC)
2018-04-25T12:27:05.750Z f6dcc8de-4883-11e8-be3e-171032bc3aa8 in _MAINMENU_MODE and caught PodcastInvocationIntent
END RequestId: f6dcc8de-4883-11e8-be3e-171032bc3aa8
REPORT RequestId: f6dcc8de-4883-11e8-be3e-171032bc3aa8 Duration: 327.94 ms Billed Duration: 400 ms Memory Size: 1024 MB Max Memory Used: 54 MB
Note that this is nearly a minute later. My Lambda timeout is set to 6 seconds, my request timeout is set to 3 seconds, and the site responds within 300 ms. And you can see from the headers sent by the server that what is getting logged is definitely the first request.
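For reference, the 3-second timeout is passed as a plain request option, roughly like this (a sketch of the pattern in use rather than a copy of my code; this is the request-promise variant, and timeout is the standard request option in milliseconds):

// Sketch of how the 3-second timeout is set - the pattern in use, not my exact code.
const rp = require('request-promise');

rp({
    uri: 'http://feeds.soundcloud.com/users/soundcloud:users:238643239/sounds.rss',
    timeout: 3000  // fail the request if the server has not started responding within 3 seconds
}).then(body => {
    console.log(`got ${body.length} bytes`);
}).catch(err => {
    console.log(`request error - ${err}`);
});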
Another thing: here are a few REPORT summaries from runs where it only worked the second time:
REPORT RequestId: 55fe3685-487f-11e8-9d8a-7f110e92019c Duration: 29.75 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 55 MB < fail
REPORT RequestId: 6b657de4-487f-11e8-b3d9-775cb704cd69 Duration: 119.94 ms Billed Duration: 200 ms Memory Size: 1024 MB Max Memory Used: 55 MB < OK
REPORT RequestId: 79c7e5ef-487f-11e8-a6be-039c7c5578d1 Duration: 18.75 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 65 MB < fail
REPORT RequestId: 95b42e1f-487f-11e8-863b-1bf45c029658 Duration: 122.49 ms Billed Duration: 200 ms Memory Size: 1024 MB Max Memory Used: 65 MB < OK
Note that on the failing first attempts the invocation finishes in 18 or 29 ms, but the podcast server NEVER responds faster than 55 ms (max 300 ms). To be clear, I have tried multiple podcast feeds of different sizes on different servers.
(No idea why the last two use 10 MB more memory; it's the same code!)
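The response-time figures above come from timing the same feed directly, along these lines (a sketch; it assumes request is installed locally):

// Sketch of how I measure the feed server's response time directly, outside Lambda.
const request = require('request');

const start = Date.now();
request('http://feeds.soundcloud.com/users/soundcloud:users:238643239/sounds.rss', (err, response) => {
    if (err) {
        console.log(`request error - ${err}`);
        return;
    }
    console.log(`server answered ${response.statusCode} in ${Date.now() - start} ms`);
});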
This makes no sense! Any ideas? Node 8.10, request 2.85.0, alexa-sdk 1.0.25; loads of spare memory on Lambda, not hitting any limits.
I have tried request, request-promise, and feedparser-promised, and all display the same symptoms; a plain-request sketch of the handler is below. Thank you.
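For completeness, the plain-request variant of the handler looks roughly like this (a sketch; it assumes request is required at the top of the file, just as feedparser is for the handler above, and the request-promise and feedparser-promised versions differ only in which module makes the HTTP call):

'PodcastInvocationIntent': function () {
    // Sketch of the plain-request variant - same shape as the feedparser handler above.
    request("http://feeds.soundcloud.com/users/soundcloud:users:238643239/sounds.rss", (err, response, body) => {
        const now = new Date();
        if (err) {
            console.log(`Caught an error in checking - ${err}`);
            return;
        }
        console.log(`within request and got ${body.length} bytes at ${now}`);
    });
},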