I have an AWS Lambda function which calls a deep learning function on Algorithmia, does some post processing on the results and then returns some data. Algorithmia provides a python client which I am using that just makes things a little easier to send a request to an algorithm on the Algorithmia platform.

The problem is as follows: When an Algorithmia function hasn't been called for a while it is unloaded and the first call to warm it up (cold start) takes a while, possibly 30 seconds. If my Lambda function is going to be waiting for 30 seconds for a response whenever it happens to be triggering the Algorithmia function from a cold start that's going to be very expensive and wasteful.

Is there some way to send off a HTTP request in Lambda and when the request is finished the results are piped into a new Lambda function so as to not require a Lambda function to be waiting the entire time and wasting resources? I'd expect not as I'm not sure how that would practically work - does anyone have other ideas as to how to avoid waiting a while for a response and wasting Lambda resources?

Edit: In most cases (except obviously the ones where the Algorithmia algorithm takes a while to load from cold start) latency is an issue, and I can't afford to increase it with a workaround such as having the Algorithmia function write its response to S3 (for example) and then triggering a Lambda function.

abagshaw
  • What environment is this running in? Desktop? Webserver? Linux? Windows? etc. – wallyk May 07 '17 at 05:15
  • Sorry I'm not sure what you mean. Algorithmia is a hosted cloud ML service if that's what you're asking. – abagshaw May 07 '17 at 05:15
  • I see no other way besides using a VM (worker machine) with the same trigger that your lambda has - to operate Algorithmia - and then the VM either processes the result or passes it to a queue that triggers a lambda for the post-processing. – johni May 07 '17 at 06:37
  • Erm, I haven't used Algorithmia - can't you trigger an event to fire up a function once it's done its thing? – Mrk Fldig May 07 '17 at 13:39
  • Old thread, but I second @johni's approach. I have a similar issue (cold start ~1min, need <10s response latency). I run a single VM for cases where the API is not in use and thus Algorithmia is cold, and essentially rely on Algorithmia primarily for the autoscaling during high request volume. Algorithmia caller times out after 3s and queries VM for response. I do wonder if there's a better approach, though. – rococo Dec 24 '19 at 10:10

2 Answers

A lot of Algorithmia functions that output a file allow you to specify the output location (often an output parameter of the input JSON). If that assumption holds for your case, then you can have the Algorithmia function write directly to an S3 bucket and have S3 trigger a separate lambda function. The process would look like this:

  • Add an S3 data source to your Algorithmia account, and configure the permissions according to your needs.

  • When calling the algorithm, set the output parameter to use that S3 data source, e.g. "output": "s3://algorithm-name/sample-0001.png"

  • Configure Algorithmia's Python client to disregard the output. This causes the request to return immediately instead of waiting for the function to complete:

import Algorithmia
from Algorithmia.algorithm import OutputType

client = Algorithmia.client('YOUR_API_KEY')
(client.algo("username/algoname")
       .set_options(output=OutputType.void)
       .pipe(input))
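If you go this route, the second Lambda function's handler receives a standard S3 put-notification event carrying the bucket and key of the object Algorithmia wrote. A minimal sketch of that handler - the post-processing step is a placeholder, and the record layout follows the documented S3 event structure:

```python
def handler(event, context):
    """Triggered by S3 when Algorithmia writes its output object."""
    results = []
    # An S3 notification event carries one record per created object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: download s3://{bucket}/{key} here and run
        # your post-processing on it.
        results.append({"bucket": bucket, "key": key})
    return results
```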
anowell
    Thanks for your idea, but if you look at the edit in my post I mentioned that this kind of solution probably won't work for my needs. Every ms counts, and I assume writing a file to S3 and then triggering a lambda function would add at least a few hundred ms (or maybe a few seconds - I don't know how fast it is). – abagshaw May 09 '17 at 00:14
  • ah... I misunderstood "expensive" as question of pricing rather than latency. Regularly pinging the service as mentioned by @joarleymoraes is an option today. – anowell May 09 '17 at 00:23
You could create a Lambda function whose only job is to call the Algorithmia API periodically, just to "keep it warm" for your main processing function. You could use a Lambda scheduled event for this.

joarleymoraes
  • I agree with @joarleymoraes - this is what I call a ping function. It also doesn't necessarily have to be a separate Lambda function; you can add that feature into your existing function when a specific payload is passed to it. In CloudWatch Events (which is the scheduling mechanism for Lambda), specify the payload to pass to the function, say something like { "ping": True }, and in your lambda function, whenever you get that event, call Algorithmia to keep it warm. You can schedule this to happen up to every minute if necessary. – alanwill May 07 '17 at 16:56
  • This is viable in some cases, but there aren't a lot of guarantees around how often you'd have to call it to "keep it warm". If you're sensitive to the costs of keeping lambda running for 30 seconds, then I'd guess that you might also be sensitive to the costs of constantly pinging the function. (Edit: I seem to have misunderstood the "cost sensitivity" of the original question.) – anowell May 09 '17 at 00:13