
After reading this post here: Azure Machine Learning Request Response latency and the article mentioned in the comments, I was wondering whether this behavior also applies when a published web service is called in batch mode — especially since I have read somewhere (sorry, can't find the link at the moment) that batch calls are not influenced by the "concurrent calls" setting.

In our scenario we have a custom R module uploaded to our workspace that includes some libraries which are not available in Azure ML by default. The module takes a dataset, trains a binary tree, creates some plots, and encodes them in base64 before returning them as a dataset. Locally this takes no more than 5 s, but in the Azure ML web service it takes approximately 90 s, and the runtime in batch mode does not seem to improve when the service is called multiple times.
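The module itself is not shown in the question; a minimal sketch of the described pipeline might look like the following. All function and column names here are hypothetical, `rpart` stands in for whatever tree learner is actually used, and the base64 encoder is written in base R only because add-on packages such as `base64enc` may not be bundled with a custom module.

```r
# Hypothetical sketch of a custom R module of the kind described above:
# train a binary classification tree, render it to a PNG, and return
# the image base64-encoded inside a one-row data frame.
library(rpart)  # ships with standard R installations

# Base-R base64 encoder (avoids a dependency on e.g. the base64enc package)
b64encode <- function(bytes) {
  alphabet <- strsplit(paste0("ABCDEFGHIJKLMNOPQRSTUVWXYZ",
                              "abcdefghijklmnopqrstuvwxyz",
                              "0123456789+/"), "")[[1]]
  n_pad <- (3 - length(bytes) %% 3) %% 3
  x <- as.integer(c(bytes, as.raw(rep(0, n_pad))))
  out <- character(0)
  for (i in seq(1, length(x), by = 3)) {
    b1 <- x[i]; b2 <- x[i + 1]; b3 <- x[i + 2]
    # split each 3-byte group into four 6-bit indices
    idx <- c(bitwShiftR(b1, 2),
             bitwOr(bitwShiftL(bitwAnd(b1, 3L), 4), bitwShiftR(b2, 4)),
             bitwOr(bitwShiftL(bitwAnd(b2, 15L), 2), bitwShiftR(b3, 6)),
             bitwAnd(b3, 63L))
    out <- c(out, alphabet[idx + 1])
  }
  if (n_pad > 0) out[(length(out) - n_pad + 1):length(out)] <- "="
  paste(out, collapse = "")
}

# Hypothetical module entry point: dataset in, dataset with encoded plot out
train_and_plot <- function(dataset, formula) {
  fit <- rpart(formula, data = dataset, method = "class")
  png_file <- tempfile(fileext = ".png")
  png(png_file)            # headless hosts may need a cairo-capable R build
  plot(fit); text(fit)
  dev.off()
  bytes <- readBin(png_file, "raw", file.info(png_file)$size)
  data.frame(plot_base64 = b64encode(bytes), stringsAsFactors = FALSE)
}

# e.g. result <- train_and_plot(iris, Species ~ .)
b64encode(charToRaw("Man"))   # "TWFu"
```

The point of returning the plot as a base64 string in a data-frame column is that Azure ML modules exchange data as datasets, so binary artifacts have to be serialized into a text-safe form.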

Additionally, it would be nice to know how long the containers mentioned in the linked post stay warm.

asked by jbreunung
  • This might be a better fit over at Programmers.SE. – Tobi Nary Jan 28 '16 at 09:15
  • containers stay warm for 6 hours – Dan Ciborowski - MSFT Jan 28 '16 at 12:28
  • I need a bit more info about your configuration to help. What is the execution time like during an RRS call with a single input? Can you enable debugging on your AML endpoint and provide any information about the execution time of individual modules? Let me also try to get some information about the overhead (if any) for batch. If this issue is caused by the particulars of your custom R module, it might require more personal support than I can offer on SO, but you can open an AML chat support ticket to get more individual support. – Dan Ciborowski - MSFT Jan 28 '16 at 12:32
  • First of all, thanks for your feedback on the "cache expiration time". Regarding your questions: execution time for the endpoint in RRS mode is ~50s for a cold container and ~5s for a warm container. I enabled logging for the endpoint but I can't seem to figure out how to get the execution times for each module... Do you know if the "caching" behavior also applies to batch mode, and whether the number of "containers" in batch mode is influenced by the "max. concurrent calls" parameter? – jbreunung Jan 28 '16 at 15:17
  • Can you please provide your web service URL and experiment URL? – neerajkh Jan 31 '16 at 21:37

0 Answers