0

I have created an Azure ML web service as an example and am facing an unknown error when deploying it. The error comes without an explanation, so it is hard to trace.

When running the experiment within the studio, it completes without any issue. However, after deploying it as a web service, the test function fails with the same input that works in the studio.

I have also published a sample of the service to see if anyone can see what the issue is.

https://gallery.cortanaintelligence.com/Experiment/mywebservice-1

Some info about the service:

The service takes as input a string representing a sparse feature vector in svmlight format and returns the predicted class for that vector. The test function of the deployed service fails, while the experiment within the studio runs without any issue.
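For context, an svmlight-format line is `label index:value index:value ...` with 1-based feature indices. A minimal sketch of parsing one such line into a dense vector (the input string and `n_features` below are hypothetical examples, not the service's actual data):

```python
import numpy as np

def parse_svmlight_line(line, n_features):
    """Parse 'label idx:val idx:val ...' into (label, dense vector)."""
    parts = line.split()
    label = float(parts[0])
    vec = np.zeros(n_features)
    for pair in parts[1:]:
        idx, val = pair.split(":")
        vec[int(idx) - 1] = float(val)  # svmlight indices are 1-based
    return label, vec

label, x = parse_svmlight_line("1 3:0.5 10:1.2", n_features=10)
```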

I hope someone has an idea of what went wrong.

David Makogon
Dat Huynh

1 Answer

0

Using the test dialog means you are using the request-response service, which is a real-time API with an HTTP timeout as the maximum time allowed to complete a request. Since the feature vector is too long, the request is timing out. Can you please try the batch execution service described below?

https://azure.microsoft.com/en-us/documentation/articles/machine-learning-consume-web-services/#batch-execution-service-bes
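For reference, a request to an Azure ML (classic) request-response endpoint wraps the input in a JSON body with `Inputs`, `ColumnNames`, `Values`, and `GlobalParameters` keys. A sketch of building that payload (the input name `input1` and column name `features` are assumptions here and must match your own service's schema):

```python
import json

def build_rrs_payload(feature_string):
    # "input1" / "features" are hypothetical names; check your service's
    # API help page for the actual input and column names.
    return json.dumps({
        "Inputs": {
            "input1": {
                "ColumnNames": ["features"],
                "Values": [[feature_string]],
            }
        },
        "GlobalParameters": {},
    })
```

This body would then be POSTed to the service URL with an `Authorization: Bearer <api key>` header.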

neerajkh
  • Just want to focus on the request-response service at the moment. I have tested the Python model with the same input in a Python Notebook, and it runs fast, in microseconds. – Dat Huynh Jul 04 '16 at 04:46
  • There are two issues here, and I am not sure whether they come from Azure: 1. When I run the Python model in a desktop Python Notebook, it runs in microseconds, but in Azure ML Studio the same module takes minutes. 2. The input data for the web service is a string in svmlight format; because it is sparse, the string can be quite short. I tested with only one feature and the problem still happens. If the issue were the input data, why does it run inside the studio? – Dat Huynh Jul 04 '16 at 05:00
  • {"type": "InvokeModuleEndEvent", "moduleName": "Execute Python Script RRS", "error": "Execution encountered an internal error."}, { "type": "RequestSummary", "status": "Failure", "error": "The model had exceeded the memory quota assigned to it."} – Dat Huynh Jul 04 '16 at 05:00
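The "exceeded the memory quota" error in the last comment suggests the sparse input may be inflated into a dense structure inside the scoring script. One workaround idea (an assumption, not the author's fix) is to keep the features in a dictionary keyed by index, so memory stays proportional to the number of non-zero entries:

```python
def parse_sparse(line):
    """Parse an svmlight line into (label, {index: value}) without densifying."""
    parts = line.split()
    label = float(parts[0])
    features = {int(p.split(":")[0]): float(p.split(":")[1]) for p in parts[1:]}
    return label, features

def dot_sparse(features, weights):
    # weights: dict index -> weight; only non-zero features contribute
    return sum(v * weights.get(i, 0.0) for i, v in features.items())
```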