- I created an ML model which I want to deploy on Azure. The steps: first I preprocess the data (one-hot encoding for categorical features and standardizing numeric features with StandardScaler()), then I train the model. After that, I register and deploy it. I want to consume this model with unstandardized data, and I'd like the standardization to happen as part of the Azure request. Is that possible, or do I have to standardize the data before sending it to Azure for prediction? Can I somehow deploy the StandardScaler fitted on the training data in the same endpoint as my model? If not, can you advise me how to deal with such a case?
- Can you advise me which endpoint type will be sufficient for me? I will have an app where I set model parameters, then send a request to Azure for a prediction and show the result in the same app. Others should be able to do the same. A request could contain between 2k and 40k records. Can I use a batch deployment for such a task, or should I use some kind of real-time endpoint like ACI? Also, if real-time, will I be charged only when someone sends a request, or also when there are no requests to the API?

MaciJab
1 Answer
On the first point, it is advisable to perform the standardization before sending the data. Once the data reaches the endpoint, it should be used only for prediction rather than for extra pre-processing effort. Complete the pre-processing steps before sending the data to the endpoint.
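One way to follow this advice is to persist the StandardScaler fitted on the training data and reuse it client-side before each request, so the endpoint always receives already-standardized rows. A minimal sketch (the file name and the toy numbers are illustrative assumptions, not from the question):

```python
import joblib
import numpy as np
from sklearn.preprocessing import StandardScaler

# --- at training time: fit the scaler on the training data and persist it ---
X_train = np.array([[10.0, 200.0], [20.0, 400.0], [30.0, 600.0]])
scaler = StandardScaler().fit(X_train)
joblib.dump(scaler, "scaler.joblib")  # store next to the registered model

# --- at prediction time: load it and transform the raw rows before sending ---
scaler = joblib.load("scaler.joblib")
X_new = np.array([[15.0, 300.0]])
X_scaled = scaler.transform(X_new)  # same mean/std as seen during training
```

The key point is that transform() reuses the mean and standard deviation learned from the training data; never call fit() again on prediction data, or the scaling will no longer match what the model was trained on.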
On the second point, an online (real-time) endpoint is an effective approach for this prediction workflow. Each request from the app carries data and expects a prediction back immediately, so the model must be reachable on demand, which is exactly what an online endpoint provides. That is why an online endpoint could be the right approach for the current prediction model implementation.
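With an online endpoint, each call from the app is just an authenticated HTTP POST to the endpoint's scoring URI. A minimal sketch using only the standard library; the URI, key, and {"data": ...} payload schema below are placeholder assumptions, and the real values come from the endpoint's Consume tab in Azure ML studio:

```python
import json
import urllib.request


def score(rows, scoring_uri, api_key):
    """POST a batch of (already standardized) records to an online
    endpoint and return the decoded JSON predictions."""
    body = json.dumps({"data": rows}).encode("utf-8")
    req = urllib.request.Request(
        scoring_uri,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # endpoint key
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (not run here -- requires a live endpoint and its key):
# preds = score([[0.5, -1.2], [1.3, 0.7]],
#               "https://<name>.<region>.inference.ml.azure.com/score",
#               "<endpoint-key>")
```

The exact payload shape depends on the scoring script deployed with the model, so check that the keys in the JSON body match what your score.py expects.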

Sairam Tadepalli