
I'm using PyCaret to train a model that predicts the temperature of a device. The idea is that if the real temperature is higher than the predicted temperature, an alarm should be raised to check whether the device is working properly.

The model I trained works well; however, I want to set some limits so the alarm does not activate when there is only a small difference between the real and predicted values. I want to set the limits at a 1-std difference (yellow alarm) and a 2-std difference (red alarm).

Is there any way to get the STD along with the MAE for each cross-validation fold in PyCaret or scikit-learn? That way I don't have to code the cross-validation myself.

Thank you for all your help.
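The alarm rule described above can be sketched in a few lines. This is a minimal illustration, assuming the residual std has already been estimated from cross-validation (the value below is made up):

```python
import numpy as np

# Hypothetical value: std of (real - predicted) estimated from cross-validation.
RESIDUAL_STD = 1.5

def alarm_level(real_temp, predicted_temp, std=RESIDUAL_STD):
    """Classify the alarm based on how far the real temperature
    exceeds the prediction, in units of the residual std."""
    diff = real_temp - predicted_temp
    if diff > 2 * std:
        return "red"     # more than 2 std above the prediction
    if diff > 1 * std:
        return "yellow"  # between 1 and 2 std above the prediction
    return "ok"
```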

  • Doesn't cross_validate return a list of scores for each section of the data? Can't you then just compute the std yourself from the cross validations? Something like `np.std(cross_validate(estimator, X, y))` – k88 Feb 09 '22 at 17:05
  • I think that would return the std, but for the scores of the estimator. So if the score is MAE, it would return the STD of the MAE of each validation. – Derick Barrera Feb 09 '22 at 20:01
  • Which STD are you referring to then? The train data of each section of the cross validation? If so, you'll probably have to implement it yourself; as far as I know, no such default method is available in sklearn at least. – k88 Feb 10 '22 at 10:19
  • 1
    Thanks, that what I was afraid of. I was researching a bit and found this `sklearn.metrics.make_scorer`. Hopefully I can make it work. – Derick Barrera Feb 10 '22 at 14:15
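Building on the `make_scorer` idea from the comments, one way to get a per-fold residual std alongside the MAE, without hand-rolling the cross-validation, is a custom scorer passed to scikit-learn's `cross_validate`. A sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_validate

# Custom metric: standard deviation of the residuals on each validation fold.
def residual_std(y_true, y_pred):
    return np.std(y_true - y_pred)

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

scores = cross_validate(
    LinearRegression(), X, y, cv=5,
    scoring={
        "mae": "neg_mean_absolute_error",
        "residual_std": make_scorer(residual_std, greater_is_better=False),
    },
)
# scikit-learn negates "lower is better" scores, so flip the sign back.
mae_per_fold = -scores["test_mae"]
std_per_fold = -scores["test_residual_std"]
```

From `std_per_fold` one could then take, say, the mean across folds as the std used for the 1-std / 2-std alarm thresholds.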

1 Answer


In PyCaret I found `add_metric`, which allows adding a new metric to the results that `compare_models` returns. So I just created an std scorer and then registered it with `add_metric`.
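A sketch of that approach: the metric function itself is plain NumPy, and the PyCaret registration is shown in comments, since `add_metric`'s exact signature and the setup details depend on your PyCaret version and data (the metric id and display name below are made up):

```python
import numpy as np

# Per-fold metric: std of the residuals (real minus predicted).
def residual_std(y_true, y_pred):
    return float(np.std(np.asarray(y_true) - np.asarray(y_pred)))

# Hedged PyCaret usage -- verify against your PyCaret version's docs:
#
#   from pycaret.regression import setup, add_metric, compare_models
#   setup(data=df, target="temperature")
#   add_metric("res_std", "Residual STD", residual_std, greater_is_better=False)
#   best = compare_models()  # CV results now include the Residual STD column
```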
