
I have a general question about training a model when you add the regularization strength parameter λ, which puts a penalty on the cost to prevent over-fitting (as far as I understand from class and the Tootone answer linked below).

So we need to keep λ as small as we can, which is why we use its inverse.

MY QUESTION IS: why is using a negative value not the right approach, and why doesn't it give correct predictions? See the sketch after the linked question below for what I mean.

What is the inverse of regularization strength in Logistic Regression? How should it affect my code?
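To make it concrete, here is a minimal sketch of what I mean, assuming scikit-learn's LogisticRegression, where the C parameter is the inverse of the regularization strength (C = 1/λ) and the dataset is just a built-in example:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    # C is the inverse of the regularization strength, i.e. C = 1/lambda.
    # Small C -> strong regularization, large C -> weak regularization.
    model = make_pipeline(StandardScaler(), LogisticRegression(C=0.1))
    model.fit(X, y)
    print("training accuracy with C=0.1:", model.score(X, y))

    # A negative C (i.e. a negative lambda) is rejected outright:
    try:
        make_pipeline(StandardScaler(), LogisticRegression(C=-1.0)).fit(X, y)
    except ValueError as exc:
        print("negative C rejected:", exc)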

1 Answer


When including a regularization parameter, you're typically modifying the cost function so that you minimize

C(x) + λ * p(x)

where C(x) is your cost function and p(x) > 0 is the penalty. If λ < 0, then you would be rewarded for having a high penalty when you should be punished for it: the optimizer can keep lowering the total cost by making the weights, and hence the penalty, arbitrarily large instead of fitting the data, so there is no sensible minimum and the predictions come out wrong.
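As a tiny numerical illustration (the quadratic data-fit term and the L2 penalty below are made up purely to show the effect of the sign of λ):

    import numpy as np

    # Toy example: a quadratic data-fit term C(w) plus an L2 penalty p(w) = w^2.
    def penalized_cost(w, lam):
        data_fit = (w - 3.0) ** 2      # C(w): minimized at w = 3
        penalty = w ** 2               # p(w) > 0 for w != 0
        return data_fit + lam * penalty

    w_grid = np.linspace(-100, 100, 20001)

    for lam in (1.0, 0.0, -1.0):
        costs = penalized_cost(w_grid, lam)
        w_best = w_grid[np.argmin(costs)]
        print(f"lambda = {lam:+.1f} -> argmin on grid: w = {w_best:.2f}, "
              f"min cost = {costs.min():.1f}")

    # lambda > 0: the minimizer is pulled from 3 toward 0 (shrinkage).
    # lambda = 0: you recover the unregularized minimizer w = 3.
    # lambda < 0: the "cost" keeps decreasing as |w| grows, so the optimizer
    # is rewarded for a large penalty and there is no sensible minimum.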

mickey