Using the notation from Wikipedia, it seems that the scikit-learn Ridge modules use a multiple of the identity matrix as the Tikhonov matrix Gamma. The Tikhonov matrix is therefore specified by a single value alpha. Doing this results in all coefficients being penalized uniformly. I've got some prior knowledge of what my solution should look like, and would like to make specific coefficients extra small. I believe I could achieve this if my Gamma matrix had larger entries along the diagonal for the coefficients I'd like to shrink.

Do any of the scikit-learn modules support non-uniform penalties like the ones I'm describing?
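For concreteness, the penalty described above is the generalized Tikhonov problem min_w ||Xw − y||² + ||Γw||² with Γ = diag(γ₁, …, γ_p), so coefficient i is shrunk by γᵢ². A minimal NumPy sketch of its closed-form solution (the function name `tikhonov_ridge` is my own, not a scikit-learn API):

```python
import numpy as np

def tikhonov_ridge(X, y, gamma_diag):
    """Closed-form generalized Tikhonov regression with a diagonal Gamma.

    Solves min_w ||X w - y||^2 + ||Gamma w||^2 where Gamma = diag(gamma_diag),
    so coefficient i is penalized with weight gamma_diag[i]**2.
    """
    Gamma = np.diag(gamma_diag)
    # Normal equations: (X^T X + Gamma^T Gamma) w = X^T y
    A = X.T @ X + Gamma.T @ Gamma
    return np.linalg.solve(A, X.T @ y)
```

Setting a large γᵢ for a given coordinate drives that coefficient toward zero while leaving the others nearly unpenalized.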

jonthalpy
  • Skimming through the docs, I was under the impression that feature was added, but you noticed my mistake: *n_targets* is the shape needed for alpha, when non-scalar. So it seems [my earlier alternative applied to a similar, but different problem](https://stackoverflow.com/questions/44757238/how-to-configure-lasso-regression-to-not-penalize-certain-variables/44758231#44758231) still applies. – sascha Aug 17 '17 at 17:06
  • That is helpful, thank you. I'm surprised there isn't a general Tikhonov regularizer under the hood though. There is a closed form solution for my problem, so I suppose it wouldn't be too terrible to implement my own class. – jonthalpy Aug 17 '17 at 17:14
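Besides the closed form the last comment mentions, a diagonal-Γ problem also reduces to ordinary uniform-penalty ridge by a change of variables: with z = Γw, the objective ||Xw − y||² + ||Γw||² becomes ||(XΓ⁻¹)z − y||² + ||z||², so one can fit a standard ridge on the rescaled design and map back with w = Γ⁻¹z. A NumPy-only sketch of this reduction (names are mine; it assumes every γᵢ > 0):

```python
import numpy as np

def ridge_via_rescaling(X, y, gamma_diag):
    """Diagonal-Gamma Tikhonov via a change of variables.

    Substituting z = Gamma w turns ||X w - y||^2 + ||Gamma w||^2 into
    ||(X Gamma^{-1}) z - y||^2 + ||z||^2, i.e. uniform ridge with
    penalty 1 on the rescaled design. Requires all gamma_diag[i] > 0.
    """
    X_tilde = X / gamma_diag              # scale column i by 1 / gamma_i
    p = X_tilde.shape[1]
    # Uniform-ridge normal equations on the rescaled design
    z = np.linalg.solve(X_tilde.T @ X_tilde + np.eye(p), X_tilde.T @ y)
    return z / gamma_diag                 # map back: w = Gamma^{-1} z
```

The same rescaling could in principle be fed to an off-the-shelf ridge solver (fit on XΓ⁻¹, divide the returned coefficients by γ), which avoids writing a custom class.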

0 Answers