
[Image of data points]

Ideally I want a polynomial fit or Gaussian Process Regression, but I'm unsure how to implement either in sklearn. The data is stored in a pandas DataFrame.

I have tried the code below, but it runs very slowly, even with only 128 data points.

from sklearn.svm import SVR

# X must be 2-D; y should be 1-D for SVR (df11[['A']] would pass a 2-D frame)
X = df11[['P1FRAMES']]
y = df11['A'].values.ravel()

svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)

Is there a faster way to generate a second-order polynomial best-fit line? Or is there any other best-fit line that you think might be suitable?

Thanks

Tom
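(For reference: if only a plain second-order best-fit line is needed, `numpy.polyfit` is typically much faster than SVR on data this small. A minimal sketch, assuming the same `df11` with columns `P1FRAMES` and `A` as in the question; the data here is synthetic stand-in data:)

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for df11 from the question
df11 = pd.DataFrame({'P1FRAMES': np.arange(128, dtype=float)})
df11['A'] = 2.0 * df11['P1FRAMES']**2 + 3.0 * df11['P1FRAMES'] + 1.0

x = df11['P1FRAMES'].values
y = df11['A'].values

# Fit a second-order polynomial; coefficients come back highest power first
coeffs = np.polyfit(x, y, deg=2)

# Evaluate the fitted polynomial at the original x values
y_fit = np.polyval(coeffs, x)
```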

  • [This example](http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html) might help you - a scikit-learn example of plotting polynomial interpolation. – Ari Cooper-Davis Jan 13 '18 at 12:48
  • Thanks a lot. Have followed this and it works. Last question...how do I get the coefficients of each of the polynomials? Have tried using named_steps but can't see where to get them. – Tom Dry Jan 13 '18 at 23:19
  • @TomDry Both regressors Ridge and SVR use `coef_` (weights) and `intercept_` internally. – Darius Jan 14 '18 at 00:32
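(The coefficient extraction discussed in the comments above can be sketched as follows, using a `PolynomialFeatures` + `Ridge` pipeline in the style of the linked scikit-learn example. The data and column names are assumptions based on the question; `named_steps` keys are the lowercased class names that `make_pipeline` assigns:)

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical stand-in for df11: A = 0.5*x^2 - 2*x + 4
df11 = pd.DataFrame({'P1FRAMES': np.linspace(0.0, 10.0, 128)})
df11['A'] = 0.5 * df11['P1FRAMES']**2 - 2.0 * df11['P1FRAMES'] + 4.0

X = df11[['P1FRAMES']]
y = df11['A']

# include_bias=False so the intercept is handled by Ridge, not a constant column
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Ridge(alpha=1e-6))
model.fit(X, y)

# The coefficients live on the final estimator, reachable via named_steps
ridge = model.named_steps['ridge']
print(ridge.coef_)       # coefficients for x and x^2
print(ridge.intercept_)  # constant term
```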

0 Answers