
I start with a data set consisting of points with two features.

I then transform my points with a 3rd-degree polynomial transform, which results in a data set consisting of points with nine features (ten including the constant term).

Finally, I run a linear regression algorithm, resulting in a vector of ten weights.

I know that given a weight vector of three components, the slope and y-intercept of the separator line can be calculated, and then you can plot that line by sampling points from np.linspace. However, that method doesn't work for a ten-component weight vector.

How do I plot the separator created by these weights in the original input space (a Cartesian coordinate system where the axes represent the original two features) using numpy and pyplot?

Edit: the 3rd-order polynomial transform turns the features from X=[x1, x2] to Z=[1, x1, x2, x1^2, x1*x2, x2^2, x1^3, x1^2*x2, x1*x2^2, x2^3].
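For concreteness, the transform described above can be written as a small helper (the function name `poly3_transform` is just an illustrative choice, not from any library):

```python
import numpy as np

def poly3_transform(x1, x2):
    """Map a point (x1, x2) to the 10-dimensional 3rd-degree
    polynomial feature vector Z = [1, x1, x2, x1^2, x1*x2, x2^2,
    x1^3, x1^2*x2, x1*x2^2, x2^3]."""
    return np.array([1.0, x1, x2,
                     x1**2, x1*x2, x2**2,
                     x1**3, x1**2 * x2, x1 * x2**2, x2**3])
```

So `poly3_transform(2, 3)` gives `[1, 2, 3, 4, 6, 9, 8, 12, 18, 27]`.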

The weights from Linear Regression are a vector of size 10 such that w dot Z = y_hat.

I'm trying to plot the separator created by w onto a 2D plot where one axis corresponds to x1 and the other to x2.
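One standard approach for a non-linear separator like this is to evaluate w·Z on a dense grid over the original (x1, x2) plane and draw the zero level set with `plt.contour`. A minimal sketch, assuming a placeholder weight vector `w` (substitute the actual weights from the regression):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder weights; replace with the 10-weight vector from linear regression.
rng = np.random.default_rng(0)
w = rng.standard_normal(10)

# Grid over the original input space (adjust the range to your data).
x1, x2 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))

# Evaluate y_hat = w . Z at every grid point, using the same feature
# ordering as Z = [1, x1, x2, x1^2, x1*x2, x2^2, x1^3, x1^2*x2, x1*x2^2, x2^3].
y_hat = (w[0] + w[1]*x1 + w[2]*x2
         + w[3]*x1**2 + w[4]*x1*x2 + w[5]*x2**2
         + w[6]*x1**3 + w[7]*x1**2*x2 + w[8]*x1*x2**2 + w[9]*x2**3)

# The separator is the curve where w . Z = 0.
plt.contour(x1, x2, y_hat, levels=[0])
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()
```

Because the transform is cubic, the zero level set is generally a curve rather than a straight line, which matches the comments below.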

  • At least the way I usually thought of plotting linear regression models, 2 features means 2 dimensional (which you can plot), 3 features is 3d (can still be plotted), but upwards of 4th dimension, I don't think you can plot anything... – Hiten Nov 05 '19 at 23:51
  • The resulting separator wouldn't be a straight line, but curved. I've seen pictures of non-linear transformations plotted in the original feature space, but I don't know how it is done. – bluekaterpillar Nov 06 '19 at 00:20
  • Have you tried using mlxtend? The documentation is here: http://rasbt.github.io/mlxtend/ and an example is https://stackoverflow.com/questions/43284811/plot-svm-with-matplotlib – Hiten Nov 06 '19 at 02:55
