
I was taking Andrew Ng's Machine Learning course, and in one of the practice labs they perform this operation for linear regression:

import numpy as np

x = np.arange(0, 20, 1)   # 1-D array of 20 values
y = 1 + x**2
X = x.reshape(-1, 1)      # column vector: one value per row

I checked the shapes of the arrays after this operation:

>>> print(x.shape, X.shape)
(20,) (20, 1)

What is the difference between x and X, and why can't we simply use x.T instead of reshaping it into X?

  • Because the transpose of a 1d array is precisely another 1d array: https://stackoverflow.com/questions/5954603/transposing-a-1d-numpy-array – adrianop01 Jan 31 '23 at 04:37
  • Does this answer your question? [Transposing a 1D NumPy array](https://stackoverflow.com/questions/5954603/transposing-a-1d-numpy-array) – Julien Jan 31 '23 at 04:46
  • What's the difference? The shape is different! I don't mean that as a joke. That literally is the difference. – hpaulj Jan 31 '23 at 07:56
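
To illustrate the point raised in the comments: .T only swaps existing axes, and a 1-D array has a single axis, so transposing it changes nothing. A minimal sketch in plain NumPy (not taken from the course lab):

import numpy as np

x = np.arange(0, 20, 1)

print(x.T.shape)                # (20,)   -- .T on a 1-D array leaves the shape unchanged
print(x.reshape(-1, 1).shape)   # (20, 1) -- reshape adds a second axis (column vector)
print(x[:, np.newaxis].shape)   # (20, 1) -- equivalent way to add the axis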

1 Answer

X = x.reshape(-1, 1)

gives you a 2-D array like the one below. This is because the model expects each sample to be an array of numbers, not a single number (each perceptron takes a vector of inputs), so you need to pass [1] rather than 1:

[[1]
 [2]]

x = np.arange(0, 20, 1)

[1, 2, 3]

A 1-D array like this can't be passed to the model (for example an ANN) directly; you need to reshape it into rows and columns, i.e. a 2-D array, since it has only one dimension.
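
For a concrete illustration of why the 2-D shape matters, here is a small sketch using scikit-learn's LinearRegression (an assumption chosen for illustration; the course lab may use different code). The estimator accepts the reshaped X but rejects the 1-D x:

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.arange(0, 20, 1)
y = 1 + x**2
X = x.reshape(-1, 1)        # (20, 1): 20 samples, 1 feature

model = LinearRegression()
# model.fit(x, y) would raise a ValueError asking you to reshape the data,
# because estimators expect X with shape (n_samples, n_features).
model.fit(X, y)             # works: each row is one sample, each column one feature
print(model.predict(np.array([[5.0]])))   # predict also expects a 2-D input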

Mohamed Fathallah