I'm starting out with numpy and was trying to figure out how its arrays work for column vectors. Defining the following:
import numpy as np

x1 = np.array([3.0, 2.0, 1.0])
x2 = np.array([-2.0, 1.0, 0.0])
And calling
print("inner product x1/x2: ", np.inner(x1, x2))
Produces inner product x1/x2: -4.0
as expected. This made me think that numpy assumes an array of this form is a column vector and, as part of the inner function, transposes one of them to give a scalar. However, when I wrote some code to test this idea, it produced results that I don't understand.
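For completeness, here's the full snippet for that first check; I added np.dot for comparison, since as far as I can tell it computes the same thing for 1-D arrays:

import numpy as np

x1 = np.array([3.0, 2.0, 1.0])
x2 = np.array([-2.0, 1.0, 0.0])

# Both evaluate 3*(-2) + 2*1 + 1*0 = -4.0
print("inner product x1/x2: ", np.inner(x1, x2))  # -4.0
print("dot product x1/x2:   ", np.dot(x1, x2))    # -4.0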
After some googling about how to specify that an array is a column vector using .T, I defined the following:
x = np.array([1, 0]).T
xT = np.array([1, 0])
Where I intended for x to be a column vector and xT to be a row vector. However, calling the following:
print(x)
print(x.shape)
print(xT)
print(xT.shape)
Produces this:
[1 0]
(2,)
[1 0]
(2,)
Which suggests the two arrays have the same shape, despite one being the transpose of the other. Furthermore, calling both np.inner(x, x) and np.inner(x, xT) produces the same result. Am I misunderstanding the .T attribute, or perhaps some fundamental feature of numpy/linear algebra? I don't feel like x and xT should be the same vector.
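One more experiment I tried, in case it's relevant: adding an explicit second axis does produce shapes that transpose differently, though I'm not sure this is the idiomatic approach:

import numpy as np

x = np.array([1, 0])

# .T appears to be a no-op on a 1-D array
print(x.T.shape)        # (2,)

# An explicit second axis does give a "real" column vector
col = x[:, np.newaxis]  # equivalently: x.reshape(2, 1)
print(col.shape)        # (2, 1)
print(col.T.shape)      # (1, 2)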
Finally, the reason I initially used .T was that defining a column vector as x = np.array([[1], [0]]) and calling print(np.inner(x, x)) produced the following as the inner product:
[[1 0]
[0 0]]
Which is the output you'd expect to see for the outer product. Am I misusing this way of defining a column vector?
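In case it's useful, here's the minimal comparison I have in mind for this last case; if I understand correctly, the matrix product of the transpose with the original should give the scalar inner product as a 1x1 array:

import numpy as np

x = np.array([[1], [0]])  # shape (2, 1)

print(np.inner(x, x))     # [[1 0]
                          #  [0 0]]

# Matrix product: (1, 2) @ (2, 1) -> (1, 1)
print(x.T @ x)            # [[1]]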