I ran the following code to compute the pseudo-inverse of a matrix, but it seems to make no difference whether I enable the GPU or not.
import numpy
import theano
import theano.tensor.nlinalg

mat = theano.shared(numpy.eye(300, dtype="float32") + 1)
fn = theano.function([], theano.tensor.nlinalg.pinv(mat))
fn()
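To check where the op actually ends up, I also printed the compiled graph (just a quick sanity check of my own, using Theano's built-in graph printer):

# If the computation ran on the GPU, the graph would contain Gpu ops;
# a CPU-only op should show up unchanged, possibly surrounded by
# HostFromGpu/GpuFromHost transfer nodes.
theano.printing.debugprint(fn)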
Then I looked at the source of Theano's theano.tensor.nlinalg.MatrixPinv and found that it just calls NumPy's numpy.linalg.pinv, in the following code (comments omitted).
class MatrixPinv(Op):
    __props__ = ()

    def __init__(self):
        pass

    def make_node(self, x):
        x = as_tensor_variable(x)
        assert x.ndim == 2
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, outputs):
        (x,) = inputs
        (z,) = outputs
        z[0] = numpy.linalg.pinv(x).astype(x.dtype)

pinv = MatrixPinv()
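To convince myself, I also timed the compiled function against a direct NumPy call (a rough sketch of my own, reusing fn from above; if perform is just a host-side wrapper, I'd expect similar timings whether or not the GPU is enabled):

import timeit
import numpy

x = numpy.eye(300, dtype="float32") + 1
# Rough comparison: if MatrixPinv.perform merely wraps
# numpy.linalg.pinv, the Theano function should take roughly as long
# as the direct NumPy call, regardless of the device flag.
print("theano:", timeit.timeit(fn, number=10))
print("numpy: ", timeit.timeit(lambda: numpy.linalg.pinv(x), number=10))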
I'm not very familiar with how NumPy is implemented. Can it run on the GPU?
If not, does that mean that every time I want to compute a matrix inverse in Theano, the data has to be transferred back from the GPU to the CPU?