So I am trying to write a function that trains an MLP using PyTorch. My code is as follows:
import torch

def mlp_gradient_descent(x, y, model, eta=1e-6, nb_iter=30000):
    loss_descent = []
    dtype = torch.float
    device = torch.device("cpu")
    x = torch.from_numpy(x)
    y = torch.from_numpy(y)
    learning_rate = eta
    for t in range(nb_iter):
        y_pred = model(x)  # <-- the error is raised here
        loss = (y_pred - y).pow(2).sum()  # sum-of-squares loss
        if t % 100 == 99:
            print(t, loss.item())
            loss_descent.append([t, loss.item()])
        loss.backward()
        # manual SGD step, then clear the gradients for the next iteration
        with torch.no_grad():
            for param in model.parameters():
                param -= learning_rate * param.grad
            for param in model.parameters():
                param.grad = None
    return loss_descent
and I am getting this error:
mat1 and mat2 must have the same dtype
Note that the error is raised by the model(x) call, and that x and y are NumPy arrays.
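For reference, here is a minimal sketch of what I believe is going on (the array shape and the nn.Linear layer below are made up just for illustration): torch.from_numpy keeps NumPy's default float64 dtype, while PyTorch layer weights default to float32.

import numpy as np
import torch

x = np.random.rand(4, 3)        # NumPy arrays default to float64
t = torch.from_numpy(x)
print(t.dtype)                  # torch.float64

layer = torch.nn.Linear(3, 2)   # PyTorch weights default to float32
print(layer.weight.dtype)       # torch.float32

layer(t)                        # raises: mat1 and mat2 must have the same dtype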
Thank you all. And have a great day.