
I'm writing a simple neural network in PyTorch, where the features and the weights are both (1, 5) tensors. What is the difference between the two methods below?

y = activation(torch.sum(features*weights) + bias)

and

yy = activation(torch.mm(features, weights.view(5,1)) + bias)
Fariman Kashani

2 Answers


Consider it step by step:

import torch

x = torch.tensor([[10, 2], [3, 5]])
y = torch.tensor([[1, 3], [5, 6]])

x * y
# tensor([[10,  6],
#         [15, 30]])

torch.sum(x * y)
# tensor(61)

x = torch.tensor([[10, 2], [3, 5]])
y = torch.tensor([[1, 3], [5, 6]])

torch.matmul(x, y)
# tensor([[20, 42],
#         [28, 39]])

So there is a difference between matmul and the * operator: * multiplies element-wise, while matmul performs true matrix multiplication. Furthermore, torch.sum reduces over the entire tensor by default, not row- or column-wise.
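
As a quick illustration (a minimal sketch using the same x and y as above), torch.sum only reduces along a single axis when you pass a dim argument:

import torch

x = torch.tensor([[10, 2], [3, 5]])
y = torch.tensor([[1, 3], [5, 6]])

torch.sum(x * y)         # tensor(61)        -- sums every element
torch.sum(x * y, dim=0)  # tensor([25, 36])  -- column-wise sums
torch.sum(x * y, dim=1)  # tensor([16, 45])  -- row-wise sums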

artona
import torch

features = torch.rand(1, 5)
weights = torch.tensor([1., 2., 3., 4., 5.])
print(features)
print(weights)

# Element-wise multiplication; weights (shape (5,)) broadcasts
# against features (shape (1, 5)) -> output shape (1, 5)
# out = [f1*w1, f2*w2, f3*w3, f4*w4, f5*w5]
print(features*weights)

# weights reshaped to (5, 1); broadcasting against the (1, 5) features
# gives an element-wise product of shape (5, 5)
# out =   [f1*w1, f2*w1, f3*w1, f4*w1, f5*w1]
#         [f1*w2, f2*w2, f3*w2, f4*w2, f5*w2]
#         [f1*w3, f2*w3, f3*w3, f4*w3, f5*w3]
#         [f1*w4, f2*w4, f3*w4, f4*w4, f5*w4]
#         [f1*w5, f2*w5, f3*w5, f4*w5, f5*w5]
print(features*weights.view(5, 1))

# Matrix-multiplication
# (1, 5) * (5, 1) -> (1, 1)
# out = [f1*w1 + f2*w2 + f3*w3 + f4*w4 + f5*w5]
print(torch.mm(features, weights.view(5, 1)))

Output:

tensor([[0.1467, 0.6925, 0.0987, 0.5244, 0.6491]])  # features
tensor([1., 2., 3., 4., 5.])                        # weights

tensor([[0.1467, 1.3851, 0.2961, 2.0976, 3.2455]])  # features*weights
tensor([[0.1467, 0.6925, 0.0987, 0.5244, 0.6491],
        [0.2934, 1.3851, 0.1974, 1.0488, 1.2982],
        [0.4400, 2.0776, 0.2961, 1.5732, 1.9473],
        [0.5867, 2.7701, 0.3947, 2.0976, 2.5964],
        [0.7334, 3.4627, 0.4934, 2.6220, 3.2455]])  # features*weights.view(5,1)
tensor([[7.1709]])                                  # torch.mm(features, weights.view(5, 1))
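
Putting the two methods from the question side by side (a minimal sketch; the question doesn't define activation or bias, so a sigmoid and a random scalar are assumed here):

import torch

torch.manual_seed(0)
features = torch.rand(1, 5)
weights = torch.rand(1, 5)
bias = torch.rand(1)
activation = torch.sigmoid  # assumed; not defined in the question

# Method 1: element-wise product, then sum over all elements
y = activation(torch.sum(features * weights) + bias)

# Method 2: matrix multiplication, (1, 5) @ (5, 1) -> (1, 1)
yy = activation(torch.mm(features, weights.view(5, 1)) + bias)

print(torch.allclose(y, yy))  # True -- both compute the same dot product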
Ravikant Singh
  • @Ravikant Singh I wanted to better understand your answer for 2D matrices and came up with the following code: `X = torch.arange(6).view(2, 3) w = torch.tensor([1, 2, 3]) print(torch.matmul(X, w.view(3, 1))) print(torch.matmul(X, w))`. Why doesn't the last matrix multiplication generate an error (given the matrix dimensions)? – Tin Oct 26 '19 at 08:42
  • @Tin Check this page https://kite.com/python/docs/torch.matmul. It might clear your doubts. – Ravikant Singh Oct 27 '19 at 05:22
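
To illustrate what Tin is asking about: torch.matmul special-cases 1-D arguments. When the second operand is 1-D, it is treated as a vector, a matrix-vector product is computed, and the result is 1-D, so no dimension error is raised. A small sketch:

import torch

X = torch.arange(6).view(2, 3)  # shape (2, 3)
w = torch.tensor([1, 2, 3])     # shape (3,)

# 2-D @ 2-D: ordinary matrix multiplication -> shape (2, 1)
print(torch.matmul(X, w.view(3, 1)))
# tensor([[ 8],
#         [26]])

# 2-D @ 1-D: w is treated as a vector -> shape (2,)
print(torch.matmul(X, w))
# tensor([ 8, 26])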