import torch

class MyAlgo(torch.optim.Optimizer):

    def __init__(self, params, model):
        # torch.optim.Optimizer requires super().__init__ to be
        # called with the params and a defaults dict
        super().__init__(params, defaults={})
        self.model = model

    def step(self, closure=None):
        for name, param in self.model.named_parameters():
            # intended: overwrite each parameter with a new tensor
            # of the same size as param
            param = torch.randn_like(param)

In PyTorch, can the `param` returned by the `model.named_parameters()` method be written to with the approach above? An answer (updated): no; one should use an in-place operation, `param.copy_(...)` with a tensor of the same shape, to write into `param`.
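To illustrate the difference, here is a minimal sketch (the `torch.nn.Linear` model and the zero tensors are placeholders for illustration):

import torch

model = torch.nn.Linear(4, 2)

# Plain assignment only rebinds the loop variable `param`;
# the model's parameters are left untouched.
for name, param in model.named_parameters():
    param = torch.zeros_like(param)  # no effect on the model

# copy_ writes in-place into the actual parameter tensor. The
# torch.no_grad() block is needed so that autograd permits the
# in-place write on a leaf tensor that requires grad.
with torch.no_grad():
    for name, param in model.named_parameters():
        param.copy_(torch.zeros_like(param))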

A follow-up question: is this the best way to manipulate parameters? Would an approach based on `self.param_groups`, as in the built-in optimizers, have any efficiency benefits? A sketch of that pattern is shown below.
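For comparison, a minimal sketch of the `self.param_groups` pattern used by the built-in optimizers (the class name `MyAlgo2`, the `lr` hyperparameter, and the SGD-style update rule are assumptions for illustration):

import torch

class MyAlgo2(torch.optim.Optimizer):

    def __init__(self, params, lr=0.01):
        # the base class populates self.param_groups from params
        super().__init__(params, defaults=dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # in-place update; substitute your own rule here
                p.add_(p.grad, alpha=-group['lr'])

One practical benefit of this pattern is that `self.param_groups` already holds direct references to the parameter tensors together with per-group hyperparameters, so `step()` needs no handle to the model at all and avoids re-walking the module tree that `named_parameters()` traverses on every call.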

Hyo
  • The answer is yes; it is possible to get `param` like that. – BIg G Jun 20 '23 at 20:56
  • @BIgG, your answer does not appear to be what I was looking for. I tried to update the model parameters with the line `param = something`, but it turned out that no parameters were changing over several calls of `step()`. – Hyo Jun 22 '23 at 18:12
  • To write into `param`, one should use an in-place operation: `for name, param in self.model.named_parameters(): param.copy_(new_tensor)`. – Hyo Jun 23 '23 at 09:36

0 Answers