I am programming with PyTorch multiprocessing. I want all subprocesses to be able to read and write the same list of tensors (no resizing). For example, the variable could be
m = [torch.randn(3), torch.randn(5)]
Because the tensors have different sizes, I cannot stack them into a single tensor.
A Python list has no share_memory_() method, and multiprocessing.Manager cannot handle a list of tensors. How can I share the variable m among multiple subprocesses?
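For reference, here is a minimal sketch of the setup I have in mind; the worker function and the in-place add_ are only illustrative, not my real workload:

import torch
import torch.multiprocessing as mp

# Tensors of different sizes, so they cannot be combined into one tensor.
m = [torch.randn(3), torch.randn(5)]

# What I would like to do, but a plain Python list has no such method:
# m.share_memory_()  # AttributeError: 'list' object has no attribute 'share_memory_'

def worker(rank, shared_list):
    # Illustrative in-place write: I want every subprocess (and the parent)
    # to see reads/writes on these tensors.
    shared_list[rank].add_(1.0)

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(i, m)) for i in range(len(m))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()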