0

I'm trying torchinfo on a model that takes two inputs, one 3D and one 1D. So I tried:
print(summary(model, input_size=([(10,1684,40),(10)])))
But I received:

TypeError: rand() argument after * must be an iterable, not int

and then I tried

print(summary(model, input_size=([(10,1684,40),(10,20)])))

which gave:

'lengths' argument should be a 1D CPU int64 tensor, but got 2D cuda:0 Long tensor

I think 'lengths' corresponds to the second argument: (10) in the first call and (10,20) in the second.

What should I do?

I fixed the second argument and added .cpu() to the lengths in the encoder:

print(summary(model, input_size=([(10,1684,40),(10,)])))

but I received:

RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    267             if isinstance(x, (list, tuple)):
--> 268                 _ = model.to(device)(*x, **kwargs)
    269             elif isinstance(x, dict):

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used

~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
     85         # feed the input into the encoder
---> 86         enc_out, enc_lengths = self.encoder(input_sequence,
     87                                             input_lengths)

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1119 
-> 1120         result = forward_call(*input, **kwargs)
   1121         if _global_forward_hooks or self._forward_hooks:

~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
    101             rnn_input \
--> 102                 = nn.utils.rnn.pack_padded_sequence(output, 
    103                                                   output_lengths.cpu(), # modified here

~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
    248     data, batch_sizes = \
--> 249         _VF._pack_padded_sequence(input, lengths, batch_first)
    250     return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_715630/614744292.py in <module>
      1 from torchinfo import summary
----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
    199         input_data, input_size, batch_dim, device, dtypes
    200     )
--> 201     summary_list = forward_pass(
    202         model, x, batch_dim, cache_forward_pass, device, **kwargs
    203     )

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    275     except Exception as e:
    276         executed_layers = [layer for layer in summary_list if layer.executed]
--> 277         raise RuntimeError(
    278             "Failed to run torchinfo. See above stack traces for more details. "
    279             f"Executed layers up to: {executed_layers}"

RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []

What should I do?

noname
  • 1
  • 1
  • Could you share an example of how the model is being used for inference? I would expect the arguments of your model to have different dtypes. If so, use the [dtypes argument of summary](https://github.com/TylerYep/torchinfo#multiple-inputs-w-different-data-types) – Dima Mironov Nov 26 '22 at 10:03
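
For reference, a minimal sketch of the call that comment suggests, assuming model is the model from the question and that its second input is an int64 lengths vector (both assumptions, not confirmed in the post):

import torch
from torchinfo import summary

# Sketch only: shapes taken from the question; dtypes tells torchinfo to generate
# the first input as a float tensor and the second as an int64 tensor.
print(summary(model,
              input_size=[(10, 1684, 40), (10,)],
              dtypes=[torch.float, torch.long]))

If torchinfo's randomly generated lengths still end up as 0, passing real tensors through input_data (as suggested in the first answer below) sidesteps the pack_padded_sequence check.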

2 Answers

0

Providing your full model code would help us resolve this issue more easily, but some things to try:

  • Using your input_data directly instead of input_size (see the sketch after this list).
  • Setting the correct device parameter.
  • Ensuring you are using tuples for 1D inputs, e.g. (10,) instead of (10).
  • Referencing the test case examples in the GitHub code.
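
For the first suggestion, a minimal sketch assuming model is the model from the question and the shapes it reported; the lengths values here are made up so that pack_padded_sequence sees positive int64 entries:

import torch
from torchinfo import summary

# Hypothetical example tensors matching the shapes in the question.
features = torch.randn(10, 1684, 40)                  # (batch, time, feature)
lengths = torch.full((10,), 1684, dtype=torch.int64)  # every sample at full length, so all
                                                      # lengths are positive (and trivially sorted)

print(summary(model, input_data=[features, lengths]))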
Tyler Yep
  • 37
  • 6
0

Assuming 10 is your batch size when training the model, try this:

summary(model, [(1684,40),()])

You don't need to specify the batch size when using Torch summary, since it uses a batch size of 2 to test the network. Let me know if that solves it.

Dima Mironov
  • 525
  • 4
  • 19
Zefyrus94
  • 1
  • 1