
I am trying to convert the CNN+LSTM model described in the blog post Image Captioning using Deep Learning (CNN and LSTM). The related GitHub repo is: Pytorch image captioning.

I want to convert this PyTorch model to tflite. It has both encoder and decoder checkpoints, and as far as I understand both of them have to be converted to tflite (correct me if I am wrong).

Approach: following the example from the onnx2keras library, I was able to convert the encoder to tflite.
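For reference, the encoder-side conversion that works for me looks roughly like the sketch below. The EncoderCNN class, embed_size=256, the checkpoint filename, and the 224x224 input size are placeholders based on the tutorial code, not exact values from my script:

```python
import torch
import tensorflow as tf
from pytorch2keras.converter import pytorch_to_keras

from model import EncoderCNN  # placeholder import for the tutorial's encoder class

# Load the encoder checkpoint (paths and sizes are placeholders).
encoder = EncoderCNN(embed_size=256)
encoder.load_state_dict(torch.load('encoder.ckpt', map_location='cpu'))
encoder.eval()

# Dummy image batch used only to trace the graph during conversion.
dummy_image = torch.randn(1, 3, 224, 224)

# PyTorch -> Keras via pytorch2keras (input_shapes excludes the batch dimension).
keras_encoder = pytorch_to_keras(
    encoder,
    dummy_image,
    input_shapes=[(3, 224, 224)],
    change_ordering=False,
)

# Keras -> TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_encoder)
with open('encoder.tflite', 'wb') as f:
    f.write(converter.convert())
```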

With the decoder, however, I am running into the issue below. I am not sure what the right approach is; can anyone suggest a better approach and help me get a tflite model?

  File "convert_pytorch_tf.py", line 63, in <module>
    change_ordering=False)
  File "/root/anaconda3/envs/pyt2tf/lib/python3.7/site-packages/pytorch2keras/converter.py", line 53, in pytorch_to_keras
    dummy_output = model(*args)
  File "/root/anaconda3/envs/pyt2tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 2 required positional arguments: 'captions' and 'lengths'
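As far as I can tell, the error happens because pytorch_to_keras traces the model by calling model(*args) and I am only passing the image/feature tensor, while the decoder's forward(features, captions, lengths) needs all three arguments. My guess is the call would have to look roughly like the sketch below; the DecoderRNN constructor arguments, vocab size, caption length, and checkpoint name are assumptions based on the tutorial defaults, not values from my script:

```python
import torch
from pytorch2keras.converter import pytorch_to_keras

from model import DecoderRNN  # placeholder import for the tutorial's decoder class

vocab_size = 9956  # placeholder: len(vocab) for the dataset actually used

decoder = DecoderRNN(embed_size=256, hidden_size=512, vocab_size=vocab_size, num_layers=1)
decoder.load_state_dict(torch.load('decoder.ckpt', map_location='cpu'))
decoder.eval()

# Dummy inputs matching forward(features, captions, lengths).
dummy_features = torch.randn(1, 256)                     # encoder output
dummy_captions = torch.randint(0, vocab_size, (1, 20))   # token ids for one caption
dummy_lengths = torch.tensor([20])                       # caption length

keras_decoder = pytorch_to_keras(
    decoder,
    [dummy_features, dummy_captions, dummy_lengths],
    input_shapes=[(256,), (20,), (1,)],
    change_ordering=False,
)
```

Even if that satisfies the signature, I am not sure the pack_padded_sequence call inside the training-time forward() (assuming the decoder follows the tutorial's design) will export cleanly, which is why I am wondering whether I should instead be tracing the decoder's sampling/greedy-decoding path for inference.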

Please let me know which approach to follow, and help me fix the issue with the approach I have taken.

Hi, were you able to solve this? What approach did you try? It would be great if you could share more information. Thanks! – mlneural03 Aug 23 '23 at 20:58

0 Answers