
Sorry for my poor English. I ran into a dimension issue. The data is NLP data in a CSV file, and I think the concatenation dimension should be 1, but an error occurs. Some people said it may be related to the torch version; I tried torch==1.7.1+cu101 with torchvision==0.8.2+cu101, and also 1.0.0, but nothing changed.

A similar issue is described in this reference link: https://github.com/jiesutd/LatticeLSTM/issues/8 (RNN - RuntimeError: input must have 3 dimensions, got 2).

Epoch 1 / 10
Training...
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
in ()
     76 # Generate the output of the Discriminator for real and fake data.
     77 # First, we put together the output of the tranformer and the generator
---> 78 disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)
     79 # Then, we select the output of the disciminator
     80 features, logits, probs = discriminator(disciminator_input)

RuntimeError: Tensors must have same number of dimensions: got 2 and 3

Then, what's wrong with my code below:

disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)

Should I provide all the code? It is too long to post.

for step, batch in enumerate(train_dataloader):

    # Progress update every print_each_n_step batches.
    if step % print_each_n_step == 0 and not step == 0:
        # Calculate elapsed time in minutes.
        elapsed = format_time(time.time() - t0)

        # Report progress.
        print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

    # Unpack this training batch from our dataloader.
    b_input_ids = batch[0].to(device)
    b_input_mask = batch[1].to(device)
    b_labels = batch[2].to(device)
    b_label_mask = batch[3].to(device)

    # Encode real data with the Transformer.
    model_outputs = transformer(b_input_ids, attention_mask=b_input_mask)
    hidden_states = model_outputs[-1]

    # Generate fake data that should have the same distribution as the data
    # encoded by the transformer.
    # First, noisy input is fed to the Generator.
    noise = torch.zeros(b_input_ids.shape[0], noise_size, device=device).uniform_(0, 1)
    # Generate fake data.
    gen_rep = generator(noise)

    # Generate the output of the Discriminator for real and fake data.
    # First, we put together the output of the transformer and the generator.
    disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)
    # Then, we select the output of the discriminator.
    features, logits, probs = discriminator(disciminator_input)
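The failure can be reproduced in isolation. A minimal sketch, assuming the shapes posted in the comments (hidden_states is [batch, seq_len, hidden] = [64, 64, 768] and gen_rep is [batch, hidden] = [64, 768]): torch.cat requires all tensors to have the same number of dimensions, so one possible fix is to give gen_rep an explicit sequence axis first. Whether that is the right fix depends on what the discriminator expects.

```python
import torch

batch_size, seq_len, hidden_size = 64, 64, 768
hidden_states = torch.randn(batch_size, seq_len, hidden_size)  # 3-D: [64, 64, 768]
gen_rep = torch.randn(batch_size, hidden_size)                 # 2-D: [64, 768]

# This reproduces the error: torch.cat needs tensors with equal ndim.
try:
    torch.cat([hidden_states, gen_rep], dim=1)
except RuntimeError as e:
    print(e)  # Tensors must have same number of dimensions: got 3 and 2

# One possible fix: insert a sequence axis of length 1 into gen_rep,
# then concatenate along dim=1, yielding [64, 65, 768].
disciminator_input = torch.cat([hidden_states, gen_rep.unsqueeze(1)], dim=1)
print(disciminator_input.shape)  # torch.Size([64, 65, 768])
```

Note that unsqueeze only makes the concatenation legal; the discriminator downstream still has to accept a 3-D input of that shape.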
  • gen_rep.shape ----> torch.Size([64, 768]); hidden_states.shape ----> torch.Size([64, 64, 768]); torch.cat((gen_rep, hidden_states), dim=1) raises RuntimeError: Tensors must have same number of dimensions: got 3 and 2 – eddiewin Jun 23 '21 at 11:42
  • Hello eddiewin, please read this: https://stackoverflow.com/help/minimal-reproducible-example . We need to see your model, and some data that you feed into it (or its shape at the very least). You don't need to provide all the code, only a minimal reproducible example, basically a sample that can be copy-pasted to reproduce your issue. Please also add the whole stack trace in the question, not as a comment – trialNerror Jun 23 '21 at 11:53
  • Dear trialNerror, thank you for your support. I have uploaded the code sample. I hope someone can give me some ideas. – eddiewin Jun 23 '21 at 11:57

0 Answers