Sorry for my poor English. I ran into an issue with tensor dimensions. The data is NLP data in a CSV file, and I think the dimension should be 1, but an error occurred. Some people said it may be related to the torch version; I tried torch==1.7.1+cu101 with torchvision==0.8.2+cu101, and also 1.0.0, but nothing changed.
The same issue is described in this reference link: https://github.com/jiesutd/LatticeLSTM/issues/8 (RNN - RuntimeError: input must have 3 dimensions, got 2).
Epoch 1 / 10
Training...
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
     76 # Generate the output of the Discriminator for real and fake data.
     77 # First, we put together the output of the tranformer and the generator
---> 78 disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)
     79 # Then, we select the output of the disciminator
     80 features, logits, probs = discriminator(disciminator_input)

RuntimeError: Tensors must have same number of dimensions: got 2 and 3
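If it helps, here is a small check I could add right before the failing line to see the shapes (just a diagnostic sketch using the same variables as my loop below, not part of the original script):

print('hidden_states:', hidden_states.shape)  # I expect something like (batch_size, seq_len, hidden_size) -> 3 dimensions
print('gen_rep:', gen_rep.shape)              # I expect something like (batch_size, hidden_size) -> 2 dimensions
# torch.cat needs all tensors to have the same number of dimensions,
# which seems to match the "got 2 and 3" in the error message.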
So, what is wrong with the line below?
disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)
Should I post all of the code? It is too long to post, so here is the relevant part of the training loop (the failing line is at the end):
for step, batch in enumerate(train_dataloader):
    # Progress update every print_each_n_step batches.
    if step % print_each_n_step == 0 and not step == 0:
        # Calculate elapsed time in minutes.
        elapsed = format_time(time.time() - t0)
        # Report progress.
        print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

    # Unpack this training batch from our dataloader.
    b_input_ids = batch[0].to(device)
    b_input_mask = batch[1].to(device)
    b_labels = batch[2].to(device)
    b_label_mask = batch[3].to(device)

    # Encode real data with the Transformer.
    model_outputs = transformer(b_input_ids, attention_mask=b_input_mask)
    hidden_states = model_outputs[-1]

    # Generate fake data that should have the same distribution as the data
    # encoded by the transformer.
    # First, noisy input is fed to the Generator.
    noise = torch.zeros(b_input_ids.shape[0], noise_size, device=device).uniform_(0, 1)
    # Generate fake data.
    gen_rep = generator(noise)

    # Generate the output of the Discriminator for real and fake data.
    # First, we put together the output of the transformer and the generator.
    disciminator_input = torch.cat([hidden_states, gen_rep], dim=1)
    # Then, we select the output of the discriminator.
    features, logits, probs = discriminator(disciminator_input)
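My guess (not tested) is that hidden_states from the transformer has shape (batch_size, seq_len, hidden_size) while gen_rep has shape (batch_size, hidden_size), so torch.cat sees 3 and 2 dimensions. Would something like this, taking only the first ([CLS]) token vector before concatenating, be the right direction? (The slicing and the dim value are just my assumptions.)

cls_hidden = hidden_states[:, 0, :]  # (batch_size, hidden_size), now 2 dimensions like gen_rep
# I am not sure whether dim=1 is right here, or whether real and fake
# representations should instead be stacked along dim=0.
disciminator_input = torch.cat([cls_hidden, gen_rep], dim=1)
features, logits, probs = discriminator(disciminator_input)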