
I have an NLP dataset (about 300K samples) that contains duplicate data. I want to split it into train and test sets (70%-30%), and the two sets should have no overlapping samples.

For instance:

| dataset | train | test |
|---------|-------|------|
| a       | a     | c    |
| a       | a     | c    |
| b       | b     | c    |
| b       | b     |      |
| b       | b     |      |
| c       | d     |      |
| c       | d     |      |
| c       |       |      |
| d       |       |      |
| d       |       |      |

I have tried exhaustive random sampling, but it is too time consuming.

Whisht

2 Answers


If I'm getting this correctly, try this:

from sklearn.model_selection import GroupShuffleSplit

# group on the column that identifies duplicates, so every copy of a value
# lands in the same split (use test_size=0.30 for the 70/30 split asked for)
train_inds, test_inds = next(
    GroupShuffleSplit(test_size=0.20, n_splits=2, random_state=7)
    .split(df, groups=df['duplicate_column'])
)

train = df.iloc[train_inds]
test = df.iloc[test_inds]
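
To sanity-check that nothing overlaps, compare the group values on each side. A minimal sketch, assuming the duplicated field is the duplicate_column used above (substitute your own column name):

# hypothetical check: no duplicate value should appear in both splits
train_groups = set(train['duplicate_column'])
test_groups = set(test['duplicate_column'])
assert train_groups.isdisjoint(test_groups)

# row fractions only roughly match test_size, since groups differ in size
print(len(train) / len(df), len(test) / len(df))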
Edzia

It is doable, but it requires a few steps.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

# original dataset with duplicates
dataset = pd.DataFrame(["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"])

# get unique values, remove duplicates, but keep original counts
data_no_dup, counts = np.unique(dataset, return_counts=True)

# split the unique values the standard Scikit-Learn way
# (use test_size=0.3 for the 70/30 split asked in the question)
train_no_dup, test_no_dup = train_test_split(data_no_dup, test_size=0.2, random_state=0)

# expand each unique value back to its original count
# (a dict lookup avoids calling list.index on every sample)
count_of = dict(zip(data_no_dup, counts))
train, test = [], []
for sample in train_no_dup:
    train.extend([sample] * count_of[sample])
for sample in test_no_dup:
    test.extend([sample] * count_of[sample])

print("Train: {}".format(train))
print("Test: {}".format(test))

Output

Train: ['d', 'd', 'b', 'b', 'b', 'a', 'a']
Test: ['c', 'c', 'c']
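
The same idea carries over to the real 300K-row dataset as a DataFrame: split the unique values 70/30, then select the original rows by membership, so duplicates can never straddle the boundary. A minimal sketch, assuming the duplicated field is a hypothetical column named text (substitute your own column name):

import pandas as pd
from sklearn.model_selection import train_test_split

# df is the full DataFrame with duplicates; 'text' is a hypothetical column name
unique_texts = df['text'].unique()

# split the unique values 70/30 so no value can end up in both sets
train_texts, test_texts = train_test_split(unique_texts, test_size=0.3, random_state=0)

# map the split back to the original rows, duplicates included
train = df[df['text'].isin(train_texts)]
test = df[df['text'].isin(test_texts)]

Note that the row-level proportions will only approximate 70/30, since the duplicate groups have different sizes.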
Yahya