I am working with a training data set of 127,000 images scraped from the internet.

I know there are quite a few duplicates in there, and I want to remove them to improve the performance of my deep learning model.

I have tried several different ways to do this. Some did not work at all; others removed only a handful of images or far too many.

The last one I tried was this:

import hashlib
import os
import PIL
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
import time
import numpy as np

def file_hash(filepath):
    # MD5 hash of the file's raw bytes
    with open(filepath, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

os.chdir('/content/train')

file_list = os.listdir('.')  # '.' = current directory (the training folder)

duplicates = []
hash_keys = dict()
for index, filename in enumerate(file_list):
    if os.path.isfile(filename):
        with open(filename, 'rb') as f:
            filehash = hashlib.md5(f.read()).hexdigest()
        if filehash not in hash_keys:
            hash_keys[filehash] = index                       # first file seen with this hash
        else:
            duplicates.append((index, hash_keys[filehash]))   # (duplicate index, original index)

for file_indexes in duplicates[:30]:
    try:
        plt.subplot(121), plt.imshow(plt.imread(file_list[file_indexes[1]]))
        plt.title(str(file_indexes[1])), plt.xticks([]), plt.yticks([])

        plt.subplot(122), plt.imshow(plt.imread(file_list[file_indexes[0]]))
        plt.title(str(file_indexes[0]) + ' duplicate'), plt.xticks([]), plt.yticks([])
        plt.show()

    except OSError:
        continue

for index in duplicates:
    os.remove(file_list[index[0]])   # delete the duplicate, keep the original

This method found 490 duplicates, but I estimate there are at least a couple of thousand.
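
One thing I suspect is that hashing the raw file bytes only catches exact byte-for-byte copies. A variation I am considering (sketch below, not something I have run on the full set; the helper name pixel_hash is just for illustration) is hashing the decoded pixel data instead, so copies that were merely re-saved in a different format or container still collide. It still would not catch lossy re-encodes or resized images.

import hashlib
import numpy as np
from PIL import Image

def pixel_hash(filepath):
    # Decode the image and hash its pixel buffer instead of the raw file bytes,
    # so files that differ only in container/compression metadata still match.
    with Image.open(filepath) as img:
        arr = np.asarray(img.convert('RGB'))
    return hashlib.md5(arr.tobytes()).hexdigest()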

I have also tried imagededup with different methods and thresholds.

pip install imagededup

from imagededup.methods import DHash
method_object = DHash()
duplicates = method_object.find_duplicates_to_remove(image_dir='/content/train', 
                                                     max_distance_threshold=3) 
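
Before deleting anything, I also want to sanity-check what actually gets paired at a given threshold. Here is a minimal sketch (not something I have run yet) using find_duplicates with scores=True, which returns each file together with its matches and their Hamming distances:

from imagededup.methods import DHash

method_object = DHash()
pairs = method_object.find_duplicates(image_dir='/content/train',
                                      max_distance_threshold=3,
                                      scores=True)

# Flatten to (file, match, distance) and look at the loosest matches first,
# since those are the most likely false positives at this threshold.
flat = [(src, dup, dist) for src, matches in pairs.items() for dup, dist in matches]
for src, dup, dist in sorted(flat, key=lambda t: -t[2])[:20]:
    print(dist, src, dup)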

The last run found 23,919 duplicates, and it is usually somewhere between 20k and 35k depending on the method and the threshold. This is too many; training the model after removing all of these produces a worse result.
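
One idea I have not tried yet (sketch below; as I understand the imagededup docs, encode_images and the encoding_map argument let you hash every file once and then query at several thresholds) is to sweep max_distance_threshold and count how many files each value would remove, to see where the count jumps from plausible duplicates to over-matching:

from imagededup.methods import DHash

method_object = DHash()
encodings = method_object.encode_images(image_dir='/content/train')  # hash every image once

for t in range(6):
    to_remove = method_object.find_duplicates_to_remove(encoding_map=encodings,
                                                        max_distance_threshold=t)
    print(t, len(to_remove))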

Does anyone know of a better way to remove duplicate images?

JKnecht
  • Not a solution to this problem, but generally you might perform random transformations (like zoom, scale, rotate) on the images before you pass them to the model at every epoch. If that is the case, then I think duplicates aren't an issue, since the model won't see the same image again. – Epsi95 Jan 10 '21 at 09:34
  • @Epsi95 Yes, I have been thinking about exactly this. I am performing data augmentation. But most articles I have read still say it is very important to remove duplicates, especially if you have a lot of them. I could have 10k duplicates, I don't know. – JKnecht Jan 10 '21 at 09:41

0 Answers