
I am collecting data for a project. The data collection is done by recording videos of the subjects and the environment. However, when training the network, I do not want to use every image collected in the video sequence.

The main objective is to avoid training the network on redundant images. A video sequence recorded at 30 frames/sec can contain redundant images (images that are very similar) within short intervals: the T-th frame and the (T+1)-th frame can be nearly identical.

Can someone suggest ways to extract only the images that would be useful for training?

B200011011

1 Answer


Update #2: Further resources,

https://github.com/JohannesBuchner/imagehash

https://www.pyimagesearch.com/2017/11/27/image-hashing-opencv-python/

https://www.pyimagesearch.com/2020/04/20/detect-and-remove-duplicate-images-from-a-dataset-for-deep-learning/
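The hashing idea in the resources above can be sketched without any libraries. Below is a minimal average-hash (aHash) example in pure Python: each pixel of a small grayscale thumbnail becomes one bit (above or below the mean), and the Hamming distance between two hashes measures how different the frames are. This is an illustrative toy on tiny 2x2 grids; a real pipeline would resize the frame first (e.g. with PIL) or simply use the `imagehash` library linked above.

```python
def average_hash(pixels):
    """Simple average hash of a grayscale thumbnail.

    `pixels` is a 2D list of grayscale values (a real pipeline would
    resize each frame to e.g. 8x8 first). Each pixel maps to 1 if it
    is brighter than the mean, else 0.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical "thumbnails" and one very different one.
a = [[10, 200], [10, 200]]
b = [[12, 198], [11, 201]]
c = [[200, 10], [200, 10]]

print(hamming(average_hash(a), average_hash(b)))  # 0 -> near-duplicate, drop one
print(hamming(average_hash(a), average_hash(c)))  # 4 -> clearly different, keep both
```

Frames whose hash distance to the previous kept frame falls below a small threshold can then be treated as redundant.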

Update #1: You can use this repo to calculate the similarity between given images: https://github.com/quickgrid/image-similarity

If frames with certain objects (e.g., vehicle, device) are important, then use a pretrained object detector, if one is available, to extract the important frames.
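A minimal sketch of that filtering step. The `detect` callable here is a hypothetical stand-in for a pretrained detector (e.g. a YOLO or SSD model) and is assumed to return the set of class labels it finds in a frame; the dictionary-backed stub exists only to make the example runnable.

```python
def filter_frames_with_objects(frames, detect, wanted):
    """Keep only frames in which the detector reports a wanted class.

    `detect` stands in for a pretrained object detector and is assumed
    to return the set of class labels found in a frame; `wanted` is the
    set of object classes that make a frame worth keeping.
    """
    return [f for f in frames if wanted & detect(f)]

# Toy stand-in detector: frame id -> labels "seen" in that frame.
fake_labels = {0: {"tree"}, 1: {"vehicle"}, 2: set(), 3: {"device", "tree"}}
detect = lambda frame_id: fake_labels[frame_id]

print(filter_frames_with_objects([0, 1, 2, 3], detect, {"vehicle", "device"}))
# keeps frames 1 and 3
```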

Next, use a similarity method to remove similar images among nearby frames: keep discarding the following frames until one differs from the last kept frame by more than a chosen threshold.
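That greedy selection loop can be sketched generically. Here `distance` is any dissimilarity function you choose (hash Hamming distance, 1 minus cosine similarity of CNN features, etc.); the single-number "frames" are just a toy stand-in for real images.

```python
def select_keyframes(frames, distance, threshold):
    """Keep a frame only when it differs enough from the last kept frame.

    `distance` is any pairwise dissimilarity function; `threshold` is
    the chosen cutoff below which two frames count as redundant.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        if distance(kept[-1], f) > threshold:
            kept.append(f)
    return kept

# Toy example: each "frame" is a single brightness value.
frames = [0, 1, 2, 10, 11, 25]
print(select_keyframes(frames, lambda a, b: abs(a - b), threshold=5))
# keeps 0, 10, 25
```

Comparing against the last *kept* frame (rather than the immediately previous one) prevents slow drift across many near-identical frames from escaping the filter.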

This link should help you find the right method for your case:

https://datascience.stackexchange.com/questions/48642/how-to-measure-the-similarity-between-two-images

The repository below should help implement the idea in a few lines of code. It uses a CNN to extract features and then computes their cosine distance, as described there.

https://github.com/ryanfwy/image-similarity
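The cosine-distance part of that pipeline is simple enough to show directly. A pure-Python sketch follows; the plain lists here stand in for the CNN feature vectors the linked repo would extract (vectors are assumed non-zero).

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity of two feature vectors (e.g. CNN embeddings).

    Assumes both vectors are non-zero; 0.0 means identical direction,
    values near 1.0 mean very dissimilar.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 -> identical
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 -> orthogonal, very different
```

Frames whose feature vectors have a cosine distance below a small threshold would be treated as duplicates in the scheme above.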

B200011011