
I have a data set of bacteria images taken under a microscope and recorded with a high-resolution camera. The images have a resolution of 800x600, and in another data set (taken with a different microscope) the resolution is about 5312x2988. Models like VGG16 and InceptionV3 are trained on much smaller inputs (224x224 for VGG16, 299x299 for InceptionV3).

How can I properly feed the image data into the network? Should I downsample the images to 224x224? I think that would lose too much of the detail needed for prediction. Is there a better method?

saurabh kumar

1 Answer


In principle I see three possibilities:

First, crop the image. If your goal is to detect bacteria, you might be able to split each image into crops and look for bacteria in every crop.
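A minimal sketch of this tiling idea, assuming the images are NumPy arrays of shape (H, W, C); the 224x224 tile size matches the thread, but the overlapping stride of 200 is an illustrative choice of mine:

```python
import numpy as np

def grid_positions(length, tile, stride):
    """Start positions along one axis, plus a final tile flush with the border."""
    positions = list(range(0, length - tile + 1, stride))
    if positions[-1] != length - tile:
        positions.append(length - tile)
    return positions

def tile_image(image, tile=224, stride=200):
    """Split an HxWxC image into overlapping tile x tile crops."""
    h, w = image.shape[:2]
    crops = [image[y:y + tile, x:x + tile]
             for y in grid_positions(h, tile, stride)
             for x in grid_positions(w, tile, stride)]
    return np.stack(crops)

# An 800x600 RGB frame becomes a small batch of 224x224 crops.
crops = tile_image(np.zeros((600, 800, 3), dtype=np.uint8))
print(crops.shape)  # (12, 224, 224, 3)
```

The overlap (stride smaller than the tile size) is there so bacteria sitting on a crop border still appear fully inside at least one crop.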

Second, resize the image. If cropping is not possible and you want to use the input size of the InceptionV3 model, then you will have to resize the image down, e.g. to 224x224. Remember that a region of roughly 24x13 pixels in a 5312x2988 image then collapses into a single pixel, so if you are looking for small structures in your original image this will not work.
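A minimal resizing sketch with Pillow; the LANCZOS filter and the file name are my assumptions, not something prescribed here:

```python
from PIL import Image

def resize_for_model(path, size=(224, 224)):
    """Downsample a microscope frame to the network's input size.

    With a 5312x2988 source, roughly a 24x13 pixel region collapses
    into a single output pixel, so very small structures disappear.
    """
    img = Image.open(path).convert("RGB")
    return img.resize(size, resample=Image.LANCZOS)

# small = resize_for_model("bacteria_frame.png")  # hypothetical file name
```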

Third, use a larger input image size. You can prepend layers before the input of the Inception model and let the network learn a reasonable downsampling. I would only recommend this if the other approaches fail, because there is not much literature using this approach, and there is probably a reason for that.
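A rough Keras sketch of that idea, putting a small strided-convolution stem in front of InceptionV3 so the downsampling is learned; the 896x896 input size, layer widths, and strides are illustrative guesses, not part of the answer:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_large_input_model(input_shape=(896, 896, 3), num_classes=2):
    inputs = layers.Input(shape=input_shape)
    # Two strided convolutions reduce 896x896 to 224x224 while
    # letting the network learn which detail to keep.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
    # Project back to 3 channels so a standard InceptionV3 can follow.
    x = layers.Conv2D(3, 1, padding="same", activation="relu")(x)

    backbone = InceptionV3(include_top=False, weights=None,
                           input_shape=(224, 224, 3), pooling="avg")
    x = backbone(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_large_input_model()
model.summary()
```

Here the stem and the backbone are trained together, so the network can decide what to throw away during downsampling instead of a fixed bilinear resize doing it blindly.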

Thomas Pinetz