
My YOLOv5 model was trained on 416 × 416 images, and I need to detect objects in an input image of size 4008 × 2672. I split the image into 416 × 416 tiles and feed them to the model, which detects objects fine. But when I stitch the predicted tiles back together to reconstruct the original image, some objects at the tile edges end up split: half is detected in one tile and the other half in the neighbouring tile. Can someone tell me how to merge these half detections into a single detection during reconstruction?

3 Answers


Running a second detection pass with the tile grid offset (e.g. by half a tile) would ensure that every object cut by the first grid lies entirely inside a single tile of the second grid (assuming objects are smaller than a tile). You could then combine the two result sets to keep only the complete objects.
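A minimal sketch of combining the two passes, assuming all boxes have already been mapped back to global image coordinates as `(x1, y1, x2, y2)` tuples (function names and the IoU threshold are illustrative, not from any particular library): greedy suppression that prefers the larger box, since the cut half overlaps its full counterpart with high IoU and gets dropped.

```python
def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine_runs(run_a, run_b, iou_thr=0.4):
    # Like NMS, but ranked by area instead of confidence: among
    # overlapping detections, keep the largest (i.e. the uncut) box.
    kept = []
    for box in sorted(run_a + run_b, key=area, reverse=True):
        if all(iou(box, k) < iou_thr for k in kept):
            kept.append(box)
    return kept
```

This is essentially non-maximum suppression with area as the ranking score; if your detections carry confidences, ranking by confidence (or area × confidence) may work better.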


You wrote "I need to detect objects" but didn't say why splitting the image is the solution you chose. I have to ask: is splitting the image necessary? Below is the output of YOLOv4 on a (3840, 2160, 3) image. YOLOv4 resizes the image internally to the size specified as an argument (allowed input dims for the YOLO family: (320, 320), (416, 416), (512, 512), (608, 608)), so this should be transparent to the user.

[example YOLOv4 detection output on a 3840 × 2160 image]
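For reference, that internal resize is a letterbox: scale the image to fit the network size while preserving aspect ratio, then pad the remainder. A rough sketch of the arithmetic (illustrative only, not the actual ultralytics helper):

```python
def letterbox_shape(w, h, size=416):
    # scale so the longer side fits the network input, keep aspect ratio
    r = min(size / w, size / h)
    new_w, new_h = round(w * r), round(h * r)
    # remaining space is filled with padding
    pad_w, pad_h = size - new_w, size - new_h
    return new_w, new_h, pad_w, pad_h
```

This also shows why very small objects suffer on large inputs: a 3840-wide frame squeezed into 416 pixels leaves only a few pixels per small object, which is the trade-off the tiling approach avoids.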

  • Since I need to detect very small objects, I thought splitting the image would give the best results. I tried giving the model the original image and it is not able to detect all the objects; if I split the image and feed the tiles to the model, it gives better results. – user3288743 Mar 09 '22 at 15:15

I think you have to compute the union of the split boxes to get a single bounding box, much as you might have computed IoU while tiling the images. Did you try that? I am on the same path, using an image-tiling technique to detect small objects.
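A rough sketch of such a union merge, assuming boxes are already in global image coordinates as `(x1, y1, x2, y2)` tuples (function names and the `gap` tolerance are illustrative): two boxes that touch along a tile boundary and overlap in the other axis are replaced by their union, repeatedly, until nothing more merges.

```python
def boxes_abut(a, b, gap=2):
    # True if the boxes touch along a (near-)shared edge within `gap` pixels
    # and overlap in the perpendicular axis
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    h_touch = abs(ax2 - bx1) <= gap or abs(bx2 - ax1) <= gap
    v_overlap = min(ay2, by2) - max(ay1, by1) > 0
    v_touch = abs(ay2 - by1) <= gap or abs(by2 - ay1) <= gap
    h_overlap = min(ax2, bx2) - max(ax1, bx1) > 0
    return (h_touch and v_overlap) or (v_touch and h_overlap)

def merge_tile_boxes(boxes, gap=2):
    # repeatedly replace abutting pairs with their union box
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if boxes_abut(boxes[i], boxes[j], gap):
                    a, b = boxes[i], boxes[j]
                    boxes[j] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

In practice you would also check that the two halves share the same predicted class before merging, and only apply this near known tile boundaries so genuinely adjacent objects are not fused.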