Here is my first question. I am preparing a dataset for object detection and have done the following so far:
- I have an original picture of 4000 x 3000 (width x height).
- I used the annotation platform Roboflow to annotate it in COCO format, with close to 250 objects in the picture.
- Roboflow returned a downscaled picture (2048x1536) together with a JSON file containing the annotations in COCO format.
- Then, to build a dataset from my original picture (since it contains many objects and is large enough), I decided to tile it into patches of 224x224. For this purpose, I upscaled it slightly (to 4032x3136) so it could be sliced evenly, which gives 252 patches (see the sketch right after this list).
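This is roughly how I am doing the tiling — a minimal sketch assuming Pillow (the file names are hypothetical):

```python
# Tile the upscaled 4032x3136 image into 224x224 patches (18 x 14 = 252 tiles).
from PIL import Image

TILE = 224

img = Image.open("original_4000x3000.jpg").resize((4032, 3136))  # slight upscale

for row in range(3136 // TILE):        # 14 rows
    for col in range(4032 // TILE):    # 18 columns
        left, top = col * TILE, row * TILE
        patch = img.crop((left, top, left + TILE, top + TILE))
        patch.save(f"patch_r{row:02d}_c{col:02d}.jpg")
```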
QUESTIONS
How can I rescale the bounding boxes from the Roboflow 2048x1536 picture to my upscaled original picture (4032x3136)?
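My understanding is that this is a per-axis scaling of the COCO `[x, y, w, h]` boxes. Here is a minimal sketch of what I have in mind, assuming the Roboflow export is a standard COCO JSON (file names are hypothetical):

```python
# Rescale COCO [x, y, w, h] boxes from 2048x1536 to 4032x3136.
import json

SX = 4032 / 2048   # horizontal scale factor (1.96875)
SY = 3136 / 1536   # vertical scale factor (~2.0417)

with open("roboflow_coco.json") as f:
    coco = json.load(f)

for ann in coco["annotations"]:
    x, y, w, h = ann["bbox"]
    ann["bbox"] = [x * SX, y * SY, w * SX, h * SY]

with open("coco_4032x3136.json", "w") as f:
    json.dump(coco, f)
```

Note that the two scale factors are not equal, because 4032x3136 does not keep the 4:3 aspect ratio of 2048x1536 (or of the 4000x3000 original), so this scaling also stretches the boxes slightly in the vertical direction.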
Once the bounding boxes are rescaled to the 4032x3136 picture, how can I map them onto each of the 224x224 patches created by slicing it?
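What I have in mind for this step is the sketch below: for each tile, shift the boxes into tile-local coordinates and clip whatever crosses the tile border (the helper name and the minimum-size threshold are my own assumptions):

```python
# Convert full-image COCO boxes to tile-local coordinates for one 224x224 patch.
TILE = 224

def boxes_for_tile(annotations, row, col, tile=TILE, min_size=1.0):
    """annotations: COCO dicts whose 'bbox' = [x, y, w, h] is in 4032x3136 coords."""
    ox, oy = col * tile, row * tile                  # tile origin in the big image
    local = []
    for ann in annotations:
        x, y, w, h = ann["bbox"]
        # intersection of the box with the tile
        x1, y1 = max(x, ox), max(y, oy)
        x2, y2 = min(x + w, ox + tile), min(y + h, oy + tile)
        if x2 - x1 >= min_size and y2 - y1 >= min_size:
            local.append({**ann, "bbox": [x1 - ox, y1 - oy, x2 - x1, y2 - y1]})
    return local
```

Objects that cross a tile border get clipped to the tile here; a minimum-size threshold like the one above is one way to drop the resulting slivers, but I'm not sure that's the best policy.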
Thank you!!