How can I get the coordinates of the bounding boxes produced by the inference script of Google's Object Detection API? I know that printing boxes[0][i] returns the predictions for the i-th detection in an image, but what exactly do these returned numbers mean? Is there a way to get xmin, ymin, xmax, ymax? Thanks in advance.
- If you are happy with my answer feel free to mark it as the accepted one. – Gal_M Nov 20 '19 at 08:52
2 Answers
Google's Object Detection API returns bounding boxes in the format [ymin, xmin, ymax, xmax] and in normalised form (full explanation here). To find the (x, y) pixel coordinates, we need to multiply the results by the width and height of the image. First, get the width and height of your image:
width, height = image.size  # for a PIL Image, size is (width, height)
Then extract ymin, xmin, ymax, xmax from the boxes object and multiply to get the (x, y) coordinates:
ymin = boxes[0][i][0]*height
xmin = boxes[0][i][1]*width
ymax = boxes[0][i][2]*height
xmax = boxes[0][i][3]*width
Finally print the coordinates of the box corners:
print('Top left')
print(xmin, ymin)
print('Bottom right')
print(xmax, ymax)
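
Putting it together, here is a minimal sketch of the full loop over detections, assuming `image` is a PIL Image and `boxes`/`scores` are the arrays returned by the inference script (shapes [1, N, 4] and [1, N]); the 0.5 confidence threshold is just an illustrative value:

# Sketch: convert every confident detection to pixel corner coordinates.
width, height = image.size  # PIL Image
for i in range(boxes.shape[1]):
    if scores[0][i] < 0.5:  # illustrative confidence threshold
        continue
    ymin, xmin, ymax, xmax = boxes[0][i]
    left, top = xmin * width, ymin * height
    right, bottom = xmax * width, ymax * height
    print('Detection %d: top left (%.1f, %.1f), bottom right (%.1f, %.1f)'
          % (i, left, top, right, bottom))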

Gal_M
- Any explanation for why this is done? Your link is dead. Is it because the input images get resized to a standard size? And that normalised coordinates are useful to work with any sized input? – CMCDragonkai Mar 08 '18 at 06:04
- Is `image` a numpy array? If so, `image.size` gives the number of elements in the array, and `image.shape` gives the dimensions of the image. But I thought it gives the number of rows, then the number of columns for a matrix, i.e. `height, width = image.shape`. – KolaB Mar 08 '18 at 17:41
- @CMCDragonkai, yes that would make sense. Lots of sizing and resizing in neural networks. – Gal_M Mar 11 '18 at 08:52
- @KolaB Expect the docs to keep moving for some time to come. https://www.tensorflow.org/api_guides/python/image#Working_with_Bounding_Boxes/ – Gal_M Mar 11 '18 at 08:56
- @Gal_M Thanks for the updated link. My comment was about the line in your answer that says `width, height = image.size`. I think this should be `height, width = image.shape[:2]`. I still think so after reading the updated link. The very first section "Encoding and Decoding" says "*Encoded images are represented by scalar string Tensors, decoded images by 3-D uint8 tensors of* ***shape*** `[height, width, channels]`". It would be great if you could clarify why you use `width, height = image.size`. – KolaB Mar 11 '18 at 11:40
The boxes array that you mention contains this information. It is an [N, 4] array where each row has the format [ymin, xmin, ymax, xmax], in normalized coordinates relative to the size of the input image.
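
As a rough illustration of that layout (assuming `boxes` is such an [N, 4] numpy array and `image` is a numpy array of shape [height, width, channels]), all N boxes can be denormalised in one step:

import numpy as np

# Sketch: scale all normalized [ymin, xmin, ymax, xmax] rows to pixel units at once.
height, width = image.shape[:2]  # numpy image: shape is [height, width, channels]
pixel_boxes = boxes * np.array([height, width, height, width])
ymin, xmin, ymax, xmax = pixel_boxes[0]  # pixel corners of the first detection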

Jonathan Huang