
Imagine having an array of images like these (example images: "white gap inside", "gapless", "uncentered", "backgrounded").

The background is always white (even in the 3rd picture, where the main object is that big brown rectangle with shapes inside).

No matter which type of image is given, you would need to: 1) find the main object's bounding rectangle, 2) crop it out like this

(image: the cropped-out object)

3) and place it in the center of a blank square image.
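For context, steps 2 and 3 could be sketched roughly like this, assuming a UIImage input and a bounds rectangle in pixel coordinates from step 1 (cropAndCenter and padding are hypothetical names, not anything from a specific library):

```swift
import UIKit

// Rough sketch of steps 2 and 3: crop the object out of the source image
// and draw it centered on a blank white square canvas.
func cropAndCenter(_ image: UIImage, bounds: CGRect, padding: CGFloat = 20) -> UIImage? {
    guard let cgImage = image.cgImage,
          let cropped = cgImage.cropping(to: bounds) else { return nil }

    // Square side = larger dimension of the crop plus padding on both sides.
    let side = max(bounds.width, bounds.height) + padding * 2
    let canvasSize = CGSize(width: side, height: side)

    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    return renderer.image { context in
        // Blank white square background.
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: canvasSize))

        // Draw the cropped object centered on the canvas.
        let origin = CGPoint(x: (side - bounds.width) / 2,
                             y: (side - bounds.height) / 2)
        UIImage(cgImage: cropped).draw(in: CGRect(origin: origin, size: bounds.size))
    }
}
```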

How would you achieve this? I already know how to crop out anything given a rectangle and place it anywhere; I just need to know the best way to do the 1st step. The Vision API can detect rectangles, faces and barcodes, but what I need seems even simpler: just find the leftmost, rightmost, topmost and bottommost non-white pixels, and those will be my bounds. Is there any way other than iterating over the pixel buffer pixel by pixel?
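For reference, the brute-force scan I'm describing might look roughly like this (nonWhiteBounds and whiteThreshold are hypothetical names; it assumes the image is first rendered into an 8-bit RGBA buffer):

```swift
import UIKit

// Sketch of the brute-force approach: render the image into a known RGBA
// byte buffer, then track the extremes of every pixel that isn't "white enough".
func nonWhiteBounds(of image: UIImage, whiteThreshold: UInt8 = 250) -> CGRect? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)

    // Render the image into the buffer in a predictable RGBA8888 layout.
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(
            data: buffer.baseAddress,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: width * 4,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        ) else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            // Treat a pixel as content if any colour channel is darker than the threshold.
            if pixels[i] < whiteThreshold || pixels[i + 1] < whiteThreshold || pixels[i + 2] < whiteThreshold {
                minX = min(minX, x); maxX = max(maxX, x)
                minY = min(minY, y); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }   // all-white image
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}
```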

1 Answer


What is the type of these images? UIImage? CAShapeLayer? In most cases, you should be able to get the .frame from each image in the array, which will give you a CGRect with the X and Y origin coordinates, as well as the height and width dimensions. You should also have access to the .midX and .midY coordinates, or .center.x and .center.y, to find the midpoint you're looking for. Unless what you're talking about is taking in a flattened bitmap like a .jpg or .png and running shape detection on its contents, in which case you would need something like Vision to accomplish what you're trying to do.
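If you do go through Vision on a flattened bitmap, one possible route (iOS 14+) is VNDetectContoursRequest; this is only a hedged sketch, not a definitive implementation, and objectBoundsUsingVision is a made-up helper name. Note that Vision returns normalized, bottom-left-origin coordinates that need converting to pixel space:

```swift
import Vision
import UIKit

// Sketch: detect dark-on-light contours and take the bounding box of all of
// them as an approximation of the main object's bounds, in pixel coordinates.
func objectBoundsUsingVision(in image: UIImage) throws -> CGRect? {
    guard let cgImage = image.cgImage else { return nil }

    let request = VNDetectContoursRequest()
    request.detectsDarkOnLight = true   // white background, darker object

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first as? VNContoursObservation else {
        return nil
    }

    // normalizedPath contains every detected contour; its bounding box is the
    // normalized extent of all content in the image.
    let normalized = observation.normalizedPath.boundingBox
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)

    // Flip the Y axis and scale up to pixel coordinates.
    return CGRect(
        x: normalized.origin.x * width,
        y: (1 - normalized.origin.y - normalized.height) * height,
        width: normalized.width * width,
        height: normalized.height * height
    )
}
```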

miles_b