I'm working on a computer vision application (using OpenCV) whose goal is to measure the width of an object before/after it falls into a container. After the fall its position will be random, something like this to be clear: candies
The object I want to measure can be isolated by filtering the image and will always be on top, but in the general case its width won't be parallel to the camera's image plane. In the worst scenario the longer face could be heavily occluded, which makes things very hard.
So my question is: which of these two strategies will lead to better accuracy?
1) Take a pair of stereo images of the container, locate two points on the longer face of the object (vertices or midpoints of the edges), and compute the distance in 3D space between those points.
2) Work with a single camera placed close to the object before it falls into the container, where its motion is perpendicular to the camera's optical axis, so the entire width lies in a plane parallel to the image plane. With a reference of known size and a calibrated camera, the width can then be measured from a single image.
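To make the comparison concrete, here is a minimal sketch of the measurement math behind both strategies, under simplifying assumptions I'd be making: a rectified stereo pair with known focal length `f` (pixels) and baseline `B` (mm) for strategy 1, and a known-size reference lying in the same plane as the object's width for strategy 2. All names and numbers below are hypothetical, just to illustrate the geometry.

```python
import math

def triangulate(x_left, x_right, y, f, B):
    """Recover a 3D point (mm) from a rectified stereo pair.
    x_left / x_right are pixel columns relative to each camera's
    principal point; y is the (shared) pixel row."""
    disparity = x_left - x_right
    Z = f * B / disparity          # depth from disparity
    X = x_left * Z / f
    Y = y * Z / f
    return (X, Y, Z)

def width_stereo(endpoint_a, endpoint_b, f, B):
    """Strategy 1: Euclidean distance between the two triangulated
    endpoints of the longer face. Each endpoint is (x_left, x_right, y)."""
    a = triangulate(*endpoint_a, f, B)
    b = triangulate(*endpoint_b, f, B)
    return math.dist(a, b)

def width_single_view(width_px, ref_px, ref_mm):
    """Strategy 2: the width lies in a plane parallel to the image
    plane, next to a reference of known size in that same plane, so a
    single scale factor converts pixels to mm."""
    return width_px * ref_mm / ref_px

# Illustrative numbers: f = 700 px, B = 60 mm, a 30 mm face at Z = 500 mm.
w1 = width_stereo((0, -84, 0), (42, -42, 0), f=700, B=60)
w2 = width_single_view(width_px=420, ref_px=280, ref_mm=20)
print(round(w1, 3), round(w2, 3))   # both recover 30.0 mm
```

My intuition is that strategy 1's accuracy hinges on localizing the endpoints and on disparity error growing with depth, while strategy 2's hinges on the parallel-plane assumption actually holding during the fall.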
Thank you in advance