I'm using a Faster R-CNN network for object/symbol detection and I'm facing two major issues.
First, the bounding boxes of the detected symbols are not tight enough. In many cases only 50%-70% of the symbol is covered by the predicted box (for example, resistor R1 in the image below). What can I do to make the bounding boxes more accurate?
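To quantify "not tight enough" I've been measuring IoU between the predicted box and a hand-drawn ground-truth box. This is just a minimal standalone sketch of that check (the example coordinates are illustrative, not from my data):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Illustrative case: the predicted box covers roughly the left half of the symbol.
pred = [10, 10, 60, 50]
gt = [10, 10, 100, 50]
print(iou(pred, gt))  # ~0.56, i.e. the kind of partial overlap I see on R1
```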
Second, the example below contains three resistors: R1, R2, and R3. The trained network detects R1 with only partial IoU and R2 correctly, but it misses R3 completely, even though R3 is on the same page and is the same symbol as R1 and R2. Why does this happen, and how can I overcome it? (I tried a correlation-based approach, but there are too many variations to consider in my use case.)
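For context, the detection step I'm describing looks roughly like the sketch below. This is not my exact pipeline: it assumes torchvision's `fasterrcnn_resnet50_fpn`, and the weight path, class count, and 0.5 score threshold are placeholders for illustration only.

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Placeholder model setup: a torchvision Faster R-CNN with a custom class count.
NUM_CLASSES = 4  # background + a few symbol classes (illustrative)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES
)
model.load_state_dict(torch.load("symbol_detector.pth", map_location="cpu"))  # placeholder path
model.eval()

image = Image.open("schematic_page.png").convert("RGB")  # placeholder path
tensor = F.to_tensor(image)

with torch.no_grad():
    output = model([tensor])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep detections above a confidence threshold (0.5 here is arbitrary).
keep = output["scores"] > 0.5
boxes = output["boxes"][keep]      # symbols like R3 never show up here
labels = output["labels"][keep]
scores = output["scores"][keep]
print(list(zip(labels.tolist(), scores.tolist())))
```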
How can I fix the above issues? Thanks in advance.