I am trying to determine the accuracy of my object detection system, so I am feeding it rotated copies of the original image (below) at 90 deg, 45 deg, 135 deg, 180 deg, etc. For each rotated image, the system reports detections in that image's own frame of reference, and I want to convert those detection points back to the original image's frame of reference (in green) so that I can combine the detections and measure accuracy.
Original image link:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/37024-Tabby-cat-white-background.jpg
For example, the system reads in a copy of the original image rotated 90 deg clockwise:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/37024-Tabby-cat-white-background2.jpg
Using the rotated image, the system detects something, indicated by the red dot. However, those coordinates are expressed in the purple frame of reference. How do I convert the coordinates of the red point back to the original image's frame of reference (in green) for comparison?
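For the 90 deg clockwise case specifically, I think the mapping can be worked out by hand. A minimal sketch of what I mean (assuming W and H are the original image's width and height, (xp, yp) is the red dot in the rotated image, and pixel coordinates are measured as x = column, y = row from the top-left corner of each image):

```python
# Sketch for the 90 deg clockwise case only.
# (xp, yp): detection in the rotated image (purple frame), which is H wide and W tall.
# (x, y):   the same point in the original image (green frame), which is W wide and H tall.
x = yp            # rows of the rotated image become columns of the original
y = H - 1 - xp    # columns of the rotated image count down the rows of the original
```

What I cannot see is how to generalise this to arbitrary angles such as 45 deg and 135 deg.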
I considered using rotation matrices for points, but it seems that those matrices only work when both points are expressed in the same, fixed frame of reference.
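For reference, this is the kind of conversion I am trying to express. It is only a sketch under my own assumptions (the clockwise rotation about the image centre, the rotated canvas size (rot_w, rot_h), the function name rotated_to_original, and the y-down pixel convention are all mine), so the frame-of-reference handling here may be exactly where I am going wrong:

```python
import math

def rotated_to_original(xp, yp, angle_deg, orig_w, orig_h, rot_w, rot_h):
    """Map a detection (xp, yp) from a rotated image's frame back to the
    original image's frame.

    Assumptions (mine): the image was rotated clockwise on screen by
    angle_deg about its centre, the rotated canvas has size (rot_w, rot_h),
    and pixel coordinates have x to the right and y downwards with the
    origin at the top-left corner.
    """
    theta = math.radians(angle_deg)
    # Express the detection relative to the rotated image's centre.
    u = xp - (rot_w - 1) / 2.0
    v = yp - (rot_h - 1) / 2.0
    # Undo the clockwise rotation. With y pointing down, a clockwise rotation
    # by theta uses the matrix [[cos, -sin], [sin, cos]], so its inverse is
    # [[cos, sin], [-sin, cos]].
    x = math.cos(theta) * u + math.sin(theta) * v
    y = -math.sin(theta) * u + math.cos(theta) * v
    # Shift back to the original image's top-left origin.
    return x + (orig_w - 1) / 2.0, y + (orig_h - 1) / 2.0


# Quick check against the 90 deg example: a 600x400 original rotated 90 deg
# clockwise gives a 400x600 canvas, and a dot at (50, 120) there should map
# (up to floating-point error) to (120, 400 - 1 - 50) = (120, 349).
print(rotated_to_original(50, 120, 90, 600, 400, 400, 600))
```

Is this the right way to handle the two different frames of reference, or is there a standard approach for converting the detections back before combining them?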