I am trying to create a dataset for later ML model training for lane detection. As input I want to use the picture from the CARLA RGB camera (left side of the image), and the label should be the [x, y] points of the visible lane.
So the problem is: how can I transform the 3D waypoints (right side of the image) into 2D points that match the exact [x, y] positions of the lane in the picture I get from my RGB camera?
I have already done some research online and have three possible solutions in mind, but I am not sure which one leads down the right path:
- Transform the 3D points directly to 2D points myself, since I already know the position of my camera. (Problem: I do not know how; a rough sketch of my understanding is below this list.)
- In the left image you can see that my points have already been drawn into the image (I do not know why). So there should be an API function I can call to get the points in the picture directly. (Problem: I did not find such an API.)
- Since I already have the image with the points drawn in, I could write an OpenCV function that reads the needed points directly from the image and saves them to a file (sketch also below the list). This should work, but the result may not be as accurate as the solutions above.
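
For the first option, this is roughly what I have pieced together from the pinhole camera model: build the intrinsic matrix K from the sensor's `image_size_x`, `image_size_y` and `fov` attributes, transform each waypoint location from world coordinates into the camera frame using the camera's transform, and then project with K. This is only a sketch of my understanding, not a confirmed solution; I am assuming `carla.Transform.get_inverse_matrix()` exists in my CARLA version, and the function names are my own. Please correct me if the axis conversion is wrong.

```python
import numpy as np


def build_intrinsics(image_w, image_h, fov_deg):
    """Intrinsic matrix K from the camera's image size and horizontal FOV."""
    focal = image_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    K = np.identity(3)
    K[0, 0] = K[1, 1] = focal
    K[0, 2] = image_w / 2.0
    K[1, 2] = image_h / 2.0
    return K


def world_to_image(location, camera_transform, K):
    """Project a carla.Location in world coordinates to [x, y] pixel coordinates."""
    # World frame -> camera frame (UE4 axes: x forward, y right, z up).
    # Assumes camera_transform.get_inverse_matrix() is available.
    world_to_cam = np.array(camera_transform.get_inverse_matrix())
    p_world = np.array([location.x, location.y, location.z, 1.0])
    p_cam = world_to_cam @ p_world
    # Re-order to the usual image convention: x right, y down, z forward
    p_cam = np.array([p_cam[1], -p_cam[2], p_cam[0]])
    if p_cam[2] <= 0.0:
        return None  # waypoint is behind the camera
    uvw = K @ p_cam
    return [uvw[0] / uvw[2], uvw[1] / uvw[2]]


# Usage (assuming `camera` is the spawned RGB sensor and `waypoint` a carla.Waypoint):
# K = build_intrinsics(int(camera.attributes['image_size_x']),
#                      int(camera.attributes['image_size_y']),
#                      float(camera.attributes['fov']))
# pixel = world_to_image(waypoint.transform.location, camera.get_transform(), K)
```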
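
For the third option, this is a minimal sketch of what I had in mind, assuming the drawn waypoint markers have a distinct colour. I guessed bright red here; the `inRange` bounds would need to be tuned to the actual marker colour:

```python
import cv2
import numpy as np


def extract_marker_points(image_bgr, lower=(0, 0, 200), upper=(80, 80, 255)):
    """Return the [x, y] pixel centres of all colour-matched markers in a BGR image."""
    # Threshold on the (assumed) marker colour, then take the centroid of each blob
    mask = cv2.inRange(image_bgr, np.array(lower), np.array(upper))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            points.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    return points
```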
I am looking forward to further suggestions. :D