I'm trying to implement a "whack-a-mole" type game using 3D (OpenGL ES) in Android. For now, I have ONE 3D shape (a spinning cube) on the screen at any given time that represents my "mole". I have a touch event handler in my view which randomly sets some x,y values in my renderer, causing the cube to move around (using glTranslatef()).
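For context, the relevant part of my renderer's onDrawFrame looks roughly like this (simplified; _moleX, _moleY and _angle are just my own field names that the touch handler updates):

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
// Move the "mole" to wherever the touch handler last put it, and spin it
gl.glTranslatef(_moleX, _moleY, 0f);
gl.glRotatef(_angle, 1f, 1f, 0f);
cube.draw(gl);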
I've yet to come across any tutorial or documentation that completely bridges the screen touch events to a 3D scene. I've done a lot of legwork to get to where I'm at but I can't seem to figure this out the rest of the way.
From developer.android.com I'm using what I guess could be considered helper classes for the matrices: MatrixGrabber.java, MatrixStack.java and MatrixTrackingGL.java.
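In case it matters, this is how I hook those classes in, as far as I understood the ApiDemos sample (the view and renderer variables are my own, not anything official):

GLSurfaceView view = new GLSurfaceView(this);
// Wrap the GL interface so MatrixTrackingGL can record the current matrices
view.setGLWrapper(new GLSurfaceView.GLWrapper() {
    public GL wrap(GL gl) {
        return new MatrixTrackingGL(gl);
    }
});
view.setRenderer(renderer);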
I use those classes with GLU.gluUnProject, which is supposed to do the conversion from the real screen coordinates to the 3D or object coordinates.
Snippet:
MatrixGrabber mg = new MatrixGrabber();
int viewport[] = {0, 0, renderer._width, renderer._height};
mg.getCurrentModelView(renderer.myg);
mg.getCurrentProjection(renderer.myg);
float nearCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
float farCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
float x = event.getX();
float y = event.getY();
GLU.gluUnProject(x, y, -1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, nearCoords, 0);
GLU.gluUnProject(x, y, 1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, farCoords, 0);
This snippet executes without error, but the output does not look correct. I know the screen has the origin (0,0) at the bottom left. And the 3D scene, at least mine, seems to have its origin right at the middle of the screen, like a classic Cartesian system. So I ran my code by touching the bottom left of the screen, which gave screen coordinates of (0, 718). My outputs from the last parameters to gluUnProject are:
Near: {-2.544, 2.927, 2.839, 1.99}
Far: {0.083, 0.802, -0.760, 0.009}
Those numbers don't make any sense to me. My touch event was in the 3rd quadrant, so all my x,y values for near and far should be negative, but they aren't. The gluUnProject documentation doesn't mention any need to convert the screen coordinates. Then again, that same documentation would lead you to believe that near and far should be arrays of size 3, but they have to be of size 4 and I have NO CLUE why.
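For what it's worth, here is my best guess at what the call should look like if the y axis has to be flipped and the result divided by that 4th (w?) component. I'm not at all sure this is right, which is really the heart of my first question:

float[] near = new float[4];
float[] far = new float[4];
int[] viewport = {0, 0, renderer._width, renderer._height};

// Guess: MotionEvent y grows downward, the GL window y grows upward, so flip it
float winX = event.getX();
float winY = viewport[3] - event.getY();

// Guess: winZ of 0 = near plane, 1 = far plane (instead of -1 / +1)
GLU.gluUnProject(winX, winY, 0.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, near, 0);
GLU.gluUnProject(winX, winY, 1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, far, 0);

// Guess: the 4th element is a homogeneous w that the x,y,z still have to be divided by
for (int i = 0; i < 3; i++) {
    near[i] /= near[3];
    far[i] /= far[3];
}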
So, I've got two questions (I'm sure more will come up).
- How can I make sure I am getting the proper near and far coordinates based on the screen coordinates?
- Once I have the near and far coordinates, how do I use them to find whether the line they create intersects an object on the screen? (A rough sketch of the kind of test I have in mind is below.)
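To make the second question concrete, this is what I imagine the test might look like, treating the cube as a bounding sphere centred at (cubeX, cubeY, cubeZ) with some radius (all of these names are mine, not from any API), and checking the distance from that centre to the near-to-far line:

// Direction of the ray from the near point to the far point
float dirX = far[0] - near[0];
float dirY = far[1] - near[1];
float dirZ = far[2] - near[2];

// Vector from the ray origin (near point) to the sphere centre
float ocX = cubeX - near[0];
float ocY = cubeY - near[1];
float ocZ = cubeZ - near[2];

// Parameter of the closest point on the ray to the sphere centre
float dirLenSq = dirX * dirX + dirY * dirY + dirZ * dirZ;
float t = (ocX * dirX + ocY * dirY + ocZ * dirZ) / dirLenSq;

// Closest point on the ray, then squared distance from it to the centre
float px = near[0] + t * dirX;
float py = near[1] + t * dirY;
float pz = near[2] + t * dirZ;
float distSq = (px - cubeX) * (px - cubeX)
             + (py - cubeY) * (py - cubeY)
             + (pz - cubeZ) * (pz - cubeZ);

boolean hit = distSq <= radius * radius; // the "mole" was whacked?

Is something along these lines the right approach, or is there a more standard picking technique I should be using?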