
I would like to learn more about occlusion in augmented reality apps using the data from a depth sensor (e.g. a Kinect or a RealSense RGB-D dev kit).

I read that one should compare the z-buffer values of the rendered objects with the depth map values from the sensor and somehow mask them so that only the pixels closer to the user are shown. Does anyone have any resources or open-source code that does this, or could you help me understand it?

What's more, I want my hand (which I detect as a blob) to always occlude the virtual objects. Isn't there an easier way to do this?
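
Roughly what I picture, if it helps (just a sketch with made-up helper names, assuming I can rasterise my hand blob into the stencil buffer and that the framebuffer has a stencil attachment):

    #include <GL/glew.h> // or whichever GL loader is in use

    void renderWithHandOcclusion()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

        drawCameraBackground(); // video frame, no depth writes (hypothetical helper)

        // Pass 1: mark hand pixels with stencil value 1, colour writes disabled.
        // drawHandBlobQuad() would draw a full-screen quad that discards
        // every pixel outside the detected blob (hypothetical helper).
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawHandBlobQuad();

        // Pass 2: draw the virtual objects only where the stencil is still 0,
        // so the hand always stays in front of them.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawVirtualObjects(); // hypothetical helper

        glDisable(GL_STENCIL_TEST);
    }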

mariosbikos

1 Answer


You can upload the depth data as a texture and bind it as the depth buffer for the render target.
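
For example, with desktop OpenGL it could look roughly like this (just a sketch; the function names and the GLEW loader are my own choices):

    #include <GL/glew.h>
    #include <vector>

    GLuint createSensorDepthTexture(int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // 32-bit float depth so the normalized [0,1] sensor values can be written directly.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F,
                     width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }

    GLuint createOcclusionFbo(GLuint colorTex, GLuint depthTex)
    {
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        // The sensor depth texture becomes the depth buffer of this render target.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        return fbo;
    }

    // Per frame: convert the sensor frame to [0,1] depth (see the next snippet),
    // upload it, then render the virtual objects with the depth test enabled.
    // Real-world pixels that are closer than the virtual geometry will win.
    void uploadSensorDepth(GLuint depthTex, int width, int height,
                           const std::vector<float>& normalizedDepth)
    {
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_DEPTH_COMPONENT, GL_FLOAT, normalizedDepth.data());
    }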

This requires matching the near and far planes of the projection matrix with the min and max values of the depth sensor.
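
One way to do that matching (a sketch; it assumes a standard perspective projection like the one gluPerspective builds, with the sensor distance d in the same units as the near and far planes):

    // Map a metric distance d to the non-linear [0,1] window-space depth the
    // rasterizer produces for a perspective projection with planes n and f.
    // Using the same n and f here and in the projection matrix is what keeps
    // the sensor depth and the virtual objects' depth comparable.
    float metricToWindowDepth(float d, float n, float f)
    {
        // Invalid sensor readings (0 / NaN) are pushed to the far plane so
        // they never occlude anything.
        if (!(d > 0.0f)) return 1.0f;
        return (f * (d - n)) / (d * (f - n)); // 0 at d == n, 1 at d == f
    }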

If the render target isn't the same size as the depth data, then you can instead sample the depth texture in the fragment shader and call discard when the fragment would be occluded.
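
A sketch of that shader path (GLSL kept as a C++ raw string here; the uniform names are made up):

    // Fragment shader that hides virtual-object fragments behind the real scene.
    const char* occlusionFragmentSrc = R"GLSL(
    #version 330 core
    uniform sampler2D uSensorDepth;   // sensor depth already mapped to [0,1]
    uniform vec2      uViewportSize;  // render-target size in pixels
    out vec4 fragColor;

    void main()
    {
        // gl_FragCoord.xy is in render-target pixels; normalising by the
        // viewport size lets the sensor texture be a different resolution.
        vec2 uv = gl_FragCoord.xy / uViewportSize;
        float realDepth = texture(uSensorDepth, uv).r;

        // The real surface is closer than this virtual fragment: hide it.
        if (realDepth < gl_FragCoord.z)
            discard;

        fragColor = vec4(1.0, 0.5, 0.2, 1.0); // placeholder shading
    }
    )GLSL";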

ratchet freak
  • Quick question: right now I render the video stream using OpenGL's glDrawPixels and then render the virtual objects with the required projection matrix, but I don't use any gluPerspective calls. What is the correct way to do AR: with gluPerspective or not? And should I render the video stream using textures or glDrawPixels? – mariosbikos May 13 '15 at 12:19
  • Can you tell me how to pass a texture as the depth buffer? Also, by matching min and max values, do you mean the min and max values the sensor can track, or the min and max values the sensor tracked in each frame of the idle loop? – mariosbikos May 15 '15 at 13:46