I would like to learn more about occlusion in augmented reality apps using the data from a depth sensor (e.g. Kinect or RealSense RGB-D Dev Kit).
I read that what one should do is compare the z-buffer values of the rendered objects with the depth map values from the sensor, and then mask the output so that only the pixels closer to the user are shown. Does anyone have any resources or open source code that does this, or could help me understand it?
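To make my current understanding concrete, here is a rough sketch of the per-pixel test I have in mind (plain NumPy, all names made up by me; I am assuming the sensor depth is in meters and already registered to the render camera, and that `near`/`far` are the clip planes needed to linearize the z-buffer):

```python
import numpy as np

def occlusion_mask(zbuffer, sensor_depth, near=0.1, far=10.0):
    """Return a boolean mask: True where the virtual pixel should be drawn.

    zbuffer      -- non-linear depth from the renderer, values in [0, 1]
    sensor_depth -- metric depth from the RGB-D sensor, in meters,
                    registered to the render camera (0 = no reading)
    near, far    -- clip planes used to linearize the z-buffer (assumed)
    """
    # Linearize the z-buffer into meters (standard perspective projection).
    z_ndc = zbuffer * 2.0 - 1.0
    virtual_depth = (2.0 * near * far) / (far + near - z_ndc * (far - near))

    # Draw the virtual pixel only where it is closer to the camera than
    # the real surface; where the sensor has no reading, keep the pixel.
    return (sensor_depth == 0) | (virtual_depth < sensor_depth)

def composite(render_rgb, camera_rgb, mask):
    # Per-pixel select: virtual pixel where mask is True, camera feed elsewhere.
    return np.where(mask[..., None], render_rgb, camera_rgb)
```

(I realize that in a real app this comparison would run in a fragment shader rather than on the CPU, but the logic should be the same.)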
What is more, I want my hand (which I detect as a blob) to always occlude the virtual objects. Isn't there an easier option for this?
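For the hand, I imagine the blob mask could simply be forced on top of the depth test, something like this (again just a sketch; `hand_blob` stands for whatever binary mask my detector outputs):

```python
def composite_with_hand(render_rgb, camera_rgb, occ_mask, hand_blob):
    # Wherever the hand blob was detected, always show the camera feed,
    # regardless of what the depth comparison said.
    draw_virtual = occ_mask & ~hand_blob.astype(bool)
    return np.where(draw_virtual[..., None], render_rgb, camera_rgb)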