
I am using pyBullet, which is a Python wrapper for the Bullet3 physics engine, and I need to create a point cloud from a virtual camera.
The engine uses a basic OpenGL renderer, and I am able to get values from the OpenGL depth buffer:

img = p.getCameraImage(imgW, imgH, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixel data
depthBuffer = img[3]  # depth buffer values in [0, 1]
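
For unprojection to work, the view and projection matrices used for rendering need to be known, so it helps to pass them in explicitly. A minimal sketch of that setup; the camera pose, FOV and clipping values here are just placeholders, not the ones from my scene:

viewMatrix = p.computeViewMatrix(cameraEyePosition=[0, -2, 1],
                                 cameraTargetPosition=[0, 0, 0],
                                 cameraUpVector=[0, 0, 1])
projectionMatrix = p.computeProjectionMatrixFOV(fov=60, aspect=imgW / imgH,
                                                nearVal=0.01, farVal=10)
img = p.getCameraImage(imgW, imgH, viewMatrix, projectionMatrix,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)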

Now I have a width*height array with depth values. How can I get world coordinates from this? I tried to save a .ply point cloud with the points (width, height, depthBuffer(width, height)), but this doesn't create a point cloud that looks like the objects in the scene.

I also tried to correct the depth using the near and far plane values:

depthImg = float(depthBuffer[h, w])
far = 1000.
near = 0.01
# linearize the [0, 1] depth buffer value into an eye-space distance
depth = far * near / (far - (far - near) * depthImg)

but the result with this was also a weird point cloud. How can I create a realistic point cloud from the depth buffer data? Is it even possible?
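
Side note: the same linearization can be applied to the whole buffer in one step; a sketch, assuming depthBuffer is reshaped into a NumPy array of shape (imgH, imgW):

import numpy as np

depthBuffer = np.asarray(depthBuffer).reshape(imgH, imgW)
# linearize every pixel at once
trueDepth = far * near / (far - (far - near) * depthBuffer)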

I did something similar in C++, but there I used glm::unProject:

for (size_t i = 0; i < height; i += density) {
    for (size_t j = 0; j < width; j += density) {

        glm::vec3 win(i, j, depth);
        glm::vec4 position(glm::unProject(win, identity, projection, viewport), 0.0);
    }
}

EDIT:

Based on Rabbid76's answer I used PyGLM, which worked. I am now able to obtain XYZ world coordinates to create the point cloud, but the depth values in the point cloud look distorted. Am I getting the depth from the depth buffer correctly?

    depthBuffer = np.asarray(depthBuffer)  # convert once, outside the loop
    for h in range(0, imgH, stepX):
        for w in range(0, imgW, stepY):
            depthImg = float(depthBuffer[h, w])  # raw [0, 1] depth buffer value
            far = 1000.
            near = 0.01
            depth = far * near / (far - (far - near) * depthImg)  # linearized depth
            win = glm.vec3(h, w, depthImg)
            position = glm.unProject(win, model, projGLM, viewport)
            f.write(str(position[0]) + " " + str(position[1]) + " " + str(depth) + "\n")
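
Worth noting: glm.unProject already returns a full world-space point, so a variant that writes all three of its components, instead of mixing the unprojected x/y with the separately linearized depth, would be:

            # hypothetical variant: keep all three unprojected world-space coordinates
            f.write(str(position[0]) + " " + str(position[1]) + " " + str(position[2]) + "\n")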
mereth
  • Use [PyGLM](https://pypi.org/project/PyGLM/). (`glm.unProject(...)`) – Rabbid76 Dec 01 '19 at 19:24
  • Somehow I missed that, will try it tomorrow and then I'll update/close this question, thanks – mereth Dec 01 '19 at 19:28
  • Seems like PyGLM unProject worked, but I'm not sure if I'm getting the depth values correctly; I updated the question – mereth Dec 02 '19 at 11:17
  • Okay, so I am getting those values correctly. The problem was in the huge near/far values; it looks like they need to "hug" the object really tightly to get correct depth values – mereth Dec 02 '19 at 13:36
  • I want to do the same as you: get a point cloud from the pybullet synthetic camera. I'm not familiar with `pcl` nor with `opengl`. Could you share a minimal working example? Or give some clarification on how you pick the step size and the model, and whether `projGLM = pybullet.computeProjectionMatrixFOV(..)` and `viewport = pybullet.computeViewMatrix(..)` – Elod Apr 23 '20 at 15:30

1 Answer


Here is my solution. We just need to know how the view matrix and the projection matrix work. There are computeProjectionMatrixFOV and computeViewMatrix functions in pybullet; see http://www.songho.ca/opengl/gl_projectionmatrix.html and http://ksimek.github.io/2012/08/22/extrinsic/ for background. In a word: point_in_world = inv(projection_matrix * viewMatrix) * NDC_pos.

glm.unProject is another solution.

    import numpy as np

    stepX = 10
    stepY = 10
    pointCloud = np.empty([int(img_height / stepY), int(img_width / stepX), 4])
    # pybullet returns the matrices as flat tuples in column-major order
    projectionMatrix = np.asarray(projection_matrix).reshape([4, 4], order='F')
    viewMatrix = np.asarray(view_matrix).reshape([4, 4], order='F')
    tran_pix_world = np.linalg.inv(np.matmul(projectionMatrix, viewMatrix))
    for h in range(0, img_height, stepY):
        for w in range(0, img_width, stepX):
            # window coordinates -> normalized device coordinates in [-1, 1]
            x = (2 * w - img_width) / img_width
            y = -(2 * h - img_height) / img_height  # be careful: y is flipped between image rows and NDC
            z = 2 * depth_np_arr[h, w] - 1
            pixPos = np.asarray([x, y, z, 1])
            position = np.matmul(tran_pix_world, pixPos)
            pointCloud[int(h / stepY), int(w / stepX), :] = position / position[3]
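
A usage note on the assumed inputs: projection_matrix and view_matrix are the flat tuples returned by p.computeProjectionMatrixFOV(...) and p.computeViewMatrix(...), and depth_np_arr is the depth buffer from p.getCameraImage(...) reshaped to [img_height, img_width]. Since the homogeneous division is already done above, a flat N x 3 list of world-space points is simply:

    # drop the homogeneous coordinate and flatten to an N x 3 array
    points = pointCloud[:, :, :3].reshape(-1, 3)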
benbo yang