4

I'm interested in using a stereo camera for calculating depth in video/images. The camera is a USB 3.0 stereoscopic camera from Leopard Imaging: https://www.leopardimaging.com/LI-USB30-V024STEREO.html. I'm using Mac OS X, by the way.

I was told by their customer support that it's a "UVC" camera. When connected to an Apple computer, it gives a greenish image.

My end goal is to use OpenCV to grab the left and right frames from both lenses so that I can calculate depth. I'm familiar with OpenCV, but not familiar with working with stereo cameras. Any help would be much appreciated. So far I have been doing this in Python 3:

import numpy as np
import cv2
import sys
from matplotlib import pyplot as plt

import pdb; pdb.set_trace()
print("Camera 1 capture", file=sys.stderr)
cap = cv2.VideoCapture(1)

print("Entering while", file=sys.stderr)
while(True):
    _ = cap.grab()
    retVal, frame = cap.retrieve()
    if not retVal:
        continue

    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

This works, but it only gives me a green picture/image with no depth. Any advice on how to get both left and right frames from the camera?

nmante
  • 319
  • 2
  • 10
  • What you get is probably only the IR image. To retrieve the stereo frames, you'd need help from OpenNI, which is not built in by default (check: `cv2.getBuildInformation()`), so you'd need the OpenNI SDK and to compile OpenCV from source. – berak Mar 19 '15 at 08:29
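
For reference, checking whether an OpenCV build includes OpenNI support, as the comment above suggests, can be done from Python like this (a minimal sketch, no extra dependencies assumed):

import cv2

# Print only the lines of the build report that mention OpenNI.
for line in cv2.getBuildInformation().splitlines():
    if "OpenNI" in line:
        print(line)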

3 Answers

5

The Leopard Imaging people sent me a clue, but I'm not able to make progress, I guess because of some file missing from my setup. However, I thought it might help somebody, so I'm posting it as an answer. The message was sent by one of the Leopard Imaging people I contacted through email. It goes like this:

We have a customer who successfully separated the two videos on Linux a year ago. I tried to contact him to see if he could share the source code with us. Unfortunately, he has already left his former company, but he still found some notes (below). Hope it helps.

The camera combines the images from the two sensors into one 16-bit pixel (the high 8 bits from one sensor, and the low 8 bits from the other). To fix this problem in OpenCV on Linux, skip the color conversion done by OpenCV, at:

modules/videoio/src/cap_v4l.cpp, static IplImage* icvRetrieveFrameCAM_V4L( CvCaptureCAM_V4L* capture, int ):

case V4L2_PIX_FMT_YUYV:
#if 1
    /*
    skip color convert. Just copy image buffer to frame.imageData
    */
    memcpy(capture->frame.imageData, capture->buffers[capture->bufferIndex].start, capture->buffers[capture->bufferIndex].length);
#else
    yuyv_to_rgb24(capture->form.fmt.pix.width, capture->form.fmt.pix.height,
                  (unsigned char*) capture->buffers[capture->bufferIndex].start,
                  (unsigned char*) capture->frame.imageData);
#endif

Hope this helps.
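
If the raw (unconverted) buffer does reach Python, either after a patch like the one above or, on newer OpenCV builds, by setting cv2.CAP_PROP_CONVERT_RGB to 0 (whether the backend honors that flag varies), then splitting it into left and right frames could look roughly like the sketch below. It assumes one sensor's pixels sit in the even bytes and the other's in the odd bytes, as described in the note, and uses 752x480 as the per-sensor resolution, which is an assumption you may need to adjust.

import cv2
import numpy as np

WIDTH, HEIGHT = 752, 480  # per-sensor resolution (assumption, adjust for your camera)

cap = cv2.VideoCapture(1)
# Ask the backend not to convert YUYV to BGR; support for this flag
# depends on the OpenCV version and capture backend.
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

ret, raw = cap.read()
if ret:
    # Reinterpret whatever the backend returned as a flat byte array.
    # Its length should be 2 * WIDTH * HEIGHT for this camera.
    data = np.frombuffer(raw.tobytes(), dtype=np.uint8)
    # Even bytes from one sensor, odd bytes from the other (per the note above).
    left = np.ascontiguousarray(data[0::2].reshape(HEIGHT, WIDTH))
    right = np.ascontiguousarray(data[1::2].reshape(HEIGHT, WIDTH))
    cv2.imshow('left', left)
    cv2.imshow('right', right)
    cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()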

DISCLAIMER: I went forward with the C++ approach (my project was in C++) using libuvc, and used the routines provided there to obtain the left and right frames separately.

Eswar
  • 1,201
  • 19
  • 45
2

I am chasing the same issue, but with C/C++. I have contacted Leopard and am waiting for an answer. My understanding is that the two grayscale cameras are interlaced into a single image (and I think OpenCV sees this as a color image, hence the strange colors and apparent lack of focus). You need to break the bytes apart into two separate frames. I am experimenting, trying to figure out the byte placement, but have not gotten too far. If you figure this out, please let me know!
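
As a starting point for that experimentation, it can help to check what pixel format and frame size the capture backend reports for the device. A minimal Python sketch (the same properties exist in the C++ API; support varies by backend, so the values may come back as 0 or -1):

import cv2

cap = cv2.VideoCapture(1)

# Decode the FOURCC code reported by the backend into four characters.
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
print("FOURCC:", "".join(chr((fourcc >> (8 * i)) & 0xFF) for i in range(4)))
print("Size:", cap.get(cv2.CAP_PROP_FRAME_WIDTH), "x", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ret, frame = cap.read()
if ret:
    print("Frame shape/dtype:", frame.shape, frame.dtype)

cap.release()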

They have a C# on Windows example here: https://www.dropbox.com/sh/49cpwx0s70fuich/2e0_mFTJY_

Unfortunately it uses their libraries (for which no source is provided) to do the heavy lifting, so I can't figure out what they are doing.

jordanthompson
  • 888
  • 1
  • 12
  • 29
2

I ran into the same issue as you and finally arrived at a solution. But I don't know if OpenCV can handle it directly, especially in Python.

As jordanthompson said, the two images are interlaced into one. The image you receive is in YUYV format (Y is the light intensity, UV contains the color information). Each pixel is encoded in 16 bits: 8 for Y, and 8 for U or V depending on which pixel you are looking at.

Here, the Y bits come from the left image and the UV bits from the right image. But when OpenCV receives this image, it converts it to RGB, which then irreversibly mixes the two images. I could not find a way in Python to tell OpenCV to grab the image without converting it, so we need to read the image before OpenCV does. I managed to do it with libuvc (https://github.com/ktossell/libuvc) after adding two small functions to perform the proper conversions. I guess you can use this library if you can use OpenCV in C++ instead of Python. If you really have to stick with Python, then I don't have a complete solution, but at least now you know what to look for: try to read the image directly in YUYV and then separate the bytes into a left and a right image (you will get two grayscale images).
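
Once you have the raw YUYV buffer (from libuvc or any other source), a rough Python/numpy sketch of the separation described above could look like this; the assignment of Y to the left image and U/V to the right is the assumption stated above, so swap them if your camera does the opposite:

import numpy as np

def split_stereo_yuyv(buf, width, height):
    # View the buffer as one little-endian 16-bit value per pixel:
    # low byte = Y (assumed left sensor), high byte = U/V (assumed right sensor).
    pixels = np.frombuffer(buf, dtype='<u2').reshape(height, width)
    left = (pixels & 0x00FF).astype(np.uint8)
    right = (pixels >> 8).astype(np.uint8)
    return left, right

From there, the two grayscale images can go through the usual OpenCV stereo pipeline (calibration and rectification, then a matcher such as cv2.StereoBM_create or cv2.StereoSGBM_create) to compute disparity and depth.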

Good luck!

  • Great. I hadn't looked at this post in a while. I came to the same conclusion, but stopped working on it to prepare for my PhD exam. Could you refer me to the functions you used/wrote using libuvc? Perhaps on a gist/GitHub? – nmante Aug 26 '15 at 02:04
  • 1
    Sure! Here is the pull request I made to libuvc with the functions I added: https://github.com/ktossell/libuvc/pull/27 – Surya Ambrose Aug 26 '15 at 08:52
  • Nice. I'm going to check out the pull requests later today. If everything looks good I'll go ahead and accept your answer! – nmante Aug 27 '15 at 15:36
  • I've been trying out your new libuvc functions. I'm hitting a dead end, and I just wanna make sure I'm implementing correctly. Do you think you could check out this small program I wrote which uses your functions? I think your pointers could help me out. https://github.com/nmante/Stereo_Camera_Examples/blob/master/libuvc_cpp/src/main.cpp – nmante Aug 28 '15 at 02:26
  • I'm using uvc to open the device. Then I wrote a uvc callback that should grab the frame in YUYV format. Then I call the functions you wrote on that frame (one for the Y pixels, one for the U/V pixels) and dump those into two grey frames. Then I just output them. However, when I'm outputting (imshow) I'm just getting two white/blank images. – nmante Aug 28 '15 at 02:28
  • Hmm... Did you find a solution to your problem? Honestly, I can't see why this is not working. I guess you have no error messages? – Surya Ambrose Aug 31 '15 at 09:15
  • Maybe you can try this: `cvReleaseImageHeader(&cvImg); IplImage* cvImgL = cvCreateImageHeader( cvSize(left->width, left->height), IPL_DEPTH_8U, 1); cvSetData(cvImgL, left->data, left->width); cvNamedWindow("TestL", CV_WINDOW_AUTOSIZE); cvShowImage("TestL", cvImgL); cvWaitKey(10); cvReleaseImageHeader(&cvImgL);` It uses the old OpenCV C API (not the opencv2 interface). This is what I have to display the images, and it works for me. Tell me if it works for you, so that we can look for the origin of the problem :) – Surya Ambrose Aug 31 '15 at 09:17
  • I am also having the same problem, and when I check the datasheet of the USB 3.0 stereo camera, it says that the output is in RAW uncompressed format. It does not say that it is in YUV format. So the above answer talks about the first 8 bits being Y, the second 8 bits being U/V, etc., but I don't know if that's true. Moreover, it has not worked for me; it kind of results in some overlapped grayscale images. – Eswar Jan 30 '18 at 05:44
  • @SuryaAmbrose can you tell me how you split the incoming frames into the left and the right frames? I have seen the 2 functions available, but I'm getting a segmentation fault. So if you could submit your code as an answer, it would be greatly helpful to us. – Eswar Feb 19 '18 at 10:10