
I am having issues getting an image from the camera on a Raspberry Pi, over a network, onto a PandaBoard (running Ubuntu 12.04) and displaying it correctly. The data I get from the camera is raw YUV data at 1280x720 resolution.

I think my SDL calls are fine, but here is the send code; feel free to point out anything that looks clearly wrong.

void Client::SendData(const void* buffer, int bufflen)
{
    /*
        Some code to check if connected to server and if socket is not null
    */

    if(SDLNet_TCP_Send(clientSocket, buffer, bufflen) < bufflen)
    {
        std::cerr << "SDLNet_TCP_Send: " << SDLNet_GetError() << std::endl;
        return;
    }
}

Now the receive code:

void Server::ReceiveDataFromClient()
{
    /*
        code to check if data is being sent
    */

    //1382400 is the size of the image in bytes, before it is sent. This data
    //is in bufflen in the send func and, to my knowledge, is correct.
    if(SDLNet_TCP_Recv(clientSocket, buffer, 1382400) <= 0)
    {
        std::cout << "Client disconnected" << std::endl;
        /*Code to shut down socket and socketset.*/
    }
    else //client is sending data
    {
        //buffer is an int* at the moment, I have tried it as a uint8_t* and a char*
        setUpOpenCVToDisplayChunk(buffer);
    }
}

So, I take buffer directly from Recv, which, as far as I know, should only finish when Recv has got all the data from a single send. I therefore think that code is fine, but it's here in case anyone can spot any issues, as I am struggling with this at the moment.
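A quick way to test that assumption would be to log what SDLNet_TCP_Recv actually returns before the buffer is used. A rough sketch along those lines (the helper name is just for illustration):

#include <iostream>
#include <SDL_net.h>

// Illustrative check: report how many bytes one SDLNet_TCP_Recv call actually
// returned, for an expected frame size of 1382400 bytes.
int ReceiveOnceAndReport(TCPsocket sock, void* buffer, int frameSize)
{
    int received = SDLNet_TCP_Recv(sock, buffer, frameSize);
    if (received <= 0)
        std::cerr << "SDLNet_TCP_Recv: " << SDLNet_GetError() << std::endl;
    else if (received < frameSize)
        std::cout << "Partial frame: " << received << " of " << frameSize << " bytes" << std::endl;
    return received;
}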

Lastly, my OpenCV display code:

void Server::setUpOpenCVToDisplayChunk(int* data)
{
    //I have tried different bit depths also
    IplImage* yImageHeader = cvCreateImageHeader(cvSize(1280, 720), IPL_DEPTH_8U, 1);

    //code to check yImage header is created correctly
    cvSetData(yImageHeader, data, yImageHeader->widthStep);
    cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
    cvShowImage("win1", yImageHeader);
}

Sorry for all the "code here to do this" parts; I am manually typing the code out.

So, can anyone say what the issue could be in either of these parts? There is no error; I just get muddled-up images, which I can tell are images, just wrongly put together or incomplete.

If anyone needs more info or more code, just ask and I will put it up. Cheers.

  • Your TCP packets will get fragmented, so you have to read in small chunks (like 1k). You can *send* the image in one piece, but you cannot expect it to arrive in one packet on the other side. – berak Jan 26 '15 at 07:52
  • As far as I am aware, the recv acts in a similar way to send and waits for all of "buffer" to be sent before doing anything else. I think... However, I did have it sending in 64kb chunks (the TCP max packet size) and receiving these 64kb chunks, and it was worse! – user3375989 Jan 26 '15 at 09:55
  • Also, *never* use OpenCV's deprecated C API. – berak Jan 26 '15 at 13:51
  • Yeah, I read that the IplImage stuff was deprecated, but when searching for ways to display an image stored as a simple memory block it was all I could find. Could you offer guidance on another way to do it? – user3375989 Jan 26 '15 at 13:53
  • Mat m(720,1280,CV_8UC1, data); and btw, 1280*720 = 921600 (which is also more than a 64k packet...) – berak Jan 26 '15 at 13:55
  • Thanks for the help, but then this code can't be used with the window display call, I guess? – user3375989 Jan 26 '15 at 13:56
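A rough sketch along the lines of berak's comments, looping SDLNet_TCP_Recv until a whole frame has arrived and then viewing the data with cv::Mat instead of IplImage (the names are illustrative, and it assumes the 1,382,400-byte frame is planar YUV420 with the Y plane first):

#include <cstdint>
#include <SDL_net.h>
#include <opencv2/opencv.hpp>

// Keep calling SDLNet_TCP_Recv until `total` bytes have arrived, since TCP
// delivers a byte stream and one send can arrive split across many reads.
bool ReceiveWholeFrame(TCPsocket sock, uint8_t* buffer, int total)
{
    int got = 0;
    while (got < total)
    {
        int n = SDLNet_TCP_Recv(sock, buffer + got, total - got);
        if (n <= 0)
            return false; // disconnected or error
        got += n;
    }
    return true;
}

// Assuming planar YUV420, the first 1280*720 bytes are the Y (luma) plane,
// which can be shown directly as an 8-bit grayscale image.
void ShowLumaPlane(uint8_t* buffer)
{
    cv::Mat y(720, 1280, CV_8UC1, buffer);
    cv::imshow("win1", y);
    cv::waitKey(1);
}

Regarding the last comment above: cv::imshow accepts a cv::Mat directly, so no IplImage is needed for the window display call.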

1 Answer


Try converting the frames from YUV to RGB. http://en.wikipedia.org/wiki/YUV lists how YUV-formatted data is converted to RGB. You might also find readily available code to do that. Check the format of the YUV data output from your camera and use the correct transformation.
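For example, if the camera outputs planar YUV420 (I420), which would match the 1,382,400-byte frame size (1280 x 720 x 1.5), the conversion could look roughly like this (the buffer and function names are just illustrative):

#include <cstdint>
#include <opencv2/opencv.hpp>

// Sketch assuming the buffer holds one planar YUV420 (I420) frame:
// a 1280x720 Y plane followed by quarter-size U and V planes (1,382,400 bytes total).
void ShowYuvFrameAsBgr(uint8_t* buffer)
{
    // cvtColor expects the YUV420 data as a single-channel Mat 1.5x the image height.
    cv::Mat yuv(720 * 3 / 2, 1280, CV_8UC1, buffer);
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);
    cv::imshow("win1", bgr);
    cv::waitKey(1);
}

If the camera actually outputs NV21 or another YUV layout, the conversion code changes (for example cv::COLOR_YUV2BGR_NV21), but the Mat shape stays the same.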

dhanushka
  • I was gonna convert to RGB server side, but I can do it on the client no bother, I'll give that a bash. I assume you are guessing that the data is arriving fine and it's just the displaying I am having issues with? – user3375989 Jan 26 '15 at 09:57
  • Yes. But if you do the conversion on the client side, you'll have to transfer more data over TCP because the YUV->RGB conversion expands the data, and it'll have an impact on performance if you are concerned about speed. – dhanushka Jan 26 '15 at 11:43
  • It's for real time image processing, so speed is most definitely an issue. – user3375989 Jan 26 '15 at 11:45