
I am trying to code a realtime system that acquires multiple high-fidelity frames from Unreal Engine 5 and sends them back over a TCP socket to a Python/MATLAB server, after first receiving the camera poses from that server.

I would like to optimize my implementation while making it robust. At the moment my code works fine with a single camera, but I am trying to capture multiple frames simultaneously. This sits at the intersection of TCP socket management and scene capture management, and the pipeline has to stay synchronized and reliable.

I just found this question about frame acquisition in UE4. I was thinking about adapting that solution to grab multiple RenderTarget raw images (at the best quality possible) and send them back through TCP for realtime applications. I was wondering whether this method is computationally efficient, or whether there is a better way.

Recalling the code from that thread:

    void UScreenShotToTexture::CreateTexture()
    {
        UTextureRenderTarget2D* TextureRenderTarget; // assumed to be assigned a valid render target elsewhere

        // Create a transient Texture2D to store the TextureRenderTarget content
        UTexture2D* Texture = UTexture2D::CreateTransient(TextureRenderTarget->SizeX, TextureRenderTarget->SizeY, PF_B8G8R8A8);
    #if WITH_EDITORONLY_DATA
        Texture->MipGenSettings = TMGS_NoMipmaps;
    #endif
        Texture->SRGB = TextureRenderTarget->SRGB;

        // Read the pixels from the RenderTarget and store them in an FColor array
        TArray<FColor> SurfData;
        FRenderTarget* RenderTarget = TextureRenderTarget->GameThread_GetRenderTargetResource();
        RenderTarget->ReadPixels(SurfData);

        // Lock the texture and copy the pixel data across
        void* TextureData = Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
        const int32 TextureDataSize = SurfData.Num() * 4; // 4 bytes per FColor pixel
        FMemory::Memcpy(TextureData, SurfData.GetData(), TextureDataSize);
        Texture->PlatformData->Mips[0].BulkData.Unlock();

        // Apply the texture changes to GPU memory
        Texture->UpdateResource();
    }

In comparison, I came up with a different solution: a C++ Actor class that manages both frame acquisition and the TCP client socket.

Camera setup in the constructor:

CAMERA->CaptureSource = ESceneCaptureSource::SCS_SceneColorHDR;

Frame acquisition (runs on each tick, if the pose has been updated):

CAMERA->UpdateContent();
STATIC_TEXTURE = CAMERA->TextureTarget->ConstructTexture2D(this, "CameraImage", EObjectFlags::RF_NoFlags, CTF_DeferCompression);
const uint8* MipData = static_cast<const uint8*>(STATIC_TEXTURE->GetPlatformData()->Mips[0].BulkData.LockReadOnly());
// Copy while locked: the mip pointer is no longer valid after Unlock()
FMemory::Memcpy(CAMERA_BUFFERED_DATA, MipData, OUTPUT_PACKET_SIZE);
STATIC_TEXTURE->GetPlatformData()->Mips[0].BulkData.Unlock(); // unlock the texture to allow updating

and then the buffer is sent back with:

PYTHON_SOCKET->Send(CAMERA_BUFFERED_DATA, OUTPUT_PACKET_SIZE, 0);

This works fine with a single scene capture, but I am trying to acquire more than one frame at a time. I would like to either merge the frames into a single 1-D array before sending (following the solution from the other thread), or keep a single pointer to one buffer holding all the data to be sent (following my own code above). At the moment, if I try to:

  • send the first frame back to the server

  • receive a random number from the server (different from the floats it normally sends to update the pose)

  • send the second camera's frame

then Unreal uses that stray float to update the pose, which conflicts with the expected execution of the pipeline. I probably need stronger checks on the received data, but I would also like to keep the TCP transmission as fast as possible, with a high (and constant) FPS on the server side. Of course, I also want to be sure that the frames I am getting correspond to the same tick. Finally, I am wondering whether my solution conflicts with the game thread in some way.

I hope my considerations are clear to anyone who has run into this kind of task as well. If not, let me know!
