
I am going to build an FPS video game. While developing my game, this question came to my mind: every video game developer spends a great deal of time and effort making their game's environment more realistic and life-like. So my question is,

Can we use HD or 4K real-world videos as our game's environment? (Like Google Street View, but with higher quality.)

If we can, how would we program the game engine to do it?

Thank you very much!

LuckyG

1 Answer


The simple answer to this is NO.

Of course, you can extract textures from the video by capturing frames from it, but that's it. Once you have the texture, you still need a way to make a 3D model/mesh you can apply the texture to.

Now, many companies have been working on video-to-3D-model converters. That technology exists, but it is aimed more at film production. Even with this technology, the 3D models generated from a video are not accurate, and they are not meant to be used in a game: they end up with so many polygons that they will easily choke your game engine.

Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh with the HQ texture, then clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You then have to generate UVs for the mesh so that the extracted image can be applied to it.
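To see why this falls apart, the steps above can be sketched as a per-frame loop checked against a frame-time budget. Every stage cost below is a made-up, illustrative number, not a measurement:

```python
# Back-of-the-envelope sketch (not an implementation) of the per-frame
# pipeline described above. All stage costs are hypothetical milliseconds.
STAGE_COST_MS = {
    "read_video_frame":  5.0,
    "extract_texture":   3.0,
    "generate_mesh":   250.0,  # reconstruction from imagery is slow
    "simplify_mesh":    80.0,  # cleanup/reduce so the engine can cope
    "generate_uvs":     40.0,
}

FRAME_BUDGET_MS = 1000.0 / 60  # ~16.7 ms per frame at 60 FPS

total = sum(STAGE_COST_MS.values())
achievable_fps = 1000.0 / total
print("pipeline: %.1f ms per frame, budget: %.1f ms, ~%.1f FPS"
      % (total, FRAME_BUDGET_MS, achievable_fps))
```

Even with these generous numbers, running every stage in series each frame overshoots a 60 FPS budget by more than an order of magnitude, which is the "unplayable" outcome described below.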

Finally, each one of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I have also made this sound easier than it is. What you can do with the video is use it as a reference for modeling your 3D environment in a 3D application. That's it.

Programmer
  • Great answer! Thank you! But could you please explain how Google Street View works? (Just out of curiosity) – LuckyG May 10 '16 at 05:54
  • @LuckyG That would be hard and long to explain here, so I'll keep it short. Everything is automated. They take a car or bicycle with cameras attached to it, positioned at different angles. While driving, the cameras take pictures and store information such as the GPS coordinates of each picture, plus other data that will be used to stitch the images together. Back at the office, they take the saved files, process and stitch them together, and upload the result to the server. Everything is automated, so they don't have to do it themselves unless there is something to blur. – Programmer May 10 '16 at 06:13
  • Google Street View is one of the world's greatest research programs. You might as well ask "so how does intel make chips". – Fattie May 10 '16 at 14:30
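As a toy illustration of the Street View comment above, here is a sketch of ordering GPS-tagged captures along a route and picking neighboring pairs to stitch. Every filename, coordinate, and threshold is invented, and a real pipeline would use timestamps/odometry rather than sorting by latitude:

```python
import math


def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))


# Hypothetical capture log: (filename, (lat, lon)) stored with each picture.
shots = [
    ("img_003.jpg", (40.70020, -74.00000)),
    ("img_001.jpg", (40.70000, -74.00000)),
    ("img_002.jpg", (40.70010, -74.00000)),
]

# Order shots along the route (here simply by coordinate), then keep only
# neighbors close enough to overlap; 20 m is an arbitrary threshold.
shots.sort(key=lambda s: s[1])
pairs = [(a[0], b[0]) for a, b in zip(shots, shots[1:])
         if haversine_m(a[1], b[1]) < 20]
```

The point is only that the stored GPS metadata is what lets the stitching step know which images are neighbors; the actual stitching math is a separate, much bigger topic.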