I have a Windows application that currently renders graphics largely using MFC, which I'd like to change to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which need quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and the camera parameters, so that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D, OpenGL, or possibly a higher-level framework like OpenSceneGraph, and what the best starting point would be. Given the software is Windows based, I'd probably prefer to use Direct3D, as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.

Edit: To clarify a bit more, I currently have the following:

  1. A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives

  2. A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3D point), which I cull and output to the same bitmap (see the packing sketch after this list)

  3. Cursor information that gets added to the bitmap prior to output

  4. A camera, z-buffer and attribute buffers for navigation and picking purposes

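To give a feel for the storage in item 2 (this isn't my exact format), a quantised encoding of 12 bits per axis relative to a per-block origin works out at exactly 36 bits, i.e. 4.5 bytes per point, along these lines:

    // Illustrative packing only: 12 bits per coordinate, stored relative to a
    // per-block origin and quantisation step (two points fit in 9 bytes).
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct PointBlock
    {
        double originX, originY, originZ;  // block origin in world coordinates
        double step;                       // quantisation step (block extent / 4096)
        std::vector<uint8_t> packed;       // 36 bits per point, LSB-first
    };

    // Read 'count' bits starting at absolute bit position 'bitPos'.
    static uint32_t readBits(const std::vector<uint8_t>& buf, size_t bitPos, unsigned count)
    {
        uint32_t value = 0;
        for (unsigned k = 0; k < count; ++k)
        {
            const size_t bit = bitPos + k;
            value |= static_cast<uint32_t>((buf[bit >> 3] >> (bit & 7)) & 1u) << k;
        }
        return value;
    }

    // Decode the i-th point of a block back to world coordinates.
    std::array<double, 3> decodePoint(const PointBlock& b, size_t i)
    {
        const size_t base = i * 36;  // 36 bits per point
        const double x = readBits(b.packed, base, 12);
        const double y = readBits(b.packed, base + 12, 12);
        const double z = readBits(b.packed, base + 24, 12);
        return { b.originX + x * b.step,
                 b.originY + y * b.step,
                 b.originZ + z * b.step };
    }
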
The slow bit is the software culling and bitmap output in item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with a matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
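
Something along these lines is what I have in mind, sketched here with plain OpenGL purely for illustration (drawSceneWithGpu and drawPointSplats are placeholders for the existing drawing code; the same idea should map onto a Direct3D render target plus depth-stencil view):

    #include <GL/glew.h>   // or whichever OpenGL loader the project uses
    #include <cstdint>
    #include <vector>

    // Placeholders for the existing drawing code.
    void drawSceneWithGpu() { /* meshes, lines, text with the current camera */ }
    void drawPointSplats()  { /* culled point-cloud splats */ }

    std::vector<uint8_t> renderFrame(int width, int height)
    {
        GLuint fbo = 0, colorTex = 0, depthTex = 0;
        glGenFramebuffers(1, &fbo);
        glGenTextures(1, &colorTex);
        glGenTextures(1, &depthTex);

        // Colour attachment
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Depth attachment: the z-buffer the point pass is depth-tested against
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);

        glViewport(0, 0, width, height);
        glEnable(GL_DEPTH_TEST);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        drawSceneWithGpu();   // 1. mesh / line / text scene with the current camera
        drawPointSplats();    // 2. point splats, occluded correctly by the scene's depth

        // 3. Read the composited image back for the MFC side (cursor overlay, blit)
        std::vector<uint8_t> pixels(static_cast<size_t>(width) * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteTextures(1, &colorTex);
        glDeleteTextures(1, &depthTex);
        glDeleteFramebuffers(1, &fbo);
        return pixels;
    }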

Alternatively, I could move to a scene-based framework that manages the cameras and navigation for me, and provide the points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
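
For the splats themselves, I'm imagining something as simple as distance-attenuated point sprites; a rough GLSL/OpenGL sketch, not my actual code:

    // Vertex shader: splat size in pixels shrinks with distance from the eye.
    const char* pointVS = R"(
        #version 330 core
        layout(location = 0) in vec3 position;
        uniform mat4 modelViewMatrix;
        uniform mat4 projectionMatrix;
        uniform float splatScale;    // tuning factor for splat size
        void main()
        {
            vec4 eyePos  = modelViewMatrix * vec4(position, 1.0);
            gl_Position  = projectionMatrix * eyePos;
            // eye-space z is negative in front of the camera, hence the minus
            gl_PointSize = clamp(splatScale / -eyePos.z, 1.0, 32.0);
        }
    )";

With GL_PROGRAM_POINT_SIZE enabled, drawing the culled points with glDrawArrays(GL_POINTS, ...) then gives one splat per point.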

In either scenario, the hosting application will be MFC C++ built with VS2017, which would require too much work to change for the purposes of this exercise.

SmacL

1 Answer

It's hard to say exactly based on your description of a complex problem.

OSG can probably do what you're looking for.
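
For the custom culling side, the usual OSG hook is a cull callback on the node that owns the point-cloud subgraph; roughly (the visibility test itself is yours to fill in, blockPotentiallyVisible here is just a stand-in):

    #include <osg/NodeCallback>
    #include <osg/NodeVisitor>

    // Custom culling hook: OSG calls this during the cull traversal, so your
    // own visibility test runs before the subgraph is traversed at all.
    class PointCloudCullCallback : public osg::NodeCallback
    {
    public:
        virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
        {
            if (nv && nv->getVisitorType() == osg::NodeVisitor::CULL_VISITOR)
            {
                // Eye position in this node's local coordinates; use it for a
                // distance / frustum / block test against the compressed points.
                const osg::Vec3 eyeLocal = nv->getEyePoint();
                if (!blockPotentiallyVisible(eyeLocal))
                    return;                  // culled: children never drawn this frame
            }
            traverse(node, nv);              // otherwise continue as normal
        }

    private:
        // Stand-in for the real visibility test.
        bool blockPotentiallyVisible(const osg::Vec3& /*eyeLocal*/) const { return true; }
    };

    // Usage: pointCloudGroup->setCullCallback(new PointCloudCullCallback);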

Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to deliver performance exceeding both of its predecessors.

The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.

I've done a lot of work with point clouds, but I'm not exactly clear on what you're proposing to do.

If you want to have an actual verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now; it's not clear from your question where you are, but you sound US-based.

XenonofArcticus
  • Thanks, I think OSG will probably work ok, looking at the chapter in the OSG cookbook on managing massive amounts of data. I'd seen Vulkan but don't know how well it would play with older GPUs, whereas OSG runs on nearly all of them. I'll add a bit to the question to make things clearer and will also make contact directly (I'm based in Ireland). – SmacL Feb 01 '19 at 08:33
  • DirectX 12 is as low level as Vulkan. Also, Vulkan doesn't guarantee better performance than OpenGL (if you're GPU-bound there's almost no difference), as it depends on the use case and level of experience. It does guarantee an extremely steep learning curve and slower development times. – Michael IV Apr 16 '22 at 18:35