
I have no experience with Direct3D, so I may just be looking in the wrong places. However, I would like to convert a program I have written in OpenGL (using FreeGLUT) into a Windows IoT-compatible UWP app running Direct3D 12 ('cause it's cool). I'm trying to port the program to a Raspberry Pi 3 and I don't want to switch to Linux.

Through the examples provided by Microsoft I have figured out most of what I believe I need to know to get started, but I can't figure out how to share a dynamic data buffer between the CPU and GPU.

What I want to know how to do:

  • Create a CPU/GPU shared circular buffer
    • Read and Draw with the GPU
    • Write / Replace sections with the CPU

Quick semi-pseudo code:

while (!buffer.inUse()) {                      // only touch the buffer while it is not in use
    updateBuffer(buffer.id, data, start, end); // insert new data into a section of the buffer
    drawToScreen(buffer.id);                   // draw using the vertex data in the buffer
}
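
For concreteness, here is a minimal sketch of the kind of CPU/GPU-shared vertex buffer I'm after, based on the CD3DX12_* helpers (d3dx12.h) that ship with the Microsoft D3D12 templates. The Vertex struct, the global names, and the missing error handling are placeholders:

#include <wrl/client.h>
#include <d3d12.h>
#include "d3dx12.h"                    // CD3DX12_* helpers from the D3D12 templates
using Microsoft::WRL::ComPtr;

struct Vertex { float x, y, z; };      // placeholder vertex layout

ComPtr<ID3D12Resource>   g_vertexBuffer;
UINT8*                   g_mappedData = nullptr;   // CPU-visible pointer, stays mapped
D3D12_VERTEX_BUFFER_VIEW g_vbView = {};

void CreateSharedVertexBuffer(ID3D12Device* device, UINT vertexCount)
{
    const UINT bufferSize = static_cast<UINT>(vertexCount * sizeof(Vertex));

    // An UPLOAD heap is CPU-writable and GPU-readable, so one buffer can be
    // shared by both sides (at the cost of slower GPU reads than a DEFAULT heap).
    CD3DX12_HEAP_PROPERTIES heapProps(D3D12_HEAP_TYPE_UPLOAD);
    CD3DX12_RESOURCE_DESC   bufferDesc = CD3DX12_RESOURCE_DESC::Buffer(bufferSize);

    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &bufferDesc,
        D3D12_RESOURCE_STATE_GENERIC_READ,   // required state for upload-heap resources
        nullptr, IID_PPV_ARGS(&g_vertexBuffer));

    // Map once and keep the pointer; the empty read range tells D3D12 the CPU
    // will never read back from this buffer.
    CD3DX12_RANGE readRange(0, 0);
    g_vertexBuffer->Map(0, &readRange, reinterpret_cast<void**>(&g_mappedData));

    // View used when binding the buffer to the input assembler for drawing.
    g_vbView.BufferLocation = g_vertexBuffer->GetGPUVirtualAddress();
    g_vbView.StrideInBytes  = sizeof(Vertex);
    g_vbView.SizeInBytes    = bufferSize;
}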

This was previously done in OpenGL simply by calling glVertex3f() between glBegin()/glEnd() for each value in an array whenever that array wasn't being written to.

Update: I basically want a Direct3D 12 equivalent of editing an OpenGL VBO with glBufferSubData(), if that makes more sense.
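
With a persistently mapped upload buffer like the sketch above, the closest glBufferSubData() equivalent appears to be a plain memcpy into the mapped pointer at the right byte offset; the only extra work is making sure the GPU has finished reading that region (e.g. by waiting on the frame fence) before it gets overwritten:

#include <cstring>   // memcpy

// Rough glBufferSubData() equivalent; offset and size are in bytes, just like
// glBufferSubData's arguments. g_mappedData is the pointer obtained from
// Map() in the sketch above.
void UpdateBufferRegion(const void* src, size_t offsetBytes, size_t byteCount)
{
    memcpy(g_mappedData + offsetBytes, src, byteCount);
}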

Update 2: I found that I can get away with discarding the vertex buffer every frame and re-uploading a new buffer to the GPU. There's a fair amount of overhead, as one would expect when transferring 10,000-200,000 doubles every frame. So I'm trying to find a way to use constant buffers to pass the 5-10 updated vertices into the shader, so I can copy from the constant buffer into the vertex buffer in the shader and not have to use Map/Unmap every frame. This way my circular buffer on the CPU is independent of the buffer being used on the GPU, but both will share the same information through periodic updates. I'll do some more looking and post another, more specific question on shaders if I don't find a solution.
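
One alternative to the constant-buffer idea that I'm sketching out (placeholder names, not something I've settled on): keep the full vertex buffer in a DEFAULT heap on the GPU, stage only the 5-10 changed vertices in a small upload buffer, and record a CopyBufferRegion for just that range each frame:

// Sketch of a "copy only the changed range" update. defaultVB lives in a
// DEFAULT heap, uploadVB in an UPLOAD heap, and uploadPtr is uploadVB's
// persistently mapped pointer; all three are created elsewhere.
void UploadChangedVertices(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* defaultVB,
                           ID3D12Resource* uploadVB,
                           UINT8* uploadPtr,
                           const Vertex* changed,
                           UINT firstVertex,
                           UINT vertexCount)
{
    const UINT64 bytes  = UINT64(vertexCount) * sizeof(Vertex);
    const UINT64 offset = UINT64(firstVertex) * sizeof(Vertex);

    // Stage the new data in the CPU-visible upload buffer.
    memcpy(uploadPtr + offset, changed, bytes);

    // Make the GPU-local buffer writable by the copy.
    CD3DX12_RESOURCE_BARRIER toCopy = CD3DX12_RESOURCE_BARRIER::Transition(
        defaultVB,
        D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER,
        D3D12_RESOURCE_STATE_COPY_DEST);
    cmdList->ResourceBarrier(1, &toCopy);

    // Copy just the changed range instead of re-uploading the whole buffer.
    cmdList->CopyBufferRegion(defaultVB, offset, uploadVB, offset, bytes);

    // Back to a readable state for the draws that follow.
    CD3DX12_RESOURCE_BARRIER toRead = CD3DX12_RESOURCE_BARRIER::Transition(
        defaultVB,
        D3D12_RESOURCE_STATE_COPY_DEST,
        D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER);
    cmdList->ResourceBarrier(1, &toRead);
}

This would keep the CPU-side circular buffer independent of the GPU copy and only move the handful of changed vertices across the bus each frame, instead of the whole 10,000-200,000 element buffer.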

Jarred
  • "*by simply using glBegin()/glEnd() and glVertex3f() for each value in the buffer when it wasn't being written to*" Those functions don't write to any "buffer". Or at least, not one you get to see. How those functions work is implementation defined, and many implementations don't work like that. *especially* the waiting part. – Nicol Bolas Jun 17 '16 at 05:26
  • @NicolBolas I understand the two APIs don't work the same way and I'm struggling to find how to implement this section of code in Direct3D. I understand it won't be the same as my implementation in OpenGL, but I wanted to give a general idea of what I was after. In the case of glBegin()/glEnd() I am reading from a buffer defined as double buffer[buffer_length];. – Jarred Jun 17 '16 at 06:01
  • Stop trying to port your code. Instead, figure out how D3D12 works. Write applications in the API. Once you have a handle on it, then you can figure out what the right way to stream data to it would be. – Nicol Bolas Jun 17 '16 at 06:05
  • @NicolBolas Hmm... that doesn't really help me in the short term, though. I have the rest of the program ready to go, I just need to be able to work this part in. Optimization be damned. – Jarred Jun 17 '16 at 06:10
  • The whole rest of the program? So you've already created your command buffers and queues, pipeline state objects, memory heaps, and so forth. And all you're doing now is trying to figure out how to put data into arrays and touch off a rendering operation? Somehow, I rather doubt that. – Nicol Bolas Jun 17 '16 at 06:15
  • @NicolBolas It's not a very complicated program; I can get away with using most of the template code in Visual Studio. I have added my own UI and other elements based on the templates, which is good enough for a project like this. Condescending comments aren't very productive, but thanks for your help anyway. – Jarred Jun 17 '16 at 06:19
  • Jumping directly into Direct3D 12 is not advised. It's an API designed for graphics experts who are presumably already deeply familiar with Direct3D 11. "cuz it's cool" is probably not the right reason to choose DX 12 over DX 11. Furthermore, it's unlikely that any IoT device will have a driver that supports DirectX 12. See [DirectX Tool Kit](https://github.com/Microsoft/DirectXTK/wiki/Getting-Started). – Chuck Walbourn Jul 29 '16 at 06:28
  • @ChuckWalbourn Of course, my reasoning is more along the lines of "because I want a challenge" and "I can add it to my resume", but I don't need to explain why I'm doing something in such depth as it doesn't really relate to the question at hand. I have run sample programs on IoT (which work), and regardless of how it's actually executing the program, I don't care; it gives me yet another reason to suffer through the crap that is learning Direct3D. Honestly, I would have just installed Linux on my Pi if I wanted to take the easy way out. – Jarred Jul 30 '16 at 07:55
  • Does the sample on IoT use WARP? Because that's a software device. My point is that learning Direct3D 12 is best accomplished by knowing Direct3D 11 first. If your goal is to suffer, then by all means jump right into DX12 first :) – Chuck Walbourn Jul 30 '16 at 08:18
  • @ChuckWalbourn To be honest, I'm not sure what it uses. It's the "DirectX 12 App (Universal Windows)" sample/template that I have built on, so I can only assume it's working some sort of black magic on the Pi. But that's fine: it works on the Pi, and I know it's running Direct3D when I compile and run it on my desktop, so it's not a total waste of time. – Jarred Jul 31 '16 at 00:30

0 Answers