
I have been developing a GPU-based underwater imaging sonar simulation for real-time applications (see my last paper for more details). The missing part is the reverberation phenomenon, which can be represented by a multipath algorithm.

This work precomputes information (normals, depth and angles) in the rasterization pipeline using shaders in order to calculate the simulated sonar data; however, this approach is restricted to primary reflections, so I need to take secondary reflections into account. Could ray tracing be used only for this part, in a hybrid pipeline (rasterization plus ray tracing)?

Vertexwahn

1 Answer


I hope I can help!

With ray tracing, in order to calculate secondary reflections you normally need to first calculate each ray's primary reflection, and you then recursively shoot off another ray from that position. I guess you could skip the first-reflection part of ray tracing if you can use your shader results to figure out where each ray starts and in which direction it should reflect. You could shoot your rays out of the pixels in the shader's result, using the depth information, pixel coordinates, and camera parameters to figure out each ray's origin, and using the normal information to figure out which direction the ray should go in.
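The reconstruction described above can be sketched roughly as follows. This is a minimal CPU-side sketch in Python with NumPy (in practice this logic would live in a GLSL shader); the function name, parameter layout, and depth convention (a standard [0, 1] depth buffer with an inverse view-projection matrix) are assumptions for illustration, not your simulator's actual interface:

```python
import numpy as np

def secondary_ray(px, py, depth, normal, inv_view_proj, cam_pos, width, height):
    """Reconstruct a secondary ray from rasterized depth/normal data.

    px, py        : pixel coordinates
    depth         : depth-buffer value in [0, 1] at that pixel
    normal        : world-space surface normal stored by the shader
    inv_view_proj : inverse of the camera's view-projection matrix
    cam_pos       : world-space position of the camera/sonar
    """
    # Pixel -> normalized device coordinates in [-1, 1]
    ndc = np.array([2.0 * (px + 0.5) / width - 1.0,
                    2.0 * (py + 0.5) / height - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    # Unproject to world space: this point is the primary hit,
    # i.e. the origin of the secondary ray
    world = inv_view_proj @ ndc
    origin = world[:3] / world[3]
    # Incident direction from the sonar/camera toward the hit point
    incident = origin - cam_pos
    incident /= np.linalg.norm(incident)
    # Reflect over the surface normal: d' = d - 2(d.n)n
    n = normal / np.linalg.norm(normal)
    direction = incident - 2.0 * np.dot(incident, n) * n
    return origin, direction
```

Each returned (origin, direction) pair is then fed to the ray tracer for the secondary bounce.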

From looking at your project's paper, I think ray tracing would be a very useful tool for this project, and I wonder if it might be better to just go for a full ray tracing approach to simplify the process. Why exactly do you want to do the primary reflections through shaders? I would recommend looking into Nvidia OptiX, which performs ray tracing on the GPU, and looking into global illumination techniques in order to calculate reflections off all objects in the scene. Global illumination techniques also account for the fact that surfaces are not perfectly smooth, as mentioned in your paper, by using Monte Carlo integration rather than normal maps.
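To make the Monte Carlo point concrete, here is a minimal sketch (Python with NumPy, purely for readability; a real integrator would run on the GPU) of cosine-weighted hemisphere sampling, the basic building block many global illumination renderers use to scatter rays off rough surfaces instead of using a single mirror reflection:

```python
import numpy as np

def sample_hemisphere(normal, rng):
    """Cosine-weighted random direction in the hemisphere around `normal`.

    Averaging many such scattered rays approximates the rendering
    integral over a rough (diffuse-like) surface via Monte Carlo.
    """
    u1, u2 = rng.random(), rng.random()
    # Sample a disk, then project up: cosine-weighted distribution
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis (t, b, n) around the surface normal
    n = normal / np.linalg.norm(normal)
    t = np.cross(n, [0.0, 1.0, 0.0] if abs(n[0]) > 0.5 else [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    # Transform the local sample into world space
    return local[0] * t + local[1] * b + local[2] * n
```

For a sonar simulation the scattering lobe would of course be tuned to acoustic surface properties rather than optical ones; the sampling machinery is the same.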

I hope this helps. If you would like me to clarify anything or have any other questions, feel free to ask!

kylengelmann
  • Hi @kylengelmann, thanks for your comments! I have used rasterization to calculate the first reflections because, until now, I didn't need to simulate the sound path to calculate the essential sonar parameters (echo reflection, pulse distance and horizontal field-of-view); I can use the precomputed light information for this process. Also, I skipped a full approach (like Nvidia OptiX) because that keeps the simulator video-card independent and serves older computers. – Rômulo Cerqueira Nov 20 '17 at 22:29
  • Now I have been looking into how to include the multipath phenomenon in my application, and I believe using rasterization for secondary reflections is really hard. So I think ray tracing can handle this step; however, I am open to new ideas! :) – Rômulo Cerqueira Nov 20 '17 at 22:39
  • Could you give more details on how I can use the precomputed data (normal, pixel coordinates, depth) to compute the ray's origin? I had a look at global illumination techniques; however, several authors noted that they are computationally costly. Do you have a tip? Thanks one more time. – Rômulo Cerqueira Nov 20 '17 at 22:46
  • If you're OK with using a full-out ray tracing technique, then you should consider not using the precomputed data, because it just needlessly complicates things. But if you want to give it a try, this is what I think would work: – kylengelmann Nov 20 '17 at 22:49
  • For each pixel, compute its world position and world surface normal. Then figure out from which direction a ray from the camera (or sonar, or whatever is creating the sound) would come, and reflect it over the surface normal. Then ray trace from each pixel, with the ray's origin at the pixel's world position and the direction you calculated using the surface normal. – kylengelmann Nov 20 '17 at 22:52
  • Global illumination is expensive, but pretty much all ray tracing is. Video games usually take the approach of baking the global illumination results into textures that get applied to meshes during lighting calculations. If you need your application to run in real time, this solution might work for you. GPU-accelerated ray tracing like Nvidia OptiX might be able to calculate global illumination fast enough to render in real time, but I'm not sure. – kylengelmann Nov 20 '17 at 22:56
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/159429/discussion-between-romulo-cerqueira-and-kylengelmann). – Rômulo Cerqueira Nov 20 '17 at 23:31
  • Hi @kylengelmann, I have worked with dynamic cubemaps with RTT and FBO to simulate the secondary reflections by rasterization. I got the depth information; however, I could not collect the normals from the reflected objects. textureCube() only returns the reflected color. Do you know how I can collect the normal information? Thanks in advance. – Rômulo Cerqueira Jan 16 '18 at 23:15
  • Oh, I've never heard of people using dynamic cubemaps; I've always just assumed that that would be too slow, so I'm glad it's working! Can you do the process twice, once for depth and once for normals, storing the normals as color like in a normal map? – kylengelmann Jan 18 '18 at 07:03
  • Hey @kylengelmann, I invested a lot of time in dynamic cubemaps; however, I did not find how to compute the normal data. Anyway, I think this approach would bring a lot of computational effort, because I would need to compute dynamic cubemaps for all objects, even those outside the viewpoint. At this point I have shifted back to the initial idea discussed before: first reflection by rasterization, second one by ray tracing. For the second reflection, I already have the world position and the direction vector. – Rômulo Cerqueira Feb 28 '18 at 02:30
  • My current issue is how to calculate the intersection between a ray and any object in the scene. I have found a few interesting examples on the internet about this topic; however, they require knowing the objects' surfaces (e.g. sphere, box...) and positions. Do you know how I can calculate these intersections using ray tracing and GLSL? – Rômulo Cerqueira Feb 28 '18 at 02:33
  • You can calculate the intersection between rays and triangles; Nvidia OptiX has ways of sorting the triangles based on axis-aligned bounding boxes to make this more efficient. I could be wrong, but I don't think ray tracing would work in GLSL if you want reflections off anything that's offscreen. – kylengelmann Mar 01 '18 at 20:05
  • I'm going to implement a way to store all objects as a set of triangles and pass them to the shader. I will send updates. Thanks! – Rômulo Cerqueira Mar 02 '18 at 00:30
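The ray–triangle test discussed in the last comments is most commonly done with the Möller–Trumbore algorithm. Here is a minimal sketch in Python with NumPy for readability (a GLSL port is straightforward, since it is only dot and cross products); looping it over the scene's triangle list and keeping the smallest positive t gives the nearest hit:

```python
import numpy as np

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore ray/triangle intersection.

    Returns the distance t along the ray to the hit point
    (hit = orig + t * dirn), or None if there is no hit.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirn, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(dirn, q) * inv       # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None   # reject hits behind the origin
```

Brute-force testing every triangle per ray works for small scenes; for larger ones, an acceleration structure (e.g. a bounding volume hierarchy, as OptiX builds internally) is what keeps this tractable.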