
Just for learning purposes, I have been looking at raytracing techniques. I recently discovered Global Illumination and its variants while reading this amazing work by Kevin Beason. I tried to port smallpt to another language (Lua).

I got something working, but in my opinion it renders the scene too slowly. Digging around the internet, I have seen in lots of technical papers covering the subject that this is the main problem with global illumination techniques, especially path tracing.

It seems to me there are methods to speed up the process, such as space-filling curves (Hilbert curves, for instance). Basically, they all divide the viewport into buckets (or tiles), and then instruct the renderer to run path tracing on each bucket in a specific order. I haven't looked at how to implement Hilbert curves yet, but I would like to understand how, in practical terms, this can make the entire process faster.

My first assumption was that, considering each bucket apart, the renderer was called on specific pixels only, and the others were then sampled using interpolation tricks; but actually, it seems that the renderer is called on every single pixel in the bucket.

So, as a result, the renderer processes all the pixels of the image, which amounts to the same work as two nested for loops (over rows and columns), in my humble opinion. I know I am obviously missing something, so I am looking forward to a clear explanation.

Thanks in advance.

Roland Y.
  • Nobody? Here is the link to the project repository: [Smallpt-Lua](https://github.com/Yonaba/smallpt-lua). Thanks. – Roland Y. Dec 15 '12 at 18:18

2 Answers


Smallpt is designed to demonstrate global illumination with path-tracing in the fewest lines of code possible. As such it does not feature any performance optimisations.

For anything but the simplest scenes, most of the rendering time is spent calculating which object a ray hits next along its path, as this has to be done many times for each ray.

A naive implementation will, for every ray bounce, compute the intersection of the ray with every object in the scene in order to determine which object is hit next.
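To make the cost concrete, here is a minimal Python sketch of that naive closest-hit search (illustrative only, not smallpt's actual code; the sphere intersection mirrors smallpt's quadratic solve, and all names here are my own):

```python
import math

class Sphere:
    def __init__(self, radius, center):
        self.radius, self.center = radius, center

    def intersect(self, orig, direction):
        """Return the distance along the ray to the hit, or None if it misses."""
        # Solve t^2 + 2*t*(o-c).d + |o-c|^2 - r^2 = 0 for a unit direction d.
        oc = [o - c for o, c in zip(orig, self.center)]
        b = sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - self.radius * self.radius
        disc = b * b - c
        if disc < 0:
            return None
        sq = math.sqrt(disc)
        for t in (-b - sq, -b + sq):   # nearest positive root wins
            if t > 1e-4:
                return t
        return None

def closest_hit(scene, orig, direction):
    """O(n) per bounce: test the ray against every object in the scene."""
    best_t, best_obj = math.inf, None
    for obj in scene:
        t = obj.intersect(orig, direction)
        if t is not None and t < best_t:
            best_t, best_obj = t, obj
    return best_obj, best_t
```

With hundreds of bounces per pixel and many samples per pixel, that linear scan over every object is exactly what an acceleration structure replaces.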

To improve on this, acceleration structures are used to reduce the number of ray/object intersections that have to be calculated, typically by partitioning the volume enclosing the scene to avoid unnecessary intersection tests.

There are two main approaches:

  • KD-trees - related to binary space partitioning (BSP) trees.
  • Bounding volume hierarchies (BVHs) - a hierarchy of (usually) axis-aligned boxes which enclose objects and other boxes.

Performance-wise both work equally well; however, I believe BVHs are simpler to implement.
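To illustrate the BVH idea (a hand-rolled Python sketch under my own assumptions, not code from smallpt or any library): each node stores an axis-aligned box, and traversal does a cheap slab test first, only descending into subtrees whose box the ray actually hits.

```python
def ray_hits_aabb(orig, inv_dir, lo, hi):
    """Slab test: does the ray (origin, componentwise 1/direction) hit box [lo, hi]?"""
    tmin, tmax = 0.0, float('inf')
    for axis in range(3):
        t1 = (lo[axis] - orig[axis]) * inv_dir[axis]
        t2 = (hi[axis] - orig[axis]) * inv_dir[axis]
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
    return tmin <= tmax

class BVHNode:
    def __init__(self, lo, hi, left=None, right=None, objects=None):
        self.lo, self.hi = lo, hi          # this node's bounding box
        self.left, self.right = left, right
        self.objects = objects             # non-None only at leaves

def traverse(node, orig, inv_dir, hit_test):
    """Collect candidate objects, skipping whole subtrees whose box the ray misses."""
    if node is None or not ray_hits_aabb(orig, inv_dir, node.lo, node.hi):
        return []
    if node.objects is not None:           # leaf: test the few objects stored here
        return [o for o in node.objects if hit_test(o)]
    return (traverse(node.left, orig, inv_dir, hit_test) +
            traverse(node.right, orig, inv_dir, hit_test))
```

A real implementation would also return the nearest hit distance and visit the closer child first, but even this sketch shows where the savings come from: one box test can discard an entire subtree of objects.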

jon hanson
  • Thanks Jon, that's definitely what I was looking for: space partitioning techniques. I'll look into it. – Roland Y. Jan 09 '13 at 17:20
  • Cool. I'd recommend BVHs first. I had a go at KD-trees and though I did get them working eventually, they're not fun to debug. – jon hanson Jan 09 '13 at 21:33

I don't know that particular method, but tiling alone can help performance because it improves cache usage for complex scenes. Instead of scanning left to right and wandering through the scene pulling everything into cache, rendering a tile at a time means, say, a few hundred rays all fall into approximately the same area of the scene instead of moving merrily through it, so only that area's data needs to be in cache. Of course, secondary rays are basically random anyway, so if you're using path tracing the first ray will probably not matter all that much. It might be more helpful for photon mapping, though.
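For what it's worth, the tile iteration itself is trivial. A Python sketch of replacing scanline order with bucket order (the tile size of 16 is just an arbitrary example, not anything smallpt uses):

```python
def tile_order(width, height, tile=16):
    """Yield (x, y) pixel coordinates tile-by-tile instead of scanline order,
    so consecutive rays enter roughly the same region of the scene."""
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    yield x, y
```

Every pixel is still rendered exactly once, which matches your observation that the total work is the same; the only thing that changes is the order, and with it the memory access pattern.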

Anyway, maybe space-filling curves are doing something similar there. Aside from that, I have no idea what they could be for.

[edit] Try taking them out and see if the performance changes.

FeepingCreature