
I want to render a translucent solid object, but I don't want to get into refraction. I'm after this (hopefully) rather simple effect: the thicker the object, the more opaque it gets and the more obscured the objects behind it become; but (again) I don't want to involve refraction or, for that matter, any complex light-matter interaction.

Perhaps I'm missing something, but I can't find any good sources that discuss simple, non-opaque solids (solid = filled geometry) as opposed to totally opaque meshes (hollow objects with opaque surfaces) or hollow geometry with transparent surfaces.

Ashkan Kh. Nazary
  • What properties does your shape have? Is it safe to assume that although it may not be convex, it isn't self-intersecting and the boundary described by the polygons is completely sealed? – Tommy Sep 05 '11 at 08:04
  • @Tommy It is actually a little frightening. The object is a human brain. The vertices are acquired from some sort of medical equipment. The objective is to detect the various layers of the brain and render them in a way that makes the deeper layers visible through the front layers [you are not alone, I too don't quite get it yet ;-)]. – Ashkan Kh. Nazary Sep 05 '11 at 09:08
  • So it's a voxel-type thing initially, with no polygon geometry at all? – Tommy Sep 05 '11 at 19:11
  • I've added a link to http://http.developer.nvidia.com/GPUGems/gpugems_ch40.html in my answer, I think that exactly covers your situation? – Tommy Sep 06 '11 at 01:56
  • @Tommy, thanks very much, but it says Access Denied :-? – Ashkan Kh. Nazary Sep 06 '11 at 06:51
  • I can get to it from Mac, iPhone and iPad from here. Maybe try the Google cache at http://webcache.googleusercontent.com/search?q=cache:sYARdM77iSsJ:http.developer.nvidia.com/GPUGems/gpugems_ch40.html+http://http.developer.nvidia.com/GPUGems/gpugems_ch40.html&cd=1&hl=en&ct=clnk&gl=us – Tommy Sep 06 '11 at 18:14
  • This is an exact duplicate of the question you have already asked here: http://stackoverflow.com/questions/7293169/rendering-a-stuffed-translucent-cube-in-opengl/7296050#7296050 and have gotten answers for. – Razzupaltuff Sep 08 '11 at 15:58
  • @karx11erx No, not really. The answer to the question you are referring to was "refraction". This question, on the other hand, asks for advice on how to avoid refraction and achieve the desired effect in some other way. – Ashkan Kh. Nazary Sep 09 '11 at 18:16
  • @Tommy still doesn't let me through. I think it's because I'm from Iran. They are smart enough to uncover that behind the scenes I'm gonna use the knowledge in there to make a nuclear warhead ;) – Ashkan Kh. Nazary Sep 09 '11 at 18:18
  • Your question hasn't changed. You had also received an answer avoiding refraction in your other question. – Razzupaltuff Sep 09 '11 at 21:57

1 Answer


OpenGL is a forward renderer that restricts the objects it can rasterise to points, lines and polygons. The starting point is that all 3D shapes are built from those two-or-fewer-dimensional primitives. OpenGL does not in itself have a concept of solid, filled 3D geometry, and therefore has no built-in concept of how far through an object a given view ray conceptually runs, only how many surfaces it enters or exits.

Since it became possible to write shader programs, a variety of ways around the problem have opened up, the most obvious for your purpose being ray casting. You could upload a cube as geometry, set to render back faces rather than front faces, and your actual object, in voxel form, as a 3D texture. In your shader, for each pixel you start at one place in the 3D texture, get a vector towards the camera and walk forward, resampling at suitable intervals.
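
Purely as an illustration, here's a minimal sketch of what that ray-marching fragment shader might look like, assuming the back faces of a unit cube are drawn with their object-space positions interpolated through as entryPos; the uniform names (volumeTex, cameraPosObj, stepCount, densityScale) are made up:

```glsl
#version 330 core

// All names here are hypothetical, for illustration only.
uniform sampler3D volumeTex;    // the object in voxel form, densities in [0,1]
uniform vec3 cameraPosObj;      // camera position in the cube's texture space
uniform int stepCount;          // how many resampling steps to take
uniform float densityScale;     // tunes how quickly density builds to opacity

in vec3 entryPos;               // interpolated back-face position, in [0,1]^3
out vec4 fragColor;

void main()
{
    // March from the back face towards the camera, summing density.
    vec3 dir = normalize(cameraPosObj - entryPos);
    float stepLen = 1.7321 / float(stepCount);   // cube diagonal / step count
    float accumulated = 0.0;
    vec3 p = entryPos;

    for (int i = 0; i < stepCount; ++i) {
        p += dir * stepLen;
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                               // walked out of the volume
        accumulated += texture(volumeTex, p).r * stepLen;
    }

    // Beer-Lambert-style falloff: thicker/denser paths give higher opacity.
    float alpha = 1.0 - exp(-accumulated * densityScale);
    fragColor = vec4(vec3(1.0), alpha);
}
```

Blended over the scene with ordinary alpha blending, longer or denser paths through the volume come out more opaque, which is the thickness effect you describe, with no refraction involved.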

A faster and easier-to-debug solution would be to build a BSP tree of your object in order to break it into convex sections that can be drawn in back-to-front order. Prepare two depth buffers and a single pixel buffer. For clarity, call one depth buffer the back buffer and the other the front buffer.

You're going to step along the convex sections of the model from back to front, alternating between rendering to the back depth buffer with no colour output and rendering to the front depth buffer with colour output. You could get by with just one depth buffer in a software renderer, but OpenGL doesn't allow a buffer to be read while it is the render target, for various pipeline reasons.

For each convex section, first render its back-facing polygons to the back buffer. Then render its front-facing polygons to the front buffer and the colour buffer. Write a shader so that every pixel you output derives its opacity from the difference between its depth and the depth stored at its location in the back buffer.
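
A minimal sketch of that shader, assuming the back depth buffer is bound as a depth texture (backDepthTex, viewportSize and thicknessScale are hypothetical names). Note that window-space depth is non-linear, so for anything beyond a rough effect you'd want to linearise both depths before subtracting:

```glsl
#version 330 core

// Hypothetical names throughout.
uniform sampler2D backDepthTex;  // back-face depths from the previous pass
uniform vec2 viewportSize;       // window dimensions in pixels
uniform float thicknessScale;    // tunes depth difference -> opacity

out vec4 fragColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / viewportSize;
    float backDepth = texture(backDepthTex, uv).r;

    // Approximate the thickness of this convex section along the view ray
    // by the (window-space) depth difference between back and front faces.
    float thickness = max(backDepth - gl_FragCoord.z, 0.0);
    float alpha = 1.0 - exp(-thickness * thicknessScale);

    fragColor = vec4(vec3(0.8), alpha);  // flat tint; opacity from thickness
}
```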

If you're concerned about the camera intersecting your model, you could also render front-facing polygons to the back buffer (most conveniently immediately after rendering them to the front buffer, once you've switched targets), then at the end draw a full-screen polygon at the near plane that outputs a suitable alpha wherever the value of the back buffer differs from that of the front buffer.
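
Again as a rough sketch under the same assumptions (both depth buffers readable as depth textures; all names hypothetical), that final full-screen pass might look like:

```glsl
#version 330 core

// Hypothetical names; both depth buffers assumed bound as depth textures.
uniform sampler2D backDepthTex;   // back faces, plus front faces drawn after
uniform sampler2D frontDepthTex;  // front faces only
uniform vec2 viewportSize;
uniform float capAlpha;           // opacity where the near plane cuts the solid

out vec4 fragColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / viewportSize;
    float back  = texture(backDepthTex, uv).r;
    float front = texture(frontDepthTex, uv).r;

    // Where the two buffers disagree, the front face was clipped away by the
    // near plane, i.e. the camera sits inside the solid at this pixel.
    float alpha = abs(back - front) > 1e-5 ? capAlpha : 0.0;
    fragColor = vec4(vec3(0.8), alpha);
}
```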

Addition: if the source data is some sort of voxel data, as from a CT or MRI scanner, then an alternative to ray casting is to upload it as a 3D texture and draw a series of slices (parallel to the view plane if possible, along the current major axis otherwise). You can see some documentation and a demo at Nvidia's Developer Zone.
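
For completeness, a sketch of the per-slice fragment shader under that approach, assuming each slice quad carries an interpolated 3D texture coordinate volCoord; as before, the names are made up for illustration:

```glsl
#version 330 core

// Hypothetical names again.
uniform sampler3D volumeTex;  // the voxel data
uniform float sliceAlpha;     // per-slice opacity, tuned to the slice spacing

in vec3 volCoord;             // 3D texture coordinate for this slice fragment
out vec4 fragColor;

void main()
{
    float density = texture(volumeTex, volCoord).r;
    // Drawn back to front with blending on, many such slices accumulate
    // opacity in proportion to the thickness of dense material crossed.
    fragColor = vec4(vec3(1.0), density * sliceAlpha);
}
```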

Tommy