
I am currently trying to learn ray casting on a 3D texture using something like glTexImage3D. I was following this tutorial from the start. My ultimate goal is to produce a program that works like this:

This one

My understanding is that this was rendered using a ray-casting method, with the model imported as a 3D texture and both the ray casting and the texture sampling performed in the fragment shader. I hope I can replicate this program as practice. Could you kindly answer my questions?

  1. What file format should be used to import the 3D texture?
  2. Which GLSL functions should I use to detect the distance between my ray and the texture?
  3. What are the differences between 3D texture sampling and volume rendering?
  4. Are there any available online tutorials for me to follow?
  5. How can I produce my own 3D texture? (Is it possible to make one using blender?)
genpfault
FunnyFunkyBuggy

1 Answer


1. What file format should be used to import the 3D texture?

It doesn't matter; OpenGL doesn't deal with file formats at all. You load the voxel data into memory yourself (with your own code or a library) and hand a raw pointer to glTexImage3D.

2. Which GLSL functions should I use to detect the distance between my ray and the texture?

There's no "ready to use" raycasting function. You have to implement a raycaster yourself, i.e. sample the texture along a line (the ray) between a start and an end point and integrate the samples into a final color value.
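A fragment-shader sketch of such a loop might look like the following. All names (u_volume, u_steps, v_entry, v_exit) and the trivial density-to-color mapping are assumptions; a real volume renderer would use a proper transfer function.

```glsl
#version 330 core
uniform sampler3D u_volume;
uniform int u_steps;        // number of samples along the ray, e.g. 256
in vec3 v_entry;            // ray entry point in [0,1]^3 texture space
in vec3 v_exit;             // ray exit point (e.g. from the cube's back faces)
out vec4 fragColor;

void main()
{
    vec3 stepv = (v_exit - v_entry) / float(u_steps);
    vec3 pos   = v_entry;
    vec4 acc   = vec4(0.0);

    for (int i = 0; i < u_steps; ++i) {
        float density = texture(u_volume, pos).r;
        // Placeholder mapping from density to color and opacity.
        vec4 src = vec4(vec3(density), density * 0.05);
        // Front-to-back compositing (integration of the samples).
        acc.rgb += (1.0 - acc.a) * src.a * src.rgb;
        acc.a   += (1.0 - acc.a) * src.a;
        if (acc.a > 0.99) break;   // early ray termination
        pos += stepv;
    }
    fragColor = acc;
}
```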

3. What are the differences between 3D texture sampling and volume rendering?

Sampling a 3D texture is not much different from sampling a 2D, 1D, cube map, or any other texture topology. For a given lookup vector A, a certain vector B is returned: either the value of the sample closest to the location A points to (nearest sample) or an interpolated value.
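In GLSL this is a single lookup; note that nearest vs. interpolated is decided by the texture's filter state, not by the shader code (u_volume is a placeholder name):

```glsl
uniform sampler3D u_volume;

// Returns either the nearest voxel or a trilinearly interpolated value,
// depending on the texture's GL_TEXTURE_MIN_FILTER / GL_TEXTURE_MAG_FILTER
// settings (GL_NEAREST vs. GL_LINEAR). The call itself is identical.
float sampleVolume(vec3 coord)   // coord = vector "A", in [0,1]^3
{
    return texture(u_volume, coord).r;   // result = vector "B"
}
```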

4. Are there any available online tutorials for me to follow?

http://www.real-time-volume-graphics.org/?page_id=28

5. How can I produce my own 3D texture? (Is it possible to make one using blender?)

You can certainly use Blender, e.g. by baking volumetric data like fog density. But the whole subject is too broad to be sufficiently covered here.

datenwolf
  • Thanks! For question 1, what is the format of the data? Is it possible to convert an obj file to such a format? – FunnyFunkyBuggy Jun 03 '17 at 08:10
  • @FunnyFunkyBuggy: It's basically a 3D image. Imagine it as a stack of 2D images made up of slices. "Converting" a mesh or geometry into volumetric data is certainly possible by actually rendering the geometry into a volumetric image. But that comes with its own share of problems, like how you define what's "inside". Your typical mesh is made up of infinitely thin triangles, and if the mesh is not watertight, or is not orientable, it's impossible to "fill" a volume. However, if you ask *that* question, you seem to have an XY problem. – datenwolf Jun 04 '17 at 07:11