
For the past few weeks, I have been working on an algorithm that finds hidden surfaces of complex meshes and removes them. These hidden surfaces are completely occluded, and will never be seen. Due to the nature of the meshes I'm working with, there are a ton of these hidden triangles. In some cases, there are more hidden surfaces than visible surfaces. As removing them manually is prohibitive for larger meshes, I am looking to automate this with software.

My current algorithm consists of:

  1. Generate several points on the surface of the triangle.
  2. For each point, build a hemisphere sampler aligned to the triangle's normal.
  3. Cast rays out through each hemisphere.
  4. If fewer than a certain number of rays are unoccluded, flag the triangle for deletion.
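For reference, the four steps can be sketched as follows. This is a minimal, unoptimized illustration rather than the actual implementation: the intersection test is plain Möller–Trumbore, and the point/ray counts are placeholder parameters.

```python
import math
import random

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    l = math.sqrt(dot(v, v))
    return [c / l for c in v]

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Möller–Trumbore ray/triangle intersection test.
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return False                     # ray parallel to the triangle's plane
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(t_vec, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > eps        # hit strictly in front of the origin

def sample_on_triangle(v0, v1, v2):
    # Step 1: uniform barycentric sample.
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    u, v = 1.0 - s, s * r2
    return [u*v0[i] + v*v1[i] + (1.0 - u - v)*v2[i] for i in range(3)]

def cosine_direction(n):
    # Step 2: cosine-weighted direction in the hemisphere around unit normal n.
    r1, r2 = random.random(), random.random()
    r, phi = math.sqrt(r1), 2.0 * math.pi * r2
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - r1))
    a = [1.0, 0.0, 0.0] if abs(n[0]) < 0.9 else [0.0, 1.0, 0.0]
    t = normalize(cross(a, n))
    b = cross(n, t)
    return [x*t[i] + y*b[i] + z*n[i] for i in range(3)]

def is_occluded(tri, occluders, n_points=4, n_rays=16, min_clear=1):
    # Steps 3-4: the triangle is flagged if fewer than min_clear rays escape.
    v0, v1, v2 = tri
    n = normalize(cross(sub(v1, v0), sub(v2, v0)))
    clear = 0
    for _ in range(n_points):
        p = sample_on_triangle(v0, v1, v2)
        for _ in range(n_rays):
            d = cosine_direction(n)
            if not any(ray_hits_triangle(p, d, *t) for t in occluders):
                clear += 1
                if clear >= min_clear:
                    return False
    return True
```

A real implementation would of course test rays against an acceleration structure (BVH, kd-tree) rather than a flat list of occluders.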

However, this algorithm is causing a lot of grief; it's very inconsistent. Some occluded faces are missed by the test, but I'm far more worried about clearly visible faces being removed due to issues with the current implementation. Therefore, I'm wondering about two things, mainly:

  1. Is there a better way to find and remove these hidden surfaces than raytracing?
  2. Should I investigate non-random ray generation? I'm currently generating random directions in a cosine-weighted hemisphere, which could be causing issues. The only reason I haven't investigated this is that I have yet to find an algorithm that generates evenly-spaced rays over a hemisphere.
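For reference, one deterministic construction that gives near-evenly-spaced directions without random clustering is a spherical Fibonacci lattice. A minimal sketch (directions are generated in a local frame with the normal along +z, so they would still need to be rotated into each triangle's frame):

```python
import math

def fibonacci_hemisphere(n):
    """Return n roughly evenly spaced unit directions on the +z hemisphere,
    using the spherical Fibonacci (golden-angle) lattice."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle in radians
    dirs = []
    for i in range(n):
        z = (i + 0.5) / n                       # uniform in z -> uniform on hemisphere
        r = math.sqrt(1.0 - z * z)
        phi = golden * i
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs
```

These are uniform over the hemisphere's area; if cosine weighting is wanted instead, use `z = math.sqrt((i + 0.5) / n)`. Being deterministic, the set can also be precomputed once and reused for every hemisphere.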

Note: This is intended to be an object space algorithm. That is, visibility is tested from every angle, not from a fixed camera.

ContingencyCoder
  • For your last question, see http://mathworld.wolfram.com/SpherePointPicking.html for generating evenly-spaced rays in a hemisphere – Drew McGowen Jul 28 '14 at 21:27
  • This is called "backface culling", which may help with searches. As for detecting a cullable face, why not cast rays from each vertex to your camera, then if all three rays from a triangle intersect another face, the triangle is fully occluded and can be removed from the mesh. It should be noted that graphics libraries such as OpenGL can do this for you at the render stage – Bojangles Jul 28 '14 at 21:28
  • @DrewMcGowen Thanks for that link. I have already looked at it. The main issue is that, due to the computationally intense nature of the algorithm, I'd like to keep rays to a minimum (some of the meshes already require 100 billion rays with 512 rays per hemisphere). As with anything random, there are always "clusters" of points. I was looking for something that generated perfect evenly distributed rays. Eventually, I might break down and hardcode the directions for evenly spaced rays in a header file or something. – ContingencyCoder Jul 28 '14 at 21:32
  • @Bojangles just because the rays from each vertex of the triangle to the camera are occluded doesn't mean the entire triangle is occluded – Drew McGowen Jul 28 '14 at 21:32
  • @Bojangles This is an object space approach. I should have made note of that in the original question, but there is no camera. I'm optimizing the meshes to be viewed in 3D. I appreciate the comment though. – ContingencyCoder Jul 28 '14 at 21:33
  • You can certainly cut out a lot of computation by computing the normal first and then culling any faces that are pointing away from your camera. No rays needed. That won't get rid of occluded faces but it will reduce your computational load. – Logicrat Jul 28 '14 at 21:34
  • How do you get false negatives? How does the algorithm find unoccluded rays from an occluded triangle's surface? Could it just be buggy? – Daniel Darabos Jul 28 '14 at 21:38
  • @DanielDarabos The "occluded" triangles the algorithm finds unoccluded are usually due to the nature of the mesh in a particular instance. It's just an inherent problem with the mesh, not the algorithm. I'm much more concerned with the algorithm completely ruining the visible part of the mesh in some cases. – ContingencyCoder Jul 28 '14 at 21:41
  • @Drew I've just realised my mistake, thanks for pointing it out – Bojangles Jul 28 '14 at 21:41
  • I understand that you are looking for an object space algorithm, but then I do not understand how something can be hidden? When a surface is hidden there must also be a view point from which the surface is hidden. Any surface should be visible if you place the camera close enough to the surface. Or are your surfaces closed and you want to remove surfaces completely inside other surfaces? Perhaps you could clarify your definition of hidden? – Martin Liversage Jul 29 '14 at 05:38
  • As these "hidden" triangles may not be visible at *any* angle, doesn't that mean they must be completely inside a closed set of other triangles? – Jongware Jul 29 '14 at 06:03
  • @MartinLiversage They are closed meshes. I'm working on a project where final meshes are built from modular pieces. So when they are constructed to form, say, a small room that is completely closed off, I'm looking to remove all the internal geometry. There's currently a distance fall-off that will make sure large internal spaces will be preserved. – ContingencyCoder Jul 29 '14 at 06:07
  • @ContingencyCoder Even if a small room is completely closed off, nothing prevents the inside from being seen if the observer is inside the room. Is it true that the camera is never inside the mesh? – Tavian Barnes Jul 30 '14 at 00:44
  • @TavianBarnes I'm mainly looking at removing triangles inside very small sections of the mesh. Large rooms would probably be left untouched. – ContingencyCoder Jul 30 '14 at 18:40

2 Answers


I've actually never implemented ray tracing, but I have a few suggestions anyhow. As your goal is to detect every hidden triangle, you could turn the problem around and instead find every visible triangle.

I'm thinking of something along the lines of either:

  1. Ray trace from the outside in (towards the centre, or perpendicular to the surface), marking any triangle hit as visible.
  2. Cull all others.

or

  1. Choose a view of your model.
  2. Rasterize the model (for example, using a different colour for each triangle).
  3. Mark every triangle whose colour appears in the output as visible.
  4. Change the orientation and repeat.
  5. Cull all non-visible triangles.

The advantage of the last one is that it should be relatively cheap to implement using a graphics API, if you can read/write the pixels reliably.

A disadvantage of both is the resolution needed. Triangles that are visible only through small openings may still be culled when they shouldn't be, so the number of rays may become prohibitive (in the first approach), or you will need very large off-screen frame buffers (in the second).
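The per-triangle colour trick only needs a reversible mapping between triangle indices and colours; the render pass itself would go through OpenGL or another API. A sketch of the bookkeeping (the function names are illustrative):

```python
def index_to_rgb(i):
    # Pack a triangle index into a 24-bit (r, g, b) byte triple;
    # triangle i is then rendered flat-shaded with exactly this colour.
    assert 0 <= i < (1 << 24)
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def rgb_to_index(rgb):
    # Inverse mapping, applied to pixels read back from the frame buffer.
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def visible_indices(pixels):
    # pixels: iterable of (r, g, b) tuples read back after rendering one view.
    # Union these sets over many orientations, then cull every triangle
    # whose index never appears.
    return {rgb_to_index(px) for px in pixels}
```

Lighting, blending, multisampling and texture filtering must all be disabled for the ID pass, or the read-back colours won't decode to valid indices.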

Stian Svedenborg
  • My original attempt at using raytracing was using a large sphere around the models and raycasting inwards. However, there was the issue of really large internal spaces that wouldn't be preserved. I'm mainly looking to remove the smaller "rooms" that would be created. Due to this, someone actually recommended an algorithm that would test distances to other surfaces and try to figure out if the surface was occluded that way. I haven't really tried that yet, however, as I don't know if it would work in all cases. – ContingencyCoder Jul 29 '14 at 06:10
  • Also, I would like to add that I am technically finding all the visible triangles with the current algorithm. That is, I'm testing all triangles for unoccluded rays, or rays that haven't hit anything. The triangles are all "considered" occluded at first. – ContingencyCoder Jul 29 '14 at 06:12
  • Hmm, I see... It might still be a viable approach if you are able to identify the "entrance" to these internal spaces, but I see you would still get problems with long open cylinders etc. Do you have any more information about the kind of shapes you expect to get or are they completely arbitrary? – Stian Svedenborg Jul 29 '14 at 07:04
  • Unfortunately, they are pretty much completely arbitrary triangle meshes. As for finding the "entrances", a friend of mine actually recommended I look into some sort of fluid simulation algorithm where I "flood" the meshes and then see what triangles come into contact with the water. Theoretically, the triangles that were occluded would never come into contact with the "fluid". This would give false-positives for triangles that were occluded in certain scenarios, however. – ContingencyCoder Jul 29 '14 at 07:29
  • I like the fluid idea, especially if you should be able to "walk" inside the mesh in the final application. I'm curious though, how would it create false positives (assuming the mesh is "tight")? – Stian Svedenborg Jul 29 '14 at 07:31
  • Curiously enough, one of the applications is a game where players would be walking around! The mesh would be "tight" in certain areas where the player couldn't walk in. ;) Think of a sphere that you can see inside to, but can't walk inside of. Then picture a sculpture in the center of the sphere. While the player can see one side of the sculpture, the other 20K triangles that make up the occluded side couldn't be seen. The raytracing approach attempts to fix this by using hemisphere with a certain radius. It will correctly identify the "other side" of the hypothetical sculpture. – ContingencyCoder Jul 29 '14 at 07:34
  • Hmm. A thought that strikes me is that you might be trying to do too many things in one step. Take the example you made above. How would you differentiate between the back of the sculpture in the no-go room versus the exact same room where you can walk around. It might be an idea to differentiate between reachable vs. visible only portions of the mesh. In reachable portions flooding could be suitable and for the visible portions, you will need information about possible views. – Stian Svedenborg Jul 29 '14 at 07:52
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/58222/discussion-between-contingencycoder-and-stian-v-svedenborg). – ContingencyCoder Jul 29 '14 at 07:53
  • I could do this in U3D with the default raytracing and a physics mesh to detect the collisions of rays; it would make a list of all the vertices to keep. You'd have to make a variable number of rays. For the sphere, there are questions like distributing points on a sphere using phi formulas and disco-ball patterns, and then projecting n rays parallel to each ray. – bandybabboon Jun 19 '15 at 10:24
  • You could perhaps do it by filling the volume with water, figuratively speaking; the water would pour from a vertex and fill every point inside the volume. Difficult. – bandybabboon Jun 19 '15 at 10:27

A couple of ideas that may help.

  1. Use a connectivity test to determine what is connected to your main model (if there is one).
  2. Use a variant of Depth Peeling (I've used it to convert shells into voxels; once you know what is inside the models that you want to keep (the voxels), you can intersect the junk that you want to remove.)
  3. Create a connectivity graph and prune the graph based on the complexity of connected groups.
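For an indexed triangle list, point 1 could be sketched with a small union-find over shared vertex indices (a sketch only; the pruning policy afterwards is an assumption):

```python
def connected_components(triangles):
    # Label each triangle by the connected group it belongs to; triangles
    # that share a vertex index end up with the same label.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i, j, k in triangles:
        union(i, j)
        union(i, k)
    # All three vertices of a triangle share a root, so the first suffices.
    return [find(t[0]) for t in triangles]
```

Counting label frequencies then gives component sizes, so small disconnected clumps can be deleted while the main hull is kept.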
axon
  • I'm not sure how point 1 would help in this situation. Would you be able to explain that a bit more? Isn't it possible to have hidden faces that are part of the main hull, and have visible faces that are not? – ChrisWebb Jul 29 '14 at 06:01
  • It assumes that most of what you are wanting to remove are small groups of triangles. If so, you can determine which points are part of tris that are connected (share points). Each connected group can be assigned a unique ID. Then the groups that you want to keep can somehow be isolated (and the rest deleted). – axon Jul 29 '14 at 06:05
  • The connection graph is something I considered at first, but it didn't deliver the results I was looking for. The depth peeling idea might be interesting to look at. Thanks for the ideas. – ContingencyCoder Jul 29 '14 at 06:08
  • Wow, the depth peeling looks very interesting. The only issue is that using screen space techniques might be problematic. I'll definitely think about it. – ContingencyCoder Jul 29 '14 at 06:14