
I am having trouble incorporating transformations. For whatever reason, things are not going the way I think they should; to be honest, all the transformations back and forth make me quite dizzy.

As I have read everywhere (although explicit explanations are rare, imho), the standard algorithm for transformations is as follows:

  • transform the Ray (Origin and Direction) with the inverse of the transformation matrix
  • transform the resulting Intersection-Point with the transformation matrix
  • transform the object's normal at the Intersection-Point with the transposed of the inverse

From what I understood, that should do the trick. I'm pretty sure my problem lies in the lighting calculation, since both the initial intersection and the lighting algorithm use the same function (obj.getIntersection()). But then again, I have no idea. :(

You can read parts of my code here:

main.cpp, scene.cpp, sphere.cpp, sdf-loader.cpp

Please let me know if you need more info to help me - and please help me! ;)

EDIT:

I rendered some test images; maybe someone can "see" from the results where I went wrong:

(rendered images)

  • untransformed scene
  • sphere scaled (2,4,2)
  • box translated (0,-200,0)
  • sphere translated (-300,0,0)
  • sphere x-rotated (45°)

alex
  • It would be easier to help you if you could post the minimum code to reproduce the problem here. But is there an actual issue, or are you just thinking there might be an issue? – olevegard Aug 24 '13 at 13:23
  • I would love to, but I'm using a framework from my university to render the image - and I doubt they'd like me sharing their code. :( Please look at parts of my code I pasted on pastebin. Actually, everything in the logic and what I did wrong should definitely be in there. If you need more (e.g. sphere.hpp), please let me know. – alex Aug 24 '13 at 13:30
  • One thing to be careful of -- when you transform a normal, you need to use homogeneous coordinates `[x,y,z,0]` instead of `[x,y,z,1]`, or you need to only multiply by the rotation part of the transformation. – Vaughn Cato Aug 24 '13 at 13:48
  • Well, in the "proprietary" code of mine (the one from my university), `math3d::point` and `math3d::vector` are differentiated by `1` and `0` in the `w`-coordinate respectively. So whenever I do a matrix multiplication, that should be taken care of by the framework. But good info, I will check that! Thanks! – alex Aug 24 '13 at 13:54
  • The stretched sphere looks like the normals aren't being transformed correctly. Also if all the objects and lights are in world space, why do you "transform the resulting Intersection-Point with the transformation matrix" - the intersection point is already in world space with everything else. Just make sure that the normal is in world space and not object space (use the object's transform without the camera's). – jozxyqk Aug 26 '13 at 14:30

1 Answer


Generally, for transformations in computer graphics, I would recommend having a look at scratchapixel.com, and particularly this lesson:

http://scratchapixel.com/lessons/3d-basic-lessons/lesson-4-geometry/

and this one, where you can see how transformations (matrices) are used to transform rays and objects:

http://scratchapixel.com/lessons/3d-basic-lessons/lesson-8-putting-it-all-together-our-first-ray-tracer/

If you don't know this amazing resource yet, I would advise using it and maybe spreading the word at your university. Your teacher should have pointed it out to you.

user18490