
I am learning to implement point shadows using a depth cubemap, following learnopengl.com. The tutorial uses one perspective projection matrix and six view matrices for the light-space transform. Its six view matrices are:

std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0,-1.0, 0.0), glm::vec3(0.0, 0.0,-1.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj * 
                 glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0,-1.0), glm::vec3(0.0,-1.0, 0.0)));

I know how the lookAt function works. My question is how to choose the up vector. In other words, why is the last parameter of the first lookAt call (0,-1,0) and not (0,1,0)?

I tried to use other up vectors, like:

(0,1,0)
(0,1,0)
(1,0,0)
(1,0,0)
(0,1,0)
(0,1,0)

but it went wrong. I asked ChatGPT, but its answer is the opposite:

glm::vec3 upVectors[6] = {
    glm::vec3(0.0, 1.0, 0.0),  // right
    glm::vec3(0.0, 1.0, 0.0),  // left
    glm::vec3(0.0, 0.0, -1.0), // top
    glm::vec3(0.0, 0.0, 1.0),  // bottom
    glm::vec3(0.0, 1.0, 0.0),  // far
    glm::vec3(0.0, 1.0, 0.0)   // near
};

In this question someone describes the reason, but I still don't understand. Does that mean the up vectors have simply been defined by convention and we just need to follow them, or is there another reason?

Any help would be appreciated! Thanks!!!

genpfault

1 Answer


In theory, you could use your up-vectors and the rendering of the 6 sides of the shadow map would work just fine. The issue comes up when you want to sample from them.

Theoretically, you could take the 6 sides, make a texture array out of them, and then manually figure out which array layer you have to use for a given direction. Since this is a very common use case (e.g. for skyboxes), there is built-in support for this in the form of cubemaps. They look like this: [cubemap example image]

Ultimately, cubemaps are 6-layered texture arrays with some special treatment. OpenGL has a very specific definition of how the individual array layers must be rotated to get the "correct" result when sampling from the cubemap with a direction vector: [cubemap face orientation diagram]
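For reference, the per-face texture-coordinate directions come from the cube-map face selection table in the OpenGL specification (table 8.19 in the GL 4.6 core spec, if I have the numbering right). Each face has a major axis plus an s axis and a t axis expressed in the direction-vector's coordinate space:

```
face          s axis   t axis
+X (right)     -z       -y
-X (left)      +z       -y
+Y (top)       +x       +z
-Y (bottom)    +x       -z
+Z (front)     +x       -y
-Z (back)      -x       -y
```

Since texture t grows with window y when rendering to a face, the camera's up vector for each face must point along that face's t axis, and the camera's right vector along its s axis. That is exactly why four of the six faces use (0,-1,0) as the up vector.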

You could certainly render the 6 shadow map sides with the "intuitive" up-vectors and then rotate them the way OpenGL expects afterwards, but that's just extra work. So what people usually do is choose the lookAt parameters so that the camera is already rotated the correct way.
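To make that concrete, here is a small self-contained sketch (plain C++, no glm; the per-face s/t axes are copied from the GL spec's cube-map face table, so treat the exact table reference as my reading of the spec). It checks the learnopengl up-vectors: for each face, the camera's right vector, which lookAt builds as cross(forward, up), must line up with the face's s axis, and the up vector itself with the t axis, because texture t grows with window y.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Cross product, used the same way lookAt derives the camera's right vector.
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

bool same(Vec3 a, Vec3 b) {
    return std::fabs(a.x - b.x) < 1e-9 &&
           std::fabs(a.y - b.y) < 1e-9 &&
           std::fabs(a.z - b.z) < 1e-9;
}

// One row per cube-map face: the forward/up pair from the learnopengl code,
// plus the (s, t) axes OpenGL defines for that face in its cube-map
// face selection table.
struct Face { Vec3 forward, up, sAxis, tAxis; };

const Face faces[6] = {
    // +X: s = -z, t = -y
    {{ 1, 0, 0}, { 0,-1, 0}, { 0, 0,-1}, { 0,-1, 0}},
    // -X: s = +z, t = -y
    {{-1, 0, 0}, { 0,-1, 0}, { 0, 0, 1}, { 0,-1, 0}},
    // +Y: s = +x, t = +z
    {{ 0, 1, 0}, { 0, 0, 1}, { 1, 0, 0}, { 0, 0, 1}},
    // -Y: s = +x, t = -z
    {{ 0,-1, 0}, { 0, 0,-1}, { 1, 0, 0}, { 0, 0,-1}},
    // +Z: s = +x, t = -y
    {{ 0, 0, 1}, { 0,-1, 0}, { 1, 0, 0}, { 0,-1, 0}},
    // -Z: s = -x, t = -y
    {{ 0, 0,-1}, { 0,-1, 0}, {-1, 0, 0}, { 0,-1, 0}},
};

// Camera up must point along the face's t axis (t grows with window y),
// and camera right = cross(forward, up) must point along the s axis.
bool upVectorsMatchSpec() {
    for (const Face& f : faces) {
        if (!same(cross(f.forward, f.up), f.sAxis)) return false;
        if (!same(f.up, f.tAxis)) return false;
    }
    return true;
}
```

With the learnopengl forward/up pairs, upVectorsMatchSpec() returns true for all six faces; swap in the "intuitive" up-vectors from the question and the check fails, which is exactly the mismatch you saw when sampling.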

IGarFieldI