
I am trying to implement instancing in my OpenGL program. I got it working, and then decided to make my GLSL code more efficient by sending the Model-View-Projection matrix product to the GLSL program as a per-instance input, so that the CPU computes it once per instance instead of the GPU computing it for every vertex. Here is my vertex shader code (most of it is irrelevant to my question):

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal_modelspace;
layout(location = 6) in mat4 models;
layout(location = 10) in mat4 modelsV;
layout(location = 14) in mat4 modelsVP;

// Output data; will be interpolated for each fragment.
out vec3 newColor;

out vec3 Position_worldspace;

out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;

// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 MV;
uniform mat4 P;
uniform mat4 V;
uniform mat4 M;
uniform int num_lights;
uniform vec3 Lights[256];

void main(){

    // Output position of the vertex, in clip space: (P * modelsV) * position
    gl_Position = P * modelsV * vec4(vertexPosition_modelspace, 1);

    // Position of the vertex, in world space: M * position
    Position_worldspace = (models * vec4(vertexPosition_modelspace, 1)).xyz;

    // Vector that goes from the vertex to the camera, in camera space.
    // In camera space, the camera is at the origin (0,0,0).
    vec3 vertexPosition_cameraspace = (modelsV * vec4(vertexPosition_modelspace, 1)).xyz;
    EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

    // Normal of the vertex, in camera space
    Normal_cameraspace = (modelsV * vec4(vertexNormal_modelspace, 0)).xyz;

    // Color of the vertex. No special space for this one.
    newColor = vertexColor;

}

The above code works, but only because I'm not using the last input, modelsVP, to calculate gl_Position. If I do use it (instead of computing P * modelsV), the instances aren't drawn and I get this error:

Linking program
Compiling shader : GLSL/meshColor.vertexshader
Compiling shader : GLSL/meshColor.fragmentshader
Linking program
Vertex info
0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
0(10) : error C5041: cannot locate suitable resource to bind variable "modelsVP". Possibly large array.

I'm sure I'm linking it correctly in my OpenGL code, because if I swap the input locations of modelsVP and modelsV so that modelsVP is at 10 instead of 14, I am able to use it, but then not modelsV. Is there a maximum number of inputs you can have for a vertex shader? I really can't think of any other reason why I would get this error...

I'll include more of my OpenGL code that's relevant here, but I'm pretty sure it's correct (it's not all in the same class or method):

// Buffer data for VBO. The numbers must match the layout in the GLSL code.
#define position 0 
#define uv 1
#define color 2
#define normal 3 
#define tangent 4
#define bitangent 5
#define model 6      // 4x4 matrices take 4 positions
#define modelV 10
#define modelVP 14
#define num_buffers 18

GLuint VBO[num_buffers];
glGenBuffers(num_buffers, VBO);

for( int i = 0; i < ModelMatrices.size(); i++ )
{
    mvp.push_back( projection * view * ModelMatrices.at(i) );
    mv.push_back( view * ModelMatrices.at(i) );
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[model]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * ModelMatrices.size(), &ModelMatrices[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(model + i);
    glVertexAttribPointer(model + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(model + i, 1);
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[modelV]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mv.size(), &mv[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(modelV + i);
    glVertexAttribPointer(modelV + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(modelV + i, 1);
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[modelVP]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mvp.size(), &mvp[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(modelVP + i);
    glVertexAttribPointer(modelVP + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(modelVP + i, 1);
}
Drake
  • The world-to-camera and camera-to-perspective matrices *don't change* per-instance. So there's no need for them to be per-instance data. Plus, you shouldn't be using any world-space at all. You should just go from model-to-camera space. One attribute matrix and one uniform matrix. – Nicol Bolas Aug 31 '13 at 02:19
  • The world-to-camera matrix is Model*View, so it changes per instance since it's the Model matrix that changes each time, no? And I though it would be better to let the CPU compute the model-to-camera matrix, instead of the GPU. (model-to-camera space and world-to-camera space is the same thing, right?) – Drake Aug 31 '13 at 02:49
  • "*The world-to-camera matrix is Model*View*" No, that's the *model*-to-camera matrix; world-to-camera is just "View". And yes, it does change per-instance, but that's my point. You need *only one* matrix that changes per-instance. It should be the model-to-camera matrix because working in world-space is a terrible idea. – Nicol Bolas Aug 31 '13 at 02:57
  • Sorry but I still don't understand. If you mean the line "uniform mat4 V" in my code should be omitted, I need that View matrix alone to calculate some lighting effect in my fragment shader. If not, what line of code should be omitted/replaced? – Drake Aug 31 '13 at 03:20
  • I'm assuming that `models` is the model-to-world matrix (FYI: GLSL doesn't have an identifier size limit. Use better names for these). You shouldn't be doing *any computations* in world space. Anything you can do in world space can be done in either camera space or a space positionally relative to the camera but orientationally aligned with the world. So you don't need `models`. And you don't need `modelsVP`, because you have to compute `vertexPosition_cameraspace` *anyway*. So you may as well just use `P` to convert that into `gl_Position`. – Nicol Bolas Aug 31 '13 at 03:25
  • I highly recommend using quaternions. All your attributes can be compressed to `{ vec3 position; quat4 tangent; vec3 texcoord; vec4 color; vec3 modelPosition; quat4 modelOrientation; }`, which fits in 6 attribute locations. – Yakov Galka Feb 26 '17 at 08:51

2 Answers


OpenGL implementations are only required to support a minimum of 16 4-component vertex attributes, so attribute indices 16 and above are not guaranteed to work everywhere; query GL_MAX_VERTEX_ATTRIBS for the actual limit.

Each of your mat4 vertex attributes consumes 4 consecutive 4-component attribute locations, so modelsVP at location 14 occupies locations 14 through 17. On an implementation that supports only the minimum of 16 attributes (indices 0 through 15), that runs out of range, which is exactly what the error about index 16 is telling you.
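For reference, here is a minimal sketch of how you could check the actual limit at runtime (assuming a current OpenGL context; glGetIntegerv and GL_MAX_VERTEX_ATTRIBS are standard):

// Ask the implementation how many 4-component vertex attribute slots exist.
// The OpenGL spec guarantees this value is at least 16.
GLint maxVertexAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxVertexAttribs);

// A mat4 attribute declared at location L occupies L, L+1, L+2, and L+3,
// so L + 4 must not exceed maxVertexAttribs.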

Andon M. Coleman

You are using too many vertex attributes. Here's how to reduce the number of attributes without changing anything much about your code (and any functional changes are improvements). The following assumes that models is the "model-to-world" matrix, modelsV is the "model-to-camera" matrix, and that modelsVP is the "model-to-projection" matrix:

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal_modelspace;
layout(location = 6) in mat4 modelsV;

// Output data; will be interpolated for each fragment.
out vec3 newColor;

//The fragment shader should work in *camera* space, not world space.
out vec4 Position_cameraspace;

out vec3 Normal_cameraspace;
//out vec3 EyeDirection_cameraspace; Can be computed from Position_cameraspace in the FS.

// Values that stay constant for the whole mesh.
uniform mat4 P;

void main()
{
  Position_cameraspace = modelsV * vec4(vertexPosition_modelspace, 1.0);

  gl_Position = P * Position_cameraspace;

  Normal_cameraspace = ( modelsV * vec4(vertexNormal_modelspace,0)).xyz;

  newColor = vertexColor;
}

See? Isn't that much simpler? Fewer uniforms in the vertex shader, fewer outputs to the fragment shader, fewer math computations, and fewer vertex attributes.

All you need to do is change your fragment shader to use the camera-space position, rather than the world-space position. Which should be a reasonably easy change.
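For example, here is a minimal sketch of what that fragment shader change could look like (hypothetical; your actual lighting math, num_lights, and Lights are omitted and would go where the comment indicates):

#version 330 core

in vec3 newColor;
in vec4 Position_cameraspace;
in vec3 Normal_cameraspace;

out vec4 fragColor;

void main()
{
    // In camera space the camera sits at the origin, so the eye direction
    // can be recovered from the interpolated position instead of being
    // passed in as a separate vertex-shader output.
    vec3 EyeDirection_cameraspace = -Position_cameraspace.xyz;

    // ... do your lighting computations in camera space here ...
    fragColor = vec4(newColor, 1.0);
}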

Nicol Bolas
  • I can't thank you enough for taking the time to explain all this. My code is now more efficient and easier to read, plus I have a little more understanding of matrix spaces. Though I still use world-space in one of my shader programs that deals with 2D images since it doesn't use a view or projection matrix. – Drake Sep 01 '13 at 23:18