
I've already asked a similar but somewhat unclear question here; this time I will be very specific and to the point.

Suppose I have an actor who grabs a power-up. He starts to glow using a bloom shader, and after 10 seconds returns to normal with the default shader attached again. The question basically boils down to:

How to use different shaders on the same model at runtime?

Consider following very simple example:

Default shader:

attribute vec4 Position;
uniform mat4 ModelViewProjMatrix;

void main(void)
{
    gl_Position = ModelViewProjMatrix * Position;
}

Render code inside RendererGLES20 will be:

void RendererGLES20::render(Model * model)
{
    glUniformMatrix4fv(mvpUniform, 1, 0, &mvpMatrix);
    GLuint positionSlot = glGetAttribLocation(_program, "Position");
    glEnableVertexAttribArray(positionSlot);

    // Interleaved data, but for now we are ONLY using the positions, ignoring texture, normals and colours.
    const GLvoid* pCoords = &(model->vertexArray[0].Position[0]);
    glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, pCoords);

    glDrawArrays(GL_TRIANGLES, 0, model->vertexCount);

    glDisableVertexAttribArray(positionSlot);
}

Simple enough! Now imagine that the actor got some power-up and the following crazy shader is applied:

Crazy Shader:

attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec2 Texture;
attribute vec4 Normal;
attribute vec2 tempAttrib0;
attribute vec2 tempAttrib1;

// A bunch of varying but we don't need to worry about these for now                                           
varying vec4 .........;
varying .........;

uniform mat4 MVPMatrix;
uniform vec2 BloomAmount;
uniform vec2 BloomQuality;
uniform vec2 BloomSize;
uniform vec2 RippleSize;
uniform vec2 RippleAmount;
uniform vec2 RippleLocation;
uniform vec2 deltaTime;
uniform vec2 RippleMaxIterations;

void main(void)
{
    // Some crazy voodoo source code here...
    // .........
    gl_Position = ..............;
}

As you can clearly see, in order to attach this shader to the model I would need to modify the actual renderer source code to following:

void RendererGLES20::render(Model * model)
{
    glUniformMatrix4fv(mvpUniform, 1, 0, ....);
    glUniform2fv(bloomAmountUniform, 1, ....);
    glUniform2fv(bloomQualityUniform, 1, ....);
    glUniform2fv(bloomSizeUniform, 1, ....);
    glUniform2fv(rippleSizeUniform, 1, ....);
    glUniform2fv(rippleAmountUniform, 1, ....);
    glUniform2fv(rippleLocationUniform, 1, ....);
    glUniform2fv(rippleMaxIterationsUniform, 1, ....);
    glUniform2fv(deltaTimeUniform, 1, ....);

    GLuint positionSlot = glGetAttribLocation(_program, "Position");
    GLuint sourceColorSlot = glGetAttribLocation(_program, "SourceColor");
    GLuint textureSlot = glGetAttribLocation(_program, "Texture");
    GLuint normalSlot = glGetAttribLocation(_program, "Normal");
    GLuint tempAttrib0Slot = glGetAttribLocation(_program, "tempAttrib0");
    GLuint tempAttrib1Slot = glGetAttribLocation(_program, "tempAttrib1");

    glEnableVertexAttribArray(positionSlot);
    glEnableVertexAttribArray(sourceColorSlot);
    glEnableVertexAttribArray(textureSlot);
    glEnableVertexAttribArray(normalSlot);
    glEnableVertexAttribArray(tempAttrib0Slot);
    glEnableVertexAttribArray(tempAttrib1Slot);

    // interleaved data
    const GLvoid* pCoords = &(model->vertexArray[0].Position[0]);
    const GLvoid* sCoords = &(model->vertexArray[0].SourceColor[0]);
    const GLvoid* tCoords = &(model->vertexArray[0].Texture[0]);
    const GLvoid* nCoords = &(model->vertexArray[0].Normal[0]);
    const GLvoid* t0Coords = &(model->vertexArray[0].TempAttrib0[0]);
    const GLvoid* t1Coords = &(model->vertexArray[0].TempAttrib1[0]);

    glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, stride, pCoords);
    glVertexAttribPointer(sourceColorSlot, 4, GL_FLOAT, GL_FALSE, stride, sCoords);
    glVertexAttribPointer(textureSlot, 2, GL_FLOAT, GL_FALSE, stride, tCoords);
    glVertexAttribPointer(normalSlot, 4, GL_FLOAT, GL_FALSE, stride, nCoords);
    glVertexAttribPointer(tempAttrib0Slot, 3, GL_FLOAT, GL_FALSE, stride, t0Coords);
    glVertexAttribPointer(tempAttrib1Slot, 2, GL_FLOAT, GL_FALSE, stride, t1Coords);

    glDrawArrays(GL_TRIANGLES, 0, model->vertexCount);

    glDisableVertexAttribArray(positionSlot);
    glDisableVertexAttribArray(sourceColorSlot);
    glDisableVertexAttribArray(textureSlot);
    glDisableVertexAttribArray(normalSlot);
    glDisableVertexAttribArray(tempAttrib0Slot);
    glDisableVertexAttribArray(tempAttrib1Slot);
}

You can see how vastly different code you need to write in order to attach a different shader. Now what if I want to re-attach the default shader? (This attaching and detaching of shaders has to happen at run-time, e.g. when the actor collects a power-up.)

Any ideas how I can efficiently and easily implement this to allow a model to change shaders at run-time? I am just looking for a nice implementation/idea. How would you handle the above problem?

fakhir

2 Answers


You could call glUseProgram(program) with the intended shader program before rendering your object. You probably want to use the _program variable that you already have.

You can then change what variables (uniforms/arrays) you set based on which shader you're using.

I'm not sure about "attaching and detaching shaders", but to answer your efficiency question, most people tend to group their "models" based on their shader, to minimize the calls to glUseProgram(). This also means you'll only have to set uniforms like bloomQualityUniform once per frame, instead of once per model that uses that shader.
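The grouping idea above can be sketched without any GL calls: sort the frame's draw list by program id, then rebind only when the program changes. This is a minimal sketch; `DrawItem`, `sortByProgram` and `programSwitches` are hypothetical names used for illustration, not part of any real API.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-object draw record: which GL program the object needs.
// (unsigned int stands in for the id returned by glCreateProgram.)
struct DrawItem {
    unsigned int program;   // shader program id
    int modelId;            // whatever identifies the model
};

// Sort the frame's draw list by program so each shader is bound as few
// times as possible.
void sortByProgram(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.program < b.program;
              });
}

// Count how many glUseProgram calls a draw list would cost: we only
// rebind when the required program differs from the current one.
int programSwitches(const std::vector<DrawItem>& items) {
    int switches = 0;
    unsigned int current = 0;           // 0 = no program bound yet
    for (const DrawItem& it : items) {
        if (it.program != current) {    // here you would call glUseProgram(it.program)
            ++switches;
            current = it.program;
        }
    }
    return switches;
}
```

With an interleaved list like {crazy, default, crazy, default} this costs four binds per frame; after sorting it costs two, and per-shader uniforms such as bloomQualityUniform need to be set only once per group.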

Edit:

Here's an example (based on yours) which would allow you to choose the shader at runtime using an enum:

enum MyShaderEnum { DEFAULT, CRAZY };

void RendererGLES20::render(Model * model, MyShaderEnum shaderType)
{
    if (shaderType == DEFAULT)
    {
        glUseProgram(defaultShaderProgram);
        glUniformMatrix4fv(mvpUniform, 1, 0, &mvpMatrix);
        GLuint positionSlot = glGetAttribLocation(defaultShaderProgram, "Position");
        glEnableVertexAttribArray(positionSlot);

        // Interleaved data, but for now we are ONLY using the positions, ignoring texture, normals and colours.
        const GLvoid* pCoords = &(model->vertexArray[0].Position[0]);
        glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, pCoords);

        glDrawArrays(GL_TRIANGLES, 0, model->vertexCount);

        glDisableVertexAttribArray(positionSlot);
    }
    else if(shaderType == CRAZY)
    {
        glUseProgram(crazyShaderProgram);
        glUniformMatrix4fv(mvpUniform, 1, 0, ....);
        glUniform2fv(bloomAmountUniform, 1, ....);
        glUniform2fv(bloomQualityUniform, 1, ....);
        glUniform2fv(bloomSizeUniform, 1, ....);
        glUniform2fv(rippleSizeUniform, 1, ....);
        glUniform2fv(rippleAmountUniform, 1, ....);
        glUniform2fv(rippleLocationUniform, 1, ....);
        glUniform2fv(rippleMaxIterationsUniform, 1, ....);
        glUniform2fv(deltaTimeUniform, 1, ....);

        GLuint positionSlot = glGetAttribLocation(crazyShaderProgram, "Position");
        GLuint sourceColorSlot = glGetAttribLocation(crazyShaderProgram, "SourceColor");
        GLuint textureSlot = glGetAttribLocation(crazyShaderProgram, "Texture");
        GLuint normalSlot = glGetAttribLocation(crazyShaderProgram, "Normal");
        GLuint tempAttrib0Slot = glGetAttribLocation(crazyShaderProgram, "tempAttrib0");
        GLuint tempAttrib1Slot = glGetAttribLocation(crazyShaderProgram, "tempAttrib1");

        glEnableVertexAttribArray(positionSlot);
        glEnableVertexAttribArray(sourceColorSlot);
        glEnableVertexAttribArray(textureSlot);
        glEnableVertexAttribArray(normalSlot);
        glEnableVertexAttribArray(tempAttrib0Slot);
        glEnableVertexAttribArray(tempAttrib1Slot);

        // interleaved data
        const GLvoid* pCoords = &(model->vertexArray[0].Position[0]);
        const GLvoid* sCoords = &(model->vertexArray[0].SourceColor[0]);
        const GLvoid* tCoords = &(model->vertexArray[0].Texture[0]);
        const GLvoid* nCoords = &(model->vertexArray[0].Normal[0]);
        const GLvoid* t0Coords = &(model->vertexArray[0].TempAttrib0[0]);
        const GLvoid* t1Coords = &(model->vertexArray[0].TempAttrib1[0]);

        glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, stride, pCoords);
        glVertexAttribPointer(sourceColorSlot, 4, GL_FLOAT, GL_FALSE, stride, sCoords);
        glVertexAttribPointer(textureSlot, 2, GL_FLOAT, GL_FALSE, stride, tCoords);
        glVertexAttribPointer(normalSlot, 4, GL_FLOAT, GL_FALSE, stride, nCoords);
        glVertexAttribPointer(tempAttrib0Slot, 3, GL_FLOAT, GL_FALSE, stride, t0Coords);
        glVertexAttribPointer(tempAttrib1Slot, 2, GL_FLOAT, GL_FALSE, stride, t1Coords);

        glDrawArrays(GL_TRIANGLES, 0, model->vertexCount);

        glDisableVertexAttribArray(positionSlot);
        glDisableVertexAttribArray(sourceColorSlot);
        glDisableVertexAttribArray(textureSlot);
        glDisableVertexAttribArray(normalSlot);
        glDisableVertexAttribArray(tempAttrib0Slot);
        glDisableVertexAttribArray(tempAttrib1Slot);
    }
}
Tom
  • The answer is a bit vague (possibly incorrect). Can you be a little more specific? :) – fakhir Jul 04 '13 at 07:33
  • I'm no GLES pro, but i assume that somewhere in your application you set which shader to use. `glUseProgram()` is what does this. If you call `glUseProgram()` and specify a different program, then you'll get a different 'effect'. If there is something stopping you from doing this, please add it to the question – Tom Jul 04 '13 at 07:36
  • Well, the above code is just algorithm, not 'exact' code. It is assumed that shader is set using glUseProgram() from somewhere. It can be inside the render function or from outside world maybe in Geometry class... From wherever it is called, it is guaranteed that the shader is properly bound before calling the render code. – fakhir Jul 04 '13 at 07:47
  • I'm unsure what your question is, if this doesn't answer it. I'll come up with some example code for you when I get time. – Tom Jul 04 '13 at 07:50

Before we go into details, first let's get some mental roadblocks out of the way: OpenGL is not a scene graph. You don't feed it a scene which it then renders as whole models. OpenGL is, let's be honest, a glorified pencil for drawing on paper provided by the OS.

You should really think about OpenGL being some kind of program controlled drawing tool, because that's what it is. Before you read on, I suggest you open up your favourite image manipulation program (Photoshop, GIMP, Krita, etc.), and try to draw a nice picture. Maybe you copy some layer, apply some filters on it, overlay it over the original layer to get the desired effect and so on.

That's the way you should think about programming OpenGL, especially when it comes to doing shader effects.

Now let's break this down:

Suppose I have an actor which grabs a power up.

For this you need a model of the actor and some animation. This is to be done by an artist with a tool like Blender.

He starts to glow using bloom shader

A glow is normally just an additional pass that gets overlaid over the original model. Get that Photoshop mindset back: first you draw your model with an illumination shader. Let's assume you have a Model class and a PhongTechnique class, derived from the Technique class, which provides an interface for being fed a model to draw:

class Model;
class ModelState;

class Technique {
public:
    virtual void drawModel(Model const *model, ModelState const *state /*, ...*/) = 0;
};

/* technique that renders models using a Phong illumination model */
class PhongTechnique : public Technique {
public:
    void drawModel(Model const *model, ModelState const *state /*, ...*/) override;
};

And then for the Bloom effect we have another technique class

/* technique that renders models using a bloom pass */
class BloomTechnique : public Technique {
public:
    void drawModel(Model const *model, ModelState const *state /*, ...*/) override;
};

and after 10 seconds back to normal attaching the default shader again.

So in your game animation loop you will come across your model, which has some animation data attached.

class AnimationElement {
public:
    float timeStart();
    float timeStop();
    float X(float T);
};

class Model {
public:
    std::vector<AnimationElement> animation_elements;
    ModelState animate(float T);
};

and in the model state we have some flags indicating which effects to use. So in your overall drawing function:

drawscene(float T)
{
    PhongTechnique phong;
    BloomTechnique bloom;

    for (auto &m : models) {
        ModelState mstate = m.animate(T);

        if(mstate.phong_pass)
            phong.drawModel(m, mstate, ...);

        if(mstate.bloom_pass)
            bloom.drawModel(m, mstate, ...);

    }
}

Now within the different Technique class implementations you switch to the right shader, set vertex attribute data, and so on, and render the model. Or to be exact: you'd be filling lists of drawing batches, which you later reorder a bit to optimize the drawing process.
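The dispatch described above can be sketched as follows. PhongTechnique and BloomTechnique are stubs here that record the pass order instead of binding real programs, and all names follow the answer's sketch rather than any real engine:

```cpp
#include <string>
#include <vector>

// Minimal stand-in for the answer's Model/ModelState: just the pass flags.
struct Model { bool phong_pass; bool bloom_pass; };

// Each Technique binds its own shader program and issues the draw.
// Stubbed to append to a log instead of calling glUseProgram/glDrawArrays.
class Technique {
public:
    virtual ~Technique() = default;
    virtual void drawModel(const Model& m, std::vector<std::string>& log) = 0;
};

class PhongTechnique : public Technique {
public:
    void drawModel(const Model&, std::vector<std::string>& log) override {
        log.push_back("phong");   // real code: glUseProgram(phongProgram); draw
    }
};

class BloomTechnique : public Technique {
public:
    void drawModel(const Model&, std::vector<std::string>& log) override {
        log.push_back("bloom");   // real code: glUseProgram(bloomProgram); draw
    }
};

// Per-frame dispatch: the flags on each model decide which passes run,
// so "attaching a shader" is just flipping a flag on the model state.
void drawScene(const std::vector<Model>& models, std::vector<std::string>& log) {
    PhongTechnique phong;
    BloomTechnique bloom;
    for (const Model& m : models) {
        if (m.phong_pass) phong.drawModel(m, log);
        if (m.bloom_pass) bloom.drawModel(m, log);  // glow = extra pass on top
    }
}
```

When the power-up expires after 10 seconds, the animation code simply clears `bloom_pass`, and the model falls back to the Phong pass alone with no renderer changes.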

If you want to look into a real game engine: Id Software released the full source code of their Doom3 and Doom3-BFG engines, the latter having a modern OpenGL-3 codepath.

datenwolf