
I've written some shader code for my Android application. It has a time-dependent animation which works totally fine in the WebGL version; the shader code is below, and the full version can be found here

vec3 bip(vec2 uv, vec2 center)
{
    vec2 diff = center - uv;              //vector from the fragment to the center
    float r = length(diff);               //its length, i.e. the distance to the center
    float scale = mod(u_ElapsedTime, 2.); //ramps from 0 to 2, wrapping every 2 units of u_ElapsedTime
    float circle = smoothstep(scale, scale + cirleWidth, r)
                 * smoothstep(scale + cirleWidth, scale, r) * 4.;

    return vec3(circle);
}

The function's return value is used in gl_FragColor as the base for the color.
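
To make the ring math concrete: the ascending and descending smoothstep calls each span the band [scale, scale + width], so their product peaks at the band's midpoint (0.25, hence the *4. to bring the peak back to 1). A minimal pure-Java sketch of the same math (the names `smoothstep` and `ring` here are illustrative ports, not the author's code):

```java
public class RingDemo {
    // GLSL-style smoothstep: Hermite interpolation of x between edge0 and edge1
    static float smoothstep(float edge0, float edge1, float x) {
        float t = Math.max(0f, Math.min(1f, (x - edge0) / (edge1 - edge0)));
        return t * t * (3f - 2f * t);
    }

    // Brightness of the ring at distance r, for a ring at 'scale' with width 'w'
    static float ring(float r, float scale, float w) {
        return smoothstep(scale, scale + w, r)
             * smoothstep(scale + w, scale, r) * 4f;
    }

    public static void main(String[] args) {
        float s = 1.0f, w = 0.1f;
        System.out.println(ring(s + w / 2f, s, w)); // midpoint of the band: peak, ~1.0
        System.out.println(ring(s, s, w));          // inner edge: 0.0
    }
}
```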

u_ElapsedTime is sent to the shader via a uniform:

glUniform1f(uElapsedTime,elapsedTime);

The time value is sent to the shader from onDrawFrame:

public void onDrawFrame(GL10 gl) {
    glClear(GL_COLOR_BUFFER_BIT);
    elapsedTime = (SystemClock.currentThreadTimeMillis()-startTime)/100f;
    //Log.d("KOS","time " + elapsedTime);
    scannerProgram.useProgram(); //make the shader program active
    scannerProgram.setUniforms(resolution,elapsedTime,rotate); //send uniforms to shader
    scannerSurface.bindData(scannerProgram); //get attribute location
    scannerSurface.draw(); //draw vertices with given attributes
}

So everything looks totally fine. Nevertheless, after some amount of time there are lags, and the frame rate is lower than at the beginning. In the end it can drop to only one or two frames per cycle of that function. At the same time, OpenGL itself doesn't seem to lag: I can, for example, rotate the picture and see no lag at all.

What could be the reason for these lags?

Update: the code of bindData:

public void bindData(ScannerShaderProgram scannerProgram) {
    //getting the location of each attribute for the shader program
    vertexArray.setVertexAttribPointer(
            0,
            scannerProgram.getPositionAttributeLocation(),
            POSITION_COMPONENT_COUNT,
            0
    );
}

1 Answer


Sounds to me like precision issues. Try taking this line from your shader:

float scale = mod(u_ElapsedTime,2.);

and performing it on the CPU instead, e.g.

elapsedTime = ((SystemClock.currentThreadTimeMillis()-startTime)%200)/100f;
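The key point is that the integer `%` wraps the raw millisecond count *before* the float conversion, so the value the shader receives always stays in [0, 2), where float precision is plentiful. A minimal sketch of the idea in plain Java (the start time is simulated here; the real code would keep using SystemClock):

```java
public class WrapDemo {
    // Wrap elapsed milliseconds with an integer modulo, then convert to float.
    // The result is always in [0, 2), so it never loses precision, no matter
    // how long the app has been running.
    static float wrappedElapsed(long nowMs, long startMs) {
        return ((nowMs - startMs) % 200) / 100f;
    }

    public static void main(String[] args) {
        // Even after ~28 hours' worth of milliseconds the value is exact:
        System.out.println(wrappedElapsed(100_000_150L, 0L)); // 1.5
    }
}
```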
Columbo
  • ok, it will surely solve the issue, but could you tell me what is wrong with precision here? Should I change the precision to highp to compute it correctly in the shader? – константин украинский Aug 22 '16 at 15:01
  • highp might help, but bear in mind that if you are doing the calculation in the fragment shader, not all implementations are required to support highp. Even if the shader is using single-precision floats, your approach will lose precision given enough time. Single-precision floating point only gives about 7 significant figures, so after 10000 seconds or so the calculation might be noticeably inaccurate. That's why I suggested using an integer modulo operation. – Columbo Aug 23 '16 at 21:54
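The loss Columbo describes is easy to see on the CPU side as well: the spacing between adjacent representable floats (the ulp) grows with the magnitude of the value, so a forever-growing timestamp steps ever more coarsely through each mod cycle. A pure-Java illustration (the magnitudes are chosen to match an elapsed time in units of ms/100):

```java
public class TimePrecisionDemo {
    public static void main(String[] args) {
        // Shortly after startup the elapsed-time float is small and fine-grained:
        float early = 100f;                  // t after ~10 seconds at ms/100 scaling
        System.out.println(Math.ulp(early)); // ~7.6e-6: mod(t, 2.) steps smoothly

        // After a long run, the same float type has far coarser resolution:
        float late = 1_000_000f;             // t after ~28 hours at ms/100 scaling
        System.out.println(Math.ulp(late));  // 0.0625: only 32 steps per 2-unit cycle
    }
}
```

At that point mod(late, 2.) can take only 2 / 0.0625 = 32 distinct values, which matches the question's observation of the animation collapsing to a few visible frames per cycle.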