I have made some full-screen renders using OpenGL ES 2.0 on Android devices.
In these renders I used a custom fragment shader that takes a uniform time parameter as part of the animation.
As the render went on, I experienced major image tearing, massive fps drops, and a pixelated result.
After playing around with values and trying to fix it, I found the problem to be the size of the time parameter: as the value got bigger and bigger, the result got worse.
Changing the float precision to highp in the fragment shader didn't fix it; it only delayed the problem, with the artifacts appearing at a later time than before, as you'd expect.
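For what it's worth, the same effect can be reproduced on the CPU with plain 32-bit floats: once the time value is large enough, the spacing between adjacent representable floats exceeds a single frame step, so per-frame increments effectively vanish. This is just a sketch of that precision argument (the `0.016f` frame step assumes ~60 fps); on phones, mediump is often only 16-bit, which is why the degradation shows up so much sooner there.

```java
public class FloatPrecisionDemo {
    /** Returns true if adding one ~60 fps frame step (0.016 s) still advances t. */
    static boolean stillAdvances(float t) {
        return t + 0.016f > t;
    }

    public static void main(String[] args) {
        // At small values, a frame step is easily representable.
        System.out.println(stillAdvances(5.0f));          // true

        // Past 2^23, consecutive floats are a whole 1.0 apart,
        // so adding 0.016 rounds back to the same value.
        System.out.println(stillAdvances(16_000_000f));   // false
    }
}
```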
I found a workaround by limiting the size of the parameter before it was sent to the shader, using the mod operator on it.
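In case it helps to see it concretely, this is roughly what I mean by wrapping on the CPU side before the upload. `PERIOD` is a placeholder value here; if it is chosen as an exact period of the shader's animation (e.g. 2π for anything built on `sin`/`cos` of the time), the jump from `PERIOD` back to 0 produces no visible seam.

```java
public class TimeWrap {
    // Hypothetical wrap length: pick a value the animation actually repeats on.
    static final float PERIOD = (float) (2.0 * Math.PI);

    /** Wrap elapsed seconds into [0, PERIOD) before passing it to glUniform1f. */
    static float wrappedTime(double elapsedSeconds) {
        // Do the mod in double on the CPU, then narrow to the float the shader gets.
        return (float) (elapsedSeconds % PERIOD);
    }

    public static void main(String[] args) {
        System.out.println(wrappedTime(3.0));           // small values pass through
        System.out.println(wrappedTime(1_000_000.25));  // large values stay bounded
    }
}
```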
On the other hand, when I copied the exact same shader code into a WebGL environment in a desktop browser to render the same thing that runs on my phone, there was no problem with the parameter size: no fps drop, no artifacts, nothing.
I can understand that the graphics card on a mobile device is weaker than what I have in my PC, and it is only natural to assume that my PC's graphics card can hold much larger values.
But my question is: what can I do to work around this problem with the parameter's size?
I would like my animation to go on forever*, and not be forced to loop around after 5 seconds.
Here is a link to the website with the animation: website link
*Not actually forever, but quite a long time, just like in the browser.