Is anybody using OpenGL ES 2.0 shaders (GLSL) successfully for audio synthesis?

I already use vDSP to accelerate audio in my iOS app; it provides a simple vector instruction set callable from C code. The main problem with vDSP is that you end up writing what amounts to vector-oriented assembly language, because the main per-sample loop gets pushed down into each primitive operation (vector add, vector multiply), as in the sketch below. Compiling expressions into such sequences is the essence of what shader languages automate for you. OpenCL is not publicly available on iOS. It is also interesting that GLSL is compiled at runtime, which means that if most of the sound engine could live in GLSL, users could make non-trivial patch contributions.
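
To illustrate what I mean by "vector-oriented assembly" (a minimal sketch; `mix_buffers`, the buffer names, and the 256-frame size are my own example, not actual app code): computing `out = (a + b) * gain` takes one vDSP call per primitive operation, with an explicit intermediate buffer, instead of a single fused per-sample loop:

```c
#include <Accelerate/Accelerate.h>

#define FRAMES 256  /* hypothetical audio buffer length */

/* out[i] = (a[i] + b[i]) * gain, one vDSP primitive at a time */
static void mix_buffers(const float *a, const float *b,
                        float gain, float *out)
{
    float tmp[FRAMES];                          /* explicit intermediate */
    vDSP_vadd(a, 1, b, 1, tmp, 1, FRAMES);      /* tmp = a + b     */
    vDSP_vsmul(tmp, 1, &gain, out, 1, FRAMES);  /* out = tmp * gain */
}
```

A shader compiler would fuse the whole `(a + b) * gain` expression into a single kernel over all samples, which is exactly the automation I'm after.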

Rob

1 Answer

Although iOS GPU shaders can be relatively fast at raw computation, the paths for loading data onto and recovering data (textures, processed pixels, etc.) from the GPU are slow enough to more than offset any current computational advantage of using GLSL shaders.
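
For example (a minimal sketch; it assumes a GLES 2.0 context and framebuffer are already set up, which is omitted here, and `WIDTH`/`HEIGHT` are placeholders): recovering shader output on the CPU means a `glReadPixels` round trip, which stalls the whole pipeline:

```c
#include <OpenGLES/ES2/gl.h>

#define WIDTH  256   /* e.g. one audio buffer packed into one row */
#define HEIGHT 1

static GLubyte pixels[WIDTH * HEIGHT * 4];   /* RGBA8 destination */

static void read_back_samples(void)
{
    /* ... draw a full-screen quad with the synthesis fragment shader ... */

    /* glReadPixels cannot return until the GPU pipeline has drained,
       so the CPU blocks here -- this is the slow unload path. */
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```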

For real-time synthesis, the latency of the GPU pixel unload path is much larger than the best possible audio response latency you can get by feeding RemoteIO with CPU-only synthesis; for example, display frame rates (to which the GPU pipeline is locked) are slower than optimal RemoteIO callback rates. There's also just not enough parallelism to exploit within these short audio buffers.
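
To put rough numbers on that (my figures, for illustration): a 256-sample buffer at 44.1 kHz spans 256 / 44100 ≈ 5.8 ms, while one frame at a 60 Hz display refresh is 1 / 60 ≈ 16.7 ms, so even a single frame of GPU pipelining already exceeds the entire audio buffer period.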

hotpaw2
  • I'm using the current vDSP code for wavetable synthesis, which actually has a good amount of parallelism in theory: 4 fingers down on the glass, 3 voices per finger for chorus detune, and 256 samples per audio buffer. As far as transfer onto and off of the card goes, all of the wavetable data would be pre-loaded; the main issues are at what rate I can reliably invoke the kernel with new parameters and pull 256 samples back to be rendered as audio. – Rob Jun 21 '12 at 19:00
  • i.e., a control rate of about 100 Hz (setting new points for the voice splines), while reliably getting back enough samples to cover that time period. – Rob Jun 21 '12 at 19:06
  • In this, the distant future, we now have `CVOpenGLESTextureCache`, which can, with some caveats, give the GPU and CPU access to the same buffers, eliminating the expensive back-and-forth of a pure-GL approach (see the sketch below). However, I can't comment on latency. As the name suggests, it's primarily for things like real-time video processing, which tends to occur in larger steps. – Tommy Jan 30 '15 at 18:30
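
A minimal sketch of the `CVOpenGLESTextureCache` route Tommy describes (error handling, `EAGLContext` creation, and drawing are omitted; a real pixel buffer must be created with attributes including `kCVPixelBufferIOSurfacePropertiesKey` so it can be shared, and `WIDTH`/`HEIGHT` are placeholders):

```c
#include <CoreVideo/CoreVideo.h>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>   /* for GL_BGRA on iOS */

#define WIDTH  256   /* placeholder dimensions */
#define HEIGHT 1

static void map_shared_buffer(CVEAGLContext context)
{
    CVOpenGLESTextureCacheRef cache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                 context, NULL, &cache);

    /* CPU-visible backing store; real code should pass an attributes
       dictionary with kCVPixelBufferIOSurfacePropertiesKey, not NULL. */
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorDefault, WIDTH, HEIGHT,
                        kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);

    /* Wrap the same memory as a GL texture: no glTexImage2D upload and
       no glReadPixels download -- the back-and-forth being avoided. */
    CVOpenGLESTextureRef texture;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_RGBA, WIDTH, HEIGHT,
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

    /* ... render into the texture, then read the bytes on the CPU ... */
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *bytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    (void)bytes;  /* interpret the pixel data as audio samples here */
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```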