10

Does anyone know of a linear algebra library for iOS that uses OpenGL ES 2.0 under the covers?

Specifically, I am looking for a way to do matrix multiplication on arbitrary-sized matrices (e.g., much larger than 4x4, more like 5,000 x 100,000) using the GPUs on iOS devices.

Ian Ollmann
cklin
  • I believe opengl uses CPU to do simple matrix operations as the matrices are only 9*9. The graphics card shader handles the bigger stuff. – Jesus Ramos Jan 11 '13 at 23:11
  • 3
    @JesusRamos Yes, but if you treated a frame buffer as a giant matrix of values (instead of as a set of colors), you could write shaders that would write the multiplication result into a new frame buffer. cklin is asking if anybody has already coded a library to do that. – benzado Jan 11 '13 at 23:14
  • 1
    AFAIK iOS does not support floating point textures and the limited precision might cause some trouble for implementing asked functionality on GPU. – harism Jan 11 '13 at 23:27
  • @benzado I know that but that would mean reading the framebuffer information from inside OpenGL E.S, which I'm not sure can be done (easily at least). – Jesus Ramos Jan 11 '13 at 23:47
  • @harism: It seems there are a few threads on SO ([here](http://stackoverflow.com/questions/3850569/render-to-floating-point-texture-under-ios), [here](http://stackoverflow.com/questions/13976091/floating-point-textures-in-opengl-es-2-0-on-ios-without-clamping-them-to-0-1)) that claim floating-point textures are supported with the GL_OES_TEXTURE_FLOAT extension since iPad 2/iPhone 4S. Is this not the case in your experience? – cklin Jan 11 '13 at 23:48
  • @cklin if that's the case I stand corrected. – harism Jan 11 '13 at 23:49
  • 1
    @JesusRamos glReadPixel is one way to transfer data from GPU memory to main memory. There's another way to do it by specifying a cached texture as a render target. See my more specific question [here](http://stackoverflow.com/questions/14288391/is-it-possible-to-read-floats-out-from-opengl-es-framebuffer-via-the-ios-texture). – cklin Jan 11 '13 at 23:51

3 Answers

0

Is there a specific reason you're asking for something that "uses OpenGL ES 2.0 under the covers"? Or do you just want a fast, hardware-optimized linear algebra library such as BLAS, which is built into iOS?

Rob Napier
  • 2
    Well, the Accelerate framework is fine, but (1) I'd like to shift work from the CPU to the GPU, and (2) I'd like to see if a GPU implementation could be faster. – cklin Jan 12 '13 at 06:34
0

MetalPerformanceShaders.framework provides some tuned BLAS-like functions. It is not GLES; it is Metal, and it runs on the GPU. See MetalPerformanceShaders/MPSMatrixMultiplication.h.
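A minimal sketch of driving `MPSMatrixMultiplication` from Swift. The 2×3 · 3×2 sizes and values here are made up for illustration; for large matrices like the 5,000 × 100,000 in the question you would fill the `MTLBuffer`s the same way, just bigger:

```swift
import Metal
import MetalPerformanceShaders

// C (2x2) = A (2x3) x B (3x2), all stored row-major as Float32.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let a: [Float] = [1, 2, 3,
                  4, 5, 6]
let b: [Float] = [ 7,  8,
                   9, 10,
                  11, 12]

// Wrap a row-major Float array in an MPSMatrix backed by a device buffer.
func makeMatrix(_ values: [Float], rows: Int, columns: Int) -> MPSMatrix {
    let rowBytes = columns * MemoryLayout<Float>.stride
    let buffer = device.makeBuffer(bytes: values,
                                   length: rows * rowBytes,
                                   options: [])!
    let desc = MPSMatrixDescriptor(rows: rows, columns: columns,
                                   rowBytes: rowBytes, dataType: .float32)
    return MPSMatrix(buffer: buffer, descriptor: desc)
}

let matA = makeMatrix(a, rows: 2, columns: 3)
let matB = makeMatrix(b, rows: 3, columns: 2)
let matC = makeMatrix([Float](repeating: 0, count: 4), rows: 2, columns: 2)

// C = alpha * A * B + beta * C
let multiply = MPSMatrixMultiplication(device: device,
                                       transposeLeft: false,
                                       transposeRight: false,
                                       resultRows: 2,
                                       resultColumns: 2,
                                       interiorColumns: 3,
                                       alpha: 1.0,
                                       beta: 0.0)

let commandBuffer = queue.makeCommandBuffer()!
multiply.encode(commandBuffer: commandBuffer,
                leftMatrix: matA, rightMatrix: matB, resultMatrix: matC)
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Read the result back out of GPU memory.
let result = matC.data.contents().bindMemory(to: Float.self, capacity: 4)
```

Requires iOS 10+ and a Metal-capable device; `MTLCreateSystemDefaultDevice()` returns nil on the simulator for older toolchains, so the force-unwraps are for brevity only.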

Ian Ollmann
-1

OpenGL ES on iOS is probably the wrong way to go. If you're going to use the GPU, Metal is the better route on iOS.

Metal

You could use Apple's support for Metal compute shaders; I've written high-performance code for my PhD with them. An early experiment of mine, calculating some fractals with Metal, might give you some ideas to get started.

Ultimately, this question is too broad. What do you intend to use the library for, and how do you intend to use it? Is it a one-off multiplication? Have you tested with current libraries and found the performance too slow? If so, by how much?

In general, you can run educational or purely informational experiments on the performance of algorithm X on CPU vs. GPU vs. specialized hardware, but most often you run up against Amdahl's law, and your code ends up competing against code written by a team of experts in the field.

Accelerate

You can also look into the Accelerate framework, which offers BLAS.

Apple, according to the WWDC 2014 talk *What's New in the Accelerate Framework*, has hand-tuned its linear algebra libraries for its current-generation hardware. They aren't just fast, but energy-efficient. There are newer talks as well.
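For comparison, the CPU path through Accelerate's BLAS is essentially one call. A small row-major example (the 2×3 · 3×2 sizes are made up for illustration):

```swift
import Accelerate

// C (2x2) = A (2x3) x B (3x2), all row-major Float32.
let a: [Float] = [1, 2, 3,
                  4, 5, 6]
let b: [Float] = [ 7,  8,
                   9, 10,
                  11, 12]
var c = [Float](repeating: 0, count: 4)

// cblas_sgemm(order, transA, transB, M, N, K,
//             alpha, A, lda, B, ldb, beta, C, ldc)
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            2, 2, 3,        // M, N, K
            1.0, a, 3,      // alpha, A, lda
            b, 2,           // B, ldb
            0.0, &c, 2)     // beta, C, ldc

// c is now [58.0, 64.0, 139.0, 154.0]
```

For matrices the size mentioned in the question, benchmarking this against a GPU path would tell you quickly whether the transfer overhead eats the speedup.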

Cameron Lowell Palmer