
I'm not necessarily looking for an immediate practical solution, but was curious whether the state of the art has been pushed to its practical limits.

I understand that low-level GPU programming is a black art only a few have mastered; it's all way over my head, but I was curious whether a mobile device's GPU can be used for general processing tasks like this. (I recall hearing that exporting 3D game consoles to some countries is forbidden because they could be employed for supercomputer-class computing.)

It would be great if there were a Flash native extension that could quickly compress dynamically generated images to GPU texture-map formats. I'm guessing some trickery, like using the GPU itself in the compression process, may be needed to get it done in acceptable time. My particular vector-art project could tolerate simple color-palette-reduction techniques like PVRTC4, which might be doable in AS3.

I'm working on a project that dynamically generates large background texture maps at runtime from Flash vectors, using Starling (the framework that allows mortals to utilize the GPU via Stage3D). It would be a big plus to be able to generate compressed textures that occupy less GPU memory; at the moment, the only solution for minimizing texture-map memory seems to be pre-compressed external images, such as an Adobe ATF container. Although Flash may already be beyond its projected lifespan, I can still get distribution files 10X smaller using Flash vectors than with any of the alpha-channel bitmap compression methods.
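For context, this is roughly the uncompressed path I use today (a minimal sketch; `vectorArt` is a placeholder for whatever Sprite holds the vector drawing, and the exact `Texture.fromBitmapData` signature varies between Starling versions):

```actionscript
// Rasterize a vector Sprite on the CPU, then upload it to the GPU via
// Starling. Every pixel lands in GPU memory as uncompressed 32-bit BGRA.
import flash.display.BitmapData;
import flash.display.Sprite;
import starling.textures.Texture;

function rasterizeToTexture(vectorArt:Sprite):Texture
{
    var bmd:BitmapData = new BitmapData(
        Math.ceil(vectorArt.width), Math.ceil(vectorArt.height), true, 0x0);
    bmd.draw(vectorArt);                 // CPU-side vector rasterization
    return Texture.fromBitmapData(bmd);  // uploaded uncompressed
}
```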

Adobe has added the facility: "Flash Player 11.4 and AIR 3.4 support runtime texture compression" (but it turns out to currently work only on desktop systems).

Mentioned as an aside in this article: http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d5b.html

To access the functionality: "Create the texture object by calling the Context3D.createTexture() method, passing flash.display3D.Context3DTextureFormat.COMPRESSED"
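If I'm reading the docs right, the desktop-only flow looks roughly like this (an untested sketch on my part; note that, as far as I know, Stage3D requires power-of-two texture dimensions):

```actionscript
// Desktop-only for now: let the runtime compress at upload time.
import flash.display.BitmapData;
import flash.display3D.Context3D;
import flash.display3D.Context3DTextureFormat;
import flash.display3D.textures.Texture;

function createRuntimeCompressedTexture(context:Context3D,
                                        source:BitmapData):Texture
{
    // COMPRESSED asks the runtime to pick the platform's format at upload.
    var tex:Texture = context.createTexture(
        source.width, source.height,
        Context3DTextureFormat.COMPRESSED, false);
    // Compression happens during this call; on mobile it currently fails,
    // per the Starling issue linked below.
    tex.uploadFromBitmapData(source);
    return tex;
}
```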

The subject was brought up here: stackoverflow.com/questions/18070813/how-to-implement-the-runtime-compression-of-texture

The maker of Starling worked on incorporating Adobe's runtime image compression facility, only to learn from Adobe that it doesn't presently work on mobile devices: https://github.com/PrimaryFeather/Starling-Framework/issues/153

I assume it would be a huge challenge for someone to create the code to compress to PVRTC, ETC1, or DXT5 format depending on the device, just to support the few of us generating bitmaps on the fly.
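To give a sense of scale, here is a toy DXT1-style block encoder in plain AS3 (my own illustration, not code from any of the linked projects). It uses the crudest possible endpoint selection (luma min/max), assumes the bitmap's dimensions are multiples of 4, ignores alpha (DXT5 adds a separately compressed alpha block on top of this), and would be far slower and lower quality than a real encoder. PVRTC and ETC1 are different beasts again.

```actionscript
// Toy DXT1 encoder: 4x4 blocks, two RGB565 endpoints, 2-bit indices.
// Illustration only -- real encoders search endpoints far more carefully.
import flash.display.BitmapData;
import flash.utils.ByteArray;
import flash.utils.Endian;

function encodeDXT1(src:BitmapData):ByteArray
{
    var out:ByteArray = new ByteArray();
    out.endian = Endian.LITTLE_ENDIAN;

    for (var by:int = 0; by < src.height; by += 4)
    for (var bx:int = 0; bx < src.width; bx += 4)
    {
        // Pick block endpoints: the brightest and darkest pixels by luma.
        var px:Vector.<uint> = new Vector.<uint>(16);
        var minI:int = 0, maxI:int = 0;
        var minL:Number = Number.MAX_VALUE, maxL:Number = -1;
        for (var i:int = 0; i < 16; i++)
        {
            var c:uint = src.getPixel(bx + (i & 3), by + (i >> 2));
            px[i] = c;
            var l:Number = 0.299 * ((c >> 16) & 0xFF)
                         + 0.587 * ((c >> 8) & 0xFF)
                         + 0.114 * (c & 0xFF);
            if (l < minL) { minL = l; minI = i; }
            if (l > maxL) { maxL = l; maxI = i; }
        }

        var c0:uint = to565(px[maxI]);
        var c1:uint = to565(px[minI]);
        if (c0 < c1) { var t:uint = c0; c0 = c1; c1 = t; } // force 4-color mode

        // Palette: the two endpoints plus two interpolated colors (2:1, 1:2).
        var pal:Vector.<uint> = new Vector.<uint>(4);
        pal[0] = from565(c0);
        pal[1] = from565(c1);
        pal[2] = lerpRGB(pal[0], pal[1], 1 / 3);
        pal[3] = lerpRGB(pal[0], pal[1], 2 / 3);

        // Pack each pixel's 2-bit palette index, pixel 0 in the lowest bits.
        var bits:uint = 0;
        for (i = 15; i >= 0; i--)
            bits = (bits << 2) | nearest(px[i], pal);

        out.writeShort(c0);
        out.writeShort(c1);
        out.writeUnsignedInt(bits);
    }
    return out;
}

function to565(c:uint):uint
{
    return (((c >> 16) & 0xF8) << 8) | (((c >> 8) & 0xFC) << 3) | ((c & 0xFF) >> 3);
}

function from565(c:uint):uint
{
    var r:uint = (c >> 11) & 0x1F, g:uint = (c >> 5) & 0x3F, b:uint = c & 0x1F;
    return ((r << 3 | r >> 2) << 16) | ((g << 2 | g >> 4) << 8) | (b << 3 | b >> 2);
}

function lerpRGB(a:uint, b:uint, t:Number):uint
{
    var r:int = ((a >> 16) & 0xFF) * (1 - t) + ((b >> 16) & 0xFF) * t;
    var g:int = ((a >> 8) & 0xFF) * (1 - t) + ((b >> 8) & 0xFF) * t;
    var bl:int = (a & 0xFF) * (1 - t) + (b & 0xFF) * t;
    return (r << 16) | (g << 8) | bl;
}

function nearest(c:uint, pal:Vector.<uint>):uint
{
    var best:uint = 0, bestD:Number = Number.MAX_VALUE;
    for (var j:int = 0; j < 4; j++)
    {
        var dr:Number = Number((c >> 16) & 0xFF) - Number((pal[j] >> 16) & 0xFF);
        var dg:Number = Number((c >> 8) & 0xFF) - Number((pal[j] >> 8) & 0xFF);
        var db:Number = Number(c & 0xFF) - Number(pal[j] & 0xFF);
        var d:Number = dr * dr + dg * dg + db * db;
        if (d < bestD) { bestD = d; best = j; }
    }
    return best;
}
```

Even this trivial version touches every pixel several times in AS3, which is exactly why I suspect GPU assistance (or a native extension) would be needed for large background maps.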

Here's a "real-time transcoding" C++ library that may be relevant:

code.google.com/p/crunch/

and a javascript translation: www-cs-students.stanford.edu/~eparker/files/crunch/more_info.html

  • Low-level GPU programming is not practical on most platforms. OpenGL and D3D have moved exclusively to high-level shading languages, since hardware varies so much between generations and vendors. Even the assembly languages that they once exposed were not all that low-level; they were/are translated to the GPU's native instruction set by the driver. Anyway, you do know that you can let the driver do the compression and then call `glGetCompressedTexImage (...)` to read it back, right? Offline compression is often higher quality, though. OpenGL ES is a little different, but your tag implies desktop GL. – Andon M. Coleman Feb 19 '14 at 20:58

1 Answer


I was hoping somebody with actual experience would answer this... oh well, here goes.

"Low level GPU programming" isn't that much of a black art. You're writing shaders in a dialect of C. You can do bit masking and integer shifts as well as floating point math. The hardest part is wrapping your head around how a GPU does parallel processing and how the data gets fed into your shaders. The OpenGL SuperBible should be your starting point.

In my limited experience, GPU shaders are pretty good at decompressing images; e.g. I've used shaders to convert from the Bayer RGGB digital camera pixel format into RGB. They're not so good at compressing images, because the output of compression is a variable-length stream of data, not an image. If I were trying it, I'd write a vertex shader with the uncompressed image as a texture input and the output going into a transform feedback buffer. But I'm not sure transform feedback is available on many mobile devices.

Hope this helps.

Hugh Fisher
  • Pedantic point, more than two years after anyone cared: mobile GPUs implementing GLES 2.0 can't necessarily handle integers natively; it is permissible to emulate integer types with floating point types. Hence anything bitwise is 'reserved' (implying illegal). So as to the 'mobile' part of the question, you will likely need to be able to do everything as if floating point, to hit all devices still on the market. See https://www.khronos.org/files/opengles_shading_language.pdf §4.1.3 and §5.5.1. ES 3.0 guarantees genuine integers (see GLSL_ES_Specification_3.00.3.pdf at the otherwise same URL). – Tommy Mar 07 '16 at 18:34