5

I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android, and iOS.

Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?

As far as I know, I should be able to compile it on PC against standard OpenGL, with only small changes in the code that handles the context and the connection to the windowing system.

runnydead
  • Have you heard of Kivy? It's an open source cross-platform framework for Linux, Windows, Mac OS X, Android and iOS that renders all of its views in OpenGL (http://kivy.org). It includes its own widget toolkit as well. I just thought I'd throw that out there since there's a possibility you might not want to re-invent the wheel if a solution already exists and you perhaps just haven't discovered it yet. – trusktr Sep 09 '12 at 12:57
  • If you have Android or iOS, try searching for "Kivy" in the Play Store or App Store to see examples of it in use. – trusktr Sep 09 '12 at 13:07

3 Answers

9

Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?

Your principal difficulties with this will be dealing with those parts of the ES 2.0 specification which are not actually the same as desktop OpenGL 2.1.

For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. In ES 2.0, you use things like precision qualifiers; those are illegal constructs in GLSL 1.20.

You can, however, #define around them, but this requires a bit of manual intervention: you have to insert an #ifdef into the shader source file. There are shader compilation tricks you can use to make this a bit easier.

Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this.

Every GLSL shader (desktop or ES) needs to have a "preamble". The first non-comment thing in a shader needs to be a #version declaration. In GLSL ES 1.00 that is #version 100, while desktop GLSL 1.20 uses #version 120 (desktop drivers that expose ARB_ES2_compatibility also accept #version 100). After the version comes the #extension list (if any), which enables the extensions the shader needs.

Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since odds are good you're going to need more GLSL ES extensions than desktop GL 2.1 extensions, these lists won't just be a 1:1 mapping; they will be completely different lists.

My suggestion is to employ the ability to give GLSL shaders multiple source strings. That is, your actual shader files do not contain any preamble stuff; they contain only the actual definitions and functions: the main body of the shader.

When running on GL ES, you have a global preamble that you will affix to the beginning of the shader. You will have a different global preamble in desktop GL. The code would look like this:

GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);

The preamble can also include a platform-specific #define (user-defined, of course). That way, you can #ifdef shader code for different platforms.
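As a concrete illustration, here is a minimal sketch of what GetGlobalPreambleString() from the snippet above might return. The USE_GL_ES build flag, the specific #defines and the example extension are assumptions for illustration, not part of this answer:

// Minimal sketch only; the build flag and the specific defines are assumptions.
const char *GetGlobalPreambleString()
{
#ifdef USE_GL_ES   // hypothetical flag set when building against GL ES 2.0
    return
        "#version 100\n"
        "#extension GL_OES_standard_derivatives : enable\n"  // example ES-only extension
        "#define PLATFORM_ES 1\n";
#else
    return
        "#version 120\n"
        "#define PLATFORM_DESKTOP 1\n"
        // Desktop GLSL 1.20 has no precision qualifiers, so define them to nothing.
        "#define lowp\n"
        "#define mediump\n"
        "#define highp\n";
#endif
}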

There are other differences between the two. For example, while valid ES 2.0 texture uploading function calls will work fine in desktop GL 2.1, they will not necessarily be optimal. Things that would upload fine on big-endian machines like all mobile systems will require some bit twiddling from the driver on little-endian desktop machines. So you may want to have a way to specify different pixel transfer parameters on GL ES and desktop GL.
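For instance, a hedged sketch of selecting different upload parameters per build (the UploadTexture function, the USE_GL_ES flag and the GL headers being already included are assumptions):

// Hedged sketch; UploadTexture and USE_GL_ES are placeholders, not from the answer.
void UploadTexture(int width, int height, const void *pixels)
{
#ifdef USE_GL_ES
    // ES 2.0 requires the internal format to match the pixel format.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
#else
    // Desktop GL 2.1 accepts a sized internal format and BGRA source data,
    // which is often the driver's preferred layout.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels);
#endif
}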

Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.

Nicol Bolas
  • Thanks for your exhaustive answer. So you think that it would be better to create some kind of OpenGL renderer abstraction? For example, I could have a texture represented by a Texture2D class. This class would contain things common to both specifications, but the implementation of some things would be different. – runnydead Jan 19 '12 at 15:28
  • @hubrobin: It doesn't need to be that abstract. You just need some platform-specific code in particular places. Now, if you're targeting GL 3.3 instead of 2.1, then you'll need much more of an abstraction. – Nicol Bolas Jan 19 '12 at 15:53
  • I don't want to support more features on PC. So you are basically saying that it is doable? – runnydead Jan 19 '12 at 16:00
  • Point of interest: modern ARM devices are little endian, like x86. – karunski Jan 19 '12 at 20:33
3

In my humble experience, the best approach for this kind of requirement is to develop your engine in pure C, with no additional layers on top of it.

I am the main developer of the PATRIA 3D engine, which is based on the portability principle you just mentioned, and we have achieved this by developing the tool on top of only the basic standard libraries.

The effort to then compile your code on the different platforms is minimal.

The actual effort to port the entire solution depends on the components you want to embed in your engine.

For example, written in standard C and shared across all platforms:

Engine 3D
Game Logic
Game AI
Physics

plus, per platform:

Window interface (GLUT, EGL, etc.) - depends on the platform; typically GLUT for desktop and EGL for mobile devices.
Human interface - depends on the port: Java for Android, Objective-C for iOS, whatever toolkit you use on desktop.
Sound manager - depends on the port.
Market services - depends on the port.

In this way, you can seamlessly re-use 95% of your effort.

We have adopted this solution for our engine, and so far it has been well worth the initial investment.
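For illustration only (these names are invented and are not PATRIA 3D's actual API), the portable core might expose a small plain-C interface that each platform's window, input and sound layers drive:

/* Illustration only; the names are invented, not PATRIA 3D's actual API. */
typedef struct Engine Engine;   /* opaque handle defined inside the portable core */

Engine *engine_create(int width, int height);
void    engine_resize(Engine *e, int width, int height);
void    engine_handle_touch(Engine *e, float x, float y, int down);
void    engine_update(Engine *e, float dt_seconds);   /* game logic, AI, physics */
void    engine_render(Engine *e);                     /* issues the GL ES 2.0 calls */
void    engine_destroy(Engine *e);

The platform layer (GLUT, EGL, Java, Objective-C and so on) only ever calls functions like these, so the re-usable 95% is everything behind this boundary.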

Maurizio Benedetti
0

Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.

The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
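A hedged sketch of what such a class's interface might look like (names and members are illustrative, not the library's actual code; CMyRenderer is the same name used in the GLFW example further down):

#include <mutex>

class CMyRenderer
    {
public:
    // Called from the UI thread whenever the map view changes.
    void SetView(double aCenterX, double aCenterY, double aScale)
        {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_center_x = aCenterX; m_center_y = aCenterY; m_scale = aScale;
        }

    // Called from the rendering thread.
    void Draw()
        {
        double cx, cy, scale;
            {
            std::lock_guard<std::mutex> lock(m_mutex);  // copy shared state under the lock
            cx = m_center_x; cy = m_center_y; scale = m_scale;
            }
        // ... issue OpenGL ES 2.0 calls using cx, cy, scale and the cached buffers ...
        }

private:
    std::mutex m_mutex;
    double m_center_x = 0, m_center_y = 0, m_scale = 1;
    // cached OpenGL ES buffer objects would also live here
    };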

All the rendering logic is written in C++ and is used on all the following platforms.

Windows (MFC)

Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
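A hedged sketch of such a redraw thread, assuming the EGL display, surface and context have already been created with ANGLE and made current on this thread, and using CMyRenderer to stand in for the shared rendering class:

#include <atomic>
#include <chrono>
#include <thread>
#include <EGL/egl.h>

void RenderLoop(CMyRenderer& aRenderer, EGLDisplay aDisplay, EGLSurface aSurface,
                std::atomic<bool>& aRunning)
    {
    using namespace std::chrono;
    const milliseconds frame_time(40);        // 40 ms per frame is 25 frames a second
    while (aRunning)
        {
        auto start = steady_clock::now();
        aRenderer.Draw();                     // the shared C++ rendering code
        eglSwapBuffers(aDisplay, aSurface);   // present the frame
        std::this_thread::sleep_until(start + frame_time);
        }
    }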

Windows (.NET and WPF)

Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).

Windows (UWP)

Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.

Qt on Windows, Linux and Mac OS

Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
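A hedged sketch of the Qt side (the widget class and member names are illustrative, and CMyRenderer is assumed to be the shared rendering class used in the GLFW example below):

#include <QOpenGLWidget>
#include <memory>

class MapWidget : public QOpenGLWidget
    {
public:
    explicit MapWidget(QWidget* aParent = nullptr) : QOpenGLWidget(aParent) { }

protected:
    void initializeGL() override
        {
        // The context is current here, so the shared renderer can create GL resources.
        m_renderer = std::make_unique<CMyRenderer>();
        }

    void paintGL() override
        {
        if (m_renderer)
            m_renderer->Draw();   // Qt swaps the buffers after paintGL() returns
        }

private:
    std::unique_ptr<CMyRenderer> m_renderer;
    };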

Android

Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object. Create the C++ rendering object in your onSurfaceCreated() function. Call the C++ rendering object's drawing function in your onDrawFrame() function. You will need to import the following libraries for your renderer class:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;

Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:

setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0); // RGBA8888 color, 24-bit depth, no stencil

then create an instance of your renderer class and call setRenderer to install it.
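The JNI wrapper mentioned above might look something like this on the C++ side. The Java package and class names (com.example.MapRenderer), the global renderer object and CMyRenderer are placeholders, not the library's actual code; the Java onSurfaceCreated() and onDrawFrame() methods simply forward to these native functions:

#include <jni.h>
#include <memory>

static std::unique_ptr<CMyRenderer> g_renderer;  // the shared C++ rendering object

extern "C" JNIEXPORT void JNICALL
Java_com_example_MapRenderer_nativeOnSurfaceCreated(JNIEnv*, jobject)
    {
    // The GL context exists once onSurfaceCreated() is called, so it is safe
    // to create the renderer and its GL resources here.
    g_renderer = std::make_unique<CMyRenderer>();
    }

extern "C" JNIEXPORT void JNICALL
Java_com_example_MapRenderer_nativeOnDrawFrame(JNIEnv*, jobject)
    {
    if (g_renderer)
        g_renderer->Draw();
    }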

iOS

Use the MetalANGLE library rather than GLKit, which Apple has deprecated and will eventually stop supporting.

Create an Objective-C++ renderer class to call your C++ OpenGL ES drawing logic.

Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:

-(void)drawRect:(CGRect)rect
    {
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
    }

In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:

MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;

Linux

It is easiest to use Qt on Linux (see above), but it's also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the context current, then create a data member holding an instance of your renderer class; something like this:

m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();

Add a Draw function to your app class:

bool MyApp::Draw()
    {
    if (glfwWindowShouldClose(m_window))
        return false;
    m_renderer->Draw();
    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
    }

Your main() function will then be:

int main(void)
    {
    /* Initialize the library */
    if (!glfwInit())
        return -1;

    // Create the app.
    MyApp app;

    /* Draw continuously until the user closes the window */
    while (app.Draw())
        {

        /* Poll for and process events */
        glfwPollEvents();
        }

    glfwTerminate();
    return 0;
    }

Shader incompatibilities

There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:

const char* preamble = "";

#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif

The preamble is then prefixed to the shader code.
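For completeness, a hedged sketch of how that prefixing and compilation step might look; the function signature, the GLES2/gl2.h header and the parameter names are assumptions, not the library's actual CompileShader:

#include <string>
#include <GLES2/gl2.h>

GLuint CompileShader(GLenum aType, const char* aPreamble, const std::string& aSource)
    {
    std::string full_source = std::string(aPreamble) + aSource;
    const char* text = full_source.c_str();

    GLuint shader = glCreateShader(aType);
    glShaderSource(shader, 1, &text, nullptr);   // a single string: preamble + shader body
    glCompileShader(shader);
    return shader;                               // error checking omitted for brevity
    }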

Graham Asher