
To tell it from the beginning: I'm a complete novice in the OpenGL world, but I need it for some rendering optimisations on Android. I have to render a block (a bunch of contiguous pixels) in 2D space using OpenGL ES 2.0. I've found some seemingly suitable solutions (here and here) and tried both of them, but I cannot reach the desired result.

The first problem is that the pixel is always at the origin (the center of the screen, {0, 0}) and I cannot move it from there. I would prefer to place it in the top-left corner of the screen.

The second problem is that I cannot draw more than one pixel. I would like to draw multiple pixels, not just a single one.

To summarize: I just want to place the pixels contiguously. For example, the first pixel starts in the top-left corner, the second one sits immediately after it on the X axis, and so on. When the right edge of the screen is reached, the next pixel should start on a new line (Y+1), roughly like the sketch below.
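
Something like this (a plain Java pseudo-sketch with placeholder numbers, just to illustrate the layout I mean):

    int screenWidth = 1080;   // example width in pixels, not a real value
    int pixelCount  = 5000;   // however many pixels need to be drawn
    for (int i = 0; i < pixelCount; i++) {
        int x = i % screenWidth;  // wraps back to column 0 at the right edge of the screen
        int y = i / screenWidth;  // ...and continues on the next line (Y+1)
        // place a pixel at (x, y), counting from the top-left corner
    }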

The code that I'm using is:

package point.example.point;

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class PointRenderer implements GLSurfaceView.Renderer {
  private float[] mModelMatrix = new float[16];
  private float[] mViewMatrix = new float[16];
  private float[] mProjectionMatrix = new float[16];
  private float[] mMVPMatrix = new float[16];
  private int mMVPMatrixHandle;
  private int mPositionHandle;

  float[] vertices = {
      0.0f, 0.0f, 0.0f
  };
  FloatBuffer vertexBuf;

  @Override
  public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
    vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
    vertexBuf.put(vertices).position(0);

    // Set the background clear color to black.
    GLES20.glClearColor(0f, 0f, 0f, 1f);

    float eyeX = 0.0f;
    float eyeY = 0.0f;
    float eyeZ = 0.0f;

    float centerX = 0.0f;
    float centerY = 0.0f;
    float centerZ = -5.0f;

    float upX = 0.0f;
    float upY = 1.0f;
    float upZ = 0.0f;

    // Set the view matrix. This matrix can be said to represent the camera position.
    // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
    // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
    Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);

    final String vertexShader =
        "uniform mat4 u_MVPMatrix;      \n"
            + "attribute vec4 a_Position;     \n"
            + "void main()                    \n"
            + "{                              \n"
            + "   gl_Position = u_MVPMatrix   \n"
            + "               * a_Position;   \n"
            + "   gl_PointSize = 10.0;       \n"
            + "}                              \n";

    final String fragmentShader =
        "precision mediump float;       \n"
            + "void main()                    \n"
            + "{                              \n"
            + "   gl_FragColor = vec4(1.0,    \n"
            + "   1.0, 1.0, 1.0);             \n"
            + "}                              \n";

    // Load in the vertex shader.
    int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

    if (vertexShaderHandle != 0) {
      // Pass in the shader source.
      GLES20.glShaderSource(vertexShaderHandle, vertexShader);

      // Compile the shader.
      GLES20.glCompileShader(vertexShaderHandle);

      // Get the compilation status.
      final int[] compileStatus = new int[1];
      GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

      // If the compilation failed, delete the shader.
      if (compileStatus[0] == 0) {
        GLES20.glDeleteShader(vertexShaderHandle);
        vertexShaderHandle = 0;
      }
    }

    if (vertexShaderHandle == 0) {
      throw new RuntimeException("Error creating vertex shader.");
    }

    // Load in the fragment shader shader.
    int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

    if (fragmentShaderHandle != 0) {
      // Pass in the shader source.
      GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

      // Compile the shader.
      GLES20.glCompileShader(fragmentShaderHandle);

      // Get the compilation status.
      final int[] compileStatus = new int[1];
      GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

      // If the compilation failed, delete the shader.
      if (compileStatus[0] == 0) {
        GLES20.glDeleteShader(fragmentShaderHandle);
        fragmentShaderHandle = 0;
      }
    }

    if (fragmentShaderHandle == 0) {
      throw new RuntimeException("Error creating fragment shader.");
    }

    // Create a program object and store the handle to it.
    int programHandle = GLES20.glCreateProgram();

    if (programHandle != 0) {
      // Bind the vertex shader to the program.
      GLES20.glAttachShader(programHandle, vertexShaderHandle);
      // Bind the fragment shader to the program.
      GLES20.glAttachShader(programHandle, fragmentShaderHandle);
      // Bind attributes
      GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
      // Link the two shaders together into a program.
      GLES20.glLinkProgram(programHandle);
      // Get the link status.
      final int[] linkStatus = new int[1];
      GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
      // If the link failed, delete the program.
      if (linkStatus[0] == 0) {
        GLES20.glDeleteProgram(programHandle);
        programHandle = 0;
      }
    }

    if (programHandle == 0) {
      throw new RuntimeException("Error creating program.");
    }

    // Set program handles. These will later be used to pass in values to the program.
    mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
    mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");

    // Tell OpenGL to use this program when rendering.
    GLES20.glUseProgram(programHandle);
  }

  @Override
  public void onSurfaceChanged(GL10 glUnused, int width, int height) {
    // Set the OpenGL viewport to the same size as the surface.
    GLES20.glViewport(0, 0, width, height);

    // Create a new perspective projection matrix. The height will stay the same
    // while the width will vary as per aspect ratio.
    final float ratio = (float) width / height;
    final float left = -ratio;
    final float right = ratio;
    final float bottom = -1.0f;
    final float top = 1.0f;
    final float near = 1.0f;
    final float far = 100.0f;

    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
  }

  @Override
  public void onDrawFrame(GL10 glUnused) {
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

    Matrix.setIdentityM(mModelMatrix, 0);
    //Push to the distance - note this will have no effect on a point size
    Matrix.translateM(mModelMatrix, 0, 0.0f, 0.0f, -5.0f);
    Matrix.multiplyMV(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
    Matrix.multiplyMV(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

    //Send the vertex
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    //Draw the point
    GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);

  }
}

And here's the visual result:

[Image: the result from the applied code]

Cosmin Telescu
  • `{0,0}` is the center of the screen. [OpenGL Coordinate System](http://www.learnopengles.com/tag/left-handed-coordinate-system/). – James Poag Jun 06 '18 at 13:22
  • Yes, the origin is at {0,0}. I've tried changing the X and Y values in the translateM() method and nothing changes. – Cosmin Telescu Jun 06 '18 at 13:41
  • Perhaps you would find [Canvas](https://developer.android.com/reference/android/graphics/Canvas) an easier alternative? It can be used for [high framerate applications](https://medium.com/rosberryapps/make-your-custom-view-60fps-in-android-4587bbffa557). – James Poag Jun 06 '18 at 14:55
  • @JamesPoag, right now I'm using the canvas for drawing points with the [drawPoint()](https://developer.android.com/reference/android/graphics/Canvas.html#drawPoint(float,%20float,%20android.graphics.Paint)) method, but it seems too intensive and time-consuming for rendering a bunch of pixels (especially on high-density devices). – Cosmin Telescu Jun 06 '18 at 15:17
  • https://medium.com/rosberryapps/make-your-custom-view-60fps-in-android-4587bbffa557 – James Poag Jun 06 '18 at 18:06
  • @JamesPoag Nice article, which can be taken into consideration. Thank you! – Cosmin Telescu Jun 07 '18 at 13:01
  • @Rabbid76 I've updated the content in a more specific way. Thanks for the tip. – Cosmin Telescu Jun 07 '18 at 13:02

1 Answer


What you are trying to accomplish - painting a raster of pixels with specific dimensions measured in pixels - isn't really what the OpenGL API was designed for.

You're looking for a speed gain compared to canvas drawing, and OpenGL is indeed the go-to place for fast drawing through the GPU, although filling rectangular areas on a Canvas can be pretty fast too. Unless your contiguous pixels all have different colors?

You see, OpenGL has its own coordinate system, which does not depend on the screen resolution. For 2D drawing with no depth, the visible range defaults to (-1,-1) to (1,1) (normalized device coordinates). This makes it easy to create resolution-independent visualizations. However, you need pixels, which are resolution dependent.
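
For illustration, a point at pixel position (px, py), with (0, 0) in the top-left corner, could be mapped into that default coordinate range with something like this (just a sketch, assuming the surface width and height in pixels are known):

    // Sketch: map a pixel position (origin top-left, Y pointing down) into
    // OpenGL's default normalized device coordinates (origin center, Y pointing up).
    static float[] pixelToNdc(float px, float py, float surfaceWidth, float surfaceHeight) {
        float ndcX =  (px / surfaceWidth)  * 2f - 1f;   // 0..width  maps to -1..1
        float ndcY = -((py / surfaceHeight) * 2f - 1f); // 0..height maps to  1..-1 (Y flipped)
        return new float[] { ndcX, ndcY };
    }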

You could use glViewport together with an orthographic projection that maps OpenGL coordinates to your desired resolution, then position your vertices in that pixel coordinate system.
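
A minimal sketch of that idea (my assumption, not code from the question): replace the perspective frustum in onSurfaceChanged with an orthographic projection whose units are pixels and whose origin is the top-left corner:

    @Override
    public void onSurfaceChanged(GL10 glUnused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        // Orthographic projection in pixel units: x runs 0..width from left to
        // right, y runs 0..height from top to bottom, so a vertex at (10, 20)
        // ends up 10 px from the left edge and 20 px from the top edge.
        Matrix.orthoM(mProjectionMatrix, 0, 0f, width, height, 0f, -1f, 1f);
    }

With a projection like that, the view matrix can simply stay the identity and the -5 translation along Z is no longer needed.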

However, drawing individual points for a large contiguous area will be slow. At the very least, place all your points in a vertex buffer and draw that buffer once. You could define a color for each vertex to suit your needs. But even this won't be spectacularly fast, since OpenGL must perform calculations for each of your vertices.
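
As a rough sketch (assuming the pixel-unit projection above; width and pointCount are placeholders for your own values): pack every point into one FloatBuffer and submit it with a single draw call:

    int width = 1080;                 // placeholder: surface width in pixels
    int pointCount = width * 10;      // placeholder: number of points to draw

    // Build one buffer holding pointCount vertices (x, y, z) instead of a single vertex.
    float[] vertices = new float[pointCount * 3];
    for (int i = 0; i < pointCount; i++) {
        vertices[i * 3]     = i % width;   // x in pixels, wrapping at the right edge
        vertices[i * 3 + 1] = i / width;   // y in pixels, one row per wrap
        vertices[i * 3 + 2] = 0f;
    }
    FloatBuffer vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    vertexBuf.put(vertices).position(0);

    // In onDrawFrame: one attribute pointer, then one call draws all the points.
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glDrawArrays(GLES20.GL_POINTS, 0, pointCount);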

For real speed, you'll need to convert the rectangular area you want to fill into two triangles and have OpenGL draw those. The pixel coloring (pixels are called fragments in OpenGL) must then be done in a fragment shader that runs on the GPU (= fast). In your GLSL code you can calculate a color for each fragment. This is a good solution if your contiguous pixels have a single color, a gradient, or some other computable pattern.
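
As a sketch of that route (w and h are placeholders for the rectangle size in pixels, still using the pixel-unit projection from above):

    float w = 200f, h = 100f;   // placeholder rectangle size in pixels

    // Two triangles as a strip covering the rectangle from (0, 0) to (w, h).
    float[] quad = {
        0f, 0f, 0f,   // top-left
        0f, h,  0f,   // bottom-left
        w,  0f, 0f,   // top-right
        w,  h,  0f    // bottom-right
    };
    // ...upload into a FloatBuffer as before, then draw with:
    // GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);

    // A fragment shader that computes a colour per pixel from its window position:
    final String fragmentShader =
        "precision mediump float;                              \n"
      + "void main()                                           \n"
      + "{                                                     \n"
      + "   // gl_FragCoord.xy = this fragment's position in px\n"
      + "   gl_FragColor = vec4(fract(gl_FragCoord.x / 255.0), \n"
      + "                       fract(gl_FragCoord.y / 255.0), \n"
      + "                       0.0, 1.0);                     \n"
      + "}                                                     \n";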

If the pixel color data has no pattern and comes out of an array, that approach won't help. In that case, consider putting the data into an OpenGL texture of the right size and having OpenGL draw it on the two triangles.
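
A minimal sketch of that, assuming the pixel data is available as an RGBA byte array (pixelData, texWidth and texHeight are all placeholder names):

    int texWidth = 256, texHeight = 256;                        // placeholder texture size
    byte[] pixelData = new byte[texWidth * texHeight * 4];      // placeholder RGBA bytes

    // Upload the pixel array as a texture and let OpenGL map it onto the two triangles.
    int[] texHandle = new int[1];
    GLES20.glGenTextures(1, texHandle, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texHandle[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

    ByteBuffer pixels = ByteBuffer.allocateDirect(texWidth * texHeight * 4)
            .order(ByteOrder.nativeOrder());
    pixels.put(pixelData).position(0);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            texWidth, texHeight, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    // The fragment shader then samples this texture (texture2D) instead of computing a colour.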

Alexander van Oostenrijk