
Ok, this is what I have. I have a 1D bitmap (or bitarray, bitset, bitstring, but I'll call it a bitmap for now) containing the live or dead states from a Conway's Game of Life generation. The cell at (x, y) is represented by the bit at y * map_width + x.

Now that I have my Game of Life "engine" working, it would be nice if I could render some graphical stuff. I thought OpenGL would be a nice choice for this, but I have no idea where to start, or whether there are any specific functions or shaders (I know nothing about shaders) that can efficiently render a bitmap onto a 2D plane as black-and-white pixels.

If you now think "no you idiot, opengl is bad, go with ...", feel free to say it; I'm open to changes.

EDIT

I forgot to say that I use a compact bitarray, storing 8 bits per byte and using masking to retrieve those bits. This is my hand-made library thingy:

#include <stdint.h> // uint32_t
#include <stdio.h>  // printf()
#include <stdlib.h> // malloc(), exit()
#include <string.h> // memset()
#include <limits.h> // CHAR_BIT

typedef uint32_t word_t;
enum {
    WORD_SIZE = sizeof(word_t), // size of one word in bytes
    BITS_PER_WORD = sizeof(word_t) * CHAR_BIT // size of one word in bits
};
#define MAX_WORD_VALUE UINT32_MAX // max value of one word (too large for an enum constant, which must fit in an int)

typedef struct {
    word_t *words;
    int nwords;
    int nbytes;
} bitmap_t;

/* static inline: avoids undefined-reference problems when the compiler chooses not to inline */
static inline int WORD_OFFSET(int b) { return b / BITS_PER_WORD; }
static inline int BIT_OFFSET(int b) { return b % BITS_PER_WORD; }

static inline void setbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] |= (word_t)1 << BIT_OFFSET(n); }
static inline void flipbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] ^= (word_t)1 << BIT_OFFSET(n); }
static inline void clearbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] &= ~((word_t)1 << BIT_OFFSET(n)); }
static inline int getbit(bitmap_t bitmap, int n) { return (bitmap.words[WORD_OFFSET(n)] & ((word_t)1 << BIT_OFFSET(n))) != 0; }

static inline void clearall(bitmap_t bitmap) {
    int i;
    for (i = bitmap.nwords - 1; i >= 0; i--) {
        bitmap.words[i] = 0;
    }
}

static inline void setall(bitmap_t bitmap) {
    int i;
    for (i = bitmap.nwords - 1; i >= 0; i--) {
        bitmap.words[i] = MAX_WORD_VALUE;
    }
}

bitmap_t bitmap_create(int nbits) {
    bitmap_t bitmap;
    bitmap.nwords = (nbits + BITS_PER_WORD - 1) / BITS_PER_WORD; // round up to whole words
    bitmap.nbytes = bitmap.nwords * WORD_SIZE;
    bitmap.words = malloc(bitmap.nbytes);

    if (bitmap.words == NULL) { // could not allocate memory
        printf("ERROR: Could not allocate (enough) memory.");
        exit(1);
    }

    clearall(bitmap);
    return bitmap;
}

void bitmap_free(bitmap_t bitmap) {
    free(bitmap.words);
}
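
For reference, addressing a cell with this library looks like the following (a hypothetical usage snippet; map_width, map_height and the coordinates are just example values):

// one bit per cell, indexed as y * map_width + x
int map_width = 64, map_height = 48;
bitmap_t cells = bitmap_create(map_width * map_height);

setbit(cells, 10 * map_width + 20);              // cell (20, 10) becomes alive
int alive = getbit(cells, 10 * map_width + 20);  // 1 if the cell is alive

bitmap_free(cells);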
orlp

4 Answers


This is code from my OGL Game of Life implementation.

This uploads the texture (do this every time you want to update the data):

glTexImage2D( GL_TEXTURE_2D, 0, 1, game->width, game->height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, game->culture[game->phase] );

game->culture[game->phase] is the data array, of type char*, of size width * height (phase toggles between two alternating arrays, one being written to while the other is read from).

Because GL_LUMINANCE is used, the colors will be only black and white.
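
If the texture object hasn't been created yet, a minimal one-time setup could look like this (a sketch; the name tex is just an example). The GL_NEAREST filters matter: with the default minification filter OpenGL expects mipmaps, and an incomplete texture usually shows up as no texture at all.

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // crisp cells, no mipmaps needed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows of the char array are tightly packed
glEnable(GL_TEXTURE_2D);                 // needed for fixed-function texturing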

Also, you need to set up the rectangle with this (every frame, but I guess you already know this):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_QUADS);                                  // Draw A Quad
        // note: each glTexCoord2i sets the coordinate used by the *next* glVertex call
        glTexCoord2i( 0, 0 );
        glVertex3f(-1.0f, 1.0f, 0.0f);                  // Top Left
        glTexCoord2i( 1, 0 );
        glVertex3f( 1.0f, 1.0f, 0.0f);                  // Top Right
        glTexCoord2i( 1, 1 );
        glVertex3f( 1.0f,-1.0f, 0.0f);                  // Bottom Right
        glTexCoord2i( 0, 1 );
        glVertex3f(-1.0f,-1.0f, 0.0f);                  // Bottom Left
    glEnd();

Of course you could use buffers and keep the "model" in the GPU memory, but that is not really necessary with only one quad.

Matěj Zábský
  • Since I store 8 bits in one byte do I need to convert this to one bit per byte before feeding it to OpenGL or does it have a mode that understands masked storage? – orlp Mar 05 '11 at 20:23
  • @nightcracker See this question I asked when I was making my GoL: http://stackoverflow.com/questions/327642/opengl-and-monochrome-texture – Matěj Zábský Mar 05 '11 at 20:27
  • Your question is pretty much my question :) Thanks. Though I'm kinda worried, since this will increase memory usage by a factor of 8. Thank god I made my application modular, so I can easily change the system. – orlp Mar 05 '11 at 20:32
  • @nightcracker Yeah, the idea to use all 8 bits is quite obvious when making Game of Life. But you must be aware that packing the bits makes you use two more operations per cell write or read, which matters much more than the RAM saved. Also, you will hit the OGL texture size limit MUCH sooner than the RAM limit. The array approach is grossly inefficient anyway (you spend most time iterating over dead cells or repeating patterns); go Hashlife (http://en.wikipedia.org/wiki/Hashlife) if you want it to be really efficient. – Matěj Zábský Mar 05 '11 at 20:38
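
Following up on the packed-storage question in the comments above: the classic texture-upload path expects at least one byte per cell, so one straightforward option is to expand the packed bitmap into a temporary byte buffer just before glTexImage2D. A rough sketch built on the bitmap_t library from the question (the names pixels and bitmap_to_bytes are made up here):

/* Expand the packed bitmap into one byte per cell (0 = dead, 0xFF = alive).
   pixels must point to width * height bytes. */
void bitmap_to_bytes(bitmap_t bitmap, unsigned char *pixels, int width, int height) {
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            pixels[y * width + x] = getbit(bitmap, y * width + x) ? 0xFF : 0x00;
}

/* then, per generation:
   bitmap_to_bytes(cells, pixels, width, height);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels); */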

First of all, consider doing the simulation itself on the GPU by ping-ponging between 2 OpenGL textures. Barring some complex optimizations, Conway's Life is a pretty straightforward task for a GPU. It requires 2 framebuffer objects and some understanding of shaders.

Edit-1: Example fragment shader (brain-compiled)

#version 130
uniform sampler2D state; // previous generation ("input" is a reserved word in GLSL)
out float life;
void main() {
    ivec2 tc = ivec2(gl_FragCoord.xy);
    float orig = texelFetch(state,tc,0).r;
    // sum of the 8 neighbours (texelFetch* return a vec4, so take .r)
    float sum =
        texelFetchOffset(state,tc,0,ivec2(-1,0)).r+
        texelFetchOffset(state,tc,0,ivec2(+1,0)).r+
        texelFetchOffset(state,tc,0,ivec2(0,-1)).r+
        texelFetchOffset(state,tc,0,ivec2(0,+1)).r+
        texelFetchOffset(state,tc,0,ivec2(-1,-1)).r+
        texelFetchOffset(state,tc,0,ivec2(-1,+1)).r+
        texelFetchOffset(state,tc,0,ivec2(+1,-1)).r+
        texelFetchOffset(state,tc,0,ivec2(+1,+1)).r;
    // Conway's rules: fewer than 2 or more than 3 neighbours -> dead,
    // exactly 3 -> alive, exactly 2 -> unchanged
    if(sum < 1.5 || sum > 3.5)
        life = 0.0;
    else if(sum > 2.5)
        life = 1.0;
    else
        life = orig;
}

The vertex shader is a simple pass-through:

#version 130
in vec2 vertex;
void main() {
   gl_Position = vec4(vertex,0.0,1.0);
}
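
For completeness, a rough sketch of the ping-pong setup that would drive these shaders (all names here, such as stateTex, fbo and lifeProgram, are placeholders; initial_state would be a width * height float array with the starting generation; shader compilation and error checking are omitted):

GLuint stateTex[2], fbo[2];
glGenTextures(2, stateTex);
glGenFramebuffers(2, fbo);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, stateTex[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* one float per cell; GL_R32F needs GL 3.0, which #version 130 implies anyway */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                 GL_RED, GL_FLOAT, i == 0 ? initial_state : NULL);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, stateTex[i], 0);
}

/* each generation: read stateTex[src], render into fbo[1 - src] */
int src = 0;
glUseProgram(lifeProgram);                      /* the two shaders above, linked */
glBindFramebuffer(GL_FRAMEBUFFER, fbo[1 - src]);
glViewport(0, 0, width, height);
glBindTexture(GL_TEXTURE_2D, stateTex[src]);
/* ...draw a fullscreen quad with the pass-through vertex shader... */
src = 1 - src;
/* finally, bind framebuffer 0 again and draw stateTex[src] to the window */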
kvark
  • Read my question again: "(I know __nothing__ about shaders)". Though I would love to learn :) – orlp Mar 09 '11 at 12:52
  • Any recommendations? (though I think it would be misplaced to start with shaders. I can get a basecode to work (so I can compile :D), but I can't even get commanderz' snippet to work). – orlp Mar 09 '11 at 14:49

If you stick with OpenGL, the easiest way is to upload your bitmap as a texture, then render a quad mapped with that texture. The uploading bit would look something like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

This assumes that every cell is a single byte, with value 0 for black and 0xFF for white. Note that, on some OpenGL versions, width and height must be powers of two.
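
If you do hit that power-of-two restriction (OpenGL before 2.0 without the ARB_texture_non_power_of_two extension), one workaround is to allocate the next power-of-two texture, upload the actual grid into its corner with glTexSubImage2D, and shrink the texture coordinates on the quad accordingly. A rough sketch (the helper next_pow2 is made up):

static int next_pow2(int v) { int p = 1; while (p < v) p <<= 1; return p; }

int tw = next_pow2(width), th = next_pow2(height);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, tw, th, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);     /* allocate storage only */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, data);  /* the real cells */
/* on the quad, use texture coordinates from (0, 0) to
   (width / (float)tw, height / (float)th) instead of (1, 1) */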

Thomas

Old versions of OpenGL provide functionality to draw bitmaps directly, without the need for an intermediary texture: glBitmap. Compared to other methods for drawing images glBitmap is rather slow, but since one uses it only sparingly, this is not that bad.

http://www.opengl.org/sdk/docs/man/xhtml/glBitmap.xml

Bitmaps are placed using glRasterPos or glWindowPos.

http://www.opengl.org/sdk/docs/man/xhtml/glRasterPos.xml http://www.opengl.org/sdk/docs/man/xhtml/glWindowPos.xml

Bitmaps have a small pitfall: if the raster position set using glRasterPos or glWindowPos lies outside the viewport, no part of the bitmap gets drawn, even if part of it would reach into the viewport; see the reference page of glBitmap for a workaround.
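
A nice property here is that glBitmap consumes packed 1-bit-per-pixel data, which is close to the storage format in the question. A minimal sketch (cells is a hypothetical bitmap_t holding the grid; this only lines up directly if map_width is a multiple of 8, since glBitmap expects each row to start on a byte boundary, and glPixelStorei(GL_UNPACK_LSB_FIRST, GL_TRUE) may be needed so OpenGL reads bits in the same order the library above writes them):

glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0f, 1.0f, 1.0f);                 /* set bits are drawn in the current color */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       /* rows padded to whole bytes only */
glPixelStorei(GL_UNPACK_LSB_FIRST, GL_TRUE); /* match the LSB-first bit packing above */
glWindowPos2i(0, 0);                         /* lower-left corner of the grid */
glBitmap(map_width, map_height,              /* size in cells */
         0.0f, 0.0f,                         /* bitmap origin */
         0.0f, 0.0f,                         /* raster position advance after drawing */
         (const GLubyte *)cells.words);      /* the packed bit data */

Cleared bits leave the framebuffer untouched, so the glClear color acts as the dead-cell color.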

datenwolf