
I'm working on a project where I use OpenMesh to read STL and OBJ files and draw them on screen with OpenGL. I've been doing the following:

// MeshIO.hh has to be included before any mesh type header
#include <OpenMesh/Core/IO/MeshIO.hh>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>

#include <string>
#include <vector>

using Mesh = OpenMesh::TriMesh_ArrayKernelT<>;

Mesh mesh;

std::vector<Mesh::Point>  vertices;
std::vector<Mesh::Normal> normals;

void readMesh(const std::string& file)
{
    if (!OpenMesh::IO::read_mesh(mesh, file))
        return; // reading failed

    mesh.request_face_normals();
    mesh.request_vertex_normals();
    mesh.update_normals();

    vertices.clear();
    normals.clear();

    for (auto face : mesh.faces())
    {
        for (auto vertex : mesh.fv_range(face))
        {
            auto point = mesh.point(vertex);
            auto normal = mesh.normal(face);

            vertices.push_back(point);
            normals.push_back(normal);
        }
    }

    mesh.release_face_normals();
    mesh.release_vertex_normals();
}

When drawing, I just pass the `vertices` and `normals` vectors to the vertex shader like this:

void paint()
{
    glSetAttributeArray(0, vertices.data());
    glSetAttributeArray(1, normals.data());
    glDrawArrays(GL_TRIANGLES, 0, vertices.size());
}

where the vertex shader looks like this:

attribute vec3 position;
attribute vec3 normal;

uniform mat4 modelViewMatrix;

void main(void)
{
    vec4 color = vec4(0.25, 0.25, 0.25, 1.0);
    vec3 ambient = vec3(0.1); // small constant ambient term

    vec4 P = vec4(position, 1.0); // w = 1 for a point, so the matrix translation applies
    vec4 N = vec4(normal, 0);

    vec3 L = vec3(20, 20, 20) - position;
    vec3 V = -position;
    N = normalize(N);
    L = normalize(L);
    V = normalize(V);
    vec3 R = reflect(-L, vec3(N));
    vec3 diffuse = max(dot(vec3(N), L), 0.0) * color.rgb;
    vec3 specular = pow(max(dot(R, V), 0.0), 0.2) * vec3(0.1, 0.1, 0.1);
    color = vec4(color.a * (ambient + diffuse + specular), color.a);
    color = clamp(color, 0.0, 1.0);

    gl_FrontColor = color;
    gl_Position = modelViewMatrix * P;
}

and the fragment shader is:

void main(void)
{
    gl_FragColor = gl_Color;
}

This produces pretty good results, but keeping another copy of the vertices and normals in a second location (the `vertices` and `normals` vectors) just to be able to draw the mesh seems very counter-intuitive.

I was wondering whether I can use OpenGL buffers with OpenMesh to optimize this. I've been searching for anything on this topic for a while, but found nothing.

– Mostafa Mahmoud

1 Answer

See Vertex Specification. You can create two Vertex Buffer Objects, one for the vertex coordinates and one for the normal vectors:

GLuint vbos[2];
glGenBuffers(2, vbos);

// buffer 0: vertex coordinates
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(vertices[0]), vertices.data(), GL_STATIC_DRAW);

// buffer 1: per-vertex normal vectors
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(normals[0]), normals.data(), GL_STATIC_DRAW);
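
Since glBufferData copies the data into buffer storage owned by the GL, the client-side vectors are no longer needed for drawing after this point. If the memory consumption per mesh is a concern, you could free them right after the upload. A sketch (`vertexCount` is a name introduced here; use it in place of `vertices.size()` in the draw calls below):

GLsizei vertexCount = (GLsizei)vertices.size(); // remember the count for glDrawArrays
vertices.clear();
vertices.shrink_to_fit();
normals.clear();
normals.shrink_to_fit();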

If you use OpenGL 3.0 or later, you can create a Vertex Array Object and store the vertex specification in its state:

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// attribute 0: vertex coordinates from vbos[0]
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// attribute 1: normal vectors from vbos[1]
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

When you want to draw the mesh, it is sufficient to bind the VAO:

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());

If you use OpenGL 2.0, you cannot create a VAO, so you have to specify the arrays of generic vertex attribute data before drawing the mesh:

glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());

Furthermore, note that the attribute indices are not guaranteed to be 0 and 1; they can be any arbitrary numbers.

If you use GLSL version 3.30 or later, you can set the attribute indices in the shader code with a Layout Qualifier. Alternatively, you can define the attribute indices with glBindAttribLocation before linking the program, or retrieve the attribute indices with glGetAttribLocation after linking the program.
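
For example, a minimal sketch, assuming a shader program object named `program` and the attribute names `position` and `normal` from the vertex shader above:

// With GLSL 3.30 the locations can be fixed in the vertex shader itself:
//   layout(location = 0) in vec3 position;
//   layout(location = 1) in vec3 normal;

// Alternatively, bind the locations explicitly before linking:
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "normal");
glLinkProgram(program);

// Or query the locations the linker assigned, after linking:
GLint positionLocation = glGetAttribLocation(program, "position");
GLint normalLocation   = glGetAttribLocation(program, "normal");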

– Rabbid76
  • Maybe I couldn't phrase this correctly, but what I'm trying to say is: **how can I eliminate the need for the `vertices` and `normals` vectors?** As far as I can see from this solution, they are still being used. The idea is that I want to reduce the memory consumption per mesh. – Mostafa Mahmoud Jul 24 '20 at 12:50
  • @MostafaMahmoud Do you mean you want to combine the vertex coordinate and normal vector into an attribute tuple with 6 components? – Rabbid76 Jul 24 '20 at 14:20
  • Not exactly. I just want to use the vertex and normal data stored in the mesh structure directly, to remove the need for the two nested for loops in the `readMesh()` function. – Mostafa Mahmoud Jul 25 '20 at 16:13
  • @MostafaMahmoud So the question has nothing to do with OpenGL or GLSL; it is just a C++ coding question? – Rabbid76 Jul 25 '20 at 16:35
  • Not a C++ coding question; it's almost an OpenMesh question. – Mostafa Mahmoud Jul 25 '20 at 16:55
  • @MostafaMahmoud The question is neither about the OpenGL instructions nor about the shader code. Anyway, what you want to achieve is not possible, because the normal vector is per face, but you have to specify the normal vectors per vertex. – Rabbid76 Jul 25 '20 at 16:56