
After getting frustrated with the fixed-function pipeline being slow in Python, I started looking into shaders to draw my game's parallax backgrounds. However, I can't find any simple way to control which texture gets applied to which vertices.

I currently have a PyOpenGL Vertex Buffer Object with an array of vertices, each of which is like this: [x, y, z, texX, texY, texID]

I'm passing these to the shaders with glGetAttribLocation/glEnableVertexAttribArray. I'm not sure whether it works, since the shaders don't currently compile, but things were working fine with coloured vertices before I added my (probably terrible) texture code.
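For reference, here is a minimal sketch (not the actual code from my project) of how that interleaved [x, y, z, texX, texY, texID] layout maps to the byte stride and offsets that glVertexAttribPointer needs; the attribute name "position" is illustrative:

```python
import numpy as np

# Each vertex is [x, y, z, texX, texY, texID], packed as 32-bit floats.
vertices = np.array([
    # x    y    z    texX texY texID
    [0.0, 0.0, 0.0,  0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0,  1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0,  1.0, 1.0, 0.0],
], dtype=np.float32)

stride = vertices.strides[0]          # bytes from one vertex to the next
pos_offset = 0                        # x, y, z start at byte 0
tex_offset = 3 * vertices.itemsize    # texX, texY, texID start after 3 floats

# With a live GL context, the attribute pointer setup would then look like:
#   loc = glGetAttribLocation(program, b"position")
#   glEnableVertexAttribArray(loc)
#   glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, stride,
#                         ctypes.c_void_p(pos_offset))
print(stride, tex_offset)  # 24 12
```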

What I want is to draw a textured model with a shader, with multiple textures, but the model is a flat background. It's a large (1600x800 or so) image, which is split into 256x256 textures and quad polys. (Side question: Is it better to have more small polys, so less bits of polygon are off-screen, or big polys so that there's less texture-binding?)

Because it's multiple, related textures, I want to do it all in the shader without having to bind the texture for each chunk on the CPU, so I thought sending texture data with vertex data would be best, but I can't get it to work.

Could someone please give me a simple example of the vertex and fragment shaders interacting to make multiple polys with different textures?

EDIT: shaders, vbo and things here: http://pastebin.com/3LaYiyvh Shaders now compile, but triangles are invisible.

EDITEDIT: Got there! Not sure what finally cracked it, but here's the program: http://pastebin.com/k87sfiEf I think the rule is: as long as you bind all the textures at the start, the shader can swap between them. But I'm not sure. Also of note: it's a bad idea to split a big background into smaller chunks to draw, because of the cost of swapping textures while drawing. Big textures with wasted space are fine, and atlases are better!
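For anyone else stuck here, a rough sketch of the idea (bind everything up front, pick in the shader); these are not the pastebin's exact shaders, and the names texCoord, tex0 and tex1 are illustrative assumptions:

```python
# GLSL 1.20-era shaders held as Python strings, as PyOpenGL expects.
VERTEX_SRC = """
attribute vec3 position;
attribute vec3 texCoord;   // (texX, texY, texID) packed together

varying vec2 vUV;
varying float vTexID;

void main() {
    vUV = texCoord.xy;
    vTexID = texCoord.z;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);
}
"""

FRAGMENT_SRC = """
uniform sampler2D tex0;    // bound to texture unit 0 on the CPU side
uniform sampler2D tex1;    // bound to texture unit 1 on the CPU side

varying vec2 vUV;
varying float vTexID;

void main() {
    // Branch on the interpolated ID; both textures stay bound the whole time.
    if (vTexID < 0.5)
        gl_FragColor = texture2D(tex0, vUV);
    else
        gl_FragColor = texture2D(tex1, vUV);
}
"""
```

On the CPU side you would compile these with PyOpenGL's shaders.compileShader/compileProgram, then once at startup bind each texture to its unit and set glUniform1i(location_of_tex0, 0) and glUniform1i(location_of_tex1, 1).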

bonzairob
  • 1. 1600x800 will fit into a single texture on any hardware since the Riva TNT 2 Pro. 2. Before asking questions, google for "GLSL tutorial" and make your code compile. 3. If Python is slow, you can always switch to a compiled language like C/C++. – SigTerm Mar 02 '12 at 23:13
  • @SigTerm 1: Seems like a lot of wasted texture/memory area, is all. Doesn't solve my problem either way. 2: Gosh, Google! I never woulda thunk it! :P I've spent the last two days trying to find an answer to this... it compiled fine before I added the texture code. That's why I've asked for an example, really. 3: The issue was my coding, not really Python; it could run faster fixed-function in C, sure, but it would still be terrible. I want to learn the right way. Thanks, though. – bonzairob Mar 02 '12 at 23:45
  • "lot of wasted": non-power-of-2 textures are supported on many cards. There are limitations, though. It makes sense to simply ignore "wasted" memory, since it is a micro-optimization. "Google! I never woulda thunk it!" See? I knew it. "but it would still be terrible": you **easily** can get **200..400 fps** with fixed-function OpenGL in C/C++ if you use the right algorithms (application: dungeon crawler), and that's without using display lists or vertex buffer objects, just raw OpenGL. I wouldn't call it "slow". Python, on the other hand, has significant function call overhead. – SigTerm Mar 03 '12 at 05:54
  • @SigTerm Maybe I wasn't clear - I HAVE been using Google, for two days; maybe I'm not searching for the right terms. I'm well aware that C is faster than Python, most likely faster for old fixed-function things than Python is with shaders, but that doesn't mean my code is working either way. I am not willing to do this in C, because it would take far too long and I'm nowhere near as experienced in C as I am in Python. – bonzairob Mar 03 '12 at 10:20

3 Answers


After getting frustrated with the fixed-function pipeline being slow in Python, I started looking into shaders to draw my game's parallax backgrounds.

If fixed function is slow for you, using shaders is not going to make it faster.

Probably you're just doing something fundamentally wrong, like using immediate mode, or having no HW acceleration at all (due to lack of properly installed drivers or similar).

datenwolf
  • Perhaps, but the bottlenecks in the fixed-function version were all down to Python being somewhat slow, so offloading work to the graphics card seems like it would fix it. It was certainly going better until I tried to texture things. – bonzairob Mar 03 '12 at 10:16
  • @bonzairob: There's no kind of work you can offload into the shader that Python currently does. That's simply not how they work. Shaders cannot issue drawing commands by themselves. Shaders cannot switch between textures. Shaders cannot set uniforms. And frankly, the performance impact of the Python OpenGL bindings when it comes down to rendering is negligible if you're doing it right. – datenwolf Mar 03 '12 at 11:36
  • @bonzairob: You're not using glBegin(…), glVertex(…), glEnd(…), are you? Because if you do, then **that** is your bottleneck, and no shader in the world can do something about it. Suggestion: Post your code at pastebin and let us have a look at it. – datenwolf Mar 03 '12 at 11:37
  • @bonzairob: Oh just FYI: Splitting a single large texture into small ones and switching between them *reduces* performance, because with every texture switch you're effectively invalidating the GPUs caches. – datenwolf Mar 03 '12 at 11:39
  • no, I'm not using begin/end, but I was before. And I'll stop using small textures too. There's a pastebin here http://pastebin.com/3LaYiyvh I've got the shaders compiling now, but the triangles are invisible. Thanks! – bonzairob Mar 03 '12 at 12:25
  • Your problem lies in line 10 of the source code. You should add the translation to the vector before multiplying the sum of them with the transformation matrix. – datenwolf Mar 03 '12 at 12:44
  • Ah yeah, logic fail on my part. Triangles are still invisible, though. :( – bonzairob Mar 03 '12 at 22:53

Could someone please give me a simple example of the vertex and fragment shaders interacting to make multiple polys with different textures?

Google for GLSL tutorials. Examples were written before, and there's no reason to write them again just for you. Or download the NVidia OpenGL SDK and examine it. OpenGL.org also recommends books; the "Orange Book" covers shaders.

which is split into 256x256

Traditionally it is recommended to do the opposite: take all the textures you can and combine them into a single "atlas" texture, preferably something large like 16384x16384, to minimize state switching.
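As a tiny illustration of the atlas bookkeeping (a hypothetical helper, not from the question): each tile's UV rectangle is just its pixel rectangle divided by the atlas size, so no texture switching is needed between tiles:

```python
def atlas_uvs(tile_x, tile_y, tile_size, atlas_size):
    """Return (u0, v0, u1, v1) for one tile inside a square atlas texture.

    tile_x/tile_y are tile indices; tile_size and atlas_size are in pixels.
    Illustrative helper, not from the original post.
    """
    u0 = tile_x * tile_size / atlas_size
    v0 = tile_y * tile_size / atlas_size
    u1 = u0 + tile_size / atlas_size
    v1 = v0 + tile_size / atlas_size
    return (u0, v0, u1, v1)

# e.g. the second 256x256 tile across in a 1024x1024 atlas:
print(atlas_uvs(1, 0, 256, 1024))  # (0.25, 0.0, 0.5, 0.25)
```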

What I want is to draw a textured model with a shader, with multiple textures, but the model is a flat background. It's a large (1600x800 or so) image

1600x800 will fit entirely into a single texture on pretty much any hardware since the Riva TNT 2 Pro. If you care about "wasted" texture memory, many cards support non-power-of-2 textures. However, non-power-of-2 textures normally have limitations, and on some hardware (certain ATI cards + drivers) using such a texture will cause the fps to take a nosedive, i.e. from 200..400 down to 40. It isn't worth it. On hardware with even 256MB of VRAM, an unused 40 percent of a texture is a micro-optimization. And then again, you could use texture atlases and fill the "wasted" space with something useful, if you feel stingy about VRAM usage. Also, keep in mind that you never know how efficiently the driver uses video memory.

I want to do it all in the shader without having to bind the texture for each chunk on the CPU,

You can't do that. Shaders don't bind textures. They use textures that have already been bound.

I'm not sure if it works, since the shaders don't currently compile,

Well, make them compile and ask again. It is not possible to assist you without seeing your shader or the error message. You do know that the shader compiler produces error messages, right?

After getting frustrated with the fixed-function pipeline being slow in Python

Consider switching to a compiled language like C or C++. You can easily get 200..400 frames per second with raw fixed-function OpenGL in a C/C++ application (a "dungeon crawler") without using buffers or display lists, IF you use a correct algorithm for hidden surface removal (and vsync is disabled) AND your textures are mip-mapped. Well-known "fixed function" applications include Quake 1..3, Half-Life 1, Cube and many other games which are blazingly fast. Which means: if it is slow, it is your fault.

Unlike C/C++, Python has a larger function call overhead: executing bytecode, extracting a value of unknown type from a list/tuple (which can contain "anything" by design), then dumping it as a float into something like glVertex3f, and finally forwarding it to the native API call WILL be slower than the equivalent C/C++ call, which has no intermediate steps. You can counteract that by using display lists or buffer objects, but for me it isn't worth the effort. However, using a specific language for a specific task is a matter of personal preference.
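A toy illustration of that call-count difference (GL is stubbed out here; nothing touches OpenGL): immediate mode makes one Python-to-C call per vertex, while a buffer object draw is a single call per batch:

```python
calls = {"n": 0}

def fake_glVertex3f(x, y, z):
    """Stand-in for glVertex3f; just counts how often Python crosses into it."""
    calls["n"] += 1

quads = 100  # e.g. 100 background quads

# Immediate mode: one call per vertex, every frame...
for _ in range(quads):
    for _ in range(4):
        fake_glVertex3f(0.0, 0.0, 0.0)
immediate_calls = calls["n"]

# ...versus a VBO: the data is uploaded once, and the whole batch is drawn
# with a single glDrawArrays call, so per-frame Python overhead is constant.
vbo_calls = 1

print(immediate_calls, vbo_calls)  # 400 1
```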

--EDIT--

However, if shaders can't bind textures, how does multitexturing work?

There are N texture stages (at least 2; see glGetIntegerv with GL_MAX_TEXTURE_COORDS/GL_MAX_TEXTURE_UNITS), and you set multiple textures at once. See glActiveTexture. Without shaders (fixed function) you specify a color operation for each stage using glTexEnv. With shaders, you set multiple textures, specify which sampler uses which texture stage using glUniform1i/glUniform1iv, then read data from them within the shader using sampler2D uniforms and lookup functions like texture2D. The shader cannot switch textures. It can use textures that have already been set by your program. The shader has no knowledge of any textures outside of the shader. Technically, the shader doesn't even know about "textures": it has "samplers", which are used to read data.
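A sketch of that CPU-side setup: the unit-assignment logic is plain Python, while the GL calls are shown as comments since they need a live context (texture_ids and program are assumed names, not from the question):

```python
def assign_texture_units(sampler_names):
    """Map each sampler uniform name to a texture unit index (0, 1, 2, ...).

    Illustrative helper; the shader only ever sees the unit numbers set
    through glUniform1i, never the texture objects themselves.
    """
    return {name: unit for unit, name in enumerate(sampler_names)}

units = assign_texture_units(["tex0", "tex1"])

# With a context and a compiled program, the binding loop would look like:
#   for name, unit in units.items():
#       glActiveTexture(GL_TEXTURE0 + unit)   # select the texture stage
#       glBindTexture(GL_TEXTURE_2D, texture_ids[name])
#       glUniform1i(glGetUniformLocation(program, name), unit)
print(units)  # {'tex0': 0, 'tex1': 1}
```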

Or texture sampling? Shaders must be able to do some work with selecting/changing textures...

The shader does not switch or select textures at all. That part is done by your program. For more information, read the OpenGL specification and the GLSL specification for your version of OpenGL/GLSL. Both are available for download from the opengl.org website, in the "documentation" menu.

As I already said, at this point you need a GLSL tutorial or book. Both are easy to find. Get either one and keep reading it until you "get" it. Currently it doesn't look like you've done your homework and tried to make a simple shader using a book or tutorial. If you can't find a book, the NVidia OpenGL SDK has plenty of examples (in C/C++, but it isn't that hard to convert them).

SigTerm
  • Ok, that's eased my mind on the texture things somewhat, thanks. However, if shaders can't bind textures, how does multitexturing work? Or texture sampling? Shaders must be able to do some work with selecting/changing textures... And as above, I'm not willing to swap to C. At this point, it's not the issue at all. The issue you raised with Python analysing values is mitigated with the NumPy library and just plain specifying any numbers as floats. – bonzairob Mar 03 '12 at 10:25

Perhaps you can find something useful in my Minecraft mapping blog posts. All the examples use Python, Pygame and PyOpenGL. They do lots of stuff in a fragment shader. They only use trivial geometry: just one quad.

From your description, it sounds like you are much better off with one large texture than with multiple small ones. However, there are scenarios where it really does make sense to have lots of small textures and select between them in the shader. Array textures can be useful in this situation, as they don't suffer the same problems with filtering and clamping that you can get when using a texture atlas. (Indeed, the examples in my blog use a texture atlas and suffer some problems with mip-mapping when zoomed out too far. I've recently been using array textures to solve this problem.)
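For what it's worth, a rough sketch of an array-texture lookup (GLSL 1.30-style source held as a Python string, with illustrative names, not code from my blog): the layer coordinate replaces the atlas UV arithmetic, and filtering never bleeds across layers:

```python
ARRAY_FRAG_SRC = """
#version 130
uniform sampler2DArray tiles;  // all 256x256 tiles stacked as layers

in vec2 vUV;
flat in int vLayer;            // which layer this fragment samples

out vec4 fragColor;

void main() {
    // The third coordinate selects the layer; mip-mapping and clamping
    // operate per layer, avoiding atlas bleeding artifacts.
    fragColor = texture(tiles, vec3(vUV, float(vLayer)));
}
"""
```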

Weeble