
I'm trying to set up an SDL2 environment for running software-rendering examples later, so I need direct access to pixels to draw. Here is some code that draws one red pixel to a texture and then displays it, as described at https://wiki.libsdl.org/MigrationGuide#If_your_game_just_wants_to_get_fully-rendered_frames_to_the_screen

#include <SDL.h>
#include <stdio.h>

const int SCREEN_WIDTH = 1920;
const int SCREEN_HEIGHT = 1080;

SDL_Window* gWindow;
SDL_Renderer* gRenderer;
SDL_Texture* gTexture;
SDL_Event e;

void* gPixels = NULL;
int gPitch = SCREEN_WIDTH * 4;

bool gExitFlag = false;

Uint64 start;
Uint64 end;
Uint64 freq;
double seconds;

int main(int argc, char* args[])
{
    SDL_Init(SDL_INIT_VIDEO);
    gWindow = SDL_CreateWindow("SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    gRenderer = SDL_CreateRenderer(gWindow, -1, SDL_RENDERER_ACCELERATED); // | SDL_RENDERER_PRESENTVSYNC); vsync is turned off
    gTexture = SDL_CreateTexture(gRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, SCREEN_WIDTH, SCREEN_HEIGHT);

    while (!gExitFlag)
    {
        while (SDL_PollEvent(&e) != 0)
        {
            if (e.type == SDL_QUIT)
            {
                gExitFlag = true;
            }
        }

        start = SDL_GetPerformanceCounter();

        SDL_LockTexture(gTexture, NULL, &gPixels, &gPitch);
        *((Uint32*)gPixels) = 0xFF0000FF; // opaque red in RGBA8888
        SDL_UnlockTexture(gTexture); //20-100ms on different hardware

        end = SDL_GetPerformanceCounter();
        freq = SDL_GetPerformanceFrequency();

        SDL_RenderCopy(gRenderer, gTexture, NULL, NULL);
        SDL_RenderPresent(gRenderer);

        gPixels = NULL;
        gPitch = 0;

        seconds = (end - start) / static_cast<double>(freq);
        printf("Frame time: %fms\n", seconds * 1000.0);
    }

    // Destroy in reverse order of creation
    SDL_DestroyTexture(gTexture);
    SDL_DestroyRenderer(gRenderer);
    SDL_DestroyWindow(gWindow);

    SDL_Quit();
    return 0;
}

As I mentioned in the code comment, SDL_UnlockTexture takes up to 100 ms with a full-HD texture. (Switching to SDL_UpdateTexture makes no significant difference.) That is too much for realtime rendering, I think. Am I doing something wrong, or should I not use the texture API at all (or any other GPU-accelerated API where the texture must be uploaded to GPU memory every frame) for realtime rendering of the whole frame?

  • C is not C++ is not C. Don't use wrong tags! This looks like C, if you compile as C++, change the tag (but don't add!). – too honest for this site Mar 17 '16 at 13:09
  • @Olaf SDL2 is written in C, has a C-style API, and I compile it as C++. What is wrong? The tags were copied from another SDL2 question with a pretty similar specific. – SergeyK Mar 17 '16 at 13:40
  • If you compile as C++, it is C++! It does not matter if the toolkit is written in C. Otherwise you would almost always have to add C and Assembler tags! Note that programming C-style in C++ is bad practice. If you use C style, use C! – too honest for this site Mar 17 '16 at 14:29

1 Answer


Since you want to work with raw pixel data, you should use SDL's SDL_Surface rather than textures. It is a different part of the SDL API, optimized for your case; see this example, and don't forget to update.

The reason is that textures are stored in VRAM, and reads from VRAM are very slow. Surfaces are stored in RAM and processed there, and are only written to VRAM, which is fast.

Peter K
  • Sorry, this answer has been obsolete for a few years; I'll update it – see the dup. – Peter K Mar 17 '16 at 23:30
  • Actually this approach still works just fine; SDL has preserved backwards compatibility in that sense. It does not really matter whether you allocate your pixel data as a raw array or as a surface. Either way you'll end up calling SDL_UpdateTexture (or a texture created with SDL_TEXTUREACCESS_STREAMING) and presenting it, so there's not much difference. – Peter K May 11 '16 at 14:43