I'm making a 2D SDL game in C. The game's native resolution is 160 × 120, and just before being displayed on screen it is upscaled by an integer scale factor (×1, ×2, ...) to give a nice aliased, pixelated, indie-like look. Nothing surprising there. However, the scaling has quite a visible impact on the game's performance: the higher the scale factor, the bigger the impact (obviously).
The code that handles the scaling and displaying is the following:
/* Read a pixel from a 32-bit surface; assumes pitch == w * sizeof(Uint32). */
#define GET_PIXEL(surface, x, y) \
    (*((Uint32 *)(surface)->pixels + (y) * (surface)->w + (x)))

void draw(void) {
    if (SDL_MUSTLOCK(screen.native))
        SDL_LockSurface(screen.native);

    /* For each native pixel, fill the matching scale x scale block
       of the scaled surface with that pixel's colour. */
    for (int i = 0; i < 120; ++i)
        for (int j = 0; j < 160; ++j) {
            SDL_Rect rect = { j * screen.scale, i * screen.scale,
                              screen.scale, screen.scale };
            SDL_FillRect(screen.scaled, &rect, GET_PIXEL(screen.native, j, i));
        }

    if (SDL_MUSTLOCK(screen.native))
        SDL_UnlockSurface(screen.native);

    SDL_Flip(screen.scaled);
}
The draw() function is called every frame. screen.native is the 160 × 120 game surface, and screen.scaled is the final surface after scaling. Both are 32-bit surfaces created with the SDL_HWSURFACE flag.
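For reference, the setup looks roughly like this (a minimal sketch only: the struct layout, the init_video helper name, and everything except the SDL_HWSURFACE flag mentioned above are assumptions, not my exact code):

#include <SDL/SDL.h>

/* Assumed layout, inferred from how screen.native, screen.scaled and
   screen.scale are used in draw() above. */
static struct {
    SDL_Surface *native;  /* 160 x 120 off-screen game surface       */
    SDL_Surface *scaled;  /* video surface, native size times scale  */
    int scale;            /* integer scale factor                    */
} screen;

/* Hypothetical init helper, not shown in the question. */
void init_video(int scale) {
    SDL_Init(SDL_INIT_VIDEO);
    screen.scale  = scale;
    screen.scaled = SDL_SetVideoMode(160 * scale, 120 * scale, 32, SDL_HWSURFACE);
    screen.native = SDL_CreateRGBSurface(SDL_HWSURFACE, 160, 120, 32, 0, 0, 0, 0);
}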
Is there any better way to do this to improve performance?