I was watching some talks by Dan Ingalls in which he described how they achieved near-real-time 2D graphics back in the 1970s using a technique called BitBLT (bit block transfer).
This was all done in software, writing directly into the framebuffer. Is there any reason techniques like this can't be used on modern GPU hardware?
Is this how it's done on modern GPUs?
I have a high-level understanding of the 3D rendering pipeline that's used even for 2D graphics, but couldn't some of these old techniques be given a large boost by all that power on a GPU?
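For context, as I understand it the core of a software BitBLT is just a rectangular copy between pixel buffers, with an optional combining operation (copy, OR, XOR, etc.) applied per pixel. A rough sketch of the idea (my own illustration, not code from the talks; buffers are flat arrays of one byte per pixel):

```python
def bitblt(dst, dst_w, src, src_w, sx, sy, w, h, dx, dy, op=lambda d, s: s):
    """Copy a w x h rectangle from src (width src_w) into dst (width dst_w).

    The rectangle starts at (sx, sy) in src and lands at (dx, dy) in dst.
    `op` combines the existing destination pixel with the source pixel;
    the default simply overwrites (a plain copy).
    """
    for row in range(h):
        for col in range(w):
            si = (sy + row) * src_w + (sx + col)
            di = (dy + row) * dst_w + (dx + col)
            dst[di] = op(dst[di], src[si])


# Demo: copy a 2x2 block of 9s into a 4x4 destination at offset (1, 1).
dst = bytearray(16)          # 4x4 screen, all zeros
src = bytearray([9] * 16)    # 4x4 source, all nines
bitblt(dst, 4, src, 4, 0, 0, 2, 2, 1, 1)

# A classic trick: blitting with XOR twice restores the background,
# which is how cheap reversible cursors/rubber-banding were done.
bitblt(dst, 4, src, 4, 0, 0, 2, 2, 1, 1, op=lambda d, s: d ^ s)
bitblt(dst, 4, src, 4, 0, 0, 2, 2, 1, 1, op=lambda d, s: d ^ s)
```

My question is essentially whether this kind of per-pixel rectangle operation maps well onto how GPUs actually work today.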