I do a lot of game programming in my free time, and am currently working on a game engine library. Until now I have written custom per-game engines built straight into each application; to challenge my logical skills even further, I decided to build an engine I can reuse with any game I write, somewhat like a plugin.
Before this point, I have been loading textures into a BufferedImage and using getRGB to pull out an int[] of pixels, then writing the texture's int[] over the background int[] by hand at the (X, Y) position of each Renderable. Once everything is composited into the master int[], I create a new BufferedImage, copy the master array in with setRGB, and draw it to the screen with drawImage on a BufferStrategy and its Graphics. I liked this method because I felt like I had complete control over the way things were rendered, but I don't think it was very efficient.
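For context, here is a condensed, headless sketch of that final "present" step (class and variable names are made up for the example; the real code hands the image to a BufferStrategy, which needs a window and is omitted here):

```java
import java.awt.image.BufferedImage;
import java.util.Arrays;

public class FramePresentSketch {
    public static void main(String[] args) {
        int width = 4, height = 4;

        // The master int[] that all the Renderables were hand-blended into.
        int[] pixels = new int[width * height];
        Arrays.fill(pixels, 0xFF102030); // some opaque background color

        // Copy the master array into a BufferedImage via setRGB...
        BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        frame.setRGB(0, 0, width, height, pixels, 0, width);

        // ...then the real code would do something like:
        //   Graphics g = bufferStrategy.getDrawGraphics();
        //   g.drawImage(frame, 0, 0, null);
        //   g.dispose();
        //   bufferStrategy.show();
        System.out.println(Integer.toHexString(frame.getRGB(1, 1)));
    }
}
```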
Here is the method I used to write into the master int[]:
public void draw(Render render, int x, int y, float alphaMul){
    for(int i = 0; i < render.Width; i++){
        int xPix = i + x;
        if(Width - 1 < xPix || xPix < 0) continue; // clip horizontally
        for(int j = 0; j < render.Height; j++){
            int yPix = j + y;
            if(Height - 1 < yPix || yPix < 0) continue; // clip vertically

            int srcARGB = render.Pixels[i + j * render.Width];
            int dstARGB = Pixels[xPix + yPix * Width];

            // Unpack the channels, scaling source alpha by alphaMul
            int srcAlpha = (int)((0xFF & (srcARGB >> 24)) * alphaMul);
            int srcRed   = 0xFF & (srcARGB >> 16);
            int srcGreen = 0xFF & (srcARGB >> 8);
            int srcBlue  = 0xFF & (srcARGB);
            int dstAlpha = 0xFF & (dstARGB >> 24);
            int dstRed   = 0xFF & (dstARGB >> 16);
            int dstGreen = 0xFF & (dstARGB >> 8);
            int dstBlue  = 0xFF & (dstARGB);

            // Source-over blend: out = src*srcA + dst*(1 - srcA)
            float srcAlphaF = srcAlpha / 255.0f;
            float dstAlphaF = dstAlpha / 255.0f;
            int outAlpha = (int)((srcAlphaF + dstAlphaF * (1 - srcAlphaF)) * 255);
            int outRed   = (int)(srcRed   * srcAlphaF) + (int)(dstRed   * (1 - srcAlphaF));
            int outGreen = (int)(srcGreen * srcAlphaF) + (int)(dstGreen * (1 - srcAlphaF));
            int outBlue  = (int)(srcBlue  * srcAlphaF) + (int)(dstBlue  * (1 - srcAlphaF));

            // Repack and store the blended pixel
            Pixels[xPix + yPix * Width] = (outAlpha << 24) | (outRed << 16) | (outGreen << 8) | (outBlue);
        }
    }
}
I have recently found out it may be many times faster to use drawImage directly: loop through all of the Renderables and draw each one as a BufferedImage at its respective (X, Y) position. But I do not know how to do alpha blending that way. So my questions are: how would I go about getting the results that I want, and would it be beneficial in resources and time over my previous method?
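Here is roughly what I'm imagining, as a minimal headless sketch (method and variable names are my own; the pieces used are the standard Java2D Graphics2D, drawImage, and AlphaComposite APIs). SRC_OVER with an extra alpha multiplier looks like it would match my per-pixel blend, but I'm not sure this is the right approach:

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CompositeSketch {
    // Draws 'sprite' onto 'target' at (x, y) with an extra alpha multiplier,
    // letting Java2D do the per-pixel source-over blend.
    static void draw(BufferedImage target, BufferedImage sprite,
                     int x, int y, float alphaMul) {
        Graphics2D g = target.createGraphics();
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alphaMul));
        g.drawImage(sprite, x, y, null); // hardware-accelerated when possible
        g.dispose();
    }

    public static void main(String[] args) {
        BufferedImage target = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        target.setRGB(0, 0, 0xFF000000); // opaque black background pixel

        BufferedImage sprite = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        sprite.setRGB(0, 0, 0xFFFF0000); // opaque red sprite pixel

        draw(target, sprite, 0, 0, 1.0f);
        System.out.println(Integer.toHexString(target.getRGB(0, 0))); // ffff0000
    }
}
```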
Thanks -Craig