I have a program that requires going through BufferedImages pixel by pixel quite often. Normally efficiency wouldn't matter enough for me to care, but I really want every millisecond I can get.
As an example, right now, the fastest way I've found of isolating the red channel in an image looks like this:
int[] rgb = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
for (int i = 0; i < rgb.length; i++)
    rgb[i] = rgb[i] & 0xFFFF0000;
image.setRGB(0, 0, image.getWidth(), image.getHeight(), rgb, 0, image.getWidth());
Which means it's going through the image once to populate the array, a second time to apply the filter, and a third time to write the pixels back. And since any pixel with an alpha value of 0 comes back completely zeroed out, it must be making at least one more pass somewhere as well.
I've also tried using the individual-pixel versions of getRGB() and setRGB() in a more traditional nested for loop, but that's even slower (though it probably uses far less RAM, since it doesn't need the big int[]).
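For reference, the per-pixel attempt looked roughly like this (reconstructed from memory, not my exact code; it uses the same mask as the bulk version above):

// roughly the per-pixel version I tried -- one getRGB()/setRGB() call per pixel
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        int argb = image.getRGB(x, y);          // pixel in default ARGB packing
        image.setRGB(x, y, argb & 0xFFFF0000);  // keep alpha + red, zero green and blue
    }
}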
This is the kind of problem that Iterable types solve, but I can't find any way of applying that principle to images. For this project, I'm okay with total hacks that aren't "best practices" as long as they work.
Is there any way of iterating over the raw data of a buffered image?