
I'm creating a Java program that uses multiple images to create animations, and I am looking for the best way to read/load images with ImageIO without using a lot of memory.

I've noticed that in a similar post, getSubimage() is mentioned as a way to improve performance: Using several bufferedImages in java

My question is: based on the way Java's ImageIO handles images, is it better to keep a few extremely large grouped images and separate them with getSubimage() (possibly as large as 100,000 by 100,000 pixels, because the program uses several scrolling backgrounds), to split every image into the smallest possible pieces and load them individually, or to use a combination of the two? I am currently grouping the smaller images and loading them as one image, but loading the larger images individually. Performance varies anywhere from loading in a few seconds to nearly crashing my laptop and forcing me to reboot.
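Here is a minimal sketch of what I mean by grouping smaller images and separating them with getSubimage(). The file name "tiles.png" and the 64x64 tile size are just placeholders for my actual assets:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class SpriteSheetDemo {
    public static void main(String[] args) throws IOException {
        // Load one grouped image (sprite sheet) in a single read.
        BufferedImage sheet = ImageIO.read(new File("tiles.png"));

        int tileW = 64, tileH = 64;               // placeholder tile size
        int cols = sheet.getWidth() / tileW;
        int rows = sheet.getHeight() / tileH;

        BufferedImage[][] tiles = new BufferedImage[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // getSubimage returns a view that shares the sheet's pixel data,
                // so no additional pixel memory is allocated here.
                tiles[r][c] = sheet.getSubimage(c * tileW, r * tileH, tileW, tileH);
            }
        }
    }
}
```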

I was wondering if anybody could provide reasoning - if possible in terms of Big O efficiency - for the benefits of loading many small images versus the benefits of loading a few massive images. Does ImageIO loading work like a sorting algorithm, where (depending on the algorithm) it can become much slower as the input size increases, but can be tackled fairly quickly if the input is broken into subsets?

The Oracle documentation for ImageIO.read(File input) is as follows:

Returns a BufferedImage as the result of decoding a supplied File with an ImageReader chosen automatically from among those currently registered. The File is wrapped in an ImageInputStream. If no registered ImageReader claims to be able to read the resulting stream, null is returned.
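From that description, the convenience method wraps the lower-level ImageReader API. Below is a sketch of how that API could be used to request only a region of a large file via ImageReadParam.setSourceRegion; the file name and region size are placeholders, and whether the reader actually avoids decoding the whole image this way depends on the format plugin, so treat this as an assumption rather than a measured result:

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

public class RegionReadDemo {
    public static void main(String[] args) throws IOException {
        File file = new File("background.png"); // placeholder file name
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            // Same reader lookup that ImageIO.read(File) performs internally.
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IOException("No ImageReader found for " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in);
                // Ask the reader to decode only the region that is needed right now.
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceRegion(new Rectangle(0, 0, 1024, 768)); // placeholder region
                BufferedImage region = reader.read(0, param);
                System.out.println(region.getWidth() + " x " + region.getHeight());
            } finally {
                reader.dispose();
            }
        }
    }
}
```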

I could not find any other indication of how ImageIO actually processes the images. Thank you in advance for any clarification of how this works. If the question is too broad or off-topic, let me know and I will revise or delete it.

  • Generally, large backgrounds are broken up into tiles. Several tiles are assembled to form the visible background. The time it takes to read an image goes up with the area of the image. A 500 x 300 pixel image has 150,000 pixels. A 5000 x 3000 pixel image has 15,000,000 pixels. The bigger image would take 100 times longer to read and 100 times longer to process. In the same time as one large image, you could process 100 500 x 300 images. – Gilbert Le Blanc Dec 08 '15 at 16:15
  • @GilbertLeBlanc Thank you, this is very helpful! – Branden Keck Dec 08 '15 at 16:19
  • The question is very broad indeed, and as usual, asking for the "**best**" way enforces a discussion about the use cases and evaluation criteria. In general, you should be careful with `getSubImage`, because *rendering* a sub-image may be *significantly* slower than rendering a small, "standalone" image (one possible workaround is sketched after these comments). In terms of loading performance: Loading 100 images of size 10x10 will likely take longer than loading one 100x100 image, but for large images, this difference should vanish... (to be continued) – Marco13 Dec 08 '15 at 16:42
  • So algorithmically (asymptotically, in "Big-O", as you described it) the loading time should be linear. This means that loading 100 images with 100KB each should not be significantly faster or slower than loading 10 images with 1000KB each. Loading smaller images may increase the flexibility (in terms of memory and error handling). But of course, all this would have to be validated with some tests. Maybe I'll try out a few things later today. – Marco13 Dec 08 '15 at 16:45
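
Following up on the comment about sub-image rendering, here is a minimal sketch of one possible workaround: copying a tile obtained from getSubimage() into its own screen-compatible BufferedImage, so it no longer shares (or keeps alive) the large sheet's pixel data. This is an untested assumption on my part, and it requires a non-headless environment:

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public final class TileCopy {
    private TileCopy() {}

    // Copies a sub-image into a standalone, screen-compatible image.
    public static BufferedImage toStandalone(BufferedImage sub) {
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage copy = gc.createCompatibleImage(
                sub.getWidth(), sub.getHeight(), Transparency.TRANSLUCENT);
        Graphics2D g = copy.createGraphics();
        g.drawImage(sub, 0, 0, null); // copy the pixels; the large sheet can then be released
        g.dispose();
        return copy;
    }
}
```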
