
So it took me quite some time to solve one problem I had in my code, and I'm very interested in some of the details. I've written down what exactly I was doing at the end of this post.

So I was reading an image that I wanted to use:

static BufferedImage img = null;
img = ImageIO.read(new File("/home/user/doggo.jpg")); // throws IOException

Then I created a second BufferedImage to store the changes I was going to make:

BufferedImage newimg = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);

Because some parts of the image would not be changed, I figured I'd "copy" the BufferedImage into the new one (so there wouldn't be empty areas) by doing:

newimg = img;

When I ran my code, the image would sometimes come out distorted, pixelated, or otherwise not what I hoped for. I was sure my algorithm was 100% correct and that there was no reason it shouldn't work.

It turned out, at least as far as I can tell, that I was "copying" one type of BufferedImage into a different type.

img.getType() returned

1

img.getColorModel() returned:

DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0

and img.getSampleModel() returned:

java.awt.image.SinglePixelPackedSampleModel@8080b20

For the newly created BufferedImage I got:

newimg.getType() returned

5

newimg.getColorModel() returned:

ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@76fb509a transparency = 1 has alpha = false isAlphaPre = false

and newimg.getSampleModel() returned:

java.awt.image.PixelInterleavedSampleModel@3086002

I'm mostly interested in how ImageIO reads the image into a BufferedImage. How does it decide what type of BufferedImage it will create? The image I was reading was a normal JPEG file with RGB values, so I presumed it would not do much harm to copy the BufferedImage objects like that. By now I realize it is not as simple as I imagined, but I'm still in the dark about what happened in the background. I tried reading the Oracle docs, but they seem either too lacking or too abstract for me to comprehend.
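For reference, I checked what the integers from `getType()` correspond to — they map to the `BufferedImage` type constants (this is a quick check I ran, nothing image-specific):

```java
import java.awt.image.BufferedImage;

public class TypeCheck {
    public static void main(String[] args) {
        // getType() returns one of the BufferedImage.TYPE_* constants:
        // 1 -> TYPE_INT_RGB: pixels packed into ints (SinglePixelPackedSampleModel,
        //      DirectColorModel with the rmask/gmask/bmask seen above)
        System.out.println(BufferedImage.TYPE_INT_RGB);   // 1
        // 5 -> TYPE_3BYTE_BGR: one byte per component, interleaved
        //      (PixelInterleavedSampleModel, ComponentColorModel)
        System.out.println(BufferedImage.TYPE_3BYTE_BGR); // 5
    }
}
```

So the two images didn't just differ in an abstract "type" number; their pixels are laid out in memory completely differently.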

As for what I was coding: I was doing kernel convolution on images (blur, edge detection, etc.), and I just wanted to copy the source image into the destination one because I didn't do any edge wrapping/clipping and I did not want the edges of the newly created image to be empty.
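For anyone finding this later: what I now believe I should have done, instead of the assignment, is actually draw the source into the destination. A sketch of that (as I understand it, `drawImage` converts between pixel layouts, so the destination keeps the type you asked for):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CopyInto {
    // Copies src's pixels into a fresh TYPE_INT_RGB image.
    // drawImage performs the format conversion, so dst has a known,
    // fixed pixel layout regardless of what ImageIO gave us for src.
    static BufferedImage copyToIntRgb(BufferedImage src) {
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
                                              BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for the loaded JPEG: a 4x4 TYPE_3BYTE_BGR image
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_3BYTE_BGR);
        src.setRGB(1, 1, 0xFF00FF00); // one green pixel
        BufferedImage dst = copyToIntRgb(src);
        System.out.println(dst.getType() == BufferedImage.TYPE_INT_RGB); // true
        System.out.println(dst.getRGB(1, 1) == 0xFF00FF00);              // true
    }
}
```

That way the convolution can skip the border pixels and they still hold the original image data.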

  • Important: `newimg = img;` does not *copy* anything. It's a simple assignment. After this, `newimg` and `img` will refer to the *same* `BufferedImage` instance. Given this, I'm not sure if the rest of the question even makes sense... – Harald K Jul 28 '17 at 09:45
  • Oh ok, but why are they different then, when you call getType() and such? The getters are executed after the assignment. – strudelj nudelj Jul 28 '17 at 13:35
  • Sure, they *were* different after the *initial* assignment. But not after `newimg = img;`. After that, they are the same. If you still think they are not, please provide an [MCVE](https://stackoverflow.com/help/mcve), as there's clearly more going on in your code than what you have posted. – Harald K Jul 28 '17 at 14:35
  • Here is the code. https://gist.github.com/pitastrudl/1b8a6e29793bdaf0be858857b36bd334. I cleaned it up and added some comments. The important line is at line 25. If you comment that out, the program works. If you leave it there, the picture comes out distorted. The correct output is an image which has been processed by an edge detection mask. – strudelj nudelj Jul 28 '17 at 20:37
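A minimal demonstration of the point made in the comments above — the assignment copies the reference, not the pixels, so afterwards both variables name the same image (variable names are just illustrative):

```java
import java.awt.image.BufferedImage;

public class AliasDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage newimg = new BufferedImage(2, 2, BufferedImage.TYPE_3BYTE_BGR);

        newimg = img; // rebinds the reference; the TYPE_3BYTE_BGR image is simply discarded

        System.out.println(newimg == img);    // true: same object
        System.out.println(newimg.getType()); // 1, i.e. TYPE_INT_RGB

        // Writing through one reference is visible through the other,
        // because there is only one backing raster now.
        newimg.setRGB(0, 0, 0xFF123456);
        System.out.println(img.getRGB(0, 0) == newimg.getRGB(0, 0)); // true
    }
}
```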
