
I am taking 3 pictures in my app before uploading them to a remote server. The output is a byteArray. I am currently converting this byteArray to a bitmap and cropping it (cropping the centre square). I eventually run out of memory (that is, after exiting the app, coming back, and performing the same steps). I am trying to re-use the bitmap object using BitmapFactory.Options as mentioned in the Android dev guide

https://www.youtube.com/watch?v=_ioFW3cyRV0&list=LLntRvRsglL14LdaudoRQMHg&index=2

and

https://www.youtube.com/watch?v=rsQet4nBVi8&list=LLntRvRsglL14LdaudoRQMHg&index=3

This is the function I call when I'm saving the image taken by the camera.

public void saveImageToDisk(Context context, byte[] imageByteArray, String photoPath, BitmapFactory.Options options) {
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);
    int imageHeight = options.outHeight;
    int imageWidth = options.outWidth;
    int dimension = getSquareCropDimensionForBitmap(imageWidth, imageHeight);
    Log.d(TAG, "Width : " + dimension);
    Log.d(TAG, "Height : " + dimension);
    //bitmap = cropBitmapToSquare(bitmap);
    options.inJustDecodeBounds = false;

    Bitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0,
            imageByteArray.length, options);
    options.inBitmap = bitmap;

    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension,
            ThumbnailUtils.OPTIONS_RECYCLE_INPUT);
    options.inSampleSize = 1;

    Log.d(TAG, "After square crop Width : " + options.inBitmap.getWidth());
    Log.d(TAG, "After square crop Height : " + options.inBitmap.getHeight());
    byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
    options = null;

    File photo = new File(photoPath);
    if (photo.exists()) {
        photo.delete();
    }


    try {
        FileOutputStream e = new FileOutputStream(photo.getPath());
        BufferedOutputStream bos = new BufferedOutputStream(e);
        bos.write(croppedImageByteArray);
        bos.flush();
        e.getFD().sync();
        bos.close();
    } catch (IOException e) {
    }

}


public int getSquareCropDimensionForBitmap(int width, int height) {
    //If the bitmap is wider than it is tall
    //use the height as the square crop dimension
    int dimension;
    if (width >= height) {
        dimension = height;
    }
    //If the bitmap is taller than it is wide
    //use the width as the square crop dimension
    else {
        dimension = width;
    }
    return dimension;
}


 public Bitmap cropBitmapToSquare(Bitmap source) {
    int h = source.getHeight();
    int w = source.getWidth();
    if (w >= h) {
        source = Bitmap.createBitmap(source, w / 2 - h / 2, 0, h, h);
    } else {
        source = Bitmap.createBitmap(source, 0, h / 2 - w / 2, w, w);
    }
    Log.d(TAG, "After crop Width : " + source.getWidth());
    Log.d(TAG, "After crop Height : " + source.getHeight());

    return source;
}

How do I correctly recycle or re-use bitmaps because as of now I am getting OutOfMemory errors?

UPDATE:

After implementing Colt's solution, I am running into an ArrayIndexOutOfBoundsException.

My logs are below

08-26 01:45:01.895    3600-3648/com.test.test E/AndroidRuntime﹕ FATAL EXCEPTION: pool-3-thread-1
Process: com.test.test, PID: 3600
java.lang.ArrayIndexOutOfBoundsException: length=556337; index=556337
        at com.test.test.helpers.Utils.test(Utils.java:197)
        at com.test.test.fragments.DemoCameraFragment.saveImageToDisk(DemoCameraFragment.java:297)
        at com.test.test.fragments.DemoCameraFragment_.access$101(DemoCameraFragment_.java:30)
        at com.test.test.fragments.DemoCameraFragment_$5.execute(DemoCameraFragment_.java:159)
        at org.androidannotations.api.BackgroundExecutor$Task.run(BackgroundExecutor.java:401)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:237)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
        at java.lang.Thread.run(Thread.java:818)

P.S.: I had thought of cropping byteArrays before, but I did not know how to implement it.

CommonsWare
Rohan
2 Answers


You shouldn't need to do any conversion to bitmaps, actually. Remember that your bitmap image data is RGBA_8888 formatted, meaning that every 4 contiguous bytes represent one pixel. As such:

// helpers to keep the math sane
int halfWidth = imgWidth >> 1;
int halfHeight = imgHeight >> 1;
int halfDim = dimension >> 1;

// get our min and max crop locations
int minX = halfWidth - halfDim;
int minY = halfHeight - halfDim;
int maxX = halfWidth + halfDim;
int maxY = halfHeight + halfDim;

// allocate our thumbnail; it's W x H x (4 bytes per pixel)
byte[] outArray = new byte[dimension * dimension * 4];

int outPtr = 0;
for (int y = minY; y < maxY; y++)
{
    for (int x = minX; x < maxX; x++)
    {
        // row-major RGBA_8888: pixel (x, y) starts at ((y * imgWidth) + x) * 4
        int srcLocation = ((y * imgWidth) + x) * 4;
        outArray[outPtr + 0] = imageByteArray[srcLocation + 0]; // read R
        outArray[outPtr + 1] = imageByteArray[srcLocation + 1]; // read G
        outArray[outPtr + 2] = imageByteArray[srcLocation + 2]; // read B
        outArray[outPtr + 3] = imageByteArray[srcLocation + 3]; // read A
        outPtr += 4;
    }
}
// outArray now contains the cropped pixels.

The end result is that you can do cropping by hand by just copying out the pixels you're looking for, rather than allocating a new bitmap object, and then converting that back to a byte array.
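The copy-out-the-pixels idea above can also be wrapped in a small helper. Here is a minimal, Android-free sketch (the class and method names are illustrative, and it assumes the input really is row-major RGBA_8888 data):

```java
// Minimal sketch of a center-square crop over a raw RGBA_8888 byte array.
// No Android dependency: just index arithmetic and System.arraycopy.
public final class RgbaCrop {
    public static byte[] cropCenterSquare(byte[] src, int width, int height) {
        int dim = Math.min(width, height);      // square crop dimension
        int minX = (width - dim) / 2;           // left edge of the crop
        int minY = (height - dim) / 2;          // top edge of the crop
        byte[] out = new byte[dim * dim * 4];   // 4 bytes per pixel
        int outPtr = 0;
        for (int y = minY; y < minY + dim; y++) {
            for (int x = minX; x < minX + dim; x++) {
                // Row-major layout: pixel (x, y) starts at ((y * width) + x) * 4
                int srcLoc = ((y * width) + x) * 4;
                System.arraycopy(src, srcLoc, out, outPtr, 4);
                outPtr += 4;
            }
        }
        return out;
    }
}
```

Note that `out` is always a subset of `src` by construction, so `out.length` can never exceed `src.length` when the width/height arguments actually describe the buffer.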

== EDIT:

Actually, the above algorithm assumes that your input data is raw RGBA_8888 pixel data. But it sounds like your input byte array is instead the encoded JPG data. As such, your 2nd decodeByteArray is actually decoding your JPG file to the RGBA_8888 format. If this is the case, the proper way to resize is to use the techniques described in "Most memory efficient way to resize bitmaps on android?", since you're working with encoded data.
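The technique in that linked question boils down to a bounds-only decode plus an inSampleSize calculation. The sample-size math itself is pure Java; the surrounding BitmapFactory calls shown in the comment are the standard pattern from the Android "Loading Large Bitmaps Efficiently" guide (a sketch, not the asker's code):

```java
// Sketch of the inSampleSize calculation for decoding encoded (JPEG) data
// at a reduced size. In the app you would feed it options.outWidth/outHeight
// from a bounds-only decode, then decode again with the result:
//
//   options.inJustDecodeBounds = true;
//   BitmapFactory.decodeByteArray(data, 0, data.length, options);
//   options.inSampleSize = SampleSizeCalculator.calculateInSampleSize(
//           options.outWidth, options.outHeight, reqWidth, reqHeight);
//   options.inJustDecodeBounds = false;
//   Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options);
public final class SampleSizeCalculator {
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfHeight = height / 2;
            int halfWidth = width / 2;
            // Keep doubling while both dimensions stay at or above the request.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

Because the decoder subsamples while decompressing, a 1600x1200 JPEG decoded with inSampleSize = 4 materializes only a ~400x300 bitmap, instead of the full-size one that is then thrown away.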

Colt McAnlis
  • Hi Colt. I am running into an java.lang.ArrayIndexOutOfBoundsException, in the loop. I have edited the question and added the log for the same. Could you clarify what imgWidth and width are? I have taken width to be the total bitmap width which we get when we decode it. I am assuming imgWidth is the same. Also the value of outPtr does not change. Should there not be an increment based on iteration? Thank you so much for your response! – Rohan Aug 25 '15 at 22:56
  • updated the code to move outPtr forward, and also updated srcLocation with something more accurate. ArrayIndexOutOfBoundsException just means that outArray or imageByteArray is being indexed at a location greater than its length. – Colt McAnlis Aug 26 '15 at 00:15
  • The byteArray is actually the output of the camera(Commonsware's CWAC Camera). The camera has set picture format to 256 i.e JPEG. JPEGs do not have transparency(alpha channel). So does this matter? – Rohan Aug 26 '15 at 06:26
  • When an image is loaded into memory from a decode process it's always converted to RGBA_8888, regardless of whether it has transparency or not. – Colt McAnlis Aug 26 '15 at 18:35
  • Thanks Colt. Any idea as to why I'm getting the index out of bounds exception? This is a hurdle I have not been able to get past. – Rohan Aug 26 '15 at 18:48
  • outArray should be a subset of imageByteArray, from what I understand. When I print the lengths of the two arrays, the length of outArray is greater than the imageByteArray(which is the i/p array). This does not seem right to me. – Rohan Aug 26 '15 at 19:43
  • 08-27 07:41:03.525 2828-2866/com.test.test D/Utils﹕ imageByteArray length : 279582 08-27 07:41:03.529 2828-2866/com.test.test D/Utils﹕ outArray length : 3686400 – Rohan Aug 27 '15 at 02:13
  • Please see the edit in my answer; I believe that your input data is the compressed JPG bytes, rather than the RGBA_8888 bytes, which should change what you're doing. My original assumption was that the input was raw byte data, and you were converting it to a bitmap for resizing; But it appears that may not be the case. – Colt McAnlis Aug 27 '15 at 11:26
  • Debugged this : imageByteArray length : 335132 imageHeight : 1200 imageWidth : 1600 dimension : 1200 halfWidth : 800 halfHeight : 600 halfDim:600 minX:200 minY : 0 maxX: 1400 maxY : 1200 outArray length : 5760000 (i.e dimension * dimension * 4). Yes it is compressed data!! Not raw byte data. – Rohan Aug 27 '15 at 11:27
  • Thanks again Colt. Will definitely check out that thread. Perf matters! – Rohan Aug 27 '15 at 11:32

Try setting variables to null as soon as you are done with them - this helps the GC reclaim that memory:

after

 byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);

do:

bitmap = null;

after

 FileOutputStream e = new FileOutputStream(photo.getPath());

do

photo = null;

and after

 try {
        FileOutputStream e = new FileOutputStream(photo.getPath());
        BufferedOutputStream bos = new BufferedOutputStream(e);
        bos.write(croppedImageByteArray);
        bos.flush();
        e.getFD().sync();
        bos.close();
    } catch (IOException e) {
    }

do:

e = null;
bos = null;

Edit #1

If this fails to help, your only real solution is actually using the memory monitor. To learn more go here and here

P.s. there is another very dark solution, a very dark solution. Only for those who know how to navigate through the dark corners of off-heap memory. But you will have to follow this path on your own.
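For the curious: the "dark" off-heap path usually means direct ByteBuffers, which live outside the managed heap. A purely illustrative sketch (this is not spelled out in the answer, and the class name is made up):

```java
import java.nio.ByteBuffer;

// Illustrative only: a direct ByteBuffer is allocated outside the Java heap,
// so large pixel buffers stored this way do not count against the managed
// heap limit. A Bitmap cannot point at this buffer directly; on Android you
// would move data in and out with copyPixelsToBuffer/copyPixelsFromBuffer.
public final class OffHeapPixels {
    public static ByteBuffer allocatePixelBuffer(int width, int height) {
        // RGBA_8888: 4 bytes per pixel, allocated off-heap.
        return ByteBuffer.allocateDirect(width * height * 4);
    }
}
```

The trade-off: off-heap memory is invisible to the GC, so you must size and reuse these buffers deliberately rather than allocating them per frame.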

Adam Fręśko