
For some reason, I am getting this error whenever I increase my input image size for inference on Android (for image classification):

 Process: com.example.android.androidevaluateimagenet, PID: 31064
 java.nio.BufferOverflowException
     at java.nio.FloatBuffer.put(FloatBuffer.java:444)
     at org.tensorflow.Tensor.writeTo(Tensor.java:390)
     at org.tensorflow.contrib.android.TensorFlowInferenceInterface.fetch(TensorFlowInferenceInterface.java:338)
     at org.tensorflow.contrib.android.TensorFlowInferenceInterface.fetch(TensorFlowInferenceInterface.java:301)
     at com.example.android.androidevaluateimagenet.TensorFlowImageClassifier.recognizeImage(TensorFlowImageClassifier.java:148)
     at com.example.android.androidevaluateimagenet.MainActivity.getInferenceTime(MainActivity.java:240)
     at com.example.android.androidevaluateimagenet.MainActivity$2.onClick(MainActivity.java:318)
     at android.view.View.performClick(View.java:4763)
     at android.view.View$PerformClick.run(View.java:19821)
     at android.os.Handler.handleCallback(Handler.java:739)
     at android.os.Handler.dispatchMessage(Handler.java:95)
     at android.os.Looper.loop(Looper.java:135)
     at android.app.ActivityThread.main(ActivityThread.java:5272)
     at java.lang.reflect.Method.invoke(Native Method)
     at java.lang.reflect.Method.invoke(Method.java:372)
     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:909)
     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:704)

and I'm not sure why. For input image sizes below the one I used, the model runs fine. Furthermore, the problem is unique to one model: I've tried both a smaller and a larger (2x) model, and they work perfectly fine. Only this model gives me the problem, and I can't identify what exactly is wrong with it from the error produced.

Relevant code at each frame of the stack trace:

TensorFlowImageClassifier.java:

    inferenceInterface.fetch(outputName, outputs);

TensorFlowInferenceInterface.java:

    public void fetch(String var1, float[] var2) {
        this.fetch(var1, FloatBuffer.wrap(var2));
    }

Tensor.java:

    public void writeTo(FloatBuffer var1) {
        if (this.dtype != DataType.FLOAT) {
            throw incompatibleBuffer(var1, this.dtype);
        } else {
            ByteBuffer var2 = this.buffer();
            var1.put(var2.asFloatBuffer());
        }
    }

FloatBuffer.java:

    public FloatBuffer put(FloatBuffer src) {
        if (src == this)
            throw new IllegalArgumentException();
        int n = src.remaining();
        if (n > remaining())
            // Thrown here: the source tensor holds more floats than the
            // destination array wrapped by this buffer can accept.
            throw new BufferOverflowException();
        for (int i = 0; i < n; i++)
            put(src.get());
        return this;
    }
kwotsin

2 Answers


Going off the stack trace and error message, it seems the complaint is that the float[] array provided to fetch has fewer elements than the output produced by your model. So you'd want to adjust your code to provide an appropriately sized array to fetch.
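One way to size that array without modifying the library is to read the static shape the graph records for the output op (the runtime shape of the fetched tensor itself isn't directly exposed, as noted below). A minimal sketch, assuming outputName holds the name of your output op and treating unknown dimensions (reported as -1, typically the batch dimension) as 1:

    // Sketch only: allocate the fetch array from the graph's recorded output shape.
    // Shape here is org.tensorflow.Shape from the TensorFlow Java API.
    Shape outShape = inferenceInterface.graph().operation(outputName).output(0).shape();
    int numElements = 1;
    for (int d = 0; d < outShape.numDimensions(); d++) {
        long dim = outShape.size(d);
        numElements *= (dim < 0) ? 1 : (int) dim; // -1 = unknown dimension, assume 1
    }
    float[] outputs = new float[numElements];
    inferenceInterface.fetch(outputName, outputs);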

Unfortunately, the TensorFlowInferenceInterface class doesn't have a public method to access the actual shape of the fetched tensor. If you are building from source, you could get that by adding something like the following to the class:

    public long[] fetchShape(String outputName) {
        return getTensor(outputName).shape();
    }

(which might be a good contribution back to the project).
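Caller-side usage might then look like this (again a sketch; fetchShape is the hypothetical helper above, which has to be called after run() has produced the output tensor):

    // Hypothetical: assumes the fetchShape method above was added and built from source.
    long[] shape = inferenceInterface.fetchShape(outputName); // call after run()
    int n = 1;
    for (long dim : shape) {
        n *= (dim < 0) ? 1 : (int) dim; // treat unknown dimensions (-1) as 1
    }
    float[] outputs = new float[n];
    inferenceInterface.fetch(outputName, outputs);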

Hope that helps.

ash
  • Unfortunately, the capacity I require is quite fixed, and I'm not too sure where in the code I can further reduce the memory requirements. I think the issue has to do with the model implementation, since I've used a much larger model that consumes around 3x more memory and there's no problem running it on mobile. Is there a way to check where in the model the bulk of the memory is coming from? – kwotsin Aug 29 '17 at 07:50

The problem is definitely as described by ash (https://stackoverflow.com/users/6708503/ash).

Use this log statement to see what the shape of your output tensor looks like:

    Log.i(TAG, "This is the output tensor shape of my .pb file in the assets folder: "
            + inferenceInterface.graph().operation(outputNames[i]).output(0).shape());

Hope this helps with debugging: if the logged shape contains more elements than the array you pass to fetch, that mismatch is the cause of the BufferOverflowException.