
I have a Java class that writes small messages (< 100 bytes) to a buffer. I have three message types, and so I've allocated 3 buffers.

import java.nio.ByteBuffer;

// Cache separate buffers for each message type
class MyWriter {
    private ByteBuffer loginBuffer = ByteBuffer.allocate(35);
    private ByteBuffer registerBuffer = ByteBuffer.allocate(23);
    private ByteBuffer placeOrderBuffer = ByteBuffer.allocate(75);
    ...
}

And then I reuse them on new writes. For example:

private void writeLogin() {
    loginBuffer.clear();
    loginBuffer.putLong(...)
    ...
}

private void writeRegister() {
    registerBuffer.clear();
    registerBuffer.putLong(...)
    ...
}

I profiled my application and this class is where most processing time is spent.

I had initially thought it would be more efficient to allocate the buffers once and then call `clear()` before reuse, instead of allocating a new buffer on each write. While this is certainly more memory-efficient, does the `buffer.clear()` call take more CPU time, since it has to find the buffer's place in memory?

**My question is:** would it actually be a better design to allocate on the fly? I.e.:

private void writeLogin() {
    loginBuffer = ByteBuffer.allocate(35);
    loginBuffer.putLong(...)
    ...
}

Sure, it would waste more memory, but if it speeds up the processing time, that's great. Is there a rule of thumb, or has anyone already thought about this?

I'm new to Java and not sure how to test something simple like this.
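
In case it helps to show what I mean by testing it, here is the kind of rough timing loop I had in mind, comparing the two approaches with plain `System.nanoTime()`. I understand a proper harness such as JMH would be more trustworthy, so please treat this only as a sketch; the class name and iteration count are made up:

import java.nio.ByteBuffer;

// Very rough timing harness: no warmup control and only a crude guard
// against dead-code elimination, so the numbers are indicative at best.
public class BufferTimingSketch {

    private static final int ITERATIONS = 10_000_000;

    public static void main(String[] args) {
        long sink = 0;

        // Approach 1: allocate once, clear() before every write.
        ByteBuffer reused = ByteBuffer.allocate(75);
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            reused.clear();
            reused.putLong(i);
            sink += reused.position();   // use the result so the JIT cannot drop the loop
        }
        System.out.println("reuse + clear(): " + (System.nanoTime() - start) / 1_000_000 + " ms");

        // Approach 2: allocate a fresh buffer for every write.
        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            ByteBuffer fresh = ByteBuffer.allocate(75);
            fresh.putLong(i);
            sink += fresh.position();
        }
        System.out.println("allocate per write: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        System.out.println("(ignore) sink=" + sink);
    }
}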

Adam Hughes
  • The rule of thumb is that reusing the buffer will be better for performance, especially since `clear` is O(1) and allocating is O(n) in the buffer size. And if you're doing this in a hot loop (probably), the code will be inlined and the pointer to the buffer will be loaded into a register or onto the stack; there will be no lookup. – Marko Topolnik Nov 02 '16 at 13:44
  • This is quite an interesting question; I had not considered this approach. But remember it's also about scope: when a variable goes out of scope it is garbage collected, so a local definition (your last example) will garbage-collect the ByteBuffer once it drops out of the method. Have you considered using a processing queue? – Theresa Forster Nov 02 '16 at 13:44
  • Thanks for these comments. Marko, thank you, I feel better about my approach now. Theresa, I'm not sure what a processing queue is. Is that a data structure? – Adam Hughes Nov 02 '16 at 13:47
  • @TheresaForster Not quite. GC will eventually happen, but it won't happen the exact moment the variable goes out of scope. So memory is an issue... 100 bytes is not big, though. – MordechayS Nov 02 '16 at 13:49
  • @AdamHughes you can look up Deque. It can act as a FIFO or a LIFO queue: you could take the incoming message, push it onto the FIFO, and then have another thread pulling messages off and processing them, meaning you wouldn't need to do the clear as you could just null it after (which would still mean needing GC, but you would have that in most cases; also your definition is a lot less than 100 characters). A rough sketch of this idea follows the comments. – Theresa Forster Nov 02 '16 at 13:52
  • @MarkoTopolnik is correct, and you are correct in your assumptions about performance and memory. The only caveat is when you throw the wrench of multi-threading into this. The problems and the workarounds will leave you with a bit of ugly code (even in Java 8), and runtime issues are very hard to diagnose or test. So for a small memory footprint and a small price in performance you might just want to go with the second approach. Thanks to the JVM, Java is very fast. – bhantol Nov 02 '16 at 13:54
  • Thanks bhantol, I'll keep that in mind. Thanks to everyone else who answered as well. – Adam Hughes Nov 02 '16 at 14:57
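
For anyone who, like the asker, hasn't met the processing-queue idea before, here is a minimal sketch of what Theresa describes, assuming a bounded `ArrayBlockingQueue` drained by a single consumer thread. The `Message` class, the queue capacity, and the `Consumer<Message>` callback are invented for illustration and are not part of the original code:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

public class MessageQueueSketch {

    // Invented message type; in the real application this would carry the
    // login/register/place-order payload.
    public static final class Message {
        final byte type;
        final long payload;
        Message(byte type, long payload) {
            this.type = type;
            this.payload = payload;
        }
    }

    // Bounded queue so a slow writer applies back-pressure to producers.
    private final BlockingQueue<Message> queue = new ArrayBlockingQueue<>(1024);

    // Producer side: called wherever messages originate.
    public void submit(Message m) throws InterruptedException {
        queue.put(m);   // blocks while the queue is full
    }

    // Consumer side: a single thread drains the queue and hands each message
    // to the writer, so any reused ByteBuffers are touched by one thread only.
    public void startConsumer(Consumer<Message> writer) {
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    writer.accept(queue.take());   // take() blocks until a message arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // restore the interrupt flag and exit
            }
        }, "message-writer");
        consumer.setDaemon(true);
        consumer.start();
    }
}

Whether this helps depends on the application: the hand-off to the queue has a cost of its own, and it mainly buys you single-threaded access to the reused buffers rather than raw speed.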

0 Answers