
From the book "Core Java for the Impatient", chapter "Increment and Decrement Operators":

String arg = args[n++];

sets arg to args[n], and then increments n. This made sense thirty years ago when compilers didn’t do a good job optimizing code. Nowadays, there is no performance drawback in using two separate statements, and many programmers find the explicit form easier to read.
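For reference, the quoted one-liner is equivalent to the two-statement form the book alludes to:

String arg = args[n];
n++;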

I thought this usage of the increment and decrement operators was just a way to write less code, but according to this quote that wasn't the case in the past.

What was the performance benefit of writing statements such as String arg = args[n++]?

Eran
wilmaed
  • It depends on the platform, but in the case of interpreters like the Java VM, the point was that the same object (the incremented variable) would otherwise have to be accessed twice. Some CPU instructions provide shortcuts that allow doing two things at once, not to mention that CPUs nowadays parallelize. – Swift - Friday Pie Jan 10 '17 at 07:39
  • In "ancient" times (and today still) there exists CPU instructions to increment and decrement registers by one. And it makes sense to have similar "instructions" in a higher-level programming language as well, especially early on when most programmers came from a lower-level background and knew about these increase/decrease CPU instructions. It also is easy to understand the general concept of increasing/decreasing something, so having such code increases brevity while still being easily understandable. – Some programmer dude Jan 10 '17 at 07:40
  • @Some programmer dude in Java each variable is an OOP object; you could subclass/override increment or ANY math operation, even the operation of reading a variable's value. As opposed to C, where the code is almost a direct analog of assembler code. The question is how the VM interprets/compiles the pseudo-code from the Java binary into CPU instructions. – Swift - Friday Pie Jan 10 '17 at 07:42

3 Answers

2

Some processors, like the Motorola 68000, support addressing modes that specifically dereference a pointer, then increment it. For instance:

[excerpt from the MC68000 Programmer's Reference Manual: the "address register indirect with postincrement" addressing mode, written (An)+]

Older compilers could conceivably use this addressing mode for an expression like *p++ or arr[i++], but might not recognize the same opportunity when it was split across two statements.
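As a rough sketch (in Java, since the question is about Java; the mapping to machine code is hypothetical and depends on the compiler and target), this is the kind of loop where such fusion could pay off:

// A scan loop in the classic idiom: load the element, then bump the index.
// On a CPU with a postincrement addressing mode, a compiler could emit the
// load and the increment as one instruction; with the increment written as
// a separate statement, an older compiler might have missed that.
int n = 0;
while (n < args.length) {
    String arg = args[n++]; // read args[n], then n = n + 1
    System.out.println(arg);
}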

1

Over the years, CPU architectures and compilers have improved, so I would say there is no single answer to this.

From the architecture standpoint: many processors support a store with pointer auto-increment in a single CPU cycle, so in the past the way you wrote the code could affect the result (one operation versus more). DSP architectures were notably good at parallelizing such things, e.g. TI DSPs like the C54xx, with post-increment and post-decrement instructions and instructions that operate on circular buffers: "ADD *AR2+, AR2-, A ; after accessing the operands, AR2 is incremented by one" (from the TMS320C54x DSP reference set). ARM cores also feature instructions that allow similar parallelism (the VLDR and VSTR instructions; see the documentation).

From the compiler standpoint: the compiler looks at how a variable is used within its scope (which was not always the case before). It can see whether the variable is reused later or not. It might be that a variable is incremented in the code but then discarded; what is the point of doing that? Nowadays the compiler has to track variable usage, and it can make smart decisions based on that (in Java 8, for example, the compiler must be able to spot "effectively final" variables that are never reassigned).
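A minimal sketch of the "effectively final" rule mentioned above (the class and variable names here are mine):

import java.util.function.Supplier;

public class EffectivelyFinalDemo {
    public static void main(String[] args) {
        int n = 0;                            // never reassigned: effectively final
        Supplier<Integer> next = () -> n + 1; // so a lambda may capture it
        System.out.println(next.get());       // prints 1

        int m = 0;
        m++;                                  // reassigned: m is NOT effectively final
        // Supplier<Integer> bad = () -> m;   // would not compile
    }
}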

Witold Kaczurba
0

These operators were, and are, generally used for programmer convenience rather than to achieve performance. Effectively, the statement gets split into a two-line equivalent during compilation. Apparently, the overhead of the post/pre-increment/decrement operators would be no less than that of an already split two-line statement.
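A quick way to check this claim (exact bytecode varies by compiler version) is to compare the two forms with javap -c; in both, javac emits the increment as its own iinc instruction:

static String oneLiner(String[] args, int n) {
    return args[n++];  // compiles to roughly iload / iinc / aaload:
                       // the increment is already a separate instruction
}

static String twoLiner(String[] args, int n) {
    String arg = args[n];
    n++;               // same iinc instruction, just spelled out in source
    return arg;
}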

sanrnsam7