
I wrote a delay function:

void delay(unsigned int a){
    for (unsigned int i = a; i > 0; i--)
        for (unsigned char j = 0; j < 200; j++)
            ;   /* empty busy-wait body */
}

But when I compiled this code with SDCC and with Keil and ran it on an 8051 chip, the delay function compiled by SDCC ran much slower than the one compiled by Keil.

Can someone tell me why...

Jack
  • Because implementation-defined behavior is, well, defined by the implementation. The code you have shown does nothing, so the compilers are free to emit code that *does nothing* in different ways. The C language does not define how long code statements will take, so both compilers can be conforming and still produce drastically different results. – Jonathon Reinhart May 07 '21 at 01:26
  • Side note: Your code is not compilable. -- Your post is missing numbers about the time you get. -- Did you look into the resulting machine code? -- What are the command lines you used to compile? -- Which version of the compilers did you use? – the busybee May 07 '21 at 05:59

1 Answer


Different compilers translate the same C source into different machine code. Several issues come to mind, not limited to:

  • No standard definition of translation into machine code: Each compiler may use any solution that complies with the standard. There is more than one possible solution.
  • Different compiler behavior: Each compiler has its own set of options to change specific generation variants.
  • Optimization levels: An empty loop might be optimized away completely, for example.
  • Variable allocation: Compilers are free to select registers or RAM cells to use for their variables.
  • The bit widths of (in your case) int: Probably it's the same with SDCC and Keil, but sometimes there are differences.
the busybee