Sometimes I accumulate a large mass of breakpoints from different debugging sessions in different places in my code. How does the debugger efficiently know when to stop for a breakpoint? It can't possibly be stopping at every single line to check the line number and source file name against a potentially long list of breakpoints, can it?

This is the Java debugger in Eclipse, but I presume the question applies to any debugger.

skiphoppy

2 Answers

The strategy used in many debuggers (I don't know about Eclipse specifically) is to patch the code at the breakpoint location with what is essentially a subroutine call or system call. The routine jumped to holds the breakpoint information and does whatever printing or accepting of user commands is needed. It also keeps a copy of the instruction that was overwritten by the patch, so that instruction can still be executed and execution matches the original code, just without the breakpoint. This is why the debugger never has to check a list of breakpoints on every line: control only transfers to it when a patched location is actually reached.
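The patch-and-restore cycle described above can be sketched as a toy simulation. The `0xCC` byte is the real x86 `INT3` breakpoint opcode; the `Debugger` class, the byte-array "memory", and the addresses are purely illustrative (a real debugger would use something like `ptrace` to poke the target process's memory):

```python
INT3 = 0xCC  # one-byte x86 trap opcode used for software breakpoints

class Debugger:
    """Hypothetical sketch: patch code bytes in, trap, restore, re-arm."""

    def __init__(self, code):
        self.code = bytearray(code)  # stand-in for the target's memory
        self.saved = {}              # address -> original byte

    def set_breakpoint(self, addr):
        # Save the original byte, then overwrite it with the trap opcode.
        self.saved[addr] = self.code[addr]
        self.code[addr] = INT3

    def clear_breakpoint(self, addr):
        # Put the original instruction byte back.
        self.code[addr] = self.saved.pop(addr)

    def on_trap(self, addr):
        # When the CPU hits INT3, the debugger gets control. To resume,
        # it restores the original byte, single-steps that instruction,
        # then re-inserts the trap so the breakpoint stays armed.
        original = self.saved[addr]
        self.code[addr] = original
        # ... single-step the restored instruction here ...
        self.code[addr] = INT3
        return original
```

For example, setting a breakpoint at address 1 of a three-byte NOP sled replaces that byte with `0xCC`; hitting the trap yields the saved `0x90` and leaves the breakpoint re-armed.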

  • Man, that's some heavy-duty wizardry. I knew debuggers were doing some deep things, but I didn't know it was that deep! – skiphoppy May 13 '09 at 16:45
    Not necessarily a call even. Where possible, you overwrite with an instruction that raises an interrupt, which the debugger handles. – Steve Jessop May 13 '09 at 17:05
To add to Nadreck's good answer:

There's an article here with more details, including some of the more exotic techniques (the specific opcodes used on x86; hardware breakpoints via debug registers).

user9876