11

I'm trying to debug a Python program using pdb. The program could be like this:

def main():
    a = 1
    print(b)
    c = 2
    d = 3

Apparently, print(b) is a typo that should be print(a). That isn't important and I could fix it in the text editor, but I want to bypass this error and continue debugging.

I tried jump, like jump 4 (assuming "c = 2" is line 4), but I got the error "Jump failed: f_lineno can only be set by a line trace function", which seems to mean I need to provide a line trace function in my program.

So, is there a way to deal with this problem, or is there some other way to bypass the error line when using pdb?

sky zhang

2 Answers

6

TL;DR: this is pdb's post-mortem mode, in which jumping is not supposed to work. But it's still very useful.

Painting by Rembrandt (public domain)

I reproduced it with Python 3.8.2 as *** Jump failed: can only jump from a 'line' trace event by running the script "under pdb", i.e. python3 -m pdb -c c script.py, and then trying to jump to another line in the pdb prompt that appears.

What happened: an unhandled exception, in this case NameError: name 'b' is not defined, caused Python to stop interpreting the script; pdb intercepted this and entered its post-mortem mode.
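Here is roughly what such a session looks like (a sketch: it assumes the question's script is saved as script.py with a call to main() added at the end, and the exact paths and messages may vary by Python version):

$ python3 -m pdb -c c script.py
Traceback (most recent call last):
  ...
NameError: name 'b' is not defined
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> .../script.py(3)main()
-> print(b)
(Pdb) jump 4
*** Jump failed: can only jump from a 'line' trace event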

As Almar Klein nicely put it in his blog post,

Post-mortem debugging refers to the concept of entering debug mode after something has broken. There is no setting of breakpoints involved, so it's very quick and you can inspect the full stack trace, making it an effective way of tracing errors.

Although jump, next and return won't work in post-mortem, the commands bt, up, down, ll and pp, along with the ability to import modules and run arbitrary Python code directly in pdb's interactive shell, can be very effective for finding the root cause. In our simple example the root cause of the NameError is shown immediately by a quick ll: pdb prefixes the offending line of code with >>.
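For instance, continuing the sketched session above, the stack and the locals at the point of failure are available right away (output approximate and partly elided):

(Pdb) bt
  ...
  .../script.py(6)<module>()
-> main()
> .../script.py(3)main()
-> print(b)
(Pdb) pp a
1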

Had we not passed -c c (meaning continue), pdb would have shown its prompt and paused before the first line of the program was interpreted, so you'd have a chance to step through the whole program, or to set a breakpoint before or at the offending line and jump over it, never entering post-mortem.

Even in post-mortem, you can prepare a breakpoint anywhere in the program, e.g. break 2 for line 2, and then say c or continue: pdb will finish post-mortem, reload the file, and restart the program with the updated set of breakpoints.
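A sketch of that workflow, continuing the session above (output approximate; line 3 is the print(b) line and line 4 is c = 2):

(Pdb) break 3
Breakpoint 1 at .../script.py:3
(Pdb) c
Post mortem debugger finished. The script.py will be restarted
> .../script.py(1)<module>()
-> def main():
(Pdb) c
> .../script.py(3)main()
-> print(b)
(Pdb) jump 4
> .../script.py(4)main()
-> c = 2
(Pdb) c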


Another way to deal with it is to put import pdb; pdb.set_trace() into the suspicious code (or, since Python 3.7, simply breakpoint()) and run the Python program normally, not "under" pdb anymore. When the breakpoint is hit, jump, next, return etc., as well as everything else, work as usual.
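For example, applied to the script from the question (a sketch; the built-in breakpoint() requires Python 3.7+, otherwise use import pdb; pdb.set_trace()):

def main():
    a = 1
    breakpoint()  # opens the pdb prompt here when the script is run normally
    print(b)      # the offending line; at the prompt you can jump past it, e.g. to the "c = 2" line
    c = 2
    d = 3

main()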


If your Python program is started through behave:

  • prefer to run behave with --no-capture whenever using pdb or similar debuggers (whether in post-mortem mode or not), to avoid behave's stdin/stdout capturing making pdb unresponsive and/or its prompt invisible.
  • best of all, if you want to end up in pdb's post-mortem mode automatically while potentially still supporting capturing, set the post_mortem environment variable (you can also name it differently) to any value (but only on a dev machine, not in automated CI or production!) and commit the following permanently into environment.py:
def after_step(context, step):
    import os
    if 'post_mortem' in os.environ and step.status == 'failed':
        import pdb
        # Similar to "behave --no-capture", calling stop_capture() ensures that pdb's prompt stays visible,
        # while still supporting capture until an uncaught error occurs.
        # Warning: this does rely on behave's internals which might change
        context._runner.stop_capture()  # pylint: disable=protected-access
        pdb.post_mortem(step.exc_traceback)
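With that hook committed, you can opt into post-mortem per run from the shell, e.g. (assuming a POSIX shell; the variable name matches the one used above):

post_mortem=1 behave

When the variable isn't set, the hook does nothing, so the file stays safe to keep in source control.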
V-R
    `def after_step(_, step):` (or `def after_step(_context, step):`, if you want to keep the name of the variable as close to the intended meaning as possible) won't issue an `unused-argument` warning. – bers Nov 13 '20 at 07:45
  • Just made 2 more improvements which enable frictionless debugging workflow. I do rely on behave's internals so there is a warning about that, but this allows supporting use cases that rely on capturing, too. Also now I activate this whole automation via environment variable, so the `environment.py` is committed permanently to source control instead of keeping it in stash. – V-R Nov 25 '20 at 15:20
  • BTW: the blog post that I linked itself contains a broken link to the IEP (Interactive Editor for Python). That is probably because IEP has been [merged into Pyzo](https://pyzo.org/iep.html) - a cross-platform Python IDE. – V-R Sep 22 '21 at 14:25
-2

I'm not sure, but this may be a bug that was fixed in March 2018, so you may need to patch, upgrade or reinstall your Python.

tfj