while ! good-or-bad-test-command ; do
  git checkout HEAD^   # step back one commit until the test passes
done
However, if the breakage is somewhere within just the last few commits, you can do this manually:
$ good-or-bad-test-command
# if it fails, then:
$ git checkout HEAD^ # pop to previous
$ good-or-bad-test-command # <-- recalled by hitting up arrow in bash
# still fails:
$ git checkout HEAD^ # <-- recalled with up arrow
$ good-or-bad-test-command # <-- recalled
...
Thanks to history recall, this takes fewer keystrokes than banging out a loop.
git checkout avoids moving your branch HEAD, putting you in a "detached HEAD" state from which you can easily recover with git checkout <yourbranch>.
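For instance, a minimal recovery sequence (with <yourbranch> standing in for your actual branch name):

$ git checkout HEAD^          # detached HEAD: test an older commit
$ good-or-bad-test-command
$ git checkout <yourbranch>   # back on the branch tip; the branch itself never moved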
[Edit, March 2017]
But the question is, why would you still use git bisect in this situation? You've already linearly searched through the bad commits back to the good one; there is no need to do another binary search for the same info.
It may take fewer steps to just guess at some commit that is still good, and get git bisect going.
If you suspect you recently broke something, just go back 16 commits. Or 32, or whatever. Or go back to the last tagged release. The binary search will quickly zero in:
$ git bisect start
$ git bisect bad # HEAD known to be bad; almost always the case
$ git checkout HEAD~8 # wild guess; almost certainly before breakage
$ good-or-bad-test-command # check: is it really good?
$ # if not good: git checkout HEAD~8 again to go back more, and repeat the test
$ git bisect good # bisect begins
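From there, git bisect drives the search. And if good-or-bad-test-command already encodes good as exit status 0 and bad as non-zero, you can let git run it for you at every probe point:

$ git bisect run good-or-bad-test-command   # git tests each candidate commit itself
...
$ git bisect reset                          # done: return to the branch you started from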
If we have a repo with a very long history, and discover something broke a long time ago (something previously untested which is now tested), we can probe backwards exponentially to find a good commit: git checkout HEAD~16; then, if that's not good, git checkout HEAD~32; then git checkout HEAD~64.
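A rough sketch of that probe as a loop (assuming, as above, that good-or-bad-test-command exits 0 on a good commit, and that git bisect start / git bisect bad were already issued):

step=16
git checkout HEAD~$step            # first probe: 16 commits behind the tip
while ! good-or-bad-test-command ; do
  git checkout HEAD~$step          # stepping back $step more doubles the total distance
  step=$((step * 2))
done
git bisect good                    # HEAD is now a known-good commit; hand over to bisect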
That's the general strategy for binary searching through an unknown range. Don't linearly scan to determine the range, because that makes the whole algorithm linear. If we extend the range exponentially, we keep it logarithmic.
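Concretely: if the first bad commit turns out to be about 1000 commits back, a linear scan costs roughly 1000 test runs, while exponential probing brackets it in under 10 checkouts and git bisect pins it down in about 10 more, on the order of 20 tests total.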