I'm doing a little PoC of system tests with python-behave. I've written a couple of tests, but I'm wondering how this scales:
I have a few scenarios written in Gherkin and implemented in python-behave. Suppose many testers work on the same project and want to reuse the same phrases, so that there is no code duplication in the python-behave step files. How would they go about it?
For instance (please ignore the content of the tests, as I didn't give them much thought):
Tester 1 writes:
Scenario: Simple Google search
Given a web browser is on the Google page
When the search phrase "panda" is entered
Then results for "panda" are shown
Someone implemented each of these steps in python-behave.
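To make that concrete, the step implementations might look roughly like the sketch below. This is only a minimal sketch: the Selenium-based browser stored on context.browser (set up in environment.py) is an assumption on my part, not something from an actual codebase.

from behave import given, when, then
from selenium.webdriver.common.by import By


@given('a web browser is on the Google page')
def step_open_google(context):
    # context.browser is assumed to be a Selenium WebDriver created in environment.py
    context.browser.get('https://www.google.com')


@when('the search phrase "{phrase}" is entered')
def step_enter_search_phrase(context, phrase):
    # Type the phrase into the search box and submit the form
    search_box = context.browser.find_element(By.NAME, 'q')
    search_box.send_keys(phrase)
    search_box.submit()


@then('results for "{phrase}" are shown')
def step_results_shown(context, phrase):
    # Crude check that the phrase appears somewhere on the results page
    assert phrase.lower() in context.browser.page_source.lower()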
Tester 2 writes:
Scenario: Advanced Google search
Given there is a web-browser on a Google page
When the search phrase "panda" is written
Then results for "panda" are presented
And the related results include "Panda Express"
But the related results do not include "pandemonium"
Notice that the "Given", "When" and "Then" steps of the two tests are logically identical. Is there a simple way for Tester 2 to know that similar phrases have already been written (and implemented)? Is there a way to search a "phrase bank" or something of that sort in order to avoid code duplication?
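To illustrate the duplication I want to avoid: without some shared phrase bank, Tester 2 would end up adding a second set of step definitions that differ only in wording (the function names below are hypothetical, just to show the problem).

from behave import given, when, then


@given('there is a web-browser on a Google page')
def step_open_google_duplicate(context):
    # Same logic as Tester 1's 'a web browser is on the Google page'
    context.browser.get('https://www.google.com')


@when('the search phrase "{phrase}" is written')
def step_enter_search_phrase_duplicate(context, phrase):
    # Same logic as 'the search phrase "{phrase}" is entered'
    ...


@then('results for "{phrase}" are presented')
def step_results_shown_duplicate(context, phrase):
    # Same logic as 'results for "{phrase}" are shown'
    ...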