
Say I have a master python-behave feature file, and in said file I check that roughly 25 feature files exist, run each of them in the correct order, and then verify some postconditions.

I want to be able to test multiple features inside a single feature file, if that's possible. I've written this step:

from behave import when
import datetime
import os

@when(u'Feature {name} is executed')
def step_meta_feature(context, name):
    context.script_start_time = datetime.datetime.now()
    print("Testing feature " + name)
    # Shell out to behave; note that the return value of os.system is never checked
    os.system("behave " + name + ".feature --no-capture")
    context.script_end_time = datetime.datetime.now()

Currently, whenever a feature is executed from the @when clauses of the main feature file, this step always passes. I'm fairly certain this is because it doesn't check anything: as long as the behave command runs, the step succeeds regardless of whether the inner feature failed.

To fix this, I would like to add a line or two, either in this step or in the after_feature() function of the environment.py file, to check whether the executed behave feature passed.
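
For illustration, such a check could look roughly like the sketch below. It assumes that behave exits with a non-zero return code when any feature or step fails, and it switches from os.system to subprocess.call so the return code is easy to capture:

from behave import when
import datetime
import subprocess

@when(u'Feature {name} is executed')
def step_meta_feature(context, name):
    context.script_start_time = datetime.datetime.now()
    print("Testing feature " + name)
    # behave exits non-zero when anything fails, so the child process's
    # return code tells us whether the inner feature passed
    return_code = subprocess.call(["behave", name + ".feature", "--no-capture"])
    context.script_end_time = datetime.datetime.now()
    assert return_code == 0, "Feature " + name + " reported failures"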

Behave's API does describe a Feature object, created from a feature file, which has a "status" attribute that tells you whether the feature passed or failed. However, that attribute only seems to be accessible from within the environment.py file.

My thinking is that I could execute a feature from within the step using behave's own functionality instead of os.system and check its status afterwards, but I don't know if that's possible. Alternatively, I understand that I could write a single feature file that contains all 25 scenarios and executes them in order. However, I want to avoid that, since the primary script is split into 25 smaller scripts for individual testing purposes. It also wouldn't be a great idea to have several smaller features and one big feature that does everything the smaller features do sitting in the same folder, running in some arbitrary order.
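
For what it's worth, here is a hedged sketch of that first idea. The behave executable is a thin wrapper around behave.__main__.main, so it should be possible to call it in-process and use its return code; whether main accepts an explicit argument list (rather than reading sys.argv) depends on the behave version, so treat the signature below as an unverified assumption:

from behave import when
# Assumption: behave.__main__.main accepts a list of command-line arguments
# and returns the exit code (0 = everything passed). Check behave/__main__.py
# for your installed version before relying on this.
from behave.__main__ import main as behave_main

@when(u'Feature {name} is executed in-process')
def step_meta_feature_inprocess(context, name):
    exit_code = behave_main([name + ".feature", "--no-capture"])
    assert exit_code == 0, "Feature " + name + " failed"

Note that this runs the inner features in the same process as the outer run, so environment hooks and configuration may interfere; the subprocess approach above sidesteps that.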

How can I check from within environment.py or steps.py if a feature from another file passed or failed?

EDIT: Another idea, I guess, is to redirect the text that behave sends to the command line into a per-feature log file and read the last few lines to see whether any features or steps failed, although that seems like a roundabout way of doing things.
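
A slightly less fragile variant of that idea would be to ask behave for machine-readable output rather than scraping console text. A sketch, assuming behave's built-in json formatter and that each feature entry in its output carries a "status" field (worth confirming against the output of your installed version):

import json
import subprocess

def feature_passed(feature_file):
    # Write results as JSON instead of scraping the console log
    subprocess.call(["behave", feature_file, "-f", "json", "-o", "results.json"])
    with open("results.json") as results_file:
        results = json.load(results_file)
    # Assumed structure: a list of feature dicts, each with a "status" key
    return all(feature.get("status") == "passed" for feature in results)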

B J

2 Answers


The big question is: why on earth would you need that? If feature X depends on feature Y, then they will both simply fail. If the extra feature is some sort of diagnostic, why not just run it every time? The "extra diagnostic" case is the only scenario I can imagine where this could be useful (and still stay within BDD convention), but quite frankly it should be run each time anyway to make sure that everything is working as intended.

Alas, if for some reason you can't have that, the best way to handle it would be outside of BDD but inside your CI (continuous integration, for example TeamCity). You create each set of features as its own build step, and there you can set triggers on specific steps failing.

But my sense here is that you simply have .feature files that are not exactly BDD, and now you are trying to bend the tool to match it. So probably the best solution would be to rethink them.

Tymoteusz Paul
  • Unfortunately, one of the problems we're having is that we don't have a clear idea of what the end user wants the BDD to look like or how they want it to work, so we're guessing. Part of the job involved taking these large programs and cutting them into smaller sections for testing purposes. It would be redundant and tedious to make a single behave file that combined all of the scripts and tested all of their postconditions. As a result, reuniting the long scripts in one behave feature seems like a good way to handle things, but there's no way to test whether a feature ran successfully. – B J Jul 27 '15 at 14:13
  • That is to say, running a feature file from within another feature file gives you no way to check whether the inner feature succeeded or failed, so every feature will appear to pass even if some of its conditions failed. I should also mention that not every feature in that big file depends on another. They're all part of a single "program", if you will, that runs automatically, daily, and logs the information. – B J Jul 27 '15 at 14:16
  • @BJ you are missing the point, and this question is most likely way too broad to be covered here. But the point is that the tests you are creating are not BDD if they need to rely on one another, or if some within the same feature are unrelated. – Tymoteusz Paul Jul 28 '15 at 01:27

I have no idea if you're still looking for an answer to this, but I personally do not recommend using a master feature file.

What I would recommend is to set up a Python script called environment.py in the top-level folder that you run behave from.

Inside this file you can use something like:

    def after_feature(context, feature):
        # context.failed is True once any step has failed
        if context.failed:
            print('Failed')
        else:
            print('Passed')

As the method name suggests, behave calls this function after each feature file has finished executing.

Or if you're looking for a more generic log of pass / fail and feature name:

    def after_feature(context, feature):
        print('The feature: ' + feature.name + ' ' + feature.status)

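Building on that, the hooks can also accumulate results so you get a summary at the end of the whole run. A small sketch, assuming behave's after_all hook and comparing the status as a string, since older behave versions report it as plain text ("passed"/"failed") while newer ones use a Status enum:

    failed_features = []

    def after_feature(context, feature):
        # feature.status is a plain string in older behave versions and a
        # Status enum in newer ones, so compare via str() to cover both
        if 'failed' in str(feature.status):
            failed_features.append(feature.name)

    def after_all(context):
        if failed_features:
            print('Failed features: ' + ', '.join(failed_features))
        else:
            print('All features passed')
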
Nightsreach
  • Silly me for not reading the whole question before answering. The environment.py file is called regularly by behave, so you can create methods within it to handle passes and failures however you see fit. Behave will automatically execute the before/after methods when applicable. – Nightsreach Dec 08 '15 at 05:08