
I've started using SpecFlow fairly recently in a WPF C# application, for UI testing. I have a feature that runs a series of chained scenarios, and I'd like to know if there is a way to stop the execution of the feature if one scenario fails. I'm aware that the execution can be stopped when debugging, but I'm wondering if I can stop it during a normal run. I've been thinking of trying to stop it in the [AfterScenario()] method. Can this be done?

[AfterScenario()]
public static void After()
{
    if (TestContext.CurrentContext.Result.State != TestState.Success)
    {
        //stop feature from going to the next scenario
    }
}
Timeless
  • This smells to me. It sounds like you have an order dependency on your scenarios, which might end up causing you problems. You could do something [along these lines](http://stackoverflow.com/questions/24928270/is-it-valid-to-have-specflow-features-depending-on-other-features), with the caveats that come along with that question and its answers. I believe you would be better off having each scenario standalone. – Sam Holder Sep 03 '14 at 14:06
  • What if your tests are run by some other test runner? SpecRun and NCrunch will both run tests in parallel, which will break in your situation. The runner used on a build server might also run them in a different order. TeamCity, for example, seems to run the tests in alphabetical order. – Sam Holder Sep 03 '14 at 22:06
  • Can you give an example of how you have chained the scenarios together? It might help in understanding your use case. – Sam Holder Sep 03 '14 at 22:07
  • The tests involve the introduction of multiple objects into a fresh database (recreated once each time the feature runs). Each run of a Scenario Outline will add one element to the database and return a message for success or failure. Now, I know when the insertion should pass and when it should fail, but if one test fails unexpectedly, the feature keeps running and the assert for the element count fails. In such an unexpected case I would want the whole feature to stop. I know this probably shouldn't be used in practice, but I'd like to know if stopping it would be possible. – Timeless Sep 04 '14 at 06:44

2 Answers


Using the FeatureContext, you could have each scenario start with a step worded something like this...

Given Previous tests did not fail

In the binding for that step, you verify that the feature context doesn't hold a false value. That might look something like this...

bool testsPass = (bool)FeatureContext.Current["TestsPass"];
Assert.IsTrue(testsPass);

You would have to remember to set this value before any assert. Off hand, that might look something like this...

bool testPassed = /* result of whatever you are checking */;
FeatureContext.Current["TestsPass"] = testPassed;
Assert.IsTrue(testPassed);

As the comment said, typically it's not a good idea to expect scenarios to always run in a particular order. Depending on the runner, they may not run in the order you expect.

Note: on second look, a [BeforeScenario] hook might work better, but I would still suggest using a @serial tag or something similar to make it obvious that the scenarios depend on running in order.
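Putting those pieces together, a rough sketch might look like the following. This is an adaptation that records the result in an [AfterScenario] hook instead of setting the flag by hand before every assert, and the class and method names are purely illustrative:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class ChainedScenarioGuard
{
    // First step of every chained scenario: fail fast if an earlier one failed.
    [Given(@"Previous tests did not fail")]
    public void GivenPreviousTestsDidNotFail()
    {
        if (FeatureContext.Current.ContainsKey("TestsPass"))
        {
            Assert.IsTrue((bool)FeatureContext.Current["TestsPass"],
                "An earlier scenario in this feature failed");
        }
    }

    // Record a failure so the scenarios that follow can see it.
    [AfterScenario]
    public void RecordScenarioResult()
    {
        if (ScenarioContext.Current.TestError != null)
        {
            FeatureContext.Current["TestsPass"] = false;
        }
    }
}

In the first scenario the key isn't set yet, so the guard step passes; once any scenario fails, the flag is false and every later scenario that starts with the guard step fails immediately.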

Brantley Blanchard

I don't know if stopping the entire feature run is possible; after all, all that SpecFlow really does is generate tests in the framework of your choice, which are then run by some test runner. No unit test runner I know of will allow a complete abort of all remaining tests when one fails. But that doesn't mean that what you want isn't possible.

I can think of a way to 'fake' what you want, but it's a bad idea. You could set some sort of flag in the [AfterScenario] hook (like creating a file on disk or setting a mutex), then check for this flag in the [BeforeScenario] hook and fail fast (with a message like 'skipping test as previous failure detected') if the flag exists. You could clear the flag in the [BeforeFeature] hook to ensure that the tests always start clean.
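A minimal sketch of that idea, using a file on disk as the flag (the file path and class name here are arbitrary, chosen just for illustration):

using System.IO;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class FailFastHooks
{
    // Arbitrary location for the flag file; anything unique per test run would do.
    private static readonly string FlagFile =
        Path.Combine(Path.GetTempPath(), "specflow-previous-failure.flag");

    // Start each feature with a clean slate.
    [BeforeFeature]
    public static void ClearFlag()
    {
        if (File.Exists(FlagFile))
            File.Delete(FlagFile);
    }

    // Fail fast if an earlier scenario has already failed.
    [BeforeScenario]
    public void FailFastOnPreviousFailure()
    {
        if (File.Exists(FlagFile))
            Assert.Fail("Skipping test as previous failure detected");
    }

    // Record a failure for the scenarios that follow.
    [AfterScenario]
    public void RecordFailure()
    {
        if (ScenarioContext.Current.TestError != null)
            File.WriteAllText(FlagFile, ScenarioContext.Current.TestError.Message);
    }
}

Assert.Fail marks the remaining scenarios as failed rather than skipped; with NUnit you could use Assert.Ignore instead if you would rather see them reported as ignored.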

Like I said, I think this is a bad idea, and you should really reconsider how your tests work.

Based on the extra information given in your last comment, it seems that even though you recreate your database for the feature and then run multiple scenarios against the clean database, each scenario actually needs its own clean database. Could you not create a new database for each scenario, have each scenario create all the data it needs in its Given steps, and then test only a single thing? This seems much more scalable and maintainable in the long term to me.
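If you go that route, the hook side of it could be as simple as moving the database recreation to scenario level; RecreateTestDatabase below is just a placeholder for whatever setup your project already does once per feature:

using TechTalk.SpecFlow;

[Binding]
public class DatabaseHooks
{
    // Runs before every scenario, so each one starts with a fresh database.
    [BeforeScenario]
    public void CreateCleanDatabase()
    {
        RecreateTestDatabase();
    }

    private static void RecreateTestDatabase()
    {
        // Placeholder: run your schema script, migrations or backup restore here.
    }
}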

I have done something similar to this before and created a new database for each scenario and it has worked ok.

Sam Holder