
I'm a QA who decided to use SpecFlow for my test automation after some consideration. I think it's brilliant, but it's missing one feature I often used with other test runners such as NUnit: something like NUnit's TestCaseSource property, which lets you specify a potentially dynamic set of data for tests to be run against at run time.

I often have different data in each environment the tests should run in, so I can't hardcode values for test parameters. A trivial example is checking that each type of user account is able to log in; in NUnit, the account credentials can be retrieved with a DB query to populate each test case dynamically:

// Pulls the current environment's users from the database at run time.
public List<User> GetTestData()
{
    return MyDatabase.GetAllUsersInfo().ToList();
}

[Test, TestCaseSource("GetTestData")]
public void CallLoginService(User user)
{
    var response = LoginController.TryLogin(user.UserName, user.Password);

    if (response.Error != null)
    {
        Assert.Fail("Failed to Login: {0}", response.Error);
    }

    Assert.AreEqual("Logged in ok", response.Message, "Login message not as expected");
}

Obviously this is a simple example of that feature, but I think it describes it well enough. I know SpecFlow gives us Scenario Outlines with a table of input data, but that table is still static, so it doesn't fit the bill.
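
To illustrate what I mean, a Scenario Outline for the login example would look something like this (the step wording and credentials here are made up); the Examples table is fixed when the feature file is written, not at run time:

Scenario Outline: Each type of user account can log in
    Given I have the credentials "<username>" and "<password>"
    When I call the login service
    Then the response message should be "Logged in ok"

    Examples:
        | username | password |
        | admin    | secret1  |
        | customer | secret2  |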

I've been looking for a while and haven't found anything like this in SpecFlow yet. Does anybody know of anything similar to the above that can be used (or that is planned, if anyone who works on the project reads this)?

Thanks :)

chrisc

1 Answer


I have no idea if anything like this is planned, but for now the problem is that there is a background code-generation step when you edit your feature file in Visual Studio.

When the feature file is saved in Visual Studio, it is parsed and converted into a feature.cs file, and that generated file is what gets compiled and run as the tests.
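
For a Scenario Outline with NUnit as the unit test provider, the generated file ends up roughly like the sketch below (heavily simplified, and the exact shape depends on the SpecFlow version); the point is that each Examples row is baked in as a hardcoded attribute at generation time:

using NUnit.Framework;

[TestFixture]
public partial class LoginFeature
{
    // One TestCase per Examples row, with the values frozen when the code was generated.
    [TestCase("admin", "secret1")]
    [TestCase("customer", "secret2")]
    public void EachTypeOfUserAccountCanLogIn(string username, string password)
    {
        // The real generated code hands these values to the SpecFlow runtime,
        // which then executes the bound step definitions.
    }
}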

So your process would become

  • edit your data source
  • export it to the feature file (a rough sketch of this step follows the list)
  • get SpecFlow's VS plugin to convert it to feature.cs
  • run MSBuild
  • run the tests via NUnit or similar
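
As a very rough sketch of the export step (MyDatabase.GetAllUsersInfo() is the query from your example; the class name, feature wording and output path are all made up), a small tool like this run before the build could regenerate the Examples table from the current environment's data:

using System.IO;
using System.Linq;
using System.Text;

public static class FeatureFileExporter
{
    public static void Main()
    {
        var feature = new StringBuilder();
        feature.AppendLine("Feature: Login");
        feature.AppendLine();
        feature.AppendLine("Scenario Outline: Each type of user account can log in");
        feature.AppendLine("    Given I have the credentials \"<username>\" and \"<password>\"");
        feature.AppendLine("    When I call the login service");
        feature.AppendLine("    Then the response message should be \"Logged in ok\"");
        feature.AppendLine();
        feature.AppendLine("    Examples:");
        feature.AppendLine("        | username | password |");

        // One Examples row per user returned by this environment's database.
        foreach (var user in MyDatabase.GetAllUsersInfo().ToList())
        {
            feature.AppendLine(string.Format("        | {0} | {1} |", user.UserName, user.Password));
        }

        File.WriteAllText("Login.feature", feature.ToString());
    }
}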

I wouldn't do this. Instead I'd focus on making my tests better examples. It sounds like you are trying to cover every possibility exhaustively. Don't come up with examples to cover every possible case; instead cover as much logic as possible with fewer tests.

AlSki
  • Yeah, I'm aware that at the moment each case is passed back to the feature.cs file and this becomes the static definition of the test. You're right in your second paragraph, and my strategy in general is to use good examples and set up the test data properly. This particular test is a bit different in that it is a client integration test to make sure that the end-to-end basic happy path works for each client; my system contains a lot of config which may change, and we need confidence that the overall basic flow of data for each client is always working. – chrisc Feb 07 '13 at 11:55
  • I suppose the other problem is that not all the test runners SpecFlow can utilise have something that works the same way as NUnit's TestCaseSource, so it may not even be possible to incorporate. I can see how it could be done if SpecFlow only ran on NUnit runners, but since it has to support others, that may be why the functionality is missing. – chrisc Feb 07 '13 at 12:01