
We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.

When we're refining our requirements we often hit edge cases that we decide aren't important to handle, so we don't write any code for that use case and we don't test it. In the design spec we explicitly state that the scenario is out of scope because the system isn't designed to be used in that way.

In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?

As a bit of clarification, here's the situation I'm trying to avoid. We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up, so they ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use case, or maybe there's a subtle reason why you can't actually ever get into that situation", so I have to try and search old Skype chats or emails to find the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario, so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.

– Andy

3 Answers


I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.

By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:

  • For features that a user might expect to be there, but which the user has no way to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.

    Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if the disabled scenarios/examples didn't produce messages in the test output that might make someone think something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.

  • For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them; handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests (see the sketch after this list), perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.

  • For situations like the previous, but which you decided not to handle at all or couldn't find a way to handle, you can still write a test that documents the situation, for example that entering some invalid value into a form results in an HTTP 500.

  • If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
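
For concreteness, here's a minimal sketch of what the comment-plus-test approach might look like as an RSpec/Capybara feature spec. The Widget model, field names, route helper, and the decision in the comment are all hypothetical:

```ruby
# spec/features/widget_editing_spec.rb
require "rails_helper"

RSpec.feature "Widget editing" do
  # OUT OF SCOPE: bulk editing of widgets. Discussed and rejected
  # because no real workflow needs it, so there is deliberately no
  # bulk-edit link in the UI and no test for one.

  let(:widget) { Widget.create!(name: "Sprocket", quantity: 1) }

  scenario "a negative quantity is quietly ignored" do
    # We decided not to support negative quantities; rather than show
    # an error page, saving just leaves the stored value unchanged.
    visit edit_widget_path(widget)
    fill_in "Quantity", with: "-1"
    click_button "Save"
    expect(widget.reload.quantity).to eq(1)
  end
end
```

The decision record now lives next to the tests people actually read, and the test pins down the minimal behavior, so a regression to something worse (say, an error page) gets noticed.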

– Dave Schweisguth

You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.

In practice, either it's valuable to handle an edge case or it isn't. If it is, then there should be a user story for it, which acts as documentation for that edge case. What you don't want is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.

More generally, agile development always works iteratively. Edge case discovery is part of iterative work: with work comes increased knowledge, and with increased knowledge comes more work. It is important to capture these discoveries in new stories, instead of trying to handle everything in one go.

For example, suppose we're developing Stack Overflow and we're doing this story:

As a user I want to search questions so that I can find them

The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest-to-implement behavior is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story:

As a user I want to specifically search closed questions so that I can find more results

We develop this story, and find more edge cases, which are then more stories, etc.
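
As a sketch of how the two stories stay separate in the tests (class and attribute names are invented for illustration), the first story's spec says nothing about closed questions, and they only appear once their own story exists:

```ruby
# Story 1: "As a user I want to search questions so that I can find them"
RSpec.describe QuestionSearch do
  it "finds questions matching the query" do
    question = Question.create!(title: "How do I parse JSON?")
    expect(QuestionSearch.new("JSON").results).to include(question)
  end
  # Deliberately nothing here about closed questions: their behavior
  # is undefined until the closed-question story is played.
end

# Story 2, a later iteration: "As a user I want to specifically search
# closed questions so that I can find more results"
RSpec.describe QuestionSearch, "with closed questions" do
  it "includes closed questions when asked to" do
    closed = Question.create!(title: "Parsing JSON", closed: true)
    expect(QuestionSearch.new("JSON", include_closed: true).results).to include(closed)
  end
end
```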

– Sklivvz
  • Hmm, What I'm trying to do is make sure that the parameters of the system are well defined, so if someone questions "how does it handle this case", we can find a record that we made a decision that the behaviour is undefined, rather than having to remember or trawl emails to find out that we discussed it once and decided not to handle it. Maybe once I get into more of an agile mindset, I'll stop thinking that way. – Andy Jul 22 '14 at 14:33
  • @Andy the point I'm trying to make is that your story should only say what the purpose is, and the parameters should automatically be fit for it. So each story implies that the work to be done is the *simplest possible solution to the problem stated*. If any complication emerges, it's a new story, with a new problem to be solved (never the solution). – Sklivvz Jul 22 '14 at 15:51

In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way

Having undocumented functionality in your product really is a bad practice.

If your development team followed BDD/TDD techniques they *should* (note emphasis) reduce the likelihood of this happening. If you found this edge case, then what makes you think your customer won't? Having an untested and unexpected feature in your product could compromise its stability.

I'd suggest that if an undocumented feature is found:

  1. Find out how it was introduced (a common reason: a developer thought it might be useful in the future and didn't want to throw away work they'd already produced!)
  2. Discuss the feature with your Business Analysts and Product Owner. Find out if they want such a feature in your product. If they do, great: document and test it. If they don't, remove it, as it could be a liability.

You also had a question regarding the tracking of the outcome of these edge-case scenarios:

What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.

As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out of scope, you can note why the change was made in a version-change section of the document. You can refer back to this change history at a later date.
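
For instance, such a version-change entry might look something like this (the feature, dates, and rationale are invented for illustration):

```text
Version history
---------------
1.4  2014-07-22  Removed the "import from spreadsheet" scenario from
                 scope: no customer has asked for it, and handling
                 partial imports safely would double the estimate.
                 Decision taken at the 2014-07-21 planning meeting.
```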

However, I'd recommend using a planning board to keep track of your user stories. On such a board you could write a note on the card (virtual or physical) explaining why the feature was dropped, which could also be referred to at a later date.

– Ben Smith