
I'm facing the following estimation problem:

When estimating the effort of development tasks for fixed-budget projects, we use three-point estimation per story to derive PERT values. The scope of a project is thus defined by the aggregate PERT value, plus a confidence margin if chosen.
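For concreteness, the aggregate-PERT-plus-confidence-margin calculation can be sketched like this (the story values and the 95% confidence level are hypothetical illustrations, not figures from the project):

```python
import math

# Hypothetical three-point estimates per story: (optimistic, most likely,
# pessimistic), in person-days.
stories = [
    (2.0, 4.0, 8.0),
    (1.0, 2.0, 5.0),
    (3.0, 5.0, 12.0),
]

def pert_mean(o, m, p):
    """Standard PERT (beta) expected value."""
    return (o + 4 * m + p) / 6

def pert_sd(o, m, p):
    """Standard PERT standard deviation."""
    return (p - o) / 6

total_mean = sum(pert_mean(*s) for s in stories)
# Variances add across (assumed independent) stories; standard deviations do not.
total_sd = math.sqrt(sum(pert_sd(*s) ** 2 for s in stories))

# Confidence margin: one-sided ~95% using z ~= 1.645 (a common choice).
scope = total_mean + 1.645 * total_sd
print(f"aggregate PERT: {total_mean:.1f}d, sigma: {total_sd:.2f}d, "
      f"scope with margin: {scope:.1f}d")
```

The margin is driven by the spread between optimistic and pessimistic values, so wide-ranging stories automatically contribute more uncertainty than tight ones.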

Because we have found that completing stories often takes more effort than estimated, a bug-fix buffer is added to give the team more time to deliver at a good level of quality.

My biggest concerns with this approach: 1. Separating bug fixing from story delivery seems to soften the strict definition of done, i.e. it becomes acceptable to sign off a story even though it has bugs, because there is extra time to fix them later. 2. Judging the impact of complexity factors to establish the buffer: what guarantees that the allocated percentage buffer is adequate?

Is there a better way to calibrate estimates for quality and complexity?

knarF

1 Answer

There are two kinds of bugs to consider:

  • Bugs that show things were not actually done when stories were claimed done. A solution might be to extend your definition of done to include more testing.
  • Bugs that come as regressions (things were done and verified at the time, but turn out to be broken later). A solution might be to extend your definition of done to automate all new tests and to run all available existing tests before claiming a story done.

Both solutions above will decrease your velocity, but you will be able to estimate better and more reliably using the new, lower velocity, because issues (bugs) won't come back unexpectedly and disturb your estimation graphs, velocity, and predictability.

  • Consider splitting stories into multiple smaller ones. I believe it is OK to have a feature completed (claimed done) with known limitations (not bugs) and to continue reducing those limitations in the scope of other stories. This is in addition to the first two points above; it won't help if bugs are out of control.
Andrew
  • Hi Andrew, I very much agree with your sentiment: your suggestions will yield a lower, more realistic velocity. The issue I have, aside from recognising the real rate of productivity, is how to turn this into a planning assumption that drives forecasting against a target budget. One could adjust the estimates up by an observed factor, or reduce the available capacity by a blanket "bug fix allocation". The latter seems to suggest that low initial quality is acceptable and that there is an opportunity to get things sorted out from a separate 'bucket', which very much runs counter to agile best practices. – knarF Jul 03 '14 at 15:13
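The first option the comment mentions, adjusting estimates up by an observed factor rather than carving out a separate bug-fix bucket, could be calibrated from history along these lines (the sprint data and the new estimate are hypothetical placeholders):

```python
# Hypothetical historical data per sprint: (estimated story effort,
# actual effort including bug fixing), in person-days.
history = [
    (20.0, 26.0),
    (18.0, 22.0),
    (25.0, 31.0),
]

# Observed inflation factor, pooled across sprints: actual / estimated.
estimated = sum(e for e, _ in history)
actual = sum(a for _, a in history)
factor = actual / estimated

# Apply the factor to the raw estimate of a new batch of stories, so the
# quality work stays inside each story rather than in a separate bucket.
raw_estimate = 40.0  # hypothetical aggregate PERT for the new stories
calibrated = raw_estimate * factor
print(f"inflation factor: {factor:.2f}, calibrated estimate: {calibrated:.1f}d")
```

Recomputing the factor each sprint also gives a feedback signal: if the definition-of-done changes suggested in the answer take hold, the factor should trend back toward 1.0.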