As User Stories are completed throughout a Sprint, the amount of actual work required can be tracked as a metric. In some cases, the actual work will be either greater or less than the original User Story Point estimate.
In those cases, the developer will need to enter a number that is either above or below the original estimate.
During planning, teams use a User Story Point scale to estimate the effort for each User Story. Common scales include powers of 2 (1, 2, 4, 8), the Fibonacci sequence (1, 2, 3, 5, 8, etc.), and similar progressions.
The purpose of these scales is to reflect the level of uncertainty in estimating how much effort a task will take as tasks grow larger. For example, for a small task, such as reading an email, you can estimate very accurately how long it will take you to read it; the level of uncertainty is small. But as the size of a task increases, e.g. replying to 50 different emails, it becomes more difficult to know exactly how much effort it will take; the uncertainty in your estimates grows exponentially.
I have been reading and browsing for a while, trying to answer the following question without much success:
After completing a User Story, the amount of actual work required differs from the original estimate. Since the uncertainty is now gone, should the actual work be recorded as a value on the User Story Point scale? Or is there freedom to use more precise values, now that the implementor knows exactly how much effort the User Story required?
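To make the first option concrete, here is a minimal Python sketch; the scale values and the `snap_to_scale` helper are hypothetical, purely for illustration:

```python
# Hypothetical helper: snap a measured actual effort back onto a
# story point scale (modified Fibonacci here), as the first option
# in the question would require.
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(actual: float, scale=FIBONACCI_SCALE) -> int:
    """Return the scale value closest to the measured actual effort."""
    return min(scale, key=lambda point: abs(point - actual))

print(snap_to_scale(6))  # -> 5: a measured effort of 6 is lost
```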
My reasoning is that, by tracking actual work with more precise values than those provided by the scale (Fibonacci or otherwise), the team gets a more accurate metric, which will improve their velocity calculations over the mid to long term.
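A small example of the effect on velocity (all numbers hypothetical): if velocity is computed as the sum of points completed per Sprint, snapping actuals back to the scale coarsens the metric compared with using the precise values.

```python
# Hypothetical Sprint data: precise actual effort per completed story,
# versus the same values snapped to the modified Fibonacci scale
# (i.e. snap_to_scale from the sketch above applied to each value).
precise_actuals = [2.6, 4.2, 7.5, 1.4]
snapped_actuals = [3, 5, 8, 1]

print(sum(precise_actuals))  # 15.7 -> velocity from precise tracking
print(sum(snapped_actuals))  # 17   -> velocity constrained to the scale
```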