I am struggling to understand the correct way to display a measurement from a sensor after some unit conversions.
Imagine I have a sensor with resolution `R = 0.1` that provides a measured value as a float, with an arbitrary number of decimals.
Knowing the sensor's resolution, I can round the measured value before displaying it, so as not to suggest more precision than there actually is, i.e.:
measuredValue = 13.275456
R = 0.1
displayedValue = 13.3
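This first step can be sketched as follows; the function and variable names (`round_to_resolution`, `measured_value`) are illustrative, not anything from an existing API:

```python
# Snap a raw reading to the sensor's resolution R by rounding to the
# nearest multiple of R (for R = 0.1 this is ordinary one-decimal rounding).
def round_to_resolution(value: float, resolution: float) -> float:
    return round(value / resolution) * resolution

measured_value = 13.275456
displayed = round_to_resolution(measured_value, 0.1)
print(f"{displayed:.1f}")  # formatted for display
```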
Now suppose I want to display the measured value in another unit (let's assume the unit conversion consists of multiplying the measured value by the constant `2.3441212`). I know that, in order to avoid rounding errors, I should apply all the resolution corrections at the end, just before displaying the final value. Also, according to the significant-figures rule for multiplication:
The LEAST number of significant figures in any number of the problem determines the number of significant figures in the answer
measuredValue = 13.275456
R = 0.1
newUnitValue = 13.275456 * 2.3441212 = 31.1192778
measuredValue_real_significant_figures = 3
displayedValue = 31.1
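Under that rule, the conversion step could be sketched like this; `round_sig` is an illustrative helper I wrote for this example, not a standard-library function:

```python
import math

# Round a value to a given number of significant figures.
def round_sig(value: float, sig: int) -> float:
    if value == 0:
        return 0.0
    # Number of decimal places needed so that `sig` significant
    # figures are kept, based on the value's order of magnitude.
    return round(value, sig - 1 - math.floor(math.log10(abs(value))))

converted = 13.275456 * 2.3441212  # 31.1192778...
# The rounded reading 13.3 has 3 significant figures, so the
# converted result is rounded to 3 significant figures as well.
print(round_sig(converted, 3))
```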
Now let's make `R = 0.5`. Without unit conversion we have:
measuredValue = 13.275456
R = 0.5
displayedValue = 13.5
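With `R = 0.5` the resolution is no longer a power of ten, so "rounding to the resolution" means snapping to the nearest multiple of R, as in this sketch (again with illustrative names):

```python
# Snap a raw reading to the nearest multiple of the resolution R.
# With R = 0.5, valid displayed values are ..., 13.0, 13.5, 14.0, ...
def round_to_resolution(value: float, resolution: float) -> float:
    return round(value / resolution) * resolution

print(round_to_resolution(13.275456, 0.5))  # 13.5
```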
Things don't look clear to me when the unit conversion is involved. Does the process change in comparison to the one for `R = 0.1`? How is the resolution `R = 0.5` reflected in the unit-converted displayed value?