According to this very elaborate answer, I would estimate the maximum relative error δ_res,max of the following computation like this:
// Pseudo code
double a, b, c; // Prefilled IEEE 754 double-precision values
res = a / b * c;
Here δ_a, δ_b, δ_c are the representation errors of the operands, and δ_{a/b}, δ_{(a/b)*c} are the rounding errors introduced by the division and the multiplication:

res = a * (1 + δ_a) / ( b * (1 + δ_b) ) * (1 + δ_{a/b}) * c * (1 + δ_c) * (1 + δ_{(a/b)*c})
    = a / b * c * (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{(a/b)*c})
    = a / b * c * (1 + δ_res)

=> δ_res = (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{(a/b)*c}) - 1
All δs are bounded in magnitude by ε / 2, where ε = 2^-52 is the machine epsilon of IEEE 754 double precision.
=> δ_res,max = (1 + ε / 2)^4 / (1 - ε / 2) - 1 ≈ 5 * ε / 2 = 2.5 * ε

(To first order, each of the four factors in the numerator and the one in the denominator contributes ε / 2; the higher-order terms are negligible.)
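As a quick numerical sanity check, here is a minimal C sketch; the input values are made up, and using long double as a higher-precision reference is an assumption that holds on x86 Linux but not, e.g., with MSVC, where long double equals double:

#include <stdio.h>
#include <stdlib.h>
#include <float.h>
#include <math.h>

int main(void) {
    // Hypothetical inputs: the decimal-to-double conversions contribute
    // the representation errors δ_a, δ_b, δ_c.
    double a = strtod("0.1", NULL);
    double b = strtod("0.3", NULL);
    double c = strtod("0.7", NULL);

    // Two further roundings: δ_{a/b} and δ_{(a/b)*c}.
    double res = a / b * c;

    // Reference computed from the same decimal inputs in extended
    // precision; its own rounding errors (~2^-64) are negligible here.
    long double ref = strtold("0.1", NULL) / strtold("0.3", NULL)
                    * strtold("0.7", NULL);

    long double rel_err = fabsl(((long double)res - ref) / ref);
    double bound = 2.5 * DBL_EPSILON; // DBL_EPSILON == 2^-52 == ε

    printf("relative error: %Le\n", rel_err);
    printf("2.5*eps bound:  %e\n", bound);
    printf("within bound:   %s\n", rel_err <= bound ? "yes" : "no");
    return 0;
}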
Is this a valid approach to error estimation, and can it be applied to every combination of basic floating-point operations?
PS:
Yes, I read "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ;)