I am working on designing a new sensor, so I have a vector of measured values and a vector of truth values. The error is simply measured - truth. Since there's a lot of variation in the truth, I would like to represent the normalized error. My initial thought was error./truth to get percent error, but there are many cases where my truth value is zero! Can anyone think of a better way to represent the normalized data while avoiding the divide-by-zero? I'm working in Matlab, though the question is somewhat language-agnostic as well.
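For concreteness, here's a minimal sketch of the setup (the data values are made up for illustration; I use err rather than error so as not to shadow MATLAB's built-in error function):

    % Illustrative data only -- these values are hypothetical
    truth    = [0 1 2 0 4];
    measured = [0.1 1.2 1.9 -0.2 4.3];

    err = measured - truth;   % raw error
    pct = err ./ truth;       % percent error: +/-Inf where truth == 0, NaN for 0/0

MATLAB doesn't actually throw on the division; it silently produces Inf (or NaN) wherever truth is zero, which is exactly the garbage I'm trying to avoid in the normalized result.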
PS: feel free to migrate this to another Stack Exchange site if you think it's better suited there.