I'm writing a simple application that takes in the likelihood of a risk occurring and the severity of its outcome, and translates those into a rating of how bad the risk is, based on a risk matrix.
My current solution is to translate the likelihood and consequence into numbers from 1 to 5, multiply them together, and then run the product through a chain of if/else statements to get the risk rating. My problem is that some likelihood/consequence combinations need a different rating than their product would suggest, so I have to special-case them individually, which feels like a bit of a cop-out.
var likelihoods = {
    "Rare": 1,
    "Unlikely": 2,
    "Possible": 3,
    "Likely": 4,
    "Almost Certain": 5
};
var consequences = {
    "Minor": 1,
    "Moderate": 2,
    "Major": 3,
    "Severe": 4,
    "Extreme": 5
};
var likelihood = likelihoods[item.likelihood];
var consequence = consequences[item.consequence];
var riskVal = likelihood * consequence;
var riskRating;
// Cells the product ranges get wrong, handled individually
if (likelihood == 2 && consequence == 2) { riskRating = 1; }      // Low, though 2*2 = 4 would fall in Medium
else if (likelihood == 1 && consequence == 5) { riskRating = 3; } // High, though 1*5 = 5 would fall in Medium
else if (likelihood == 2 && consequence == 5) { riskRating = 4; } // Critical, though 2*5 = 10 would fall in High
else if (riskVal <= 3) { riskRating = 1; }  // Low risk
else if (riskVal <= 6) { riskRating = 2; }  // Medium risk
else if (riskVal <= 12) { riskRating = 3; } // High risk
else { riskRating = 4; }                    // Critical risk
What's a cleaner way to do this?
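One idea I've been toying with is to drop the multiplication entirely and encode the matrix as a direct 2D lookup. A rough sketch of what I mean is below, with the rating values copied from what my if/else currently produces; would that be considered cleaner, or is there a more standard pattern for this?

// Rough sketch: the whole risk matrix as a 2D lookup table.
// Rows are likelihood (1-5), columns are consequence (1-5);
// the values mirror the output of my current if/else chain.
var riskMatrix = [
    [1, 1, 1, 2, 3], // Rare
    [1, 1, 2, 3, 4], // Unlikely
    [1, 2, 3, 3, 4], // Possible
    [2, 3, 3, 4, 4], // Likely
    [2, 3, 4, 4, 4]  // Almost Certain
];
var riskRating = riskMatrix[likelihood - 1][consequence - 1];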