I have a matrix populated by double values (from 0 to 1). For convenience, let's talk about rows and columns. I want to normalize the matrix so that, for each column, the values across all rows sum to 1. This runs into a floating point precision issue: using double, the column sums never come out as exactly 1. So I tried using BigDecimal, but the result still differs slightly.
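To make the issue concrete, here is a minimal, self-contained sketch for a single column (the length and names are only for illustration, not from my real program): after dividing every entry by the column sum, re-summing the normalized values usually lands very close to 1.0, but not necessarily exactly at 1.0.

import java.util.Random;

public class ColumnNormalizationDemo {
    public static void main(String[] args) {
        Random random = new Random();

        // One column of random values (length chosen only for the demo)
        double[] column = new double[6];
        double sum = 0.0;
        for (int i = 0; i < column.length; i++) {
            column[i] = random.nextDouble();
            sum += column[i];
        }

        // Divide each entry by the column sum, then re-sum the result
        double check = 0.0;
        for (int i = 0; i < column.length; i++) {
            column[i] /= sum;
            check += column[i];
        }

        // Usually very close to 1.0, but often not exactly 1.0
        System.out.println(check);
    }
}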
Here is my code:
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;

// 6 rows x 1015 columns of random values in [0, 1)
double[][] U = new double[6][1015];
double[] sumPerCol = new double[U[0].length]; // one accumulator per column (db.size()+1 in my original code)
Random random = new Random();

// Fill the matrix and accumulate each column's sum
for (int i = 0; i < U.length; i++) {
    for (int j = 0; j < U[i].length; j++) {
        double x = random.nextDouble();
        U[i][j] = x;
        sumPerCol[j] += x;
    }
}

// Normalize each entry by its column sum using BigDecimal,
// then re-sum the normalized values per column
double[] sumPerCol2 = new double[U[0].length];
for (int i = 0; i < U.length; i++) {
    for (int j = 0; j < U[i].length; j++) {
        BigDecimal x = new BigDecimal(U[i][j], MathContext.DECIMAL128);
        BigDecimal tot = new BigDecimal(sumPerCol[j], MathContext.DECIMAL128);
        BigDecimal x2 = x.divide(tot, MathContext.DECIMAL128);
        U[i][j] = x2.floatValue(); // converted back via floatValue()
        sumPerCol2[j] += U[i][j];
    }
}

// Each printed column sum should ideally be exactly 1
for (double d : sumPerCol2) {
    System.out.println(d);
}
For sure, I'm not using BigDecimal properly. Can anyone help?