I am attempting to do some post-processing of the outputs of a multinomial LogisticRegressionWithLBFGS model. The model matrix is created in R and then exported to Scala Spark for model fitting.
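For context, the fitting step looks roughly like this (a minimal sketch; `training` and the class count are placeholders for my actual data):

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Model matrix exported from R, already parsed into labeled points
val training: RDD[LabeledPoint] = ??? // placeholder

val model = new LogisticRegressionWithLBFGS()
  .setNumClasses(3) // e.g. a three-outcome multinomial model
  .run(training)

println(model.weights) // the flat weights vector this question is about
```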
The documentation states that there is "standard feature scaling and L2 regularization". The outputs of the multinomial model from the multinom() function in R's {nnet} package are clearly interpretable as log-odds between a given outcome and a base outcome. However, the documentation does not give enough detail about how the weights from LogisticRegressionWithLBFGS can be transformed to obtain a comparable set of coefficients.
The term "standard feature scaling" means different things to different people: it could mean the model matrix is scaled as (x - mean(x))/sd(x), as (x - min(x))/(max(x) - min(x)), or in some other way. In addition, the weights output is a flat vector whose length is a multiple of the number of features, and it could be folded into a coefficients matrix in different ways, for example by row, by column, or some other ordering (see the sketch below for the layout I am currently guessing at).
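My working assumption, which I have not been able to verify from the docs, is that the weights are laid out as (numClasses - 1) consecutive per-class blocks, each containing the feature weights followed by that class's intercept (if the model was fit with setIntercept(true)). Under that assumption, the folding would look something like this, continuing from the fit above (`numFeatures` is a placeholder for the width of my model matrix):

```scala
val numClasses = 3      // K, as passed to setNumClasses
val numFeatures = 4     // placeholder: columns of the original model matrix
val w = model.weights.toArray

// Assumed layout: one block per non-base class, intercept stored last in each block
val blockSize = numFeatures + 1
val coef: Array[Array[Double]] =
  (0 until numClasses - 1).map { k =>
    w.slice(k * blockSize, (k + 1) * blockSize)
  }.toArray
// coef(k) = (beta_k1, ..., beta_kp, intercept_k) for outcome k + 1 vs. the base?
```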
How do I process the output of LogisticRegressionWithLBFGS().weights to obtain a standard set of coefficients that I can use for post-processing, basic inference, and predictions with the original model matrix?
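Ultimately, I would like to be able to do something like the following with whatever coefficient matrix is correct: recompute class probabilities for a row x of the original model matrix by hand, fixing the base class margin at zero (a sketch of the intended post-processing, again assuming the intercept-last block layout above):

```scala
// Multinomial logistic probabilities from the reshaped coefficients:
// margin for the base class is 0; each other class gets x . beta_k + intercept_k
def probabilities(x: Array[Double], coef: Array[Array[Double]]): Array[Double] = {
  val margins = 0.0 +: coef.map { b =>
    b.last + b.init.zip(x).map { case (bi, xi) => bi * xi }.sum
  }
  val expM = margins.map(math.exp)
  val total = expM.sum
  expM.map(_ / total) // softmax over (base, class 1, ..., class K - 1)
}
```

If the weights are internally scaled and not transformed back, I would presumably also need to undo the scaling before this step, which is the part the documentation leaves unclear.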