I am trying to understand the Large-scale Linear Models with TensorFlow documentation. The docs motivate these models as follows:
> Linear models can be interpreted and debugged more easily than neural nets. You can examine the weights assigned to each feature to figure out what's having the biggest impact on a prediction.
So I ran the extended code example from the accompanying TensorFlow Linear Model Tutorial. In particular, I ran the example code from GitHub with the `model_type` flag set to `wide`. It ran correctly and produced `accuracy: 0.833733`, close to the `accuracy: 0.83557522` reported on the TensorFlow web page.
The example uses a `tf.estimator.LinearClassifier` to train the weights. However, despite the quoted motivation about being able to examine the weights, I can't find any function in the `LinearClassifier` documentation for actually extracting the trained weights.
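For reference, here is a stripped-down sketch of the kind of setup I mean; the feature columns and `model_dir` below are placeholders of my own, not the tutorial's actual census columns:

```python
import tensorflow as tf

# Placeholder feature columns standing in for the tutorial's census features.
age = tf.feature_column.numeric_column("age")
education = tf.feature_column.categorical_column_with_vocabulary_list(
    "education", ["Bachelors", "HS-grad", "Masters", "Doctorate"])

# The "wide" model: a plain LinearClassifier over the feature columns.
classifier = tf.estimator.LinearClassifier(
    feature_columns=[age, education],
    model_dir="/tmp/wide_model")

# Training happens via an input_fn, as in the tutorial:
# classifier.train(input_fn=train_input_fn, steps=2000)
```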
Question: how do I access the trained weights for the various feature columns of a `tf.estimator.LinearClassifier`? Ideally I'd like to extract all the weights into a single NumPy array.
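Digging around, the base `Estimator` class does expose `get_variable_names()` and `get_variable_value()`, but I'm not sure whether that is the intended route, or how the internal variable names map onto the feature columns. Roughly what I'm hoping for is something like the sketch below (the variable-name filter is guesswork on my part):

```python
import numpy as np

# Inspect which variables the trained model actually stores.
for name in classifier.get_variable_names():
    print(name, classifier.get_variable_value(name).shape)

# Ideally, collect all learned weights into one flat NumPy array.
all_weights = np.concatenate(
    [classifier.get_variable_value(name).ravel()
     for name in classifier.get_variable_names()
     if name.endswith("weights")])
```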
Note: I am coming from an R environment, where linear regression / classification models have a `coef()` method for extracting the learned weights. I want to be able to compare linear models fitted in R and TensorFlow on the same datasets.