
I am trying to understand the Large-scale Linear Models with TensorFlow documentation. The docs motivate these models as follows:

> Linear models can be interpreted and debugged more easily than neural nets. You can examine the weights assigned to each feature to figure out what's having the biggest impact on a prediction.

So I ran the extended code example from the accompanying TensorFlow Linear Model Tutorial. In particular, I ran the example code from GitHub with the model type flag set to `wide`. It ran correctly and produced `accuracy: 0.833733`, similar to the `accuracy: 0.83557522` reported on the TensorFlow web page.

The example uses a `tf.estimator.LinearClassifier` to train the weights. However, in contrast to the quoted motivation of being able to examine the weights, I can't find any function to actually extract the trained weights in the `LinearClassifier` documentation.

Question: how do I access the trained weights for the various feature columns in a `tf.estimator.LinearClassifier`? I'd prefer to extract all the weights into a NumPy array.

Note: I am coming from an R environment, where linear regression / classification models have a `coef()` method for extracting learned weights. I want to be able to compare linear models in R and TensorFlow on the same datasets.

TemplateRex

1 Answer


After training the model with an Estimator, you can use `tf.train.load_variable` to retrieve the weights from the checkpoint. `tf.train.list_variables` lists the variable names (and shapes) in the checkpoint, so you can find the names of the model weights there.
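For example, a minimal sketch (the checkpoint directory and the variable name shown are assumptions; the actual weight names depend on your feature columns, so list them first):

```python
import tensorflow as tf

# Hypothetical checkpoint directory: wherever your Estimator wrote its checkpoints.
model_dir = "/tmp/census_model"

# List (name, shape) pairs for every variable in the latest checkpoint.
for name, shape in tf.train.list_variables(model_dir):
    print(name, shape)

# Load one variable as a NumPy array. The name below only illustrates the
# naming pattern for a linear model's feature weights; check the listing
# above for the real names in your checkpoint.
weights = tf.train.load_variable(model_dir, "linear/linear_model/education/weights")
print(weights.shape)
```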

There are also plans to add this support to Estimator directly.

J. Xie
  • Thanks, do you have a link to the discussion about adding this to Estimator? – TemplateRex Sep 12 '17 at 14:04
  • This is directly supported in `tf.estimator` now: `wt_names = model.get_variable_names()` `wt_vals = [model.get_variable_value(name) for name in wt_names]` – Sameer Apr 25 '18 at 18:19
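A minimal, self-contained sketch of the approach in the last comment (the single feature column and toy input function are purely illustrative assumptions, not from the tutorial):

```python
import tensorflow as tf

# Hypothetical minimal model: one numeric feature column.
feature_cols = [tf.feature_column.numeric_column("x")]
model = tf.estimator.LinearClassifier(feature_columns=feature_cols)

def input_fn():
    # Toy training data, for illustration only.
    return {"x": tf.constant([[0.0], [1.0], [2.0], [3.0]])}, tf.constant([0, 0, 1, 1])

model.train(input_fn=input_fn, steps=100)

# Each variable value comes back as a NumPy array.
wt_names = model.get_variable_names()
wt_vals = [model.get_variable_value(name) for name in wt_names]
for name, val in zip(wt_names, wt_vals):
    print(name, val)
```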