
I want to train a DNN model on training data with more than one billion feature dimensions, so the shape of the first layer's weight matrix will be (1,000,000,000, 512). This weight matrix is too large to be stored on one box.

As of now, is there any solution for dealing with such large variables, for example partitioning the large weight matrix across multiple boxes?

Update:

Thanks Olivier and keveman. Let me add more detail about my problem. The examples are very sparse and all features are binary values: 0 or 1. The parameter weight looks like tf.Variable(tf.truncated_normal([1000000000, 512], stddev=0.1))

The solutions keveman gave seem reasonable, and I will update with results after trying them.

Hanbin Zheng
  • Is there any structure in your data? For instance is it a time series with data each day/second? – Olivier Moindrot Jul 13 '16 at 15:06
  • Every example in the data is a sparse tensor with fewer than 1000 non-zero values. The weight matrix is the connection weight between the input layer and the first hidden layer. Because the size of the first hidden layer is 512, the weight matrix looks like tf.Variable(tf.truncated_normal([1000000000, 512], stddev=0.1)). – Hanbin Zheng Jul 13 '16 at 17:52
  • So this is categorical data? You should update your question with additional info. Anyway, already having 512G parameters in the first layer will prove impossible to optimize, so you need to find another way – Olivier Moindrot Jul 13 '16 at 17:58
  • @HanbinZheng any update? – zipp Mar 08 '18 at 15:27

1 Answer


The answer to this question depends greatly on what operations you want to perform on the weight matrix.

The typical way to handle such a large number of features is to treat the 512-dimensional vector per feature as an embedding. If each example in your data set has only one of the 1 billion features, then you can use the tf.nn.embedding_lookup function to look up the embeddings for the features present in a mini-batch of examples. If each example has more than one feature, but presumably only a handful of them, then you can use tf.nn.embedding_lookup_sparse to look up the embeddings.
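
A minimal sketch of both lookups (TensorFlow 1.x API; the shapes and the small stand-in vocabulary size are illustrative, not from the question):

```python
import tensorflow as tf

# Stand-in sizes: the real table would be [10**9, 512], which is exactly
# what does not fit on a single machine.
num_features = 1000000
embedding_dim = 512

embeddings = tf.Variable(
    tf.truncated_normal([num_features, embedding_dim], stddev=0.1))

# Case 1: exactly one active feature per example.
feature_ids = tf.placeholder(tf.int64, shape=[None])              # [batch_size]
one_hot_lookup = tf.nn.embedding_lookup(embeddings, feature_ids)  # [batch_size, 512]

# Case 2: a handful of active features per example, given as a SparseTensor of ids.
# sp_weights=None treats every active feature as weight 1.0, which matches binary
# 0/1 features; combiner="sum" adds up the selected rows.
sparse_ids = tf.sparse_placeholder(tf.int64)
multi_hot_lookup = tf.nn.embedding_lookup_sparse(
    embeddings, sparse_ids, sp_weights=None, combiner="sum")      # [batch_size, 512]
```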

In both these cases, your weight matrix can be distributed across many machines. That is, the params argument to both of these functions is a list of tensors. You would shard your large weight matrix and place the shards on different machines. Please look at tf.device and the primer on distributed execution to understand how data and computation can be distributed across many machines.
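
A hedged sketch of that sharding, assuming a cluster with a "ps" job of four tasks (the device names, shard count, and stand-in sizes are assumptions, not from the question). The lookup routes each id to the right shard for you, using the default "mod" partition strategy:

```python
import tensorflow as tf

num_shards = 4           # assumption: four parameter servers
num_features = 1000000   # stand-in for 10**9
embedding_dim = 512

# One shard of the big weight matrix per parameter server.
shards = []
for i in range(num_shards):
    with tf.device("/job:ps/task:%d" % i):
        shards.append(tf.Variable(
            tf.truncated_normal([num_features // num_shards, embedding_dim],
                                stddev=0.1),
            name="embedding_shard_%d" % i))

# The list of shards is passed directly as the params argument.
sparse_ids = tf.sparse_placeholder(tf.int64)
hidden = tf.nn.embedding_lookup_sparse(
    shards, sparse_ids, sp_weights=None, combiner="sum")   # [batch_size, 512]
```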

If you really want to do some dense operation on the weight matrix, say, multiply it with another matrix, that is still conceivable, although there are no ready-made recipes in TensorFlow to handle it. You would still shard your weight matrix across machines, but then you would have to manually construct a sequence of matrix multiplies on the distributed blocks of your weight matrix and combine the results.
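
For completeness, a rough illustration of one such manual scheme, blocking the multiply along the feature dimension (device names and the small stand-in sizes are assumptions; a real dense input of this width would itself be too large to materialize):

```python
import tensorflow as tf

num_shards = 4
features_per_shard = 250000   # stand-in; roughly 10**9 / num_shards in the real problem
embedding_dim = 512
batch_size = 32

# Row blocks of the weight matrix, one per machine.
w_blocks = []
for i in range(num_shards):
    with tf.device("/job:ps/task:%d" % i):
        w_blocks.append(tf.Variable(
            tf.truncated_normal([features_per_shard, embedding_dim], stddev=0.1)))

# Split the dense input the same way along the feature axis, multiply each block
# next to its weights, then sum the partial products: x @ W = sum_i x_i @ W_i.
x = tf.placeholder(tf.float32, [batch_size, num_shards * features_per_shard])
x_blocks = tf.split(x, num_shards, axis=1)
partials = [tf.matmul(x_blocks[i], w_blocks[i]) for i in range(num_shards)]
y = tf.add_n(partials)        # [batch_size, 512]
```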

Olivier Moindrot
keveman