
I have a model, e.g.

model = keras.Sequential([
    keras.layers.Reshape(target_shape=(10,10,1),input_shape=(100,)),
    keras.layers.Convolution2DTranspose(1, 3, activation='relu')
])

After it's trained, I would like to compute only a subset of the outputs, e.g.

out = model(x)[:,3,5]

Is there a way to do this efficiently so that I'm not computing all of the outputs? Ideally, I'd like to define a new model that takes x and the output indices and computes only those, e.g.

out = new_model(x,out_indices) 
lab_rat
  • With this model, you get a [None,12,12,1] tensor. Which dimensions do you want to do indexing on? – thushv89 Dec 03 '19 at 20:24
  • The output will be [batch_size, h,w,depth]. I will create a list out_indices [(3,5,4), (2,4,8),...]. I want the model to only run the computation that's needed for these output indices. Is there some way that I can trace these outputs and the make the model do sparse computation? – lab_rat Dec 03 '19 at 22:57

1 Answer


You can do the following.

This is your first model. Note that I removed the Reshape layer and directly specified the input_shape for the Convolution2DTranspose layer.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Convolution2DTranspose(1, 3, activation='relu', input_shape=(10, 10, 1))
])
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()

This is probably the bit you're interested in. You get a (None, 12, 12, 1) output from the previous model. Here, you pass a batch of 4-dimensional indices (one element for each dimension of the previous model's output).

import tensorflow as tf

# Each row of inp is one full (batch, h, w, depth) index into model.output
inp = layers.Input(shape=(4,), dtype='int32')
out = layers.Lambda(lambda x: tf.gather_nd(model.output, x))(inp)
model2 = models.Model(inputs=[inp, model.input], outputs=out)
model2.compile(loss='mean_squared_error', optimizer='adam')
model2.summary()

Now you can get the values of any indices you pass to the model.

import numpy as np

x = np.random.normal(size=(1, 10, 10, 1))
# Two indices: element (0,0,0) and element (1,1,0) of the first batch item
ind = np.array([[0, 0, 0, 0], [0, 1, 1, 0]], dtype='int32')
y = model2.predict([ind, x])
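For reference, `tf.gather_nd` with full-rank indices behaves like NumPy fancy indexing over every axis. A minimal NumPy sketch of what the gather above computes (the array here is a stand-in for the model output):

```python
import numpy as np

# Simulated (batch, h, w, depth) output, filled with distinct values
out = np.arange(1 * 12 * 12 * 1).reshape(1, 12, 12, 1)

# Each row is one full 4-D index (batch, h, w, depth),
# matching what model2 expects in `ind`
ind = np.array([[0, 0, 0, 0], [0, 1, 1, 0]])

# Equivalent of tf.gather_nd(out, ind): index each axis with a column
gathered = out[ind[:, 0], ind[:, 1], ind[:, 2], ind[:, 3]]
print(gathered)  # one scalar per index row
```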
thushv89
  • Thank you for responding. I get this error when I run the code. **ValueError: Data cardinality is ambiguous: x sizes: 2, 1 Please provide data which shares the same first dimension.** – lab_rat Dec 06 '19 at 00:51
  • Can you tell me in which line you get the error and which TF version you're using? (I used TF 1.15.) – thushv89 Dec 06 '19 at 02:05
  • Thank you for the help. The code works in 1.15. However, I don't think the code is efficiently computing the output. The run-times are pretty much the same as when I compute everything and then index into the array. I bumped the input shape to (1000,1000) to see this. Your code should be way faster than computing everything and then indexing into the output. – lab_rat Dec 06 '19 at 19:24
  • @lab_rat So what do you mean by `when I compute everything and then index into the array`. – thushv89 Dec 06 '19 at 20:57
  • I mean out = model.predict(x)[inds] – lab_rat Dec 06 '19 at 21:27
  • Unfortunately, I don't expect it to be faster than that. As far as I know, `gather_nd` is a slow operation (Some evidence [here](https://stackoverflow.com/questions/46048235/indexing-in-tensorflow-slower-than-gather)). So if you want performance, I suggest using slicing instead of narrow indexing. – thushv89 Dec 06 '19 at 21:34
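The slicing suggestion in the last comment can be sketched in NumPy terms: when the wanted outputs form a contiguous block, one basic slice (which returns a view, with no per-element gather) yields the same values as row-by-row fancy indexing. The shapes and index values below are illustrative only:

```python
import numpy as np

# Stand-in for a large (batch, h, w, depth) model output
out = np.random.normal(size=(1, 1000, 1000, 1))

# Per-element gather of a 2x2 patch (what gather_nd-style indexing does)
ind = np.array([[0, 3, 5, 0], [0, 3, 6, 0], [0, 4, 5, 0], [0, 4, 6, 0]])
gathered = out[ind[:, 0], ind[:, 1], ind[:, 2], ind[:, 3]]

# Same values via one contiguous slice (a view; no copy, no gather)
patch = out[:, 3:5, 5:7, :]

assert np.array_equal(gathered, patch.reshape(-1))
```

The same idea applies on the TF side: replacing `tf.gather_nd` with a slice such as `model.output[:, 3:5, 5:7, :]` avoids the slow gather op when the indices happen to be contiguous.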