
I define a custom loss function in TensorFlow 1.9.0 (I can't upgrade due to project restrictions). I have the following variables, obtained after an eigenvalue decomposition:

# eigw.shape = (?, x)
# eigv.shape = (?, x, y)

Now, I want to calculate the argmax of eigw, such that

amax = tf.argmax(eigw, axis=1, output_type=tf.int32)
# amax.shape = (?,)

I want to index eigv with the values given in amax, such that

# result.shape = (?, y)

How do I achieve that? I tried indexing directly, but ran into the issue of the shapes not having equal rank. I also tried using tf.while_loop, but I'm new to TF and was not successful with that either.

What other options do I have? How do I solve that problem most easily?

Thanks

Nico

2 Answers


In your specific case you can use a TensorFlow function that gathers the max value along the axis directly, rather than the index.

max_value = tf.math.reduce_max(eigw, axis=1)

You can find the other parameters in the documentation. As the TF 1.9 documentation is no longer on tensorflow.org, the closest I could find is r1.15, which still uses static graphs: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_max

  • Thanks for the answer. Unfortunately, I want to index the eigenvecs and not the eigenvals. So your answer didn't provide a solution. – Nico Feb 21 '20 at 17:36
  • Maybe you can use tf.gather_nd: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gather_nd . – Piero Esposito Feb 22 '20 at 18:23
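The tf.gather_nd suggestion from the comment can be turned into a working pattern: pair each batch index with its argmax and gather once. Below is a minimal sketch of that indexing in NumPy (the shapes are small stand-ins for the question's `(?, x)` and `(?, x, y)` tensors; the TF 1.x lines it mirrors are shown in the comments):

```python
import numpy as np

# Toy stand-ins for eigw (?, x) and eigv (?, x, y)
rng = np.random.default_rng(0)
eigw = rng.normal(size=(4, 3))        # (?, x)
eigv = rng.normal(size=(4, 3, 5))     # (?, x, y)

# amax = tf.argmax(eigw, axis=1, output_type=tf.int32)
amax = np.argmax(eigw, axis=1)        # (?,)

# In TF 1.x one would build (batch, amax) index pairs and gather once:
#   batch = tf.range(tf.shape(eigw)[0])
#   idx = tf.stack([batch, amax], axis=1)   # (?, 2)
#   result = tf.gather_nd(eigv, idx)        # (?, y)
batch = np.arange(eigw.shape[0])
result = eigv[batch, amax]            # (?, y), here (4, 5)
```

Each row of `result` is the slice of `eigv` selected by that batch element's argmax, which is exactly what `tf.gather_nd` computes for indices of shape `(?, 2)`.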

The solution was simpler than anticipated. After taking a deeper look into the documentation I found that the eigenvalues are already "[s]orted in non-decreasing order", so I just had to take the eigenvector belonging to the last one. Thanks to everyone for the contributions.
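This shortcut can be checked directly: np.linalg.eigh behaves like TF's self_adjoint_eig in that it returns eigenvalues in non-decreasing order with the eigenvectors as columns, so the eigenvector for the largest eigenvalue is simply the last column (a sketch with a toy symmetric batch; in TF 1.x the corresponding slice would be `eigv[:, :, -1]`):

```python
import numpy as np

# Build a small batch of symmetric matrices as a stand-in input
rng = np.random.default_rng(1)
a = rng.normal(size=(4, 3, 3))
sym = (a + np.transpose(a, (0, 2, 1))) / 2

# np.linalg.eigh, like tf.self_adjoint_eig, sorts eigenvalues ascending
# and stores eigenvectors as columns: eigv[..., :, i] belongs to eigw[..., i]
eigw, eigv = np.linalg.eigh(sym)     # shapes (4, 3) and (4, 3, 3)

# Largest eigenvalue is last, so its eigenvector is the last column
top_vec = eigv[:, :, -1]             # shape (4, 3)

# Sanity check: matches the explicit argmax-based gather
amax = np.argmax(eigw, axis=1)
batch = np.arange(eigw.shape[0])
assert np.allclose(top_vec, eigv[batch, :, amax])
```

Note the column convention: the eigenvector sits in the last axis's final slot, so the slice is `[:, :, -1]`, not `[:, -1, :]`.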

Nico