
I am defining a custom layer as the last layer of my network. There I need to convert a tensor (the input one) into a NumPy array to define a function on it. In particular, I want to define my last layer similarly to this:

import tensorflow as tf
from tensorflow.keras import layers

def hat(x):
  A = tf.constant([[0.,-x[2],x[1]],[x[2],0.,-x[0]],[-x[1],x[0],0.]])
  return A

class FinalLayer(layers.Layer):
  def __init__(self, units):
    super(FinalLayer, self).__init__()
    self.units = units
  

  def call(self, inputs):
    p = tf.constant([1.,2.,3.])
    q = inputs.numpy()  # this call only works in eager mode
    p = tf.linalg.matvec(hat(q), p)  # matrix-vector product; tf.matmul would need rank-2 inputs
    return p

The weights do not matter for my question, since I know how to manage them. The problem is that this layer works perfectly in eager mode, but with that option the training phase is too slow. My question is: is there a way to implement this layer without eager mode? Alternatively, can I access the individual components x[i] of a tensor without converting it into a NumPy array?

Dadeslam

1 Answer


You can rewrite your `hat` function a bit differently so that it accepts a Tensor instead of a NumPy array. For example:

def hat(x):
  # use rank-1 slices like x[2:3] because tf.concat cannot
  # concatenate scalars (tf.stack would be needed for rank-0 pieces)
  zero = tf.zeros((1,))
  A = tf.concat([zero, -x[2:3], x[1:2],
                 x[2:3], zero, -x[0:1],
                 -x[1:2], x[0:1], zero], axis=0)
  return tf.reshape(A, (3, 3))

This results in:

>>> p = tf.constant([1.,2.,3.])
>>> hat(p)
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 0., -3.,  2.],
       [ 3.,  0., -1.],
       [-2.,  1.,  0.]], dtype=float32)>
Lescurel
  • Great, thank you very much. Unfortunately, I am learning TensorFlow just for what I need for my research, and hence I lack some basics. I think mine was a basic question. Do you have any texts to suggest for learning it more properly? – Dadeslam Feb 01 '21 at 09:03
  • You can read the [TensorFlow guide about Tensors](https://www.tensorflow.org/guide/tensor). But keep in mind that you will have to use the framework to understand a bit better how it works and what its shortcomings are. In this case, you can't easily create a `tf.constant` from tensor slices, which is why it's easier to use an op like `concat` rather than creating a new `Tensor` object. – Lescurel Feb 01 '21 at 09:42
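
Side note: `tf.concat` refuses rank-0 inputs (its error message points to `tf.stack`), so an equivalent formulation can keep the plain scalar indexing from the question. A sketch under that assumption, with `hat_stack` as a hypothetical name:

def hat_stack(x):
  # tf.stack joins rank-0 tensors into a single rank-1 tensor,
  # so scalar indexing like x[2] works without slicing
  zero = tf.zeros(())
  A = tf.stack([zero, -x[2], x[1],
                x[2], zero, -x[0],
                -x[1], x[0], zero])
  return tf.reshape(A, (3, 3))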