
I want to generate a random orthogonal matrix to randomly rotate my 3D point cloud data in the data preprocessing stage. (I have found a NumPy implementation in How to create random orthonormal matrix in python numpy.)
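For context, one common NumPy approach (as far as I understand, essentially what the linked rvs() does, up to the exact distribution) is to QR-decompose a random Gaussian matrix; the Q factor is orthogonal. A minimal sketch, with my own helper name random_orthogonal:

```python
import numpy as np

def random_orthogonal(n=3, rng=None):
    """Return a random n x n orthogonal matrix via QR of a Gaussian matrix."""
    rng = np.random.default_rng(rng)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    # Fix the column signs using the diagonal of R so the result is
    # uniformly (Haar) distributed rather than biased
    q *= np.sign(np.diag(r))
    return q
```

A quick sanity check is that `Q.T @ Q` comes out as the identity matrix.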

As is suggested, I'm using TensorFlow's new Dataset API.

1. I have tried directly porting the NumPy code to TensorFlow code, but ran into the problem that TensorFlow does not allow slice assignment (TF does not even allow assign on ordinary tensors).
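To illustrate the slice-assignment issue in NumPy terms: implementations like the linked rvs() typically fill a matrix in place, which has no direct equivalent on TF tensors; the usual workaround is to build the pieces separately and stack them (which does map to tf.stack). A toy sketch:

```python
import numpy as np

# In-place style (works on NumPy arrays, not on TF tensors):
m = np.zeros((3, 3))
for i in range(3):
    m[i, :] = np.eye(3)[i]  # slice assignment

# Stack style (maps directly to tf.stack on tensors):
rows = [np.eye(3)[i] for i in range(3)]
m_stacked = np.stack(rows)
```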

2. I'm trying

    def _rotate_point_cloud_ortho(self, single_data):
        # rvs() is the NumPy random-orthogonal-matrix generator from the linked answer
        rotation_matrix = tf.Variable(tf.constant(rvs(), shape=[3, 3]),
                                      name='rotation_matrix', trainable=False)
        self.initializer = rotation_matrix.initialized_value()
        rotation_matrix.assign(rvs())
        return tf.matmul(single_data, rotation_matrix)

And got

ValueError: Fetch argument <tf.Tensor 'cond/Merge:0' shape=(3, 3) dtype=float32> cannot be interpreted as a Tensor. (Tensor Tensor("cond/Merge:0", shape=(3, 3), dtype=float32) is not an element of this graph.)

3. I'm thinking of using a placeholder for the random orthogonal matrix, but the Dataset API uses map, so it's hard to pass a matrix in.

I cannot figure out where this error comes from.

Other than hard-coding the NumPy implementation into a function that generates 9 tf variables and stacks them into a 3×3 matrix, what else can I try?
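For reference, here is the behaviour I'm ultimately after, written in plain NumPy with a hand-picked orthogonal matrix standing in for the random rvs() output: multiplying the point cloud by an orthogonal matrix should preserve all point norms, and the rotation should be invertible via the transpose.

```python
import numpy as np

# A hand-picked orthogonal matrix (90-degree rotation about the z axis),
# standing in for the random rvs() output.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

points = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])  # N x 3 point cloud

rotated = points @ R  # same shape as tf.matmul(single_data, rotation_matrix)
```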

The code for my current API:

    class PointCloudDataGenerator(object):

    def __init__(self, tfrecord_path, mode, batch_size,
                  buffer_size=9824):
        # create dataset
        self.initializer = None
        data = tf.contrib.data.TFRecordDataset(tfrecord_path)

        # distinguish between train/infer when calling the parsing functions
        if mode == 'training':
            data = data.map(self._parse_function_train, num_threads=4, output_buffer_size=100*batch_size)

        elif mode == 'inference':
            data = data.map(self._parse_function_inference, num_threads=4, output_buffer_size=100*batch_size)
        else:
            raise ValueError("Invalid mode '%s'." % mode)

        # number of samples in the dataset
        self.data_size = 9840 if mode == 'training' else 2468

        # shuffle the first `buffer_size` elements of the dataset
        if mode == 'training':
            data = data.shuffle(buffer_size=buffer_size)

        # create a new dataset with batches of images
        data = data.batch(batch_size)
        self.data = data
Xiuye Gu