
Say v1 and v2 have the same number of columns. Is it possible in TensorFlow to concat v1 and the transposed version of v2 using broadcast semantics, so that every row of v1 is paired with every row of v2?

For example,

v1 = tf.constant([[1,1,1,1],[3,3,3,3],[5,5,5,5]])
v2 = tf.constant([[2,2,2,2],[4,4,4,4]])

I want to produce something like

[
 [[[1,1,1,1], [2,2,2,2]],
  [[1,1,1,1], [4,4,4,4]]],
 [[[3,3,3,3], [2,2,2,2]],
  [[3,3,3,3], [4,4,4,4]]],
 [[[5,5,5,5], [2,2,2,2]],
  [[5,5,5,5], [4,4,4,4]]]]

that is, with v1 of shape [3, 4] and v2 of shape [2, 4], I would like to write something like

tf.concat([v1, tf.transpose(v2)], axis=0)

and have it produce a tensor of shape [3, 2, 2, 4].

Is there any trick for doing that?

walkerlala

2 Answers


If by "trick" you mean an elegant solution, I don't think there is one. However, a working solution is to tile and repeat the incoming v1 and v2:

import tensorflow as tf

v1 = tf.constant([[1, 1, 1, 1],
                  [3, 3, 3, 3],
                  [7, 7, 7, 7],
                  [5, 5, 5, 5]])
v2 = tf.constant([[2, 2, 2, 2],
                  [6, 6, 6, 6],
                  [4, 4, 4, 4]])


def my_concat(v1, v2):
    v1_m, v1_n = v1.shape.as_list()
    v2_m, v2_n = v2.shape.as_list()

    # repeat every row of v1 v2_m times -> [v1_m * v2_m, v1_n]
    v1 = tf.concat([v1 for i in range(v2_m)], axis=-1)
    v1 = tf.reshape(v1, [v2_m * v1_m, -1])

    # tile v2 as a whole v1_m times -> [v1_m * v2_m, v2_n]
    v2 = tf.tile(v2, [v1_m, 1])

    # pair the rows up and split into [v1_m, v2_m, 2, n] (n == v1_n == v2_n)
    v1v2 = tf.concat([v1, v2], axis=-1)
    return tf.reshape(v1v2, [v1_m, v2_m, 2, v2_n])


with tf.Session() as sess:
    ret = sess.run(my_concat(v1, v2))

    print(ret.shape)
    print(ret)
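
For reference, the same tile-and-repeat idea can be written more compactly in eager mode; here is a minimal sketch assuming TensorFlow 2.x (tf.repeat is not available in older 1.x releases):

import tensorflow as tf  # assumes TF 2.x eager execution

v1 = tf.constant([[1, 1, 1, 1], [3, 3, 3, 3], [5, 5, 5, 5]])
v2 = tf.constant([[2, 2, 2, 2], [4, 4, 4, 4]])

# repeat each row of v1 once per row of v2, tile v2 once per row of v1,
# then pair the rows along a new axis of size 2
left = tf.repeat(v1, repeats=tf.shape(v2)[0], axis=0)  # [3*2, 4]
right = tf.tile(v2, [tf.shape(v1)[0], 1])              # [3*2, 4]
paired = tf.stack([left, right], axis=1)               # [3*2, 2, 4]
result = tf.reshape(paired, [3, 2, 2, 4])

print(result.shape)  # (3, 2, 2, 4)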
Patwie

Here are two attempts at more elegant solutions to this Cartesian product problem (both tested). The first one uses tf.map_fn():

import tensorflow as tf

v1 = tf.constant([[1, 1, 1, 1],
                  [3, 3, 3, 3],
                  [5, 5, 5, 5]])
v2 = tf.constant([[2, 2, 2, 2],
                  [4, 4, 4, 4]])

cartesian_product = tf.map_fn(lambda x: tf.map_fn(lambda y: tf.stack([x, y]), v2), v1)

with tf.Session() as sess:
    print(sess.run(cartesian_product))
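
(A note on this variant: the nested tf.map_fn evaluates the inner tf.stack once per pair of rows, so for large inputs the purely broadcast-based version below is usually faster.)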

The second one takes advantage of the implicit broadcasting of addition:

import tensorflow as tf

v1 = tf.constant([[1, 1, 1, 1],
                  [3, 3, 3, 3],
                  [5, 5, 5, 5]])
v2 = tf.constant([[2, 2, 2, 2],
                  [4, 4, 4, 4]])

# v1 becomes [3, 1, 1, 4] and v2 becomes [1, 2, 1, 4]; adding zeros of the
# other operand's shape broadcasts both to [3, 2, 1, 4]
v1, v2 = v1[:, None, None, :], v2[None, :, None, :]
cartesian_product = tf.concat([v1 + tf.zeros_like(v2),
                               tf.zeros_like(v1) + v2], axis=2)

with tf.Session() as sess:
    print(sess.run(cartesian_product))

both output:

[[[[1 1 1 1]
   [2 2 2 2]]

  [[1 1 1 1]
   [4 4 4 4]]]

 [[[3 3 3 3]
   [2 2 2 2]]

  [[3 3 3 3]
   [4 4 4 4]]]

 [[[5 5 5 5]
   [2 2 2 2]]

  [[5 5 5 5]
   [4 4 4 4]]]]

as desired.
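
For completeness, the zero-addition trick in the second snippet can also be written with explicit broadcasting; here is a minimal sketch relying on tf.broadcast_to (available from TensorFlow 1.12 onward):

import tensorflow as tf

v1 = tf.constant([[1, 1, 1, 1], [3, 3, 3, 3], [5, 5, 5, 5]])
v2 = tf.constant([[2, 2, 2, 2], [4, 4, 4, 4]])

# broadcast both operands to the common [3, 2, 1, 4] shape explicitly,
# then concatenate along the pairing axis
left = tf.broadcast_to(v1[:, None, None, :], [3, 2, 1, 4])
right = tf.broadcast_to(v2[None, :, None, :], [3, 2, 1, 4])
cartesian_product = tf.concat([left, right], axis=2)  # [3, 2, 2, 4]

with tf.Session() as sess:
    print(sess.run(cartesian_product).shape)  # (3, 2, 2, 4)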

Peter Szoldan