I have the following code, which reduces a rank-1 tensor to its maximum element (a rank-0 tensor) and then broadcasts that maximum back out to the full length of the rank-1 tensor, so I can use it in further computations involving the original tensor:
//reduces a rank 1 tensor to a rank 0 tensor.
Tensor<double,0> columnmaximum = input_tensor.maximum(this->imposed_dim).eval();
std::cout << "colmax is\n" << columnmaximum << std::endl;
this->columnbroadcast = Eigen::array<int,1> ({M});
this->rank1base = Eigen::array<int,1> ({1});
//expands it back out to a full column of length M.
Tensor<double,1> columnmaximum_rk2 = columnmaximum.reshape(this->rank1base).broadcast(this->columnbroadcast);
std::cout << "colmaxrk2 is\n" << columnmaximum_rk2 << std::endl;
and noticed the following strange output:
colmax is
-2
colmaxrk2 is
-2
0.238402
3.91433e-310
-3.33086
-2
Something went wrong when broadcasting. My idea was to elevate the rank-0 tensor to a rank-1 tensor (of length one), and then broadcast along that single dimension to replicate the maximum as many times as needed, so I can subtract it from something else.
What is going wrong here with those three numbers in between when printing the enlarged tensor? I know that in this special case I could use the setConstant method, but I would like to use the reshape-then-broadcast trick for higher-rank tensors too, where replicating a summary statistic is less trivial, e.g. reducing a rank-2 tensor.
Can anyone explain where these nonsensical numbers come from? Am I making a basic mistake? The vanishingly small number (3.91433e-310) looks a bit like uninitialized memory to me.
Thank you so much!