I implemented a SpatialConvolution function by referring to the TensorFlow implementation (which uses Eigen). The TensorFlow implementation is located at SpatialConvolution, and I also found a related answer about it: https://stackoverflow.com/a/58955289/7587433

My implementation is as follows (since my data is row-major, I kept only the row-major half of the code):

// Description: Convolution                                                                          
// Input:                                                                                           
//      - name: input0     type: float     shape: Shape{7680, 15, 200, 1}                           
//      - name: input1     type: float     shape: Shape{5, 200, 1, 200}                             
// Output:                                                                                          
//      - name: output0    type: float     shape: Shape{7680, 11, 1, 200}
void Convolution_float_float_float_cpu_Convolution_270(float* input0, float* input1, float* output0)
{

// Contract dim 1 of the patch matrix with dim 0 of the reshaped kernel.
Eigen::array<Eigen::IndexPair<Eigen::Index>, 1> contract_dims;
contract_dims[0] = Eigen::IndexPair<Eigen::Index>(1, 0);

Eigen::array<Eigen::Index, 4> in_dims({7680, 15, 200, 1});
Eigen::array<Eigen::Index, 4> out_dims({7680, 11, 1, 200});
Eigen::array<Eigen::Index, 4> kernel_dims({5, 200, 1, 200});
// Patch matrix: (batch * out_rows * out_cols) x (filter_rows * filter_cols * in_channels).
Eigen::DSizes<Eigen::Index, 2> pre_contract_dims;
pre_contract_dims[1] = kernel_dims[2] * kernel_dims[1] * kernel_dims[0];
pre_contract_dims[0] = out_dims[1] * out_dims[2];
for (int i = 0; i < 1; ++i) {
  pre_contract_dims[0] *= in_dims[i];
}

// Final NHWC output shape: {batch, out_rows, out_cols, out_channels}.
Eigen::DSizes<Eigen::Index, 4> post_contract_dims;
post_contract_dims[3] = kernel_dims[3];
post_contract_dims[2] = out_dims[2];
post_contract_dims[1] = out_dims[1];
for (int i = 0; i < 1; ++i) {
  post_contract_dims[i] = in_dims[i];
}

// Kernel as a matrix: (filter_rows * filter_cols * in_channels) x out_channels.
Eigen::DSizes<Eigen::Index, 2> new_kernel_dims;
new_kernel_dims[0] = kernel_dims[2] * kernel_dims[1] * kernel_dims[0];
new_kernel_dims[1] = kernel_dims[3];

Eigen::TensorMap<Eigen::Tensor<float, 4, Eigen::RowMajor>>
    in(static_cast<float *>(input0), in_dims),
    out(static_cast<float *>(output0), out_dims),
    kernel(static_cast<float *>(input1), kernel_dims);

// im2col + GEMM: extract patches, flatten them, contract with the reshaped kernel.
out.device(*global_thread_pool_device) = in
    .extract_image_patches(kernel_dims[1], kernel_dims[0], 1,
                           1, 1, 1,
                           Eigen::PADDING_VALID)
    .reshape(pre_contract_dims)
    .contract(kernel.reshape(new_kernel_dims), contract_dims)
    .reshape(post_contract_dims);
}
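
For completeness, global_thread_pool_device is defined elsewhere in my code; a minimal sketch of that setup (one worker thread, matching the benchmark described below) looks like this:

#define EIGEN_USE_THREADS
#include <unsupported/Eigen/CXX11/ThreadPool>
#include <unsupported/Eigen/CXX11/Tensor>

// One worker thread, to match TensorFlow's intra_op_parallelism_threads = 1.
static Eigen::ThreadPool global_thread_pool(1);
static Eigen::ThreadPoolDevice global_thread_pool_device_impl(&global_thread_pool, 1);
static Eigen::ThreadPoolDevice* global_thread_pool_device = &global_thread_pool_device_impl;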

Processing the same data, with the thread pool set to a single thread (matching intra_op_parallelism_threads in TensorFlow), my implementation turns out to be about 30% slower than TensorFlow. My compiler options are "-std=gnu++11 -O3 -march=native", and TensorFlow's XLA is not enabled. I have no idea what causes the performance gap. If anyone could give some hints, that would be a great help.
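
For reference, I timed the comparison along these lines (a minimal sketch; the zero-filled buffers, run count, and printing are illustrative):

#include <chrono>
#include <cstdio>
#include <vector>

int main() {
  // Buffers sized to the shapes given in the comment above the function.
  std::vector<float> input(7680L * 15 * 200 * 1);
  std::vector<float> kernel(5L * 200 * 1 * 200);
  std::vector<float> output(7680L * 11 * 1 * 200);

  // Warm-up call so one-time allocation and cache effects are excluded.
  Convolution_float_float_float_cpu_Convolution_270(input.data(), kernel.data(), output.data());

  const int runs = 10;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < runs; ++i)
    Convolution_float_float_float_cpu_Convolution_270(input.data(), kernel.data(), output.data());
  auto stop = std::chrono::steady_clock::now();
  std::chrono::duration<double, std::milli> elapsed = stop - start;
  std::printf("average per call: %.3f ms\n", elapsed.count() / runs);
}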

Shuai
  • Why don't you just profile your code and find the issue yourself? You have all the code; just start measuring how long the various parts take. – AndersK May 07 '20 at 04:22
  • Thanks @AndersK. I haven't profiled code on the Linux platform before. I will try to find some clues following your suggestion; a first attempt is sketched below. – Shuai May 07 '20 at 05:51
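
Following that suggestion, one rough way to split the measurement is to materialize the intermediate patch matrix and time the two stages separately (a sketch reusing the names from the function above; note that materializing the intermediate changes Eigen's expression fusion, so the split is only indicative):

#include <chrono>

// Inside the function body, after the dims above are set up:
Eigen::Tensor<float, 2, Eigen::RowMajor> patches(pre_contract_dims[0], pre_contract_dims[1]);

auto t0 = std::chrono::steady_clock::now();
patches.device(*global_thread_pool_device) = in
    .extract_image_patches(kernel_dims[1], kernel_dims[0], 1,
                           1, 1, 1,
                           Eigen::PADDING_VALID)
    .reshape(pre_contract_dims);
auto t1 = std::chrono::steady_clock::now();
out.device(*global_thread_pool_device) = patches
    .contract(kernel.reshape(new_kernel_dims), contract_dims)
    .reshape(post_contract_dims);
auto t2 = std::chrono::steady_clock::now();
// t1 - t0 ~ patch extraction (im2col); t2 - t1 ~ contraction (GEMM).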

1 Answer

After digging into the code, we found that TensorFlow provides custom Eigen kernels based on MKL. With the implementations in eigen_contraction_kernel.h/.cc and eigen_spatial_convolutions.h, we can get the same performance as TensorFlow.
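
The wiring we ended up with looks roughly like the sketch below. Hedged: the TENSORFLOW_USE_CUSTOM_CONTRACTION_KERNEL define is normally set by TensorFlow's Bazel build, the MKL-DNN library has to be available and linked, and the call shape is an assumption based on Eigen::SpatialConvolution from eigen_spatial_convolutions.h.

// Pull in TensorFlow's custom contraction kernel before any Eigen Tensor code
// so its specializations of the contraction path take effect, then use the
// SpatialConvolution helper instead of the hand-rolled
// extract_image_patches + contract pipeline.
#define EIGEN_USE_THREADS
#define TENSORFLOW_USE_CUSTOM_CONTRACTION_KERNEL  // normally set by the Bazel build
#include "tensorflow/core/kernels/eigen_contraction_kernel.h"
#include "tensorflow/core/kernels/eigen_spatial_convolutions.h"

// in, kernel, out are the TensorMaps from the question.
out.device(*global_thread_pool_device) =
    Eigen::SpatialConvolution(in, kernel, /*row_stride=*/1, /*col_stride=*/1,
                              Eigen::PADDING_VALID);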

Shuai