I'm trying to get the same performance with xtensor on reduction operations (e.g. summing the elements) as with NumPy.
I enabled xsimd, which should vectorize the reduction, but it makes no measurable difference.
The following is the benchmark code:
#include <iostream>
#include <cmath>    // pow
#include <ctime>    // clock(), CLOCKS_PER_SEC
#include <utility>  // std::pair
#include "xtensor/xreducer.hpp"
#include "xtensor/xrandom.hpp"

using namespace std;

// Returns {average time per reduction in seconds, accumulated sum of all results}.
pair<double, double> timeit(int size, int n = 30)
{
    double total_clocks = 0;
    double total_sum = 0;
    for (int i = 0; i < n; i++)
    {
        // Fresh random input for every repetition.
        xt::xtensor<double, 1> a = xt::random::rand({size}, 0., 1.);
        clock_t start = clock();
        // Immediate (non-lazy) evaluation of the reduction.
        double s = xt::sum(a, xt::evaluation_strategy::immediate)();
        clock_t end = clock();
        total_sum += s;
        total_clocks += end - start;
    }
    return pair<double, double>(total_clocks / CLOCKS_PER_SEC / n, total_sum);
}

int main(int argc, char *argv[])
{
    for (int i = 5; i < 8; i++)
    {
        int size = pow(10, i);
        pair<double, double> ret = timeit(size);
        cout << "size: " << size << " \t " << ret.first << " sec\t" << ret.second << endl;
    }
    return 0;
}
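As a side note on the measurement itself: clock() reports CPU time rather than wall time. Here is a sketch of the same loop timed with std::chrono::steady_clock instead (timeit_chrono is just an illustrative name; it assumes the same includes as above plus <chrono>):

#include <chrono>

// Sketch: the same benchmark loop, timed with a steady wall-clock instead of clock().
pair<double, double> timeit_chrono(int size, int n = 30)
{
    double total_seconds = 0;
    double total_sum = 0;
    for (int i = 0; i < n; i++)
    {
        xt::xtensor<double, 1> a = xt::random::rand({size}, 0., 1.);
        auto start = std::chrono::steady_clock::now();
        double s = xt::sum(a, xt::evaluation_strategy::immediate)();
        auto end = std::chrono::steady_clock::now();
        total_sum += s;
        total_seconds += std::chrono::duration<double>(end - start).count();
    }
    return pair<double, double>(total_seconds / n, total_sum);
}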
I compile this with and without xsimd enabled, with all optimisations turned on (-O3):
$ g++ -DXTENSOR_USE_XSIMD -O3 -march=native -I/home/--user--/install_path/include "./18. test speed 2.cpp" -o a && ./a
size: 100000 0.0001456 sec 1.49984e+06
size: 1000000 0.0013149 sec 1.50002e+07
size: 10000000 0.0125417 sec 1.49995e+08
$ g++ -O3 -march=native -I/home/--user--/install_path/include "./18. test speed 2.cpp" -o a && ./a
size: 100000 0.0001433 sec 1.49984e+06
size: 1000000 0.0012621 sec 1.50002e+07
size: 10000000 0.0124868 sec 1.49995e+08
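To double-check that the -DXTENSOR_USE_XSIMD define actually reaches the compiler, one can drop a quick preprocessor check into main() (just a sketch; it only inspects the macro, not xsimd itself):

// Sketch: confirm whether XTENSOR_USE_XSIMD is defined in this build.
#ifdef XTENSOR_USE_XSIMD
    cout << "XTENSOR_USE_XSIMD is defined" << endl;
#else
    cout << "XTENSOR_USE_XSIMD is NOT defined" << endl;
#endif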
For comparison, the same operation using NumPy:
$ python bench.py
size: 100000 0.000030 sec
size: 1000000 0.000430 sec
size: 10000000 0.005144 sec
NumPy is roughly 2.5 to 5 times faster, depending on the array size!
Setup
- Ubuntu 18.04
- Core i7 CPU
- Latest versions of packages
How can I improve the xtensor performance? Thanks in advance!