I have a server with two Intel Xeon Gold 6148 CPUs, and TensorFlow is running on it.
When I install TensorFlow with pip, I get a message that AVX2 and AVX-512 are not used by my installation.
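For reference, the message appears as soon as TensorFlow executes its first op. A quick way to reproduce it (a minimal sketch, assuming a TF 2.x-style install with eager execution) is:

    # Importing TensorFlow and running any op prints the cpu_feature_guard
    # message listing the instruction sets (AVX2, AVX-512, FMA, ...) that
    # the binary was not compiled to use.
    import tensorflow as tf
    print(tf.__version__)
    print(tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0]))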
So, to get the best performance, I tried to build TensorFlow from source using Docker.
I followed https://www.tensorflow.org/install/source, but for the bazel build step I used:
bazel build --config=mkl -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mavx512f --copt=-mavx512pf --copt=-mavx512cd --copt=-mavx512er //tensorflow/tools/pip_package:build_pip_package
as recommended by https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide.
However, this build performs much worse than the standard pip installation.
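To make "much worse" concrete, a micro-benchmark along these lines is the kind of comparison I have in mind, run once per installation (the matrix size N is an arbitrary choice for illustration, and it assumes a TF 2.x-style install with eager execution):

    # Time a batch of large float32 matrix multiplications, which should
    # benefit directly from AVX2/AVX-512 and MKL.
    import time
    import numpy as np
    import tensorflow as tf

    N = 4096
    a = tf.constant(np.random.rand(N, N).astype(np.float32))
    b = tf.constant(np.random.rand(N, N).astype(np.float32))

    _ = tf.matmul(a, b).numpy()  # warm-up so one-time initialization is excluded

    start = time.time()
    for _ in range(10):
        result = tf.matmul(a, b).numpy()  # .numpy() forces the computation to finish
    print("10 matmuls of %dx%d took %.3f s" % (N, N, time.time() - start))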
So, to sum this up: what is the best way to install TensorFlow on a Xeon Gold architecture?