
Environment: TensorFlow 2.4, Intel-Tensorflow 2.4

As far as I know, a TensorFlow model in pb format can be loaded in ML.NET.

However, I'm using the quantization package LPOT (https://github.com/intel/lpot), which utilizes Intel-optimized TensorFlow (https://github.com/Intel-tensorflow/tensorflow). Even though Intel-Tensorflow is built on TensorFlow, it uses some quantized ops that have no registered OpKernel in stock TensorFlow (e.g. 'QuantizedMatmulWithBiasAndDequantize' is deprecated in TF). As a result, the quantized model cannot be run in a stock TensorFlow environment without installing Intel-Tensorflow.
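To see this distinction concretely, here is a minimal diagnostic sketch for checking whether your current TensorFlow build ships a kernel for a given op. It relies on `tensorflow.python.framework.kernels`, an internal (not publicly documented) TF module, so treat it as a debugging aid rather than a stable API; the helper name and the CamelCase op spelling below are my assumptions.

```python
# Diagnostic sketch: check whether the currently installed TensorFlow build
# registers at least one kernel (implementation) for a given op name.
# NOTE: tensorflow.python.framework.kernels is an internal TF module.
from tensorflow.python.framework import kernels

def has_registered_kernel(op_name: str) -> bool:
    """True if this TensorFlow build ships at least one kernel for op_name."""
    return len(kernels.get_registered_kernels_for_op(op_name).kernel) > 0

print(has_registered_kernel("MatMul"))  # standard op, available everywhere
print(has_registered_kernel("QuantizedMatMulWithBiasAndDequantize"))  # Intel op
```

Running this under stock TensorFlow versus Intel-Tensorflow should show which build can actually execute the quantized graph.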

My goal is to run this quantized pb Intel-Tensorflow model in ML.NET. Does anyone know if Intel-Tensorflow is supported in ML.NET? Or is there any other way to do this?

Any help/suggestion is greatly appreciated.

Joanne H.
  • I'm not sure if this is supported but it may be best to add an issue to the ML.NET repository - https://github.com/dotnet/machinelearning/issues – Jon Jul 09 '21 at 07:55
  • @Jon Thanks, I will add an issue to their repo! I was just wondering if anyone had experience on this specific use case, as I couldn't find any resources/tutorials anywhere on the internet. – Joanne H. Jul 10 '21 at 11:13

1 Answer


oneDNN support in ML.NET depends on ML.NET's TensorFlow integration: if oneDNN is enabled in the TensorFlow C++ API that ML.NET calls into, ML.NET could have oneDNN support.


You can try installing stock TensorFlow 2.5, which ships with Intel oneDNN optimizations, in your ML.NET environment. You can download the stock TensorFlow wheel from this link: https://pypi.org/project/tensorflow/#files

To install the wheel file: pip install __.whl.

To enable oneDNN optimizations, please set the environment variable TF_ENABLE_ONEDNN_OPTS:

set TF_ENABLE_ONEDNN_OPTS=1

To display a verbose oneDNN log: set DNNL_VERBOSE=1
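The two `set` commands above are for a Windows shell and apply only to that session. If you are launching TensorFlow from a Python script instead, the same variables can be set in-process, as long as this happens before TensorFlow is imported — a minimal sketch:

```python
# Enable oneDNN optimizations and verbose oneDNN logging via environment
# variables. These must be set before TensorFlow is imported, or the
# TensorFlow runtime will not pick them up.
import os

os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # turn on oneDNN optimizations
os.environ["DNNL_VERBOSE"] = "1"           # print oneDNN primitive execution log

# import tensorflow as tf  # import only after the variables are set
```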

For more information on oneDNN verbose mode, please refer to: https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html

For more information on Intel Optimization for TensorFlow, please refer to: https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html

  • Thanks for the detailed explanation! Let me try to paraphrase it and please correct me if I'm wrong: For instance, if I want to run on Windows platform, I should first install "tensorflow-2.5.0-cp37-cp37m-win_amd64.whl" in my ML.NET environment. Then in my code, I should set TF_ENABLE_ONEDNN_OPTS = 1. This way, ML.NET is supposed to be able to run inference with my LPOT model. Is that correct? – Joanne H. Jul 19 '21 at 01:48
  • I know there is the Tensorflow.Net package that allows TF to run natively in ML.NET; I was wondering if I could make use of that package without having to install the stock TF Python wheel? One of the LPOT maintainers told me that ML.NET relies on https://www.nuget.org/packages/SciSharp.TensorFlow.Redist/ and that I should try building new ML.NET source by specifying a new TF NuGet package based on Intel Optimized TensorFlow (I have no idea what that means). – Joanne H. Jul 19 '21 at 01:55