Questions tagged [intel-lpot]

Use this tag to ask questions about the Intel® Low Precision Optimization Tool (Intel® LPOT), an open-source Python* library designed to help you quickly deploy low-precision inference solutions on popular deep-learning frameworks such as TensorFlow*, PyTorch*, MXNet*, and the ONNX* (Open Neural Network Exchange) Runtime.

Reference documentation

Report bugs and feature requests

9 questions
2 votes • 1 answer

How to use a custom loss function with neural compressor for distillation

I am trying out Neural Compressor (Intel LPOT) to reduce the size of my CNN model implemented in PyTorch. I intend to do distillation. Below is the code used to distill the model: from neural_compressor.experimental import Distillation,…
ArunJose • 1,999 • 1 • 10 • 33
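
A custom distillation criterion can be prototyped in plain PyTorch before it is wired into the Neural Compressor training loop. The sketch below is a minimal knowledge-distillation loss; the helper name kd_loss and the temperature/weight defaults are illustrative assumptions, not part of the LPOT API.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Hypothetical helper, not an LPOT/Neural Compressor API.
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard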
2 votes • 1 answer

Test Intel Low Precision Optimization Tool using dummy dataset

I was trying out the Intel Low Precision Optimization Tool on Linux. I first created an environment named lpot_environment and installed TensorFlow using the commands below: conda create -n lpot_environment python=3.7 pip install…
Remi_TRish • 193 • 1 • 8
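
For a quick smoke test without a real dataset, LPOT's hello-world TensorFlow example feeds a synthetic "dummy" dataset into the quantizer. The sketch below follows that example's pattern; the module path, the "dummy" dataset name, and the model path and input shape are assumptions that should be checked against the installed LPOT version and its conf.yaml.

from lpot.experimental import Quantization, common

quantizer = Quantization('./conf.yaml')              # tuning/calibration settings
quantizer.model = common.Model('./frozen_model.pb')  # placeholder model path
# Synthetic calibration data so the flow can be exercised without a real dataset.
dataset = quantizer.dataset('dummy', shape=(100, 224, 224, 3), label=True)
quantizer.calib_dataloader = common.DataLoader(dataset)
quantized_model = quantizer()                        # run calibration and tuning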
1 vote • 1 answer

"ValueError: numpy.ndarray size changed " while trying Intel lpot in tensorflow model

While trying out the Intel Low Precision Optimization Tool on a TensorFlow model, I am getting a ValueError. Please find the command I tried below: # The cmd of running ssd_resnet50_v1 bash run_tuning.sh --config=ssd_resnet50_v1.yaml…
RahilaRahi • 57 • 4
0 votes • 1 answer

How to confirm if the weights of my PyTorch model have been quantized

I was able to successfully quantize a PyTorch model for Hugging Face text classification with Intel LPOT (Neural Compressor). I now have both the original FP32 model and the quantized INT8 model on my machine. For inference I loaded the quantized LPOT model…
ArunJose • 1,999 • 1 • 10 • 33
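
One generic way to check is to inspect the loaded model in plain PyTorch: quantized weights typically show up with integer dtypes such as torch.qint8, and quantized layers appear with distinct module types in the model's printed repr. The sketch below is a plain-PyTorch check, not an LPOT-specific API.

import torch

def report_weight_dtypes(model: torch.nn.Module):
    # Print the dtype of every parameter and buffer; a quantized model
    # typically exposes torch.qint8 tensors instead of torch.float32.
    for name, param in model.named_parameters():
        print("param ", name, param.dtype)
    for name, buf in model.named_buffers():
        print("buffer", name, buf.dtype)

# Printing the model itself also helps: quantized layers show up with
# quantized module types (e.g. quantized Linear/Conv modules) in the repr.
# print(model)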
0 votes • 1 answer

Getting an error with the ". prepare_dataset.sh" command from LPOT

I am following this GitHub page (https://github.com/intel/lpot/tree/master/examples/tensorflow/object_detection) for LPOT, and in the 5th step, downloading the dataset, I am getting the error below and am unable to proceed. HEAD is now at 7a9934df Merged…
0 votes • 1 answer

Assertion Error: Framework is not detected correctly from model format

I am trying the Intel Low Precision Optimization Tool and am following this GitHub page (https://github.com/intel/lpot/tree/master/examples/tensorflow/object_detection). When I run the quantization command as below: bash run_tuning.sh…
0 votes • 1 answer

Error while quantizing a model using LPOT

I was trying to quantize a trained model using LPOT on my Linux machine, following the link below: https://github.com/intel/lpot/tree/master/examples/helloworld/tf_example1 I specified the dataset path in the conf.yaml file, and after that I…
0 votes • 1 answer

How to load and run Intel-Tensorflow Model on ML.NET

Environment: TensorFlow 2.4, Intel-Tensorflow 2.4. As far as I know, a TensorFlow model in pb format can be loaded in ML.NET. However, I'm using the quantization package LPOT (https://github.com/intel/lpot), which utilizes Intel-optimized TensorFlow…
0 votes • 1 answer

What should I input in this common.Model()?

I am trying out the Intel® Low Precision Optimization Tool. What should I input in this common.Model; specifically, what type of object? quantizer.model = common.Model('../models/saved_model') I am referring to the following link…
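
In the LPOT examples, common.Model() is typically given either a path to a serialized model (for TensorFlow, a frozen .pb file or a SavedModel directory) or an in-memory framework model object, and LPOT detects the framework from it. The sketch below illustrates both options under that assumption; the paths are placeholders, and the supported input types should be confirmed in the LPOT documentation for the installed version.

from lpot.experimental import Quantization, common

quantizer = Quantization('./conf.yaml')

# Option 1: a path to a serialized model, e.g. a TensorFlow SavedModel
# directory (as in the question) or a frozen .pb file.
quantizer.model = common.Model('../models/saved_model')

# Option 2: an in-memory framework model object, e.g. a tf.keras model.
# import tensorflow as tf
# quantizer.model = common.Model(tf.keras.applications.MobileNetV2())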