Questions tagged [quantization]

Use this tag for questions related to quantization of any kind, such as vector quantization.

Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a smaller (countable) set.

For more, please read the Wikipedia article.

444 questions
0 votes, 0 answers

PyTorch optimizer doesn't update parameters

I made a custom model, AlexNetQIL (AlexNet with a QIL layer; QIL means quantization interval learning). I trained my model, but the loss value didn't decrease at all, and I found that the parameters in my model were not updated at all because of the QIL layer. I…
Suyoung Park • 31 • 1 • 1 • 3
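A common cause of frozen parameters with custom quantization layers is that the hard rounding step has zero gradient almost everywhere, so backpropagation never reaches the weights below it. A minimal numpy sketch of the straight-through estimator (STE) workaround often used in QIL-style layers — all names here are illustrative, not taken from the asker's code:

```python
import numpy as np

def quantize_forward(x, levels):
    # Hard rounding to the nearest level: its true gradient is zero
    # almost everywhere, which is what freezes the weights below it.
    idx = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def quantize_backward_ste(grad_out):
    # Straight-through estimator: treat the rounding as the identity
    # in the backward pass, so upstream gradients reach the weights.
    return grad_out

x = np.array([0.12, 0.48, 0.91])
levels = np.array([0.0, 0.5, 1.0])
y = quantize_forward(x, levels)
g = quantize_backward_ste(np.ones_like(x))
```

In PyTorch the same idea is usually expressed as a custom `torch.autograd.Function` whose `backward` returns the incoming gradient unchanged.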
0 votes, 2 answers

Unable to properly convert tf.keras model to quantized format for Coral TPU

I'm trying to convert a tf.keras model based on MobileNetV2 with transpose convolution using the latest tf-nightly. Here is the conversion code: #saved_model_dir='/content/ksaved' # tried from saved model also #converter =…
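For the Edge TPU the converter must produce a full-integer model, which requires a representative dataset for calibration. A hedged sketch of the usual TF2 conversion recipe; `keras_model` and `representative_data_gen` are placeholders, not objects from the question:

```python
import tensorflow as tf

# Hypothetical recipe: `keras_model` and `representative_data_gen` are
# placeholders, not objects from the question.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # Edge TPU wants integer I/O
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```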
0 votes, 1 answer

Excel Graphing help

I didn't know which Stack Exchange site to put this on, so I put it here. I am trying to determine whether there is a correlation between the size of a school and the major that the school specializes in. In order to do this, I programmatically collected…
Brendan Lesniak • 2,271 • 4 • 24 • 48
0 votes, 1 answer

Question regarding color histogram based methods of generating color look-up tables

I have a piece of code that needs to conform to a research paper's implementation of a color quantization algorithm for a 256-entry LUT whose 24-bit color entries are derived from a "population count" color histogram algorithm. The problem is that I…
jdb2 • 101 • 6
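For reference, the classic "popularity" (population-count) approach simply keeps the most frequent colors. A minimal sketch under that assumption — the paper's exact variant may differ:

```python
from collections import Counter

def popularity_palette(pixels, size=256):
    # "Population count" sketch: the LUT is simply the `size` most
    # frequent 24-bit (r, g, b) colors in the image.
    counts = Counter(pixels)
    return [color for color, _ in counts.most_common(size)]

pal = popularity_palette([(0, 0, 0), (0, 0, 0), (255, 255, 255)], size=2)
```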
0 votes, 2 answers

Don't understand mean values and std dev values when converting .pb to tf-lite

I am trying to quantize a TensorFlow graph stored in a .pb file. The input of the network is a matrix in which each row is normalized to mean 0 and std 1. I want to create a quantized TensorFlow Lite model to run inference faster. I do not know how to pass…
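For context: in the legacy TOCO/`tflite_convert` flags, `mean_values` and `std_dev_values` define the affine mapping `real = (quantized - mean) / std`. Assuming that convention, both numbers can be derived from the float range of the input — a sketch; verify against the converter documentation for your version:

```python
def toco_mean_std(float_min, float_max, qmin=0, qmax=255):
    # real = (q - mean) / std  =>  std is integer steps per float unit,
    # and mean is the quantized value that represents float 0.0
    std = (qmax - qmin) / (float_max - float_min)
    mean = qmin - float_min * std
    return mean, std

pair_01 = toco_mean_std(0.0, 1.0)    # inputs in [0, 1]
pair_pm1 = toco_mean_std(-1.0, 1.0)  # inputs in [-1, 1]
```

This reproduces the commonly quoted pairs: `(0, 255)` for inputs in [0, 1] and `(127.5, 127.5)` for inputs in [-1, 1].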
0 votes, 0 answers

Errors occurred while converting 32-bit float TensorFlow model into 8-bit fixed TensorFlow model

I am following the procedure listed on GitHub for quantization-aware training, https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize. To quantize my own TF model, landing_retrained_graph.pb, I fed it into the…
passion • 387 • 4 • 16
0 votes, 1 answer

Does TFLiteConverter automatically quantize the Keras model?

I converted the trained Keras model using tf.lite.TFLiteConverter into tflite_model. Is the converted tflite_model a quantized one? Here is the snippet that makes the conversion. import tensorflow as tf keras_model =…
passion • 387 • 4 • 16
0 votes, 1 answer

Need a suggestion for a color palette data structure for iterative color quantization; in particular, any experiences with KD heaps?

I am implementing color quantization that works in iterations. During each iteration, a new color palette is built up, and then that palette is searched through many times for the palette entry that best matches a given RGB triplet. Also, I need to…
dv_ • 1,247 • 10 • 13
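Before reaching for a KD heap, it is worth noting the baseline: nearest-palette lookup is an argmin over squared distances, and a linear scan is often fast enough for a 256-entry palette. A sketch with illustrative names:

```python
import numpy as np

def nearest_palette_index(palette, rgb):
    # Linear scan: squared Euclidean distance to every entry, argmin.
    # For large palettes or many repeated queries, a k-d tree
    # (e.g. scipy.spatial.cKDTree) amortizes the search instead.
    d = ((palette - np.asarray(rgb, dtype=float)) ** 2).sum(axis=1)
    return int(d.argmin())

palette = np.array([[0, 0, 0], [128, 128, 128], [255, 255, 255]], dtype=float)
i = nearest_palette_index(palette, (200, 190, 210))
```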
0 votes, 2 answers

tflite uint8 quantization model input and output float conversion

I have successfully converted a quantized 8-bit tflite model for object detection. My model was originally trained on images normalized by dividing by 255, so the original input range is [0, 1]. Since my quantized tflite model requires input to…
Jasmine Liu • 1 • 1 • 1
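For context, uint8 tflite models use an affine scheme: `real = scale * (q - zero_point)`. A small sketch of converting floats to and from that representation — the scale and zero-point values below are illustrative, not from the asker's model (read the real ones from the interpreter's input/output details):

```python
def quantize(f, scale, zero_point):
    # float -> uint8: q = round(f / scale) + zero_point, clamped to [0, 255]
    q = round(f / scale) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    # uint8 -> float: f = scale * (q - zero_point)
    return scale * (q - zero_point)

# Illustrative values for inputs originally in [0, 1].
q = quantize(0.5, 1.0 / 255.0, 0)
f = dequantize(q, 1.0 / 255.0, 0)
```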
0 votes, 1 answer

How to properly inject fake_quant operations in a graph?

I have a wav2letter model (a speech recognition model) in which I am trying to properly introduce the fake_quant operations manually. I have managed to introduce them in the proper place (so that the tflite converter manages to generate the…
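A fake_quant node is float-in/float-out: it quantizes to an integer grid over [min, max] and immediately dequantizes. A simplified sketch of that math (it ignores TF's nudging of the zero point):

```python
def fake_quant(x, fmin, fmax, num_bits=8):
    # Quantize-then-dequantize: the float-in/float-out op that a
    # fake_quant node implements (simplified; no zero-point nudging).
    levels = 2 ** num_bits - 1
    scale = (fmax - fmin) / levels
    x = min(max(x, fmin), fmax)        # clamp into [fmin, fmax]
    q = round((x - fmin) / scale)      # snap to the integer grid
    return fmin + q * scale            # back to float

y = fake_quant(0.333, 0.0, 1.0)
```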
0 votes, 1 answer

Map each tensor value to the closest value in a list

I have a tensor A with size [batchSize,2,2,2] where batchSize is a placeholder. In a custom layer, I would like to map each value of this tensor to the closest value in a list c with length n. The list is my codebook and I would like to quantize…
deepsy • 15 • 2
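With numpy-style broadcasting, this mapping is a one-liner: add a trailing axis, take the absolute difference against the codebook, and argmin over it. A sketch with illustrative names:

```python
import numpy as np

def quantize_to_codebook(a, codebook):
    # Broadcast |a - c| over a new trailing axis, then argmin picks the
    # nearest codebook entry; works for any leading shape such as
    # (batchSize, 2, 2, 2).
    idx = np.abs(a[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

c = np.array([-1.0, 0.0, 1.0])
a = np.array([[0.2, -0.7], [1.4, 0.1]])
q = quantize_to_codebook(a, c)
```

In a TF custom layer the same pattern is usually written with `tf.expand_dims`, `tf.abs`, `tf.argmin`, and `tf.gather`.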
0 votes, 0 answers

ValueError: Invalid tensors 'outputs' were found when converting from .pb to .tflite

I successfully retrained mobilenet quantized model (architecture="mobilenet_1.0_128_quantized") with my own image dataset: python3 -m scripts.retrain \ --bottleneck_dir=tf_files/bottlenecks_quant \ --how_many_training_steps=50000 \ …
user155 • 775 • 1 • 7 • 25
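This error usually means the `--output_arrays` name does not match any op in the graph. A hedged sketch of the usual `tflite_convert` invocation — the file names are placeholders, and `final_result` is merely the output name the retrain scripts commonly use, so inspect the graph to find the real output op:

```shell
tflite_convert \
  --graph_def_file=tf_files/retrained_graph.pb \
  --output_file=model.tflite \
  --input_arrays=input \
  --output_arrays=final_result
```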
0 votes, 1 answer

The following error occurred when running the tflite model: input->params.scale != output->params.scale in MAX_POOL_2D Node

I trained the face recognition model with the quantization-aware training method of TensorFlow version 1.12.0. The network uses inception-resnet_v1 (the source of the code is tensorflow/models/research/slim/nets/). After the training is completed, I…
0 votes, 1 answer

Tensorflow bazel quantization build error

I am trying to build the TensorFlow tools package with bazel 0.18.0. The following steps are OK: git clone https://github.com/tensorflow/tensorflow bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package bazel build --config=cuda…
woohoo • 23 • 5
0 votes, 1 answer

How to find float output range for quantized matmul/conv2D operation

I am new to tensorflow and quantization, and am trying to implement a quantized matmul operation for two int8 inputs. I was curious to know the math behind the operation. I see that in tensorflow they have implemented the same only for uint8 inputs, and would like…
Abhinav George • 85 • 1 • 10
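For the uint8 path, TensorFlow's quantization utilities derive the output range from the product of the input scales: the int32 accumulator of the matmul is interpreted with scale `scale_a * scale_b`, and the float range is that scale times the int32 extremes. A sketch under that assumption (verify against `quantization_utils.h` in the TF source):

```python
def quantized_matmul_float_range(min_a, max_a, min_b, max_b, bits=8):
    # Each input is affine-quantized over [min, max]; products accumulate
    # in int32, and the accumulator's scale is the product of the input
    # scales (zero-point cross terms are folded in elsewhere).
    levels = 2 ** bits - 1
    scale_a = (max_a - min_a) / levels   # float units per integer step, A
    scale_b = (max_b - min_b) / levels   # float units per integer step, B
    scale_c = scale_a * scale_b          # scale of the int32 accumulator
    return scale_c * -(2 ** 31), scale_c * (2 ** 31 - 1)

lo, hi = quantized_matmul_float_range(0.0, 1.0, 0.0, 1.0)
```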