Questions tagged [quantization]

Use this tag for questions related to quantization of any kind, such as vector quantization.

Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a (countable) smaller set.

For more detail, see the Wikipedia article on quantization.

444 questions
7
votes
1 answer

Matlab : Unable to get unique rationals when implementing a formula for binary to real number conversion Part1

There is a nonlinear dynamic system x_{n+1} = f(x_n, eta) whose functional form is x[n+1] = 2*x[n] mod 1. This chaotic dynamical system is known as the Sawtooth map or the Bernoulli map. I am facing difficulty in implementing the two representations…
SKM
  • 959
  • 2
  • 19
  • 45
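For the Bernoulli map question above, a minimal sketch in Python (the original is in MATLAB) that iterates x[n+1] = 2*x[n] mod 1 using exact rationals via fractions.Fraction, which sidesteps the floating-point repetition issues the title hints at; the function name is illustrative only.

```python
from fractions import Fraction

def bernoulli_orbit(x0, n_steps):
    """Iterate the dyadic map x -> 2x mod 1 and collect the binary symbols."""
    x, bits, orbit = Fraction(x0), [], []
    for _ in range(n_steps):
        orbit.append(x)
        bits.append(int(2 * x))   # 0 if x < 1/2, else 1
        x = (2 * x) % 1
    return orbit, bits

orbit, bits = bernoulli_orbit(Fraction(3, 7), 10)
print(bits)  # the binary expansion of 3/7 repeats: 0, 1, 1, 0, 1, 1, ...
```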
7
votes
3 answers

Effective gif/image color quantization?

So I'm trying to encode some animated GIF files in my Java application. I've been using some classes/algorithms found online, but none seem to be working well enough. Right now I'm using this quantize class to reduce the colors of an image down to…
yesbutmaybeno
  • 1,078
  • 13
  • 31
6
votes
2 answers

Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

I'm playing around a bit with image processing and decided to read up on how color quantization works, and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in Leptonica…
TheCodeJunkie
  • 9,378
  • 7
  • 43
  • 54
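As background for the median-cut question above, a rough numpy sketch of the classic (unmodified) median cut: split the box with the widest channel range at its median pixel, recurse, and use each box's mean as a palette color. This is not the Leptonica code, only an illustration of where the median enters.

```python
import numpy as np

def median_cut(pixels, n_boxes):
    """Classic median cut: returns n_boxes palette colors for an (N, 3) pixel array."""
    boxes = [pixels]
    while len(boxes) < n_boxes:
        # pick the box whose widest channel range is largest
        i = max(range(len(boxes)), key=lambda k: np.ptp(boxes[k], axis=0).max())
        box = boxes.pop(i)
        ch = np.ptp(box, axis=0).argmax()   # channel with the widest range
        order = box[:, ch].argsort()
        half = len(order) // 2              # the median pixel along that channel
        boxes += [box[order[:half]], box[order[half:]]]
    return np.array([b.mean(axis=0) for b in boxes])

pixels = np.random.randint(0, 256, size=(10_000, 3))
palette = median_cut(pixels, 16)
print(palette.shape)  # (16, 3)
```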
6
votes
1 answer

Dequantize values to their original prior to quantization

The paper "Natural Language Processing with Small Feed-Forward Networks" https://arxiv.org/pdf/1708.00214.pdf states: I've implemented quantization as per the above equations in python: b = 128 embedding_matrix =…
blue-sky
  • 51,962
  • 152
  • 427
  • 752
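For the quantize/dequantize question above, a generic linear round trip in numpy; the scaling rule and the value of b are assumptions for illustration, not necessarily the exact equations from the cited paper.

```python
import numpy as np

def quantize(W, b=128):
    """Map float weights to integer codes using a per-matrix scale (assumed rule)."""
    scale = np.abs(W).max() / b
    q = np.round(W / scale).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

W = np.random.randn(100, 64).astype(np.float32)
q, scale = quantize(W)
W_hat = dequantize(q, scale)
print(np.abs(W - W_hat).max())   # reconstruction error is bounded by scale / 2
```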
6
votes
1 answer

How can I matrix-multiply two PyTorch quantized Tensors?

I am new to tensor quantization, and tried doing something as simple as import torch x = torch.rand(10, 3) y = torch.rand(10, 3) x@y.T with PyTorch quantized tensors running on CPU. I thus tried scale, zero_point = 1e-4, 2 dtype = torch.qint32 qx…
Davide Fiocco
  • 5,350
  • 5
  • 35
  • 72
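For the quantized matmul question above, a minimal sketch using the eager-mode PyTorch API: quantized tensors do not support the @ operator directly, so one common workaround is to dequantize, multiply in float, and re-quantize the result.

```python
import torch

x = torch.rand(10, 3)
y = torch.rand(10, 3)

scale, zero_point = 1e-4, 2
qx = torch.quantize_per_tensor(x, scale, zero_point, dtype=torch.qint32)
qy = torch.quantize_per_tensor(y, scale, zero_point, dtype=torch.qint32)

# Quantized tensors don't implement @, so dequantize, multiply, re-quantize.
z = qx.dequantize() @ qy.dequantize().T
qz = torch.quantize_per_tensor(z, scale, zero_point, dtype=torch.qint32)
print(qz.shape, qz.dtype)
```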
6
votes
0 answers

Training quantized models in TensorFlow

I would like to train a quantized network, i.e. use quantized weights during the forward pass to calculate the loss and then update the underlying full-precision floating point weights during the backward pass. Note that in my case "fake…
stecklin
  • 131
  • 7
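A small numpy sketch of the "fake quantization with full-precision updates" idea described above: the forward pass sees weights rounded to a k-bit grid, while the optimizer updates the full-precision copy as if the rounding were the identity (a straight-through estimator). Names and the scaling rule are assumptions.

```python
import numpy as np

def fake_quant(w, bits=8):
    """Round weights to a symmetric k-bit grid (used only in the forward pass)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

w_fp = np.random.randn(3, 3)   # full-precision master weights
w_q = fake_quant(w_fp)         # what the forward pass / loss sees
grad = np.ones_like(w_fp)      # pretend this is dLoss/dw_q
w_fp -= 0.01 * grad            # straight-through: apply it to w_fp directly
```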
6
votes
1 answer

BinaryNet implementation in TensorFlow

I recently read a very interesting paper (http://arxiv.org/pdf/1602.02830v3.pdf) suggesting a method for training a CNN with weights and activations constrained to [-1,1]. This is highly beneficial from a power/speed perspective. There are…
JonyK
  • 585
  • 2
  • 7
  • 12
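For the BinaryNet question above, a minimal PyTorch sketch (the question targets TensorFlow, but the idea is the same) of binarized weights with a straight-through estimator: the forward pass uses values in {-1, +1}, the backward pass passes gradients through unchanged to the real-valued weights.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        # Constrain weights to {-1, +1} in the forward pass.
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pretend binarization was the identity.
        return grad_output

w = torch.randn(4, 3, requires_grad=True)
x = torch.randn(5, 4)
out = x @ BinarizeSTE.apply(w)   # forward pass uses binary weights
out.sum().backward()
print(w.grad)                    # gradients still reach the real-valued weights
```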
6
votes
2 answers

Matlab : How to represent a real number as binary

Problem: How do I use a continuous map - the Link1: Bernoulli Shift Map - to model a binary sequence? Concept: The Dyadic map, also called the Bernoulli Shift map, is expressed as x(k+1) = 2x(k) mod 1. Link2: Symbolic Dynamics explains that the…
SKM
  • 959
  • 2
  • 19
  • 45
6
votes
1 answer

Gesture recognition using hidden markov model

I am currently working on a gesture recognition application, using a Hidden Markov Model as the classification stage in MATLAB (using a webcam). I've completed the pre-processing part, which includes extraction of the feature vector. I've applied Principal…
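For the HMM question above, a small sketch of the vector-quantization step typically used before a discrete HMM: cluster the PCA feature vectors with k-means and replace each frame by its codebook index. The feature dimensions and codebook size here are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

features = np.random.randn(500, 10)    # stand-in for the PCA feature vectors
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(features)
symbols = codebook.predict(features)    # one discrete observation symbol per frame
print(symbols[:20])                     # feed these symbols to the discrete HMM
```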
6
votes
4 answers

8 bit audio samples to 16 bit

This is my "weekend" hobby problem. I have some well-loved single-cycle waveforms from the ROMs of a classic synthesizer. These are 8-bit samples (256 possible values). Because they are only 8 bits, the noise floor is pretty high. This is due to…
Nosredna
  • 83,000
  • 15
  • 95
  • 122
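For the 8-bit sample question above, a minimal numpy sketch of widening unsigned 8-bit samples to signed 16-bit; this adds headroom for further processing but cannot recover detail already lost to the original 8-bit quantization.

```python
import numpy as np

def widen_8_to_16(samples_u8):
    """Unsigned 8-bit -> signed 16-bit by recentering around zero and scaling."""
    centered = samples_u8.astype(np.int16) - 128   # now in [-128, 127]
    return centered * 256                          # spread across the 16-bit range

wave8 = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(widen_8_to_16(wave8))   # [-32768 -16384      0  16384  32512]
```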
5
votes
3 answers

TensorFlow fake-quantize layers are also called from TF-Lite

I'm using TensorFlow 2.1 in order to train models with quantization-aware training. The code to do that is: import tensorflow_model_optimization as tfmot model = tfmot.quantization.keras.quantize_annotate_model(model) This will add fake-quantize…
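Continuing the excerpt above, a minimal sketch of the usual tensorflow_model_optimization flow: after quantize_annotate_model, quantize_apply is what actually inserts the fake-quantize layers; the toy model here is only for illustration.

```python
import tensorflow_model_optimization as tfmot
from tensorflow import keras

# Toy model, only to show the two-step annotate -> apply flow.
model = keras.Sequential([keras.layers.Dense(10, input_shape=(4,))])

annotated = tfmot.quantization.keras.quantize_annotate_model(model)
qat_model = tfmot.quantization.keras.quantize_apply(annotated)   # inserts fake-quant layers
qat_model.compile(optimizer="adam", loss="mse")
```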
5
votes
2 answers

What does 'quantization' mean in interpreter.get_input_details()?

Using tflite and getting properties of interpreter like : print(interpreter.get_input_details()) [{'name': 'input_1_1', 'index': 47, 'shape': array([ 1, 128, 128, 3], dtype=int32), 'dtype': , 'quantization':…
mrgloom
  • 20,061
  • 36
  • 171
  • 301
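For the question above: the 'quantization' entry is a (scale, zero_point) pair, and the usual convention is real_value ≈ scale * (quantized_value - zero_point). A small numpy sketch with example values (assumed, not taken from that particular model):

```python
import numpy as np

scale, zero_point = 1.0 / 255, 0   # example values for a uint8 input tensor

def quantize(real):
    """float -> uint8 using real ≈ scale * (q - zero_point)."""
    q = np.round(real / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q):
    """uint8 -> float, the inverse mapping."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(x)
print(q, dequantize(q))   # e.g. [  0 128 255] and values close to x
```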
5
votes
2 answers

What is the difference between Linear Quantization and Non-linear Quantization?

What is the difference between Linear Quantization and Non-linear Quantization? I'm asking with regard to PCM samples. http://www.blurtit.com/q927781.html has an article about it, but I'm looking for a more elaborate answer.
Namratha
  • 16,630
  • 27
  • 90
  • 125
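For the PCM question above, a short numpy sketch contrasting uniform (linear) quantization with mu-law companded (non-linear) quantization: with the same number of levels, the non-linear quantizer keeps relative precision for small samples.

```python
import numpy as np

def linear_quantize(x, levels=256):
    """Uniform step size over [-1, 1]."""
    step = 2.0 / levels
    return np.round(x / step) * step

def mulaw_quantize(x, levels=256, mu=255.0):
    """Compress with mu-law, quantize uniformly, then expand back."""
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    q = linear_quantize(compressed, levels)
    return np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu

x = np.array([0.001, 0.01, 0.1, 0.9])
print(linear_quantize(x))   # small samples collapse onto the same few levels
print(mulaw_quantize(x))    # small samples keep their relative precision
```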
5
votes
1 answer

keras model evaluation with quantized weights post training

I have a model trained in Keras and saved as a .h5 file. The model was trained with single-precision floating-point values using the TensorFlow backend. Now I want to implement a hardware accelerator which performs the convolution operation on an…
frisco_1989
  • 75
  • 1
  • 8
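For the Keras question above, a minimal sketch of post-training weight quantization for evaluation: load the .h5 model, replace each weight tensor with its 8-bit quantize/dequantize round trip, and evaluate. The file name and the symmetric scaling rule are assumptions.

```python
import numpy as np
from tensorflow import keras

model = keras.models.load_model("model.h5")   # hypothetical file name

def fake_quantize(w, bits=8):
    """Symmetric linear quantization of one weight tensor, returned as float."""
    qmax = 2 ** (bits - 1) - 1
    m = np.abs(w).max()
    scale = m / qmax if m > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

model.set_weights([fake_quantize(w) for w in model.get_weights()])
# model.evaluate(x_test, y_test)   # compare against the float baseline
```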
5
votes
2 answers

A fatal error with 8-bit quantization in Tensorflow

I'm trying to run a quantization model in TensorFlow using Bazel on my Ubuntu 16.04 system. I ran the following command: bazel build tensorflow/tools/quantization:quantize_graph and here is the error: ERROR:…
R.Nancy
  • 61
  • 3