
I am trying to apply dynamic quantization (which quantizes the weights ahead of time and the activations dynamically at inference) to a pre-trained PyTorch model from the Hugging Face library. I have referred to this link and found dynamic quantization to be the most suitable approach. I will be running the quantized model on a CPU.

Link to the Hugging Face model here.

torch version: 1.6.0 (installed via pip)

Pre-trained models

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
model = AutoModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")

Dynamic quantization

import torch

# Replace each torch.nn.Linear with a dynamically quantized int8 version
quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)

print(quantized_model)

Error

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-7-df2355c17e0b> in <module>
      1 quantized_model = torch.quantization.quantize_dynamic(
----> 2     model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
      3 )
      4 
      5 print(quantized_model)

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in quantize_dynamic(model, qconfig_spec, dtype, mapping, inplace)
    283     model.eval()
    284     propagate_qconfig_(model, qconfig_spec)
--> 285     convert(model, mapping, inplace=True)
    286     _remove_qconfig(model)
    287     return model

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    363     for name, mod in module.named_children():
    364         if type(mod) not in SWAPPABLE_MODULES:
--> 365             convert(mod, mapping, inplace=True)
    366         reassign[name] = swap_module(mod, mapping)
    367 

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    363     for name, mod in module.named_children():
    364         if type(mod) not in SWAPPABLE_MODULES:
--> 365             convert(mod, mapping, inplace=True)
    366         reassign[name] = swap_module(mod, mapping)
    367 

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    363     for name, mod in module.named_children():
    364         if type(mod) not in SWAPPABLE_MODULES:
--> 365             convert(mod, mapping, inplace=True)
    366         reassign[name] = swap_module(mod, mapping)
    367 

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    363     for name, mod in module.named_children():
    364         if type(mod) not in SWAPPABLE_MODULES:
--> 365             convert(mod, mapping, inplace=True)
    366         reassign[name] = swap_module(mod, mapping)
    367 

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    363     for name, mod in module.named_children():
    364         if type(mod) not in SWAPPABLE_MODULES:
--> 365             convert(mod, mapping, inplace=True)
    366         reassign[name] = swap_module(mod, mapping)
    367 

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
    364         if type(mod) not in SWAPPABLE_MODULES:
    365             convert(mod, mapping, inplace=True)
--> 366         reassign[name] = swap_module(mod, mapping)
    367 
    368     for key, value in reassign.items():

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in swap_module(mod, mapping)
    393             )
    394             device = next(iter(devices)) if len(devices) > 0 else None
--> 395             new_mod = mapping[type(mod)].from_float(mod)
    396             if device:
    397                 new_mod.to(device)

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/dynamic/modules/linear.py in from_float(cls, mod)
    101         else:
    102             raise RuntimeError('Unsupported dtype specified for dynamic quantized Linear!')
--> 103         qlinear = Linear(mod.in_features, mod.out_features, dtype=dtype)
    104         qlinear.set_weight_bias(qweight, mod.bias)
    105         return qlinear

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/dynamic/modules/linear.py in __init__(self, in_features, out_features, bias_, dtype)
     33 
     34     def __init__(self, in_features, out_features, bias_=True, dtype=torch.qint8):
---> 35         super(Linear, self).__init__(in_features, out_features, bias_, dtype=dtype)
     36         # We don't muck around with buffers or attributes or anything here
     37         # to keep the module simple. *everything* is simply a Python attribute.

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in __init__(self, in_features, out_features, bias_, dtype)
    150             raise RuntimeError('Unsupported dtype specified for quantized Linear!')
    151 
--> 152         self._packed_params = LinearPackedParams(dtype)
    153         self._packed_params.set_weight_bias(qweight, bias)
    154         self.scale = 1.0

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in __init__(self, dtype)
     18         elif self.dtype == torch.float16:
     19             wq = torch.zeros([1, 1], dtype=torch.float)
---> 20         self.set_weight_bias(wq, None)
     21 
     22     @torch.jit.export

~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in set_weight_bias(self, weight, bias)
     24         # type: (torch.Tensor, Optional[torch.Tensor]) -> None
     25         if self.dtype == torch.qint8:
---> 26             self._packed_params = torch.ops.quantized.linear_prepack(weight, bias)
     27         elif self.dtype == torch.float16:
     28             self._packed_params = torch.ops.quantized.linear_prepack_fp16(weight, bias)

RuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine

1 Answer


Is qnnpack in the list when you run print(torch.backends.quantized.supported_engines)?

Does setting torch.backends.quantized.engine = 'qnnpack' before calling quantize_dynamic work for you?
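A minimal sketch of that check, assuming the same model object as in the question (on x86 CPUs, fbgemm may also appear in the list and can be selected the same way):

import torch

# List the quantization backends this torch build supports,
# e.g. ['fbgemm', 'qnnpack', 'none']
print(torch.backends.quantized.supported_engines)

# Select a supported engine before quantizing; the NoQEngine error
# indicates that no backend was active
torch.backends.quantized.engine = 'qnnpack'

quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)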
