Possible way to skip quantization for certain layers/ops in the TFLite converter
Hello everyone,
Is there a way to skip quantization for certain layers or ops when converting a Keras model to a TFLite model? Specifically, for the math ops supported by TFLite (which exist in both quantized int8 and float32 variants), so that the resulting model keeps float32 for the math ops while all other ops remain quantized?
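For context, here is a minimal sketch of the full-integer conversion I'm currently running. The tiny model and `representative_dataset` below are just placeholders standing in for my real setup:

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for my real Keras model; the
# tf.math.multiply is one of the math ops I'd like to keep in float32.
inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(16, activation="relu")(inputs)
x = tf.math.multiply(x, 0.5)
outputs = tf.keras.layers.Dense(4)(x)
keras_model = tf.keras.Model(inputs, outputs)

def representative_dataset():
    # Placeholder calibration data for post-training quantization.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Full-integer quantization: this quantizes *every* supported op,
# including the MUL above -- I haven't found a per-op opt-out here.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
```

With this setup everything ends up int8; what I'm looking for is a way to exclude just the math ops from quantization.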
Thanks!