Questions tagged [automatic-mixed-precision]
24 questions
3
votes
1 answer
Can I speed up inference in PyTorch using autocast (automatic mixed precision)?
The PyTorch docs for autocast only discuss training. Does it also speed things up if I use autocast for inference?
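A minimal sketch of what this looks like, assuming a torchvision resnet18 and a CUDA device: at inference time autocast is combined with torch.no_grad(), and no GradScaler is needed since there are no gradients to scale.

import torch
import torchvision.models as models

# Build an example model in eval mode; any FP32 model works the same way.
model = models.resnet18().cuda().eval()
x = torch.randn(1, 3, 224, 224, device="cuda")

# Autocast runs matmuls and convolutions in float16 where that is safe.
with torch.no_grad(), torch.cuda.amp.autocast():
    out = model(x)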

Lars Ericson
- 1,952
- 4
- 32
- 45
2
votes
0 answers
TensorFlow mixed precision training: Conv2DBackpropFilter not using TensorCore
I am using the Keras mixed precision API to fit my networks in GPU memory.
In my code this typically looks like the following; an MWE would be:
from tensorflow.keras.mixed_precision import experimental as mixed_precision
use_mixed_precision = True
if…
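A minimal sketch of the same setup with the non-experimental API (available since TF 2.4; the toy architecture is an assumption): set the global policy once, and keep the final layer in float32 for numerical stability.

import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    # The output layer stays float32 so the loss is computed in full precision.
    tf.keras.layers.Dense(10, dtype="float32"),
])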

Zaccharie Ramzi
- 2,106
- 1
- 18
- 37
2
votes
0 answers
How can I use apex AMP (automatic mixed precision) with model parallelism in PyTorch?
My model has a few LSTMs which run out of CUDA memory when run on long sequences with one GPU, so I shifted a few components of the model to another GPU. I tried two things with apex AMP:
Move the model components to another GPU before invoking…
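A minimal sketch of the usual ordering with apex AMP and model parallelism (the two-part model is hypothetical; note that apex AMP has since been deprecated in favor of torch.cuda.amp): place each component on its device first, then call amp.initialize once on the assembled model.

import torch
from apex import amp

class TwoGPUModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Each component lives on its own GPU before AMP initialization.
        self.part1 = torch.nn.LSTM(128, 256, batch_first=True).to("cuda:0")
        self.part2 = torch.nn.Linear(256, 10).to("cuda:1")

    def forward(self, x):
        out, _ = self.part1(x.to("cuda:0"))
        return self.part2(out.to("cuda:1"))  # move activations between GPUs

model = TwoGPUModel()
optimizer = torch.optim.Adam(model.parameters())
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")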

Caesar
- 1,092
- 12
- 19
2
votes
0 answers
Why does the scale become zero when using torch.cuda.amp.GradScaler?
I use the following snippet of code to show the scale when using PyTorch's automatic mixed precision package (amp):
scaler = torch.cuda.amp.GradScaler(init_scale=65536.0, growth_interval=1)
print(scaler.get_scale())
and this is the output that I…
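A minimal sketch of the loop around that snippet (model and data are stand-ins) showing how the scale evolves: update() multiplies the scale by growth_factor (default 2.0) after growth_interval non-skipped steps, and by backoff_factor (default 0.5) whenever inf/NaN gradients appear, so repeated overflows shrink it toward zero.

import torch

model = torch.nn.Linear(16, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(init_scale=65536.0, growth_interval=1)

for step in range(5):
    x = torch.randn(8, 16, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(x).square().mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # skipped when inf/NaN gradients are detected
    scaler.update()          # grows the scale, or halves it after an overflow
    print(scaler.get_scale())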

cuistiano
- 31
- 3
1
vote
0 answers
PyTorch automatic mixed precision: cast a whole code block to float32
I have a complex model that I would like to train in mixed precision. To do this, I use the torch.amp package. I can enable AMP for the whole model using with torch.cuda.amp.autocast(enabled=enable_amp, dtype=torch.float16):. However, the model…
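A minimal sketch of the documented way to force one region back to float32 inside an autocast block: nest autocast(enabled=False) and cast the float16 inputs back to float32 yourself.

import torch

x = torch.randn(8, 8, device="cuda")
with torch.cuda.amp.autocast(dtype=torch.float16):
    y = x @ x                      # runs in float16 under autocast
    with torch.cuda.amp.autocast(enabled=False):
        z = y.float() @ y.float()  # this whole block runs in float32
    w = z @ x                      # back under float16 autocast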

The Guy with The Hat
- 10,836
- 8
- 57
- 75
1
vote
1 answer
Does automatic mixed precision (AMP) halve the parameters of a model?
Before I knew about automatic mixed precision, I manually halved the model and data using half() to train in half precision. But the training results were not good at all.
Then I used automatic mixed precision to train a network, which returns…
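A minimal sketch (toy model) of the behavior the PyTorch docs describe: AMP does not halve the stored parameters. Weights stay float32; only the computations inside the autocast region run in float16.

import torch

model = torch.nn.Linear(4, 4).cuda()
x = torch.randn(2, 4, device="cuda")
with torch.cuda.amp.autocast():
    out = model(x)

print(next(model.parameters()).dtype)  # torch.float32
print(out.dtype)                       # torch.float16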

lee Lin
- 31
- 3
1
vote
1 answer
How to enable mixed precision training
I'm trying to train a deep learning model in VS Code, so I would like to use the GPU for that. I have CUDA 11.6, an NVIDIA GeForce GTX 1650, tensorflow-gpu==2.5.0, and pip version 21.2.3 on Windows 10. The problem is that whenever I run this part of the code…
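A minimal sketch of enabling mixed precision on this setup, using the stable API that TF 2.5 already ships; note that GPUs without Tensor Cores may see little speedup from the policy.

import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")
print(mixed_precision.global_policy())  # <Policy "mixed_float16">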

samar
- 23
- 2
- 5
1
vote
1 answer
Convert a trained model to use mixed precision in TensorFlow
In order to improve the latency of a trained model, I tried to use TensorFlow mixed precision.
Just setting the policy as described in https://www.tensorflow.org/guide/mixed_precision does not seem to increase the model's speed:
import tensorflow as…
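A minimal sketch of one common approach (the toy architecture is a stand-in): the dtype policy only affects layers created after it is set, so rebuild the same architecture under the policy and copy the trained float32 weights into it.

import tensorflow as tf

def build_model():
    # Stand-in for the trained architecture; any Keras model works the same way.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, dtype="float32"),
    ])

fp32_model = build_model()  # built (or loaded) under the default float32 policy
tf.keras.mixed_precision.set_global_policy("mixed_float16")
mixed_model = build_model()                        # same layers, mixed policy
mixed_model.set_weights(fp32_model.get_weights())  # reuse the trained weights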

ot226
- 303
- 4
- 15
1
vote
1 answer
PyTorch mixed precision: torch.cuda.amp running slower than normal
I am trying to run inference with a standard resnet18 model from torchvision.models. The model was trained without any mixed precision, purely in FP32.
However, I want faster results at inference time, so I…
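A minimal sketch of a fair comparison for this case (batch size and iteration counts are arbitrary): warm up first and call torch.cuda.synchronize() around the timed region, since CUDA kernels launch asynchronously.

import time
import torch
import torchvision.models as models

model = models.resnet18().cuda().eval()
x = torch.randn(32, 3, 224, 224, device="cuda")

def bench(use_amp):
    with torch.no_grad():
        for _ in range(5):  # warmup, excludes one-time setup costs
            with torch.cuda.amp.autocast(enabled=use_amp):
                model(x)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(20):
            with torch.cuda.amp.autocast(enabled=use_amp):
                model(x)
        torch.cuda.synchronize()
        return (time.time() - t0) / 20

print("fp32:", bench(False), "amp:", bench(True))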

Programmer1234
- 21
- 1
- 3
1
vote
0 answers
Dtype error when using mixed precision and building an EfficientNetB0 model
System information
- OS platform and distribution: macOS
- TensorFlow installed from: Colab
- TensorFlow version: 2.5.0
- Python version: 3.7
- GPU model and memory: Tesla T4
Error
TypeError: Input 'y' of 'Sub' Op has type float16 that does not…
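A minimal sketch reproducing the described setup (the training code is omitted); the float16/float32 'Sub' mismatch is typically reported from EfficientNet's built-in preprocessing layers under the mixed_float16 policy in TF 2.5-era releases.

import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = tf.keras.applications.EfficientNetB0(weights=None, classes=10)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")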

Gaurav Reddy
- 11
- 1
0
votes
0 answers
What's the gradient dtype during mixed precision training?
I want to figure out how torch.cuda.amp.autocast works, so I conducted an experiment. The code is as follows:
class CustomModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(CustomModel,…
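A minimal sketch of the same experiment with a toy model: parameters stay float32 under AMP, so the gradients produced by backward() are float32 as well, even when the forward pass ran in float16.

import torch

model = torch.nn.Linear(8, 2).cuda()
x = torch.randn(4, 8, device="cuda")
with torch.cuda.amp.autocast():
    loss = model(x).sum()
loss.backward()

print(model.weight.grad.dtype)  # torch.float32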

熊fiona
- 1
0
votes
0 answers
AssertionError: No inf checks were recorded for this optimizer - Unable to find a solution, despite multiple attempts
I have encountered an issue with the following error message while running my code:
Traceback (most recent call last):
  File "train.py", line 76, in
    model.optimize_parameters()
  File…
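A minimal sketch of the call ordering GradScaler expects (toy model and optimizer); this assertion is typically raised when scaler.step(optimizer) runs without a preceding scaler.scale(loss).backward() touching that optimizer's parameters in the same iteration.

import torch

model = torch.nn.Linear(8, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(4, 8, device="cuda")
optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = model(x).mean()
scaler.scale(loss).backward()  # records the inf checks that step() asserts on
scaler.step(optimizer)
scaler.update()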

chair
- 1
0
votes
0 answers
TensorFlow model can't use mixed precision
I'm trying to create a 3D autoencoder (a 3D U-Net without BatchNorm and skip connections) with Keras.
When I train it with tf.float32 it learns, but when I use the mixed precision policy the training seems to run forever.
I managed to reproduce the error in…
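A minimal sketch of a piece that is easy to miss with mixed_float16 in Keras: a custom training loop must wrap the optimizer in a LossScaleOptimizer (Model.fit does this automatically), otherwise small float16 gradients can underflow and training stalls.

import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")
# Dynamic loss scaling keeps small gradients representable in float16.
optimizer = mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())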

DoMan
- 1
- 1
0
votes
0 answers
How to apply PyTorch GradScaler in a WGAN
I would like to accelerate my WGAN code written in PyTorch.
In pseudocode, it looks like this:
n_times_critic = 5
for epoch in range(num_epochs):
    for batch_idx, batch in enumerate(batches):
        z_fake = gen(noise)
        z_real = batch
        …
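A minimal sketch of that loop with AMP added (generator, critic, and optimizers are stand-ins): one GradScaler per optimizer, applying scale/step/update separately to the critic updates and the generator update.

import torch

gen = torch.nn.Linear(16, 32).cuda()     # stand-in generator
critic = torch.nn.Linear(32, 1).cuda()   # stand-in critic
opt_g = torch.optim.RMSprop(gen.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
scaler_g = torch.cuda.amp.GradScaler()
scaler_c = torch.cuda.amp.GradScaler()
n_times_critic = 5

for step in range(10):
    for _ in range(n_times_critic):
        z_real = torch.randn(8, 32, device="cuda")  # stand-in real batch
        noise = torch.randn(8, 16, device="cuda")
        opt_c.zero_grad()
        with torch.cuda.amp.autocast():
            loss_c = critic(gen(noise)).mean() - critic(z_real).mean()
        scaler_c.scale(loss_c).backward()
        scaler_c.step(opt_c)
        scaler_c.update()

    noise = torch.randn(8, 16, device="cuda")
    opt_g.zero_grad()
    with torch.cuda.amp.autocast():
        loss_g = -critic(gen(noise)).mean()
    scaler_g.scale(loss_g).backward()
    scaler_g.step(opt_g)
    scaler_g.update()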

postnubilaphoebus
- 113
- 4
0
votes
0 answers
Deep learning inference performance difference between FP16 and FP32
I trained a resnet50 model (CIFAR-10 classification task) using AMP (mixed precision training),
and when I ran inference on the test data, I looked at the reduction in inference time between FP16 and FP32. (I changed the output type using amp autocast…
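A minimal sketch that separates the model from the hardware (matrix size and iteration counts are arbitrary): compare raw FP16 vs FP32 matmul throughput; if this shows little difference, the GPU itself, rather than the model or AMP, limits the FP16 speedup.

import time
import torch

a32 = torch.randn(4096, 4096, device="cuda")
a16 = a32.half()

def bench(a):
    for _ in range(3):  # warmup
        a @ a
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(20):
        a @ a
    torch.cuda.synchronize()
    return (time.time() - t0) / 20

print("fp32:", bench(a32), "fp16:", bench(a16))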

wonki cho
- 21
- 4