
I am working on a Dog-Cat classifier using the Intel Extension for PyTorch (Ref - https://github.com/amitrajitbose/cat-v-dog-classifier-pytorch). I want to reduce the training time for my model. How do I enable mixed precision in my code? I referred to this GitHub repo (https://github.com/intel/intel-extension-for-pytorch) for training my model.

2 Answers


Mixed precision with Intel Extension for PyTorch can be enabled with the calls below:

    import torch
    import intel_extension_for_pytorch as ipex

    # For Float32 (the default when no dtype is passed)
    model, optimizer = ipex.optimize(model, optimizer=optimizer)
    # For BFloat16
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

Please check out the links https://intel.github.io/intel-extension-for-pytorch/cpu/latest/index.html and https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html to learn more about Intel Extension for PyTorch.
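
Below is a minimal sketch of how these calls could fit into a single BFloat16 training step on CPU. The model, optimizer, loss function, and dummy batch here are placeholders (they are not from the question); the forward pass is wrapped in torch.cpu.amp.autocast so the compute actually runs in BFloat16:

    import torch
    import torch.nn as nn
    import intel_extension_for_pytorch as ipex

    # Placeholder model/optimizer/criterion; substitute your own classifier here
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    model.train()
    # Apply IPEX optimizations with BFloat16 as the low-precision dtype
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

    # Dummy batch standing in for a real DataLoader over the cat/dog images
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))

    optimizer.zero_grad()
    # Forward pass and loss computed under BFloat16 autocast on CPU
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        output = model(images)
        loss = criterion(output, labels)
    loss.backward()
    optimizer.step()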

Ramya R

To enable mixed precision you can directly follow the GitHub repo for Intel Extension for PyTorch (https://github.com/intel/intel-extension-for-pytorch).

For Float32:

    # Invoke the optimize function against the model object and optimizer object
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.float32)

For BFloat16:

    # Invoke the optimize function with the data type set to torch.bfloat16
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
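
As a rough sketch, the same optimize call can also be used after training to speed up evaluation. This assumes a placeholder model and a dummy input batch (neither is from the question), optimizes the model alone without an optimizer, and runs inference under BFloat16 autocast:

    import torch
    import torch.nn as nn
    import intel_extension_for_pytorch as ipex

    # Placeholder classifier; substitute your trained Dog-Cat model here
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
    model.eval()

    # Optimize the model alone (no optimizer) for BFloat16 inference
    model = ipex.optimize(model, dtype=torch.bfloat16)

    # Dummy image batch standing in for real validation data
    images = torch.randn(4, 3, 224, 224)

    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        probs = torch.softmax(model(images), dim=1)  # class probabilities
    print(probs)
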
ArunJose