
I would like to know how to use mixed precision with PyTorch and Intel Extension for PyTorch.

I have tried looking at the documentation on their GitHub, but I can't find anything that specifies how to go from fp32 to bfloat16.

1 Answer


The IPEX GitHub might not be the best place to look for API documentation. I would try the PyTorch IPEX page instead, which includes API usage examples.

This is an example of optimizing a model and optimizer for fp32:

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.float32)

This is the equivalent for bfloat16 (note that optimizer must be passed as a keyword argument, since the second positional parameter of ipex.optimize is dtype):

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
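
For actual mixed-precision execution, ipex.optimize is usually combined with autocast, which runs eligible ops in bfloat16 while keeping precision-sensitive ops in fp32. Below is a minimal training-step sketch; the toy Linear model, SGD optimizer, and random data are placeholders just to make it self-contained:

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Hypothetical toy model and optimizer, standing in for your own.
model = nn.Linear(64, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
model.train()

# Apply IPEX optimizations with bfloat16 as the low-precision dtype.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

data = torch.randn(32, 64)
target = torch.randint(0, 10, (32,))

# Run the forward pass under autocast so eligible ops execute in
# bfloat16 while precision-sensitive ops stay in fp32.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
    loss = criterion(output, target)

# Backward and optimizer step happen outside the autocast context.
loss.backward()
optimizer.step()
optimizer.zero_grad()

For inference-only workloads you would call ipex.optimize(model, dtype=torch.bfloat16) without an optimizer and wrap the forward pass in the same autocast context under torch.no_grad().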