Currently, I am training an MLP using the float32 data type. I also attempted to train the MLP in float16 by calling .half() on the model and inputs, but it resulted in NaN values. Could someone please help me with PyTorch code for training the MLP model with the float16, float8, and int8 data types?
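For reference, here is a minimal sketch of my setup and the float16 attempt. The architecture, data, and hyperparameters are illustrative placeholders, not my exact code, and I run this on a GPU:

```python
import copy

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Illustrative MLP; my real model is larger but structured similarly.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

# Placeholder data standing in for my dataset.
x = torch.randn(256, 64, device=device)
y = torch.randint(0, 10, (256,), device=device)

loss_fn = nn.CrossEntropyLoss()

# float32 training: this works fine.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# float16 attempt: converting the model and inputs with .half()
# is where I observe the NaN values during training.
model_fp16 = copy.deepcopy(model).half()
x_fp16 = x.half()
opt_fp16 = torch.optim.SGD(model_fp16.parameters(), lr=1e-2)
for step in range(100):
    opt_fp16.zero_grad()
    loss = loss_fn(model_fp16(x_fp16), y)
    loss.backward()
    opt_fp16.step()
```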