I have gathered from many research papers that machine-learning algorithms (and more specifically CNNs/DNNs) are remarkably error-tolerant: they can survive severe numerical error, to the point where acceptable quality of results and accuracy can be obtained using 8-bit and even sub-byte integer computational operators. Some papers, for example, demonstrate that good results are achievable with 4-bit integer MAC units, making floating-point units entirely unnecessary for such workloads.
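To make the idea concrete, here is a minimal toy sketch (my own NumPy example, not taken from any particular paper) of the kind of symmetric quantization scheme those works rely on: weights and activations are mapped to small integers, the MAC accumulates in int32, and a single float rescale recovers an approximation of the original result. The function name `quantize_intb` and the bit width parameter are just illustrative:

```python
import numpy as np

def quantize_intb(x, num_bits=8):
    """Symmetric quantization of a float tensor to signed num_bits integers.

    Returns the integer tensor and the scale needed to dequantize.
    """
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for int8, 7 for int4
    scale = np.max(np.abs(x)) / qmax         # map largest magnitude to qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Toy float32 weights and activations
w = np.random.randn(64, 64).astype(np.float32)
a = np.random.randn(64).astype(np.float32)

wq, ws = quantize_intb(w, num_bits=8)
aq, as_ = quantize_intb(a, num_bits=8)

# Integer MAC: accumulate in int32, rescale once at the end
y_int = wq.astype(np.int32) @ aq.astype(np.int32)
y_approx = y_int * (ws * as_)

y_exact = w @ a
print("max abs error:", np.max(np.abs(y_exact - y_approx)))
```

Running this with `num_bits=8` or even `num_bits=4` shows the per-output error stays small relative to the signal, which is essentially why inference survives such aggressive quantization.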
My question is about floating point: where is it actually mandatory? Does it still have a place in any machine-learning/AI sub-domains, or is it really only relevant to general-purpose and scientific computing? Any pointers to applications/benchmarks/platforms that genuinely need it and rely on it would be appreciated.