I work in the auto industry, where the reliability of machine inference is a critical issue because of liability and potential lawsuits. Neural networks (NNs) are very popular now, but how reliable are they? People say a model was tested on 1,000 test samples. That doesn't seem like enough; what about 10,000 or more? And what can you say about untested or unseen data?
I don't mean only to raise the lack-of-data issue, but also the black-box nature of NNs. I find Gaussian processes (GPs) "safer", since the output comes with some kind of predictive distribution (although that depends on the kernel you choose), and at least I know that unseen inputs will return predictions similar to those of similar seen inputs. What about NNs? Is there any nice distribution over their outputs? Can I safely assume the output of an NN changes continuously as the input changes? Thank you.
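To make concrete what I mean by a GP's output "being a distribution", here is a minimal sketch using scikit-learn's GaussianProcessRegressor on made-up toy data (the kernel choice and the data are just illustrative, not from my actual application):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy training data: a noisy sine curve
rng = np.random.RandomState(0)
X_train = rng.uniform(0, 10, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.randn(30)

# The RBF kernel encodes "similar inputs give similar outputs";
# WhiteKernel accounts for observation noise
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel).fit(X_train, y_train)

# Every prediction comes with a standard deviation, i.e. a full
# Gaussian predictive distribution at each input, seen or unseen
X_test = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:3], std[:3])  # std grows in regions far from the training data
```

The point is that the predictive standard deviation grows in regions far from the training data, so the model itself signals where it is unreliable. A plain NN gives me only a point prediction with no such signal.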
Similar topic: How to prove the reliability of a predictive model to executives?