I'm learning about explainable AI (XAI), and some of the papers I've read say that XAI can be used to improve a model's performance. This puzzles me: once a model has converged, I would think it cannot reach a new, better minimum, which seems to contradict that claim. Is there any way to improve a model's results using XAI methods? And if so, how does it work? Thanks a lot!
-
One example could be the use of 2D input arrays for image recognition instead of 1D. While fully connected layers are fully connected either way, a 2D array may allow test images to be processed without flattening, thus improving performance. Not a perfect example, but it's what comes to mind. Performance isn't necessarily just accuracy; speed matters too. – kpie Jan 18 '22 at 02:48
1 Answer
XAI methods primarily help improve models indirectly: they give you a better understanding of a model's behavior and make debugging faster, so that you, as an engineer, can take targeted measures to improve the model.
To the best of my knowledge, there is just one scientific work (see below) that uses the explanations produced by XAI methods directly in the training process of models. Essentially, the paper proposes a novel reasoning approach: first, a model makes a decision; then an explanation of that decision is computed; finally, the same (or possibly another) model uses the original input together with the explanation to reach a final decision. In some sense, the network "reflects"/contemplates on its initial decision and its reasons before coming to a final conclusion.
"Reflective-Net: Learning from Explanations" by Schneider et al. https://arxiv.org/abs/2011.13986
Disclaimer: I am the inventor and first author of the paper.
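For intuition, here is a minimal PyTorch sketch of that two-pass idea. It is not the paper's exact architecture: a simple input-gradient saliency map stands in for the explanation (the paper uses richer explanation methods), and the names `SmallCNN` and `input_gradient_explanation` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A tiny classifier; in_channels varies between the two passes."""
    def __init__(self, in_channels, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def input_gradient_explanation(model, x, target):
    """Gradient-based saliency map (a stand-in for Grad-CAM etc.)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, target.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x)
    # One saliency channel: absolute gradient, max over input channels.
    return grad.abs().max(dim=1, keepdim=True).values.detach()

# First pass: plain classifier. Second pass: same architecture, but the
# explanation is concatenated to the input as an extra channel.
first = SmallCNN(in_channels=1)
second = SmallCNN(in_channels=2)

x = torch.randn(8, 1, 28, 28)            # dummy batch of grayscale images
initial_pred = first(x).argmax(dim=1)     # initial decision
expl = input_gradient_explanation(first, x, initial_pred)
final_logits = second(torch.cat([x, expl], dim=1))  # "reflective" pass
```

In training, both passes can be optimized with an ordinary classification loss on `final_logits`; the point is simply that the second decision is conditioned on an explanation of the first.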
