Questions tagged [xai]
29 questions
0
votes
1 answer
How can LIME explanations for tabular data be created for a Sequential Keras model?
How can the LIME explainability method (specifically LimeTabularExplainer) be used to explain a neural network (a Sequential Keras model)?
I'm working with the Adult dataset (binary classification of tabular data). I encode it with one-hot encoding using…

Kainsha
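A minimal sketch of one way to wire this up, assuming a sigmoid-output Sequential model; the data, model, and the predict_proba_fn wrapper below are illustrative stand-ins, not the asker's code:

import numpy as np
from tensorflow import keras
from lime.lime_tabular import LimeTabularExplainer

# Tiny synthetic stand-in for the one-hot-encoded Adult data.
rng = np.random.default_rng(0)
X_train = rng.random((200, 8)).astype("float32")
y_train = (X_train[:, 0] > 0.5).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=3, verbose=0)

# LimeTabularExplainer needs one probability column per class; a sigmoid
# output gives a single column, so stack (1 - p, p) manually.
def predict_proba_fn(x):
    p = model.predict(x, verbose=0).reshape(-1)
    return np.column_stack([1 - p, p])

explainer = LimeTabularExplainer(
    X_train, mode="classification",
    feature_names=[f"f{i}" for i in range(8)],
    class_names=["<=50K", ">50K"],
)
exp = explainer.explain_instance(X_train[0], predict_proba_fn, num_features=5)
print(exp.as_list())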
0
votes
0 answers
Emoji features used in the sentiment analysis are not showing on my LIME figures
I'm a student at a business school trying to learn how to use advanced data analytics.
My model uses a BiLSTM for a sentiment classification task. The dataset consists of 80,000 tweets, and the features include words and emojis. I'm trying…

Ariel
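One plausible cause: LimeTextExplainer's default split_expression (r'\W+') treats emojis as non-word characters and drops them before they can ever show up as features. A sketch of a whitespace-based split, with a dummy classifier standing in for the BiLSTM:

import numpy as np
from lime.lime_text import LimeTextExplainer

# Dummy stand-in for the BiLSTM: LIME only needs a callable that maps a
# list of texts to an (n, 2) array of class probabilities.
def predict_proba_fn(texts):
    p = np.array([0.8 if "😊" in t else 0.3 for t in texts])
    return np.column_stack([1 - p, p])

explainer = LimeTextExplainer(
    class_names=["negative", "positive"],
    split_expression=r"\s+",   # split on whitespace so emojis stay as tokens
)
exp = explainer.explain_instance(
    "great service 😊 but slow delivery 😡",
    predict_proba_fn,
    num_features=6,
)
print(exp.as_list())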
0
votes
0 answers
How to output a two-class SHAP feature bar plot using XGBoost for binary classification?
I am currently working on a binary classification problem and want to output SHAP plots showing which features drive the classification. I was able to easily output this for scikit-learn's implementation of Random Forest, but for…

John Pruden
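The difference is that for binary XGBoost models shap's TreeExplainer returns a single attribution array (for the positive-class margin), whereas scikit-learn's RandomForest yields one array per class, which is what produces the two-bar plot automatically. A hedged sketch that mirrors the attributions to recover the two-class bar (synthetic data; it assumes the older list/array shap API, and relies on the fact that for a log-odds output class 0's contribution is exactly the negative of class 1's):

import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# Single (n_samples, n_features) array for the positive-class margin.
sv = shap.TreeExplainer(model).shap_values(X)

# Mirror it to imitate the per-class list sklearn's RandomForest gives.
shap.summary_plot([-sv, sv], X, plot_type="bar",
                  class_names=["class 0", "class 1"])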
0
votes
0 answers
IndexError: index 1 is out of bounds for axis 1 with size 1 when using Python LIME
I am trying to validate my ANN, built on tabular data, with XAI methods to improve transparency. But when I use the LIME method I get this error, and I am not able to debug it. Kindly help me, as I am new to this field.
I want to make the…
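For what it's worth, this exact IndexError usually means LIME is indexing probabilities[:, 1] for the second class while the network's predict returns a single-column (n, 1) array. A sketch of the usual fix; the StubModel below just reproduces the shape in question:

import numpy as np

# Stand-in for the asker's ANN: any model whose predict returns an
# (n, 1) column of positive-class probabilities triggers the error.
class StubModel:
    def predict(self, x):
        return np.full((len(x), 1), 0.7)

model = StubModel()

# Wrap predict so LIME sees one probability column per class.
def predict_fn(x):
    p = np.asarray(model.predict(x)).reshape(-1)
    return np.column_stack([1 - p, p])

print(predict_fn(np.zeros((3, 4))).shape)  # (3, 2)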
0
votes
0 answers
SHAP explainer gives an error with ECFP4 fingerprints
I am training a Random Forest on molecular fingerprints and adding a SHAP explainer with the shap package function
explainer = shap.Explainer(forest)
and it gives me the error:
"ExplainerError: Additivity check failed in TreeExplainer!…

Alya
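The additivity check compares the summed attributions against the model output and is known to trip on dtype quirks; with fingerprint inputs, passing a plain float64 array and, if necessary, disabling the check are common workarounds. A sketch with random bit vectors standing in for ECFP4:

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Random 0/1 vectors standing in for 2048-bit ECFP4 fingerprints.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 128)).astype(np.float64)
y = rng.integers(0, 2, size=200)
forest = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(forest)
# check_additivity=False skips the failing consistency check; inspect the
# data and dtypes first, since the check exists to catch real mismatches.
shap_values = explainer.shap_values(X, check_additivity=False)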
0
votes
0 answers
Cannot interpret SVM model using Shapash
Currently, I'm exploring machine learning interpretability tools for one of my projects. Shapash is quite a new tool, and many people suggest using it to create a few easily interpretable charts for an ML model. When I tried it with…

user22
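Shapash's built-in backends target tree and linear models, so an SVM likely needs precomputed contributions. A sketch that computes them with SHAP's model-agnostic KernelExplainer and hands them to compile(); the compile(contributions=...) route follows the shapash >= 2 documentation, and the per-class list for classification is an assumption worth checking against your installed version:

import numpy as np
import pandas as pd
import shap
from sklearn.svm import SVC
from shapash import SmartExplainer

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((150, 4)), columns=list("abcd"))
y = pd.Series(rng.integers(0, 2, size=150))
svm = SVC(probability=True).fit(X, y)

# Model-agnostic attributions, since TreeExplainer cannot handle an SVM.
background = shap.sample(X, 50)
contribs = shap.KernelExplainer(svm.predict_proba, background).shap_values(X.iloc[:20])

xpl = SmartExplainer(model=svm)
xpl.compile(
    x=X.iloc[:20],
    contributions=[pd.DataFrame(c, columns=X.columns, index=X.index[:20])
                   for c in contribs],   # one frame per class (assumed)
)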
0
votes
0 answers
I want to know why my GradCAM implementation always shows all 0 values
I am using GradCAM with my custom dataset, following the code from https://github.com/jacobgil/pytorch-grad-cam.
But I always get 0 values when the target layer is layer4.
My network is a pretrained ResNet-18. I checked there is a…

Manyerror
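A sketch of the setup that repo's own examples use: hooking the last block inside layer4 rather than the whole layer4 container, which is one common reason the hooked activations and gradients come back empty. The model and input here are stand-ins:

import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(weights="IMAGENET1K_V1").eval()

# Hook the final residual block, not the whole layer4 container.
target_layers = [model.layer4[-1]]

cam = GradCAM(model=model, target_layers=target_layers)
input_tensor = torch.randn(1, 3, 224, 224)   # stand-in image batch
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(281)])  # arbitrary class
print(grayscale_cam.min(), grayscale_cam.max())  # should not be all zeros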
0
votes
0 answers
DeepSHAP beeswarm plot only shows blue dots and does not have a color gradient to show feature values
I tried running an example of SHAP's Deep Explainer from this link, using the Titanic dataset. This is the code from the example:
# import package
import shap
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from…

natnicha teja
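The beeswarm's red-to-blue gradient encodes each dot's feature value, and it only appears when the raw feature matrix for the explained rows is passed alongside the attributions; an all-blue plot typically means features was omitted or handed over as a raw tensor. A sketch of the call, with shap_values, x_test, and feature_names assumed from the question's setup rather than defined here:

import shap

# Pass the same rows the explainer was run on, as a DataFrame or numpy
# array (convert tensors first), so SHAP can color dots by feature value.
shap.summary_plot(
    shap_values[0],          # attributions for one output class
    features=x_test,
    feature_names=feature_names,
)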
0
votes
0 answers
Upload model to GCP Vertex AI with explanation metadata - problem with input_baselines
I'd like to upload a model to Vertex AI with explanation metadata, in particular the input_baselines parameter. I'm getting an error:
TypeError: Value must be iterable
suggesting I'm passing a variable of the wrong type, although the documentation suggests…

gosia
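"TypeError: Value must be iterable" is consistent with input_baselines receiving a bare scalar; the field expects a list of baselines, each shaped like one model input. A sketch using the aiplatform SDK's explain metadata types, with illustrative tensor names and sizes:

from google.cloud import aiplatform

metadata = aiplatform.explain.ExplanationMetadata(
    inputs={
        "features": aiplatform.explain.ExplanationMetadata.InputMetadata(
            input_tensor_name="dense_input",
            # A list with ONE baseline, itself a 10-element vector; passing
            # a plain 0.0 here raises "Value must be iterable".
            input_baselines=[[0.0] * 10],
        )
    },
    outputs={
        "prediction": aiplatform.explain.ExplanationMetadata.OutputMetadata(
            output_tensor_name="dense_output",
        )
    },
)
# Then: aiplatform.Model.upload(..., explanation_metadata=metadata, ...)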
0
votes
0 answers
Evaluation of SHAP outputs
I am working on implementing XAI using the SHAP library. The implementation is fine, but if we want to prove that the SHAP outputs are accurate and can ultimately be trusted, are there any metrics, or how can we make sure that the values we…

Rihab
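There is no single agreed-upon metric, but perturbation-based faithfulness checks are a common sanity test: baselining the features SHAP ranks highest should move the prediction more than baselining random ones. A self-contained sketch of that idea, with illustrative names:

import numpy as np

def deletion_check(model_predict, x, shap_row, baseline, k=5):
    """Rough faithfulness probe, not a formal metric: compare the
    prediction shift from masking the top-k |SHAP| features against
    masking k random features."""
    base_pred = model_predict(x[None])[0]
    top = np.argsort(-np.abs(shap_row))[:k]
    rand = np.random.default_rng(0).choice(len(x), k, replace=False)

    def shift(idx):
        x2 = x.copy()
        x2[idx] = baseline[idx]
        return abs(base_pred - model_predict(x2[None])[0])

    return shift(top), shift(rand)   # expect the first to be much larger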
0
votes
1 answer
My loss value is NaN and accuracy is 0.0000e+00 on numerical data
I am working on an XAI model and implementing a simple model on my data. During training, the loss is NaN and the accuracy is 0, and I am unable to find the problem.
[XAI]…

Khawar Islam
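Without the code, the usual culprits for NaN loss on numerical data are NaN/inf entries in the input, unscaled features, labels the loss can't handle, or an oversized learning rate. A quick triage sketch; the oversized random X stands in for the question's data:

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((100, 5)) * 1e4            # unscaled stand-in data
y = rng.integers(0, 2, 100).astype(float)

# 1. Rule out bad values before blaming the architecture.
assert not np.isnan(X).any() and not np.isinf(X).any(), "bad values in X"
assert not np.isnan(y).any(), "bad values in y"

# 2. Scale features to zero mean / unit variance.
X = StandardScaler().fit_transform(X)

# 3. If loss is still NaN, recompile with a smaller learning rate, e.g.:
# model.compile(optimizer=keras.optimizers.Adam(1e-4),
#               loss="binary_crossentropy", metrics=["accuracy"])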
0
votes
1 answer
Is there any post-hoc explainable AI method for complex CNN-based architectures, e.g. Mask R-CNN?
Is there any post-hoc, easily applicable explainable AI tool for detections produced by complex CNN-based architectures, e.g. Mask R-CNN?
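Gradient-free methods such as EigenCAM are the usual suggestion for detectors, since the dict-shaped outputs of Mask R-CNN break vanilla Grad-CAM backprop; the pytorch-grad-cam repo's detection tutorials build on the same idea. A self-contained sketch of the EigenCAM idea (first principal component of backbone activations) on torchvision's Mask R-CNN, not that library's API:

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Capture the FPN's highest-resolution feature map with a forward hook.
acts = {}
handle = model.backbone.register_forward_hook(
    lambda module, inputs, output: acts.update(feat=output["0"]))

img = torch.rand(3, 480, 640)             # stand-in image
with torch.no_grad():
    model([img])
handle.remove()

# EigenCAM: the first right-singular vector of the (C, H*W) activation
# matrix, reshaped back to (H, W), acts as a class-agnostic saliency map.
a = acts["feat"][0]                        # (C, H, W)
flat = a.reshape(a.shape[0], -1)
_, _, vh = torch.linalg.svd(flat, full_matrices=False)
cam = vh[0].reshape(a.shape[1], a.shape[2])
if cam.max() < -cam.min():                 # fix SVD's sign ambiguity
    cam = -cam
cam = torch.relu(cam) / (cam.max() + 1e-8)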
0
votes
1 answer
SHAP: dependence_plot fails with an error raised deep inside the function
I have this code:
import pandas as pd
import shap
import xgboost
df = pd.read_clipboard(sep=",")
labels=df.pop('target')
model = xgboost.XGBClassifier().fit(df, labels)
# compute SHAP values
explainer = shap.TreeExplainer(model)
shap_values =…

mans
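Errors raised deep inside dependence_plot are often down to the frame's dtypes (read_clipboard happily yields object columns) or to the automatic interaction search. A reproducible sketch with a built-in dataset standing in for the clipboard data:

import pandas as pd
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X = X.apply(pd.to_numeric, errors="coerce")   # force numeric dtypes

model = xgboost.XGBClassifier().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# interaction_index=None skips the automatic interaction search, another
# place where the internals can throw on awkward data.
shap.dependence_plot("mean radius", shap_values, X, interaction_index=None)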
-2
votes
0 answers
SHAP gives the error: 'NoneType' object has no attribute 'copy'
code:
import shap
from pycaret.classification import load_model, predict_model
pipe = load_model()
dataframe =
explainer = shap.TreeExplainer(pipe.named_steps['trained_model'])
train_full_pipe =…

Sc0der
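The elided lines make this hard to pin down, but with pycaret pipelines this error commonly surfaces when the explainer is fed data the pipeline never transformed. A sketch that pushes the raw frame through the preprocessing steps before explaining the final estimator; "my_model" and dataframe are placeholders for the question's elided values:

import shap
from pycaret.classification import load_model

pipe = load_model("my_model")        # placeholder saved-model name

# Explain the final estimator on data transformed by every earlier step.
final_model = pipe.named_steps["trained_model"]
X_transformed = pipe[:-1].transform(dataframe)   # dataframe: your raw input

explainer = shap.TreeExplainer(final_model)
shap_values = explainer.shap_values(X_transformed)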