Questions tagged [lime]

LIME (local interpretable model-agnostic explanations) is an explainability method used to inspect machine learning models. Include a tag for Python, R, etc. depending on the LIME implementation.

LIME (local interpretable model-agnostic explanations) is an explainability method used to inspect machine learning models and debug their predictions. It was originally proposed in “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (Ribeiro et al., NAACL 2016) as a way to explain how models make their predictions in natural language processing tasks. Since then, the approach has been implemented in several software packages, and it inspired later "explainable machine learning" techniques such as SHAP.
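The local-surrogate idea described above can be sketched from scratch in a few lines (a minimal illustration of the technique, not the `lime` package's API; the Gaussian perturbation scale, kernel width, and Ridge surrogate below are arbitrary illustrative choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an arbitrary black-box model on synthetic data.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                       # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(1000, 5))  # perturbations around x0
pz = black_box.predict_proba(Z)[:, 1]           # black-box outputs on them

# Weight perturbed samples by proximity to x0 (an RBF kernel).
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / 0.5)

# Fit a weighted linear surrogate; its coefficients are the local,
# per-feature explanation of the black-box prediction at x0.
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
explanation = surrogate.coef_
```

The `lime` package wraps the same recipe (perturb, query the black box, fit a weighted sparse linear model) behind classes such as `LimeTabularExplainer` and `LimeTextExplainer`.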

Related Concepts

LIME Implementations

Implementations of this approach exist in several software packages.

Python

R

Further Reading

101 questions
1
vote
1 answer

Issue with importing TextExplainer from eli5.lime package relating to deprecated 'itemfreq' function

I am working on a BERT model for text classification and wish to use TextExplainer for model interpretation. However, when loading the library eli5.lime I receive the following error: ImportError: cannot import name 'itemfreq' from 'scipy.stats' It…
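This ImportError arises because `scipy.stats.itemfreq` was removed in SciPy 1.3, while older eli5 releases still import it. A minimal sketch of the usual workaround (assuming upgrading eli5 or pinning `scipy<1.3` is not an option) is to rebuild the same frequency table with `np.unique`:

```python
import numpy as np

def itemfreq(a):
    """Replacement for scipy.stats.itemfreq (removed in SciPy 1.3):
    a 2-column array of unique values and their counts."""
    values, counts = np.unique(np.asarray(a), return_counts=True)
    return np.column_stack((values, counts))

table = itemfreq(["a", "b", "a", "c", "a"])
# table[:, 0] holds the unique items, table[:, 1] their counts
```

Monkey-patching this name onto `scipy.stats` before importing `eli5.lime` may let the old import succeed, though upgrading eli5 is the cleaner fix.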
1
vote
1 answer

How do I use LIME for a multi-class NLP CNN neural network?

I have a problem: I would like to show how the model reached its decision, and I would like to use LIME for this. I have found the following tutorial. I have a free-text field and would like to identify which case it is - this is to be solved with the help of…
Test
1
vote
0 answers

Change text size of LIME show_in_notebook

I was reading this tutorial on LIME, which showed how to visualise the result of a prediction by executing this code: import pandas as pd import numpy as np import lime import matplotlib.pyplot as plt import random import…
Nemo
1
vote
1 answer

SHAP KernelExplainer AttributeError numpy.ndarray

I've developed a text classifier in the form of a Python function that takes a np.array of strings (each string is one observation). def model(vector_of_strings): ... # do something return vec_of_probabilities # like [0.1, 0.23, ...,…
student
1
vote
0 answers

How to print 2-grams in LimeTextExplainer

I am trying to explain the importance of a sentence using the following pipeline with LimeTextExplainer from the LIME package. Pipeline(steps=[('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf',…
student
1
vote
1 answer

Having trouble displaying HTML output in Databricks Python

There are similar questions, but none solve my problem. I think the last line of my code doesn't work in Databricks (where I am working). I am displaying part of the code, which you actually can't reproduce, but it's probably small…
ash1
1
vote
0 answers

unsupported operand type(s) for -: 'str' and 'str' - LimeTabularExplainer implementation

My issue is that I get the following error when I try to follow the implementation given in the…
Manu675
1
vote
0 answers

keras 1D-CNN outputs with LIME

I have tried to explain the outputs of my 1D-CNN with LIME. My 1D-CNN is a binary-class text classifier where every class is independent. Up to the LIME step, the model works well, but I'm not sure how to apply it to LIME…
DG A
1
vote
0 answers

Trying to implement explainable AI on a neural network (RNN/LSTM/GRU). Do I need logistic regression? And which method should I apply: LIME or SHAP?

Trying to implement explainable AI on a neural network (RNN/LSTM/GRU). Do I need logistic regression? And which method should I apply: LIME or SHAP?
1
vote
0 answers

Lime Model Explainer in Python vs R

I am trying to convert some R code to Python. The code builds h2o models and uses Lime for explanations. The Lime modules in each language seem to be quite different. What would be the equivalent Python functions for this R code? explainer <-…
1
vote
0 answers

Understanding LIME output

I am trying to understand this LIME output. The predicted value is the prediction made by the model we are trying to understand, but how are min and max calculated? Another question: is RidgeRegressor the default explainable model used by the lime python…
Allyg
1
vote
0 answers

Python LIME keeps returning confusing errors: index out of bounds, or input feature length does not match

In Python, I have a RandomForestRegressor of y onto 16 numeric features (variables). Now I have an array of length 16, which is correct because I have 16 features. But I got an error: I don't know what is going on. I even tried to input an array with the…
Chenying Gao
1
vote
0 answers

How to interpret NaN in Lime SubmodularPick

I am creating a visualization that displays the aggregated results of LIME SubmodularPick: basically, I plot a histogram of the results as well as a boxplot. However, when I look at the table I notice a lot of values are missing; instead they appear as…
Cristian
1
vote
0 answers

How to work with a model with two inputs in LIME for explaining text classification

My Keras model is the following: https://i.stack.imgur.com/2Gfun.png It takes two phrases, and the result is the relation between them, which can be "Attack", "Support", or "Neither". Now I want to explain the output of this model with LIME. I tried…
Raiseku
1
vote
1 answer

How to apply model-agnostic methods to an LSTM neural network in Python?

I used an LSTM neural network (Keras package) on my data set and want to use model-agnostic methods to find the effects of the variables on my prediction. The data set is an array with three dimensions. I have multiple IDs that each have their own…
Danique
  • 11
  • 1