
I am trying to explain the outputs of my 1D-CNN with LIME. The 1D-CNN is a binary text classifier where every class is independent. The model itself works well; the problem starts when I try to apply LIME, and I'm not sure how to plug the model into the LIME algorithm.

This is the model summary:

_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 embedding_2 (Embedding)     (None, 400, 128)          14217984  
                                                                 
 dropout_4 (Dropout)         (None, 400, 128)          0         
                                                                 
 conv1d_2 (Conv1D)           (None, 400, 50)           19250     
                                                                 
 global_max_pooling1d_2 (Glo  (None, 50)               0         
 balMaxPooling1D)                                                
                                                                 
 flatten_2 (Flatten)         (None, 50)                0         
                                                                 
 dropout_5 (Dropout)         (None, 50)                0         
                                                                 
 dense_4 (Dense)             (None, 50)                2550      
                                                                 
 dense_5 (Dense)             (None, 1)                 51        
                                                                 
=================================================================
Total params: 14,239,835
Trainable params: 14,239,835
Non-trainable params: 0
_________________________________________________________________
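For reference, a model matching this summary can be rebuilt with something like the sketch below. The vocabulary size (111,078) and the Conv1D kernel size (3, with 'same' padding) are inferred from the parameter counts; the dropout rates and activations are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical reconstruction of the architecture in the summary above.
# Parameter counts match the summary; dropout rates and activations are guesses.
model = keras.Sequential([
    keras.Input(shape=(400,), dtype="int32"),          # padded token-id sequences of length 400
    layers.Embedding(input_dim=111078, output_dim=128),
    layers.Dropout(0.5),
    layers.Conv1D(filters=50, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(50, activation="relu"),
    layers.Dense(1, activation="sigmoid"),              # single sigmoid unit for the binary label
])
model.summary()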
This is my probability function:

import numpy as np
from lime.lime_text import LimeTextExplainer

def proba(text):

  # clean(text) is a preprocessing function.
  pre = clean(text)

  # tovector(pre) is a tokenization, vectorization, and padding function.
  vector = tovector(pre)

  prob = model.predict(vector)

  prob = prob.reshape(1, -1)
  p0 = 1 - prob

  return np.hstack((p0, prob))
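Note that LimeTextExplainer calls the probability function with a list of perturbed strings and expects a (num_samples, num_classes) array back, whereas proba above handles a single text and returns one row. A batched variant, assuming clean and tovector each process one string as in the code above, might look like this sketch:

def proba_batch(texts):
  # LIME passes a list of perturbed strings; score them all in one call.
  # clean() and tovector() are the helpers from proba(), assumed to take
  # a single string and return a padded (1, 400) vector.
  vectors = np.vstack([tovector(clean(t)) for t in texts])

  prob = model.predict(vectors)        # shape: (len(texts), 1)
  prob = prob.reshape(-1, 1)
  p0 = 1 - prob

  return np.hstack((p0, prob))         # shape: (len(texts), 2)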


def limer(text):
  explainer = LimeTextExplainer(class_names=[0, 1])
  exp = explainer.explain_instance(text, proba, top_labels=1, num_features=10)
  result = exp.show_in_notebook(text)
  return result

limer("Hello, World!")

This is the error message:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-74-b745bf0d2681> in <module>()
----> 1 limer("Hello, World!")

8 frames
/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
    331         raise ValueError(
    332             "Found input variables with inconsistent numbers of samples: %r"
--> 333             % [int(l) for l in lengths]
    334         )
    335 

ValueError: Found input variables with inconsistent numbers of samples: [5000, 1]
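For context, explain_instance perturbs the input text into num_samples variants (5,000 by default) and passes them to the probability function in a single call, so the 5000 here is presumably the number of perturbed samples and the 1 the single row that proba returned. A quick shape check with the hypothetical proba_batch sketch above would be:

# Hypothetical sanity check: one probability row per input text is expected.
probs = proba_batch(["Hello, World!", "Goodbye, World!"])
print(probs.shape)   # should print (2, 2)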
  • Does this help https://github.com/marcotcr/lime/blob/master/doc/notebooks/Tutorial%20-%20Image%20Classification%20Keras.ipynb –  Jan 21 '22 at 01:47

0 Answers