
I am working on model interpretability. I want to use the AllenNLP demo to check the saliency maps and adversarial attack methods (implemented in this demo) on some other models. I followed the tutorial here and ran the demo on my local machine. Now I want to load my pretrained model from Hugging Face ("cardiffnlp/twitter-roberta-base-sentiment-latest", loaded using this code), but I don't know how to add the model to the demo. I checked the tutorial here, but that guide only covers models implemented in AllenNLP.

These are the changes I made in a new directory (roberta_sentiment_twitter) inside allennlp_demo, but they are certainly not right, since the existing implementation only works with models implemented in AllenNLP.

#in model.json
{
    "id": "roberta-sentiment-twitter",
    "pretrained_model_id": "cardiffnlp/twitter-roberta-base-sentiment-latest"
}

#in api.py
import os
from allennlp_demo.common import config, http
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

if __name__ == "__main__":

    MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    hf_config = AutoConfig.from_pretrained(MODEL)  # avoid shadowing allennlp_demo.common.config
    # model = AutoModelForSequenceClassification.from_pretrained(MODEL)

    # This is clearly wrong: a Hugging Face model is not a demo endpoint and
    # has no .run() method, but I don't know what to build here instead.
    endpoint = AutoModelForSequenceClassification.from_pretrained(MODEL)
    endpoint.run()


#in test_api.py
from allennlp_demo.common.testing import ModelEndpointTestCase
# Note: my api.py above does not define this class; the sketch below is what
# I think it is supposed to look like.
from allennlp_demo.roberta_sentiment_twitter.api import RobertaSentimentAnalysisModelEndpoint


class TestRobertaSentimentTwitterModelEndpoint(ModelEndpointTestCase):
    endpoint = RobertaSentimentAnalysisModelEndpoint()
    predict_input = {"sentence": "a very well-made, funny and entertaining picture."}
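
For comparison, the existing endpoints in the demo seem to follow roughly this pattern (this is only my reading of the repo's other models, not something I have working for a Hugging Face checkpoint):

#sketch of what I think api.py is supposed to look like
import os

from allennlp_demo.common import config, http


class RobertaSentimentAnalysisModelEndpoint(http.ModelEndpoint):
    def __init__(self):
        # model.json is read into a config.Model; the demo then loads the
        # referenced AllenNLP model archive, which is exactly where a plain
        # Hugging Face checkpoint does not fit.
        c = config.Model.from_file(os.path.join(os.path.dirname(__file__), "model.json"))
        super().__init__(c)


if __name__ == "__main__":
    endpoint = RobertaSentimentAnalysisModelEndpoint()
    endpoint.run()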

Is there a straightforward way to load my model into the AllenNLP demo?
Also, in the future I want to add some other interpretability methods to this demo. Is there a tutorial for that too?

2 Answers


If you are simply interested in getting the output from various saliency interpreters, this guide chapter explains how to use the API (you will not need the front-end demo code for this). If you want to apply the interpreters to your custom models, you may also find How to use Allen NLP interpret on custom models helpful.
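
As a minimal sketch (the archive path below is just a placeholder for your own trained AllenNLP model), running a saliency interpreter programmatically looks roughly like this:

#using a saliency interpreter without the demo front-end
from allennlp.predictors import Predictor
from allennlp.interpret.saliency_interpreters import SimpleGradient

# Load any trained AllenNLP model archive; the path here is a placeholder.
predictor = Predictor.from_path("path/to/your/model.tar.gz")

# SimpleGradient returns normalized gradient-based saliency scores per input
# token; IntegratedGradient and SmoothGradient can be swapped in the same way.
interpreter = SimpleGradient(predictor)
saliency = interpreter.saliency_interpret_from_json(
    {"sentence": "a very well-made, funny and entertaining picture."}
)
print(saliency)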

To add your own custom interpreters, you can install AllenNLP from source and add your methods under allennlp/interpret, roughly as in the sketch below. Feel free to open a PR at https://github.com/allenai/allennlp too.
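
A custom interpreter is essentially a subclass of SaliencyInterpreter; here is a rough sketch (the gradient-norm scoring below is only an illustrative placeholder, not an established method):

#skeleton for a custom saliency interpreter
import numpy

from allennlp.common.util import JsonDict, sanitize
from allennlp.interpret.saliency_interpreters import SaliencyInterpreter


@SaliencyInterpreter.register("my-custom-interpreter")
class MyCustomInterpreter(SaliencyInterpreter):
    def saliency_interpret_from_json(self, inputs: JsonDict) -> JsonDict:
        # Turn the JSON input into labeled instances, then ask the predictor
        # for gradients with respect to the input embeddings.
        instances = self.predictor.json_to_labeled_instances(inputs)
        results = {}
        for idx, instance in enumerate(instances):
            grads, _ = self.predictor.get_gradients([instance])
            for key, grad in grads.items():
                # grad[0] has shape (num_tokens, embedding_dim): reduce it to
                # one score per token and normalize so the scores sum to 1.
                token_scores = numpy.linalg.norm(grad[0], axis=1)
                total = float(numpy.sum(token_scores)) or 1.0
                grads[key] = [float(s) / total for s in token_scores]
            results[f"instance_{idx + 1}"] = grads
        return sanitize(results)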

akshitab

If you want local and global interpretability that extracts what your model has learned directly from the model itself, try Leap Labs (https://www.leap-labs.com/). They have a waitlist for their new interpretability tools coming out in the next few weeks, including feature visualization and saliency mapping using hierarchical perturbation.