
I built a LUIS model and I want to enable the active learning feature on it, but I do not want to add the tested utterances manually by reviewing and checking each suggested utterance. All the tutorials I found do this manually, like the following one: https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances

I want to add all the tested utterances to the training data automatically, without review.

Is there a method to do that?

Taqwa sleem
  • There is no such method for [LUIS](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c08); you can check this [API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c08) list. – Md Farid Uddin Kiron Mar 24 '20 at 08:58
  • @MdFaridUddinKiron Thank you. Is there evidence for that? – Taqwa sleem Mar 24 '20 at 09:21
  • @Taqwasleem - Are you saying you want all unlabeled utterances to be added to the intents with the highest scores? Can you explain what you hope to accomplish by doing this? (Since there are multiple people in this thread, you will need to @ mention me if you want me to see your reply.) – Kyle Delaney Mar 24 '20 at 19:18
  • @KyleDelaney I too have thought about this, though there is inherent risk in the process. Most of the time I'm just accepting all of the suggestions, so I could see how there would be benefit to automating the process. That said, Taqwasleem I often run into conflict items that could mess up my intents if incorrectly trained, so on the whole I don't think I'd recommend this. – billoverton Mar 24 '20 at 20:40
  • @KyleDelaney - Thanks for your response, yes, you are right, it seems strange to do something like that, especially if the bot was available for everyone, I want to do this for academic purposes, it does not necessarily mean that I want to adopt it for a long time. – Taqwa sleem Mar 25 '20 at 08:13

2 Answers


Yes, you can do it using the REST APIs.

  • First, get the app's intents programmatically, from here.
  • Then change the model programmatically, from here.

PS: You may need to serialize and deserialize JSON objects; check this library, which can be downloaded from NuGet.
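The two calls above can be sketched with the LUIS v2.0 authoring REST API. This is a minimal sketch, not the answer's own code: the region (`westus`), the placeholder key, and the `.../intents` and `.../example` routes are my assumptions about the authoring API, so verify them against the API reference linked above.

```python
import json
import urllib.request

# Placeholders: supply your own authoring key, region, app id, and version.
KEY = "<authoring-key>"
BASE = "https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps"

def intents_url(app_id, version):
    # Route assumed from the v2.0 authoring API: lists an app version's intents.
    return f"{BASE}/{app_id}/versions/{version}/intents"

def example_url(app_id, version):
    # Route assumed from the v2.0 authoring API: adds one labeled example.
    return f"{BASE}/{app_id}/versions/{version}/example"

def get_intents(app_id, version):
    """Fetch the app's intents programmatically (step 1 above)."""
    req = urllib.request.Request(
        intents_url(app_id, version),
        headers={"Ocp-Apim-Subscription-Key": KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # expected: list of {"id": ..., "name": ...}

def add_example(app_id, version, text, intent_name):
    """Change the model programmatically (step 2 above) by posting an example."""
    body = json.dumps({"text": text, "intentName": intent_name}).encode()
    req = urllib.request.Request(
        example_url(app_id, version), data=body, method="POST",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Using the standard-library `urllib` and `json` here avoids the NuGet dependency the answer mentions; in C# you would use Newtonsoft.Json for the same (de)serialization step.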


First, consider what you're doing. If you tell LUIS to add all utterances to the intents that LUIS already predicted for them, then your intention must be for LUIS to continue predicting the same intents it already had been predicting. Even though adding new labeled utterances will surely change confidence scores a bit, using a script to automatically label unlabeled utterances isn't very different from doing nothing at all.

Even if you do want to do something about the utterances, you might consider just clearing your logs to get rid of them, which is apparently what the versions - Delete unlabelled utterance API does because unlabeled utterances seem to be drawn from your logs. On the other hand, you might as well just not log anything to begin with.
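If you go the log-clearing route, the call could look like the sketch below. The route (`.../versions/{version}/suggest` with the utterance text as the request body) is my reading of the versions - Delete unlabelled utterance operation, not something confirmed in this thread, so check it against the API reference before relying on it.

```python
import json
import urllib.request

# Placeholders: supply your own authoring key, region, app id, and version.
KEY = "<authoring-key>"
BASE = "https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps"

def delete_unlabelled_url(app_id, version):
    # Assumed route for the "Delete unlabelled utterance" operation.
    return f"{BASE}/{app_id}/versions/{version}/suggest"

def delete_unlabelled(app_id, version, utterance):
    """Remove one unlabeled utterance from the app's suggestions/logs."""
    req = urllib.request.Request(
        delete_unlabelled_url(app_id, version),
        data=json.dumps(utterance).encode(), method="DELETE",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```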

If you really want to automate the process of adding utterances to their predicted intents, you'll have to download the logs and then add the utterances from the logs as example utterances. You could use the example utterances - Review labeled examples API to see which utterances are already labeled, and use that to determine which utterances from the logs are unlabeled, but you don't have to: you can simply add all the utterances from the logs without worrying about whether or not they're already labeled.

You could do something like this:

  1. apps - Download application query logs
  2. example utterances - Batch add labels
  3. train - Train application version
  4. apps - Publish application
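The four steps above can be strung together roughly as follows. This is a hedged sketch of the pipeline, not a tested implementation: the endpoint routes, the query-log CSV column names (`Query`, `Response`), and the shape of the `Response` JSON (`topScoringIntent`) are assumptions based on the v2.0 authoring API, and the key, region, app id, and version are placeholders.

```python
import csv
import io
import json
import urllib.request

# Placeholders: supply your own authoring key, region, app id, and version.
KEY = "<authoring-key>"
BASE = "https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps"
APP_ID = "<app-id>"
VERSION = "0.1"

def _request(url, method="GET", body=None):
    """Minimal helper for the authoring API calls."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        url, data=data, method=method,
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def parse_query_logs(csv_text):
    """Turn the query-log CSV into (utterance, predicted_intent) pairs.
    Column names and Response shape are assumptions about the log format."""
    pairs = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        response = json.loads(row["Response"])
        pairs.append((row["Query"], response["topScoringIntent"]["intent"]))
    return pairs

def batch_label_payload(pairs):
    """Build the body for the Batch add labels operation."""
    return [{"text": text, "intentName": intent} for text, intent in pairs]

def relabel_from_logs():
    # 1. apps - Download application query logs
    logs = _request(f"{BASE}/{APP_ID}/querylogs")
    # 2. example utterances - Batch add labels
    payload = batch_label_payload(parse_query_logs(logs))
    _request(f"{BASE}/{APP_ID}/versions/{VERSION}/examples", "POST", payload)
    # 3. train - Train application version
    _request(f"{BASE}/{APP_ID}/versions/{VERSION}/train", "POST", {})
    # 4. apps - Publish application
    _request(f"{BASE}/{APP_ID}/publish", "POST",
             {"versionId": VERSION, "isStaging": False})
```

As the answer warns, running this blindly mostly re-teaches LUIS what it already predicts, so treat it as an academic exercise rather than a production practice.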
Kyle Delaney
  • I'm trying to automate removing numeric inputs (e.g. line #, PO # for an order). From this answer I take it that I could download the logs, use a regex to find the utterances in question (in my case utterances with no alpha characters), and then use the Delete unlabelled utterance API to remove them? Since I don't care if they are labeled or not in this process, I can skip the review labeled examples step? – billoverton Jun 08 '20 at 14:42
  • @billoverton - I'm not sure what you mean when you say "removing" but I'm guessing you want to remove certain utterances from your logs. Yes you can download the logs and then use Delete unlabelled utterance. I explained that you could already skip Review labeled examples in any case, but in your case you can skip Batch add labels if that's what you meant to ask. – Kyle Delaney Jun 08 '20 at 17:55