
I would like to build a model that takes a sentence in the imperative form and outputs a new sentence in an interrogative form with the same meaning (both sentences are commands). I have seen the following question and have done some research into what kinds of models could be used, but I am stumped. Any advice on where to go from here would be very welcome.

Convert interrogative sentence to imperative sentence

Example data:

I have several imperative sentences with their interrogative counterparts.

    Imperative: Make sure you know what your own assets are and operate them accordingly.
    Interrogative 1: Do you know what your own assets are and can you operate them accordingly?
    Interrogative 2: Do you know what your own assets are and how to operate them accordingly?

    Imperative: Hold your hands in position.
    Interrogative 1: Can you hold your hands in position?
    Interrogative 2: Could you hold your hands in position?

I would prefer to do this with a machine learning approach because I have so many sentences.

The end goal is to be able to input an imperative and have a random interrogative with the same meaning output.
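For the simpler example pairs above, even a template-based baseline would produce this input/output behavior. The sketch below is my own assumption for illustration (it only handles plain imperatives like "Hold your hands in position." and would fail on compound sentences like the first example pair):

```python
import random

# A set of modal prefixes to sample from, so the same imperative can
# yield different interrogatives (matching the "random interrogative" goal).
MODAL_PREFIXES = ["Can you", "Could you", "Would you"]

def imperative_to_interrogative(sentence, rng=random):
    """Naive template transform: lowercase the first word, prepend a
    random modal phrase, and swap the final period for a question mark."""
    body = sentence.strip().rstrip(".")
    body = body[0].lower() + body[1:]
    return f"{rng.choice(MODAL_PREFIXES)} {body}?"

print(imperative_to_interrogative("Hold your hands in position."))
```

A learned model would need to go beyond this for sentences whose internal structure changes (e.g. "Make sure you know..." becoming "Do you know..."), which is where the grammar extraction below comes in.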

What I have done

I have created a rule-based system that can classify imperatives with 87% accuracy using NLTK's POS tagging and chunking. I have also been able to extract the grammar from sentences using NLTK's context-free grammar functions. I have done some research on neural language models and LSTMs, but these seem to expect a paragraph or more of text as training input. I want to train on single sentences with clear output possibilities.

Final question

Is there an algorithm I can use to learn the grammatical differences between an imperative and its interrogative counterparts, so that I can simply input an imperative and get an interrogative in return? Is there another approach I should look into?

Aryana
  • You can use a sequence to sequence model to train something like this. It will need a large volume of training data, but each individual training instance can be a single sentence. Alternately you can use a dependency parser and rules to transform sentences. It would be tricky though. – polm23 Oct 07 '20 at 03:55
  • Thank you. I have been researching seq2seq models since I posted this. I don't have a lot of data, which is a problem. – Aryana Oct 07 '20 at 09:20
  • Ah, in that case it'll be tricky. Maybe you should look at rule-based factoid question generation, specifically using a dependency parse to manipulate the sentence structure. – polm23 Oct 08 '20 at 04:11

0 Answers