
How likely is it that training a neural network (e.g. a simple feedforward/backprop multilayer perceptron) to solve multiple-choice (text-based) questions would succeed? And if the likelihood is low, what would be smarter ways to approach (or not approach) this problem?

Here's more information on the multiple-choice exam structure:

  • the question is 5 lines of text
  • 1 of 5 answers (1-2 lines of text each) is correct

also, some more assumptions:

  • results/feedback are displayed immediately
  • the training data consists of over 5,000 questions
1 Answer


In my opinion, this problem is extremely difficult to solve. Essentially, you are trying to teach a neural network to understand natural language, and despite many attempts at that task, there has been no significant success yet.

It may be possible (though still unlikely) only if the exam questions are very simple, highly specialized, and share some common structure.

Also, a sample of 5,000 questions seems rather small for this task.

Qumeric
  • Thanks for the answer. I wasn't referring to learning natural language. Let me be more specific: there are some 'meta' rules that apply to many multiple-choice tests, e.g. 'the longest answer tends to be right'. In the example given, the structure is also quite static and common across questions, though the questions themselves are not 'very simple'. The question is: could a neural network approximate these 'meta' rules or give more insight into them, even though they may be fuzzy and not apply to every question in the training data? – user2305193 Jun 26 '16 at 10:08
  • It's probably possible to reach a "better than random" level, but no one can tell you much more than that; it depends heavily on the sample. The general answer, though, is "yes, it could". – Qumeric Jun 26 '16 at 10:15
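
Regarding the 'meta' rules discussed in the comments: if the goal is only to exploit shallow cues (answer length, lexical overlap with the question, option position) rather than genuine language understanding, a feature-based model is probably a more realistic starting point than feeding raw text into an MLP. Below is a minimal, hypothetical sketch; the data format, the specific features, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not something taken from the original question.

```python
# Hypothetical sketch: score answer options from shallow "meta" features
# instead of modelling language understanding. Feature set and data layout
# are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def option_features(question, option, option_index, all_options):
    q_words = set(question.lower().split())
    o_words = set(option.lower().split())
    lengths = [len(o) for o in all_options]
    return [
        len(option),                                    # absolute answer length
        len(option) / max(lengths),                     # relative length ("longest answer" cue)
        len(q_words & o_words) / max(len(o_words), 1),  # lexical overlap with the question
        option_index,                                   # position of the option (A..E)
    ]

def featurize(dataset):
    X, y = [], []
    for item in dataset:
        for i, opt in enumerate(item["options"]):
            X.append(option_features(item["question"], opt, i, item["options"]))
            y.append(1 if i == item["correct"] else 0)
    return np.array(X), np.array(y)

# Toy placeholder data; a real run would use the ~5,000 exam questions.
train = [
    {"question": "Which planet is known as the red planet?",
     "options": ["Venus", "Mars, the fourth planet from the sun",
                 "Jupiter", "Mercury", "Pluto"],
     "correct": 1},
]

X, y = featurize(train)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def predict(item):
    # Score every option and return the index of the one the model prefers.
    scores = clf.predict_proba(
        [option_features(item["question"], o, i, item["options"])
         for i, o in enumerate(item["options"])]
    )[:, 1]
    return int(np.argmax(scores))

print(predict(train[0]))
```

Whether such a baseline actually beats the 20% chance level depends entirely on how consistently those cues appear in the real exam data; cross-validating on the ~5,000 questions would show quickly whether there is any signal.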