
Thank you for sharing this fantastic tool with us. Excellent work.

Just a question: why do I get different constituency parsing results from the online demo and the local Python library? I thought both of them were based on the same model.

For example, given the same input sentence,

They quickly ran to the place which is sound came from.

(from a student's composition).

The online demo gave the result: (S (NP (PRP They)) (ADVP (RB quickly)) (VBD ran) (PP (IN to) (NP (NP (DT the) (NN place)) (SBAR (WHNP (WDT which)) (S (VP (VBZ is) (NP ***(NN sound)***)))))) (VP (VBD came) (PP (IN from))) (. .))

but the Python library version gave: (S (NP (PRP They)) (ADVP (RB quickly)) (VBD ran) (PP (IN to) (NP (NP (DT the) (NN place)) (SBAR (WHNP (WDT which)) (S (VP (VBZ is) (NP ***(JJ sound)***)))))) (VP (VBD came) (PP (IN from))) (. .))

It seems the online demo gave a better result.
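For reference, this is roughly how I'm calling the parser locally (a minimal sketch; the model archive URL below is the publicly hosted constituency parser and may not be the same snapshot the demo is currently serving):

```python
from allennlp.predictors.predictor import Predictor
import allennlp_models.structured_prediction  # registers the constituency parser

# Load a pretrained constituency parser. The archive URL is an assumption and
# may differ from whatever model the online demo is running.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/elmo-constituency-parser-2020.02.10.tar.gz"
)

output = predictor.predict(
    sentence="They quickly ran to the place which is sound came from."
)
print(output["trees"])  # bracketed parse string (output key assumed)
```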

DuDa

1 Answer


The demo and the library sometimes go out of sync, because we update the library more often than the demo. Right now I'm in the middle of an effort to update all the demo usage information to use the new AllenNLP 2.0 version.

In your example the demo is indeed better, but your example is ungrammatical, so I would not put too much stock in the results anyway. Essentially, this is an out-of-domain sentence. If I fix the sentence ("They quickly ran to the place which the sound came from."), the parse is correct.
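If you want to check whether your local install is ahead of (or behind) what the demo runs, a quick sketch for printing the installed versions, assuming the PyPI package names allennlp and allennlp-models:

```python
from importlib.metadata import version

# Print the locally installed versions; a mismatch with what the demo is
# running is the usual reason for diverging predictions.
print("allennlp:", version("allennlp"))
print("allennlp-models:", version("allennlp-models"))
```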

Dirk Groeneveld