What is the best way to feed ANTLR a huge number of tokens? Say we have a list of 100,000 English verbs — how could we add them to our grammar? We could of course include a huge grammar file like verbs.g, but maybe there is a more elegant way, e.g. by modifying a .tokens file?
grammar verbs;
VERBS:
'eat' |
'drink' |
'sit' |
...
...
| 'sleep'
;
Also, should these be lexer rules or parser rules, i.e. VERBS: or verbs:? Probably VERBS:.
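For context, a common alternative to listing every verb in the grammar is to lex words with a single generic rule (something like WORD : [a-z]+ ;) and check each token's text against a word set loaded at runtime, from a parser action or listener. This is a hedged sketch of just the lookup part, independent of the ANTLR runtime; the file name verbs.txt and the class name VerbLookup are illustrative assumptions, not ANTLR API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class VerbLookup {
    private final Set<String> verbs = new HashSet<>();

    // Load one verb per line from a plain-text word list
    // (hypothetical format: e.g. a verbs.txt shipped next to the grammar).
    public VerbLookup(Path wordList) throws IOException {
        for (String line : Files.readAllLines(wordList)) {
            String w = line.trim();
            if (!w.isEmpty()) {
                verbs.add(w);
            }
        }
    }

    // Would be called from a parser action or listener on each
    // generic WORD token to decide whether it is a known verb.
    public boolean isVerb(String tokenText) {
        return verbs.contains(tokenText);
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: build a tiny word list in a temp file.
        Path tmp = Files.createTempFile("verbs", ".txt");
        Files.write(tmp, List.of("eat", "drink", "sit", "sleep"));

        VerbLookup lookup = new VerbLookup(tmp);
        System.out.println(lookup.isVerb("eat"));
        System.out.println(lookup.isVerb("table"));
    }
}
```

The upside of this approach is that the word list stays a plain data file you can regenerate or swap without touching (or regenerating) the grammar; a HashSet lookup is O(1) per token, so 100,000 entries cost only memory, not parse time.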