
I am trying to parse a large file (about 500 MB) with ANTLR4 using C#, but I am getting an OutOfMemoryException.

My current code is shown below:

var path = GetInput(Path.Combine(DatFilePath)); // Build the large file
var inputStream = new StreamReader(path);
var input = new UnbufferedCharStream(inputStream);
GroupGrammarLexer lexer = new GroupGrammarLexer(input);
lexer.TokenFactory = new CommonTokenFactory(true);
var tokens = new UnbufferedTokenStream(lexer);
GroupGrammarParser parser = new GroupGrammarParser(tokens);
parser.BuildParseTree = false;
GroupGrammarParser.FileContext tree = parser.file(); // here I get OutOfMemoryException

My grammar:

grammar GroupGrammar;

/*
 * Parser Rules
 */

 file: row+;
 row: group | comment | not;
 group: GROUP NAME ATTACHTO NAME; 
 comment: '**' .*? NL;
 not: .*? NL;


GROUP   : '*'? G R O U P ;
ATTACHTO : '*'? A T T A C H T O ;
W : ('W'|'w') ;
E : ('E'|'e') ;
L : ('L'|'l') ;
G : ('G'|'g') ;
R : ('R'|'r') ;
O : ('O'|'o') ;
U : ('U'|'u') ;
P : ('P'|'p') ;
A : ('A'|'a') ;
T : ('T'|'t') ;
C : ('C'|'c') ;
H : ('H'|'h') ;
NAME    : '\''[a-zA-Z0-9_]+'\'' ;
WS: (' ') -> skip;
NL:   '\r'? '\n';

I have followed all the advice I could find about parsing large files, but I still get the OutOfMemoryException. When I test this code with a smaller file, it works fine.

Is there something that I'm missing?

I appreciate any help.

Best Regards

  • Is it possible to break the big file into smaller ones and parse each one as a separate file into its own tree? Hopefully this won't jeopardize your business logic. – smwikipedia Aug 02 '17 at 00:35
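
A minimal sketch of that splitting idea (assumptions: rows never span a chunk boundary, and the 50-million-character limit and the parseChunk helper are illustrative, not part of the original code):

using System;
using System.IO;
using System.Text;
using Antlr4.Runtime;

// Illustrative only: split the input on line boundaries into chunks of
// roughly 50 million characters and parse each chunk independently.
const int chunkLimit = 50 * 1000 * 1000;

Action<string> parseChunk = text =>
{
    var lexer = new GroupGrammarLexer(new AntlrInputStream(text));
    var parser = new GroupGrammarParser(new CommonTokenStream(lexer));
    parser.BuildParseTree = false;
    parser.file();
};

using (var reader = new StreamReader(path))
{
    var buffer = new StringBuilder();
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        buffer.AppendLine(line);
        if (buffer.Length >= chunkLimit)
        {
            parseChunk(buffer.ToString());
            buffer.Clear();
        }
    }
    if (buffer.Length > 0)
        parseChunk(buffer.ToString()); // parse the final partial chunk
}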

1 Answer


Try running the tokenization and parsing in a thread with an increased stack size:

Thread thread = new Thread(delegate ()
{
    // Tokenize and parse here
},
500000);
thread.Start();
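
For reference, a sketch of how the question's pipeline could be moved into such a thread. The second argument to the Thread constructor is the maximum stack size in bytes; the 16 MB value below is only an illustration, not taken from the answer:

using System.IO;
using System.Threading;
using Antlr4.Runtime;

// Sketch: run the lexing and parsing from the question on a worker
// thread with a 16 MB stack ("path" is the file path built in the
// question's code).
var thread = new Thread(() =>
{
    using (var reader = new StreamReader(path))
    {
        var input = new UnbufferedCharStream(reader);
        var lexer = new GroupGrammarLexer(input);
        lexer.TokenFactory = new CommonTokenFactory(true);
        var tokens = new UnbufferedTokenStream(lexer);
        var parser = new GroupGrammarParser(tokens);
        parser.BuildParseTree = false;
        parser.file();
    }
}, 16 * 1024 * 1024);
thread.Start();
thread.Join(); // block until parsing has finished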
– Ivan Kochurkin