I'm working on a compiler and have come across some text that suggests two different ways of implementing symbol tables. In the first, there is a separate symbol table for each level of nesting, and the tables are kept on a stack. In the second, there are only two tables: a primary table that holds all entries, and an auxiliary table that records the changes made to the primary one, so the compiler knows which entries to remove once it has made it past a particular block.

What are the strengths and weaknesses of these two implementations? I'm sure the first is faster at discarding the table for an individual block, but that comes at some overhead I'm unclear about. The second is clearly expensive when a large block with many declarations has to be removed, but it allows constant-time access to variables.
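To make the trade-off concrete, here is a minimal sketch of both schemes (my own illustration, not from the text you mention). Names like `ScopedTable` and `LoggedTable` are made up, and `int` stands in for whatever attribute record a real compiler would store. In the second scheme I assume shadowing is handled by keeping a small stack of bindings per name, which is one common way to realize the "primary table plus change log" idea:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Scheme 1: one hash table per scope, kept on a stack.
// Leaving a block is cheap (drop the top table), but a lookup may
// have to probe every open scope from innermost to outermost.
struct ScopedTable {
    std::vector<std::unordered_map<std::string, int>> scopes;
    void enter() { scopes.push_back({}); }
    void leave() { scopes.pop_back(); }                      // discard whole block at once
    void declare(const std::string& n, int info) { scopes.back()[n] = info; }
    const int* lookup(const std::string& n) const {
        for (auto it = scopes.rbegin(); it != scopes.rend(); ++it) {
            auto f = it->find(n);                            // may walk all nesting levels
            if (f != it->end()) return &f->second;
        }
        return nullptr;
    }
};

// Scheme 2: one global table plus an undo log per open scope.
// A lookup is a single probe, but leaving a block costs time
// proportional to the number of declarations made in it.
struct LoggedTable {
    std::unordered_map<std::string, std::vector<int>> table; // name -> stack of bindings
    std::vector<std::vector<std::string>> log;               // names declared per open scope
    void enter() { log.push_back({}); }
    void declare(const std::string& n, int info) {
        table[n].push_back(info);                            // shadow any outer binding
        log.back().push_back(n);
    }
    void leave() {
        for (const auto& n : log.back()) {                   // undo this block's declarations
            auto& bindings = table[n];
            bindings.pop_back();
            if (bindings.empty()) table.erase(n);
        }
        log.pop_back();
    }
    const int* lookup(const std::string& n) const {          // one probe, no scope walk
        auto f = table.find(n);
        return f == table.end() ? nullptr : &f->second.back();
    }
};
```

The overhead you sense in the first scheme is exactly that scope walk in `lookup`: in deeply nested code, a reference to an outer variable touches one table per intervening level, whereas the second scheme pays that cost once, at block exit, instead of on every lookup.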
- Compare the complexity of implementing the second with the push and pop that is almost all you need for the first. I've been in compilers a long time, and I've never heard of anybody advocating the second alternative. I wouldn't spend any more time on it if I were you. If you want yet another way of doing it, see the solution for Cobol in Knuth volume 1: you will have to adapt it a little, but the principle is clear. – user207421 Jan 02 '13 at 01:21
- We are developing a compiler for a mixed-level language. First we implemented a simple symbol table based on a balanced-tree map, and it suits our compiler perfectly. We explicitly store the scope of the declaration. We had no performance problem with that, although we did not compile programs with tens or hundreds of thousands of lines of code. – Boldizsár Németh Sep 09 '13 at 15:35
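The comment above gives few details, but one plausible reading is a single `std::map` (a balanced tree in practice) whose bindings each carry the depth of the scope that declared them. This is only a guess at their design; `TreeTable` and the `int` payload are my own placeholders, and the linear prune on scope exit is a deliberate simplification:

```cpp
#include <iterator>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One balanced-tree map for the whole program; each binding records
// the scope depth at which it was declared, so shadowed bindings just
// pile up per name and leaving a scope prunes bindings at that depth.
struct TreeTable {
    std::map<std::string, std::vector<std::pair<int, int>>> table; // name -> (depth, info)*
    int depth = 0;
    void enter() { ++depth; }
    void declare(const std::string& n, int info) {
        table[n].push_back({depth, info});
    }
    void leave() {
        // Scans every name, which is fine for modest programs; a real
        // compiler would more likely keep a per-scope list of names.
        for (auto it = table.begin(); it != table.end();) {
            auto& bindings = it->second;
            while (!bindings.empty() && bindings.back().first == depth)
                bindings.pop_back();
            it = bindings.empty() ? table.erase(it) : std::next(it);
        }
        --depth;
    }
    const int* lookup(const std::string& n) const {
        auto f = table.find(n);
        if (f == table.end() || f->second.empty()) return nullptr;
        return &f->second.back().second;                   // innermost binding wins
    }
};
```

Lookups here are O(log n) rather than the hash table's expected O(1), which supports the commenter's point: for programs short of hundreds of thousands of lines, the asymptotic differences between these designs rarely show up in practice.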