I'm writing Python code that gradually builds a large dictionary (600 million entries) while constantly reading from it. I periodically write the dict to a file with cPickle so that, if the run is interrupted, I can load the file and pick up where I left off.
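Roughly, the save/resume loop looks like this (a simplified sketch: `generate_items`, the filename, and the checkpoint interval are placeholders for my real code):

```python
import cPickle as pickle
import os

CHECKPOINT = "table.pkl"  # placeholder filename


def generate_items():
    # Placeholder for my real producer; yields (key, value) pairs.
    for i in xrange(600 * 1000 * 1000):
        yield i, i * 2.0


# Resume from the last checkpoint if one exists.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        table = pickle.load(f)
else:
    table = {}

for n, (key, value) in enumerate(generate_items()):
    table[key] = value
    # ... constant reads from `table` happen here ...
    if n and n % 10000000 == 0:  # checkpoint every 10M items (interval is arbitrary)
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(table, f, pickle.HIGHEST_PROTOCOL)
```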
By the time it's done, the dictionary will take up roughly 40 GB, but my machine has only 16 GB of RAM. What behavior should I expect: a MemoryError, a frozen computer, or just extremely slow access to the dictionary?
I'm also trying to get an alternate implementation working that uses a NumPy array instead of a dictionary; it should take only about 5 GB of memory but roughly three times as long to run. Am I right that keeping every memory access within the 16 GB of RAM (i.e., no swapping to disk) will make the NumPy version actually run faster, despite the extra work?
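For context, the NumPy alternative is essentially a preallocated dense array (a simplified sketch, assuming my keys can be mapped to dense integer indices; 600 million float64 values at 8 bytes each is where the ~5 GB estimate comes from):

```python
import os

import numpy as np

CHECKPOINT = "table.npy"  # placeholder filename
N = 600 * 1000 * 1000     # 600M slots * 8 bytes (float64) ~= 4.8 GB

# Resume from the last checkpoint if one exists.
if os.path.exists(CHECKPOINT):
    values = np.load(CHECKPOINT)
else:
    values = np.zeros(N, dtype=np.float64)

# Writes become index assignments instead of dict inserts:
#     values[key] = value
# and np.save(CHECKPOINT, values) replaces the cPickle dump.
```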