Update: this out-of-memory error does not happen when running the code from the Jupyter or Spyder UI. Any ideas?
I am loading CSV files of about 250 MB each. The first file loads without an "out of memory" error, but the error is thrown when loading the next CSV file.
When I get this error, I stop the program and re-run it; interestingly, the first file then loads fine again, but the error comes back as soon as I load the next CSV file.
My PC has 8 GB of free RAM, so I don't see where the memory cap is coming from.
The code I am using to load the CSV file is:
df = pd.read_csv(csvfile, index_col=False, sep=',', low_memory=False)
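For context, that call sits inside a loop over the CSV files, roughly like this (the directory path and loop structure here are simplified placeholders, not my exact script):

import glob
import pandas as pd

# Placeholder sketch of the workflow: each ~250 MB CSV is read into its
# own DataFrame inside a loop, one file after another.
for csvfile in glob.glob(r"C:\data\*.csv"):
    df = pd.read_csv(csvfile, index_col=False, sep=',', low_memory=False)
    # ... formatting / processing of df happens here ...

The first iteration always succeeds; the error appears on the second read.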
Error traceback:
Traceback (most recent call last):
File "C:/Users/jvivas/Dropbox/Private/Personal/Github/CSVFormattingTools/XOMCSVFormattingTool_version2.py", line 34, in <module>
df = pd.read_csv(csvfile, index_col=False, sep=',', low_memory=False)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\pandas\io\parsers.py", line 325, in _read
return parser.read()
File "C:\Program Files (x86)\Python35-32\lib\site-packages\pandas\io\parsers.py", line 815, in read
ret = self._engine.read(nrows)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\pandas\io\parsers.py", line 1314, in read
data = self._reader.read(nrows)
File "pandas\parser.pyx", line 808, in pandas.parser.TextReader.read (pandas\parser.c:8643)
File "pandas\parser.pyx", line 896, in pandas.parser.TextReader._read_rows (pandas\parser.c:9772)
File "pandas\parser.pyx", line 1865, in pandas.parser.raise_parser_error (pandas\parser.c:23295)
pandas.io.common.CParserError: Error tokenizing data. C error: out of memory
Any help is appreciated. Thanks!