I'm getting the following error:
"ParserError: Error tokenizing data. C error: out of memory"
when I try to read a large CSV file (5 GB). I'm selecting only the columns I need and setting the relevant parameters, but it still doesn't work. I've also tried reading the file in chunks with the chunksize parameter (see the sketch below the code).
import pandas as pd

df = pd.read_csv('file.csv', encoding='ISO-8859-1', usecols=names_columns, low_memory=False, nrows=10000)
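
For reference, this is roughly what the chunked read looked like (the chunk size of 100,000 is just an example value; names_columns is my list of columns):

# Read the file in pieces, keeping only the columns I need,
# then concatenate the pieces into one dataframe.
chunks = pd.read_csv(
    'file.csv',
    encoding='ISO-8859-1',
    usecols=names_columns,
    chunksize=100_000,
)
df = pd.concat(chunks, ignore_index=True)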
The strange thing is that when I set the parameter to nrows=1000, it works.
I've read files with many more rows than that and they worked perfectly, but this one keeps giving this error. Does anyone have any suggestions?