Suppose I have a large amount of data that I am loading into a dataframe in chunks. For example: I have a table that is more than 40 GB, and selecting 3 columns from it comes to maybe around 2-3 GB, with roughly 10 million rows.
import pandas as pd

c = pd.read_sql("select a,b,c from table;", con=db, chunksize=10**2)
b = c['a']
Since it is reading the table chunk by chunk, does that mean it does not load the whole 3 GB into memory at once, and instead operates on only 10**2 rows (one chunk) at a time before automatically moving on to the next chunk?
If not, how can I make it behave that way?
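To make it concrete, here is a rough sketch of the loop I have in mind, assuming that read_sql with chunksize hands back an iterator of DataFrames rather than one big frame (db is the same connection object as above, and the final concat is only there for illustration):

import pandas as pd

# db is my existing database connection, same as in the snippet above.
# With chunksize set, read_sql should return an iterator that yields
# one DataFrame per chunk, each holding at most `chunksize` rows.
chunks = pd.read_sql("select a,b,c from table;", con=db, chunksize=10**2)

parts = []
for chunk in chunks:           # each chunk is a DataFrame of up to 10**2 rows
    parts.append(chunk['a'])   # work on one chunk at a time, then move on

b = pd.concat(parts)           # only do this if the single column fits in memory

Is this the right pattern, or does pandas already handle the chunk-by-chunk memory usage for me behind the scenes?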