13

I'm trying to read a CSV file that is about 100 GB in size,
and I want to see a progress bar while the file is being read.

file = pd.read_csv("../code/csv/file.csv") 

like =====> 30%
Is there a way to see a progress bar when reading a file with read_csv, or when reading other file types?

  • 3
    Depends how you're reading the file. If you have something you're iterating through, [`tqdm`](https://github.com/tqdm/tqdm) or [`progressbar2`](https://pypi.org/project/progressbar2/) can handle that, but for a single atomic operation it's usually difficult to get a progress bar (because you can't actually get inside the operation to see how far you are at any given time). There are some workarounds for HTTP requests in tqdm, I think, but I don't think it exists for pandas. – Green Cloak Guy Jul 24 '19 at 01:23
  • 3
    I will just recommend using chunks (see the sketch below) – BENY Jul 24 '19 at 01:25
  • 1
    Possible duplicate of [How to resolve memory issue of pandas while reading big csv files](https://stackoverflow.com/questions/39398283/how-to-resolve-memory-issue-of-pandas-while-reading-big-csv-files) – Billal Begueradj Jul 24 '19 at 13:36
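
A minimal sketch of the chunked approach the comments suggest, assuming the path from the question and an arbitrary chunk size of 10**5 rows; without a known total, tqdm shows a running chunk count and rate rather than a percentage:

import pandas as pd
from tqdm import tqdm

chunks = []
# read_csv with chunksize returns an iterator of DataFrames,
# so tqdm can wrap it and tick once per chunk
for chunk in tqdm(pd.read_csv("../code/csv/file.csv", chunksize=10**5), desc="chunks read"):
    chunks.append(chunk)

# reassemble the full DataFrame once all chunks are in
file = pd.concat(chunks, ignore_index=True)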

2 Answers

4

The idea is to read a few lines from the large file to estimate the average line size, and then to iterate over the file in chunks.

import os
import sys
import pandas as pd
from tqdm import tqdm


# BASE_PATH should point at the directory containing the file
INPUT_FILENAME = f"{BASE_PATH}betas_R_SWAN_offset_100.csv.gz"
LINES_TO_READ_FOR_ESTIMATION = 20
CHUNK_SIZE_PER_ITERATION = 10**5


# read a small sample to estimate how many bytes a row takes on average
temp = pd.read_csv(INPUT_FILENAME,
                   nrows=LINES_TO_READ_FOR_ESTIMATION)
N = len(temp.to_csv(index=False))
df = [temp[:0]]

# estimate the total number of chunks from the file size
# (for a gzip-compressed file this is only a rough estimate,
#  since os.path.getsize returns the compressed size)
t = int(os.path.getsize(INPUT_FILENAME) / N * LINES_TO_READ_FOR_ESTIMATION / CHUNK_SIZE_PER_ITERATION) + 1


with tqdm(total=t, file=sys.stdout) as pbar:
    for i, chunk in enumerate(pd.read_csv(INPUT_FILENAME, chunksize=CHUNK_SIZE_PER_ITERATION, low_memory=False)):
        df.append(chunk)
        pbar.set_description('Importing: %d' % (1 + i))
        pbar.update(1)

# DataFrame.append was removed in pandas 2.0, so concatenate the chunks instead
data = pd.concat(df)
del df

Ofer Rahat

2

A fancier output using the typer module, which I have tested in a Jupyter Notebook with a large delimited text file of 618k rows.


from pathlib import Path
import pandas as pd
from tqdm.auto import tqdm
import typer

txt = Path("<path-to-massive-delimited-txt-file>").resolve()

# count the number of rows quickly (includes the header row)
length = sum(1 for row in open(txt, 'r'))

# define a chunksize
chunksize = 5000

# collect the chunks in a list and concatenate them at the end
chunks = []

# fancy logging with typer
typer.secho(f"Reading file: {txt}", fg="red", bold=True)
typer.secho(f"total rows: {length}", fg="green", bold=True)

# tqdm context
with tqdm(total=length, desc="chunks read: ") as bar:
    # read in chunks; low_memory=False lets pandas infer dtypes from a whole chunk at once
    for i, chunk in enumerate(pd.read_csv(txt, chunksize=chunksize, low_memory=False)):

        # print the chunk number
        print(i)

        # keep the chunk
        chunks.append(chunk)

        # update the tqdm progress bar (the last chunk may be smaller than chunksize)
        bar.update(len(chunk))

        # 6 chunks are enough to test
        if i == 5:
            break

# concatenate the chunks into a single dataframe
df = pd.concat(chunks, ignore_index=True)

# finally inform with a friendly message
typer.secho("end of reading chunks...", fg=typer.colors.BRIGHT_RED)
typer.secho(f"Dataframe length: {len(df)}", fg="green", bold=True)
    

[Jupyter Notebook output screenshot]

OzInClouds