13

I have a huge dataset and I am trying to read it line by line. For now, I am reading the dataset using pandas:

df = pd.read_csv("mydata.csv", sep=',', nrows=1)

This call reads only the first line, but how can I read the second line, the third, and so on? (I would like to use pandas.)

EDIT: To make it clearer, I need to read one line at a time because the dataset is 20 GB and I cannot keep it all in memory.

Guido Muscioni

4 Answers

20

One way is to read the file part by part and store each part, for example:

df1 = pd.read_csv("mydata.csv", nrows=10000)

This reads the first 10000 rows into df1. To continue, skip the 10000 rows you already read and store the next 10000 rows in df2:

df2 = pd.read_csv("mydata.csv", skiprows=10000, nrows=10000)
dfn = pd.read_csv("mydata.csv", skiprows=(n-1)*10000, nrows=10000)

Note that skiprows counts physical lines, so from the second chunk onwards the header line is skipped too; pass header=None (and optionally names=df1.columns) if you want consistent column names.

Maybe there is a way to introduce this idea into a for or while loop.
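For completeness, pandas has this loop built in: the chunksize parameter of read_csv returns an iterator of DataFrames. A minimal sketch, using a small in-memory CSV in place of the 20 GB file from the question:

```python
import io
import pandas as pd

# A small in-memory CSV standing in for "mydata.csv" from the question
# (assumption: comma-separated with a header row).
csv_data = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(25)))

total_rows = 0
# chunksize makes read_csv return an iterator of DataFrames,
# so only one chunk is in memory at a time.
for chunk in pd.read_csv(csv_data, chunksize=10):
    total_rows += len(chunk)

print(total_rows)  # 25
```

Unlike the manual skiprows bookkeeping above, the iterator keeps the header handling and column names consistent across chunks automatically.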

Malekai
Davidvs
9

Looking at the pandas documentation, the read_csv function has a parameter:

skiprows

If a list is assigned to this parameter, it will skip the lines indexed by the list:

skiprows = [0, 1]

This skips the first and second lines. Thus a combination of nrows and skiprows allows reading each line in the dataset separately.
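A sketch of that combination, using an in-memory CSV for illustration: to read only the n-th data row while keeping the header, skip data rows 1..n-1 with a range (physical line 0 is the header) and read a single row.

```python
import io
import pandas as pd

csv_data = "a,b\n1,2\n3,4\n5,6\n"  # stand-in for the real file

n = 3  # read only the 3rd data row
# skiprows also accepts a list/range of 0-indexed physical line numbers;
# range(1, n) skips data rows 1..n-1 but keeps line 0 (the header).
df = pd.read_csv(io.StringIO(csv_data), skiprows=range(1, n), nrows=1)
print(df)  # one row: a=5, b=6
```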

Guido Muscioni
1

You are using nrows=1, which means "Number of rows of file to read. Useful for reading pieces of large files".

So you are telling it to read only the first row and stop.

You should just remove that argument to read the whole CSV file into a DataFrame and then go through it line by line.

See the documentation for more details on usage : https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
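A sketch of that approach (assuming the file actually fits in memory, which the question's edit says it does not; an in-memory CSV stands in for the real one):

```python
import io
import pandas as pd

csv_data = io.StringIO("a,b\n1,2\n3,4\n")  # stand-in for the real file

# Without nrows, read_csv loads the entire file into one DataFrame.
df = pd.read_csv(csv_data)

# Then go through it line by line:
rows = [(row.a, row.b) for row in df.itertuples(index=False)]
print(rows)  # [(1, 2), (3, 4)]
```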

Aymen
0

I found using skiprows to be very slow. This approach worked well for me:

import csv
import itertools  # or `import sys` and use range(sys.maxsize)
import pandas as pd

line_number = 8  # the row you want, 0-indexed

# you can wrap this block in a function:
# (filename, line_number) -> row
with open(filename, 'r') as f:
    r = csv.reader(f)
    for i in itertools.count():  # counts 0, 1, 2, ... indefinitely
        if i != line_number:
            next(r)  # skip this row
        else:
            row = next(r)
            row = pd.DataFrame([row])  # or transform it however you like
            break  # or return row, if this is a function

# now you can use `row`!

To make it more robust, make sure that line_number is a non-negative number, and put a try/except StopIteration block around the next(r) calls, so that you can catch the reader reaching the end of the file.
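Putting those robustness suggestions together, one possible shape (read_row is a hypothetical helper name; an in-memory CSV stands in for the real file):

```python
import csv
import io
import itertools
import pandas as pd

def read_row(f, line_number):
    """Return the 0-indexed CSV row `line_number` as a one-row
    DataFrame, or None if the file has fewer rows."""
    r = csv.reader(f)
    try:
        for i in itertools.count():
            if i == line_number:
                return pd.DataFrame([next(r)])
            next(r)  # skip this row
    except StopIteration:
        return None  # reached end of file before line_number

row = read_row(io.StringIO("a,b\n1,2\n3,4\n"), 2)
print(row)  # one row: '3', '4' (csv.reader yields strings)
print(read_row(io.StringIO("a,b\n"), 5))  # None
```

Note that csv.reader yields strings, so you may want to convert types after extracting the row.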

Michele Piccolini