14

I want to split a dataframe into unevenly sized chunks of rows, using a list of row indices.

The code below:

groups = df.groupby((np.arange(len(df.index))/l[1]).astype(int))

only works when every chunk has the same number of rows.

df

a b c  
1 1 1  
2 2 2  
3 3 3  
4 4 4  
5 5 5  
6 6 6  
7 7 7  
8 8 8  

l = [2, 5, 7]

df1  
1 1 1  
2 2 2  

df2  
3 3 3  
4 4 4  
5 5 5  

df3  
6 6 6  
7 7 7  

df4  
8 8 8
anky
Pradeep Tummala

4 Answers

26

You could use a list comprehension, after a small modification to your list, `l`, first.

print(df)

   a  b  c
0  1  1  1
1  2  2  2
2  3  3  3
3  4  4  4
4  5  5  5
5  6  6  6
6  7  7  7
7  8  8  8


l = [2,5,7]
l_mod = [0] + l + [max(l)+1]

list_of_dfs = [df.iloc[l_mod[n]:l_mod[n+1]] for n in range(len(l_mod)-1)]

Output:

list_of_dfs[0]

   a  b  c
0  1  1  1
1  2  2  2

list_of_dfs[1]

   a  b  c
2  3  3  3
3  4  4  4
4  5  5  5

list_of_dfs[2]

   a  b  c
5  6  6  6
6  7  7  7

list_of_dfs[3]

   a  b  c
7  8  8  8
Scott Boston
    Correct me if I'm wrong, but I think the modified list should be: `l_mod = [0] + l + [len(df)]`. Now, in this instance, `max(l)+1` and `len(df)` coincide, but if generalised you might lose rows. And as a second note, it could be worth passing it on `set` to ensure that no duplicate indicies exist (like having `[0]` 2 times). Great solution btw, you got my upvote :) – N1h1l1sT Nov 23 '21 at 14:10
  • @N1h1l1sT Thanks. Yes, I think you are correct for the generalization. You could perhaps also use the original list to filter the dataframe, but I agree with your assumptions here. – Scott Boston Nov 23 '21 at 14:20
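Following the comment thread above, a generalized sketch might look like the following (the frame here is a hypothetical stand-in mirroring the answer's eight-row data):

```python
import numpy as np
import pandas as pd

# hypothetical frame mirroring the answer's: rows 1..8 repeated across a, b, c
df = pd.DataFrame(np.tile(np.arange(1, 9), (3, 1)).T, columns=list('abc'))

l = [2, 5, 7]
l_mod = [0] + l + [len(df)]   # len(df) instead of max(l)+1, per the comment
l_mod = sorted(set(l_mod))    # drop duplicate boundaries, e.g. if 0 appears in l

list_of_dfs = [df.iloc[l_mod[n]:l_mod[n + 1]] for n in range(len(l_mod) - 1)]
```

With this boundary list the final chunk always runs to the end of the frame, even when the last index in `l` is smaller than `len(df)`.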
5

I think this is what you need:

df = pd.DataFrame({'a': np.arange(1, 8),
                   'b': np.arange(1, 8),
                   'c': np.arange(1, 8)})
df.head()
    a   b   c
0   1   1   1
1   2   2   2
2   3   3   3
3   4   4   4
4   5   5   5
5   6   6   6
6   7   7   7

last_check = 0
dfs = []
for ind in [2, 5, 7]:
    dfs.append(df.loc[last_check:ind-1])
    last_check = ind

Although a list comprehension would be more concise, the `last_check` variable is necessary when there is no regular pattern in your list of indices.

dfs[0]

    a   b   c
0   1   1   1
1   2   2   2

dfs[2]

    a   b   c
5   6   6   6
6   7   7   7
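If the frame has rows past the last split index (as the eight-row frame in the question ultimately does), a final slice after the loop collects them; a small sketch under that assumption:

```python
import numpy as np
import pandas as pd

# eight-row frame, matching the question's final 8,8,8 chunk
df = pd.DataFrame({'a': np.arange(1, 9),
                   'b': np.arange(1, 9),
                   'c': np.arange(1, 9)})

last_check = 0
dfs = []
for ind in [2, 5, 7]:
    dfs.append(df.loc[last_check:ind - 1])
    last_check = ind
if last_check < len(df):
    dfs.append(df.loc[last_check:])   # remaining rows after the last index
```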
Mohit Motwani
2

I think this is what you are looking for.

l = [2, 5, 7]
dfs = []
i = 0
for val in l:
    if i == 0:
        temp = df.iloc[:val]           # first chunk: rows before the first index
    else:
        temp = df.iloc[l[i-1]:val]     # later chunks: between consecutive indices
    dfs.append(temp)
    i += 1
# note: rows after the last index (if any) are not collected by this loop

Output:

   a  b  c
0  1  1  1
1  2  2  2
   a  b  c
2  3  3  3
3  4  4  4
4  5  5  5
   a  b  c
5  6  6  6
6  7  7  7

Another Solution:

l = [2, 5, 7]
t = np.arange(l[-1])
l.reverse()
for val in l:
    t[:val] = val                      # label each row with its chunk's end index
temp = pd.DataFrame(t)
temp = pd.concat([df, temp], axis=1)
for u, v in temp.groupby(0):
    print(v)

Output:

   a  b  c  0
0  1  1  1  2
1  2  2  2  2
   a  b  c  0
2  3  3  3  5
3  4  4  4  5
4  5  5  5  5
   a  b  c  0
5  6  6  6  7
6  7  7  7  7
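The helper-column trick above can also be expressed without the extra concat, using `np.searchsorted` to label each row directly (a sketch of an alternative, not taken from the answer itself):

```python
import numpy as np
import pandas as pd

# stand-in frame with rows 1..8 repeated across a, b, c
df = pd.DataFrame(np.tile(np.arange(1, 9), (3, 1)).T, columns=list('abc'))

l = [2, 5, 7]
# the number of split points at or before each row position is its group label
labels = np.searchsorted(l, np.arange(len(df)), side='right')
groups = [g for _, g in df.groupby(labels)]
```

Rows beyond the last split index land in one final group rather than being silently dropped.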
Mohamed Thasin ah
1

You can create an array to use for indexing via NumPy:

import pandas as pd, numpy as np

df = pd.DataFrame(np.arange(24).reshape((8, 3)), columns=list('abc'))

L = [2, 5, 7]
idx = np.cumsum(np.in1d(np.arange(len(df.index)), L))

for _, chunk in df.groupby(idx):
    print(chunk, '\n')

   a  b  c
0  0  1  2
1  3  4  5 

    a   b   c
2   6   7   8
3   9  10  11
4  12  13  14 

    a   b   c
5  15  16  17
6  18  19  20 

    a   b   c
7  21  22  23 

Instead of defining a new variable for each dataframe, you can use a dictionary:

d = dict(tuple(df.groupby(idx)))

print(d[1])  # print second groupby value

    a   b   c
2   6   7   8
3   9  10  11
4  12  13  14
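If each chunk should start its index at zero again, the same dictionary can be built with a comprehension that resets the index as it goes (`np.isin` used here as the newer spelling of `np.in1d`):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(24).reshape((8, 3)), columns=list('abc'))

L = [2, 5, 7]
idx = np.cumsum(np.isin(np.arange(len(df.index)), L))

# each value gets a fresh RangeIndex starting at 0
d = {k: v.reset_index(drop=True) for k, v in df.groupby(idx)}
```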
jpp