
I have a bit of an odd problem that I am trying to solve in pandas. Let's say I have a bunch of objects that can be grouped in several different ways. Here is what our dataframe looks like:

import pandas as pd

df = pd.DataFrame([
    {'obj': 'Ball',    'group1_id': None, 'group2_id': '7' },
    {'obj': 'Balloon', 'group1_id': '92', 'group2_id': '7' },
    {'obj': 'Person',  'group1_id': '14', 'group2_id': '11'},
    {'obj': 'Bottle',  'group1_id': '3',  'group2_id': '7' },
    {'obj': 'Thought', 'group1_id': '3',  'group2_id': None},
])


obj       group1_id          group2_id
Ball      None               7
Balloon   92                 7
Person    14                 11
Bottle    3                  7
Thought   3                  None

I want to group things together based on any of the groups. Here it is annotated:

obj       group1_id          group2_id    # annotated
Ball      None               7            #                   group2_id = 7
Balloon   92                 7            # group1_id = 92 OR group2_id = 7
Person    14                 11           # group1_id = 14 OR group2_id = 11
Bottle    3                  7            # group1_id =  3 OR group2_id = 7
Thought   3                  None         # group1_id = 3

When combined, our output should look like this:

count         objs                               composite_id
4             [Ball, Balloon, Bottle, Thought]   g1=3,92|g2=7
1             [Person]                           g1=14|g2=11

Notice that Ball, Balloon, and Bottle group together directly via group2_id=7, and the fourth one, Thought, joins them because it matches Bottle via group1_id=3, which transitively places it in the group2_id=7 group. Note: for this question, assume an item will only ever be in one combined group (there will never be a condition where it could possibly be in two groups).

How could I do this in pandas?


2 Answers


This is not odd at all ~ it is a network problem

import networkx as nx

# handle the missing values first: fill each key column from the other
# column in the same row, so rows are not classed into the wrong group
df['key1'] = df['group1_id'].fillna(df['group2_id'])
df['key2'] = df['group2_id'].fillna(df['group1_id'])

# here we start to create the network: each row is an edge between its two keys
G = nx.from_pandas_edgelist(df, 'key1', 'key2')
l = list(nx.connected_components(G))
L = [dict.fromkeys(y, x) for x, y in enumerate(l)]
d = {k: v for d in L for k, v in d.items()}

# use the dict above to map keys in the same component to the same label,
# so we can groupby them
out = df.groupby(df.key1.map(d)).agg(
    objs=('obj', list),
    Count=('obj', 'count'),
    g1=('group1_id', lambda x: set(x[x.notnull()].tolist())),
    g2=('group2_id', lambda x: set(x[x.notnull()].tolist())),
)
# note: I did not convert the composite id into string format; I kept the
# parts in separate columns, which is easier to understand
Out[53]: 
                                  objs  Count       g1    g2
key1                                                        
0     [Ball, Balloon, Bottle, Thought]      4  {92, 3}   {7}
1                             [Person]      1     {14}  {11}

PS: If you need more detail about the network steps, check the link
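
If you do want the single composite_id string from the question, here is a minimal sketch (assuming the g1/g2 set columns produced by the agg above; only checked against the example data):

# build the 'g1=3,92|g2=7' style string from the g1/g2 set columns
out['composite_id'] = (
    'g1=' + out['g1'].map(lambda s: ','.join(sorted(s)))
    + '|g2=' + out['g2'].map(lambda s: ','.join(sorted(s)))
)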


BEN_YO's answer is the correct one; however, here is a much more verbose solution where I build a map of the 'first key' for each grouping collection:

from collections import defaultdict

# using four id fields instead of 2
grouping_fields = ['group1_id', 'group2_id', 'group3_id', 'group4_id']
id_fields = df.loc[df[grouping_fields].notnull().any(axis=1), grouping_fields]

# build a set of all similarly-grouped items,
# using the 'first seen' key as the grouping key for that set
FIRST_SEEN_TO_ALL = defaultdict(set)
KEY_TO_FIRST_SEEN = {}

for row in id_fields.to_dict('records'):
    # NaN is a truthy float, so a plain boolean check does not drop it
    keys = [id for id in row.values() if id and (str(id) != 'nan')]
    # if any key in this row was seen before, reuse its group key;
    # otherwise the first key of the row starts a new group
    first_seen_key = next(
        (KEY_TO_FIRST_SEEN[k] for k in keys if k in KEY_TO_FIRST_SEEN),
        keys[0],
    )
    for key in keys:
        KEY_TO_FIRST_SEEN[key] = first_seen_key
        FIRST_SEEN_TO_ALL[first_seen_key].add(key)

def fetch_group_id(row):
    # dropna here: filter(None, ...) would let NaN through, since NaN is truthy
    for key in row.dropna():
        first_seen_key = KEY_TO_FIRST_SEEN.get(key)
        if first_seen_key:
            return first_seen_key

df['group_super'] = df[grouping_fields].apply(fetch_group_id, axis=1)
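
With the example frame from the question (which only has two id columns, so grouping_fields would be just ['group1_id', 'group2_id']), a final aggregation toward the expected output might look like this sketch:

# group on the derived group_super column and collect objects
result = (
    df.groupby('group_super')
      .agg(count=('obj', 'count'), objs=('obj', list))
      .reset_index()
)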