
I have a dictionary mapping product names to the unique customer emails of people who have purchased those items. It looks like this:

customer_emails = {
    'Backpack': ['customer1@gmail.com', 'customer2@gmail.com', 'customer3@yahoo.com', 'customer4@msn.com'],
    'Baseball Bat': ['customer1@gmail.com', 'customer3@yahoo.com', 'customer5@gmail.com'],
    'Gloves': ['customer2@gmail.com', 'customer3@yahoo.com', 'customer4@msn.com']
}

I am trying to iterate over the values of each key and determine how many of its emails also appear under the other keys. I converted this dictionary to a DataFrame and got the answer I wanted for a single-column comparison using something like this:

customers[customers['Baseball Bat'].notna()]['Baseball Bat'].isin(customers['Gloves']).sum()

What I'm trying to accomplish is a DataFrame that looks like this, so that I can easily use it for correlation charts:

              Backpack  Baseball Bat  Gloves
Backpack             4             2       3
Baseball Bat         2             3       1
Gloves               3             1       3

I'm thinking the way to do it is to iterate over the customer_emails dictionary, but I'm not sure how to pick out a single key, compare its values against every other key's values, and then store the results.


3 Answers


Start with pd.DataFrame.from_dict (orient='index' handles the unequal list lengths by padding the shorter rows with None, and transposing puts the products back into the columns):

df = pd.DataFrame.from_dict(customer_emails, orient='index').T

df
              Backpack         Baseball Bat               Gloves
0  customer1@gmail.com  customer1@gmail.com  customer2@gmail.com
1  customer2@gmail.com  customer3@yahoo.com  customer3@yahoo.com
2  customer3@yahoo.com  customer5@gmail.com    customer4@msn.com
3    customer4@msn.com                 None                 None

Now, use stack + get_dummies + sum + dot:

v = df.stack().str.get_dummies().sum(level=1)
v.dot(v.T)

              Backpack  Baseball Bat  Gloves
Backpack             4             2       3
Baseball Bat         2             3       1
Gloves               3             1       3
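
Note that Series.sum(level=...) has been removed in more recent pandas versions. A minimal sketch of the same step with an explicit groupby, assuming a current pandas install:

# group the stacked index by its second level (the product names) instead of sum(level=1)
v = df.stack().str.get_dummies().groupby(level=1).sum()
v.dot(v.T)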

Alternatively, replace stack with melt for some added performance.

v = (df.melt()
       .set_index('variable')['value']
       .str.get_dummies()
       .sum(level=0)
)
v.dot(v.T)

variable      Backpack  Baseball Bat  Gloves
variable                                    
Backpack             4             2       3
Baseball Bat         2             3       1
Gloves               3             1       3
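
In both versions, v is a product-by-email 0/1 indicator matrix, so v.dot(v.T) counts, for each pair of products, how many emails they have in common. A quick sanity check against a direct set intersection, using the customer_emails dict from the question:

counts = v.dot(v.T)
# Backpack and Gloves share customer2, customer3 and customer4
shared = len(set(customer_emails['Backpack']) & set(customer_emails['Gloves']))
assert counts.loc['Backpack', 'Gloves'] == shared == 3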
cs95

You can first count, for every pair of products, how many of one product's emails appear in the other's list, then pass the resulting nested dictionary to pd.DataFrame:

import pandas as pd

emails = {'Baseball Bat': ['customer1@gmail.com', 'customer3@yahoo.com', 'customer5@gmail.com'], 'Backpack': ['customer1@gmail.com', 'customer2@gmail.com', 'customer3@yahoo.com', 'customer4@msn.com'], 'Gloves': ['customer2@gmail.com', 'customer3@yahoo.com', 'customer4@msn.com']}
# for each pair of products (a, c), count how many of a's emails appear in c's list
results = {a: {c: sum(h in j for h in b) for c, j in emails.items()} for a, b in emails.items()}
df = pd.DataFrame(results)

Output:

               Backpack  Baseball Bat  Gloves
Backpack             4             2       3
Baseball Bat         2             3       1
Gloves               3             1       3
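
If the email lists are large, converting each one to a set first avoids the repeated linear membership scans. A variant sketch of the same idea:

# precompute sets so each membership test / intersection is cheap
email_sets = {product: set(addrs) for product, addrs in emails.items()}
results = {a: {c: len(b & j) for c, j in email_sets.items()} for a, b in email_sets.items()}
df = pd.DataFrame(results)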
Ajax1234

Using the same logic, create a Series, then use set intersection on the lists:

import numpy as np

s = pd.Series(customer_emails)
# pairwise intersection sizes, reshaped into a square product-by-product matrix
pd.DataFrame(np.reshape([len(set(x).intersection(set(y))) for x in s for y in s], (3, 3)),
             index=s.index, columns=s.index)
Out[299]: 
              Backpack  Baseball Bat  Gloves
Backpack             4             2       3
Baseball Bat         2             3       1
Gloves               3             1       3
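
To avoid hard-coding the (3, 3) shape, the same intersection logic can be sized from the Series itself; a minimal sketch:

import numpy as np
import pandas as pd

s = pd.Series(customer_emails)
# one row of intersection counts per product
mat = np.array([[len(set(x) & set(y)) for y in s] for x in s])
pd.DataFrame(mat, index=s.index, columns=s.index)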
BENY