
I'm pretty new to working with very large dataframes (~550 million rows and 7 columns). I have raw data in the following format:

df = Date|ID|Store|Brand|Category1|Category2|Age

This dataframe is over 500 million rows and I need to pass it through a function that will aggregate it at a particular level (brand, category1, or category2) and calculate market basket affinity metrics. Since several temp tables need to be made to get to the final metrics, I am using pandasql to run the calculations on the df. I have tried running my code on both my local computer and a large SageMaker instance, but the compute time is extremely long, and often the script does not finish or the kernel crashes.

I have tried the following packages to speed up the code, but no luck so far:

  1. Vaex - I tried recreating the SQL calculations in Python, but it did not seem promising at all in terms of speed.
  2. Dask - Not really sure this one applies here, but it did not help.
  3. DuckDB - since I am calling SQL through Python, this one seemed the most promising. It worked well when I took a subset of the data (10 million rows), but it will not finish processing when I try it on 300 million rows...and I need it to work on 550 million rows. The pattern I'm using is sketched below.
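
For context, the DuckDB attempt is essentially the same SQL with the pandasql call swapped out, roughly along the lines of this simplified sketch (the real query is the full one in the function below; the hard-coded level and threshold here are just examples):

```python
import duckdb

# df is the pandas DataFrame described above
con = duckdb.connect()        # in-memory DuckDB database
con.register("df", df)        # expose the DataFrame to DuckDB as a view

sql = """
SELECT t.CATEGORY_2 AS TGT, a.CATEGORY_2 AS ASO,
       COUNT(DISTINCT t.ID) AS RCPTS_BOTH
FROM df t
JOIN df a ON t.ID = a.ID AND t.CATEGORY_2 <> a.CATEGORY_2
GROUP BY 1, 2
HAVING COUNT(DISTINCT t.ID) >= 1000
"""
result = con.execute(sql).df()  # pull the result back into pandas
```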

Does anyone have suggestions on how I can speed this up so it runs more efficiently? Below is the Python function that runs the df through the SQL aggregations.

```python
from pandasql import sqldf

# assumed helper: the usual pandasql pattern for running SQL against in-scope DataFrames
pysqldf = lambda q: sqldf(q, globals())

def mba_calculation(df, tgt_level='CATEGORY_2', aso_level='CATEGORY_2', threshold=1000, anchor=[]):
"""
tgt_level - string, target level is one of three options: category 1, category 2, brand. Deafult: cat2
aso_level - string, association level is one of three options: category 1, catgeory 2, brand. Default: cat2
anchor - list containing either 0,1, or 2 category1/category2/brand depdending on tgt_level. Default: 0
threshold - co-occurence level of target and associated item; ranges from 1 to the max co-occurence. Default: 1000
"""

#Case1: no anchor selected (default view) - display pairs
if len(anchor) == 0:
    sql_mba = """
            WITH combined AS
                (SELECT t.{} AS TGT_{}, a.{} AS ASO_{},
                    COUNT(DISTINCT t.ID) AS RCPTS_BOTH
                FROM {} t 
                INNER JOIN {} a
                ON t.ID = a.ID and t.{} <> a.{}  
                GROUP BY 1,2
                --set minimum threshold for co-occurrence
                HAVING COUNT(DISTINCT t.ID) >= {}
                ),
            target AS
                (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT
                FROM {}
                WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined)
                GROUP BY 1
                ),
            associated AS
                (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO
                FROM {}
                WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined)
                GROUP BY 1
                )

            SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, 
                associated.RCPTS_ASO, RCPTS_ALL
                --calculate support, confidence, and lift
                ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT
                ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE
                ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT
            FROM combined
            LEFT JOIN target
            ON combined.TGT_{} = target.TGT_{}
            LEFT JOIN associated
            ON combined.ASO_{} = associated.ASO_{}
            LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM {})
            ORDER BY MBA_LIFT DESC;
        """.format(tgt_level,tgt_level, aso_level, aso_level, 
                   df, 
                   df, 
                   tgt_level,aso_level,
                   threshold, 
                   tgt_level, tgt_level, 
                   df,
                   tgt_level, tgt_level,
                aso_level, aso_level, 
                   df,
                   aso_level, aso_level,
                tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level, df)

    mba_df = pysqldf(sql_mba)
    #print(mba_df.shape)
    #display(mba_df.head(50)) 

#Case2: 1 anchor selected - display pairs
elif len(anchor) == 1:
    anchor_item = anchor[0]
    #need to make anchors be this format '%ORANGE JUICE%'
    sql_mba = """
    WITH combined AS
        (SELECT t.{} AS TGT_{}, a.{} AS ASO_{},
            COUNT(DISTINCT t.ID) AS RCPTS_BOTH
        FROM df t 
        INNER JOIN df a
        ON t.ID = a.ID and t.{} <> a.{} 
        --filter tgt to anchor
        WHERE UPPER(t.{}) LIKE '%{}%'
        GROUP BY 1,2
        --set minimum threshold for co-occurrence
        HAVING COUNT(DISTINCT t.ID) >= {}
        ),
    target AS
        (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT
        FROM df
        WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined)
        GROUP BY 1
        ),
    associated AS
        (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO
        FROM df
        WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined)
        GROUP BY 1
        )

    SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, 
        associated.RCPTS_ASO, RCPTS_ALL
        --calculate support, confidence, and lift
        ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT
        ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE
        ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT
    FROM combined
    LEFT JOIN target
    ON combined.TGT_{} = target.TGT_{}
    LEFT JOIN associated
    ON combined.ASO_{} = associated.ASO_{}
    LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM df)
    ORDER BY MBA_LIFT DESC
        """.format(tgt_level,tgt_level, aso_level, aso_level, tgt_level,
                 aso_level, tgt_level, anchor_item, threshold, 
                   tgt_level, tgt_level, tgt_level, tgt_level,
                aso_level, aso_level, aso_level, aso_level,
                tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level)
    mba_df = pysqldf(sql_mba)

#Case3: 2 anchors selected - display trios
elif len(anchor) == 2:
    anchor_item1 = anchor[0]
    anchor_item2 = anchor[1]
    #need to make anchors be this format '%ORANGE JUICE%'
    sql_mba = """
     WITH combined AS
        (SELECT t1.{} AS TGT1_{}, t2.{} AS TGT2_{}, 
            a.{} AS ASO_{},
            COUNT(DISTINCT t1.ID) AS RCPTS_BOTH
        FROM df t1
        INNER JOIN df t2
        ON t1.ID = t2.ID AND t1.{} <> t2.{}
        INNER JOIN df a
        ON t1.ID = a.ID AND t2.ID = a.ID
        AND t1.{} <> a.{} AND t2.{} <> a.{}  

        --filter to anchors
        WHERE
        (
        (UPPER(TGT1_{}) LIKE '%{}%' OR
         UPPER(TGT1_{}) LIKE '%{}%') 
         AND
        (UPPER(TGT2_{}) LIKE '%{}%' OR
         UPPER(TGT2_{}) LIKE '%{}%') 
         )

        GROUP BY 1,2,3
        --set minimum threshold for co-occurrence
        HAVING COUNT(DISTINCT t1.ID) >= {}
    ),

        target AS
            (SELECT tgt1.{} AS TGT1_{}, tgt2.{} AS TGT2_{},
                COUNT(DISTINCT tgt1.ID) AS RCPTS_TGT
            FROM df tgt1
            INNER JOIN df tgt2
            ON tgt1.ID = tgt2.ID AND tgt1.{} <> tgt2.{}
            WHERE TGT1_{} IN (SELECT DISTINCT(TGT1_{}) FROM combined)
            AND TGT2_{} IN (SELECT DISTINCT(TGT2_{}) FROM combined)

            AND 
            --filter to anchors
            (
            (UPPER(TGT1_{}) LIKE '%{}%' OR
             UPPER(TGT1_{}) LIKE '%{}%') 
             AND
            (UPPER(TGT2_{}) LIKE '%{}%' OR
             UPPER(TGT2_{}) LIKE '%{}%') 
             )

            GROUP BY 1,2
    ),

        associated AS
            (SELECT {} AS ASO_{}, 
                COUNT(DISTINCT ID) AS RCPTS_ASO
            FROM df
            WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined)
            GROUP BY 1
    )

    SELECT combined.TGT1_{}, combined.TGT2_{},combined.ASO_{}, 
            RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL
            --calculate support, confidence, and lift
            ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT
            ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE
            ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT
        FROM combined
        LEFT JOIN target
        ON combined.TGT1_{} = target.TGT1_{}
        AND combined.TGT2_{} = target.TGT2_{}
        LEFT JOIN associated
        ON combined.ASO_{} = associated.ASO_{}
        LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM df)
        ORDER BY MBA_LIFT DESC;
  """.format(tgt_level, tgt_level, tgt_level, tgt_level, 
             aso_level, aso_level, tgt_level, tgt_level, tgt_level,
             aso_level, tgt_level, aso_level, tgt_level, anchor_item1, 
             tgt_level, anchor_item2, tgt_level, anchor_item1, tgt_level, 
             anchor_item2, threshold, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level,
             tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, 
             anchor_item1, tgt_level,anchor_item2, tgt_level, anchor_item1, tgt_level, 
             anchor_item2, aso_level, aso_level, aso_level, aso_level, tgt_level, 
             tgt_level, aso_level, tgt_level, tgt_level, tgt_level, tgt_level, 
             aso_level,aso_level)
    mba_df = pysqldf(sql_mba)

return mba_df
```
Kristina

2 Answers


To conserve memory, prefer Polars (`import polars as pl`) over the pandas library.
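
For example, a minimal sketch of the pair-counting step in Polars, assuming recent Polars, data sitting in a Parquet file, and the column names from the question (the path, level, and threshold are placeholders):

```python
import polars as pl

# Lazy scan lets Polars stream the file instead of loading all 550M rows at once.
lf = (
    pl.scan_parquet("receipts.parquet")   # placeholder path
      .select(["ID", "CATEGORY_2"])
      .unique()
)

pairs = (
    lf.join(lf, on="ID", suffix="_aso")                          # self-join on receipt ID
      .filter(pl.col("CATEGORY_2") != pl.col("CATEGORY_2_aso"))  # drop same-item pairs
      .group_by(["CATEGORY_2", "CATEGORY_2_aso"])
      .agg(pl.col("ID").n_unique().alias("RCPTS_BOTH"))
      .filter(pl.col("RCPTS_BOTH") >= 1000)                      # co-occurrence threshold
)

result = pairs.collect(streaming=True)  # execute the lazy plan in streaming mode
```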

If your records still do not fit in memory, use external storage. The DataFrame `to_sql` method makes it very easy to send your rows to Postgres, SQLite, or a similar relational database. Then you can use an on-disk data structure, an index, to make JOINs go quickly.
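
A rough sketch of that route with SQLite as the on-disk store (the file name, table name, and chunk size are illustrative):

```python
import sqlite3
import pandas as pd

con = sqlite3.connect("receipts.db")  # on-disk database file

# Write the frame out in chunks so memory use stays bounded.
df.to_sql("receipts", con, if_exists="replace", index=False, chunksize=1_000_000)

# Index the join/filter columns so the self-join does not rescan 550M rows.
con.execute("CREATE INDEX IF NOT EXISTS idx_receipts_id ON receipts (ID)")
con.execute("CREATE INDEX IF NOT EXISTS idx_receipts_cat2 ON receipts (Category2)")
con.commit()

# The existing SQL can then be pointed at the table instead of the in-memory frame.
rcpts_all = pd.read_sql_query("SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM receipts", con)
```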

J_H

My preferred tool for out-of-core aggregations of very large datasets is Vaex, but you would need to write your dataset out to uncompressed HDF5 file(s). Polars is also pretty good.
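
For example, a rough sketch of the conversion step (paths, chunk size, and column names are placeholders; check the Vaex docs for the exact options in your version):

```python
import vaex

# One-off conversion: stream the raw CSV into an uncompressed HDF5 file.
dfv = vaex.from_csv("receipts.csv", convert="receipts.hdf5", chunk_size=5_000_000)

# Afterwards vaex.open() memory-maps the file, so aggregations run out-of-core.
dfv = vaex.open("receipts.hdf5")
counts = dfv.groupby(by=["CATEGORY_2"], agg=vaex.agg.nunique("ID"))
```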

However, as you already have your code in SQL and a rewrite is probably painful, you may be able to stick with DuckDB if you optimise your datatypes. If you can get away with float32s or uint8s, for example, you may be able to reduce the size of the dataset, and that may be enough to get DuckDB to run on 550 million rows. Also, if any of your columns contain text, could you convert them into integer category IDs?
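
A sketch of that kind of shrinking before handing the frame to DuckDB (the column names are from the question; which dtypes are safe depends on your value ranges):

```python
import duckdb
import pandas as pd

# Downcast numerics and replace repeated text with integer category IDs.
df["Age"] = pd.to_numeric(df["Age"], downcast="unsigned")  # e.g. uint8 if ages fit

lookups = {}
for col in ["Store", "Brand", "Category1", "Category2"]:
    cat = df[col].astype("category")
    lookups[col] = dict(enumerate(cat.cat.categories))  # id -> original text, for later
    df[col] = cat.cat.codes.astype("int32")             # text column becomes integers

print(df.memory_usage(deep=True).sum() / 1e9, "GB")     # check the effect

con = duckdb.connect()
con.register("df", df)  # the joins now compare small integers instead of strings
```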

DougR