
I have the following sequential code:

c = []
for ind1, a in df.iterrows():
    for ind2, b in df.iterrows():
        if a.hit_id < b.hit_id:
            c.append(dist(a, b))
c = numpy.array(c)
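For context, if `dist` is the Euclidean distance over `x`, `y`, `z` columns (as in the Dask version further down; that is an assumption about `dist`), the all-pairs distances of a small frame can be computed without Python-level loops. A minimal NumPy sketch with a hypothetical toy frame — note that for ~10^6 rows the full n×n matrix would not fit in memory, so this only illustrates the vectorized form:

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame standing in for df; the real one has ~10^6 rows.
df = pd.DataFrame({
    "hit_id": [0, 1, 2],
    "x": [0.0, 3.0, 0.0],
    "y": [0.0, 4.0, 0.0],
    "z": [0.0, 0.0, 12.0],
})

pts = df[["x", "y", "z"]].to_numpy()
# Broadcast (n, 1, 3) against (1, n, 3) -> (n, n) distance matrix.
diff = pts[:, None, :] - pts[None, :, :]
dmat = np.sqrt((diff ** 2).sum(axis=-1))

# Keep only pairs with i < j (equivalent to a.hit_id < b.hit_id
# when hit_id is unique and increasing).
iu = np.triu_indices(len(pts), k=1)
c = dmat[iu]
print(c.min(), c.max())
```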

But the number of rows in the dataframe is close to 10^6, so I want to speed this operation up somehow. I am thinking of using Dask together with a groupby. Here is my approach:

@dask.delayed
def compute_pairwise_distance(val1, val2):
    for i1 in val1:
        for i2 in val2:
            dist = np.sqrt(np.square(i1.x-i2.x) + np.square(i1.y-i2.y) + np.square(i1.z - i2.z))
            gV.min_dist = min(gV.min_dist, dist)
            gV.max_dist = max(gV.max_dist, dist)

def wrapper():
    gV.grouped_df = gV.df_hits.groupby('layer_id')
    unique_groups = gV.df_hits['layer_id'].compute().unique()
    results = []
    for gp1 in unique_groups:
        for gp2 in unique_groups:
            if gp1 < gp2:
                y = compute_pairwise_distance(gV.grouped_df.get_group(gp1), gV.grouped_df.get_group(gp2))
                results.append(y)
    results = dask.compute(*results)

wrapper()
print(str(gV.max_dist) + " " +str(gV.min_dist))
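One structural issue worth flagging regardless of the error: delayed tasks that mutate shared globals like `gV.min_dist` generally won't behave as expected, because tasks may run in other threads or processes where that state is not shared. A safer pattern is to return each task's result and reduce afterwards — a hedged sketch with hypothetical toy groups standing in for `get_group(gp1)`/`get_group(gp2)`:

```python
import dask
import numpy as np
import pandas as pd

@dask.delayed
def pair_min_max(df1, df2):
    # Return values instead of mutating a shared gV object.
    a = df1[["x", "y", "z"]].to_numpy()
    b = df2[["x", "y", "z"]].to_numpy()
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    return d.min(), d.max()

# Hypothetical toy groups for illustration.
g1 = pd.DataFrame({"x": [0.0], "y": [0.0], "z": [0.0]})
g2 = pd.DataFrame({"x": [3.0, 0.0], "y": [4.0, 0.0], "z": [0.0, 12.0]})

tasks = [pair_min_max(g1, g2)]
results = dask.compute(*tasks)
min_dist = min(r[0] for r in results)
max_dist = max(r[1] for r in results)
print(min_dist, max_dist)
```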

I don't know why, but I am getting `KeyError: 'l'`. Also, is this the right way of using Dask?
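One plausible source of a single-character `KeyError` like `'l'` (an assumption, not confirmed from the traceback): iterating directly over a pandas DataFrame yields its column labels, not its rows, so `for i1 in val1` produces strings such as `'layer_id'`, and iterating or indexing one of those strings then yields single characters like `'l'`. Row iteration needs an explicit method such as `itertuples`. A small demonstration:

```python
import pandas as pd

df = pd.DataFrame({"layer_id": [1, 2], "x": [0.0, 1.0]})

# Iterating a DataFrame directly yields the COLUMN LABELS, not rows.
print(list(df))  # column labels, e.g. ['layer_id', 'x']

# Iterating a column-label string then yields single characters
# ('l', 'a', ...), which is one way a stray 'l' can end up being
# used as a lookup key and raise KeyError: 'l'.
first_label = next(iter(df))
first_char = first_label[0]

# Row iteration must be explicit:
for row in df.itertuples(index=False):
    print(row.layer_id, row.x)
```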
