
I have a huge dict variable of about 2 gigabytes. I am doing some scientific calculation on this dict (read only). However, reading from the shared dictionary is much, much slower than from a regular dictionary, even though it saves a lot of memory. Is there a faster way to share read-only data in a multiprocessing job? Here is my code:

import multiprocessing as mp
import numpy as np
import time
if __name__ == "__main__":
    origin_data = {
        "data": np.random.rand(1000, 1000)
    }
    
    # The Manager runs a separate server process; the dict proxy returned by
    # m1.dict() forwards every item access to that process over IPC (the value
    # is pickled and sent back on each read).
    m1 = mp.Manager()
    shm_origin_data = m1.dict(origin_data)

    # Benchmark: 100 additions reading from the regular (local) dict
    t1 = time.time()
    for i in range(100):
        origin_data["data"] + origin_data["data"]
    t2 = time.time()
    print("local dict time is " + str(t2 - t1))

    # Benchmark: the same additions reading through the Manager proxy dict
    t1 = time.time()
    for i in range(100):
        shm_origin_data["data"] + shm_origin_data["data"]
    t2 = time.time()
    print("shared dict time is " + str(t2 - t1))

The result is

local dict time is 0.7529358863830566
shared dict time is 9.097671508789062
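
For reference, here is a minimal sketch of the kind of alternative I am asking about: backing the array with multiprocessing.shared_memory (Python 3.8+) so a worker process attaches to the same buffer instead of going through the Manager proxy. The worker function and variable names below are only illustrative, and I have not benchmarked this on the real 2 GB data.

import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
import time

def worker(shm_name, shape, dtype):
    # Attach to the existing shared block by name; no copy of the data is made.
    shm = shared_memory.SharedMemory(name=shm_name)
    data = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    t1 = time.time()
    for i in range(100):
        data + data  # reads come straight from the shared buffer
    print("shared_memory read time is " + str(time.time() - t1))
    shm.close()

if __name__ == "__main__":
    origin = np.random.rand(1000, 1000)

    # Allocate one shared block and copy the array into it once.
    shm = shared_memory.SharedMemory(create=True, size=origin.nbytes)
    shared = np.ndarray(origin.shape, dtype=origin.dtype, buffer=shm.buf)
    shared[:] = origin[:]

    p = mp.Process(target=worker, args=(shm.name, origin.shape, origin.dtype))
    p.start()
    p.join()

    shm.close()
    shm.unlink()  # release the shared block when done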
  • `exec(open("origin_data.py").read(),globals())` That's a no-go on many levels. For starters: how should we understand what happens there? Why don't you just show us what happens here and then just import it? – Klaus D. Mar 11 '21 at 09:38
  • You are right. I have edited my code – Serpent_Beginer Mar 11 '21 at 11:21

0 Answers