I have this ROOT file, which is available on Google Drive at this link. When I used to convert it to arrays in uproot 3 using parallel processing, it took less time and memory. The code I was using was something like
from concurrent.futures import ThreadPoolExecutor
import uproot  # uproot 3
import pandas as pd

executor = ThreadPoolExecutor(8)
branches = pd.DataFrame.from_dict(
    uproot.open(file_with_path)[tree_name].arrays(namedecode='utf-8', executor=executor))
But now, in uproot 4, it consumes all my memory; maybe I am not doing it properly. Could you please have a look at it? It is also not as fast as it used to be.
from concurrent.futures import ThreadPoolExecutor
import uproot  # uproot 4

executor = ThreadPoolExecutor(8)
input_tree = uproot.open('/path/10k_events_PFSimplePlainTree.root:PlainTree',
                         decompression_executor=executor)
branches = input_tree.arrays(library='pd', decompression_executor=executor)
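For reference, this is a minimal sketch of how the peak memory could be checked right after the read above finishes (assuming a Linux machine, where ru_maxrss is reported in kilobytes); the exact numbers will of course depend on the machine:

import resource

# run this after the uproot 4 read above has completed
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # kilobytes on Linux, bytes on macOS
print(f"peak RSS: {peak_kb / 1024:.0f} MB")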
@jpivarski and I discussed this in the issue at this link, and he suggested that it might be only about 10% more memory, but for me it is more than 10%, maybe 60-80% more.
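In case it is relevant, here is a rough sketch of reading the same tree in chunks with uproot.iterate, which I assume would keep memory bounded if the single read cannot be made cheaper; the 100 MB step size is just a guess and process() is only a placeholder, not part of my actual analysis:

import uproot

# iterate over the tree in chunks instead of materialising everything at once
for chunk in uproot.iterate('/path/10k_events_PFSimplePlainTree.root:PlainTree',
                            step_size='100 MB', library='pd'):
    process(chunk)  # placeholder: do something with each DataFrame chunk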