
I have this ROOT file, which is available on Google Drive at this link. When I used to convert it to arrays with uproot 3 using parallel processing, it took less time and memory. The code I was using was something like:

import uproot  # uproot 3
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(8)

branches = pd.DataFrame.from_dict(
    uproot.open(file_with_path)[tree_name].arrays(namedecode='utf-8', executor=executor)
)

But now, with uproot 4, it consumes all my memory; maybe I am not doing it properly. Could you please have a look at it? It is also not as fast as it used to be.

import uproot  # uproot 4
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(8)

# open the file and get the tree in one step
input_tree = uproot.open('/path/10k_events_PFSimplePlainTree.root:PlainTree',
                         decompression_executor=executor)

branches = input_tree.arrays(library='pd', decompression_executor=executor)

@jpivarski and I discussed this in the issue at this link, and he said that, for all he knew, the difference could be small, around 10%. For me it is much more than that: maybe 60-80% more memory.

  • Clarification: I said that, for all I know, it could be a small difference, like 10%. I didn't say that it is or should be 10%. The statement was about my lack of knowledge. – Jim Pivarski Feb 18 '21 at 13:37
  • @JimPivarski sorry for quoting you with really bad words. I am going to edit the post – Shahid Zafar Khan Feb 18 '21 at 15:17
