I have Python code that reads a very large file, looks up data from another file, and writes the matched and unmatched values into a new file.
For example:
file 1:
ab
bc
cd
gh
file 2:
ab t1 catch1
ab t1 catch2
bc t1 catch1
bc t2 catch3
bc t1 catch4
ef t7 catch1
output:
ab catch1
catch2
bc catch1
catch3
catch4
cd
gh
My Code:
with open("list_with-detail.ids") as f:
    for line in f:
        if id in line:            # substring match against the current id
            print(line.rstrip())  # placeholder for the real output logic
I am dealing with a very large file, ~10 GB, and it takes minutes to fetch the relevant data for each id. The id list to be fetched is also large, ~20 MB.
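Presumably the snippet above runs once per id from file 1, so the ~10 GB file is re-scanned for every id. A rough reconstruction of that pattern (the id-list file name "ids.txt" is just an assumption) looks like this:

with open("ids.txt") as ids_file:
    for raw in ids_file:
        id = raw.strip()
        # re-opens and re-reads the ~10 GB detail file for every single id
        with open("list_with-detail.ids") as f:
            for line in f:
                if id in line:
                    print(line.rstrip())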
I want to know a better/faster way to deal with this issue.
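Would something like the following single-pass approach be the right direction? It loads the id list into a set first, then streams the big file once. The file names and the assumption that the id is the first whitespace-separated column and the catch value the third (as in the example above) are just for illustration.

from collections import defaultdict

# Assumed names: "ids.txt" is file 1 (one id per line),
# "list_with-detail.ids" is the ~10 GB detail file, "out.txt" is the result.
with open("ids.txt") as f:
    wanted = [line.strip() for line in f if line.strip()]
wanted_set = set(wanted)                      # O(1) membership tests

# Single pass over the huge file, keeping only catches for wanted ids.
catches = defaultdict(list)
with open("list_with-detail.ids") as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 3 and parts[0] in wanted_set:
            catches[parts[0]].append(parts[2])

# Write output in the order of file 1; unmatched ids appear on their own.
with open("out.txt", "w") as out:
    for id_ in wanted:
        values = catches.get(id_)
        if values:
            out.write("{} {}\n".format(id_, values[0]))
            for extra in values[1:]:
                out.write("{}\n".format(extra))
        else:
            out.write("{}\n".format(id_))

Is this the sensible way to do it, or is there a faster/more memory-friendly approach?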