I am using Aho-Corasick (the `pyahocorasick` package) to perform string searches on documents. The original code uses a NumPy array to store, in an efficient way, which strings from a search list were matched:
import ahocorasick
import numpy as np

def ahocorasickFind(search_list, text):
    # Build the automaton, storing (index, word) as each word's payload
    A = ahocorasick.Automaton()
    for idx, s in enumerate(search_list):
        A.add_word(s, (idx, s))
    A.make_automaton()
    # Record the index of every word found in the text
    index_list = []
    for item in A.iter(text):
        print(item)
        index_list.append(item[1][0])
    # Mark matched indices with 1 in a dense 0/1 vector
    output_list = np.array([0] * len(search_list))
    output_list[index_list] = 1
    return output_list.tolist()
search_list = ['joão', 'maria', 'thiago'] # thousands of words in the real code
result = ahocorasickFind(search_list, 'asdasdasd joão 1231231 thiago') # huge text in the real code
for index, element in enumerate(result):
    if element == 1:
        print(search_list[index])
The above approach takes too much time and memory, because it iterates over the whole vector and tests every element (if element == 1). So, how can I get the "original" strings found in the input text in a performant way?
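One direction I have been considering (a sketch, not tested at scale): skip the dense 0/1 vector entirely and collect the matched words into a set while iterating the automaton. The helper below only assumes that `iter()` yields `(end_index, (idx, word))` tuples, as in the code above; `found_words` is a name I made up for illustration:

```python
def found_words(automaton, text):
    # Collect the distinct matched words directly into a set,
    # instead of building a 0/1 vector over the whole search_list.
    # Each match from iter() is (end_index, (idx, word)); keep only word.
    return {word for _end, (_idx, word) in automaton.iter(text)}
```

The result is then only as large as the number of distinct matches, and can be iterated or tested for membership directly, rather than scanning a list the size of `search_list`.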