The apriori algorithm receives a list of lists, where each inner list is a transaction. Are you passing a list of transactions? For example:
transactions = [['milk', 'bread', 'water'], ['coffee', 'sugar'], ['burgers', 'eggs']]
Here you have a list of transactions (lists), which you can then pass to apriori.
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
import logging
import time
import pandas as pd
support_threshold = 0.004
# one-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
te_ary = te.fit(transactions).transform(transactions)
df = pd.DataFrame(te_ary, columns=te.columns_)
logging.debug("Calculating itemset according to support...")
# start time (time.clock() was removed in Python 3.8; use perf_counter instead)
start_time = time.perf_counter()
# apriori
frequent_itemsets = apriori(df, min_support=support_threshold, use_colnames=True)
# end time
end_time = time.perf_counter()
time_apriori = (end_time - start_time) / 60
apriori_decimals = "%.2f" % round(time_apriori, 2)
print("\n\nCompleted in %s minutes\n" % apriori_decimals)
print(frequent_itemsets)  # DataFrame with the frequent itemsets
lift = association_rules(frequent_itemsets, metric="lift", min_threshold=1)
print(lift)  # DataFrame with support, confidence, lift, leverage and conviction
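If you only need the strongest rules, you can filter the resulting DataFrame on the metric columns that association_rules produces; the thresholds below are just example values:
# keep only rules with high confidence and lift above 1.2 (example thresholds)
strong_rules = lift[(lift['confidence'] >= 0.8) & (lift['lift'] > 1.2)]
print(strong_rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']])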
Regarding the min support threshold and the time the apriori algorithm takes: with small min_support values we get many frequent itemsets, and therefore many association rules, so the algorithm needs more time to compute them. This is one of the well-known limitations of the algorithm.
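If you want to see how the threshold affects runtime on your own data, you can time apriori over a few support values (the thresholds below are arbitrary examples, and df is the encoded DataFrame from the snippet above):
import time
for threshold in (0.05, 0.01, 0.004):  # arbitrary example thresholds
    start = time.perf_counter()
    itemsets = apriori(df, min_support=threshold, use_colnames=True)
    print("min_support=%.3f -> %d itemsets in %.2f s"
          % (threshold, len(itemsets), time.perf_counter() - start))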
You can find here an overall explanation of how the apriori algorithm works; some highlights are:
Apriori uses a "bottom-up" approach, where frequent subsets are extended one item at a time (known as candidate generation). Then groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.
Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate itemsets of length k from itemsets of length k-1. Then it prunes the candidates which have an infrequent subpattern. According to the downward closure lemma, the candidate set contains all frequent k-length item sets. After that, it scans the transaction database to determine frequent itemsets among the candidates.
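To make the candidate-generation step concrete, here is a minimal, illustrative sketch of one level of Apriori in pure Python (not how mlxtend implements it internally):
from itertools import combinations

def generate_candidates(frequent_k_minus_1):
    """Join frequent (k-1)-itemsets into k-itemset candidates, then prune
    candidates with an infrequent (k-1)-subset (downward closure)."""
    prev = set(frequent_k_minus_1)
    k = len(next(iter(prev))) + 1
    # join step: union pairs of (k-1)-itemsets that differ by one item
    candidates = {a | b for a in prev for b in prev if len(a | b) == k}
    # prune step: every (k-1)-subset of a candidate must itself be frequent
    return {c for c in candidates
            if all(frozenset(s) in prev for s in combinations(c, k - 1))}

# example: frequent 2-itemsets -> candidate 3-itemsets
frequent_2 = [frozenset(s) for s in (('milk', 'bread'), ('milk', 'water'),
                                     ('bread', 'water'))]
print(generate_candidates(frequent_2))  # {frozenset({'milk', 'bread', 'water'})}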
As we can see, for a dataset with a large number of frequent items or with a low support value, the candidate itemsets can become very large.
Storing these large candidate sets requires a lot of memory. Moreover, the apriori algorithm scans the whole database multiple times, once for each itemset length k, to count the frequency of the candidate itemsets. So the apriori algorithm can be very slow and inefficient, mainly when the memory capacity is limited and the number of transactions is large.
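If memory is the bottleneck, newer versions of mlxtend's apriori accept a low_memory flag and a max_len cap that can help; check the documentation of your installed version before relying on them:
# trade speed for memory, and stop at itemsets of length 3
frequent_itemsets = apriori(df, min_support=support_threshold,
                            use_colnames=True, max_len=3, low_memory=True)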
For example, I tried the apriori algorithm with a list of 25,900 transactions and a min_support value of 0.004. The algorithm took about 2.5 hours to produce the output.
For a more detailed explanation of the code, see the mlxtend apriori documentation.