I am a little confused about when to use an ngram filter or tokenizer versus a fuzzy query. I can apply an ngram filter or tokenizer at index time (through an analyzer), and I can also use multi_field to store different variations of the same field for different kinds of queries, so flexibility should not be a concern with this approach, as described here: http://jontai.me/blog/2013/02/adding-autocomplete-to-an-elasticsearch-search-application/
When I used an ngram filter during text analysis, it gave the same results as a fuzzy query (actually better results, because of the edgeNGram option, which is not available for fuzzy queries).
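For context, the index-time setup I am describing looks roughly like this (a sketch; the names autocomplete_filter and autocomplete are placeholders, not my actual configuration):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edgeNGram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete_filter"]
        }
      }
    }
  }
}
```

With this analyzer applied to a field, a plain match query against that field already tolerates partial input, which is why it seemed to cover what I was using fuzzy queries for.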
So when should I use a fuzzy query (through the fuzziness option or the fuzzy_like_this query, etc.) if an index-time ngram filter combined with a simple match query gets better results and, from what I have read, is more scalable?
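By a fuzzy query I mean something like this (a sketch; the field name title and the search term are just examples):

```json
{
  "query": {
    "match": {
      "title": {
        "query": "serch",
        "fuzziness": 1
      }
    }
  }
}
```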
And when should I use the ngram tokenizer instead of the ngram filter?