
How do you filter out/search in aggregate results efficiently?

Imagine you have 1 million documents in Elasticsearch. In those documents, you have a multi-field (keyword, text) named tags:

{
  ...
  "tags": ["Race", "Racing", "Mountain Bike", "Horizontal"],
  ...
},
{
  ...
  "tags": ["Tracey Chapman", "Silverfish", "Blue"],
  ...
},
{
  ...
  "tags": ["Surfing", "Race", "Disgrace"],
  ...
}

You can use these values as filters (facets) against a query to pull only the documents that contain a given tag:

...
"filter": [
  {
    "terms": {
      "tags": [
        "Race"
      ]
    }
  },
  ...
]

But you want the user to be able to search for possible tag filters. So if the user types race, the results should show (from the previous example) ['Race', 'Tracey Chapman', 'Disgrace']. That way, the user can find a filter to use. In order to accomplish this, I had to use aggregations:

{
  "aggs": {
    "topics": {
      "terms": {
        "field": "tags",
        "include": ".*[Rr][Aa][Cc][Ee].*", // I have to dynamically form this
        "size": 6
      }
    }
  },
  "size": 0
}

This gives me exactly what I need! But it is slow, very slow. I've tried adding an execution_hint, but it does not help.
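
For reference, the hinted variant I tried looked roughly like this (map is the hint I experimented with; as far as I know, global_ordinals is the default for keyword fields anyway):

{
  "aggs": {
    "topics": {
      "terms": {
        "field": "tags",
        "include": ".*[Rr][Aa][Cc][Ee].*",
        "execution_hint": "map",
        "size": 6
      }
    }
  },
  "size": 0
}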

You may think, "Just use a query before the aggregate!" But the issue is that the aggregation will still collect every tag value from every matching document, meaning you can end up displaying tags that are completely unrelated. If I queried for race before the aggregate and did not use the include regex, I would get all those other values, like 'Horizontal', etc...
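
To illustrate (the index name is just a placeholder for my real setup), something like this matches the right documents, but the buckets still contain every tag those documents carry, 'Horizontal' included:

GET myindex/_search
{
  "query": {
    "match": {
      "tags": "race"
    }
  },
  "aggs": {
    "topics": {
      "terms": {
        "field": "tags",
        "size": 6
      }
    }
  },
  "size": 0
}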

How can I rewrite this aggregation to work faster? Is there a better way to write this? Do I really have to make a separate index just for the values? (sad face) This seems like it would be a common issue, but I have found no answers in the documentation or through googling.

mclenithan

1 Answer


You certainly don't need a separate index just for the values...

Here's my take on it:

  1. What you're doing with the regex is essentially what should've been done by a tokenizer -- i.e. constructing substrings (or N-grams) such that they can be targeted later.
    This means that the keyword Race will need to be tokenized into the n-grams ["rac", "race", "ace"]. (It doesn't really make sense to go any lower than 3 characters -- most autocomplete libraries ignore inputs shorter than 3 characters because the number of possible matches balloons too quickly.)

Elasticsearch offers the N-gram tokenizer but we'll need to increase the default index-level setting called max_ngram_diff from 1 to (arbitrarily) 10 because we want to catch as many ngrams as is reasonable:

PUT tagindex
{
  "settings": {
    "index": {
      "max_ngram_diff": 10
    },
    "analysis": {
      "analyzer": {
        "my_ngrams_analyzer": {
          "tokenizer": "my_ngrams",
          "filter": [ "lowercase" ]
        }
      },
      "tokenizer": {
        "my_ngrams": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 10,
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  },
  { "mappings": ... }                                 --> see below
}
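
Once the index exists, you can sanity-check the tokenizer with the _analyze API -- Race should come back as exactly the lowercased n-grams from step 1, i.e. rac, race, and ace:

POST tagindex/_analyze
{
  "analyzer": "my_ngrams_analyzer",
  "text": "Race"
}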
  2. When your tags field is a flat list of keywords, it's simply not possible to aggregate on individual tags without resorting to the include option, which accepts either exact matches or a regex (which you're already using). Now, we cannot guarantee exact matches, but we don't want the regex either! So that's why we need to use a nested list, which'll treat each tag separately.

Now, nested lists are expected to contain objects, so

{
  "tags": ["Race", "Racing", "Mountain Bike", "Horizontal"]
}

will need to be converted to

{
  "tags": [
    { "tag": "Race" },
    { "tag": "Racing" },
    { "tag": "Mountain Bike" },
    { "tag": "Horizontal" }
  ]
}

After that we'll proceed with the multi-field mapping, keeping the original tags intact but also adding a .tokenized field to search on and a .keyword field to aggregate on:

  "index": { ... },
  "analysis": { ... },
  "mappings": {
    "properties": {
      "tags": {
        "type": "nested",
        "properties": {
          "tag": {
            "type": "text",
            "fields": {
              "tokenized": {
                "type": "text",
                "analyzer": "my_ngrams_analyzer"
              },
              "keyword": {
                "type": "keyword"
              }
            }
          }
        }
      }
    }
  }

We'll then add our adjusted tags docs:

POST tagindex/_doc
{"tags":[{"tag":"Race"},{"tag":"Racing"},{"tag":"Mountain Bike"},{"tag":"Horizontal"}]}

POST tagindex/_doc
{"tags":[{"tag":"Tracey Chapman"},{"tag":"Silverfish"},{"tag":"Blue"}]}

POST tagindex/_doc
{"tags":[{"tag":"Surfing"},{"tag":"Race"},{"tag":"Disgrace"}]}

and apply a nested -> filter -> terms aggregation chain (note that term queries are not analyzed, so the user's input needs to be lowercased before it's plugged into tags.tag.tokenized):

GET tagindex/_search
{
  "aggs": {
    "topics_parent": {
      "nested": {
        "path": "tags"
      },
      "aggs": {
        "topics": {
          "filter": {
            "term": {
              "tags.tag.tokenized": "race"
            }
          },
          "aggs": {
            "topics": {
              "terms": {
                "field": "tags.tag.keyword",
                "size": 100
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}

yielding

{
  ...
  "topics_parent" : {
    ...
    "topics" : {
      ...
      "topics" : {
        ...
        "buckets" : [
          {
            "key" : "Race",
            "doc_count" : 2
          },
          {
            "key" : "Disgrace",
            "doc_count" : 1
          },
          {
            "key" : "Tracey Chapman",
            "doc_count" : 1
          }
        ]
      }
    }
  }
}
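
Side note: once the user picks one of the suggested tags, the terms filter from your question will need a small adjustment too, since a flat filter cannot reach inside a nested field -- something along these lines:

GET tagindex/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "nested": {
            "path": "tags",
            "query": {
              "term": {
                "tags.tag.keyword": "Race"
              }
            }
          }
        }
      ]
    }
  }
}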

Caveats

  • in order for this to work, you'll have to reindex (see the sketch after this list)
  • ngrams will increase the storage footprint -- depending on how many tags-per-doc you have, it may become a concern
  • nested fields are internally treated as "separate documents" so this affects the disk space too
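
A rough sketch of that reindex -- assuming your original index is called myindex and that tags is always a plain array of strings:

POST _reindex
{
  "source": { "index": "myindex" },
  "dest": { "index": "tagindex" },
  "script": {
    "source": """
      def converted = [];
      for (t in ctx._source.tags) {
        converted.add(['tag': t]);
      }
      ctx._source.tags = converted;
    """
  }
}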

P.S.: This is an interesting use case. Let me know how the implementation went!

Joe - GMapsBook.com
  • This is really great, Joe! I'll be implementing this tomorrow, and then I'll mark your answer as correct if it all works -- looks like it will. Thanks a lot! – mclenithan Jan 06 '21 at 10:15
  • Yes it did! But due to deadlines on a production deployment, we had to modify the requirements (it's not a true regex/ngram suggestion search, but close ;)). I took the first part of your query to narrow down the results. That still returns too many values with our modified standard analyzer, BUT then we can apply the regex against only those results. The queries are now sub-100ms vs 6s. And since this is part of suggestions, and we only have a few fields like tags that actually need the include regex, it feels like a solid solution. Eventually, I want to finish your full implementation on prod. – mclenithan Jan 09 '21 at 13:03
  • Thanks again for the help, that was a huge help! And will be recommending your book when it comes out ;) – mclenithan Jan 09 '21 at 13:04
  • ~100ms is awesome! @ book -- thank you ;) – Joe - GMapsBook.com Jan 10 '21 at 09:18