You should definitely do what the commenter Val suggests and denormalize (flatten) your data if it is at all possible. For example, you could use documents like this (essentially, doing the join before indexing):
B/type2/1 {"serial": "abc", "temp": 1, "member": "jack"}
B/type2/2 {"serial": "abc", "water": 0, "member": "jack"}
B/type2/3 {"serial": "def", "temp": 10, "member": "jack"}
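You could index those documents with the bulk API, for instance. A minimal sketch, assuming an index named b and type type2 taken from the IDs above (Elasticsearch index names must be lowercase, and the _type line applies to pre-7.x clusters):

POST /_bulk
{"index": {"_index": "b", "_type": "type2", "_id": "1"}}
{"serial": "abc", "temp": 1, "member": "jack"}
{"index": {"_index": "b", "_type": "type2", "_id": "2"}}
{"serial": "abc", "water": 0, "member": "jack"}
{"index": {"_index": "b", "_type": "type2", "_id": "3"}}
{"serial": "def", "temp": 10, "member": "jack"}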
Then if you search with {"match": {"member": "jack"}}, you'll get all of those documents back.
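As a full request, that search could look like this (a sketch, with the same index and type assumptions as above):

GET /b/type2/_search
{
  "query": {
    "match": {"member": "jack"}
  }
}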
There are two ways of doing something like joins in Elasticsearch: parent-child relationships and nested objects. Here's an example of how you could create your mapping with nested objects:
{
  "type1": {
    "properties": {
      "serial": {"type": "keyword"},
      "member": {"type": "keyword"},
      "type2s": {
        "type": "nested",
        "properties": {
          "temp": {"type": "integer"},
          "water": {"type": "integer"}
        }
      }
    }
  }
}
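You'd supply that mapping when creating the index. A sketch, again assuming an index named b and a pre-7.x cluster (the mapping uses a custom type name, type1, which newer versions no longer allow):

PUT /b
{
  "mappings": {
    "type1": {
      "properties": {
        "serial": {"type": "keyword"},
        "member": {"type": "keyword"},
        "type2s": {
          "type": "nested",
          "properties": {
            "temp": {"type": "integer"},
            "water": {"type": "integer"}
          }
        }
      }
    }
  }
}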
Then you would store a record like this:
{
  "serial": "abc",
  "member": "jack",
  "type2s": [
    {
      "temp": 1
    },
    {
      "water": 0
    }
  ]
}
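Searching inside those nested objects then requires wrapping your criteria in a nested query with a path. A minimal sketch (the range condition is just an illustration):

{
  "query": {
    "nested": {
      "path": "type2s",
      "query": {
        "range": {"type2s.temp": {"gte": 5}}
      }
    }
  }
}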
However, I would strongly urge you not to do this unless you absolutely have to! Use cases where this is a good idea are rare. It makes querying your data more complex, and it's inefficient (so as your data scales, you are going to have issues much sooner).
I know it feels wrong to "duplicate" data; it would be a terrible practice in a relational database. But effective, efficient data modeling in Elasticsearch requires a different way of thinking, and one of the differences is that you shouldn't worry too much about duplicating data.