I'm currently using the phillbaker/elasticsearch Terraform provider to create indices on an AWS OpenSearch cluster running Elasticsearch 7.10.
For most of my indices this works fine; however, I have one large index that needs more than the default limit of 1000 fields. Based on the provider's documentation there isn't an argument for the field limit, and when I try putting it in the mapping instead, the request fails with a mapping parse error.
This is the Terraform block for the index:
resource "elasticsearch_index" "large_index" {
name = "large-index"
number_of_shards = 1
number_of_replicas = 2
mappings = file("${path.root}/elastic-search-mappings/large-index.json")
analysis_analyzer = jsonencode({
email_analyzer = {
tokenizer = "uax_url_email"
}
})
}
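This is roughly what I tried at the root of the mapping file (the real file has far more fields; "example_field" is just a placeholder here), and it's this version that produces the first error shown further down:

{
  "total_fields": {
    "limit": 2000
  },
  "properties": {
    "example_field": {
      "type": "keyword"
    }
  }
}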
Is there another way to configure this setting with this provider?
Here is the error from putting the setting into the mapping, followed by the original field-limit error:
Error: elastic: Error 400 (Bad Request): Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [total_fields : {limit=2000}] [type=mapper_parsing_exception]
Error: elastic: Error 400 (Bad Request): Limit of total fields [1000] has been exceeded [type=illegal_argument_exception]
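For context, if I'm reading the Elasticsearch docs correctly, the setting I'm after is index.mapping.total_fields.limit, which is an index setting rather than part of the mapping, so outside of Terraform I'd expect to apply it with something like (Kibana dev tools syntax):

PUT /large-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}

I'm hoping there's a way to express that same setting through the elasticsearch_index resource (or another resource in this provider).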