In the OpenDistro Helm README.md, the "Example Secure Kibana Config With Custom Certs" section defines:

```yaml
elasticsearch.hosts: https://elasticsearch.example.com:443
```
This implies a DNS hostname external to the Kubernetes cluster. However, the value generated when using the defaults (i.e., not using custom certs) is:

```yaml
# If no custom configuration provided, default to internal DNS
- name: ELASTICSEARCH_HOSTS
  value: https://opendistro-es-client-service:9200
```

which comes from kibana-deployment.yaml:

```yaml
value: https://{{ template "opendistro-es.fullname" . }}-client-service:9200
```
Shouldn't a typical Kibana kibana.yml also use the internal DNS, and therefore still be `opendistro-es-client-service:9200`, or `opendistro-es-client-service.default.svc.cluster.local:9200` (assuming, for example, the default namespace)? Why would you not use the internal DNS?
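For comparison, here is a sketch of what I would have expected the kibana.yml entry to look like with the internal service DNS (assuming the chart's default release name `opendistro-es` and the `default` namespace; these names are assumptions, not taken from the README):

```yaml
# Sketch, not from the README: internal service FQDN instead of an external hostname
elasticsearch.hosts: https://opendistro-es-client-service.default.svc.cluster.local:9200
```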
UPDATE: There is a similar question about `opendistro_security.nodes_dn` in `elasticsearch.config` (which is copied to elasticsearch.yml):

```yaml
# See: https://github.com/opendistro-for-elasticsearch/security/blob/master/securityconfig/elasticsearch.yml.example#L17
opendistro_security.nodes_dn:
  - 'CN=nodes.example.com'
```
It is not spelled out anywhere that I can find, but I am assuming this is the CN from the Subject of the cert defined by `elasticsearch.ssl.transport.existingCertSecret`. Again, shouldn't these be, if anything, the internal Kubernetes DNS names? Or does it not matter if `opendistro_security.ssl.transport.enforce_hostname_verification` is `false`?
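To check the assumption about where the `nodes_dn` CN comes from, you can inspect the Subject of the transport cert with openssl. This sketch generates a throwaway self-signed cert with `CN=nodes.example.com` purely for illustration; in practice you would run the `x509` step against the cert from the secret referenced by `elasticsearch.ssl.transport.existingCertSecret`:

```shell
# Generate a throwaway self-signed cert (illustration only; substitute the
# real transport cert from your existingCertSecret)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/node.key -out /tmp/node.crt \
  -subj "/CN=nodes.example.com"

# Print the Subject; the CN shown here is what nodes_dn would have to list
openssl x509 -in /tmp/node.crt -noout -subject
```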
- The default is `true`.
- The value in the default `elasticsearch.yml` (according to the Helm README.md) is `false`.
- The actual example (further down in the README.md) does not set it, so presumably it is `true`.
- But the actual values.yaml has a commented-out value set to `false`. (I presume you are supposed to uncomment that when defining your own config, which you must do when adding your own certs.)
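Given that ambiguity, here is a sketch of a chart config block that makes the setting explicit rather than relying on the commented-out default. The surrounding `elasticsearch.config` key layout follows my reading of values.yaml, so treat it as an assumption:

```yaml
elasticsearch:
  config:
    # Assumption: explicitly disabling hostname verification is needed when
    # the cert CNs (e.g. CN=nodes.example.com) do not match the internal
    # Kubernetes service DNS names
    opendistro_security.ssl.transport.enforce_hostname_verification: false
```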