I have an external Elasticsearch instance that I'd like Fluentd and Kibana in OSE 3.11 to use. The ES instance is insecure at the moment, as this is simply an internal pilot. Based on the OSE docs here (https://docs.openshift.com/container-platform/3.11/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance), I should be able to update a number of ES_* variables in the Elasticsearch deployment config. The first issue is that the variables referenced in the docs don't exist in the Elasticsearch deployment config.
Secondly, I tried setting these values via the inventory file. For example, the description for the property openshift_logging_es_host claims: "The name of the Elasticsearch service where Fluentd should send logs."
These were the values in my inventory file:
openshift_logging_install_logging=true
openshift_logging_es_ops_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_host='169.xx.xxx.xx'
openshift_logging_es_port='9200'
openshift_logging_es_ops_host='169.xx.xxx.xx'
openshift_logging_es_ops_port='9200'
openshift_logging_kibana_env_vars={'ELASTICSEARCH_URL':'http://169.xx.xxx.xx:9200'}
openshift_logging_es_ca=none
openshift_logging_es_client_cert=none
openshift_logging_es_client_key=none
openshift_logging_es_ops_ca=none
openshift_logging_es_ops_client_cert=none
openshift_logging_es_ops_client_key=none
The only variable above that seems to stick after an uninstall/reinstall of logging is openshift_logging_kibana_env_vars. I'm not sure why the others weren't respected; perhaps I'm missing a variable that triggers their use.
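One quick way to see which of these values actually made it onto the collector is to list the DaemonSet's environment via `oc set env` (a sketch; this assumes the default openshift-logging namespace used by the 3.11 installer):

```shell
# List all environment variables currently set on the fluentd DaemonSet,
# which shows whether the inventory values were applied at install time.
oc -n openshift-logging set env ds/logging-fluentd --list

# Narrow the output to the Elasticsearch-related variables:
oc -n openshift-logging set env ds/logging-fluentd --list | grep -E '^(ES|OPS)_'
```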
In any case, after those attempts failed, I eventually found the values set on the logging-fluentd DaemonSet. I can edit it via the CLI or the console to set the ES host, port, client keys, certs, etc., and I also set the ops equivalents. The fluentd logs confirm these values are set; however, it's attempting to use https in conjunction with the default fluentd/changeme id/pwd combo:
2019-03-08 11:49:00 -0600 [warn]: temporarily failed to flush the buffer. next_retry=2019-03-08 11:54:00 -0600 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"169.xx.xxx.xx\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"})!" plugin_id="elasticsearch-apps"
So, ideally, I'd like to set these as inventory variables and have everything just work. If anybody has a suggestion to fix that issue, please let me know.
Less than ideal: I can modify the ES deployment config or the fluentd DaemonSet post-install and set the required values, assuming someone knows how to avoid https?
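For context, the https scheme comes from the fluentd elasticsearch output configuration rather than from the env vars alone. The upstream fluent-plugin-elasticsearch plugin supports a `scheme` parameter (it defaults to http upstream, but the OpenShift-shipped config may pin it to https), so a hedged sketch of the kind of output section that would need adjusting looks like this (exact file layout and surrounding settings will differ in the shipped image):

```
# Hypothetical fluentd output store; `scheme http` is the knob that
# keeps the plugin from connecting over https.
<store>
  @type elasticsearch
  host "#{ENV['ES_HOST']}"
  port "#{ENV['ES_PORT']}"
  scheme http
  user fluentd
  password changeme
</store>
```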
Thanks for any input you might have.
Update:
I managed to get this working, but not via the documented properties or the provided suggestion. I ended up going through the various playbooks to identify the vars actually being used. I also had to set up mutual TLS, as when I specified the cert file locations as none/undefined, the logs indicated a 'File not found'. Essentially, none or undefined gets translated to "", which fluentd then tries to open as a file. So, this was the magic combination of properties that will get you 99.9% of the way:
openshift_logging_es_host=169.xx.xxx.xxx
openshift_logging_fluentd_app_host=169.xx.xxx.xxx
openshift_logging_fluentd_ops_host=169.xx.xxx.xxx
openshift_logging_fluentd_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_cert_path='/tmp/keys/client.cer'
openshift_logging_fluentd_ops_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_ops_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_ops_cert_path='/tmp/keys/client.cer'
Notes:
- You need to copy the keys to /tmp/keys beforehand.
- Upon completion, you will notice that OPS_HOST is not set on the DaemonSet. I left the ops properties above as I think this is just a bug, and perhaps it will be fixed beyond 3.11, which is what I'm using. To adjust it, simply oc edit ds/logging-fluentd and modify accordingly.
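To sanity-check the mutual TLS setup outside of fluentd, you can exercise the same key material with curl (a sketch; substitute your ES host for the placeholder, and note the paths assume the /tmp/keys layout above):

```shell
# Query the external Elasticsearch over TLS using the same CA, client
# cert, and key that fluentd is given; a JSON cluster-health response
# means the certificates and trust chain are working.
curl --cacert /tmp/keys/client-ca.cer \
     --cert   /tmp/keys/client.cer \
     --key    /tmp/keys/client.key \
     "https://<es-host>:9200/_cluster/health?pretty"
```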
With these changes, the log data gets sent to my external ES instance.