
I have an external Elasticsearch instance that I'd like Fluentd and Kibana to use in OSE 3.11. The ES instance is insecure at the moment, as this is simply an internal pilot. Based on the OSE docs here (https://docs.openshift.com/container-platform/3.11/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance), I should be able to update a number of ES_* variables in the Elasticsearch deployment config. The first issue is that the variables referenced in the docs don't exist in the Elasticsearch deployment config.
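
A quick way to see what is actually defined is to list the environment variables on the deployment config itself. This is only a sketch; the DC name is a placeholder, so substitute whatever oc get dc returns on your cluster:

    # list the deployment configs created by the logging installer
    oc get dc -n openshift-logging

    # show the env vars currently set on the Elasticsearch DC
    oc set env dc/<your-logging-es-dc> --list -n openshift-logging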

Secondly, I tried updating these values via the inventory file. For example, the description for the property openshift_logging_es_host claims: "The name of the Elasticsearch service where Fluentd should send logs."

These were the values in my inventory file:

openshift_logging_install_logging=true
openshift_logging_es_ops_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'}
openshift_logging_es_host='169.xx.xxx.xx'
openshift_logging_es_port='9200'
openshift_logging_es_ops_host='169.xx.xxx.xx'
openshift_logging_es_ops_port='9200'
openshift_logging_kibana_env_vars={'ELASTICSEARCH_URL':'http://169.xx.xxx.xx:9200'}
openshift_logging_es_ca=none
openshift_logging_es_client_cert=none
openshift_logging_es_client_key=none
openshift_logging_es_ops_ca=none
openshift_logging_es_ops_client_cert=none
openshift_logging_es_ops_client_key=none

The only variable above that seems to stick after uninstall/install of logging is openshift_logging_kibana_env_vars. I'm not sure why the others weren't respected - perhaps I'm missing one that triggers use of these vars.

In any case, after those attempts failed, I eventually found the values set on the logging-fluentd Daemon Set. I can edit via the CLI or the console to set the ES host, port, client keys, certs, etc. I also set the ops equivalents. The fluentd logs confirm these values are set; however, it's attempting to use https in conjunction with the default fluentd/changeme id/pwd combo.
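
For illustration, setting these from the CLI looks roughly like the following. Treat the variable names as assumptions and verify them against the DaemonSet first, since they may differ per release; the host is a placeholder:

    # show the env vars currently set on the fluentd DaemonSet
    oc set env ds/logging-fluentd --list -n openshift-logging

    # point fluentd at the external instance (app and ops)
    oc set env ds/logging-fluentd ES_HOST=<external-es-host> ES_PORT=9200 \
      OPS_HOST=<external-es-host> OPS_PORT=9200 -n openshift-logging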

2019-03-08 11:49:00 -0600 [warn]: temporarily failed to flush the buffer. next_retry=2019-03-08 11:54:00 -0600 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"169.xx.xxx.xx\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"})!" plugin_id="elasticsearch-apps"

So, ideally, I'd like to set these as inventory variables, and everything just works. If anybody has a suggestion to fix that issue, please let me know.

Less than ideal, I can modify the ES deployment config or the Fluentd Daemon Set post-install and set the required values, assuming someone knows how to avoid https?

Thanks for any input you might have.

Update:

I managed to get this working, but not via the documented properties or the provided suggestion. I ended up going through the various playbooks to identify the vars actually being used. I also had to set up mutual TLS, because when I specified the cert file locations as none/undefined, the logs indicated a 'File not found'. Essentially, none or undefined gets translated to "", which fluentd then tries to open as a file. So, this was the magic combination of properties that will get you 99.9% of the way:

openshift_logging_es_host=169.xx.xxx.xxx
openshift_logging_fluentd_app_host=169.xx.xxx.xxx
openshift_logging_fluentd_ops_host=169.xx.xxx.xxx
openshift_logging_fluentd_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_cert_path='/tmp/keys/client.cer'
openshift_logging_fluentd_ops_ca_path='/tmp/keys/client-ca.cer'
openshift_logging_fluentd_ops_key_path='/tmp/keys/client.key'
openshift_logging_fluentd_ops_cert_path='/tmp/keys/client.cer'

Notes:

  • You need to copy the keys to /tmp/keys beforehand (see the sketch after these notes).
  • Upon completion, you will notice that OPS_HOST is not set on the Daemon Set. I left it in the properties above as I think it's just a bug, which will perhaps be fixed beyond 3.11, the version I'm using. To adjust this, simply oc edit ds/logging-fluentd and modify accordingly (sketched below).
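
To make those two notes concrete, here is a rough sketch of both steps; the file names mirror the inventory paths above, and the host is a placeholder:

    # copy the client certs/keys to the paths referenced in the inventory
    mkdir -p /tmp/keys
    cp client-ca.cer client.cer client.key /tmp/keys/

    # if OPS_HOST didn't get set by the installer, patch it on the DaemonSet
    oc set env ds/logging-fluentd OPS_HOST=<external-es-host> -n openshift-logging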

With these changes, the log data gets sent to my external ES instance.
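
As a quick sanity check that data is arriving, you can list the indices on the external instance (host placeholder is hypothetical):

    # list indices on the external (insecure, http) Elasticsearch instance
    curl -s 'http://<external-es-host>:9200/_cat/indices?v'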

M B

1 Answer


My suggestion is a less ideal solution: send the logs to an external log aggregator using secure-forward.conf. Refer to the "Configuring Fluentd to Send Logs to an External Log Aggregator" section of the docs for more details.

You can configure the elasticsearch output plugin, as well as the secure_forward plugin, without https.

For instance,

# oc edit cm logging-fluentd -n openshift-logging
...
  secure-forward.conf: |
    <store>
      @type elasticsearch
      host external.es.example.com
      port 9200
    </store>
...
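
Note that fluentd reads this configuration at startup, so the fluentd pods need to be recycled after editing the ConfigMap. One way to do that is shown below; the component=fluentd label selector is an assumption, so check the labels on your pods first:

    # restart the fluentd pods so they pick up the new ConfigMap
    oc delete pod -l component=fluentd -n openshift-logging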

UPDATE: I've tested against an external fluentd instead of ES, because I don't have an external ES instance at hand. To check that logs were flowing, I also printed the logs out to a file during the test.

  secure-forward.conf: |
    <store>
      @type forward
      <server>
        host external.fluented.example.com
        port 24224
      </server>
    </store>
    <store>
      @type file
      path /var/log/secure-forward-test.log
    </store>

I've verified that the above configuration transfers the logs to the external fluentd and to the local log file.
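
To double-check from the cluster side, something along these lines can be used; the pod name is a placeholder, the label selector carries the same caveat as above, and fluentd's file output typically writes the path with a date/buffer suffix:

    # list the fluentd pods, then look for the test log file inside one of them
    oc get pods -n openshift-logging -l component=fluentd
    oc exec <fluentd-pod> -n openshift-logging -- ls -l /var/log/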

Daein Park
  • I don't have an external log aggregator at the moment, so I limited my config map edits to the elasticsearch output plugin. I can see a /user/output-es-config.conf on the pod, but those changes are not taking effect. Do I need to include it somehow? I still plan to pursue the inventory/playbook route even if this addresses the issue temporarily. – M B Mar 11 '19 at 15:49
  • AFAIK you cannot change `/user/output-es-config.conf` directly unless you rebuild the `logging-fluentd` image with your changes; the customizable config files are limited to ones such as `secure-forward.conf`. So you should configure `secure-forward.conf` in the `logging-fluentd` `ConfigMap` for your needs. – Daein Park Mar 11 '19 at 23:44
  • I attempted your suggestion; however, I'm seeing this in the fluentd pod log: `[warn]: no patterns matched tag="output_tag"` – M B Mar 12 '19 at 03:09
  • @MB I've updated my answer to add my test configuration. Check whether or not it works for you. – Daein Park Mar 12 '19 at 08:55