
I have deployed Open Distro for Elasticsearch using a Helm chart that I modified myself.

The Kibana Kubernetes Service looks like this:

apiVersion: v1
kind: Service
metadata:
  annotations:
  creationTimestamp: "2019-09-05T15:29:04Z"
  labels:
    app: opendistro-es
    chart: opendistro-es-1.0.0
    heritage: Tiller
    release: opendistro-es
  name: opendistro-es-kibana
  namespace: elasticsearch
  resourceVersion: "48313341"
  selfLink: /api/v1/namespaces/elasticsearch/services/opendistro-es-kibana
  uid: e5066171-cff1-11e9-bb87-42010a8401d0
spec:
  clusterIP: 10.15.246.245
  ports:
  - name: opendistro-es-kibana
    port: 443
    protocol: TCP
    targetPort: 5601
  selector:
    app: opendistro-es-kibana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

and the Pod looks like this:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: a4af5a55572dd6587cb86b0e6b3758f682c23745ad114448ce93c19e9612b6a
  creationTimestamp: "2019-09-05T15:29:04Z"
  generateName: opendistro-es-kibana-5f78f46bb-
  labels:
    app: opendistro-es-kibana
    chart: opendistro-es-1.0.0
    heritage: Tiller
    pod-template-hash: 5f78f46bb
    release: opendistro-es
  name: opendistro-es-kibana-5f78f46bb-8pqfs
  namespace: elasticsearch
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: opendistro-es-kibana-5f78f46bb
    uid: e4a7a0fe-cff1-11e9-bb87-42010a8401d0
  resourceVersion: "48313352"
  selfLink: /api/v1/namespaces/elasticsearch/pods/opendistro-es-kibana-5f78f46bb-8pqfs
  uid: e4acd8b3-cff1-11e9-bb87-42010a8401d0
spec:
  containers:
  - env:
    - name: CLUSTER_NAME
      value: elasticsearch
    image: amazon/opendistro-for-elasticsearch-kibana:1.0.2
    imagePullPolicy: IfNotPresent
    name: opendistro-es-kibana
    ports:
    - containerPort: 5601
      protocol: TCP
    resources:
      limits:
        cpu: 2500m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/kibana/config/kibana.yml
      name: config
      subPath: kibana.yml
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: opendistro-es-kibana-token-9g8mq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-ehealth-africa-d-concourse-ci-poo-98690882-h3lj
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: opendistro-es-kibana
  serviceAccountName: opendistro-es-kibana
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: opendistro-es-security-config
    name: security-config
  - name: config
    secret:
      defaultMode: 420
      secretName: opendistro-es-kibana-config
  - name: opendistro-es-kibana-token-9g8mq
    secret:
      defaultMode: 420
      secretName: opendistro-es-kibana-token-9g8mq

Unfortunately, when I try to curl the Kibana service name I get connection refused:

curl: (7) Failed connect to opendistro-es-kibana:443; Connection refused

When I use

kubectl port-forward svc/opendistro-es-kibana 5601:443

I'm able to access Kibana.

Any pointers of what I'm missing would be very much appreciated!
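
A quick way to narrow this down is to check, from a shell inside the cluster, whether the Service actually has endpoints and whether the Pod answers on its container port (a sketch; <pod-ip> is a placeholder for the IP reported by kubectl):

# List the endpoints behind the Service; an empty ENDPOINTS column means the
# selector or targetPort does not match the running Pod
kubectl -n elasticsearch get endpoints opendistro-es-kibana

# Find the Pod IP, then curl the Service port and the container port directly
kubectl -n elasticsearch get pod -o wide
curl opendistro-es-kibana.elasticsearch.svc.cluster.local:443
curl <pod-ip>:5601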

– Will Pink
  • "Unfortunately when I try and curl the Kibana service name I get connection refused" I assume this means you are trying to curl `opendistro-es-kibana.elasticsearch.svc.cluster.local` ? – Patrick W Sep 05 '19 at 17:49
  • Where are you running the curl from? And can you confirm the full curl command? If curl is trying to use TLS because of port 443 and Kibana does not have an SSL cert configured, the connection will be refused; however, connections on port 5601 (port forward = localhost:5601) won't necessarily use TLS unless you force it. – Patrick W Sep 05 '19 at 17:51
  • I'm doing the following (I updated the service to listen on 5601) ```[root@opendistro-es-client-58b688b566-ph74p elasticsearch]# curl opendistro-es-kibana.elasticsearch.svc.cluster.local:5601 curl: (7) Failed connect to opendistro-es-kibana.elasticsearch.svc.cluster.local:5601; Connection refused``` – Will Pink Sep 06 '19 at 08:02
  • Did you try curl opendistro-es-kibana.elasticsearch.svc.cluster.local:443? You configured the Service to listen on port 443, but you curled port 5601. – Hang Du Sep 06 '19 at 09:21
  • Without turning on port-forward, does it work if you use the clusterIP of the service instead of the name? Does curling the pod IP work? – Patrick W Sep 06 '19 at 11:39

2 Answers


Your Service is of type ClusterIP, therefore it is not accessible from outside the cluster. Change the type to NodePort to make it accessible via <your_node_ip>:<node_port>.

A better solution would be to use a Kubernetes Ingress to accept external traffic.
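
For example, a minimal sketch of the same Service switched to NodePort (the nodePort value is illustrative and must fall within the cluster's NodePort range, typically 30000-32767):

apiVersion: v1
kind: Service
metadata:
  name: opendistro-es-kibana
  namespace: elasticsearch
spec:
  type: NodePort
  selector:
    app: opendistro-es-kibana
  ports:
  - name: opendistro-es-kibana
    port: 443
    targetPort: 5601
    nodePort: 30601  # illustrative choice within the default 30000-32767 range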

– Efrat Levitan

OK, I managed to fix it: by default Kibana was only listening on the loopback interface. After setting server.host: "0.0.0.0" in kibana.yml it works fine.
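
For reference, a minimal sketch of the relevant part of kibana.yml (the file mounted from the opendistro-es-kibana-config Secret above; other settings are omitted):

# kibana.yml
server.host: "0.0.0.0"  # listen on all interfaces instead of only loopback
server.port: 5601       # matches the Service's targetPort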

Thanks for the suggestions.

– Will Pink