
In our company's internal network we have self-signed certificates used for applications that run in DEV or staging environments. On our local machines they are already trusted, because Active Directory provides that via Group Policy Objects. But in the Kubernetes (OpenShift) world, we have to do some additional work to get SSL/TLS traffic to succeed.

In the Dockerfile of the related application, we copy the certificate into the container and trust it while building the Docker image. After that, requests from the application running in the container to an HTTPS endpoint served with that self-signed certificate succeed. Otherwise we encounter errors like "The SSL/TLS secure channel cannot be established".

COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt && update-ca-certificates
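For context, the full pattern in a Debian/Ubuntu-based image looks roughly like the sketch below (the base image and certificate filename are illustrative, not from the original question):

```dockerfile
# Illustrative Debian/Ubuntu base image (assumption)
FROM ubuntu:22.04

# ca-certificates provides the update-ca-certificates tool
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy the internal CA certificate into the system trust-anchor directory
# and regenerate the consolidated trust store
COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt \
    && update-ca-certificates
```

On Red Hat-based images the directory and command differ (`/etc/pki/ca-trust/source/anchors/` and `update-ca-trust`), so the snippet depends on the base image family.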

However, I don't think this is the best way to do it. It requires a lot of operational work whenever the certificate expires; in short, it's hard to manage and maintain. I wonder what the efficient way to handle this is.

Thanks in advance for your support.

happy-integer

2 Answers


Typically this should be configured cluster-wide by your OpenShift administrators following the documentation below, so that your containers trust your internal root CA by default (`additionalTrustBundle`):

https://docs.openshift.com/container-platform/4.6/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
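A minimal sketch of what that cluster-wide setup looks like on a running cluster, per the linked documentation (the ConfigMap name and file path here are illustrative):

```shell
# Create a ConfigMap in the openshift-config namespace containing the
# internal CA bundle under the key ca-bundle.crt
oc create configmap custom-ca \
    --from-file=ca-bundle.crt=/path/to/internal-ca.crt \
    -n openshift-config

# Reference the ConfigMap from the cluster-wide proxy object; OpenShift
# then injects the bundle into the trust stores it manages
oc patch proxy/cluster --type=merge \
    -p '{"spec":{"trustedCA":{"name":"custom-ca"}}}'
```

With this in place, the per-image `COPY`/`update-ca-certificates` step in each Dockerfile becomes unnecessary, and rotating the certificate is a single cluster-level change.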

Simon

"Best" is highly relative, but you could start by pulling the certificate out into a ConfigMap and mounting it into your container(s). That pushes the work of updating it out to runtime, but it introduces a fair bit of complexity. It depends on how often the certificate changes and how much you can automate rebuilds/redeploys when it does.
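The ConfigMap approach might be sketched like this (all names, images, and paths below are illustrative assumptions, not from the original answer):

```yaml
# First create the ConfigMap from the certificate file, e.g.:
#   oc create configmap internal-ca --from-file=mycertificate.crt
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.example.com/myapp:latest
          volumeMounts:
            # Mount the certificate into the trust-anchor directory
            - name: internal-ca
              mountPath: /usr/local/share/ca-certificates/mycertificate.crt
              subPath: mycertificate.crt
              readOnly: true
      volumes:
        - name: internal-ca
          configMap:
            name: internal-ca
```

Note that mounting the file alone is not always enough: the container must still run `update-ca-certificates` (for example in an entrypoint script), or the application must be pointed at the certificate file directly, for the mounted CA to actually be trusted.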

coderanger