
I am trying to use it to run some integration tests, i.e. to verify that the service code I am deploying is actually doing the right thing.

Basically, my setup follows the chart tests documentation (https://docs.helm.sh/developing_charts/#chart-tests): I create a templates/tests/integration-test.yaml chart test file that specifies a container to run. The container is a customized Maven image with the test code added in, and it is simply started with the command "mvn test", which does some simple curl checks against the Kubernetes service that this whole Helm release deploys.
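The chart test file looks roughly like this (image name and other details simplified, not my actual values):

```yaml
# templates/tests/integration-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-integration-test"
  annotations:
    # marks this pod as a chart test, run via `helm test <release>`
    "helm.sh/hook": test-success
spec:
  containers:
    - name: integration-test
      # customized Maven image with the test code added in (placeholder name)
      image: my-registry/my-service-tests:latest
      command: ["mvn", "test"]
  restartPolicy: Never
```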

In this way, the helm test does work.

However, the issue is that while the helm test is running, the new version of the service code is already online and exposed to the outside world/users. I can of course roll back immediately if the helm test fails, but that still means serving the problematic version of the service to the outside world for a while.

Is there a way to run a service/integration test against a pod after the pod has started, but before it is exposed through the Kubernetes service?


2 Answers


Ideally you'll install and test in a test environment first, either a dedicated test cluster or namespace. For an additional check, you could install the chart into a new namespace, let the tests run there, and then delete that namespace once everything passes. This does require writing the tests so that they hit URLs specific to that namespace. Cluster-internal URLs based on service names are namespace-relative anyway, but if the tests use external URLs then you'd either need to switch them to internal ones or use prefixing.
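As a rough sketch of that flow, assuming Helm 2 syntax (matching the docs linked in the question) and placeholder chart/release/namespace names:

```sh
# install the chart into a throwaway namespace (Tiller creates it if it doesn't exist)
helm install ./mychart --name smoke-test --namespace smoke-test

# run the chart's test hooks against that release
helm test smoke-test

# tear everything down once the tests pass
helm delete --purge smoke-test
kubectl delete namespace smoke-test
```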

  • That makes sense. I haven't looked into using namespaces yet. Our current helm-chart test runs on our test and sandbox environments as a sanity check before hitting production, so I guess the temporary exposure of the new version of the service there is probably fine? :) I will see whether others come up with other answers later, before marking your answer as the solution :) – Fuyang Liu Jan 11 '19 at 13:33
  • Are you running the same tests on live or do you disable the hooks when installing the chart on live? – Ryan Dawson Jan 11 '19 at 14:02
  • It's a group of tests, and all of them "run" no matter which environment they are on. But some of them only do real checks on the sandbox environment: we add a few lines of code so those tests check an environment parameter, and if it isn't "sandbox" they just return void. – Fuyang Liu Jan 11 '19 at 14:14
  • I think post-release sanity checks are a good way to use the helm test hooks. For more extensive tests that create or modify data, I currently prefer to run those from the pipeline, since you don't want to change data in live. For live you want to switch traffic over and have checks/monitoring confirming that the new version is handling the traffic correctly. – Ryan Dawson Jan 11 '19 at 15:21
  • It makes sense. Thank you Ryan :) – Fuyang Liu Jan 17 '19 at 10:19

Use readiness and liveness probes in the pod spec to ensure the deployment won't even roll out if there are probe failures. Kubernetes only adds a pod to the service's endpoints once its readiness probe passes, so a new pod that fails the probe never receives user traffic.
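A minimal sketch of what that looks like in a Deployment's pod spec, assuming a hypothetical /healthz endpoint on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-registry/my-service:latest   # placeholder image
          ports:
            - containerPort: 8080
          # the Service only routes traffic to this pod once this probe passes
          readinessProbe:
            httpGet:
              path: /healthz   # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          # restarts the container if it stops responding after startup
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
```

With the default RollingUpdate strategy, the old pods keep serving traffic until the new pods pass their readiness probes, so a version that fails the probe never gets exposed.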
