
For my project, I want to load some XMLs post-deployment, ad hoc, when required (I don't want to run this on every deployment). Loading them directly from a Jenkins or GitLab pipeline takes a long time due to bad throughput to the k8s cluster. I am using Helm for k8s deployment. Is there a better approach, or steps to get my ad hoc job to run faster? I tried passing my service account or my own account credentials to run the script after deployment in an optional stage, but was advised not to do so.

This data load task takes 10 minutes when run from the pod, but 70-90 minutes from Jenkins or GitLab.

Limitations:

  1. The service account is restricted from running bash scripts from the CLI post-deployment
  2. I won't be able to use my personal credentials in the pipeline, as per best practices

For now I am running this manual step in a shell script from my local machine when required, as shown below, but I need to make it part of the pipeline. Any help/suggestions?

# Get credentials/token for the target cluster and namespace.
pkstoken -cluster=<cluster> -user=<me> ns=sade_test1

echo "doing dataload import for $1"

if [ "$1" = "sade-t1" ]; then
  Namespace="sade_test1"
  Podname="sade_test_p1"

  # Run the data load import from inside the app pod.
  kubectl -n "$Namespace" exec -it "$Podname" -- bash -c "cd /opt/tomcat/webapp/app-1/bin/ && echo import dataload_custom.xml"
fi
lucifer758
  • Can you split this out into a separate Kubernetes Job? You could generate it using ordinary Helm templating, and it would have access to the service account and other in-cluster resources (see the sketch after these comments). Scripting `kubectl exec` isn't usually a best practice. – David Maze Aug 10 '20 at 11:13
  • Is it your local env or on-prem? Are those files always the same, or do they change? – PjoterS Aug 11 '20 at 10:05
  • @DavidMaze Thanks for checking, David. Is it possible to connect to a running pod with a Kubernetes Job, run some commands, and exit? I have been trying to check that, with no luck. – lucifer758 Aug 13 '20 at 20:53
  • @PjoterS Thanks for checking. This is on-prem, and those files will be changing; I'll be doing this almost every time I build a new Docker image. But I only want to load these XMLs during the build/CI pipeline, not on every pod restart. – lucifer758 Aug 13 '20 at 20:55
  • Have you considered a scenario where you put those files inside a pod in the cluster and then use them in your script, or is that not an option? – PjoterS Aug 17 '20 at 12:04
  • I will have those files inside the pod after deploying the new image, and I need to run a sequence of commands. I can put them in a script, but I don't want it as a startup script, and I don't want to run it on all 8 pods (these commands load new data from the new image into the DB, and running the same thing from all pods is not good here). I only want to invoke it on the first pod that spins up, either from a cron job or some other approach. I don't know if I can connect to a pod in the same namespace with a cron job and run a set of commands. – lucifer758 Aug 19 '20 at 15:05
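
To illustrate David Maze's suggestion above, here is a minimal sketch of a Helm-templated Job. It is a sketch under assumptions, not the chart's real template: it assumes the data load can run from a fresh container of the same application image rather than an already-running pod, and the values keys (`dataload.enabled`, `image`) and the import command are placeholders modeled on the script above.

    {{- if .Values.dataload.enabled }}
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: {{ .Release.Name }}-dataload
      annotations:
        # Run once per helm install/upgrade; delete the Job once it succeeds.
        "helm.sh/hook": post-install,post-upgrade
        "helm.sh/hook-delete-policy": hook-succeeded
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: dataload
              image: {{ .Values.image }}  # same app image that ships the new XMLs
              command: ["bash", "-c"]
              # Placeholder import command taken from the local script above.
              args: ["cd /opt/tomcat/webapp/app-1/bin/ && echo import dataload_custom.xml"]
    {{- end }}

Gating the template behind `dataload.enabled` would keep the Job ad hoc: a plain `helm upgrade` skips it, while `helm upgrade ... --set dataload.enabled=true` runs exactly one load, in-cluster, avoiding the slow pipeline-to-cluster link.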

1 Answer


The only thing that comes to my mind is to mount an emptyDir volume where those XML files are supposed to be created. You could then add a sidecar container with nginx (or some other lightweight web server) that serves the files from the emptyDir over HTTP, and put a Service in front so the files can be downloaded with curl or wget.

The Service sends traffic only to pods in the Ready state, so as soon as the first pod is ready, the file becomes available for download.
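
A rough sketch of that layout, assuming a Deployment labeled `myapp`; the image names, mount paths, and the `app-files` Service name are all placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          volumes:
            - name: xml-share
              emptyDir: {}   # shared scratch space for the generated XMLs
          containers:
            - name: app
              image: my-app-image        # writes the XMLs into the shared dir
              volumeMounts:
                - name: xml-share
                  mountPath: /data/exports
            - name: file-server
              image: nginx:alpine        # sidecar serving the shared dir over HTTP
              volumeMounts:
                - name: xml-share
                  mountPath: /usr/share/nginx/html
                  readOnly: true
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: app-files
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 80

From inside the cluster (or from the pipeline, if it can reach the Service), the files could then be fetched with something like `curl http://app-files.<namespace>.svc/dataload_custom.xml`.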

PjoterS