Is it currently possible to "repave" or regenerate pods or containers from a replication controller in Kubernetes based on time or on a condition, for security reasons? I would like to recreate containers on a schedule (every x minutes/hours) or in response to a condition (like a tripwire). I know this could be done externally; I'm just curious whether it is an existing feature or whether there is a clever way to accomplish this objective.
3 Answers
Not something that is directly built into Kubernetes as is, but you could work around it by leveraging a liveness probe. If the probe for a container inside your Pod is made to fail on a certain condition (time or event based), Kubernetes will automatically restart that container, i.e. recreate the failed container in place.
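For example, here is a minimal sketch of that idea (the pod name, image, probe command, and timings are illustrative assumptions, not a prescribed setup): the container is considered healthy only while a marker file exists, so removing that file, whether by a timer inside the container or by a tripwire, makes the probe fail and the container gets restarted.

# Hypothetical manifest: something inside the container (or a tripwire
# process) creates /tmp/healthy and deletes it when the container should
# be recycled; repeated probe failures trigger a restart.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: repave-demo            # illustrative name
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 10
      periodSeconds: 30        # probe every 30s; failureThreshold defaults to 3
EOF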

Another solution is to use Brendan Burns's ksql script to locate the target pods. I use this at my company in our deployment CI jobs. The SQL query searches for all pods running an image that I've just rebuilt and passes the results into a bash while loop, in which I delete the affected pods one by one.
#!/bin/bash
# Build the ksql query: find every pod running the image that was just rebuilt.
# CONTAINER_NAME and CONTAINER_TAG are expected to be set by the CI job.
QUERY="SELECT pods.metadata->name, pods.metadata->namespace "
QUERY="${QUERY} FROM pods LEFT JOIN containers USING uid "
QUERY="${QUERY} WHERE image LIKE '%/${CONTAINER_NAME}:${CONTAINER_TAG}'"

# Duplicate stdout on fd 5 so we can display the results of the query
# and consume them at the same time.
exec 5>&1

# Delete every pod matching QUERY. The `sed` and `tail` calls strip the
# decoration and header rows of ksql's formatted table; `cut` pulls the
# pod name and namespace out of each remaining row.
while read -r line; do
    namespace=$(echo "$line" | cut -d' ' -f4)
    pod=$(echo "$line" | cut -d' ' -f2)
    kubectl delete --namespace="$namespace" pod "$pod"
done < <(echo "$QUERY" | node node_modules/ksql/ksql.js | tee >(cat - >&5) | sed -n 'p;n' | tail -n +3)
It's certainly not the most elegant solution in the world, but it could be simple enough to embed this logic in a container running in your cluster.
The upside of this technique is that it's extremely flexible and extensible.

Does `pod.spec.activeDeadlineSeconds` do what you want? http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec
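If it does, a minimal sketch of how it might be used (the pod name and the one-hour value are illustrative assumptions): set `activeDeadlineSeconds` in the pod spec (or in a replication controller's pod template), and the pod is terminated and marked failed once the deadline elapses, at which point the controller managing it should bring up a fresh replacement.

# Hypothetical manifest: the pod is killed roughly one hour after it
# starts; a replication controller owning such pods would then spin up
# a fresh replacement.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: repave-by-deadline     # illustrative name
spec:
  activeDeadlineSeconds: 3600  # terminate the pod after one hour
  containers:
  - name: app
    image: nginx               # placeholder image
EOF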
