
We have Harbor scanning containers before they are deployed. Once they are scanned, we deploy them to the platform (k8s).

Is there any way to scan a container, say a few weeks down the line after it has been deployed, without disturbing the deployment of course?

Thanks

CPdev
  • Containers are ephemeral and k8s actually moves things around (when you deploy something, it doesn't mean that the same container runs for weeks). Containers might be stopped and then started/rescheduled on different nodes. Also, starting a new container means that your already scanned image will be used, so what's the point of scanning a running container? – tgogos Oct 05 '18 at 09:06
  • @tgogos, so if we deploy a scanned container to k8s tomorrow and it runs fine, a few weeks down the line there may be a new security vulnerability in the image we are using. We then need to find out what deployments may be running with that vulnerability, if that makes sense. – CPdev Oct 05 '18 at 09:16
  • Of course it makes sense :-) My question is why do you want to scan `containers` (which is a *running* thing and might exist in many replicas) instead of the `image` itself (which is a *static* thing)? – tgogos Oct 05 '18 at 09:36
  • We would need to know if the containers that are running could contain the image with the security vulnerability. Say we have 20 containers running and then a security issue is found in one of the images; we need something flagged up that says this running container was deployed with the image that has the newly discovered security issue. – CPdev Oct 05 '18 at 11:04
  • This [article](https://banzaicloud.com/blog/container-vulnerability-scans/) describes a similar "automation" case. It mentions that *"If all scans pass, [Pipeline](https://github.com/banzaicloud/pipeline) pushes the containers to a container registry, or creates a Kubernetes deployment. **We re-scan these Kubernetes deployments with configurable frequency**."* – tgogos Oct 05 '18 at 11:30

1 Answer


I think we have to distinguish between a container (the running process) and the image from which a container is created/started.

If this is about finding out which image was used to create a container that is (still) running, and about scanning that image for (new) vulnerabilities, here is a way to get the images of all running containers in a pod:

kubectl get pods <pod-name> -o jsonpath='{.status.containerStatuses[*].image}'
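
The same idea can be applied cluster-wide to find every pod that was started from a potentially vulnerable image. The following is a minimal sketch; the tab-separated namespace/name/image layout and the image reference myregistry/myapp:1.2.3 are only illustrative:

# List namespace, pod name and container images for every running pod
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].image}{"\n"}{end}'

# Narrow it down to pods running a specific (hypothetical) image
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].image}{"\n"}{end}' | grep 'myregistry/myapp:1.2.3'

If you need to match on the exact image digest rather than the tag, .status.containerStatuses[*].imageID is also available.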
apisim