
I'm working with my team on a machine with 4 GPUs, and we want to use Docker containers. Is it possible to update the GPUs assigned to a container? For example:

Container #1 has 2 GPUs and container #2 has 2 GPUs. Now we want to remove 1 GPU from container #1 and add it to container #2, so that we end up with container #1 = 1 GPU and container #2 = 3 GPUs. I have searched for this (see here) and only found that it's possible to update CPUs and RAM, but there's nothing about GPUs.

I'm looking for a way to update the container without losing its content. It doesn't have to be a running container; I can stop it and do the update, but the installed packages and all modifications inside the container should remain. I hope you understand what I'm saying here.
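For concreteness, the current setup looks roughly like this (container names and the image are just examples, not what we actually run):

```shell
# Container #1 gets GPUs 0 and 1; container #2 gets GPUs 2 and 3.
# The quoting around device=... is required so the comma survives shell parsing.
docker run -d --name worker1 --gpus '"device=0,1"' nvidia/cuda:11.2.2-base-ubuntu20.04 sleep infinity
docker run -d --name worker2 --gpus '"device=2,3"' nvidia/cuda:11.2.2-base-ubuntu20.04 sleep infinity

# Desired end state: worker1 -> GPU 0 only, worker2 -> GPUs 1, 2, 3
```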

Walid Bousseta
  • It sounds like you'd like to do this reassignment without bringing your containers down and then back up? Could you clarify whether that is correct? I could be wrong, and your question seems to suggest that I am, but I don't think it's possible to reallocate CPUs or RAM used by containers as they are running either... you have to bring them down and back up again as well, I think. https://docs.docker.com/config/containers/resource_constraints/ is what gave me the impression that for all three resource types, the resources must be configured at container start-time. You've seen that, I hope? – sinback Apr 08 '21 at 15:24
  • Updating the container without losing its content; not necessarily a running container, I can stop it and do an update, but the installed packages and all modifications inside the container should remain. I hope you understand what I'm saying here. – Walid Bousseta Apr 08 '21 at 15:36
  • Okay, that makes a lot more sense, thanks for clarifying. How are you running your containers? If you are using docker-compose you can switch off between files which configure the GPUs differently by switching which compose file you are using before calling docker-compose up. That might be convenient for you. – sinback Apr 08 '21 at 15:46
  • I just use `docker run ....` to create the container, and when I want to use it, I use `docker start ...`. But thank you, I will look into `docker-compose`; I think that may be the solution – Walid Bousseta Apr 08 '21 at 15:54
  • Yeah, if you just use different arguments to `docker run` that should work too. `docker-compose` lets you memorialize the incantations you can feed in to `docker run` into a nice organized YAML file. A lot of developers really like it. It provides exactly the same functionality as using the correct collection of `docker run`, `docker exec`, etc args though. – sinback Apr 08 '21 at 15:57
  • Is it possible to update a container with `docker run`? It's possible to add or remove resources with `docker run --gpus ...`, but if I use the same name for the container, will it update the resources while keeping all the modifications and installed packages? – Walid Bousseta Apr 08 '21 at 16:01
  • Yeah, if your container has modifications and packages installed at container build time, any time you run it, it's going to start off with those modifications and packages present, so you should be good. If, however, you're making modifications and installing packages *after* the container has started, then if you stop the container and restart it, you'll lose those modifications and packages. This would be the case regardless of whether you're changing which GPUs you're allocating to it. – sinback Apr 08 '21 at 16:04
  • No, I'm not talking about the packages installed at build time, I'm talking about those installed after running a container from a built image. For example: in a container named X I installed numpy (this package was not installed in the build), and after that I want to update the resources without removing the container, so all packages remain. I have tried running a `run` command with the same name, but that throws an error that the name is already in use, so it's impossible to update the container with the `run` command – Walid Bousseta Apr 09 '21 at 09:56
  • Yeah, I'm pretty confident you can't do that. I think it would be technically challenging (software-wise) for the Docker maintainers to allow that kind of behavior, and it's against the philosophy of containers for microservices (homogeneity & reproducibility) to significantly increase the capabilities of containers at runtime, (by doing stuff like installing new packages). You can try taking snapshots of your containers that have the stuff you want installed and run those instead of whatever containers you're running now? – sinback Apr 09 '21 at 12:30
  • Or: an orthogonal suggestion to using Docker. Especially if you only need to run a couple containers or so, there are other virtualization techniques you could be using instead of Docker that might fit your use case better. KVM+QEMU (libvirt) supports many types of hot swapping and it supports sharing PCIe devices with guest VMs via VT-d passthrough. It seems to support hot-swapping PCIe devices as the guest is running https://www.linux-kvm.org/page/Hotadd_pci_devices - and LXC may also do this but I'm not sure (a few minutes of googling didn't disabuse me of the hope though) – sinback Apr 09 '21 at 12:32
  • One method that I think would work, even if it's not efficient, is to use `docker commit`: commit a new image from the container, then create a new container from it with the new resource specifications. – Walid Bousseta Apr 09 '21 at 13:32
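sinback's `docker-compose` suggestion might look like keeping two compose files that differ only in their GPU reservations, and choosing one at `docker-compose up` time. A sketch of one of them, with hypothetical service and image names, using Compose's documented `deploy.resources.reservations.devices` syntax:

```yaml
# docker-compose.split-1-3.yml (hypothetical): worker1 -> GPU 0, worker2 -> GPUs 1,2,3
services:
  worker1:
    image: worker1-snapshot:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
  worker2:
    image: worker2-snapshot:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1", "2", "3"]
              capabilities: [gpu]
```

Re-creating the services from a different file still rebuilds the containers from their images, so this only preserves runtime modifications if combined with the snapshot approach above.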
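The `docker commit` workaround from the last comment could be sketched like this (container and image names are hypothetical):

```shell
# 1. Freeze the container's current filesystem, including packages
#    installed after it was started, into a new image.
docker stop worker1
docker commit worker1 worker1-snapshot:latest

# 2. Remove the old container so its name can be reused.
docker rm worker1

# 3. Re-create the container from the snapshot with the new GPU allocation.
docker run -d --name worker1 --gpus '"device=0"' worker1-snapshot:latest sleep infinity
```

Note that `docker commit` captures the container's filesystem layers only; data in mounted volumes is not part of the committed image and would need to be re-mounted with the same `-v` flags.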

0 Answers