I see that a number of people have set up Docker containers with Guacamole or other tools that let them remote into a GUI as if the container were a remote Linux desktop. A friend of mine had a conversation with a professor who told him that they set up Ubuntu desktop access for their students via ubuntu/rdp docker containers.
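For concreteness, this is roughly what I picture that looking like; the image name here is just a placeholder for whatever xrdp-enabled Ubuntu image they actually used:

```
# Hypothetical single-user desktop container; "some/ubuntu-xrdp" stands in
# for whatever xrdp + desktop-environment image is really in play.
docker run -d --name student1-desktop \
  -p 3389:3389 \
  some/ubuntu-xrdp
# Then point any RDP client at host:3389.
```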
It's an attractive concept for efficiently packing cloned desktops, since you don't need 50 copies of the guest OS, but how would you manage such a swarm without a connection broker (as in a VDI solution) or a hypervisor console (as in a KVM setup)? Would you simply use standard Docker (or Swarm) management tools to manage the containers themselves, and then some separate remote client for the actual remote-control connections?
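To make the question concrete, the only approach I can picture with plain Docker tooling is one container per student, each published on its own host port (again, the image name is just a placeholder):

```
# Hypothetical: one desktop container per student, each on its own
# published port, so students connect to host:3390, host:3391, ...
for i in $(seq 1 50); do
  docker run -d --name "desktop-$i" \
    -p "$((3389 + i)):3389" \
    some/ubuntu-xrdp
done
```

That works, but the mapping of student to port is entirely manual, which is exactly the broker-shaped hole I'm asking about.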
I'm currently reading up on Docker, but one thing is unclear to me: if each desktop is identical (say Firefox, LibreOffice, etc.), is there any way to gain efficiency by sharing those resources as well? For instance, could there be a container holding those resources that all the others connect to, or could they be shared at a lower level, like the OS? I'm looking for any way to reduce the combined CPU, RAM, and disk footprint of all the machines on the server; really, anything other than a separate copy of the same thing in each container.
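My naive mental model of what I'm hoping is possible is a single base image that every desktop container is started from, so the application files only exist once on disk. This Dockerfile is just an illustrative sketch, not something I've tested:

```
# Hypothetical shared base image: the apps are installed once and become
# read-only image layers shared by every container started from this image.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
        xrdp xfce4 firefox libreoffice \
    && rm -rf /var/lib/apt/lists/*
```

If I understand the storage-driver docs correctly, each container only adds a thin copy-on-write layer on top of those shared image layers, and binaries loaded from the same underlying files should also be shared in the page cache, but I'd like confirmation that this is how it plays out in practice.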
I see that there are solutions for shared persistent storage in containers, like Hatchway. Are there other issues caused by the statelessness of containers that this does not address?
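What I have in mind for the storage side, assuming plain Docker volumes rather than anything Hatchway-specific, is roughly a named volume per student mounted over the home directory:

```
# Hypothetical per-student home directory that survives container rebuilds.
docker volume create student1-home
docker run -d --name student1-desktop \
  -p 3390:3389 \
  -v student1-home:/home/student \
  some/ubuntu-xrdp
```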
Also, I see a few ways people have cobbled together internet connectivity for Docker containers (like an IP per container), but most of the older posts are from people frustrated with the process. Is there now a standard or preferred way to do something like this?
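The "IP per container" setups I've seen mostly looked like a macvlan network, along these lines; the interface name and addresses are placeholders for my own network, not a recommendation:

```
# Hypothetical macvlan network: each container gets its own address on the
# physical LAN instead of being NATed behind the host.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 desktops-lan
docker run -d --network desktops-lan --ip 192.168.1.101 \
  --name student1-desktop \
  some/ubuntu-xrdp
```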
Or, if Docker/containers are absolutely the wrong way to go about setting up the most efficient possible Linux remote-desktop clones, I'd love to understand exactly which part doesn't work so I can find the right approach.