
I am a complete newbie when it comes to containers.

I am particularly interested in Windows Containers running in Process Isolation (not Hyper-V Isolation).

I have been doing a lot of reading and watching of videos, but there is one fundamental question which has not been explained in any of the reading I have done so far.

Is it mandatory for every Windows container/image to include a base image/layer of either nanoserver or servercore?

What confuses me are comments such as those made at 5m35s in the following video:

Windows Container 101 Video on Channel9

He makes a statement (and I'm paraphrasing):

"that the only thing necessary to build a docker image is a statically linked binary."

That implies to me that if my HOST operating system, which is running the containers, has all the necessary dependencies, then it should be possible to virtualise the kernel from the host, negating the requirement for a base operating system image/layer in the docker image.
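For reference, my understanding is that on Linux this claim can be taken literally, because Docker has a reserved empty base image called scratch. A minimal sketch, assuming hello is a statically linked binary built beforehand:

    # 'scratch' is Docker's reserved, completely empty base image (Linux only)
    FROM scratch
    # copy in the statically linked binary - the image contains nothing else
    COPY hello /hello
    ENTRYPOINT ["/hello"]

No operating system layer at all, just the one binary.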

What am I missing? Why do I need the nanoserver or servercore base image layer?

If my HOST operating system is v1903 and the docker image requires a kernel of v1903, why can't it virtualise the kernel from the HOST operating system?

Thanks in Advance!

Aaron Glover

1 Answer


The basic idea of Docker is to reuse the kernel of the host system; see this for Windows containers:

Windows Server containers provide application isolation through process and namespace isolation technology, which is why these containers are also referred to as process-isolated containers. A Windows Server container shares a kernel with the container host and all containers running on the host. These process-isolated containers don't provide a hostile security boundary and shouldn't be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
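You can request this mode explicitly when starting a Windows container. A sketch, assuming the host is itself on version 1903 so the kernel versions match (on Windows Server, process isolation is the default; on Windows 10 of that era you had to ask for it):

    docker run --isolation=process mcr.microsoft.com/windows/nanoserver:1903 cmd /c ver

If the base image's version did not match the host's, this process-isolated container would refuse to start.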

But as you know, a kernel alone is not enough to make an OS run; you also need a file system.

This is where the base image comes in; see this.

A file system is built up from a series of layers, which makes it possible to put some layers into one image and other layers into another. With a base image (nanoserver or servercore here), different apps can reuse the same base layers and add only the application binary on top.
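A typical Windows Dockerfile shows this layering directly. A sketch, where myapp.exe stands in for whatever application binary you want to ship:

    # the base layers below are downloaded once and shared by every
    # image built on the same nanoserver tag
    FROM mcr.microsoft.com/windows/nanoserver:1903
    # the only layer unique to this image is the application itself
    COPY myapp.exe /app/myapp.exe
    ENTRYPOINT ["C:\\app\\myapp.exe"]

Building ten different apps on this base produces ten images, but the nanoserver layers are stored (and pulled) only once.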

As the diagram below shows, different containers, each with its own binary, can share the same base image (ubuntu:15.04 in this example), and each container's own layers plus the shared base layers form a complete file system for the container to run on.

[Diagram: several containers, each with its own top layer, sharing the ubuntu:15.04 base image layers]

atline
  • ok... so the kernel is shared, as I had hoped, and what the base image contains are the other aspects of the operating system, presumably things like cmd.exe and PowerShell etc. Assuming the above is correct, wouldn't it have been possible for the base images to simply statically link out to binaries in the host OS rather than bundling them in an image? – Aaron Glover Jun 17 '19 at 07:33
  • Yes! – atline Jun 17 '19 at 07:34
  • I guess that to gain the benefit of not being affected by the host operating system's environment, there is a base set of dependencies which simply must be bundled into the base image. For the sake of 1.7 GB (downloaded once) to run many containers, it's not really an issue. – Aaron Glover Jun 17 '19 at 07:37
  • Yes, the base image stores the common things. – atline Jun 17 '19 at 09:17