How and Why Docker? An Introduction to Docker.
Applications run on servers. In a corporate environment, a business typically maintains several applications, but traditionally it could run only one application per server. Operating systems then, and to a large extent even today, do not make it convenient to run more than one application safely on the same server. So every time the business had to host a new application, it procured a new server, and the spending didn't stop there: the business also had to pay for storage, networking, and the people to manage it all. The IT team could only speculate about what hardware would meet the new application's requirements, so most of the time the hardware purchased ended up under-utilized.
Introduction of Virtual Machines.
The history of virtualization goes back to IBM in the 1960s, and the story of virtualization from IBM to VMware is worth reading in the many articles available online. However, I want to bring your attention back to the new world order.
In 2002, VMware introduced ESX Server 1.5, which allowed consolidating multiple servers onto a single physical machine. From then on, whenever a company wanted to introduce a new application, it no longer had to spend money on over-powered new hardware. Later, more competitors joined the arena, such as Microsoft Hyper-V and Citrix hypervisors.
Still not enough.
Virtual machines had their shortcomings. Even though we were running multiple servers on a single piece of physical hardware, each VM ran its own operating system, so the physical server's resources were unnecessarily consumed by the OS inside every VM. Each guest OS also needed its own license, adding to the cost, and you needed time and people to patch and monitor all of those servers. Migrating virtual machines to new hardware was a task in itself.
Introduction of Docker.
In 2006, engineers at Google introduced 'process containers', later renamed control groups, or cgroups. Cgroups were designed to limit the resources used by a single process or a group of processes on a physical server. In 2008, cgroups were merged into the Linux kernel. Namespaces, another kernel feature, isolate resources per process or group of processes: namespaces became a foundation of container isolation and security by separating one process group's view of the system from everyone else's.
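You can see both kernel features from any Linux shell, no Docker required. A minimal sketch, assuming a Linux machine with a mounted /proc filesystem:

```shell
# Show which cgroup the current shell belongs to.
# On a cgroup v2 system this prints a single line like "0::/user.slice/...".
cat /proc/self/cgroup

# List the namespaces the current process is a member of
# (mnt, net, pid, uts, user, etc.) - each entry is one isolation boundary
# that container runtimes like Docker build on.
ls /proc/self/ns
```

Inside a running container, these same two commands would show a different cgroup path and different namespace identifiers than on the host, which is exactly the isolation described above.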
Between 2013 and 2014, Docker was introduced as a Linux-based containerization platform, one that could also be used on Windows and macOS, and it was widely recognized. Compared to VMs, Docker uses far fewer resources: while a VM encloses a full-fledged OS, a Docker container shares the kernel of the host machine it runs on.
Containers can run Windows-based apps as well as Linux-based apps; Docker supports both. An application running as a container is called a containerized app. Since a Docker container shares the kernel of the host machine, a containerized Windows app will not run on a Linux-based Docker host and vice versa: a Windows container requires the Docker host to be a machine running Windows.
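To make "containerized app" concrete, here is a minimal illustrative Dockerfile. The base image and the script name are my own examples, not anything from a specific project:

```dockerfile
# Start from a small Linux base image (alpine is a common choice).
FROM alpine:3.19

# Copy a hypothetical application script into the image.
COPY app.sh /app.sh

# Run the script when a container is started from this image.
CMD ["/bin/sh", "/app.sh"]
```

Because this image is built on a Linux base, containers created from it will only run on a Linux kernel, i.e. a Linux Docker host, or the Linux VM that Docker Desktop runs behind the scenes on Windows and macOS. A Windows-based image would likewise need a Windows host.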