Containers are revolutionizing application and service development. We can thank the Linux ecosystem, with its versatility and flexibility, for that.
What are containers, exactly? Think of a container as a small slice of software that can be easily relocated among disparate environments, such as from development to production or from a data center to the cloud. The goal of containers is to reduce dependence on specific applications or libraries and to ease the transition across dissimilar topologies or environmental policies.
A container may be small (literally just megabytes in size), but it’s thoroughly equipped with everything it needs to run properly and 100% independently. Compare it to a lifeboat stocked with every conceivable supply needed to stay afloat.
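That self-sufficiency is easiest to see in an image definition. The following is a minimal, hypothetical Dockerfile sketch (the base image, file name and command are illustrative assumptions, not something from this article):

```dockerfile
# Start from a small Linux userspace (Alpine is only a few megabytes)
FROM alpine:3.19

# Copy the application into the image; the image carries its own
# libraries and dependencies, so nothing is borrowed from the host
COPY app /usr/local/bin/app

# The same image runs the same way on a laptop, in a data center,
# or in the cloud
CMD ["/usr/local/bin/app"]
```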
Microservices can also be applied to containers to split an application into subcomponents such as a front-end application and back-end database, which can also ease management by simplifying the container elements. This approach allows you to change various subcomponents as needed without affecting other elements.
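As a sketch of that split, a hypothetical Docker Compose file might define the two subcomponents as separate containers (the service names and front-end image are illustrative assumptions):

```yaml
# docker-compose.yml — front-end application and back-end database
# managed as independent container elements
services:
  frontend:
    image: example/web-frontend:1.0   # hypothetical front-end image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16                # back-end database
    environment:
      POSTGRES_PASSWORD: example
```

Either service can then be updated, replaced or scaled without touching the other.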
It’s important to note that a single operating system can run multiple containers (generally more containers than virtual machines), each of which accesses the operating system kernel in read-only mode. Containers do not need to boot up, per se; they can hit the ground running almost immediately when started, yet also quickly free up resources on the host system when suspended or stopped, giving other containers precedence for those host resources.
Relying on Linux
The versatility and flexibility of the Linux ecosystem and its core components are integral to the deployment (and advancement) of containers.
Yes, Windows containers do exist for Windows Server 2016 and Windows 10, as do Hyper-V containers, which are Windows containers running in a Hyper-V virtual machine for additional isolation. However, the breadth of true choice and functionality dwells within the Linux realm.
For example, Docker is a technology platform geared towards facilitating container use. ZDNet states that “Today, Docker, and its open-source father now named Moby, is bigger than ever. According to Docker, over 3.5 million applications have been placed in containers using Docker technology and over 37 billion containerized applications have been downloaded.”
Clearly, containers are here to stay.
“As the container development model has become more and more mainstream, container choices have also expanded. However, not all container technology is created equal, and the biggest differentiating factor is Linux,” said Scott McCarty, Principal Product Manager for Containers at Red Hat.
Just as the Linux operating system has driven innovation leading to open-source technologies such as Mozilla Firefox, Apache HTTP Server and BIND, Linux is the foundation of any container platform—a fact that sometimes gets lost amid all of the buzz surrounding the entire container ecosystem.
Therefore, McCarty said, evaluation of any container or Kubernetes system must also be an evaluation of the Linux distribution on which it is built.
Dependent on community
One of the most important differentiators is the project’s community. “Any open source project—Linux included—depends on a community that is dedicated not just to innovation, but to constant and consistent innovation based on addressing community-defined problems,” McCarty said.
There are several such projects dedicated to leveraging the Linux kernel. These projects, which synthesize a number of different open source technologies, have birthed open source distributions such as Fedora and CentOS and serve as a foundation where further development can take place.
An operating system is made up of two parts: the kernel and the userspace. Linux containers break things down further, allowing the two parts to be managed separately via a container host (comprising the OS kernel and a small userspace) and the container image (including the OS's libraries, interpreters and configuration files, as well as the developer's application code).
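One way to see that split in practice, assuming a machine with Docker installed (the image and commands are illustrative, and the output will vary by host):

```shell
# The image supplies its own userspace — Alpine's files and libraries
docker run --rm alpine:3.19 cat /etc/os-release

# ...but the kernel comes from the container host, shared by every
# container running on it
docker run --rm alpine:3.19 uname -r
```

Whatever Linux distribution is in the image, `uname -r` reports the host's kernel, because containers share the host kernel rather than booting their own.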
“So,” McCarty said, “when evaluating different container images, you’re in large part evaluating different forms of Linux.”
Simply put: What does Linux have to do with containers?
“Well, everything,” said McCarty.