What are Docker and Kubernetes? And do I need them?
Docker and Kubernetes are tools for building and orchestrating containers, improving application management, scalability, and efficiency across development and deployment.
What is Docker? What is Kubernetes? Are they different? Are they the same? Do I need them? To address these questions, let’s start with virtualization.

Virtualization began in the 1960s, but we can jump ahead several decades to the proliferation of desktop computing and enterprise applications. Applications then were monolithic. Each resided on its own physical server with an operating system and all executable components. This ensured that if an application crashed, it didn’t affect applications on other servers.

There were problems, however, foremost being efficiency. Dedicating an entire server to just one application was costly. Servers are expensive to buy and maintain. They consume energy, generate heat, and must be replaced periodically. This approach to application stability was pricey.
Virtualization offered a solution.
Virtualization creates a simulated computing instance that is abstracted from, or separated from, the underlying hardware and software of the physical server. Software called a hypervisor creates and runs these virtualized environments, known as virtual machines (VMs).
Among other things, VMs improved efficiencies. Each VM contained an application with all of its executable components and its own operating system. Each was, in effect, a self-contained island on a physical server.
Thanks to this compartmentalization, if anything malfunctioned within a VM, nothing else was affected: multiple VMs could reside on one server, and a crash in one wouldn’t affect its neighbors. The computing and storage capacity of a server could thus be fully utilized by multiple VMs, each riding on top of the hypervisor and sharing the underlying hardware. The cost savings were substantial.
Though an improvement over the status quo, VMs run up against some issues. One is that they are somewhat bulky, because each requires its own operating system. While this allocation helps keep each VM isolated and secure, the physical server has to support multiple operating systems, which puts pressure on its storage and computing resources.
The shift to microservices.
But what VMs couldn’t account for was the evolving nature of the applications themselves. To meet their needs for agility and flexibility, enterprises have been migrating away from monolithic, self-contained applications towards microservices.

Microservices treat an application as a collection of services, each service wrapped and executed inside a virtual environment called a container. If one service fails for any reason, the entire application doesn’t crash.

Each container houses only a component of the application, not the entire system, and contains all the executable resources needed to deliver that particular service, such as libraries, binary code, and configuration files. A common example of containerized microservices is a transactional system for purchasing products: there might be a search bar, a shopping cart, a buy button, and so on. Each component is housed in its own container, and they operate interdependently to deliver the application’s full functionality.
Containers vs VMs.
One way containers differ from VMs is that they don’t contain their own operating systems. They all share the host operating system of the underlying physical server, which makes them more compact than VMs.

Because a container packages everything its service needs, it runs consistently on any host with a compatible container runtime, making containers easier to move from server to server than VMs, and certainly easier than monolithic applications. Containerized microservices can be implemented, updated, modified, or retired without changes to the entire application.

Containers accelerate development because individual services in one application can be repurposed for another. Why write code for a new search bar when you can use the one you already have in an existing application? An application’s capacity can be increased or decreased by adding or removing services, and containers can be ramped up or down quickly to meet changing workloads.
The bottom line is containers greatly expedite the development, implementation, and maintenance of enterprise applications. The once monolithic application is now built and operated with easy-to-manage pieces.
VMs still retain value.
If you have an application that doesn’t need to be broken down into microservices, for example, keep your VMs and deploy containers elsewhere. VMs continue to work well, and because they are fully isolated, they provide strong security.
Enter Docker.
If you’re wondering how to build containers, this is where Docker, the de facto standard for constructing and sharing containerized applications, comes in. Launched in 2013, Docker is an open-source platform that empowers developers to create, deploy, run, and update containers.
Containerized software created with Docker runs the same way in nearly any environment, whether behind a firewall or out in a public cloud. You can use Docker tools to copy and reuse microservices to build other applications virtually anywhere.
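As a minimal sketch of what that looks like in practice, a developer describes an image in a Dockerfile and then builds and runs it with the Docker command line. The service name, base image, file names, and port below are hypothetical, chosen only for illustration:

    # Dockerfile for a hypothetical search-bar microservice
    # (base image, file names, and entry point are assumptions for illustration)
    FROM python:3.12-slim
    WORKDIR /app
    COPY . .
    RUN pip install -r requirements.txt
    CMD ["python", "search_service.py"]

    # Build the image, then run it as a container (the port mapping is illustrative)
    docker build -t search-bar:1.0 .
    docker run -p 8080:8080 search-bar:1.0

The same image can then run, unchanged, on a laptop, an on-premises server, or a public cloud that supports the Docker runtime.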
The need for Kubernetes.
Once you’re deploying containers, now what? The fact is that even small applications can have dozens of containers, and sophisticated, enterprise-level applications can have thousands, if not more. Large companies can have millions of containers distributed across geographies and environments. It’s simply beyond the capabilities of even the most robust IT departments to manage them manually.
This is where orchestration comes in and the leading container orchestration solution is Kubernetes. Orchestration in this context means organizing and coordinating all the pieces in a container landscape.
An orchestrator like Kubernetes automates the configuration, provisioning, deployment, and management of containers at virtually any scale. It will mitigate complexity and improve efficiencies, productivity, and agility.
Kubernetes, whose name means “helmsman” or “pilot,” was released a year after Docker by Google, its original developer. It monitors the container environment and automates critical tasks no matter how remote or distributed the sites might be. It can configure containerized applications to ensure their functionality is effectively delivered, and it can assess the health of containers and their hosts, moving containers to another host when one is unable to handle the workload.
Moreover, Kubernetes can scale containers up and down depending on workloads, and load balance them for the best performance.
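To make that concrete, here is a minimal sketch of how a containerized service might be described to Kubernetes. The names, image location, and replica counts are hypothetical; the point is that you declare the desired state in a manifest and the orchestrator keeps the containers running, rescheduled, and scaled to match it:

    # deployment.yaml: a hypothetical Deployment for the search-bar service
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: search-bar
    spec:
      replicas: 3                 # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: search-bar
      template:
        metadata:
          labels:
            app: search-bar
        spec:
          containers:
          - name: search-bar
            image: registry.example.com/search-bar:1.0   # assumed image location
            ports:
            - containerPort: 8080

    # Apply the manifest, then scale it when the workload grows
    kubectl apply -f deployment.yaml
    kubectl scale deployment search-bar --replicas=10

Placing a Kubernetes Service in front of those replicas is what distributes, or load balances, traffic across them.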
Final thoughts.
A complete description of Kubernetes and Docker is beyond the scope of this blog, but by turning to such an orchestrator, you will gain control of and visibility into your container environment.
The nature of applications is transforming, and to stay competitive, chances are good you’ll need platforms like Docker and Kubernetes to build, deploy, operate, monitor, and upgrade your next generation of applications.