09-08-16 | Blog Post

Containers and Docker in a nutshell


Docker has become quite a popular subject lately, with more and more businesses adopting it across industries. So what is Docker, and why is everyone talking about it?

To answer that question, we first need to look at containers. Containers are a virtualization technology: each one gets its own allocation of CPU, memory and other resources, much like a virtual machine. The difference is that containers share the kernel (the brain) of the host operating system and don't need a guest operating system of their own. Because the guest operating system is abstracted away, containers are lighter and more easily stackable than virtual machines, so you get more out of your server by running more containers on it.
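To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). It assumes Docker is installed and its daemon is running on the host; the alpine image is simply a small example image.

import docker

# Connect to the local Docker daemon using the environment's settings.
client = docker.from_env()

# Run a tiny Alpine Linux container. It shares the host's kernel, so it
# starts in a fraction of a second and is cleaned up when it exits.
output = client.containers.run(
    "alpine",
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())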

This is great news for the DevOps industry, because in addition to making more efficient use of hardware, Docker lets developers deploy across environments (development, testing, quality assurance and production) using the same container image. This helps with continuous integration, where developers regularly merge their working copies into a shared mainline so that any breaks in the code are caught early. That prevents integration problems, keeps development and operations teams talking to each other, and improves the quality of the software.

A brief history

Containers have been around since about 2000, starting with FreeBSD Jails and followed a few years later by Solaris Zones (Solaris is now owned by Oracle). Later came Linux Containers (LXC), developed largely by engineers at IBM and Google, which made it possible to run multiple isolated Linux systems on a single host. Then Docker appeared and made that technology easier and quicker to use.

How did they do that? Docker made containers portable in much the way cloud computing made servers portable: a Docker image can be moved to and run on any machine capable of running Docker. Even with LXC, an application could run and test cleanly in one environment but fail when deployed to the server environment, adding complexity (not to mention frustration) for the developer who has to figure out what went wrong. With Docker, because the networking and operating system details are abstracted away inside the image, code tested in a development environment behaves the same way in production.
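To illustrate that portability, here is a hedged sketch, again with the Docker SDK for Python. The Dockerfile location, image tag and port mapping are hypothetical; the point is that the same image, built once, can be run unchanged on a laptop, a test server or a production host.

import docker

client = docker.from_env()

# Build an image from a Dockerfile in the current directory.
# (The tag "myapp:1.0" is just an example name.)
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run that exact image anywhere Docker is available; the application's
# environment inside the container stays the same, only the host changes.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    ports={"5000/tcp": 8080},  # example: expose the app's port 5000 as 8080
)
print(container.short_id, container.status)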

Using Docker

It's best to design each Docker container around a single process, which means each component of your application (database, Web server, etc.) gets its own container. Extras such as logging scripts, SSH daemons or monitoring agents generally don't belong inside the container. This level of granularity allows for better updates, since you can update one component without affecting the others.
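As a sketch of that one-process-per-container idea (again using the Docker SDK for Python; the images, names and credentials below are placeholders, not a recommended production setup):

import docker

client = docker.from_env()

# One container for the database...
db = client.containers.run(
    "postgres:16",
    detach=True,
    name="app-db",
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder credential
)

# ...and a separate one for the web server. Either can be updated or
# replaced without touching the other.
web = client.containers.run(
    "nginx:latest",
    detach=True,
    name="app-web",
    ports={"80/tcp": 8080},
)

for c in client.containers.list():
    print(c.name, c.image.tags, c.status)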

There are other tools that help schedule and manage the deployment of containers. Mesos from Apache, Kubernetes from Google and Docker's own Swarm are all server cluster management tools: they decide where containers run and allow Docker containers to be run at scale with more efficiency, which is especially useful if you have a large application with many components.
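As one small example, Docker's Python SDK can also drive Swarm mode. This sketch assumes the host has already been made a Swarm manager (for instance by calling client.swarm.init() once); the image and replica count are arbitrary.

import docker

client = docker.from_env()

# Ask Swarm to keep three replicas of an nginx service running,
# spread across the cluster's nodes.
service = client.services.create(
    "nginx:latest",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(service.name)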

It's safe to say Docker has made containers enormously popular. With the cloud-like flexibility they provide, Docker containers are leading the way toward more efficient technology for software developers and, indirectly, end users. They help with continuous integration, an intrinsic part of the DevOps movement. In a future post, we'll discuss ways you can use containers for your business. For more information on Docker, visit their website or watch this video.

