Containers Explained in 6 Facts

Containers are the talk of the town in application deployment, even if not everyone involved in the discussion can articulate exactly why. If you’ve wondered whether containers might fit your data strategy, or if you know they do and need a tool to explain containers to your peers, here’s a guide for you.

At the most basic level, containers let you pack more computing workloads onto a single server, and let you rev up capacity for new computing jobs in a split second. In theory, that means you can buy less hardware, build or rent less server space, and hire fewer people to manage that infrastructure.

Specifically, Linux containers give each application running on a server its own isolated environment in which to run, while all of those containers share the host server’s operating system. Since a container doesn’t have to load its own operating system, you can create one in a split second, rather than the minutes a virtual machine needs. That speed lets a data centre respond very quickly when an application sees a sudden spike in business activity, such as people running more searches or ordering more products.

Here are six essential facts to know about containers.

1. Containers are not the same as virtual machines

One way to contrast containers and VMs is to look at what’s best about each: Containers are lightweight, requiring less memory space and delivering very fast launch times, while virtual machines offer the security of a dedicated operating system and harder logical boundaries. With a VM, a hypervisor talks to the hardware as if the virtual machine’s operating system and application constituted a separate, physical machine. The operating system in the virtual machine can be completely different from the host operating system.

Containers offer a lighter, operating-system-level form of isolation: many applications run under the host operating system, all of them sharing certain operating system libraries and the host’s kernel. There are proven barriers to keep running containers from colliding with each other, but that separation does raise some security concerns worth weighing.
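
One quick way to see that sharing in action: a container reports the host’s kernel version, not its own. Here is a minimal sketch, assuming Docker is installed on a Linux host (the alpine image is just a convenient example):

    # The host's kernel version
    uname -r

    # A container built from a different Linux distribution reports the
    # same version, because it shares the host's kernel
    docker run --rm alpine uname -r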

Both containers and virtual machines are highly portable, but in different ways. For virtual machines, the portability is between systems running the same hypervisor (usually VMware’s ESX, Microsoft’s Hyper-V, or the open source Xen or KVM). Containers don’t need a hypervisor; instead, each is bound to a particular operating system kernel, and an application in a container can move wherever a compatible copy of that operating system is available.

One big benefit of containers is the standard way applications are formatted for placement in a container. Developers can use the same tools and workflows regardless of target operating system, and once in the container, each type of application moves around the network the same way. In this respect containers resemble virtual machines, which are also packaged as files that can move over the Internet or internal networks.

An application inside a Docker container can’t move to a different operating system. Rather, it moves across the network in standard ways that make it easy to shift software around a data centre or between data centres. A single container will always be associated with a single version of an operating system kernel.
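
As a hedged sketch of what that movement looks like in practice, assuming Docker on both machines and a hypothetical image called myapp:1.0, an image can be saved to a single file, copied like any other file, and run on the destination host with no hypervisor involved:

    # On the source machine: save the image to a single archive file
    docker save -o myapp.tar myapp:1.0

    # Copy it to another Linux host by any standard means
    scp myapp.tar user@other-host:/tmp/

    # On the destination machine: load the image and run it
    docker load -i /tmp/myapp.tar
    docker run -d myapp:1.0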

2. Containers boot in a fraction of a second

Containers can be created much faster than virtual machines because a VM must retrieve 10 to 20 GB of operating system from storage before it can start. The workload in a container uses the host server’s operating system kernel, avoiding that step entirely. Miles Ward, global head of solutions for Google’s Cloud Platform, says containers can boot up in one-twentieth of a second.

That speed allows a development team to activate project code, test it in different ways, or launch additional e-commerce capacity on its website, all very quickly.
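
You can measure that speed yourself. A minimal sketch, assuming Docker and a locally cached alpine image (the very first run would also include a one-off image download):

    # Time a full container life cycle: create, run a command, exit, remove
    time docker run --rm alpine echo "hello from a container"

On typical hardware the whole cycle completes in around a second or less, which no virtual machine boot can match.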

3. Containers have proven themselves on a massive scale, such as in Google Search

Google is the world’s biggest implementer of containers, which the company uses for its internal operations. In running Google Search, it uses containers on their own, launching thousands of them every second, which amounts to about 2 billion every week.

Google supports search in different locations around the world, and levels of activity rise and fall in each data centre according to the time of day and events affecting that part of the world. Containers are one of the secrets to the speed and smooth operation of the Google Search engine. That kind of example only spurs the growing interest in containers.

4. When IT pros call containers “lightweight”, here’s what they mean

“Lightweight”, in connection with containers, means that while many dozens of virtual machines can be put on a single host server, each running an application with its own operating system, hundreds or even thousands of containers can be loaded onto that same host. Because each containerised app shares the host’s operating system kernel to execute its work, containers hold out the hope of delivering the most computing for the space required and the power consumed.

Over the last decade, virtualisation drove an upheaval and consolidation of the one-application-per-server style of doing things. In the same way, containers represent a potential upheaval of at least some virtualised workloads into an even denser form of computing.
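
As a rough illustration of that density, the following shell loop, a sketch assuming Docker and the small alpine image, starts fifty isolated containers on one host; their combined footprint is a fraction of what fifty virtual machines would need:

    # Launch 50 lightweight containers, each idling in the background
    for i in $(seq 1 50); do
        docker run -d --rm alpine sleep 600
    done

    # Count the running containers and inspect their resource usage
    docker ps -q | wc -l
    docker stats --no-stream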

5. Docker has become synonymous with containers

Docker is the company that came up with a standard way to package a container workload so that it can be moved around and still run in a predictable way in any container-ready environment.

Developers understood that containers would be much more useful and portable if there were one way of creating them and moving them around, instead of a proliferation of competing container formats. Docker, at the moment, is that de facto standard.
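
That standard begins with a short text file, the Dockerfile, which describes the image step by step. A minimal, hypothetical sketch (the Python base image and app.py are illustrative, not a prescription):

    # Write a three-line Dockerfile describing the image
    cat > Dockerfile <<'EOF'
    FROM python:3.12-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]
    EOF

    # Build and tag the image; the result runs the same way on any
    # container-ready host
    docker build -t myapp:1.0 .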

They’re just like shipping containers, as Docker’s CEO Ben Golub likes to say. Every trucking firm, railroad, and marine shipper knows how to pick up and move the standard shipping container. Docker containers are welcome the same way in a wide variety of computing environments.

6. Containers can reduce IT workloads

There’s an advantage to running production code in containers. When Docker builds a workload, it assembles the files as a series of layers, in a particular order that reflects how they will be booted.

One service or section of application logic needs to be fired up before another, and because those layers can be accessed independently of one another, a code change known to affect just one layer can be made without touching the others. That makes changing the code less dangerous than in the typical monolithic application, where an error can stall the whole application.

It also makes the application easier to modify. If a change affects only one layer, that layer can be tested and launched into production on its own, and if a problem develops it can be quickly rolled back, because developers touched only that one layer.
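
A brief sketch of how that plays out day to day, assuming the hypothetical myapp image from earlier: docker history lists an image’s layers, and version tags make the rollback a one-line operation:

    # Inspect the layers that make up an image
    docker history myapp:1.0

    # Deploy a new version; if it misbehaves, put the old one back
    docker run -d --name web myapp:1.1
    docker rm -f web
    docker run -d --name web myapp:1.0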

At the same time, big applications can be broken down into many small ones, each in its own container.
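
As a small, hypothetical sketch of that decomposition, a web front end and a cache can each live in their own container on a shared network, so either piece can be replaced without disturbing the other:

    # Give the application's containers a private network to share
    docker network create appnet

    # Run each piece of the application in its own container
    docker run -d --name cache --network appnet redis
    docker run -d --name web --network appnet -p 8080:80 nginx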

Just a final word…

Those who are in constant pursuit of better, faster, and cheaper computing see a lot to like in containers. The CPU of an old-style, single-application x86 server is utilised at somewhere around 10% to 15%, at best. Virtualised servers push that utilisation up to 40%, 50%, or in rare cases 60%.

Containers hold the promise of using an ever-higher percentage of the CPU, lowering the cost of computing and getting more bang for the infrastructure, power, and real estate invested. Give Kobi a call in Dublin today to discuss the implementation of containers in your organisation.

Call us in Dublin on 01 482 5810 to find out more about containers