Don't believe false alarms about Docker containers

Despite what some people say, Docker containers have plenty of resiliency options when needed

As Docker picks up steam, a few people are suggesting that this approach to cloud workload portability and management may have an Achilles' heel.

Docker containers sit on a shared Linux kernel, which creates the potential for failures that affect the operation of every container on a server: if the underlying OS goes down, all containerized workloads could go down with it.


Of course, critics compare containers to hypervisor-based virtualization and suggest VMs may be a better way to manage cloud workloads. That flies in the face of Docker's professed advantage: replacing heavy VMs with lighter-weight containers.

Don't get worked up about the possibility of Linux OS-level failures threatening the reliability of Docker containers. In reality, Docker has several defenses against failures:

  • Namespaces isolate each container's processes from those of other containers and from the host OS, and each container gets its own network stack. If one container fails, the others can still talk to each other and to the host OS.
  • Control groups (cgroups), which Docker uses to manage resources, enforce that separation. Cgroups not only let containers share the underlying resources, but also ensure that a single container cannot exhaust them all and bring everyone else down.
  • Docker containers can restart automatically when they fail, which provides additional resiliency; a sketch of restart policies and cgroup limits follows this list. You can also design systems that spread your dependence on any single OS across multiple hosts, which further reduces the risk that one OS failure will kill your containers.

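Neither the cgroup limits nor the restart behavior requires anything exotic; both are exposed as options when a container is started. As a minimal sketch, here is how those knobs might look through the Docker SDK for Python, assuming a local Docker daemon and using "nginx" plus the specific limits and retry count purely as placeholders:

```python
import docker

# Connect to the local Docker daemon (assumes the Docker SDK for Python
# is installed: pip install docker).
client = docker.from_env()

# Start a container with cgroup-backed resource caps and an automatic
# restart policy. Image name, limits, and retry count are illustrative.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    name="resilient-web",
    mem_limit="256m",        # cgroup memory cap: this container cannot exhaust host RAM
    nano_cpus=500_000_000,   # roughly half a CPU core
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 5},
)

print(container.name, container.status)
```

If the process inside the container dies, the daemon restarts it up to five times, and the memory and CPU caps keep a runaway container from starving its neighbors.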
The core point is that containers can be faster and less resource-intensive than VMs, as long as you're willing to stick to a single platform that provides the shared OS for all containers. There are real advantages to containers:

  • A full VM may take several minutes to create and launch, whereas a container can be initiated in a matter of seconds (a quick timing sketch follows this list).
  • Applications running in containers typically perform better than the same applications running in a VM, which incurs the overhead of going through the hypervisor.

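To illustrate the startup claim, here is a small sketch, again using the Docker SDK for Python, that times how long it takes to create, run, and remove a throwaway container; the "alpine" image is a placeholder, and the exact numbers will vary by host:

```python
import time

import docker

# Assumes a local Docker daemon and that the small "alpine" image has
# already been pulled, so the timing reflects container startup alone.
client = docker.from_env()

start = time.time()
# detach=False (the default) blocks until the command finishes and
# returns its output; remove=True cleans the container up afterward.
output = client.containers.run("alpine:3", ["echo", "hello from a container"], remove=True)
elapsed = time.time() - start

print(output.decode().strip())
print(f"Created, ran, and removed a container in {elapsed:.2f} seconds")
```

On a typical machine this completes in a second or two, whereas provisioning a comparable VM means booting a full guest OS.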
This lighter-weight architecture and better performance are what make Docker popular.

I'm sure there are a few glitches to be worked out, given that Docker is only at a 1.0 release at this point. But we don't need to create drama where none exists.

My colleague Jonathan Baier contributed to this post.

This article, "Don't believe false alarms about Docker containers," originally appeared at InfoWorld.com. Read more of David Linthicum's Cloud Computing blog and track the latest developments in cloud computing at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.

Copyright © 2014 IDG Communications, Inc.