Modern virtual machines (VMware/VirtualBox) allow us to create completely sandboxed virtual environments with a mix of operating systems. Containers stop just short of virtualizing the operating system; often described as an advanced chroot, they run directly on the host without emulation. Docker brings order to containers with some clever filesystem tricks, solid image management, excellent developer tools and native support on every major cloud platform.

Containers are different, so we use them differently

When using Docker, provisioning environments (and cleaning up old ones) becomes so fast and easy that we start to think about provisioning in a completely different way. The emphasis shifts toward quick (re)provisioning instead of careful maintenance and migration. It turns out this view of provisioning greatly simplifies our concerns.

Docker borrows a few tricks from popular tools like Puppet and Chef to facilitate a repeatable build process. The "materialized" result of this build process is a read-only image containing the exact set of system and runtime libraries needed to support an application. This pattern provides strong guarantees about consistency when the image is eventually deployed to a server, and it nearly eliminates a whole category of bugs caused by subtle differences between environments.

Unlike its immediate predecessors, Docker is not a virtual machine; instead it relies on isolation techniques provided by the Linux kernel. Because of this, Docker containers not only spin up fast, but also use server resources very efficiently.

What can Docker do?

With Docker Hub (or your own private Docker registry), you can easily publish and share Docker images using basic Docker commands. Docker Hub allows anyone to publish public images for free (assuming you are not violating any laws) and also offers subscriptions for private repositories.
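Publishing an image really does come down to a few basic commands. A minimal sketch (the image name `myorg/myapp` and its tag are placeholders, and the commands assume a Docker daemon and a Docker Hub account):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myorg/myapp:1.0 .

# Authenticate against Docker Hub (pass a hostname to use a private registry)
docker login

# Push the tagged image so others can pull it
docker push myorg/myapp:1.0

# Anyone with access can now fetch and run it wherever Docker is installed
docker pull myorg/myapp:1.0
```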
There is a large repository of official and community images to help developers get up and running quickly. Images are available for most major runtimes: Java, Node, PHP, Python, Ruby, etc. In addition, open-source application resources like databases, caches, queues and web servers are also widely available.

Docker's characteristics make it extremely well suited to a modern integration test environment, where the entire environment can be provisioned quickly and easily, with very few moving parts. This pattern is not limited to application code; it is just as easy to spin up supporting components like databases, caches or queues. With a little bit of cleverness, Docker can even be used to create a database preloaded with an existing schema or test data.

Docker has a light footprint, and Docker servers require very little setup. The large cloud providers offer native management tools like Amazon's ECS, Google's Container Engine or Microsoft's Azure Container Service. You can also manage your resources directly, either on-premises or in the cloud, with tools like Rancher.

How does Docker work?

A Docker container is an isolated space provided by the Linux kernel in which to run a process or application. Unless otherwise specified, a container cannot see any files from the host system, only files provided by the mounted Docker image. Docker also provides network isolation, with each container getting its own private network interface.

One of Docker's most defining traits is its unique take on the filesystem. The entire filesystem, as viewed from a container, is made up of image layers (think of them like stacked zip files). When an application in a container tries to open a file, Docker looks through the image layers from top to bottom until it finds a file at the given path. Image layers are read-only, so whenever the application writes a new file or modifies an existing one, a new copy gets written to a scratch area.
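This copy-on-write behavior can be observed directly with `docker diff`, which lists everything a running container has written to its scratch area. A small sketch (it assumes the official `alpine` image is available locally or can be pulled):

```shell
# Start a long-running container from a minimal base image
docker run -d --name cow-demo alpine sleep 300

# Write a file inside the container; the read-only image layers are untouched
docker exec cow-demo sh -c 'echo hello > /tmp/greeting'

# List the writable-layer changes (A = added, C = changed, D = deleted)
docker diff cow-demo

# Clean up
docker rm -f cow-demo
```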
The scratch area is the first place a container looks for files, making it appear as if files can be modified. When creating a new image layer, we archive the scratch area, and it becomes the new "top" layer.

The first (bottom) layer of an image is usually a very slimmed-down Linux installation. The next few layers (intermediate layers) are composed of application dependencies like Python, PHP, Ruby or a Java Virtual Machine, along with their supporting system libraries. The assembly of an image can be programmatically defined in a Dockerfile, an easy and repeatable way to construct Docker images. These definitions should be checked into source control and can be iterated on for continued optimization to your application's needs.

Containers can be quickly linked together with an easy-to-use command-line tool, and as environments get more complex, this configuration can be codified in a Docker Compose document. A Docker Compose document can describe container configuration parameters and how those containers get linked together. This allows infrastructure to become a repeatable definition that can be checked into source control and improved over time, much like the application source code itself.

Conclusion

Docker is a tool that offers a powerful new way to package and run applications. It facilitates repeatable builds that execute consistently on any server. It gets developers ramped up faster. It deploys quickly and easily to local or cloud-based infrastructure. Ultimately, it will let your team focus on their job: adding value to your products.
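As a concrete illustration of the Docker Compose documents discussed earlier, a minimal `docker-compose.yml` might wire an application container to a Redis cache. This is only a sketch: the service names, port mapping and Redis tag are placeholder assumptions, not taken from the article.

```yaml
version: "2"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8080:8080"     # expose the app on the host
    depends_on:
      - cache           # start the cache before the app
  cache:
    image: redis:3.2    # official Redis image from Docker Hub
```

Checked into source control next to the application, this one file describes the whole environment; `docker-compose up` provisions it, and `docker-compose down` tears it back down.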