Everything you need to know about containers and images

Before we dive deep into the concepts of containers and images, let's understand how our web applications were deployed before containers came along.

If you are interested in building your own Docker containers and images, jump right here.

Once the build script has run and your final executable artifact is built and ready for deployment, a system admin or a deployment script logs in to the server and installs the executable in the deployment environment. This environment is set up to be able to run the application. For example, if the application is a Java application, then a Tomcat server (typically) and a JRE are installed before the application is deployed to run on the Tomcat container.
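
As a rough sketch, a manual deployment of that Java application might have looked something like the following. The host name, file paths, and service name here are placeholders, not a prescription:

    # Copy the built WAR file to the server (hypothetical host and paths)
    scp target/myapp.war admin@app-server-01:/tmp/myapp.war

    # Log in, drop it into Tomcat's webapps directory, then restart Tomcat
    ssh admin@app-server-01 '
      sudo mv /tmp/myapp.war /opt/tomcat/webapps/myapp.war &&
      sudo systemctl restart tomcat
    '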

If you intend to run multiple instances of your application, you are supposed to spin up multiple VMs with a similar configuration for your app. Most of these tasks were somewhat automated with powerful scripts and tools such as Ansible.
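
A sketch of that kind of automation, using a hypothetical provisioning script rather than any specific cloud CLI or Ansible playbook, might look like this:

    # Spin up three identically configured VMs and deploy the same artifact to each.
    # provision-vm.sh and deploy-app.sh are hypothetical wrappers around your
    # cloud provider's tooling or an Ansible playbook.
    for i in 1 2 3; do
      ./provision-vm.sh "app-vm-$i" --cpu 2 --memory 4G
      ./deploy-app.sh   "app-vm-$i" target/myapp.war
    done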

However, the time required to spin up a VM is considerably costly. A virtual machine (VM) runs a full-blown "guest" operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.
It is also not possible to scale up the memory and CPU of a VM without issuing a completely new instance, since the resources are specified at spin-up time.

The operating systems (OS) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.


Containers provide solutions to some of these drawbacks of VMs. With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.

Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light — they are only megabytes in size and take just seconds to start. By comparison, VMs take minutes to boot and are an order of magnitude larger than an equivalent container.
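
You can see this difference for yourself with the Docker CLI. The exact timings and image sizes will vary by machine, but the order of magnitude is the point:

    # Pull a small base image and check its size (a few megabytes for Alpine)
    docker pull alpine
    docker images alpine

    # Start a container, run a command, and remove it; this typically
    # completes in about a second
    time docker run --rm alpine echo "hello from a container"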

What’s the Diff: VMs vs Containers
VMs                                          Containers
Heavyweight                                  Lightweight
Limited performance                          Native performance
Each VM runs in its own OS                   All containers share the host OS
Hardware-level virtualization                OS virtualization
Startup time in minutes                      Startup time in milliseconds
Allocates required memory                    Requires less memory space
Fully isolated and hence more secure         Process-level isolation, possibly less secure

Docker Container

Docker is an open source software development platform. Its main benefit is to package applications in containers, allowing them to be portable to any system running a Linux or Windows operating system (OS).

Building and deploying new applications is faster with containers. Docker containers wrap up software and its dependencies into a standardized unit for software development that includes everything it needs to run: code, runtime, system tools and libraries.
This guarantees that your application will always run the same and makes collaboration as simple as sharing a container image.
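
As a minimal sketch, the earlier Tomcat example could be packaged like this. The base image tag and file names are assumptions you would adapt to your own project:

    # Write a minimal Dockerfile that bundles the application with its runtime
    cat > Dockerfile <<'EOF'
    # Tomcat and a JRE are already installed in the base image
    FROM tomcat:9.0
    # Drop the application into Tomcat's webapps directory
    COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war
    EOF

    # Build the image; everything the app needs to run is now inside it
    docker build -t myapp:1.0 .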

An image is an executable package that includes everything needed to run an application: the code, a runtime, libraries, environment variables, and configuration files. A container is a runtime instance of an image.
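
The distinction shows up directly in the CLI: images are listed with one command, running containers with another, and each docker run creates a new container from the same image (myapp:1.0 here refers to the image built in the sketch above):

    # List local images (the templates)
    docker images

    # Start two containers from the same image; each is an independent runtime instance
    docker run -d --name web1 myapp:1.0
    docker run -d --name web2 myapp:1.0

    # List running containers (the instances)
    docker ps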

You can develop your own Docker images and store them in the Docker Hub image repository; these images can then be pulled and executed as running containers.
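
A typical flow, assuming you have a Docker Hub account (yourname below is a placeholder for your Docker Hub username), might look like this:

    # Tag the local image with your Docker Hub namespace
    docker tag myapp:1.0 yourname/myapp:1.0

    # Log in and push the image to Docker Hub
    docker login
    docker push yourname/myapp:1.0

    # On any other machine with Docker installed, pull the image and run it
    docker pull yourname/myapp:1.0
    docker run -d -p 8080:8080 yourname/myapp:1.0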

The Docker daemon is what actually does the assembling and running of code, as well as the distribution of the finished containers. It takes the commands a developer enters into the Docker client terminal and executes them.
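
You can observe this client/daemon split directly: docker version reports both halves, and every command you type in the client is sent to the daemon, which does the real work.

    # Shows a "Client" section and a "Server" (daemon) section
    docker version

    # Summarizes the daemon's state: images, containers, storage driver, and so on
    docker info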

This approach allows code to be split into smaller, easily transportable pieces that can run anywhere Linux or Windows is running. It's a way to make applications even more distributed and strip them down into specific functions.

Thanks, dear readers, I hope you liked this topic. If you are a teacher or a student and you want me to do a write-up on something just like this one, send me an email at me@eddytnk.com with the topic.
