Containers: the biggest innovation in years. But why? Most applications, especially cloud-native microservices, need high-performance, production-grade infrastructure to run on. A solid understanding of Docker will help you succeed in the modern cloud-first world.
In this article, we'll cover why containers exist, what they do for us, and where we can use them.
You will also learn the basic commands you need to get started with Docker, such as restart and remove, and much more.
Before containers
Applications are at the core of every business. If applications fail, businesses fail; occasionally they even go bust. These words ring truer every day!
Most applications run on servers. Not so long ago, we could only run one application per server. Unfortunately, at the time, the open-systems world of Windows and Linux didn't have the technology to let us run multiple applications safely and securely on a single server.
Consequently, the story went something like this: the IT department bought a new server whenever the business needed a new application. Unfortunately, most of the time nobody understood the performance requirements of the new application, which forced the IT department to guess when choosing the model and size of the server to buy.
As a result, IT did the only thing it could: it bought large, fast servers that cost a lot of money. After all, the last thing anyone wanted, including the company, was an underpowered server unable to handle the load, potentially losing clients and revenue. So IT bought big. Unfortunately, this left overpowered servers running at as little as 5-15% of their potential capacity. A sad waste of company money and environmental resources!
Suddenly, VMware arrived!
VMware, Inc. gave the world a miracle: the virtual machine (VM). Practically overnight, the world became a considerably better place. At last we had a technology that let us run multiple business applications safely and securely on a single server.
IT departments no longer needed to procure a brand-new oversized server every time the business wanted a new application. More often than not, they could run new applications on existing servers that were sitting around with spare capacity.
Suddenly, we could squeeze massive amounts of value out of existing corporate assets, saving the company a considerable amount of money and speeding everything up. The world started to move faster!
But nothing is perfect!
A significant drawback of VMs is that every VM requires its own dedicated operating system (OS). Every OS consumes CPU, RAM, and other resources that could otherwise power more applications. Every OS also needs patching and monitoring, and in some cases every OS requires a license. All of this wastes time and resources.
The VM model has other drawbacks too. VMs are slow to boot, and portability isn't great: moving VM workloads between hypervisors and cloud platforms is harder than it needs to be.
The Containers
For a long time, major web-scale players like Google have been using container technologies to address the drawbacks of the VM model.
In the container world, a container is roughly analogous to a VM. The big difference is that containers do not require a full OS of their own. Instead, all containers on a single host share the host's OS.
What is the advantage of sharing the host OS?
Simple: it frees up vast amounts of system resources such as CPU, RAM, and storage. It also cuts licensing costs and reduces the overhead of OS patching and other maintenance tasks.
The result is savings on the time, resource, and capital fronts.
Containers are also fast to boot and highly portable. Moving container workloads from your laptop to the cloud, and then to VMs or bare metal in your data center, is far easier.
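To get a feel for how fast containers start, try launching one yourself once you have Docker installed. This is only a small sketch, using the public alpine image from Docker Hub:
time docker run --rm alpine echo "hello from a container"
On most machines the container starts, prints its message, and is cleaned up again within a second or two, something no VM can match.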
The innovation continues with Linux Containers!
Modern containers began in the Linux world and are the outcome of an enormous amount of work by a huge number of people over a long period.
How was this possible?
For example, Google LLC has contributed many container-related technologies to the Linux kernel. Without these and other contributions, we wouldn't have the containers we have today!
Some of the crucial technologies behind the rapid rise of containers in recent years include kernel namespaces, control groups, union filesystems, and Docker.
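If you're curious, you can peek at these kernel building blocks yourself on a Linux host. This is only a rough sketch: it assumes you have a running container, replace the placeholders with a real container name and PID, and it may require sudo:
docker inspect --format '{{.State.Pid}}' <container-name>
ls -l /proc/<PID>/ns
cat /proc/<PID>/cgroup
The first command prints the container's process ID on the host, the second lists the namespaces isolating that process, and the third shows the control groups limiting its CPU and memory.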
Even with these building blocks in place, containers remained complicated. Then the game changer arrived: Docker came along and made containers popular and accessible to everyone.
In Love with Docker!
Docker was the magic that made Linux containers usable for everyone, and Docker, Inc. keeps helping us with that!
What about Windows?
Microsoft has worked incredibly hard to get Docker and container technologies to the Windows platform.
Today, Windows containers are available on both desktop and server platforms (specific versions of Windows 10 and later, and Windows Server 2016 and later). To accomplish this, Microsoft has worked closely with Docker, Inc. and collaborated with the open-source community.
The core Windows kernel technologies needed to run containers are known as Windows Containers, and Docker provides the user-space tooling to work with them.
So, you can expect the experience on Windows to be almost the same as Docker on Linux. Developers and sysadmins familiar with Docker on Linux will feel right at home with Windows containers.
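For instance, a quick smoke test might look like the following. This is only a sketch: the image tag must match your Windows build, and mcr.microsoft.com/windows/nanoserver is just one commonly used base image:
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo Hello from a Windows container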
Even small computers like the Raspberry Pi can run containers with Docker.
Running Containers on Mac
There is currently no such thing as Mac containers.
However, you can run Linux containers on your Mac using Docker Desktop. It works by seamlessly running your containers inside a lightweight Linux VM on your Mac. It's popular with developers because they can quickly build and test Linux containers without leaving their Mac.
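You can see this for yourself once Docker Desktop is running. A small sketch, assuming the public alpine image:
docker run --rm alpine uname -a
Even though you're on macOS, the output reports a Linux kernel, because the container is really running inside that lightweight Linux VM.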
The Cloud Revolution: Kubernetes
Kubernetes is an open-source project that came out of Google and has quickly emerged as the leading orchestrator of containerized apps. That's a fancy way of saying Kubernetes is the most popular technology for deploying and managing containerized apps.
Some Basic Commands!
Now that you have a solid introduction to containers, let's look at some commands you should know.
How to list the containers that are running?
docker ps
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45eb5c635989 wordpress "docker-entrypoint.s…" 2 weeks ago Up 2 days 0.0.0.0:80->80/tcp, :::80->80/tcp docker_wordpress_1
9c2028941a97 mysql:5.7 "docker-entrypoint.s…" 2 weeks ago Up 2 days 3306/tcp, 33060/tcp docker_db_1
Above, you can see that we are running two containers.
The container IDs are 45eb5c635989 and 9c2028941a97, for WordPress and MySQL respectively.
The container IDs matter because we use them to interact with running containers. So let's try some basic interactions with those containers.
Each time you create a container, Docker generates a new ID, so the IDs in my examples will always differ from yours, regardless of which container you create.
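One related tip: docker ps only lists running containers. To include stopped ones as well, add the -a flag:
docker ps -a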
Attaching to running containers
What does it mean?
To jump inside the container and perform maintenance or troubleshooting, you first have to "log in" to it.
docker exec -it 45eb5c635989 bash
You should see something like this:
root@45eb5c635989:/var/www/html#
But it's tedious to copy and paste or type the container ID every time, so there is an alternative: you can get inside the container by specifying its name instead.
You may have noticed from the "docker ps" output that the container with ID 45eb5c635989 also has a name, "docker_wordpress_1", which is friendlier to read.
So, with that name, you can get inside the Container like this:
docker exec -it docker_wordpress_1 bash
From this point on, whatever command you type is executed inside the container.
To leave the container and return to your computer (the host), type exit and hit Enter.
Docker Exec – Do a lot more
You can also run a single command inside the container without logging in to it at all!
For example, suppose you want to check a file's contents. This is faster:
docker exec -it 45eb5c635989 cat /etc/resolv.conf
Output:
search lan
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
How to restart a Container?
It’s pretty straightforward:
docker restart docker_wordpress_1
The Container will restart, and you can continue to use it without any issues.
How to stop a Container?
docker stop docker_wordpress_1
Easy, right?
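The stopped container isn't gone. When you need it again, start it back up:
docker start docker_wordpress_1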
If you don’t want to use the Container anymore, you can delete it.
docker rm -f docker_wordpress_1
You can run the command above even while the container is running. The -f flag forces the container to stop and then removes it, along with all the files that belong to that container.
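If you'd rather not force it, the same result can be achieved in two gentler steps:
docker stop docker_wordpress_1
docker rm docker_wordpress_1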
Docker Images
Let's run through some basic commands related to Docker images.
How to list the Docker images on your local machine?
docker images
How to delete a Container Image?
Deleting Docker images is also an easy task. Remember that after deleting an image, you can't recover it locally: you must build the image again or pull it from your remote registry.
docker rmi -f wordpress:latest
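If the image came from a registry in the first place, getting it back is a single command (using the wordpress image from Docker Hub, as in this article):
docker pull wordpress:latest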
What more can containers do?
After several years of working with Docker, I have never run into any real limitations. You can run almost any application inside a container. And don't think you can only run headless applications (those without an interface).
It's possible to run applications that require a user interface (UI). For example, you can run the Spotify app inside Docker and use it like any other application.
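As a rough sketch of how that can work on a Linux host with X11 (the bitslovers/spotify:v1 image name is purely illustrative, and you may also need to allow local X connections with xhost +local:):
docker pull bitslovers/spotify:v1
docker run -d --name spotify -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix bitslovers/spotify:v1
Sharing the host's DISPLAY variable and X11 socket lets the containerized app open its window on your desktop like any local program.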