Why are Containers Gaining Ground in Enterprise IT?
March 17, 2020 | by Krittika Banerjee | Posted In Cloud
The use of containers has picked up steam over the past couple of years as organizations adapt and transform their IT strategies to keep up with customer demands. Containers are a lightweight way to package and host workloads, and they are increasingly being tapped for major production deployments. Adoption is growing fast: research firm Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production. Let’s take a look at why containers matter, and what you should understand and examine when getting started with containers in your organization.
Greater application portability
Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another — say, from a developer’s laptop to a test environment, from a staging environment into production, or from an on-premises data center to the cloud. Because containers package application code together with all the related configuration files, libraries, and dependencies, they enable a ‘build once, run everywhere’ method that makes applications extremely portable across platforms. This portability is especially significant in today’s multi-cloud environments, because developers do not have to rewrite code for each new cloud platform in use.
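As a minimal sketch of that packaging, consider a Dockerfile for a hypothetical Python web app (the file names `app.py` and `requirements.txt` are assumptions for illustration):

```dockerfile
# Start from a pinned base image so every environment builds the same stack
FROM python:3.8-slim

WORKDIR /app

# Bundle the dependency list and install it inside the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the application code itself
COPY . .

# The same command starts the app on a laptop, in CI, or in the cloud
CMD ["python", "app.py"]
```

Building this with `docker build` produces a single image that carries its own dependencies, so it runs unchanged on any host with a container runtime.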
Containers are also a great tool for facilitating CI/CD and DevOps. One of the key success criteria in DevOps is increasing the developer’s stake in operations and in how the code runs in production, and this is precisely what containers enable. With containers, developers own what’s inside each container image and can set up precisely controlled environments for their build/test/deploy pipelines. Containers are all-inclusive, with everything from the base operating system through to the application code bundled into one image. When applications are written, tested, and deployed inside them, the environment therefore remains consistent across the entire software delivery chain: the way a container runs in development is exactly the way it runs in QA and production.
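To illustrate that consistency, a CI pipeline can build one image and exercise that same artifact at every stage. This is a hedged sketch using GitHub Actions syntax; the image name and the `pytest` test command are assumptions, not a prescribed setup:

```yaml
# Hypothetical pipeline: the image built once here is the exact artifact
# later promoted to QA and production.
name: build-and-test
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the container image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run the test suite inside the same image
        run: docker run --rm myapp:${{ github.sha }} pytest
```

Because QA and production pull the identical image, “it worked on my machine” ceases to be a failure mode.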
More efficient use of system resources
Containers also use system resources far more efficiently than traditional or hardware virtual machine environments, because the OS kernel is shared among all the containers on a host. A single OS instance can run many isolated containers, eliminating the CPU, memory, and disk overhead that virtual machines introduce by requiring a separate OS instance per VM. All of this amounts to less spending on IT: because containerized apps can be packed far more densely onto host hardware, you may need fewer operating system instances to run the same workloads, which translates into savings on software licenses.
Another key benefit is that containerization offers greater modularity. Instead of running an entire complex application inside a single container, the application can be broken down into multiple modules or microservices. Applications created in this way are easier to manage because each module is relatively simple, and updates can be made to individual modules without having to interrupt the rest of the application. Since containers are so lightweight, these individual modules or microservices can be started almost instantly, in a “just in time” fashion as and when they are needed.
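A Compose file sketches what this modular decomposition can look like; the service names and images below are hypothetical placeholders:

```yaml
# docker-compose.yml: each module runs as its own lightweight container
version: "3"
services:
  web:                        # user-facing front end
    image: example/web:1.0
    ports:
      - "8080:80"
  orders:                     # one microservice, updatable on its own
    image: example/orders:1.0
  db:                         # backing store, isolated from app code
    image: postgres:12
```

Shipping a fix to the orders service means rebuilding and restarting only that one container; the web front end and the database keep running untouched.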
Robust disaster recovery and security
Because containers are isolated, they do not interact with one another. If multiple containers are running on a single server and one of them crashes, the rest keep running without interruption. Similarly, if one container is compromised by a security breach, the impact is limited to that specific container. And because containers are lightweight, a failed one can be shut down and restarted promptly.
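That isolation also means the runtime can recover a crashed container automatically without touching its neighbors. As a small Compose-style sketch (the image name is hypothetical):

```yaml
# Compose fragment: restart just this container if its process crashes;
# other containers on the same host are unaffected.
services:
  orders:
    image: example/orders:1.0
    restart: on-failure
```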
But there are downsides, too
Of course, containers have their challenges too. Deploying applications in containers is a relatively new practice. As with every new technology, most developers are just learning how to leverage its benefits. Careful attention needs to be paid to industry standards and best practices in several areas. Especially during the learning curve, your organization must continue to ensure that data security, system integrity, and normal service levels are not compromised.
You will also need to consider container orchestration – the scheduling and management of container workloads based on user-defined parameters. (Kubernetes is the most popular container orchestration tool.) Key features you can expect from an orchestration platform include provisioning and deployment of containers, scaling containers up or down based on incoming workloads, ensuring high availability for your applications despite failures, detecting security flaws, and monitoring container health.
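A minimal Kubernetes Deployment shows several of those features in declarative form; the image name and health-check path here are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # the scheduler keeps three copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0
          livenessProbe:       # health monitoring: restart on failure
            httpGet:
              path: /healthz
              port: 8080
```

If a container dies or a node fails, Kubernetes replaces it to maintain the three declared replicas, and the replica count can be adjusted as incoming load changes.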
Another factor to consider is the potential for container sprawl over the longer term. Because containers follow a one-container, one-application model, they multiply the number of discrete units that need to be managed and monitored. The more containers you run, the more complex your environment becomes. Without proper monitoring and management, this can easily snowball into technical debt and other unanticipated complications.
How to get started with a container initiative in your organization
Clearly there’s a lot to consider before embracing containerization for application deployments. Before jumping on board, you need to analyze your current IT state. You must decide whether you want to build net-new applications or refactor legacy workloads. Containerization is typically better suited to new projects, since it is relatively simple to create the necessary container images from Docker templates once you have taken specific requirements – microservices patterns and immutable infrastructure design – into consideration. If refactoring, the applications best suited for containerization are those that can be broken down into individual components. What matters is that those components can be sliced off and isolated in containers without causing disruptions that lead to massive rewrites.
Another important consideration is that the skills your team relies on today will need to be reshaped and upgraded. Although containerization is moving incredibly fast, the space is still at an early stage, with maturity issues and growing pains, and there is a shortage of talent with hands-on experience. It makes sense to start by developing your teams’ skill sets on a simple containerization project. You can also encourage your teams to participate in the Cloud Native Computing Foundation (CNCF) and other organizations that support Kubernetes, so your developers and IT operators stay up to date on the latest offerings. Many organizations also turn to consultancies or third-party vendors to anticipate potential pitfalls and select good candidate applications for subsequent initiatives.
Contact us if you’d like to find out more about containerization and how it can be applied in your organization.