DevOps transformations have made major headway among enterprises in the past few years and will continue to expand; 2019 is predicted to be a crucial year for leaders to plan for and implement them across industries. Among senior executives, there is growing acknowledgement that the role of DevOps is evolving: from driving marginal efficiency in isolated projects to being a catalyst for innovation and disruption as part of a widespread enterprise trend. New estimates from IDC suggest that the DevOps software market will grow from $2.9 billion in 2017 to $6.6 billion in 2022. So, what are the emerging technologies and techniques that will spur this growth? We have pulled together our predictions of the trends that will drive DevOps in 2019. Here are our top picks:
AI-accelerated DevOps will start making inroads
AI is poised to have a big impact on DevOps and to transform how teams develop, deliver, deploy, and manage applications. Experts believe AI techniques can make the DevOps pipeline smarter, with the ability to predict the impact and risk of deployments, spot procedural bottlenecks, and identify automation shortcuts. AI-based predictive analytics will make it easier to understand where problems arise in continuous integration (CI) and continuous delivery (CD), and to act on data collected from customers, leading to greater efficiency in operational capacity planning and better pre-deployment fault prediction. For example, application performance metrics, processed the right way, can not only identify when a server is down but also feed automated decision-making that enables decisive action. This trend will also accelerate collaboration between application developers and data scientists in creating AI-enhanced solutions. According to Gartner, by 2022 at least 40% of new development projects will have AI co-developers on their team.
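To make the idea of pre-deployment fault prediction concrete, here is a minimal sketch of a statistical baseline that a smarter, AI-assisted pipeline would build on: flagging anomalous post-deployment latency samples with a simple z-score. The metric values and threshold are hypothetical, not from any specific product.

```python
# Illustrative sketch: flag anomalous performance samples with a z-score.
# Values and threshold are hypothetical examples.
from statistics import mean, stdev

def flag_anomalies(latencies_ms, threshold=2.0):
    """Return samples whose z-score exceeds the threshold."""
    mu, sigma = mean(latencies_ms), stdev(latencies_ms)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [x for x in latencies_ms if abs(x - mu) / sigma > threshold]

# Post-deployment latency samples (ms) with one obvious outlier:
samples = [102, 98, 101, 99, 103, 97, 100, 450]
print(flag_anomalies(samples))  # → [450]
```

A real AI-driven pipeline would replace the z-score with a learned model and wire the result into deployment gates, but the shape of the decision, metrics in, risk signal out, is the same.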
Containerization will not be novel anymore
Growing adoption of DevOps and multi-cloud architecture is going to drive greater use of container technologies across large enterprises. The application container segment will scale to $2.7 billion by 2020, according to a forecast by 451 Research. As software development and deployment grow in scale, container production clusters will grow in size and complexity, and orchestration tools will be in high demand as an effective means of managing that infrastructure complexity. Kubernetes has already exploded onto the scene as the fastest-growing container orchestration technology. As a demonstration of Kubernetes' dominance, Docker has begun incorporating Kubernetes into its enterprise products, while still investing in its own orchestration tool, Swarm. Around the world, many CIOs and technologists have already adopted Kubernetes, and it will continue to play a big role in making containers mainstream in the coming year.
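The core job an orchestrator automates is placement: deciding which node should run each container given resource requests and spare capacity. The toy sketch below illustrates that decision in isolation; the node names, CPU figures, and greedy strategy are hypothetical simplifications, not how Kubernetes' scheduler is actually implemented.

```python
# Toy sketch of the placement decision an orchestrator automates:
# assign each container to the node with the most spare CPU.
# Names and capacities are hypothetical.
def schedule(containers, nodes):
    """containers: list of (name, cpu_millicores) requests.
    nodes: dict of node_name -> spare cpu (millicores).
    Returns {container_name: node_name}."""
    free = dict(nodes)
    placement = {}
    for name, cpu in sorted(containers, key=lambda c: -c[1]):  # biggest first
        node = max(free, key=free.get)  # node with most spare capacity
        if free[node] < cpu:
            raise RuntimeError(f"no node can fit {name}")
        free[node] -= cpu
        placement[name] = node
    return placement

print(schedule([("web", 500), ("cache", 250), ("worker", 750)],
               {"node-a": 1000, "node-b": 1000}))
```

Production orchestrators layer on health checks, rescheduling on node failure, affinity rules, and rolling updates; the point here is only that placement becomes code rather than a manual runbook.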
Functions-as-a-Service (FaaS) will take off
As more technology professionals become comfortable running containers in production, we can expect a spike in the adoption of Functions-as-a-Service (FaaS), also referred to as serverless computing. FaaS eliminates the need for businesses to pay for idle servers: instead of running an application on a server you provision, you run individual functions directly in the cloud, triggered per event and billed per task. In other words, you pay only for the compute time you consume; there is no charge when your code is not running. Amazon's AWS Lambda has already emerged as the biggest and best-known example of serverless computing; other providers include Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions. A recent survey by the Cloud Foundry Foundation, a nonprofit that oversees an open source platform and is a collaborative project of the Linux Foundation, revealed that 22% of respondents are already using serverless technology and nearly 50% are evaluating it.
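The event-driven model above can be sketched with a minimal AWS Lambda-style Python handler: the platform invokes the function once per event and bills only for execution time, with no server process for the team to manage. The event shape and the local invocation at the bottom are illustrative assumptions; in production the cloud runtime calls the handler.

```python
# Minimal Lambda-style handler: invoked per event, billed per execution.
# The event payload shape here is a hypothetical example.
import json

def handler(event, context):
    """Entry point the FaaS runtime calls for each incoming event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration only:
print(handler({"name": "DevOps"}, None))
```

Note that the function holds no state between invocations and starts nothing itself, which is exactly what lets the provider scale it to zero (and to thousands of parallel invocations) on demand.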
DevSecOps will become a priority
Part and parcel with the enterprise scale-up of DevOps is the growing acceptance that security and compliance must be seamlessly integrated into DevOps transformations if they are to succeed. The way we do computing, from cloud to microservices to serverless, has completely shifted the roots of software engineering. The network perimeter we once knew no longer exists, and the security industry must constantly keep up with an evolving attack surface.
In the 2018 DevSecOps Community Survey, approximately 33% of respondents blamed application-layer vulnerabilities for security breaches. Since the application is the new entry point for attackers, organizations will need to adopt a programmatic approach to application security that starts with injecting security thinking as early as possible into the software development lifecycle, an approach commonly referred to as DevSecOps. 2019 will see widespread adoption of DevSecOps across enterprises as acceptance of its core principles reaches critical mass in the hearts and minds of IT practitioners. Mainstream DevOps will start treating security as code, and development and security teams will work hand in hand across multiple points in DevOps workflows in a way that is largely transparent and preserves the teamwork, agility, and speed of DevOps and agile environments.
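"Security as code" in practice often starts as a small automated gate in the pipeline. The sketch below shows the idea with a check that fails the build when a pinned dependency matches a known-vulnerable version; the advisory entries are made-up examples, and a real team would pull them from an actual vulnerability feed or use an established scanner.

```python
# Sketch of a security-as-code CI gate. The advisory entries are
# hypothetical examples, not real vulnerability data.
ADVISORIES = {("requests", "2.5.0"), ("flask", "0.10")}

def audit(requirements_text):
    """Return pinned (package, version) pairs that match an advisory."""
    hits = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            pkg, _, ver = line.partition("==")
            if (pkg.lower(), ver) in ADVISORIES:
                hits.append((pkg, ver))
    return hits

reqs = "requests==2.5.0\nflask==1.1.2\n"
print(audit(reqs))  # → [('requests', '2.5.0')]
```

Because the check is just code, it runs on every commit without a security engineer in the loop, which is the "largely transparent" integration the paragraph above describes.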
Automation will remain key
There is a growing realization that to achieve greater responsiveness, operational resilience, and faster time-to-market throughout the software delivery lifecycle, you need to link development with IT operations through automation. We are hearing more and more users and vendors talk about the need to apply automation across all stages of the DevOps cycle. This will remain the main goal to strive for in 2019, a necessity irrespective of how far the DevOps transition has progressed. Scaling automation in highly complex ecosystems will be particularly tricky, and organizations will need to conduct a complete audit of development and operations environments to create a base level of situational awareness. From there, they can look at the lifecycle of software delivery, everything from the initial commit through automated builds, testing, beta, and release, and identify which resources can be provisioned and deployed as code.
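The commit-to-release lifecycle described above can be sketched as pipeline-as-code: the stages are declared as data and executed in order, stopping at the first failure. The stage names and trivial step functions are hypothetical placeholders; real pipelines would shell out to build, test, and deploy tooling.

```python
# Minimal pipeline-as-code sketch. Stage names are hypothetical;
# each step would normally invoke real build/test/deploy tooling.
def run_pipeline(stages):
    """stages: list of (name, callable returning bool).
    Runs stages in order and returns a log of (name, passed)."""
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, ok))
        if not ok:
            break  # fail fast: never release a broken build
    return log

pipeline = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy-beta", lambda: True),
    ("release", lambda: True),
]
print(run_pipeline(pipeline))
```

Declaring the stages as data is what makes the audit step above tractable: the pipeline definition itself becomes a reviewable, versioned artifact rather than tribal knowledge.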
The changes we’re going to see in 2019 will pave the way for making many of these advancements more universally accepted. And that, to us, is something to get very excited about. There are potentially huge gains to be had, but it is also important to acknowledge that the industry overall hasn’t yet developed enough best practices in some of these areas. There will be much to experiment with and learn as practitioners explore some relatively uncharted territory.
Krittika Banerjee