- A DevOps-driven approach to enable automation and continuous delivery
- A microservices model to enable highly focused, yet loosely coupled, modular services that can scale easily with demand
- Use of containers to provide an ideal application deployment unit and a self-contained execution environment for faster deployment
While microservices have been around for a while, they have recently gained significant traction as the basis for a software architecture in organizations looking to modernize their enterprise IT systems. Microservices are essentially an architectural style in which an application is structured as a collection of small, self-sufficient, domain-specific services that can be deployed and tested in isolation. Their core value proposition centers on getting software to market faster: independent teams work on independent application components, speeding up both bug fixes and the addition of new features. While the benefits of microservices are clear, relatively little has been written about the challenges associated with their implementation. In this post, we will look at some of the tricky issues that microservices can introduce.
#1: Distributed transactions increase complexity
Transaction management is simpler in monolithic applications because they rely on a single, shared database server. Transactions can be initiated at the database level and committed or rolled back depending on the final outcome. With microservices, however, each service is a separate system with its own database, which introduces a whole host of complexities when it comes to committing a transaction.
Think of an e-commerce application that takes orders from customers, verifies inventory and available credit, and ships orders. In a distributed scenario, the architecture would be split into a dozen smaller services, such as a user-handling service and a payment gateway service, each with its own database. The application cannot simply use a local ACID transaction, because there is no direct and simple way of maintaining ACID guarantees across multiple databases. This is the key challenge of transaction management in microservices.
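A common workaround is the saga pattern: break the distributed transaction into a sequence of local transactions, each paired with a compensating action that undoes it if a later step fails. Below is a minimal sketch of an orchestrated saga; the in-memory dictionaries are hypothetical stand-ins for the separate service databases.

```python
class SagaFailure(Exception):
    pass

def run_saga(steps):
    """Execute (action, compensation) pairs; on failure, undo completed steps in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo()
            raise SagaFailure("saga rolled back")

# Hypothetical local state standing in for three services' databases.
inventory, payments, orders = {"widget": 5}, [], []

steps = [
    (lambda: orders.append("order-1"),
     lambda: orders.remove("order-1")),
    (lambda: inventory.__setitem__("widget", inventory["widget"] - 1),
     lambda: inventory.__setitem__("widget", inventory["widget"] + 1)),
    (lambda: payments.append("charge-1"),
     lambda: payments.remove("charge-1")),
]

run_saga(steps)
print(orders, inventory, payments)  # order recorded, stock decremented, payment charged
```

Note the trade-off: a saga gives eventual consistency rather than ACID isolation, so other services may briefly observe intermediate states.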
#2: Testing microservices can be cumbersome
Testing microservices-based applications is not easy. In addition to testing each microservice individually, the APIs must also be tested to make sure they are all seamlessly communicating with each other. The sheer number of services that make up an application, alongside the dependencies between these services, can create complexities in the testing process. For instance, a tester would have to carefully analyze multiple logs across each of the various services in order to write effective integration tests. With several independent teams working simultaneously on distinct functionalities, it can become quite challenging to pick an ideal time window for extensive testing of the entire software and to coordinate overall quality assurance.
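One way to tame integration testing is a consumer-driven contract check: the consuming team pins down only the response fields it depends on and verifies them, first against a stub and later, in CI, against the live service. A minimal sketch follows; the order service, its endpoint, and its fields are hypothetical.

```python
# The consumer's contract: only the fields and types this team relies on.
EXPECTED_CONTRACT = {"order_id": str, "status": str, "total_cents": int}

def fetch_order_stub(order_id):
    # Stand-in for a real HTTP call to the (hypothetical) order service.
    return {"order_id": order_id, "status": "shipped", "total_cents": 1999}

def check_contract(response, contract):
    """Assert that the response carries every contracted field with the right type."""
    for field, ftype in contract.items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], ftype), f"wrong type for {field}"
    return True

print(check_contract(fetch_order_stub("o-42"), EXPECTED_CONTRACT))
```

Because each consumer only asserts the fields it actually uses, the producing team can evolve the rest of the payload without breaking anyone.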
#3: Operational challenges can increase
Because your operations teams need to manage an entire ecosystem of services rather than just one application, the ability to instantly provision servers becomes an important prerequisite. If it takes days or months to provision resources, you cannot keep up with the pace needed to make the most of microservices. You need to provision and bring down infrastructure automatically, as these services are scaled up and down. Plus, the capability to provision must be distributed across a larger team, instead of just one senior Dev or DevOps person.
You should also be prepared to address multiple failure issues, such as system downtime, slow service and unexpected responses. Here, you will likely need a load balancing strategy that is much more complicated than for monolithic applications.
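For example, a basic failure-handling strategy pairs a fallback response with a circuit breaker, so repeated downstream failures fail fast instead of piling up timeouts. A minimal sketch, with a deliberately failing stand-in for the downstream service:

```python
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, fallback):
        if self.failures >= self.max_failures:
            return fallback  # circuit open: fail fast, skip the doomed call
        try:
            result = fn()
            self.failures = 0  # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return fallback

breaker = CircuitBreaker(max_failures=2)

def flaky_service():
    # Stand-in for a downstream service that is timing out.
    raise TimeoutError("downstream service is slow")

for _ in range(4):
    print(breaker.call(flaky_service, fallback="cached response"))
```

Production-grade breakers also reset after a cooldown and emit metrics, but the shape is the same.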
Furthermore, with so many services to manage, each with its own language, platform and APIs, you are likely to encounter many issues that were left undetected in your testing environment. Therefore, it becomes important to ensure robust monitoring to manage the entire infrastructure and detect serious problems quickly.
#4: Lack of cultural readiness
The microservices approach requires teams to be reorganized into self-contained, autonomous units, so that every service is developed, tested, deployed and maintained independently. One tenet of this approach is what Amazon calls the "two-pizza team" rule: create small, cross-functional teams that can be fed with two pizzas (fewer than ten people). Each team should be balanced with expertise across Dev, Test, Ops, DB administration, UX, and even product management in some cases.
While this may augur well for teams' individual efficiency, it can create challenges for the organization at large. Teams may lose visibility into what other teams are working on, with each service becoming a black box of unknown functionality and purpose. This can hinder understanding of the overall system, resulting in duplicated work and added complexity. If responsibilities are not clearly defined, it can also lead to finger-pointing and blame games.
As application development trends keep evolving, the debate between leveraging microservices or using traditional monolithic architectures will only become more pronounced. If you are looking at microservices, it’s worth taking a step back and considering all of these complexities as you build your strategy. A good way to begin understanding the challenges is to start with small services that are easy to extract, so you can achieve faster results and gain early experiences. This will help you track and gauge the trade-offs related to that particular application and environment. And as you gain these early experiences, it will become much more reasonable to further invest in microservices.
If you want to find out more about how you can leverage microservices for your business, you can contact us.
This blog was originally published on the CloudBees website on March 6, 2019.
In today’s digital economy, customers have become accustomed to continuous innovation at unprecedented speeds. To produce great applications and services that measure up to customer expectations, you need strategic software development supported by modern development and delivery processes. Business leaders need to be aggressive about driving the adoption of DevOps and continuous delivery within their organizations in order to support innovation at scale and boost competitive advantage. In this blog post, we’ll dive into why DevOps and continuous delivery (CD) are the single greatest changes your organization can embrace to fast track your digital transformation goals, and what you will need to do in order to get started.
Why continuous delivery?
DevOps is the philosophy and cultural process that guides developers, testers and IT operations to produce more software releases at greater speeds and with better results; continuous delivery is the methodology that helps operationalize DevOps principles. CD is about automating tasks to reduce manual interactions in the process of continuously integrating software and moving successfully tested software swiftly and more frequently into production. Project teams can make corrections and improvements along the way based on continuous automated feedback about what works, and what needs to be done better.
The move to continuous delivery and experimentation is a crucial part of creating a more agile landscape for digital transformation. The IT organization can get working software into the hands of customers sooner by adding new functionality to existing systems piecemeal, then working through improvements selected from a highly prioritized list of issues, dealing with the most pressing ones in the shortest possible time with the help of tight feedback loops. Products that were once released every few months with fanfare can now be updated every few days. Additionally, when incremental changes are released more frequently, it becomes easier to spot and correct deficiencies early, and to roll back smaller changes when necessary to prevent unintended changes from entering the production environment.
A continuous delivery model gives your IT organization the agility and production readiness it needs to respond swiftly to market changes. It makes your teams more productive and your products more stable, and increases your flexibility to facilitate fail-fast experimentation and continuous innovation everywhere — as corroborated by a CloudBees study based on more than 100 business value assessments of enterprises in various stages of DevOps and continuous delivery implementation. Most dramatically, with the increased responsiveness this model affords, you can drive faster growth, help the business expand into newer areas, and compete more effectively in the digital age.
DevOps is key for enabling CD
Continuous delivery best suits organizations that are equipped with a collaborative, DevOps culture. In a continuous paradigm where the end-to-end process from idea to deployment is optimized, you cannot afford silos and handoffs between development and operations teams. Companies that don’t embrace a DevOps-based culture and fail to knock down silos have difficulty building the kinds of IT and development environments that are required to compete in the digital age. At the end of the day, you can only be as agile as your least flexible team.
Instituting a culture that can allow for increased focus on relentless improvement is very much a people issue. In most cases, it calls for a significant shift in mindset and behavior. For instance, in order to achieve the continuous delivery goal of speeding up releases and increasing software quality, you need to make a smooth shift-left of various activities—such as continuous testing and security. This requires proactive, cross-functional communication at all levels of the company to bridge the gap that has traditionally existed among development, testing, security and the rest of the business. Misguided IT organizations will not be able to fundamentally change how they work, staying instead at a superficial level by only changing high-level processes and structures.
Another valuable organizational shift is to encourage autonomous teams. Development and operations teams need to be empowered to make their own decisions and apply changes, without having to go through convoluted decision-making processes. This involves avoiding bureaucracy, trusting teams, and creating an environment that is free of a fear of failure and rewards experimentation.
The road ahead
Continuous delivery enabled through steady, consistent DevOps practices will be key to customer success in the digital age. As with everything else, there is no one-size-fits-all approach to introducing and implementing these methodologies and ensuring success. But there is enormous value in starting small — experimenting, gathering feedback, learning, revising, and gradually expanding the scope.
Success of your Agile and DevOps initiatives might often be a double-edged sword for technology teams. Happier customers, positive sales numbers, and increased opportunities inevitably lead to only one thing for the CTO — the need to scale. The question is, how? In this blog post, we draw out an overview of some of the capabilities you need to develop a strategy for scaling and keep yourself ahead of new organizational demands as your company matures.
Consistent performance at scale
As distributed teams grow, it becomes critical that their software is available around the clock and performs at a level that enables the teams to do their jobs. Applications need to maintain consistent performance and response times, irrespective of growing numbers of users and workloads, and support the collaboration needed across teams to drive shared business goals. Downtime and slowdowns have a direct impact on business metrics and are unacceptable to a growing organization.
Teams relying on single servers in their network architecture face a substantial outage risk whenever server loads increase dramatically, whether due to concurrent usage of the application, performance-intensive workloads, or even routine maintenance like patching and version upgrades. Systems should be designed to facilitate instant scale-out: adding new nodes should redistribute load uniformly while preserving dedicated bandwidth priorities. With an infrastructure architected for resilience and business continuity, you can keep your mission-critical apps up and running and manage continued growth.
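The routing scheme matters here: naively assigning each request key by hash modulo the node count remaps most traffic every time a node is added, which is why techniques such as consistent hashing (which moves only a small fraction of keys) are the usual choice. A small illustration of the naive scheme's churn, with made-up node names:

```python
import hashlib

def node_for(key, nodes):
    # Naive routing: hash the key, take it modulo the node count.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

keys = [f"user-{i}" for i in range(1000)]
before = {k: node_for(k, ["n1", "n2", "n3"]) for k in keys}
after = {k: node_for(k, ["n1", "n2", "n3", "n4"]) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys changed nodes")  # typically around 75%
```

With consistent hashing, adding a fourth node would move only roughly a quarter of the keys, the share the new node takes on, rather than most of them.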
Improved data capacity and speed
It is no secret that data volumes grow along with the number of users, and both can have a negative impact on performance at scale. The need for speed and increased data capacity means that single-server systems are often unable to keep up: a single-server architecture typically has a fixed ingest throughput because it runs on one machine. These constraints can become a serious liability for applications (or organizations) aiming to scale.
Adequate visibility and control
As your growth accelerates, you are faced with increasingly challenging requirements around security and regulatory compliance. Large organizations face the added complexity of having several distributed users working from multiple locations, multi-jurisdictional global structures, and extensive legal demands. Without proper visibility or control, it becomes impossible to coordinate disparate teams, create consistency and prevent bad actors from negatively impacting your tools or teams.
The challenge many organizations face is balancing team autonomy with the right level of control and governance. Administrators of large organizations cannot afford to become a bottleneck, especially as the number of users accessing different applications increases and there is growing pressure to deliver customer value rapidly and regularly. It therefore becomes important to give administrators better ways to delegate work, while ensuring they can monitor the actions of users and maintain appropriate oversight as teams grow.
Moving on at the right time
If you are looking to scale your DevOps practice across the entire organization, the need for enhanced scale, speed, user support and data capacity means that single-server systems are often unable to address the needs of modern applications. You need to identify and transition to more robust solutions that alleviate the constraints of single-server systems and help you stay ahead of the complexities that come as your organization matures. We recommend planning ahead by choosing the right foundation: one designed to stay efficient and stable under heavy usage, and to handle the other complexities around ease of administration, security and compliance.
If you are thinking about the broader deployment of DevOps and not sure about what to anticipate while scaling, we are here to help you take an informed approach.
DevOps transformations have made major headway among enterprises in the past few years and will continue to be extensive, and 2019 is predicted to be a crucial time for leaders to plan for and implement DevOps across industries. Among senior executives, there is growing acknowledgement that the role of DevOps is evolving — from driving marginal efficiency in isolated projects to being a catalyst for innovation and disruption as part of a widespread enterprise trend. New estimates from IDC suggest that the DevOps software market will grow from its 2017 result of $2.9 billion to $6.6 billion in 2022. So, what are the emerging technologies and techniques that will spur this growth? We have pulled together our predictions of the trends that will drive DevOps in 2019. Here are our top picks:
AI-accelerated DevOps will start making inroads
AI is poised to have a big impact on DevOps and transform how teams develop, deliver, deploy, and manage applications. Experts believe AI techniques have the potential to make the DevOps pipeline smarter, with the ability to predict the impact and risk of deployments, spot procedural bottlenecks and identify automation shortcuts. AI-based predictive analytics will make it easier to understand where problems arise in continuous integration (CI) or continuous delivery (CD), and enable better action on data collected from customers, leading to greater efficiency in operational capacity planning and better pre-deployment fault prediction. For example, if processed in the right way, application performance metrics can not only identify when a server is down but also feed automated decision-making to enable decisive action. This trend will also accelerate collaboration between application developers and data scientists on AI-enhanced solutions. According to Gartner, by the year 2022 at least 40% of new development projects will have AI co-developers on their team.
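As a concrete illustration, even a simple statistical baseline over performance metrics can flag the kind of anomaly such a pipeline would act on. This is a minimal sketch; the latency stream, window size, and threshold are all illustrative.

```python
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations above the rolling baseline."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Illustrative response-time stream (ms) with one obvious spike.
latency_ms = [100, 102, 98, 101, 99, 100, 103, 950, 101, 100]
print(anomalies(latency_ms))  # flags index 7, the 950 ms spike
```

Real AIOps tooling layers seasonality models and learned thresholds on top of this idea, but the feedback loop (baseline, deviation, alert or automated action) is the same.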
Containerization will not be novel anymore
Growing adoption of DevOps and multi-cloud architectures is going to give rise to greater use of container-related technologies across large enterprises. The application container segment will scale to $2.7 billion by 2020, according to a forecast by 451 Research. An increase in the scale of software development and deployment will also lead to an increase in the size and complexity of container production clusters, and orchestration tools will be in high demand as an effective means of dealing with the associated infrastructure complexities. Kubernetes has already exploded onto the scene as the fastest-growing container orchestration technology. As a demonstration of Kubernetes' dominance, Docker has begun incorporating Kubernetes into its enterprise products, while still investing in its own orchestration tool, Swarm. Around the world, many CIOs and technologists have already adopted Kubernetes, and it will continue to play a big role in making containers mainstream in the coming year.
Functions-as-a-Service (FaaS) will take off
As more and more technology professionals become comfortable using containers in production, we can expect a spike in the adoption of Functions-as-a-Service (FaaS), also referred to as serverless computing. This eliminates the need for businesses to pay for redundant server capacity. Instead of running an application on a server you manage, you run individual functions directly in the cloud, invoked and billed per task, making the model event-driven. In other words, you pay only for the compute time you consume; there is no charge when your code is not running. Amazon's AWS Lambda has already emerged as the biggest and best-known example of serverless computing; other providers include Google Cloud Functions, Microsoft Azure Functions and IBM Cloud Functions. A recent survey by the Cloud Foundry Foundation — a nonprofit that oversees an open source platform and is a collaborative project of the Linux Foundation — revealed that 22% of respondents are already using serverless technology and nearly 50% are evaluating it.
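The programming model is simple: you write a stateless handler that the platform invokes once per event. The sketch below mirrors the handler signature AWS Lambda uses for Python; the event fields themselves are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Stateless, per-event entry point; billed only while it runs."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally you can exercise it by passing a sample event directly:
print(lambda_handler({"name": "DevOps"}, None))
```

Because each invocation is independent, the platform can scale from zero to thousands of concurrent executions without any capacity planning on your part.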
DevSecOps will become a priority
Part and parcel with the enterprise scale-up of DevOps is the growing acceptance that security and compliance must be seamlessly integrated into DevOps transformations if they're to succeed. The way we do computing, from cloud to microservices to serverless, has completely shifted the roots of software engineering. The network we knew no longer exists, and the security industry needs to constantly keep up with an evolving attack surface.
In the 2018 DevSecOps Community Survey, approximately 33% of respondents blamed application-layer vulnerabilities for security breaches. Since the application is the new entry point for attackers, organizations will need to adopt a programmatic approach to application security that starts with injecting security thinking as early as possible into the software development lifecycle — what is commonly referred to as DevSecOps. 2019 will see widespread adoption of DevSecOps across enterprises, as the acceptance of its core principles reaches critical mass in the hearts and minds of many in IT. Mainstream DevOps will start treating security as code, and development and security teams will work hand in hand across multiple points in DevOps workflows in a way that is largely transparent and preserves the teamwork, agility and speed of DevOps and agile environments.
Automation will remain key
There is a growing realization that to amplify responsiveness and operational resilience, and to accelerate time-to-market throughout the software delivery lifecycle, you need to link development with IT operations through automation. We are hearing more and more users and vendors talk about the need to apply automation across all stages of the DevOps cycle. This will remain the main goal to strive for in 2019 — a necessity irrespective of how far the DevOps transition has progressed. Scaling automation in highly complex ecosystems will be particularly tricky, and organizations will need to conduct a complete audit of development and operations environments to create a base level of situational awareness. From there, they can look into the lifecycle of software delivery — everything from the initial commit to the auto-build to testing, beta and release — and identify what resources can be provisioned and deployed as code.
The changes we’re going to see in 2019 will pave the way for making many of these advancements more universally acceptable. And that, to us, is something to get very excited about. There are potentially huge gains to be had, but it is also important to acknowledge that the industry overall hasn’t yet developed enough best practices in some of these areas. There will be much to experiment and learn, as practitioners will be exploring some relatively uncharted territory.
Why is DevSecOps Important?

IT infrastructure and culture have undergone huge changes in recent years. Traditional security methods, which tend to be more bureaucratic, monolithic and "one size fits all," are no longer adequate to address the security challenges compounded by many aspects of DevOps:

- High-velocity IT leaves security teams flat-footed: DevOps outfits push and modify batches of code over extremely short time frames (hours or even days), which can far outpace the speed at which security teams can keep up with code review, vulnerability scanning and related tasks. This can be a major challenge for security and compliance.
- DevOps and cloud environments: The cloud plays a big role in many organizations' DevOps stories, and vice versa. In such dynamic environments, which operate at huge scale, even a simple misconfiguration or security malpractice, such as the sharing of secrets (API keys, privileged credentials, SSH keys, etc.), can be amplified, leading to widespread operational dysfunction and countless exploitable security vulnerabilities.
- The use of containers: Vulnerabilities, misconfigurations and other weaknesses in containers can spawn new security headaches. A study by Threat Stack reveals that a whopping 94% of respondents indicate that containers pose negative security risks for their organizations.
- Privilege exposures: A typical DevOps environment consists of myriad tools, is highly interconnected and evolves rapidly. Privileged account credentials, SSH keys, API tokens and the like may be tampered with in the absence of adequate security controls. Various orchestration, configuration management and other DevOps tools may also be granted vast privileges, potentially allowing a hacker or piece of malware to gain full control of the organization's infrastructure and data.

Past attitudes of delegating security to specialized teams placed at the end of the development cycle are an obstacle to dealing with these modern security challenges.
Security needs to be built into the foundations of DevOps and fully integrated into your software development pipeline from the very beginning, so your teams can share feedback continuously and address security issues as they arise, rather than at the end of the lifecycle. The practice of DevSecOps views "security as code": security is integrated into every aspect of the DevOps lifecycle, from inception, design, build and test through release, maintenance, support and beyond. It pulls in the information security team to collaborate with the application development and IT operations teams. With all three teams working together, it's easier to build security controls into the deployment pipeline and to reduce the delays and flaws that result when an enterprise treats security as an outside entity, siloed from the development process.
How to go from DevOps to DevSecOps?

Turning DevOps into DevSecOps isn't as simple as merely adding a security team. It involves incorporating security into every team and process. Here are some tips on the key areas to focus on, keeping in mind the challenges that come with such a transition:

- Get everyone on the same page: DevSecOps is about enabling everyone on the DevOps team — whether on the dev or ops end — to be the best security practitioner they can be. The goal is to make security an essential part of the DevOps culture and enable joint ownership of issues as they arise. Dev and security teams can't pass the buck when it comes to securing modern infrastructure. Every developer and operations hire should be trained on the basics of secure coding practices and the most common security mistakes at the beginning of their tenure. Similarly, security engineers should have a seat at the table with cross-functional DevOps teams from the beginning, even in the planning stages. For instance, if your security engineers participate when DevOps teams are planning their minimum marketable features (MMFs), they can contribute by building threat models at the feature or service level. The pressure to get projects out on time can lead to risky shortcuts even in organizations that normally take security seriously—and this is when security awareness at this level yields returns, forcing your team to think through security implications in the midst of rapid commits and releases, or nudging them to halt deployments for penetration testing.
- Shift security left: As mentioned earlier, security needs to shift left, starting in the early stages of your DevOps processes. Injecting code analysis tools and automated penetration tests earlier in the development process makes it possible to capture and eliminate security flaws at every step, and provides feedback about vulnerabilities as soon as they appear. This up-front security work cuts down the risk of costly and time-consuming mistakes later in the cycle.
- Create transparent policies: Enforcing effective policy and governance is critical to creating alignment between different teams. The collaboration between teams needs to be properly considered when policy is laid out. For instance, is the security element thoroughly discussed when you are treating your infrastructure as code? Organizational policy should also cover other aspects, such as acceptable cloud deployment practices and models, the data types that can and cannot migrate to the cloud, and compliance requirements.
- Automate security: You cannot match the speed of security to your DevOps processes without automation. With automated security tools for code analysis, configuration management, patching and vulnerability management, and privileged credential/secrets management, you can mitigate the risk arising from manual errors and reduce the associated vulnerabilities.
- Bear in mind that zero risk is impossible: The pursuit of perfection can be detrimental to the speed of DevOps and digital business. There is no such thing as perfect security. Organizations should therefore adopt a risk-adaptive approach that ensures continuous visibility and assessment of vulnerabilities, so that their security and compliance posture can be continually adapted and the right actions taken at any given point. This is what Gartner refers to as "continuous adaptive risk and trust assessment," or CARTA.
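As one small, concrete example of automated security, a pipeline gate might scan committed text for obvious hard-coded secrets before merge. The patterns below are illustrative only; dedicated secret-detection tools cover far more cases.

```python
import re

# Illustrative patterns: the shape of an AWS access key id, and
# obvious password/secret assignments in source text.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(text):
    """Return the 1-based line numbers that look like hard-coded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append(lineno)
    return findings

sample = 'db_host = "localhost"\npassword = "hunter2"\n'
print(scan(sample))  # flags line 2
```

Wired into CI, a non-empty result would fail the build, giving developers feedback within minutes of the commit rather than during a late-stage audit.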
Conclusion

A shift to DevSecOps won't be quick, easy or organic. It requires a mindset shift: stop looking at security as one-time gating and reimagine it as a continuous security assurance process, integrated from the beginning of the development timeline and assessed with each new iteration. There must be organizational commitment all the way to the top to dedicate time and money to develop security awareness at every level, invest in the right security tools, arrange for the appropriate level of staff training and implement as much automation as possible. You can start by fully understanding your current processes and lifecycle. Where are the gaps and shortcomings in relation to integrating security? Is there a champion in the organization who understands this? And more importantly, are they empowered to act and help enable change? Once these basics have been addressed, it's about acting on them. As with anything, the actual implementation will determine how effective the transition is. If you haven't already begun the process, the time is now to merge your security goals with DevOps. Contact us and let us help you understand the benefits, challenges, and best practices, and choose the right approach to making security a bigger focus in your organization.
"The most common question companies ask today is 'how can we configure our applications so that we can do CI/CD with containers?'" — Mike Maheu, VP of Engineering and Strategy, Go2Group

Charlene: What are the biggest impediments for companies in moving code quickly?

Mike: It goes top down. A lot of times there is not a lot of buy-in from the corporate perspective. A lot of the tools come from the bottom and end up with fragmentation. So, there should be a process that couples the actual tooling with what they (companies) are trying to do. Larger companies have multiple software products and a lot of different teams. The higher level wants to see across the landscape, and they want to make sure that they are able to deliver the changes to their applications. The voices are raised up to the top. And they are struggling with delivering software — when we look at the old-school ways of "here's our application, please deploy it in the same environment." These days I am talking a lot about newer technologies like containerization with Docker, Kubernetes, Jenkins Core, and tying things to the cloud. A lot of companies also want to move their on-premises tooling to cloud tooling.

Charlene: Are a majority of these applications and development work being moved to the cloud, or are they between cloud and on-premises?

Mike: Not long ago, larger companies — government and financial — were scared of the cloud. The first step was when some of them got onto Git for their source control management, and Atlassian's Bitbucket. People started hosting their code outside their fortress. The first thing that large companies agree to put on the cloud is their Dev tools. It's low risk — the best bet to get speed of delivery when we talk about containerization and the power of the cloud to deliver at scale. It's a powerful thing!

Watch the full interview: https://www.go2group.com/resources/videos/

For more information, write to us at email@example.com
Is DevOps implementation easy? The likes of Netflix and Facebook have demonstrated, through continuous improvement, the technical and business benefits of DevOps — shorter development cycles, increased deployment frequency, and faster time to market. On the other hand, a high percentage of enterprises are still figuring it out, oscillating between short, quick successes and failing to make the big jump to mainstream adoption.