Microservices and Continuous Deployment
by Lauren Camacci, on Oct 9, 2019 12:00:00 PM
It’s overwhelming to change from a legacy application monolith to a microservices Continuous Deployment pipeline. In fact, the fear of switching over is so great that it often results in no changes at all.
But this fear of “breaking up” the monolith keeps your inefficient processes static, which in turn causes ongoing, never-ending pain. Usually the root cause for this vicious cycle is simply that we fear things we don’t understand. With Continuous Deployment and microservices, the lack of comprehension often stems from a misunderstanding about how many benefits “breaking up” a monolithic application really brings.
To help you get over your fears and take action, we’ve laid out the benefits that implementing microservices and Continuous Deployment brings to organizations.
What is Continuous Deployment?
Fear of change often begins with confusion about the difference between “Continuous Delivery” and “Continuous Deployment.” Here’s a quick breakdown:
Continuous Delivery refers to the practice of keeping all code in a ready state, deployable at any moment. It is the CD in “CI/CD.” CD focuses on automatically deploying successful packages from Continuous Integration (CI) processes into a test or QA environment that closely (ideally, exactly) mirrors your production environment. This saves time and headaches down the road: removing discrepancies between your test and production environments reduces the likelihood of emergency situations and surprise bugs when you finally hit production.
Continuous Deployment refers to automatically releasing those production-ready builds into production. Automation distinguishes Continuous Deployment from manual deployment. Automating your deployments decreases your lead times and, if done in tandem with CI/CD, decreases the likelihood of service interruptions due to bugs or breaks.
Unlike CI/CD, Continuous Deployment also involves the rest of your organization, because work that goes to production also affects marketing, sales, and other departments that need to make their own adjustments to prepare for the feature changes. Continuous Deployment works particularly well with package delivery, like Kubernetes clusters and the DevOps tools that come along with them.
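The flow described above, CI tests, a staging deployment, and finally an automated production release, can be sketched as a staged pipeline with a gate at every step. This is a minimal sketch, assuming each stage is a callable that returns True on success; the stage names are illustrative, and a real pipeline would shell out to your build and deployment tooling.

```python
# Minimal sketch of a CI/CD pipeline gate. All names here are
# hypothetical stand-ins, not a real pipeline API.

def run_tests():
    return True          # stand-in for the CI test suite

def deploy_to_staging():
    return True          # stand-in for pushing to a production-like env

def deploy_to_production():
    return True          # the step Continuous Deployment automates

def run_pipeline(stages):
    """Run stages in order; stop at the first failure so a broken build
    never reaches staging, let alone production."""
    for name, stage in stages:
        if not stage():
            print(f"Stage '{name}' failed; halting pipeline.")
            return False
    return True

STAGES = [
    ("unit tests",           run_tests),
    ("deploy to staging",    deploy_to_staging),
    ("deploy to production", deploy_to_production),
]

print(run_pipeline(STAGES))  # True when every gate passes
```

The point of the sketch is the ordering: production deployment sits behind the same automated gates as everything else, so a failed test halts the release instead of a human catching it later.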
The Monolith Model Nightmare
Continuous Delivery and Continuous Deployment can help your organization ensure that quality, production-ready builds can deploy seamlessly; they aren’t nearly as scary as they might seem. The nightmarish monolith application deployment process, however, should be much scarier to modern IT teams and companies.
As they first went to production, most successful applications were packaged and deployed using monolithic architecture. This worked great at the time, because monoliths are simple to develop, test, and deploy. The problem is that this approach doesn’t scale. The bigger and more complicated the package, the less effective monoliths are, sometimes reaching a nightmare level.
You’ve lived this nightmare before: Your team or project has a single build pipeline, usually of immense complexity, that follows hundreds and hundreds of steps to deploy monolithic applications from start to finish. Adding to this, other teams and projects rely on this same pipeline. This massive application is also usually so unwieldy that you’re constantly fixing bugs and slapping patches on problems in a manner that prioritizes speed, since the entire monolith stops for every bug.
These “brittle fixes” make the whole monolith weaker and more likely to break. When you and your team finally release your (very brittle) monolith application, it often breaks shortly after production. Even one bug can break the whole monolith. Once that happens, it’s time for a chaotic emergency response situation with war rooms and overnighters, and suddenly you’re living off stale coffee and are lucky if you can get home by midnight.
The monolithic application model and its sole build pipeline create a bottleneck for releasing applications that are often way too big and are likely to break. Using a single lane for multiple traffic flows makes little sense. The answer: microservices.
What are Microservices?
Microservices allow you to “break apart” the monolith into smaller pieces by creating separate components that can be managed by individual groups/teams. You can then implement Continuous Deployment because each feature of an application has its own microservice.
There are many benefits of moving to a microservices architecture, such as:
- Scalability: Where monoliths would become brittle and slow as they grew, microservices architecture allows more teams to work on more features in a loosely connected but largely independent way.
- Decreased risk of major service disruptions: Because each microservice deals with a single feature, fault isolation is much easier. When you can quickly identify where the problem is, you can quickly fix it.
- Lower lead times: Microservices help you get features to market faster, because features aren’t waiting on a monolith to release; each individual feature can go to production as it is ready. Quick reaction time to customer demand is becoming ever more important, and being the fastest, safest responder can really set your organization apart from others.
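The fault-isolation benefit above can be sketched in a few lines. This is a toy illustration, assuming each service registers a handler with a simple router; the router and handler names are hypothetical, not a real framework.

```python
# Sketch of fault isolation across microservices: a failure in one
# service degrades only its feature, not the whole application.

def call_service(handlers, name, request):
    """Route a request; if the target service is down or broken,
    return a scoped error instead of failing the entire app."""
    handler = handlers.get(name)
    if handler is None:
        return {"status": 503, "error": f"{name} unavailable"}
    try:
        return {"status": 200, "body": handler(request)}
    except Exception:
        return {"status": 500, "error": f"{name} failed"}

handlers = {
    "search": lambda req: f"results for {req}",
    # "checkout" is down (not registered), so only checkout requests fail.
}

print(call_service(handlers, "search", "coffee"))     # status 200
print(call_service(handlers, "checkout", "cart-42"))  # status 503; search still works
```

In a monolith, the equivalent failure would take the whole process down; here, identifying the broken feature is as simple as reading the error's service name.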
[Image Idea: four lanes of traffic, merging down to queue at a single toll booth, all bottlenecked, with an X over it; then four lanes of traffic, each with their own toll booth]
There are a few problems with microservices, though, that can delay Continuous Deployment:
- Lots of pipelines: If your company hasn’t yet automated CI/CD and Continuous Deployment, the huge number of pipelines needed for microservices can quickly overwhelm your team.
- Different stacks complicate release management: Different technology stacks across teams can make release management complicated.
- Challenging integration and load testing: Due to dependencies between services, integration and load testing can present a challenge, because the work arrives in many more, much smaller pieces, and end-to-end testing across services is rarely practiced.
- Siloes form easily: You have a greater need to communicate clearly to ensure your team’s update(s) to one microservice won’t negatively impact those of other teams’ microservices. Siloes can too easily form as teams work alongside one another but not fully together.
With the right tools and solutions, however, you can overcome these issues and make microservices work at your organization. The key is automation.
Automated Microservices Deployments
Automate your deployments with programs like Inedo’s BuildMaster. BuildMaster is a Windows-centric system but also works with Linux. It also supports custom integrations with Kubernetes, AKS, IBM Cloud, and other container platforms. You can automate deployments using familiar patterns like blue/green (LINK TO OUR ARTICLE) or GitFlow, or you can create your own.
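The blue/green pattern mentioned above can be sketched as a simple traffic cutover. This is a minimal illustration, assuming a router flag decides which environment receives live traffic; the environment names and health flags are hypothetical, not BuildMaster's API.

```python
# Sketch of a blue/green cutover: two identical environments, one live.
# All names here are illustrative assumptions.

environments = {
    "blue":  {"version": "1.0", "healthy": True},
    "green": {"version": "1.1", "healthy": True},
}
live = "blue"  # blue currently serves production traffic

def cut_over(target):
    """Switch traffic to `target` only if it passes its health check;
    the old environment stays warm for an instant rollback."""
    global live
    if not environments[target]["healthy"]:
        return live  # refuse the switch, keep serving from the old env
    live = target
    return live

print(cut_over("green"))              # green: v1.1 now serves traffic
print(environments[live]["version"])  # 1.1
```

Because the previous environment is untouched, rolling back is just another cutover, which is what makes the pattern a natural fit for automated deployments.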
Continuous Deployment works even better if you pull stored code from an in-house repository. An on-prem program like Inedo’s ProGet is more secure than a public, third-party repository, because it is accessible to and managed by only those on your team with the right permissions. ProGet functionality also includes testing third-party packages for vulnerabilities.
Communication is also critical to the automation process. Microservices can quickly overwhelm your team if siloes form or if you try to use microservices alongside manual deployment. A business instant messaging platform like Slack can keep DevOps communication flowing across teams.
Microservices and Continuous Deployment: Decrease Lead Times and Reduce Emergencies
Customers demand rapid adaptation to their concerns, without any disruption in service. Because CI/CD and Continuous Deployment are the best way to do this, you must get over the fear of change. Microservices and Continuous Deployment work great together, and if you mitigate the challenges of microservices and carefully automate your processes, you can help your company stay competitive.