DevOps For Enterprises
by Patrick Campbell and Mark Lavi, on Feb 17, 2020 4:08:39 PM
In 2020, the DevOps drumbeat thumps toward enterprise-wide initiatives. Many enterprises are turning away from on-premises, self-managed environments in favor of public clouds, which remove the onus of managing environments in-house and scale easily as the business grows. Yet self-managed environments help an enterprise deliver on both agility and control. This matters more as enterprises adopt Continuous Integration/Continuous Delivery (CI/CD): CI/CD pipelines deliver incremental digital business value faster than the legacy software and monolithic applications most enterprises have inherited, but they demand unprecedented speed and security. To survive, modern enterprises need to keep pace with the disruption brought on by new startup players.
Nutanix Enterprise Cloud helps enterprises evolve on their digital transformation journey by supporting DevOps practices that bring together the best of on-prem infrastructure with the agility inspired by public cloud services.
Making sure that developers get the infrastructure they need in a timely way is one of the main reasons enterprises and organizations adopt DevOps principles. When developers get resources faster, they can deliver digital business value faster. Traditional IT processes break down when the time to prepare environments at any stage of the end-to-end development-to-production workflow becomes a bottleneck. To demonstrate this, we've sketched three hypothetical DevOps environments: one using a self-managed solution, one using the public cloud, and one excelling with a hybrid of both.
On-premises Data Center
When companies rely on an on-premises data center, they rely entirely on the resources they have direct control over acquiring and maintaining. While this means that the individual company has direct, complete control over its environments, it also creates some internal “power struggles” as resources grow scarce.
In some cases, organizations provide developers with pre-configured server computing power on request. DevOps engineers then use this pool of resources to build, test, and deliver digital services and applications. To better prepare for spikes in demand, many DevOps engineers have adopted the "over-provision" mentality: it's better to have what you might need and not use it than to need it and not have it.
The problem this causes is twofold. First, it wastes resources while creating resource scarcity: one team may be woefully under-resourced while another sits on piles of idle capacity. Second, if DevOps teams request substantial resources but don't appear to deliver business value, tension builds with central IT, which often governs what is provisioned. When there's a lot at stake, the consequences can be dire: the IT budget suffers, and the organization is burdened with technical debt that can be strung along for years.
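To make the twofold problem concrete, here is a minimal Python sketch of how over-provisioned capacity shows up as idle waste for one team while another runs starved. The team names, capacity numbers, and the helper function are hypothetical, purely for illustration:

```python
# Hypothetical illustration: over-provisioning wastes budget for one team
# while another team runs near capacity. All names and numbers are made up.

def utilization_report(teams):
    """Return the fraction of provisioned capacity sitting idle, per team."""
    return {
        name: round(1 - used / provisioned, 2)
        for name, (provisioned, used) in teams.items()
    }

# provisioned vCPUs vs. vCPUs actually in use at peak
teams = {
    "payments": (128, 40),  # over-provisioned "just in case"
    "mobile":   (20, 19),   # starved, running at 95% utilization
}

print(utilization_report(teams))
# → {'payments': 0.69, 'mobile': 0.05}
```

The payments team idles roughly 69% of what it reserved while the mobile team has almost no headroom: scarcity and waste at the same time, in the same data center.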
Teams handed raw infrastructure resources in a data center can become isolated from central IT governance. When they operate from a "you build it, you run it" mentality, tribalism easily takes hold, with members drawing firm boundaries around their camp's resources. Silos don't support maximum DevOps efficiency.
Public Cloud
When companies choose a public cloud environment, they relinquish direct control over their environments in favor of an external solution that offers load-based costs and easy scalability. While it's much faster and can be more cost-effective, letting go of control can be unnerving.
One of the benefits of the cloud is that DevOps teams have free rein to provision resources from public cloud providers based on their own business unit account and needs. There's little to no waiting for resources: teams get what they want, whenever they want it.
But the downside of this method is the lack of oversight and accountability. When teams can provision from the cloud "as needed," businesses lose control over how much is consumed relative to the business value achieved from those public cloud resources. Teams working in these environments must also learn the intricacies of delivering value from one or more public cloud providers.
When DevOps teams access public cloud resources independently, they can easily become isolated from mainstream IT departments. To avoid this, businesses may incur extra expenses certifying IT employees and training them in the technical expertise required to use public cloud environments.
DevOps for Enterprises (Nirvana)
Exclusively using on-premises infrastructure or a public cloud solution risks wasted resources and personnel silos. DevOps is as much a cultural shift as a technical one, promoting agile delivery of business value, and communication is a core commitment of DevOps. Whether feared as "shadow IT" that could introduce risk to environments, or praised as "tiger teams" that can implement innovative solutions in a pinch, siloed teams are antithetical to DevOps for enterprises.
But what if the best of controlled enterprise data center environments and public cloud environments could both be harnessed for DevOps initiatives? What if an enterprise-ready platform could put nimble resources at the fingertips of DevOps engineers? What if it could properly curb the runaway spending that can occur in the public cloud? Imagine a platform that spans both environments with central visibility for consumption, cost control, and governance.
DevOps for enterprise initiatives must convince people across the spectrum, from operations to software engineering, to build a common framework for continuous delivery of business value. It is tempting to think of CI/CD pipelines that run from development to production as the isolated concern of individual DevOps teams. Enterprise DevOps, by contrast, aims to run the entire business this way, eliminating friction between developers and operations to deliver digital business value to customers company-wide.
The right solution for provisioning environments can enable operations and engineering teams across the enterprise to work together to identify automated, agile, and efficient processes for delivering digital value. Choosing a hybrid environment that mixes on-prem infrastructure with the public cloud gives enterprises the ability to spin up infrastructure resources based on consumer demand while keeping latency and transaction failures down.
Nutanix specializes in hybrid environments and works with almost any cloud, hypervisor, and hardware platform. Using hyperconverged infrastructure (HCI) technology, Nutanix solutions allow enterprises to quickly modernize and to scale to any size with minimal effort. And our hybrid solutions also dramatically reduce wasted spending by calculating costs based on load and usage.
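As a rough illustration of what "costs based on load and usage" means in contrast to paying for a fixed reservation, here is a minimal Python sketch. The rates, samples, and function names are hypothetical, not Nutanix's actual billing model:

```python
# Minimal sketch contrasting allocation-based billing (pay for what you
# provision) with usage-based billing (pay for what you consume).
# Rates, samples, and function names are made up for illustration.

def allocation_cost(provisioned_vcpus, rate_per_vcpu_hour, hours):
    """Cost when you pay for everything provisioned, used or not."""
    return provisioned_vcpus * rate_per_vcpu_hour * hours

def usage_cost(hourly_vcpu_samples, rate_per_vcpu_hour):
    """Cost when you pay only for vCPUs actually consumed each hour."""
    return sum(hourly_vcpu_samples) * rate_per_vcpu_hour

rate = 0.05                         # dollars per vCPU-hour (made up)
measured = [10, 12, 40, 38, 11, 9]  # vCPUs actually in use, hour by hour

reserved = allocation_cost(64, rate, hours=6)  # billed on a 64-vCPU reservation
consumed = usage_cost(measured, rate)          # billed on measured load only
print(f"reserved: ${reserved:.2f}, consumed: ${consumed:.2f}")
# → reserved: $19.20, consumed: $6.00
```

The gap between the two figures is exactly the wasted spend that usage-based accounting makes visible, and that central visibility lets IT govern.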
Tools and environments that enable the true continuity of DevOps must provide the ability to massively scale on demand, to eliminate work silos, and to continuously provide new and responsive digital value.
This blog was written by our friends at Nutanix:
Mark Lavi: Principal DevOps Advocate
Patrick Campbell: Senior Strategic Technical Marketing Engineer