5 Reasons People Hate Jenkins CI/CD
Of the biggest divides on Earth—the Grand Canyon, the Mariana Trench—none may be as deep or vast as the Great Jenkins Divide.
One of our customers was telling us about the Great Jenkins Divide at their company. They described it the way a lot of people describe cilantro: members of their dev team who work with Jenkins regularly either LOVE it or LOATHE it.
We’re fairly neutral about Jenkins. Full disclosure, we don’t use Jenkins in-house, because we use BuildMaster for our Continuous Integration (CI) and Continuous Delivery (CD). But we know Jenkins and respect it enough to have a robust Jenkins integration for BuildMaster.
But who better than a Jenkins true neutral to look past the near-religious divide about Jenkins and do some research?
In 2021, what do people love about Jenkins and what do they hate about it?
We searched the forums, scoured websites, and talked with our customers to learn what people are saying.
What is Jenkins, and why does everyone love it?
Jenkins is a job runner that is primarily used to do Continuous Integration.
As a job runner, it can do “anything” technically, but it has three primary use-cases:
- Build Automation & Automated Testing (the core of CI)
- Deploying to Pre-Production Servers
- Deploying to Production Servers
Jenkins has features, delivered by plugins, to help with Continuous Integration (CI), like automated testing.
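For reference, basic CI in Jenkins is usually expressed as a Jenkinsfile. The declarative sketch below is illustrative only; the Maven commands and report path assume a hypothetical Java project and aren't something Jenkins itself dictates.

```groovy
// Minimal declarative Jenkinsfile sketch: build, test, publish results.
// The Maven commands and report path are assumptions for a hypothetical project.
pipeline {
    agent any                                   // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
        }
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml'   // test reporting comes from the JUnit plugin
        }
    }
}
```

Even that small pipeline leans on plugins: the sh and junit steps above are delivered by plugins, not by bare Jenkins.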
Ah, the plugins. Plugins are what make Jenkins, Jenkins. They are also the crux of the Jenkins love-hate relationship.
The main value of Jenkins is that library of plugins. These usually-community-created, free-and-open-source plugins extend Jenkins way beyond its job-running capabilities. Need to back up a system with a button click? How about deploy a file to a server with a click? There’s a plugin for that.
Without plugins, Jenkins wouldn’t do much and people wouldn’t love it. But with plugins, Jenkins also becomes much more chaotic, and that chaos is a main source of Jenkins hatred.
Why does everyone hate Jenkins?
There are five common complaints we found in our research: expertise confusion, expert bottlenecks, self-service chaos, lack of visibility, and scaling difficulty. Because of all of these, Jenkins chaos can happen faster than you expect.
1. The gap between “self-service” and “required expertise” creates complications and chaos.
| Problem | Effect | Impact |
| --- | --- | --- |
| Jenkins instances are complex and often do more than basic CI | You need a Jenkins expert for even the most basic maintenance | Experts become bottlenecks, slowing new projects, builds, releases, etc. |
| Tightly-secured instances create expertise bottlenecks | Self-service seekers will create their own instances or find other alternatives | Builds/projects are not systematically distributed and are chaotic, increasing risk and slowing things down |
| Loosely-secured instances become chaotic and unstable over time | Non-expert users modify configuration and install/update plugins for their project | Unrelated builds/projects stop working, causing lost productivity and frustration across teams |
Teams that use Jenkins as it was originally intended — that is, as a project-specific, expert-managed CI server — don’t experience these problems at first. The expert(s) sets up Jenkins quickly and provides basic self-service options for everyone else.
However, things quickly get chaotic after that, since Jenkins can do so many things and is so easy to extend with plugins. And that’s where the problem comes in: Most teams end up with Jenkins configurations much more complex than the average dev can handle.
This turns your Jenkins experts into a bottleneck (see #2 below). To bypass the bottleneck, other team members will try to maintain the server themselves or will just create a separate Jenkins install someplace else. This creates complications and chaos by adding to Jenkins and by distributing Jenkins projects in a non-systematic way.
All of this ends up being very expensive, because you can’t rely on your tool alone: your Jenkins people become more of a blocker than Jenkins itself. Sounds a lot like pre-DevOps, manual processes, doesn’t it?
2. Jenkins experts can quickly become both a bottleneck and a firewall, which gets very expensive.
| Problem | Effect | Impact |
| --- | --- | --- |
| Jenkins is often used for more than CI | You need a Jenkins expert | Experts become bottlenecks, firewalls, and are very expensive |
| Experts become bottlenecks | Jenkins-related tasks require experts | New projects, builds, releases, etc. depend on that one expert (very slow) |
| Experts become firewalls | Jenkins knowledge lives inside experts rather than in a tool | Chaos ensues if experts ever leave or are unavailable |
| Experts are Jenkins-only | Harder to hire; all focus is on one tool | Very costly to the business |
Jenkins seems great for small companies because it’s free and open source, and there are so many free tutorials and free plugins! But is it really free?
Consider this example: a small team of four decides to use Jenkins for CI and build automation. One of the members, Terry, volunteers to learn Jenkins, follow the basic tutorials, and set it up. Within just a few hours, Terry manages to get some basic projects configured, and everyone can use it.
Over time, other members ask Terry to set up new projects in Jenkins, or ask Terry if Jenkins can do this or if Jenkins can do that. And of course it can, because there’s a plugin for that!
Eventually, the team starts relying on Jenkins to do everything from basic operations jobs to continuous delivery (CD). However, Terry is the only one of the four who can configure and maintain it. Now all work has to go through Terry, and Jenkins tasks become 90+% of Terry’s job.
In this common scenario, “Terry” becomes both a Jenkins bottleneck and a firewall, and the team essentially loses a developer for all non-Jenkins work. The way most teams use Jenkins, you end up hiring a full-time Jenkins expert to create and manage Jenkins pipelines. Your “free” Jenkins tool costs you an employee’s (or more than one employee’s) salary and talent… pretty expensive!
Hiring engineers (and keeping them) is very costly. It’s much harder if that person must be a Jenkins expert, because Jenkins expertise requires skills such as:
- Choosing the best plugins
- Developing pipelines
- Managing and organizing projects
- Creating reports and visibility for other users
- Groovy (the language behind the Jenkins Pipeline DSL; it’s relatively easy to learn, but, like PowerShell, it takes real expertise to use well, and experts often write overly clever Groovy that a non-expert can’t maintain; see the sketch after this list)
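To make that last point concrete, here is a purely hypothetical scripted-pipeline sketch of the kind of Groovy an expert might write. The module names and Maven command are made up for illustration; the point is that closures and dynamically generated parallel stages are compact but hard for a non-expert to change safely.

```groovy
// Hypothetical scripted pipeline: builds a map of parallel stages from a list.
// Compact and clever, and exactly the sort of thing non-experts struggle to maintain.
node {
    def modules = ['api', 'web', 'worker']        // made-up module names
    def branches = modules.collectEntries { mod ->
        [("build-" + mod): {
            dir(mod) {
                sh 'mvn -B -q clean package'      // assumes Maven submodules
            }
        }]
    }
    stage('Build modules in parallel') {
        parallel branches                         // one dynamically created branch per module
    }
}
```

Multiply that across a real pipeline (deployments, approvals, notifications) and it’s easy to see why all of this work lands on one person.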
Software developers cost companies an average of six figures (USD) per year per engineer. Holy mackerel—that’s a lot to spend on someone to channel all their talents and time into just your free tool.
To try to counter this, many teams try to distribute the work of managing Jenkins through “self-service,” and let different team members manage their Jenkins instances. But this causes problems too.
3. Self-service can be too much of a good thing.
| Problem | Effect | Impact |
| --- | --- | --- |
| A ton of plugins and complications in your Jenkins (especially on the controller/master) | The Jenkins controller slows way down AND becomes difficult for non-experts to navigate | Bottlenecks form, defeating the purpose of faster-paced DevOps tools |
| Anyone can install any plugin, and plugins don’t require automated testing | Vulnerabilities can be easily introduced | Increased risk of malware, downtime, and other nasties |
| Lack of visibility (discussed in point #4) means people hold Jenkins knowledge rather than Jenkins itself | Information sits behind human “firewalls” | Slows self-service (and thus work) down, since you depend on people rather than just the tool |
Jenkins is, by design, a “democratic” tool that lets anyone create projects, configure agents, install plugins, and so on. But like too much of any good thing, too much self-service can cause problems, namely with speed, vulnerability, and longevity.
Because anyone can install any plugin, odds are that you will not know who installed a plugin, why they installed it, how often it’s being used, or whether it can be removed without breaking anything; Jenkins doesn’t indicate any of this. Heck, some plugins get added automatically because another plugin requires them.
If a less-thorough colleague modifies or deletes a plugin without getting answers to ALL of these questions from the people who installed it, your stuff can break. And if all these plugins sit on your controller, you end up with a sluggish Jenkins and, likely, more failed builds.
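At best, you can inventory what’s installed. A quick Script Console sketch like the one below (Manage Jenkins > Script Console) prints each plugin with its version and whether it’s enabled; note that nothing in the output tells you who installed a plugin, why, or what will break if it goes away.

```groovy
// Script Console sketch: inventory installed plugins.
// Shows WHAT is installed, but not who installed it, why, or what depends on it.
import jenkins.model.Jenkins

Jenkins.instance.pluginManager.plugins
    .collect { "${it.getShortName()} ${it.getVersion()} (enabled: ${it.isEnabled()})" }
    .sort()
    .each { println it }
```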
Because anyone can install any plugin, and plugins are not of equal quality, it’s too easy to introduce vulnerabilities or even malware into your Jenkins installations. To avoid this, perhaps you’ll require your developers to get approval and advice from a Jenkins expert before installing plugins… which makes the Jenkins expert a bottleneck again.
And when your information relies on human memory and manual intervention for updates, your Jenkins pipelines become “legacy” much faster than competing tools’ pipelines.
4. Poor visibility into Jenkins installs and projects creates chaos, can disrupt work, and increases risk.
| Problem | Effect | Impact |
| --- | --- | --- |
| Jenkins project ownership is unclear | Instances may lock if someone makes a mistake on a project that isn’t theirs | All Jenkins work halts until the correct person can remedy the issue |
| Information you may need for auditing or rollbacks is not centralized | You rely on people’s memories rather than on a tool | Risk increases (human memory is much less reliable than software logs) |
Unlike dedicated CI/CD tools, Jenkins does not have “applications” or “releases.” Instead, everything has to be its own project (what used to be called jobs). Though there is a plugin for folders and views, you can’t categorize or organize Jenkins projects easily or clearly without work.
What ends up happening is that you have no idea who owns which Jenkins project. You could accidentally modify or delete someone else’s project and break it, or someone else could mess with your project and potentially lock up the entire Jenkins instance.
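You can at least enumerate what exists. A Script Console sketch like the following prints every job by its full path, folders included, but notice that nothing in the output records an owner or a responsible team; that knowledge lives only in people’s heads.

```groovy
// Script Console sketch: list every job by its full path.
// Jenkins itself records no "owner" for any of them.
import hudson.model.Job
import jenkins.model.Jenkins

Jenkins.instance.getAllItems(Job.class)
    .collect { it.fullName }
    .sort()
    .each { println it }
```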
Lack of visibility leads to confusion, which slows down work and could risk more than just lost time. And your Jenkins instances become legacy/outmoded much faster when visibility is obscured between users, teams, and installs.
5. Jenkins is very hard to scale efficiently.
| Problem | Effect | Impact |
| --- | --- | --- |
| Not all plugins are supported by Jenkinsfile | Jenkins is hard to back up without heavy manual intervention | Lots of wasted time doing manual processes on an automation tool |
| Different departments need different instances | Doing “Jenkins as Code” becomes much harder than with similar CI/CD tools | Costs both time and money to stay organized |
| Jenkins’s “database” is just XML files on disk | The XML format makes it relatively expensive to read and nearly impossible to index | Needed expertise costs lots of money and time (see point #1) |
| Jenkins can only have a single controller | The Jenkins controller can’t run in high-availability mode | Scaled businesses can’t afford the risk of downtime and lost work from Jenkins outages |
Young devs have plenty of time, energy, and passion but no money. Jenkins is perfect for them because it’s free, but it’s really not good for mature organizations with responsibilities to customers and stakeholders, because Jenkins simply is not built for scaling.
Because plugins are created by the Jenkins community, they won’t always have what you may consider “basic features,” and not all plugins are supported by Jenkinsfile (the file that defines a Jenkins pipeline as code). To back up Jenkins, you have to manually write down a list of plugins, manually reinstall them, and then manually configure them. Compare this to other tools that allow scheduled, automated backups.
In a similar way, doing “Jenkins as Code” requires extra, manual effort at scale, because different departments will spawn different and often multiple Jenkins instances. Because of the difficulty of back-ups, “Jenkins as Code” means backing up multiple instances in a difficult way every time.
Another thing that slows you down is Jenkins’s database, which is basically just XML files. This was originally designed with interoperability in mind, not performance. As a result, the verbose and dynamic nature of the XML format makes it relatively expensive to read and even more expensive to index.
And perhaps most importantly: scaled organizations rely on constant up-time. This requires high-availability and load balancing, which Jenkins cannot do with just a single controller. You can “duct tape” solutions together as a standby failover, but (as with any duct-taped solution) these often have performance issues.
Love it or loathe it? Where do YOU stand on Jenkins?
Whether you love it or loathe it, Jenkins is here to stay. We’re building a guide to help you manage the Jenkins chaos and get more love in your dev life.