I've recently heard some rants against plugin systems and modular architectures. One particular criticism argued that they're merely marketing keywords, adding significant complexity to a software's architecture for little end value. This criticism holds some truth, and there is indeed a trap to be aware of when designing such systems, but we shouldn't dismiss the idea too quickly. There are reasons why project health might benefit from a plugin architecture, and they might not even be the ones you had in mind.
Given that plugins hold a central place in the new architecture we've built for Yarn 2, I figured it could be interesting to put my thoughts on paper for future reference. Grab your hat and let's dive into the depths of the Plugin Temple 🤠
Plugins are boundaries
Because plugins make it possible to implement new behaviors in pre-existing software, it's easy to see them as a way to open up a project to the outside world. But it's also very easy to forget that they're the exact opposite: a way to add constraints to an architecture.
Imagine the same application implemented twice - the first time as a monolith, and the second time with a typical core + plugins architecture. Now you need to build a new feature:
With the monolithic application, you'll likely be able to complete your assignment by tweaking a few modules here and there, adding a few new branches, and possibly adding new fields to the data structures. You may not even need to create new files!
With a well-designed plugin system, it'll be more difficult - you'll need to make sure that your changes go through the predefined core hooks. You won't be able to just change the core logic to fit your new need, so you'll have to think hard about the implementation before even starting to code.
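To make that constraint concrete, here's a minimal sketch of what a hook-based core could look like. The names and shapes below are purely illustrative (they're not Yarn's actual plugin API): the point is that the core alone decides which hooks exist, and a feature can only reach the parts of the pipeline those hooks expose.

```ts
// Illustrative sketch only: hypothetical names, not Yarn's actual plugin API.

// The core decides which extension points exist; plugins can only use those.
interface Hooks {
  // Gives plugins a chance to rewrite a dependency before it gets resolved.
  transformDependency?: (dependency: string) => string;
  // Lets plugins run once the whole install has finished.
  afterInstall?: () => void;
}

interface Plugin {
  name: string;
  hooks: Hooks;
}

class Core {
  private plugins: Plugin[] = [];

  register(plugin: Plugin): void {
    this.plugins.push(plugin);
  }

  install(dependencies: string[]): void {
    for (let dependency of dependencies) {
      // Features can alter the behavior here, but only through this hook.
      for (const plugin of this.plugins)
        dependency = plugin.hooks.transformDependency?.(dependency) ?? dependency;

      console.log(`Installing ${dependency}`);
    }

    for (const plugin of this.plugins)
      plugin.hooks.afterInstall?.();
  }
}

// A new feature lives entirely inside its own plugin:
const securityAuditPlugin: Plugin = {
  name: 'plugin-security-audit',
  hooks: {
    afterInstall: () => console.log('Audit complete'),
  },
};

const core = new Core();
core.register(securityAuditPlugin);
core.install(['lodash@^4.0.0']);
```

With this shape, adding a feature means writing a new Plugin object; it never means editing install() itself.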
The monolithic application sounds better, right? Easier to work with, faster iterations. And that's true, given the few parameters I've exposed! But now consider those additional ones:
Multiple people will work on the codebase. There's even a non-zero chance that no one from the current maintainer team will still be there in a year. Worse: it's also quite likely that no one from the current maintainer team was even around a year ago.
Most contributors only ever make a single commit - to fix the one bug they experience. They won't ever come back, and probably don't have any context regarding why things work the way they do.
This software will be used for years, and its userbase will keep growing.
Under those new parameters, the monolith will quickly start to spiral out of control. New features get developed and injected into the core. When something isn't quite possible yet, a few small hacks are used. And it works! Time flows, contributors come and go, and suddenly you start to notice a weird pattern: each feature you develop introduces new bugs. People send PRs to help you fix those bugs, but introduce new ones in the process. Long-forgotten hacks trigger edge cases more and more often. Technical debt creeps in and, eventually, you reach a point where no one dares to make a change.
The plugin architecture, however, survives. Bugs still happen, but because the broken features are typically scoped to a single plugin, people who aim to fix them only have to understand the context of the one affected module instead of the whole codebase. Same thing for reviews, which can be done by people familiar with the individual plugins rather than the whole application. Core maintainers can focus on the core work, and delegate the plugin implementation to new contributors.
The monolithic application is Yarn 1 and its hardcoded code paths. The plugin architecture is Yarn 2 and its specialized hooks.
It's still too early to call it a definitive victory, but after working almost a year on this approach I've already been able to see its first payoffs. Freshly onboarded contributors have been able to focus their efforts on specific subparts of the codebase without having to be aware of all the subtle details of the core implementation. Finding how a feature is implemented is mostly a matter of finding the right file.
Plugins give focus
Working on an open-source project the size of Yarn is challenging for various reasons, but the one we'll focus on in this article is pretty simple: what are the features worth implementing?
Lots of people use Yarn every day and, as a result, we get a lot of pull requests to add new features to our tool. Each time we're about to merge one, the same questions pop into our minds: will it be useful? Is it worth the complexity? Will I feel comfortable having to maintain this myself in a year?
Back in the v1 days, our typical answer was along the lines of "well, let's move forward and see what happens". But I can already tell you what happens: some of those features became cornerstones of our offering (like workspaces, or resolution overrides) while others ended up cluttering our codebase (like Bower support, or multilingual support). In almost every case, even though the implementations worked in isolation, they hit weird edge cases when used together with other features.
Plugin systems offer a very simple solution to this problem by stating that not everything has to belong to the core. It becomes perfectly fine for a lot of features to be first implemented as community plugins, until we've had time to assess their cost/value ratio.
Even better, if we someday decide that a feature shouldn't be shipped anymore, it's just a matter of removing the plugin from the codebase. Of course, such actions sometimes make parts of the core irrelevant and subject to change. Thankfully, the resources freed by outsourcing part of the feature development can then be reassigned, allowing maintainers to spend more time keeping the most critical part of their software up to date: the core itself.
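To illustrate that point, still using the hypothetical Core and Plugin types from the earlier sketch (plus made-up workspacesPlugin and bowerPlugin values - none of this is Yarn's real code), unshipping a feature amounts to deleting a single entry from the list of plugins the core loads by default:

```ts
// Illustrative only: the default feature set is nothing more than a list of plugins.
const defaultPlugins: Plugin[] = [
  workspacesPlugin,
  securityAuditPlugin,
  // bowerPlugin,  // unshipped: one line removed, the core itself stays untouched
];

const core = new Core();
for (const plugin of defaultPlugins)
  core.register(plugin);
```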
Conclusion
Plugins aren't good in every scenario. In particular, they can only be designed once you already have a near-perfect knowledge of the design space - or at least a good enough one to know exactly which parts you're still missing.
In my case, for example, it took almost two years before I finally felt confident enough about package managers to bootstrap the project. Before that, I spent my time writing various package manager implementations and tentative APIs, all to grasp the extent of what we would need to cover. It's only after failing a few times that I decided we were ready to go.
So plugins are dangerous. They might put you off the trail, looking for the mysterious cities of gold rather than building your fort. Still, in the case of popular open-source projects, I believe a modular architecture offers some very strong advantages that go far beyond what people may have in mind when they think about plugins. More than just a way to open your project to new features, plugins also provide crucial structure and support, helping the project stand the test of time.
Top comments (4)
I know it won't be a problem in the case of Yarn v2, but sometimes pluggable tools include no plugins bundled with the tool at all. Like when you install Vim, it is unusable (in my opinion) without adding ~10 popular plugins. So it is important to distribute the tool with a core + a set of useful plugins.
This is really important: plugins help keep the core small, and they also help grow a community around the project.
A monolith should seek an architecture similar to the plugin system's. As you say, the plugin system forces you to define the API of a core.
The difference, I would say, is that a monolith knows all of its extensions, which means the API can change holistically without a deprecation cycle.
A plugin architecture without the plugin system.
Exactly! That's a point I didn't want to raise too much because, in my case, Yarn will support third-party plugins (because of the second reason mentioned), but regarding the first one this is spot on.
Whether to build a modular architecture and whether to allow third-party authors to use it are two different decisions. Even a private plugin architecture may yield benefits just by enforcing some structure during development.