(This is the text of a talk given at the OSI's State of the Source Summit.)
Open source has always been a grassroots movement for freedom. But moneyed and powerful interests have taken advantage of the structure of open source, and transformed it into a tool for maintaining a status quo that favors the already powerful. Open source has done a lot of good, but the laissez-faire, libertarian approach to structuring our community and our mission has left us in a place where it's an effective tool for removing freedom. Advancing the ideal of freedom now requires a radical change to the way we do open source, starting with the metaphors and foundational assumptions that guide our thinking. We need to think long and hard, as participants in open source, about the ways that our work is being used to curtail the freedoms of others.
I want to inject a focus on ethics into the conversation. Ethics, contrary to what you might be thinking, isn't a search for absolute rules of right and wrong. Ethics is the search for an answer to the question: How should we get along with other people? We are not alone in the world, and behaving as though we are is no longer an option. Open source is now actively contributing to injustice in the wider world, and we need to own that fact, to take action to preserve what is good about open source, while at the same time reducing the harm we create, because as I will show, we are creating a great deal of harm.
The root of the problem as I see it is this: Open source has explicitly rejected regulating access to the pool of open source software, while turning a blind eye to the extensive system of invisible, implicit, yet very real regulations that are woven through the structure of the community. This total abdication of control is toxic, pushing out people we need, and opening the door to those we don't want. The major failings of open source can be traced to the implicit, covert regulations that do govern the open source community at large, and to the absence of any explicit ones.
The origins of this attitude lie in the metaphor of the pool of open source as a commons. In particular, the prevailing attitude is that this pool of software is an abundant good that cannot be exhausted, and that therefore it is a special kind of commons.
I want to reflect on this metaphor of the commons, what intellectual and cultural work it is doing for us, and whether it can still do the work we need it to do. Because metaphors are tools, and like any tool, when it is useful we should keep using it, but if it ceases to be useful, we should find another tool, another metaphor. And if you haven't guessed yet, I think the metaphor of the commons, as it is typically deployed, is no longer doing useful work for the open source movement.
First, let's examine what a non-metaphorical commons is, and then how we usually deploy the metaphor of the commons as it applies to software. A commons, broadly speaking, is a shared, public, and limited resource. Ocean fisheries, commercially-important forests, and (at least historically) grazing land are all examples of such commons that we're familiar with. These are called commons because the fish, the trees, and the grass are all somehow public or held in common by many people; they don't belong to any single entity.
Notice also that in each of these cases, what is held in common is a limited (though often renewable) resource. Thus, management is a necessary component of such commons, to ensure that the common resources are accessed in a sustainable way: That fish are not over-fished, or trees depleted faster than they are planted. It is this aspect of commons that makes them interesting: How do you identify the stakeholders, ensure their interests are fairly represented, and reach a consensus among likely competing interests, all while holding the commons in a sustainable manner? Often the stakeholders are not just the fishers and lumberjacks, or their employers or contractors. Forests on public land, for example, provide recreational opportunities for everyone and play an important role in larger ecosystems (hosting endangered species, for example, or providing clean air). Thus, over-harvesting lumber isn't just a myopically poor decision for lumber mills; it has additional consequences far beyond the production of wood.
A failure to manage well a common resource leads to what is called the tragedy of the commons. There is an inherent tension between the collective interest in sustaining a commons, and individual interests in taking the fullest possible advantage of the common resource. When governance fails to properly incentivize limiting access, individual actors will, out of self-interest, each tend towards excessive access of the resource. For example, consider grazing lands shared by multiple shepherds. Each shepherd is allocated a specific number of sheep that can be given access to the land. One shepherd may think to themselves: It's a small thing if I add just one more sheep to my flock; no one will notice, and the additional impact on the land is marginal. Yet if one shepherd is thinking this, likely all are, and the addition of several dozen sheep is not at all marginal: the grazing land will soon be valueless.
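The shepherd's reasoning can be made concrete with a toy payoff model. This is my own illustrative sketch; the capacity and penalty numbers are invented for the example, not drawn from any real data:

```python
def per_sheep_yield(total_sheep, capacity=100):
    """Each sheep feeds well up to the land's capacity; past that,
    overgrazing degrades the land and every sheep's yield falls."""
    if total_sheep <= capacity:
        return 1.0
    return max(0.0, 1.0 - 0.05 * (total_sheep - capacity))

def flock_value(my_sheep, others_sheep):
    """The value one shepherd realizes from their own flock."""
    return my_sheep * per_sheep_yield(my_sheep + others_sheep)

# Ten shepherds, each allocated 10 sheep: the land is exactly at capacity.
honest = flock_value(10, 90)       # 10.0: everyone complies

# If I alone sneak in one extra sheep, my payoff goes up slightly...
cheat_alone = flock_value(11, 90)  # 10.45: defection pays

# ...but if every shepherd reasons the same way, everyone ends up worse off.
all_cheat = flock_value(11, 99)    # 5.5: the tragedy of the commons

assert cheat_alone > honest > all_cheat
```

The model makes the same point as the story: each individual defection is locally rational but collectively ruinous, which is exactly why physical commons demand governance.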
We often talk about the pool of open source software as a commons, the largest commons in the world. It's a resource, owned by no single individual, available for anyone with a computer to access. But as I mentioned, the open source commons is special: You can't use up software just by copying it. You can make as many copies of the Linux kernel as you like, and although you have to find someplace to store it, your copying in no way impacts my ability to do the same. We can each sneak infinitely many additional sheep into the grazing land with absolutely no consequences, because software isn't a physical thing. It's infinitely and trivially sustainable. A tragedy of the commons is literally impossible.
Because a tragedy of the commons is impossible, there is no need to set up a system of governance to sustain the commons. And, indeed, we have as a community explicitly rejected any such notion by adopting a principle called software freedom. This principle of software freedom states that anyone is welcome to copy software out of the commons, without restriction, for any purpose. It's an explicit injunction against governance or regulation, and it is only possible because of that unique feature that software is infinitely abundant. In the past, we saw reason to celebrate the possibility of unrestricted access to the commons, because it was liberating, empowering. Software freedom was a powerful tool for fighting against authoritarian organizations like IBM, AT&T, Microsoft, even the U.S. government.
We even have a lovely pejorative term for when someone proposes restrictions on access to the commons: we call that encumbering the commons. An encumbered commons is no commons at all; either we all have access or none do. Indeed, avoiding encumbrance has become an ethical concern: Because the commons is unlimited, it is wrong for anyone to restrict access.
But the disconnect between physical commons and software commons should be obvious. The defining feature of a commons is its limited nature, the possibility of tragedy, the need for regulation and governance. But open source software lacks this feature. Where the principle of software freedom licenses moral claims against restricting access, in physical commons it is the opposite that is true: Over-accessing physical commons is wrong. What work is this metaphor doing for us, really? It's hard to see.
The principle of software freedom has led us to a place where we openly tolerate unethical use of open source software as a necessary cost for the freedoms of developers. As the OSI's FAQ tells us, "Giving everyone freedom means giving evil people freedom, too." But why is that? That's a deeply unusual claim.
So let's look at a few cases where we have given evil people freedom, and really dig into what that means. Let's talk about ICE, let's talk about deepfakes, and let's talk about the exploitation of underpaid labor. Each of these examples is going to highlight a different way in which our current model of open source is broken.
In 2018 alone, 49,100 immigrant children were arrested by the United States Immigration and Customs Enforcement, or ICE, and held in custody under the Unaccompanied Alien Children (UAC) Program. Through 2019, many of these children were separated from their parents, were denied soap and other basic sanitary items, and in some cases, were told they had to eat off the floor.
This is only the start. ICE has been repeatedly accused of violating immigrants' human rights, exposing detainees to COVID-19 through neglect, and illegally violating the privacy of American citizens. Most recently, armed ICE agents were deployed as an illegal paramilitary force in Portland, Oregon, in Chicago, and possibly in other cities as a show of force by the Trump administration against Black Lives Matter protesters. ICE are Trump's shock troops, sent out to terrorize his enemies, and oppress those he believes are a threat. ICE is a force opposed to principles of freedom and liberty.
Palantir provides big-data analytics for government agencies across the globe, and currently has a $49m contract with ICE. At the moment, Palantir has 191 repositories on GitHub. While most of these appear to be focused on boring, run-of-the-mill tech infrastructure, and don't appear to have accepted large-scale contributions from the outside, all of them contribute to Palantir's ability to provide surveillance services to their clients. But one stands out: AtlasDB, a high-performance transactional key-value store, appears to have a community and a following. And it's no surprise; such tools are in high demand at scaling startups. But it's also most likely a key part of ICE's "mission critical" Investigative Case Management system, used to document, track, and deport undocumented immigrants in the US—such as the children they rounded up in 2018 onwards.
GitHub provides ICE with the hosted enterprise version of the GitHub product (GitHub Enterprise or GHE), which ICE uses, as you'd expect, for in-house software development, specifically by the Enforcement and Removal Operations (ERO) division, the division implicated in the human rights violations described earlier. GHE is in turn powered by Ruby on Rails, a well-known and widely popular open source web application framework.
Most of the maintainers and contributors of open source software that ICE depends upon for its operations probably have no idea that their code has played some small role in visiting atrocities upon children and others. I want to point out that I am not calling those who contribute to these projects in any way blameworthy for their actions. They aren't. But they should be angry to see their work used in this way, and many maintainers are in fact rightly pissed. David Heinemeier Hansson, the creator of Ruby on Rails, has expressed dismay at this turn of events, and called this contract "a stain on GitHub that'll be hard to wash out".
An open letter to GitHub published in 2019 and signed by hundreds of open source maintainers lambasted GitHub's role in enabling these atrocities:
At the core of the open source ethos is the idea of liberty. Open source is about inverting power structures and creating access and opportunities for everyone. We, the undersigned, cannot see how to reconcile our ethics with GitHub's continued support of ICE. Moreover, your lack of transparency around the ethical standards you use for conducting business is also concerning to a community that is focused around doing everything out in the open. We want to know that the platform we have invested so much of our time and energy in is operating in a way that is consistent with the values of open source software development.
Yet, in many ways, GitHub's behavior is entirely consistent with the principle of software freedom. But it is not consistent with the ideals that drive open source. How should we resolve this tension between the belief that open source software can be a tool for justice and freedom, and against oppression, and the fundamental belief that we should not interfere with anyone's ability to use open source software for any purpose whatsoever?
Let's turn now to our second example: deepfakes. Already, the inundation of fake news is causing a political crisis: Democracy relies at its foundations on a well-informed voting public. Fake news and other sources of disinformation chip away at this foundation by instilling false or dangerous beliefs in voters, who then make ill-informed decisions about who should represent them in government. The increase in fake news is a threat to freedom and liberty across the globe.
Deepfakes, if you aren't aware, are a particularly troubling new kind of video, in which people (usually famous) are made to say or do things they did not actually say or do. At the moment, we rely heavily on video footage, whether shot by professional organizations or, increasingly, by amateur individuals, to sort fake news from real, and to discover breaking news that requires proper coverage and investigation. Deepfakes dramatically exacerbate the fake news problem, by providing dishonest actors the ability to support fake news articles with convincing video evidence, for example of a political leader making a claim they did not actually make. Deepfakes are a very real threat to democracy and the free flow of politically-relevant information.
Just a few years ago, assembling a convincing deepfake was tedious and time-consuming. This time lag is important, because in order to effectively support a fake news campaign, the footage must be produced in a timely manner, as the window for successfully co-opting a news narrative is quite short.
In 2020, however, things have changed a bit. Avatarify is an open source software tool that allows the user to impersonate other people, live in a Zoom video conference, given only a photograph of them. It allows people, in other words, to create shockingly convincing deepfakes, and to do it in real time. This real-time ability serves to radically increase the effectiveness of fake news campaigns, to even create "live" news feeds of fake news.
Avatarify was clearly built as a joke. There's certainly no harm in populating the commons with the occasional joke. But the consequences are far-reaching, because of course anyone can take this code, and build upon it, and use it for any purpose whatsoever. Anyone here includes not just jokesters, but authoritarian organizations, state actors, terrorist organizations, and media outlets intent on disinformation campaigns as a tool for deceit and oppression.
Again, the act of releasing this software into the wild is entirely consistent with the principle of software freedom. But it is not consistent with the ideals that drive open source. Again, we must ask how we can resolve the tension between the idea that open source can be a tool for freedom and against oppression, with the fundamental belief that we should not interfere with anyone's ability to use open source software for any purpose whatsoever, even curtailing the freedom of others.
There is a persistent idea among open source proponents that software freedom is good, because it levels the playing field through equal access to the commons. The idea of level playing fields as sufficient for equity has always been a bit of a libertarian utopian fantasy. The reality is quite different: It is true that you and I benefit from access to the commons to a particular degree, but large companies derive massive value from open source, on a scale far out of proportion to their size.
You and I can access that commons for free, with no strings attached, which is great because perhaps we don't have the resources to pay for high-quality professional software (I certainly didn't when I began my Linux journey in high school). But the same mechanisms that give us free (as in beer) access to open source exist for large, well-funded corporations as well; despite having plenty of resources to help fund open source, there is no explicit motivation for them to reciprocate. Some do give back, don't get me wrong, but it's instructive to examine the incentives for that behavior (which we'll do in a bit).
The key is that open source as marketed by the OSI is literally about exploiting unpaid labor. Long touted as one of the key benefits of open source, businesses can reduce their costs by outsourcing development to teams of unpaid labor. This idea was a huge part of Eric Raymond and Bruce Perens's founding of the OSI, to make open source palatable to business. But where in the late 90s this idea was couched as "cost sharing"—the idea being that companies can share resources with each other to develop open solutions—the reality in 2020 is better described as companies outsourcing their development to free labor to realize significant cost savings, a practice that many have called extractive.
At OSCON 2019, Amazon’s VP of Cloud Architecture Strategy, Adrian Cockcroft, reinforced this view, and took it further:
So now that Open Source business is using the developer community as a force multiplier for engineering: It means they don’t have to invest as much in engineering as if they were doing 100% of this themselves, because the community is actually doing some of their engineering for them…and then you take the enthusiasm of those people, and you use that to create market awareness, and you invest in that market, so now you’ve got your marketing…you’re using open source to magnify your effect in the marketing side.
I imagine your reaction to this quote is something like "so what". The fact that we read this quote and see nothing wrong with what's being said is very telling. In fact, this quote is terrifying. It's an announcement that not only is using unpaid labor to build large-scale commercial enterprises acceptable, it's something we ought to be doing. Adrian is trying to convince us that this is normal. It's anything but. No other commercial enterprise in the world operates this way.
But what about those maintainers doing the free labor? They feel powerless to improve the situation. At Maintainerati 2019, we learned that maintainers face an uphill battle convincing companies that depend upon their projects to provide funding; most maintainers don't even know where to start the process, who to talk to, or how to pitch, let alone how to manage their finances. They feel exploited: It shouldn't be the responsibility of maintainers to chase down funding from organizations leveraging their work for profit. And when maintainers do succeed, they are rarely able to afford to pay even one full-time engineer once operating costs are covered.
Now, you might object: No one is forcing these people to continue to contribute to open source. Contribution is entirely voluntary. Except that it's not. These very companies reaping the benefits from open source are also the ones who insist on only hiring candidates after examining their GitHub profiles, and rejecting those with no obvious track-record of open source contribution. We live in a culture where contribution is expected as a matter of course, and required to get hired into the best jobs. So, no, it's not entirely voluntary. Let's not discount the fact that many maintainers get into open source precisely because they want to use their skills to make a positive impact on the world! Moreover, many people have told me that they don't mind being treated this way, but that doesn't mean there isn't a problem. This topic is an entirely different talk in itself, so I can't go into too much detail here, but it's worth reflecting on the incentives for and against contribution that we create and reinforce.
And sometimes maintainers do look to get out of the game. Why not, if they are burning out? And many of them are burning out, asking themselves why they keep working on a project with so many demands, and so few rewards. Because there are no established best practices on how to decommission a project, as maintainers burn out and walk away, we find our digital infrastructure at risk. Case in point: In 2018, Dominic Tarr stopped maintaining a popular NPM module of his, event-stream. When a friendly-sounding voice offered to take over responsibility, Dominic gladly handed it over. The new maintainer proceeded to inject malware into the package, which remained undetected for two and a half months. As it was, the malware was tightly targeted, so the impact was not very large. But it could have been. This same story could be happening right now. We have no way of knowing.
Business realizes enormous value from open source, but maintainers do not share in the value they helped create. They deserve to. This isn't some weird Marxist ideology, but a fundamental tenet of capitalism: Those who take the risks deserve to reap the rewards. Right now, open source encourages business to shift the risks onto maintainers, while encouraging them to keep the rewards for themselves. Not only is this a deeply unjust situation, it is unsustainable and will soon have significant consequences beyond the individuals involved. All it takes is one person maintaining a key dependency giving up, and opening the door to a malicious actor.
Open source has become a tool of injustice. Open source empowers those who would oppress, makes it easier to take away the freedoms of those at the bottom-most rungs of society, and feeds power to the already powerful. I want to be absolutely clear: This is a problem. We can change this, but it means grappling with some very difficult, deep-rooted issues in the very foundations of open source.
Let's begin by asking: How did we get here? The principle of software freedom rejects explicit governance over how the unlimited software commons is accessed, yet there is nevertheless an implicit set of regulations that governs people's behavior: an invisible, yet very strong and very real, incentive structure that ensures that maintainers, contributors, and users will behave in certain predictable ways.
Let's examine some of these incentives, and see if they can help us diagnose the problem.
Far and away the most powerful motivator for getting involved in contributing to open source is "scratching your own itch". A fine case in point: "Luna" is a web application framework in C++ that I authored several years back. I was employed to work on a project that used libmicrohttpd directly for serving up content over HTTP from inside of an app written in C. If you have never used libmicrohttpd, it's a fine library for adding a lightweight HTTP server to an app, but its API is notoriously difficult to use correctly. So I wrote an easier-to-use abstraction in C++ that I called luna. The project manager ultimately decided to re-author our product in Go, so I was free to release my library to the wild. I'm certain I'm the only person who uses it, and that's fine, because the only reason it was written at all was because I had an immediate need, a need shared by apparently no one else. A brief glimpse at GitHub reveals that I'm far from alone—there are a lot of people in the same situation.
And that's fine as far as it goes, but when personal need is the single largest motivator to build open source software, we should hardly be surprised when the vast majority of it is developer tools (mostly abandoned). This is not exactly a rich pool of software for the good of everyone, but this state of affairs has broader-reaching consequences that I'll touch on soon.
The second most powerful motivator for getting involved in open source is the potential to buff your own reputation. As it happens, I don't really work very hard to mitigate code-rot in luna. There's no incentive. I'm not using it (so no itch to scratch), and no one else is using it. But suppose one of you listening to this talk decides it might be useful for your own project, and leaves an issue reporting a bug. Well, I'd like that: Someone's using my code! Attention! Who doesn't love attention! And yes, I'll fix your bug, because I want you to like me and use my code.

At some point, if we imagine more and more people beginning to use luna, I could develop a reputation as the "luna guy", and I can leverage that reputation: I can use it to get a better job, possibly even to work on luna itself. It opens the door to giving talks at prestigious tech conferences. If it gets big enough, eventually people might develop courses on "getting started with web apps using luna". It might even start making the tech news. Hey, maybe there's a startup idea in here too, hello sweet VC funding!
It's this kind of reputation buff that keeps people in open source, even when it works against their best interests. The promise of money and fame, even if distant, can be a powerful elixir.
But we should think also about who is giving the reputation buff, because of course I'm going to tailor my work to those I think will bestow those good feelings on me. Abstractly, it is the users and beneficiaries of the software I create. Which, in practice (because I'm a developer scratching my own itch), really just means other developers. So it's to my own benefit to focus almost exclusively on developer tools, because that's the most efficient way to build my reputation.
There is one additional incentive to examine, but I think that it derives at least partially from the previous two. And that's adoption rate as a primary metric of project success. Whether it's measured in package downloads, GitHub stars, or number of talks, adoption rate is how we separate the wheat from the chaff. We can see the fingerprints of the incentive structure just discussed on this metric: Projects with high adoption are the most likely to increase the reputation of their maintainers. And developer tools born of individual needs are the most likely to be widely adopted.
Setting aside the deeply problematic assumptions of meritocracy that underlie this metric, I think it's important to understand why this metric is felt to be so important. This metric provides a normative component to software freedom. The principle of software freedom tells us that open source software should be available to all; adoption rate tells us that we as individuals are succeeding when people take advantage of that availability, and failing when people reject it.
Adoption rate helps shape the kinds of software that are contributed to, helping ensure that the pool of open source software is mostly developer tools. It also encourages overlooking morally dodgy circumstances. Because adoption is viewed as unconditionally good, it doesn't matter who is adopting it. Except of course that it does. When the Cloud Native Computing Foundation announced on Twitter earlier this year that the United States Air Force was adopting Kubernetes for its fleet of F-16 warplanes, the uproar was loud enough that CNCF deleted their tweet. At a talk I gave earlier this year to an audience of Typescript developers, I discussed how Typescript was used to develop a US Air Force recruiting website for drone pilots. The dismay among the audience was palpable.
I don't mean to pick on the Air Force; it just happens to be topical. But if raw adoption rate really were viewed as a metric of success, you'd have expected cheers in both cases. But that's not what happened. There was a lot of anger, hand-wringing, and disappointment. Consider how you might feel learning that your project was adopted by ICE, versus learning that it was adopted by UNICEF. It feels different, doesn't it? That reaction is worth paying attention to, because it tells us something. It tells us that software is not a commodity, that its creators feel some sense of ownership over their creations, and that thinking about the pool of open source software as an inexhaustible commons that should be readily available to everyone for any purpose is clearly not getting us where we want to be.
As a developer, these incentives work well in my favor. I build a new developer tool, because I have a need for one. I release it into the public, and other developers see it, use it, and contribute to it, building my reputation within the developer community, and adoption of my project goes up. It's a positive reinforcement cycle not just for me individually, but for the community: The rewards of reputation bring more people into the developer community, and encourage more people to build developer tools.
This is not a particularly useful or helpful place to end up. Indeed, what we see here is that the incentives I've identified ensure that software developers and tech-focused businesses and governments reap almost all of the benefits of open source, and discourage creating software (or simply thinking about the software we do create) for the benefit of people outside of this relatively small circle of beneficiaries.
So. The metaphor of an inexhaustible commons and the principle of software freedom have led us to reject explicit regulation of the pool of open source software, in the name of maximizing freedom. But as I've shown, that rejection is really just an abdication of responsibility. There are, despite this explicit injunction, structural, implicit incentives and rules that actively work against the goal of maximizing freedom. Let's connect the dots, and show how these incentives have created the deeply unjust world I described earlier.
The primacy (and normative value) of adoption rate metrics encourages maintainers to overlook who is using their software and why. What's important is that it is getting used, and the bigger the adopter the better. Adoption rate only looks at the numbers, and doesn't consider purpose (because software freedom), and the almost laser-like focus on increasing adoption rate, combined with the reputation buff that naturally arises from increased adoption, strongly incentivizes maintainers to overlook unjust uses.
Can any of you name the maintainers of sqlite off the top of your heads? The team of three, headed by D. Richard Hipp, doesn't make news with their technical innovations; there just isn't much to innovate. Yet their ongoing maintenance of sqlite touches nearly every aspect of the tech industry. But release something useless but technologically interesting, and you'll find yourself covered by Wired and showered with job offers. We are never encouraged to ask ourselves whether our clever toy is also a weapon: Just add it to the commons, and watch the likes roll in.
When I scratch my own itch, it is almost certainly a result of my job. (My own day-to-day hobbies don't involve internet infrastructure at scale.) The tools that result are useful to my employer, and are therefore more than likely useful to other businesses. So my own internal motivations tend to grow the commons in a direction that is more useful to large businesses than to, let's say, refugees. Combined with the distinct lack of incentives for reciprocation, itch scratching encourages large corporations to adopt my software, because the value realized is so very large compared to the zero cost. And now I'm maintaining a tool that brings me more stress than financial rewards, yet makes tech founders rich.
Open source is broken; it is not working as it was intended to. The incentives need to change. The mental model we use to shape the way we think about open source needs to change. It is long past time that we start thinking proactively about the world we want to bring about with open source, and deliberately structuring our community around this vision.
Because what we've done so far as a community is merely abdicate responsibility. Enough.
So, how do we move forward? What does a "post-open source" world look like? Let's start by approaching open source from a different perspective.
Thinking of the pool of open source software as an inexhaustible commons has gotten us into a lot of trouble, and it has done so by neglecting explicit regulation of community behavior, in favor of structural, implicit regulations. As I've said before, this way of thinking condones unjust outcomes.
If we want to move past this state of affairs, and find a new way forward that captures everything that is good about open source while leading us away from a world where open source is a powerful tool for creating injustice, then we should begin by rethinking the foundational metaphor we rely on. Talk of commons and commodities, scarcity and abundance hides so much of what is valuable about open source—the people and communities that create and share and benefit from open source. It also serves to hide the injustices that software freedom creates, by focusing exclusively on the "freedom" of software users, without reflecting on either software creators, or the wider world of people impacted by the software we create and use.
So, what if we didn't think about open source through the lens of scarce and abundant resources? What if instead we focused on the people who lie at the heart of open source: Maintainers, contributors, adopters, commentators, victims, beneficiaries; and the relationships between them? The problematic cases I discussed earlier are all concerns about the ways in which members of the open source community, and those beyond who are nevertheless impacted by open source, treat each other. And as I mentioned at the beginning of this talk, the study of ethics is fundamentally an attempt to answer the question: How should we get along with each other?
Getting along with others depends largely upon our ability to treat others as humans. This might sound trivial, but it is enormously important, especially since it's pretty apparent that many open source communities struggle mightily with this idea.
I begin by observing that I am a human: that I have a capacity for rational thought and free will, that I have desires, and needs, and goals, and that I am entitled to a certain level of dignity and respect in virtue of these facts. I can infer that you are likewise human, and just like me you have a capacity for rational thought and free will, that you have desires, and needs, and goals, and that therefore you are also entitled to a certain level of dignity and respect in virtue of these facts. If I am worthy of respect, you are too, because we are both human. Respecting someone in this context means consciously recognizing another's fundamental humanness, treating them as we would want to be treated in their circumstances, and refusing to treat them as mere means to our own ends. It means taking others' desires and needs into account when we are faced with a decision, rather than simply choosing the route that is most expedient for our own personal interests.
Let's consider the concerns I raised earlier in my talk, and think about them in the context of this way of thinking.
ICE violates human rights, and has been doing so for years. Standing on the principle of software freedom—that we ought to remain neutral about how our software is used—is absolutely not an apolitical, neutral stance. In fact, giving these organizations equal access is an expression of our willingness to discount the humanity of those they oppress, treating them as unworthy of consideration. By "remaining neutral", we are in fact choosing to dismiss not just such victims, but everyone outside of our immediate community of developers, as beneath consideration, and therefore as not human.
Placing weapons into the pool of open source software is equally troubling for the same reason. Merely tolerating this behavior says that we value weapons more than we value the people who might one day be the victims of those weapons, just because they are not part of the immediate community of developers. The implicit incentive structures I discussed earlier, however, mean we don't merely tolerate this behavior, we encourage it, and that's even worse. It is more important to us that we reward developers who build clever weapons than that we think about the consequences for entire populations of at-risk people. When we choose to ignore them, we are not offering them the dignity and respect they are due as humans.
The problem with extracting free labor from open source maintainers should be obvious. Large organizations that leverage open source for free labor are not treating maintainers with the respect they deserve as humans. Respect demands that we reciprocate when we ask someone to direct their labor for our own benefit. This is why we have laws against child labor, and movements to eliminate sweatshops, indentured servitude and slavery from our supply chains. Why do we treat the production of software as though it were different? It's not.
This is why these outcomes are unjust: we are ignoring the needs, desires, and goals of people, treating them as mere means to justify to ourselves granting additional "freedoms" to other software developers.
But, such thinking also points the way to better outcomes. Recognizing our obligation to extend our consideration beyond software users and businesses means we have an obligation to actively change the incentive structures we've allowed to form.
Open source is a community. It's one thing to expect yourself to behave in a way that respects the people around you. It's another to motivate a community to behave in ways that are consistent with a principle of respect for others, such as not using our software to violate human rights. Such ethical behaviors are frequently at odds with individuals' self-interest, so we can't rely on a laissez-faire system of implicit regulation to make them happen. We need an explicit, thoughtful incentive structure.
So let's talk about the positive behaviors we want to incentivize, and the negative behaviors we want to dissuade. This is going to vary from community to community, but I think we can come up with a reasonably generalized set. Harmonizing regulations across different communities is going to be hard, but since when did something's being hard dissuade us from trying to solve it anyway? It isn't necessarily intractable.
- We must disincentivize adoption of software by actors unwilling to commit to basic principles of the value of humans. ICE and other oppressive, authoritarian organizations do not deserve free, unimpeded access to work they will use to remove freedom. Open source software must be unattractive and unwelcome to the powerful who exploit it for unjust ends.
- We must disincentivize extractive practices on maintainers. The people who create open source deserve to share in the tremendous value their effort creates. Again, open source software must be unattractive to the powerful who benefit from it disproportionately.
- On the other hand, we must incentivize the creation of software with benefit outside of the developer community, especially for the benefit of at-risk populations. Too much of the software being produced is for the benefit of developers, and not enough is being produced for other demographics. The pool of open source software should reflect the needs of those without power.
- Further, and most importantly, we must incentivize a focus on outcomes and impact, rather than raw adoption rates. Raw adoption is a broken metric; if our goal is to make the world a better place through software, then let's define what that better place looks like, and the kinds of technology that can get us there, and reward developers who work towards those goals.
We have (at least) three levers at our disposal for creating these incentives: licenses, money, and cultural change.
Licenses like the Hippocratic License, License Zero, and the Anti-Capitalist License place conditions on access to software licensed with them. The Hippocratic License, for example, requires that licensees not engage in human rights violations. Such licenses catch a lot of flak from supporters of the Open Source Initiative, because they do not adhere to the OSI's Open Source Definition, which requires that open source licenses not place conditions or restrictions on use. But in light of everything I just said, I believe these licenses, and others like them, deserve our attention.
Why? Because they might just represent a step towards a more thoughtful incentive structure like the one I outlined. A bank involved in human rights violations in Congo is unlikely to adopt software licensed with the Hippocratic License, because doing so would expose the company to more risk than they are willing to tolerate. In the past we'd have lamented the loss in adoption for this project. But, in fact, this is reason to celebrate: The project has successfully imposed an (admittedly very small, but nevertheless real) cost on the business in virtue of their poor treatment of Congolese palm farm workers. Such licenses also provide a positive incentive: Businesses that have invested heavily in infrastructure that depends upon Hippocratic Licensed software may find they have reason to avoid entering into business deals that involve human rights violations.
Preventing unjust, oppressive organizations from using open source, and encouraging otherwise unproblematic organizations to avoid oppression is precisely the point. Only then can we sleep well, knowing that we have reduced the chance that our labor is being used for evil.
License Zero offers two different licenses that work in slightly different ways, but both of which are focused on the idea of returning value to maintainers. The Prosperity License, for example, prohibits the use of the licensed software for commercial purposes without paying a fee. It is designed to prevent extractive use of the software, ensuring that a portion of any value realized is returned to the maintainer. Preventing powerful organizations from extracting value without reciprocation is precisely the point. Without explicit incentives, as we've seen, maintainers rarely share in the value created by their labor.
I call licenses that restrict access in ways that are aligned with treating others with respect "just licenses". Just licenses are, as the name suggests, a tool for justice, designed to help ensure freedom, very much in the spirit of open source. But it's still early days in our experimentation with just licensing. There's a lot of work left to be done to figure out how best to make this work.
But licensing, although a powerful tool in our toolbox, is insufficient to achieve these just goals by itself. We must think bigger. Money and cultural change are both much harder, and also arguably more important in bringing about this kind of change. If we want to start normalizing ethical open source, we must start thinking about the kinds of institutions that need to be built to create and support these incentives.
We already understand at least some of the ways we can get funding to projects that need it. But we haven't yet seen real experimentation with using that funding as an incentive for software projects, to encourage them to behave in ways that are consistent with treating humans as humans. GitHub Sponsors, Tidelift, and Open Collective are well-known efforts, but they place very few restrictions on the projects wanting to receive money. Moreover, a scheme like GitHub Sponsors doesn't even address the real crux of the problem: it is essentially a system for shuffling money between maintainers, and does nothing to create strong incentives for consumers of open source to abandon extractive behavior.
A small step in the right direction is foundations like the Linux Foundation and the Software Freedom Conservancy. Sponsorship of these organizations is attractive, because sponsors can earn marketing clout. Such foundations work towards furthering the cause of open source, promoting a culture consistent with open source values. And the foundations are in a position to use that funding to offer vital services to member projects as an incentive to join. Imperfect as they are, institutions such as these are vital to the open source movement as it exists today.
For this reason, I think that associations or foundations are a pivotal tool for propelling us into a post-open source world: for providing the incentive structure I discussed earlier, and for promoting an ethics-based culture in our community to help normalize the broader notions of freedom that I've been discussing today. It's worth looking at two such institutions that already exist that could provide a model for us: The Code for America network, and the Institutional Review Board system at American research universities.
Code for America (along with other members of the Code for All network) is an intriguing model for what could be accomplished. Code for America is an organization dedicated to leveraging technology to improve democratic government in the United States. Software for them is a tool: Their focus is squarely on the impact of their work. Indeed, nowhere on their website will you find mention of "open source", "commons", or "software freedom", even though all of their code is in fact released under open source licenses.
As a result, member projects have strict requirements. The problems that they tackle are decided on in a consensus-based way, and are driven from the bottom up, rather than by developers' whims, to ensure that each project contributes to a broader, yet still concrete societal goal. Member projects must also adhere to strict requirements on open and democratic governance, and adhere to (and enforce) a code of conduct to ensure that their communities are inclusive and thriving.
In this way, Code for America, and other Code for All affiliates, can help ensure that the software needs of people outside the developer community are being met, that their membership isn't merely navel-gazing, or worse yet, developing weapons, with organizational resources. They are providing some of the incentives we need to have a healthier open source community, and to ensure that open source really is creating freedom for those who don't have it.
We can imagine other foundations that operate along these lines. They could create some of the incentives I discussed earlier, by offering resources to member projects on the condition that the projects behave in ways that are consistent with the value of others as humans: By centering the needs of others, by prioritizing impact and outcome. We need more of these.
The scientific community has for some time faced somewhat parallel ethical issues. American research institutions that receive funding from the federal government are required to demonstrate that any research conducted on human subjects meets certain minimum ethical criteria. To fulfill this obligation, each institution has its own Institutional Review Board or IRB. The board's primary goal is the protection of human subjects' well-being, and all experiments on human subjects require the IRB's approval, using a common rubric. The board is staffed by researchers on a rotating basis, to ensure that the board is not treating researchers unfairly, and the research is being reviewed by competent peers. Each time a researcher wants to conduct an experiment, they must submit a plan to the IRB for review; the IRB has the authority to block the experiment if the submitted proposal does not meet the minimum ethical requirements.
But what about the incentives? If an experiment continues without IRB approval, or the IRB is found upon audit to have been negligent in its duties, then the Federal government has the right to withdraw all research funding for the institution—a kind of academic death sentence. This threat to all research creates a set of beneficial incentives. First, and most obviously, the threat of having your funding pulled deters individuals from trying to cheat the system. Second, the threat of having all funding pulled ensures alignment across the community, and prevents favoritism or preferential treatment for specific individuals. Third, it incentivizes community vigilance against individuals attempting to cheat the system, so the community becomes self-regulating.
Funding in this case is both a carrot and a stick: It can be used to bring people into the fold, and it can be used to keep them there as well. IRBs act as a kind of financially-backed code of conduct, but for communities rather than individuals. This provides additional incentives for the community, to ensure that the project remains focused on the right goals, and continues to participate in a consensus-based decision-making process with its siblings.
Open source creates a lot of good, but it is also enabling injustice in our world, at multiple levels and at large scale. Our metaphors are failing us, our principles are failing us, and our institutions are failing us. We have a moral responsibility as open source advocates to do better.
The pool of open source software is not a commons, and insofar as open source is primarily a force for freedom, thinking about it as such is dangerous. If we really, truly believe that freedom is the ultimate goal of open source, then we need to drop this metaphor, and re-examine what we are doing from an ethical perspective instead of a transactional one. We need new incentives, and new regulations. More importantly, we need new institutions to enact those incentives and regulations. And we all need to be contributing to a culture of change, instead of blindly upholding the status quo.
Obviously we can't stop people from releasing tools of oppression under permissive licenses. Obviously we can't stop people from just doing whatever with code whose source is released into the wild. But the natural conclusion of these two facts is not that we can just shrug our collective shoulders; these facts do not permit us a total abdication of responsibility. That's just absurd. Instead, the logical conclusion is simply that we ought to do what we can, and there remains a lot that we can do. We have a moral obligation to investigate those paths, to experiment, and to discover what an ethical, just, and sustainable community-driven software development program can look like.
So I call on you to reflect on the facts I have presented. To reflect on what "freedom" actually means to you, and whether your view has been too narrow. I call on you to talk about this with your friends, your colleagues, your community. To create a culture that embraces the need for change. I call on you to help me reshape our institutions, to create new ones that support the goals of a new ethics-based open source.
Most importantly, I call on you, when you participate in open source, to think carefully about the people and relationships your work impacts, or could be impacting. We're not alone in this world, far from it.