Open source software is often loosely referred to as a “commons,” but we rarely think deeply about what that means. Elinor Ostrom, the Nobel Prize-winning economist, gave us a framework for understanding why some commons are resilient over the long run. In this post, I’ll explain what Ostrom’s work can tell us about the successful present—and complicated future—of open source software.
How Ostrom defines a commons
Ostrom defines a commons as the human and natural systems that arise around valuable resources where it is difficult or costly to exclude potential beneficiaries from obtaining benefits. Her work draws on a variety of non-digital resources, like fisheries, forests, or water basins, which are often essential to life but hard to control access to. As she shows, contrary to the popular theory of a “tragedy of the commons”, in practice people have evolved many ways to share these resources, carefully tailored to the nature of the thing being shared and the cultures of the people doing the sharing. The resulting human systems are, in her words, “self-organizing, self-governing,” and—to the surprise of cynics—often “long-enduring”.
As we’ll go into more below, open source software isn’t a perfect fit for Ostrom’s definition of a commons, because (unlike an aquifer or a forest) software is generally easy to control through tools like contracts and copyrights. Open source has specifically chosen not to exercise that control—which creates an immensely valuable commons but also, from Ostrom’s perspective, ties one hand behind our backs in managing that same commons.
Ostrom’s design principles for commons
Ostrom’s principles were derived from her qualitative and quantitative work on more than 900 long-lived commons. She argued that resilient commons usually (though not always!) do the following things—what she referred to as “design principles”:
- Clearly define what the commonly shared resource (like water or a grazing field) is;
- Clearly define who is entitled to use the resource;
- Adapt rules for use and provision of the resource to local conditions;
- Allow most users to participate in the decision-making process;
- Effectively monitor usage, by monitors who are accountable to users;
- Provide a range of punishments for rule-breakers;
- Provide cheap and easy-to-access mechanisms to resolve conflicts;
- Are recognized, and allowed to self-govern, by higher-level authorities (like states or corporations); and
- Are organized as multiple layers of nested organizations, with each level and organization given flexibility to address their local needs.
Follow these principles, and common stewardship of resources can thrive over a long period. Ignore them, or stop following them, and your commons is likely to fail.
(These are lightly paraphrased from Ostrom’s capstone work, “Governing the Commons”; errors in the paraphrase are mine.)
Software as a potentially resilient commons
So how does open source do when measured against Ostrom’s principles? Historically, I think it’s a pretty mixed bag, and yet open source has been resilient and even thrived, despite massive industry changes over the past two decades. This apparent paradox is fairly easy to resolve: Ostrom studied non-digital commons, like water and fisheries.
The ability to copy software at essentially zero cost, and to add new collaborators from across the globe, means open source software is pretty different from the commons she studied. As a result, open source has been able to ignore some of her principles without paying a penalty.
But as I’ll explain below, I think in some important ways this is changing, because developer time is scarce, and governments and businesses want to have more say in how open source is developed. So in the future we may need to pay more attention to Ostrom’s principles if we want the open source commons to continue to be resilient as the world around it changes.
Rules open source followed well
So what design principles did open source best follow in the first few decades of its existence?
- “Adapt rules for use and provision of the resource to local conditions”: adaptable use has always been part of the open source definition, but an under-appreciated strength of early open source software was that there was also a lot of flexibility in how we provisioned open source. This is very Ostromian: things that made sense in Debian might not make sense for the Linux kernel, Mozilla could pick and choose from both, and individual Perl hackers could also choose their own rules, based on their needs, users, and project types. This flexibility encouraged many different ways of participating, and therefore many different thriving commons within the broader open source commons.
- “Commons are recognized, and allowed to self-govern, by higher-level authorities”: for the first twenty-five years or so, open source was mostly legally unregulated. This again is ideal for Ostrom, since lack of higher-level interference by government or corporations allows for autonomy and local democracy, key parts of resilience.
- “Commons are organized as multiple layers of nested organizations, with each level and organization given flexibility”: This principle (sometimes called “nesting” or “subsidiarity” by Ostrom) is an under-appreciated part of the success of early open source. There were many overlapping and interrelated levels of commons (individual projects; package managers; cross-cutting areas of interest like bioinformatics Perl or compiler nerds), which could all set their own policies and cultures, allowing them to pick and choose things that worked for them and their users.
And not so well?
Three of Ostrom’s design principles concern rules and rule enforcement, and open source has historically not done well with them:
- Effectively monitor usage, by monitors who are accountable to users;
- Provide a range of punishments for rule-breakers; and
- Provide cheap and easy-to-access mechanisms to resolve conflicts.
What few rules we have had (copyleft and attribution) have rarely been monitored or enforced. What little enforcement has been done is either expensive (through the legal system) or quiet (hard to learn from and access).
Given this failure to follow the rule-related design principles, how has open software flourished? To answer this, it’s helpful to consider how open digital commons are different from “real world” commons. In most of the communities Ostrom studied, rule-breaking has real, significant costs, and so enforcement must be effective. Think of a group sharing the water from a stream—if one farm breaks the rules and takes more than its share of water, it could leave no water for other farms. (Economists call this a ‘rivalrous’ good.)
Overuse is a huge risk to the long-term success of a commons, so it must be met with credible enforcement. But since open source software costs essentially nothing to copy, the impact of rule-breaking has largely not been severe—it might sap motivation, but it doesn’t stop others from using the core resources of the commons. As a result, we’ve paid little cost for not following these principles—yet.
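To make the contrast concrete, here is a rough sketch in my own notation (not Ostrom’s): for a rivalrous resource, what remains for everyone else shrinks with every unit consumed, while copying software leaves the shared stock untouched no matter how many users there are.

```latex
% Rivalrous resource (e.g., water in a shared stream) with total stock S:
% each of the n farms' withdrawals c_i reduces what is left for the others.
\text{remaining for others} = S - \sum_{i=1}^{n} c_i

% Non-rivalrous resource (e.g., source code): each copy leaves the original
% intact, so the stock available to others stays at S for any number of users.
\text{remaining for others} = S
```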
Principles that foreshadow reduced resilience
That “yet” might sound ominous—and it should. Here are some other design principles that open source has traditionally done pretty well at, but may struggle with in the near future.
- “Clearly define who is entitled to use the resource”: This was simple: everyone in the whole world. Can’t get more clearly defined than that! But in practice, the world was smaller: we often had relationships (either social or business) with our users. Now that the entire world really is using our software, we’re finding that it includes people who drain our time or cause us ethical discomfort.
- “Clearly define what the resource is”: Historically, the ‘resource’ of open source was a piece of software, and so it was defined by the software artifact. However, increasingly the resource includes the time of the developer, because ongoing maintenance can open developers up to a never-ending stream of requests and demands. If developers don’t define their time well, their attention, energy, and passion risk being depleted.
- “Allow most users to participate in the decision-making process”: a key feature of Ostrom’s successful commons is that they are human in scale. This means most users of a resource know each other; they have hands-on expertise with the resource and its creation; and they have bonds that can help overcome friction and difficulty. All of these facilitate high-quality decision-making. When the entire corporate world is your user base, these human bonds inevitably break down.
Cha-cha-changes
In some senses, open software is the most successful commons in the history of humanity—a true, genuine wonder of human achievement. And we did it in spite of flouting many of Ostrom’s design principles! But as I suggested above, open software is changing—and we should continue to revisit Ostrom to help make sure that those changes don’t accidentally ruin it.
Users: from global in theory to global in practice
We’ve always said everyone could use open software, but now that’s really happening—problems with our software are matters of global import. This is a megatrend that will amplify the importance of all of the design principles, but particularly those that shape how creators and users relate, like “most users can participate in the decision-making process” or “monitoring of the usage rules is effective”. If we can’t figure out how to scale these processes (which could include opting out of them altogether, as the permissive licenses have essentially done for enforcement), then we’ll become increasingly fragile as our user bases grow.
Resources: from software to time
When it was very clear that users had no claim on the time of developers, the “resource” in the software commons was just the software, which could be copied and used infinitely at no extra cost to the developers.
For a variety of reasons, the resource at issue is increasingly not just a developer’s software (infinite!) but a developer’s time (extremely finite!). If time truly becomes a key resource, we’ll have to pay a lot more attention to Ostrom’s principles related to resource usage. If users who aren’t themselves contributors, or who face heavy pressures from their own jobs, can overwhelm developers without any rules or enforcement, we will soon have a critical shortage of developers.
Governance: from flexible to rigorous
As I mentioned earlier, an under-appreciated strength of early open source software was how many nested, interacting layers of autonomy there were. Rules that made sense for Python didn’t need to be transposed into JavaScript; rules that made sense for libraries didn’t need to be forced onto end users; etc., etc. This flexibility made communities more resilient. But as use has scaled, and the entire world’s economy (without exaggeration) has become dependent on open source, the biggest users are beginning to “standardize” their requirements for open source.
These standardization efforts are well-intentioned, seeking to bring industry best practices to a wider audience. And I believe they could be important and impactful—we certainly need to make our software more secure! But they are also inherently top-down, since they aim to impose a single standard across an entire sprawling industry. As a result, efforts to improve standardization can (if executed poorly) easily breach Ostrom’s principles of local decision-making and nested organizations.
While security is the most visible change of this sort for Tidelift’s partners, top-down standardization will also creep up on open source in other ways. In a time of rising international tension, with software an increasingly important “battleground”, export control (either formal or informal) may rear its head again. And machine learning will increasingly be regulated, which may have an impact on the burgeoning open machine learning community.
What does this mean for Tidelift?
At Tidelift, we’re very close to individual developers, including many who are at the front lines of these trends as solo maintainers of enterprise-critical packages—in other words, those whose time is precious, and who are increasingly being asked to fulfill top-down security mandates.
We’ve written about this problem before. Now we’re increasingly working on solutions as well! In particular, we’re working with our maintainer partners to evaluate and improve public security standards, so that they can be better suited to a broader range of “nested” open source ecosystems. This is just one part of the bigger puzzle, but one we’re excited about, and one you’ll be hearing more from us about.