On June 7, 2022 Tidelift hosted a new event called Upstream, a one-day celebration of open source, the developers who use it, and the maintainers who make it. All of the talks are available to watch at upstream.live, and several of our speakers offered to share their thoughts in a post-event blog post as well. Sumana Harihareswara discusses how to manage different contribution rates in an open source project, a phenomenon she calls “cadence shear.”
This post has the script and video for “Cadence shear: Managing rhythm and tempo mismatches in participation,” a 25-minute talk I delivered as part of Tidelift’s Upstream 2022. It includes a few lines here and there that I cut from the recorded talk to save time, and at the end I’ll mention some topics I didn’t have time to address at all. Thanks to Tidelift for the chance to speak, and thanks to my past clients and colleagues for the work and conversations that helped me notice and address cadence shear.
For more on this and related topics, please check out the resources I’ve put together on the Changeset Consulting website, or subscribe to the Changeset Consulting newsletter to get a free sample from my upcoming book on getting open source projects unstuck.
Hi, I’m Sumana Harihareswara. I’m the founder of Changeset Consulting, a small consultancy that focuses on project management for free and open source software projects and the companies that depend on them.
In your open source project, maybe you’re seeing different contributors contributing at different rates and in different rhythms, and that’s causing strain. Your symptoms might include long waits for feedback/reviews, problems discussing issues and coming to decisions fast enough for everyone to feel reasonably satisfied, resentment (both by those who are waiting and those who feel rushed and overloaded), and ultimately problems with morale, retention, and really everything else.
Sometimes different participant groups want different project velocities and rhythms. That’s cadence shear. Like when:
- Most of your team is volunteer, but there’s a Google Summer of Code intern who’s working 35-40 hours per week for 3 months, and who needs faster turnaround on code reviews.
- Some of your contributors are tied to the release cycle for a related organization/project (such as Debian releases), but some are not.
- A few different companies, nonprofits, government agencies, or academic teams are maintaining and improving a tool together – but one of them is working way slower or way faster than everyone else.
In general, cadence shear is a problem where different subsets of the team are attuned to different deadlines, or have different levels of urgency for project progress.
First off, this is a problem of success. Congratulations: your open source project includes contributors with genuinely different incentives and from genuinely different contexts!
But it’s still a problem, so let’s talk about some of the participant configurations that you see in this situation—paid plus volunteers, volunteers plus time-limited paid, paid teams in a consortium, and volunteers with disparate deadline affiliations—and some approaches to learning and addressing everyone’s expectations.
There’s a saying that “every criticism is the tragic result of an unmet need” and, in keeping with that, a lot of these recommendations boil down to setting expectations and aligning on priorities. And I’ll share some specifics on how to do that.
In this talk I’m going to mainly assume that you are the leader of the project in question, but sometimes I’ll share tips and suggestions for what a contributor can do in this situation. And I hope it’s useful to you to consider the situation from both perspectives.
Paid teams in a consortium
Let’s start with the scenario of multiple different paid teams from different organizations that are trying to work on a project together—but not all at the same speed.
Here’s an example one of my friends told me about. He works at an organization that houses a project where they’re trying to get wide adoption, plus encourage a multi-vendor ecosystem around it to support that wide adoption. They’re pretty resource-constrained. A vendor said: let’s team up! We’ll write code for the features we want!
The vendor had a strong dedicated engineering team, and could produce code pretty quickly. Well, they ran into a big problem because of the code review bottleneck. The vendor would put several developers on a task, they’d take a month to write it, and submit it for review—and then the development team at the host organization would have to shift away from their own priorities, take a week to go through and review this code and ask for revisions, and fall behind on their own roadmap. And the vendor’s customers were getting antsy about getting these new features delivered.
The host organization said: look, we want to make this work, but we’d need more testing resources to get through these reviews faster. The vendor said: OK, we can help with that. Unfortunately, the testing service they hired couldn’t simply slot in and adapt to the existing infrastructure and codebase; it could only work with a codebase hooked into its pre-existing automated test infrastructure. So the host organization started spending a bunch of time making those changes, which made the review capacity issues worse.
But fundamentally there was a disconnect between these two partner organizations because they didn’t genuinely share a roadmap and a vision for what to prioritize together. And so, before the testing integration could bear fruit, the conflict was resolved with a fork, and now there’s duplicated effort and a bunch of the bad things that happen when there’s a fork.
So: what could they have done instead?
The biggest one is: Modularize. Structure the architecture so that people who want to add their own features can do so by creating an add-on or extension that integrates with the core you maintain, through a repository they control and interfaces they can depend on. So, for example, companies that want to add functionality to MongoDB can start with a language-specific driver and are encouraged to do so, explicitly because, among other reasons:
- Fewer requirements to get a pull request approved: There are fewer moving parts in a driver than the core server, due to the smaller codebase. This makes it easier for us to tell if your change is going to break something else, whether that something else is an existing part of the driver or a planned new feature.
- Apache License: Among other things, you do not have to sign the contributor agreement when contributing to the drivers, which can hold up the pull request process.
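To make the modularization concrete, here is a minimal sketch (in Python, with invented names, not MongoDB's actual API) of the kind of narrow plugin seam that lets add-ons live in repositories their authors control while depending only on a stable core interface:

```python
# Hypothetical sketch: a core application exposing a narrow plugin
# interface, so add-ons can evolve at their own cadence in their own
# repositories. All names here are invented for illustration.
from typing import Callable, Dict

class Core:
    """The core you maintain; plugins depend only on this interface."""
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, event: str, handler: Callable[[str], str]) -> None:
        # The stable seam: add-ons hook in here instead of patching core code.
        self._handlers[event] = handler

    def dispatch(self, event: str, payload: str) -> str:
        # Unhandled events pass the payload through unchanged.
        handler = self._handlers.get(event, lambda p: p)
        return handler(payload)

# An add-on, maintained in a separate repo on its own schedule:
def shout(payload: str) -> str:
    return payload.upper()

core = Core()
core.register("format", shout)
print(core.dispatch("format", "hello"))  # HELLO
```

The design point is that the core team reviews changes to the seam, not to every add-on, which is what relieves the review bottleneck described above.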
Another important approach: Give them ways to contribute that will actually aid in your code review capacity. For instance, ask them to review your code too, in a nonbinding way. A first-pass code review from an engineer who can at least do a basic round of testing, find bugs, ask about things that don’t make sense, and so on will help save time for more senior developers who can then focus on the more high-octane concerns, like security, performance, and maintainability. And reviewing your company’s code will also help them learn how you do things, which will make their pull requests easier to review and merge in the future.
I spoke about this at Upstream last year in “Sidestepping the PR Bottleneck: Four Non-Dev Ways To Support Your Upstreams,” which covers some more ways you could encourage your partners to contribute without getting stuck in a review backlog. There’s money, of course, but also mentorship for new contributors, coaching and work assistance for your existing contributors, and providing and maintaining testing infrastructure.
But also, on a more abstract level, ask yourself about your strategy: do you actually WANT external participation, wide adoption, and a multi-organization ecosystem? Is that something you’re strategically committed to? If so, then you will probably need to genuinely share the roadmap and process with them. If you don’t actually want that, then rip off the Band-Aid and acknowledge that you simply aren’t going to take substantive contributions from anyone outside your own team, so you can set expectations early and avoid frustration.
Here I’d encourage folks to read the TODO Group’s “Effective Open Source Development & Participation” which can help you make the argument internally that, no really, it’s worth investing in collaboration upfront so that you can avoid the unfortunate outcome in the anecdote I told you.
I also suggest you take a look at TODO Group’s guidance on “Understanding Upstream Open Source Projects,” especially the section on “Schedule/Timing Considerations.” And pass it along to the contributing organization. Because it suggests:
If you have internal developers who are making upstream contributions, you’ll need to budget time in their schedules to allow for these contributions to be made. …it’s important to create and maintain a separation of upstream work and product work. In other words, it’s recommended to provide your open source developers with guaranteed time to meet their upstream aspirations and responsibilities….
In the absence of such an upstream time guarantee, it’s easy for these team members to be sucked into becoming an extension of product teams, resulting in their upstream focus drying up in favor of product development, which may help in the short term, but can ultimately lead to loss of your organization’s reputation in the upstream community, which can negatively affect your ability to help guide the upstream project in areas beneficial to your organization.
(Some links to renderings that may be more legible: https://todogroup.org/guides/participating/#best-practices-to-contribute-code-upstream and https://todogroup.org/guides/impact/ )
If you’re dealing with cadence shear from the contributor’s side of the table then I hope this guidance can help you make the argument internally that it’s worth it to adjust your cadence to work better with your upstream.
Paid plus volunteers
Next let’s talk about the situation where the core contributors are paid by the same organization and are aligned on the same cadence and roadmap, but external contributors, especially volunteers, are not.
This is a really common situation. For example, when I started working at the Wikimedia Foundation, most of the maintainers of MediaWiki were not paid by Wikimedia Foundation, or were not paid at all for their MediaWiki work. Then, the engineering department expanded to the point where we were employing more than half of the maintainers.
And we developed a more and more coherent roadmap of priorities for what the paid folks would work on. But we wanted to ensure that people outside the Foundation, in particular volunteers, could make meaningful contributions. This was partly for basically ideological reasons, ensuring we didn’t just turn into an elite ivory tower, but also for pragmatic reasons—making sure MediaWiki stays a multi-vendor, multipolar project is good for the continued health of the project. For example, it helps MediaWiki stay attuned to changes in tech and user needs, and it helps grow contributors whom the Foundation or other vendors might later hire!
(If you’re interested in why you shouldn’t just hire all the really promising volunteers, I recommend Byrne Reese’s piece on why WordPress dominated MovableType in the competition among CMSes several years ago—among other things, Six Apart hired all the MT experts and made them unavailable as leaders and vendors, instead of encouraging them to grow into a strong ecosystem.)
But of course those volunteer-type participants—and here I include engineers whose companies are letting them take a day or a week or a month as a kind of community service sabbatical, to contribute to a project—are episodic, and don’t show up already knowing about and plugged into your roadmap and schedule. Here I want to refer to a research paper on episodic participation: “Managing Episodic Volunteers in Free/Libre/Open Source Software Communities” by Ann Barcomb, Klaas-Jan Stol, Brian Fitzgerald and Dirk Riehle, published in 2020.
I was at the Wikimedia Foundation 2011-2014 and I SO wish I’d had this paper back then! Y’all are lucky. Their description:
Episodic contributors may have less investment in ensuring that their work is completed in a timely manner, or is completed at all. This can be especially problematic if the work is important and others are relying on it.
They make dozens of recommendations in several categories. I think a lot of those suggestions are ones that reasonably well-run open source projects with paid staff are probably doing anyway, so I’m going to focus on highlighting the ones that you might not already be doing and that would particularly help with the cadence shear with episodic participants. In Governance, suggestions include:
“Manage the delivery triangle: Adjust scope (quality or features) or schedule when project releases cannot be completed on schedule at the desired level of quality with the expected features.” If you’re very deeply committed to including episodic participant work on the critical path, this IS one way you could do it.
“Define measuring and success: Define what successful engagement of episodic contributors looks like. Describe how you will measure the impact.” This helps because it forces you to get concrete about what you’re looking to achieve by engaging episodic contributors, and then you’re on steadier ground to invest in that and make tradeoffs, whatever that ends up looking like for you.
In the Preparation section, suggestions include:
“Create working groups with a narrow focus.” This is one I’ve definitely noticed larger open source projects doing, as a way to harness interest from people and institutions that are PARTICULARLY interested in one specific topic, while creating a bit of a membrane between that hive of activity and the maybe slower-paced work of other sub-teams.
“Set expiration dates: Set distinct deadlines for initiatives.” I’ve found this approach is surprisingly underused. When you’re trying something new in contributor recruitment and onboarding and retention, some kind of experiment, if you don’t set an end date, then you are likely to just accidentally slide into considering it a new part of the status quo without genuinely stopping to evaluate whether it worked. As the researchers say, “Setting an end date for the initiative gives structure to the process and discourages procrastination.”
“Provide templates for presentations: Create one or more standard slide decks which your contributors can use with or without modification. Contributors may be more likely to present if they do not have to create the material themselves.” Remember, you can use their knowledge and curiosity to improve the project in other ways. For example, ask them to teach about your project at meetups, at conferences, and in their communities. You may have just found your most enthusiastic marketer. And I can tell you that this has particularly worked well for Google Summer of Code and Outreachy to help enthusiastic supporters spread the word about those programs, for instance, on college campuses. This is something they can do completely on their own schedule, so it doesn’t cause cadence shear with yours.
This is related to another recommendation, later: “Encourage learners to mentor. Engage [them] in leading other episodic contributors. Let them review episodic contributions.” Again: encourage them to do work that doesn’t end up waiting for or diverting your critical path, as I’ve discussed in “How to Teach And Include Volunteers who Write Poor Patches.”
“Write modular software: Ensure that software is modular.” This goes back to what we were saying before.
In the “Onboarding Contributors” section, I’d like to highlight a few tips:
“Ask new and infrequent contributors about their expectations, availability, preferences, and experience.” I’ve consulted for a few projects where my team, as a consultancy coming in, made the first systematic effort they’d ever had to talk with the episodic contributors about what they wanted, their expectations, and the contexts they were coming from. I think a lot of maintainers would rather do something low-touch like a survey, but that just won’t give you the same quality of understanding. You and the contributor have really different mental models of what contribution might even look like, and you need a conversation to elicit their mental model before you can work with it. Then you can guide people to roles and activities that fit both their schedule and yours.
“Screen potential contributors: Screen potential contributors to determine if they are a good match for the role. This may include having availability at the appropriate time, or being able to commit to a certain amount of time.” And relatedly, “Explain the need for maintenance: Educate contributors about what happens to a contribution after it is included in the project. Explain the benefits to the project if they remain available to maintain their contribution.”
I think in open source we’re often inclined to say the sky’s the limit, and anyone who wants to try should go for whatever they want to work on. We defer to the contributor’s autonomy. But once we take a moment to consider what factors make it likely that a contributor will succeed or fail at a particular role or activity, we can figure out some guidance, some expectations to set, and sharing those upfront is a good idea that can save people a lot of unnecessary frustration. It’s also motivating to contributors who are uncertain whether they’re qualified to try something and whether they’ll have enough consistent time for a particular task, because you’ve taken away the guessing. Now they know.
The researchers mention “Guiding to junior jobs” and I’d add here: for volunteers who are more open to guidance on what they work on, ask them to work on peer-led projects such as extensions, add-ons, and the larger ecology. Again, modularization here is your friend.
“Manage task assignments with an application: Use an application, such as a wiki or bug tracking system, to handle the assignment process.” And I’d connect this with a few of their “Working with contributors” suggestions:
“Give permission to quit a task. Give people permission to skip [a period of] work or a task, without recrimination.”
“Encourage people to quit. Encourage people who no longer wish to fulfill a role or complete tasks to step down.” For example, the zulipbot tool, developed by the Zulip open source community, automatically un-assigns a bug if the assignee hasn’t commented on it in several days. Or, for larger responsibilities: June 21 is Volunteer Responsibility Amnesty Day (it’s twice a year, on the solstices). The idea is: if it’s clear who’s responsible for a particular activity or task, AND we explicitly remind people, regularly, that it’s completely fine to say, “actually, no, I need to stop doing this,” then it means everyone can set more realistic expectations, and you can incorporate more realistic plans into your schedule and roadmap.
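As an illustration of the zulipbot idea, here is a small Python sketch of the staleness decision only. This is not zulipbot's actual implementation or data model; the field names and the 10-day threshold are invented, and a real bot would also call your forge's API to fetch issues and post the un-assignments:

```python
# Hypothetical sketch of stale-assignment detection, in the spirit of
# zulipbot: flag assigned issues whose assignee has gone quiet.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=10)  # invented threshold for illustration

def issues_to_unassign(issues, now):
    """Return numbers of issues whose assignee hasn't commented in STALE_AFTER."""
    stale = []
    for issue in issues:
        if issue["assignee"] is None:
            continue  # nothing to un-assign
        if now - issue["assignee_last_comment"] > STALE_AFTER:
            stale.append(issue["number"])
    return stale

now = datetime(2022, 6, 7)
issues = [
    {"number": 1, "assignee": "alice",
     "assignee_last_comment": datetime(2022, 5, 1)},   # quiet for weeks
    {"number": 2, "assignee": "bob",
     "assignee_last_comment": datetime(2022, 6, 5)},   # recently active
    {"number": 3, "assignee": None,
     "assignee_last_comment": None},                   # unassigned
]
print(issues_to_unassign(issues, now))  # [1]
```

The point of automating this, rather than asking a maintainer to nag, is that the reminder arrives predictably and without recrimination, which is exactly what makes quitting a task feel permissible.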
“Automate process assistance: Consider automation to help people work through the early processes, such as a chat bot or step-by-step interactive site.” Again, this helps you decouple your experts’ time investment from their time availability. This is an area where platform support would be welcome.
There are several contributor retention suggestions that I won’t go into now, since the connection to cadence shear mitigation is pretty indirect, but they’re really worth considering.
And there’s a working-with-contributors suggestion: “Rotate focus areas on schedule: Rotate between different focus areas with a consistent schedule.” This is especially good if you have episodic contributors with specialized skills or domain knowledge: “People will be able to plan when their expertise is needed.” Imagine someone who works full time at VMware and wants to plan when to take a sabbatical to contribute to your project. If you can collaborate with them to work on the timing, that’s a big help to you both.
I’ll also add something they don’t go into explicitly, which is: if you want to gather contributions from episodic volunteers, then you’re going to need to have most of your communication happening asynchronously. I recently read a suggested breakdown:
- 70% of your group’s communications ought to be async using GitHub, Google Docs, Zulip, Slack, and similar tools.
- 25% synchronous online conversations using livechat tools like Zulip and Slack, and videochat tools like Jitsi, Zoom, and Google Meet.
- 5% in-person meetups, such as annual project and smaller team retreats, hackathons, conferences, and so on—of course that depends on COVID but one big all-day online conference per year would probably make sense, to keep everyone aligned.
Volunteers plus time-limited paid
This is a situation where your open source project is mostly volunteers, but a set of contributors have some limited time availability and want to work on a particular project.
This is similar to the episodic volunteering situation in a mostly paid context, but sort of in reverse. The difference is that the time-limited people probably have a specific project they’re paid to work on.
Many of the recommendations we’ve already discussed, on how to manage episodic participants, are also good for time-limited paid people.
If there is any flexibility regarding what you can get the time-limited paid person to work on, be strategic about what they work on. Maybe they should work on infrastructure rather than feature work. For example: one thing that my colleague did to help the autoconf project, which is nearly entirely volunteers, was to write an assessment of the project’s Strengths, Weaknesses, Opportunities, and Threats which others have used to better structure and prioritize their work.
If the time-limited thing is an internship like Google Summer of Code or Outreachy, then the maintainer team should talk ahead of time and get clear on why you’ve decided to spend time mentoring this person. If the goal is to get a task done, concentrate on that: the deadline becomes a kind of crunch time, and everyone has to pitch in and adjust their schedules to get the improvement merged by the end of the internship.
But if you joined the program because you have a goal of increased capacity in the long term by growing this contributor for the long run and building infrastructure, then knowing that will help you guide them differently. Maybe you’ll mutually decide to adjust what success means for this specific internship, modifying your plan on the task you originally envisioned them doing, and adjusting to your own availability—prioritizing building the relationship over pushing the deliverable.
If you’re a time-limited contributor, and you want to help one of your upstreams, if you’re ambitious, one high-leverage thing to do is to improve THEIR upstreams. For instance, nearly no one is paid to work on the Python tools Mailman or Beautiful Soup. If you want to make life easier for them, look for where they are waiting on upstream improvements in libraries like Sphinx or lxml, or even in the Python standard library.
Volunteers with disparate deadline affiliations
Finally, I want to talk briefly about volunteers with disparate deadline affiliations, like when some but not all of your contributors are tied to the release cycle for a related organization/project (such as Debian releases). Maybe there’s a feature freeze deadline for getting your package into an OS or compatible with a new programming language version.
This one can be harder to think about because it ends up being a fundamental strategy question for the project. Maintainers need to get on the same page about how important it is to the project to work under this deadline.
Because: if it’s important, you need to adhere to it. But if it’s a second-class priority, then the team ought to find a way to architect things so that these efforts happen in parallel or are modular.
A common situation in software for the sciences is that a researcher needs a change to the software in time to write and submit a paper.
In this case, it’s tough, but recommending a soft temporary fork might be best. I recommend that you be explicit and nonjudgmental about this, and help the researcher understand: the change doesn’t necessarily need to land in main; what’s needed is running code that does what they want. They may not quite understand that, thanks to version control, their paper code doesn’t need to get merged into main to be referred to in the paper; it will just be a bit more awkward to cite that specific code.
When it’s not possible to use modularization to solve the “my paper code needs to be in the version control repository” problem, then it might be a good idea to normalize soft and/or temporary forks. But, of course, we need to acknowledge the risk that the merge into main may never happen, because once that paper’s written, the researcher might have no more time to work on this. So at that point it depends on the maintainers’ judgment of how much they care about helping get that particular change across the finish line.
In conclusion, a lot of these approaches boil down to making hard decisions about what you really value, being upfront about your expectations and your needs, and investing in process. Which might sound kind of trite and the kind of advice you’d get in a newspaper advice column, so, sorry it’s not that original. But I hope it’s helpful nonetheless.
I’m Sumana Harihareswara, and I can help you implement these recommendations through my consultancy, Changeset Consulting. And I’m working on a book on managing existing open source projects in general, helping them get unstuck.
And let’s talk more in the chat after this talk – I’d love to hear your stories about what works.
Some topics that I didn’t have time to address in this talk, but that I aim to cover in the relevant chapters of my upcoming book, in no particular order:
Are there particularly different opportunities or challenges if different paid groups within a single organization (such as a backend group, a frontend group, a mobile group, and a DevOps group in a company) have different deadlines and incentives while contributing to the same project?
Volunteer time is different from paid time, as Sue Gardner discusses in her great “A little guide to working with online communities.” You can imagine that a team of paid contributors are working on a conveyor belt, and that volunteers are trying to hook new conveyor belts into this system that go at different speeds. When you’re a paid worker who’s managing volunteers who work at different speeds, what do you need to do to particularly avoid being exploitative, or making them feel yanked around?
Some specific things you can do: assess the project’s pace and figure out what the maintainers’ sustainable pace is (perhaps using a core sample/assay of the last 3 months of turnaround on relevant communications and artifacts), and suggest and get consent on a general expected turnaround time for email, new issues, and patches. Figure out whether your team even WANTS to be a particularly welcoming-to-new-contributors open source project (in my opinion, it’s OK not to be, as long as you are explicit and consistent, and you make referrals to help new contributors find alternate projects that would suit them better). If you do, then inventory and card-sort your project tasks: make a list of 20 different things that need doing for your project. Be deliberately wide and messy here. And, of course, anything you can do to increase your maintainers’ architectural and code review capabilities will help cadence shear.
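The “core sample” of turnaround times mentioned above can be as simple as a median over recent history. Here is a minimal Python sketch of that idea, with invented example data; in practice you would pull the timestamps from your tracker or mailing-list archive:

```python
# Hypothetical sketch: estimate the project's actual turnaround from
# recent history, so you can propose a realistic expected response time.
from datetime import datetime
from statistics import median

def turnaround_days(samples):
    """Median days between an item arriving and its first maintainer response."""
    deltas = [(responded - opened).total_seconds() / 86400
              for opened, responded in samples]
    return median(deltas)

# (opened, first_response) pairs from, say, the last 3 months of patches:
samples = [
    (datetime(2022, 3, 1), datetime(2022, 3, 4)),
    (datetime(2022, 3, 10), datetime(2022, 3, 24)),
    (datetime(2022, 4, 2), datetime(2022, 4, 3)),
]
print(turnaround_days(samples))  # 3.0
```

A median is more honest than a mean here, because one pull request that sat for six months shouldn't define the expectation you publish.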
When I was the engineering community lead at Wikimedia Foundation, one thing I did was identify which of our engineering projects were more amenable, or less amenable, to volunteer work—including both long-term stuff like internships and short-term stuff like “Good First Bug,” developers being around to pair on that codebase during hack days, “please test this and here’s a script for what to test” days, etc. There’s a sweet spot when projects aren’t too prototype-y and aren’t too legacy-y, where projects have better affordances for new contributors—both paid and unpaid! (After all, it’s hard for a new junior tech writer to take on a fast-changing architecture whether he’s staff or not!) And I considered the individual temperaments of the people on the team, and other context-specific factors.
What is scarce and what is abundant in your project? People’s expectations clash and people run into friction when processes set up in abundance-culture have to cope with scarcity-culture. For example, some processes act like everyone has infinite time to look at working code, work through tutorials, respond to code review, read every message on the email list, mentor other people, and so on. Some processes assume schedules, deadlines, and participants being paid for hours of labor. Also, people have pretty different expectations about whether only a few people or nearly everyone gets power and authority and has their thoughts taken seriously. The confluence: some proposed approaches to getting and retaining contributors will scale, but most of them have some range (of numbers of projects, of mentors, of contributors) where they’ll work, and outside that range they break down. And so having better numerical targets helps us decide which approaches to invest in—how many people need to get served? What if we intensely mentor 4 interns a year instead of 5, and spend the extra time on running SpinachCons, open office hours on Reddit Ask Me Anything, and other intake techniques?
Again, please let me know if you would like to share experiences or resources on any of these related topics! And thanks for enjoying my talk. Hope to catch you at next year’s Upstream!