<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=705633339897683&amp;ev=PageView&amp;noscript=1">

The dawning age of regulated open: notes from the field

by Luis Villa
on February 14, 2024

Updated on February 15, 2024

Last week I spoke at two universities, where faculty and students are trying to untangle the future of AI. I came away from the experience more convinced than ever that we are approaching a fundamental clash between open’s dedication to “anyone can use and modify, for any purpose” and society’s increasing conviction that various types of software must be regulated for the common good.

Debating open AI at Silicon Flatirons

Tidelift co-founder and General Counsel Luis Villa debates open AI at Silicon Flatirons

Photo: Flickr siliconflatirons 

The Silicon Flatirons program at the University of Colorado Law School runs a now-venerable conference on internet policy, often attended by DC policymakers (including, this year, U.S. Senator John Hickenlooper). This year I participated in a debate on the resolution “The most powerful generative AI models should, by law, be compelled to be open” alongside three law school professors: Casey Fiesler, Paul Ohm, and Blake Reid.

Casey and I won the debate (swinging the audience from 48-52 against to 64-36 for the resolution), but the most interesting discussion for me was not the debate itself. Instead, it became clear during preparation that much ultimately hinged on what Casey called “gatekeeping”: if there were a clear, compelling societal interest in closing off AI to a particular purpose, could that be done? If the answer was “yes,” then the many other arguments for open would carry the day. If, on the other hand, open source could not effectively prohibit at least some of the worst uses, then it seemed likely the audience would vote no.

This question is not new: policymakers and competitors have asked it for some time. More recently, many developers have been asking an important variation on it—“can we restrict ourselves?”—as part of a push for “responsible” licensing. But, in light of the many terrible hypotheticals posed by AI advocates—and often believed by lawmakers—the question of “can open source restrict use when lawmakers require it?” has taken on new urgency.

For the purposes of the debate, and debate prep, I argued that this sort of restriction is possible. To do that, I drew on a variety of historical examples, including how Debian and Mozilla handle US export control laws, how various projects have handled patents, and (in a timely example) how pre-AI image-modification programs have handled so-called non-consensual pornography.

But outside the context of the debate, “can we restrict software when we have to” is a harder question—and one that, undoubtedly, open is going to have to grapple with.

Explaining open, sharing, and AI at Duke

Luis Villa discusses Open Source, AI, and law at Duke

Photo: Merry Rabb

From Colorado, I winged my way to my alma mater, Duke, where I was a guest of Professor Owen Astrachan for his computers and policy class. My lecture surveyed the history of open sharing, and how that may be changing in the near future.

The audience here was different (mostly students, not academics and policymakers), but the basic message (somewhat to my surprise) ended up being very much on the same topic: what restrictions is open facing, after 25 years of operating (mostly) without government-issued rules?

For the students, I started with a bit of history: why do we share so much? How did we end up with a rule that says, in essence, “anyone must be able to use this for anything”? And then I fast-forwarded to the present, where lots of people are pushing back on that freedom. Regular readers of the Tidelift blog will be familiar with many of the themes in the lecture, from ethical licensing to security mandates. And after the talk, student questions jumped on another theme I didn’t mention but that they were quite curious about: the intersection of commercial use and open source.

Despite the familiarity, as I wrote my slides a recurring theme kept jumping out: we’re moving from an age where we in the open source community argued amongst ourselves about restrictions on commerce and ethics to an age where governments aggressively impose these restrictions on our community. That’s a massive change that I don’t think we’ve yet reckoned with—and one that will absolutely shape the world these CompSci students graduate into.

From the lecture hall to reality

It would be nice if these topics were hypotheticals to be discussed only in lecture halls. Instead, the tension is playing out in the wild, and it may come to a head this year.

Open source communities are actively discussing how to handle use restrictions in AI. The Open Source Initiative recently published a 0.0.5 revision of its “Open Source AI” definition, and Creative Commons and Open Future Europe have launched a consultation on creativity and AI as well.

On the other side, as we’ve documented on the Tidelift blog, new restrictions are constantly coming down the pike from governments. These have not yet had much bite, but it seems clear that they will only ratchet up from here. We’ll do our best to keep you updated on new government regulation impacting open source via our government open source cybersecurity resource center, and I’ll continue to bring you my perspective on the developments I find most interesting and impactful here on the Tidelift blog.
