A year after OSI debuted OSAID 1.0, critics Amanda Brock and Bruce Perens still aren’t sold — and they say the debate points to bigger questions for tech, as well as open source.
About a year ago, the Open Source Initiative announced the first version of its Open Source AI Definition. Almost immediately, OSAID 1.0 drew severe criticism from every corner of the open source community, mainly for:
- lack of a requirement for open training data
- the very idea that the term open source could ever be applied to AI, and
- the concern that doing so would water down the meaning of open source to the point of making it useless

OSI’s answer, in essence, was that OSAID 1.0 was a necessary and good starting point that left room for future flexibility. That was a year ago, though, which is a long time in a field changing as quickly as AI. How do things stand now?
If you ask me, what we call AI today has so many problems that even if there were one universally agreed definition of open source AI, and everybody applied it, it wouldn’t really matter.
But maybe that’s just me.
Looking for more studied opinions, I reached out to two of OSAID’s first critics: Amanda Brock, CEO of OpenUK and a former OSI board member, and Bruce Perens, the OSI co-founder who wrote the original Open Source Definition. I asked them what they think of OSAID today, now that it’s approaching its first birthday.
Neither of them has changed their mind.
Brock pointed out that the last year has brought many shifts: from LLMs to small and tiny models, synthetic data for training, and agentic AI, not to mention DeepSeek. The Chinese model’s software is released under permissive, OSI-approved licenses, but its weights are covered by a custom license that doesn’t meet the Open Source Definition, mainly due to restrictions on use.
None of this changes the fact that we already have a valid and working Open Source Definition, she said. Adding another has actually started to undermine it.
“I recently saw a report sponsored by Meta in which the authors created their own definition of open source AI and of open source software,” she said. “This simply wouldn’t be happening if we hadn’t opened an unnecessary can of worms with the OSAID.”
Perens was more specific.
“I think the main problem is memory,” he said. “If the AI is to remember a conversation, that means the user is an open source contributor. I think a lot of people would rather have their use of the AI stay private.”
In other words, a useful AI model can be open source only if all of its users agree to make everything they tell it public, which is never going to be the case.

“The problem before the Open Source AI Definition was openwashing: saying that something was open source when it was not,” he continued. “They hoped that an AI-specific definition would reduce openwashing. If you look at the OSI’s own anniversary report, the problem now that the definition is a year old is… openwashing.”
Another issue, according to Perens, is that after a year, “Most AI vendors have not accepted OSI as an authority, with statements by Meta being the most glaring example.” Even OSI’s recent report about OSAID strikes Perens as an attempt to put a good face on what is mostly failure, since the biggest successes it reports are that proprietary AI vendors have released “some” things.
What’s Next?
As things stand now, Perens doesn’t rule out that OSI “might find it even more difficult to become a respected source regarding data governance, since there are other, more trusted, voices like EFF in that space.”
Meanwhile, he is working on Post Open, which he defines as “an effort — not specific to AI — to make something like open source that is sustainable for the developer. The big difference is that deep-pocketed users have to pay, and they get support that fits them better than Open Source does today.”
As part of that effort, Perens also wrote a post illustrating how easy it is to apply the original Open Source Definition to AI.
For Brock, the only level at which it will be meaningful and possible to define openness for AI remains its individual components: software, data, agents, models, and weights. That is because data and software are inextricably linked in AI, yet each component has its own distinct IP, rights, and licensing needs.
The way forward, Brock said, is to stick to existing definitions and licenses in the contexts for which they were created, and to add new definitions only where absolutely necessary, such as an Open Model Definition to fill the gaps between software and data. The fact that OSI is an American organization doesn’t help, she added, because “the deep experts on data are largely outside the US, for example in UK, France, and India.”
That said, Brock and OpenUK plan to leave it to others to worry about the definition.
“We are busy and seeking to have an actionable impact in openness. We simply avoid using the term ‘open source AI’ wherever possible, and continue to produce world-leading reportage on it,” she said.
This doesn’t mean they will ignore the issue. For example, OpenUK is now preparing an AI Openness Fringe Event for the Indian AI Impact Summit, which will take place in February.
I Have Some Weird Feelings About This…
On one hand, everything Brock and Perens said may be summarized as “just avoid OSAID.” On the other, our conversations left me with a couple of nagging thoughts.
One thought that doesn’t bother me at all is that most of the AI advancements that matter in the next five years will — I’m tempted to say “should” — happen outside the United States. The other is the possibility that this whole OSAID mess is one more signal that open source as a whole risks fading into irrelevance, regardless of AI.
If Meta, Alphabet, and friends published all their AI-related code under a copyleft license such as GPLv3 tomorrow, what difference would it make? Seriously: would that be enough to end their abusive monopolies? Or would it matter much, much less than new regulations that, to offer just two examples, make adversarial interoperability mandatory and forbid user profiling, no matter what software licenses are involved?

Marco Fioretti is an aspiring polymath and idealist without illusions based in Rome, Italy. Marco met Linux, Free as in Freedom Software, and the Web pre-1.0 back in the ’90s while working as an ASIC/FPGA designer in Italy, Sweden, and Silicon Valley. This led to tech writing, including but not limited to hundreds of Free/Open Source tutorials. Over time, this odd combination of experiences has made Marco think way too much about the intersection of tech, ethics, and common sense, turning him into an independent scholar of “Human/digital studies” who yearns for a world with less, but much better, much more open and much more sensible tech than we have today.