A new definition of ‘open source’ could spell trouble for Big AI


The Open Source Initiative (OSI), self-proclaimed steward of the open source definition, the most widely used standard for open-source software, announced on Thursday an update to what constitutes an “open source AI.” The new wording could exclude models from industry heavyweights like Meta and Google.

“Open Source has demonstrated that massive benefits accrue to everyone after removing the barriers to learning, using, sharing, and improving software systems,” the OSI wrote in a recent blog post. “For AI, society needs the same essential freedoms of Open Source to enable AI developers, deployers, and end users to enjoy those same benefits.”

Per the OSI:

An Open Source AI is an AI system made available under terms and in a way that grant the freedoms[1] to:

- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use with or without modifications, for any purpose.

These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system.

Under such a definition, neither Meta’s Llama 3.1 nor Google’s Gemma model would count as an open source AI, Nik Marda, Mozilla’s technical lead of AI governance, told PCMag. “The lack of a precise definition in the past has made it easier for some companies to act like their AI was open source even when it wasn’t. Many – if not most – of the models from the large commercial actors will not meet this definition.”

The older, looser definition gave companies enough leeway to undermine their own consumer AI products, Marda argued, changing a model’s functionality or disabling access at the company’s whim. Such actions could lead to “disrupted services, subpar performance, and more expensive features in the apps and tools that everyone uses.”

Neither Meta nor Google has yet acknowledged the new definition as an industry standard.