The Open Source Initiative (OSI) has released a new definition of open-source artificial intelligence, establishing criteria for transparency and accessibility in how AI systems are developed and used. The definition emphasizes open training data, clear licensing terms, and community-driven development, and it reflects the growing debate over the ethical implications of AI and the importance of its responsible development and deployment.
The requirement that AI systems disclose their training data puts the definition in direct conflict with Meta's Llama, a popular model marketed as open source that does not provide access to its training data. Meta contends that no single definition of open-source AI exists and that releasing training data could pose safety risks and undermine its competitive advantage. Critics counter that these justifications serve mainly to limit Meta's legal liability and protect its intellectual property. The OSI definition has drawn support from other organizations, such as the Linux Foundation, that are also working to define open-source AI. The dispute highlights the evolving landscape of open-source AI and the tensions among transparency, safety, and commercial interests.