Major AI Models Not Very Transparent: Report
The Foundation Model Transparency Index, created by a group of eight AI researchers from Stanford University, MIT Media Lab, and Princeton University, rated the 10 most popular AI models on how much their developers disclose about how they build their models and how people use their systems.
The report showed that "no major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry".
Among the models it tested, Meta's Llama 2 (54 per cent) scored the highest, closely followed by BloomZ (53 per cent) and then OpenAI's GPT-4 (48 per cent).
Other models evaluated include Stability's Stable Diffusion (47 per cent), Google's PaLM 2 (40 per cent), Anthropic's Claude (36 per cent), Command from Cohere (34 per cent), AI21 Labs' Jurassic 2 (25 per cent), Inflection-1 (21 per cent) from Inflection, and Amazon's Titan (12 per cent).
“While the societal impact of these models is rising, transparency is on the decline. If this trend continues, foundation models could become just as opaque as social media platforms and other previous technologies, replicating their failure modes,” the researchers said.
The report defined transparency based on 100 indicators for information about how the models are built, how they work, and how people use them. The researchers assessed these companies on the basis of their most salient and capable foundation model and systematically gathered information made publicly available by the developer as of September 15.
For each developer, two researchers scored the 100 indicators, assessing whether the developer satisfied the indicator on the basis of public information.
The initial scores were shared with leaders at each company, encouraging them to contest scores they disagreed with.
Although the mean score was just 37 per cent, 82 of the indicators were satisfied by at least one developer. This means that developers can significantly improve transparency by adopting best practices from their competitors, the researchers said.
"This provides a snapshot of transparency across the AI ecosystem. All developers have significant room for improvement that we will aim to track in future versions of the Index," the researchers noted.
--IANS
rvt/prw
