OECD Finds Growing Transparency Efforts Among Leading AI Developers
A new OECD report, How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct, analyses voluntary transparency reports submitted under the G7 Hiroshima AI Process by technology and telecommunications companies as well as advisory, research, and educational institutions, including Anthropic, Google, Microsoft, NTT, OpenAI, Salesforce, and Fujitsu.
The analysis shows that many organisations are developing increasingly sophisticated methods to evaluate and mitigate risks, including adversarial testing and AI-assisted tools to better understand model behaviour and improve reliability. Larger technology firms tend to have more advanced practices, particularly in assessing systemic and society-wide risks.
The report also finds that key AI actors increasingly recognise the importance of sharing information about risk management to build trust, enable peer-learning and create more predictable environments for innovation and investment. However, technical provenance tools such as watermarking, cryptographic signatures, and content credentials remain limited beyond some large firms.
“Greater transparency is key to building trust in artificial intelligence and accelerating its adoption. By providing common reference points, voluntary reporting can help disseminate best practices, reduce regulatory fragmentation, and promote the uptake of AI across the economy, including by smaller firms,” said Jerry Sheehan, Director for Science, Technology and Innovation at the OECD.
“As we define common transparency expectations, the Hiroshima AI Process Reporting Framework can play a valuable role by streamlining the reporting process. Going forward, it could also help align organisations on emerging reporting expectations as AI technology and governance practices continue to advance,” said Amanda Craig, Senior Director for Responsible AI Public Policy at Microsoft.
Developed under the Italian G7 Presidency in 2024 with input from business, academia and civil society, the OECD's voluntary reporting framework provides a foundation for co-ordinated approaches to safe, secure and trustworthy AI. It supports the implementation of the Hiroshima AI Process initiated under Japan's 2023 G7 Presidency.
The report, How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct, is available here.