OpenAI offers image monitoring tool to address concerns about AI-generated content


(MENAFN) OpenAI, the organization behind ChatGPT and DALL-E, its widely used generative artificial intelligence programs, has unveiled a new tool designed to detect digital images produced by AI systems. The development comes amid growing apprehension over the proliferation of synthetic content generated by AI tools, which has raised significant concerns about the authenticity and trustworthiness of online media.

Generative AI technology has made it possible to create diverse forms of content with minimal input, including fabricated photos and manipulated recordings, which are often exploited for fraud or the spread of misinformation. Recognizing the urgent need to address these challenges, OpenAI has announced the launch of a program engineered specifically to detect images generated with its DALL-E 3 tool.

According to statements released by the California-based company, internal testing of an earlier version of the monitoring tool showed that it correctly identified up to 98 percent of images generated by DALL-E 3. The tool also exhibited a low rate of false positives, with less than 0.5 percent of non-AI-generated images mistakenly attributed to DALL-E 3. However, OpenAI acknowledged that the program's effectiveness may diminish when applied to images generated by other models or subsequently altered.

Furthermore, OpenAI said it is committed to enhancing transparency and accountability in AI-generated content by adopting industry-standard practices. In alignment with the Coalition for Content Provenance and Authenticity (C2PA) standard, OpenAI will embed tags denoting images created by AI, making it easier to identify their origins and ensuring adherence to established technical protocols.

The collaboration with C2PA represents a pioneering effort within the technology sector to establish comprehensive standards for verifying the source and authenticity of digital content, addressing the pressing need for robust mechanisms to combat misinformation and preserve media integrity. Notably, Meta, the parent company of Facebook and Instagram, recently announced plans to label AI-generated content based on C2PA criteria, underscoring broader industry recognition that transparency and accountability are essential to combating the spread of synthetic media.



Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.