OpenAI Faces Backlash Over Deepfake Risks From AI Video Tool
AI video tool Sora 2, developed by OpenAI, has drawn sharp criticism from privacy advocates, Hollywood, and creators over the potential risks it poses in generating deepfake content. Advocacy group Public Citizen is among those calling for the tool's immediate withdrawal, citing concerns that the app could be misused to produce non-consensual imagery and undermine democracy. The tool, which uses AI to generate videos from text-based prompts, has sparked widespread debate about the ethics and safety of rapidly advancing AI technologies.
Sora 2 enables users to create hyper-realistic videos with a few lines of text, opening up possibilities for creative content generation. However, critics argue that its ability to create realistic deepfakes could lead to serious consequences. These include the potential for misinformation, identity theft, and privacy violations. In particular, opponents worry that the tool may be used to create deceptive videos featuring individuals without their consent, leading to reputational harm and widespread misuse.
Public Citizen, a prominent consumer rights group, issued a formal statement demanding that OpenAI pause Sora 2's deployment until stronger safeguards are in place. The organization is concerned that OpenAI, by rushing the tool to market, has failed to adequately assess its potential for abuse. “We must not allow the unchecked proliferation of technology that can be weaponised against the public,” said Public Citizen's senior counsel. The group has also called on regulators to take action against companies deploying AI products without clear guidelines or ethical considerations.
This development has not gone unnoticed in Hollywood, where deepfakes have already caused significant concern. Actors and filmmakers fear that AI-generated videos could be used to replicate their likenesses without their permission. Hollywood unions, including the Screen Actors Guild, have called for clearer regulations to protect actors from having their likenesses exploited in AI-generated content. Some have raised concerns that deepfakes could lead to the mass production of fake movies or videos that manipulate public perception.
Creators and content producers, many of whom rely on platforms like YouTube and TikTok, are also voicing concerns. As AI video generation tools like Sora 2 become more widely available, there is an increasing fear that these technologies could be used to impersonate influencers and other online personalities. This could lead to a new wave of impersonation, identity theft, and fraud that may be difficult to regulate or reverse. Critics argue that existing safeguards, such as copyright protections and content moderation policies, are insufficient to address these emerging risks.
As the debate intensifies, privacy advocates are also calling for more stringent regulations on AI-generated content. Experts in data privacy argue that tools like Sora 2 raise significant questions about ownership and consent in the digital age. The ability to produce realistic videos without the involvement of the person being depicted creates new challenges in protecting individuals' rights to control their image. Additionally, there are growing concerns about how the tool could be used in harmful political contexts, such as spreading disinformation during elections or manipulating public opinion.
While OpenAI has defended Sora 2 as a tool for creative expression and innovation, the company is facing mounting pressure to implement more robust safeguards. The company has stated that it is committed to developing ethical AI technologies but has yet to announce specific measures aimed at addressing the concerns raised by critics. Some have suggested that OpenAI could implement stricter user authentication processes, limit the types of content that can be generated, or include more advanced deepfake detection systems within the app itself.