How AI Resurrects Racist Stereotypes And Disinformation - And Why Fact-Checking Isn't Enough
Thanks to algorithmic systems, narratives that tap into deep-seated fears and anxieties travel farther and faster than ever before. They circle the globe before fact-checkers can even flag a problematic post.
In the second half of the year, another technological disruption emerged with OpenAI's Sora, a lifelike video-generation software. Nothing, seemingly, was immune, including politics.
Sora hit the political landscape with particular vigour during the longest federal government shutdown in United States history. The 43-day impasse generated significant pressure and public controversy, particularly around uncertainty and delays that could affect the Supplemental Nutrition Assistance Program (SNAP).
Digital blackface and the policing of Black poverty

At the height of the anxiety over the shutdown's effects on SNAP, a program that serves roughly 42 million Americans, a slew of short videos of Black women accosting social service employees or unleashing their frustration on livestream audiences caught the attention of the online sphere.
The SNAP suspension was ultimately blocked by the courts. It was also quickly revealed that the circulating clips were AI-generated.
What is most striking about these videos is how deliberately the caricature of the “Black welfare queen” was staged. In one video, the speaker declares, “I need SNAP to buy an iPhone.” In another, “I only eat steak, I need my funds.” And in a clip with children in the background, the woman insists, “I need to do my nails.”
Each expression of illicit use of funds is a shorthand for the alleged irresponsibility and moral failing that has long been intertwined with the racist trope of the “Black welfare queen.” One X user aptly dubbed these videos nothing short of “digital blackface.”
In the words of Black feminist writers Moya Bailey and Trudy, these videos traffic in “misogynoir” - a term developed to capture the “ways anti-Blackness and misogyny combine to malign Black women.” Bailey and Trudy note that representations of Black women as undeserving, burdensome to the public purse and inherently fraudulent are entrenched rather than exceptional.
Even clips “clearly labeled with a Sora watermark nabbed nearly 500,000 views on TikTok alone,” journalist Joe Wilkins observed. Wilkins goes on to explain that even when viewers were told the clips were AI-generated, some insisted, “But that is what is happening.” Some argued that even if the videos were technically “fake,” they still “highlight genuine SNAP...issues.”
These comments expose the limits of fact-checking as an antidote to disinformation, especially when dealing with charged tropes. Once a harmful framing is revived and thrust into the collective ether, Ctrl+Alt+Delete becomes ineffective.
What requires attention, then, is not only how we grapple with the new terrain of AI-driven disinformation, but also why certain representations hold such mass resonance.
Why do particular images and narratives travel so well?
From settled fraud case to viral spectacle

Another case of digital blackface that captured public attention centred on the Minnesota Somali “Black fraud alert” saga. While still rooted in the same anti-Blackness that animated the “Black welfare queen” caricatures, this incident included Islamophobia and rising anti-immigrant sentiments.
The case traced back to a 2022 COVID-era fraud scheme, which had already led to arrests and convictions. The scheme was led by Aimee Marie Bock, a white woman, and involved a network of Minnesotans, many of whom happened to be of Somali descent.
In December of 2025, U.S. President Donald Trump resurrected the settled case, weaponizing it and tethering it to his longstanding disdain for “third-world countries” and people from “shithole countries.” This rhetoric also folded into his hostility toward political opponents Minnesota Governor Tim Walz and Congresswoman Ilhan Omar.
What followed was not a serious discussion of fraud or of policy safeguards. Instead, the episode reinvigorated debates about white nationalism, racialized citizenship and racial eugenics.
Trump's call to deport Somalis through ICE, declaring “I don't want them in our country,” made this logic explicit. That most Minnesota Somalis hold U.S. citizenship - roughly 84 per cent - did little to disrupt the racist story being circulated.
Soon after the president's comments, AI amplified the content. An AI-generated video circulated widely, animating the “Somali pirate” trope. It depicted Black men, presumed to be Somali, as migrants plotting to steal from taxpayers. In it we hear: “We don't need to be pirates anymore. I found a better way. Government-funded daycare. We must go to Minnesota.”
This reference to child care echoed back to a viral video produced by a right-wing commentator claiming to expose another chapter in the “Somali fraud scandal,” this time targeting Somali-run child-care centres. The video prompted a statewide investigation, which ultimately found that all but one of the named centres were operating normally, with no clear evidence of fraud.
The “Black welfare queen” trope and the “Somali pirate” frame may seem to name different crises and different subjects, yet both draw from the same anti-Black racial grammar. In each case, Blackness is rendered fraudulent, criminal and morally deficient, cast as both a personal failing and national burden.
Why these ideas travel even when they're false

These instances of digital blackface succeeded because misogynoir and anti-Blackness remain readily available discursive resources. AI merely accelerates their movement. The refusal of audiences to course-correct when fact-checked underscores how intuitive and pre-assembled racist and xenophobic scripts already are.
In both the SNAP-themed misogynoiric videos and the AI-generated “Somali pirate” content, nuance and factual accuracy were beside the point. What is at work instead is a broader political project tied to racial capitalism's eugenicist logics.
As Black radical scholar Cedric Robinson argues, racism is not incidental to capitalism but foundational to the inequalities it requires. Poverty is misread as evidence of personal and community failings rather than the result of massive structural inequity. And when attached to the racialized poor, especially when Black, Muslim and immigrant, this logic crystallizes into “common sense.”
What is at stake with AI-enabled digital blackface is not only the amplification of racism, but the architecture of political life. In this climate, sober analysis and nuance recede, displaced by the numbing anxiety that structures contemporary public discourse.
