Tips on how to spot AI-generated videos
(MENAFN) AI-generated videos have become so convincing in the past six months that it’s increasingly difficult to tell reality from synthetic content. Experts warn that many of us have likely already been fooled, and in the near future, almost every video you see could be suspect. The best-case scenario? Repeated exposure will eventually train viewers to question everything they watch.
For now, one major red flag stands out: poor video quality. Grainy, blurry, or pixelated footage should raise suspicion that AI may be at work.
Hany Farid, a computer-science professor at UC Berkeley and founder of the deepfake detection company GetReal Security, explains, “It's one of the first things we look at.”
It’s important to note that this isn’t definitive evidence. Top-tier AI tools can produce crisp, polished videos, and low-quality clips aren’t automatically fake either. As Matthew Stamm, head of the Multimedia and Information Security Lab at Drexel University, clarifies: “If you see something that's really low quality that doesn't mean it's fake. It doesn't mean anything nefarious.”
Instead, blurry or low-quality videos are more likely to trick viewers, at least for now. They mask subtle inconsistencies that AI sometimes produces, like unnaturally smooth skin, strange hair or clothing patterns, or background objects that behave unrealistically. Farid notes, “The leading text-to-video generators… still produce small inconsistencies. But it's not six fingers or garbled text. It's more subtle than that.”
Ironically, the lower the visual quality, the more deceptive AI videos can be. Requests for videos to mimic old phone footage or security cameras help hide these small errors, making them more convincing to casual viewers.
Recent viral examples show just how effective this can be. A charming video of bunnies jumping on a trampoline amassed over 240 million views on TikTok, while clips of romantic encounters on the New York subway fooled millions before being revealed as fakes. Even seasoned observers can be deceived; one viral video showed an American priest delivering a surprisingly leftist sermon, only for it to be exposed as AI-generated.
For now, blurry, pixelated footage is a useful tip-off that a video may be AI-generated, but as tools improve, this cue will eventually lose its effectiveness. Until then, a healthy dose of skepticism and attention to small visual inconsistencies is your best defense.