Woman Shares 'Creepy' Experience With Google Gemini Nano Banana AI Saree Trend: 'Make Sure You Stay Safe'
The woman shared that she tried generating her image and found something "creepy." "A trend is going viral on Instagram where you upload your image to Gemini with a single prompt, and Gemini converts it into a saree. Last night, I tried this trend myself and found something very unsettling," she wrote.
She shared the image she uploaded to Gemini – herself in a green full-sleeve suit – and wrote a prompt with it. The results Gemini produced, however, shocked her.
She said: "I found this image very attractive and even posted it on my Instagram. But then I noticed something strange - there is a mole on my left hand in the generated image, which I actually have in real life. The original image I uploaded did not have a mole."
She further questioned: "How did Gemini know that I have a mole on this part of my body? You can see the mole - it's very scary and creepy. I'm still not sure how this happened, but I wanted to share this with all of you. Please be careful. Whatever you upload on social media or AI platforms, make sure you stay safe."
Her post drew a flood of responses: several users flagged safety concerns around the trend, while others considered it "normal" and suggested she had made the videos to gain views.
How Safe Is the Gemini Nano Banana Tool?
While tech giants like Google and OpenAI offer tools to protect user-uploaded content, experts say safety ultimately depends on personal practices and the intent of those accessing images. Google's Nano Banana images, for instance, carry an invisible digital watermark called SynthID, along with metadata tags, designed "to clearly identify them as AI-generated," according to Google. The watermark, though invisible to the naked eye, can be detected with specialized tools to verify an image's AI origin, reports Spielcreative.
Can the Watermark Really Prevent Misuse?
However, the detection tool is not yet publicly available, meaning most viewers cannot confirm an image's authenticity, Tatler Asia notes. Critics also caution that watermarking can be easily faked or removed. "Nobody thinks watermarking alone will be sufficient," said Hany Farid, a professor at UC Berkeley, while Ben Colman, CEO of Reality Defender, added that its real-world applications often "fail from the onset." Experts suggest combining watermarking with other technologies to better combat convincing deepfakes.