This Viral AI Trend Exposed Private Body Marks — Here’s What Happened!


Recently, a privacy scare has put social media users and experts on high alert. It started when a young woman named Jhalakbhawani posted a troubling video on Instagram that quickly went viral, gathering over 4 million views. She showed how Google's Gemini AI, using the popular "Nano Banana" feature, created an image of her wearing a saree that included a mole on her left arm—the same mole she has in real life. The surprising part: in the original photo she shared, her arm was fully covered, hiding the mole completely.


This unsettling incident has raised many questions about privacy, how AI learns from our data, and the risks of sharing personal images online. Let's dive deeper into what happened, why it’s scary, and how it relates to other privacy incidents caused by AI tools in 2025.


Suspicious AI Knowledge: How Did It Know?

Imagine uploading a photo where you're fully dressed, covering any marks or scars. Then, an AI-generated image shows things about you that no one else should know. This sounds like a scene from a sci-fi thriller, but it really happened to Jhalakbhawani.

When she asked, "How did Gemini know that I have a mole on this part of my body? It's very scary and creepy," she voiced a concern many users share about AI's invisible eyes. The incident sparked widespread fear and confusion about how much AI systems can infer about us from a single photo.


The Risks Behind the Fun: The Nano Banana AI Craze

Google’s Gemini-powered “Nano Banana” tool is an AI-driven feature that allows users to create unique, stylized 3D figurines or vintage-style portraits from selfies or photos. It went viral across Instagram and other platforms in India, with many excited users sharing their AI-generated images online.

But the viral nature of this trend came with a big warning from Indian Police Service officer VC Sajjanar. He issued a public advisory urging people to be cautious about the craze. According to Sajjanar, sharing personal images and data online, especially on unofficial or fake platforms mimicking the AI tool, can expose users to scams and cyber theft.

"If you share personal information online, scams are bound to happen. With just one click, the money in your bank accounts can end up in the hands of criminals," he warned in a tweet widely circulated on social media.


What Police Are Saying: A Strong Warning

Beyond VC Sajjanar’s message, the Jalandhar Rural Police also issued advisories regarding Google’s terms and conditions for the Gemini platform. They reminded users that Google can use uploaded images for AI training, meaning your photos may be stored, analyzed, or even shared to improve AI algorithms.

This raised concerns about identity theft and cyber fraud risks, especially for those unaware of what happens after they upload their photo. Once a personal image is online, it can be exploited in many ways, from fake profiles to deepfake videos or fraudulent activities.


Similar AI Privacy Incidents in 2025


Jhalakbhawani's experience is far from the only AI-related privacy scare of 2025. Here are some notable ones:

  • AI Deepfake Database Leak: Earlier this year, a major breach exposed thousands of AI-generated deepfake images, including disturbing non-consensual content involving celebrities and minors, raising grave ethical and privacy concerns.

  • Microsoft Azure OpenAI Account Hacks: Hackers exploited stolen credentials to produce inappropriate AI content and bypass safety filters, showing how AI tools can be misused if security is lax.

  • Google Gemini On-Device Data Access: Users found that Google’s Gemini AI could access their WhatsApp, SMS, and call notifications to perform assistance tasks. Despite claims it wouldn’t read messages, many felt this was an invasion of privacy due to OS-level data reach.

These incidents collectively highlight how AI’s rapid growth brings with it risks if privacy protections lag behind.


Why Is AI So Creepy About Personal Details?

AI systems like Gemini learn by training on vast datasets, often scraped from the internet or uploaded by users themselves. This training helps AI "guess" and generate realistic images or text based on patterns it has seen before. But it can also mean AI models reproduce or infer small details from your photos that you may not want to share, such as hidden moles, scars, or backgrounds in images.

Moreover, companies often have terms allowing them to keep and use the data you upload for improving their AI. But how your data is stored, used, or protected is often unclear to most users, which causes mistrust.


How to Stay Safe While Using AI Tools

If you still want to join such exciting AI trends but want to protect your privacy, here are some useful tips:

  1. Use Official Apps & Websites Only: Avoid fake or unauthorized apps promising AI image generation. Only use trusted platforms.

  2. Avoid Sharing Sensitive Photos: Don’t upload images showing personal marks, documents, or sensitive backgrounds.

  3. Adjust Privacy Settings: Turn off location tags and strip metadata from photos before uploading.

  4. Be Careful With Personal Info: Avoid sharing full names, addresses, or other details that could be linked to your uploaded image.

  5. Keep Software Updated: Use updated apps and OS versions that have better security and privacy features.

  6. Report Scams: If you encounter suspicious links or scams related to AI tools, report them to authorities immediately.
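On tip 3 (stripping metadata): photos often carry hidden EXIF data such as GPS coordinates, device model, and timestamps. As an illustrative sketch only, using pure Python with no third-party libraries, here is one way to drop the EXIF (APP1) and IPTC (APP13) segments from a JPEG byte stream. In practice, a maintained tool such as ExifTool or a library like Pillow is a more robust choice:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with EXIF/IPTC segments removed.

    JPEG files are a sequence of marker segments (0xFF + marker byte +
    2-byte big-endian length). EXIF lives in APP1 (0xE1) and IPTC in
    APP13 (0xED); everything else is copied through unchanged.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI (Start of Image) marker
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # malformed stream; copy remainder as-is
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, copy the rest
            out += jpeg_bytes[i:]
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker not in (0xE1, 0xED):  # drop APP1 (EXIF) and APP13 (IPTC)
            out += segment
        i += 2 + length
    return bytes(out)
```

This only scrubs metadata segments; it does not touch the image pixels themselves, so anything visible in the photo (faces, documents, backgrounds) still needs to be cropped or blurred separately.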


Final Thoughts: Balancing Fun and Caution

AI-powered tools like Google’s Nano Banana are fun, innovative, and open a new world of digital creativity. However, as Jhalakbhawani’s viral post and police warnings show, users must tread carefully. Privacy in the AI era is a complicated puzzle, where technology can reveal more about us than we intended.

By staying informed, cautious, and using official sources, everyone can enjoy AI without falling prey to privacy breaches or scams. Remember, the digital world may be virtual, but the risks are very real.


If you’re considering using AI image tools or sharing personal data online, keep these facts in mind. Protecting your identity, money, and peace of mind is always worth more than a viral photo or trend.


(This article is based on recent news reports and expert opinions, aiming to raise awareness about AI privacy challenges in 2025.)
