Google’s new Gemini Nano Banana AI image editor has pushed conversations about privacy and security back into the spotlight.
The tool lets users easily generate or edit images, placing themselves alongside celebrities or altering facial features, and has rekindled debates around biometric data, user consent, and surveillance capitalism.
While Google maintains that its models are not trained on personal photos, experts point out that the underlying technology may still be used for behavioural tracking, facial recognition, and metadata analysis.
Consent Gaps and Normalised Surveillance at Scale
Eamonn Maguire, director of engineering, AI & ML at Proton, in an interaction with AIM, described Nano Banana as “a troubling expansion of surveillance capitalism into creative expression, raising urgent questions about consent and control of personal data.”
He noted that the tool’s operation depends on analysing biometric data through facial recognition, tracking editing habits, and gathering metadata such as location and device details.
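The metadata point is easy to verify first-hand. The sketch below, written in Python using the open-source Pillow library, reads the EXIF tags that a typical smartphone photo carries before it is ever uploaded; the file name photo.jpg is a placeholder, not anything tied to Google’s service.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

# Placeholder path: any JPEG straight off a smartphone camera.
img = Image.open("photo.jpg")
exif = img.getexif()

# Standard EXIF tags such as Make, Model and Software identify
# the device and app that produced the picture.
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), ":", value)

# GPS data lives in its own IFD (tag 0x8825) and, when present,
# pins the photo to a physical location.
for tag_id, value in exif.get_ifd(0x8825).items():
    print(GPSTAGS.get(tag_id, tag_id), ":", value)
```

Running this on a photo taken with location services enabled will typically print the device model and latitude/longitude, exactly the kind of “location and device details” Maguire describes.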
Maguire highlighted a phenomenon he calls “consent gaps”, where users are “strong-armed into agreeing to privacy policies without understanding what they’re agreeing to.”
With opaque disclosures and no way to “un-train” data once it enters models, deletion remains limited, leaving users with diminished agency.
The implications go beyond the individual. “The feature accelerates the normalisation of big tech surveillance,” Maguire warned.
Even attempts at reassurance, such as watermarking, are fragile. “Now, Google tries to give lip service to people’s concerns through things like watermarking,” he said. “But watermarking offers little protection, as watermarks can be stripped and there is no standard for cross-platform verification.”
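The stripping claim is trivially true for provenance labels stored as metadata (pixel-level watermarks such as Google’s SynthID are harder to remove, though researchers have demonstrated attacks on those too, and there is no cross-platform verification standard). As a minimal illustration, this Python sketch re-saves an image with Pillow, which by default writes only the pixel data and silently drops embedded EXIF tags; generated.jpg is a placeholder name.

```python
from PIL import Image

# Placeholder: an AI-generated image whose provenance is recorded
# in metadata (e.g. EXIF tags noting it was machine-generated).
img = Image.open("generated.jpg")
print("tags before:", len(img.getexif()))

# Pillow writes EXIF only if it is passed explicitly to save(),
# so a plain re-save yields a visually identical file with the
# provenance metadata gone.
img.save("stripped.jpg", quality=95)
print("tags after:", len(Image.open("stripped.jpg").getexif()))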
Current laws, he argued, were never designed with AI training in mind, leaving gaps that risk legitimising mass data collection. Tools like Nano Banana, he warned, could normalise trading privacy for convenience and shape future regulation in the process.
A Predictable Evolution, With Familiar Risks
Joel Latto, threat advisor at F-Secure, a global cybersecurity and privacy company, took a more measured stance. “I have not seen indicators that this would pose any new risks,” he said, framing Nano Banana as a natural progression in the competitive race to deliver the next viral AI feature.
Latto observed that most users “hop on to different services and in turn feed all of them with their personal data,” but stressed that basic hygiene practices, such as temporary modes, disabling training, and avoiding sensitive inputs, remain the most practical defences.
He added that while users are entrusting personal data, potentially including biometrics, to a major advertising company, firms such as Google generally maintain more robust privacy practices than early viral applications like FaceApp did.
Deepfake technology forms another layer of concern. “Deepfake generation and detection is an ongoing arms race which F-Secure is invested in as well,” Latto noted. The real shift, he argued, is not necessarily in quality but in access. “Whenever an LLM model/feature goes viral, it lowers the barrier of entry,” meaning more users can produce photorealistic fakes with minimal technical skill.
Latto acknowledged that new releases often lack meaningful restrictions at first. “Just like with the Ghibli case, when these things come out there’s surprisingly little guardrails in place. After the viral wave hits, new restrictions are put in place.”
This reactive cycle underscores the fragile nature of safeguards in the generative AI space.
The Old Privacy Rules for New Users
On the one hand, powerful tools like Nano Banana make creative AI widely accessible, pushing the boundaries of image generation. On the other, they embed users more deeply into ecosystems where consent is opaque and privacy is easily compromised.
As Maguire argued, the moment is pivotal. But as Latto suggested, the risks may not be entirely new, only more widely distributed.
The fundamentals of privacy and security remain the same, even in the post-GenAI world. What has changed is how many new users have yet to learn them.