Google’s Nano Banana Pro AI Model Further Erodes Trust in Photos
by Matt Growcoot · PetaPixel

Google has released a more advanced version of its Nano Banana AI image model, and it is becoming increasingly difficult to spot the difference between real and AI-generated photos.
“You will be fooled by an AI photo, and you probably already have been but didn’t know it,” content creator Jeremy Carrasco tells NBC News.
“It is a step in realism specifically. A lot of the things that you used to look for, such as a blurry camera image or something that just looks a little too glossy or a little too smooth, a lot of that has been straightened out.”
Google Nano Banana Pro is a paid AI model built on Gemini 3 Pro. A Google AI Pro subscription costs $19.99 per month, but free users can get up to two generations per day.
The ease and power of Nano Banana Pro have been described as an “escalation,” since there are few safeguards in place when it comes to recreating someone’s likeness. Each generation does contain SynthID — Google’s digital watermarking technology that embeds imperceptible signals — as well as a visible watermark.
“The idea that anyone can become a ‘Photoshop pro’ overnight and use these celebrities or politicians’ likeness is obviously frightening,” adds Carrasco.
Nano Banana Pro Examples
PetaPixel tried the free version of Nano Banana Pro, inspired by an AI image generator test from March 2024 in which some of the most iconic photos of all time were recreated.
Users on X and Reddit have also been sharing their creations. The post below from X user Sid is interesting because it compares the base Nano Banana model with the Pro model. The Pro version achieves a much higher level of verisimilitude, but X users were quick to point out that the bartender’s fingers were in the wrong place.
Meanwhile, a post on Reddit highlights Nano Banana Pro’s ability to create ultra-realistic images containing titans of the tech world.
“Had to do a double take. This is Gemini 3.0 Pro / Nano Banana Pro.” — u/Spirited-Gold9629 in r/GeminiAI
With this level of AI technology, it is clear that AI images need to be labeled or, at the very least, easily identifiable. Gemini 3 has introduced a feature that lets users ask it whether an image is AI-generated, but will people take the time to do that as they scroll through social media?