AI-generated and edited images will soon be labeled in Google Search results

Google is finally doing something about AI images, but it's not quite enough

· TechRadar

News By Max Delaney published 18 September 2024

(Image credit: Shutterstock / Sundry Photography)

Google has announced that it will begin rolling out a new feature to help users "better understand how a particular piece of content was created and modified". 

This comes after the company joined the Coalition for Content Provenance and Authenticity (C2PA) – a group of major brands trying to combat the spread of misleading information online – and helped develop the latest Content Credentials standard. Amazon, Adobe and Microsoft are also committee members. 

Google says the feature, set to roll out over the coming months, will use the current Content Credentials guidelines – i.e. an image's metadata – within its Search parameters to add a label to images that are AI-generated or edited, providing more transparency for users. This metadata includes information such as the image's origin, as well as when, where and how it was created.
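Content Credentials is a signed-manifest format rather than plain EXIF, so the following is only a rough sketch of the general idea of machine-readable provenance metadata: a generator writes tool and timestamp information into an image, and a checker reads it back. It uses ordinary EXIF tags via Pillow, and the tool name is made up for illustration.

```python
import io
from PIL import Image

# Standard EXIF tags a generator could populate (illustrative only).
TAG_SOFTWARE = 0x0131  # "Software" — tool that created/edited the image
TAG_DATETIME = 0x0132  # "DateTime" — when the image was last modified

# Build a tiny JPEG carrying metadata, as a hypothetical generator might.
img = Image.new("RGB", (8, 8), "white")
exif = img.getexif()
exif[TAG_SOFTWARE] = "ExampleAI Image Generator"  # made-up tool name
exif[TAG_DATETIME] = "2024:09:18 12:00:00"

buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif.tobytes())

# A checker in the spirit of "About this image" reads those tags back.
reread = Image.open(io.BytesIO(buf.getvalue())).getexif()
print(reread.get(TAG_SOFTWARE))  # ExampleAI Image Generator
```

Note the key weakness this sketch shares with plain EXIF: the fields are unsigned and trivially editable, which is why C2PA wraps provenance data in cryptographically signed manifests instead.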

However, the C2PA standard, which gives users the ability to trace the origin of different media types, has not been adopted by many AI developers, including Black Forest Labs, the company behind the Flux model that Grok on X (formerly Twitter) uses for image generation.

This AI flagging will be implemented through Google's existing About This Image window, which means it will also be available to users through Google Lens and Android's 'Circle to Search' feature. When live, users will be able to click the three dots above an image and select "About this image" to check whether it was AI-generated – so the label won't be as prominent as we'd hoped.

Is this enough?

While Google needed to do something about AI images in its Search results, the question is whether a hidden label is enough. If the feature works as stated, users will need to take extra steps to check whether an image was created using AI. Those who don't already know the About This Image feature exists may not even realize a new tool is available to them.

Video deepfakes have already caused real harm – earlier this year, a finance worker was scammed into paying $25 million to a group posing as his CFO – and AI-generated images are nearly as problematic. Donald Trump recently posted digitally rendered images of Taylor Swift and her fans falsely endorsing his campaign for President, and Swift found herself the victim of image-based sexual abuse when AI-generated nudes of her went viral.
