Preventing AI deepfake scams: Google search results will label AI-generated images

Wallstreetcn
2024.09.17 17:42

Google plans to mark images generated and edited by AI in its search, Google Lens, and Android's Circle to Search feature to enhance transparency for users in search results. Only images containing C2PA metadata will be labeled. C2PA is a coalition that sets technical standards for tracking the provenance of images, although those standards have not been widely adopted. Analysts believe the measure is necessary given the rapid spread of AI deepfakes.

Google announced on Tuesday that it plans to adjust Google Search to make it clearer to users which images are generated by AI or edited with AI tools.

The company stated that in the coming months, Google will label AI-generated and AI-edited images in the "About this image" window in Search, Google Lens, and Android's Circle to Search feature. Similar labels may appear on other Google platforms such as YouTube, and Google said it will share more details later this year.

It is worth noting that only images carrying "C2PA metadata" will be labeled as AI-generated or AI-edited in search. C2PA (the Coalition for Content Provenance and Authenticity) is a group of organizations dedicated to setting technical standards for tracking an image's history, including the devices and software used to capture and/or create it.
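To make the mechanism concrete, below is a minimal sketch, not the official verification flow, of how one might check whether an image file carries any C2PA provenance data at all. It relies on the assumption that C2PA manifests are embedded in JUMBF boxes labeled "c2pa"; a real verifier would parse and cryptographically validate the manifest with a C2PA SDK rather than scan raw bytes.

```python
# Rough heuristic sketch (an assumption, not the official C2PA check):
# C2PA manifests are embedded in JUMBF boxes whose label contains the
# string "c2pa". Scanning the raw file bytes for that label gives a crude
# "does this file carry any C2PA metadata?" answer. Without such data,
# a search engine would have nothing to surface in a provenance label.

import sys
from pathlib import Path


def has_c2pa_marker(image_path: str) -> bool:
    """Return True if the file appears to contain C2PA provenance data."""
    data = Path(image_path).read_bytes()
    return b"c2pa" in data


if __name__ == "__main__":
    for path in sys.argv[1:]:
        status = "contains" if has_c2pa_marker(path) else "does not contain"
        print(f"{path}: {status} a C2PA marker")
```

A positive hit only means provenance data is present, not that it is valid or trustworthy; validating the signing certificate chain is the job of full C2PA tooling.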

Currently, companies including Google, Amazon, Microsoft, OpenAI, and Adobe support C2PA. However, the coalition's standards have not been widely adopted. Earlier reports indicated that C2PA faces adoption and interoperability challenges, with only a handful of generative AI tools and cameras from Leica and Sony supporting its specifications.

Furthermore, like any metadata, C2PA metadata can be stripped, erased, or corrupted to the point of being unreadable. Some popular generative AI tools, such as Flux, the image generator used by xAI's Grok chatbot, do not attach C2PA metadata at all, partly because their developers have not agreed to support the standard.
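As a hedged illustration of how fragile the signal is, the sketch below assumes a simple re-encode with Pillow (one possible pipeline among many); any process that rewrites the file without copying the application segments holding the manifest would behave similarly, leaving the heuristic check above with nothing to find.

```python
# Sketch under the assumption that a plain re-encode does not copy the
# application segments (e.g. JPEG APP11/JUMBF boxes) that hold a C2PA
# manifest, so the provenance data is silently dropped.

from PIL import Image


def strip_by_reencode(src: str, dst: str) -> None:
    """Re-save an image without carrying over provenance segments."""
    with Image.open(src) as im:
        # Saving a fresh JPEG without explicitly passing metadata writes
        # only standard segments; unknown application segments are lost.
        im.convert("RGB").save(dst, format="JPEG", quality=95)
```

This is why analysts treat metadata-based labeling as a partial safeguard rather than a guarantee.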

Nevertheless, analysts believe that some measures are better than none, especially as AI deepfakes spread rapidly. Surveys show that most people are concerned about being deceived by deepfakes and about AI being used to spread propaganda. One estimate suggests that scams involving AI-generated content increased by 245% between 2023 and 2024, and Deloitte predicts that losses related to deepfakes will surge from $12.3 billion in 2023 to $40 billion in 2027.