Google has rolled out new online safety features that make it easier to remove explicit deepfake images from its search index at scale and to keep them from surfacing at the top of search results. When Google removes fake content at a user's request, it will also delete any detected duplicates and filter the results shown for similar queries.
"These protections have already proven effective in combating other types of imagery shared without the subject's consent, and we have now built the same capabilities for fake explicit images," said Google product manager Emma Higham. "These efforts are intended to give people additional peace of mind, especially if they are concerned about similar content appearing in the future."
Google will also adjust search rankings to counteract queries that seek out obvious fake content. For example, when a query deliberately asks for fake explicit images of a real person, the search engine will instead return high-quality, legitimate content, such as relevant news articles. Sites hosting a significant volume of fake explicit images will be demoted in Google search rankings.
Google says updates rolled out earlier this year have already reduced the appearance of explicit images in searches for deepfake content by more than 70 percent. The challenge for the company is teaching the search engine to distinguish genuine explicit content, such as consensual nudity, from fake content, so that legitimate images can still be displayed.
Google has previously taken steps to address dangerous or explicit content on the internet. In 2022, the company expanded the list of personal or confidential information that users can request to have removed from search. In August 2023, Google began blurring sexually explicit images by default. In May of this year, the company banned advertisers from promoting services that create sexually explicit content.