Google has published its annual Ads Safety Report for 2023. The report highlights the company's increasing use of AI, in the form of large language models (LLMs), to enforce policies at scale.
Google defines its goal as catching bad ads and suspending fraudulent accounts before they ever reach the platform, or removing them immediately once detected.
In 2023 Google blocked or removed:
- Over 5 billion ads (slightly up from 2022)
- 12.7 million advertiser accounts (nearly double the 2022 figure)
- 206.5 million advertisements for violating Google's misrepresentation policy, which covers many spam tactics
- Over 1 billion advertisements for violating Google's policy against abusing the ad network, including promoting malware
Google also blocked or restricted ads from serving on over 2.1 billion publisher pages (slightly up from 2022) and took broader site-level enforcement action against more than 395,000 publisher sites, which it described as a marked increase over 2022.
Google is increasingly using AI in this work. Over 90% of publisher page-level enforcement in 2023 began with machine learning models, including Google's latest LLMs.
These models can rapidly review and interpret content at high volume while still capturing important nuances. Google says they have already enabled larger-scale and more precise enforcement decisions on some of its more complex policies, such as the policy against get-rich-quick schemes.
Google’s latest AI model, Gemini (launched as Bard last year), is now being used in ads safety and enforcement efforts.
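The report does not describe how these models are wired into enforcement, but the general pattern it points to (a language model scoring content against policy categories, with low-confidence cases escalated to humans) can be sketched. Below is a minimal, hypothetical illustration using Hugging Face's zero-shot classification pipeline; the model choice, policy labels, and confidence threshold are all assumptions for illustration, not Google's actual system.

```python
# Illustrative only: a toy version of the general pattern described in the
# report (an ML model screening ad content against policy categories).
# This is NOT Google's system; model, labels, and threshold are assumptions.
from transformers import pipeline

# Zero-shot classification scores text against arbitrary labels without
# training a dedicated classifier for each policy.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

POLICY_LABELS = [
    "get-rich-quick scheme",    # hypothetical policy category
    "malware promotion",        # hypothetical policy category
    "acceptable advertisement",
]

def screen_ad(ad_text: str, threshold: float = 0.7) -> str:
    """Return the top policy label if its score clears the threshold;
    otherwise flag the ad for human review."""
    result = classifier(ad_text, candidate_labels=POLICY_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_score >= threshold:
        return top_label
    return "needs human review"

if __name__ == "__main__":
    print(screen_ad("Double your money in 7 days, guaranteed, no risk!"))
```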
Further efforts to combat bad actors in 2023 included the launch of the Ads Transparency Center, where people can learn more about the ads they see on Search, YouTube, and Display. Google also updated its suitability controls, making it easier for advertisers to exclude topics they wish to avoid across YouTube and Display inventory and helping ensure brand safety.
Read the full 2023 Ads Safety Report here.