YouTube makes its Likeness Detection tools available to journalists and politicians


YouTube has announced the expansion of its Likeness Detection tool to key figures in public life. Originally offered to creators to help them guard against impersonators, Likeness Detection provides some protection against the threat of AI copycats.

As part of a pilot program, YouTube is now making Likeness Detection available to a select group of politicians, political candidates and journalists.

The idea is a simple one. Anyone enrolled in the program will be alerted when content that appears to feature their likeness is detected on the platform. The individual will then be able to indicate whether it is genuinely them or a fake, so that action can be taken accordingly.

Announcing the pilot program in a blog post, Amjad Hanif, Vice President of Creator Products at YouTube, and Leslie Miller, VP of Government Affairs and Public Policy at YouTube, say:

YouTube is where the world comes to understand the events shaping their lives—from breaking news to the debates that drive civic discourse. As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities.

Last year, we launched likeness detection to creators in the YouTube Partner Program, an industry-first tool to manage AI-generated content. Today, we’re expanding to a pilot group of government officials, journalists, and political candidates.

How likeness detection works

This tool works similarly to Content ID, but for likeness. It looks for a participant’s likeness in AI-generated content, and if a match is found—like a deepfake of their face—the individual can review the content and request removal if it violates our privacy guidelines.

While this tool provides a powerful way to manage unauthorized AI-impersonation, detection does not guarantee removal. YouTube has a long history of protecting free expression and content in the public interest—including preserving content like parody and satire, even when used to critique world leaders or influential figures. We’ll continue to carefully evaluate these exceptions when we receive requests for removal.

We’re starting with this cohort to ensure the tool meets their unique needs, with plans to significantly expand access over the coming months.

Protecting participation

To guard against abuse and ensure the tool is only used by those it’s meant to protect, we require participants to verify their identity before enrolling them in likeness detection. The data provided during setup is strictly used for identity verification purposes and to power this safety feature, and is not used to train Google’s generative AI models.

The company has also shared a Short providing more information and context.

The post concludes by saying:

Moving forward

Technology alone is not the finish line. We’ll keep advocating for strong legal frameworks like the NO FAKES Act, which establishes a federal right of publicity and acts as a blueprint for international adoption to ensure technology serves—and never replaces—human creativity.