YouTube is making it easier for politicians and journalists to take down AI deepfakes from its platform ahead of this year’s midterm elections. But it’s keeping quiet on who now has access to this tool.
The video streaming giant announced today that it is expanding access to its likeness detection tool to journalists, government officials, and political candidates. The tool flags videos that feature a user’s likeness in AI-generated content and allows them to request that unauthorized videos be taken down.
“YouTube is where the world comes to understand the events shaping their lives—from breaking news to the debates that drive civic discourse,” wrote Amjad Hanif, YouTube vice president of creator products, and Leslie Miller, vice president of government affairs and public policy, in a blog post. “As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities.”
The expansion comes as AI deepfakes have grown increasingly realistic, raising concerns about their potential to spread misinformation, especially around elections. The news also comes as YouTube has been leaning further into AI.
Last year, the company brought a custom version of Google’s video-generation model, Veo 3, to Shorts—YouTube’s TikTok- and Instagram Reels-like feed of quick, vertical videos. That tool, along with other AI editing features on the platform, has made it easier than ever for users to create deepfakes. At the same time, YouTube has also tried to roll out tools to mitigate the risks.
The company’s likeness detection tool works similarly to Content ID, YouTube’s copyright-flagging system, but for people’s faces. YouTube first started testing the system in 2024 with celebrities and athletes, and expanded it last year to YouTube creators in the company’s Partner Program.
To enroll in the program, eligible users must verify their identity by submitting a video selfie and a government ID. The company said any data submitted will only be used for verification purposes and not to train Google’s AI.
Once verified, users can check for videos that use their likeness and request that they be taken down. YouTube, however, emphasizes that a detection and a removal request do not guarantee a video will actually come down.
“YouTube has a long history of protecting free expression and content in the public interest—including preserving content like parody and satire, even when used to critique world leaders or influential figures,” the company blog post said. “We’ll continue to carefully evaluate these exceptions when we receive requests for removal.”
A YouTube spokesperson told Gizmodo that the company is planning a “broad international rollout,” with access to the tool being expanded in the coming weeks and months.
YouTube declined to comment on which politicians and journalists are included in the initial pilot cohort, including whether U.S. President Donald Trump was invited. Trump and his administration are known for posting AI-generated content using the likenesses of his political and media adversaries.
