
YouTube Teams Up with CAA to Fight AI Deepfakes Targeting Celebrities


    YouTube and Creative Artists Agency (CAA), the Century City-based talent agency, have announced a collaboration to help celebrities monitor AI-generated content that mimics their likenesses. YouTube plans to begin testing its “likeness management technology” early next year with some unnamed award-winning actors and professional athletes from leagues including the NBA and NFL.

    The tool will detect content featuring celebrities’ faces and voices, making it simpler for them to file privacy complaints with YouTube.

    AI Deepfakes of Celebrities

    YouTube and one of the world’s largest talent agencies have joined forces to help high-profile actors monitor their digital likenesses. YouTube will enable the agency’s clients, including award-winning actors and top NBA and NFL athletes, to access technology that identifies AI-generated content featuring their faces or voices on the platform at scale, and to request its removal through YouTube’s Privacy Complaint Process.

    The partnership is YouTube’s latest initiative to address unauthorized use of celebrities’ likenesses; earlier this year the company added guardrails against fake sound-alikes and allowed creators to flag third-party uses of their videos.

    YouTube and CAA will also collaborate to combat misuse of celebrities’ voices in fraudulent schemes. Several celebrities, including Gayle King, Tom Hanks, Elon Musk and Tom Brady, have had their voices used without their knowledge in unapproved clips promoting scams online, sometimes circulating for days before being flagged and removed.

    AI-Generated Voices

    AI-generated deepfakes have led major platforms and talent agencies to build tools that help celebrities manage their digital likenesses and protect their intellectual property rights. YouTube recently joined Creative Artists Agency (CAA) in testing a “likeness management technology,” enabling stars to identify content on YouTube that depicts them via an artificially generated likeness and have it removed.

    YouTube will enlist CAA clients, including award-winning actors and professional athletes, to begin testing the technology early next year. They will provide feedback to the company before the trial expands to top creators, creative professionals and other CAA clients.

    YouTube’s latest push to manage AI-generated content builds on its existing efforts, such as its systems for detecting fake faces and synthetic singing within Content ID. The company also now requires creators to disclose when a video contains AI-generated material, helping viewers make an informed decision about whether to watch it.

    AI-Generated Appearances

    YouTube and CAA have joined forces in an unprecedented partnership designed to help participants find and disable unauthorized AI replicas of themselves. CAA’s expertise and commitment to responsible innovation make it a strong first partner for YouTube’s new likeness-management system, scheduled to go live in early 2025. YouTube will test the tool with CAA clients, including award-winning actors and top NBA and NFL athletes.

    YouTube now gives stars the option of requesting removal of videos that use generative AI to recreate their appearances or voices, strengthening its standing as an ally to creative professionals in the age of generative AI.

    Recent warnings from Tom Hanks and Elon Musk highlighted how their likenesses were being exploited in investment scams while tech platforms failed to protect creators’ digital rights. Working together, YouTube and CAA can build their own detection systems and tighten controls; YouTube plans to test the tool with several celebrities and athletes over the course of next year.

    AI-Generated Music

    As AI advances, music industry players are growing increasingly wary of its effects. While industry power players doubt that AI can produce music better than humans do, they worry that AI could be used to exploit their work without approval, potentially concentrating control over future musical compositions in a few hands.

    YouTube is already working to address these concerns by developing tools that identify deepfake content and allow talent to request its removal. The company recently unveiled a system for detecting fake faces, along with synthetic-singing detection technology, and will give creators more control over whether their videos can be used to train AI models.

    But these initiatives are just the start. Berklee Online will continue experimenting with and exploring AI tools as they emerge, incorporating them into its courses only when they are proven safe and secure.
