'Whack-a-Mole': Agents and Lawyers Scramble to Fight AI Celeb Ad Scams
Deepfakes of the famous shilling junk are everywhere online, leaving what one WME partner calls a 'terrible game' to fight
Ashley Cullins writes Dealmakers and recently dove into the coming M&A landscape, the new rules of film finance and why cost-plus deals are dying. You can reach her at ashley@theankler.com
The roster of stars whose likenesses have been usurped in AI-generated ads reads like the invite list to the Oscars: Tom Hanks, Selena Gomez, Jennifer Aniston, Jennifer Lopez and Taylor Swift are just a few whose personas have been used to dupe consumers this year. As AI rapidly becomes more sophisticated and accessible, the problem is only getting worse. Check out this debunking of an ad appearing across Facebook and Instagram last month that features an AI deepfake of Aniston’s voice:
“The type of deepfake content that we’re seeing right now is pretty crude,” says WME partner and head of digital strategy Chris Jacquemin, who describes the situation as “a terrible game of whack-a-mole that no one likes playing.” But he warns: “Get ready for the stuff that ultimately makes it impossible to detect.”
As if on cue, just last week Elon Musk offered X’s premium users access to the new AI image generator he integrated into his Grok-2 chatbot. Almost instantly the platform became a den of AI-powered debauchery. Beyond celebrities and politicians depicted in violent, sexual or otherwise compromising situations, plenty of beloved character IP was brazenly infringed.
Unless you’re an IP lawyer, I don’t advise going down this rabbit hole; just imagine what an unsupervised teenage boy who plays too much Grand Theft Auto would ask AI to generate. Here’s one of the more innocuous examples:
With AI getting this good, we’re moving past memeification and toward severe harm for everyone but the scammer. In the Aniston example, her business relationships are potentially harmed, as is her reputation with fans. Consumers might waste money or compromise their identity, and the proliferation of fake content could cause people to question the validity of real endorsements, which hurts legitimate brands. Worse, all this “AI slop” further erodes our collective ability to believe anything.
Already there are thousands of AI-generated scam endorsements “every day, with every client,” says Luke Arrigoni, CEO of Loti, a company that scans the internet for deepfakes and other unauthorized images on behalf of public figures.
Whack-a-mole just won’t cut it anymore.
I talked with agents, tech experts and lawyers from Manatt, Dickinson Wright and TechFreedom on the front lines about why it’s time for everyone in Hollywood to start paying attention, what tools are available now to help them and what long-term solutions might look like.
In this issue, you’ll learn:
The technology WME and stars repped by other agencies are using to identify deepfake ads and get them removed
How talent lawyers are negotiating with social platforms to heed their clients’ demands
How fast a scammer can create a fake celebrity ad using AI
Where deepfakes are getting good enough that even the celebrities being impersonated are confused
Why “just sue them” isn’t the right advice in most cases
How copyright can be a cost-effective tool for lawyers to employ
What Google and Meta are doing — and not doing — about the scam ads
The new FTC rule that could turn the tide for Hollywood
And the one being considered that has SAG-AFTRA and the MPA at odds
Why one lawyer thinks we’ll soon see a celebrity emerge as the face of this fight