In May, OpenAI released a new voice assistant to power ChatGPT with human-like voices.
The only issue? One of the voices, named Sky, sounded similar to that of the actress Scarlett Johansson, who had been courted by OpenAI to voice the chatbot but ultimately turned down the offer. While OpenAI claimed that a different professional voice actor was behind Sky, the company pulled the voice after facing increased scrutiny.
“The primary legal issue in this case is whether OpenAI’s use of a voice that ‘sounds like’ Scarlett Johansson violates her right of publicity,” said Kristelia García, professor at Georgetown Law and an intellectual property and technology law expert.
The spat between the tech company and the actress comes amid growing legal and ethical questions around AI and AI-generated content as the technology develops and is adopted by more people and organizations.
Recently, OpenAI inked deals with multiple media companies to license their content, including The Atlantic, Vox Media and News Corp, the mass media company whose holdings include the Wall Street Journal. The deals enable OpenAI to train its AI models on these companies’ content and to answer user queries based on it.
To make sense of all the legal and ethical questions around AI, we asked García for her legal takes on the dispute between OpenAI and Johansson, as well as her thoughts on the future of AI-generated news.
Ask a Professor: Kristelia García on OpenAI, Scarlett Johansson and AI-Generated Content
What are the legal arguments for and against OpenAI’s AI-generated voice assistant that sounds like Scarlett Johansson?
“Rights of publicity” is an umbrella legal term of art that encompasses what are commonly known as “name, image, and likeness” rights. To be clear, there is no single right of publicity, but rather a web of rights that vary by state and collectively protect identity. There is not currently a federal right of publicity, although a bill proposing one was recently circulated.
Typically invoked by celebrities and other public figures, rights of publicity recognize a limited ability to control one’s identity. The thrust of this recognition is two-fold: (1) famous people have a commercial interest in what makes them valuable; and/or (2) people have a broadly defined right of privacy that should allow them to dissociate themselves from certain companies or causes. Without that right, consumers could be led to falsely believe that a celebrity endorses a product.
There really isn’t a strong legal argument for what OpenAI has done, particularly against the backdrop of the company having negotiated with Johansson before creating the sound-alike voice.
How does Scarlett Johansson being a celebrity and public figure affect the legal grounds of a potential case?
Some states only grant rights of publicity to celebrities and public figures, like Johansson, who make a living off of their identities. Some states also cut off rights of publicity at death, while others allow them to continue post-mortem.
Are there any past legal precedents that could inform this particular scenario between OpenAI and Johansson?
Yes. In a 1988 case called Midler v. Ford, the Ninth Circuit sided with the singer Bette Midler, who had accused Ford Motor Company of using a voice that sounded like hers to sing one of her songs in their commercial, after Midler had turned them down. This is strikingly similar to the OpenAI/Johansson scenario. In that case, the court found Ford’s appropriation of Midler’s voice to be a tort under California’s right of publicity laws.
How does the law draw a line between the illegal use of someone’s voice without their permission and the creation of a voice that could potentially sound like a specific person?
The illegal use of someone’s actual voice is the clearer-cut issue: you simply can’t. An AI-generated voice or other sound-alike can be trickier depending on the context and the jurisdiction, since some states have stronger protections than others.
What obligation, if any, does a company have to notify users that a voice is AI-generated? What ethical concerns do you foresee?
I’m not aware of any legal obligations to disclose AI involvement at this time, though there have been proposals to do so. And we’ve seen such obligations imposed in other contexts. For example, the Copyright Office requires those applying to register copyrights to disclose and disclaim any portions of a work attributable to AI.
Going forward, it will be interesting to see whether public perceptions of AI change as it becomes more commonplace. For example, there was a recent controversy over Marvel Studios using AI to create the opening sequence of its new Secret Invasion TV series. Some fans weren’t pleased. It remains to be seen whether this will eventually become a non-issue.
From a creator’s perspective, the ethical concern is that AI might replace human talent, such as using an AI-generated voice instead of paying an actor for the use of their voice. For this reason, the resolution of the recent SAG-AFTRA strike included strong prohibitions against this kind of AI use by the studios.
Are policymakers moving toward regulating how content gets attributed to AI?
Most of the legislative focus at this time is on whether the outputs of generative AI (1) are legal, and (2) are copyrightable.
At the moment, attribution regulation is being handled more on an agency-by-agency basis, with authorities like the Copyright Office, for example, implementing a disclose-and-disclaim approach toward works created using AI.
OpenAI recently closed deals with The Atlantic, Vox Media and the parent company of the Wall Street Journal. What are some of the potential legal and ethical implications of media companies giving OpenAI access to their archives of content?
The biggest legal implication here is that OpenAI gets to operate without fear of crippling copyright litigation from the companies it has struck licensing deals with. That could give it a very real competitive advantage over other companies that don’t have such licenses. For this reason, I’ve advocated for a compulsory license for copyrighted content used to train large language models. A compulsory license allows prospective users to license copyrighted content without having to first obtain permission, so long as the user pays the statutory rate and meets the statutory terms. This is intended to eliminate the hold-out problem and increase access while still paying creators. These licenses bring their own problems and challenges but would serve to give all users equal access to the same content.
If more people turn to AI chatbots like ChatGPT for news, how could that disrupt the media industry?
At this time, and of course subject to change, AI can’t produce actual news, at least as we know it. It can only synthesize the various news pieces fed into it, or it can make things up. What it can’t yet do is observe an event and then report on it; instead, it needs to synthesize others’ write-ups of the event to generate a similar account. So the potential media disruption right now is that consumers are pulled away from traditional media outlets and toward AI that uses those outlets’ content without paying them. That would not be good, obviously. Alternatively, it might look more like Google’s AI Overview feature, which is a bit of a disaster at the moment but purports to offer users a summary followed by a link (or links) to the traditional media outlet(s) to read more. That would be a better result for the media industry, as would a compulsory license.