OpenAI's Scarlett Johansson saga reveals an 'ask forgiveness, not permission' problem

  • OpenAI is in hot water over its long-running "ask forgiveness, not permission" strategy.

  • The ChatGPT maker has faced claims that it has used artists' work to train its AI sans permission.

  • Its latest run-in, with Scarlett Johansson, could amplify those claims.

OpenAI's biggest critics have long held the view that Sam Altman's success has been built on an "ask forgiveness, not permission" strategy that could come back to haunt him.

They might be proved right.

The ChatGPT maker has been embroiled in fresh controversy since Monday after Scarlett Johansson lashed out at the company over a new voice feature for its chatbot. In her view, it sounded an awful lot like the artificial-intelligence assistant she played in the 2013 movie "Her."

"When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine," Johansson said in a statement first published by NPR.

The Hollywood star has reason to be frustrated.

Johansson said in her statement that she declined an offer in September to voice ChatGPT. Yet after OpenAI's big launch of a new AI model last week, which brought a real-time voice called Sky to the chatbot, she found herself seemingly evoked anyway: many listeners thought Sky sounded just like her.

Altman posted one word on X, "her," after the event. He seemed to be referring to the 2013 film "Her," in which the main character, Theodore, played by Joaquin Phoenix, develops a relationship with his AI personal assistant, voiced by Johansson.

OpenAI has responded to the criticism by pulling the Sky voice entirely. It has also issued a statement saying Sky was "never intended to resemble Johansson." That said, the saga highlights a deeper problem facing the startup.

On multiple fronts, the San Francisco company at the heart of the current AI boom faces a growing chorus of critics who say it has trained its AI models on intellectual property from authors, publishers, and artists without their permission.

While OpenAI asked for Johansson's permission in this instance, it ended up creating an AI voice that many say sounded just like hers — after she declined to get involved.

Others seem not to have been asked for permission at all.

Sora, a text-to-video AI model unveiled by OpenAI in February, is suspected of using videos from YouTube in its development. In an interview with The Verge published Monday, Google CEO Sundar Pichai said he thought OpenAI may have broken YouTube's terms and conditions.

Though Johansson and Pichai have not filed lawsuits against OpenAI, the "ask forgiveness, not permission" strategy that critics accuse the company of has already landed it in legal hot water.

Several authors represented by the Authors Guild are in the middle of a tense legal battle with OpenAI over concerns that their books were used without permission to train an older OpenAI model.

The New York Times is fighting a legal case against OpenAI, too, arguing that the similarities between ChatGPT's responses and the text of its articles are a sign that the AI company is taking its journalism for a "free ride."

OpenAI could face more trouble from the music industry. Sony Music, whose artist roster includes Beyoncé, sent a letter to OpenAI and other companies last week over fears that they had "made unauthorized uses" of songs from its artists to train AI.

With AI already unleashed on the world, it's hard to know what the path forward might be. In recent months, OpenAI has been scrambling to sign licensing agreements with Reddit and publishers such as Business Insider and the Financial Times.

Creators who suspect their work has been used to train OpenAI's models without their permission will probably wonder why they weren't offered an agreement in the first place.

Read the original article on Business Insider