OpenAI is building the most powerful tech in the world. The public should be told what Sam Altman lied to the board about.
OpenAI is working on AI that matches or exceeds human intelligence, potentially dangerous tech.
The board accused CEO Sam Altman of not being "candid" when they fired him.
The public needs to know what, if anything, he lied about.
Elon Musk has multiple axes to grind when it comes to OpenAI and its cofounder Sam Altman. But the Tesla CEO made a valid point in the midst of a chaotic weekend for the artificial intelligence startup.
The events kicked off Friday afternoon when OpenAI made it clear in a statement that Altman lied to the board about something so important that the board decided to oust him as CEO.
"He was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," OpenAl said.
This is a drastic and highly disruptive move that boards very rarely make. At any ordinary private company, this kind of corporate intrigue could probably remain an internal matter. But OpenAI is a special case.
The startup has developed the most powerful tech in the world. GPT-4 is currently the most capable and influential AI model available, and a new version in the works is expected to be even more powerful. Altman himself has been calling for regulators and other institutions to prepare for artificial general intelligence, the point at which machines match and possibly exceed human capabilities.
"Given the risk and power of advanced Al, the public should be informed of why the board felt they had to take such drastic action," Musk posted on X on Sunday.
"If it was a matter of AI safety, that would affect all of Earth," Musk later added in a reply.
OpenAI's charter, the document that governs the company and instructs its board and executives how to behave, states clearly that the company's "primary fiduciary duty is to humanity." The charter also commits OpenAI to use any influence it has over AGI deployment to ensure the technology benefits everyone and doesn't "harm humanity or unduly concentrate power."
That's an unusual charter for a tech startup, or for most other private companies, which usually exist primarily to make a profit and provide investors with a return.
"Very few people know for sure what happened in this case, but my best guess is that when the board members said, 'he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,' they meant that he repeatedly withheld information that interfered with their legal obligations to ensure safe development of AGI," Toby Ord, a senior research fellow at Oxford University, wrote on X on Sunday.
Musk originally backed OpenAI as a research organization focused on AI safety. He backed out a few years ago, and may feel aggrieved about missing out on the evolution of such a game-changing entity. Musk has also launched a competing AI research group, so chaos at OpenAI could be relatively good for that initiative.
Under Altman's leadership the startup has raised billions of dollars and launched several products with huge commercial potential. The company's recent Developer Day was chock-full of Big Tech corporate-style updates and launches. That's a long way from its AI safety research roots.
This is why it's important for the public to know exactly what happened with Altman's ouster. I asked OpenAI's press department about this on Sunday and didn't get a response.
Maybe the board overreacted and is exaggerating what happened with Altman. Or it could have been a simpler coup, with the CEO ousted for another reason, or no reason at all.
However, there are other theories about what happened, along with several media reports.
Bloomberg reported that Altman has been trying to raise huge amounts of money for a new AI chip startup called Tigris. Did the board ask about this? If so, how did Altman respond? Should the CEO of OpenAI be off trying to start another huge company? Musk has done this several times, but it's unusual, and OpenAI's board would likely expect to be told about it.
Other publications have written that OpenAI cofounder Ilya Sutskever wanted to proceed with AI development more cautiously than Altman, given the potential threat the technology poses to society.
The New York Times reported that Sutskever created a "super alignment" team to ensure that future versions of GPT-4 wouldn't be harmful to humanity. Did the board discuss this with Altman? How did he respond?
This type of schism within OpenAI already caused a slew of OpenAI employees to leave and launch another startup, Anthropic, which says it is more focused on AI safety.
According to a photo he posted on X on Sunday afternoon, Altman is at OpenAI headquarters as I write this, negotiating to return and run OpenAI again. If that happens, it's even more important to know what he said, or didn't say, to the board.