This Tweet Is Proof Lawmakers Aren’t Ready for the AI Boom

Joshua Hoehne / Unsplash

On Sunday night, Senator Chris Murphy (D-CT) tweeted a bold claim about ChatGPT, saying the chatbot had “taught itself to do advanced chemistry” even though chemistry knowledge wasn’t “built into the model” and nobody “programmed it to learn complicated chemistry.”

“It decided to teach itself, then made its knowledge available to anyone who asked,” Murphy added. “Something is coming. We aren’t ready.”

The only problem: Nearly every single thing Murphy wrote in that tweet was wrong. AI researchers and academics were quick to let him know, inundating his replies and quote tweets with the kind of righteous sound and fury reserved for the Internet’s main character of the day.

“That is simply not true,” Grady Booch, software engineer and developer of the Unified Modeling Language, wrote in Murphy’s replies. “Please, you need to better inform yourself about the reality of contemporary AI.”

“Your description of ChatGPT is dangerously misinformed,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote back in another tweet. “Every sentence is incorrect. I hope you will learn more about how this system actually works, how it was trained, and what its limitations are.”

Aside from being a great example of something that should have remained in the drafts folder, Murphy’s tweet underscores the stark reality that the vast majority of our lawmakers are woefully unprepared for the AI boom. Since ChatGPT’s release in Nov. 2022, tech giants like Microsoft, Google, and China’s Baidu have rushed to get generative AI products out the door, with varying degrees of success. Microsoft released a new version of Bing infused with GPT-4 that scandalized one journalist enough to write a front-page, above-the-fold article about it for The New York Times. The company later unveiled a whole line of AI-powered updates to its existing products like Excel and Word. Meanwhile, Google played catch-up, releasing its AI chatbot Bard a month later.

Amid all this fervor, misinformation about generative AI is rapidly ballooning. It has led people to fundamental misunderstandings of the technology and its capabilities. We’re seeing people make outlandish claims: that the Bing chatbot has fallen in love with them (it hasn’t), that it’s sentient (it’s not), or that it’s evil and wants to kill them (it’s not, and it won’t).

Now, we have a sitting U.S. senator with a massive platform adding fuel to this fire. To his credit, Murphy did later respond, seeming to concede that his first take may have been mistaken (or flat-out wrong). A source with close knowledge of the situation told The Daily Beast that Murphy’s tweet was based on an AI presentation given by Aza Raskin and Tristan Harris of the Center for Humane Technology. Still, that doesn’t make his initial take any less wrong, or any less dangerous.

For one, ChatGPT is built on OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs). That means it is trained on a massive corpus of books, scientific journals, and articles from Internet sources like Wikipedia and news websites, all for the purpose of predicting text. So it doesn’t and can’t “teach itself” advanced chemistry, or really anything at all, because it’s a predictive text bot like the one on your phone: it produces responses based on prompts and the words that most likely follow one another.

“ChatGPT doesn’t teach itself,” Mitchell told The Daily Beast in an email. “It is given vast amounts of text by humans. It is trained to predict the next token in a text block.”

Mitchell added that while the training allows it to learn what human language looks like, it doesn’t give it the ability to “understand the queries people give it or the language it generates in any human-like way.”
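The next-token training Mitchell describes can be illustrated with a toy bigram model. This is a hypothetical sketch, vastly simpler than the neural networks behind ChatGPT, but the principle is the same: the model counts which words follow which in its training text and then regurgitates the most likely continuation. It cannot produce patterns it was never shown.

```python
# Toy next-token predictor (illustrative only; real LLMs use neural networks
# with billions of parameters, not lookup tables).
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows each word in the training text.
successors = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    if word not in successors:
        return None  # the model only echoes patterns present in its training text
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most frequent successor of "the"
```

The model never “decides” to learn anything; every output is a statistical echo of the text humans fed it, which is Mitchell’s point about ChatGPT writ small.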

Moreover, all of this is, in fact, built into the model. That’s the point. ChatGPT was trained to be an incredibly sophisticated and advanced chatbot. “ChatGPT doesn’t decide anything,” Mitchell explained. “It has no intentions.”


The frustrations of Mitchell and other AI experts are partly fueled by the danger that misinformation about these chatbots poses. If people start treating these bots as all-powerful or all-knowing, they’ll grant them a level of authority they simply shouldn’t have.

“What I would like Sen. Murphy and other policymakers to know is that they pose a large risk to our information ecosystem,” Emily M. Bender, a professor of linguistics at the University of Washington, told The Daily Beast in an email. “These are programs for creating text that sounds plausible but has no grounding in any commitment to truth.”

She added: “This means that our information ecosystem could quickly become flooded with non-information, making it harder to find trustworthy information sources and harder to trust them.”

Booch largely echoed the sentiment. “Facts are important, and the Senator does a disservice to his community and to the domain of AI by circulating such misinformation,” Booch told The Daily Beast. However, he pointed out that “OpenAI is behaving most unethically by not disclosing the source of their corpus.”


Currently, there is little in the way of meaningful AI regulation. In Oct. 2022, the White House released its Blueprint for an AI Bill of Rights, which outlined principles for how these models should be built and used to protect the data and privacy of American citizens. However, it remains little more than a glorified wish list of nonbinding recommendations. Since its release, the world of generative AI has exploded, and so have the risks.

Murphy did get one thing right, though: Something is coming, and we aren’t ready. He probably didn’t realize that he was talking about himself, too.

“We desperately need smart regulation around the collection and use of data, around automated decision systems, and around accountability for synthetic text and images,” Bender said. “But the folks selling those systems (notably OpenAI) would rather have policymakers worried about doomsday scenarios involving sentient machines.”
