
From deepfakes to Bing's chatbot, AI-generated content is everywhere. Here's how to spot it.

  • ChatGPT's popularity is stirring concerns about the proliferation of AI-generated content.

  • Researchers have developed tools to detect machine-made videos or text.

  • Simple steps like checking your source are crucial for the average media consumer, one researcher told Insider.

Concerns about artificial intelligence programs taking over jobs or robots going rogue are nothing new. But the debut of ChatGPT and Microsoft's Bing chatbot has put some of those fears back in the forefront of the general public's mind — and with good reason.

Professors are catching students cheating with ChatGPT, jobs initially thought to require a human's judgment may soon be on the chopping block, and — like so many other AI models — tools like ChatGPT are still plagued by bias.

There's also the ever-growing threat of misinformation, which can be all the more potent with AI chatbots.

Chelsea Finn, an assistant professor in computer science at Stanford University and a member of Google Brain's robotics team, sees valid use cases for tools like ChatGPT.

"They're useful tools for certain things when we ourselves know the right answer, and we're just trying to use them to speed up our own work or to edit text, for example, that we've written," she told Insider. "There are reasonable uses for them."

The concern for Finn is when people start to believe everything that is produced by these models and when bad actors use the tools to deliberately sway public perception.

"A lot of the content these tools generate is inaccurate," Finn said. "The other thing is that these sorts of models could be used by people who don't have the best intentions and try to deceive people."

Researchers have already developed tools to spot AI-generated content, and some claim accuracy rates of up to 96%.

The tools will only get better, Finn said, but the onus will be on the public to be constantly mindful of what it sees on the internet.

Here's what you can do to detect AI-generated content.

AI detection tools exist

There are several tools available to the public that can detect text generated by large language models (LLMs), the technology behind chatbots like ChatGPT.

OpenAI, which developed ChatGPT, has an AI classifier that aims to distinguish between human-written and AI-written text, as well as an older detector demo. One professor who spoke with Insider used the latter tool to determine that a student essay was 99% likely to be AI-generated.

Eric Anthony Mitchell, a computer science graduate student at Stanford, and his colleagues developed a ChatGPT detector aptly called DetectGPT. Finn acted as an advisor for the project. A demo and paper on the tool were released in January.

All of these tools are in their early stages, take different approaches to detection, and have their own unique limitations, Finn said.

There are essentially two classes of tools, she explained. The first relies on collecting large amounts of text, some written by people and some by machine learning models, and then training the tool to distinguish between the two.

The challenge behind this approach is that it relies on a large amount of "representative data," Finn said. That becomes an issue if, for example, the tool is trained only on English-language data, or on data written mostly in a colloquial style.

If you were to feed this tool Spanish-language text, or a technical text like an article from a medical journal, it would struggle to detect AI-generated content.
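As a rough illustration of this first, data-driven approach, here is a minimal sketch using scikit-learn. The example texts, labels, and test passage are hypothetical stand-ins for the large, representative corpus a real detector would need.

```python
# Minimal sketch of the first class of detector: train a classifier on
# labeled examples of human-written and AI-generated text. The two
# example texts below are hypothetical; a real detector would need a
# large, representative dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I walked to the store, and the weather was lovely.",       # human
    "As an AI language model, I can provide a brief summary.",  # AI
    # ...thousands more labeled examples in practice
]
labels = [0, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Estimated probability that a new passage is AI-generated.
print(detector.predict_proba(["Some passage to check."])[0][1])
```

Because such a classifier can only learn patterns present in its training set, a version trained solely on colloquial English would inherit exactly the blind spots Finn describes.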

OpenAI adds the caveat that its classifier is "not fully reliable" on texts shorter than 1,000 characters and on texts written in languages other than English.

The second class of tools relies on a large language model's own predictions about whether a text is AI-generated or human-written. It's almost like asking ChatGPT itself whether a text is machine-made. This is essentially how Mitchell's DetectGPT operates.

"One of the big upsides to this approach is you don't have to actually collect the representative dataset, you actually just look at the model's own predictions," Finn said.

The limitation is that you need access to a representative model, which is not always publicly available, Finn explained. In other words, researchers need direct access to a model like the one behind ChatGPT to "ask" it whether a text is human- or machine-written, and that model is not currently open to researchers for testing.
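For a rough sense of how this second approach works, here is a simplified sketch in the spirit of DetectGPT, not its actual implementation. It uses the openly available GPT-2 model as a stand-in, since, as noted, ChatGPT's underlying model is off-limits, and it computes only the model's average log-likelihood for a passage; DetectGPT additionally compares that score against many lightly perturbed rewrites of the same passage.

```python
# Simplified sketch in the spirit of DetectGPT: score a passage using
# a language model's own token probabilities. GPT-2 stands in for
# ChatGPT's model, which is not publicly accessible. Requires the
# `transformers` and `torch` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean negative log-likelihood
    return -out.loss.item()

# DetectGPT's key observation, simplified: machine-generated text tends
# to sit near a local peak of model likelihood, so it scores noticeably
# higher than lightly perturbed rewrites of itself.
print(avg_log_likelihood("The quick brown fox jumps over the lazy dog."))
```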

Mitchell and his colleagues report their tool successfully identified large language model-generated text 95% of the time.

Finn said every tool has its pros and cons, and the main question to ask is what type of text is being evaluated. DetectGPT showed accuracy similar to the first class of detection tools overall, but performed better on technical texts.

Detecting deepfakes? Human eyes and veins provide clues

There are also tools to detect deepfakes, a portmanteau of "deep learning" and "fake" that refers to digitally fabricated images, videos, or audio.

Image forensics is a field that has existed for a long time, Finn said. People have been able to manipulate images since the 19th century, originally by making composites of multiple photos, and then came Photoshop.

Researchers at the University at Buffalo said they've developed a tool that detects deepfake images with 94% effectiveness. The tool looks closely at the reflections in a subject's eyes: in a genuine photo, both eyes reflect the same light sources, so mismatched reflections are a sign the image was digitally rendered.
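The Buffalo team's published method is more involved, but the core intuition can be sketched in a few lines of OpenCV. The file name and eye coordinates below are hypothetical; a real pipeline would locate the eyes with a face-landmark detector.

```python
# Toy sketch of the eye-reflection check: in a genuine photo, both
# corneas reflect the same light sources, so their bright highlights
# should roughly match. File name and crop coordinates are hypothetical.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")     # hypothetical input image
left_eye = img[120:150, 200:240]     # hypothetical eye regions
right_eye = img[120:150, 280:320]

def highlight_mask(eye_crop):
    """Binary mask of the brightest pixels, i.e. the specular highlight."""
    gray = cv2.cvtColor(eye_crop, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    return mask

m1 = highlight_mask(left_eye)
m2 = highlight_mask(cv2.resize(right_eye, (m1.shape[1], m1.shape[0])))

# Intersection-over-union of the two highlight patterns; low overlap
# between the eyes hints the image may be synthetic.
inter = np.logical_and(m1 > 0, m2 > 0).sum()
union = np.logical_or(m1 > 0, m2 > 0).sum()
print("highlight IoU:", inter / max(union, 1))
```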

Microsoft announced its own deepfake detector, called Microsoft Video Authenticator, ahead of the 2020 election with the goal of catching misinformation. The company tested the tool with Project Origin, an initiative that works with media organizations including the BBC and The New York Times to give journalists tools to trace the origin of videos. According to the tech company, the detector closely examines small imperfections at the edges of a fake image that are undetectable to the human eye.
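Microsoft has not released the detector's internals, but the general idea of boundary artifacts can be illustrated with a toy proxy: comparing fine-grained texture along the edge of a hypothetical face box against the face's interior. This is only an illustration of the concept, not Microsoft's method.

```python
# Toy proxy for edge-artifact analysis: blended deepfakes can leave
# subtle texture inconsistencies where the synthetic face meets the
# frame. This is NOT Microsoft's method; the file name and face box
# are hypothetical.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")           # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
lap = cv2.Laplacian(gray, cv2.CV_64F)     # fine-detail (high-frequency) map

x, y, w, h = 200, 100, 200, 200           # hypothetical face bounding box
boundary = np.abs(lap[y - 3:y + 3, x:x + w]).mean()            # thin edge strip
interior = np.abs(lap[y + 20:y + h - 20, x + 20:x + w - 20]).mean()

# A pronounced mismatch in high-frequency energy across the boundary
# can be a blending artifact invisible to the naked eye.
print(f"boundary/interior energy ratio: {boundary / interior:.2f}")
```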

Last year, Intel announced its "real-time" deepfake detector, FakeCatcher, and said it has a 96% accuracy rate. The tool examines the subtle signs of "blood flow" in a real human's face in a video and uses those clues to determine the video's authenticity, according to the company.

"When our hearts pump blood, our veins change color. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps," the company wrote in an announcement of its tool. "Then, using deep learning, we can instantly detect whether a video is real or fake."

Detection tools are an evolving science. As models like ChatGPT or deepfake applications get better, the tools to detect them also have to improve.

"Unlike other problems, this one is constantly changing," Ragavan Thurairatnam, founder of technology company Dessa, told The New York Times in a story about internet companies' fight against deepfakes.

Other ways to spot AI-generated content

The effectiveness of detection tools still depends on an individual's own judgment.

Darren Hick, a Furman University philosophy professor, previously told Insider that he turned to a ChatGPT detector for a student essay only after he noticed that the paper was well written but "made no sense" and was "just flatly wrong." 

As Finn said, ChatGPT can be helpful when the user already knows the right answer. For average media consumers, the old adage of checking one's source remains salient.

"I think it's good to just try not to believe everything you read or see," Finn said, whether that's information from a large language model, from a person, or from the internet.

Social media makes media consumption a seamless experience, so it's important for users to pause for a moment and check the account or outlet where they're seeing a piece of news, especially if it's sensational or particularly shocking, according to a guide on spotting fake news from Washington University in St. Louis.

Viewers should ask themselves if they're seeing a video or text from a meme page, an entertainment site, an individual's account, or a news outlet. After seeing a piece of information online and confirming the source, it helps to compare what else is out there on that subject from other reliable sources, according to the university's guide.

When it comes to AI-generated videos or images, there are also still visual cues the naked eye can detect. Image generators, for instance, have been reported to have trouble drawing hands and teeth.

"Usually there are some small artifacts, maybe in people's eyes, or, if it's in a video, the way that their mouth is moving looks a little bit unrealistic," Finn said.

The photo-editing app Lensa AI, which recently became popular for its Magic Avatars feature, had a habit of leaving "ghost signatures" in the corner of its AI-generated portraits. That's because the tool was trained on preexisting images in which artists had often left their signatures somewhere on their paintings, ARTnews reported.

"Right now it's still possible to spot some of these if you're looking for the right thing," Finn said. "That said, in the long run, I suspect that these kinds of machine learning models will probably get better, and that may not be a reliable way to detect images and video in the future."
