Here’s how ChatGPT could solve its major plagiarism problem

ChatGPT is a wonderful tool, but there’s a dark side to this advanced AI service that can write like an expert on almost any topic — plagiarism. When students who are supposed to be demonstrating their knowledge and understanding of a topic cheat by secretly using ChatGPT, it invalidates testing and grading. AI skills are valuable, but they aren’t the only thing students should learn.

Policing this problem has proven difficult. Since ChatGPT was trained on a vast dataset of human writing, it’s nearly impossible for an instructor to tell whether an essay was written by a student or a machine. Several tools attempt to recognize AI-generated writing, but their accuracy has been too low to be useful.

Amid rising concerns from educators and bans on students using ChatGPT, Business Insider reports that OpenAI is working on a solution to this problem. A recent tweet from Tom Goldstein, an associate professor of machine learning at the University of Maryland, explained how accurately watermarked text written by a language model might be detected.

#OpenAI is planning to stop #ChatGPT users from making social media bots and cheating on homework by "watermarking" outputs. How well could this really work? Here's just 23 words from a 1.3B parameter watermarked LLM. We detected it with 99.999999999994% confidence. Here's how 🧵 pic.twitter.com/pVC9M3qPyQ

— Tom Goldstein (@tomgoldsteincs) January 25, 2023

Any tool that could identify plagiarism with nearly 100% accuracy would settle this discussion quickly and alleviate those concerns. According to Goldstein, one solution is to make the large language model (LLM) pick from a limited vocabulary of words, forming a whitelist of words the AI is encouraged to use and a blacklist of words it should avoid. If an unnaturally large share of whitelisted words shows up in a sample, that suggests it was generated by the AI.

The simplest version of this approach would be too restrictive, since it’s hard to predict which words a discussion will need when generating one word at a time, as most LLMs do. Goldstein suggests that ChatGPT could instead look ahead further than one word, planning a sentence that can be filled with whitelisted words while still making sense.
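The detection side of this idea can be sketched in a few lines of code. The following is a minimal illustration, not OpenAI’s actual method: it assumes a hypothetical scheme in which the previous word seeds a deterministic split of a toy vocabulary into a whitelist, so a detector holding the same seeding rule can recompute each split and count how often the text lands on the whitelist. (A real watermark would operate on model tokens and softly bias probabilities rather than hard-pick words.)

```python
import hashlib
import random

# Toy vocabulary standing in for a model's token set (illustrative only).
VOCAB = ["the", "a", "model", "can", "plan", "write", "text", "word",
         "list", "choose", "detect", "sample", "essay", "topic", "use", "ai"]

def whitelist_for(prev_word, fraction=0.5):
    """Deterministically split VOCAB into a whitelist seeded by the previous
    word, so the detector can reproduce the exact same split later."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB.copy()
    rng.shuffle(shuffled)
    return set(shuffled[:int(len(shuffled) * fraction)])

def whitelist_fraction(words):
    """Fraction of words that fall in the whitelist implied by their
    predecessor. Human text should hover near 0.5; watermarked text
    should sit far above it."""
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, cur in pairs if cur in whitelist_for(prev))
    return hits / max(len(pairs), 1)
```

This is also why confidence grows so quickly with length: for unwatermarked text, each word independently lands on the whitelist with probability about one half, so a long run of mostly-whitelisted words becomes astronomically unlikely by chance — which is how a short sample can yield the near-certain detection Goldstein describes.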

ChatGPT made a big splash when it arrived, and it can be a great teaching aid as well. It’s important to introduce artificial intelligence in schools, since it will clearly be an important technology to understand in the future, but it will remain controversial until the issue of plagiarism is addressed.

Alan Truly
Computing Writer