New AI Detector to Catch Fake College Applications

S.A.F.E., a new software tool from AMSimpkins and Associates in Georgia, is designed to detect and remove fake student applications, recommendation letters and other fraudulent admissions documents generated by AI.

With the growing popularity of generative AI has come a proliferation of tools to detect AI-generated content, and a new one from the tech company and consulting firm AMSimpkins and Associates (AMSA) aims to help universities detect and filter out fake student applications and recommendation letters.

A recent news release announcing the tool, dubbed "S.A.F.E." (Student Application Fraudulent Examination), described it as part of a broadening effort by universities and tech companies alike to address worries about generative AI tools such as ChatGPT being used for fraud or academic dishonesty.

AMSA founder and president Maurice Simpkins said his company has spoken with administrators at dozens of schools that have received “fraud applications,” and that there is evidence of students using ChatGPT to produce fake recommendation letters. As one example of AI misuse that troubled him, he cited people generating “pseudo-profiles” of prospective students for college and university applications. Such cases, he added, underscore the need for regulation and oversight of how generative AI is used in higher ed.

“It's the perfect storm. Not only do colleges and universities have to worry about protecting institutional assets from scammers, now they also have to worry about academic dishonesty from real students,” he said. “This is why we created S.A.F.E., because we understand that colleges and universities are being hit from so many angles and we have to work together if we want to protect academic learning and pursue the highest standards of academic integrity."

According to the news release, S.A.F.E. verifies students' identities and requires them to register before using ChatGPT, bars access to student data by third parties outside of school networks and monitors usage patterns to detect misuse. An email from the company explained that these functions were inspired by guidelines and best practices published by OpenAI, the company that created ChatGPT.
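The news release does not describe how S.A.F.E. implements these functions, so the sketch below is purely illustrative: a minimal access gateway combining identity verification, required registration and usage-pattern monitoring. Every name and threshold in it (Student, UsageMonitor, allow_ai_access, a 20-requests-per-10-minutes limit) is a hypothetical stand-in, not AMSA's design.

```python
# Illustrative sketch only -- AMSA has not published S.A.F.E.'s internals.
# Every name and threshold below is a hypothetical stand-in.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Student:
    student_id: str
    verified: bool = False    # set after an identity check (e.g., SSO plus document review)
    registered: bool = False  # must register before any ChatGPT-backed tool is enabled


@dataclass
class UsageMonitor:
    """Tracks per-student request timing to flag patterns that may indicate misuse."""
    window: timedelta = timedelta(minutes=10)
    max_requests: int = 20  # hypothetical rate limit
    history: dict = field(default_factory=dict)  # student_id -> list of request timestamps

    def record(self, student_id: str, now: datetime) -> bool:
        """Record a request and return True if recent activity looks suspicious."""
        events = self.history.setdefault(student_id, [])
        events.append(now)
        recent = [t for t in events if t >= now - self.window]
        self.history[student_id] = recent
        return len(recent) > self.max_requests


def allow_ai_access(student: Student, monitor: UsageMonitor) -> bool:
    """Gate AI-tool access: the student must be verified, registered and not rate-flagged."""
    if not (student.verified and student.registered):
        return False
    return not monitor.record(student.student_id, datetime.now())


# Example: a verified, registered student is allowed through until the rate limit trips.
monitor = UsageMonitor()
applicant = Student(student_id="s-001", verified=True, registered=True)
print(allow_ai_access(applicant, monitor))  # True
```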

Simpkins said he believes tools like S.A.F.E. will become more important as generative AI gets better at evading the anti-plagiarism tools designed to detect AI-generated content, and as ChatGPT-like models become embedded in a growing range of other applications.

“Higher education has been a target for fraud and scam recently, and so as bots become more and more prevalent, connecting them to an AI infrastructure takes no time to do it all. So, if you have a database full of stolen identities, you can ask that bot to generate enough identity information for you to apply to certain colleges,” he said. “You can use it for generating reference letters. You can use it for generating [academic and admissions] essays, and we're seeing it being used in all of those three cases currently.”

Simpkins noted that ChatGPT’s potential for misuse could grow as it becomes more advanced.

“It has had so many other applications built around it," he said. "There are homework-writing tools. There are bots that rewrite words based off the words that you give it. There have been so many other tools built off of the underlying infrastructure of ChatGPT at OpenAI that’s growing exponentially daily, and that's going to be difficult to stay ahead of. We don't know how else it's going to be used.”

Brandon Paykamian is a former staff writer for the Center for Digital Education.