
Virginia Tech Researchers Train Chatbot to Teach Kids Online Safety

Using large language models, and with some adult supervision, the chatbot would coach young people on how to identify and respond to messages from online predators.

Image: A young boy uses a tablet while a hooded figure looms in the background. (Shutterstock)
Computer scientists at Virginia Polytechnic Institute and State University are developing a technology-assisted education program for preventing online sexual abuse of young people.

According to the website for the National Science Foundation, which provided an $850,000 grant for the project in March, the idea is to build a chatbot with which young people can interact and practice forming messages that make them less vulnerable to online predators. With some adult supervision, the bot will coach them on these interactions in a safe environment, without the malicious intent or explicit language and images they might face in the real world.

Jin-Hee Cho, an associate professor of computer science and lead researcher on the project, said kids need to become savvy users of online spaces, because predators already are.

“As we have social media, social networks and a variety of different ways of social interaction online between young people, the bad guys or the attackers are heavily leveraging this kind of technology,” she said.

A 2021 national survey of more than 2,500 young adults between ages 18 and 28, conducted by researchers at the University of New Hampshire, found that 15.6 percent of respondents had been victims of online child sexual abuse. This kind of abuse can come in a lot of different forms, and young Internet users are not always able to distinguish real danger from empty threats, or real online friends from groomers.

According to an FBI website, in some cases, the perpetrator may claim to already have a revealing picture or video of a child that will be shared if the victim does not send more pictures. This is sometimes called “sextortion.”

More often, the website said, perpetrators pose as a child or teen and make victims believe they are communicating with someone their own age who is interested in a relationship. They may also pose as someone who is offering something of value, such as money or gift cards.

Cho said this kind of relationship, where a predator intentionally builds trust before asking for explicit content, is sometimes called “cyber grooming.” Once the perpetrators have one picture or video, it becomes a cycle: They threaten violence, or exposure of the explicit content, to force the victim to produce more. Young victims may feel ashamed, confused or afraid, and this can prevent them from reporting the abuse or seeking help.

Cho added that a lot of work in online child abuse prevention is focused on detecting perpetrators, which is helpful but limited, in that it is reactive rather than proactive. For example, the National Center for Missing and Exploited Children now uses AI to identify and prosecute child predators.

As The Guardian reported in January, some social media platforms also filter messages from unknown users to a separate folder and censor explicit words and images to prevent users from seeing inappropriate content.

Sang Won Lee, another Virginia Tech researcher, said their new chatbot takes a more active, educational approach to supplement these efforts.

“What we can do to complement this approach is to empower [young people] and educate them so that they can learn more about this risk, so that even when this detection is not perfect, if they are exposed to these cases, they can have awareness about the risks so that they can move away and then learn how to protect themselves,” he said.

Cho said her team has spent years gathering examples of conversational data from parents and teens to train the bot, which has proven difficult, as there is no centralized database for predatory conversations. Nonetheless, they now have a “predator” chatbot, which uses the common language and tactics of online abuse, and are working on a “youth” chatbot trained to recognize and resist those tactics. The two bots can interact with each other and serve as a model for educational efforts.

Cho added that there are ethical considerations at every turn.

“We want to make it very realistic, but at the same time, we want to maintain a certain ethical level,” she said. “We want to be sure of everything before we deploy it for human use, and not just to make it available to the public as some website everybody can access.”

The grant timeline ends February 2027, but Cho said her team will put as much time in as necessary to test the bot internally and at other educational institutions. It is not necessarily meant for young people to use on their own, but as a tool within educational contexts where trusted adults are present.

Lee said that while a strictly person-to-person educational effort might avoid the risk of this technology falling into the wrong hands, it cannot scale to meet the online sexual abuse happening right now.

“Obviously, it would have been nice if we could have more of a human-to-human interaction to educate adolescents to learn more about this risk, but as always, the problem is the scale,” he said. “Think about how much it would cost to implement such a course that goes through programming in public school, that’s going to be a huge amount.”

According to a recent news release, Pamela Wisniewski, an associate professor of computer science at Vanderbilt University, will work on testing the product. She received a $344,874 National Science Foundation grant to consult with teens and experts to ensure that the chatbots are safe for minors to use.

“My hope is that we can design, develop, and evaluate a tool that can benefit teens by teaching them about the risks of cyber grooming, as well as coping mechanisms to protect themselves from these risks,” Wisniewski said in a public statement. “The goal isn’t to restrict and surveil their use of the Internet. Instead, we need to give them the tools needed to navigate the Internet safely.”

Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor's degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.