Connecticut Professors: Don't Fight ChatGPT, Incorporate It

While they acknowledge concerns about an AI tool that can write essays for students, professors from the University of Hartford, University of Connecticut and Yale also see its limits and a need to redesign assessments.

(TNS) — As the latest innovation in artificial intelligence spreads through the classroom, university professors see a potential threat to academic integrity but also are curious about how it can be used as an aid in teaching.

Reaction has been swift to ChatGPT, launched in November by OpenAI with the ability to answer questions in seconds with reasonably well-written responses, gleaned from the tremendous amount of data fed into its computers. Some are concerned students will use it as a shortcut to write essays and test responses or that it will eliminate jobs.

“It already has information [from] all the open Internet,” said Zaman Sarker, assistant professor of computer science at the University of Hartford. “All the newspapers are on, CNN, NBC, all the articles,” totaling 600 billion words so far.

'THINK ABOUT IT'

Lisa Zawalinski and Sheetal Sood of the University of Hartford have been discussing the chatbot to see how they can best use it while keeping their students honest.

“It’s really the conversations that we’ve been having that have started to give us new ideas, less fear, more excitement … about this new technology that is blowing up the airwaves in regard to teaching and learning,” said Zawalinski, associate professor of elementary education and director of the Center for Teaching Excellence and Innovation that supports Hartford’s faculty.

Sood, associate dean of the College of Education, Nursing and Health Professions, said her first reaction was fear, “but then I’m always the person who will be like, OK, stop reacting, sit back and think about it.”

As she’s read and talked about ChatGPT, she said she began to think, “How can we incorporate it and teach our students how to use it, as opposed to saying, stay away from it. … Then I started thinking about students and how this can actually be used to support students as opposed to being thought of as something that we just cannot include.”

One way might be to provide an AI-generated prompt to take away the fear that greets students when they see a blank sheet of paper or computer screen and know they have to write an intelligent essay.

Zawalinski too was apprehensive at first. “How are we going to assess students’ writing if they can just pop a question in and have GPT answer it for them, and how is this going to work in my courses now?” she said.

But she thought about calculators arriving when she was in elementary school. “I’m sure math teachers felt complete craziness when this tool that could help children add, subtract, multiply, divide came on board,” Zawalinski said.

“I think it’s kind of natural to fear because it’s going to cause us to reexamine the kinds of assignments, the kinds of assessments and our course learning outcomes in thoughtful ways,” she said. “We’re going to have to have more critical thinking.”

Tom Deans, director of the University Writing Center at the University of Connecticut, said, “writing technologies have always changed. The same people were very upset in the same kind of way when Wikipedia emerged, thinking, students are going to use Wikipedia. It’s going to be terrible. When spellcheck came out, people thought, nobody’s going to learn how to spell anymore.”

The Internet itself has been seen as a threat to learning. “People are like, this is the end of college, if anybody can look something up on the web,” Deans said. “So I think this is not unique in terms of creating a certain kind of panic as a new technology emerges.”

College assignments usually are not seeking simple responses to questions, but involve critical analysis and synthesis of ideas, Zawalinski said. “We might need to reassess the types of assessments we use to measure those things,” she said.

She said there is a concern that ChatGPT will spit out bias and stereotypes about people, not to mention inaccuracies. “So the critical evaluation is going to be equally important to any ChatGPT output that we get. We just have to adjust to that,” she said. The chatbot can’t create accurate citations, either, she said.

Besides, many tests are open book anyway, Sood said. Often, she begins with a case study that her students must assess.

“I really believe in helping students have the access to information that’s available in a book or that’s easily available on the Internet so they don’t have to remember the definition of something,” she said.

The task is, “how do they apply that knowledge to help their [future] students be successful? So once I started thinking about that, I think my approach to assigning or designing assessments really changed,” Sood said.

Sarker, who is teaching a course on artificial intelligence, said ChatGPT is getting better at filtering out offensive answers.

Many were removed and “the actual model was trained one more time to filter out some of the offensive answers it can generate,” Sarker said. “And by having the training two times, it can now generate content that is more coherent, and without having offensive information.”

“It’s drawing from an Internet that reflects humanity and humanity has bias and stereotypes,” said Associate Provost Jennifer Frederick of Yale University. However, “if you try to prompt it to say something that’s blatantly biased or racist. … It’s trained to give you some language that says it won’t do that. But you can get a little more sophisticated in your prompt and you can actually get it to do that,” she said.

Zawalinski said the nature of teacher education, with its required lesson plans and reports from field work, is not adaptable to a simple chatbot question. “ChatGPT can’t talk to them about their individual students that they’re working with,” she said.

'DON'T USE A ROBOT TO HELP YOU WRITE YOUR PAPER'

Sood said she’s not free of concerns, but that she can adjust. “The one thing I’m struggling with right now is, most of our work requires the students to think of evidence-based practices that they can use with their students. And that’s something they can simply put in ChatGPT and get a list of evidence-based practices.”

So instead of asking for a list of practices, “I’m asking them to provide me with an annotated bibliography,” she said. “That’s kind of forcing them to take it a step further, and show me where they got the information from.”

Other tactics include locking down browsers so students can’t open any windows other than the test. A program such as Respondus not only will do that but will use the computer camera to monitor students as they take exams.

Zawalinski said she doesn’t use Respondus. “My first stance is always to think differently about the types of assessments and measures I’m using,” she said. There are also responses to ChatGPT being developed, such as ZeroGPT, which detects AI-created text.

Deans said, like any tool, “you’d want to use it transparently and be open about that, and even the graduate students who we’ve talked to have been very nervous, saying this feels like it might be crossing lines.”

There would be an issue if a university hadn’t created ethical codes for using AI, as they have for plagiarism. “I’m dealing with it with my own class, this time I’m saying my students can use it for whatever they want, but they have to acknowledge it,” Deans said. There are times when he prohibits its use, however.

“The more we experimented with it, the more we realize it makes lots of mistakes,” Deans said. “So I’m also alerting students to the fact that this is a technology that will make up facts, make up citations, and so you have to use it with a critical eye. … You’re responsible for whatever you submit.”

Some professors are having students write their essays in class, he said.

Frederick, executive director of the Yale Poorvu Center for Teaching and Learning, said, “I think the main sentiment going on right now at our campus is people are paying attention. And there’s a handful of people on the leading edge who are already thinking about how to integrate this into their courses and help students learn how to use it and think critically about it.”

But Frederick said, in addition to potential problems with academic integrity, that she worries about ChatGPT using students’ inquiries as learning material.

“When you ask your students to interact with this tool, they’re actually contributing to the training and the improvement of it,” she said. “And that raises some issues that lead us to consider, probably students should be able to opt out if they don’t want to be part of that.”

Frederick said there are two messages faculty should receive from their teaching centers: “No. 1, would be to be explicitly clear about expectations for your students,” she said.

“And if this adds a new piece to your policy about what students may or may not do, how they may or may not get help when they’re preparing assignments, then be really clear about that and have conversations with your students,” she said.

Second is avoiding indiscriminate use of the chatbot: “Having students connect to things that mean something to them, requiring outlines and drafts and having a lot of … constructive feedback, peer review, breaking things down into smaller steps. Those are things that I would have recommended three years ago and they still hold.”

A way to use ChatGPT in the classroom might be to say, “give this prompt to ChatGPT and bring the product to class and then we’re going to do things with it. So the product becomes the starting point,” Frederick said.

An example might be to come up with a source on the topic that ChatGPT did not include and see what different angles students come up with. “I think the tool is changing and improving so rapidly that the critiques of this month may be very different from the critiques of two or three months from now,” she said. “So, again, there’s a lot of experimenting going on.”

Frederick said using ChatGPT could violate academic integrity policies if students were presenting AI-generated work as their own. She said teachers should tell their students, “Don’t use a robot to help you write your paper.”

“I think we need to maybe enhance our safeguards a little bit, and part of that is policy and part of that is good old-fashioned conversation about what it is we want our students to learn and getting them to think reflectively about the ways in which the things that they’re practicing are advancing their learning,” Frederick said.

“I think sometimes some students think it’s a magic bullet,” Deans said. “And the more my colleagues are talking right now, they’re playing around with it saying, Hey, it actually produced a pretty [poor] essay with some factual errors for the prompts that they were experimenting with. It made them a little less nervous about simply saying this is going to take over.”

©2023 Hartford Courant. Distributed by Tribune Content Agency, LLC.