Julia Reidy, an assistant professor of English at Georgia Highlands College and a member of United Campus Workers (UCW), said she and others in the union are concerned that the adoption of AI-powered tools for tasks like grading, feedback and content creation may eventually eliminate faculty and staff positions across higher ed. She said AI adoption could become an excuse to downsize departments responsible for services like student advising, which are often understaffed and under-resourced.
“I worry that [professors] will be pressured to do more with insufficient tools if AI is starting to get seen as something that would be a good substitute for instructional labor, because I don’t think that it is actually a good substitute,” she said. “If we’re talking about [faculty and] staff positions, for example, like academic advising, there are some things that can be made more efficient, but what can’t be replaced is human interaction. … If we don’t support those departments with real human labor and the funding to make that happen, then it’s not going to work well.”
Melanie Barron, an organizer with the Communications Workers of America and senior campaign lead for United Campus Workers, said some higher-ed faculty and staff are concerned that AI could allow institutions to run larger and larger classes, “with less and less instructional labor.” She echoed Reidy’s concerns about the need for more human labor in higher ed.
“There’s a real value in the process of getting your higher education and interacting with a real human being [in that process],” she said.
Barron said she believes the adoption of chatbots to assist students could exacerbate existing concerns about enrollment declines and student retention, and by extension, about budgets across higher ed.
Barron cautioned against leaning too heavily on AI-powered tools like chatbots for advising and other student services, as they often fail to answer students’ questions. She said these tools frequently “run into a wall,” telling students to refer to information on a website, much like the automated customer support offered by credit card companies and other private-sector businesses. Real human beings, she added, would probably “do more than just direct you to the correct page on the website where you can find the answer to your question.”
“College students are adults, but they’re very young adults that often need a lot of help … and they’re coming from a wide variety of educational backgrounds. A lot of them don’t know how to navigate institutions effectively, and the people that work at the institution really make a big difference in making sure students are successfully enrolled in college and then stay there to finish their degrees, and so if college becomes like interacting with your credit card companies, why would you stick around?” she said. “I see it as being a real threat to retaining students over the long term, particularly working-class and first-generation students that really do need a lot of hands-on help to make it all the way through.”
Other faculty concerns center on how professors should regulate students’ use of AI, particularly in courses that teach writing, according to Reidy. She said this underscores the need for more specific guidance on how professors should police AI use, along with general training and professional development resources on how faculty should use AI tools in education.
“It’s being adopted faster than we’ve created policy and training to adapt to it. Like for example, there is disagreement within a lot of disciplines about whether students using AI is a violation of academic integrity or not, and policing their behavior is something that takes up a lot of work time and adds to our workload. There’s not a consensus or [much] good guidance about exactly how to treat that,” she said. “I think when we do have better policies and we do have better training, hopefully that will get a little bit better.”
Reidy said institutions should provide clearer guidelines on how faculty and institutions themselves should use AI in their work, and on how they should regulate it. So far, much of that has been left to individual faculty, such as at Purdue University, where instructors have been told to allow AI use at their own discretion and to explain to students what constitutes unauthorized use of generative AI tools such as ChatGPT in their coursework.
Reidy said institutions should also reassure faculty about how AI could affect their job security.
“I think protections should be put in place for people’s jobs,” she said. “The guiding principle is that the humanity of education is what makes it valuable.”