
Opinion: Higher Ed’s Reasons to Both Embrace and Fear AI

Artificial intelligence isn’t going anywhere, so we might as well face it with our eyes open. It brings with it an abundance of potential use cases and risks alike, as job displacement is the flip side of efficiency.

Much of the writing about artificial intelligence in higher education has been about the tool’s potential to enhance student learning, teaching strategies and the entire education process. Many say it might help identify and track students who would benefit from additional support and resources.

However, there are significant warnings about the potential dangers of AI. Even comedian Jon Stewart of “The Daily Show” recently opined about these, demanding, “So I want your assurance that AI isn’t removing the human from the loop.” He questioned whether humans will lose their jobs to AI technology. There are other concerns about the misuse of AI around privacy, information accuracy, cybersecurity and deepfakes, and it is important to consider how we can protect the educational process, our jobs and our personal lives from such foreseeable risks.

PROS: PERSONALIZATION, EFFICIENCY, VERSATILITY


In education, being able to personalize the learning experience around a student’s needs is uniquely advantageous. AI can help make teaching easier while freeing up more time to work directly with students. As the AI software development company LeewayHertz points out on its website, “With the help of AI-powered chatbots and virtual assistants, educators can provide students with immediate support and assistance outside the classroom, helping them stay engaged and motivated.”

Every sector can benefit from AI in some manner. In a February blog post about three major AI trends to watch in 2024, Microsoft described using AI to build more accurate tools for predicting the weather, estimating carbon emissions and other functions to help farmers be more efficient and mitigate climate change.

Another dramatic growth area is multimodal AI, which combines many distinct types of data — text, graphics and multimedia — to deliver more comprehensive results. As IBM put it in a February blog post, “The most immediate benefit of multimodal AI is more intuitive, versatile AI applications and virtual assistants. Users can, for example, ask about an image and receive a natural language answer, or ask out loud for instructions to repair something and receive visual aids alongside step-by-step text instructions.”

CONS: JOB DISPLACEMENT, BIAS, OVERRELIANCE


While many experts promote and predict positive outcomes, concerns about AI technology abound, particularly regarding a significant shift toward a reduced human workforce.

In an interview with Lester Holt in January 2024, Microsoft CEO Satya Nadella discussed the company’s move into the world of AI, the promises and risks, and whether it will displace workers.

“What we have learned, even as a tech industry, is that we have to simultaneously address both of these: How do you really amplify the benefits and dampen the unintended consequences?” Nadella said. “Let us make sure that the technology ultimately is just a tool. This is not about replacing the human in the loop. In fact, it’s about empowering the human.”

IBM CEO Arvind Krishna went a step further in a CNBC interview last year when explaining this workforce shift, saying the replacement of some jobs is inevitable.

“Generative AI can help make every enterprise process more productive, yes. That means you can get the same work done with fewer people. That’s just the nature of productivity,” he said. “We normally churn 5, 6 percent a year. Over five years, that’s about 30 percent of those roles (back office, white-collar workers) will not need to get backfilled.”

With the information that AI collects and synthesizes, there is a real potential for introducing bias and increasing inequality. One organization working to inform the public about potential AI biases is OECD.AI, an extension of the Organisation for Economic Co-operation and Development that provides resources to people involved in AI policy creation. The group explains on its website that “AI risks fuel social anxieties worldwide, with some already materializing: bias and discrimination, the polarization of opinions at scale, the automation of highly skilled jobs, and the concentration of power in the hands of a few.” These biases can be propagated globally throughout educational and corporate environments.

In education, overreliance on AI-generated information and conclusions can also erode students’ critical thinking and problem solving, and increase cheating. Students might use AI to solve homework problems or take quizzes. In an August 2023 opinion piece in the research journal Education Next, American Enterprise Institute senior fellow John Bailey warned, “Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.” And this is to say nothing of the growing risks to data privacy as digital tools that process and record information proliferate.

AI WORLD OF EMOTIONAL INTELLIGENCE


When looking toward the future of AI, a new horizon being discussed is emotional intelligence. In a January 2024 interview with CNN’s Fareed Zakaria, entrepreneur and CEO of Microsoft AI Mustafa Suleyman said virtual assistants will eventually help make some of our most personal decisions.

“Think of AIs in the future as personal assistants,” he said. “Everybody is going to have a conversational interface, which not just represents you and is there to help you and support you, but can also teach you, is actually going to help you day to day, to be better at your job, to make important life decisions — when you’re thinking about whether to relocate to a different city, or whether it might be time to put your elderly parents into a care home, or whether to go ahead and have a serious operation that you’re thinking about.”

Suleyman said his new company, Inflection, created an application called Pi (Personal Intelligence). He said, “We’ve specifically conditioned it to be good at emotional intelligence. It’s a great listener, it’s very even-handed, it presents both sides of an argument, it asks you great questions, it tries to remember what you’ve said in the past. … These elements of empathy are actually quite learnable by the AI and will be incredibly valuable.”

PROTECTIONS FROM AI


Beyond educational or corporate environments, we should also consider how to protect ourselves in our daily lives from AI as the dangers of identity theft, misinformation and deepfakes increase. We need to ensure our data and identity are secure and up to date, and multifactor authentication is becoming necessary in nearly every transaction we make. As social media users, we need to be careful about what we share publicly, and review all of our application privacy settings.

In considering the implications of AI, a growing chorus of experts is calling for us to first understand it before moving ahead too quickly. In a June 2023 column for Forbes, physicist Guneeta Singh Bhalla wrote, “Only when scientists are able to understand how intelligence works and predict how it will evolve, can we develop systems safely and for the benefit of humanity and life on our planet. For this to happen, we need both a pause in further public releases of such technologies and a very large investment in understanding them.”

Jim Jorstad is Senior Fellow for the Center for Digital Education and the Center for Digital Government. He is a retired emeritus interim CIO and Cyber Security Designee for the Chancellor’s Office at the University of Wisconsin-La Crosse. He served in leadership roles as director of IT client services, academic technologies and media services, providing services to over 1,500 staff and 10,000 students. Jim has experience in IT operations, teaching and learning, and social media strategy. His work has appeared on CNN, MSNBC, Forbes and NPR, and he is a recipient of the 2013 CNN iReport Spirit Award. Jim is an EDUCAUSE Leading Change Fellow and was chosen as one of the Top 30 Media Producers in the U.S.