However, there are significant warnings about the potential dangers of AI. Even comedian Jon Stewart of “The Daily Show” recently weighed in, demanding, “So I want your assurance that AI isn’t removing the human from the loop.” He raised the possibility that humans will lose their jobs to AI technology. There are other concerns about the misuse of AI around privacy, information accuracy, cybersecurity and deepfakes, and it is important to consider how we can protect the educational process, our jobs and our personal lives from such foreseeable risks.
PROS: PERSONALIZATION, EFFICIENCY, VERSATILITY
Every sector can benefit from AI in some manner. In a February blog post about three major AI trends to watch in 2024, Microsoft described using AI to build more accurate tools for predicting the weather, estimating carbon emissions and other functions to help farmers be more efficient and mitigate climate change.
Another dramatic growth area is multimodal AI, which combines distinct types of data, such as text, graphics and multimedia, to deliver more comprehensive results. As IBM put it in a February blog post, “The most immediate benefit of multimodal AI is more intuitive, versatile AI applications and virtual assistants. Users can, for example, ask about an image and receive a natural language answer, or ask out loud for instructions to repair something and receive visual aids alongside step-by-step text instructions.”
CONS: JOB DISPLACEMENT, BIAS, OVERRELIANCE
In an interview with Lester Holt in January 2024, Microsoft CEO Satya Nadella discussed the company’s move into the world of AI, the promises and risks, and whether it will displace workers.
“What we have learned, even as a tech industry, is that we have to simultaneously address both of these: How do you really amplify the benefits and dampen the unintended consequences?” Nadella said. “Let us make sure that the technology ultimately is just a tool. This is not about replacing the human in the loop. In fact, it’s about empowering the human.”
IBM CEO Arvind Krishna went a step further in a CNBC interview last year when explaining this workforce shift, saying the replacement of some jobs is inevitable.
“Generative AI can help make every enterprise process more productive, yes. That means you can get the same work done with fewer people. That’s just the nature of productivity,” he said. “We normally churn 5, 6 percent a year. Over five years, that’s about 30 percent of those roles (back office, white-collar workers) will not need to get backfilled.”
With the information that AI collects and synthesizes, there is a real potential for introducing bias and increasing inequality. One organization working to inform the public on potential AI biases is OECD.AI, an extension of the Organisation for Economic Co-operation and Development. OECD.AI provides resources to people involved with AI policy creation. The group explains on its website that “AI risks fuel social anxieties worldwide, with some already materializing: bias and discrimination, the polarization of opinions at scale, the automation of highly skilled jobs, and the concentration of power in the hands of a few.” These biases can be propagated globally throughout educational and corporate environments.
In education, overreliance on AI-created information and conclusions can also diminish students’ critical thinking and problem-solving, and increase cheating. Students might use AI to solve homework problems or take quizzes. In an August 2023 opinion piece in the research journal Education Next, American Enterprise Institute senior fellow John Bailey warned, “Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.” And this is to say nothing of the increasing risks to data privacy, with the proliferation of digital tools processing and recording information.
AI WORLD OF EMOTIONAL INTELLIGENCE
When looking toward the future of AI, a new horizon being discussed is emotional intelligence. In a January 2024 interview with CNN’s Fareed Zakaria, entrepreneur and CEO of Microsoft AI Mustafa Suleyman said virtual assistants will eventually help make some of our most personal decisions.
“Think of AIs in the future as personal assistants,” he said. “Everybody is going to have a conversational interface, which not just represents you and is there to help you and support you, but can also teach you, is actually going to help you day to day, to be better at your job, to make important life decisions — when you’re thinking about whether to relocate to a different city, or whether it might be time to put your elderly parents into a care home, or whether to go ahead and have a serious operation that you’re thinking about.”
Suleyman said his new company, Inflection, created an application called Pi (Personal Intelligence). He said, “We’ve specifically conditioned it to be good at emotional intelligence. It’s a great listener, it’s very even-handed, it presents both sides of an argument, it asks you great questions, it tries to remember what you’ve said in the past. … These elements of empathy are actually quite learnable by the AI and will be incredibly valuable.”
PROTECTIONS FROM AI
Beyond educational or corporate environments, we should also consider how to protect ourselves in our daily lives from AI as the dangers of identity theft, misinformation and deepfakes increase. We need to keep our data and identities secure, and multifactor authentication has become necessary for nearly every transaction we make. As social media users, we need to be careful about what we share publicly and review all of our application privacy settings. In considering the implications of AI, a growing chorus of experts is calling for us to first understand it before moving ahead too quickly. In a June 2023 column for Forbes, physicist Guneeta Singh Bhalla wrote, “Only when scientists are able to understand how intelligence works and predict how it will evolve, can we develop systems safely and for the benefit of humanity and life on our planet. For this to happen, we need both a pause in further public releases of such technologies and a very large investment in understanding them.”