Opinion: When Artificial Intelligence Gets It Wrong

The more we learn about the potential errors, biases and cybersecurity vulnerabilities of artificial intelligence tools, the clearer it becomes that education and caution will need to be priorities going forward.

A robot errs, attempting to put a round peg in a square hole. (Shutterstock)
Over the past several years, the media has frequently reported on the power, promise and transformative capabilities of artificial intelligence in almost every discipline. In business, health care, cybersecurity and education, AI is everywhere. While it can provide myriad benefits, what happens when AI gets it wrong? This week I'm going to consider what happens when AI goes awry and how we can work to minimize negative outcomes while preserving positive ones.

POTENTIAL AI MISTAKES


As most are aware by now, there are many situations in which AI can make mistakes. Because AI systems make decisions algorithmically, they can produce flawed results that are financially catastrophic or that damage an institution’s hard-earned reputation, whether in business or education. They can be built with unintended biases or discriminate against certain groups. They can also compromise data privacy while simultaneously arming cyber attackers with sophisticated tools.

Researchers are now collecting data on how often AI gets things wrong. As Patrick Tucker, the science and technology editor of the national defense news website Defense One, wrote in January, “When … researchers put the statements to ChatGPT-3, the generative AI tool ‘agreed with incorrect statements between 4.8 percent and 26 percent of the time, depending on the statement category.’” An error rate of roughly one in four can be particularly troublesome for any discipline.

EVOLVING AI IN BIG TECH


Concerns about AI getting things wrong have been documented over the last decade. In 2015, a flaw surfaced in the Google Photos app, which used a combination of “advanced computer vision and machine learning techniques to help users collect, search and categorize photos,” as the New York Times reported at the time. Unfortunately, the app incorrectly labeled images of Black people as gorillas. The Times quoted a Google representative: “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”

Nine years later, in 2024, Google began restricting some of its Gemini AI chatbot’s capabilities after it produced factually inaccurate depictions in response to users’ generative AI prompts. There were also concerns that Gemini could negatively affect elections worldwide.

In 2016, Microsoft launched a Twitter bot called Tay to engage a younger audience. Unfortunately, the AI project was quickly taken offline after it began posting extremely inappropriate tweets.

Eight years later, in 2024, Microsoft introduced Recall, a new AI-powered feature for its Copilot+ PCs that could take screenshots of a computer desktop and archive the data. Cybersecurity professionals quickly warned that a searchable archive of a person’s computer activity would be an easy target for hackers. According to a Forbes article in June, “As a result of the public backlash, Microsoft plans to make three major updates to Recall: making Recall an opt-in experience instead of a default feature, encrypting the database, and authenticating the user through Windows Hello.” These examples illustrate the continual transformation and evolution of AI, showing its enormous promise while revealing its pitfalls.

AI CONCERNS IN EDUCATION


AI in education is already used independently by faculty and staff, as well as in institutional programs. In 2023, Turnitin, a plagiarism detection software tool, introduced a new AI writing detector. Unfortunately, end users began to see students’ work being incorrectly flagged as AI-generated. In a public statement in August 2023, Vanderbilt University noted it would be among many institutions to disable Turnitin’s AI detector because, “This feature was enabled for Turnitin customers with less than 24-hour advance notice, no option at the time to disable the feature, and, most importantly, no insight into how it works. At the time of launch, Turnitin claimed its detection tool had a 1 percent false positive rate.” The Washington Post reported in April 2023 that Turnitin claimed its detector was 98 percent accurate while cautioning end users that its flags “should be treated as an indication, not an accusation.”
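
To put a 1 percent false positive rate in perspective, a rough back-of-the-envelope calculation helps. The sketch below simply multiplies hypothetical submission volumes (illustrative numbers, not Turnitin figures) by the claimed rate:

    # Back-of-the-envelope estimate of false flags under a claimed 1% false positive rate.
    # The submission counts are hypothetical illustrations, not Turnitin data.
    false_positive_rate = 0.01  # rate Turnitin claimed at launch

    for submissions in (1_000, 50_000, 500_000):
        expected_false_flags = submissions * false_positive_rate
        print(f"{submissions:>7,} human-written submissions -> "
              f"~{expected_false_flags:,.0f} incorrectly flagged")

Even at the claimed rate, a large campus could see hundreds of students wrongly flagged, which helps explain why Vanderbilt and other institutions urged caution.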

AI AND HEALTH CARE


AI is being researched, piloted and implemented in many disciplines, though not necessarily within educational curricula, and that uneven track record can give educational institutions pause about implementing it fully. The World Health Organization issued a warning in May 2023 calling for the “rigorous oversight needed for the technologies to be used in safe, effective, and ethical ways.”

While AI can be a powerful tool in medicine, it can also pose risks without safeguards in place. The Pew Research Center found in 2023 that 60 percent of Americans would be uncomfortable with their health-care provider relying on AI. Still, AI could help doctors analyze diagnostic images more quickly and accurately, help develop innovative drugs and therapies, and serve in a consultative role to a medical team. Once AI can provide safe and proven health-care regimens, it might be of greater use in medical schools.

AI AND CYBERSECURITY


A critical use of AI is protecting against cyber attacks through sophisticated monitoring, detection and response. AI has proven to be an effective tool for protecting both our data and our privacy, given its ability to analyze massive amounts of data, detect unusual patterns and scan networks for potential weaknesses. Unfortunately, cyber criminals are using the same kinds of AI tools to learn how to circumvent those protections. Cybersecurity and Infrastructure Security Agency chief Jen Easterly told Axios in May that AI is “making it easier for anyone to become a bad guy,” and “will exacerbate the threats of cyber attacks — more sophisticated spear phishing, voice cloning, deepfakes, foreign malign influence and disinformation.”
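
As a rough illustration of the pattern-detection idea described above (not a representation of any particular security product), a minimal sketch might flag hourly network traffic that deviates sharply from a historical baseline. The baseline numbers and threshold below are entirely hypothetical:

    # Minimal sketch of statistical anomaly detection on network traffic volumes.
    # Purely illustrative; real security tooling uses far richer features and models.
    from statistics import mean, stdev

    baseline_mb_per_hour = [120, 115, 130, 125, 118, 122, 127, 119]  # hypothetical history
    mu, sigma = mean(baseline_mb_per_hour), stdev(baseline_mb_per_hour)

    def is_anomalous(observed_mb: float, threshold: float = 3.0) -> bool:
        """Flag traffic more than `threshold` standard deviations from the baseline mean."""
        return abs(observed_mb - mu) / sigma > threshold

    print(is_anomalous(124))  # False: within the normal range
    print(is_anomalous(900))  # True: a spike worth investigating

The same basic idea, comparing new activity against a learned baseline, is what attackers now try to evade by making malicious traffic look routine.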

STRATEGIES FOR REDUCING AI RISKS


To balance the enormous potential of AI against its risks, experts recommend regular audits to ensure AI tools are appropriate, accurate and free from bias. It is also important for developers to incorporate ethical principles into the creation of AI tools and processes. Educating the public and private sectors about the potential errors and risks of AI should be part of the formula for the future, which is where higher education could play a pivotal role. In May, a trio of authors in the Harvard Business Review identified four types of GenAI risk and how to mitigate them, summarizing them as misuse, misapplication, misrepresentation and misadventure. These are just some of the options for minimizing risk when using AI. The corporate and educational sectors need to work together to reduce risk so we can all benefit from the many positive opportunities AI offers.
Jim Jorstad is Senior Fellow for the Center for Digital Education and the Center for Digital Government. He is a retired emeritus interim CIO and Cyber Security Designee for the Chancellor’s Office at the University of Wisconsin-La Crosse. He served in leadership roles as director of IT client services, academic technologies and media services, providing services to over 1,500 staff and 10,000 students. Jim has experience in IT operations, teaching and learning, and social media strategy. His work has appeared on CNN, MSNBC, Forbes and NPR, and he is a recipient of the 2013 CNN iReport Spirit Award. Jim is an EDUCAUSE Leading Change Fellow and was chosen as one of the Top 30 Media Producers in the U.S.