Last November I talked about the ethical challenges of AI in college admissions and student essays, and I recently learned of educators using AI to write letters of recommendation. In my work with educators who are confused about the topic, I have found a few useful ways to explain AI basics within this broad and complex subject.
AI exists on a continuum, from very simple to very complex. We’ve had and used AI for a long time. The new AI is different — very different.
There are many definitions of AI. I find it simplest to understand AI as the use of machines (computers) to mimic human intelligence. Fully convincing AI would make it impossible to tell whether one is communicating with a human or a program, a standard known as the Turing test.
For some time, we’ve had devices and programs that appeared to think. We don’t even notice all of the simple devices we use that contain primitive AI; it’s built into our printers and our cars. The limit of primitive AI, the old AI, is that it cannot learn to handle anything beyond what its programmers anticipated. My printer uses its built-in computer code to function and react. The people who created it tried to think of all the ways a printer could go wrong, and they programmed detectors to monitor its operation and condition. Now, if the printer company learns ways to fix or improve the printer, it sends me a software update. My printer can only do what the programmers tell it to do.
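To make the idea concrete, here is a minimal sketch of that kind of programmed intelligence. Nothing below comes from a real printer; the fault names, messages and logic are all invented for illustration.

```python
# A toy example of "old AI": fixed rules written in advance by a programmer.
# Every fault and response here is hypothetical; the point is that the
# program can only react to situations its authors anticipated.

KNOWN_FAULTS = {
    "paper_jam": "Open tray B and clear the jam.",
    "low_ink": "Replace the cartridge.",
    "overheat": "Pause printing and let the unit cool down.",
}

def handle_status(status: str) -> str:
    """Return a canned response for a known fault, or give up."""
    if status in KNOWN_FAULTS:
        return KNOWN_FAULTS[status]
    # Anything the programmers didn't foresee: the device cannot learn a fix.
    return "Unknown error: contact support."

print(handle_status("paper_jam"))    # Open tray B and clear the jam.
print(handle_status("weird_noise"))  # Unknown error: contact support.
```

The only way this program ever handles “weird_noise” better is if a human adds a new rule and ships an update.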
Aha! These devices can’t improve their lot in life; they depend on learning done by people outside the device to update their intelligence. My phone’s camera doesn’t improve its own ability to take pictures, but programmers figure out how to improve it, and I get the software update.
You get the point. Let’s distinguish programmed intelligence from the new AI, which has learning, problem-solving and improvement built in. Experts distinguish two forms of new AI: narrow and general.
- Narrow AI operates in a limited field and doesn’t apply its learning to new fields. An AI program that learns how to trade stocks cannot use that learning to detect diseases in medical specimens.
- General AI would learn and understand things that can be applied in other areas, just as humans use their general intelligence to apply what they know broadly in life. General AI is not here yet, and it is the source of hand-wringing about the future by Musk and Gates, which is beyond the scope of this piece.
As a simple example, there are many math programs that help students learn addition, subtraction and more. Not all are good or effective. Good programs have a form of AI that can use a student’s performance to tailor the program to help the student. It can re-drill missed facts, speed students along when they are rapidly “getting it,” and slow down when they miss too many. These programs can act like human partners because the programmer thought of all the possible student responses and addressed them. But the software doesn’t improve itself or learn to do a better job when a student is having trouble; the programmers use external observations and experiences to update the software. New-AI math programs would improve over time, learning from their successes and failures.
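A rough sketch of that fixed adaptive logic might look like the following. The five-answer window and the speed-up and slow-down thresholds are invented for this example, not drawn from any real product.

```python
from collections import deque

class DrillSession:
    """Toy adaptive math drill: fixed rules, written in advance,
    that adapt to the student but never rewrite themselves."""

    def __init__(self, facts):
        self.queue = deque(facts)        # facts still to practice
        self.recent = deque(maxlen=5)    # results of the last five answers

    def record(self, fact, correct):
        self.recent.append(correct)
        if not correct:
            self.queue.append(fact)      # re-drill a missed fact later

    def pace(self):
        if len(self.recent) < 5:
            return "steady"
        hits = sum(self.recent)
        if hits >= 4:
            return "speed up"            # student is rapidly "getting it"
        if hits <= 2:
            return "slow down"           # too many misses
        return "steady"

session = DrillSession(["3+4", "6+7", "8+5"])
session.record("6+7", correct=False)     # "6+7" goes back into the queue
print(session.pace())                    # "steady" until five answers are in
```

However lifelike the pacing feels to the student, every branch was decided by a programmer ahead of time.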
Other widely used forms of adaptive software are also examples of old AI. Computerized adaptive testing uses built-in algorithms to personalize the test. If a student is answering easy questions correctly, it can rapidly move up in difficulty until it finds the area where the student is answering incorrectly. It finds more efficiently (and less painfully) what areas the student needs to master before moving on. This is AI, but it’s not the new AI.
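Here is a toy version of that difficulty “staircase.” Real adaptive tests rely on far more sophisticated statistics, so treat this only as a sketch of the underlying idea.

```python
def next_difficulty(level, correct, lo=1, hi=10):
    """Step difficulty up after a correct answer, down after a miss."""
    step = 1 if correct else -1
    return max(lo, min(hi, level + step))

level = 3
for answer in [True, True, True, False, True, False]:
    level = next_difficulty(level, answer)
print(level)  # the level settles near where the student starts missing
```

Instead of marching every student through every question, the test homes in on the edge of what the student can do.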
There are many education programs that take old AI to its limit, aiding teachers in preparing classroom instruction or working with their students in a supplemental fashion. Other programs give students tools built on new AI to accomplish a whole range of tasks, including the creation of virtual reality, videos, presentations, music and art.
For my purposes, a simplistic definition of new AI is software that learns and uses its learning to get better at things. It gets better and better at detecting cancer in lab samples. It learns to play chess and continues to improve because of what it has learned and experienced.
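To make “learning from successes and failures” concrete, here is a minimal sketch of one classic learning technique, an epsilon-greedy learner that discovers which strategy pays off by trial and error. The strategies and payoff numbers are invented, and real systems such as chess engines and cancer detectors use far more powerful methods, but the contrast with the printer holds: no programmer ranked the strategies in advance.

```python
import random

strategies = ["A", "B", "C"]
value = {s: 0.0 for s in strategies}   # the program's learned estimates
count = {s: 0 for s in strategies}

def true_payoff(s):
    # Hidden from the learner: strategy "C" succeeds most often.
    return random.random() < {"A": 0.2, "B": 0.5, "C": 0.8}[s]

for trial in range(1000):
    if random.random() < 0.1:                   # occasionally explore
        s = random.choice(strategies)
    else:                                       # usually exploit what it knows
        s = max(strategies, key=value.get)
    reward = 1.0 if true_payoff(s) else 0.0
    count[s] += 1
    value[s] += (reward - value[s]) / count[s]  # running-average update

print(max(strategies, key=value.get))           # almost always "C"
```

The program starts out knowing nothing about which strategy is best and ends up favoring the right one, purely from its own wins and losses.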
We’ve seen the headlines about all the things that AI can do that humans can’t. I asked ChatGPT about other fields of new AI, and it talked about machine learning, deep learning and natural language processing. We know that educators are using new AI to create lesson plans and syllabi. More broadly, AI can be used to help students learn more, learn better and learn faster. It can personalize learning and tutor students well, while behind the scenes it can automate school operation, coordinate educational resources to optimize the student’s day, and much more. In July, the investment advice company The Motley Fool listed five uses of AI in education — translation and language learning, writing, early childhood education, teaching and tutoring. It is impossible to list all of the ways AI could personalize and improve education for all.
We need to be aware that all technology has a downside, and AI in schools raises many concerns that can’t be brushed off. Cost and digital equity are two factors, along with bias baked into the very heart of a program and its algorithms, deepfakes, intellectual property ownership, Big Brother surveillance concerns, and threats to the humanity and heart of education. These and many other topics will have to be addressed. This isn’t the first time a new technology has raised ethical concerns, but there is something new about this one’s implications for our future.
This is a brave new world, and AI is here to stay. As educators, we must do our best to fully understand these tools and their use in our schools and classrooms, both to benefit our students and to teach them to use AI properly.