One of the most exciting things to me about large language models is that they’ve nearly broken the language barrier between humans and computers. Users no longer need technical acumen to wield computers for complex tasks, and some computers can now communicate with users better than some users can with each other. Generative AI is almost like an API between people and software, one that will, especially as it improves, make it increasingly easy for the two to interface and collaborate. In the next five to 10 years, I expect generative AIs to improve quickly and become more portable and accessible, at least in the sense of appearing on more of the platforms intertwined with our lives. It may become increasingly easy to forget, and therefore critical to remember and teach people, that AI is not an entity but a tool, and the user is ultimately responsible for what they do with it.
Generative AIs are already capable of doing most of the menial mental labor we ask of them, like writing our homework or emails. Once they can recognize specific voices accurately enough, and tech companies write the necessary APIs, AI tools may become voice assistants embedded in phones and watches that we can verbally instruct to do anything we now do on a computer — file taxes, book a hotel, move money from a bank account, browse the New York Times. If history is any guide, this change will be significant but gradual and easily overlooked, much as we think nothing of Google and FaceTime today.
These are the unregulated Wild West days of AI, when we’re still figuring it out, so it’s a time to contemplate possibilities.
Of course, these revolutions always have a flip side. I share the common concerns of educators about cheating, and even more so the concerns about what could happen to our information ecosystem as the cost of generating persuasive falsehoods drops virtually to zero. What happens when anyone, anywhere, can produce 1,000 professional-sounding pseudoscientific studies with 100 bogus sources, or convincing phishing emails, or financial scams targeting seniors, every day? What happens when it takes no money or expertise to create deepfakes at such velocity and quality that video and photographic evidence are no longer admissible in court because they’re impossible to verify? Could that necessitate a whole new mode of gatekeeping, and what would that do to already-fraying public trust in institutions? Eventually AI may be able to “generate” designs for previously unachievable weapons, or solutions to complex scientific problems. One has to imagine an AI becoming as good at military strategy as modern computers are at chess. These are serious problems for society to navigate, but so are cybersecurity and accessibility, and I think they’re surmountable.
Part of discovering what generative AI can do will be discovering what it can’t. Even the most advanced technologies tend not to be as limitless in practice as they first appear in theory, as they bump up against the infinite complexity and confounding variables of reality. In the case of generative AI, for instance, no matter how accurate or competent it becomes at synthesizing text or imagery, it has no senses or experience of the world and is therefore incapable of subjectivity, which rules it out as a replacement for creative writers and artists. I say this having seen the paintings and poems generated by AI tools; seeing them has only reinforced my conviction. But as a tool, generative AI will revolutionize the creative industries nonetheless, and those students who know how to wield it will have a leg up on those who don’t. So it will be the job of educators to help prepare them for that world.
I don’t envy educators their task of trying to see around this corner. While it’s true that we have historically adapted to new technology and will continue to do so, generative AI carries a potential for exponential change, especially if it becomes so good at coding that it can design its own successor. In that scenario, all bets are off, although I’m skeptical for aforementioned reasons. In any case, as with the early days of the Internet, the biggest leaps and bounds in the status quo are still ahead of us. It’s not too early to imagine what’s possible and not too late to avert disaster. The next Steve Jobs, Jeff Bezos and Mark Zuckerberg are sitting in classrooms somewhere right now. I hope they’re learning something about responsibility.
This article originally appeared in the September issue of Government Technology magazine.