
Are We Only 20 Years from the Singularity?

When futurist Ray Kurzweil popularized the idea that AI would one day surpass human intelligence, he predicted it would happen in 2045. With two decades to go, now is the time to get ahead on regulating it.

Ray Kurzweil has left a mark on our lives. The 76-year-old computer scientist, author, inventor and futurist is responsible for advances in technologies we take for granted: optical character recognition, text-to-speech synthesis, speech recognition and electronic keyboard instruments.

But by his own telling, all that is only prologue. He is best known, and wants to be best known, for popularizing the idea of a technological Singularity, a societal tipping point when the exponential growth of technology — culminating in an artificial general intelligence — will surpass human intelligence.

He predicts the Singularity will occur in 2045. When he first wrote about it in 2005, it was half a lifetime away. Now it’s only 20 years off. Yikes! It’s coming on strong, especially when you consider the timeline against other technological inflection points: the Internet dates back 55 years; the personal computer, 53 years; the Mac, 40 years; the World Wide Web, 35 years; and the iPhone, 17 years. Readers of a certain age will remember life before most, if not all, of them, but many have never known the world without them. The next demographic cohort, Generation Beta, will come of age in (or with) the Singularity, should Kurzweil’s predictions hold.

His critics have joked that, for Kurzweil, the Singularity has always been just around the corner, the corner being 25 years from whenever he happened to be speaking. The thing is, Kurzweil spoke to state CIOs at their national association’s annual conference 12 years ago. His predictions seemed fantastical and a long way off — especially to a group whose timelines were boxed within the four-year terms of their governors.

Kurzweil holds an optimistic and pragmatic vision of a future in which humans benefit greatly from technology’s potential to solve complex human problems, extend life and improve overall well-being. He argues against pausing AI research because the upside for fields like medicine, education and renewable energy is too significant to halt.

Understandably, AI was a dominant theme even at the 2024 NASCIO Midyear conference in April, but largely as a powerful yet still tactical tool for improving government operations and service delivery. Government Technology’s coverage detailed many of these initiatives, from streamlining operations and enhancing eligibility systems to mitigating the cybersecurity risks GenAI brings with it and creating sandboxes to safely experiment with these new tools.

The realization of the Singularity would depend on a number of factors, including further technological advancements, societal acceptance of AI and the effectiveness of regulatory frameworks.

Good work has begun in several states and localities on such frameworks to promote transparency, fairness, accountability and privacy in AI systems. In addition to myriad working groups, task forces and designated officers responsible for AI, no fewer than 429 bills were introduced in state legislatures during the 2024 session.

Beyond the breathless headlines and hype cycles, we need thoughtful consideration and a long-term perspective in forging a responsible approach to how these technologies are developed, deployed and governed. The rise of intelligent or thinking computers requires a robust ethical framework to help policymakers and practitioners anticipate and mitigate potential risks of technological displacement.

Historian and author Yuval Noah Harari warns of the loss of human agency and ethical concerns around surveillance and autonomy. Philosopher Nick Bostrom and AI researcher Stuart Russell have also raised concerns about the existential risks associated with superintelligent AI. They emphasize the need for rigorous safety measures and ethical frameworks to guide AI development, cautioning that without these, advanced AI could pose significant threats to humans. Similarly, Timnit Gebru, a widely respected leader in AI ethics research, has focused on the ethical implications of AI and data mining, particularly the biases inherent in large language models and the potential for these technologies to perpetuate social inequalities.

That the Singularity may now be only 20 years away gives us something we humans desperately need: a deadline. It is too easy and too trite to say there should be human oversight in how AI systems make decisions. We must take an active hand in deciding how AI will make decisions, and we must do so while we still can.

This story originally appeared in the July/August 2024 issue of Government Technology magazine.
Paul W. Taylor is the Senior Editor of e.Republic Editorial and of its flagship titles, Government Technology and Governing.