Generative artificial intelligence (GenAI) started the year in relative obscurity, according to Google Trends. It began to gain traction in the second quarter and peaked in June. By another measure, the Gartner Hype Cycle, GenAI sits at the “Peak of Inflated Expectations” this year. For their part, the views of public CIOs would flesh out a scatter plot — the enthusiasts, the pragmatists, the dystopians, the skeptics and the cynics — all represented from the center to the edges of the chart. We read accounts and forecasts of GenAI’s potential for good and for ill in everything from pharmaceutical research and life sciences to fintech, public safety, creative industries and, increasingly, government.
After such a year, it is difficult to say anything novel or useful about GenAI. But what if we flipped the discussion and asked what the rise of GenAI tells us about the eponymous laws we have long used to make sense of where we are and where we are going? Let’s consider each of these laws in turn.
Exhibit A is Amara’s law, named for scientist, researcher and former Institute for the Future president Roy Amara. He is best known for observing, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Boy howdy, does that ever apply to digital technologies in general and artificial intelligence in particular. AI may change the world — curing cancer, reversing climate change or taking all our jobs while creating a bunch of new ones — but mostly in the years and decades ahead. In the near term, it casts shadows for us to worry about, plan around and get distracted by, even as we experiment and put it to work.
In many respects, GenAI represents a victory lap for Moore’s law, Intel co-founder Gordon Moore’s formulation about the exponential growth of computational power. Originally coined in 1965 around the doubling of transistors on microchips every couple of years, the law has bumped up against the physical limits of silicon-based technologies. Yet as AI models grow massively in size, from millions to billions and even trillions of parameters, the underlying hardware continues to keep pace even as the margins narrow. Chip maker Nvidia, with its advanced graphics processing units (GPUs) and the tensor cores built into them that are optimized for AI tasks, pushes the exponential increase in computational power beyond what transistor density alone can deliver.
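For the arithmetic-minded, the law’s popular form reduces to a single expression. Below is a minimal sketch in Python, assuming the idealized two-year doubling period and taking the Intel 4004’s roughly 2,300 transistors (1971) as a baseline; the function name and the printed figures are illustrative, not industry measurements.

```python
# Moore's law in its popular, idealized form: transistor counts
# double roughly every two years. Baseline: Intel 4004 (1971),
# about 2,300 transistors. Figures are illustrative only.
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Idealized transistor count for a given year under Moore's law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1985, 2000, 2023):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Run forward to 2023, the idealized curve lands in the neighborhood of 150 billion transistors, not far from where today’s largest mainstream processors actually landed.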
Not far behind, but perhaps in a supporting role, is Metcalfe’s law. Named for Ethernet inventor Robert Metcalfe, it holds that the value of a network is proportional to the square of the number of its users. When the campaign for digital government was young, broadband penetration reached 51 percent — giving advocates the opportunity to claim that government could then serve a “digital majority.” As of last year, that figure had reached 90 percent. The stakes are high for the remaining 10 percent, often characterized as underserved communities, including low-income and racialized populations along with people who have chosen not to engage in a connected world. By extension of Metcalfe’s law, more user interactions contribute to AI’s knowledge, which relies on iterative feedback for fine-tuning and improvement. To generative models, the silence of marginalized voices is deafening: they cannot be trained on what isn’t there, which increases the risk of unconscious bias and skewed results.
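To put rough numbers on that, here is a back-of-the-envelope sketch of Metcalfe’s law applied to the penetration figures above. The population size is a hypothetical placeholder, since only the ratio matters.

```python
# Metcalfe's law: a network's value is proportional to the square
# of its user count. Applied here to the 51% -> 90% broadband
# penetration figures; the population of 100 is a placeholder.
def network_value(users):
    """Network value grows with the square of the number of users."""
    return users ** 2

population = 100  # hypothetical; only the ratio of then to now matters
then, now = 0.51 * population, 0.90 * population
print(f"Relative gain in network value: {network_value(now) / network_value(then):.1f}x")
```

By the law’s logic, moving from a 51 percent to a 90 percent connected share roughly triples the value of the network, one way to quantify both what the unconnected are missing and what models trained on that network never hear.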
Those risks are compounded when considered against a catchphrase of our time — “data is the new oil” — underscoring data’s role as the critical resource of the 21st-century economy. To be absent from the data sets through which AI understands the world is to go missing entirely.
And lest we allow ourselves to think there will be a break in this exponential growth, Hartmut Neven of Google’s Quantum Artificial Intelligence Lab would remind us of the law named for him — Neven’s law — which holds that quantum computers are gaining computational power at a doubly exponential rate, a trajectory that, while still a long way off, has the potential to dwarf Moore’s law. Quantum computing raises the specter of sentient AI that would eventually come to think and feel like humans.
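“Doubly exponential” is easier to feel than to read. The short sketch below contrasts the two growth curves; the unitless steps stand in for hardware generations and are calibrated to nothing in particular.

```python
# Exponential growth (Moore) versus doubly exponential growth
# (Neven, as described above). Steps are unitless and illustrative.
for step in range(1, 6):
    exponential = 2 ** step                # doubles each step
    doubly_exponential = 2 ** (2 ** step)  # the exponent itself doubles
    print(f"step {step}: exponential ~{exponential:>2}, "
          f"doubly exponential ~{doubly_exponential:,}")
```

By the fifth step, the exponential curve has reached 32 while the doubly exponential one has passed 4 billion, which is the scale of the claim being made for quantum machines.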
Indigenous Canadian writer Alicia Elliott is not working on a law per se but on reinventing the Haudenosaunee creation story. Her people know what it means to wrest back their culture and texts after losing them to outside forces. Looking to the future, she says there is nothing more important than to define and defend what it means to be human. Policymakers would do well to consider all that this entails.
This story originally appeared in the December issue of Government Technology magazine.