This is not a new problem. Consider J. Robert Oppenheimer, nuclear physicist and director of the Los Alamos Laboratory, known to history as the father of the atomic bomb for his work on the Manhattan Project. Soon after the bombs were detonated over Hiroshima and Nagasaki, Oppenheimer met with President Truman to question the ethics of nuclear weapons. Truman was dismissive of the hand-wringing and later told aides he never wanted to see the “cry-baby scientist” again.
For its part, Congress has often had a difficult time engaging the chief executive on matters of technology. Presidents have a deep bench of technical expertise on which to draw; members of Congress were (and are) comparatively thinly staffed, particularly when considering the impacts of technology. That changed in 1972 with the creation of the Office of Technology Assessment (OTA), which provided Congress with independent analysis on more than two generations of emerging technologies, focusing its roughly 750 detailed reports on topics including energy, the environment, education, health care, human services, digitization, government modernization, renewable resources and space exploration. It is a research agenda as relevant today as it was 50 years ago. OTA teams comprised scientists, engineers, policy types and, by design, a humanities scholar, ethicist, poet or even shaman to bring an outsider’s point of view to the deliberations. It operated free from the influence of corporations, foundations or think tanks. The one limit on its independence was its budget, which Congress eliminated in 1995.
Other than its archives, which are housed at Princeton University, the OTA became a footnote to history until a long-shot candidate in the crowded 20-person Democratic field of presidential candidates called for its revival. Andrew Yang, a tech entrepreneur turned novice candidate who first gained notice for advocating universal basic income (UBI) to deal with economic disruptions, wants to bring the OTA back so Congress can “get in front of the true challenges of the 21st century and get Congress the information [it] needs to make intelligent decisions.”
So here we are at the end of the second decade of the 21st century. Nuclear weaponry is still a thing, more prolific and less stable than it was a half-century ago, with no overarching ethical norms in place. AI is similarly ethically vexing for its potential impacts on society, including but not limited to implicit bias in the underlying code and the soundness of the logic that informs autonomous vehicles about whose life to save or sacrifice when such decisions are forced. (Google assembled a short-lived industry group to advise on the ethics of AI; the European Commission is piloting a similar group in the hope it can be a competitive advantage.) Add the Internet of Things, robotics and automation to the mix to confront questions about the future of human worth in the absence of conventional work, and whether UBI might be part of an appropriate market response. And an existential crisis awaits when virtual and augmented reality become indistinguishable from real life — or maybe are real life. (Just ask Elon.)
We are working in and witness to a remarkable era of technological innovation and, in many cases, breakthroughs. The solutions to date are not seamless; the thinking through of wider implications is not complete. Absent bodies such as the OTA, responsibility for such work is pushed closer to the ground, to states and localities.
The chill you feel is real. It is a cold wind that blows through the cracks and the holes.