An open letter signed by nearly 1,300 people calls for AI labs to enact a six-month hiatus on “the training of AI systems more powerful than GPT-4,” referring to the tool released earlier this month by Microsoft-backed OpenAI.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” states the letter, which was also signed by 2020 presidential candidate Andrew Yang, Pinterest co-founder Evan Sharp, Turing Award winner Yoshua Bengio and numerous executives, AI researchers, scientists and others.
The letter, issued via the nonprofit Future of Life Institute, argues that during the proposed pause, labs and researchers should craft protocols for AI that would be audited by “independent outside experts.” The pause would not serve as a ban on all AI development, the letter states, but would represent “merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
The letter also urges that more attention be paid to making AI systems safer, more accurate, more transparent and more trustworthy.
The letter’s release came as many people realized they had been taken in by an AI-generated image of Pope Francis in a puffy white coat that seemed out of character for a global religious leader. Other recent AI-generated fakes depicted Donald Trump being violently arrested, raising concerns about the risks such images pose during tense political times. The spread of ChatGPT, meanwhile, continues to stoke worries about issues such as academic cheating and digital fraud.
The open letter is hardly neo-Luddite in its outlook. It holds, for instance, that AI can enrich the future of humanity, and says the proposed pause (which the letter likens to an “AI summer”) would provide time to “reap the rewards” that AI has so far brought.
But the letter also urges more coordination between AI developers and policymakers. It describes a “robust AI governance” regime that would include AI-dedicated regulators; oversight and tracking of AI systems; watermarking to help people distinguish real images from fake ones; liability for harm caused by AI; and public funding of AI safety research.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.