AI is seen as the next frontier of technology. While some marvel at its ability to automate certain tasks, many questions remain about how to govern the technology and the risks it poses.
Former Atlanta Mayor Andrew Young, civil rights activist and King Center CEO Bernice A. King and the presidents of Spelman College and Clark Atlanta University will join artificial intelligence leaders on the new council, according to an announcement made Monday at the HOPE Global Forum in Atlanta.
The announcement of the ethics council came during a fireside chat about the positives and potential pitfalls of AI between Altman and John Hope Bryant, CEO and founder of financial literacy nonprofit Operation HOPE. Bryant announced the council and said it was about being “a force for good.”
OpenAI, one of the darlings of the tech industry, is backed by more than $10 billion in funding and services from Microsoft. Last month, the AI company’s board announced it had fired Altman, saying he had not been “consistently candid in his communications with the board.”
His firing blindsided and infuriated investors and employees. Within days, Altman was offered a job by Microsoft, and nearly all of OpenAI’s employees threatened to quit if he was not reinstated. Just five days after firing Altman, the board reversed its decision, and almost all of the members who initially ousted him left.
In the wake of the tumult, the U.S. Federal Trade Commission is looking into Microsoft’s investment into OpenAI, according to Bloomberg, as is the U.K. Competition and Markets Authority.
In one of the few comments Altman made alluding to the tumultuous firing, he apologized for being a bit subdued and tired, saying “I’m sorry, it’s been a long few weeks.” The audience laughed and cheered.
“It’s definitely weird, being in the news and reading these things that just don’t seem like me at all,” Altman continued. “You know, in the spirit of having empathy for your enemies, I think people have a lot of anxiety about AI and I get that and I feel that too. And they need a person to project it onto.”
‘Profound ethical implications’
The ethics council was born out of meetings Altman had earlier this year at Clark Atlanta that were facilitated by Bryant. Details on the new initiative were sparse, but Bryant, who will serve as co-chair alongside Altman, said it will not provide a legal framework for AI. Rather, the goal is to provide ethical guidelines for the burgeoning technology.
How exactly AI will impact humanity is unknown, but it is already transforming everyday life, inspiring both excitement and worry. Companies are trying to get ahead of potential backlash to AI products by standing up their own ethics initiatives, said Paul Root Wolpe, director of the Center for Ethics at Emory University.
Unlike many other technological evolutions, ethics are intrinsic to AI because in most cases the technology must make decisions that impact humans, Wolpe said, as with self-driving cars, mortgage algorithms, automated dermatology screenings and more.
“Lots and lots of AI has profound ethical implications and that’s why the AI industry has become so interested in AI ethics, because they can’t get around it,” Wolpe said in an interview before the announcement was made.
Such a council can bring different viewpoints to potentially harmful and biased ways AI could be used, particularly against people of color, because “the definition of harm is not always clear,” Wolpe said. But such councils are also good public relations for a company trying to make sure its products don’t cause public outrage.
Ethical issues in AI are vast, Wolpe said, from transparency in how the technology makes decisions, to false videos and photos AI can conjure up that are made to look real, called deepfakes, to autonomous weapons systems.
Altman acknowledged that it is scary to think about AI going wrong. He said the technology becoming a powerful computer hacking tool and the development of bioweapons keep him up at night.
But he said he was also scared of the technology being developed in a vacuum.
“The people that are going to be most affected by the technology deserve the biggest voice in what it’s going to do,” Altman said. “If you don’t put it out in the world, if you don’t let people use it, if you don’t show it to people and say ‘Hey, give us feedback,’ you just can’t do that.”
A little over a year after his company launched ChatGPT into the world, almost overnight changing how people interacted with AI, government regulators are starting to enact guardrails.
On Friday, the European Union laid out a sweeping new law that is one of the first major attempts to govern the technology. While the policies would specifically apply to AI systems in the EU market, they will likely impact the tech in the rest of the world.
The U.S. trails behind the EU in regulating AI, though many of the big players are American companies. In October, President Joe Biden issued an executive order on standards for safe and trustworthy AI, but Wolpe noted it “was a blueprint to create regulations, but was not itself a regulation.”
Atlanta leaders will now have a seat at the table with one of the biggest AI players, though it remains to be seen what impact the council will have.
“We want to make sure that the opportunities of AI are also shared with people at the bottom of the economic pyramid here and around the world,” Bryant told The Atlanta Journal-Constitution.
Now in its 10th year, the HOPE Global Forum brings thousands of academics and leaders in business, faith and philanthropy to discuss how to make the economy work better for the underserved. Bryant calls the forum a “Black and brown Davos,” referring to the influential annual World Economic Forum held in Switzerland.
© 2023 The Atlanta Journal-Constitution. Distributed by Tribune Content Agency, LLC.