This is not the first center of its type; in fact, states like New York, Texas and Utah have launched similar hubs focused on advancing responsible use of AI technologies. However, FPF’s Center for AI is believed to be unique in its international scope.
As FPF CEO Jules Polonetsky told Government Technology, the worldwide approach is important because the challenges brought forth by the rapid rise of AI are largely global: “We deeply believe nobody has all the answers — and you really need the various constituencies debating, discussing and working together.”
Formally declared open Wednesday at the DC Privacy Forum, the center will support AI advancement by developing best practices, research, legislative tracking and other resources, and by serving as a source of information for stakeholders.
FPF, a nonprofit that aims to help shape privacy decision-making and practice as it relates to technology, has been working on issues related to AI for years because of the core data protection issues around it, its CEO said. And with the recent and rapid emergence of AI, he said FPF recognized an interest among legislators and the public in developing a greater understanding of the technology. The endeavor will include sector-specific working groups, and balance local and global perspectives to address the wide range of needs when it comes to AI.
“The public sector comes with a special set of responsibilities,” Polonetsky said, citing obligations under the federal Privacy Act and other mandates that may need to be followed. As such, he said, the largest privacy challenge for those in government is understanding how new AI mandates will work together with existing regulatory obligations.
So, what will the center’s work look like in practice? Anne J. Flanagan, FPF’s vice president for AI, said its structure is multi-stakeholder in nature, down to the leadership council that will support its work. The council will include senior officials from the public sector and academia, as well as members of the public.
Already, FPF has worked to create and make available a checklist that organizations can use to govern their AI use; an update for 2024 is expected soon. The center's work will also involve comparisons of different jurisdictional approaches to AI policy and governance, which she said should increase FPF’s capacity to convene stakeholders on these issues.
The center will prioritize building expert assessments that can be used to evaluate AI implementations, Polonetsky added. The federal government is mandating AI assessments, but as he noted, somewhat similar privacy assessments have been required of governments for years. The center will examine how new AI assessments will intersect with existing privacy assessments, which may not be AI-specific. As he put it, the center will be responsible for much of the “nitty gritty, practical, painstaking work that needs to be done.”
FPF’s new center builds on a long history of related work; it is currently a member of the U.S. AI Safety Institute at the National Institute of Standards and Technology. The nonprofit has also released a range of resources to educate stakeholders about AI, from an internal policy checklist to multiple AI training courses.
The center’s work will be supported by a mix of funding sources, its CEO said, including a 2024 grant from the National Science Foundation and the federal Department of Energy, which supports the use of privacy-enhancing technologies.