Recent Accenture research found encouraging signs in citizens' views of government use of AI. Nearly two-thirds (62 percent) of respondents saw government as at least as qualified as the private sector to deliver AI-enabled services, and more than half (56 percent) said they support government use of AI to deliver new or improved services more efficiently. At the same time, only 35 percent said they were confident that government use of AI would be ethical and responsible, 33 percent said they were not confident, and the remaining roughly one-third were unsure.
Behind these numbers is the fact that however objective we may intend our technology to be, it is ultimately shaped by the people who build and manage it, the decisions they make, and the data that feeds it. As a result, that data can reflect pre-existing social and cultural biases, whether intentional or not. To earn public trust, government agencies that are piloting or forging ahead with AI initiatives must focus on avoiding misuse and unintended consequences of AI deployments. It is also important to recognize that fairness is a complex, context-dependent concept. Implementing strong governance principles means thinking through these issues (fairness, bias, agency, accountability, transparency) and adopting the methodologies and tools to arrive at answers aligned with an organization's core values.
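One concrete way an agency can act on this is to audit training data for representation skew before a model is ever built. The Python sketch below is a minimal illustration under stated assumptions: the records, the "group" attribute, the census-style baseline shares, and the 5 percent tolerance are all hypothetical placeholders, not Accenture's or any agency's actual methodology.

from collections import Counter

# Hypothetical records from a benefits-application dataset (illustrative only);
# a real audit would draw on an agency's historical case data.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

# Assumed census-style baseline: each group is half the service population.
population_share = {"A": 0.5, "B": 0.5}

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

# Flag any group whose share of the training data drifts from the baseline
# by more than a tolerance; the tolerance itself is a policy decision.
TOLERANCE = 0.05
for group, baseline in population_share.items():
    share = counts[group] / total
    if abs(share - baseline) > TOLERANCE:
        print(f"Group {group}: {share:.0%} of data vs. {baseline:.0%} baseline")

Catching skew like this before training is far cheaper than remediating a biased model after deployment, though representative data alone does not guarantee fair outcomes.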
What should public-sector organizations be doing to avoid unintended consequences from AI? State governments should work together to build consensus on best practices and governance, both for government's own use of AI and for oversight of the private sector's use of it. As the National Association of State Chief Information Officers (NASCIO) highlighted in its November 2018 report, "Ready for Prime Time: State Governments Tune in to Artificial Intelligence," state and agency CIOs must help drive the dialogue about workforce disruption and help plan for how these technologies will alter, disrupt or eliminate the work that people do on a daily basis.
By coming together to establish frameworks and standards for AI deployments, state CIOs can help deliver responsible systems that steer clear of unintended outcomes. They can help ensure that, within agencies, data scientists, AI developers and non-IT government program experts work closely together during AI development to systematically address key requirements around AI "fairness." A strong AI governance framework is crucial and should include ongoing assessments and rigor around responsible and ethical design and use as AI fast becomes a major tool for citizen services and other government operations. Clear governance around what these systems will be used for, how fairness will be measured in that context, what data will power them, how users will interact with them, and how adjustments will be made when appropriate is essential to promoting trust and transparency.
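To make "how fairness will be measured in that context" concrete, here is one minimal sketch of a widely used metric, the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. The decisions, group labels, and the 0.8 "four-fifths rule" threshold below are illustrative assumptions, not a prescribed standard; choosing the right metric and threshold is exactly the kind of context-dependent decision a governance framework must settle.

def disparate_impact(decisions, groups, privileged):
    """Favorable-outcome rate of the unprivileged group(s) divided by
    that of the privileged group. Values near 1.0 indicate parity."""
    priv = [d for d, g in zip(decisions, groups) if g == privileged]
    unpriv = [d for d, g in zip(decisions, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical model outputs (1 = service granted) and group labels.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8,
# but the appropriate threshold is a context-dependent policy choice.
if ratio < 0.8:
    print("Potential adverse impact: review before deployment.")

A governance process would run checks like this continuously, not just at launch, so that drift in the data or the model surfaces as a measurable alert rather than a headline.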
AI is already part of our daily lives through many core business services and operations, and public-sector leaders are now pushing forward to assess, pilot and, in some instances, deploy AI for citizen services. The unfamiliar and complex nature of AI leaves many people concerned about whether government will use it responsibly and ethically. But when AI is implemented carefully, with state CIOs collaborating and helping to drive the dialogue on best practices, it should in fact strengthen the fairness of government programs by reducing human error and unintended bias.