When we look at how AI will transform the public sector, we've already seen some amazing things, and the future is limited only by our imagination. In transportation, I've seen public vehicles equipped with sensors that detect vibrations in the road; that data reveals the severity of potholes and allows the most impactful repairs to be prioritized automatically.
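As a minimal sketch of what that prioritization step might look like, the snippet below ranks road segments by combining sensor-estimated severity with traffic volume. All field names, scales, and weights here are illustrative assumptions, not details of any real deployment.

```python
# Hypothetical sketch: rank road segments for repair by combining
# sensor-estimated pothole severity with daily traffic volume.
# Field names and the impact formula are illustrative assumptions.

def prioritize_repairs(segments):
    """Return segments sorted by estimated repair impact, highest first.

    Each segment is a dict with:
      'id'            - road segment identifier
      'severity'      - pothole severity inferred from vibration data (0-10)
      'daily_traffic' - vehicles per day on the segment
    """
    def impact(seg):
        # Impact = how bad the pothole is times how many drivers hit it.
        return seg["severity"] * seg["daily_traffic"]

    return sorted(segments, key=impact, reverse=True)

segments = [
    {"id": "Elm St 100-200",  "severity": 8, "daily_traffic": 500},
    {"id": "Main St 400-500", "severity": 5, "daily_traffic": 12000},
    {"id": "Oak Ave 10-90",   "severity": 9, "daily_traffic": 300},
]

for seg in prioritize_repairs(segments):
    print(seg["id"])
```

A moderately severe pothole on a busy road outranks a worse one on a quiet street, which is the "most impactful repairs first" idea in miniature.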
Or we can equip drones with AI vision technology and send them out to monitor bridges, reporting back on those that need repair. The same AI vision can help find missing people. If somebody reports that a little girl was last seen wearing a pink shirt, we can send up drones or leverage any camera in real time to report back every place pink is spotted, accelerating the response.
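To make the "report every place pink is spotted" idea concrete, here is a toy sketch that flags pink-ish pixels in a frame with a simple HSV threshold. A real system would run a trained detection model over video streams; the thresholds below are rough illustrative assumptions, not tuned values.

```python
# Hypothetical sketch: flag pixels whose color matches a reported
# "pink shirt" using a simple HSV threshold. The hue/saturation/value
# cutoffs are illustrative assumptions.
import colorsys

def is_pink(r, g, b):
    """Classify an RGB pixel (0-255 channels) as pink-ish."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_ok = h >= 0.87 or h <= 0.03   # magenta-through-red band
    sat_ok = 0.15 <= s <= 0.75        # tinted, but not fully saturated red
    return hue_ok and sat_ok and v >= 0.6  # bright enough to read as pink

def pink_regions(frame):
    """Return (x, y) coordinates of pink pixels in a frame of RGB rows."""
    return [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if is_pink(r, g, b)]

frame = [[(255, 192, 203), (40, 40, 40)],   # pink, dark grey
         [(255, 0, 0), (255, 105, 180)]]    # pure red, hot pink
print(pink_regions(frame))  # only the pink and hot-pink pixels are flagged
```

Color filtering alone produces false positives, which is why production systems combine it with object detection; but it shows how a frame can be reduced to a short list of candidate locations for responders.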
What’s the best way for government organizations to get started with AI?
There are four key areas the public sector needs to address to be AI-ready. The first is making sure they have an AI strategy; without one, you won't have the direction that points you toward your North Star. Next, make sure you have an AI-ready culture, which starts with being a data-driven organization. Third, you must identify your guiding principles for responsible AI. Finally, organizations need to become familiar with AI technology: start exploring key AI concepts and learn as much as you can.
What are some challenges organizations may encounter?
People have a fear of missing out on AI, so they jump in and start providing data inputs before the desired outputs are defined. Organizations should instead focus on the outputs they want, and then make sure they have the right data, the right machine learning, the right people, and the right algorithms to reach that output.
Another challenge is securing stakeholder support. One way to earn that support is to properly document, share, and articulate a path to responsible AI.
What does ‘responsible AI’ mean?
There are several pillars of responsible AI that are critical to earning broad stakeholder support. The first pillar is fairness: when you deploy an AI system, you must ensure it treats the data, and the people affected by the outcome, fairly and without bias. The next pillar is reliability and safety: making sure our AI systems behave consistently and do not create harm.
Transparency is another critical element of responsible AI. When AI systems process data, execute algorithms, and reach a conclusion, people need to understand how the decision was made, or they will not trust the outcome. The next pillar is privacy and security: AI systems must respect the confidentiality of data and ensure an individual's privacy is never compromised. The last essential pillar of responsible AI is accountability: there must be a person or an organization ultimately accountable for the AI system.
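One concrete way teams operationalize the fairness pillar is to sanity-check a system's decisions for disparities across groups, a simple form of the demographic-parity check. The sketch below illustrates the idea; the group labels, decisions, and the 0.2 threshold are all hypothetical, and a real fairness audit involves much more than one metric.

```python
# Hypothetical sketch: a demographic-parity check over an AI system's
# decisions. Group labels, outcomes, and the disparity threshold are
# illustrative assumptions, not a complete fairness audit.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

gap = parity_gap(decisions)
print(f"parity gap: {gap:.2f}")  # group A approves 75%, group B 25%
if gap > 0.2:  # illustrative threshold for escalation
    print("flag decisions for human review")
```

A check like this also feeds the accountability pillar: when the gap exceeds the agreed threshold, the decisions are routed to the person or organization accountable for the system rather than acted on automatically.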