However, protecting AI becomes harder in a global context. As the United States grapples with creating ethical frameworks to guide AI development, authoritarian regimes like China and Russia pursue AI dominance unfettered by moral concerns about potential downsides. The U.S. must lead a comprehensive international effort to develop AI in a safe and sustainable manner. If it does not, the consequences could be severe: we have already seen bad actors use AI to impersonate people and commit fraud, and it is easy to imagine AI being used to attack critical infrastructure or facilitate financial crimes.
The recently issued White House Executive Order on AI demonstrates an appropriate level of caution. The order focuses on fostering “safe, secure, and trustworthy” AI that protects civil liberties, and guarding American AI technology from malign foreign influence ranks among its top concerns. The executive order, alongside legislation like the CHIPS and Science Act, imposes security rules and compliance standards on companies developing sensitive technologies. The Committee on Foreign Investment in the United States has also gained expanded authority to review foreign investments in technology for national security risks.
So how can the integrity of American AI systems be protected from tampering by hostile powers? While crowdsourcing data from diverse sources often improves AI, leaving systems entirely open invites needless risk. AI platforms should have security controls analogous to those of financial networks, which tightly restrict access to sensitive data and systems.
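As a rough illustration of what that kind of restriction could look like in practice, the sketch below gates access to sensitive AI assets behind verified identity and explicit role grants. The roles, resource names, and policy table are hypothetical assumptions made for this example, not a description of any real platform's security model.

```python
# Minimal sketch of deny-by-default access control for sensitive AI assets.
# Roles, resources, and the policy table are illustrative assumptions only.
from dataclasses import dataclass

SENSITIVE_RESOURCES = {"model_weights", "training_data", "fine_tune_api"}

# Hypothetical policy: which roles may touch which sensitive resources.
POLICY = {
    "ml_engineer": {"model_weights", "fine_tune_api"},
    "data_steward": {"training_data"},
    "auditor": set(),  # review happens through separate, read-only tooling
}

@dataclass
class User:
    name: str
    role: str
    mfa_verified: bool = False  # identity must be strongly verified first

def can_access(user: User, resource: str) -> bool:
    """Deny by default: access requires verified identity and an explicit grant."""
    if resource in SENSITIVE_RESOURCES and not user.mfa_verified:
        return False
    return resource in POLICY.get(user.role, set())

if __name__ == "__main__":
    alice = User("alice", "ml_engineer", mfa_verified=True)
    print(can_access(alice, "model_weights"))   # True: explicitly granted
    print(can_access(alice, "training_data"))   # False: not granted to her role
```

The design choice mirrors financial networks: no one touches sensitive data or infrastructure unless both their identity and their authorization have been affirmatively established.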
AI is reshaping our technological ecosystem much as the digital revolution did in the early 2000s. As banking, shopping, and other services moved online, regulators had to rethink how risk was managed, how activity was regulated, and how customers’ identities were verified. We can apply those lessons now to prepare for the risks and regulations that are likely coming for AI.
Fortifying AI systems will likely require the identity verification, know-your-customer (KYC) checks, activity monitoring, and sanctions screening that banks use to control risk. AI developers working earnestly to promote beneficial technology can learn much from financial services companies in these areas. Best practices should include:
- Comprehensive risk assessments identifying vulnerabilities in processes and code
- Written policies and controls that formally address known risks
- Alignment of business processes with formal risk management policies
- Rigorous auditing procedures to validate control effectiveness
- Continuous feedback loops to enhance policies and systems over time
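To make the banking analogy concrete, here is a minimal sketch of how such controls might wrap access to an AI service: verify the customer's identity, screen them against a sanctions list, and log every decision for later auditing. The sanctions list, verification logic, and audit sink are simplified assumptions for illustration, not a compliance-grade implementation.

```python
# Illustrative sketch of bank-style controls (identity verification,
# sanctions screening, activity logging) wrapped around an AI service.
# The sanctions list, verification step, and audit logging shown here
# are assumptions made for this example only.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_platform.audit")

# Hypothetical denylist standing in for a real sanctions screening feed.
SANCTIONED_PARTIES = {"blocked-entity-1", "blocked-entity-2"}

def verify_identity(customer_id: str, documents: dict) -> bool:
    """Placeholder KYC check; a real system would verify documents with a provider."""
    return bool(documents.get("government_id")) and bool(documents.get("address_proof"))

def screen_sanctions(customer_id: str) -> bool:
    """Return True if the customer is clear of the (illustrative) sanctions list."""
    return customer_id not in SANCTIONED_PARTIES

def run_model(prompt: str) -> str:
    """Stand-in for the actual AI call."""
    return f"model response to: {prompt[:40]}"

def handle_request(customer_id: str, documents: dict, prompt: str) -> str:
    """Gate an AI request behind KYC and sanctions checks, logging every decision."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if not verify_identity(customer_id, documents):
        audit_log.warning("%s DENIED (identity) customer=%s", timestamp, customer_id)
        return "denied: identity not verified"
    if not screen_sanctions(customer_id):
        audit_log.warning("%s DENIED (sanctions) customer=%s", timestamp, customer_id)
        return "denied: sanctions screening failed"
    audit_log.info("%s ALLOWED customer=%s prompt_chars=%d",
                   timestamp, customer_id, len(prompt))
    return run_model(prompt)
```

The point is not the specific checks but the pattern: every request passes through documented controls, every outcome leaves an audit trail, and that trail feeds the continuous feedback loop described above.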