The task force, co-chaired by California Reps. Jay Obernolte and Ted Lieu, was established in February and tasked with producing this report. It complements related federal efforts, like the Department of Homeland Security’s task force and that of the National Institute of Standards and Technology.
The report released Tuesday, which Lieu described as “only the first step,” outlines guiding principles, 66 key findings, and 89 recommendations organized into 15 chapters. It is intended to be a tool to help Congress evaluate future policies.
“Collaborating across party lines to find consensus is not easy, and that is especially true for something as far-reaching and complex as AI,” Lieu said in a statement. “Despite the wide spectrum of political views of members on our task force, we created a report that reflects our shared vision for a future where we protect people and champion American innovation.”
The report first offers several guiding principles intended to inform high-level policy: identify AI issue novelty, promote AI innovation, protect against AI risks and harms, empower government with AI, affirm the use of a sectoral regulatory structure, take an incremental approach, and keep humans at the center of AI policy.
In the first principle, the report states that policymakers can avoid duplication by considering whether issues raised by AI have precedent in existing laws. Some experts have previously made this argument, explaining that existing privacy and non-discrimination laws can often be applied to AI technologies.
The report’s 15 chapters represent different areas of focus for AI issues and innovation, each of which includes findings and recommendations. Those focus areas are government use; federal pre-emption of state law; data privacy; national security; research, development and standards; civil rights and civil liberties; education and workforce; intellectual property; content authenticity; open and closed systems; energy usage and data centers; small business; agriculture; health care; and financial services.
On government use, the report advises the federal government to support and adopt AI standards to govern use, reduce administrative burden for AI use, improve systems’ cybersecurity, encourage supportive data governance strategies, and understand and support government workforce AI needs.
Its second chapter argues that federal legislation pre-empting state AI laws could help the federal government accomplish its AI policy goals and address a fragmented policy landscape across states.
On data privacy, the report advises promoting secure access to data and taking a tech-neutral approach to privacy laws.
AI is a critical component of national security, the report posits, and a technology already being used by U.S. adversaries; it recommends congressional oversight, more AI training at the Department of Defense, and continued oversight of autonomous weapons policies. International cooperation on AI in military contexts should be supported, according to the report.
AI’s impact on different industries and support for research and development deserve continued monitoring, it advises, with public-private partnerships expected to be a key component of that effort. Small-business research on AI, the report says, should also be supported.
Improper AI use can violate civil rights laws, the report acknowledges, though its writers indicate human oversight can help identify and mitigate this risk. They also recommend that agencies be transparent about their use of AI in decision-making and guard against uses that have the potential for discriminatory outcomes.
When considering education and workforce needs, the report argues more resources for AI literacy are needed, and recommends support for the National Science Foundation in its development of AI curricula. Existing workforce development programs, it finds, may need to be re-evaluated to address AI skills needs.
The report recommends clarifying intellectual property regulations and taking action to counter the growing negative impacts of AI deepfakes.
In a similar vein, its content authenticity chapter explores how content produced by generative AI systems can be authenticated, arguing that no single technical solution currently exists. The report recommends a risk-based approach that ensures victims of such harmful content have tools to address the harm.
The report supports open AI models, with continued monitoring of their risks.
AI technology, particularly the most advanced models, currently consumes significant energy, raising security concerns around grid resilience as well as concerns about environmental impact. The report recommends new standards and metrics for communicating energy use, and greater exploration of AI’s role in energy-efficiency initiatives.
Small businesses may lack AI literacy and the resources to use these tools; the report finds that efforts to support their AI literacy and adoption should be considered.
In agriculture, the report acknowledges AI’s potential to enhance productivity and resource management, noting that a lack of reliable network connectivity remains an impediment.
In health care, AI can speed drug development and clinical diagnosis, but, per the report, its use should be safe and transparent, with a risk-management approach to adoption to help prevent disparate health outcomes.
The report finds AI can transform and expand access to financial services, but its adoption should be responsible and protect consumers. A principles-based regulatory approach, it says, would allow flexibility and ensure regulations do not hinder adoption by small businesses.
More information on the task force's findings can be found in the report, which Obernolte referred to in a statement as “an essential tool” for AI policymaking.