While most federal regulatory action related to AI has, thus far, come from the Executive Branch, Congress has also been hard at work. At the beginning of the 118th Congress, Majority Leader Schumer announced the formation of a “Bipartisan Senate AI Working Group” (“Working Group”) along with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN). This Working Group convened a series of nine bipartisan forums throughout the fall of 2023. The forums included technology company representatives, civil society organizations, representatives of labor unions, and researchers. The outcome of these forums was an AI roadmap (the “Roadmap”) outlining the ways in which the federal government can use existing and new resources to manage the expanding uses of AI and to stave off potential harms from nefarious uses of the technology.
The forums were divided into nine topics:
- Inaugural Forum
- Supporting U.S. Innovation in AI
- AI and the Workforce
- High Impact Uses of AI
- Elections and Democracy
- Privacy and Liability
- Transparency, Explainability, Intellectual Property, and Copyright
- Safeguarding Against AI Risks
- National Security
During the forum on supporting U.S. innovation in AI, the Working Group emphasized the need for Senate committees to work cooperatively, given the overlapping jurisdictional issues surrounding future government involvement with AI. It also emphasized the need to fund programs authorized in the CHIPS and Science Act (P.L. 117-167), such as the National Science Foundation (“NSF”) Directorate for Technology, Innovation, and Partnerships, the NSF Education and Workforce Programs, and the Department of Education Microelectronics programs. The Working Group also strongly recommended that federal investment in AI, through regular and emergency spending, reach $32 billion, the level proposed in the final report of the National Security Commission on Artificial Intelligence. The Roadmap outlines clear areas for both legislation and executive action as needed.
Another area of great concern and debate among federal legislators has been the impact AI may have on workers across all industries. Legislators need a better understanding of AI’s potential impact across its many uses, and of how workstreams may shift as the technology continues to develop and use cases are refined.
The use of AI in areas such as housing, financial services, and healthcare may surface new use cases that can solve significant public policy challenges. Such uses, however, may intersect with civil rights laws; the technology may be new, but it must still comply with current law. This is particularly important for civil rights and consumer protections. Of course, gaps in protection will remain, and in the Roadmap the Working Group urged committees to study those gaps and move forward on legislation to close them. This includes the use of AI in housing and financial decisions, the need for transparency in AI systems used for health care decisions, and addressing the ways AI may be used to target vulnerable populations with fraud.
The forum on Elections and Democracy centered on how committees need to better understand the ways currently available technology could influence election activity. The U.S. Election Assistance Commission has released an AI Toolkit for Election Officials to advise election workers at every level on managing the growth of this technology. Questions remain, however, about how local election workers will handle these issues on very tight budgets and with little federal support. Additionally, the Cybersecurity and Infrastructure Security Agency (“CISA”) has released a toolkit to assist states with election security. But these materials will only benefit the states and local governments that actually use them.
One of the stickiest issues in the development of AI data sets is privacy, along with who is liable for harms caused by the use of AI. Congress has been stuck in a years-long fight over a federal privacy law, even though Chairs Cantwell and McMorris Rodgers made progress this year with the introduction of a bipartisan, bicameral privacy proposal. Much of the policy fight has focused on preemption and liability, so any federal law governing the use of AI will almost certainly run into the same stumbling blocks. The Working Group identified the need for a strong federal policy on data protection and for guardrails on the development of AI for health care services. The latter area is particularly vulnerable to bias if the data used to determine health care options or treatments is flawed or incomplete. The Working Group recognized that AI technology will create opportunities for efficiencies, but significant effort will be needed to educate both the public and the medical community on AI’s impact on treatment options.
The Working Group encouraged committees to produce legislation promoting transparency about when AI is in use and about the contents of the training data used to develop those tools. This issue caught federal lawmakers’ attention late in 2023 when the New York Times filed a lawsuit against OpenAI and Microsoft alleging that the companies had infringed its copyright by training their models on a data set containing protected articles. The debate will continue to evolve, as IP owners’ interests may at times be divided. For example, film studios have objected to their content being used in training data without a license while also wanting to use AI-generated content to reduce filmmaking costs. As with privacy, the debate over copyright and intellectual property is expected to be a heated one in both Congress and the Executive Branch. The Working Group recommends a legislative effort to establish a public education campaign to help the public understand the risks and benefits of AI tools on these fronts.
Finally, the Working Group focused on the impact AI could have on our national security. The likelihood of our adversaries using AI tools to attack critical infrastructure is significant, and the Working Group advises that the Department of Defense and the Intelligence Community collaborate on efforts to reduce this risk, including by increasing investments in these efforts. It also identified a significant need for legislation to expand career pathways for AI professionals, including by reducing the security clearance backlog and growing the workforce.
Overall, the Working Group has concrete ideas for legislation addressing AI development, uses, and risk. The Roadmap is a good starting point, but significant work and investment will be needed to turn any of these ideas into reality. As policymakers continue to search for ways to legislate on these issues, the Roadmap could guide the conversation, highlight areas of bipartisan agreement, and provide companies valuable insight into legislative priorities related to AI moving forward.