AI Federal Regulatory Landscape

In this first installment of Transformative AI: Legal Leaps, we thought it would be helpful to provide an overview of the current state of the law governing artificial intelligence, beginning at the federal regulatory level. In an upcoming companion piece to this post, we will discuss the state of AI-related federal legislation in Congress, including the bipartisan AI “roadmap” put forward by Majority Leader Schumer and his colleagues on the Bipartisan Senate AI Working Group.

The executive branch and federal agencies have issued numerous regulations and regulatory guidance regarding the development, use, and distribution of AI-powered technologies. Executive action on AI has come from both Republican and Democratic Administrations: in 2020, the Trump Administration sought to bolster “American leadership” in AI through an executive order encouraging various federal agencies to take action to ensure the United States leads the world in AI research, development, and training. President Biden has stated that AI is a “top priority,” and executive action by his administration has included releasing an “AI Bill of Rights” (read our team’s analysis here), issuing an executive order on the Safe, Secure, and Trustworthy Development and Use of AI (read our team’s analysis here), and soliciting major private sector actors in the AI space to join a Voluntary AI Safety Agreement (read our team’s analysis here). Relatedly, the National Institute of Standards and Technology (NIST), which operates within the U.S. Department of Commerce, released non-binding guidance on safely developing AI, the AI Risk Management Framework (read our team’s analysis here). Collectively, these initiatives primarily focus on directing federal government agencies to take steps related to their own use and regulation of AI. While aspects of these initiatives targeting non-governmental actors help to illuminate and preview the potential shape of future federal AI policy, the initiatives are (for the most part) non-binding on the private sector (except for federal government contractors). The nascent and rapidly expanding AI industry therefore continues to hold its breath over how U.S. government regulation will affect the new technology.

Importantly, as several federal agencies have pointed out, just because AI is a new technology does not mean that existing laws do not apply to it. The Consumer Financial Protection Bureau, Department of Justice’s Civil Rights Division, Equal Employment Opportunity Commission, and Federal Trade Commission have each taken actions related to AI:

  • Consumer Financial Protection Bureau 

The CFPB has issued guidance requiring that, when a creditor uses AI as part of the decision-making process, the creditor provide consumers with accurate and specific explanations for adverse actions taken against them. Additionally, in September 2023 the CFPB initiated rulemaking under the Fair Credit Reporting Act related to “harmful data broker practices,” which CFPB Director Chopra characterized as “part of an all-of-government effort to tackle the risks associated with AI.”

  • Federal Trade Commission

Of all federal agencies, the FTC has arguably exerted the most regulatory authority in relation to AI, pursuant to the prohibition on “unfair or deceptive acts or practices in or affecting commerce” in Section 5 of the FTC Act. It has issued numerous statements related to ensuring AI use is fair and not deceptive, including a warning not to label a product as “AI” if it does not actually incorporate artificial intelligence.

The FTC has also pioneered one of the more powerful enforcement tools currently in effect: “algorithmic disgorgement,” which requires companies to delete not only illegally obtained data but also any models created based on that data. Since 2019, at least five FTC actions have resulted in the agency ordering some form of algorithmic disgorgement, including its settlement with Amazon subsidiary Ring. The FTC alleged that Ring allowed employees to access private videos and failed to “implement basic privacy and security protections,” and as part of the settlement Ring agreed to delete not only the data in question but also all associated “data products,” including any “models [or] algorithms derived from [the data].”

In February 2024, the FTC targeted AI-powered impersonation in its “Rule on Impersonation of Government and Businesses.” As its name implies, that rule focuses specifically on using AI to impersonate a government official or a business. Since publication of the rule, the FTC has received numerous comments complaining of fraudulent impersonations of friends, family, and romantic interests not covered by the existing rule. To address this gap, the agency issued a supplemental notice of proposed rulemaking that would “prohibit the deceptive impersonation of individuals and would address conduct that is prevalent and harmful.”

  • Equal Employment Opportunity Commission

The EEOC has issued guidance on AI as it relates to compliance with both the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act of 1964. That guidance focuses on biases in datasets that result in disparate impacts on employees or applicants, and it places compliance responsibilities on employers. The EEOC has also made clear that employers who use AI tools in hiring will be responsible for any discriminatory practices that arise from such use, even if the discrimination stems from an underlying flaw in the algorithm.

  • Department of Justice

The DOJ also has an enforcement role in the AI realm under the Americans with Disabilities Act (the “ADA”), and it has issued its own guidance on ensuring that the use of AI does not violate the ADA, specifically as the ADA applies to state and local government employers (rather than private employers and the federal government, which are the purview of the EEOC).

Other federal agencies, including the Federal Deposit Insurance Corporation and the Securities and Exchange Commission, have also taken publicly visible steps toward regulating AI, although any official rules from these agencies on AI appear to be some ways off.

Overall, we see the AI-related initiatives taken by the executive branch and various federal agencies as falling well short of comprehensive AI regulation, particularly with regard to the use of AI in the private sector. For now, the United States will have an opportunity to sit back and watch the EU attempt to implement the world’s first comprehensive government regulation of AI, the AI Act, which was adopted by the European Parliament on March 13, 2024. We will take a detailed look at the EU’s AI Act in an upcoming installment of our AI series.

Authors

  • Associate

    Daniel concentrates his practice on technology, privacy, and data security. He advises clients navig...

  • Member

    Ben focuses his practice on technology and data privacy. He advises clients in complex data transact...
