Last month, the U.S. Department of Labor (DOL) announced a new initiative, the “AI & Inclusive Hiring Framework,” funded by the DOL’s Office of Disability Employment Policy (ODEP). The framework is a voluntary resource to help employers ensure their third-party artificial intelligence (AI) hiring tools are inclusive and accessible to disabled individuals. The initiative is timely in light of the July 2024 decision in Mobley v. Workday, Inc., Case No. 23-cv-00770-RFL (N.D. Cal. July 12, 2024), in which a California federal district court held that a human resources technology vendor, Workday, can be liable as an “employer” under an agency theory for violations of federal anti-discrimination laws.
Employers’ AI Vendors, as Employers’ Agents, May Increase Employers’ Liability Risk
The court in Mobley determined Workday’s hiring tools are “alleged to perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages, through the use of artificial intelligence and machine learning.” Accordingly, the court ruled Workday acted as an agent of the employer, opening the door to liability under the Americans with Disabilities Act Amendments Act of 2008 (ADAAA), Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Civil Rights Act of 1866 (Section 1981), for discrimination based on disability, race, sex, religion, national origin and age.
While the Mobley case is in its initial stages, it should remind employers of the importance of conducting due diligence on the AI vendors they use in their hiring process. The risk is high. The EEOC, in its May 2023 non-binding technical assistance guidance, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, states that employers may be liable for their agents’ hiring technology that causes an adverse impact on protected groups. Employers also may be on the hook for the AI vendor’s direct liability to job seekers if the employer has contractually agreed to indemnify the vendor against claims asserted by the employer’s job applicants and employees.
Look to the AI & Inclusive Hiring Framework as a Valuable Resource for Assessing and Reducing Risk When Using an AI Vendor’s Hiring Tools
The AI & Inclusive Hiring Framework, while funded by ODEP, was published by the Partnership on Employment & Accessible Technology (PEAT), a private company. PEAT states on its website, https://www.peatworks.org/ai-inclusive-hiring-framework/framework-overview/framework-details/, that DOL’s ODEP “collaborated” with PEAT on the framework, but that PEAT’s material “does not necessarily reflect the views or policies of ODEP or DOL.” While not binding, this resource nevertheless provides valuable tips for employers.
The Framework has ten (10) Focus Areas:
Focus Area 1: Identify Employment and Accessibility Legal Requirements
Focus Area 2: Establish Roles, Responsibilities, and Training
Focus Area 3: Inventory and Classify the Technology
Focus Area 4: Work With Responsible AI Vendors
Focus Area 5: Assess Possible Positive and Negative Impacts
Focus Area 6: Provide Accommodations
Focus Area 7: Use Explainable AI and Provide Notices
Focus Area 8: Ensure Effective Human Oversight
Focus Area 9: Manage Incidents and Appeals
Focus Area 10: Monitor Regularly
PEAT emphasizes that employers using the Framework need not implement every goal at once or follow the order of the focus areas.
Focus Area 4 specifically drills down on practices and considerations for working with AI hiring technology vendors. Goals include (i) developing procurement policies and procedures before acquiring AI hiring technology, including consideration of nondiscrimination and accessibility requirements; (ii) identifying third-party data or software risk management factors, which involves reviewing vendor attestations on their risk mitigation efforts, understanding the known risks and potential impacts when deploying this technology, and mapping out legal risks; and (iii) creating third-party risk controls, taking into account internal and external users, data privacy and security, and auditing efforts.
Five (5) Guideposts for Employers Considering Deployment of a Vendor’s AI Hiring Tool
Deciding whether and what third-party AI hiring technology to deploy and how to protect against employer liability in the process can be daunting, especially given the quickly evolving technology and legal landscapes. The AI & Inclusive Hiring Framework Focus Areas can be helpful, particularly when keeping these five (5) guideposts in mind:
- Get the right team at the table. This includes personnel and/or outside experts knowledgeable about information technology, intellectual property, human resources recruiting, employment laws, data privacy, data security, compliance and procurement best practices.
- Determine the role the AI hiring tool will play in the hiring process. Will the tool make decisions normally reserved for the employer’s recruiting and/or hiring teams, such as screening out candidates, or will the tool be limited to collating and presenting candidates for ease of internal review? The functions of the tool play a significant role in assessing known risks and potential impacts.
- Delve into the AI hiring vendor’s efforts to ensure its tool is legally compliant. Take the time to understand and evaluate what measures the vendor has taken to ensure legal compliance in developing its tool, testing it, monitoring it and auditing its impact, including, but not limited to, compliance with the Uniform Guidelines on Employee Selection Procedures. Identify the risks to the company of using the vendor’s tool in the recruiting and hiring process.
- Focus on contract development and review. The devil can be in the details. Once the team understands the purpose of the tool, its risks and its potential impacts, address the parties’ contractual rights and obligations, and implement robust contractual protections to minimize risk.
- Identify best practices and legal requirements for deploying the technology. Ensure internal users are trained on the technology; identify and comply with applicable laws regulating employer use of AI hiring tools, including but not limited to laws in Illinois, Maryland, New York City, and Colorado (effective February 1, 2026); and implement monitoring and reporting tools to continually assess and manage risks and outcomes.
Debbie Friedman is an attorney in Cozen O’Connor’s Labor and Employment Department.