April 18, 2024
More Problems Regulating AI in Employment Decisions
The latest proposals in Congress to regulate the use of AI in employment decisions involve giving people notice and offering alternative processes. It won't work.
This article and the related ones from the Fisher Phillips team explain what's in the proposed legislation. The authors are absolutely correct that it's important for both employers and tech companies to follow what's happening and weigh in.
Here are the big issues I see.
The answer is to require regular audits and reporting of the outcomes of employment decisions. Offer a safe harbor period to encourage monitoring and to fix problems. Then enforce discrimination laws.
A provision that would significantly disrupt the use of artificial intelligence in the workplace is buried deep in a bipartisan federal proposal to legislate data privacy. If the American Privacy Rights Act is passed by Congress as currently proposed, employers would be required to notify applicants and workers when AI is used for workplace decisions – and also allow workers to force employers to remove AI from the equation if they choose to “opt out” of its use for consequential employment decisions. Many questions remain and the proposal is still a long way off from becoming the law of the land. But this issue bears close monitoring due to its potential to be an absolute game-changer. What do employers need to know about this breaking news development?
Summary of Data Privacy Proposal
You can read about the 10 things that employers need to know about the bipartisan data privacy proposal here. The sole focus of this Insight will be on the AI implications for employers.
2 Key Requirements for Employers
The AI portion of the proposal would require employers to do two things if they use AI for certain workplace purposes:
What AI is Covered?
The proposal applies to “covered algorithms,” a category that expressly includes the use of AI. It defines a covered algorithm as any computational process, including one derived from machine learning, statistics, or other data processing “or artificial intelligence techniques.” This definition is incredibly broad and could apply to a vast array of programs that employers use to manage human capital.
If the proposal becomes law, employers will need to take stock of exactly which products they are using that may contain some aspect of machine learning, statistics, or other data processing, including AI, to ensure compliance. The proverbial “I didn’t know” defense will not work.
What AI Uses are We Talking About?
The proposal says that the obligation would kick in when employers use AI to make “or facilitate” a “consequential decision.”
What Would the Notice Require?
Once we determine what AI uses would be covered by the law, employers would be required to provide notice of that use to applicants and workers. To comply with the law, employers would need to provide “meaningful information” to the applicant or employee about how the AI tool makes or facilitates the consequential decision, including the range of potential outcomes.
The form of the notice would need to be:
More Importantly – What Would the Opt-Out Requirement Entail?
The next step could be the toughest. Employers who use AI to make or facilitate consequential decisions (however that term is ultimately defined) would also be required to provide an opportunity for applicants and workers to “opt out” of such use.
This leaves many questions unanswered. What if an AI tool is designed to review thousands or millions of data sets to provide an efficient summary of information to an employer, and one employee opts out of AI use? Would an employer have to scrap the entire system, or could it conceivably arrive at a usable work-around by introducing human judgment into the process with respect to that one worker? At the other end of the spectrum, could opt-out rates become so high that they defeat the purpose of deploying the AI software in the first place? And if employees choose to opt out, are employers required to provide an alternative pathway for the employment decision?
Questions Might Be Answered – But Not in Time
The statute would require the Federal Trade Commission to coordinate with the Commerce Department (neither of which is necessarily known for its detailed grasp of employment-related dynamics, unlike the EEOC or the Department of Labor) to issue guidance regarding this law – but the agencies would have a two-year deadline from the law’s effective date to do so. This gap could cause real problems for employers, since enterprising plaintiffs’ attorneys could take action against them during this limbo period while questions remain unanswered.
This is especially concerning given the fact that the proposed law gives applicants and employees the right to file private lawsuits in court against employers for alleged AI-related violations. The law would allow them to recover actual damages plus attorneys’ fees.
Small Businesses Would Be Excluded
The silver lining is that smaller employers would not be covered by the proposed law. If your average annual gross revenue for the period of the three preceding calendar years (or for the period during which you have been in existence if less than three years) did not exceed $40 million, you can breathe a sigh of relief.
California Also Considering Similar Proposal
As we reported a few weeks ago, California lawmakers are pondering a similar measure. A bill aimed at “prohibiting algorithmic discrimination” would prohibit employers from using AI tools to make consequential workplace decisions that result in “algorithmic discrimination.”
That bill also includes a notice requirement and an opt-out provision – but that mechanism would only be triggered and require human decision-making if “technically feasible,” an escape hatch that does not exist in the federal proposal. You can read all about this and other California proposals here.