Artificial intelligence tools are not new to the employment space, but the introduction of souped-up large language models like OpenAI’s ChatGPT is poised to change the way companies search for, screen and hire employees.
While the promise of increased efficiency in those processes is incredibly alluring for companies, regulators and employment attorneys are sounding the alarm that handing hiring responsibilities over to machines is fraught with potential liabilities for employers.
And the feds are ready to crack down on employers and vendors as AI adoption booms.
The Equal Employment Opportunity Commission (EEOC) launched an agency-wide artificial intelligence and algorithmic fairness initiative in 2021 to examine how these technologies affect employment decisions, and it is now ramping up efforts to ensure that companies adopting the tools comply with federal law.
This week, the EEOC joined the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division and the Federal Trade Commission in issuing a joint statement warning that “although many of these [AI] tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes.”
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina Khan said in a statement.
“Technological advances can deliver critical innovation, but claims of innovation must not be cover for lawbreaking,” Khan continued. “There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
Kevin Johnson, founder of the law firm Johnson Jackson, LLC, has been an employment attorney for 30 years and chairs the Florida Bar's Standing Committee on Technology.
He says employers are increasingly using AI tools in the hiring process and points to several ways the technology can be beneficial. A vendor's software can sift through resumes to surface the best candidates for a position, saving management a great deal of time, and he says the technology will help employers in countless other ways going forward.
But Johnson advises his clients to exercise caution before adopting the shiny new technologies.
“There are a lot of ways in which bias or discrimination can still take place in the hiring process because all these things are basically just another tool,” Johnson said.
Johnson says companies need to be assured that developers of AI hiring software have taken sufficient precautions to ensure their products can’t be used to deliberately classify or segregate applicants based on protected characteristics.
The longtime attorney says human resources teams also have to scrutinize selection criteria to be sure algorithms or chatbots do not produce a “disparate impact” on diverse applicants that might make the employer liable under Title VII or other anti-discrimination laws.
He offered a few examples.
If an employer uses an AI program to find applicants for a job and lists in its criteria that it is looking for someone who is “100% able to perform the physical demands of the job” and doesn’t mention anything about reasonable accommodation, it might violate the Americans with Disabilities Act, Johnson explained.
Or if a restaurant seeks applicants for a host position and limits the criteria to female applicants between the ages of 17 and 25, for instance, the employer is “pretty much engaging in sex discrimination right there,” he says.
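The "disparate impact" test Johnson describes is often screened in practice with the EEOC's four-fifths rule: if one group's selection rate falls below 80% of the highest group's rate, the hiring process may warrant a closer look. The article does not walk through the math, so here is a minimal sketch; the group names and pass counts are invented purely for illustration.

```python
# Hypothetical illustration of the EEOC's "four-fifths rule" for screening
# disparate impact. The groups and numbers below are made up for demonstration
# and are not from any real hiring data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the top group's rate."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(60, 100),  # 0.60 selection rate
    "group_b": selection_rate(30, 100),  # 0.30 selection rate
}
print(four_fifths_check(rates))  # group_b's rate is half of group_a's -> flagged
```

A flag from a check like this is not proof of unlawful discrimination, but it is the kind of statistical signal regulators and plaintiffs' attorneys look for when an algorithmic screen is challenged.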
Johnson says it is important for employers to do their homework and find software vendors with systems that will not land them in hot water with regulators.
“I think, for employers, the best advice is, read as much as you can about how this is all developing,” he told FOX Business. “Think about the impact of how you are proposing to use it, and try to think about if something was going to go wrong with this use, where’s it most likely to go wrong?”