Concerns about the use of AI in recruiting are growing, and emerging regulation and lawsuits may convince some in HR that the risks of AI are too great. The next challenge for HR technology arrives on July 5, when New York City's AI anti-bias law takes effect.
The law requires employers, not vendors, to conduct independent bias audits of hiring tools that use AI, even if those tools are not used to make final hiring decisions. Employers must post the audit's results online, which could expose them to legal risk if the results indicate bias.
“This law is going to discourage the use of these tools in New York City,” said John Rood, founder and CEO of Proceptual in Highland Park, Ill., an AI compliance firm that conducts independent audits. He said the public posting of audit reports is a “huge issue” and potentially opens employers up to new risks.
Employers are accustomed to running bias tests to ensure they are not discriminating and are complying with federal law. If they find a problem, Rood said, they can go back and fix it. New York City's AI bias law, however, creates a new complication.
If the audit indicates bias, and that finding is published online, employers have “created a new liability for themselves,” Rood said. A bias finding is something that the U.S. Equal Employment Opportunity Commission (EEOC) “is going to be very interested in,” he said.
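The liability Rood describes comes from what these audits measure. As a rough illustration (not the statute's prescribed methodology, and using invented data), a bias audit typically compares each group's selection rate against the most-favored group's rate; under the EEOC's four-fifths guideline, an impact ratio below 0.8 suggests possible adverse impact. A minimal sketch of that calculation:

```python
from collections import Counter

# Hypothetical screening outcomes: (group, selected) pairs for candidates
# an AI tool advanced or rejected. All data here is invented for illustration.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, ok in outcomes if ok)

# Selection rate per group: candidates advanced / total applicants in that group.
rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's selection rate relative to the most-selected group.
    ratio = rate / best
    # The EEOC's four-fifths rule treats a ratio below 0.8 as evidence of
    # possible adverse impact, the kind of finding an employer would now
    # have to publish under the New York City law.
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f} -> {flag}")
```

Numbers like these, once posted online, are exactly the kind of finding Rood argues the EEOC would take an interest in.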
New York state legislators may extend the city's law statewide, and California lawmakers are considering a similar bill. Connecticut, New Hampshire, and the District of Columbia are among the other jurisdictions weighing audit laws.
AI risks still too great
AI hiring tools, which can rank and sort candidates, have been around for about five years. The vendors' premise is that the technology is less biased than human decision-makers. But the risks of AI are enough to make some HR managers wary.
Nathaalie Carey, HR chief of staff at Prologis, a San Francisco-based logistics real estate company with 2,500 workers, is one of them.
Carey compares the risk of AI in recruiting to that of self-driving cars: She isn't ready to take her hands off the wheel and let the AI drive. That may change in the future, but for now, she is staying cautious.
“How do we ensure that the results we are looking for are presented in an unbiased and socially responsible way?” Carey said.
Carey has been collecting data on AI-enabled HR tools. Some vendors won't explain how their algorithms work, information that would help Carey's HR team have confidence in a tool's results. "We are told it's proprietary," she said.
“The biggest thing that builds confidence is truly time,” Carey said. And HR users of these systems “are still early adopters.”
Increased regulation is one obstacle to HR's use of AI hiring tools; litigation is the other shoe to drop. The EEOC has warned about the potential for bias in AI. Federal regulators have yet to bring a complaint, but hiring-related class action lawsuits are on the rise.
Brendan Baker, who applied for a job in 2021, sued CVS Health Corp. in Massachusetts Superior Court in April. According to the lawsuit, CVS used HireVue's video interviewing system with an Affectiva application "to understand human emotions, cognitive states, and activities by analyzing facial and vocal expressions." The suit claims the technology amounted to a lie detector test, which Massachusetts law prohibits employers from using.
HireVue discontinued visual analysis in 2021, claiming that “visual analysis has far less correlation to job performance” than verbal data.
Lindsey Zuloaga, HireVue’s chief data scientist, told TechTarget Editorial, “HireVue assessments use machine learning to analyze a candidate’s transcribed answers to interview questions — these algorithms are also locked, meaning they don’t change when interacting with candidates, and they do not look at anything visual or analyze tone, pauses, or other forms of inflection.” Zuloaga went on to say, “Our assessments are not, and have never been, designed to evaluate the truthfulness of a candidate’s response.”
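A "locked" algorithm in the sense Zuloaga describes is one trained and validated offline, then frozen: the serving path only runs inference, so every candidate is scored by the same fixed function. The sketch below is a generic illustration of that pattern, not HireVue's actual code; the model file and scikit-learn-style API are assumptions.

```python
import pickle

# Illustrative only: a generic "locked" model pattern, not HireVue's system.
# Assume model.pkl holds a text classifier trained and validated offline,
# e.g., a fitted scikit-learn Pipeline exposing predict_proba().
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

def score_transcript(transcript: str) -> float:
    """Score a candidate's transcribed answer with the frozen model.

    Inference only: the model's weights are never updated here, so every
    candidate is scored by exactly the same function. That is what
    "locked" means in this context.
    """
    return float(model.predict_proba([transcript])[0][1])

# Example usage (hypothetical transcript text):
# score = score_transcript("I resolved the customer's issue by ...")
# Note there is deliberately no call to model.fit() anywhere in the
# serving path; any retraining happens offline, under review.
```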
In February, a lawsuit filed in federal court in Oakland, Calif., alleged that Workday's AI software enables discrimination against Black, disabled, and older workers. Derek Mobley, the plaintiff, said the AI screening tools "rely on algorithms and inputs created by humans who often have built-in motivations, conscious and unconscious, to discriminate." Workday said the complaint is without merit.
More human involvement
Independent audits can reveal whether AI tools discriminate against people based on gender, race, ethnicity, disability, or other protected characteristics, and they are effective tools, according to Rania Stewart, an analyst in Gartner's HR practice.
The auditors are staking their businesses on their ability to conduct good audits, Stewart said. But if employers that conduct third-party audits still face lawsuits, the value of the audits could be questioned. “There is a lot of cautious optimism with those third-party audits at this point,” she said.
Stewart said AI recruiting tools are shifting from selecting candidates to finding, or sourcing, those with the best skills. That approach requires more human involvement from HR and is changing how HR uses AI. "It's not if we use AI; it's how we use it and how we use it to augment versus just automate," she said.
David Lewis, president and CEO of OperationsInc, an HR consulting firm in Norwalk, Conn., said employers should have a “healthy amount of skepticism” about using AI hiring tools and consider the AI risks.
While AI in applicant tracking systems has potential, he said, "you're still at the point where it's just like the first flat-panel TV," which lacked high definition and other features that arrived later.
The AI risks are such that even if employers buy this technology through a reliable and proven source, they can’t assume a vendor will “have your back in a worst-case scenario,” Lewis said.