
Using AI-based hiring tools effectively (and legally)

Tech industry leaders, governments and private citizens have begun discussing the possible threats Artificial Intelligence poses to humanity. The recruitment industry has been using the technology for years and learning, on its own, what the downsides are. Whether through their own audits or through complaints and lawsuits, employers are finding that AI can be flawed and can produce discriminatory outcomes, even inadvertently.

Playing catch-up, the EEOC recently issued guidance on the use of AI and other technology in the recruitment process, noting that the absence of human oversight often leads to discrimination. The agency is also finding intentional discriminatory practices.

The agency filed its first such lawsuit against a company that allegedly programmed its software to reject female applicants over 55 and male applicants over 60. For most businesses, the challenge is to leverage AI's advantages without discriminating. Here are some uses and red flags to watch for.

AI in applicant screening

Almost every business uses some form of AI to screen candidates. If you’re posting on job boards, the tech is built into the system. For most companies, these screening methods weed out the unqualified candidates who apply. For some, however, the screening may be biased, even discriminatory.

During the pandemic, many workers found themselves out of work. Gaps in employment during this period are typically overlooked by employers, who understand the reason. But for candidates whose employment gaps fall outside the COVID timeline, screening may be an issue. If a worker has a disability that caused the gap, eliminating them outright may violate their rights under the ADA. A best practice is to periodically review the candidates your tech is rejecting, to ensure they’re not being eliminated for anything other than a lack of qualifications.
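One practical way to run that review, which your HR tech or data team could adapt, is to export screening outcomes from your applicant tracking system and compare selection rates across groups using the EEOC’s four-fifths rule of thumb. The sketch below is a minimal illustration only; the file name and column names ("group", "advanced") are hypothetical placeholders, not any particular vendor’s export format.

```python
import csv
from collections import defaultdict

def selection_rates(path):
    """Advancement rate per group from a (hypothetical) ATS export."""
    totals, advanced = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            group = row["group"]          # e.g. an age band or other category
            totals[group] += 1
            advanced[group] += row["advanced"].strip().lower() == "true"
    return {g: advanced[g] / totals[g] for g in totals}

def flag_adverse_impact(rates, threshold=0.8):
    """Groups whose rate falls below 80% of the best group's rate (four-fifths rule)."""
    best = max(rates.values())
    if best == 0:
        return {}
    return {g: r for g, r in rates.items() if r / best < threshold}

if __name__ == "__main__":
    rates = selection_rates("screening_outcomes.csv")  # hypothetical export file
    for group, rate in flag_adverse_impact(rates).items():
        print(f"Review needed: '{group}' advances at {rate:.0%}, "
              f"below 80% of the top group's rate.")
```

A flag from a check like this isn’t proof of discrimination, but it tells you where to look more closely at what the screening software is doing.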

Chatbots

HR departments are adopting AI chatbots as a novel tool for conducting initial screenings and interviews. These chatbots ask a set of predetermined questions and advance the candidate through the hiring process if the answers match what the system expects. They may ask about the candidate’s availability for alternate shifts or their experience with a specific type of machinery or software. If the candidate meets the chatbot’s criteria, they are usually granted an interview.

For some job seekers, a disability, speech impediment or an accent may throw off the chatbot, making it difficult for them to advance in the process. You’ll want to ensure there are options available other than chatbots for people who may not be able to use the technology to their best advantage.
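To make the risk concrete, here is a minimal sketch of how a rules-based screening chatbot typically decides whether to advance a candidate; the questions and accepted answers are hypothetical, not drawn from any particular vendor. Notice how brittle exact-match rules are: a perfectly valid answer phrased in an unexpected way fails the screen, which is precisely how a speech difference, an imperfectly transcribed accent, or an assistive-technology quirk can knock a qualified candidate out.

```python
# Hypothetical question / accepted-answer pairs; a real system's rules
# would come from the vendor's configuration.
SCREENING_RULES = [
    ("Are you available for night shifts?", {"yes", "y"}),
    ("Do you have a forklift certification?", {"yes", "y"}),
]

def screen(answers):
    """Advance the candidate only if every answer matches an accepted one exactly."""
    for (question, accepted), answer in zip(SCREENING_RULES, answers):
        if answer.strip().lower() not in accepted:
            return False, question  # rejected, and on which question
    return True, None

# A qualified candidate who answers in their own words is screened out.
advanced, failed_on = screen(["Yes", "Forklift certified since 2019"])
print(advanced, failed_on)  # -> False "Do you have a forklift certification?"
```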

AI interviews

At the cutting edge of HR tech are AI interviews. These involve a video interview between a candidate and the algorithm itself. Again, predetermined questions are built into the technology, but in addition to evaluating the candidate’s responses for qualifications, the software also offers a ‘score’ of their personality. It may be programmed to look for assertiveness, compassion or other traits, depending on the needs of the position. The tech is thought to eliminate the guesswork of a recruiter trying to determine whether a qualified candidate is the right fit for the job or the team.

AI analyzes voiced (or text) responses, determining whether the candidate has strong verbal and writing skills. It also analyzes body language, eye contact and other markers to reach its decision. For candidates with a disability, these additional scoring areas may make it difficult to move forward in the hiring process.

Fact versus fiction

Businesses were promised AI would eliminate bias in the hiring process. But researchers from the University of Cambridge’s Centre for Gender Studies found that’s a myth. They raised concerns about the tech’s claimed ability to ‘analyze the minutiae of a candidate’s speech and bodily movements to see how closely they resembled a company’s supposed ideal employee.’ If the technology sees the ideal employee as a specific race or gender, the bias may be built into the system.

They also found that changing the screen resolution and brightness influenced a candidate’s personality score. A German broadcasting company found its tech changed an applicant’s scores based on whether they wore glasses or a headscarf.

Even the largest companies have been caught out by AI’s mistakes. In 2018, Amazon scrapped its screening program after finding the algorithm had taught itself to be sexist. Another resume-screening program learned to favor candidates named Jared who played lacrosse in high school. These types of problems with the technology may grow worse before they’re resolved.

From the other side

Artificial first impressions  

Applicants are now turning to algorithms to create their resumes. These may be highly professional looking and put the candidate in the best possible light. They may also be impossible to distinguish from material the candidate wrote themselves. You may be impressed by a job seeker’s communication skills on their resume, cover letter and even correspondence, only to find later that they had no part in any of that writing.

Practice makes perfect

To help job seekers, Google has introduced an AI interview practice tool. Interview Warmup asks generic job interview questions and can also tailor questions to specific fields. The tool points out speech patterns or mannerisms that may come across as overused or unprofessional. This technology may help applicants perform better in interviews, but it may also help them outsmart the screening technology itself.

Just as the internet gave job seekers ways to ‘beat the system’ of resume-screening software, new tech may help them get past AI screenings and interview processes, rendering those tools ineffective at best.

Where we are now

Google CEO Sundar Pichai admitted in a recent interview that the company doesn’t fully understand how its program, Bard, works or how it teaches itself. Its engineers are working on understanding ‘black boxes’ and ‘hallucinations’ within the algorithms. Progress may be slow in coming.

As businesses and technology become more sophisticated, we may learn to lean more heavily on AI for recruitment, or decide, as Amazon did in 2018, to move away from it. Relying completely on technology could mean missing out on quality candidates if the system’s basic rules are inaccurate or incomplete. The challenge for businesses is to review all their hiring practices, with or without technology, to ensure there is no bias in the process. And, as with all recruitment, take the final step of verifying, through background screening, that the candidate you’re hiring is who they say they are.
