AI in the Hiring Process: Is It Really Fair and Unbiased?
Sun, April 18, 2021

The hiring process is unfair because countless applicants are being ignored. / Photo by: Sudtawee Thepsuponkul via Shutterstock

 

AI can produce both good and bad outcomes, according to Frida Polli, writing in Harvard Business Review, a general management magazine. The public appears to focus on the bad, especially when it comes to AI bias. However, this fear overlooks the fact that the root cause of bias in AI “is the human behavior it is simulating.” 

You can send hundreds of resumes to potential employers, but chances are the job you are applying for is “systematically biased,” wrote Julie Schulte of the World Economic Forum, a non-profit organization. In the US, for instance, applicants with African-American names are “systematically discriminated against,” while those with White names receive more callbacks for interviews. Bias like this occurs because the data set used to train the AI is itself biased. 
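As a toy illustration of how a biased data set produces a biased model (the numbers, group labels, and “training” method here are all hypothetical, not drawn from any real system): a naive screener that learns callback rates from historical decisions simply reproduces whatever disparity those decisions contained.

```python
from collections import defaultdict

# Hypothetical historical decisions: (perceived_group, got_callback).
# The history itself is biased: equally qualified groups, unequal callbacks.
history = ([("group_a", True)] * 50 + [("group_a", False)] * 50
           + [("group_b", True)] * 32 + [("group_b", False)] * 68)

def train(history):
    """'Learn' a score per group: simply the historical callback rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [callbacks, total]
    for group, callback in history:
        counts[group][0] += int(callback)
        counts[group][1] += 1
    return {g: c / n for g, (c, n) in counts.items()}

model = train(history)
print(model)  # {'group_a': 0.5, 'group_b': 0.32} -- the bias is replicated
```

Any real screening model is far more complex than this lookup table, but the failure mode is the same: the model’s “scores” are a compressed copy of past human decisions.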

Why the Hiring Process Is Unfair

Let’s first tackle the norms of hiring, which are rigged for three reasons. First, unconscious human bias makes hiring unfair. In general, recruiters review applicants’ resumes before inviting them for an interview, and this process has shown unconscious bias against older people, women, and minorities. 

Second, the hiring process is unfair because countless applicants are simply ignored. The success of LinkedIn and other career-oriented platforms has created enormous applicant pools: an average of 250 job seekers “apply for any open role.” It is not possible to handle pools that large manually, so recruiters limit their attention to the 10 to 20 percent “they think will show the most promise.” 

Examples include applicants from Ivy League campuses, employee-referral programs, and passive candidates from competitors of the companies looking to fill vacant positions. 

Lastly, traditional hiring tools are biased by default. US federal regulations allow a hiring tool that screens out some groups as long as it is “job-related,” meaning it selects for characteristics shared by successful employees. But suppose all of a company’s “successful employees” are white men: a traditional assessment built on their traits will most likely be biased against women and minorities. 

How Can AI Help Eliminate Bias? 

AI can help eliminate human bias, although many AI tools used for hiring have flaws. These flaws can be addressed: AI can be designed to “meet certain beneficial specifications.” AI practitioners such as OpenAI and the Future of Life Institute are formulating design principles to make AI ethical, beneficial, and fair for everyone. 

AI should be designed so that it can be audited and any bias found in its system removed. An AI audit should be akin to the safety testing of a new car before someone drives it. If the technology is defective or doesn’t meet standards, it should be fixed before “it is allowed into production.” 
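To make the audit idea concrete, here is a minimal sketch of one check such an audit could run: the “four-fifths” rule of thumb from the US Uniform Guidelines on Employee Selection Procedures, which flags a tool when any group’s selection rate falls below 80 percent of the highest group’s. The function names and applicant numbers below are hypothetical, and a real audit would involve much more than this single test.

```python
def selection_rate(selected, applied):
    """Fraction of applicants from a group who were selected."""
    return selected / applied

def four_fifths_check(rates):
    """Mark each group True if its selection rate is at least 80% of
    the highest group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical audit numbers for a screening tool.
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
print(four_fifths_check(rates))  # group_b fails: 0.30 / 0.48 < 0.8
```

Running a check like this on every release, the way crash tests gate every car model, is the kind of routine scrutiny the author is calling for.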

Moreover, AI can assess the entire pool of applicants rather than forcing humans to shrink the pool through biased shortcuts. An automated top-of-funnel process can help remove bias while reducing the burden on recruiters. Unfortunately, some companies admit that only a small portion of the candidates who apply are ever reviewed. 

While this comes as a shock, it should serve as a driving force for technologists and lawmakers to work together on tools and policies that “make it both possible and mandatory for the entire pipeline to be reviewed.” Existing regulatory frameworks on employment and hiring must be clarified and updated to accommodate AI in a way that encourages equal opportunity in hiring. 

AI Is Beneficial in Recruitment but Is it Really Fair? 

Recruiters source potential applicants via active headhunting, advertisements, or job descriptions, and AI is usually used to optimize the resulting pipeline of candidates. Once your application is up for screening, any bias in the AI will influence whether it is rejected: the system narrows the field by filtering out applications it deems “unfit.” If you pass the screening, you are invited for an interview. 

The interview itself may also use algorithms to “support the employer’s final selection decision.” For example, US-based HireVue, an online video interviewing and pre-employment assessment platform, evaluates candidates based on their facial expressions, tone, and the keywords they use during the video interview. 

 

Recruiters source potential applicants via active headhunting, advertisements, or job descriptions. / Photo by: Pixel-Shot via Shutterstock

 

Does this make you question the fairness of AI? Your job experience, skills, languages, and so on will be weighted and interpreted according to past hiring decisions. Those past decisions are used to train the AI to find the “right” applicant, replicating the same biases that were present in the past. 

Critics warn that AI can be just as biased as the human recruiters it replaces, stated Aaron Holmes of Business Insider, an American business and financial news site. Again, AI is trained by humans, on data sets that sort applicants into “good” and “bad.” As MIT Media Lab computer scientist Joy Buolamwini put it, “Data is not necessarily neutral.” AI bias can still emerge even when that is not a company’s intention. 
Bias will still manifest in one way or another even if a company uses AI to hire talent; it is not possible to be “objective” all the time. AI has the potential to address discrimination against women, minorities, and older people, but only if the humans training it are willing to make the change.