Algorithm bias


In some circles, hiring algorithms have acquired the image of a ‘bias-free stakeholder’ in the hiring process. Managers often assume that because the software is devoid of emotions (unlike humans), using it removes all personally motivated bias from hiring.

Any application of AI (Artificial Intelligence) or Machine Learning (ML) learns from the existing data fed to it. This raises concerns, chief among them the amplification of pre-existing bias in that data, that have made hiring managers approach these tools with caution.

Amazon’s experiment with an AI-based recruiting system was scrapped after the system started penalizing resumes that included the term “women’s” or the names of women’s colleges. Essentially, the software taught itself to prefer the resumes of male candidates over those of female candidates.

A study by the National Bureau of Economic Research (NBER) found a similar market disposition in human hiring: resumes with white-sounding names (such as ‘Emily’ or ‘Greg’) received more interview callbacks than otherwise identical resumes with black-sounding names (such as ‘Lakisha’ and ‘Jamal’).

This raises the question of how these algorithms actually work.

How hiring algorithms work

‘Hiring algorithms’ is a broad term for the algorithms used by job boards, recruiting sites, and resume evaluation tools. They find a place throughout the entire hiring cycle.

These systems do have their benefits. However, their documented amplification of biases makes it harder for hiring managers to trust them.

To understand how to overcome these biases, we must examine how the algorithms introduce bias at each step.

Bias in sourcing algorithms

Job boards attract recruiters to advertise on their platform with a promise of a wider reach and efficient spending of the hiring budget. 

They circulate job descriptions across their platform using predictive algorithms. This job advertisement step introduces the first level of bias.

The focus of these boards is to optimize ad spend. They advertise jobs based on who is most likely to click the job ad, not on the job-candidate match. They borrow this ad-distribution logic from real-world data, and this is where they pick up biases.

For example, if real-world data shows black candidates applying mostly to low-paying jobs, the same pattern will be used to distribute ads: the majority of ads for low-income jobs will be shown to black candidates, and more qualified black candidates never see the ads for high-income jobs.
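
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The data is synthetic and the numbers are invented; it is not any real job board’s code, only an illustration of how a click-through model trained on historical patterns reproduces them in ad delivery.

```python
# Hypothetical sketch (not any real job board's code): a click-through
# model trained on synthetic "historical" data reproduces that history
# when deciding who should see a high-income job ad.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # 0/1: two demographic groups
job_tier = rng.integers(0, 2, n)   # 0 = low-income job ad, 1 = high-income

# Invented history: group 1 was mostly shown low-income ads, so its
# recorded clicks concentrate there.
click_prob = np.where((group == 1) & (job_tier == 0), 0.30,
             np.where((group == 1) & (job_tier == 1), 0.05, 0.15))
clicked = rng.random(n) < click_prob

X = np.column_stack([group, job_tier, group * job_tier])
model = LogisticRegression().fit(X, clicked)

# The "optimizer" now steers high-income ads away from group 1,
# replicating the historical pattern instead of job-candidate fit.
for g in (0, 1):
    p = model.predict_proba([[g, 1, g]])[0, 1]
    print(f"group {g}: predicted CTR on a high-income ad = {p:.2f}")
```

Note that the model never targets race as a goal; it simply optimizes clicks, and the historical pattern does the rest.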

In this process, real-world biases are replicated rather than contained. Recruiters have no control over it; often they are not even aware of it.

Some job boards and resume template sites tout personalization as their main feature. They learn from recruiters’ preferences as recruiters respond to candidate profiles for specific job roles over time. This makes them much like Facebook’s ad targeting: they replicate the biases seen in recruiter behavior.

If a recruiter subconsciously associates certain hobbies, educational choices, or names with a specific group, this pattern gets picked up, replicated, and finally reflected in the candidate profiles that the recruiter sees.
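
Here is an invented sketch of that feedback loop (the feature names and the update rule are made up for illustration): an online ranker nudges up whatever profile features the recruiter responds to, so a biased response pattern compounds over time.

```python
# Invented sketch of a personalization loop. Feature names are made up;
# this is not any real site's ranking system.
from collections import defaultdict

weights = defaultdict(float)  # feature -> learned preference weight

def score(profile_features):
    # Higher score = shown to the recruiter sooner.
    return sum(weights[f] for f in profile_features)

def update(profile_features, recruiter_responded, lr=0.1):
    # Features of engaged-with profiles drift up; others drift down.
    signal = 1.0 if recruiter_responded else -1.0
    for f in profile_features:
        weights[f] += lr * signal

# Toy log: the recruiter consistently responds to college A graduates.
log = [
    ({"college:A", "hobby:chess"}, True),
    ({"college:B", "hobby:chess"}, False),
    ({"college:A", "hobby:golf"}, True),
    ({"college:B", "hobby:golf"}, False),
]
for features, responded in log:
    update(features, responded)

# New, otherwise identical profiles now rank differently by college.
for profile in ({"college:A", "hobby:rowing"}, {"college:B", "hobby:rowing"}):
    print(sorted(profile), "->", round(score(profile), 2))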

Therefore, sourcing tools affect the visibility of jobs by reflecting real-world bias. 

Bias in selection algorithms

During screening, hiring managers often use ‘knockout’ questions based on the core requirements of the job, though these may go beyond the job description. The questions come directly from recruiters and are not themselves screened for bias. Even shortlisted candidates can go no further if recruiter bias kicks in before or after the conversation.

Some tools use machine learning (ML) to bypass this interpersonal bias to some extent. Even then, these tools learn from past selection data, which, as you might expect, often reeks of racial or gender prejudice.
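
As a minimal, hypothetical sketch of that mechanism (synthetic, exaggerated data; not Amazon’s or any vendor’s actual system), consider a screening model fit on past accept/reject labels: it learns whatever correlated with those labels, including gendered resume terms.

```python
# Hypothetical sketch with synthetic, exaggerated data: a screening
# model fit on past hiring decisions inherits the bias in its labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, software engineering intern",
    "women's chess club captain, software engineering intern",
    "led robotics team, backend developer",
    "women's college graduate, backend developer",
]
hired = [1, 0, 1, 0]  # past (biased) human decisions

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on "women" is negative: the model has encoded the
# prejudice in its training labels, not candidate quality.
for term, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    if term == "women":
        print(term, round(coef, 3))
```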

In fact, these ML-based algorithms work against the very purpose of diversity and inclusivity programs aimed at eliminating this bias.

More sophisticated tools predict a candidate’s ‘on the job’ success from factors such as longevity, performance, productivity, and the absence of disciplinary actions or frequent job changes, as sketched below.
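
As a hedged illustration, here is what such a feature set might look like in code. The field names and weights are invented, not taken from any real vendor, and each proxy can quietly encode past inequity.

```python
# Illustrative only: invented field names and hand-picked weights, not
# any real vendor's scoring model. Each proxy can encode past inequity,
# e.g. short tenure may reflect a hostile past workplace rather than
# the candidate's likely future performance.
from dataclasses import dataclass

@dataclass
class SuccessFeatures:
    avg_tenure_years: float     # longevity at prior jobs
    performance_rating: float   # subjective manager scores
    output_per_quarter: float   # productivity proxy
    disciplinary_actions: int
    job_changes_last_5y: int

def naive_success_score(f: SuccessFeatures) -> float:
    return (0.4 * f.avg_tenure_years
            + 0.3 * f.performance_rating
            + 0.3 * f.output_per_quarter
            - 0.5 * f.disciplinary_actions
            - 0.2 * f.job_changes_last_5y)

print(naive_success_score(SuccessFeatures(2.5, 4.0, 3.2, 0, 2)))
```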

Since employment assessments are regulated in the United States, assessment vendors must show how they remove bias from their algorithms. In practice, though, these demonstrations are often subjective evaluations from within the company, which become yet another source of bias seeping into the hiring process.

Vendors can even bypass these measures altogether by justifying an algorithm that produces inequitable results with a concrete business interest.

How to remove bias from hiring algorithms

The first step to removing bias introduced or amplified by hiring algorithms is to discard the idea that these algorithms are flawless. To build a better process going forward, you must look honestly at both the positive and the negative outcomes the algorithms you use produce.

So how can recruiters who rely on hiring algorithms keep the process fair and unbiased?

1. Feed a diverse dataset to your hiring algorithm

As we have seen, hiring algorithms do not act alone: they optimize results for the dominant group within their input datasets. If their results indicate bias, the data fed to them must have contained the same bias, albeit to a lesser degree.

To eliminate bias from the outcomes, you need to eliminate it from the training data. Use a more diverse dataset to train your AI-based hiring algorithm.

If you do not have a diverse dataset, you can train your AI to optimize for underrepresented factors of minority or non-primary groups. If your resumes are restricted to a specific geographic location, include the names of women’s colleges in that city, state, or region. When correcting for racial bias, train your AI to place more emphasis on non-white names.

If you do have a diverse dataset but the dominant groups still skew the results, change the definition of the dominant group for your algorithm: feed it data models that weight the factors representing non-primary groups more heavily, as in the sketch below.
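
Here is a minimal sketch of that reweighting idea, assuming a tabular training set with a known group label. The data is synthetic, and inverse-frequency weighting is just one common approach, not the only one.

```python
# Minimal sketch of group reweighting on synthetic data; not a complete
# fairness intervention, only an illustration of the idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1_000

group = rng.choice([0, 1], size=n, p=[0.9, 0.1])  # group 1 underrepresented
skill = rng.normal(size=n)
hired = (skill + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = skill.reshape(-1, 1)

# Inverse-frequency weights: each group contributes equally in total,
# so the model no longer optimizes mainly for the dominant group.
counts = np.bincount(group)
sample_weight = (n / (2 * counts))[group]

model = LogisticRegression().fit(X, hired, sample_weight=sample_weight)
print({g: round(float(sample_weight[group == g].mean()), 2) for g in (0, 1)})
```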

2. Make your hiring team more diverse

Because AI tends to follow the real world rather than improve on it, make your hiring and AI teams more diverse. Hire not only people of different genders and races but also people from non-traditional professional backgrounds: creatives, linguists, sociologists, and passionate people from other walks of life.

Focusing on a diverse team of professionals may still leave out the best hires if you rely on outdated success markers such as mandatory degree qualifications. To build a truly diverse team, open hiring to candidates with non-academic expertise by focusing on a broader set of skills.

Smaller startups often lack the resources for a full overhaul of their hiring process. In such cases, consider augmenting your team with staff from another, more diverse agency, and follow staff augmentation best practices throughout the engagement.

For the long-term sustainability of such initiatives, consider developing training programs to help diverse candidates integrate into your hiring and AI teams. 

3. Monitor your hiring algorithm’s outcomes at all stages

Before you jump onto the AI-based hiring bandwagon, talk to the vendor of the hiring software and ask them to explain how it works. If they cite IP concerns or resist sharing details of their algorithm, ask for a test run.

Be very aware of the existing biases in your sample dataset before you feed it to the tool. 

If you already use such a hiring tool (whether proprietary or third-party), hire an auditor to test the software. You can also audit the outcomes yourself by looking for previously unnoticed trends among the selected and rejected candidate pools.

These audits will help you understand the scope, depth, and frequency of the retraining needed to remove the biases they reveal.
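
As a starting point, a simple self-audit can compare selection rates across groups. The “four-fifths rule” used in US employment testing is one common red-flag heuristic (not a legal verdict), and the numbers below are invented.

```python
# Simple self-audit sketch with invented numbers: compare selection
# rates across groups and flag large gaps using the four-fifths rule.
rates = {
    "group_a": 120 / 400,  # 30% of group A applicants selected
    "group_b": 45 / 300,   # 15% of group B applicants selected
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio to highest={ratio:.2f} -> {flag}")
```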

Final Words

An ideal solution would be an end-to-end bias-screening framework aimed at removing bias from the entire hiring process, from sourcing to selection. This requires buy-in from all stakeholders. Only when you consider the side effects of using these algorithms can you create a plan that addresses the core of these issues.



Author

Rahul Varshneya is the co-founder and President of Arkenea. Rahul has been featured as a technology thought leader in media outlets including Bloomberg TV, Forbes, HuffPost, and Inc.