The use of artificial intelligence (AI) in HR technology is growing. From applicant tracking systems to recruiting and background screening solutions, AI can streamline workflows, speed hiring and save HR teams time and effort. But if not used carefully, AI also poses risks of unintentional discrimination.

Both the Federal Trade Commission (FTC) and the Equal Employment Opportunity Commission (EEOC) have announced they will focus on ethical use of AI in employment. What do employers need to know when incorporating AI into hiring decisions?

AI in Hiring: Trends & Uses

AI technology performs tasks once done only by humans and learns from experience so that it continually improves. According to a 2021 study reported by Human Resource Executive, 60% of companies currently use AI for talent management and over 80% plan to increase their use of AI in the next five years.

There are solutions incorporating AI for a variety of HR tasks. Some of the most common uses include:

    • Scanning and filtering resumes

    • Curating background check results

    • Analyzing job candidates’ social media presence

    • Evaluating candidates’ skills to identify top candidates

    • Scheduling interviews

    • Answering candidates’ questions via chatbots

    • Assessing candidates’ body language, speech, and behavior during video interviews

Benefits & Risks of AI in Hiring

When used correctly, AI-based hiring tools can deliver many benefits for employers. They can save time by automating formerly manual tasks. This improves efficiency for recruiting teams and hiring managers, giving them more time to spend on higher-value tasks and potentially decreasing time-to-hire.

By automatically guiding candidates through the steps of the hiring process and responding more quickly to candidates’ questions, AI can greatly improve the candidate experience. Finally, AI can remove biases that human hiring managers may unintentionally bring to the hiring process, helping employers build a more diverse workforce.

But poorly implemented AI-based solutions can pose serious risks employers should be aware of. A report from Harvard Business School found that applicant tracking systems using AI often remove qualified candidates from consideration simply because they’re missing one skill or fail to meet one minor requirement.

At a time when employers are already struggling to find qualified employees, this unnecessarily restricts your candidate pool.

By limiting potential candidates to those who fit a predetermined mold or have certain characteristics, AI technology can also result in a less diverse workforce. This robs companies of the benefits they enjoy when employees bring diverse experiences, skills and insights to work.

When used incorrectly, AI may even lead to unintentional discrimination. In 2015, Amazon discovered its recruiting software was weeding out female candidates. The AI was trained to look for candidates similar to Amazon’s top employees. Since most of those employees were men, the AI gradually began penalizing resumes that included the word “women’s,” such as “women’s volleyball team.”

New FTC Guidance & EEOC Initiative

Both the FTC and the EEOC have been studying the issue of AI in HR since 2016. With the use of HR technology leveraging AI on the rise, both agencies have recently stepped up their attention to the topic.

In April 2020, the FTC released new guidance, “Using Artificial Intelligence and Algorithms.” This guidance states that AI tools should be transparent, explainable, fair, and empirically sound and that employers should hold themselves accountable for compliance, ethics, fairness and nondiscrimination.

While the FTC’s guidance is not legally binding, employers should be aware that this is an area of growing concern for the agency.

In October 2021, the EEOC announced an initiative to ensure AI HR tools comply with the federal civil rights laws it enforces. “These tools may mask and perpetuate bias or create new discriminatory barriers to jobs,” the EEOC stated. “We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Best Practices for Employers Using AI in Hiring

Automated decisioning in itself is not the issue. The problem arises when AI has an adverse impact on a particular group of people. HR teams should be proactive by following best practices for using AI in hiring and employment decisions. How can you do this? Take the FTC’s guidance as your roadmap.

    • Be transparent: Communicate to candidates what data you use, how you use it and how they can control your use of their data.

    • Be explainable: When decisions are made using AI, be able to explain the reason for the decisions to candidates.

    • Be empirically sound: Verify that the data your background screening provider uses is accurate and up to date.

    • Be fair: Above all, be mindful not to unintentionally discriminate against candidates in protected classes via the use of automated decisions or AI.

Conduct an audit at least once a year to assess the impact that your AI, automated decisioning and other rules using algorithms have on human beings. This audit should be both qualitative and quantitative.

    • Qualitative: Look around you and survey your employees. Are the identities reflected in alignment with the culture you hoped to create when you started using AI in your screening and hiring process? If not, what needs to change?

    • Quantitative: Use your internal data and analytics, as well as that provided by your ATS, Human Resource Information System (HRIS) and background screener, to evaluate the impact of automated decisioning on your candidate and employee population. If a certain population is negatively affected and they are in a protected class, you could be violating EEOC regulations.

Even if the affected population is not in a protected class, consider the ethical implications. Is the impact on this group fair? Also consider how your business is affected. Are you squashing diversity or missing out on qualified candidates? The way you use AI can impact your corporate reputation.
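One common way to quantify the audit described above is the EEOC's "four-fifths rule": if a group's selection rate is less than 80% of the highest group's rate, that result may indicate adverse impact and warrants review. The sketch below is illustrative only; the group labels and counts are hypothetical, and a ratio below 0.8 is a screening heuristic, not a legal conclusion.

```python
# Illustrative sketch: checking automated screening outcomes against the
# EEOC's "four-fifths rule" heuristic. All figures below are hypothetical.

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.

    outcomes: dict mapping group label -> (selected, applicants).
    Returns a dict mapping group label -> impact ratio vs. the top group.
    A ratio below 0.8 flags potential adverse impact for review.
    """
    rates = {g: selected / applicants for g, (selected, applicants) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes: (passed automated screen, total applicants)
outcomes = {
    "group_a": (90, 200),   # 45% selection rate
    "group_b": (30, 100),   # 30% selection rate
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Feeding real numbers from your ATS or HRIS into a check like this, on a regular schedule, turns the quantitative audit into a repeatable process rather than a one-off exercise.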

Based on your audits, you can adjust the data input and decision rules you set to improve outcomes. AI learns from experience, but it needs good guidance to make good decisions.

Put the results of your audit in writing. This creates documented evidence that you’re making a good-faith effort to follow FTC guidance, comply with equal employment opportunity laws, understand the impact of AI and work to continually improve.

Questions to Ask Your Background Screening Provider

Consumer reporting agencies (CRAs) may develop the AI and algorithms used to automate the delivery of background screening results, but as an employer, you are ultimately responsible for the decisions you make using a background screening solution.

Protect yourself by taking steps to ensure your background screening provider is using AI in a way consistent with fairness. Ask these questions:

    • How does the CRA gather and use data? How do they use AI? The more you know, the more transparent you can be with candidates. Talk to your CRA to learn as much as you can about their methods. While certain specifics will be proprietary back-end technology, your CRA should be happy to share with you whatever they can.

    • What opportunities does the screener offer to enhance fairness and transparency? Look for a CRA that makes it easy to communicate with candidates and allows them to tell their side of the story. For example, GoodHire’s built-in Comments for Context feature helps candidates provide a fuller picture of background check results that contain criminal records, making it easier for employers to conduct the individual assessments fair hiring laws require.

    • How granular are the adjudication matrices the CRA offers? Some screening providers let you automate the process of adjudicating background check results. Unfortunately, overly broad adjudication rules can adversely affect more people, and many background screening providers offer only limited options. An automated adjudication solution that lets you set more nuanced parameters allows for more individualized decision-making. Keep in mind, however, that your screening solution can only be as good as your internal rules and policies around decision-making.

    • How robust are the solution’s filtering capabilities? Not all CRAs offer filtering. If yours doesn’t, you can remove information from a background check report manually, but seeing it may still create unconscious bias. Robust filtering can ensure that you never see records that aren’t relevant to the job duties and may negatively impact your opinion of a candidate. Look for a solution that lets you customize filters to suit your company’s needs, follow state and local laws and comply with industry regulations.

    • How does the screener enhance accuracy? Making hiring decisions based on accurate data can reduce risk, so it’s important to choose a background screening provider that strives for accuracy and takes steps to report only data it knows is accurate. For example, the current trend toward date of birth (DOB) redaction in court records has made it more difficult to verify identities. To help mitigate this issue, some companies have developed proprietary data processes that help enhance accuracy even when DOB is more difficult to tie to individual records.
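The record-filtering idea in the list above can be pictured as a set of customizable rules applied before results reach a reviewer. This is a minimal sketch; the record fields, categories, and lookback window are hypothetical, and real filters must be built around the applicable state and local laws and your own compliance policies.

```python
# Illustrative sketch of customizable background-report filtering.
# Field names, categories, and thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Record:
    category: str      # e.g., "criminal", "credit", "driving"
    age_years: float   # how long ago the record originated
    state: str

def make_filter(relevant_categories, max_age_years):
    """Return a predicate keeping only job-relevant, sufficiently recent records."""
    def keep(record):
        return (record.category in relevant_categories
                and record.age_years <= max_age_years)
    return keep

# Hypothetical policy for a driving role: driving and criminal records
# within a 7-year lookback are treated as job-relevant.
driver_filter = make_filter({"driving", "criminal"}, max_age_years=7)

records = [
    Record("driving", 2, "CA"),
    Record("credit", 1, "CA"),     # not job-relevant -> filtered out
    Record("criminal", 10, "CA"),  # outside lookback -> filtered out
]

visible = [r for r in records if driver_filter(r)]
print([r.category for r in visible])  # ['driving']
```

The point of the design is that irrelevant records are excluded before a human ever sees them, which is what prevents the unconscious bias the bullet above describes.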

What Does the Future Hold for AI in Hiring?

The EEOC has indicated it will produce best practices and guidance on what it considers ethical and appropriate frameworks for automated decision-making. The FTC will likely issue more guidance over the next few years as well.

In developing this guidance, regulators are likely to look to Europe, which has stricter data privacy regulations than the US, as a model. Monitoring privacy trends in Europe as well as EEOC and FTC announcements can help you stay current on the latest developments.

The increased focus the FTC and EEOC are placing on AI in hiring creates an opportunity for employers to develop clearer data policies they can share with candidates and employees for greater transparency.

This is already the standard in Europe, and US companies that embrace it now can get ahead of the curve, reduce the risk of enforcement action and enhance both corporate reputation and candidate experience.

Elizabeth McLean

Izzy is an FCRA-compliance attorney and expert in the background screening legal landscape. Prior to joining Inflection, she worked as an attorney for Hirease, where she honed her risk-mitigation skills and provided compliance guidance to gig economy customers such as Handy and Uber. She received a journalism degree from the University of North Carolina at Chapel Hill and a Juris Doctor degree with honors from the University of North Carolina School of Law. She enjoys singing and playing acoustic guitar, bird watching, and listening to old jazz records. She lives in Tucson, Arizona, with her wife, two beagles, and a cat.