Several years ago, a company was considering a new piece of AI recruiting software. Wisely, the company demanded to look “under the hood” at the AI product to understand why it was making certain recommendations. The audit showed that the AI tool considered two candidate traits most indicative of job success: that the candidate’s name was Jared, and that the candidate played lacrosse in high school.
In my role, I speak to HR leaders about the use of AI every day. We know that AI presents a significant opportunity for our organizations to streamline hiring and frontline management – but we also know there are horror stories like this one, which show the clear challenges HR leaders face in evaluating AI technology.
Here are, in my view, the 3 most important commitments at the center of an ethical rollout of AI in HR. I didn’t invent these 3 considerations – if you research the world of “responsible AI” you will see them come up frequently. I’ve adapted them to make clear how they apply in the HR and recruiting context.
Transparency refers to the right of people subject to an automated system to understand that they are in fact being judged or rated by an automated system, and roughly what the system is looking for. This is a basic requirement of the fair use of AI because, without understanding what the system is doing, candidates have no reasonable way to know whether it is judging their applications fairly.
In my daily work we help companies understand new regulations around AI; in every single piece of proposed legislation we have read, there is a requirement for candidate transparency. As state and federal regulators look more closely into AI, I’m convinced that clear labeling of AI systems in HR will be a requirement.
Takeaway: Candidates have a right to know how their candidacy will be judged. Especially as new laws come into effect, be prepared to disclose the use of automated or AI systems to candidates.
When I’ve given presentations to HR audiences and asked for one word that comes to mind when they think of AI, “bias” is always in the top 3. This is a very serious concern, as it ties into both regulatory requirements and DEI commitments.
When we talk about bias in AI, we’re usually discussing training data. The data that is used to create AI algorithms must come from somewhere, and in HR it usually comes from past applications and current employees. A system, for example, might be asked to look at a set of successful employees for a certain role, determine what they have in common and then rate applications according to those factors. But when hiring has shown patterns of discrimination in the past, those patterns are likely to persist in the algorithm.
AI auditing is one answer. AI auditing refers to, among other things, providing assurance that AI algorithms don’t display this kind of bias (and that, when they are biased, they can be remediated or removed from the market). This safeguard is starting to be required by laws like New York City’s Local Law 144; be prepared to commission internal or independent audits of AI hiring systems.
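As a concrete illustration of what such an audit measures, here is a minimal sketch of the impact-ratio calculation that Local Law 144-style bias audits report: each group’s selection rate divided by the selection rate of the most-favored group. The sample data and group names below are hypothetical.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute per-group selection rates and impact ratios.

    outcomes: list of (group, selected) tuples, where selected is True
    if the candidate advanced past the automated screen.
    """
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, sel in outcomes if sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())  # selection rate of the most-favored group
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Hypothetical screening results for two demographic categories:
# group_a advanced 60 of 100 candidates, group_b only 30 of 100.
data = ([("group_a", True)] * 60 + [("group_a", False)] * 40
        + [("group_b", True)] * 30 + [("group_b", False)] * 70)

for group, (rate, ratio) in impact_ratios(data).items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for a group (here, 0.50 for group_b) is the kind of disparity an audit is designed to surface.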
Takeaway: Bias is a key issue in HR. Companies should regularly audit their systems to make sure that they are not exhibiting significant bias.
Very simply, explainability refers to the ability of human users to understand why an automated system is making the recommendations it does.
Recall our example of Jared the lacrosse player: that HR team was saved by demanding that their vendor explain exactly which factors in the algorithm drive employment decisions. Importantly, HR leaders need to know how those factors are weighted as well; a list of factors isn’t useful unless we also know how the algorithm weighs each factor against the others.
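To see why the weights matter as much as the factor list, consider this hypothetical sketch: two candidates scored on the same three factors, where changing only the weights flips which candidate the system recommends. All factor names, scores, and weights are invented for illustration.

```python
# Hypothetical scoring factors and two candidates' scores on each (0-1 scale).
factors = ["relevant_experience", "skills_assessment", "tenure_stability"]
candidate_a = {"relevant_experience": 0.9, "skills_assessment": 0.5, "tenure_stability": 0.4}
candidate_b = {"relevant_experience": 0.5, "skills_assessment": 0.9, "tenure_stability": 0.6}

def score(candidate, weights):
    """Weighted sum of factor scores -- a stand-in for an opaque model."""
    return sum(weights[f] * candidate[f] for f in factors)

# Identical factor list, two different weightings.
experience_heavy = {"relevant_experience": 0.7, "skills_assessment": 0.2, "tenure_stability": 0.1}
skills_heavy     = {"relevant_experience": 0.2, "skills_assessment": 0.7, "tenure_stability": 0.1}

# The same two candidates rank differently under each weighting.
print("experience-heavy:", score(candidate_a, experience_heavy), score(candidate_b, experience_heavy))
print("skills-heavy:    ", score(candidate_a, skills_heavy), score(candidate_b, skills_heavy))
```

Under the experience-heavy weighting, Candidate A comes out on top; under the skills-heavy weighting, Candidate B does. Without the weights, the factor list alone cannot explain either recommendation.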
It may go without saying, but it’s critical that we as HR leaders make sure the factors used by algorithms aren’t protected statuses in hiring, like race or gender. One of the challenges of AI, however, is that AI systems may use factors highly correlated with a protected class. For example, in the US, unfortunately, race is highly correlated with residential address. It’s important that HR leaders, along with legal teams, think through all the factors being considered and ask whether there are inadvertent connections between job-related factors and protected-class status.
Takeaway: As HR leaders evaluate vendors, they should demand that vendors be very open with them about the factors and weights that go into their algorithms. If a vendor can’t explain in simple terms why Candidate A was suggested over Candidate B, it’s likely time to choose another vendor.
As AI fever continues, HR leaders are going to be inundated by established and startup vendors with new AI products or new AI features of existing products. As that happens, the considerations of transparency, bias reduction and explainability must be top of mind.
John Rood is CEO and founder at Proceptual, an AI compliance platform. Proceptual provides compliance, training, and audit services to HR leaders, helping them comply with emerging regulations of AI and automated hiring technology.