With the blockbuster debut of ChatGPT and the recent revelations about Microsoft’s new Bing chatbot (which has generated responses that seem neurotic, threatening, and emotionally coercive), the benefits and perils of generative AI have been dominating the headlines. This has led to a flurry of questions about how increasingly powerful AI tools will affect a broad array of industries, including HR. For example, hiring professionals are considering how generative AI could help them source and evaluate talent – an issue that’s all the more salient at a time when companies urgently need a competitive edge in hiring.

At this time, it’s important to balance hype with caution. The temptation to embrace tools of this nature will be strong, but we can’t recommend using generative AI in recruitment right now. There’s no doubt that the generative AI tools we’re seeing today will improve. As they stand, however, these platforms often provide inaccurate information, produce output through an opaque process (which can expose a company to legal and regulatory challenges), and are prone to making biased judgments.

However, this doesn’t mean HR teams should dismiss AI altogether. By using the technology with rigorous controls and oversight in place – in conjunction with other proven inputs – they will be able to develop more efficient and data-driven hiring processes.

Efficient, Sure… but Worth the Risk?

The efficiency of generative AI makes it tempting to use in the hiring process. One way to think about generative AI is that it saves you the trouble of consulting the internet and instead produces a neat summary of what the internet says, or is likely to say. The information provided by generative AI tools can sometimes be used directly, or as inspiration for your own content. However, at this time, there are several issues that could arise when using these tools to create content for HR purposes.

Let’s start with a less risky use case and see how it stacks up. Generative AI can be used to generate content such as job advertisements. This use case poses less concern, mostly because the internet is awash in job ads, which gives the model plenty of content to draw on. Writing job ads can be pretty tedious for humans, so it seems like a great use case for generative AI. Even here, though, it’s not without risk. If existing job ads use language that is biased according to age, race, or gender, then those biases will also appear in job ads produced by the tool. The tool could produce content that misrepresents your job or that is factually incorrect. And you could inadvertently breach another party’s copyright if the tool reproduces existing text exactly. A simple automated audit, sketched below, can catch some of the more obvious bias issues before an ad is posted.
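As a minimal illustration of the kind of guard rail that helps here, the sketch below scans generated ad copy for terms often flagged as gendered or age-coded. The wordlists are illustrative examples only, not a vetted lexicon, and a check like this supplements human review rather than replacing it.

```python
# Minimal sketch: flag potentially biased wording in a generated job ad.
# The wordlists below are illustrative, not a vetted or complete lexicon.

GENDER_CODED = {"rockstar", "ninja", "dominant", "aggressive", "nurturing"}
AGE_CODED = {"young", "energetic", "digital native", "recent graduate"}

def audit_job_ad(text: str) -> dict[str, list[str]]:
    """Return flagged terms found in the ad, grouped by concern."""
    lowered = text.lower()
    return {
        "gender_coded": sorted(t for t in GENDER_CODED if t in lowered),
        "age_coded": sorted(t for t in AGE_CODED if t in lowered),
    }

ad = "We need an energetic rockstar developer, a digital native who thrives under pressure."
for concern, terms in audit_job_ad(ad).items():
    if terms:
        print(f"{concern}: {', '.join(terms)}")
# gender_coded: rockstar
# age_coded: digital native, energetic
```

A wordlist check like this is crude, but it makes the review step concrete: flagged terms prompt a human to rewrite before the ad goes out.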

Another use case is using generative AI to create a formal job description. This is riskier because a job description can have important legal consequences if a selection decision is disputed: the job description, and what’s in it, is often relied upon as the source of truth for job duties and the required knowledge, skills, abilities, and other characteristics.

Using a generative AI tool to help develop a job description means that the content of that description is based partly on information about the job (i.e., the prompt supplied to the tool) and partly on information that effectively boils down to “words that tended to co-occur with the prompt text on the internet.” Using generative AI to create documents like this could seriously undermine their utility and legal defensibility.

Similar issues apply when using generative AI to develop interview questions. Interview questions need to be relevant to the job to be legally defensible, and there is always a risk that questions generated by these tools will reflect the typical job descriptions in their training data rather than the actual, specific job being recruited for.

A Lack of Guard Rails

Experience with generative AI to date suggests that there are few guard rails preventing the language model from producing content that is nonsensical or incorrect. If the training data does not provide sufficient information for a meaningful response, a generative AI tool will fall back on the probabilistic nature of the language model and produce a response that is merely “likely” given its data. This risk is especially acute when generating interview questions for jobs that are highly specific or unusual.

A language model is based purely on the likelihood of text appearing in the context of other text. The conversational nature of AI chatbots can lead users to believe that the tool understands intent and applicability, or is guided by some kind of knowledge-based process, but these systems have no understanding of what the user is trying to achieve or how the produced content may be used. This can result in content, such as interview questions, that may be illegal or discriminatory in some jurisdictions.
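To make the “purely probabilistic” point concrete, here is a toy sketch of next-word sampling. Real models use learned neural networks over enormous vocabularies rather than a hand-written table, but the core mechanism is the same: pick a continuation in proportion to how often it followed similar text in the training data, with no notion of intent, truth, or lawfulness.

```python
import random

# Toy next-word distribution: how often each word followed
# "the candidate is" in some imagined training text.
NEXT_WORD = {
    "qualified": 0.40,
    "experienced": 0.30,
    "young": 0.20,       # a biased continuation can be "likely" too
    "unavailable": 0.10,
}

def sample_next(dist: dict[str, float]) -> str:
    """Sample one continuation in proportion to its probability."""
    words = list(dist)
    return random.choices(words, weights=[dist[w] for w in words])[0]

prompt = "the candidate is"
for _ in range(3):
    print(prompt, sample_next(NEXT_WORD))
# e.g. "the candidate is qualified" / "the candidate is young" ...
# Nothing in this process checks whether the output is true, relevant, or lawful.
```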

Consistency (And Lack of Bias) Not Guaranteed

There is a much higher level of risk when generative AI is used to process or draw conclusions from applicant data – for example, using a generative AI tool to summarize a candidate’s resume, or to compare two candidates’ resumes. Doing this may violate a host of data privacy and data processing regulations, depending on your legal jurisdiction.

Additionally, when used through their off-the-shelf online services, generative AI tools do not guarantee that the same input data will produce the same response. A baseline requirement for using AI and automation to evaluate candidates is that identical input should produce identical output, and the probabilistic nature of the underlying language models means this will not occur without special modifications or settings being applied (see the sketch below). Tools such as ChatGPT have also not been validated for use in employee selection contexts: there’s no evidence that they produce judgments which aid in the selection of high-performing employees. And the results they produce can reflect the same kinds of biases seen in the online text used as their training data, including bias based on protected classes such as race, gender, and age.
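To illustrate the “settings” point, the sketch below assumes the current OpenAI Python SDK; the model name is a placeholder, and temperature and seed are that SDK’s levers for reducing randomness. Even with temperature set to 0 and a fixed seed, the provider describes reproducibility as best-effort, which is exactly why off-the-shelf services fall short of the identical-input, identical-output standard.

```python
# Sketch assuming the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

def summarize(resume_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": f"Summarize this resume:\n{resume_text}"},
        ],
        temperature=0,  # remove sampling randomness as far as possible
        seed=42,        # request reproducible sampling (best-effort only)
    )
    return response.choices[0].message.content

# Even with these settings, two identical calls are not guaranteed to match,
# so the identical-input -> identical-output requirement is not met.
```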

Given that generative AI tools are trained on internet text, a useful analogy is to ask whether it would be reasonable to post a summary of an applicant’s resume on Reddit and ask users to comment on how well suited the applicant would be to the job. If that strikes you as inappropriate, then using a generative AI language model to do the work is essentially the same thing.

Developing a Reliable and Holistic Hiring Process

Considering the growing interest in developing AI-powered hiring processes, HR professionals need to figure out how these processes can be implemented productively and with minimal risk. SHRM’s 2022 survey found that 85 percent of HR professionals who use automation and AI for hiring do so to save time and increase efficiency. But just 18 percent believe these technologies improve their “ability to identify more diverse candidates,” while 46 percent want resources that will help them “identify potential bias when utilizing automation or AI tools.”

These findings indicate that HR professionals are rightly circumspect about using AI for hiring, and they should extend this caution to tools like ChatGPT. There are many ways employers can address concerns about inaccuracy and bias – for example, they can develop a hiring process that incorporates multiple inputs to generate a strong talent signal and filter out information that isn’t job-relevant. Trained hiring professionals can use AI to retrieve basic information about candidates, but they should also use objective assessments, structured interviews, and other resources to fairly evaluate potential hires.

There’s no doubt that generative AI is an extremely powerful tool that will only become more useful in the coming years. But we’re also learning more every day about the limitations of these models, and about how difficult they are to improve. This is why HR professionals should use AI with care and remember that there are many other ways to make the hiring process as fair and predictive as possible.


Authors
Matthew Neale

Dr. Matthew Neale is an IO Psychologist and VP of Assessment Products at Criteria, specializing in the development and delivery of psychometric assessments used by employers to inform hiring decisions. In this role, Dr. Neale works with dedicated and professional teams of organizational psychologists, developers and designers to deliver innovative psychometric assessments that give client organizations genuine insight into their current and potential talent.