On today’s episode of the RecruitingDaily Podcast, William Tincup speaks to Siobhan from Reejig about the business case for ethical AI.
Listening time: 32 minutes
Enjoy the podcast?
Thanks for tuning in to this episode of The RecruitingDaily Podcast with William Tincup. Be sure to subscribe through your favorite platform.
Siobhan is an award-winning workforce strategist obsessed with developing inclusive workforce intelligence and designing meaningful careers at scale. For almost two decades, Siobhan has worked across the UK, Ireland, the Middle East, South East Asia, China, Australia, and New Zealand, delivering the workforce behind some of Australia’s largest projects, including the Melbourne Metro, Roy Hill, W2B, and Sydney Metro.
Music: This is Recruiting Daily’s Recruiting Live Podcast, where we look at the strategies behind the world’s best talent acquisition teams. We talk recruiting, sourcing, and talent acquisition. Each week we take one overcomplicated topic and break it down so that your three-year-old can understand it. Make sense? Are you ready to take your game to the next level? You’re at the right spot. You’re now entering the mind of a hustler. Here’s your host, William Tincup.
William Tincup: Ladies and gentlemen, this is William Tincup and you are listening to the RecruitingDaily Podcast. Today we have Siobhan on for Reejig, that’s R-E-E-J-I-G, and our topic today is something I’ve wanted to explore for a long time, and we’ve finally got a guest who can actually talk about it. It’s the business case for ethical AI, so we’ll be exploring ethical AI for the show. So without any further ado, Siobhan, would you introduce both yourself and Reejig?
Siobhan Savage: Sure. And thanks, lovely to connect with you. So, Siobhan Savage, I’m one of the founders at Reejig. Reejig is a workforce intelligence platform; we help large organizations find, retain, and scale their talent. Reejig is powered by the world’s first independently audited ethical AI, so think of us as the central nervous system for all talent decisions made within your company.
William Tincup: Okay. We’re going to get to independently audited in a few minutes. First I wanted to go through, for the audience at least, some examples, without names and stuff like that, of unethical AI. What does that look like? Because practitioners, as you probably know as a former practitioner, are struggling with: where do robots start and end? Where do humans start and end? Where’s the line? It’s kind of shifting for a lot of different things. So what should be automated and where should the humans take over? And again, underneath some of that are the underpinnings of what we’re talking about today, but give us examples of what you’ve seen or what’s been out there in terms of unethical examples, or just bad uses, of AI.
Siobhan Savage: Yeah. I mean, I think the context for that answer would be, when I was a head of workforce strategy, so my career was basically growing up, finding and moving people, and we were looking to bring in AI to help automate a lot of our decision making when we came to who we would hire and who we would move in our business. And when I started researching this space, when you think about how AI makes decisions, if you make a decision that discriminates against an individual using AI, whether it was the human or the robot making that decision, you are liable in court. And that completely freaked me out because I was in this situation where I was like, “Okay, I really, really want to speed up how we work and use AI.” But at the end of the day, if we are making decisions that cause harm to an individual and are not in line with anti-discrimination laws, if I’m in front of that judge and that judge says, “You did not choose Jane, can you explain why you didn’t choose Jane for this role?” I couldn’t do it.
And that really sent me down this rabbit hole of trying to get my hands around, “Well, how do I know that all the vendors I’m talking to are actually doing the right thing?” And if you think about how AI is trained, simply put, it has to be trained on big data. It has to look at all the decisions that you make and all the previous people that have worked in your company; it takes all that big data and it trains the AI on who would suit a role based on the decisions we made before.
And unfortunately for us and pretty much most companies, we weren’t making good and fair decisions in the past. And a lot of my workforce, actually over 80% of my workforce at the time was made up of white men. And my view was, if we bring in an AI, it’s only going to double down on the basis of how we’ve made our decisions before. So that was really the kind of how it started sending me down the rabbit hole.
I then went and looked at multiple different vendors, and we just had to pause, because I was working on behalf of some of the governments as well and there was a lot of risk around bringing in AI. And then when we started Reejig, one of the first things that we said as a founding team was, we want to make sure that we’re creating AI that is going to cause no harm. And a lot of the vendors that I was looking at, not just in the HR space, but across the whole space, they were all telling customers that they were good and safe because they had audited their own work. And for me, that was like marking your own homework. It was pretty much, “Hey guys, trust me. I’m safe.”
William Tincup: I think I did a great job. Yeah.
Siobhan Savage: “Guys, I’m great.”
William Tincup: [inaudible 00:04:31]. Yeah.
Siobhan Savage: Yeah. So that was the basis of what was happening in the market, and we’re talking, what is it now? 2022. So that was like three years ago. And we made the commitment as a founding team that before we built a line of code, we needed to make sure that we were building our AI to be good and fair. Not good and fair because Siobhan Savage the salesperson tells the customer, but good and fair because we had an independent audit, the way you get one of the big four to audit your taxes. I had this vision that we would have that.
And unfortunately, it didn’t exist in the market. We had to go and find one. We had to go and actually search for this, which was a whole other journey in itself. So that was the context: one, why we did it, but also that the decision making, based really simply on the data the AI used, is biased and will continue to be, unless someone audits and checks that on a regular basis, along with the data sets and the algorithms they’re using to power any of those decisions.
William Tincup: It’s interesting because biases, I always look at biases like an onion. And as we peel the onion, we start to learn more about biases within our company and all across from hiring, if you will, all the way to internal promotions and things like that. It’s everywhere, but it kind of rears its head in different ways, weird ways. And so I love the fact that you started with a personal story of how you started to see, okay, the world as it is right now, if we just go and mimic what we’re already doing, we’re going to get the exact same result, so…
Siobhan Savage: Exactly. And I think that discrimination definitely exists.
William Tincup: Oh, yeah.
Siobhan Savage: In every country around the world, it’s just different types and you need to be mindful of each, if you roll out an environment in the US, it will have a very different data footprint than what you would roll out in the UK. So you’ve also got to be really careful about the laws in which each region that you work in, as well as how the data drifts within each region as well. So it’s actually a really complex and really quite challenging thing to do to be honest, it nearly killed us.
William Tincup: Well, and it’s challenging to do correctly. So you might not follow football, but Liverpool, which is not a club I support, actually have a phrase that I do like: win, the right way.
Siobhan Savage: Yeah.
William Tincup: Which, I like that. I don’t like them because I support a different club, but I like that approach, because you can win, but not by taking shortcuts, especially on behalf of your customers. If you’re looking at this and saying, “We’re doing this the right way. Not just because we’ll be able to sleep at night and all of that, but also because our customers will actually be protected if a lawsuit comes up.” And it comes down to: okay, it looks like there is a bias in a process. Well, then we can unpack that for them and show that no, there wasn’t.
Siobhan Savage: Yep.
William Tincup: So-
Siobhan Savage: I completely agree.
William Tincup: I love that. So let’s get into independently audited because a part of this journey that you’ve been on is also, fine, okay, well we can’t look at it ourselves and grade our own homework. Especially at that time, how do you find a group of people, an academic institution, et cetera, that big four, if you will, to then go and look at your stuff and then give you feedback? Critical feedback. Because this isn’t just like, “Okay, tell us we’re wonderful.” This is, “Tell us where we’ve made mistakes and where we need to fix it.”
Siobhan Savage: Yeah. And I think the really important thing is, when you think about the AI, it’s kind of like a black box. You don’t know why the decisions are being made, and what we were striving for was a glass box: how can we see in and tweak and rectify any of those decisions that get out of control? And most folks approach ethical AI like it’s an AI problem in itself, but it’s actually not. It’s everyone gathered around the fire, from your privacy folks to your HR practitioners, your AI folks, your ethics folks, your lawyers. It has to be everybody around the fire agreeing on: what is AI ethics in, specifically, talent? Because it’s very different from any other type of AI, whether it’s loans or video or anything. This is very specific.
And one of the things that we did was we spent ages looking for someone who was actually doing it, who would look independent to us and be reputable in our space. And all of them were coming at it from an AI person’s point of view. And we were like, “Nope, we need to find folks that are willing to put a team together that looks at it from all angles, to make sure that if we’re going to do this, we’re going to do it as best we can.” So we reached out to the World Economic Forum and we said, “Hey, we want to do this. Can you point us in the right direction?” And they said, “Hey, this is great that you want to do this, but actually this is the way we’re pushing the world. It doesn’t really exist. So unless you can get somebody to actually do it for you, there aren’t a lot of folks that are going to do this.”
And we really believed that ethical AI would come about quite like GDPR did. Once it happens, it kind of happens all over the world, and you get to the point where everyone has to do it or at least have some knowledge of it. So we reached out to lots of different folks, whether it was the big four or AI practitioners. And we ended up realizing that the universities actually have experts that sit on each part of that view. And if we could get a university to stand up a team, and it had to have representation from each part of that view, we believed that would be the most independent thing we could do that didn’t just focus on one part of the puzzle.
William Tincup: Right.
Siobhan Savage: So university-
William Tincup: There’s also no corporate interest. They’re doing it for academic reasons. They’re doing it to actually make sure that they’re doing it right.
Siobhan Savage: Exactly. And the University of Technology Sydney, their actual mission is designing technology for good.
William Tincup: Oh, cool. I didn’t know that.
Siobhan Savage: So their whole belief is we… And they’re a technology university, right? Anything that they build, or help with, or train their students on, it’s baking in that mission around doing no harm. And that was where we were really aligned. And so we actually partnered up with the University of Technology Sydney. They spent probably 12 months first defining what the ethical use of AI in talent decision making is, because you actually have to create a framework-
William Tincup: Without even looking at your code, they’re just trying to figure out what’s ethical and… No, not moral. What’s ethical in this decision tree?
Siobhan Savage: Exactly.
William Tincup: Oh, wow.
Siobhan Savage: And it’s really complex, right? Just giving you context, we were a tiny startup. At that point, we had four people. Everyone told me I was mental. Everyone was like, “Why are you doing this? Focus on growing the business. This is a stupid idea. Don’t do it.” That was what I was told on a regular basis, right?
William Tincup: Right.
Siobhan Savage: And we really had to stay firm to the idea that we believed that we wanted to be able to sleep at night and cause no harm. And we wanted to make sure our customers were not going to have any decisions made on their behalf that they weren’t aware of. And that was really core to who we are as humans, but also the kind of business that we want to create. Right?
William Tincup: I love it. I mean, first of all, I just love it on so many levels. Quick question about GDPR as you mentioned it, is there intersection points, while we’re in the story, is there intersection points with the way that GDPR looks at data, and especially who owns data and ethical AI?
Siobhan Savage: Yeah. That’s a really good call. So there are four key components of AI ethics. One, transparency: we can explain why Reejig has made a recommendation, removing the risk of that black box decision making, right?
William Tincup: Right.
Siobhan Savage: Yeah. And there’s accountability. So businesses are taking accountability for the decisions their AI is deploying; they are liable for those decisions, the HR team and the business are liable, and they get that. There’s the fairness part, which is that the algorithms are compliant with global regulations, anti-discrimination laws around the world, and human rights. And then the final pillar is privacy and security: with big data comes an increased responsibility. So we’ve got to ensure that privacy is respected, whether it’s the California privacy law or the GDPR; we need to make sure that personal data is secured. Training algorithms on data that we haven’t got permission and consent for from an individual, we make sure that does not happen.
William Tincup: Right.
Siobhan Savage: And this is very core to what we do as well. You can imagine, at Reejig, being a central nervous system of all your talent. We aggregate a minimum of seven different systems from across an enterprise. We create this data lake. We want to make sure that that data lake is, one, respectful of data privacy and security, but also of fairness.
William Tincup: Right.
Siobhan Savage: That we make sure that we train the AI, and as the AI… So if you move into a new country, or let’s say you do an acquisition and you buy a large company that’s got a hundred thousand people, that’s a massive amount of data drift, so you’ve got to keep checking in on it. So this is not a “Reejig did this once and now we think we’re great”; this is a commitment that we’ve made. One, to the business and to ourselves as founders, but also to our customers: anytime we enter a new market, anytime we start with a customer and they get really big, or anytime someone does an acquisition, we do a reaudit, and we do a regular audit of our business on an annual basis to make sure that we’re not moving away from that.
And actually, really interestingly, some of our customers have been really, really obsessive in this space. Think of big financial services risk organizations; they are really afraid of doing anything wrong around discrimination. They’re actually requesting our auditor to come in and audit before we go live.
William Tincup: Oh, that’s fantastic.
Siobhan Savage: So that they’ve got a moment in time, which is really cool. And we stay out of that. We just say, “Hey, we’re nothing to do with this. We’re the technology provider. Go and talk to the University of Technology Sydney. Do your own sort of audit.”
William Tincup: That lets them clean things up. If there’s something already, if there’s problems there, they’ll know what the problems are, they can fix that, and then technology can then scale that.
Siobhan Savage: Exactly. And I think what you’ll see now is, there was a group of customers that were doing it because they believed in it and they understood it. What’s happening now is the New York law. New York City has passed a first-of-its-kind law requiring that AI used in any talent decision making be independently audited by January 2023. So what you’re going to see now is, just like when GDPR came in-
William Tincup: Sure.
Siobhan Savage: You will see now, a wave of customers actually now wanting to get underneath this and understand it really quickly.
William Tincup: I love that. Oh, well I love it on so many levels. So you’ve said train the AI a few times. And of course, my little mouse brain goes to voice to text and how you train Alexa and voice to text over time. It actually starts to understand what you’re trying to do. You’re obviously dealing with more sophisticated things than that. So tell the audience, when you say train the AI, what does that mean?
Siobhan Savage: So we look at decision making that happened before, whether it was who we hired, who we promoted, or career pathing: what happened in our company before? And we use big sets of data like CVs, or information that sits within your HCM, and we can tell the patterns of what’s actually happened in your organization. And then what we do is we look at an individual: what are all the things that Jane has done before us? What are all the things that she’s done while with us? And looking at big data, what could she potentially do next? And we make sure that Reejig is actually learning what skills and potential an individual has, so that if a woman’s CV looks different because she’s taken some time off for parental leave, or a refugee has a software engineering degree but hasn’t actually landed a software engineering role because people are not giving them an opportunity, Reejig looks at all of their background and says, “Well, based on this individual and the people that you could hire into these roles, these folks would be suitable.”
So big data models around looking at CVs, skills, potential, career pathing models, to then look at making recommendations on who you would actually recommend, whether it’s a hire or promotion or a career pathway. And what Reejig does is it looks at, let’s say in my situation, I was working for a large global engineering firm, over 80% of the population were white men. Reejig actually sorts and cleans up all of that data and makes sure it removes any personal characteristics from any of that data so that we train the models, not based on someone’s personal characteristics and reference points, like ladies football club, or different wordings that the AI will pick up on. We actually clean up all of that data and we mirror the data to make sure that it’s balanced, so that you’re actually looking at a fair view of the market.
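(Editor’s note: to make that cleanup step concrete, here is a minimal, hypothetical sketch of what stripping personal characteristics from records and rebalancing a training set can look like. The field names, the protected-attribute list, and the downsampling strategy are illustrative assumptions, not Reejig’s actual pipeline.)

```python
import random

# Illustrative list of fields that encode personal characteristics.
PROTECTED_FIELDS = {"gender", "age", "ethnicity", "name", "clubs"}

def strip_protected(record: dict) -> dict:
    """Drop fields that directly encode personal characteristics."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

def rebalance(records: list[dict], group_key: str, seed: int = 0) -> list[dict]:
    """Downsample each group to the size of the smallest group, then
    strip protected fields, so no group dominates the training data."""
    groups: dict[str, list[dict]] = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    n = min(len(g) for g in groups.values())
    rng = random.Random(seed)
    balanced: list[dict] = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return [strip_protected(r) for r in balanced]
```

A real system would be far more involved (proxy variables like “ladies football club” also leak protected information, as Siobhan notes), but the sketch shows the two moves she describes: remove the characteristics, then mirror the data so it is balanced.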
The really tricky thing, and I won’t try to explain this in full because it’s actually very technical: my co-founder Shujia, she’s got a PhD in machine learning. Her expertise is literally this, and she is obsessed. As a Chinese woman who’s not really been given a lot of opportunity in a technology career, she’s really obsessed with making fairness part of every decision. She really looks at new markets and data modeling around how different markets are made up. So the US looks very different to Australia. How do we make sure that we’re accounting for that and looking at all the different areas of discrimination, not just gender? And we’re doing another big project right now with UTS, the University of Technology Sydney, on the next wave of this. So we’re not saying that we’ve got the golden ticket and we’ve fixed everything for… This isn’t a finished project. This is a forever thing-
William Tincup: Relentless pursuit. Yeah.
Siobhan Savage: Exactly.
William Tincup: I noticed that you stayed above the line of hire and promote, but you didn’t talk about terminate or fire, and probably for good reason. But I wondered, only from the standpoint that at the end of 2020 here in the States, when we laid off people, women were disproportionately impacted by layoffs, and women of color were disproportionately impacted within that group. And it’s like, okay, so on one level I’m thinking to myself, either they knew and they did it anyhow, or they just had no idea that they impacted this group in this way.
Now, I know that y’all aren’t touching this, though you might at one point; it might be something that you help folks with as an intelligence platform. But what’s your take on something like that? Because if you fix the front end, which is fantastic, and you fix the middle, which is, okay, how do people move: mobility and marketplaces and skills and expertise, et cetera. There’s an end to that at some point: there are layoffs, there are RIFs, there are terminations, et cetera. How does AI, just your opinion, but how do you believe AI helps us there? Ethical AI, I should say.
Siobhan Savage: Yeah. And it’s quite challenging and the approach that we took was, by the time someone’s made a decision to get rid of someone and terminate them, it’s too late.
William Tincup: Yeah.
Siobhan Savage: That’s the view that I have. So I was fortunate in my role that I ran talent acquisition, talent mobility, and a redeployment team, so I had the full view. And those teams typically don’t share information as well as you would imagine.
William Tincup: Not at all.
Siobhan Savage: Yeah. And by the time that someone is told that Jane is being terminated, there’s not a lot you can do, because you’ve typically got a two-week window to try and redeploy them, and no one ever gets redeployed meaningfully. So what we look for in Reejig is, well, let’s look at the folks who are at risk. Businesses are not making decisions to fire someone the next week; it’s typically a phase where the leadership team says, “Actually, we’re going to close down these branches,” or, “We are going to have to move these people from here to here.”
And they’ve got a little window of time where actually no one has been formally told that they’re now going to have to look for a new opportunity. It’s in that window that what Reejig does is it basically says, Jane would be suitable for all of these roles that are open right now. And she has these skill gaps, and she’s not far away from that. And what it’s doing is it’s proactively boosting the opposite way to say, “Hey, take a chance.” And there’s not actually a major difference between Jane and Bob. And actually, she’s only got five skills missing, and are they really skills that you actually need? Because a lot of customers will write their JDs and their job ads and they’ll say, “We need all of these skills,” but actually, when push comes to shove, do you actually need those skills or are they things that the individual will actually be able to gain on the role really quickly?
And what you find with customers is, the minute that you have that conversation and give them data… So we believe decision making support is what Reejig does. We are helping you make a good and fair decision, whether it’s on who you hire, who you promote, who you remobilize. That’s really the essence of what we’re trying to do, and it’s powered by the AI ethics. We call it inclusive intelligence: inclusive intelligence designed and baked into every decision. And if we see, for instance, First Nations folks in Australia, or women, who are more than likely… There’s too many of them getting terminated versus, let’s say, men. Reejig actually gives you a data report of that, so we will say-
William Tincup: Right. There’s an alert. Yeah.
Siobhan Savage: “Hey, here are the decisions you are making. And look at this big chart that’s punching you in the head saying, do not do this.” It’s kind of like that subtle slap, nudging you to make a good and fair decision in that moment. So we are looking at solving the problem by, one, making sure that the data itself supports good and fair decisions; then giving the decision maker, the user, really good and fair recommendations about what to do next. And then the hiring manager, who essentially makes the decision, it’s not us, unfortunately, it’s actually the person who’s in charge of the budget, we give them a view of, “Hey, just so you know, you’re about to do this.”
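(Editor’s note: the “big chart punching you in the head” can be as simple as comparing termination rates across groups and flagging disproportionate outcomes, in the spirit of adverse-impact analysis. The sketch below is a hypothetical illustration; the 1.25x threshold and the group labels are assumptions, not Reejig’s method.)

```python
def termination_alerts(decisions, threshold=1.25):
    """decisions: iterable of (group, terminated) pairs.
    Returns {group: rate} for groups whose termination rate
    exceeds `threshold` times the overall termination rate."""
    totals: dict[str, int] = {}
    terms: dict[str, int] = {}
    for group, terminated in decisions:
        totals[group] = totals.get(group, 0) + 1
        terms[group] = terms.get(group, 0) + int(terminated)
    overall = sum(terms.values()) / sum(totals.values())
    rates = {g: terms[g] / totals[g] for g in totals}
    # Flag any group terminated disproportionately often.
    return {g: r for g, r in rates.items() if overall and r > threshold * overall}
```

For example, if women make up half the workforce but account for most of the terminations, the alert fires before the decision is final, which is exactly the nudge described above.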
And a lot of the times that data then becomes that nudge to the manager, again, to make a good and fair decision. And then there’s a solution recommendation. We’re not just saying, “Keep Jane, because she’s a female.” It’s actually, “Hey, put Jane in this role. She is not far away from this role.” And also from a business case perspective, it typically saves them $120,000 a year. So when we actually work with our customers, we’re actually saying, “Every time you do that, you’re saving the business a lot of money.” So we’re incentivizing-
William Tincup: All their recruitment costs and training costs. I mean, you’re still going to scale up and train, but you were going to do that with almost any candidate anyhow.
Siobhan Savage: Exactly. Plus the cost of letting someone go is really expensive.
William Tincup: Good point. Didn’t even think about that. Yeah.
Siobhan Savage: Letting someone go, the outplacement, the actual damage to your brand when you-
William Tincup: Oh, yeah.
Siobhan Savage: When all those companies fired aggressively, that didn’t go away. That sticks.
William Tincup: No, and it’s morale. It impacts Glassdoor ratings, all kinds of stuff. Just three things real quickly. One, you mentioned the job description, the JD. If that’s not right, if it’s, let’s just say, ill-formed, then would one argue garbage in, garbage out? Do we have to fix it at that level to make sure that it’s right? Or do we have to help them and assist them? Because, okay, you’ve got a bunch… You scraped this off CareerBuilder or Indeed and you’ve thrown it in here. Okay. It’s a bunch of wishlist items. You’ve run a department, you know how this goes.
Siobhan Savage: Yeah.
William Tincup: So do we have to fix that so that we can do all the things from that?
Siobhan Savage: So honestly, I think no one has time to do that. So if we went back to our customers and said, “Hey guys, you need to [inaudible 00:25:14].”
William Tincup: Not in today’s market, that’s for sure.
Siobhan Savage: Yeah. It’s like, “Hey guys, you need to fix up all your job descriptions and job architecture before we can go live and that’s a one year project.” What we look at is, I mean, we did it because we want to understand, what are all the skills around a person? So you imagine an individual has a backpack of skills they collected their whole career. And we look at that backpack and we’re like, every individual, we know what skills you’ve got. And then we also know, forward leaning, what you have the potential to do, because we’ve looked at other people like you. And then we look at jobs and we go, okay, whether this is a permanent job, a succession, a gig, we look at all of the people that have ever done those jobs and we look at, what were the backpacks of skills that they had? What was that little cluster that they had?
And then we look at the job description. And most of the time, probably 45 to 50% of a job description is actually just talent branding stuff. It’s just, here’s how we’re great. It doesn’t actually talk about the actual role the customer’s looking to fill. We want to look at skills and tasks as our main sort of programming for how we make recommendations. So we’ll blend in that information. We’ll blend in all of the people who have ever done those opportunities. And then we look to the public market of big data, of people that have actually already advanced in certain areas, and we look at, well, what were the skills that they gained? They’re a little bit ahead of our customer. So it really blends a multi-view of that. I believe job descriptions will go out the window anyway; no organization has time to do it. Job architectures are-
William Tincup: [inaudible 00:26:50]. Yeah.
Siobhan Savage: Pretty static and they don’t evolve as a customer evolves. The world will be made up of skills, capabilities, and tasks. [inaudible 00:26:59].
William Tincup: Yeah. And the fact that you’re looking at those things, and you’re thinking of transferable and tangential skills as well. So it’s not just what you’ve got in your backpack; it’s also, what could you put in your backpack relatively easily?
Siobhan Savage: Exactly.
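(Editor’s note: the “backpack of skills” matching can be sketched with simple set operations: compare an individual’s skills against the cluster of skills held by people who have done the role, and report the coverage and the gap. This is a toy illustration of the idea, not Reejig’s model, and the skill names are made up.)

```python
def skill_gap(person_skills, role_cluster):
    """Compare a person's skill backpack to a role's skill cluster.
    Returns the fraction of the cluster covered and the missing skills."""
    role = set(role_cluster)
    have = set(person_skills) & role
    missing = role - set(person_skills)
    coverage = len(have) / len(role) if role else 1.0
    return {"coverage": round(coverage, 2), "missing": sorted(missing)}
```

In the conversation above, “she’s only got five skills missing” is exactly this kind of output, which then prompts the question of whether those missing skills are truly required or can be learned on the job.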
William Tincup: Which, again, a job description is not going to do that. And so I love that. Okay. What are questions, if you could go back to your former self, what are questions that practitioners should be asking of AI vendors?
Siobhan Savage: Have you had an audit? And if not, why not?
William Tincup: Is it just as simple as have an audit? Because I, again, back to grading your own homework, have you had an independent audit?
Siobhan Savage: An independent audit. I think the problem that you’ll find, and listen, I’m coming from HR, so for me to learn this space, which is quite technical, took some time. So we’ve got to remember that the folks that we are selling to and partnering with, they don’t have a lot of time to learn about the AI models, so-
William Tincup: That’s a good point.
Siobhan Savage: They’re expecting the vendors, I suppose, and the analysts, to give a view into this space of what they should look for. And I think the laws are now changing, so the independent audit is coming through. It will blanket the world; any global customer will have an office in New York, so they suddenly all have to come along with that. I think we do need our space to upskill a little bit in understanding this so that they know what to ask.
And I think the very first question is, have you had an independent audit? And look at the teams doing the auditing themselves: if a team is made up of all men or all one type of cultural heritage, and it’s not a diverse team in itself, not only do they bring one point of view to the audit, but they may also be auditing their own work. So I think, look at the leadership team of the provider. Have they had an independent audit? Look at their actual business as well, because one of the things that’s really important to us is that we ensure cultural, gender, and age diversity throughout all of our design and validation of our AI. And that’s a really key component of making sure that you do the right thing, because otherwise it’s only window dressing: if you’re telling the market to do this and you’re not doing it yourself, that’s an indication of the business that you’re selling them.
So I think that’s what I say to our customers: one, is it independently audited? What are they doing in that respect? We’ve now got an information set that we can give, so if anyone wants to reach out to me, I can give you a lot of questions on a spreadsheet that we’ve built for other customers, to help them understand and ask certain questions when they’re going through an RFP. And I had to learn this as an HR person; I didn’t get it at first. So it’s written in an HR-friendly way. If anyone wants to reach out to me, [email protected], I’m happy to share that, just so that you can have a view of: here are the things you need to care about, here are the questions to ask. And it also means that privacy and security take us a little more seriously as HR practitioners.
William Tincup: Sorry.
Siobhan Savage: Because they’re shutting the door on any of your technology buys until you can be sure that it’s not going to cause harm to the overall [inaudible 00:30:14].
William Tincup: As they should. Yeah.
Siobhan Savage: Exactly. So I think it’s definitely a space that, if you’re not paying attention to this space right now and you’re going through transformation and buying technology, stop for a little second and just understand what that means and the harm that it could cause because at the end of the day, human or robot making a decision, you are liable.
William Tincup: Love this. Well, I could talk to you forever. And unfortunately, you’ve got work to do and stuff like that. So Siobhan, thank you so much for your time today and also for breaking this down in layman’s terms so that we could all understand it.
Siobhan Savage: Yeah. My pleasure. And happy for my team to forward you on any of that information that could help folks just get their head around a little bit about the space.
William Tincup: That’d be super helpful. And thanks for everyone listening to the RecruitingDaily Podcast. Until next time.
William is the President & Editor-at-Large of RecruitingDaily. At the intersection of HR and technology, he’s a writer, speaker, advisor, consultant, investor, storyteller & teacher. He's been writing about HR and Recruiting related issues for longer than he cares to disclose. William serves on the Board of Advisors / Board of Directors for 20+ HR technology startups. William is a graduate of the University of Alabama at Birmingham with a BA in Art History. He also earned an MA in American Indian Studies from the University of Arizona and an MBA from Case Western Reserve University.