The Impact Of AI-Matched Teams On Project Outcomes With Onkar Dalal of Turing

Discover the magic of AI in transforming the future of hiring! Our guest, Onkar Dalal from Turing, sheds light on how AI’s sophisticated multi-format tests could revolutionize the way candidates are matched to job roles. Onkar discusses the critical need for guardrails and heuristics to ensure that these algorithms function as they should. Plus, he shares how AI could significantly reduce the cost and time commitment tied to traditional interviewing processes. AI matched teams could be the future of work!

But that’s not all! We also talk about the secret sauce that turns a group of individuals into a winning team. Onkar explains how seasoned candidates can be won over by transforming the interview process into an efficient and appealing experience. We also dig into the evolving notions of ethical hiring and the potential of AI to reduce bias. Onkar emphasizes the importance of measuring and monitoring bias in existing data so that these algorithms do not keep producing biased data. Join us as we navigate these pivotal aspects of AI-driven hiring in this must-listen episode.

Listening Time: 23 minutes

Enjoy the podcast?

Thanks for tuning in to this episode of The RecruitingDaily Podcast with William Tincup. Of course, comments are always welcome. Be sure to subscribe through your favorite platform.

Listen & Subscribe on your favorite platform
Apple | Spotify | Google | Amazon

Onkar Dalal
Head of Data & AI at Turing

I am the Head of Data & AI at Turing, an online platform that connects talented software engineers with the world's best companies. I have a PhD and an MS in Computational Math and Statistics from Stanford University, where I developed optimization algorithms and statistical models for various applications.

I have over 10 years of experience building machine learning, data mining, and data science solutions to bootstrap and grow marketplaces. I lead a team of engineers and researchers who use cutting-edge techniques and tools to solve complex problems and generate insights from large-scale data. I am passionate about creating innovative and impactful solutions that enhance the performance and user experience of Turing's products and services.


The Impact Of AI-Matched Teams On Project Outcomes With Onkar Dalal of Turing

William Tincup: [00:00:00] This is William Tincup, and you’re listening to the RecruitingDaily Podcast. Today, we have Onkar on from Turing. And our topic today is really interesting. It’s the impact of AI-matched teams on project outcomes. We haven’t talked about this before. We’ve talked about a lot of AI matching stuff, but not the outcomes side of it.

So I’m really interested in exploring this. So Onkar, would you do us a favor? A, make sure I got the pronunciation of your name correct, [00:01:00] B, introduce yourself, and then C, introduce Turing.

Onkar Dalal: That’s good. Thank you, William, for having me over. My name is Onkar, and I lead the data science and AI teams at Turing.

By training, I’m an optimization algorithms person. I was doing a PhD at Stanford, and I happened to stumble into statistical modeling and data science as an application of optimization algorithms. And that led to over 10 years of my life working at the intersection of AI and marketplaces, building and growing marketplaces using AI.

In my past role at LinkedIn, I led the AI teams that grew the ads and services marketplaces. So I really enjoy working on marketplace optimization problems, because they provide a unique mix of AI and optimization, a very different flavor from traditional search and recommendation systems.

William Tincup: So before we jump into the project outcomes and what matching can do for that, I do have a question around algorithms, especially for the [00:02:00] audience’s sake. How often do you have to go back and tweak an algorithm, or make sure the algorithm is doing what we thought it was going to do? Like, we set out with an intention, maybe even with some guardrails, but how often? Is it constant, or is it something we do periodically? What is that world like?

Onkar Dalal: Yeah, it’s constant work. Daily, I would say, we are looking at ways to improve the algorithm, always trying to identify its deficiencies, adding guardrails, adding heuristics. Traditional machine learning gets you somewhere, but you always need these guardrails and heuristics to make sure the output also makes sense in terms of common sense.

It’s a continuous iterative process.

William Tincup: And so that’s what people talk about when they talk about calibration and recalibration, in the sense of just making sure that, again, it could be seconds, minutes, hours, days, but you’re always working to make the algorithms that we create more and more efficient.

Onkar Dalal: Yes.

More and more efficient and also [00:03:00] adapting to the changes. If you look at the last 12 months, the economy has shifted drastically. What startup valuations and raises looked like in 2021 is very different now. And your algorithms also have to adapt to that, because you cannot necessarily just learn on the data from yesterday and hope that it’s applicable today.

So it’s not just improvement, but also almost like catching up to avoid degrading. In ads, it used to be funny: our metrics used to be almost constant, and the impact of the team was actually to keep them constant. There is an aspect of improvement, but there is also an aspect of avoiding degradation, which is also a continuous process.

William Tincup: It’s like you’re watching both sides. You’re watching the efficiency side and the degradation side, both equally important. Thank you for explaining that to me and the audience. So as you look at AI matching, and you’ve looked at all kinds of AI matching as it relates to talent and things like that.

What do we need, top line? What do we need the audience to know in terms of project [00:04:00] outcomes? What’s A got to do with C?

Onkar Dalal: Yeah. If you look at traditional recruiting, right? Everybody has a resume, and now everybody has a LinkedIn profile, but there is a limit to what is available in those.

They lack quite a bit of signal. Nobody writes about their weaknesses; everybody writes about their strengths, for example. So the resume, or the LinkedIn profile, or the traditional way we recruit to even get to the match, is very limited. It lacks transparency about actual performance versus what is on paper.

So we do have interview processes, right? Everybody interviews before hiring someone, but it’s quite an expensive process and it takes time. And in a way, interviews are also quite limited in what you can gauge within 60 minutes. Interviews tend to optimize for avoiding false positives, but we accept a lot of false negatives.

So we let go of a lot of good candidates to make sure we don’t hire a bad one. And despite that, if you look at the attrition rates across these companies, they’re quite significant, both [00:05:00] regrettable and non-regrettable attrition. So overall the matching process is quite expensive, quite inefficient.

And one way to make it better is to use AI and use data to do it. So one of the things we do at Turing is what we call the vetting engine. We start with a baseline of what you know about an individual, which comes through their resume, maybe their LinkedIn profile, maybe their GitHub profile.

But that is information which is already available. What we do on top of that is put them through a variety of multi-format testing. That is where the efficiency of interview processes can be improved, if we can get that additional, detailed signal about what your particular strengths are and what your weaknesses are.

What are the things you have done in the past, which you have written on the resume, and can actually do in the new job? That is the major missing piece in the match, which we can, [00:06:00] with the use of data and AI, make much more meaningful. And that of course leads to much better outcomes in the project.

One of the primary goals, in how I think of Turing, is to improve efficiency in this whole matching and interview process. If there are 10 companies interviewing a hundred candidates, there are maybe a thousand interviews, right? And if you think about it, a candidate is going to each of them, and the majority of those interviews is common.

So if you could abstract out, say, 60 to 70% of the interview and let the candidate do it only once, it’s highly efficient for the candidates to not have to go and talk about the same things at 10 different places. And on the company side, we can provide this information in a digestible form.

So they don’t have to spend 80 or 90 interviews trying to get information which they could get in a much shorter time. If you abstract out the majority of the interview, and if you think of it as a bipartite graph, a two-sided graph of [00:07:00] candidates and companies who are interviewing, there is a lot of overlap in the information sharing.

A lot of it can be abstracted out using data and AI, and you can use this in the matching to improve the quality of the match. And the teams which are now built based on this detailed information we have about our developers, of course, have much better outcomes. You are already aware of the strengths and weaknesses of a candidate.

You know what kind of tasks they are potentially good at and what kind of tasks they won’t be, and you can customize your onboarding process based on what you know about the developer, even before they have started doing anything for you.
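To make the bipartite-graph framing concrete, here is a minimal Python sketch of the idea: a candidate’s vetting profile is gathered once and reused across every company, and each company only runs a lightweight role-fit scoring on top of it. The class names, weights, and scoring formula below are illustrative assumptions, not a description of Turing’s actual vetting engine.

```python
from dataclasses import dataclass


@dataclass
class CandidateProfile:
    """Reusable vetting signal, gathered once and shared across all companies."""
    name: str
    skills: dict[str, float]        # e.g. {"python": 0.9, "java": 0.4}, hypothetical scores
    practical_score: float          # result of the multi-format / practical test, 0..1


@dataclass
class JobRole:
    company: str
    required_skills: dict[str, float]   # skill -> importance weight


def role_fit(candidate: CandidateProfile, role: JobRole) -> float:
    """Score one edge of the candidate-company bipartite graph.

    The expensive signal (skills, practical_score) is collected once per
    candidate; only this lightweight scoring runs per company."""
    total_weight = sum(role.required_skills.values())
    if total_weight == 0:
        return 0.0
    skill_match = sum(
        weight * candidate.skills.get(skill, 0.0)
        for skill, weight in role.required_skills.items()
    ) / total_weight
    # Blend skill overlap with the reusable practical-test score (weights are illustrative).
    return 0.7 * skill_match + 0.3 * candidate.practical_score


# Rank candidates for a role without re-running the shared 60-70% of the interview.
candidates = [
    CandidateProfile("dev_a", {"python": 0.9, "sql": 0.7}, practical_score=0.85),
    CandidateProfile("dev_b", {"java": 0.8, "python": 0.5}, practical_score=0.70),
]
role = JobRole("acme", {"python": 1.0, "sql": 0.5})
ranked = sorted(candidates, key=lambda c: role_fit(c, role), reverse=True)
print([(c.name, round(role_fit(c, role), 2)) for c in ranked])
```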

William Tincup: So let’s dig into the testing for a second, for the audience, especially with technical talent.

Are we looking at kind of the three-dimensionality of the skill and what they have? With that, I want you to tell us about testing in the way that you approach it. But I’m [00:08:00] also curious as to how you think about potentiality and also pursuits. Like, someone’s great at, let’s say, Java, but they don’t want to spend any more time in Java.

They want to do something else. So while their skills are really deep, and that’s what you need as a Java developer, their skills would show how strong they are, but that’s not really what they want to do next or what they want to grow into. So how do we take that into account?

On both sides: what’s the potential of what they have, but also what they care about? Like, where do we gather that data?

Onkar Dalal: Correct. So let me talk about the testing first. Again, I have a much shorter background in this space compared to you, but the way I think of it, the interview process has evolved: initially it started with textbook kind of knowledge.

Okay, what did you learn in these courses? Those were the kind of questions we would ask. Then I think Google [00:09:00] came in with these coding challenges, or maybe it was even before Google, but Google suddenly made them a lot more popular with these data structure coding questions. But then, when software developers actually go to work, they very rarely use any of the information they were tested on in the interviews, right?

So for our testing, we not only cover the basics with the textbook knowledge, the data structures, or the fundamental knowledge, but we also test developers on practical coding challenges, like practical work. Imagine you could simulate their day-to-day and compress it in a magical way.

You could get a developer to take a two-hour interactive session of sorts, which gives you a flavor of how they would operate in your particular role. That is much more valuable than whether they know some specific textbook knowledge, which they can Google today, or even a specific data structure, which they can use ChatGPT today to learn very quickly and put to use.

So the [00:10:00] things we’ve tested in interviews versus the things which are actually used in the day-to-day work are somewhat different, and our testing is trying to get closer and closer to the latter. And that’s why I was saying we have a multi-format, multi-modal interview process, or vetting process, so to say.

Now, the second question you asked me is about how you capture the developer’s preferences. That is where the marketplace optimization becomes important.

Whereas in the marketplace you are thinking of a three way problem. So the platform growth is dependent on supply and demand both being happy in the match. So we have to factor in the developer. So in the multi objective, you call it utilities. So the recruiter’s utility or the client’s utility is to hire someone.

Whereas the developer’s utility is to find a job that [00:11:00] pays well and is something they would enjoy doing. So we need to capture both of these, and we do, and that is what goes into the matching, because we want to make more sustainable matches where both sides are happy. That’s how the platform grows.
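A rough way to picture the two-sided utility idea (an illustrative sketch, not Turing’s actual formulation): combine the client’s utility and the developer’s utility into one match score, so a technically perfect match the developer has no interest in ranks below a slightly weaker match both sides want.

```python
def match_score(client_utility: float, developer_utility: float, alpha: float = 0.5) -> float:
    """Combine both sides' utilities into one marketplace objective.

    client_utility: e.g. skill fit for the role, 0..1
    developer_utility: e.g. interest in the work and compensation fit, 0..1
    alpha: illustrative trade-off between the two sides.

    A weighted geometric mean (rather than a plain sum) means a match that is
    bad for either side scores near zero, favoring sustainable matches."""
    return (client_utility ** alpha) * (developer_utility ** (1.0 - alpha))


# A strong Java developer who no longer wants Java work:
print(match_score(client_utility=0.95, developer_utility=0.2))   # ~0.44, ranked low
# A slightly weaker technical fit the developer is excited about:
print(match_score(client_utility=0.75, developer_utility=0.9))   # ~0.82, ranked higher
```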

William Tincup: I love that. So a couple of things. One, I know technical talent in general hates tests, especially the more senior people. I guess the more senior you become, the more you’re like, huh, you want me to take a test? So how do we make it fun? Like, I’ve heard of people where they create environments.

Okay, look, you’re going to be in an AWS environment, you’re going to be using these tools, here’s a problem. And so they can see themselves in a particular stack, solving a problem. And that’s more interesting than answering a coding quiz, et cetera. I’ve also seen people use the metaverse and do different types of things to make it engaging.

You’ve still got to get the results of the test, and you’ve got an audience, by and large, [00:12:00] of junior-level talent, technical talent, maybe, but the more senior they become, the more they hate tests, even though this is in their best interest.

Onkar Dalal: Yes. So certainly part of it is that messaging.

So while you hate tests, I think you would hate even more having the same interview, talking about the same things with 10 companies. Which goes to my previous point: if we make this an abstraction of the interview, you spend only 20% of the time interviewing, and the 80% that overlaps across multiple interviews is done only once.

So that is the messaging: to say, hey, all the things you’re doing in this testing are done only once, and then they’re applicable to all the jobs, not forever in the future, but at least for a sizable time window. So one, there is that efficiency argument. And second, yes, I agree, we see it with more senior people.

You’re absolutely right that more senior people do not want to do these textbook tests or coding challenges. They are more and more inclined to do practical challenges. [00:13:00] And we gauge it based on that: we rely on the resume and your GitHub profile to let you pass some of the initial stages and go directly to the final stages.

This is the practical coding.

William Tincup: Do you have clients now, or do you even do it for yourselves, where you’re looking at maybe the team dynamics? So you’re matching up, okay, you’re building a seven-person team to then work on X, whatever the thing is. You’re not just looking at skills.

I get all that. But with some talent, not as much with technical talent, but with other talent, you’re also looking at chemistry: how people get along, how they communicate, how they interact, et cetera. Have you had clients ask you about that type of stuff yet? Where they’re like, hey, I want it all. I want to get the match, because it’s going to help us with the outcome.

Like, check, got that. But we’ve got to build a team, and we also want to make sure that this person gels well with this team, [00:14:00] however they’re configured. So have they asked about that? Is it something that y’all are seeing?

Onkar Dalal: Yeah, so we have two types of clients. Someone who is just augmenting an existing team certainly cares more about how this new person comes in and gels well.

And we have testing along those lines for soft skills, seniority, or leadership skills, as we would call them, on how this person will respond to some challenges which are not really technical challenges. So that’s one aspect. The second aspect is the services offering, where the interface with the client is a little different: we offer to deliver a project, and the whole team is abstracted away from the client.

There, they care less about the team dynamics, because Turing takes care of it for them. But internally we do care about these things, because for a successful team, the whole is larger than the sum of its parts. That certainly is applicable when it comes to the services offering.

So this is [00:15:00] something we are closely watching. We have some initial work there; like I said, we have seniority or leadership tests which give us some signal on this. And we are constantly adding more and more data facets about the needs on the developer side.

William Tincup: Something the I/O psychologists would probably have you add is doing the personality test for the whole team, to understand what you’ve got.

And then also doing a personality test on the talent that’s coming in, so you understand how well the personalities are going to get along. Again, coding’s coding. However, you’re still going to communicate. You’re still going to use Slack or Teams or whatever and be on Zoom calls.

That’s still going to happen. So the chemistry part is still there. Let me ask about the line that we’ve drawn between AI matching, where we go deeper and get richer data so that we can make better matches, and project outcomes. So [00:16:00] outcomes being price, quality, speed, budgets, projects being done on time and within budget, all that type of stuff.

How does the client view success? Is it retention? When they look at this, obviously it makes sense, right? But how do they view success? Do they view success as the project we hired that person for being done on time or within budget? Or is it that we retained them and didn’t have to go through churn and things like that?

What are you hearing from clients in terms of what they think success is?

Onkar Dalal: So it’s a spectrum. They start with how quickly we can get something on the table: how quickly can we get started? And that itself is somewhat of a differentiator for us, because we are doing the pre-work of the majority of the interviewing.

So if we can get started quickly, that itself gives us an edge, because the time spent building the team is also somewhat [00:17:00] taken away from the outcome, delaying the outcome. So that’s one aspect. In terms of successful outcomes, yes, some clients care about the project being delivered, and then they rematch or redeploy the same talent for subsequent projects.

Sometimes they also want to hire them full time, and we have made those matches as well. That is like the ultimate success, where somebody on the other side of the globe, who probably had no access to the client here, we made the match, and not just made the match, we made it much more sustained than a project-based match.

So we have some success stories there as well, but most of it is on the project and then, subsequently, the next project and so on.

William Tincup: And I can see that how fast you can stand it up is probably the first thing they view as success. It’s like, oh, this is great. We’ve got technical debt.

We’re already behind. We need to be up and running in seven days. So I could see that being the first line of success, and then it gets deeper. I did want to ask you some questions around AI [00:18:00] matching and your take on auditing and also ethics. What’s your perception right now of AI matching, and what does a practitioner need to ask more about from the vendor?

What do vendors need to do more of, and do a better job on? We’re at the early stages of this, so everyone’s learning as they go. It’s a wild west. But what’s your personal take on how you look at AI matching and auditing, and what the ethical treatment should be?

Thank you.

Onkar Dalal: Yeah. So algorithms are always dependent on the data, and they learn the biases very quickly and optimize the hell out of them, because that’s what they are designed for. Even at LinkedIn we had seen this a lot of times: if you are not careful, you will see the metrics go up, your business metrics go up, but under the hood there are some biases which creep in.

So it is really on the algorithm [00:19:00] developers to be the ones who start with these questions. And across the industry now, there is a lot more talk about responsible AI. People are thinking about this way more than they did maybe a few years ago. And specifically in the hiring space, it becomes even more important.

For example, in ads, nobody really cared if they saw fewer ads, right? Whereas with jobs, you cannot take those liberties. We have to collect data and track biases even when we are at the source of collecting the data. The algorithm piece comes a little later in my mind.

Because, like I said, algorithms just learn from what the data gives them. So you want to start with that. You want to measure biases in the existing system or existing data, start correcting them there, and then the algorithm picks it up. And then, of course, you want to monitor the algorithm to make sure it does not introduce biases and produce more biased data in the future.
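As a minimal sketch of what "measure the bias in the existing data first" can look like, one common starting point is to compare selection rates across groups in historical match data. The field names and the disparate-impact style ratio below are generic illustrations, not a description of Turing’s tooling.

```python
from collections import defaultdict


def selection_rates(records: list[dict]) -> dict[str, float]:
    """records: historical match decisions, e.g. [{"group": "A", "selected": True}, ...]."""
    totals: dict[str, int] = defaultdict(int)
    chosen: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        chosen[r["group"]] += int(r["selected"])
    return {g: chosen[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest selection rate divided by the highest; 1.0 means parity.
    Values well below 1.0 flag a dimension worth correcting in the data
    before any model is trained on it."""
    return min(rates.values()) / max(rates.values())


history = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
rates = selection_rates(history)
print(rates)                          # {'A': 0.5, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.5 -> investigate before training
```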

William Tincup: Because you brought up bias, I wondered, and this is my last question: it’s just like when we [00:20:00] started and talked about algorithms, you’re looking at how to make things more efficient, but you’re also looking at the opposite side, degrading. I have the feeling that the more we learn about biases, the more we learn about what we don’t know. It’s like peeling an onion; we’re just peeling layers. Okay, we can get race, we get gender, we get this, but when are we going to start getting neurodiversity and economic diversity, all these other factors of bias or preference, however you want to phrase it?

So is it as simple as we’re always looking at bias in the same way we look at how to make it more efficient and how to make it not degrade?

Onkar Dalal: Yes. I think if you don’t ask, if you don’t measure, then you don’t know the problem. You want to constantly ask these questions and add new dimensions to them.

Like you said, it started with gender bias, but there are 10 or 15 other dimensions along which you could be biased. And unless you even think of those, you wouldn’t think of tracking [00:21:00] them, you wouldn’t think of measuring that bias, and you wouldn’t think of fixing the bias. As a society, our thinking is also evolving on this, right?

If you look at regulation, it’s a little slow, but I think it also forces hands, especially for the larger companies, to think about these things. So yeah, I think it’s a continuous process where we think of new dimensions. Right now, for example, you said neurodiversity, and I had not even thought about it before.

So there will always be a broadening of our horizons as we think about these problems more deeply. And then it’s a continuous process to make sure your models are aware of it. You also have to introduce these into the objective function, or the utility, in your optimizations. Unless you deliberately put that in as an optimization target for your algorithm, it’s not going to do anything about it.

That’s right. So the definition of the objective function for an algorithm is human-given, [00:22:00] designer-given, right? And that is something we as humans have to constantly do. The algorithms and the techniques will continue to evolve and become more and more efficient.

But you have to direct it in the direction that makes sense. It’ll go fast in that direction; it’ll go faster in five years than it does in two years, right? But the direction has to be set by us.
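As a toy illustration of making bias an explicit optimization target, a model’s loss can be extended with a designer-chosen fairness penalty, so the optimizer is actually pushed to shrink group-level gaps rather than being asked to care about them after the fact. This is a generic sketch under assumed data, not Turing’s algorithm.

```python
import numpy as np


def fairness_penalty(scores: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap between any group's mean model score and the overall mean."""
    overall = scores.mean()
    return float(max(abs(scores[groups == g].mean() - overall) for g in np.unique(groups)))


def total_objective(business_loss: float, scores: np.ndarray, groups: np.ndarray,
                    lam: float = 1.0) -> float:
    """Designer-given objective: business loss plus an explicit, weighted fairness term.
    lam is the human-chosen knob that tells the optimizer how much the bias matters."""
    return business_loss + lam * fairness_penalty(scores, groups)


groups = np.array(["A", "A", "B", "B"])
# Same business loss, but the second set of scores is more even across groups,
# so it is preferred once the fairness term is part of the objective.
print(total_objective(0.30, np.array([0.9, 0.8, 0.3, 0.4]), groups))  # ~0.55, higher total
print(total_objective(0.30, np.array([0.7, 0.6, 0.6, 0.5]), groups))  # ~0.35, lower total
```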

William Tincup: Yeah. And again, it could go faster in the wrong direction. So that’s the constant work of the guardrails, making sure it’s doing exactly what we want it to do.

You’ve done a wonderful job. This has been a great topic. Thank you so much for coming on the podcast.

Onkar Dalal: Thank you, William. It’s been a pleasure to chat with you.

William Tincup: Absolutely. And thanks, everybody, for listening. Until next time.

The RecruitingDaily Podcast

Authors
William Tincup

William is the President & Editor-at-Large of RecruitingDaily. At the intersection of HR and technology, he’s a writer, speaker, advisor, consultant, investor, storyteller & teacher. He's been writing about HR and Recruiting related issues for longer than he cares to disclose. William serves on the Board of Advisors / Board of Directors for 20+ HR technology startups. William is a graduate of the University of Alabama at Birmingham with a BA in Art History. He also earned an MA in American Indian Studies from the University of Arizona and an MBA from Case Western Reserve University.

