How Generative AI Will Transform Organizations From The Top Down With Morgan Llewellyn of Jobvite
Bear with us, but imagine living in a world where Artificial Intelligence (AI) understands our language, context, and the internet. That's no longer the realm of science fiction, but a reality unfolding in front of our eyes. Our special guest, Morgan Llewellyn, the Chief Data Scientist at Jobvite, enlightens us on the power of generative AI in organizations and the incredible advancements of large language models (LLMs). These intricate systems, laden with billions of parameters, are designed to comprehend our language and the context and content of questions and answers.
Then we ask: are you making the most of AI's capabilities? We explore how to teach individuals to extract the best from AI by writing the right prompts. We delve into "Googling 2.0," a concept where creative individuals who ask the right questions can successfully utilize these models. From software engineering to marketing, we chat about the vast potential of this technology that's set to revolutionize various fields.
So, if that’s not mind-blowing enough, we look ahead at the future of AI models. Imagine creating visuals from our thoughts, making ideas a reality, or even eliminating the need for fashion models. We consider AI’s endless learning capacity, the necessity of top-down decisions in AI implementation, and the shift in customer demands for a wider range of AI solutions. Prepare to be amazed as we reflect on the disruptive business approaches that leverage AI to quickly enhance an organization’s capabilities. This episode is a peek into the future of transformation and disruption in the generative AI world!
Listening Time: 27 minutes
Enjoy the podcast?
Thanks for tuning in to this episode of The RecruitingDaily Podcast with William Tincup. Of course, comments are always welcome. Be sure to subscribe through your favorite platform.
William Tincup: [00:00:00] This is William Tincup, and you're listening to the RecruitingDaily Podcast. Today we have Morgan on from Jobvite, and our topic's fantastic and timely, of course: how generative AI will transform organizations from the top down. Morgan, while we do some introductions, tell us a little bit about yourself and Jobvite.
Morgan Llewellyn: Yeah, so I'm the chief data scientist at Jobvite. My name is Morgan Llewellyn, and I [00:01:00] am a behavioral economist by training, and I fell into data science due to, I would love to say, great choices, but I think it was just happy accidents as I came out of grad school.
And so I've been working in the AI and data science space for well over a decade.
William Tincup: It's all strategy. It's all strategy, Morgan. It's all strategy at this point. That's the thing about a career: it's great going backwards. Oh yeah, totally made sense. Totally made sense. I meant to do that.
Morgan Llewellyn: I meant to do that.
Yeah, funny enough, William, I remember back when I was at Salesforce, I fought the change, a title change to data scientist. And man, I'm so glad they changed that title. Yeah, that did me a world of good.
William Tincup: I don't think this is a good idea. I just don't think it's a good idea. I don't think the market's moving that way.
It's just, it's heavy, data. No one likes data. [00:02:00]
Morgan Llewellyn: That was just more of the, hey, I went to grad school to become an economist, right? You want to change my title? It feels like I'm throwing out everything I've done.
William Tincup: I earned that degree. I earned that degree. Dammit, I don't want that in my title. It's not fair.
Morgan Llewellyn: Yeah, yeah. Who would've known that Salesforce would know better than me?
William Tincup: Yeah, Marc Benioff, he's made a lot of mistakes in his life and a terrible decision. So yeah, I can see that.
Morgan Llewellyn: But anyway, so yeah, that's a little bit about me. I've been a behavioral economist and working in the TA and recruiting space for a number of years now.
William Tincup: Yeah, at a certain point you stop saying how many years, right? You just cut it off and you're just like, for a long time. Okay, got it.
Morgan Llewellyn: How long have you been doing this, William?
William Tincup: Yeah, a long time. No, I'm just kidding. Over 25 years.
Morgan Llewellyn: Yeah, see, just you and me. At some point it becomes a badge of honor.
William Tincup: Yeah. Or ageism, I don't know. I deleted all of my information before [00:03:00] 2000. Other than my degrees, all my work information, I deleted everything before 2000. It looks like I started working in 2000. There you go.
Morgan Llewellyn: That’s the best work.
William Tincup: Exactly, exactly. And then when we hit 2025, I'll go back to 2005. I'll only have, I'll only render 20 years of work experience.
Morgan Llewellyn: Yeah, you're just worried about guys like me, right? Who are looking for ageism in AI, right? Exactly. Don't worry, I've got you protected.
William Tincup: Okay, thank you. You're good in my book. See, that's why this call was so important to me. Alright, so let's talk generative AI.
So let's give the audience kind of a prima facie understanding, or a primer, on generative AI. I'm sure they've heard about it, of course. But let's do that, and then let's dig into how it transforms organizations from the top down.
Morgan Llewellyn: Yeah. So let's talk about generative AI a little bit and how it differentiates from all the other conversations that [00:04:00] folks have been inundated with over the past 5, 10, 15 years. When we're talking about generative AI, a lot of that conversation is really around what's called LLMs, or large language models. And large language models are taking, think of the internet, and understanding the context and the content: what are the questions being asked, what are the right answers? LLMs are these massive models with billions and billions of parameters that are really trying to understand our language, whether that language be English, Mandarin, French, what have you.
It's understanding the language and understanding the content within that language, so being able to differentiate, one of my favorite examples, Python the snake from Python the programming language. Different, very different. But think about where we were, say, 15 years ago, right?
If we were just parsing by words, we have the word Python, right? [00:05:00] How do we know whether this is a snake or a programming language? Then we started developing this idea of context, where we looked at a couple of words before the word Python and a couple of words after, to understand: oh, you're talking about typing or programming right after this word Python, so you must be talking about the software skill as opposed to the snake.
Gradually we got greater awareness around the context of a word, but large language models have just blown that out of the water. Now it's not just a couple of words around that word Python; it's the whole document, and understanding, okay, we're talking about where python snakes live and what they eat. So now if you ask a question, tell me about this python snake, it's able to understand, from reading thousands and thousands of documents, just like you or me, something about that python snake, and answer questions.
And so the generative AI part comes to, [00:06:00] how do you apply that? How do you ask a question and then generate a response, or code, or what have you?
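Morgan's Python-the-snake example can be sketched in code. This is a toy of the "couple of words of context" era he contrasts with LLMs, not how a large language model actually works; the cue-word lists are made up purely for illustration:

```python
# Toy word-sense disambiguation: decide whether "python" means the snake or
# the programming language by scoring a small window of surrounding words.

SNAKE_CUES = {"snake", "reptile", "eats", "habitat", "venomous", "coils"}
CODE_CUES = {"programming", "code", "typing", "software", "function", "import"}

def disambiguate(sentence: str, window: int = 3) -> str:
    """Return 'snake' or 'language' based on words near 'python'."""
    words = sentence.lower().split()
    if "python" not in words:
        return "unknown"
    i = words.index("python")
    # Only a few words before and after are visible, unlike an LLM,
    # which conditions on the whole document.
    context = set(words[max(0, i - window):i] + words[i + 1:i + 1 + window])
    snake_score = len(context & SNAKE_CUES)
    code_score = len(context & CODE_CUES)
    if snake_score == code_score:
        return "ambiguous"
    return "snake" if snake_score > code_score else "language"

print(disambiguate("the python snake coils around its prey"))  # -> snake
print(disambiguate("import this module in your python code"))  # -> language
```

Shrinking `window` shows how quickly the pre-LLM approach runs out of evidence; the leap Morgan describes is that an LLM's "window" is effectively the entire document.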
William Tincup: Morgan, how do we teach people to write prompts? Like years ago, we had to teach people how to write searches, right? So Boolean: if you didn't know Boolean, and maybe most people don't, there were a bunch of people who had to learn Boolean to find things on the internet. But now it seems like we're going to have to learn a new language, if you will, and it's about asking AI questions, or prompts, if you will.
Morgan Llewellyn: That is an excellent question, William, and I think that really speaks to the heart of what's happening in our space. I think of this as almost Google 2.0, right? Or Googling 2.0, where the people who are going to do really well with large language models are the creative individuals who know how to ask questions.
Because again, you've basically got the world's dictionary, right? The world's history, the world's encyclopedia of information at your [00:07:00] fingertips. Now it's, how do you ask the right question to get the response that you want or the information you need?
It's very similar to Googling, right? Googling has become more or less an art. When I interview folks, historically it's always been: you know how to Google, right? What do you do when you're stuck? And usually I would hope the answer would be, I'll just Google it, right? I'm not going to try and recreate it. That's very much what the prompting is here. It's not like Boolean search, where you have to be very specific and things like that. I would say these large language models are very forgiving in the data that comes in, and the data that comes out is still going to be pretty good.
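The contrast Morgan draws, Boolean's strictness versus the forgiving nature of prompting, can be caricatured in a few lines. Purely illustrative: real LLM prompting involves no keyword parser like this, and the overlap score is a made-up stand-in for semantic matching:

```python
import re

def boolean_search(query: str, doc: str) -> bool:
    """Strict: every AND-ed term must appear verbatim, or the match fails."""
    terms = [t.strip().lower() for t in query.split("AND")]
    return all(t in doc.lower() for t in terms)

def fuzzy_prompt_match(prompt: str, doc: str) -> float:
    """Forgiving: score by word overlap; loose phrasing still scores > 0."""
    p = set(re.findall(r"\w+", prompt.lower()))
    d = set(re.findall(r"\w+", doc.lower()))
    return len(p & d) / max(1, len(p))

doc = "Senior engineer with years of Python and machine learning work."

# One wrong hyphen and the strict query finds nothing:
print(boolean_search("python AND machine-learning", doc))  # -> False
# The sloppy natural-language request still gets partial credit:
print(fuzzy_prompt_match("someone who knows python and ML stuff", doc))
```

The point is not the scoring function, which is deliberately crude, but the failure mode: exact-syntax search is brittle where prompt-style querying degrades gracefully.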
And so it's going to be very similar to Googling.
William Tincup: Do you think organizations going forward, yeah, do you think organizations will start building the competency to teach and train people, to then be able to, so [00:08:00] that they can get more out of it? Like, where's the onus of learning that? Is it on the candidates, on the employees, on the individual, or is it on the company? Again, we know this learning is going to have to exist and be created. Who's responsible?
Morgan Llewellyn: So I can already tell you, William, you are going to be amazing in the future because you ask great questions, right?
William Tincup: I appreciate that.
Morgan Llewellyn: Large language models were built for you.
William Tincup: Ironically enough, I have three degrees: an undergrad in art history, an MA in American Indian studies, and an MBA. And most people think that the MBA was the most important degree. It wasn't; clearly the art degree was, because I had to memorize 15,000 works of art.
But the MBA, while I diss it, taught me how to ask questions, which is crazy. 'Cause they could have just said it's a master's in asking questions; I would have still signed up for it.
Morgan Llewellyn: I guess two things real quick. My favorite undergrad class was art history, funny enough, so I completely appreciate that response. [00:09:00] Now, I'm no expert like you, but I absolutely loved that class. I developed an appreciation well beyond my own skill set. Yeah, but asking questions, coming back to that, really is it. OpenAI, Bard, all these different technologies.
They're really going to reward creativity. And personally, I think that's what's super exciting here: we're going to be rewarding creativity, because that opens up a world of possibilities, right? That opens up some really exciting things that I can't even think of, but you're going to think of, or someone else is going to think of, applications with that opportunity to unleash your creativity. Now, the question that you asked is, where is the onus? Is it on the business, or is it on the individual, to learn these prompt engineering skills? And I think it's a little bit of both, right? That's the easy way out. Let's start with the employee, and then let's talk about the business, because along with [00:10:00] the business come those hard questions: this is a disruptive technology, and what does this mean for businesses, right?
So let's talk real quick about the employee. Just like an employee uses Google, right, to understand and find things they might not know today, they should be looking for opportunities to prompt engineer, to understand and find things that they don't know today. The other reason I would encourage employees to be investigating prompt engineering techniques is it basically makes you a data scientist out of the box, right?
Software engineers become data scientists, because now you've got these AI models at your fingertips, and not just the model, the output, which can be super useful. And for marketing, et cetera, it's great to have that. In software engineering, there's this idea of a rubber duck: you talk to the rubber duck to hear if your idea is a good idea or not, right? Or if your code makes sense. Very similar here. It's like that rubber duck that you can pass it, hey, here's what [00:11:00] I'm thinking, does this make sense? Or, hey, why don't you go and create it, and then I'll go and edit? Because it's always easier to edit.
So I think there is an onus on the employee to upskill if they want to advance their career. And they want to, I don't want to say stay relevant, because we had a plumber out here the other day; like, for plumbers, maybe it makes sense. Maybe it does. But if this is your space, if you're generating content, if you're generating code, then yeah, I think it's like any other skill you should be learning. Let's talk about business, though, the onus on the business, because this is what I think is really interesting.
Because I don't see generative AI as being necessarily a threat at an individual employee level, right? It's not going to be, OpenAI comes out and, we've automated your job tomorrow. It's not taking William's job, right? I don't see it. It could help you. It can help you generate content, and you should be using it, or it might make your organization more efficient. And so as churn and different things [00:12:00] happen, maybe you need to hire less. But I don't see it as taking an individual's job, or a business line's job, necessarily. What I think we're going to see, though, is it could make entire organizations disappear, because now their business model is no longer valid. So it's not taking William's job, right? It's taking that organization's purpose for being away.
William Tincup: It's interesting you say this. So my son, I have two sons, one's 17, one's 13, the youngest. I was showing him ChatGPT. I'm sure he knows more about it than I do.
Anyhow, I was showing him ChatGPT the other day, and we're of course acting like eighth graders. Of course, he is an eighth grader, but I was acting like an eighth grader and we're just having fun. I was writing my obituary; it was just a fun bit. And then I showed him, on Instagram, a bunch of AI models, fashion models. I'd show him a picture and say, is this a real woman? Is this AI? And at one point he couldn't tell the [00:13:00] difference. Then I said, okay, let's look at some reels. So I showed him some reels of AI models: eating, walking, talking, the whole bit.
And I said, is this real or is it AI? And literally he couldn't tell, and I showed my wife, same thing, couldn't tell the difference. I'm like, so do we need the fashion industry? Can't we just create it, right? Do we need fashion models anymore? If we can create fashion models and make them look real, like they're walking down the runway, real, why do we need fashion models?
And he's looking at me like, oh, yeah. No, we don't need those.
Morgan Llewellyn: So here's what excites me about that example, coming back to this idea of rewarding creativity. Let's say three years ago, if I had an idea for a blouse, right? Or a men's suit. There is no way I could somehow communicate that and make a visual representation of what's in my head. Not possible. You need years and years of experience, and the right people, et [00:14:00] cetera. But now, today, there's a possibility that whatever's in my head, I can actually represent visually, and be able to share that visual with other individuals and potentially make it a reality.
And so that's why I say there's an opportunity to really reward the creativity, and not necessarily the hard skills that are typically required to move from creativity to reality. I think these large language models make that jump a lot smaller.
William Tincup: The interesting thing about large language models, many interesting things, but it's like I was talking to somebody earlier today, I'm like, we set a really high bar by calling it artificial intelligence. The artificial part we got right. The intelligence part, it's going to take a while for things to be really intelligent.
Like, it's artificial, better than human, and then, rebranded. And at one point, yes, it is artificial intelligence, but we've set this bar. And so anytime there's a failure, people think, okay, [00:15:00] we're not there yet. Of course we're not. It's like asking an infant, when do you graduate college? Yeah, we're not there yet. But no one can dispute the capacity of AI. If anyone's trying to dispute that, that's just silly. You can't dispute its top, or its ceiling, if you will; it's limitless. Whereas every human being has a ceiling, on a day-to-day, hourly basis, and over the totality of their life they have a ceiling. You can only learn so much. And AI doesn't have that ceiling.
Morgan Llewellyn: It doesn't. And yeah, AI makes mistakes, as you or I do. So the question is, what's the alternative? Are we going to have Morgan and William, who are limited, to your point, right? There's only so much we can learn. Are we going to have William and Morgan make the decisions? Or are we going to have this thing with infinite capacity to learn, or to just read documents, right? [00:16:00] Are we going to have that make a decision? Between the two, William and Morgan versus this infinite thing, which one do we think is going to be more right?
William Tincup: I'm gonna go with that one. Yeah, the thing, though, because we talked about top down: this change is gonna happen, and it's got to be top down. Why do you think it's top down, or what do you think the difference is between top down and bottom up? Like, why wouldn't this be candidate-driven or employee-driven or consumer-driven, if you will, as opposed to something that executives see, or the board sees, or the industry sees, and pushes down?
Morgan Llewellyn: So I think there are really three reasons why it comes top down. The first is, again, this idea that there's business risk for some organizations. It's not that Morgan, in this particular part of the business, is going to be affected; it's that our entire business is going to be affected, right? That's C-suite-level thinking and strategy that's necessary. So I think that's one. And I think that's why you see private [00:17:00] equity, VC, even financial markets reacting to announcements about large language models, or PE firms and VCs saying, look, everyone in our portfolios is going to be using these, because we don't want our businesses disrupted.
Second, I think you also see something from the customer side. If you look back, especially in the HR tech space, if you look back 10 years ago, it was marketing that was pushing ML and AI to customers, right? Hey, we've got this thing, we want to talk to you about this thing. A couple of years ago, I think we started seeing a shift where more and more customers were coming and saying, hey, do you have this point solution? Can you help me do this? Can you help me do that? But in the last six months, William, what we've started seeing is customers coming to us saying, tell us about your strategy. It's not just a point solution anymore; tell us about the horizon and how you're moving towards that horizon. And [00:18:00] so customers have really changed the way they think about incorporating AI, and I think that's also driving the C-suite to get involved. And then the final thing, the third thing, right?
We already talked about business disruption and customers. The final thing, I think, is really the culture and innovation that we're talking about here, and that's why it's the C-suite. When you think about a disruptive technology, what it requires is innovation and an innovative culture, right? Some organizations have that, typically newer organizations. And you're going to need top-level, top-down leadership pushing innovation in a lot of organizations that have more or less pushed all the innovators out.
If you're a business that's been coasting and using the same technology year after year, your innovators have bled out, right? They've gone to find something new. So [00:19:00] leadership is going to have to reinvigorate that innovative DNA in the organization, and that's going to come down from the top. They're going to have to instill an innovative spirit among people who maybe have self-selected into a not-so-innovative organization. Leadership is going to have to figure out how to get an organization that maybe hasn't been so innovative, that is at risk, right, of deprecation, to be innovative and to adopt and use these new technologies. So that's why I think it's top down.
William Tincup: You used two words that are really interesting, disruption and transformation. As it relates to generative AI, especially transformation: do you see that as iterative, like we won't really know what's happening, every day it just kind of changes? Or is it going to be more, cataclysm, that's the wrong word, will it be, we wake up one day and things are really different? [00:20:00] Because I think people are, probably not overwhelmed, but they're looking at it like, okay, we know that this is transforming, but is it a slow transformation or is it going to happen faster? In the back of my mind, I'm thinking about Moore's law, and I'm thinking, okay, is this something where, again, we won't really notice the difference, because we'll just consume it and the change will happen? We'll just consume it and consume it, and it's like, okay, we'll wake up three months later and, yeah, of course we're all using prompts. Or is it something that's going to happen really fast, that transformation and disruption especially?
Morgan Llewellyn: So I think you can see it fast, and here's the reason why. Let's again take this disruptive business approach. You've got an organization that isn't using some of these powerful models, and you've got another organization that is. As a customer, right, I might switch from the non-innovative [00:21:00] organization to the innovative one. And to me, that's the switch being flipped: suddenly I'm going to see larger capabilities, new capabilities that I didn't have before, because I have this new vendor providing me opportunities that just didn't exist.
So if we think about it from a business perspective, in the tooling you're using, I think in some cases it's going to be binary: it's replacing vendors and moving toward these more innovative vendors. And maybe for others it might be more gradual, where you're upgrading specific features and technology. But I think at the micro level, yes, it is going to be binary, because at that feature level, that tool level, you are going to see a significant improvement.
Think about something as basic as reporting. An organization wants to report on hires or something like that, and now you've suddenly got this capability where I can tell you the report I want, and it's just going to go generate it. No longer do I [00:22:00] have to go ask my business analyst to generate a new report or something like that; it's going to solve that problem for me. That, to me, is a pretty significant change in the way that we do things. What do you think, William? What's your thought?
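A toy sketch of what Morgan describes: telling the system the report you want and letting it generate one. In a real product an LLM would translate the request into a query; here a tiny rule-based stand-in shows the shape of the idea, with the table, column names, and sample data all hypothetical:

```python
import sqlite3

def request_to_sql(request: str) -> str:
    """Very naive natural-language-to-report mapping (illustration only)."""
    req = request.lower()
    if "hires" in req and "department" in req:
        return "SELECT department, COUNT(*) FROM hires GROUP BY department"
    if "hires" in req:
        return "SELECT COUNT(*) FROM hires"
    raise ValueError(f"request not understood: {request!r}")

# A throwaway in-memory table standing in for an HR system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hires (name TEXT, department TEXT)")
conn.executemany("INSERT INTO hires VALUES (?, ?)",
                 [("Ada", "Engineering"), ("Grace", "Engineering"),
                  ("Joan", "Sales")])

# Plain-English request in, report out; no business analyst in the loop.
sql = request_to_sql("Show me hires by department")
rows = sorted(conn.execute(sql).fetchall())
print(rows)  # -> [('Engineering', 2), ('Sales', 1)]
```

The hand-written `if` rules are exactly the part an LLM replaces: instead of anticipating every phrasing, the model maps an open-ended request onto the query.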
William Tincup: You know, I think it's like Animal Farm. I think that the definition of words will change, and as we go forward, we'll learn more. We're peeling an onion of things where we don't really know what the core of the onion is. We don't know how big the onion is, so we're just going to peel back layers, and people won't really notice as much. It might take a couple of generations. People think, I think they want to believe, that it's going to be more like Minority Report next week. And the truth is Minority Report was shot over 20 years ago. So it's going to happen slower, but we won't notice the change; it'll be that subtle. The words will have changed. We'll just be using a new idiom, a new [00:23:00] vernacular, new words to describe some older things, and we won't even notice the difference.
So I don't think most people will. Now, the people that have their finger way down on the pulse of it, probably they'll know more about it in advance of other people. 'Cause, I live in Texas. When I was growing up, we would get music two years after California and New York got it. So something that was hot in New York, let's just say rap in the 80s, or skate music in California, we'd get two years later. Now, with the internet, we don't get it two years later, but there's still a group of people that will understand this and consume it faster than the mainstream folks. You always have the adoption curve: you've got early entrants, then mainstream, then you've got laggards, right? So I think the laggards are going to be generations that just opted out, or maybe [00:24:00] not even generations, but people who just opted out of learning something new. So yeah, I have a more iterative look at it and say, okay, it's going to happen slower. We won't see the paint drying.
Morgan Llewellyn: Yeah, I think at the business level, right? Thinking about a person-level view versus a business-level view, I think those laggards are at risk, is the way I would describe it. Because this really is different than your traditional UI, right?
William Tincup: And yeah, I think you'll see that. This is what the middle nineties were like. There were people in the early nineties that knew about the internet and played with the internet. It wasn't, it didn't look like it does now, clearly. But there were people that did that, and then there was a whole host of people.
I owned a web development firm in the nineties, and talking to people about the internet then, they'd just look at you like, have you smoked dope today? What are you talking about? I'm like, there's a display, it's called the World Wide Web. [00:25:00]
It feels like that to me. Again, I think you said Google 2.0, which really makes sense to me. So some people are just going to get it and go, oh, hell yeah, let's do that, and they'll try stuff; even if it fails, they'll throw it against the wall, ah, let's see what sticks. And there's going to be a bunch of people that fight it. Bill Gates infamously said the internet's a fad. It's, huh, okay, maybe not. And I think there are people that are going to do the same thing with AI. They're going to be those people that go, I don't think it's good, it's just going to pass by, we'll wait for the next thing.
Morgan Llewellyn: Let me throw one more thing at you on why I think this is a little bit different, because there's something else happening at the same time that I think reduces the cost of adoption for these powerful AI models, and that's what's happening in the regulation space.
You've got New York City, their bias law coming out, what, July 5th is when they're going to start enforcement. You've got Europe talking about regulations. [00:26:00] And in the HR and TA tech space, I think regulation is going to be a boon to the AI industry. Why that's the case is you're going to take the uncertainty regarding the risk of using AI or automation, and you're going to remove it from the buying equation, right? Because now it's going to be, here's the test: did you pass or not? No longer is the organization who's purchasing these AI technologies responsible for understanding, is this biased, is it not biased, what algorithm are you using? It can be, what are you doing for me? How does this make finding people and keeping people better? And did you pass the test? Yes. So I think this other thing happening underneath everything is actually going to speed the adoption. I'm a big fan of regulation in this space because it reduces uncertainty.
William Tincup: And it also forces us to have more intellectual discussions around [00:27:00] ethical AI and audited AI and things like that. So I like it, and I think it's in parallel. And again, there's a group of people that understand that, know that, and then there's a group that are oblivious; they have no idea that those things are even going on.
Morgan, I could talk to you all day, but I know you've got work to do and stuff like that. So thank you so much for coming on the podcast. This has been wonderful. And of course we just touched the very tip of the iceberg of this thing, but I appreciate you.
Morgan Llewellyn: Yeah. Thank you, William. Thank you for having me.
William Tincup: Absolutely. Thanks, everyone, for listening to the podcast. Until next time.
William is the President & Editor-at-Large of RecruitingDaily. At the intersection of HR and technology, he’s a writer, speaker, advisor, consultant, investor, storyteller & teacher. He's been writing about HR and Recruiting related issues for longer than he cares to disclose. William serves on the Board of Advisors / Board of Directors for 20+ HR technology startups. William is a graduate of the University of Alabama at Birmingham with a BA in Art History. He also earned an MA in American Indian Studies from the University of Arizona and an MBA from Case Western Reserve University.