
Hired by a Hologram? The future of talent evaluation will be wild. With Georgi Yankov, Principal Research Scientist at DDI

February 7th, 2024

“I really see the technology as a driver of user friendliness of assessment in the future, because people don’t have time anymore to sit down for many hours and do writing samples and take personality tests of 300-500 questions.”

-Georgi Yankov

Summary:

In this forward-looking episode of Science 4-Hire, I welcome Georgi Yankov, a fellow IO psychologist and futurist. We take a deep dive into the innovative intersections of AI with psychometric measurement and managerial/executive assessment.

The episode focuses on how AI and machine learning are currently being applied to revamp traditional assessment processes, predicting significant shifts in the landscape of talent evaluation and development.

Georgi and I discuss the transformative potential of AI in making assessments more efficient, personalized, and scalable, highlighting the move towards more interactive and immersive methods, such as the use of holograms for role-plays and simulations in assessment centers.

The conversation also addresses the ethical dimensions of integrating AI into human-centric processes, stressing the importance of balancing technological advances with the intrinsic value of human judgment and empathy. We discuss how we both stand firm in our convictions about the necessity for IO psychologists to adapt and collaborate with AI, agreeing that, while AI can enhance the quality and reach of assessments, we cannot compromise the personal touch that is crucial to understanding human behavior and potential.

Key Takeaways:

  • Revolutionizing Assessments with AI: AI and machine learning are set to transform the efficiency and scope of psychological assessments, making them more adaptable and insightful.
  • Interactive and Immersive Techniques: Future assessments may utilize holograms and virtual reality, providing richer, more contextual evaluations of candidates.
  • Ethical and Human-Centric Approach: Despite technological advancements, maintaining ethical standards and human empathy remains paramount in assessment processes.
  • Collaboration Between Psychologists and AI: The future of assessments lies in the synergistic collaboration between IO psychologists and AI technologies, leveraging the strengths of each to enhance talent identification and development.
  • Predictive Analytics and Personalization: AI’s ability to analyze vast datasets will enable more personalized feedback and developmental insights, tailoring the assessment process to individual needs and potentials.

Full transcript:

Speaker 0: Welcome to Science 4-Hire, with your host, Dr. Charles Handler. Science 4-Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, everyone, and welcome to the latest edition of Science 4-Hire. I am your host, Dr. Charles Handler, and I have a really great guest today. Somebody I've always enjoyed listening to, who has some very interesting and passionate takes about what's going on in our field, and I'm gonna call him George, but that's not actually his first name. And I can't, for the life of me, get it pronounced correctly. And as you know, if you listen in to our show here, I don't like messing people's names up, because a name is a very personal thing and it's not good to get it wrong; it's a sign of respect to get it properly done. So Georgi Yankov of DDI, AI aficionado, and a guy with some clairvoyance about this stuff. We're gonna talk about that today. So go ahead and introduce yourself. Tell us how to pronounce your name properly, please.

Speaker 2: Thank you, Charles, for the amazing intro. It is Georgi Yankov, and Georgi, like Lady Gaga, so it's a hard G.

Speaker 1: Uh-huh. 

Speaker 2: Many people call me George, but I prefer to be called Georgi.

Speaker 1: George. Okay. Well, there you go. And I'm just gonna ask this, and I hope, I don't know, there's nothing wrong with it.

What's your heritage? I'm gonna guess that you're Russian, but you may not be.

Speaker 2: Well, God forbid, Russian, he says. No, I'm not. Lucky me, I am Bulgarian.

Speaker 1: Oh, nice. Nice.

Speaker 2: A little bit to the south. Yeah. Southeastern Europe, Bulgaria. It's in the Balkans, you know.

Speaker 1: Yeah. Yeah. Yeah. 

Speaker 2: There you go. 

Speaker 1: I have actually not been to Bulgaria, but I have stood on the line between Romania and Bulgaria. But I didn't cross the line, because I was at the beach on the Black Sea and there literally was a border right there. I'm trying to remember the name of the beach I was at. But, yeah, they had a line, and there was a woman with a gun and a nice snappy uniform on the Bulgarian side, and I really wanted to go over, but I didn't have my passport or anything, so, you know, I didn't do it. But it's beautiful over there.

Speaker 2: So you were on the Romanian side. Correct? 

Speaker 1: Yeah. Yeah. Uh-huh. Trying to remember the name of the beach I went to. It was maybe fifteen years ago or so.

But my heritage is Russian, actually. But I'm not from there. My grandfather came over from there, and he kind of fluctuated between Poland and Russia a lot; Białystok. But I have been to Russia before.

Like I said, I've seen Bulgaria, but I haven't been inside it. I came into the show thinking he could be Russian, and I was thinking of my Russia stories. I went there in the mid-nineties with my dad. My dad is a clinical psychologist, very old school. And we were in Saint Petersburg, and he said, alright.

We're just gonna go over to this institute, and we're gonna see if they'll let us into Pavlov's lab over here. And sure enough, he just, you know, no appointment, no nothing. I found myself in the actual Pavlov's lab.

I mean, there were no dogs in there, but they had all the equipment and stuff. And as a  psychologist, that was a pretty cool moment for me even though I don’t use dogs in my work.  You know? 

Speaker 2: Since we are talking about laboratories of old renowned psychologists, a story for your bag of stories. Bulgaria is a good country for psychology, historically, because some of the first students of Wilhelm Wundt were Bulgarians. So they reproduced his lab, his equipment, at Sofia University, and his lab was blown up in the Second World War, but the equipment in Bulgaria survives to this day. And as far as I know, the dean of the faculty of philosophy and sciences there is responsible for the key.

Speaker 1: Oh, okay. So I 

Speaker 2: so I can arrange for you to see it.

Speaker 1: Oh, nice. 

Speaker 2: Yeah. We'll not take anything.

Speaker 1: Yeah. I would love to go over there. So how did you find your way from there to sitting in a nice office with some beautiful plants and artwork in Pittsburgh, at DDI?

Speaker 2: In Bridgeville, Pennsylvania. Yeah, it's close by Pittsburgh; I live in Pittsburgh. How I got here: I got a master's scholarship from Fulbright many years ago, like maybe twelve years ago, and got my master's at Baruch College, City University of New York.

Speaker 1: Oh, yeah. Sure. Sure.

Speaker 2: I went back and worked in psychometrics at the only testing company in Bulgaria. I did the norms for the Wechsler scales for children. I worked on the MMPI, on the California Psychological Inventory, many instruments. I was so fortunate to be exposed to tests that here nobody would give you. I realized I needed IT and hardcore psychometrics, and I applied for a PhD program and got accepted at Bowling Green.

Speaker 1: Oh, nice. Okay. 

Speaker 2: Where I started in twenty fifteen under Mike Zickar and graduated in twenty nineteen. I was an intern at Hogan for a year, and my dissertation was on faking on personality tests. My research interest is personality; also, since around twenty eighteen, machine learning. And since I came to DDI, also behavioral assessments.

Speaker 1: You better be into that at DDI, that's for sure, since they invented that stuff; Bill Byham was one of the OGs. So we're really in the old school today. We're coming from the old country, you directly, me slightly indirectly.

We have stories about old-school psychologists. You have become very acquainted with some of the most classic tests. You're working for a place that is really a seminal place for some of the most important stuff in our field. But that's all well and good. I think we both share the fact that we see things changing. Even with such a solid foundation, which I feel like we both embrace, it ain't gonna be the same moving forward. And you know, you and I had some really good conversation about the whole nature of the executive assessment game and AI. Really, what that boils down to in my mind is: how do we use non-human intelligence to look at a whole pile of information about somebody and tell us what we wanna know about that person, who they are and how they might perform in a role or in an organization or what have you? And it hasn't happened yet. I feel like this is a place in our world that hasn't been deeply penetrated yet by AI. But I'd love to hear you, because you have great opinions. What are you thinking? What's on your mind when you think about the use of AI in the type of things that you're doing, assessment centers, etcetera?

Speaker 2: I think AI is more of a buzzword now; we are mostly doing machine learning. AI went through many stages, many winters and springs, and now we are in an AI spring. But AI, the goal, is artificial intelligence, whereas machine learning algorithms are very advanced statistical, basically mathematical, algorithms that are supposed to make our life easier for multidimensional data, for biggish data, not big data, because I don't think our type of psychology has big data. So the potential of machine learning has already been shown with the transformer architecture that now powers GPT and the other large language models.

Speaker 1: Oh, the other ones. There’s so 

Speaker 2: many it’s it’s basically it’s it’s mathematics. 

Speaker 1: It is. 

Speaker 2: Nobody claims that it has intelligence. It doesn't even have a theory of mind to be able to read between the lines. Basically, in assessment centers, we do have behaviors that are more operational, more matter-of-fact. There are not many ways you can do them. For example, how are you gonna reply to an angry customer? I mean, you probably should excuse yourself and clarify what the situation was and promise to get back to them fast and reimburse them. So the language of dealing with an angry customer is pretty predictable.

Speaker 1: And Yeah. 

Speaker 2: With a large language model, with a prompt, or ideally with a fine-tuned model, you achieve quite good accuracy, I would say at least point eight and above, which is almost human-level agreement between raters.
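The "human-level agreement between raters" benchmark Georgi invokes is usually quantified with percent agreement or Cohen's kappa (agreement corrected for chance). A minimal sketch; the human and LLM ratings below are invented for illustration:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Share of cases where the two raters gave the same score."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from each rater's marginal distribution.
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 1-3 behavior ratings: human assessor vs. LLM "rater".
human = [3, 2, 2, 1, 3, 2, 1, 3, 2, 2]
llm   = [3, 2, 1, 1, 3, 2, 1, 3, 2, 3]
print(percent_agreement(human, llm))        # 0.8
print(round(cohens_kappa(human, llm), 2))   # 0.7
```

Raw agreement of .8 can overstate quality when one category dominates, which is why kappa-style chance correction is the usual companion statistic.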

Speaker 1: Oh, yeah. Yeah. 

Speaker 2: But some behaviors, like empathy, can go so many ways. Sometimes people are empathetic with just two words, and sometimes they are not empathetic although they say many words, positive words, but you can just feel that the person is maybe passive-aggressive, or maybe they mean it as an influence strategy. If you have a human, they would see through this. But the large language model depends only on, you know, collocations between words, and it would totally give you a false positive. It will say that person is empathetic. So we see the benefit of transformers and large language models in terms of bootstrapping yourself to predict behaviors with less data, but you still need a very clear setup of what you're looking for, what it is and what it is not, and not over-rely on these predictions for behaviors that are still evident only to a human. And I think humans also want feedback from humans on emotional intelligence, or the way they talk.

Speaker 1: Those are really good examples. I'll tell you, I've probably said this on the show a million times, and I'm never gonna stop because I really think it's funny, and as long as I amuse myself I don't really care what other people think: I call it supernatural math. Because it's all just math, it's predictions, but boy, it does seem supernatural if you've interacted with this stuff. Actually, while you were talking, I asked ChatGPT if it could recognize sarcasm.

And it told me it could, and it defined sarcasm. And then I said, well, I thought you were super intelligent, because it said, I can recognize sarcasm to some extent. So I came back with a sarcastic remark, and it said, I understand you're expressing sarcasm here. I don't know what that's worth. Whatever it's worth, the thing impresses me, and I'm not easily impressed. But I think you just gave a good example of how this stuff isn't gonna replace trained psychologists anytime soon. You and I have had conversations about what this is gonna look like down the line, though, and I'm constantly thinking that it's gonna get really, really nuts.

And I don't know. I think we'll always need psychologists, but our roles are really gonna change. So I guess, what have you guys done? Is everybody at your organization doing some stuff around seeing how a large language model can synthesize information from multiple different assessments or interviews and generate a report about a candidate?

I mean, how far along is that?

Speaker 2: I've heard, and I cannot name my sources, but they are pretty reputable, that the reports ChatGPT produces on, let's say, personality are equal to the feedback that entry-level consultants would be giving. Right? And it makes sense. There were apps built even last year where you supply your scores, Big Five scores or other scores, and it gives you such a wonderful personalized reading and even suggests jobs for you. I think reporting is a really strong side of large language models, as long as you constrain what they may say or may not say and you're very clear. Just like when we develop reports, we do it on ranges, and you have statements for each range. But those are always the same; the report is always the same. Whereas with the large language model, on the fly, you can incorporate something from that person to personalize the report with the language. But you definitely have to know what you want to achieve, and prompt it in such a way that it stays within the boundaries of sanity, I would say.
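The range-based reporting Georgi describes, fixed statements keyed to score bands, can be sketched in a few lines. The bands and wording below are hypothetical, just to show the mechanism:

```python
# Hypothetical score bands with canned statements, as in classic report libraries.
RANGES = [
    (0, 39,  "low",    "You rarely delegate and may become a bottleneck."),
    (40, 69, "medium", "You delegate routine tasks but keep the hard ones."),
    (70, 100, "high",  "You delegate well and develop others through it."),
]

def statement_for(score):
    """Return the (label, statement) whose band contains the score."""
    for lo, hi, label, text in RANGES:
        if lo <= score <= hi:
            return label, text
    raise ValueError(f"score out of range: {score}")

print(statement_for(55))  # ('medium', ...)
```

An LLM-based report keeps the same guardrails, the bands and the claims each band is allowed to make, but rewrites the wording per person; the constraint moves into the prompt.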

For example, you have the temperature parameter.

Speaker 1: Yeah. 

Speaker 2: So you can set how creative you want these reports to be. You don't want Shakespeare.
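The temperature parameter Georgi mentions scales the model's next-token probabilities: the logits are divided by a temperature T before the softmax, so T > 1 flattens the distribution (more variety, more "creative") and T < 1 sharpens it. A minimal illustration with made-up logits:

```python
import math

def softmax_with_temperature(logits, t):
    """Convert logits to probabilities, scaled by temperature t."""
    scaled = [x / t for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # invented next-token scores
cold = softmax_with_temperature(logits, 0.5)  # sharper: top token dominates
hot  = softmax_with_temperature(logits, 2.0)  # flatter: more variety
print(max(cold), max(hot))
```

Report generation typically runs at a low-to-moderate temperature so the wording varies per person without drifting off the constrained content.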

Speaker 1: Right. 

Speaker 2: But you don’t want the usual boring, you know, copycat reports that we all have been  guilty of doing because 

Speaker 1: That’s the trade, isn’t it? I mean, I know I used to do those. 

Speaker 2: It's access to the language.

Speaker 1: Yeah. 

Speaker 2: You don't have the access that the large language model has to all the language at the same time, all these weights, whereas we are limited by how many books we have read. So synthesizing in terms of ready data points, yes; but synthesizing data points from scratch to find meaning, I have an idea about that, but we haven't gotten there yet. Okay. So let me bring it back to you.

Speaker 1: Oh, what do I think? 

Speaker 2: Because I have an idea how this synthesis might be happening, maybe five, ten years from

Speaker 1: Well, I wanna hear that. I'll tell you what I think. I haven't had direct personal experience, but I've talked to people who have done things like feeding it multiple reports about the same person and asking it to create highlights of what is being seen there, and maybe you are prompting it on specific competencies to come back with, or whatever. People are doing this, obviously. I think the lowest-hanging fruit is just narrative feedback. We've all cut our teeth slogging through writing narrative feedback statements for a competency at high, medium, and low.

For entry level, medium level, you know, it becomes really unfun, really structured into a box where your creativity, which I like to tap into, is not really that important. So just removing that whole layer, I mean, that would give me back a year of my life from past roles that I could have spent doing something else. And we had libraries before this. Right? You have to go into the spreadsheet and pull it out. But the idea that you can generate these statements, to me, that's just what I call one-dimensional prompting. You don't have to be a prompt engineer. You can just pop it in there.

Right? And one-dimensional prompting is only gonna get us so far. I have started to think the next level is really stringing things together, even getting into things like, I don't know where you are on this, LangChain or retrieval-augmented generation, RAG. These are things where you're doing more sophisticated work. And I'm a hack here; I'm saying these things, I don't know how to do them. But LangChain is agents. Basically, you create agents to go out to different systems, pull the information together, and present it to you all with one prompt. Behind the scenes, there are agents that have been created that are pulling things from different places, bringing it back to the large language model, letting it interpret all that information, and then it gives you an output. It's really cool.
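The retrieval-augmented generation pattern Charles gestures at can be sketched without any framework: retrieve the most relevant documents, stuff them into the prompt, and have the model answer grounded in them. A toy version with a word-overlap retriever; the documents and the idea of a downstream `call_llm(prompt)` call are hypothetical:

```python
def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

# Hypothetical assessment snippets about one candidate.
docs = [
    "Role play notes: candidate showed strong delegation and coaching.",
    "Personality report: high conscientiousness, moderate openness.",
    "In-basket exercise: prioritized well under time pressure.",
]
prompt = build_prompt("How did the candidate handle delegation?", docs)
# The prompt would then go to a large language model, e.g. call_llm(prompt).
```

Production systems swap the toy retriever for embedding search over a vector store, but the flow (retrieve, assemble, generate) is the same.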

Speaker 2: Yeah, but don't get me started on ChatGPT. Because when you said, let it interpret it,

I hear this very often these days. People tell me what ChatGPT told them and how it made sense. Even Yann LeCun would be so angry now if he heard this, because he would say it's a stochastic parrot.

Speaker 1: Uh-huh. Yeah. It is. 

Speaker 2: It does not have a theory of mind to understand things; it just pieces them together. That's NLP. If it is about tabular data, where we have millions of observations that are interval and ratio type of data, precise financial data or measurements taken from a product, sales, locations, that is really amenable to letting the deep learning find a relationship that you have never thought about. And that is the whole point of letting go. I don't know why deduction became such an obsession of academics these days, that they will come up with a theory and look for its empirical support. The whole idea was always to have induction. So you use the deep learning to find signal, and then you have your theories and you reappraise them in terms of what you found. Do they support it? Do they not? Maybe change the theory?

And that's how they are finding new antibiotics, or new materials, because our minds cannot think in terms of thousands of features, of variables, whereas deep learning can find curvilinear, all kinds of multidimensional relationships between variables. But for language, when you said about a sophisticated prompt, you're right: the whole idea of a tree of thoughts, of asking the model to reason, to do what it does, it limits that estimation based on just combinations of words. I think dissecting the problems into little problems works the best with LLMs.

But that just works well for math or something that has clear logic. How are you gonna dissect empathy? You can try to operationalize the behaviors of empathy, but someone would always be expressing their empathy in a very idiosyncratic way, and then a human would figure it out immediately, because they read between the lines, because they have seen it, they feel it, they see how the person acts. But the AI cannot see; it just learns from these tokens. And, like, a child does not learn what a cat and a dog are by being told, oh, this is a set of animals with such features, not exclusive of dogs. No, they learn because the first dog they see barks and the first cat says meow. There are so many other ways we figure things out between the lines, but the AI cannot. I think that is the future hurdle: for them to build models that actually can reason and adjudicate whether what I'm saying makes sense or not. But that needs a theory of mind, a theory of the world.

What are the assumptions?

Speaker 1: Well, first of all, you burst my bubble a little, because I'm kind of in love with ChatGPT, you know. Oh, man, I just use it all the time. And of course, what you're talking about, the most critical factor here, is that this is a relationship with humans, and there are things that we can do that it can't, and vice versa. So when we partner together, I think we can achieve a lot of really good things. But if you're just throwing stuff in there and saying, do this for me, in some cases it can do that, and in other cases it can't, you know. And so we do have to be careful with that. I think most of the people I interact with are pretty aware of this. Right? Yeah.

When it does try to provide emotion, it's kind of synthetic, in the sense that it just can't really do it very well, or it'll excuse itself and say, I don't have emotions, you know. But you can get it to try. Like, I interviewed ChatGPT for this podcast. I told it to be engaging and witty and, you know, tell jokes and stuff, and it was able to do that. They were pretty bad jokes, and it was kinda comical, but I don't know, man. This is the funnest

Speaker 2: You did not interview Grok, though. Wasn't that the model from X? They also came out with one.

Speaker 1: Oh, yeah. Yeah. I saw that. No. No.  

I refuse to call it X. I don't even use Twitter, X, really. I find it to be overwhelming, and it doesn't make my brain feel good to have all that information in short little bursts; it's mostly just sludge. If you're a famous musician or an athlete or something and you wanna tell people what you're doing and everybody wants to know, that's one thing. But me? Just a guy over here. I just don't do it. So anyway.

Speaker 2: You mentioned we partnered with, let's call it AI. Partnering with the large language models, or whatever comes after them, is this opportunity to scale our work, get rid of the boring work, concentrate on where we are the experts and where we really need a human touch. Like, I mean, you generate hundreds of items and you pick the ones that are the best related to the construct, or the more engaging items; you would not have been able to figure them out so fast. But that fear that if we don't partner with AI we are gonna become obsolete, I really want IOs to get over it and just feel more empowered to be really good psychologists and not try to be data scientists. We are not data scientists. We do not have master's degrees in statistics.
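The item-winnowing Georgi describes, generating many items and keeping the ones most related to the construct, is classically done with item-total correlations. A small sketch on invented 1-5 response data (note this uses the uncorrected total; corrected item-total correlation would exclude each item from its own total):

```python
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def item_total_correlations(responses):
    """responses[p][i] = person p's answer to item i."""
    n_items = len(responses[0])
    totals = [sum(person) for person in responses]
    return [pearson_r([p[i] for p in responses], totals) for i in range(n_items)]

# Invented responses from four people to three items; item 2 is noisy.
responses = [
    [5, 4, 1],
    [4, 4, 5],
    [2, 1, 4],
    [1, 2, 2],
]
r = item_total_correlations(responses)  # item 2 correlates worst with the total
```

The LLM's role in the workflow is generating the candidate item pool; the statistical winnowing step stays the same.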

But we can be really good psychologists and understand humans and their needs, and that is what every company would love to know, because then they can build products and sell them. So whether AI is gonna change its name again, that's fine, as long as we are the people sitting at the tables of innovation departments and companies, advising them not only what the user actually needs, but how to give it to them in a humane way. So I would argue that we should really go forward and be very innovative and use AI in the service of the human at work. And if that means we have to fuse with technology and learn a lot about AI, that's fine.

As long as we are the missing link. Because the data scientists, they would build something that is gonna hurt people. It's gonna say something inappropriate. It's gonna depress people. They would not even know why these features are predicting.

Speaker 1: Yeah. Well, it's a team effort. It's a team effort. Right? I mean, there's a value for data science in things, and there's a value for psychology.

I've been involved with working with data scientists as a psychologist, and, yeah, keeping them real about the fact that it's not just a matter of finding patterns that stick to the wall. Like, there's gotta be some rationality to it. And there are some IOs that have data science backgrounds, etcetera. But in general, I think you're right. So, you had said earlier you had some ideas. You know what, let's do this.

I think we should each give a brief description of what we think an assessment center will be like ten years from now. Fast forward ten years: what's DDI gonna be selling as far as assessment centers? What's the standard gonna be? And I don't wanna get you in trouble at work if you start saying crazy stuff, so it doesn't have to be DDI, just in general, at the high end of executive assessment. While you're talking, I'm gonna think about what I would say, and then we can compare notes.

Speaker 2: DDI is in a very good spot. DDI is a learning and development company. We use assessments to learn what leaders need to learn and develop. Assessment, purely assessment, is quite commoditized these days. It does not help people without the development part, because you might give them the best, most reliable scores, and they open their reports, and they close them, and they put them on their shelf, and they are still lousy coaches, lousy delegators, lousy planners, at all the behaviors that actually make them better at their work.

And DDI is focused on behavior. So when we talk about an assessment center in the classic sense, putting people in for three days, observing everything they do to give them very elaborate assessments of who they are, I don't think that is very helpful these days, unless it's for a really high position, where the board would want to know that person very well before they entrust them with such an expensive company. So there is a space for classical assessment centers with pure, hardcore measurement. But I think the greater space is the vast majority of leaders who are entering the managerial population; they need help with basic behaviors. There are likely thirty-five dimensions that every manager should know how to do well: coaching, decision making, planning, delegation, maybe influencing, I guess.

So helping those people would require assessing them in a more scalable way. We cannot score each manager with three assessors like sixty years ago, because that is not scalable; it does not work for multinational companies. So there is hope, and a case for automation, yes, if it's done well, to help these people get insights: did they do well or not well at delegation, or this and this and this behavior. And after that, you can coach them on what they did well and what they didn't, and give them some activities to reinforce the learning. So that is a safe, incremental approach that we are taking, and I think it's really gonna be very successful. But if we are gonna talk about sci-fi, regardless of DDI and from my own perspective, that's my opinion: what I would love is, with the advent of 6G, very high frequencies, we can actually project avatars, actual holograms.

Speaker 1: Oh, nice.

Speaker 2: Yeah. We were talking about predicting from in-basket exercises, but the assessment center has role plays, leaderless group discussions, presentations, all these different types of exercises, very close to the nature of work, to what we do every day at work. In the past it was VR and it was chat and, you know, whatever Mark Zuckerberg might do with it. But what I think really is the future is where you can actually project a human, I mean, a human-like avatar, in front of you, and you start talking to them. And of course, I hope by then we'll have better algorithms to detect empathy.

Really influential talking, not just, you know, simple convincing words, but charismatic talking, when you make people really feel you. Whether there will also be a human, in partnership with the algorithms, to check them and tune them, that's fine. But I really see the technology as a driver of user friendliness of assessment in the future, because people don't have time anymore to sit down for many hours and write samples and take personality tests of three hundred, five hundred,

Speaker 1: eight hundred questions. So, I get it.

Speaker 2: If a bot can read your emails and adjust its style to your emails and write an email as if from you, why can't we use AI to dramatically shorten the assessments by tapping into other sources of information that you can give responsibly to the models? Right? I've done this, two years ago: you can predict people's skepticism and perfectionism from the way they write their emails. You don't need to give them a really elaborate personality test to figure it out, because it shows up in language.

The language sounds very meticulous with punctuation, or always uses words of doubt. So you can extract different traits, different behaviors from different contexts, which means naturalistic observation and naturalistic collection of that behavior, and we would have that capability with, like, Meta glasses. They would collect voice, vision; they would know where our eyes are tracking, which also implies our motivation. What we want, what we desire, we usually convey with where we look.
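The email signals Georgi mentions, doubt words for skepticism and meticulous punctuation for perfectionism, can be caricatured as simple feature counts. This is only a toy illustration of the idea, not a validated scoring model; the word list is invented:

```python
import re

# Hypothetical markers of hedging/doubt; a real model would be trained, not listed.
DOUBT_WORDS = {"maybe", "perhaps", "possibly", "unsure", "doubt", "might"}

def email_signals(text):
    """Count crude linguistic markers: doubt words and punctuation density."""
    words = re.findall(r"[a-z']+", text.lower())
    doubt = sum(w in DOUBT_WORDS for w in words)
    punct = sum(text.count(c) for c in ";:,.") / max(len(words), 1)
    return {"doubt_words": doubt, "punct_per_word": round(punct, 2)}

email = "Maybe we should, perhaps, revisit the plan; I doubt it works."
print(email_signals(email))
```

Real trait-from-text work uses learned language models rather than word counts, but the principle is the same: the trait leaves a measurable trace in everyday writing.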

Yeah. So suddenly, we have so much data at our disposal and our theories cannot help us  anymore. So I’m encouraging us to use AI to sift through all these data and develop more newer  theories and stop just obsessing whether we predict job performance with incremental point zero  two and point zero five. But what needs to know where when you go to a leader, like, executive  and they say, so you can So the validity of your instrument is how much percent what what  percentage of job performance you can predict? You have the validity is point five, so you predict  what twenty five percent.  

What's the rest? The other seventy-five?

Speaker 1: Human. That’s being human. 

Speaker 2: That's what I want us to aim for. That balance.

Speaker 1: There's a lot of it out there. So I love what you're saying. I have never thought of holograms, but I have thought about it, basically going all the way back. I've probably told this story again; the older I get, I don't remember what stories I've told or not, but I haven't told it to you, so that's safe. I show up at grad school, you know, and I have only had one undergraduate IO class, and selection is our first class.

And, oh, back then you get basically like a cardboard twelve-pack box: you go to the copy center and you get this Xerox stack, it's like two feet high of journal articles, you know, killing a bunch of trees, and you gotta read them all. A hundred pages a night or whatever. But I got assigned to do my... and I felt very lost.

Speaker 2: Where did you study? Bowling Green? Because that's what people did there too.

Speaker 1: Yeah, LSU, down here in Louisiana. And I was feeling pretty lost. You know? I mean, like, whoa.

This is crazy. This is, like, a really intense environment. And we're in the same class with third-years, second-years, experienced people, you know. So I'm looking around going, oh man, I don't know anything. But my first assignment, for the big paper for that term, was about work samples.

And they made so much sense to me. I'm like, why aren't we doing this for everything? Because if you wanna see how someone does a job, give them the job, as much as you can. Right?

And assessment centers have always been like that. So for me, I believe it's gonna be the same thing. It's just gonna be interactive. Right? We're just gonna interact with something, whether it's a hologram or your screen or, you know, a robot, and that information is gonna get collected and processed.

And then out of that is gonna come: hey, this is how this person thinks and acts and, you know, what we could expect of this person pretty consistently in work-related situations. And they're not gonna be taking a test, and they're not gonna be interviewed. It's inevitable.

Speaker 2: Or the test can be disguised.

Speaker 1: Yeah. Yeah. 

Speaker 2: Okay. As an interview, we can mix and match our methods.

Speaker 1: Oh, yeah. 

Speaker 2: Like, instead of answering single-stimulus personality items, you can be talking with someone. And think about it: they will ask you, do you like to go to parties?

Speaker 1: Yeah. 

Speaker 2: Do you make... do you make your video

Speaker 1: ...party. Yeah. Yeah.

Speaker 2: And then we can recode it into good old personality items. So nothing stops us from delivering a test as an assessment center and vice versa. We just have to use the technology to deliver it in a way that the user feels is so genuine, and that's gonna increase the fidelity of the data and, yeah, decrease the...

Speaker 1: And the user experience. And it's gonna bring out more natural performance, your real performance. The other part of it I'm really into and excited about in some sense, but also a little fearful of, is wearable technology. Right? So I've preordered the Humane AI Pin, the Rabbit R1, and then there's one that goes around your neck too that's supposed to be like an AI coach. And then I just got... if you're listening, you can't see it, but I got these. These are the Ray-Ban Meta glasses. So I could actually

Speaker 2: I’m gonna 

Speaker 1: take pictures by doing that. I can record video. I can live stream. I can listen to music, and it has access, when you're on WiFi, to a large language model. You could be walking around and go: that's an interesting dog.

Tell me about Dalmatians. Or: this animal, is it safe? It's incredible.

Speaker 2: You're assuming right, it's done with the glasses. Because I promised the audience I'd mention my idea, and it is through the glasses, because you will be able to soak in the context and the person. And then... our problem has always been the sample size.

Speaker 1: Yeah. Exactly. 

Speaker 2: But now, with all these observations, this is a stream of data coming all the time, interacting with who you are. And suddenly our data is gonna look like the data the cognitive psychologists and neuroscientists interpret, MRI data. They don't need samples of a hundred people. Five people is enough, because they get so many thousands of images.

Speaker 1: Yeah. Yeah. Yeah. Yeah. 

Speaker 2: At the same time. So... but then we'll have to start explaining: why did the person behave this way in this context? Why did you ask your glasses about this dog when you saw the Dalmatian, but you did not ask when you saw a German shepherd? Maybe you like black and white too much, you know. So something is

Speaker 1: Yeah. Yeah. Yeah. 

Speaker 2: We have to deal with the explanations, because we are gonna be drowned in

Speaker 1: Drowning in it. I agree. That's what I was just thinking. You know, the more you have wearables... and then it's like, well, what's the data privacy

aspect of that, even? Like, who's getting that? You know, these are Meta, so the one thing I don't like about these is you have to have the Meta app, and you can't just... like, I can only listen to music on Spotify. I can listen to podcasts from there, but I can't listen to the radio, you know, a livestream of a radio. I haven't done phone calls yet.

I think you can call anybody, but it's in the Meta universe. They own all this stuff about me. Right? And I don't even really use Facebook very much, or Meta, whatever. You know, I have a profile, but I mean, just think about that, right? And all the different things. The Humane thing too, but also the one that I can't remember the name of. I preordered it. It hangs around your neck and it's supposed to just listen to everything you do and then be able to coach you and support you when you need stuff. I don't know how that's gonna work. But I plan to... in fact, I envision potentially a new direction for me as just an AI wearable guy. Maybe I have a YouTube show about it. Maybe I go talk to people. I don't know. I'm excited, because I think part of it is gonna seem very lame, you know, and primitive. But at the same time, I can take pictures and movies,

and I don't even have to use my hands.

Speaker 2: You can even go to a company producing them and tell them: I can make this a system, a leadership coach. It's agnostic. Exactly. And I'm gonna take so many companies out of business, because you're gonna carry your coach in your pocket.

Speaker 1: Oh, absolutely.

Speaker 2: And turn them... yeah. Well, that is the radical fusion of IO and technology. And, yeah, I mean, these devices are gonna be your other selves. They will know everything about you, and they better be built to help you become a better person, and you have to be able to control how much data goes in.

And I think that's something companies have to decide: how much of the data they need for their commercial reasons, and how much of the data they have to allow people to retain and use for their own purposes. Otherwise, people will not fall for this. It's gonna turn into a constant digital surveillance business.

Speaker 1: Yeah. It's an easy conversation for us, talking about all the stuff we have. The minute you start talking about data privacy, all the things that come with the territory, that's when we start talking about the other side. Like, this ain't perfect. There's a lot of problems these things could cause.

And your individual freedoms, you know. You could pack up your shit, you could move to the forest and live off the grid and use solar power and, you know, poop in a hole and grow your own food, but that's a lot of work, man. You know, that's the only way you're gonna avoid this.

Speaker 2: You better have enough supply of antibiotics. No, but... yes, when you involve all the commercial and legal elements, it certainly becomes very clear what you need to be doing, even in product development. I was thinking, oh, OpenAI, it has an amazing API. Well, thou shalt not give OpenAI your data, even through the API, because you never know

Speaker 1: those. You’re not. 

Speaker 2: So now the trend is to fine-tune the large language models with your company data, and to do it very protected. But how can you copyright this if you don't know what's the underlying data in the large language model? And people say Llama works great. But Llama is freely available. Do you know each single page, each token that was put into Llama? Because you need to know, because that model can be jailbroken and suddenly start saying things you don't want it to say.

So I think future models will be more transparent, with very clear outlines of what data was put in to train the model. We did not go and steal New York Times articles, for example. We don't wanna be sued by the New York Times. OpenAI is rightly sued now. Because I read... it was a legal paper.

They wanted to see whether OpenAI conforms to the proposed EU AI Act, right, on all the principles, like transparency and privacy. And basically, one of the biggest problems was they could not comply because they could not say exactly what data they put in.

Speaker 1: Who knows? 

Speaker 2: In the morning. 

Speaker 1: Who knows, man? 

Speaker 2: Not a problem. 

Speaker 1: Yeah, it is. So as far as I'm aware, things like retrieval augmented generation, your RAG, that's where you are keeping it fed with your own corpus of stuff, specifically from your organization. But at the same time, there's still a mechanism in there that is using what it knows to take that information and process it. Right?

So you're right. Like, if you think about it, there's a concept of chain of custody. Right? I mean, who has touched this thing before it gets to the destination it's currently at? In a lot of forensics and stuff, you really need to know that, right, or you're missing pieces of information.

But we can't really do that with this stuff. And it's not the exact same thing, but maybe it's compounded when you think about the black box of a neural network, where nobody knows exactly how something is coming out. I think the data it's trained on is part of that, but that's why I keep calling it, you know, supernatural. Man, there's a supernatural component, because we don't know what the hell's going on.
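For listeners following along at home, the retrieval-augmented generation pattern mentioned a moment ago can be sketched roughly as follows. This is a toy illustration, not any vendor's implementation: the bag-of-words retriever and the `build_prompt` helper are stand-ins, and a real system would use embeddings and an actual language model for the generation step.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Stuff the retrieved organizational context into the model prompt;
    a real RAG system would then send this to a language model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our interview rubric scores communication and problem solving.",
    "Expense reports are due on the fifth of each month.",
    "Assessment center exercises include a role-play simulation.",
]
prompt = build_prompt("How does the interview rubric score candidates?", docs)
```

The point of the transcript's caveat still holds: even with your own corpus retrieved into the prompt, the model's pretrained weights mediate the answer.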

Speaker 2: We don't have much time maybe, but let me break another vow. Yes, we can find out what underlies, well, explain the predictions of deep learning. There are models that you use, models to audit models. So there is a whole field of explainable AI. There will be a symposium on explainable AI at SIOP this year, which I'm chairing.

Speaker 1: Oh, awesome. 

Speaker 2: We’re gonna 

Speaker 1: I’ll be there. 

Speaker 2: We're gonna show techniques where, actually, you can say which word drives the prediction. It's just so sophisticated, and you need data scientists and programmers to work with you. I think we have a lot of catching up to do, because people just keep repeating that it's a black box. No, they are white-box models now. And there is a difference between explainable and interpretable. If it's explainable for the user, they know how it makes sense.

I said this, and that's why it predicted I'd be a good coach: that makes sense. But interpretable, explaining every single calculation? No. Yeah, you'd need a PhD in mathematics.
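The word-level attribution Georgi describes can be approximated with a simple leave-one-out probe: delete each word and measure how much the model's score drops. A minimal sketch, where `predict` is a toy stand-in for whatever black-box model is being audited (real explainable-AI work would use methods like SHAP or LIME):

```python
def predict(text: str) -> float:
    """Toy stand-in for a black-box model: scores 'coach-likeness' by
    counting a few indicative words. A real model would be a trained
    classifier or an LLM; the keywords here are illustrative assumptions."""
    keywords = {"listen": 0.4, "feedback": 0.3, "goals": 0.2}
    return sum(w for k, w in keywords.items() if k in text.lower().split())

def word_attributions(text: str) -> dict[str, float]:
    """Leave-one-out attribution: each word's importance is the drop in
    the prediction when that word is removed from the input."""
    words = text.split()
    base = predict(text)
    return {
        w: base - predict(" ".join(words[:i] + words[i + 1:]))
        for i, w in enumerate(words)
    }

attr = word_attributions("I listen and give feedback on goals")
```

Here `attr` assigns a large score to "listen" and zero to filler words like "and", which is exactly the "which word drives the prediction" question, answered without opening the model's internals.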

Speaker 1: Or more. Yeah. That's not the same either as, like, I was reading where, you know, someone just wrote the word "poem" infinitely or whatever, and it made ChatGPT just barf up all of its training data. I don't know if you read about that or heard about that.

There's a way that you could basically knock it so off-kilter that it just spits out bits of its training data. And I think they've since put a plug in that. Right? But that was pretty interesting. So, yeah.

So we started off this conversation just kinda really basic, you know. But, boy, I like the direction, because these are the fun things to think about. I feel like my job's never been more fun, and I'm sitting here making text-to-videos for a vlog. I'm obsessed with this. You know, text-to-video is, to me, kinda the next thing, where you can create content that used to be expensive and hard. And it's still pretty primitive, but it's fun. So, in doing my work, I'm having a lot of fun. And the potential is mind-boggling. I feel like our field's been kinda one-dimensional for a long time. And now it's just absolutely gonna be nuts, and I do think you're right. That scares people.

Do I have moments 

Speaker 2: Of fear? It scares people. They have to be able to sell their skills better, be able to talk with data scientists on an equal basis, and actually find what the users want and give it to them. The rest is psychology. And we are so privileged to be working at such a fascinating intersection between the human, the workspace, the business, the commercial aspect, and now also technology.

So we definitely should feel empowered to go to the product innovators and say: you want to make a coach for people, but hey, you don't know anything about psychology. I'm the person here who has studied this for many years, and your code is gonna fail because of this and this, but I know how to make it better. And I will work with your data scientists and we'll make it better. Yeah.

This is how we get in. Not by contemplating how we will be taken out of business by data science, because the worst thing we could do is not do anything.

Speaker 1: Yeah. Then I’m with 

Speaker 2: three things will happen to us. 

Speaker 1: I would also say, don't sweat any of this, because we're actually all just living in a giant simulation and we're not really real anyway. So, you know, I'm just kidding. But I like that attitude, and that is the attitude that, you know, I really was so happy to bring to the show here, because I like your perspective.

Speaker 2: That's just the attitude of all the IOs in the past, when they were entrepreneurial. I'm sitting in this office, in this building, that was made by Bill Byham, because he was entrepreneurial enough to jump on this structured approach of measuring people's behavior in the sixties. Then I worked for Bob Hogan, who really saved personality from, you know, the situationalists in the seventies and eighties. These people, when they built their first products, were super motivated to help people. And we should have that sort of Wright brothers approach.

Speaker 1: Nice. 

Speaker 2: Not just interpreting the p-values. And when I go to SIOP and I listen to a presentation where someone describes to me all the statistics they did, I'm like... I check out

Speaker 1: eye option 

Speaker 2: because I can do 

Speaker 1: They should ban... we're out of time, but I have to say this: SIOP should ban putting a correlation matrix up on the screen. I'm sorry. I'm sorry. You cannot do that.

Speaker 2: You cannot 

Speaker 1: put a correlation matrix on a screen. If you're in the business world, that's the last thing you... I can't tell you... unless I'm working with other IOs. I do a lot of validation studies, and I've never put a validity coefficient in front of anybody, really, unless I know that's what they need to see. Absolutely not.

Because that's just gone away, man. You know? Well, I try. So anyway, great interview. I really look forward to seeing that at SIOP.

I've got a couple things going on too. And tell everybody: is there any particular place people can follow you? I know it's LinkedIn every time; I don't even know why I ask. But anything you wanna talk about that you're doing that's exciting?

Speaker 2: I wish people would follow the book we just published with my colleague Nikita. It's already in print in the UK, and in the coming months it's gonna be available on Amazon US; it's gonna be available as a Kindle version from next month. The book is called "Personality User's Guide."

Speaker 1: Oh, nice. You know, I have a lot of people on the show that have books, and usually they wanna promote the book as the very first thing, but you saved it for the very end and I didn't even know. So...

Speaker 2: Yeah. 

Speaker 1: There you go. 

Speaker 2: I was I was taught to this number by Bob. 

Speaker 1: There you go. Nice. Excellent. Well, thanks so much. Really great. Really great. As we wind down today's episode, dear listeners, I want to remind you to check out our website, rocket-hire.com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools. Take a look at the site. There's a really awesome FAQ document around New York City Local Law 144 that should answer all your questions about that complex and untested piece of legislation. And guess what? There's gonna be more to come. So check us out. We're here to help.




The post Hired by a Hologram? The future of talent evaluation will be wild. With Georgi Yankov, Principal Research Scientist at DDI appeared first on Rocket-Hire.
