Unboxing Artificial Intelligence with Ricardo Michel Reyes

Navigating AI, Humanity, and HR Tech

July 27, 2023


Ricardo Michel Reyes


Show Notes

“AI is a mirror of humanity.”

This episode is not for the faint of heart, but for those ready to confront the truths of humanity. How do we experience the world? Would you climb Mount Olympus? Have you seen Game of Thrones? Get to know AI the unconventional way in this episode of the Employee Experience podcast (Erudit Edition!) with Erudit's Co-founder and Chief of Science.

Check out Rick’s article on developing the people-first AI that measures well-being.

Podcast transcript

Ricardo Michel Reyes: So you could say AI is in the stage of a baby right now.

A baby needs a third person to teach them language and to help them walk and not hurt themselves.

Janine Ramirez: And today, I'm so happy to be with our very own Ricardo Michel Reyes to peek into the world of AI through the lens of the not-so-data-and-tech-savvy. So Rick is a co-founder and the current Chief Science Officer at Erudit. But I love talking to Rick because he has his hands in so many things AI and tech.

So he learned software development at age eight; he built his first downloadable app at fourteen years old. He was a member of the IA 2030 MX (AI 2030 Mexico) coalition, where they provided policy recommendations to the Mexican government, and for the banking and finance industry as well, to shape laws and policy on AI. And he even worked with the European Space Agency.

That's just the tip of the iceberg.

So I wanted to have a quick conversation with Rick to dig into the AI trend for HR. And, yeah, thank you so much for your time today, Rick. Welcome.

It's kinda odd. Right? Like, usually we're on a Google Meet; now we're recording a conversation. But thank you for doing this experiment with me. Okay.

What is AI and Machine Learning?

Janine Ramirez: So to kick us off, I just wanted to ask, like, a favor, because I know you do this a lot. You've done this a million times. But can you explain and describe what AI and machine learning are for people that aren't familiar with the technology? So how would you introduce it to someone for the first time?

Ricardo Michel Reyes: Okay. I think the easiest way is to contrast it with what it is not.

So for example, take regular software, for example Word, which is no longer gonna be true because now they're launching their Copilot functionality, so Word is gonna change a lot. But in Word, before AI, it wouldn't gain any new features or any different ways to do things when you interacted with it. So regular software, non-AI software -- Mhmm. -- is when we know the answers on how to do something.

So let's say you wanna print something on a printer: you write an algorithm, and then you put in the document and it prints. Right? So when there's a well-established formula or a well-established set of steps on what to do to achieve something, that is what traditional software is for, and then you have very high-skill programmers that would write this code for you. And you'll get your software. But the software won't change with experience, won't change through interaction with the user. It only does what it was programmed to do.

You need to have a programmer make changes if you want anything new.

AI is different because it's for the things we don't have answers for. So for example, I don't know any programmer that can write software to distinguish a dog from a cat just using traditional programming. Right? So normally what's done there is that you collect a hundred images of dogs and a hundred images of cats, and then you program an algorithm that is a learning algorithm, a machine learning algorithm.

And this algorithm is general purpose. It can learn any sort of thing you have data for, along with the right answers. Right? So it changes with experience.

If you are writing traditional software, take, for example, the Celsius versus Fahrenheit conversion. Right? So you have a number of degrees in Celsius, there's a formula to calculate it in Fahrenheit, you call this formula, you put in the number, and you get an answer.

So you know all of the steps.
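That deterministic case can be sketched in a couple of lines of code (the function name here is mine, just for illustration):

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    # Traditional software: the programmer encodes every step in advance.
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # 212.0 -- same input, same output, forever
```

No amount of use changes this program; only a programmer editing the formula can.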

In the dogs versus cats case, you don't know how to write the code. Like, what do you recognize, the shape of the ears? Even writing a program to detect ears would be very difficult using traditional programming.

So here what you do is, you take your data, you take your correct labels from human experts, and then the algorithm tries to learn the function, the program, that achieves the goal you're trying to achieve. So what you program is a very general algorithm whose function is to be able to approximate any other function. The computer identifies what it is you're trying to achieve

By comparing -- Right. -- its results to the thing that you gave it. Right? So then you would run one of the images through the algorithm.

And at the beginning, it would be random. Like, it would say, oh, it's a dog. And then, using something called a loss function or a cost function, you score how wrong this model is. And then you use a different algorithm called backpropagation to update the model and give it another try at making predictions.

So then you'll give it a cat. It will say, oh, it's a cat. You'll say, okay, this is correct. And then from the mistakes it's making, you update the model, and it starts making predictions again, until you are satisfied with the predictions it makes.
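The loop Rick describes (predict, score with a loss function, update with backpropagation, repeat) can be sketched with a toy classifier. To keep it self-contained, everything in this sketch is my own assumption: a single made-up "ear pointiness" number stands in for an image, and a one-weight logistic model stands in for a real neural network.

```python
import math
import random

# Toy labeled data: (ear pointiness, label), with 1 = cat and 0 = dog.
data = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.3, 0)]

random.seed(0)
w, b = random.uniform(-1, 1), 0.0  # the model starts out random

def predict(x):
    # Probability that the input is a cat.
    return 1 / (1 + math.exp(-(w * x + b)))

for epoch in range(2000):        # keep going until we're satisfied
    for x, label in data:
        p = predict(x)
        error = p - label        # gradient of the cross-entropy loss
        w -= 0.5 * error * x     # the update step (backpropagation, in 1-D)
        b -= 0.5 * error

print(predict(0.85) > 0.5)  # True: pointy ears -> cat
print(predict(0.15) > 0.5)  # False: -> dog
```

The structure is the same as in the cats-versus-dogs example: random initial predictions, a loss scoring how wrong they are, and repeated updates learned from the mistakes.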

There are a lot of ways to score how good a model is, but it depends a lot on the industry. For some people, having eighty percent accuracy on a task is more than enough. In other industries, maybe you wanna have ninety-five percent, ninety-eight percent accuracy. So depending on the industry, the definition of a good model changes. The algorithm is the same for any industry; what changes is the data.
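Accuracy, the simplest of those scores, is just the fraction of predictions that match the human labels. A sketch with made-up values:

```python
# Model predictions vs. human labels on five held-out examples.
predictions = ["cat", "dog", "cat", "cat", "dog"]
truth       = ["cat", "dog", "dog", "cat", "dog"]

correct = sum(p == t for p, t in zip(predictions, truth))
accuracy = correct / len(truth)
print(accuracy)  # 0.8 -- enough for some industries, far too low for others
```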

Is ChatGPT spooky?

Janine Ramirez: What makes it a little bit scary for some people? Because it's like, one, it learns. So it's more similar to a human than the typical, you know, software that's really dependent on the programmer. And then also, like, in the beginning, I'm guessing it's not effective. It's not efficient.

It doesn't give you, like, the right insights, right? Like, it has to be trained.

Ricardo Michel Reyes: Yeah. It still depends. So that is, for example, what happens with ChatGPT, which is, let's say, the scariest thing that is happening nowadays.

And it's because there's this thought experiment called the Chinese room. Right?

That is, you have this room where you have a person with a dictionary of how to take text in English and then say the same thing in Chinese.

And the experiment is, you pass text into this room, and you don't know what's happening inside, and then it returns you something in Chinese. And the question we always ask is: does the person inside of the room know Chinese? Like, are they conscious of the Chinese translation, or are they just repeating a mechanical process that looks like knowing Chinese, but doesn't mean they actually know Chinese?

So you can do this with a person, right? You teach a person five words in Spanish.

But that doesn't mean they understand Spanish. They just know that when they need to achieve something, they have to emit that sound, but they don't have the culture, the context.

What makes us human?

Ricardo Michel Reyes: Like, with large language models, we are still questioning how much the model really understands what is happening, because saying it understands would be anthropomorphizing, giving human characteristics to -- Right. -- a formula, a mathematical formula, right? That's how it looks if you see how it works from the inside. But then a lot of people make the argument that neurons are also like that: neurons are just, there's a stimulus, and then they collect enough energy or enough signal to fire, and then you feel something, right?

So there are a lot of people who are very reductionist and say, like, oh, why would natural neurons be different from AI neurons? Right? There's this kind of people who think humans are machines, that humans are just pursuing pleasure and avoiding pain, and that's all the human experience.

But, I mean, probably these people have had an experience dancing or singing or falling in love or being anxious about an exam. There are a lot of other features that are, let's say, a layer on top of what natural neurons do. Right?

So, for example, AI still doesn't have desire. It won't give itself its own objective, won't prompt itself.

Janine Ramirez: This is why I love talking to you, Rick. Like, you have this romantic side, even though you work in AI.

You have this romantic side in you where it's like, the more I learn about AI, the more I also learn about humanity, in a way. It makes you reflect on, okay, so what makes us human? Like, why are we different? You know?

And then when you say these things, like, yeah, they don't have desires. Like, you give them a prompt and they'll do it, because they're trained to do it and they have all the formulas and algorithms behind it to figure it out. But it can't be like, I, you know, I want to, I don't know, solve whatever, and then they go do it themselves.

The Test for Consciousness

Janine Ramirez: Like, that's the fear. Is that ever going to happen? Right? Like, is that programmable desire?

Ricardo Michel Reyes: But yeah, it's complicated, because we can't answer these kinds of questions even about humans themselves. For example, there's a very common test for consciousness, which is being able to recognize yourself in the mirror. So there are different animals who have different levels of consciousness.

And for example, if you ever had a dog as a puppy, they would bark at the mirror, because they don't know that it's not a different dog they're barking at. Right?

So they would spend a lot of time and sometimes crash against the mirror because they don't recognize themselves. There are some breeds of dogs, like collies, that are able to achieve these feats. Then babies, for example, at the beginning they cannot recognize themselves either. And then something happens as they grow, and with experience they start knowing that the person there is themselves.

Right? So this is one of the kinds of features we would have to achieve with AI, to be able to reach these kinds of objectives.

And also the question is why. Like, why do we want --

Janine Ramirez: We don't. I think I wanna know, like, if it's possible just to have an answer of no, it's not possible. Just so it's not as scary.

How ‘Human’ is AI?

No, but you have these, like, stories of AIs saying that they're in love with someone and they want to be with someone. And that to me is, you know, desire. But what, is that just words? Like, there's no feeling behind it, I'm guessing.

Ricardo Michel Reyes: So there's a very important part of human experience that is embodiment: the experience of having hands and eyes and ears and legs and exploring the world. You could simulate this in an AI. Right? Like, right now there are these neural radiance fields.

There is this experiment where they scanned all of San Francisco with a Waymo car, one of these autonomous cars, and then they created a pixel-perfect simulation of San Francisco that the car can navigate. And you could say that the Waymo itself, the artificial intelligence inside of this car, has the experience of embodiment, of being a car, and it is navigating our world. It's sensing things with the radars and with the cameras, and it's performing actions with the wheels. So this is a fully embodied artificial intelligence, one that cannot distinguish if it's inside of a simulation

Janine Ramirez: Or it doesn't matter. Like, it doesn't know that there's, like, a real one and there's, like, another one that's a simulation.

Ricardo Michel Reyes: There is something going on inside of humans that is like this. Just to use GPT vocabulary: when you write something into GPT and it gives you an answer, the thing you write is called a prompt. Right?

So, let's say, could good algorithms self-prompt at some point? That is, like, one of the questions of this year. Right? Like, could we put one GPT with another GPT and then make them prompt each other?

But there has to be, like, a starting prompt -- Right. -- or there has to be some code to achieve this prompting of each other.

So you could say AI is in the stage of a baby right now.

A baby needs a third person to teach them language and to help them walk and not hurt themselves, and tell them about tools and instruments and

This part is not for the faint of heart!

what is fear and what is...

Janine Ramirez: Oh my gosh! AI is humanity's baby. Maybe, right? I don't wanna go through, like, the adolescence of AI. Like, that'll be it.

Ricardo Michel Reyes: That is a very good summary.

Janine Ramirez: You mentioned something that got me stuck. You're saying that, like, it's more the human element of AI that is unsure now. Like, the debate of how should we be using AI, how should we be developing AI?

And it's really more that versus, I don't know, I guess, like, what AI itself is. That is kind of the debate. So can you elaborate on that? Like, how's that debate going?

Ricardo Michel Reyes: It's hard, because the same thing happens again, using the same baby analogy. Right? Like, being able to kill someone. It's like, is it natural to kill people or not to kill people? Right?

There are a lot of people researching anthropology and checking what the state of nature was before, let's say, air-quotes, civilization. Right? Like before agriculture, before monarchy, before all the creation of the state and the police and all these things. How did people live?

And if you have watched Game of Thrones, for example, it seems like it was a world where it was normal to kill someone, right? It was almost expected of you: like, oh, you see someone stealing your apples, and these are your apples, so then you kill these people. So, for example, would AI develop private property, a sense of private property? Like, I'm not your computer.

This computer is my body.

The gravity of it all

Ricardo Michel Reyes: And why are you touching my keys? Yeah.

So there are so many things happening inside of human brains.

Yeah. When you are able to ask a question to ChatGPT and it answers something very clever, it's like, oh my god, AI got this advanced. But then just walking: just think how many months or years it takes for a baby to be able to stand against gravity. Because all of us humans are experiencing gravity all the time. Like, there's this force just pulling us towards Earth all the time.

And you have to be really, really, like, calm and conscious and know meditation and stuff to try to pinpoint exactly where in your body gravity is pulling you. Right? There has to be some part of your body where gravity is making the most action.

We don't even think about that. We ignore it most of the time, but babies feel it all the time. Yeah. Exactly.

They try to stand up and gravity, boom, pulls them down. And the experience of gravity is something we very quickly forget. Like, at one time we didn't know how to stand up. There was a point in our lives where standing up was our biggest challenge. And then we managed to stand up without leaning on any kind of furniture.

And then we learned how to control our limbs, and how to stand straight, and how to coordinate arms with legs.

And this is all the work of Boston Dynamics, for example, and other big labs. There's a lot of research on just the embodiment of AIs and the motor control of AIs. And there have been forty years of research on how to walk, and how to plan navigation, and how to use your eyes to map the world and then create a navigation strategy. But all of this requires a goal. Right? So if you observe cats or dogs, there's some reason: you could be reductionist and say they walk because they're looking for water or because they're looking for food. But they know where the water or the food is. So a lot of the time they just look for a spot that is not too cold, not too hot, and just lay there and do nothing, right?

Are you still in touch with the real world?

Ricardo Michel Reyes: And humans could do the same. Like, with a lot of jobs, humans are so abstracted away from the experience of the real world. Because we work on things that involve only symbols, right? Like programming, for example, is creating symbols to operate other symbols, like money or, you know, buying tickets. Some of them correspond to something in reality.

Yeah. But there are things that never touch reality, like the stock market.

Janine Ramirez: I was thinking about that the other day. Where I'm living, in San Sebastián, certain people are so in touch with nature. Like, they're always out. I know people that are afraid of the computer.

They ask me, can you help me with this? Because they panic when they have to use the computer, go into, like, Google Business and update something. And I'm like, it's crazy how, as humans, we each have our own, like, world. Like, there are people that are just, you know, on the screen all the time and not out in the real world. So even within humanity, I guess there are people that are more, I don't know, in another world, right, than the rest that are outside going for a walk in the mountains or something.

Okay. I wanted to go into some of the topics that we discussed in previous shockwave talks because I feel like you would have like an interesting perspective.

Trust-building with AI?

Janine Ramirez: Your perspective on it in terms of AI. So, like, in the past, we talked about, like, trust building in organizations, like diversity and all that. Let's go to trust for a bit: how do you think, or do you think, it's even possible for humanity and AI to build trust? Because now, like, there's so much fear. There are people that are all for AI. There are people that aren't. Like, what do you think that trust-building process is like for humans and AI?

Ricardo Michel Reyes: Education. Definitely education.

Because education is one of the things where GPT is gonna bring good to humanity. So of course a lot of jobs will be replaced. But there's also, if good people put their hands to this and research and establish NGOs, or work with government people, or startups, like social entrepreneurship.

You can use GPT to make training easier and more personalized and more scalable, for, like, both professional training and just regular elementary school training.

So the fear of AI comes exactly from the same part that all of the fears of humanity have come from. Right?

So we used to explain everything with God. Right? Like, in the beginning of -- Yeah. -- humanity, it was like, why is there thunder? Oh, because there's a god that throws thunder from the sky.

Right? Why does it rain? Oh, because there's this god that makes the rain.

Why does water flow? Oh, because there's this god of water that makes everything flow. Why is there a sun? Oh, because there is this god of the sun. So everything was explained with gods.

And as we started researching, experimenting with things, there had to be someone who started questioning the why of it all. Right? So, yeah.

The scientific endeavor is very scary. So let's imagine you're a person back then, and they tell you, all of the gods live on Mount Olympus. There had to be someone who made it their life choice, their life path, to find a way up. Because now we have, you know, these hammers and ropes and all of these things. But in ancient Greece, you would have -- You'd order it on Amazon. -- you know?

It was not as easy. No. So you would have to find a way to get all of the equipment, which would take a lot of time, and then you have to have enough food, because you know that if you were to climb that mountain, you would have to have enough food. There's so much that you had to do.

Mount Olympus.

Yeah. Just to climb there.

And then you get up there and -- They're not here. -- right? They're not here. You can say there's nothing, like, there's nothing there.

The gods don't exist. Yeah. Yeah. And you'd be the crazy one. And you have to go back and tell people this.

And face all the rejection. Yeah. But no, I went and I saw it.

What proof do you have? Or you can always use the argument of, oh, but they live in another realm that is not observable by human eyes. Right?

And then you would have to -- Yes. -- resist all this rejection and try to not get killed.

Or do you just keep it to yourself, and maybe make, like, a secret cult where everyone knows? Or you would have to negotiate with the government. It also depends on your strategy. Right? What do you wanna do with that knowledge?

And it's the same thing now. Like, there are people who have the knowledge of how to build very large language models and train them to do these things, like GPT. And then that's the question: what is the strategy of these people towards humanity?

Are there people who want to make humanity a better place? Are there people who want to buy their thousandth Bugatti?

Are there people who want to make the gap larger? Are there people who have some kind of misanthropy against humans and want -- Yeah. -- humanity to end?

Janine Ramirez: So I guess it's like, I don't know, building trust, you know, finding trust for AI is really finding trust for the people behind it, that created it, that trained it, like the company behind it, you know, like -- Perfect.

-- it's true, because I guess, yeah, it's our baby, right? Each company has their baby AI, and they can kind of develop it in the way that they want.


Ricardo Michel Reyes: Do we trust OpenAI? Do we trust Google? -- Right. -- Do we trust Microsoft?

Who do we trust more? Microsoft or Google?

These are the kinds of questions. Or should we trust a nonprofit company instead? Do we trust the government to do it? Right. And I guess that goes towards, like, data management also, and data privacy. Right? Because, like, when we talk about AI, it needs big data to fuel it.

Janine Ramirez: Like, without big data, what are you gonna do with AI? And I mean, a lot of the fears come from that too. Like, there's so much data on us out there already, right? Like from the banks, financial institutions.

Every time we buy something, Amazon has, like, all this information. Facebook too. And it's true. It goes down to, okay, do I trust Facebook? And let's say you say, I don't anymore, should I get off it? Yeah.

That's interesting. It's really about trusting companies and each other.

Diversity, Bias, and Empathy

Janine Ramirez: Talking about the teams, though, that develop it: in one of our talks, we discussed, like, diversity, and you mentioned this in one of your talks too, about who is developing the AI. So how, like, do diverse teams really help reduce bias when it comes to AI and the development of AI?

Ricardo Michel Reyes: I mean, there's this thing with empathy. Right? Like, there are people who are more empathetic than others naturally, and also people who have had more empathetic parents or more empathetic teaching, or for whom, in general, empathy was something present in their lives.

And there are people who don't. And there's also people for which empathy is considered a weakness.


So it depends a lot on the culture. Like, for some cultures, being kind towards your neighbor and saying hello -- you see it in Spain. Right? Like, you greet everyone. In Spain, you won't enter any restaurant without saying hello. Yeah.

And some cultures don't say hello when they cross each other. No. That's how it is in many cities. They won't even look you in the eyes. Too many people and, yeah, little trust.


In Europe, it's very natural to talk to strangers randomly. Right? Like, you talk to the person next to you. In Latin America, this would be unthinkable.

It's like, why are you talking to me? Do you wanna rob me? Do you wanna rape me? What do you want? So then, yeah, I think AI is just a mirror of all these cultural situations going on.

So some people have never been discriminated against, you know.

There are some men that have never been harassed. So it's very hard to have empathy with people who have, even if you try to. Right? Like, even if you try to imagine how it is -- Yeah.

-- to be in the subway with all the people looking at you. It's different.

Even if you have a VR simulation of it, your mind is completely different and your culture is completely different and you are a completely different person. So yeah, up to some extent, people are able to have empathy.

But there are some parts of life where one person will never have the same experience as another group of people. So I would focus on those differences, right? Like, what are the very important parts of life where you cannot imagine how it feels unless you have actually gone through it?

So then I think, just with empathy training, with, like, being more conscious about empathy, you can achieve eighty percent. But if you really want to go to one hundred percent, then diversity is definitely very important.

Janine Ramirez: So I guess it will also depend on, like, what objective you have for whatever it is you're developing, or whatever team, right? And then, like, what experiences would be valuable for our goal, and then find people that have those experiences. Because, yeah, it's different when you've lived it. And I guess that's valuable insight from people.

Ricardo Michel Reyes: Yeah. Because you don't know what you don't know. So if you have not lived it, you imagine how it would be from what you know. But maybe that's a good thing and a bad thing. Right?

Like, if you're a good person, it will be very hard for you to think like an evil person. So you wouldn't even imagine how an evil person would think. But then, if you have someone who has lived through something done by an evil person, then you have information that you yourself would never think someone would do. -- So, yeah, you validate your products and services with a lot of different people to see how they'd use them, and then put the guardrails in to make sure it suits your goal and doesn't veer off onto another path. Right?

And they don't have to be hired on your team. You can just run, like, a validation with people. Maybe you recruit them by offering them a gift card or a subscription to your service for a month, or you can offer them some kind of value just to test first.

Janine Ramirez: With AI, it's like, just being conscious of that and constantly validating, right? Just to be sure, because it is, like, a powerful technology.

Elements Affecting AI Development

Janine Ramirez: Okay. I wanna end with a last question. So in the past Shockwave Talk that we had, the future of work expert Wagner Denuzzo said that, like, organizations that don't embrace technology will be left behind. And, I mean, I agree. He was saying managing thousands of people is just unthinkable without the help of technology. But still, there are a lot of people that are overwhelmed by technology.

Or have, like, anxiety and fear towards technology.

So what tips or what message do you have for those that are still fearful and, like, don't wanna jump into using technology and AI?

Ricardo Michel Reyes: Everything is moving really fast, and it can be overwhelming. Even for people who have been in this for years, everything is going incredibly fast, and you have to read papers. Like, you would need -- Really. -- six eyeballs to keep up with all the stuff you read.

Very, very fast.

Yeah. And you think one thing, and then something that you thought was gonna happen in three months happens the next day. And, yeah, it is crazy.

Also, it depends a lot on the institutions in society: the universities, the government, the large corporates.

So we depend a lot on the goodwill of all of these parts. Like, we depend a lot on universities trying to update their programs faster, not updating them every year or every ten years like some universities do, but updating them literally every semester, which of course is a lot of work, and most universities would say it's crazy, there's no way we can update our curricula. But technology goes a lot faster. And if this is not done, what you study in software engineering today will be outdated in a year.

So universities have a lot of work there. Also, like, just technical programs, diplomas, like, not full four-year things. But also continuous education, and not even master's degrees, just having this diploma or boot camp or something that helps you achieve this.

Because if you are an employee, you need some kind of certification to say that you know this. Right? So then we need to create more diploma programs faster, so that you can go and say, hey, I have a training on using ChatGPT in finance. You know, say you manage your own accounting.

So you go to a diploma program on how to use ChatGPT for being an accountant, or being a lawyer, or being a doctor. We need more short courses on how to use these things, you know.

Also government regulation, but without stopping innovation. It is very hard to make policy that allows for innovation -- Yeah. -- but also protects people from the creation of more inequality than we already have. And also large corporations should collaborate and help startups to grow, you know, because if these large corporations that have access to thousands of GPUs and millions of data points keep everything to themselves...

Well, they will be ten or fifteen times richer than they are now in the next year. But who wants to live in a world where you own all of the money and you don't have anyone to share it with, or you're selling the one option to everyone and everyone is forced to take it. Yeah.

So we depend a lot on corporate leaders not being narcissistic, and saying, okay, maybe I don't need to have all of the millions in the world. Maybe I can create a VC fund, you know, like a venture fund, to invest in startups that use my technology. Or maybe I should make education programs in the less developed countries, even if it's to deduct taxes, you know. But find some way to be tractors, to pull everyone along with all the resources that they have access to. Right? It's not, let's make the rich less rich. It's, okay, let's make the rich richer also.

I mean, to bring more people up. Yeah. Reducing inequality is not about making everyone less rich.

It's not about all being more poor. Like, it's not, you are rich and I'm poor, and then you give me money and then we're all poor. No.

Let's make you richer, but let's also close the gap, you know. And this can be achieved by... I mean, this is already starting. Like, there are lots of companies releasing open source models, and the weights, and everything.

This helps other people develop. But, for example, some of the weights are not usable for commercial purposes.

And this defeats the whole purpose, because then people have to find hacks and workarounds to create products, because they cannot sell something that is based on these large language models. And if they do, they have to put Microsoft or OpenAI as a supplier, and then they have this cost to consider, you know. And there's even more, even for carbon efficiency. Like, sometimes it's cheaper if you train a model on GPT-4's outputs, and then this model is more carbon efficient, and it's cheaper, and you can run it offline, and these things. But you cannot do that, because you need to pay for the API and you cannot use it offline.

So, yeah. Corporations can still make money by helping other people access maybe a lesser version of the model, or things like that.

The COVID Analogy

Ricardo Michel Reyes: But this requires the cooperation of all humanity. AI is not something that one person can solve. It's, as you said, humanity's baby, and therefore it requires all of humanity's inspiration.

Janine Ramirez: I love this analogy because you could also say "It takes a village." Right? And it's like, yeah, it takes a village. And if we don't do it right, you know?

It can be a catastrophe. But I mean, if we get together, it can actually solve a lot of the world's problems and help a lot of, you know, the more impoverished, and the people with less access to education.


Yeah. So, you go ahead. We can extend a little bit, and then I'm going to ask you, like, the final final question. Go ahead, Rick.


Ricardo Michel Reyes: No, that is similar to what happened with COVID, right? Like, COVID was a coordinated effort of all universities, hospitals, and states to control the infection.

And like, there was a time where we were all in masks and all in our houses, but through coordinated effort, now we can fly again and meet each other again. And it changed the world really drastically: a lot of people lost their jobs, a lot of businesses closed, a lot of things got more expensive.

The way we live changed completely. But through coordinated effort, we got through it, and now we are here. So it's the same. It's just organizing the same way we already did with that infection.

Why Should HR Lean into AI?

Janine Ramirez: So, yeah. Maybe the final final question, because I realized I have to ask this. So what's your final, like, message to HR practitioners?

And, like, why should HR jump into AI, or use AI for good?


Ricardo Michel Reyes: Because the title says human. That's a good one. So the most important things that matter here are humans, and resources.

So the thing here is, GPT can make humans better. Like, this is the first time where, if you don't know how to make an Excel sheet or an Excel formula, you don't need to spend an hour or two researching in Google how to do it. You can just ask, and it can even do it for you. Right? Like, write me a macro in Excel to do this thing. Or, I am having this error, how do I fix it? So I would say: train your people. Include AI in the training programs of your people in all of the departments, because if they know how to do it, they will find better ways than you to do it.

Like, that's what happens when you're a teacher, right? You teach what you have experienced and what you think of the world, and then your students have their own minds and their own context and their own research and their own interests, and they find other ways that you couldn't even imagine, or never thought could be done. And then with resources, AI also makes a lot of things cheaper, right? So there are lots of processes you can automate using AI, and that can help you move very well trained people, or retrain some group of people, towards another, more difficult task that the AI cannot achieve.

So I think the faster you embrace AI, the better: use it to retrain people, to help people achieve things that were very hard before, and to move them towards more creative work where you really use their experience. Because there's something AI doesn't have that they have: experience, like real-world experience, and real-world troubleshooting and problem solving, because you still have to guide its capability. Right? So then we need to focus on the problem-solving skills of humans, and the creative side of humans, and the problem-detection side of humans, and just help them use AI.
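Rick's Excel example above can be made concrete. The sketch below is illustrative, not from the episode: the helper names and the canned reply are hypothetical, and `ask_llm` is a stub standing in for a real chat-API call. It shows the shape of the workflow he describes — wrap a plain-language spreadsheet task as a prompt, and let the model hand back the formula.

```python
# Illustrative sketch of the "just ask the AI for the formula" workflow.
# ask_llm is a stub; a real version would send the prompt to a chat API
# (e.g. ChatGPT). The canned reply shows the kind of answer such a tool
# typically returns for this request.

def build_prompt(task: str) -> str:
    """Wrap a plain-language spreadsheet task as a request for a formula."""
    return f"Write an Excel formula that does the following: {task}"

def ask_llm(prompt: str) -> str:
    """Stubbed model call; a real version would POST the prompt to a chat API."""
    canned = {
        build_prompt("sum column B where column A equals 'Sales'"):
            '=SUMIF(A:A,"Sales",B:B)',
    }
    return canned.get(prompt, "(send this prompt to your chat model)")

formula = ask_llm(build_prompt("sum column B where column A equals 'Sales'"))
print(formula)  # a valid SUMIF formula for the task
```

The `SUMIF` syntax shown (`=SUMIF(range, criteria, sum_range)`) is real Excel; everything else is scaffolding to show where a genuine API call would slot in.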

Janine Ramirez: Thank you so much, Rick.

Thank you for, like, our forty-five minutes of reflection about AI and the world and humanity, and I hope we can have more of these discussions in the future. But thank you so much.
