Trust on Purpose
Are you intentional about building, maintaining or repairing trust with the people in your life? Most of us aren't, and sometimes important relationships suffer as a result. So much of what is right or amiss in those relationships ties back to trust, whether we realize it or not. We are dedicated to helping you become intentional about cultivating strong trust with everyone important in your life: the people and teams you lead and work with, as well as your family, friends and community. In the Trust on Purpose podcast, we dive into everything that makes up trust and what supports or damages it. We unpack situations we commonly see with leaders, teams, organizations, and others we work with to show how trust can be strengthened, sustained, and repaired when broken. Listen in for conversations between two pros who care deeply about helping you become an intentional and masterful trust-builder so you and your relationships flourish. We share pragmatic, actionable takeaways you can use immediately and deepen with practice. If you have questions or situations related to trust that you'd like us to talk about in a future episode, please email charles@insightcoaching.com or ila@bigchangeinc.com.
We'd like to thank the team that continues to support us in producing, editing and sharing our work: Jonah Smith for the heartfelt intro music you hear at the beginning of each podcast (we LOVE it); Hillary Rideout for writing descriptions, designing covers and helping us share our work on social media; and Chad Penner for the superpower editing work he does to take our recordings from bumpy and glitchy to the smooth, easy-to-listen-to episodes you're all enjoying. From our hearts, we are so thankful for this team and the support they provide.
AI and trust: can we believe everything we see?
Get ready to dive deep in this episode of Trust on Purpose, where hosts Charles Feltman and Ila Edgar tackle big questions around trust and artificial intelligence - starting with a curious twist! It kicks off with the story of a school project involving AI-generated sloth images - yes, sloths - and quickly jumps into a thought-provoking conversation about the ripple effects of AI on trust in our daily lives. Can we believe everything we see and hear in a world where AI is increasingly behind the scenes? Charles and Ila dig into the complexities of transparency, potential deception, and our ethical responsibility to be upfront about AI's role in the content we create and consume.
This episode is more about asking questions than answering them, guiding listeners through the foggy territory where trust and tech overlap. Charles and Ila discuss how openness and integrity can keep psychological safety alive in our teams and relationships, especially as AI takes a bigger role in shaping what we see and believe. Tune in for a candid, unfiltered discussion on navigating trust in a digital age where things aren’t always what they seem!
Speaker 1:Hi, I'm Charles Feltman.
Speaker 2:And my name is Ila Edgar, and we're here for Trust on Purpose.
Speaker 1:Today we have a little bit different podcast, in that we're really going to ask questions of ourselves and each other and you, rather than talk more specifically about things people can do. Our topic is trust and AI, or AI and trust, and what effects it might have on the way people approach trusting each other at work, at home, in our communities and so on. A couple of days ago, I read a short article in the Los Angeles Times. It was written by someone whose child (I can't remember whether it was a son or daughter) was at a school where she was asked to give an oral report before the class. All the students were doing this on their country of origin. I guess it might have been an international school, I'm not sure.
Speaker 1:Anyway, she went off to find information about her country of origin, which is Costa Rica, and one of the big things in Costa Rica is sloths. You can go there and see sloths. So she wanted to bring in a picture or two of sloths as visual aids for her report in class, and she went on the internet and searched for pictures of sloths and found a few, some of them, you know, very cute; it almost looked like the sloths were smiling for the camera. Her parent, who wrote the article, took a second look and came to find out that most of the pictures she had downloaded were in fact generated by AI. They weren't actual pictures of real sloths in any zoo or out in the world, but rather pictures that came from AI. Which raised the question for me: where are we going to go with this when we can't trust that anything we see on the internet, or perhaps in other places as well, actually comes from a human being or from the natural world? Something purports to be an authentic photograph, for example, or a document or video, but in fact it's not.
Speaker 1:It's generated by artificial intelligence in some way. And our question, or set of questions perhaps, that we're really diving into today has to do with what effects that might have, as we move into the future, on trust: interpersonal trust at work, trust between individuals and organizations, trust among groups in society and so on, as people use the internet and even beyond the internet, because obviously documents can be faked in an organization and shared around with nothing to do with the internet. Something could be created and shared that looks like an official document or a real video but turns out to be a fake. So what impact is that going to have on our ability to trust each other, or our desire to trust each other, or our level of comfort and safety in trusting each other? That's where we're exploring today, Ila. What comes up for you as we think about that?
Speaker 2:For me, it really makes me worry. And I'm hopeful and optimistic, but I know that's probably my rose-colored glasses. Number one, I think, is transparency: for people to say or state, this is an AI-generated document, or I have used AI to help write or create this, whatever that is, and not to claim something that has been 100% AI-generated as your own work, your own thinking, your own creativity. So for me, that's an integrity line. And I would hope, but I think we already see, if you're reading articles like this, that it's easy to be confused, because there isn't that transparency; people aren't saying or being forthright that this is an AI-generated image. And I think this is interesting, because I also talked to our lovely Joseph Myers this week.
Speaker 2:We were talking about his trust/distrust app, and you and I did that exercise and report with him. He's added a button in the report that will generate an AI image representing a piece of the report, and I'm like, I think that's really cool. He showed me the AI image that was generated for him in his report, and I'm like, I love that, that is really, really cool. But, again, fully transparent: this is an AI-generated image. And I think the sticky part for me right now is, how will other people be in integrity about this? Is this AI, isn't it? Is part of it, isn't part of it? That's kind of the first stroke for me, because if I find out that someone I trust to be completely capable, reliable, sincere, to have a lot of care, that they've used AI and not disclosed it but actually held it as their own, that would be a massive trust breakdown for me. Massive.
Speaker 1:Me as well. And what you said a moment ago, will people be in integrity around that? That's a question for me. Will they be? Right there you have something that comes in as suspicion or doubt, which immediately sets the distrust network in our neurobiology on alert. Do I need to be careful about what this person may present to me? Should I be careful about that and really kind of double-check? So this is a question that comes up in my mind as I'm saying this: is it important to distinguish between AI-generated and human-generated material? What's the value in that?
Speaker 2:I'm holding my breath, because I don't want to just repeat integrity. I think it's the transparency and the level of trust in a relationship, and whether this is part of that disclosure or transparency or consistency. And I'm immediately going to, this is a really important conversation for teams to have: how do we use AI, how do we navigate it, how do we support each other, how are we transparent about it? I'm literally vibrating to think that I could be in relationship with someone that I trust, for example you, and you don't disclose, or I don't disclose to you. I just don't know what that relationship could look like, and I don't think it's one where I would feel safe. My brain is just, oh. So I'm going to shut up. What do you think?
Speaker 1:Well, what you're pointing to, I think, and I would agree with you, it's important for me as well, is the importance of honesty and integrity in the important relationships I have with people in my life. So that if they don't say, oh yeah, this is something I made myself, but simply don't say it was generated by AI and leave it to you to find out or not, and then you find out it was, the basic thing there is that there's a betrayal of trust, just what you're saying: I trusted that what you put forward, in whatever context, is something you in fact did yourself. Okay, so we then have a conversation on the team, for example, about generating documents, generating whatever it is the team is doing: how much AI use is okay there? Are we going to have an agreement with each other that we can use AI to completely generate stuff, or just to give us ideas, or whatever it is? That's the conversation: what's our commitment to each other around this? So in a work setting, I can see where that is important to be able to do.
Speaker 1:But I guess I wonder beyond that. If you and I are part of a team, and in a sense we are, and I come up with something really brilliant in one of our podcasts, and you're like, wow, that's really great, Charles, that's really cool, and then you find out later that AI generated it and I just regurgitated it but didn't say so. What we need to do is have a conversation between ourselves about what our standards are around disclosure and that sort of thing. So we do, and all is well.
Speaker 1:But then what happens outside of that? Say I go to the internet to get some data we want to use in one of our podcasts, or we're doing a piece of work with a client and we want to bring them some data, and those data were generated by an AI, but I don't know that. Then we find out later, or the client finds out later, or someone listening to our podcast finds out later, that it was actually just AI-generated data that may have been full of what they call hallucinations, material the AI made up, which seems to happen, although the creators of AIs are working really hard to try to keep that out. And the other problem is that AIs are full of bias that's injected by what is fed into them, by the people who feed them, if you will. So how do we trust that stuff? How do we trust what we bring to the table?
Speaker 2:This feels so heavy and so daunting, so complicated. How do we navigate this? And again, I think both of us are very clear that we're not proposing we have answers to this; we really wanted to be in the question. I'm as overwhelmed as anybody right now. But I think it comes back to that honesty, integrity, transparency, in the same way, unfortunately, we've had to learn for many years about fake news: just because you see something posted on social media doesn't make it true.
Speaker 2:So validate your sources, do your due diligence. Where did this come from? Is there comparable data? Is this in line with what other research in this topic or area has come up with? I don't know, there's got to be some way, because we care about sharing true, validated information, not making shit up and holding it as real. And how we do that comes back to us as humans and individuals. Worried isn't the right word, but I care about my personal reputation and the integrity of how I want to be in my life, so it would be important to me to make sure I do my due diligence and don't fail to disclose, or whatever that is. But we know, if we look left and right in our world, that's where the tricky part comes in: not everybody will feel that way, not everybody will want to be that way. And then what?
Speaker 1:The other thing that comes up for me is that it adds a whole lot of work to my work. Look, if I'm doing due diligence on anything and everything I'm bringing in to convey to somebody else, I have to stop and go, okay, wait a minute, let me just check the sources of these data, or the source of this quote or this video I'm showing, or whatever it is. That, in and of itself, can be practically a full-time job. That's why news companies have fact-checkers; that's their job, to go do the fact-checking, and I don't have the resources to pay a fact-checker.
Speaker 1:Now, there used to be a website you could go to that had, in effect, done the fact-checking. It would check on stories, internet rumors and stuff. I can't remember the name of it, but I used to use it a lot. Now I think there's just too much out there, so I don't actually go to that site, but maybe something like that on a bigger scale might be a valuable resource. But again, it's all going to depend on who's actually going to do the checking, and do I believe them? Yeah, and, like you said, do other people care enough? Do other people care about whether or not what they're passing on is in fact valid, whether they've done due diligence on it,
Speaker 1:in a world, where stuff can be made up pretty easily. That looks really real.
Speaker 2:Well, and it takes all sorts to make our world go round. We know that. So we already know people will have very different opinions about AI and different beliefs about how it should or shouldn't be used than what we're talking about here, so let's just acknowledge that that is true. How does that impact us, and how do we navigate in a way that's true and feels in integrity for us? And I think, as this continues to unfold, we're going to bump into trickier and trickier situations, and it will take teams that have developed strong trust with each other, that have a foundation of psychological safety, so that when, not if, these super complicated situations come to light, whether proactively or reactively, they have the strength in the relationship to be able to talk about it. Even if it's deeply disappointing, even if it feels deeply out of integrity, even if it's been absolutely trust-damaging, there's an ability to talk about it and learn from it.
Speaker 1:Yes, that's kind of what we preach in general, yeah.
Speaker 2:Kind of what we hope for. But this in particular, I mean, AI is still so new. What is it going to evolve into?
Speaker 1:Yeah.
Speaker 2:Now, I will say transparently, AI is open on my browser all the time. Looking at a blank sheet of paper terrifies me, so absolutely, get me started and then away I go.
Speaker 2:I'm like, thank you for pointing my brain in one particular direction, let me see what I can do with that. And so I absolutely love it. Or, when I'm struggling to write a sentence: what other way can I write this? So I appreciate the value and the support it can bring. I'm also very clear that it's still me doing my work, coming from my heart.
Speaker 1:And that kind of goes back to the question that came up for me: what's important about that? If you're honest, if you say, hey, here's my LinkedIn post that an AI created, I posed the question or gave the prompt and the AI created this, and I think it's as good as or better than what I could have done by myself, so I'm putting it out there, then I'm honest about AI developing it. So the question comes to mind: what's important about whether AI develops it, or I do it myself, or I do it in a combination of getting some ideas from AI and then expanding on them and adding my two cents' worth?
Speaker 2:I feel that's somewhat situational and somewhat about what has been designed in advance. So I'm making up a situation: I'm not super artistic or crafty, but let's say there's a marketing team trying to come up with a new concept and they're struggling, or maybe they're not struggling and part of their creative process is, let's throw some prompts or ideas into AI and see what it generates. Maybe that's an agreed-upon, shared standard: that can absolutely be part of the creative process. Or maybe the opposite standard: we come up with our own, we see what we can do first as humans, what we can collectively do, and then, if we're stuck, we use AI. So I think it really is about the agreements and the uses and the hows. When is it okay? When does it make sense? How are we disclosing it? How are we using it? Because I think it's really situational.
Speaker 1:Yes, and the context is, I think, important. AI is great at doing all kinds of stuff like that and can be very useful, everything from writing speeches on. To answer my own question, what's important about something being generated by a human versus by AI? And when I say that, I'm talking about fully generated by a human, or at least having some human input, versus something that's just generated by an AI. I think one of the things a lot of people would say, and I may be wrong, is something you said a few moments ago: that AI is not capable of generating something that has real human heart and empathy in it. And yet people are using it. They're going to AI therapists.
Speaker 2:What? Hang on. Sorry, what?
Speaker 1:People are using AI therapists.
Speaker 2:Oh, I didn't know this was a thing. Wow.
Speaker 1:But yes, people are using AI as therapy or as a therapist. I guess I certainly haven't tried it.
Speaker 2:I think I might have to because I'm super curious.
Speaker 1:So intrigued by that, like what? What? I heard about this a couple of months ago, I guess for the first time, and I was doing the same thing, going oh my gosh, how does that work? But apparently the people who do this are quite satisfied, for the most part, with what they get. There's something there that people can connect with that feels like empathy, I think, even though it can't really be human empathy because it's generated by an AI. Going back to the sort of the question, what's important about it being something that at least has had some human fingerprints on it is that it's the humanity in it, the empathy, the heart in it, that a human being brings to it. Whatever it is is something that an AI can't generate on its own, but that just might be a belief that we all have about humans, that there was a Forbes article that I recently read and it was eight workplace trends that will define 2025.
Speaker 2:And one of them is the rise of human-centric leadership. As AI takes on traditional managerial tasks, leadership roles are transforming. Leaders who foster emotional connections and build cohesive teams will be in high demand. Key attributes include empathy. Facilitating human and machine collaboration, focusing on talent development. This shift represents a fundamental change from task management to nurturing teams through rapid change. Leaders who adapt to this model will be crucial in balancing technological advancements with human needs in the age of AI. I loved how that was written, and so you know I'm like yay, AI, it has been really, really cool with some of the things that I've played with, and I'm just a baby beginner and we're still all humans. We still need that human touch, that human empathy, that connectedness, that belonging, that sense of care, and so I think again, leaders that can help their teams navigate this balance of humanness and technology in a very complicated world.
Speaker 1:Yes, that is really well put.
Speaker 1:I love that too, I remember you reading it last week or a couple of weeks ago to me and yeah, there's a lot there, but it really centers human relationship and that the human relationship is primary, that the AI human-AI relationship is secondary and its job, if you will, its role, is focusing on tasks, jobs, getting certain things done, and that the humans then potentially could be freed for different kinds of activities related to whatever the organization, whatever the company is doing. I read a really interesting article by Seth Godin this morning in HBR about strategy. It's a really nice short piece on strategy and what makes strategy different from task planning and that sort of thing, and that could be something that more humans could focus on and really develop for their companies, leaving then the AI to do other things.
Speaker 1:And some would argue that AI could be really good at doing strategy, for example, even though Seth Godin says in his article that it takes empathy to create a really good strategy that looks out into the future and answers the question what will people need and want?
Speaker 1:You have to be able to put yourself into the shoes of humans in the future, which is an amazing empathy activity.
Speaker 1:So I don't know about you, but these questions are sort of orbiting around my head. It feels like, and I think there's a lot that we need to learn, a lot we need to think about and talk about, and I think what you've said more than once in this conversation is that the conversations that we have about this we need to have. We can't shy away from them, whether it's a team, the entire organization, what we want our culture to look and feel like as we live in it and we bring AI into it and deal with AI. How's that going to be in our organizational culture? How's that going to be in our communities? How's that going to be in our politics? How's it going to be in other areas of our lives. So that, I think, is one of the things that I'm taking from our conversation, is the need for honest, authentic conversation that takes these questions into account, that doesn't shy away from them and really tries to address them.
Speaker 2:And, as we entered into this conversation, fully disclosing, like we don't have the answers we're not saying that this is what you should or shouldn't do but the value of being in a conversation, even though we don't have a particular outcome. But we know that the topic is important to put on the table and we know that this is something that leaders really need to be able to do. That it's not about having the answers of how is AI going to fit into this team or this organization or our culture, or rather, how is it. Let's talk about it. I'm not here with the answers, but collectively let's decide what that looks like, and then we might bump into something and go, oh, let's think of that part, or we're now in a conundrum. How do we manage this together? I think those are like really amazing, powerful and courageous things to do.
Speaker 1:And it kind of perversely, it strikes me that we can also ask that question of an AI. How should we as a team here are some parameters how should we as a team deal with and use AI productively?
Speaker 2:I wonder what it would say. I know and maybe that helps form a conversation Exactly.
Speaker 1:Yeah, I was almost tempted to, while you were talking, go look open ChatGPT and ask it.
Speaker 2:But yeah.
Speaker 1:I think we are rapidly moving into a world that those of us, well, of a certain age may not necessarily be well-equipped for, but we have to deal with it anyway because it's coming. But we have to deal with it anyway because it's coming and the best that we can do is have courageous, honest, authentic conversations and make choices based on those conversations that we believe are the best choices we can make together with each other.
Speaker 2:Yeah, I love that. I think that there's maybe a follow-up conversation here. Once we go on to chat GPT and see what it says, Maybe there'll be a sub post to be like oh my gosh, this is interesting.
Speaker 1:Yes, we can put it in the show notes. Here's what chat GPT says We'll put it in the show notes.
Speaker 2:There we go, there we go. I really appreciate being able to jump into a conversation and a topic that I don't really know a lot about and to just explore it with you. Like I really thank you for creating space to do that.
Speaker 1:Yeah, I feel like I mean, I know there are a lot of really smart people who are grappling with these questions and who have a lot more experience with it than either of us do, and it's at that stage where a lot of experience isn't much yet. So even people like us who don't have a lot of experience and haven't really been thinking about it deeply for the last four or five or eight years, can still have something to say, a value in the conversation.
Speaker 2:Yeah, and different perspectives and different experiences and different views. We welcome those because that's collectively how we learn. Yeah, well, thank you for this conversation.
Speaker 1:Thank you. Thank you very much.
Speaker 2:On behalf of both Charles and myself, we want to say a big thank you to our producer and sound editor, chad Penner. Hilary Rideout of Inside Out Branding, who does our promotion, our amazing graphics and marketing for us, and our theme music was composed by Jonas Smith. If you have any questions or comments for us about the podcast, if you have a trust-related situation that you'd like us to take up in one of our episodes, we'd love to hear from you at trust, at trustonpurposeorg.
Speaker 1:And we'd also like to thank you, our listeners. Take care and keep building trust on purpose Until next time.
Speaker 2:Until next time.