Trust on Purpose

What can we trust AI with and what boundaries do we need?

Charles Feltman and Ila Edgar

Send us a message - we'd love to hear from you

What if moving faster with AI didn't mean giving up your judgment?

We sit down with Lindsay Semas, an AI leader who's spent years building trustworthy technology at scale, to explore the gap between what AI can do and what we should let it do - from drafting emails to shaping customer strategy.

We start with the quick wins: smarter research, better first drafts, fewer browser tabs. Then we get into the harder stuff: how to earn trust in AI outputs by asking for sources, questioning what's missing, and matching your scrutiny to the stakes. Lindsay introduces a sliding-scale approach: lean on AI for synthesis, but keep humans in the loop when outcomes touch customers, compliance, or your reputation.

The real heart of the conversation: Governance. Lindsay walks us through how her company built a cross-functional trust council, complete with checklists, accountability structures, and clear guidelines on when AI decisions need human oversight. We also tackle the anxiety around job displacement and the pace of change.

Whether you're leading a team or just trying to use these tools better, this one's for you.

Subscribe, share with a colleague, and let us know what resonates.

We want to thank the team that continues to support us in producing, editing, and sharing our work. Jonah Smith for the heartfelt intro music you hear at the beginning of each podcast. We LOVE it. Hillary Rideout for writing descriptions, designing covers, and helping us share our work on social media. Chad Penner for his superpower editing work, taking our recordings from bumpy and glitchy to smooth, easy-to-listen-to episodes for you to enjoy. From our hearts, we are so thankful for this team and the support they provide us.

SPEAKER_02:

Hello, my name is Charles Feltman.

SPEAKER_01:

My name is Ila Edgar, and we're here for another episode of Trust on Purpose. And who's with us here today, Charles?

SPEAKER_02:

Lindsay Semas, who is an expert in AI in many ways. We'll let her introduce herself in a second, but she has been thinking about and deeply involved in AI for many years now: how it's used, how people can trust it, and in some ways where they should not trust it. So, Lindsay, if you would, give us a little of your background and where you're coming from in this conversation.

SPEAKER_00:

Great. Well, thank you for the opportunity. Anytime I can weave trust into a conversation, it brings me a lot of joy, so I'm happy to be with you both talking about this. For the last decade or so, I've been working in the AI field. My career has taken me through various tech companies in varying roles, but traditionally sitting at the intersection of operations and strategy: helping businesses grow, helping them determine their course and their go-to-market plans, and really how to bring their business objectives to life and execute on them. More recently, I've done my own training on the topic of trust to enhance my leadership skill set. And that's where I put, and continue to put, a lot of thought, because of the amount of attention that's been on AI and how it's grown. I find the conversation fascinating when you can talk about artificial intelligence and all that it can do for us as people and for the business world, but then also think about the dimensions trust plays in leveraging AI tools, in how AI tools are created and governed, in how people rely on them in their personal lives, and then, at an enterprise level, how businesses and governments rely on them: the things that are obvious, some low-hanging fruit and quick wins, but then also where things get meatier and more complex, and where we have to be really smart about how we develop these tools and leverage them. And beyond that, what do we not even know yet? What do we need to be mindful of today that's beyond our reach or beyond our thought processes, where we've got to take careful steps? So that's a bit about me, a bit about what I do, and why I think it's going to be fun to have this conversation.

SPEAKER_02:

So let's start with that: what is AI doing for us now? What are the benefits we're seeing from it?

SPEAKER_00:

Well, I feel like if you're not in some way, shape, or form referencing AI or using AI, it's a faux pas, like you're not one of the cool kids. Everyone's trying to say, oh, I used AI for this, or I used ChatGPT for this, or at work, oh, we just put in a new AI agent that's doing this for our business and driving efficiency, or what have you. So it's definitely the hot, buzzy thing. Where I think there's obvious low-hanging fruit, first and foremost, is in our day-to-day lives. There's immense opportunity for using artificial intelligence, whether it's taking care of your grocery list, or, I think of my children, doing homework tasks with them, giving ChatGPT some prompts and having it give stuff back that just gets you farther faster, rewriting emails. Little things like that are no-brainers. Or event planning: I'm going to be going into Boston soon with a friend, and it was, give me some ideas of things to do that would be open at this time. Instead of clicking on 15 different websites, it's really just giving me a recommended itinerary. So I think there are a lot of easy wins there. And then the next step is that there are applications now that take it that next step farther, where it's not just putting in prompts, but an application is doing whatever the specific thing is you're looking for, and maybe at a more sophisticated level. So that to me is the low-hanging fruit in our personal lives. And I think there's low-hanging fruit in the business world too, but that's where it starts to get a little bit more complex.

SPEAKER_02:

Interestingly, I just read an article from McKinsey (I get their newsletter), and this one happens to be about how AI agents are now upending, in some ways, how people go about looking for the best deals on something. Because you can ask your AI agent to go out and compare for you. It's really hard to compare, for example, insurance policies; it could take me forever with a spreadsheet I might try to create for myself to even come to some vague comparison. But I could ask an AI to go out and, given my conditions, find me the best option. And one of the things, of course, in the context of trust is: do I trust that that's actually going to happen, that this is going to be the best for me? But assuming that I do, now I can easily compare apples and oranges and grapefruits and rubber balls, and see what's going to be the best, which is great for me as a consumer. I can also see how it's going to upend, in some ways, how businesses have obscured a lot of information in order to manipulate their own pricing in ways that we as consumers haven't been able to disentangle very well.

SPEAKER_00:

Yeah, and in that example, the output that comes back when you give that prompt for an insurance policy comparison is going to be based on what's publicly available. So if there's a top insurer that would have had the best pricing for you, given the specifications you put in, that information may not have been accessible. So it's always really important, coming back to trust and the reliability of what you're getting back, to take the time to ask yourself: okay, what was the source for this information? And do I think it was a complete set of information or data that it worked from? In some cases it may be good enough, but if there's a concern that the information might not have been part of the data set, that's where human discretion needs to come in. How I think about it is: if I can leverage AI in any capacity of my life to get me farther, faster, then I'm going to pursue it, but not recklessly, not without the review, the 10 or 20% extra. If it can do 80% of the work and I can do 20, that's a win, or even 50% of the work and I do the other half, that to me is a win. But the mindset that we can just blindly trust it, that I think is very risky, even in low-stakes situations, because it can be misleading, even if not intentionally. People, consumers, individuals just need to know that. And then you bring that into the business world and it just gets amplified.
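Lindsay's sliding-scale heuristic here, letting the tool do the bulk of the work while scaling your own review to the stakes, can be sketched in code. This is a minimal Python illustration only; the stake levels, review shares, and function names are hypothetical inventions for the sketch, not anything from a tool she describes:

```python
# Hypothetical sketch of the sliding-scale idea: the higher the stakes
# of a task, the larger the share of human review it should get.

from enum import Enum

class Stakes(Enum):
    LOW = 1       # grocery lists, itineraries
    MEDIUM = 2    # research synthesis, first drafts
    HIGH = 3      # customer-facing, compliance, medical

# Illustrative review fractions: AI can carry ~80% of low-stakes work,
# but high-stakes outputs keep a human in the loop for most of it.
HUMAN_REVIEW_SHARE = {
    Stakes.LOW: 0.2,
    Stakes.MEDIUM: 0.5,
    Stakes.HIGH: 0.8,
}

def review_plan(task: str, stakes: Stakes) -> str:
    """Describe how much human scrutiny a task should get before it ships."""
    share = HUMAN_REVIEW_SHARE[stakes]
    return (f"{task}: let AI draft, then a human reviews "
            f"~{share:.0%} of the output before it ships.")

if __name__ == "__main__":
    print(review_plan("Compare insurance quotes", Stakes.HIGH))
    print(review_plan("Plan a Boston itinerary", Stakes.LOW))
```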

SPEAKER_02:

As a consumer who might use AI in that way to compare different things, how am I to know what to keep in mind, what questions to ask myself, and also ask the AI, that will get me closer to, say, 70% rather than 50% or 40% or 30%?

SPEAKER_00:

Yeah.

SPEAKER_02:

How do I find those? Yeah, go ahead.

SPEAKER_00:

A lot of it comes down to the skill set of prompting, knowing how to ask the right questions, which I think is really just communication. A lot of it also comes down to trusting yourself: looking at the output and deciphering whether it seems accurate. And asking for sources. Certain AI tools will specifically give you the source so you can drill in more. If it gives you the source and you click on it and it's Wikipedia, and you think, wait, that's not the reliable source I was looking for, then there's your answer. Generally speaking, if you're intending for something to be extremely accurate or fact-driven, that's your calibration of how much you can trust the output. So if you're shopping insurance policies, and the references in all those cases were the actual insurers' websites, or certain third parties that do the assessments, that's great.
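That source-checking habit can be sketched too. A toy Python illustration; the domain names and the "trusted" list are invented for the example, and real calibration is a judgment call rather than a lookup:

```python
# Hypothetical sketch of vetting the sources an AI tool cites:
# flag any cited URL that doesn't meet your bar for this question.

from urllib.parse import urlparse

# Example calibration for a fact-driven question: only primary or
# vetted third-party sites count as reliable. All names are made up.
TRUSTED_DOMAINS = {"insurer-a.com", "insurer-b.com", "ratings-bureau.org"}

def vet_sources(cited_urls: list[str]) -> dict[str, bool]:
    """Return each cited URL with a True/False 'trusted' flag."""
    return {
        url: urlparse(url).netloc.removeprefix("www.") in TRUSTED_DOMAINS
        for url in cited_urls
    }

if __name__ == "__main__":
    results = vet_sources([
        "https://www.insurer-a.com/quotes",
        "https://en.wikipedia.org/wiki/Insurance",
    ])
    for url, ok in results.items():
        print(("trusted  " if ok else "re-check ") + url)
```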

SPEAKER_02:

So that's helpful. And I'm thinking that kind of information needs to be available to consumers. Do you know if anybody is doing that?

SPEAKER_00:

Well, that's where you get into the conversation around AI governance: what are the laws around AI, and what is the responsibility of whoever is providing the AI tool to give consumers and users information on the data sources? Are these open data sources, as in the case of ChatGPT, which mines openly available data, or, as in the business I work in, are they more closed environments and closed data sets that are specific to customers? And that's on purpose. That is their data set; it is not readily available, it is their data and their data only, and it is specifically governed. There are legal rules around how that data can and cannot be used. But you can imagine that in any situation where data governance or legal compliance needs to be adhered to, and especially at a more visible level, the stakes are higher, to put it simply. There are a lot more hoops, a lot more red tape, a lot more involved. So when you think about using it in your day-to-day: how readily adopted would ChatGPT be if there were that much red tape every time you hit enter, and everyone had to click through 19 disclaimers and approve every single bit of information it was using? It wouldn't be, right? So knowing the data, knowing how it's regulated, how it's restricted or not, is a really important thing to be educated on in general, whenever you're using a tool.

SPEAKER_01:

I want to rewind a little further back in our conversation, to when you were talking about the skill set of prompting and asking the right questions. Even where we are now, there's an ownership, a competency, that needs to be built as a user. I can't just dive into AI blindly, willy-nilly, without being accountable and learning how to use it, right? We don't get a driver's license without practice. Hours and hours and hours of practice. We don't learn a skill set like this by just jumping in and figuring it out. So there's also an integrity and a trust piece, which is our accountability. Where does that integrity fall? Not just in our personal lives, though I think it should there too, but definitely when this starts to filter into how we use it in our work lives.

SPEAKER_00:

Yeah, that's where I think the human dimension of artificial intelligence really comes in. It's really easy, when artificial intelligence is being discussed, for the bulk of the conversation to be about the technology. But the technology, to me, is almost the simpler part. The harder part is the human part: like you mentioned, the integrity, the practice of using it ethically, building AI models ethically and responsibly, governing the data sets and the outputs appropriately. That, to me, is the really complex part, the human part. And I don't think we're there yet. The pace at which this technology has shown up in our lives has gone much, much faster than the pace at which we've been able to build it, use it, manage it, and regulate it appropriately.

SPEAKER_02:

How do we as average humans in the world trust that that's ever going to be addressed if we're already quite far behind? We're already lagging behind the technology and its advancement.

SPEAKER_00:

Yeah. It's a tricky one. I don't think there's an easy, simple answer. My personal view is that a lot of responsibility has been pushed down onto individuals, onto the consumer. It's a similar model to social media: the responsibility has been pushed out from the social media companies to individuals. You clicked the disclaimer; you kind of knew this feed was going to come at you this way, and if it's harmful to you, that's on you. It's similar in concept here: if you're using these tools, it's expected that you're aware of what you're doing, what you're using, and for what purpose, and that onus is on you. So it's hard. It is very, very hard to feel that you can trust the output, that you can trust that your prompting is accurate. Now, of course, if we're doing things like grocery-list efficiency, the stakes are low, right? But if you're doing things in the business world, or things that have higher personal consequences to you, or in, say, the medical field, the checks and balances are more important, and it's difficult to get those assurances on the surface.

SPEAKER_01:

Well, you've got to, and in a space that's changing every single day. Every day. So how do you possibly stay on top of that, even if you're well intentioned? Because it's changing every day.

SPEAKER_00:

Yeah. This is a constant conversation in the businesses I've been in, and especially in the focus of the past, say, 18 months: the technology and AI space is changing so rapidly. How do you even keep up? How do you position your business or what you're selling? How do you coach your teams on how to sell or how to grow a business with this much change? And, coming back to trust, how do you gain solid footing, build trust, expand trust, and even feel comfortable assessing how you feel about it when there's so much change? Very, very difficult. For me personally, and with the teams I work with, it comes down to the principles we anchor on, which then lead the actions I take to assess the tool, or the outputs, or whatever we're leveraging. In business, it comes down to the company's vision, mission, and values, how those drive the team, and using that as our framework to assess what we're doing and continue to grow trust. I know it sounds idealistic at times, but I think you need things like that to lean on when there is so much change and there is so much.

SPEAKER_02:

It strikes me, as I've been listening to you, that the definition Ila and I use for trust, and I think you do as well much of the time, Lindsay, is that trusting is choosing to make something you value vulnerable to another person's actions. In this case, the other is not necessarily a person. It's now an artificial intelligence, one that's been created by humans, for now, anyway. And so there is a certain amount of risk assessment that we have to do: we're taking this risk, making something we value vulnerable. So how do we assess that risk? Part of the conversation has been about the risk side; there are so many potential downsides, or at least that's where the conversation around it goes, but also all these upsides. How do we go about making that risk assessment? Let's say I'm the leader of a team in a company, and my team is tasked with, I don't know, opening a new market with some product that we have. And we can use AI to help us with that process.

SPEAKER_00:

With the assessments, and maybe the strategy, some of the business plan recommendations.

SPEAKER_02:

What I value in this case is getting that done, and done well, right? Because it's going to hit my livelihood in some way, my reputation, all those things. So how do I think about that? How do I go about deciding just how far I want to push into the AI space with those questions? Any thoughts on that?

SPEAKER_00:

Yeah, definitely. In your example of moving into a new market, there's an immense amount of research and planning that needs to be done. And when I think of research and planning, and of tools that can help me do that more efficiently: whatever the output of that research and planning is, regardless of whether I'm using an AI tool or not, I'm not just going to take it and run with it. Especially if it's something critical to my business, like moving into a new market, which is likely driven by the desire to grow, or by a known opportunity agreed upon by leadership, I'm going to want to be thoughtful. So I'm going to collect as much of that research and knowledge as I can and put it in the incubator with the team that's ultimately going to make the decision. For me, it goes back to sticking to the frameworks that I know have made me and my team successful in the past, and using tools in the places where they can get me farther faster. So in the space of research and, say, initial strategy building, it makes a lot of sense, because it can get you farther faster, and you can curate it, put it in the incubator, and get your output. Now go a step farther: say you've built that plan, you're in that market, and you're executing. Regardless of what your business does, you have a touch point to your customers, whatever service or product you're producing. Suppose, as an efficiency play, you have that delivered completely through an AI-automated workflow or agent tool, without having been part of its design or testing, without having gone through the QA process to know what that experience is like for a customer. If you just go and do that, it could be positive, but if it's negative and you haven't put in the time to test and validate it and align it with your business objectives and values, you're likely not going to have success. There's a higher risk and a higher chance that your touch point with your customers starts off on the wrong foot. Right. So that would be an area where I would not throttle up on the tech; I would scale that part back. I might have the touch point with the customer stay with human agents, with the customer service people, or with the doctors or nurses in the field, depending on your industry, and then, if the back-end systems can be automated or things can happen faster, take care of that part. So I almost envision it as a sliding scale: depending on the importance of something, that's where I would move along the spectrum of leveraging a tool or not, leveraging AI to solve the problem. There's also, at the company I'm currently working with, an internal trust council we created.
And what we've done is taken the brightest brains we have, across IT, technology, engineering, and business operations, and put them together as a council and a resource for the clients and customers we're supporting. We create tools like checklists for our prospects and customers that say: hey, if you're using AI, think of these things. What is your data source? Is it safe? Is it secure? Do you have control over it? What tool are you using? Were you part of the design of that tool? Are you part of the QA of that tool? What control do you have over the output? What is the importance of the output to your business outcomes? Really simple things like that can spark the thought that's needed to decide: what's the level of importance, what's the level of risk, does this compromise our values or our objectives? It's all of that risk assessment work that's needed in these situations, because when you're using technology, there is a component of putting it out there to the world, putting it into the hands of the technology. So it's important.
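The checklist Lindsay describes lends itself to a simple structure. Here is a toy encoding in Python; the questions paraphrase her list, while the function, the flagging logic, and the framing are hypothetical illustration, not her company's actual tool:

```python
# Illustrative encoding of the trust-council checklist described above.
# The questions paraphrase the transcript; everything else is invented.

CHECKLIST = [
    "What is your data source? Is it safe, secure, and under your control?",
    "What tool are you using, and were you part of its design?",
    "Are you part of the QA of that tool?",
    "What control do you have over the output?",
    "How important is the output to your business outcomes?",
    "Does using it compromise your values or objectives?",
]

def assess(answers: dict[str, bool]) -> str:
    """Flag any unanswered or 'no' items for human follow-up."""
    flagged = [q for q in CHECKLIST if not answers.get(q, False)]
    if not flagged:
        return "All items addressed; proceed with normal oversight."
    return "Needs human review before deployment:\n- " + "\n- ".join(flagged)

if __name__ == "__main__":
    # Only the first two items have been satisfied; the rest get flagged.
    print(assess({CHECKLIST[0]: True, CHECKLIST[1]: True}))
```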

SPEAKER_02:

I mean, that's sort of what I was talking about before: how do people make those assessments? Even simple checklists like you're describing are really useful.

SPEAKER_01:

Can I just say: the fact that you've created a trust council in your organization? Amazing. In such a messy and complicated area, right? Again, AI is changing and adapting so fast. So I want to applaud that. I'm just so, so jazzed about that. And wouldn't it also be cool if we had trust councils in every organization? For trust, period. Right? For making decisions that align with our values, with our integrity. There's an organization here, I'm just wrapping up some trust work with them, where the T in their name stands for trust. They wanted to operationalize trust in their organization, so we spent a year doing that. I want to call the client right now and say I think the next step is to form a trust council, so that this work really, really integrates and lives and breathes through your organization.

SPEAKER_00:

Yeah, woven into the fibers of the organization. Yes. And we started it because of our product offering and the need for it within our product offering. But there was a handful of us who also advocated for the leadership of the trust council not just to be those selling the solution, but also the ones internally managing the data around it, because, like I said, of the importance of data integrity and compliance globally. The benchmarks are higher in other parts of the world, even higher than they are in the US. So if you're a global organization, you have to be thinking of standards beyond your own. We also advocated hard, and I'm very happy, that the third prong is the internal part, the organization. It speaks, I think, to a lot of the work I learned in the trust workshop that you lead, Charles. It's taking trust within leadership, but also bringing those exercises and skills into this trust council in a way that gets people talking about trust, breaking down the pieces, framing up conversations. And I have found it extracts everything you need to then build a great solution for somebody, offer a great product, or provide a great service. It's the foundational work. It's amazing to me how many businesses think of it only from the selling arm and don't include the organizational component. To be holistic and authentic, I think you need to hit it from all angles. It has the most power that way.

SPEAKER_02:

That's fantastic to hear about. And I'm with you, Ila: it would be great if most, if not all of you out there thought about creating a trust council within your organization, one that looks in both directions, out and in.

SPEAKER_00:

Right, and that tries to keep those things aligned: how we're showing up internally is consistent with what we're offering and how we're showing up externally. And when those things are out of alignment, we have the common language, the space, even just the vessel, to pull them back in.

SPEAKER_02:

So, circling back to AI, which is part of why you have the trust council in the first place, and to using it and the trust or lack of trust that people have about it: where do you see it going in the future? I think this is where a lot of people are frightened by AI: where is it going to lead? According to a few people I've read, it's entirely possible that AI could end up being perfectly fine without humans. That's one possible future scenario. But there's also the current concern about job loss, AI taking over people's jobs. Will we be able to replace those jobs in other ways, or are we just going to have a whole class of people who don't have jobs, don't have work, and what happens then? So, do you want to talk a little about that from your perspective as an AI user, and as part of a company that uses it and puts it out there?

SPEAKER_00:

Yeah, I definitely think there's the possibility that AI can be autonomous and not need to leverage humans. I'll use the phrase that my business has been using for a long time, and that has caught a lot of attention of late, which is "human in the loop." There is a world, I think, where AI may not need a human in the loop. Right now, what we're seeing is humans augmenting what they do or what they produce with AI. Think of a pie chart: the majority of it is human, with a little bit of AI. I definitely think we could comfortably get to a world where that flips, where the bulk of it is AI and the human is augmenting what the AI does, curating it, or being that final layer of review. Back to your insurance example: the engine does the bulk of the work, and you do the final review and make the ultimate decision, because the real meat of it sits in the analysis and decision-making, not the information-gathering. So I comfortably feel that the pie chart will continue to shift, with AI becoming more the standard and the human less needed, particularly on lower-skill tasks, both personally and professionally. But there's a lot of money to be made in the world of technology and AI, and I don't think businesses that have it within reach will stop if the opportunity exists to continue with full AI solutions. And AI is technology; it's built to get more efficient. That's the whole point. You prompt something once, it learns from that and continuously gets smarter. The more the tool is used, the smarter the tool or platform gets. That's the whole concept. I think of the business I've been in for a while: when we first introduced AI, almost two decades ago, it could handle, call it, 70% of the things we were using it to transact. Now it's over 95%, and that's solely because of volume, because it has learned as it's gone and gotten so refined and precise. That will be the case with everything we use it for. So, and I hate to sound doom and gloom, there's potential fear there: are we going to be in a world where robots take over, where AI controls everything we do, where we put on headsets and it tells us what to do, where to go, what decisions to make, and we're very limited in the process? I think we could get to that place. I hope that we don't. I hope that as humans we recognize the importance of the role we play in the best outcomes. The biggest thing for me is what we lose. Using technology, for the most part, is far more transactional: you lose the spirit, you lose the energy, you lose the emotion that comes with things. So in a world completely over-indexed on technology and under-indexed on the human, I think you lose a lot. And I'm hopeful that as a human population we recognize that and put in regulations and guardrails that allow us, as this technology evolves, to make smart decisions and not get to that overly scary place.

SPEAKER_02:

Yes, and that is overly scary for most of us. It is for me, myself included. And so one of the things that all of us, no matter how deeply or shallowly we're involved with AI, need to do, I think, is support the politicians and policymakers who really do want to do what you're describing. There are some politicians and policymakers who want to just remove all the barriers and let the AI go: feed it what it needs to be fed and let it take off. There are others who want to put in more barriers, more controls, more guardrails, as we've been discussing. And I think one of the things we as humans can do is support the people doing that. I don't know, what's your thought on that?

SPEAKER_00:

Yeah, I think the closest thing we can relate it to is social media. I think there are a lot of people who would agree that different guidelines on social media, earlier on, would probably have been better.

SPEAKER_01:

Yeah.

SPEAKER_00:

And I think this is a similar situation. Let's learn from those experiences, and let's do the hard work now to put in the guardrails and frameworks that keep things in check, keep things organized, keep things within the proper boundaries.

SPEAKER_01:

I think that takes courage from leaders in organizations, where we already feel we're behind the eight ball and don't have time to do that. So, to be courageous and say: yes, and we actually need to do that.

SPEAKER_00:

Yeah, absolutely. The "yes, and." And within that courage there is often cost, and businesses are driven by the dollar. Standing up governance, standing up even a trust council, something that's just internally built if you want to really simplify it: you could take that same model, the concept of an AI trust council in a private or small public company, and it's similar in concept to having legislation that supports guardrails around AI. But it takes intentionality to do that and to recognize its importance, because things can get out of control quickly. I saw a great image, I think it was on LinkedIn, along the lines of "if we only knew in hindsight": the first pillar was from the 50s and it was a cigarette, the next was asbestos in, say, the 80s, and then social media more recently. It stopped at the cell phone, making the point about how much we've learned in hindsight. I think the next pillar would be AI. AI and advanced technology are another example of that.

SPEAKER_02:

Yeah, we are great at hindsight. If we learn from it, yes.

SPEAKER_00:

If we learn from it and we allow it to influence our decisions going forward.

SPEAKER_02:

And like Ila was saying, it takes courage for leaders to do that, to stand up and say, hey, we need to do something here. And like you said, for business leaders there is a cost. A piece of it is figuring in that cost. And what is it? Is their company going to end up behind all the other companies out there because they took a stand?

SPEAKER_00:

Yeah, well, but I also think there's a short-term versus long-term question in it too: what's the short-term loss for the longer-term gain, and vice versa. And that's where, bringing this full circle to earlier parts of the conversation, really staying true to your values, to the principles and the discipline the business or the individual is driven by, is what's going to result in the most authentic and genuine decision-making, strategy, and execution. And that's hard to do. It's very hard to do. It's the hard work.

SPEAKER_01:

And I think for those organizations that are already aligned, not just with their vision and mission, but that actually live their values, this isn't a huge step.

SPEAKER_00:

No, I agree.

SPEAKER_01:

It's not a huge step; it would make sense. I think there's a disconnect in organizations that still, unfortunately, have those nice words on a wall or on their website, but they don't mean anything. Yeah. So that's a bigger leap.

SPEAKER_00:

Well, and they don't. I often find that most organizations don't take the time to break down those values into behaviors. Right. What are the behaviors? What are the tangible things you're looking for from the people who make up this collective group that's all trying to do something? I'm really simplifying it, but what is that? As our world, our days, what hits our brains gets more and more complex and faster paced, we should simplify it down to the basics: these are the values, and this is how they show up in behaviors, this is what we're looking for. If strong communication is one of them, that's great, but my definition of communication could be very different from what it is for each of you. So be clear: we don't have side conversations, we don't use sarcasm, we are direct, we are kind with our words. If you're not sure, you have the right to ask, and that's encouraged. Getting specific goes a really, really long way. And the more complex our world gets, and the more we rely on things like technology that don't have that human experience, the more intentional we have to be about what we're looking for from each other.

SPEAKER_01:

This is an outdated stat, from Brené Brown's Dare to Lead work and all the research she did, but at that point she said less than 10% of organizations actually operationalize their values.

SPEAKER_02:

Less than 10%. I would not be surprised if that number is still pretty darn accurate. Maybe less, yeah.

SPEAKER_00:

Still, or even less. Yeah.

unknown:

Yeah.

SPEAKER_00:

Maybe less, because I'd be curious, with more businesses being remote post-pandemic, how much that's changed. So much of behavior also comes through body language, physical movement, gestures. Totally. And we're looking through screens.

unknown:

Yeah.

SPEAKER_02:

Which is, I mean, I know, Ila, you do a lot of this work with your clients, and I do this kind of work with mine: operationalizing those values, particularly around trust, but also the other values that organizations claim. And I think a lot of our colleagues are doing exactly the same thing. So this has been a really rich conversation. I think I'm going to turn to my AI and say, hey, listen to this recording and tell me what you hear.

SPEAKER_00:

Yeah, that'd be good. We should dump the transcript in and ask: what were the golden nuggets? Did Lindsay say anything wrong? What was incorrect? Fact-check it, stuff like that.

SPEAKER_02:

And I think it's something we can do and have fun with, and also learn from.

SPEAKER_01:

We can.

SPEAKER_02:

But I want to say thank you, Lindsay. It's really been great listening to you. A whole bunch of different ways of thinking about things have gone through my mind as I've listened to you talk about what you're doing, what your trust council is doing, and your thoughts about AI and where it sits in this pie chart of human and machine action in the world. And there's a lot more. I hear it in what you're saying, and I know it from where I've encountered it in other places: there's a lot more that we need to learn.

SPEAKER_00:

There is, and making space for the conversation is, I think, one of the best things we can do.

SPEAKER_01:

And I think even for my little small business here: how have I put guardrails into place, and how do I want to be in integrity around using AI in my work? How can I influence the leaders I work with? This is a great conversation to have. Even if you don't have impact across your whole organization, take it to your team. Use your sphere, right? Have these conversations with your team; start there. Starting with conversations about how we want to use our values to build those guardrails is a first, not easy, but really important step that anybody can take. Yeah.

SPEAKER_00:

Well, and even just brainstorming, like I mentioned, a checklist. Yeah. Brainstorming a checklist and saying: we're all using AI tools in various ways, but we don't have company policy around it yet, so what are the couple of things we know we want to adhere to so that we're not going beyond the boundaries of what our business expects us to do? Even simple things like that, just brainstorming or talking about it, can open up a lot of conversation and thought. That can be very empowering and, frankly, a lot of times exactly what's needed to get things moving and started, and it compounds on its own. Yes, because it's not going away. No, I think it's here.

SPEAKER_02:

No, it's not going away. And just beginning, taking those first steps, even if they're small steps: I totally agree with you that that's where we need to be. And everybody needs to be doing that from whatever vantage point they have. So again, thank you. This has been a wonderful conversation.

SPEAKER_00:

Of course. It's been wonderful for me as well. I look up to both of you and all the work you're doing, more than you know. So thank you. I appreciate you both, and it was an honor to be here and spend some time with you.

SPEAKER_01:

Thank you, thank you.