
Shifts in Trust Toward AI?

Why People Are Starting to Believe in AI – Sometimes More Than Experts

By Tommy Cooke, fueled by coffee and curiosity

May 2, 2025

Key Points:


1. Trust in AI is increasingly shaped by clarity, tone, and perceived neutrality—not just accuracy


2. Employees, patients, and citizens are beginning to favor AI in roles traditionally marked by interpersonal subjectivity and gatekeeping


3. Business leaders must design AI systems with human perception in mind, treating trust as both a design feature and a leadership responsibility


It wasn’t all that long ago that most conversations about generative AI hinged on a single question: is it trustworthy?


There is a virtually endless list of stories from around the globe about hallucination: when generative AI tools simply make up information and convincingly present it as true. While the reported incidence of hallucinations is declining significantly, studies from as recently as two years ago reported error rates ranging from 27 percent to a staggering 46 percent, and the consequences remain significant. There are numerous instances of lawyers being fined and even suspended for relying in court on fake cases created by AI.


The consequences, of course, are not limited to the law. A Norwegian man recently asked ChatGPT to describe himself. He prompted it with, "Who is Arve Hjalmar Holmen?" to which it responded:


"Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged seven and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020."


Of course, this never happened. Holmen filed a complaint for many reasons; the broader implications of an AI spreading inflammatory and wholly inaccurate misinformation should concern anyone.


These stories serve as a reminder that even though hallucinations are on the decline, it is incumbent upon us to always fact-check our work. I have been fortunate enough throughout my career to have the support of numerous teaching and research assistants, all of whom I trusted—but I always double-checked their work.


Creating, sharing, and disseminating knowledge, scientific or otherwise, deserves our scrutiny. Indeed, this is why it is useful to rely on peer-reviewed publications: the reviewing panel functions as a second pair of eyes on the draft, confirming that it is accurate, appropriate in the circumstances, and trustworthy.


It's not that we cannot trust AI; it's that we need to learn how to work with it effectively. And there is merit and emerging value in doing so, because AI has the ability to truly surprise us. In fact, quite a shift is underway. New studies suggest that in certain settings, people aren't just trusting generative AI; they're trusting it as much as, or in some cases even more than, the humans it was meant to support.


To be clear, this is not because AI has become infallible. Rather, it’s because the bar for trust is moving. It turns out that clarity, tone, and perceived impartiality often carry more weight than a human voice.


Let’s explore three emerging examples of this shift in trust and what we can glean as 'lessons learned' for organizational leaders.


AI versus Lawyers


A recent study featured in The Conversation tested how people react to legal advice when they know the source is an AI. Participants were shown two sets of legal guidance—one from a lawyer, the other from ChatGPT—and were then asked to rate them.


The results were unexpected: ChatGPT's responses were seen as more helpful, informative, and understandable. This preference held even when people were explicitly told that one answer came from a machine.


The researchers concluded that people weren't favouring AI out of ignorance. They favoured it because of how it spoke. AI-generated advice was clearer, more empathetic, and easier to follow. Trust was formed not through credentials, but through communication, translation, and delivery.


Does this mean that people will stop trusting lawyers? Absolutely not. The study was very clear that over-trusting generative AI carries an abundance of risks, and that managing those risks still requires the baseline AI literacy any user ought to have in order to recognize and handle them on their own.


Nonetheless, the study demonstrates that AI plays a significant role in creating access to knowledge.


AI versus Doctors


In healthcare, the stakes are even higher. And yet, similar trends are emerging. A recent study led by Innerbody Research surveyed 1,000 participants. The goal was to evaluate levels of comfort with AI, as well as robots and nanotechnology, being used in healthcare. Surprisingly, 64 percent of participants indicated that they would trust a diagnosis made by AI over that of a human doctor. Respondents tended to be most comfortable with AI used in medical imaging.


In fact, a separate 2022 study demonstrated that radiologists using AI were more successful in diagnosing breast cancer than those working without it, which in turn builds on a 2018 study revealing that deep learning could outperform dermatologists in identifying melanoma from dermoscopic images.


Innerbody's study further found that four out of five Gen Z respondents said they would trust AI over a physician. Meanwhile, 78 percent of respondents overall, regardless of age, felt comfortable with AI creating a personalized treatment plan for them.


What does this mean? It does not mean that doctors are not trustworthy. It does mean that doctors who use AI benefit from the additional accuracy provided by a hyper-focused tool.


To be clear, the spirit of the study is not to implicitly argue for the removal or even the displacement of medical doctors. Rather, it’s demonstrating that people are trusting of doctors who turn to AI as an aid.


Why? It's not so much about explainability here. Doctors remain in the loop because it is not the patient using medical AI themselves. Rather, the AI supplements the doctor's ability to accurately diagnose problems and assists in structuring a treatment plan.


After all, it is not news that AI can be more accurate than humans. In one widely reported case, a boy saw 17 doctors over three years to address chronic pain before ChatGPT found the diagnosis. AI is becoming well known for its ability to accurately and reliably diagnose medical issues.


AI versus Human Resources


In a 2024 survey shared by industry analyst Josh Bersin, 54 percent of 884 respondents said that they trust AI more than the HR professionals in their own organization. Additionally, 65 percent of those respondents were confident that AI used in HR would be applied fairly.


HR professionals are just that—professionals of human resources. They are trained to deal with humans. So, how do we make sense of these findings?


A major finding suggests that the impartiality of AI judgement provides significant value, particularly when evaluating performance. The study suggests that there is distrust in the ability of managers to make unbiased decisions when assessing an employee's work.


If AI is asked to judge the outcomes of a year's worth of projects, the report finds that it is more trusted to avoid bias on the basis of race, gender, and age. Of course, AI bias is always a concern. However, the crux of the matter is an already existing distrust among employees who expect their managers to make unbiased assessments: 25 percent of respondents believe that their performance reviews were negatively affected by their supervisor's personal biases.


Another interesting finding of the survey is that respondents trust AI to be more reliable in shaping and structuring their growth and career development. In fact, 64 percent of respondents indicated that they prefer AI-generated performance goals. The study suggests that there appears to be a preference for AI that can tailor performance goals based on not only individual performance but also company goals and industry benchmarks.


As Josh Bersin perhaps put it best, “it’s not an indictment of HR. It shows that we don’t trust managers.” Managers, like all humans, are prone to making mistakes about hiring, pay, promotions, performance, and so on. Employees are well aware of this, hence the yearning for the impartial assessment criteria AI can provide.


It also means that trust in AI for HR is increasing. As in the medical case, accuracy matters. People are open to automated tools analyzing data to drive decision-making about their employment trajectory and status precisely because those tools augment, and in some cases may even supplant, human judgement.


What Should Leaders Do About Shifting Trust in AI?


These examples and the studies behind them are not really about AI or technology. Rather, they are about human trust and perception. Trust in AI is not a one-dimensional question of truth versus falsehood. Instead, it’s about the role AI and technology play in potentially mitigating social, emotional, and economic complexity.


Here are three things we can learn from these studies as organizational leaders:


Don't audit for accuracy alone. Audit for tone, clarity, and emotional impact. ChatGPT was preferred over a lawyer not because it was more correct, but because it was more understandable. Patients liked AI because they believed it would drive accuracy. Employees prefer AI because it can set aside human bias. Clarity, warmth, tone, and accuracy are emerging as central to trust. Leaders need to expand evaluation frameworks to include how AI makes people feel.


Treat trust as a design problem, not just a technical one. What AI outputs directly impacts how AI is perceived. Simple structuring, use of plain language, and consistent tone will make even complex ideas more relatable, understandable, and therefore trustworthy. Treat every AI interaction like a product interface. Get communicators and designers involved early—and often.


Train people to engage with AI, not just get through it. As AI earns more trust, particularly in high-stakes contexts like law, medicine, and employment, there is of course a risk of over-reliance. That's why digital literacy must include how to interpret, question, and even push back on AI output. It's not enough to teach users how to use a tool. They need to know how to interact with it.


In the end, trust is less about the source and more about feeling and experience. That should both excite and caution us as we bring AI deeper into our organizations.
