The Most Honest AI Yet?

Why Admitting Uncertainty Might Be the Next Step in Responsible AI

By Tommy Cooke, powered by questions and curiosity

Jun 20, 2025

Key Points:


  1. MIT’s new AI model signals a deeper shift toward transparency, humility, and trust in AI systems


  2. Business leaders must recognize that unchecked confidence in AI carries serious reputational and legal risk


  3. The real competitive advantage in AI isn’t speed—it’s building cultures and systems that model integrity and caution


A client recently shared his favourite moment from a tech conference he attended a few months ago. He listened to numerous speakers go on about the promises of AI, but he was not convinced. All he was hearing was the run-of-the-mill talking points: AI is great, it saves money, it increases revenue, it revolutionizes business, and so on.


But he was taken aback at the end of the conference. The last speaker took a radically different approach and called out the purple elephant hiding in the corner of the room: hallucinations and dishonesty.


The speaker took serious issue with the fact that AI fails to admit when it’s wrong and said something to the effect of: “Artificial Intelligence, as it were, has some work to do in terms of becoming Honest Intelligence.” The reflection was prompted by a question:


Will we ever get AI that simply admits when it doesn’t know the answer?


Good news for my client, and for the rest of us just like him. We might be closer than we think. Researchers from MIT and its Uncertainty Labs recently revealed an AI model that recognizes when it doesn’t know the answer… and says so. Yes. You read that correctly: an AI that admits confusion.


At first glance, this might seem like a modest or even trivial update in a field known for hype, bold claims, and massive ambition. But this humble innovation signals something bigger. If we squint, we can see the outlines of a broader trend emerging here: after a gold rush of AI development characterized by confidence and speed, the market has been quietly shifting, as I unpacked recently, toward maturity, safety, and accountability.


Companies, regulators, and researchers are all starting to recognize that the future of AI is not only about performance—it is about trust. MIT’s “humble AI” may be one of the clearest signs yet of where we’re headed.


From Bold AI to Measured AI: What a Shift in Tone Reveals About Where AI is Heading


AI tools are designed to sound confident, but they often sound more confident than they should. When they hallucinate, they do so with conviction. That’s part of what makes them so compelling. It’s also what makes them so dangerous. They often cross the line from helping users to misleading them, whether through invented citations or persuasive but false narratives.


In this context, the MIT team’s breakthrough stands out. Their new system doesn’t merely generate content: it calculates a “belief score” indicating how confident it is in each of its answers. It can express uncertainty in natural language and even abstain from answering altogether.
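

To make the idea concrete, here is a minimal, purely illustrative sketch of how a belief score with abstention could work, using agreement across repeated samples as a crude stand-in for confidence. The thresholds and function names are assumptions for this example, not the MIT system’s actual design.

```python
# Illustrative sketch only: not the MIT system's code or API.
# Idea: sample the model several times, measure how much the answers
# agree, and use that agreement as a rough "belief score". Low belief
# triggers hedged language or an outright abstention.

from collections import Counter

ABSTAIN_BELOW = 0.4   # assumed threshold: refuse to answer
HEDGE_BELOW = 0.75    # assumed threshold: answer, but flag uncertainty

def answer_with_belief(sampled_answers: list) -> str:
    """Turn repeated model samples into an answer plus a belief score."""
    counts = Counter(a.strip().lower() for a in sampled_answers)
    best, votes = counts.most_common(1)[0]
    belief = votes / len(sampled_answers)  # agreement-based confidence

    if belief < ABSTAIN_BELOW:
        return f"I don't know. (belief score: {belief:.2f})"
    if belief < HEDGE_BELOW:
        return f"Possibly '{best}', but I'm not confident. (belief score: {belief:.2f})"
    return f"{best} (belief score: {belief:.2f})"

# Five samples that mostly agree -> a confident answer.
print(answer_with_belief(["1969", "1969", "1969", "1969", "1968"]))
# Five samples that all disagree -> the system admits it doesn't know.
print(answer_with_belief(["1912", "1915", "1907", "1921", "1918"]))
```

Real systems derive belief scores in more principled ways, for example from calibrated probabilities or ensembles, but the behavioural contract is the same: below some level of confidence, the honest answer is no answer at all.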


This is not just a technical improvement. It’s a philosophical one, and one that matters to you as a business leader. Why? It signals a new future of AI: one where the goal is not omniscience, but reliability and accuracy. One where companies don’t ask, “How fast can we scale this?” but instead, “Should we scale this at all?” Or, “Is this particular model the best investment?”


This tonal shift, from bold to measured, is worth noticing as well. It mirrors the shift we’re seeing in corporate strategy, regulation, and public sentiment: leaders are realizing that the only sustainable AI is the kind that knows its own limits.


What Organizations Should Learn from Honest Artificial Intelligence


When AI demonstrates the value of knowing what it doesn’t know, it holds up a mirror to our own blind spots. Companies that adopt AI without building in mechanisms to identify uncertainty are asking for trouble. They’re putting tools in front of employees and customers that may sound confident while being catastrophically wrong. And there are countless examples:


Air Canada ended up in court after its AI-assisted chatbot gave incorrect advice about securing a bereavement fare

 

The Dutch government, including its Prime Minister, resigned after an investigation found that roughly 20,000 families had been wrongly accused of fraud by a discriminatory AI system the government had endorsed


A Norwegian man sued OpenAI after ChatGPT told him he had killed two of his sons and had been jailed for 21 years

 

In 2021, the fertility-tracking app Flo Health settled a complaint brought by the U.S. Federal Trade Commission after it was caught sharing private health data with Facebook and Google

 

I’d be remiss not to mention the near-endless instances of AI deepfakes and the eruption of chaos they’ve caused, from simulating the voices of political leaders and creating fake sports news conferences to the infamous story of a finance worker at a multinational firm who was tricked into paying out $25 million to fraudsters using deepfake technology


To the business leaders reading this, it’s no longer enough to rely on disclaimers or internal policies that live in the fine print. That’s only half of the battle. As AI integrates into more visible, consequential workflows, business leaders will need to model transparency at the product level.


That’s precisely what the MIT model does. Its interface shows that uncertainty is real and accounted for. While this may seem like a setback or extra work, it could also be seen as a leadership opportunity.


Why Business Leaders Ought to Normalize Uncertainty in AI


I truly believe that being a business leader is less about having the answers than it is about asking the right questions. After over a decade of lecturing in university classrooms, I often found that encouraging my students to ask hard questions not only positioned them to work collaboratively to find compelling answers, but also conditioned them to be comfortable with discomfort and uncertainty.


As a society, I think that we far too often associate uncertainty with weakness, risk, and indecision. But in complex systems, uncertainty is not the enemy. It is a fact of life. Engaging with it demonstrates maturity and leadership. Conversely, avoiding it makes subordinates nervous about getting it wrong.


What MIT’s new model does is provide a practical blueprint for building uncertainty into the system architecture of AI tools. It’s a lesson worth internalizing and mobilizing into an attitude or ‘posture’ for an organization.


To work competently and confidently with AI, it’s crucial to foster a culture that trains employees to recognize the limits of AI outputs—not to ignore them or be afraid of them.


The Coming Competitive AI Advantage: Trust


As I noted in A Quiet Pivot to Safety in AI, the market is entering an age of AI responsibility. Not because it’s fashionable, but because it’s becoming foundational. In sectors like finance, healthcare, insurance, and education, AI that can explain or qualify its results won’t be a luxury; rather, it will become a baseline expectation.


I believe the real competitive advantage in AI will not come from the fastest deployment: it will come from building AI systems, and AI cultures, that can be trusted.


That means leaders should stop asking only “What can this tool do for us?”, and start asking, “What signals are we sending by how we use it?”


Modeling the behaviour you want to see, whether with customers, employees, or other stakeholders, is part of your brand now. To build from there, here are the next steps you should take to continue fostering a culture that embraces uncertainty and complexity:


Train for skepticism. Help teams understand that AI outputs are not gospel. Teach them how to spot uncertainty, even if the tool doesn’t express it directly

 

Invest in explainability. Use or request tools with explainable outputs and uncertainty estimates. Vendors that don’t offer this should face higher scrutiny

 

Design escalation points. Don’t let ambiguous outputs become decision points. Build human-in-the-loop mechanisms so that ambiguous or low-confidence results are always reviewed (a minimal sketch of one such gate follows this list)
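

For illustration, here is a minimal sketch of what such an escalation gate could look like in code. The confidence floor, the review queue, and the names are assumptions for this example rather than any particular vendor’s API.

```python
# Hypothetical human-in-the-loop gate: low-confidence AI output is never
# released as a final answer; it is parked in a review queue for a person.

from dataclasses import dataclass, field
from typing import Optional

CONFIDENCE_FLOOR = 0.8  # assumed policy threshold for auto-release

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def escalate(self, question: str, draft: str, confidence: float) -> None:
        # Park the draft answer for a human reviewer.
        self.pending.append(
            {"question": question, "draft": draft, "confidence": confidence}
        )

def route(question: str, draft_answer: str, confidence: float,
          queue: ReviewQueue) -> Optional[str]:
    """Release the answer only when confidence clears the floor;
    otherwise escalate it and return nothing to the caller."""
    if confidence >= CONFIDENCE_FLOOR:
        return draft_answer
    queue.escalate(question, draft_answer, confidence)
    return None  # the caller shows "a specialist will follow up" instead

queue = ReviewQueue()
print(route("Do I qualify for a bereavement fare?", "Yes, retroactively.", 0.55, queue))  # None
print(len(queue.pending))  # 1 item waiting for human review
```

The point is structural: the decision about when a human must look is made once, as policy, rather than left to whoever happens to be reading the output.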


Leaders need to communicate these things outwardly. Customers will forgive slowness, but they will not forgive false confidence. Tell them when the AI isn’t sure—and make that a point of pride.


It is important to keep in mind that the best organizations won’t treat humility as a tone. They’ll build it into their infrastructure.
