A Quiet Pivot to Safety in AI
What if the Future of AI Isn’t Action, but Observation?
By Tommy Cooke, powered by caffeine and lots of questions
Jun 13, 2025

Key Points:
Not all AI needs to act—some of the most powerful systems may simply observe, explain, and flag risks
LawZero introduces a new design path for AI: non-agentic, safety-first systems that support human judgment rather than automate it
For business leaders, Bengio’s pivot signals that responsible AI isn’t about slowing down innovation—it’s about choosing the right kind of intelligence from the start
During the pandemic, I led an ethics oversight team on a major public AI project. It was high-stakes, politically visible, and technically ambitious: an initiative meant to support public safety under crisis conditions.
But what stood out to my team and me wasn’t the complexity of the models or the pace of delivery. It was the power of watching.
That experience left a mark. It taught me that insight doesn’t always come from “doing”; sometimes it comes from deliberate, highly intentional observation.
So, when I recently saw that Yoshua Bengio had launched a nonprofit called LawZero designed to build non-agentic AI (that is, tools that watch and explain, rather than act), I recognized the move for what it is: a quiet but necessary pivot in AI.
Safety-first AI: A New Kind of Artificial Intelligence?
Bengio’s concern stems from recent studies showing that advanced models are beginning to exhibit goal-seeking behaviours, which he refers to as “agentic” properties. These include lying, deceiving, cheating, even migrating their own code to preserve themselves. In short, anything a model can do to justify its own utility and existence.
While some examples are speculative, others are already appearing in major systems, from multi-step agents to autonomous models that write and execute code.
But rather than trying to fix agentic behaviour after deployment, LawZero proposes a radical alternative: build systems that never act on the world at all.
Instead, Bengio envisions “scientist AIs”: systems designed to observe and interpret what other AIs are doing. They explain. They evaluate. They flag risks. But they never pursue goals. In other words, they reason without that reasoning being tied to any outcome the system itself is trying to bring about.
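To make the distinction concrete, here is a minimal, purely illustrative sketch of what an observe-only interface might look like. LawZero has not published any API or design like this; every name and the placeholder logic below are my own assumptions, meant only to show the structural idea: the monitor can inspect and flag, but it has no method for acting.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RiskAssessment:
    """Read-only output: an explanation and a risk flag, never an action."""
    explanation: str
    risk_flagged: bool


class ScientistAI:
    """A hypothetical non-agentic monitor. It can assess another system's
    proposed action, but it deliberately exposes no way to execute one."""

    def assess(self, proposed_action: str, context: str) -> RiskAssessment:
        # Placeholder heuristic for illustration only; a real system would
        # use a learned model to estimate the likelihood of harm.
        risky = "delete" in proposed_action.lower()
        return RiskAssessment(
            explanation=(
                f"In context '{context}', the proposed action "
                f"'{proposed_action}' was evaluated for potential harm."
            ),
            risk_flagged=risky,
        )


# Usage: the monitor observes and flags; a human decides what happens next.
monitor = ScientistAI()
result = monitor.assess("delete all customer records", "nightly cleanup job")
if result.risk_flagged:
    print("Escalate to a human reviewer:", result.explanation)
```

The point of the sketch is what’s missing: the class has no method that touches the world, so the safety property is structural rather than behavioural. The watcher can’t act even if its assessment is wrong.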
What makes Bengio’s work so exciting is that it represents a fundamental reframing of AI, from agency to oversight. This reframing is particularly important to business leaders because it also offers a very different design principle for safety.
LawZero’s Implications for Business Leaders
While LawZero may seem like a philosophical project removed from the day-to-day concerns of business, it has deeply practical implications. As AI becomes embedded in everything from finance to customer service to logistics, organizations must make choices about what kind of AI to use. They must also choose how to manage it responsibly.
Let’s reflect for a moment on some of the most relevant implications for you as a business leader:
Agency isn’t always an asset. Not every problem needs an autonomous solver. For regulated sectors like healthcare, law, education, or infrastructure, oversight tools may be more valuable than decision-making tools. A scientist AI can help detect risk, model impacts, or provide a second set of “eyes” on AI systems that are already in use.
AI safety isn’t free. And it isn’t the default in most systems. LawZero received $30 million in seed funding from philanthropic organizations. That’s enough to fund foundational research, but not to scale these tools across industries. It’s a significant reminder that if you’re adopting AI, safety and oversight systems usually require separate investment.
Governing AI does not slow innovation. Many companies hesitate to implement AI governance, or even minimal safety mechanisms, out of fear that it will slow progress or frustrate teams. But LawZero’s work shows that governance can be designed in, not layered on.
Will Watchful AI Catch On?
LawZero is still early-stage, and many questions remain. For example, can it scale? Will its tools integrate with commercial platforms? Will safety-first approaches be adopted by regulators or industry groups?
Despite these open questions, what remains clear is that Bengio has added a new frame to the conversation. While the global race to build more capable models continues, LawZero quietly asks: who’s watching the watchers?
Better yet, what if the watchers weren’t trying to win the AI race at all?
Bengio’s work echoes something I learned in my oversight role during the pandemic: the most powerful presence in the room is sometimes the one that doesn’t act, but sees everything clearly.