
Search Results


  • Closing the Gap: from Policies and Procedures to Practice | voyAIge strategy

Closing the Gap: from Policies and Procedures to Practice
Overcoming the policy/procedure-practice paradox requires focus and commitment
By Tommy Cooke
Sep 24, 2024

Key Points:
Having AI policies doesn't automatically ensure ethical AI practices
Regular audits and cross-functional teams are crucial for aligning AI with ethical standards
Explainability and stakeholder engagement are key to responsible AI implementation

Closing the Gap: from Policies and Procedures to Practice
Organizations pride themselves on having comprehensive AI policies and procedures. They show care and diligence, and they signal to staff and stakeholders that you take AI use and employee behaviour seriously as part of your business plan. However, AI policies and procedures don't guarantee ethical AI. Even when multiple policies reference AI, there's often a gap between policy and procedures on the one hand, and practice on the other. This gap is a problem because it can catalyze unintended consequences and ethical breaches that undermine the very principles those documents are meant to uphold.

The Policy/Procedure-Practice Paradox
This problem is a paradox that is common in virtually every industry using AI. By paradox we mean a contradictory statement that, when investigated and explained, proves to be true. For example, say aloud to yourself, "the beginning is the end". It sounds absurd, but when you think it through, it makes sense. This same phenomenon presents itself when thinking about "policies and procedures in practice". Policies and procedures are documents, so how exactly are they practiced? The initial thought that a document practices anything is absurd. But when we read them, they guide how people ought to use, and not use, AI.

The policy/procedure-practice paradox is a problem because failing to understand it means failing to address it. And in failing to address it, policies and procedures about AI often lead to broken and misinformed practices.

Let's consider a real-world example: despite a company having an anti-bias policy in place, a facial recognition system used in stores across the United States for over eight years exhibited significant bias. The system struggled to accurately identify people with darker skin tones, leading to higher error rates for certain demographics. This occurred because the AI was trained on datasets disproportionately representing lighter skin tones. And so, even well-intentioned policies can fail in practice.

The example above is not isolated. It's a symptom of a larger issue in AI implementation. While the example I provided was caused by biased data, there are several other reasons why the policy/procedure paradox exists:
Lack of Explainability: many AI systems operate as "black boxes," making it difficult to understand their decision-making processes, even with transparency policies in place
Rigid Rule Adherence: AI systems may strictly follow their programmed rules without understanding the nuanced ethical priorities of an organization
Complexity of Ethical Standards: translating abstract ethical concepts into concrete, programmable instructions is a complex task that often leaves room for interpretation and error

Closing the Gap
To mitigate the paradox, we need to close the gap that often exists between AI policies and procedures on the one hand and AI practices on the other. Here are some strategies to achieve this:

Translate Policies into AI-Specific Guidelines: high-level policy language needs to be converted into actionable steps that can be implemented in AI systems.
This translation ensures that AI operates on the same definitions of privacy, fairness, and transparency as the organization. Engage with your AI vendor to discuss how your policies can be integrated into the system's operations. Remember, AI systems often require fine-tuning to align with specific organizational needs.

Conduct Regular Audits: periodic reviews of AI systems are essential to ensure they're behaving in line with ethical standards. These audits should be thorough and look for potential blind spots. They're also excellent at discovering and mitigating issues that an organization may have previously missed. Compare your system's training data with the data your organization provides. Analyze the differences and involve your ethics and analytics teams in prioritizing findings for policy amendments (a minimal sketch of this comparison follows this article).

Build a Cross-Functional Ethics Team: bringing together technology champions, legal experts, and individuals with strong ethical compasses can provide a well-rounded perspective on AI implementation. Ensure this team regularly communicates with your AI vendor, especially during the implementation of new systems. When building this team, diversify it. As academics say, make it multidisciplinary, meaning that it combines professional specializations when approaching a problem.

Promote Explainability: as the Electronic Frontier Foundation has advocated for years, explainability is crucial when using AI. Why? If an AI system's decisions can't be explained, it becomes difficult for an organization to claim accountability for its actions. Work with your vendor to ensure AI models are interpretable. Position the right people to explain system outputs to anyone in your organization and verify that those outputs align with your founding principles.

Engage External Stakeholders: as AI ethics expert Kristofer Bouchard recently argued, external perspectives, especially from customers, communities, and marginalized groups, are crucial when using AI. This is especially the case when it comes to identifying ethical blind spots. Regularly seek feedback from these groups when evaluating your AI systems. Their insights can be invaluable in uncovering unforeseen ethical implications.

The Path Forward: Ongoing Oversight and Proactive Management
Closing the gap between AI policy and ethical practice requires keeping it shut. Unfortunately, it's not as simple as closing a door once and for all. The gap can easily reopen many times during your AI journey. Closing it requires ongoing oversight, regular policy updates, and a commitment to aligning AI behaviour with organizational values. Actively integrate the five strategies above; doing so can significantly minimize the risks associated with AI use. Being proactive not only ensures compliance with ethical standards but also builds trust with stakeholders and positions the organization as a responsible leader in AI adoption.

Remember, in the world of AI, accountability and responsibility are critical. The power of these systems demands continuous vigilance and active management. By committing to this process, organizations can harness the full potential of AI while upholding their ethical principles and societal responsibilities.
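To make the audit comparison above concrete, here is a minimal sketch, assuming the vendor can share a representative sample of its training data; the file names and the demographic column are hypothetical placeholders, not references to any specific system.

```python
# Hedged illustration only: compares category shares in a vendor's training
# sample against your organization's own data, so over- and under-represented
# groups stand out. File and column names below are hypothetical.
import pandas as pd

def compare_distributions(training_csv: str, org_csv: str, column: str) -> pd.DataFrame:
    """Return side-by-side category shares and the gap between them."""
    training_share = (
        pd.read_csv(training_csv)[column].value_counts(normalize=True).rename("training_share")
    )
    org_share = (
        pd.read_csv(org_csv)[column].value_counts(normalize=True).rename("org_share")
    )
    report = pd.concat([training_share, org_share], axis=1).fillna(0.0)
    report["gap"] = report["training_share"] - report["org_share"]
    return report.sort_values("gap", ascending=False)

if __name__ == "__main__":
    # Large positive or negative gaps are candidates for the ethics team to review.
    print(compare_distributions("vendor_training_sample.csv", "org_records.csv", "skin_tone_category"))
```

A real audit will weigh many attributes at once and rely on the vendor's documentation, but even a simple share-by-share comparison like this gives the ethics and analytics teams something concrete to prioritize.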

  • APPROACH | voyAIge strategy

Our 3-pronged method for integrating and managing AI successfully in your organization.

OUR APPROACH
What sets us apart from our competitors is our academic background and professional experience. We deliver affordable, award-winning research, insight, and strategy.

Guiding Pillars
We pride ourselves on our analytical rigor and high work ethic, which has been tried, tested, and true at the intersection of law, policy, and ethics - our Guiding Pillars. Each pillar forms the foundation of every engagement, ensuring success in delivering comprehensive, forward-thinking solutions.

Law
Our strategies incorporate leading legal practices to protect your organization from the uncertainty caused by a rapidly evolving AI regulatory landscape

Policy
To ensure that your business initiatives are sustainable and forward-looking, you can rely on our expertise in policy and our AI frameworks, guidelines, and protocols

Ethics
Our top priority is to lead our work with moral integrity. We integrate transparency and accountability into our work to ensure that our clients' staff, employees, and stakeholders feel safe and are informed

Expertise
With over 20 years of experience in consulting, academia, and applicable law, we bring a wealth of interdisciplinary expertise and extensive experience in research, writing, educating, public speaking, and conference organization. We thrive in engaging diverse audiences to simplify complex ideas into practical solutions. We are committed to speaking the language of our clients, ensuring clarity and effectiveness in every interaction. Our Founders earned PhDs in technology fields with specialized expertise in six areas:

ETHICAL TECHNOLOGY
We understand that diverse groups of Canadians have different ethical standards and expectations around technology. We understand that consumers, populations, and marketplaces have different ethical standards and expectations around technology

CANADIAN LAW
We understand Canadian laws and legislative policies, particularly as they relate to emerging and controversial technologies

PRIVACY
We are experienced in analyzing, developing, and advising on policy design and solutions, particularly in the areas of informational and data privacy

SURVEILLANCE
We are professionally trained in understanding rights and liberties issues surrounding citizen and employee monitoring

POLICY
We regularly work with our clients and government to improve policy, procedure, process, and protocol around human-technology interactions

MEDIA & COMMUNICATION
We are professionally trained in studying the effects of media and digital communications in public and private enterprise

Experience
Project Management
We have led an ethics and privacy oversight team to support a public-private collaboration to use AI to study COVID-19

Alliances & Transparency
We believe in strategic alignment through collaborative problem-solving. We look forward to working with our teammates, clients, and stakeholders to develop tailored solutions. We are also open to partnering with organizations to pool expertise and creatively develop new ways of seeing and solving. As a reflection of our professional principles and values, our consultancy prioritizes understandable and honest communication. We are committed to clearly and fairly explaining client needs when scoping, assessing, and pricing engagements.

  • Adopt AI by Appealing to People | voyAIge strategy

Adopt AI by Appealing to People
AI leaders must understand that AI adoption is more than a technical exercise
By Tommy Cooke, powered by caffeine and curiosity
Jan 31, 2025

Key Points:
1. AI adoption is not just a technological upgrade because AI is not merely math. It's a social force and cultural transformer that requires strong leadership
2. Successful AI leaders prioritize trust, communication, and training to navigate employee fears and resistance
3. Embedding AI into an organization demands a strategic, human-centered approach that aligns with business goals and workplace culture

On January 29th we were invited to give a fireside chat, moderated by the incredible Christina Fox - CEO of the TechAlliance of Southwestern Ontario - hosted by the Canada Club of London (CCL). It was a fantastic opportunity to connect with people in a wide range of industries, from education and healthcare to finance and agriculture. What really stood out to Christina Fox, VS's Co-Founder Christina Catenacci, and me was not only how much AI has impacted every single person in every industry, but that there seemed to be a common problem among many of them: being a leader in AI adoption.

In fact, bringing AI into an organization is not merely a technical or financial challenge. It is also a matrix of social and cultural hurdles that leadership must clear without tripping along the way. While there is indeed a tendency to introduce and explain AI as merely a tool—as nothing more than linear algebra—this explanation is incomplete and dismissive of the fact that AI is built to interface with humans in deeply social and cultural ways.

To be clear, I recognize that technical AI experts prefer to calm the waters by appealing to black and white, logical sensibilities. Some of the most brilliant AI engineers believe that disarming nerves and anxiety around AI ought to be achieved by simplifying AI on the basis of mathematics. However, AI is larger than the sum of its math. It functions in society, on humans, and that makes AI complex, not just a technology. It is a social force that shapes human behaviour and thinking.

Allow me to further unpack this conundrum. When we polled our audience before our recent talk, we learned that Natural Language Processing (NLP), or AI that understands, manipulates, and interprets human language, was used by 90 percent of our audience. A year ago, at the CCL's inaugural AI event, almost nobody would have answered "yes". This is significant because it not only means people are using AI, but they are interfacing with it at unprecedented rates. While our audience used a variety of tools, including Claude, ChatGPT, CoPilot and even DeepSeek, what all of these tools have in common is that they:

Shape perceptions of reality. Users perceive AI as capable of making decisions, demonstrating intent, and even engaging in meaningful back-and-forth dialogue. This means that human perceptions drive adoption. It also means that AI, in turn, builds trust with users over time. These tools can also create dependency. When humans use AI, they are not merely interacting with math. They are communicating with a technology that emulates human thinking, reflection, and problem solving.

Exhibit power. AI is not conscious, but it is very good at simulating sentience, or the appearance of feelings and cognition. This is precisely how AI influences human decision-making in the first place.
By appealing to human attributes, AI has power over humans in its ability to convince us that the tool is not only correct but that we will eventually act upon its recommendations in both casual and non-casual ways.

Exist in social and cultural contexts. AI does not exist in a vacuum. These systems are designed and deployed by people who have curiosities, fears, and biases. These things change over time. This is why AI systems are built to reflect their owners' and designers' ethics, world views, and even their personalities. The fact that AI feels human is not an accident—it's often a core feature.

So, when leaders want to bring AI into their organization, this is what they are often up against. To successfully adopt AI in any organization, it is incumbent upon executives and senior management to become AI leaders, and that starts with recognizing that AI is more than merely an IT upgrade.

The Leadership Imperative
So, you've understood that AI is more than math. What are your next steps? They are not always obvious. Who is responsible for choosing AI? Who is accountable for its decisions? How do you bring multiple leaders with different opinions into alignment about AI? Will your employees and coworkers trust it? Without clear leadership, AI efforts stall, employees resist change, and organizations miss opportunities. Before getting into some recommended steps for becoming an AI leader in your organization, let's take inventory of some common adoption obstacles any AI leader faces in their organization.

Common Obstacles to AI Adoption
Lack of Strategic Alignment. AI initiatives often begin in silos, and the first silo tends to be housed by technical teams struggling to get executive buy-in. Cross-departmental collaboration complicates things further. Given what we discussed above regarding human curiosities, fears, and biases, it is perhaps unsurprising that people who are trained to do very different tasks are often the biggest obstacles to finding consensus on AI adoption.

Employee Resistance. It is a well-known fact in 2025 that people working in any organization overwhelmingly feel unsure about what AI means to them, their job description, and even their job security. I heard from multiple audience members after the event that IT teams struggle to get executive buy-in because employees are afraid of what they do not understand.

Training Needs. AI is not a hammer. It requires training so individuals can use it effectively and safely. This is not merely a task for technical teams but for executives and unit decision-makers—people with a vested interest in ensuring that if and when AI is used, people are trained in ways that speak to them, that resonate, and that facilitate productive reflection.

Ethics and Trust. AI introduces risks. All systems are susceptible to bias in decision-making, violating privacy laws, and creating unanticipated accountability gaps. Even as employees become more literate with AI, 62 percent of employees believe that their trust in AI will be broken by out-of-date public data in their organization's AI systems.

Lack of Change Management. AI adoption requires the ability to manage the transformation taking place along the way. Without change management strategies, AI efforts fail to gain traction. Effective change management means fostering open dialogue, providing ongoing support, and ensuring that employees feel included in the transformation rather than merely subjected to it.
Five Leadership Priorities for Successful AI Adoption
Given that human hurdles stall or even derail AI adoption, there are critical first steps that leaders can take to calm the waters and build momentum. Remembering that AI adoption is not just about technology—that it's about people, trust, and change—leaders need to acknowledge and address these complexities early. Here are five steps you can follow:

First, set a clear vision and take ownership. AI initiatives need leadership beyond your tech people. Appoint an AI champion or a "steering group" who can take inventory of employee sentiments, fears, and curiosities. Task them with figuring out the challenges and opportunities of bringing in AI, and make sure they do so in alignment with business objectives, especially growth plans.

Second, lead the conversation on training. AI success depends on human expertise. Figure out how your people learn best. Build learner personas of your workforce. Determine whether they prefer synchronous or asynchronous learning. Ensure that you take inventory of use cases and exemplify them. Be sure to also think about the long term. Training should be an ongoing effort that evolves alongside AI. Leaders must also ensure training includes not just technical skills but also critical thinking about AI's role, its risks, and ethical considerations. Prompt engineering is increasingly critical when using NLP tools at work.

Third, foster a culture of experimentation. Encourage your people to start wondering about the possibilities of AI. Prioritize building a sandbox environment where AI can be used on a limited dataset with no impact on existing databases or critical business operations. Invite people from across the business to play there, to break things, and to innovate along the way.

Fourth, communicate, communicate, communicate. Do not let AI feel like a top-down decision. Engage employees early. Listen to them. Provide clear messaging on how AI supports, not replaces, them. Update them along the journey. Don't wait until the end to tell them AI is coming. Create monthly update meetings and encourage feedback that can be responded to at these meetings.

Fifth, plan for policies and procedures early. One of the most important first documents for safeguarding your investment and your people is an AI in the Workplace policy and procedure document. It not only establishes rules, expectations, roles, and responsibilities, but also creates efficient, productive lockstep so that your talent knows exactly how to optimize their new tools.

AI leadership isn't about knowing the most about AI. It's about guiding people through difficult, often fearful and unknown change. Leading with clarity, empathy, and deliberate intent will turn AI from an uncertain risk into an opportunity.

  • Insider Threats in the Age of AI | voyAIge strategy

Insider Threats in the Age of AI
Employers deal with the dual challenge of leveraging AI for operations and defending against AI-powered internal threats
By Christina Catenacci, human writer
Mar 28, 2025

Key Points
Insiders are trusted individuals who have been given access to, or have knowledge of, company resources, data, or systems that are not generally available to the public
Insider risks are the potential for a person to use authorized access to the organization's assets—either maliciously or unintentionally—in a way that negatively affects an organization
As AI becomes more common in the workplace and performs tasks that are not completed by humans, organizations face a growing security risk from artificial insiders as well as human ones

AI is making waves in the workplace—employers have been trying to find novel ways of implementing AI to improve their operations, while simultaneously defending against cyberattacks, including new methods of AI-powered attacks, from inside their organizations. This article explores the nature of internal threats in modern workplaces.

What are insider risks?
According to Microsoft, insider risks (before they become actual threats or attacks) are the potential for a person to use authorized access to the organization's assets—either maliciously or unintentionally—in a way that negatively affects an organization. "Assets" means information, processes, systems, and facilities. In this context, an "insider" is a trusted individual who has been given access to, or has knowledge of, company resources, data, or systems that are not generally available to the public. For example, an insider could be someone with a company computer with network access. It could be a person with a badge or device that allows them to continuously access the company's physical property. Or it could be someone who has access to the corporate network, cloud storage, or data. It could even be a person who knows the company strategy and financial information.

Some risk indicators include:
Changes in user activity, such as a person behaving in a way that is out of character
Anomalous data exfiltration, such as sharing or downloading unusual amounts of sensitive data
A sequence of related risky activities, which could involve renaming confidential files to look less sensitive, downloading the files, saving them to a portable device, and deleting the files from cloud storage
Data exfiltration by a departing employee, such as a resigning employee downloading a copy of a previous project file to keep a record of accomplishments (unintentional) or knowingly downloading sensitive data for personal gain or to assist them in the next position at a new company (intentional)
Abnormal system access, where employees download files that they do not need for their jobs
Intimidation or harassment, which could involve an employee making a threatening, harassing, or discriminatory statement
Privilege escalation, such as employees trying to escalate their privileges without a clear business justification

What are insider threats and attacks?
Further down the continuum, insider threats have the potential to damage the system or asset. The threat could be intentional or unintentional. And even further down, an insider attack is an intentional malicious act that causes damage to a system or asset. Unlike threats, attacks are relatively easy to detect. Not all cyberattacks are data breaches.
More specifically, a data breach is any security incident where unauthorized parties access sensitive or confidential information, including personal data like health information and corporate data like customer records, intellectual property, or financial information. The ultimate goal of these insiders could be to steal sensitive data or intellectual property, to sabotage data or systems, to conduct espionage, or even to intimidate co-workers.

What is the cost of an insider attack?
Data breaches are serious: according to the IBM Cost of a Data Breach Report 2024, the global average cost of a data breach has increased by 10 percent since 2023 and has now reached USD 4.88 million. But what is striking is that malicious insider attacks averaged about USD 4.99 million in 2024. In this regard, expensive attack vectors included business email compromise, phishing, social engineering, and stolen or compromised credentials. The most common ones were phishing and stolen or compromised credentials.

What happens when you add AI?
According to IBM, AI and automation are transforming the world of cybersecurity. They make it easier for bad actors to create and launch attacks at scale. For example, when bad actors create phishing attacks, AI makes it easier to produce grammatically correct and plausible phishing messages. In fact, the ThreatLabz 2025 AI Security Report revealed that threat actors are currently leveraging AI to enhance phishing campaigns, automate attacks, and create realistic deepfake content. ThreatLabz researchers discovered how DeepSeek can be manipulated to quickly generate phishing pages that mimic trusted brands, and how attackers can create a fake AI platform to exploit interest in AI and trick victims into downloading malicious software.

In addition, ThreatLabz suggests that organizations face a number of AI risks:
Shadow AI and data leakage (using AI tools without formal approval or oversight of the IT department and causing data leaks)
AI-generated phishing campaigns (in about five prompts, a phishing page can be created)
AI-driven social engineering, from deepfake videos to voice impersonation used to defraud businesses
Malware campaigns exploiting interest in AI, where attackers lure victims with a fake AI platform to deliver the Rhadamanthys infostealer
The dangers of open-source AI enabling accidental data exposure and more serious things like data exfiltration
The rise of agentic AI, where autonomous AI systems are capable of executing tasks with minimal human oversight

Indeed, Security Intelligence claims that Gen AI is expanding the insider threat surface. We're talking about chatbots, image synthesizers, voice cloning software, and deepfake video technology for creating virtual avatars. Employees are misusing AI at work to the point that some companies are starting to ban the use of Gen AI tools in the workplace. For instance, Samsung apparently made such a ban following an incident where employees were suspected of sharing sensitive data in conversations with OpenAI's ChatGPT. This is concerning, especially since OpenAI records and archives all conversations, potentially for use in training future generations of the large language model.
A combination of human and AI security internal threats
Organizations face many internal security threats, which can be of a traditional or AI nature: we can see from the above discussion that as AI becomes more common in the workplace and performs tasks that are not completed by humans, organizations face a growing security risk from artificial insiders as well as human ones. These AI insiders would be even better at learning how to avoid detection by ingesting more information and becoming more adept at spotting patterns within that information. In fact, threat actors use AI-generated malware, exploit network traffic analysis to find weak points, manipulate AI models by injecting false data, and craft advanced phishing messages that evade detection. And AI systems can be used to detect those risks—AI and machine learning can enhance the security of systems and data by analyzing vast amounts of data, recognizing patterns, and adapting to new threats.

Insider risk reframed
The above discussion touched on several risks that could result from insiders, whether they are human or AI. These risks can be boiled down and examined by looking at people, processes, and technology. The following could be another way of thinking about internal risk:

People: human insiders make errors, lie about what they are doing, behave in unusual or suspicious ways, engage in theft of confidential information or intellectual property, could be manipulated, do not comply with the company's policies and procedures, have low levels of AI literacy, easily fall for phishing or give up credentials, operate with no human-in-the-loop, or lack AI governance

Processes: the company may not have AI in the workplace policies and procedures in place, or if they do have them, they may not be regularly updated, monitored, or enforced

Technology: there may be biased data, a lack of data hygiene leading to bad-quality data, no change management so people and systems are not supported during the transition, AI-generated malware, AI models manipulated through injected false data, advanced phishing messages that evade detection, agentic AI that goes rogue, or model drift and consequent inaccurate predictions
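As a rough illustration of how one of the risk indicators above, anomalous data exfiltration, might be operationalized, here is a minimal sketch assuming per-user daily download logs are available; the field names, units, and threshold are illustrative assumptions, and real insider-risk tooling weighs many signals together.

```python
# Hedged sketch only: flag a user whose download volume today is far above
# their own historical baseline. Field names, units, and the z-score threshold
# are illustrative assumptions, not a reference to any specific product.
from statistics import mean, stdev

def flag_exfiltration(history_mb: list[float], today_mb: float, z_threshold: float = 3.0) -> bool:
    """Return True when today's volume is a statistical outlier for this user."""
    if len(history_mb) < 5:
        return False  # too little history to establish a baseline
    baseline, spread = mean(history_mb), stdev(history_mb)
    if spread == 0:
        return today_mb > baseline
    return (today_mb - baseline) / spread > z_threshold

# Example: a user who normally downloads roughly 50 MB per day suddenly pulls 5 GB.
print(flag_exfiltration([45, 52, 48, 50, 55, 47], today_mb=5000))  # True
```

A flag like this is only a starting point; as the article notes, intent matters, so flagged activity still needs human review before it is treated as a threat rather than a risk.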

  • Newsom Vetoes California's SB 1047 | voyAIge strategy

Newsom Vetoes California's SB 1047
A missed opportunity to lead in AI regulation
By Christina Catenacci
Oct 1, 2024

Key Points:
Governor Newsom vetoed SB 1047, California's AI safety bill, on September 29, 2024
Many view the veto as a missed opportunity for California to lead in AI regulation in the United States
The bill created significant controversy in Silicon Valley because of the concern that it was too rigid and would stifle innovation

As you may have heard, Governor Gavin Newsom of California just vetoed California's AI safety bill, SB 1047. We wrote about this potentially landmark bill earlier, where we explained the inner workings of the text. The article also noted that the bill was the first of its kind in the United States and had the potential to influence how other States crafted their AI statutes and regulations. Professors Hinton and Bengio supported it too.

So why did the bill fail? The bill that attempted to balance AI innovation with safety made it through readings in the General Assembly and in the Senate—and was ready to be signed by Newsom. It seemed that the author of the bill, Senator Scott Weiner, was confident that a great deal of work had gone into it, and that it deserved to pass. Let us examine the veto note signed by Newsom on September 29, 2024.

First, it appears that Newsom was concerned about the fact that the bill focused only on the most expensive and large-scale models—something that could give the public a false sense of security about controlling AI. He pointed out that smaller, more specialized models could be equally or even more dangerous than the models that the bill targeted.

Second, Newsom called the bill well-intentioned, but also remarked that it did not take into account whether an AI system was deployed in high-risk environments, involved critical decision-making, or dealt with the use of sensitive data. Newsom stated, "Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology".

Third, Newsom agreed that we could not afford to wait for a major catastrophe to occur before taking action to protect the public, and he emphasized that California would not abandon its responsibility—but he did not agree that to keep the public safe, the State had to settle for a solution that was not informed by an empirical trajectory analysis of AI systems and capabilities. He stressed that any framework for effectively regulating AI had to keep pace with the technology itself. He also added that the US AI Safety Institute was developing guidance on national security risks, informed by evidence-based approaches to guard against demonstrable risks to public safety. Additionally, he noted important initiatives that agencies within his Administration were taking in the form of performing risk analyses of the potential threats and vulnerabilities to California's critical infrastructure using AI.

Newsom highlighted that a California-only approach might be warranted, especially absent federal action by Congress, but it had to be based on empirical evidence and science. He concluded his remarks by saying that he was committed to working with the Legislature, federal partners, technology experts, ethicists, and academia to find the appropriate path forward.
He stated, "Given the stakes—protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good—we must get this right". He simply could not sign SB 1047.

We should all keep in mind that this was a controversial bill that caused a kerfuffle in Silicon Valley. In fact, various tech leaders had been saying that the bill was too rigid, objecting to its potential to hinder innovation and drive companies out of California. Several lobbyists and politicians had been communicating their concerns in the preceding weeks, including former Speaker Nancy Pelosi. Other powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI, and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies. It makes sense that there was significant concern about the bill in Silicon Valley: SB 1047 dealt with serious harms and set out considerable consequences for noncompliance. It is no surprise that those in the tech industry were in favour of avoiding them. Indeed, the bill was called "hotly contested" because of the tech industry's complaints about it.

That said, there were many supporters who viewed SB 1047 as an opportunity to lead the way on American AI regulation. By contrast, tech industry leaders reacted positively to Newsom's veto, and even expressed gratitude to Newsom on social media.

On September 29, 2024, Senator Weiner responded to the decision to veto the bill, noting the following:
The veto was a setback for everyone who believed in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet
While the large AI labs had made admirable commitments to monitor and mitigate risks, the truth was that voluntary commitments from industry were not enforceable and rarely worked out well for the public
This veto left us with the troubling reality that companies aiming to create an extremely powerful technology faced no binding restrictions from American policymakers, particularly given "Congress's continuing paralysis" around regulating the tech industry in any meaningful way
With respect to Newsom's criticisms, he stated that "SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd"
The veto was a missed opportunity for California to once again lead on innovative tech regulation

Weiner stated that California would continue to lead the conversation, and it was not going anywhere. We will have to wait and see what is proposed in the near future. In the meantime, it is important to note that there were some smaller AI bills that Newsom did sign into law. For instance, Newsom recently signed one to crack down on the spread of deepfakes during elections, and another to protect actors against their likenesses being replicated by AI without their consent. As for Weiner, he stated, "It's disappointing that this veto happened, but this issue isn't going away".

  • The Government of Canada launches an AI Strategy Task Force and Public Engagement | voyAIge strategy

The Government of Canada launches an AI Strategy Task Force and Public Engagement
Public Consultations run from October 1, 2025 to October 31, 2025
By Christina Catenacci, human writer
Oct 3, 2025

Key Points
On September 26, 2025, the Government of Canada announced the launch of an AI Strategy Task Force and a "30-day national sprint" that will help shape Canada's approach to AI
The government will be seeking advice on a broad range of AI-related themes, including: research and talent; AI adoption across industry and governments; commercialization of AI; scaling Canadian AI champions and attracting investments; building safe AI systems and strengthening public trust in AI; education and skills; building enabling infrastructure; and security of the Canadian infrastructure and capacity
The AI Task Force is made up of technical experts in areas such as research and talent, AI adoption, commercialization, scaling and attracting investment, education and skills, infrastructure, and security

On September 26, 2025, the Government of Canada, via the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, announced the launch of an AI Strategy Task Force (Task Force) and a "30-day national sprint" (Consultation) that will help shape Canada's approach to AI. The ultimate goal of the Task Force and Consultation is to set out a renewed AI strategy for Canada.

What Will the Task Force Do?
The government will be seeking advice on a broad range of AI-related themes, including:
research and talent
AI adoption across industry and governments
commercialization of AI
scaling Canadian AI champions and attracting investments
building safe AI systems and strengthening public trust in AI
education and skills
building enabling infrastructure
security of the Canadian infrastructure and capacity

The Task Force, a group of experts, will consult their networks to provide actionable insights and recommendations.

What are the Consultations About?
Canadians are invited to share their perspectives to help define the next chapter of AI in Canada. The Consultation begins October 1, 2025 and ends on October 31, 2025. Subsequently, ideally in November, the AI Strategy Task Force will share the ideas that they gather.

Who is on the AI Task Force?
The AI Strategy Task Force comprises several leaders of the AI ecosystem and will be consulting their networks on specific themes, as listed below:

Research and Talent
Gail Murphy, Professor of Computer Science and Vice-President – Research & Innovation, University of British Columbia and Vice-Chair at the Digital Research Alliance of Canada
Diane Gutiw, Vice-President – Global AI Research Lead, CGI Canada and Co-Chair of the Advisory Council on AI
Michael Bowling, Professor of Computer Science and Principal Investigator – Reinforcement Learning & Artificial Intelligence Lab, University of Alberta and Research Fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI Chair
Arvind Gupta, Professor of Computer Science, University of Toronto

Adoption across industry and governments
Olivier Blais, Co-Founder and Vice-President of AI, Moov.AI and Co-Chair of the Advisory Council on AI
Cari Covent, Strategic Data and AI Advisor, Technology Executive
Dan Debow, Chair of the Board, Build Canada

Commercialization of AI
Louis Têtu, Executive Chairman, Coveo
Michael Serbinis, Founder and Chief Executive Officer, League and Board Chair of the Perimeter Institute
Adam Keating, Founder and Chief Executive Officer, CoLab

Scaling our champions and attracting investment
Patrick Pichette, General Partner, Inovia Capital
Ajay Agrawal, Professor of Strategic Management, University of Toronto and Founder, Next Canada and Founder, Creative Destruction Lab
Sonia Sennik, Chief Executive Officer, Creative Destruction Lab
Ben Bergen, President, Council of Canadian Innovators

Building safe AI systems and public trust in AI
Mary Wells, Dean of Engineering, University of Waterloo
Joelle Pineau, Chief AI Officer, Cohere
Taylor Owen, Founding Director, Center for Media, Technology and Democracy

Education and Skills
Natiea Vinson, Chief Executive Officer, First Nations Technology Council
Alex Laplante, Vice-President – Cash Management Technology Canada, Royal Bank of Canada and Board Member at Mitacs
David Naylor, Professor of Medicine and President Emeritus
Sarah Ryan, Senior Research Officer, Canadian Union of Public Employees

Infrastructure
Garth Gibson, Chief Technology and AI Officer, VDURA
Ian Rae, President and Chief Executive Officer, Aptum
Marc Etienne Ouimette, Chair of the Board, Digital Moment and Member, OECD One AI Group of Experts, Affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy

Security
Shelly Bruce, Distinguished Fellow, Centre for International Governance Innovation
James Neufeld, Founder and Chief Executive Officer, samdesk
Sam Ramadori, Co-President and Executive Director, LawZero

What Can We Take from this Development?
First and foremost, if you are interested in providing comments, do it right away (you may need to sprint to participate in the national sprint) because the deadline is fast approaching.

Second, the AI Minister was quick to point out that Canada was the first country in the world to launch a funded national AI strategy—the Pan-Canadian AI Strategy (PCAIS)—to drive adoption of AI across Canada's economy and society. In fact, he went so far as to deny that Canada is behind other countries in AI and needs to catch up. He seemed to be of the view that Canada is and remains a leader in AI—even though we have watched other countries surpass Canada in AI (and privacy) regulation.
This response will likely be disappointing for many Canadians, since it is important to first acknowledge that there is a problem before it is possible to fix it.

Third, the announcement also referred to the recent investment of $2 billion for the Canadian Sovereign AI Compute Strategy in 2024—despite the fact that governments and companies in other countries have poured in much larger amounts and have gotten much further ahead than Canada. For example, Google invested $33 billion in 2024; it plans on investing $75 billion in 2025.

Fourth, the announcement listed the Task Force experts, most of whom are involved in the technical aspects of AI. While it is important to have a task force made up of technical experts in areas such as research and talent, AI adoption, commercialization, scaling and attracting investment, education and skills, infrastructure, and security, it is also important to include philosophers and ethicists, legal minds, and sociologists, as they would add myriad perspectives and create a more holistic understanding of how AI needs to be implemented in society.

Fifth, the AI Minister has said that the members of the AI Task Force will consult their networks to provide actionable insights and recommendations. What exactly does this mean? It appears that experts who have something to offer the government can only participate in this phase if they fall into certain networks. This is disappointing given that many Canadians who are not part of these select networks could provide valuable insights into how we can create a renewed AI strategy in Canada.

  • The 10-Year AI Regulation Moratorium Was Just Not Meant to Be | voyAIge strategy

The 10-Year AI Regulation Moratorium Was Just Not Meant to Be
The AI Moratorium Part of the Vote-a-Rama Ended in an Ultimate "Nay"
By Christina Catenacci, human writer
Jul 11, 2025

Key Points:
On July 4, 2025, the Big Beautiful Bill was signed by the President and became law—but there was no 10-year AI regulation moratorium for States
The Senators engaged in a lengthy Vote-a-Rama regarding the bill and the 10-year moratorium in particular, and it was decided that the provision was to be removed
This means that US businesses are going to have to be on the lookout for any new AI laws that provide guardrails in the name of AI safety and responsible AI—and comply

Following several amendments made at the last minute in an early morning Vote-a-Rama on July 4, 2025, US HR 1, the One Big Beautiful Bill Act, was signed by President Trump and became Public Law No: 119-21. This was a highly controversial and lengthy bill; the focus of this article is on the proposed 10-year AI regulation moratorium that would have been imposed on States.

As we can see from the history of the bill in the House, the bill passed on May 22, 2025 by one vote—Yeas and Nays were 215–214. The House said "yea" to the bill—including the 10-year moratorium. Subsequently, the bill went to the Senate, where there were several debates and amendments made between June 27 and July 3, 2025. Finally, on July 4, 2025, the bill was signed by the President and became law. But there was no 10-year AI regulation moratorium in the final version of the bill. The White House website had a page suggesting that the bill was backed by American industry, with corporate logos of the supporting companies running across the page.

What was this bill about?
On a high level, this was primarily a budget bill, where a number of measures would be used to transfer wealth from low-income taxpayers to rich taxpayers. For example, it would make the Trump 2017 tax cuts permanent. Those who opposed the bill dubbed it the "Big Ugly Bill" and asserted that it stole from the poor to give to the ultra-rich. The result seems to be that 16 million people would lose their health insurance, there would be the largest cuts to nutrition assistance in history, and higher education would become less affordable. Worse, the changes would add $3 trillion to the national debt.

Although the bill was supposed to be a budget bill, we can see from the large amount of text contained in it that much more was covered: agriculture, nutrition and forestry; Armed Forces; banking, housing, and urban affairs; commerce, science, and transportation; energy and natural resources; environment and public works; finance; health, education, labor, and pensions; homeland security and governmental affairs; and the judiciary. In short, it crammed many topics into a single bill. Only a small part of the initial version of the bill dealt with AI regulation by States, the focus of this article.

What was the 10-year AI regulation moratorium about?
Essentially, we can see in a previous version of the bill that there was a moratorium proposed, where no State or political subdivision thereof would be able to enforce, during a 10-year period, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.
An exception was that the rule could not be used to prohibit the enforcement of any law or regulation that:
had a primary purpose and effect of removing legal impediments to, or facilitating the deployment or operation of, an artificial intelligence model, artificial intelligence system, or automated decision system, or of streamlining licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of artificial intelligence models, artificial intelligence systems, or automated decision systems
did not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on artificial intelligence models, artificial intelligence systems, or automated decision systems unless such requirement was imposed under Federal law or, in the case of a requirement imposed under a generally applicable law, was imposed in the same manner on models and systems, other than artificial intelligence models, artificial intelligence systems, and automated decision systems, that provide comparable functions to artificial intelligence models, artificial intelligence systems, or automated decision systems
did not impose a fee or bond unless such fee or bond was reasonable and cost-based and, under such fee or bond, artificial intelligence models, artificial intelligence systems, and automated decision systems were treated in the same manner as other models and systems that perform comparable functions
or any provision of a law or regulation to the extent that the violation of such provision carried a criminal penalty

Why did the moratorium die in the Senate?
In an overnight session, the Senators decided to reject the 10-year moratorium on State-level AI regulation, voting 99–1 to remove the provision. To be sure, there were likely many disappointed tech companies like OpenAI or Google who had lobbied to keep the provision in the bill to protect innovation, to no avail.

The 10-year moratorium almost became a five-year moratorium. There was talk about what could constitute exceptions—many Senators wanted to push for some kind of carve-out (for instance, protecting children or creatives). But in the end, it became clear that too much confusion would be created by adding exceptions that would allow for AI protections for children online and for musicians in the music industry. And some Senators pointed out that any moratorium, even for five years, would leave us with no AI protections and create a clear path to challenge any State law in court—it was troubling that tech companies would be subject to virtually no regulation. Others were concerned that any moratorium would endanger a State's right to govern in its jurisdiction.

Many were pleased that the vote turned out as it did—some Senators stated that the vote sent a clear message to Big Tech that Congress would not sell out kids and local communities in order to pad the pockets of Big Tech billionaires. The bipartisan opposition to the moratorium also signalled that there would be no Big Tech power grab. So for several reasons, the 10-year moratorium died.

What can we take from this development?
We can say goodbye to the 10-year moratorium on State-level AI regulation. This means that States are still free to enact and enforce State laws pertaining to AI models, AI systems, or automated decision systems, without the federal government coming in and preventing it from taking place.
It also means that US businesses are going to have to be on the lookout for any new AI laws that provide guardrails in the name of AI safety and responsible AI. To be clear, it means that there will be legal consequences in cases where companies do not comply with the law. The area of AI regulation is growing, and it may expand rapidly now that this recent Senate Vote-a-Rama has taken place. As you may recall, some States are further ahead than others legislatively speaking. To that end, we recommend that businesses prepare for upcoming laws and remain in compliance.

  • What Apple’s $500 Billion AI Investment Means to You | voyAIge strategy

What Apple's $500 Billion AI Investment Means to You
The Continued Normalization of AI as Core Infrastructure
By Tommy Cooke, powered by caffeine and curiosity
Mar 3, 2025

Key Points:
Apple's $500 billion AI investment signals that AI is shifting from an innovation tool to core business infrastructure
Increased AI integration in Apple's ecosystem will shape consumer, employee, and investor expectations, pushing businesses to adapt
Businesses should focus on preparing for AI-driven shifts in consumer expectations, workforce dynamics, and regulatory landscapes

Apple recently announced a $500 billion investment in AI. The moment is not merely a landmark in the technology world. It is also monumental for U.S. manufacturing, U.S. talent development, and the U.S.'s foothold in the global AI economy. This news is not merely a corporate push for technology. It's a sign that AI is becoming intimately embedded in business infrastructure; quickly fading are the days of thinking of AI as merely an emerging, experimental tool. With AI-capable smartphones forecast to grow significantly over the next three years at a compound annual growth rate of nearly 63 percent, coupled with the fact that Apple accounts for more than half of the smartphone device market share in the U.S., business leaders need to recognize that their employees, investors, partners, and customers alike – the Apple device lovers in your professional and personal networks – will be interfacing with AI at unprecedented rates in the few short years to come. Whether your organization is adopting AI or not, here's why Apple's announcement matters to you.

AI as Infrastructure, Not Merely Innovation
For years, AI has been treated as an innovation driver or a business enabler, something that enhances products, streamlines workflows, or creates new capabilities. But with Apple's recent announcement, a deeper reality is setting in: AI is increasingly recognized as an operational necessity. Apple's announcement, which includes an AI server manufacturing facility in Texas and 20,000 new research and development jobs along with a new AI academy in Michigan, signals a broader shift—AI is no longer niche, it is foundational.

This reclassification matters. Apple's investment will push AI further into the mainstream. It is altering expectations for AI-readiness across multiple industries. Additionally, as Apple continues to integrate AI more deeply into its own ecosystem, more consumers, employees, partners, and investors will be regularly exposed to AI-driven interactions and functionalities. This broad exposure means that businesses need to be prepared for shifting human expectations of AI.

AI Normalization and Business Implications
As AI becomes more infrastructural, normalization will follow. What is important to recognize here is that this level of financial investment will create jobs, accelerate workforce transformation, and even generate a new AI training and research facility—this is about much, much more than declaring AI crucial to the company's internal operations. It will also significantly affect Apple's external ecosystem by sending a very clear message about the value of AI. Here are three reasons why Apple's investment matters to you:

AI is Becoming More Accessible. As AI infrastructure expands, smaller enterprises will have increased access to AI capabilities. This means even organizations without extensive tech teams must begin discussing AI integration and management.

Consumers and Employees Expect AI.
With AI becoming more embedded in Apple's ecosystem (through Siri advancements, AI-enhanced applications, and automated workflows), customer and employee expectations around AI-driven interactions will evolve as well. Businesses must anticipate and meet these new expectations. Remember, whether your leadership believes in AI or not, the people working with you and for you have ideas, dreams, and visions of AI making their jobs easier. AI will be an integral, core component of Apple devices moving forward. Accordingly, expectations will change.

Policy and Regulation Will Evolve. Large-scale AI investment is likely to accelerate regulation. As AI becomes a fundamental part of economic infrastructure, governments will refine legal frameworks around AI use, data privacy, and corporate accountability. While regulatory change is rather cumbersome in North America, it will be important to keep an eye on global regulators and civil society discourse, as there will be adjustments in the tone, frame, and focus of AI law and AI ethics concepts.

The Takeaway: A Wake-Up Call for Businesses
Regardless of whether your organization is navigating AI, it is important to start thinking about the relationship between you, your people, and their increasingly AI-driven Apple devices. We recommend that businesses invest in AI literacy, establish decision-making plans, and, if they are on the cusp of integrating AI, lead the charge on the conversation. In this way, businesses will be better equipped to respond to the fact that people outside and inside their organizations are comparing their agility, creativity, and flexibility to new standards driven by AI models.
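For a sense of scale, here is a back-of-the-envelope check on the growth figure cited above (our illustration, not a figure from the forecast itself): a compound annual growth rate of roughly 63 percent sustained for three years implies more than a fourfold increase.

```python
# Illustrative arithmetic only: what a ~63 percent CAGR means over three years.
cagr = 0.63
years = 3
growth_multiple = (1 + cagr) ** years
print(f"Growth over {years} years: about {growth_multiple:.1f}x")  # about 4.3x
```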

  • Between Hype and Fear of Chatbots | voyAIge strategy

Between Hype and Fear of Chatbots
What AI Chatbots Are Actually Doing to Work
By Tommy Cooke, powered by coffee and curiosity
May 16, 2025

Key Points:
The labour market impact of AI chatbots is far smaller than headlines suggest, with minimal changes to work time, earnings, or job displacement
AI's true influence lies not in replacing workers, but in reorganizing workflows, expectations, and trust, especially when organizations prepare for adoption
Without a supportive culture, training, and governance, AI tools reflect existing flaws rather than deliver transformation, meaning leadership must shape readiness

For years now, the narrative around AI chatbots in the workplace has swung wildly between utopia and dystopia. Depending on who you ask, chatbots are either eliminating jobs or boosting productivity at unprecedented rates. But recent research offers a sobering counterpoint: the reality is far less dramatic.

To find out what was really going on, researchers surveyed 25,000 people working in 7,000 workplaces over 2023 and 2024, covering 11 jobs that were previously believed to be on the path to destruction due to AI. The survey results were published in a study last year, exploring how AI has affected the labour market. The result? Not very much—not yet. Headlines tend to focus on how AI replaces jobs and transforms sectors. Yet the study finds that the average time saved from using AI chatbots was just 2.8 percent of total work time. No meaningful impact on earnings. No significant shift in hours worked. No sweeping revolution or impending doom. This kind of insight isn't great news, but it certainly isn't bad news either. Rather, it's a clarifying moment, and it's one we need to pay attention to.

Are We Asking the Wrong Questions About AI?
The majority of mainstream discourse about AI has been driven by scale: "How many jobs will be lost?" or "How much time will be saved?" But these questions flatten the real path of technologically-fueled change. They assume impact must be immediate, measurable, and massive to matter. Between the hype and fear is a more mundane story. It's a real story, and it's not about replacement. It's about reorganization of workflows, of expectations, and of trust. When we ask if AI is "changing the world of work," we have to ask what kind of change we're actually expecting.

Take this example from a recent MIT-Stanford study: customer support agents who used generative AI saw measurable gains, especially junior agents. They closed tickets faster and improved customer satisfaction. But this was a narrow use case, supported by extensive prompts and training. And even then, senior agents saw little benefit. Why? Because the AI was trained on transcripts written by those same senior agents. The AI amplified what already worked. It didn't invent a new system or innovate a new way of working. This tells us something critical: AI is not replacing expertise. It's merely supplementing it.

AI Perception Problems Are Real Problems that Must be Addressed
There's another undercurrent in the reality of chatbot impact on the labour market. Once more, this undercurrent doesn't often show up in mainstream discourse, nor do we see it in cited labour stats: the social cost of using AI. More specifically, a separate study by researchers at Duke and Princeton found that employees who used AI tools like ChatGPT were perceived as less competent, less hardworking, and even lazier by their peers.
Even if someone uses AI effectively to save time and focus on higher-order work, they may still be socially punished for it. In corporate environments where appearances matter, that stigma can be enough to stop adoption altogether, especially if organizations fail to create the right messaging and workplace culture around AI. Let’s keep in mind that we are talking about a social issue and not a cultural one.

Reading Between the Lines of Recent AI Studies on the Labour Market

There is a lesson that is not explicitly articulated within the studies referenced thus far: uncertainty and underinvestment shape AI impact. In our experience, most organizations do not train staff on how to use AI well, lack internal policies to govern safe or appropriate use, rely on ad hoc champions rather than structured leadership, and neglect to reconfigure workflows to accommodate AI as a partner. Dropping a chatbot into a workplace doesn’t magically enhance it. Rather, the chatbot will often mirror the workplace’s flaws.

My point is simple: the companies seeing real value from AI (even if modest) are doing the preparatory work that matters behind the scenes. They're adapting processes, not just adding tools. They're establishing clear boundaries, strong incentives, and internal trust. They're shifting AI from being a novelty into something more like organizational infrastructure.

Moving From Transaction to Transformation

So, what should you as a business leader do with this information? Here’s a simple takeaway: don’t expect AI to change your business on its own. Start changing your business so AI fits into it. This means moving from transactions (throwing AI at a task) to transformations (rebuilding how work is done). It also means slowing down and doing the essential work of governance, policy, training, communication, and culture building.

Here’s what we suggest:

Create a clear internal stance. Make your organization’s expectations for AI use visible and affirming. Let people know they won’t be punished for using AI and they won’t be rewarded for hiding it.

Focus on real use cases. Don’t adopt AI for its own sake. Instead, identify where it can reliably augment work, especially for less experienced employees or for low-risk internal processes.

Track impacts quietly and consistently. You may not see big numbers at first. That’s okay. Instead, look for friction reduction, faster onboarding, or error detection. The wins may be small, but they compound. (A minimal logging sketch follows at the end of this piece.)

Build a governance wrapper. Define what “responsible AI” means for your organization and operationalize it with processes, documentation, and regular review. Don’t just govern the tool. Govern the ecosystem around AI itself.

AI Opportunities Hiding in Plain Sight

The study I opened with may seem like it dampens AI excitement. But there is another way of reading it. My takeaway from the study is that opportunities for growth and change are dependent upon leaders shaping their organizations. Rather than wait for a chatbot to change your organization, change your organization so that the chatbot fits. That’s the real work. And it’s where the value lives.
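To make “track impacts quietly and consistently” concrete, here is a minimal Python sketch of what a lightweight impact log could look like. The file name, fields, and example entry are hypothetical placeholders rather than a prescribed tool; the point is simply that small wins only compound if someone records them consistently enough to summarize later.

# Minimal sketch of a lightweight AI-impact log (hypothetical fields and file name).
# Each time a team completes an AI-assisted task, they record what happened;
# a periodic summary surfaces small, compounding wins like time saved per task type.
import csv
from collections import defaultdict
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_impact_log.csv")  # assumed location; adjust to your environment
FIELDS = ["date", "team", "task_type", "tool", "minutes_saved", "notes"]

def record_entry(team, task_type, tool, minutes_saved, notes=""):
    """Append one AI-assisted task outcome to the shared log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "team": team,
            "task_type": task_type,
            "tool": tool,
            "minutes_saved": minutes_saved,
            "notes": notes,
        })

def summarize():
    """Total minutes saved per task type: small wins that compound over time."""
    totals = defaultdict(float)
    with LOG_FILE.open(newline="") as f:
        for row in csv.DictReader(f):
            totals[row["task_type"]] += float(row["minutes_saved"])
    return dict(totals)

if __name__ == "__main__":
    record_entry("support", "ticket triage", "chatbot", 12, "faster first response")
    print(summarize())

A spreadsheet would work just as well; the design choice that matters is a consistent, low-friction record that can be reviewed quietly at regular intervals.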

  • AI & The Future of Work Report | voyAIge strategy

Analyses & Recommendations on AI & The Future of Work

The future of work has arrived. AI is transforming industries at a pace that is faster than most leaders expected. This in-depth report distills global research into clear insights for:

C-suite executives who are navigating AI disruption

Employers and HR who are at the forefront of dealing with layoffs, mass terminations, and hiring freezes

Policymakers who are trying to address these technological changes and influence legal and policy direction

Governments who are trying to make positive change for society

Inside our report, you'll learn:

Which jobs and skills are most at risk and where new growth is emerging as a result of AI

The economic, psychological, and sociological implications of AI disruption

The current legal and policy landscape in the United States, Canada, and the European Union

The steps that leaders can take to balance innovation, compliance, and employee well-being

Why read it? AI is not a technology problem—it is a leadership test. Learn how to manage disruption proactively and seize new opportunities to grow with AI.

Sign up to download the full report and join our monthly newsletter.

  • The Most Honest AI Yet? | voyAIge strategy

The Most Honest AI Yet?

Why Admitting Uncertainty Might Be the Next Step in Responsible AI

By Tommy Cooke, powered by questions and curiosity

Jun 20, 2025

Key Points:

MIT’s new AI model signals a deeper shift toward transparency, humility, and trust in AI systems

Business leaders must recognize that unchecked confidence in AI carries serious reputational and legal risk

The real competitive advantage in AI isn’t speed—it’s building cultures and systems that model integrity and caution

A client recently shared their favourite moment while attending a tech conference a few months ago. He listened to numerous speakers go on about the promises of AI, but he was not convinced. All he was hearing was the run-of-the-mill talking points: AI is great, it saves money, it increases revenue, it revolutionizes business, and so on. But he was taken aback at the end of the conference. The last speaker took a radically different approach and called out the purple elephant hiding in the corner of the room: hallucinations and dishonesty. The speaker took serious issue with the fact that AI fails to admit when it’s wrong and said something to the effect of: “Artificial Intelligence, as it were, has some work to do in terms of becoming Honest Intelligence.”

The reflection was prompted by the question: will we ever get AI that simply admits when it doesn’t know the answer? Good news for my client, and for the rest of us just like him: we might be closer than we think. Researchers from MIT and its Uncertainty Labs recently revealed an AI model that recognizes when it doesn’t know the answer … and says so. Yes. You read that correctly: an AI that admits confusion.

At first glance, this might seem like a modest or even trivial update in a field known for hype, bold claims, and massive ambition. But this humble innovation signals something bigger. If we squint, we can see the outlines of a larger trend emerging here: after a gold rush of AI development characterized by confidence and speed, the market has been—as I unpacked recently—quietly shifting toward maturity, safety, and accountability. Companies, regulators, and researchers are all starting to recognize that the future of AI is not only about performance—it is about trust. MIT’s “humble AI” may be one of the clearest signs yet of where we’re headed.

From Bold AI to Measured AI: What a Shift in Tone Reveals About Where AI is Heading

AI tools are designed to sound confident, but they often sound more confident than they should. When they hallucinate, they do so with conviction. That’s part of what makes them so compelling. It’s also what makes them so dangerous. They often cross the line from helping users to misleading them, whether through invented citations or persuasive but false narratives.

In this context, then, the MIT team’s breakthrough stands out. Their new system doesn’t merely generate content. Interestingly, it calculates a “belief score” indicating how confident it is in each of its answers. It can express uncertainty in natural language and even abstain from answering altogether. This is not just a technical improvement. It’s a philosophical one, and one that matters to you as a business leader. Why? It signals a new future of AI: one where the goal is not omniscience, but reliability and accuracy. One where companies don’t ask, “How fast can we scale this?” but instead, “Should we scale this at all?” Or, “Is this particular model the best investment?”

This tonal shift, from bold to measured, is worth noticing as well.
It mirrors the shift we’re seeing in corporate strategy, regulation, and public sentiment: leaders are realizing that the only sustainable AI is the kind that knows its own limits.

What Organizations Should Learn from Honest Artificial Intelligence

When AI demonstrates the value of knowing what it doesn’t know, it's a mirror to our own blind spots. Companies that adopt AI without building in mechanisms to identify uncertainty are asking for trouble. They're putting tools in front of employees and customers that may sound confident while being catastrophically wrong. And there are countless examples:

Air Canada ended up in court after its AI-assisted chatbot gave incorrect advice for securing a bereavement ticket fare

The Dutch government, including the Prime Minister, resigned after an investigation found that 20,000 families had been wrongly accused of fraud by a discriminatory algorithm the government had endorsed

A Norwegian man filed a complaint against OpenAI after ChatGPT falsely told him he had killed two of his sons and had been jailed for 21 years

In 2021, the fertility tracker app Flo Health was forced to settle allegations brought by the U.S. Federal Trade Commission after it was caught sharing private health data with Facebook and Google

I’d be remiss not to mention the near-endless instances of AI deepfakes and the eruption of chaos that they’ve caused, from simulating the voices of political leaders and creating fake sports news conferences to the infamous story of a finance worker at a multinational firm who was tricked into paying out $25 million to fraudsters using deepfake technology

To the business leaders reading this: it’s no longer enough to rely on disclaimers or internal policies that live in the fine print. That’s only half of the battle. As AI integrates into more visible, consequential workflows, business leaders will need to model transparency at the product level. That’s precisely what the MIT model does. Its interface shows that uncertainty is real and accounted for. While this may seem like a setback or extra work, it could also be seen as a leadership opportunity.

Why Business Leaders Ought to Normalize Uncertainty in AI

I truly believe that being a business leader is less about having the answers than it is about asking the right questions. After over a decade of lecturing in university classrooms, I often found that encouraging my students to ask hard questions not only positioned them to collaborate on compelling answers, but also conditioned them to be comfortable with discomfort and uncertainty. As a society, we far too often associate uncertainty with weakness, risk, and indecision. But in complex systems, uncertainty is not the enemy. It is a fact of life. Engaging with it demonstrates maturity and leadership. Conversely, avoiding it makes subordinates nervous about getting it wrong.

What MIT’s new model does is provide a practical blueprint for building uncertainty into the system architecture of AI tools. It’s a lesson worth internalizing and mobilizing into an attitude or ‘posture’ for an organization. To work competently and confidently with AI, it’s crucial to foster a culture that trains employees to recognize the limits of AI outputs—not to ignore them or be afraid of them.

The Coming Competitive AI Advantage: Trust

As I noted in A Quiet Pivot to Safety in AI, the market is entering an age of AI responsibility. Not because it’s fashionable, but because it’s becoming foundational.
In sectors like finance, healthcare, insurance, and education, AI that can explain or qualify its results won’t be a luxury; rather, it will become a baseline expectation. We believe the real competitive advantage in AI will not come from the fastest deployment: it will come from building AI systems—and AI cultures—that can be trusted. That means leaders should stop asking only “What can this tool do for us?” and start asking, “What signals are we sending by how we use it?” Modeling the behaviour you want to see, whether with customers, employees, or other stakeholders, is part of your brand now.

To build from there, here are the next steps you should take to continue fostering a culture that embraces uncertainty and complexity:

Train for skepticism. Help teams understand that AI outputs are not gospel. Teach them how to spot uncertainty, even if the tool doesn’t express it directly

Invest in explainability. Use or request tools with explainable outputs and uncertainty estimates. Vendors that don’t offer this should face higher scrutiny

Design escalation points. Don’t let ambiguous outputs become decision points. Build Human-in-the-Loop mechanisms so that ambiguous or low-confidence results are always reviewed (a minimal sketch appears at the end of this piece)

Leaders need to communicate these things outwardly. Customers will forgive slowness, but they will not forgive false confidence. Tell them when the AI isn’t sure—and make that a point of pride. It is important to keep in mind that the best organizations won’t treat humility as a tone. They’ll build it into their infrastructure.
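As a concrete illustration of the “design escalation points” step, here is a minimal Python sketch of the general pattern described above: attach a confidence estimate to each answer, abstain below a threshold, and queue low-confidence questions for human review. This is not MIT’s model or any vendor’s API; the threshold, names, and scoring function are hypothetical placeholders for whatever your own tooling provides.

# Illustrative sketch only -- not MIT's system or a specific product.
# Pattern: attach a confidence estimate to each answer, abstain below a
# threshold, and escalate low-confidence cases to a human reviewer.
from dataclasses import dataclass
from typing import Callable, List

CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; tune per use case and risk level

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # hypothetical score in [0, 1] supplied by the model or a wrapper

def answer_with_escalation(
    question: str,
    model: Callable[[str], ModelAnswer],
    human_review_queue: List[str],
) -> str:
    """Return the model's answer only when confidence clears the threshold;
    otherwise abstain explicitly and queue the question for human review."""
    result = model(question)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.text
    human_review_queue.append(question)  # human-in-the-loop escalation point
    return (
        "I'm not confident enough to answer this reliably "
        f"(confidence {result.confidence:.2f}). A person will follow up."
    )

if __name__ == "__main__":
    # Stand-in model that returns a canned low-confidence answer for demonstration.
    def fake_model(question: str) -> ModelAnswer:
        return ModelAnswer(text="Refund policy is 30 days.", confidence=0.42)

    queue: List[str] = []
    print(answer_with_escalation("What is the refund window for bereavement fares?", fake_model, queue))
    print("Escalated questions:", queue)

The design choice worth noting is that the abstention message is shown to the user rather than hidden, which is exactly the kind of visible humility the article argues should become a point of pride.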

  • Chatbots at Work – Emerging Risks and Mitigation Strategies | voyAIge strategy

Chatbots at Work – Emerging Risks and Mitigation Strategies

How to Recognize and Overcome the Invisible Risks of AI

By Tommy Cooke

Nov 22, 2024

Key Points:

Personal AI chatbots in the workplace can pose significant risks to data privacy, security, and regulatory compliance, which can lead to severe legal and reputational consequences

Employees using personal AI tools can inadvertently expose proprietary information, increasing the risk of intellectual property breaches and confidentiality violations

Organizations can mitigate these risks through clear policies, employee education, and proactive monitoring, allowing for responsible AI usage without compromising security or creativity

AI is rapidly transforming where and how we work and play. As our Co-Founder Dr. Christina Catenacci deftly describes, AI chatbots have become commonplace friends, mentors, and even romantic partners. At a rate that surprises observers in virtually every industry, AI is creating incredible opportunities that are often fraught with challenges. AI services tend to be so quickly packaged and sold that subscribers do not find much space to reflect on fit, appropriateness, and potential blind spots that could cause misalignment in even the most well-intentioned organization.

As a result, a new kind of workplace is emerging. The remote and hybrid work models ushered in by the pandemic already seem like a distant memory now that employees are bringing their own personal AI into the office. In-pocket AI is appealing. Why wouldn’t an organization want its employees to benefit from improved workflows and creativity, especially if it doesn’t have to pay for it?

A critical dynamic in this new pocket-AI workplace reality is that employers are seeing new blind spots and challenges emerge. Understanding and navigating them is crucial for avoiding data leaks, maintaining compliance, and protecting intellectual property. As we head into 2025, organizations must take the time to recognize that invisible AI is unmanaged AI, and that it exposes an organization and its stakeholders to far-reaching consequences. By understanding these risks, organizations can address them in ways that not only protect the business but also position its executives as thought leaders capable of aligning values, building trust, and enhancing overall efficiency without compromising employee creativity and freedom.

The Risks of AI Chatbots

Data privacy, security, and compliance are top of mind for most employers we speak to. Because an employee’s personal AI chatbot requires constant internet access and cloud storage, that access is often facilitated by the employer’s Wi-Fi network. This increases the risk of corporate data being stored incorrectly on third-party servers or inadvertently intercepted and exposed. Personal AI chatbots in the workplace also raise exposure under industry regulations like the GDPR or HIPAA, significantly increasing an organization’s legal exposure to fines or penalties.

Many AI chatbot services train their models on the data their users provide them—and this can include sensitive intellectual property. Consider the following hypothetical prompt that a marketing employee at a pharmaceutical company might enter into their personal AI chatbot: “I have a client named [x] who has 37 patients in New York State with [y] medical conditions. They are born in [a, b, c, and d] years. Analyze our database to identify suitable drug plans.
Be sure to reference our latest cancer treatment strategy, named [this].”

First, the prompt may lead to privacy issues since it includes potentially identifiable information about patients, such as their location, medical conditions, and birth years. Depending on how an AI chatbot processes and stores this information, it could lead to violations of HIPAA: sharing protected health information (PHI) with an unapproved, third-party application puts the employer at risk of serious regulatory breaches, not to mention reputational damage. Moreover, patients’ identities have been incidentally reverse-engineered with far less data and through far more seemingly innocuous methods.

Second, the hypothetical prompt contains confidential information when it mentions the employer’s latest cancer treatment strategy. Strategic information related to drug plans or treatment approaches may be inadvertently referenced and/or suggested to competitors’ employees who are using the same AI chatbot.

Third, the hypothetical prompt incorrectly assumes that the AI chatbot has access to one of the company’s secure databases. Despite having uploaded a few protected PDFs to the AI chatbot, the employee had used the wrong terminology. The potential for this to cause problems is significant because it can trigger the AI chatbot to creatively but silently fill in the blanks; as we know, AI chatbots have a tendency to hallucinate. Remember, they do not reflect the living world; they analyze data models of the real world that you and your employees live in. The point is that the AI chatbot may generate misleading or inaccurate information simply because it is only as robust and comprehensive as the data it trains on. There is a significant risk that the employee recommends a drug plan to clients and colleagues that is based on flawed and incomplete health, medication, and business data.

Mitigation Strategies

AI is here to stay. The solution is not to ban or forbid these tools. That is unrealistic and may inadvertently cause friction for an employer when it decides to implement its own AI tools for employees down the road. Here are some proactive steps any organization can follow to minimize risks while enabling employees to use AI responsibly:

1. Build a Policy

Set expectations that outline what is and is not allowed when it comes to personal AI chatbots. Include rules about handling sensitive data, consequences for non-compliance, and standards for AI tool vetting. Moreover, create a one-stop guideline PDF that gives your employees the steps they should follow, along with examples of both problematic and approved prompts.

2. Educate your Employees

Training employees on AI risks and best practices ensures they understand their role in protecting your organization, not merely existing inside of it. Training is always the first line of defense, and it is a proven method for promoting awareness and responsible use of AI.

3. Monitor and Audit

Numerous security solutions exist to identify what tools are being used inside a company’s network. Implement systems to track AI tool usage and audit their data flows to identify unauthorized or high-risk applications. Inform your employees that you are monitoring AI-based network activity and will be conducting annual audits to ensure that their activity complies with organizational policy requirements. (A minimal sketch of what such a scan could look like follows below.)
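As a concrete illustration of the “monitor and audit” step, here is a minimal Python sketch that scans an exported web-proxy or DNS log for traffic to well-known consumer AI chatbot domains. The log format, file name, and domain watchlist are hypothetical placeholders; in practice you would work from whatever your existing security tooling already exports.

# Minimal sketch of the "monitor and audit" step: scan an outbound proxy or DNS
# log export for traffic to known consumer AI chatbot domains. The log format
# and domain list are hypothetical placeholders.
import csv
from collections import Counter

# Assumed watchlist -- extend with the services relevant to your organization.
AI_CHATBOT_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) pair for domains on the AI watchlist.
    Expects a CSV export with at least 'user' and 'domain' columns (assumed format)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_CHATBOT_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    log_path = "proxy_log.csv"  # hypothetical export from your proxy or DNS tooling
    try:
        for (user, domain), count in scan_proxy_log(log_path).most_common():
            print(f"{user} -> {domain}: {count} requests")
    except FileNotFoundError:
        print(f"No log found at {log_path}; point this at your own export.")

The output is a simple starting point for the audits described above: it tells you who is reaching which services and how often, which can then be compared against your policy rather than used to police individuals.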
Mindfully Embracing an Opportunity

The rapid proliferation of AI companions challenges organizations to rethink how they relate to their employees. Risks certainly exist, but they are manageable through thoughtful policies, regular monitoring, and a strong training commitment. Allowing employees to use personal AI chatbots isn’t merely a risk – it’s an opportunity. When that opportunity is embraced, it signals trust, adaptability, and a forward-thinking culture that responds proactively rather than reactively to AI. HR leaders, IT professionals, and virtually every executive can enable employees to innovate and create while simplifying tedious workflows through AI chatbots, and they can do so safely and in the organization’s favor. Consider doing so to show your organization, your employees, and your clients that you are ready for the rapidly evolving digital landscape ahead in 2025 and the years to come.
