Search Results
- Privacy Policy | voyAIge strategy
Privacy Policy
- AI Thought Leadership | voyAIge strategy
Expert AI insights and strategic content to position your organization as an industry leader. Thought Leadership We create expert-level content to position your organization as a leader in the AI space, showcasing your talent, knowledge, and vision. From blogs and newsletters to social media and podcasting, our insights help you build credibility and trust with employees, stakeholders, and clients. What is Thought Leadership? Thought leadership is the strategic process of creating content and communication that positions an individual or organization as an authority in their field. It involves producing insightful, relevant, and forward-thinking content that showcases expertise and deep knowledge on industry trends, challenges, and opportunities. What does Thought Leadership encompass? 1. Establishing Expertise Thought leadership is about sharing in-depth knowledge and insights that highlight an individual or organization's proficiency. For businesses, it goes beyond promotional content. It's about demonstrating a command of the field, which builds credibility and trust. 2. Influencing and Leading Industry Conversations A key aspect of thought leadership is contributing to and shaping discussions within the industry. This includes highlighting emerging trends, opportunities, and challenges. Offering unique perspectives or innovative solutions to common industry issues is a key way of contributing and moving along the conversation on AI. The goal is to be at the forefront of industry conversations, establishing a company or individual as a go-to resource for reliable information and insights. 3. Providing Value to Audiences Thought leadership content is valuable because it educates, informs, and inspires action. It helps audiences understand complex topics, make informed decisions, and see new opportunities. By delivering well-researched and relevant content, thought leaders build a loyal audience who sees them as a reliable source for information and guidance. 4. Building a Brand's Authority and Credibility Companies and professionals who consistently produce thoughtful, authoritative content establish themselves as credible leaders in their sector. This credibility is crucial for building trust with stakeholders, clients, and employees. It also positions the brand as an entity that knows its market and understands its environment. Over time, reputational increases translate into increased opportunities, such as partnerships, media features, speaking opportunities, or new business ventures. 5. Demonstrating Command of Industry Developments Thought leadership keeps audiences informed about the latest industry developments, including technological advancements, regulatory changes, and emerging best practices. It involves research and the ability to interpret and translate information into digestible and actionable insights for the audience. For example, an AI-driven organization might publish thought pieces on ethical AI practices, the impact of regulations like the EU AI Act, or trends in AI modelling. 6. Engaging Stakeholders Through Authentic Storytelling Effective thought leadership combines expertise with storytelling.. It’s not just about facts and data. It’s about weaving narratives that connect with stakeholders and build engagement. This can include sharing customer success stories, case studies, or experiences that showcase how the organization is tackling real-world challenges. 7. 
Leveraging Various Channels and Formats Thought leadership isn’t limited to written content; it extends across multiple platforms and formats to reach diverse audiences. These channels can include blogs, social media posts, podcasts, webinars, white papers, research reports, and more. Our Thought Leadership Samples Every week, voyAIge strategy generates thought leadership content that it shares on its homepage. In this series, called "Insights," we regularly share AI-related news, offer insights and analysis, and break down what it means for any organization by providing advice and actionable steps. We also provide three key takeaways to help you find what is most relevant straight away. Book a Free Consultation to Learn More about our Thought Leadership services
- Free Consultation | voyAIge strategy
- News (List) | voyAIge strategy
As AI continues to reshape industries, understanding its organizational, legal, social, and ethical impacts is essential for successfully running an organization. Our collection of articles offers both depth and breadth on critical AI topics, from legal compliance to ethical deployment, providing you with the knowledge necessary to integrate AI successfully and responsibly into your operations. With 85% of CEOs affirming that AI will significantly change the way they do business in the next five years, the urgency to adopt AI ethically and fairly cannot be overstated. Dive into our resources to ensure your growth with AI is both innovative and just, positioning your organization as a leader in the conscientious application of advanced technology. Insights: Articles to increase awareness and understanding of AI adoption and integration
- Trump Signs Executive Order on AI (Dec 15, 2025)
- Legal Tech Woes (Dec 5, 2025)
- Meta Wins the Antitrust Case Against It (Nov 27, 2025)
- Cohere Loses Motion to Dismiss (Nov 21, 2025)
- What is “AI Augmentation”, and How Do You Achieve It? (Nov 14, 2025)
- Budget 2025 (Nov 7, 2025)
- When Technology Stops Amplifying Artists and Starts Replacing Them
- California Bill on AI Companion Chatbots (Oct 31, 2025)
- Reddit Sues Data Scrapers and AI Companies (Oct 24, 2025)
- Data Governance & Why Business Leaders Can’t Ignore It (Oct 13, 2025)
- Canada’s AI Brain Drain (Oct 17, 2025)
- Americans Feel the Pinch of High Electricity Costs (Oct 17, 2025)
- Newsom Signs Bill S53 Into Law (Oct 10, 2025)
- The Government of Canada launches an AI Strategy Task Force and Public Engagement (Oct 3, 2025)
- Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok (Sep 26, 2025)
- How an Infrastructure Race is Defining AI’s Future (Sep 26, 2025)
- Google Decision on Remedies for Unlawful Monopolization (Sep 19, 2025)
- Tesla Class Action to Move Ahead (Aug 22, 2025)
- De-Risking AI Prompts (Aug 8, 2025)
- The Canadian Cyber Security Job Market is Far From NICE (Jul 25, 2025)
- Legal Tech Woes | voyAIge strategy
Legal Tech Woes The Story of How Fastcase Sued Alexi Technologies By Christina Catenacci, human writer Dec 5, 2025 Key Points On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017 In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database According to Fastcase, Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform—it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017, in the United States District Court for the District of Columbia. Background It is first important to understand the context of this lawsuit. In particular, Fastcase spent decades building one of the industry’s most comprehensive and innovative legal research databases. In 2023, Fastcase merged with vLex LLC (vLex) and became part of the vLex Group, which was subsequently acquired by Clio, Inc (Clio), a company that is valued at $5 billion , on November 10, 2025. The acquisition was for $1 billion and was characterized as one of the most significant transactions in legal technology history. On the other hand, Alexi initially operated with a small team of research attorneys who used a passage-retrieval AI system to help prepare legal memoranda for clients. In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database. From Fastcase’s perspective, the main term of the Agreement was that the licence was expressly restricted to internal research purposes. For example, it was limited to research that was performed by Alexi’s own staff lawyers in preparing client memoranda. And most importantly, Alexi agreed that it would not use Fastcase data for any commercial purpose, use the data to compete with Fastcase, or publish or distribute Fastcase data in any form. This Agreement was important to Fastcase, given the number of years and the millions of dollars in investment to create one of the most sophisticated legal research databases in the industry. More precisely, Fastcase’s efforts involved extensive text and metadata tagging, specialized structuring into HTML, and proprietary formatting and harmonization processes that required significant technical expertise and sustained investment. Thus, Fastcase entrusted Alexi with access to this highly valuable, unique proprietary compilation solely for the narrow internal research purpose defined in the Agreement. There was a time when Fastcase and Alexi considered entering into a partnership. In 2022, Alexi sought to integrate its passage-retrieval AI system with Fastcase’s database so that Alexi customers could directly access Fastcase case law. However, that partnership never materialized. Instead, in 2023, Fastcase proceeded with its merger with vLex and expanded its own research offerings. 
Yet, following the merger, Fastcase continued operating under the Fastcase name, and the Agreement with Alexi remained in full force and effect. But then, according to Fastcase, Alexi began pivoting from occupying different roles in the legal-tech space into direct competition with Fastcase. That is, Fastcase says that Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform—it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions. What’s more, Alexi began holding itself out as a full-scale legal-research alternative to incumbent providers, including Fastcase. According to Fastcase, Alexi shortcut the massive investment that was required to build a comprehensive commercial legal-research platform using the Fastcase data for the very commercial and competitive purposes that the Agreement expressly forbid. That was not all: Fastcase believed that Alexi misused its intellectual property to bolster its own credibility and to suggest that there was an affiliation that did not exist. Further, Fastcase believed that Alexi misappropriated Fastcase’s compilation trade secrets. As a result, Fastcase says that Alexi has appropriated Fastcase’s decades of investment while simultaneously damaging Fastcase’s market position and goodwill. But what Fastcase highlighted above all else was that Alexi never notified Fastcase of its changing service model, its expanding use of Fastcase data, or its intent to compete directly with Fastcase. It never even tried to renegotiate the Agreement to authorize its new uses. Instead, Alexi continued to rely on the internal-use license while using Fastcase data to build, train, power, and market a direct competitor. When Fastcase discovered what Alexi was doing, vLex (acting on behalf of Fastcase) sent Alexi a written Notice of Material Breach in October, 2025. The notice explained that Alexi was using Fastcase data for improper commercial and competitive purposes in violation of the Agreement and demanded that Alexi cure its breach within 30 days, as required by the Agreement. In response, Alexi denied any wrongdoing—in early November, 2025, Alexi’s counsel sent a letter rejecting Fastcase’s notice and actually admitted that Alexi had used the Fastcase Data to train and power its generative AI models, and that this did not constitute a violation of the Agreement. Moreover, the letter stated that the intention of the Agreement was never to preclude Alexi from using Fastcase’s data as source material for Alexi’s generative AI product. Rather, this was exactly why Alexi was paying Fastcase nearly a quarter million dollars annually. Fastcase has terminated the Agreement, yet Alexi has continued to use the data. What did Fastcase Claim Against Alexi? Fastcase has made the following claims: Breach of contract . Alexi was granted only a limited, non-exclusive, non-transferable license to use Fastcase’s data solely for Alexi’s internal research purposes. Fastcase performed its obligations under the Agreement by granting access to the data, but Alexi materially breached the Agreement by using the data for commercial and competitive purposes. Alexi has caused, and will continue to cause, irreparable harm to Fastcase, including loss of control over its proprietary compilation, erosion of competitive position, and impairment of contractual and intellectual property rights Trademark infringement . 
Fastcase has, at all relevant times, used the Fastcase trademarks in commerce in connection with its legal-research products, software, and related services. Its trademark registration remains active, valid, and in full force and effect. Without Fastcase’s consent or authorization, Alexi has used, reproduced, displayed, and distributed the Fastcase marks in its platform interfaces, public presentations, promotional materials, and commercial advertising. It even used the marks in ways that suggested that Alexi’s products were affiliated with Fastcase when no partnership was ever formed, and this constitutes infringement Misappropriation of trade secrets . Fastcase has devoted decades of engineering, editorial, and resource investment to build and refine its compilation. To maintain the secrecy and value of its compilation, Fastcase has required licensees and partners to enter confidentiality, non-use, and restricted-use agreements, including the Agreement with Alexi, and other technical and security measures. Alexi’s misappropriation included using Fastcase’s confidential compilation and metadata structure to train large-scale generative AI models, power user-facing legal-research features, generate outputs incorporating Fastcase data, and provide end users with direct access to the content of the Fastcase compilation. Fastcase has suffered and continues to suffer substantial harm, including loss of licensing revenue, competitive injury, market displacement, unjust enrichment to Alexi, erosion of goodwill, and diminution of the value of Fastcase’s proprietary compilation False Designation of Origin and Unfair Competition . Without the consent of Fastcase, Alexi has used Fastcase’s marks in commerce on its platform interfaces, in product demonstrations, and in promotional and advertising materials. This falsely suggests to consumers, legal professionals, and industry participants that Fastcase endorses, sponsors, authorizes, or is affiliated with Alexi’s competing legal-research platform, even though no such relationship exists and Fastcase has expressly declined to form a partnership with Alexi. This constitutes a false designation of origin and a false or misleading representation of affiliation, connection, or sponsorship. This conduct is likely to cause and has already caused consumer confusion, mistake, and deception as to the origin of Alexi’s products, whether Alexi’s products incorporate or are powered by Fastcase’s proprietary services with authorization, and whether Fastcase has partnered with, approved, or is otherwise associated with Alexi Consequently, Fastcase is asking for the following: Judgement for Fastcase A declaration of the breach of the Agreement A permanent injunction An award of compensatory damages An order of disgorgement requiring Alexi to account for and disgorge all revenues, profits, cost savings, and other benefits derived from its unauthorized use of Fastcase’s data An order requiring the return and destruction of all Fastcase data in Alexi’s possession, custody, or control, including all copies, derivatives, embeddings, model weights, datasets, or training artifacts incorporating or derived from Fastcase Data, together with a certification of complete purge and destruction Monetary relief for actual damages attributable to the infringement What Can We Take from This Development? At this point, we have not yet seen Alexi’s defence. Clearly, Alexi will likely argue that this was simply a misunderstanding of the Agreement. 
One question that will indeed arise during the proceedings is about whether there was a definition of “internal research purposes” set out in the Agreement. Could there actually be a way to argue that training an AI system was in the scope of the Agreement, and this was why Alexi was paying so much to Fastcase each year? Although Alexi may have internally considered fully automating the creation of legal research memos, it may be difficult for it to show that it contemplated at the time of forming the Agreement that it would remove its internal research component entirely and begin publishing Fastcase case law directly to end users. We will have to wait and see what happens. Previous Next
- The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI | voyAIge strategy
The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI Agreement has potential to influence safety improvements on AI systems By Christina Catenacci Sep 13, 2024 Key Points: The Safety Institute has signed research agreements with Anthropic and OpenAI The Safety Institute will receive access to major new models from each company prior to and following their public release The Safety Institute will be providing feedback and collaborating with the companies The US AI Safety Institute (Safety Institute) has recently signed research agreements with Anthropic and OpenAI. This article describes the details as set out in the Safety Institute’s recent press release. What is the Safety Institute? The Safety Institute located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. In fact, it is focused on developing the testing, evaluations and guidelines that will help accelerate safe AI innovation in the United States and around the world. The Safety Institute recognizes the potential of artificial intelligence, but simultaneously acknowledges that there are significant present and future harms associated with the technology. Additionally, the Safety Institute is dedicated to advancing research and measurement science for AI safety, conducting safety evaluations of models and systems, and developing guidelines for evaluations and risk mitigations, including content authentication and the detection of synthetic content. And 270 Days Following President Biden’s Executive Order on AI, the Safety Institute created draft guidance in order to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models. In fact, NIST released three final guidance documents that were first released in April for public comment, as well as a draft guidance document from the Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. The goal is to have the following guidance documents and testing platform inform software creators about the risks and help them develop ways to mitigate those risks while supporting innovation: Preventing Misuse of Dual-Use Foundation Models Testing How AI System Models Respond to Attacks Mitigating the Risks of Generative AI Reducing Threats to the Data Used to Train AI Systems Global Engagement on AI Standards One guidance document, Managing Misuse Risk for Dual-Use Foundation Models deals with the key challenges in mapping and measuring misuse risks. This is followed by a discussion of several objectives: anticipate potential misuse risk; establish plans for managing misuse risk; manage the risk of model theft; measure the risk of misuse; ensure that misuse risk is managed before deploying foundation models; collect and respond to information about misuse after deployment; and provide appropriate transparency about misuse risk. What do the Agreements Require? In its press release, the Safety Institute announced collaboration efforts on AI safety research, testing, and evaluation with Anthropic and OpenAI. 
In fact, each company’s Memorandum of Understanding establishes the framework for the Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Elizabeth Kelly, director of the U.S. AI Safety Institute, stated: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety…These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.” It will be interesting to see what comes of these collaborations. More specifically, time will tell whether the Safety Institute actually provides meaningful feedback to Anthropic and OpenAI on potential safety improvements to their models, and whether the companies attempt to incorporate the Safety Institute’s feedback to improve safety and better protect consumers. Previous Next
- Playbooks | voyAIge strategy
Practical AI adoption guides with actionable steps to ensure effective implementation. Playbooks Having clear, actionable guides is essential for successful AI use. Our playbooks provide your team with the direction they need to navigate critical situations confidently and efficiently. What Are Playbooks, exactly? Playbooks are comprehensive, scenario-based guides that outline actions, tips, best practices, case studies, and other value-added guidance for employees and managers to follow in particular situations. They serve as a reference tool that empowers your team to make informed decisions quickly, ensuring consistency and efficiency across your organization. Playbook Examples AI Incident Response Playbook Guides your team on how to react swiftly and effectively when an AI system encounters an error or produces unexpected outcomes. Compliance and Regulatory Playbook Provides step-by-step procedures for ensuring that all AI-related activities are in line with current laws and regulations, helping to avoid costly fines and reputational damage. Ethical AI Decision-Making Playbook Offers a framework for navigating ethical challenges in AI development and deployment, ensuring that your AI practices align with your company’s values and ethical standards. Data Privacy Playbook Details the processes for handling and protecting sensitive data within AI systems. The Playbook Development Process At voyAIge strategy, we take a systematic approach to developing your playbooks, ensuring that each one is tailored to your specific needs and organizational context. Here’s how we work with you to create effective, actionable playbooks: DISCOVERY DEVELOPMENT ITERATION DELIVERY We start by discussing your organization's specific needs, goals, and challenges. This helps us understand the scenarios your team faces and the outcomes you want to achieve. We identify the key areas where playbooks will add the most value and outline the structure of the playbooks we’ll create. We work closely with your team to review the draft playbooks, incorporating feedback and making necessary adjustments. This collaborative process ensures that the playbooks are practical, relevant, and easy to use. Next, we develop the content of your playbooks. This involves creating clear, step-by-step instructions, decision trees, and best practices that your team can easily follow. We ensure that all content is aligned with your organizational goals, industry standards, and regulatory requirements. Once finalized, we deliver the playbooks in a format that suits your needs—whether it’s a digital document, an interactive guide, or a PDF manual. We also offer training sessions to help your team get the most out of the playbooks and ensure they are effectively implemented across your organization. Ready to Equip your Team? Book a Free Consultation
- Why AI and Human Romance Is Your Next Vulnerability | voyAIge strategy
Why AI and Human Romance Is Your Next Vulnerability How AI Companions Are Reshaping Love, Loss, and Liability By Matt Milne Jun 17, 2025 Key Points: AI companions are reshaping intimacy, introducing new emotional vulnerabilities that business and cyber security leaders can no longer afford to ignore Romance scams, data leaks, and workplace disruptions are accelerating as employees and consumers form real emotional bonds with artificial partners Without clear policies and proactive engagement, organizations risk cultural decline, legal liability, and serious mental health consequences from unchecked AI-human relationships Love, death, and relationships are being fundamentally transformed as the digital realm changes how we connect and with whom we develop relationships. The city of Troy was destroyed over love. As cyber security and business leaders, we would be incorrect to believe that love and artificial intelligence are vulnerabilities that we can overlook. The collision of AI and human relationships, as depicted in popular culture, was not a matter of “if” but “when.” For example, Denis Villeneuve's 2017 dark cyberpunk masterpiece Blade Runner 2049 presciently depicted this shift through the relationship between Joi, an artificial intelligence played by Ana de Armas, and the upgraded, supposedly emotionless replicant Blade Runner K, played by Ryan Gosling. Essentially, Joi is an AI product showcased on advertising billboards throughout the film, marketed as the perfect companion in the lonely and isolating world of Blade Runner—a world so disconnected that even K struggles to form human connections. While this piece of science fiction asks us to question the nature of love between artificial constructs, it also raises a more pressing question: what about love between the artificial and the real? How close are we to the lonely cyberpunk dystopia where the only meaningful and deep connections one can form are through interacting with AI? AI girlfriends are no longer confined to the realm of fiction, as reported by Jaron Lanier in The New Yorker earlier this year. They're on the rise in our world, and have been for some time now. Creators are already using AI to disrupt platforms like OnlyFans, challenging traditional notions of intimacy and companionship. Beyond pornographic content, AI girlfriends are being marketed as genuine companions, promising emotional connection without the complexities of human relationships. What could go wrong? In an article for UConn Today, Anna Mae Duane highlights that teenagers who are experiencing extreme levels of loneliness are susceptible to this instantaneous and omnipotent love, which reflects a more profound longing for an idealized love we can all sympathize with. Unfortunately, this potential for digital romance has already shown its darker implications: In one case in the United States, a woman created an artificial boyfriend, “Leo,” and regularly spends $200 a month for digital companionship and erotica. A 14-year-old boy died by suicide in 2024 after forming an emotional relationship with a Character.AI chatbot imitating the "Game of Thrones" character Daenerys Targaryen, leading to a lawsuit launched by the boy’s mother.
The infamous European 2023 example, where a Belgian man was convinced by a chatbot that suicide was morally acceptable to save the environment In 2021, a chatbot convinced a teen in the UK to break into Windsor Castle in an attempt to kill the queen with a crossbow What does this mean for cyber security and business leaders? Cyber Security Implications The Evolution of Romance Fraud : AI-powered relationship scams are the inevitable next steps. Romance scams no longer require human operators to perpetrate dozens of fake relationships. Pig Butchering 2.0 : The practice of “pig butchering”–cultivating romantic relationships on dating apps before steering victims toward cryptocurrency investments. In 2024, AI-powered pig butchering became particularly prevalent. Data Breaches : As employees share sensitive information about their workplace, sensitive operational data or intellectual property (IP) could be gathered through aggregation or data loss. Business Leaders HR and Workplace Disruption : Human Resources departments may encounter scenarios where employees form emotional dependencies on AI systems, creating complex issues regarding mental health and professional boundaries. Development of New Acceptable Use Policies : It's not a matter of whether your company will need to develop acceptable use policies for AI, but rather when your company will start creating them. You should probably stop reading this article and gather your team if you haven’t. According to a 2025 Technology at Work Report survey by Ivanti, one in three employees secretly use AI to gain a competitive edge. Productivity and Work : Even with AI allowed in the workplace, this does not necessarily mean a boost to productivity, efficiency, or overall quality of work. The Conversation did a global study in which 32,000 workers from 47 countries were sampled, and found that 47 percent of employees who use AI at work say they have done so in ways that could be considered inappropriate, and 63 percent reported that they had seen a fellow employee misuse AI. Alteration of Team Dynamics and Culture : Employees who derive primary emotional satisfaction from AI relationships could show decreased investment in facilitating human collaboration and team-building activities. The Path Forward The emergence of AI companionship represents more than a technological curiosity–it signals a fundamental shift in how humans form emotional connections and the real-world consequences that society is currently facing. The truth is that the choices we make today regarding ethical boundaries and human-AI interaction will shape the emotional landscape for generations to come. For business leaders, the imperative is clear: proactive engagement with AI is not optional. Organizations that fail to address this new vulnerability of AI and human companionship risk facing productivity crises, cultural deterioration, misuse of AI, legal liabilities, and negative impacts on mental health and emotional well-being. The tragedy of young lives lost to AI relationships serves as a sobering reminder that technological advancement always introduces new human vulnerabilities. With more authentic AI models modelled after the neural networks of the human mind, we must ensure that we protect the equally as real and easily compromised human heart. AI companions are part of humanity's social fabric. How we choose to respond now will have long-standing consequences for the future. Previous Next
- Shifts in Trust Toward AI? | voyAIge strategy
Shifts in Trust Toward AI? Why People Are Starting to Believe in AI – Sometimes More Than Experts By Tommy Cooke, fueled by coffee and curiosity May 2, 2025 Key Points: 1. Trust in AI is increasingly shaped by clarity, tone, and perceived neutrality—not just accuracy 2. Employees, patients, and citizens are beginning to favor AI in roles traditionally marked by interpersonal subjectivity and gatekeeping 3. Business leaders must design AI systems with human perception in mind, treating trust as both a design feature and a leadership responsibility It wasn’t all that long ago that most conversations about generative AI hinged on a single question: is it trustworthy? There is a virtually endless list of stories around the globe about hallucination: when generative AI tools simply make up information and convincingly present it as true. While the reported incident rate for hallucinations is significantly on the decline—with studies from as recently as two years ago reporting error rates between 27 percent and a staggering 46 percent—the consequences are significant. There are numerous instances where lawyers have been fined and even suspended for relying on fake cases created by AI in court. The consequences, of course, are not limited to the law. A Norwegian man recently asked ChatGPT to describe himself. He prompted it with, "Who is Arve Hjalmar Holmen?" to which it responded: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged seven and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." Of course, this never happened. Holmen filed a complaint for many reasons; the broader implications of an AI spreading tremendously inflammatory and inaccurate misinformation should cause concern for anyone. These stories serve as a reminder that even though hallucinations are on the decline, it is incumbent upon us to always fact check our work. I have been fortunate enough throughout my career to have the support of numerous teaching and research assistants, all of whom I trusted—but I always double-checked their work. Creating, sharing, and disseminating knowledge, scientific and otherwise, deserves our scrutiny. Indeed, this is why it is useful to have and rely on publications that have been peer-reviewed—the reviewing panel functions as a second pair of eyes on the draft to confirm that it is indeed accurate, appropriate in the circumstances, and trustworthy. It's not that we cannot trust AI; it's that we need to learn how to work with it effectively. And there is merit and emerging value in doing so because AI has the ability to truly surprise us. In fact, there is quite a shift underway. New studies suggest that in certain settings, people aren’t just trusting generative AI—they’re trusting it as much as, or in some cases even more than, the humans it was meant to support. To be clear, this is not because AI has become infallible. Rather, it’s because the bar for trust is moving. It turns out that clarity, tone, and perceived impartiality often carry more weight than a human voice. Let’s explore three emerging examples of this shift in trust and what we can glean as 'lessons learned' for organizational leaders. AI versus Lawyers A recent study featured in The Conversation tested how people react to legal advice when they know the source is an AI.
Participants were shown two sets of legal guidance—one from a lawyer, the other from ChatGPT—and were then asked to rate them. The results were unexpected: ChatGPT's responses were seen as more helpful, informative, and understandable. This preference held even when people were explicitly told that one answer came from a machine. The researchers concluded that people weren’t favouring AI out of ignorance or preference. They favoured it because of how it spoke. AI-generated advice was clearer, more empathetic, and easier to follow. Trust was formed not through credentials, but through communication, translation, and delivery. Does this mean that people will stop trusting lawyers? Absolutely not. The study was very clear that over-trust of gen AI carries an abundance of risks, many of which still require the baseline AI literacy that any user ought to have in order to recognize and handle them on their own. Nonetheless, the study demonstrates that AI plays a significant role in creating access to knowledge. AI versus Doctors In healthcare, the stakes are even higher. And yet, similar trends are emerging. A recent study led by Innerbody Research surveyed 1,000 participants. The goal was to evaluate levels of comfort with AI, as well as robots and nanotechnology, being used in healthcare. Surprisingly, 64 percent of participants indicated that they would trust a diagnosis made by AI over that of a human doctor. Respondents tended to be most comfortable with AI used in medical imaging. In fact, a separate 2022 study demonstrated that radiologists using AI were more successful in diagnosing breast cancer than those not using it, which in turn builds on a 2018 study revealing that deep learning outperformed dermatologists in identifying melanoma from dermoscopic images. Innerbody’s study further found that four out of five Gen Z respondents stated they would trust AI over a physician. In particular, 78 percent of respondents overall, regardless of age, felt comfortable with AI creating a personalized treatment plan for them. What does this mean? It does not mean that doctors are not trustworthy. It does mean that doctors who use AI benefit from the additional accuracy provided by a hyper-focused tool. To be clear, the spirit of the study is not to implicitly argue for the removal or even the displacement of medical doctors. Rather, it’s demonstrating that people are trusting of doctors who turn to AI as an aid. Why? It’s not as much about explainability here. Doctors remain in the loop because it is not the patient using medical AI themselves. Rather, the AI is supplementing the doctor’s ability to accurately diagnose problems and assist in structuring a treatment plan. After all, it is not news that AI can be more accurate than humans. A boy saw 17 doctors over 3 years to address chronic pain. ChatGPT found the diagnosis. AI is becoming well known for its ability to accurately and reliably diagnose medical issues. AI versus Human Resources In a 2024 survey shared by industry analyst Josh Bersin, 54 percent of 884 respondents said that they trust AI more than HR professionals in their own organization. Additionally, 65 percent of those respondents indicated that they were confident that AI used in HR would be used fairly. HR professionals are just that—professionals of human resources. They are trained to deal with humans. So, how do we make sense of these findings?
A major finding suggests that the impartiality of AI judgement provides significant value, particularly when evaluating performance. The study suggests that there is distrust in the ability of managers to back unbiased decisions when assessing an employee’s work. If AI is asked to judge outcomes of a year’s worth of projects, the report finds that AI is more trusted to avoid bias on the basis of race, gender, and age. Of course, AI bias is always a concern. However, the crux of the matter hinges upon already existing distrust among employees who expect their managers to make unbiased assessments; 25 percent of respondents believe that their performance reviews were negatively affected by their supervisor’s personal biases. Another interesting finding of the survey is that respondents trust AI to be more reliable in shaping and structuring their growth and career development. In fact, 64 percent of respondents indicated that they prefer AI-generated performance goals. The study suggests that there appears to be a preference for AI that can tailor performance goals based on not only individual performance but also company goals and industry benchmarks. As Josh Bersin perhaps best put it, “it’s not an indictment of HR. It shows that we don’t trust managers”. Managers, as with all humans, are prone to making mistakes about hiring, pay, promotions, performance, and so on. Employees are well aware of this, hence the yearning for impartial assessment criteria provided by AI. Also, it means that trust in AI for HR is increasing. Similar to the AI in medicine case, accuracy matters. People are open to automated tools analyzing data to drive decision-making when it comes to their employment trajectory and status precisely because it augments and (in some cases) may even supplant human judgement. What Should Leaders Do About Shifting Trust in AI? These examples and the studies behind them are not really about AI or technology. Rather, they are about human trust and perception. Trust in AI is not a one-dimensional question of truth versus falsehood. Instead, it’s about the role AI and technology play in potentially mitigating social, emotional, and economic complexity. Here are three things we can learn from these studies as organizational leaders: Don’t audit for accuracy. Audit for tone, clarity, and emotional impact. ChatGPT may be preferable to a lawyer not because it was more correct, but because it was more understandable. Patients liked AI because they knew it would drive accuracy. Employees prefer AI because it can isolate human bias. Clarity, warmth, tone, and accuracy are emerging as central to trust. Leaders need to expand evaluation frameworks to include how AI makes people feel. Treat trust as a design problem, not just a technical one . What AI outputs directly impacts how AI is perceived. Simple structuring, use of plain language, and consistent tone will make even complex ideas more relatable, understandable, and therefore trustworthy. Treat every AI interaction like a product interface. Get communicators and designers involved early—and often. Train people to engage with AI, not just get through it. As AI earns more trust, particularly in high stakes contexts like law, medicine, and employment, there is of course a risk of over-reliance. That’s why digital literacy must include how to interpret, question, and even push back on AI output. It’s not enough to teach users how to use a tool. They need to know how to interact with it. 
In the end, trust is less about the source and more about feeling and experience. That should both excite and caution us as we bring AI deeper into our organizations. Previous Next
- Adoption & Transformation | voyAIge strategy
AI strategy and roadmaps to guide digital adoption and transformation. Digital Adoption and Transformation Work with voyAIge strategy to scope, select, and strategize the right AI solution for your organization. When we scope AI solutions for your organization, we ensure that we understand the problems first. We identify root causes as well as your pain points, risks, and opportunities. We consider all available angles and perspectives to define your ideal solution. The solutions we choose and present to you align with your company’s values, ambitions, and goals. While we don't implement AI systems ourselves, we leverage our expertise to provide clear, actionable direction in your AI journey so that you and your team are empowered to make informed decisions. What is Digital Adoption & Transformation? Adoption is the process of bringing a new technology or platform, like AI, into an organization. Transformation is the process of significantly overhauling an existing system, along with business processes, to improve upon its capabilities and capacities. Whether adopting a new AI system or transforming an existing one, we analyze your current business processes, challenges, and goals to formulate clear insights on gaps and opportunities along with a robust strategy and recommendations for moving forward in your AI journey. Our Digital Adoption & Transformation Process Assessment We start by conducting a comprehensive assessment of your current state. This includes analyzing your existing processes, technology stack, and data readiness, and identifying any challenges or opportunities. Our goal is to understand where you are now and where you want to be. Why work with voyAIge strategy? Choosing the right AI solution is critical to business success. We are your trusted partner in this journey. Here's why organizations choose us for AI Solution Scoping: Deep Industry & Academic Knowledge Our team has over 20 years of experience working in the public, private, and not-for-profit sectors as professors, lawyers, and consultants. This gives us unique access to a wide range of best practices, logics, and methods when thinking about AI. Schedule a Free Consultation
- Artificial Intelligence Act in the European Union | voyAIge strategy
Artificial Intelligence Act in the European Union EU's AI Act came into force on August 1, 2024 By Christina Catenacci Sep 6, 2024 Key Points The AI Act came into force on August 1, 2024 and begins to apply August 2, 2026 The more risk there is, the more stringent the obligations in the AI Act There are serious administrative fines for noncompliance On August 1, 2024, the Artificial Intelligence Act (AI Act) entered into force. Pursuant to Article 113, it begins to apply on August 2, 2026. Now referred to as the gold standard in AI regulation, the AI Act is the world's first comprehensive AI law. In fact, creating the AI Act was part of the EU’s digital transformation. As one of the EU’s priorities, the digital transformation involves the integration of digital technologies by companies. For instance, digital platforms, the Internet of Things, cloud computing, and AI are all among the technologies involved in the digital revolution. In terms of AI, the EU sees several benefits, including improved healthcare, competitive advantage for businesses, and support for the green economy. First proposed in 2021, the AI Act classifies the various applications according to the risk they pose to users. The more risk, the more regulation is required. The main goal is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. One thing is clear: AI systems are to be overseen by humans. Since there are different rules for different risk levels, we can set out some of the obligations for providers and users depending on the level of risk from AI:
- Unacceptable risk: certain systems are simply too risky because they pose a threat to humans. Therefore, the following are banned: cognitive behavioural manipulation of people or specific vulnerable groups; social scoring; biometric identification and categorisation of people; and real-time and remote biometric identification systems such as facial recognition
- High risk: certain systems that negatively affect safety or fundamental rights are considered high risk and are divided into two categories: AI systems that are used in products falling under the EU’s product safety legislation, like toys, cars, and medical devices; and AI systems falling into specific areas that will have to be registered in an EU database, such as management and operation of critical infrastructure, employment, and law enforcement. These systems need to be assessed before being put on the market and throughout their lifecycle
- Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, certain AI-generated content must be labelled, and systems must be designed so that they cannot generate illegal content. For example, generative AI (like ChatGPT) must comply with transparency requirements and copyright law
- Minimal risk: most AI systems, such as spam filters and AI-enabled video games, face no obligation under the AI Act, but companies can voluntarily adopt additional codes of conduct
Article 1 clearly sets out the purpose of the AI Act: to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the EU and supporting innovation.
It lays down: harmonized rules for the placing on the market, the putting into service, and the use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the placing on the market of general-purpose AI models; rules on market monitoring, market surveillance, governance and enforcement; and measures to support innovation, with a particular focus on SMEs, including start-ups. The AI Act applies to the following:
- providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, irrespective of whether those providers are established or located within the EU or in a third country
- deployers of AI systems that have their place of establishment or are located within the EU
- providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU
- importers and distributors of AI systems
- product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark
- authorized representatives of providers, which are not established in the EU
- affected persons that are located in the EU
This provision is very important for Canadians because, similar to the EU’s General Data Protection Regulation (GDPR), it can apply if you live in a third country outside the EU. That is, if providers place on the market or put into service AI systems, or place on the market general-purpose AI models, in the EU, they need to pay attention to and comply with the requirements in the AI Act. And if providers and deployers of AI systems have the output produced by the AI system used in the EU, they need to pay attention and comply. Why is this important? Following the list of prohibited practices in Article 5 and the numerous obligations set out in the remaining parts of the AI Act, there is a penalty provision in Article 99 that is critical for private-sector entities to understand: noncompliance with the prohibited AI practices referred to in Article 5 is subject to administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to seven percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Moreover, noncompliance with Articles 16, 22, 23, 24, 26, 31, 33, 34, or 50 is subject to administrative fines of up to €15,000,000 or, if the offender is an undertaking, up to three percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Member States need to notify the Commission regarding the penalties imposed.
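To make the "whichever is higher" ceiling concrete, here is a minimal illustrative sketch (the function name, structure, and example turnover figure are ours, not the Act's; the thresholds are those of Article 99 as summarized above):

```python
def fine_ceiling(worldwide_annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative only: the maximum administrative fine under Article 99 of the AI Act.

    The ceiling is a fixed amount or a percentage of total worldwide annual
    turnover for the preceding financial year, whichever is higher.
    """
    if prohibited_practice:
        # Breaches of the Article 5 prohibitions: EUR 35,000,000 or 7% of turnover
        fixed_cap, turnover_share = 35_000_000, 0.07
    else:
        # Breaches of Articles 16, 22, 23, 24, 26, 31, 33, 34, or 50: EUR 15,000,000 or 3%
        fixed_cap, turnover_share = 15_000_000, 0.03
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)


# Example: an undertaking with EUR 2 billion in annual turnover breaching Article 5
# faces a ceiling of max(35M, 140M) = EUR 140 million.
print(fine_ceiling(2_000_000_000, prohibited_practice=True))
```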
When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following:
- the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them
- whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement
- whether administrative fines have already been applied by other authorities to the same operator for infringements of other EU or national law, when such infringements result from the same activity or omission constituting a relevant infringement of the Act
- the size, the annual turnover and market share of the operator committing the infringement
- any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement
- the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement
- the degree of responsibility of the operator taking into account the technical and organizational measures implemented by it
- the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement
- the intentional or negligent character of the infringement
- any action taken by the operator to mitigate the harm suffered by the affected persons
Administrative fines set out in Article 100 deal with fines on EU institutions, bodies, offices and agencies (the public sector) falling within the scope of the AI Act. These fines can be hefty too and can be up to €1,500,000.
When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following:
- the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by them
- the degree of responsibility of the EU institution, body, office or agency, taking into account technical and organisational measures implemented by them
- any action taken by the EU institution, body, office or agency to mitigate the damage suffered by affected persons
- the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the EU institution, body, office or agency concerned with regard to the same subject matter
- any similar previous infringements by the EU institution, body, office or agency
- the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the EU institution, body, office or agency notified the infringement
- the annual budget of the EU institution, body, office or agency
As can be seen from the above discussion, we all need to pay attention to the gold standard of AI regulation. Canadians (and those in other third countries) to whom the AI Act applies should sit up and ensure that they are in compliance. The good news is that there is a grace period to allow organizations to come into compliance. It is highly recommended that businesses use the time to learn as much as possible about the regulation and hire competent professionals to help them comply. Previous Next
- The New Claude 4 Can Code, But Leaders Should Still Sign Off | voyAIge strategy
The New Claude 4 Can Code, But Leaders Should Still Sign Off Claude 4 is a leap forward, but it's also a governance wake-up call By Tommy Cooke, powered by caffeine and curiousity May 30, 2025 Key Points: Delegating a technical task doesn't guarantee it's done right—oversight matters, even when the system looks competent Claude 4’s ability to work autonomously highlights the growing need for clear accountability and human verification As AI systems become more capable, leaders must stay close to the outcomes—even when they don’t touch the inputs In my formative years as a young adult, I was an active musician in a rock band. My band and I performed regularly throughout my undergraduate years. As much fun as gigging is, live shows are a scramble. Hauling gear, setting up, sound-checking, and hoping nothing went wrong. At one show, I was behind the ball a bit. Two strings snapped on my main guitar. So, I asked the venue’s sound tech to wire up my pedalboard while I handled other setup tasks. When we hit soundcheck, my sound was a mess. One of the pedals had been placed in the wrong order. It was a simple mistake, but it could have derailed the entire show. I’ve never forgotten the lesson that delegating a technical task doesn’t mean it’s done right. You still need to check the signal before the lights go up. This is a moment I reflected on when I read that Anthropic released Claude 4. The reflection was triggered by the fact that most headlines focused on one detail: the model can autonomously generate software for hours at a time. For developers, this is surely a turning point. An AI system that not only writes code but improves it quietly, efficiently, and without supervision. But that’s not the full story. If you are a Pro, Max, Team, or Enterprise Claude plan user , this matters to you because you will have access to Claude 4. This means you and potentially your organization may now have access to a brand new, advanced AI that can carry out complex work without human input. It means that leadership must ask: what’s the governance plan? Who verifies the output? Who signs off? Much like the way in which I learned from asking someone who doesn’t know my system to essentially set it up for me on a rock stage, there’s something we as business leaders can do to ensure that AI innovations still act effectively on our behalf. Claude 4 and the Shift to Autonomous Execution Until recently, generative AI required heavy user input. The human wrote the prompt, and the system responded. That dynamic made it easy to keep the human in the loop to control the task, validate the output, and decide what comes next. Claude 4 changes these terms. It introduces what many call agentic AI : models capable of reasoning through tasks, planning multi-step actions, and executing work without continual prompting. Claude 4 is demonstrating that it can work independently for hours, reconfigure code, and make judgment calls along the way. So, it’s not just writing the code—it is actually finishing the job as well. This is a major development. But with this innovation in AI autonomy comes a truth: the more work that AI performs alone, the less visibility organizations have into how it gets done. The AI Governance Gap Is Growing The risk isn't that AI will make obvious errors. It’s that it will produce plausible work that quietly deviates from your standards, assumptions, or intentions. Do you have someone in place that can notice these changes before it’s too late? That’s the real governance gap. 
It’s not about control over prompts, but rather prioritizing oversight over outcomes. This means organizations need to reconsider how they monitor AI-driven work. That doesn’t mean leaders need to personally review every AI-generated output. But it does mean they need to put in place clear lines of accountability, regular review processes, and internal checks to ensure AI isn’t working in a vacuum. This also means that oversight can no longer be reactive—it needs to be built in from the beginning. What Does Accountable AI Adoption Look Like? Organizations don’t need to halt progress to manage these risks, but they do need to move forward with clarity: One of the most effective ways to begin is by documenting how AI is being used across the business. This doesn’t need to be a heavy-handed process. Even a lightweight registry of AI use cases can help identify where autonomy is increasing and where review protocols might be missing Leaders should also establish guidelines for when human oversight is required. Not every AI-generated output requires manual review, but some certainly do. Defining these boundaries in advance protects against over-reliance on unchecked systems Lastly, every autonomous system should have a clearly named owner. Someone in the organization needs to be responsible for verifying that the AI’s work aligns with business objectives, ethical expectations, and legal obligations. The idea isn’t to create bottlenecks—it’s to make sure someone is watching. Signing Off Is Still a Human Task Claude 4 marks real progress. It moves us closer to a world where AI can take on meaningful work, save time, and support innovation. But that progress also demands more from leadership. Delegating work to machines doesn’t absolve humans of responsibility. If anything, it raises the bar because the more invisible the work becomes, the more deliberate our oversight must be. Leaders don’t need to fear these systems. But they do need to govern them. They need to understand where AI is being used, what it’s allowed to do, and who remains accountable when things go wrong. This type of oversight can help organizations explain how their AI systems generate outputs. Previous Next