
Search Results


  • Budget 2025 | voyAIge strategy

    Budget 2025 Canada’s Plans for AI By Christina Catenacci, human writer Nov 7, 2025

    Key Points

      • On November 4, 2025, the Government of Canada released Budget 2025: Canada Strong
      • The federal government made several proposals to invest in AI and quantum computing
      • One of the first things that Canadians will likely see is fresh feedback on how the consultations went, along with an update on the status of the development of the AI Strategy

    On November 4, 2025, the Government of Canada announced the release of Budget 2025: Canada Strong. Generally speaking, the federal government plans to transform Canada’s economy from one that is reliant on a single trade partner to one that is stronger, more self-sufficient, and more resilient to global shocks. Essentially, Canada has just delivered an investment budget: the goal is to spend less on government operations and invest more in the workers, businesses, and nation-building infrastructure that will grow the economy. More specifically, the budget includes a total of $60 billion in savings and revenues over five years, and makes generational investments in housing, infrastructure, defence, productivity, and competitiveness. These strategic investments will enable $1 trillion in total investments over the next five years through smarter public spending and stronger capital investment. Budget 2025 rests on two fiscal anchors:

      • Balancing day-to-day operating spending with revenues by 2028–29, shifting spending toward investments that grow the economy
      • Maintaining a declining deficit-to-GDP ratio to ensure disciplined fiscal management for future generations

    Indeed, Budget 2025 was passionately delivered by The Honourable François-Philippe Champagne, Minister of Finance and National Revenue.
    He noted that these are difficult times, but we need to rest assured that the government will not back down, will be there for Canadians now and for as long as it takes, and will do what Canadians do best in times of need—we look after each other and help each other. He stated, “That’s the Canadian way, our way!” That said, he acknowledged that meeting this challenge requires both ambition and discipline. To mark the day, Champagne even had some shoes made for the occasion: they were made by Canadians for Canadians to hammer home the point that we need to be our own best customers. We cannot forget the ending of the speech: “Long live Canada!” The entire budget is a lengthy document; this article deals with what the budget has articulated with respect to Canada’s plans for AI.

    What Does Budget 2025 Say about AI?

    Canada wants to seize the full potential of AI. The purpose is to create opportunities for millions of Canadians, businesses, and the economy. Budget 2025 will facilitate the creation of AI compute infrastructure, including the development of a Sovereign Canadian Cloud. Ultimately, AI will help to create new jobs and economic growth. It is not only about AI: the government plans to allocate funds to foster innovation in both AI and quantum technologies. More precisely, Budget 2025 aims to:

      • Provide $925.6 million over five years, starting in 2025-26, to support a large-scale sovereign public AI infrastructure that will boost AI compute availability and support access to sovereign AI compute capacity for public and private research. The investment will ensure that Canada has the capacity needed to be globally competitive in a secure and sovereign environment. Of this amount, $800 million will be sourced from funds that were previously provisioned in the fiscal framework. This means that $800 million of the $925.6-million investment will come from funds that were set aside by the last federal budget, which announced a total of $2 billion to boost domestic AI compute capacity and build public supercomputing infrastructure
      • Enable the Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, to engage with industry to identify promising new AI infrastructure projects and enter into Memoranda of Understanding with those projects. Along the same lines, the government intends to enable the Canada Infrastructure Bank to invest in AI infrastructure projects
      • Allocate $25 million over six years, starting in 2025-26, and $4.5 million ongoing for Statistics Canada to implement the Artificial Intelligence and Technology Measurement Program (TechStat). TechStat will use data and insights to measure how AI is used by organizations, and understand its impact on Canadian society, the labour force, and the economy
      • Explore options for the National Research Council of Canada’s Canadian Photonics Fabrication Centre to best position it to attract private capital, scale its operations, and serve as a platform for Canadian innovation and new photonic applications, including in the face of the rise of AI and related compute infrastructure
      • Provide, through the Defence Industrial Strategy, $334.3 million over five years to strengthen Canada’s quantum ecosystem. It is important to note that computing problems that are currently considered intractable even with the most powerful classical computers could be solved using quantum computers
      • Enable Canada to unlock significant economic benefits through commercialising the associated intellectual property (IP) and being among the first to put it to use. For example, when it comes to IP, the budget plans to provide $84.4 million over four years, starting in 2026-27, to Innovation, Science and Economic Development Canada to extend the Elevate IP program, as well as $22.5 million over three years, starting in 2026-27, to renew support for the Innovation Asset Collective’s Patent Collective
      • Establish a new Office of Digital Transformation that will lead the adoption of AI and other new technologies across government. On top of that, there will be near-term procurement of made-in-Canada sovereign AI tools for the public service, which will lead to a more efficient government
      • Enable Shared Services Canada (SSC), the Department of National Defence, and the Communications Security Establishment to develop a made-in-Canada AI tool that will be deployed across the federal government. The goal is to facilitate the partnership between the SSC and leading Canadian AI companies to develop the internal tool

    As I wrote about here, the government announced in September 2025 the launch of an AI Strategy Task Force and a “30-day national sprint” (consultations) that will help shape Canada’s approach to AI. The government is set to develop a new AI strategy by the end of 2025. It will also consider whether new AI incentives and supports should be provided. Already, the government has decided to work with Cohere to use AI to improve the public service. In fact, the parties signed an agreement to set up an early-stage collaboration so that Cohere can identify areas where AI can enhance public service operations.

    What Can We Take from Budget 2025?

    As Champagne has highlighted, Canada is strong and has a lot going for it. AI and quantum computing are part of this. In the context of this investment budget, we see that the government has allocated significant resources to improve Canada’s AI and quantum computing posture.
    One of the first things that Canadians will likely see is fresh feedback on how the consultations went, along with an update on the status of the development of the AI Strategy. We can only wait and see if the above proposals will come to fruition.

  • Why the AI Chip Controversy Matters | voyAIge strategy

    Why the AI Chip Controversy Matters How Semiconductor Tensions Shape AI Strategy By Tommy Cooke, fueled by light roast coffee May 23, 2025

    Key Points:

      • AI strategy now depends as much on chip supply and trade stability as on internal capability
      • Semiconductor restrictions are fragmenting the global AI landscape, creating risks and perhaps some opportunities for business leaders as well
      • Business leaders must proactively monitor supply chains, policy shifts, and emerging markets to future-proof their AI investments

    The semiconductor tensions between the U.S. and China aren’t just about geopolitics. They reveal a deeper truth about the future of artificial intelligence. A semiconductor is a material (usually silicon) that conducts electricity under some conditions—and not others. This characteristic makes semiconductors ideal for controlling electrical signals. It is also why they are used as the foundation for microchips, which of course power everything from smartphones to cars and AI systems. In the case of AI, microchips are used in the processors that handle the massive calculations AI requires. You’ve probably heard of them: graphics processing units (GPUs) and tensor processing units (TPUs). Back to the controversy at issue: at its core, the controversy isn’t about semiconductors and microchips. It’s about who controls the speed, shape, and scale of AI innovation globally. For business leaders exploring AI adoption, understanding these supply-side dynamics is crucial. AI systems are only as powerful as the chips that run them, and those chips are subject to competition, trade restrictions, and access limitations. That means that today’s decisions around AI aren’t just about what tools to use. They’re also about where those tools come from, how stable the supply pipeline is, and whether your organization is prepared for the long-term implications of this shifting terrain.
    Simply put, if you are investing in AI now, the controversy may impact your ROI calculations.

    Understanding the Core of the Controversy

    At the heart of the issue lies the U.S. government's implementation of strict export controls on advanced AI chips. The intention is to limit China's access to cutting-edge semiconductor technology. These measures, including the recently rescinded AI Diffusion Rule, sought to categorize countries and restrict chip exports accordingly. Industry leaders, like Nvidia's CEO Jensen Huang, have criticized these policies as counterproductive. He argues that they not only diminish U.S. companies' market share, but they also inadvertently accelerate domestic innovation within China.

    Implications for the AI Landscape

    While the chip export restrictions may seem like merely a trade issue, they are already reshaping how and where AI systems are being built and deployed. These changes have ripple effects across industries, from vendor availability and cost structures to innovation cycles and long-term planning. Here are some of the most prevalent implications on the horizon:

      • Acceleration of Domestic Alternatives. The restrictions have spurred Chinese companies to invest heavily in developing local semiconductor technologies. This means that China is investing in a capacity for self-reliance, which could lead to the emergence of competitive alternatives to U.S. and European products.
      • Market Share and Revenue Impact. U.S. companies like Nvidia have experienced significant reductions in their Chinese market share, dropping from 95 percent to 50 percent over four years. These declines not only affect revenues, but they also influence global competitiveness and innovation leadership. On this point alone, we ought to pay close attention to Nvidia’s future ability to supply GPUs required for supporting U.S.-driven AI innovation.
      • Global AI Development Dynamics. Building from the previous point, the export controls may inadvertently fragment the global AI development landscape. This may, in turn, lead to parallel ecosystems with differing standards and technologies. This is what is referred to as a bifurcation: the division of something into two or more branches or parts, like a river that splits into two because of elevated terrain. A marketplace bifurcation may eventually encourage further self-reliance and innovation, but it will almost certainly complicate international collaboration and AI system interoperability at the same time. Partnerships and trust are under threat, to say the least.

    Strategic Considerations for Business Leaders in the Wake of the AI Chip Controversy

    This controversy is a warning sign. It reveals how AI adoption is no longer just about internal capability or budget. It’s also about navigating a volatile global landscape. Business leaders must now consider not only what AI tools can do, but also where those tools originate, whether future access will be reliable, and how international policy may affect ongoing AI strategies. As the supply side of AI becomes more political, leaders must become more strategic. Here are some tips that you should consider when internally canvassing the right fit, especially as a reflection of your ROI priorities:

      • Assess Supply Chains and Diversify. Assess and diversify your supply chains. It’s important to mitigate risks associated with geopolitical tensions and export restrictions. Who is selling? Where are they sourcing their solutions from? Where are your vendors’ data farms? Ask these questions now to avoid issues later.
      • Invest in R&D. To maintain a competitive edge, invest in research and development. Start now because it will become important later, particularly in areas less susceptible to export controls. The idea is to, at the very least, begin exposing yourself to an R&D process so that you can learn more about strategic AI-related investments downstream (no pun intended).
      • Monitor, Monitor, Monitor. The ever-changing regulatory landscape matters a lot here. Stay informed about evolving export regulations and international trade policies. It is essential for strategic planning, let alone compliance.
      • Explore New Markets. With certain markets becoming less accessible due to restrictions, identifying and cultivating alternative markets can help offset potential losses. Who are the emerging suppliers around the globe? Where are AI innovations specific to your industry and use cases growing? Expand your horizon.

    The AI chip export controversy is a reminder of the intricate balance between national priorities and global technological development. For business leaders, navigating this landscape requires awareness, agility, and informed decision-making. This is what a proactive approach looks like. Remember, AI adoption doesn’t happen in a vacuum. The semiconductor debate makes it clear that the tools we choose, and the ecosystems we rely on, matter more than ever.

  • News (List) | voyAIge strategy

    As AI continues to reshape industries, understanding its organizational, legal, social, and ethical impacts is essential for successfully running an organization. Our collection of articles offers both depth and breadth on critical AI topics, from legal compliance to ethical deployment, providing you with the knowledge necessary to integrate AI successfully and responsibly into your operations. With 85% of CEOs affirming that AI will significantly change the way they do business in the next five years, the urgency to adopt AI ethically and fairly cannot be overstated. Dive into our resources to ensure your growth with AI is both innovative and just, positioning your organization as a leader in the conscientious application of advanced technology.

    Insights: Articles to increase awareness and understanding of AI adoption and integration

      • Canada’s Innovation Crossroads Jan 16, 2026
      • New York Governor Hochul Signs AI Safety and Transparency Bill into Law Jan 23, 2026
      • Privacy Commissioner Investigation into Social Media Platform, X Jan 23, 2026
      • Trump Signs Executive Order on AI Dec 15, 2025
      • Legal Tech Woes Dec 5, 2025
      • Meta Wins the Antitrust Case Against It Nov 27, 2025
      • Cohere Loses Motion to Dismiss Nov 21, 2025
      • What is “AI Augmentation”, and How Do You Achieve It? Nov 14, 2025
      • Budget 2025 Nov 7, 2025
      • When Technology Stops Amplifying Artists and Starts Replacing Them
      • California Bill on AI Companion Chatbots Oct 31, 2025
      • Reddit Sues Data Scrapers and AI Companies Oct 24, 2025
      • Data Governance & Why Business Leaders Can’t Ignore It Oct 13, 2025
      • Canada’s AI Brain Drain Oct 17, 2025
      • Americans Feel the Pinch of High Electricity Costs Oct 17, 2025
      • Newsom Signs Bill S53 Into Law Oct 10, 2025
      • The Government of Canada launches an AI Strategy Task Force and Public Engagement Oct 3, 2025
      • Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok Sep 26, 2025
      • How an Infrastructure Race is Defining AI’s Future Sep 26, 2025
      • Google Decision on Remedies for Unlawful Monopolization Sep 19, 2025

  • Cohere Loses Motion to Dismiss | voyAIge strategy

    Cohere Loses Motion to Dismiss Copyright and Trademark Infringement Lawsuit Must Proceed By Christina Catenacci, human writer Nov 21, 2025

    Key Points

      • A large number of news publishers (Publishers) previously sued AI company Cohere for copyright and trademark infringement
      • Cohere brought a partial motion to dismiss the lawsuit, and it lost—on November 13, 2025, McMahon, J of the United States District Court Southern District of New York denied Cohere’s partial motion to dismiss the lawsuit
      • This dispute is one of more than 50 lawsuits currently before the courts challenging the use of copyrighted works by AI companies to train their large language models—each case depends on the circumstances, and we will have to wait and see

    Back in March 2025, I wrote about how several news publishers (Publishers) sued AI company Cohere for copyright and trademark infringement. As a refresher, the Publishers alleged that Cohere, without permission or compensation, used scraped copies of their articles, through training, real-time use, and in outputs, to power its AI service, which in turn competed with Publisher offerings and the emerging market for AI licensing. The Publishers accused Cohere of stealing their works to the point where actual verbatim copies were produced in outputs, and of blatantly manufacturing fake pieces and attributing them to the Publishers, which misled the public and tarnished their brands. What’s more, when RAG was turned off, the AI model, Command, hallucinated answers. Ultimately, the Publishers claimed that Cohere’s actions amounted to “massive, systematic copyright infringement and trademark infringement, and have caused significant injury to Publishers”. Well now, a new development has emerged—Cohere brought a partial motion to dismiss the lawsuit, and it lost. That is, on November 13, 2025, McMahon, J of the United States District Court Southern District of New York denied Cohere’s partial motion to dismiss Counts II, III, and IV of the Publishers' complaint.

    Why Did the Court Deny Cohere’s Motion?

    In this motion, the judge found the following:

      • Cohere's Motion to Dismiss the Publishers' Direct Copyright Infringement Claim was Denied. Cohere wanted to dismiss the Publishers' claim for direct copyright infringement only to the extent it alleged that Cohere was directly liable for generating "substitutive summaries" of the Publishers' work. In a nutshell, Cohere argued that though the Publishers could show they owned valid copyrights, Command’s summaries were not substantially similar to the Publishers’ works. The court disagreed with Cohere and concluded that the Publishers adequately alleged that Command's outputs were quantitatively and qualitatively similar—they argued that Command's output heavily paraphrased and copied phrases verbatim from the source article, and that these summaries went well beyond a limited recitation of facts. Also, the Publishers provided 75 examples of Cohere's alleged copyright infringement (50 allegedly included verbatim copies and a further 25 examples had a mix of verbatim copying and close paraphrasing)
      • Cohere's Motion to Dismiss the Publishers' Secondary Copyright Infringement Claim was Denied. The Publishers claimed that Cohere was secondarily liable for unlawfully reproducing, displaying, distributing, and preparing derivatives of the Publishers' copyrighted works under each of three theories: contributory infringement by material contribution, contributory infringement by inducement, and vicarious infringement. Cohere argued that all three theories failed, but the court held that each of Cohere’s arguments was without merit. In particular, the Publishers adequately alleged underlying direct infringement; the Publishers adequately alleged Cohere’s knowledge of direct infringement; and the Publishers adequately alleged inducement
      • Cohere's Motion to Dismiss the Publishers' Lanham Act (trademark) Claims was Denied. Cohere argued that the Publishers failed to allege use of their marks in commerce and a plausible likelihood of consumer confusion, and that Cohere's use of the marks was lawful as nominative fair use. However, the court disagreed with Cohere because the Publishers adequately alleged Cohere’s use in commerce; the Publishers adequately alleged a likelihood of confusion; and the nominative fair use doctrine did not apply on the facts of this case—"All I can and will do is conclude that the complaint adequately alleges facts that could, if proved, cause a trier of fact to reject application of that doctrine”

    Needless to say, the court was simply unconvinced by Cohere’s arguments and shot them all down. Since Cohere was unsuccessful, it will have to prepare for a trial.

    What Does This Mean for the Case?

    This is not good for Cohere. At this point in the case, it is striking that the Publishers put Cohere on notice that it was not allowed to do what it was doing—the Publishers had copyright notices and terms of service on their websites, and they also sent do-not-crawl instructions to Cohere’s bots using robots.txt protocols. In fact, the Publishers also claimed that Cohere kept unlawfully copying their works even after they sent a cease-and-desist letter. From the perspective of the Publishers, this motion went well; if this is an indication of what is to come later down the line, they will be content.

    What Does This Mean for the Other AI Infringement Cases?

    The court noted that this dispute is one of more than 50 lawsuits that are currently before the courts challenging the use of copyrighted works by AI companies to train their large language models. Some may view the decision as a foreshadowing of what could transpire in some of the other cases, but it is important to note that this was just one motion in one case; the decisions in those other cases will depend on their circumstances. We can only wait and see.
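    A brief technical aside on the do-not-crawl instructions mentioned above: robots.txt is a plain-text file that tells crawlers which paths on a site they may fetch. A minimal sketch of how such directives are interpreted, using Python's standard-library parser (the user-agent name "cohere-crawler" is an illustrative assumption, not the actual crawler string at issue in the case):

```python
# Sketch: how robots.txt "do-not-crawl" directives are interpreted,
# using Python's standard-library parser.
# The user-agent "cohere-crawler" is hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: cohere-crawler
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named bot is barred from every path; other crawlers are not.
print(parser.can_fetch("cohere-crawler", "https://example.com/articles/1"))  # False
print(parser.can_fetch("googlebot", "https://example.com/articles/1"))       # True
```

    Note that robots.txt is a voluntary convention rather than an access-control mechanism, which is partly why the Publishers paired it with copyright notices, terms of service, and a cease-and-desist letter.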

  • What Apple’s $500 Billion AI Investment Means to You | voyAIge strategy

    What Apple’s $500 Billion AI Investment Means to You The Continued Normalization of AI as Core Infrastructure By Tommy Cooke, powered by caffeine and curiosity Mar 3, 2025

    Key Points:

      • Apple's $500 billion AI investment signals that AI is shifting from an innovation tool to core business infrastructure
      • Increased AI integration in Apple’s ecosystem will shape consumer, employee, and investor expectations, pushing businesses to adapt
      • Businesses should focus on preparing for AI-driven shifts in consumer expectations, workforce dynamics, and regulatory landscapes

    Apple recently announced a $500 billion investment in AI. The moment is not merely a landmark in the technology world. It is also monumental for U.S. manufacturing, U.S. talent development, and the U.S.’s foothold in the global AI economy. This news is not merely a corporate push for technology. It’s a sign that AI is becoming intimately embedded in business infrastructure; quickly fading are the days of thinking of AI as merely an emerging, experimental tool. With AI-capable smartphones forecast to grow significantly over the next three years at a compound annual growth rate of nearly 63 percent, coupled with the fact that Apple accounts for more than half of the smartphone device market share in the U.S., business leaders need to recognize that their employees, investors, partners, and customers alike – the Apple device lovers in your professional and personal networks – will be interfacing with AI at unprecedented rates in the few short years to come. Whether your organization is adopting AI or not, here’s why Apple’s announcement matters to you.

    AI as Infrastructure, Not Merely Innovation

    For years, AI has been treated as an innovation driver or a business enabler, something that enhances products, streamlines workflows, or creates new capabilities. But with Apple’s recent announcement, a deeper reality is setting in: AI is increasingly recognized as an operational necessity. Apple’s announcement, which includes an AI server manufacturing facility in Texas and 20,000 new research and development jobs along with a new AI academy in Michigan, signals a broader shift—AI is no longer niche; it is foundational. This reclassification matters. Apple’s investment will push AI further into the mainstream. It is altering expectations for AI-readiness across multiple industries. Additionally, as Apple continues to integrate AI more deeply into its own ecosystem, more consumers, employees, partners, and investors will be regularly exposed to AI-driven interactions and functionalities. This broad exposure means that businesses need to be prepared for shifting human expectations of AI.

    AI Normalization and Business Implications

    As AI becomes more infrastructural, normalization will follow. What is important to recognize here is that this level of financial investment will create jobs, accelerate workforce transformation, and even generate a new AI training and research facility—this is about much, much more than declaring AI is crucial to the company’s internal operations. It will also significantly affect Apple’s external ecosystem by sending a very clear message about the value of AI. Here are three reasons why Apple’s investment matters to you:

      • AI is Becoming More Accessible. As AI infrastructure expands, smaller enterprises will have increased access to AI capabilities. This means even organizations without extensive tech teams must begin discussing AI integration and management.
      • Consumers and Employees Expect AI. With AI becoming more embedded in Apple’s ecosystem (through Siri advancements, AI-enhanced applications, and automated workflows), customer and employee expectations around AI-driven interactions will evolve as well. Businesses must anticipate and meet these new expectations. Remember, whether your leadership believes in AI or not, the people working with you and for you have ideas, dreams, and visions of AI making their jobs easier. AI will be an integral, core component of Apple devices moving forward. Accordingly, expectations will change.
      • Policy and Regulation Will Evolve. Large-scale AI investment has a high likelihood of accelerating regulation. As AI becomes a fundamental part of economic infrastructure, governments will refine legal frameworks around AI use, data privacy, and corporate accountability. While regulatory change is rather cumbersome in North America, it will be important to keep an eye on global regulators and civil society discourse, as there will be adjustments in the tone, frame, and focus of AI law and AI ethics concepts.

    The Takeaway: A Wake-Up Call for Businesses

    Regardless of whether your organization is navigating AI, it is important to start thinking about the relationship between you, your people, and their increasingly AI-driven Apple devices. Businesses should invest in AI literacy, establish decision-making plans, and, if they are on the cusp of integrating AI, lead the charge on the conversation. In this way, businesses will be more equipped to respond to the fact that people outside and inside their organizations are comparing their agility, creativity, and flexibility to new standards driven by AI models.

  • Closing the Gap: from Policies and Procedures to Practice | voyAIge strategy

    Closing the Gap: from Policies and Procedures to Practice Overcoming the policy/procedure-practice paradox requires focus and commitment By Tommy Cooke Sep 24, 2024

    Key Points:

      • Having AI policies doesn't automatically ensure ethical AI practices
      • Regular audits and cross-functional teams are crucial for aligning AI with ethical standards
      • Explainability and stakeholder engagement are key to responsible AI implementation

    Organizations pride themselves on having comprehensive AI policies and procedures. They show care and diligence, and signal to your staff and stakeholders that you take AI use and employee behaviour seriously as part of your business plan. However, AI policies and procedures don’t guarantee ethical AI. Even when multiple policies reference AI, there's often a gap between policy and procedures on the one hand, and practice on the other. This gap is a problem because it can catalyze unintended consequences and ethical breaches that undermine the very principles those policies are meant to uphold.

    The Policy/Procedure-Practice Paradox

    This problem is a paradox that is common in virtually every industry using AI. By paradox we mean a contradictory statement that, when investigated and explained, proves to be true. For example, say aloud to yourself, “the beginning is the end”. It sounds absurd, but when you think it through, it makes sense. This same phenomenon presents itself when thinking about “policies and procedures in practice”. Policies and procedures are documents, so how exactly are they practiced? The initial thought that a document practices anything is absurd. But when we read them, they guide how people ought to use and not use AI. The policy/procedure-practice paradox is a problem because failing to understand it means failing to address it. And in failing to address it, policies and procedures about AI often lead to broken and misinformed practices.

    Let’s consider a real-world example: despite a company having an anti-bias policy in place, a facial recognition system used in stores across the United States for over eight years exhibited significant bias. The system struggled to accurately identify people with darker skin tones, leading to higher error rates for certain demographics. This occurred because the AI was trained on datasets disproportionately representing lighter skin tones. And so, even well-intentioned policies can fail in practice. The example above is not isolated. It’s a symptom of a larger issue in AI implementation. While the example I provided was caused by biased data, there are several other reasons why the policy/procedure-practice paradox exists:

      • Lack of Explainability: many AI systems operate as "black boxes," making it difficult to understand their decision-making processes, even with transparency policies in place
      • Rigid Rule Adherence: AI systems may strictly follow their programmed rules without understanding the nuanced ethical priorities of an organization
      • Complexity of Ethical Standards: translating abstract ethical concepts into concrete, programmable instructions is a complex task that often leaves room for interpretation and error

    Closing the Gap

    To mitigate the paradox, we need to close the gap that often exists between AI policies and procedures and AI practices. Here are some strategies to achieve this:

      • Translate Policies into AI-Specific Guidelines: high-level policy language needs to be converted into actionable steps that can be implemented in AI systems. This translation ensures that AI operates on the same definitions of privacy, fairness, and transparency as the organization. Engage with your AI vendor to discuss how your policies can be integrated into the system's operations. Remember, AI systems often require fine-tuning to align with specific organizational needs.
      • Conduct Regular Audits: periodic reviews of AI systems are essential to ensure they're behaving in line with ethical standards. These audits should be thorough and look for potential blind spots. They’re also excellent at discovering and mitigating issues that an organization may have previously missed. Compare your system's training data with the data your organization provides. Analyze the differences and involve your ethics and analytics teams in prioritizing findings for policy amendments.
      • Build a Cross-Functional Ethics Team: bringing together technology champions, legal experts, and individuals with strong ethical compasses can provide a well-rounded perspective on AI implementation. Ensure this team regularly communicates with your AI vendor, especially during the implementation of new systems. When building this team, diversify it. As academics say, make it multidisciplinary, meaning the combination of professional specializations when approaching a problem.
      • Promote Explainability: as the Electronic Frontier Foundation has advocated for years, explainability is crucial when using AI. Why? If an AI system's decisions can't be explained, it becomes difficult for an organization to claim accountability for its actions. Work with your vendor to ensure AI models are interpretable. Position the right people to explain system outputs to anyone in your organization and verify that these align with your founding principles.
      • Engage External Stakeholders: as AI ethics expert Kristofer Bouchard recently argued, external perspectives, especially from customers, communities, and marginalized groups, are crucial when using AI. This is especially the case when it comes to identifying ethical blind spots. Regularly seek feedback from these groups when evaluating your AI systems. Their insights can be invaluable in uncovering unforeseen ethical implications.
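    To make the audit strategy concrete, one basic check is comparing a system's error rates across demographic groups, exactly the kind of disparity the facial recognition example exhibited. The following is a minimal, hypothetical sketch; the group labels, sample records, and ten-percentage-point threshold are illustrative assumptions, not an established audit standard:

```python
# Sketch of a bias-audit check: compute per-group error rates from a
# classifier's outputs and flag group pairs whose rates diverge.
# Group names, records, and the 0.10 threshold are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.10):
    """Return group pairs whose error rates differ by more than max_gap."""
    groups = sorted(rates)
    return [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Toy evaluation data: group_a is classified perfectly, group_b is not.
results = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
           ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0)]
rates = error_rates_by_group(results)
print(rates)                    # {'group_a': 0.0, 'group_b': 0.5}
print(flag_disparities(rates))  # [('group_a', 'group_b')]
```

    In practice, a check like this would run regularly over held-out evaluation data for each demographic slice, with flagged gaps routed to the ethics and analytics teams mentioned above for review and policy amendment.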
The Path Forward: Ongoing Oversight and Proactive Management Closing the gap between AI policy and ethical practice also requires keeping the gap shut. Unfortunately, it’s not as simple as closing a door once and for all: the gap can easily reopen many times during your AI journey. Closing it requires ongoing oversight, regular policy updates, and a commitment to aligning AI behavior with organizational values. Actively integrate the five strategies above, as doing so can significantly minimize risks associated with AI use. Being proactive not only ensures compliance with ethical standards but also builds trust with stakeholders and positions the organization as a responsible leader in AI adoption. Remember, in the world of AI, accountability and responsibility are critical. The power of these systems demands continuous vigilance and active management. By committing to this process, organizations can harness the full potential of AI while upholding their ethical principles and societal responsibilities.

  • Meta Refuses to Sign the EU’s AI Code of Practice | voyAIge strategy

Meta Refuses to Sign the EU’s AI Code of Practice A closer look at the reasons why By Christina Catenacci, human writer Jul 30, 2025 Key Points On July 18, 2025, the European Commission released its General-Purpose AI Code of Practice and its Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act Many companies have complained about the Code of Practice, and some have gone so far as to refuse to sign it—like Meta Businesses located in the European Union, and those outside it that do business with the EU (see Article 2 regarding application), are recommended to review the AI Act, Code of Practice, and Guidelines and comply Meta has just refused to sign the European Union’s General-Purpose AI Code of Practice for the AI Act. That’s right—Joel Kaplan, the Chief Global Affairs Officer of Meta, said in a LinkedIn post on July 18, 2025 that “Meta won’t be signing it”. By general-purpose AI, I mean an AI model that, including when trained with a large amount of data using self-supervision at scale, displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities. What is the purpose of the AI Act?
As you may recall, recital (1) of the Preamble of the AI Act states: “The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorized by this Regulation” The AI Act classifies AI according to risk and prohibits unacceptable risk like social scoring systems and manipulative AI. High-risk AI is regulated, limited risk has lighter obligations, and minimal risk is unregulated. The AI Act entered into force on August 1, 2024, but its prohibitions will be phased in over time. The first set of obligations, which took effect on February 2, 2025, bans certain unacceptable-risk AI systems. After this comes a wave of obligations over the next two to three years, with full compliance for high-risk AI systems expected by 2027 (August 2, 2025, February 2, 2026, and August 2, 2027 each carry certain requirements). Those involved in general-purpose AI may have to take additional steps (e.g., the development of Codes of Practice by 2025), and may be subject to specific provisions for general-purpose AI models and systems. See the timeline for particulars.
What is the Code of Practice for the AI Act ? The Code of Practice is a voluntary tool (not a binding law), prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models. More specifically, the specific objectives of the Code of Practice are to: serve as a guiding document for demonstrating compliance with the obligations provided for in the AI Act , while recognising that adherence to the Code of Practice does not constitute conclusive evidence of compliance with these obligations under the AI Act ensure providers of general-purpose AI models comply with their obligations under the AI Act and enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code of Practice to demonstrate compliance with their obligations under the AI Act Released on July 10, 2025, it has three parts: Transparency : Commitments of Signatories include Documentation (there is a Model Documentation Form containing general information, model properties, methods of distribution and licenses, use, training process, information on the data used for training, testing, and validation, computational resources, and energy consumption during training and inference) Copyright : Commitments of Signatories include putting in place a Copyright policy Safety and Security : Commitments of Signatories include adopting a Safety and security framework; Systemic risk identification; Systemic risk analysis; Systemic risk acceptance determination; Safety mitigations; Security mitigations; Safety and security model reports; Systemic risk responsibility allocation; Serious incident reporting; Additional documentation and transparency For each Commitment that Signatories sign onto, there is a corresponding Article of the AI Act to which it relates. In this way, Signatories can understand what parts of the AI Act are being triggered and complied with. 
For example, the Transparency chapter deals with obligations under Article 53(1)(a) and (b), 53(2), 53(7), and Annexes XI and XII of the AI Act. Similarly, the Copyright chapter deals with obligations under Article 53(1)(c) of the AI Act. And the Safety and Security chapter deals with obligations under Articles 53, 55, and 56 and Recitals 110, 114, and 115 of the AI Act. In a nutshell, adhering to the Code of Practice that is assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. The plan is that the Code of Practice will be complemented by Commission guidelines on key concepts related to general-purpose AI models, also published in July. An explanation of these guidelines is set out below. Why are tech companies not happy with the Code of Practice? To start, we should examine the infamous LinkedIn post: “Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act. Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe’s largest businesses signed a letter calling for the Commission to ‘Stop the Clock’ in its implementation. We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.” The post criticizes the European Union for going down the wrong path. It also talks about legal uncertainties, measures which go far beyond the scope of the AI Act, as well as stunting development of AI models and companies. There was also mention of other companies wanting to delay the need to comply.
To be sure, CEOs from more than 40 European companies, including ASML, Philips, Siemens and Mistral, asked for a “two-year clock-stop” on the AI Act before key obligations enter into force this August. In fact, the bottom part of the open letter to European Commission President Ursula von der Leyen, called “Stop the Clock,” asked for more simplified and practical AI regulation and spoke of a need to postpone the enforcement of the AI Act. Essentially, the companies want a pause on obligations on high-risk AI systems that are due to take effect as of August 2026, and on obligations for general-purpose AI models that are due to enter into force as of August 2025. By contrast, the top of the document is entitled “EU Champions AI Initiative”, with logos of over 110 organizations that represent over $3 billion in market cap and over 3.7 million jobs across Europe. In response to the feedback, the European Commission is mulling giving companies who sign a Code of Practice on general-purpose AI a grace period before they need to comply with the European Union's AI Act. This is a switch from the July 10, 2025 announcement that the EU would be moving forward notwithstanding the complaints. The final word appears to be that there is no stop the clock, or pauses, or grace periods, period. New guidelines also released July 18, 2025 In addition, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (Regulation EU 2024/1689)—right before the AI Act’s key compliance date, August 2, 2025. The goal is to help AI developers and downstream providers by providing clarification. For example, the Guidelines explain which providers of general-purpose AI models are in and out of scope of the AI Act’s obligations. In fact, the European Commission stated that “The aim is to provide legal certainty to actors across the AI value chain by clarifying when and how they are required to comply with these obligations”.
The Guidelines focus on four main areas: General-purpose AI models Providers of general-purpose AI models Exemptions from certain obligations Enforcement of obligations The intention is to use clear definitions, a pragmatic approach, and exemptions for open-source models. That said, the Guidelines consist of 36 pages of dense material that need to be reviewed and understood. For instance, the Guidelines answer the question, “When is a model a general-purpose AI model?” Examples are provided for models in scope and out of scope. What happens next? As we can see from the above discussion, there are serious obligations that need to be complied with—soon. To that end, businesses in the European Union, or those who do business in the European Union (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines to ensure that they are ready for August 2, 2025. After August 2, 2025, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations. Providers of general-purpose AI models that will be classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From August 2, 2026, the Commission’s enforcement powers enter into application. And by August 2, 2027, providers of general-purpose AI models placed on the market before August 2, 2025 must comply.

  • Compliance | voyAIge strategy

AI policies and frameworks to help your organization meet legal and ethical standards. Compliance At voyAIge strategy, compliance is a foundation for our analysis on the legal, policy, and ethical dimensions of AI. We understand the intricacies of the laws of many jurisdictions and can guide you through every step of your compliance journey. Today's rapidly evolving digital landscape is fueled by the exponential rate at which AI transforms not just business practices and ways of seeing but entire industries. However, with innovation comes new challenges - particularly in compliance. Governments and regulatory bodies around the world are racing to keep up. They are creating complex legal requirements with which businesses must comply. For businesses, navigating this complexity is not just about avoiding fines and penalties. It's about safeguarding reputation, building trust with stakeholders, and ensuring sustainability. What's Your Compliance Challenge? Understanding jurisdiction, sector, applicable legislation, data types, and data flows are some of the many considerations we take into account when identifying regulatory bodies relevant to your organization. The following are some examples that may apply to a business now or in the future: GDPR The General Data Protection Regulation applies to all businesses with clients and customers in Europe NYC LL 144 New York City regulates how businesses can and cannot use AI to assist in hiring employees AI ACT Regulates and governs the use of artificial intelligence inside the European Union FDA The US Food and Drug Administration has issued specific AI regulations governing medical devices, evidence curation, and market monitoring CCPA The California Consumer Privacy Act has a significant impact on how AI systems used in businesses handle data AIDA Canada's proposed Artificial Intelligence and Data Act would regulate high-impact AI systems across Canada OESA Governs employee monitoring as well as using AI to recruit employees in Ontario, Canada SB 1047 California's Safe and Secure Innovation for Frontier AI Models Act would impose safety restrictions on advanced AI PIPEDA Canada's Personal Information Protection and Electronic Documents Act governs how companies collect, use, and share personal information Did You Know? Fines can reach €35 million or 7% of global annual turnover for prohibited AI practices under the EU AI Act. Amazon was fined €746 million by Luxembourg’s data protection authority for how it processed personal data for targeted advertising using AI-driven systems. Canada's proposed Artificial Intelligence and Data Act would impose administrative monetary penalties of up to $10 million or 3% of global annual revenue - including criminal penalties such as jail time for AI decisions causing significant harm. Expert Insights, Experience & Resources Book a free consultation to chat with us about correctly and accurately identifying compliance and regulation applicable to your organization. Stay informed by subscribing to our VS-AI Observer Substack , where we offer articles, whitepapers, case studies, and video content that will keep your organization ahead of emerging compliance challenges, requirements, and issues. Book a Free Consultation

  • Partnership Opportunities | voyAIge strategy

    Collaborating with organizations to drive responsible and effective AI adoption. Partnership Opportunities We are passionate about partnerships and alliances. If your business or organization is interested in collaborating with us on business opportunities, CfPs, or other forms of co-operation for mutual benefit, we would like to hear from you. For all other inquiries, please contact us here . First Name Last Name Email Partnership Inquiry Details Send Thanks for submitting!

  • Governing AI by Learning from Cohere’s Mistakes | voyAIge strategy

Governing AI by Learning from Cohere’s Mistakes Why it is Crucial to Demonstrate Control By Tommy Cooke, powered by caffeine and curiosity and a strong desire for sunny weather Mar 7, 2025 Key Points: AI governance is essential because it ensures organizations maintain transparency, accountability, and oversight over how AI systems are trained, deployed, and used Leaders must proactively assess where AI models source their data, ensuring compliance with intellectual property laws and mitigating risks related to unauthorized content use We don’t govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control. This is how we build trust with stakeholders In my 20s, I spent a lot of time traveling to Germany to visit a close friend. He had an early GPS unit in his car. It was an outdated system that relied on CDs for updates and generated very blurry little arrows on a tiny screen nowhere near the driver's eyes. On one trip, we thought we were heading west of Frankfurt to a favourite restaurant. Instead, we ended up 75 kilometers south. We found ourselves sitting in his car at a dead end, with his high beams on, staring into a dark farmer's field. The system led us astray because we over-relied on old data inside a brand-new consumer technology. When I speak with leaders adopting AI for the first time, I often think of getting lost in the rural German countryside. AI, like early GPS, promises efficiency, but its reliability depends entirely on its data. Organizations are under pressure to adopt AI to streamline operations, reduce costs, and drive creativity. But AI isn’t a magic bullet. It’s a tool. And like an unreliable GPS, AI trained on flawed or unauthorized data can take your organization in the wrong direction. It relinquishes control. Cohere, a major AI company in Canada, is facing a significant lawsuit over how it trained its models; the suit alleges that Cohere used copyrighted content without permission or compensation.
This case is one you should know about because it's more than just a legal battle. It’s a reminder: AI adoption isn’t just about capability. It’s about building and maintaining responsible control. So, how exactly do you ensure you are in control? The answer begins and ends with an ethical strategy. The Ethical Fault Line The lawsuit against Cohere, a Toronto-based AI company, highlights the growing tension between AI developers and content creators. Major media organizations allege that AI companies are scraping and reproducing their content without consent or compensation. This raises a critical question: Who controls knowledge in the era of AI? This isn’t just a tech industry issue—it’s a governance challenge with real consequences. AI systems generate content, provide insights, and automate decisions based on their underlying data. If that data is sourced irresponsibly—such as using newspaper articles without publisher consent—organizations risk reputational harm, legal liability, and a breakdown of trust with employees, customers, and industry partners. Lessons for AI Leaders: How to Stay on the Right Side of AI Ethics As AI continues to reshape industries, its impact will depend on how it is developed and deployed. Business leaders don’t need to be AI engineers, but they do need to ensure that they are using AI transparently. Here's why: Transparency is the Foundation of Trust. AI should not be a "black box"—a technology that operates mysteriously without clear explanation. Leaders need visibility into how AI works, what data it uses, and what safeguards are in place. This means two things: first, working with AI vendors to receive clear documentation on data sources and model behaviour. If a company can’t explain how its AI makes decisions, that’s a red flag. Second, leaders need a communication strategy—something that they can reference to explain AI’s role to any stakeholder. Respect Intellectual Property from the Start.
Whether AI is used to generate content, analyze trends, or assist in decision-making, stakeholders expect AI leaders to account for where its data comes from. If an organization uses internal data from sales reports, for example, this needs to be documented. If outsourcing data from a third-party vendor, it’s not enough to say that the data is external—leaders must be able to confirm the vendor’s ownership and rights to that data. Governing AI Is Not Optional. Responsible AI use requires a governance framework. Companies need clear policies that define how AI is trained, where data comes from, and how the system and its outputs are monitored. Think of AI governance like driving a car: just as drivers follow traffic laws and speed limits, AI systems require rules to ensure safe and ethical operation. AI governance is a business strategy that demonstrates commitment to legal, compliant, and ethical AI development—ensuring transparency, explainability, and accountability. Ethical AI is an Advantage, Not a Financial Burden Drivers are expected to maintain control of their vehicle, to abide by the rules, and to ensure their own and others' safety; they build trust with their passengers and other drivers by continuously demonstrating that they are in control. The same holds true with AI ethics. We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control.
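The documentation discipline described above, knowing where each data source comes from and confirming the vendor's rights to it, can be sketched as a simple provenance record. This is a minimal, hypothetical illustration; the field names and the example sources are assumptions, not a prescribed schema.

```python
# Hypothetical sketch of a data-provenance record, illustrating the
# documentation discipline described above. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    name: str
    origin: str             # "internal" or "third-party"
    owner: str              # who holds the rights to this data
    rights_confirmed: bool  # has ownership/licensing been verified?
    confirmed_on: str       # date of the last verification, if any

def unverified_sources(records):
    """Return the names of sources whose rights have not been confirmed."""
    return [r.name for r in records if not r.rights_confirmed]

# Made-up examples: one documented internal source, one unvetted vendor feed.
records = [
    DataSourceRecord("sales_reports_2024", "internal", "Finance dept.", True, "2025-01-15"),
    DataSourceRecord("vendor_news_corpus", "third-party", "Unknown", False, ""),
]
print(unverified_sources(records))  # surfaces the vendor feed for follow-up
```

Even a register this simple gives leaders something concrete to point to when a stakeholder asks where the AI's data came from.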

  • De-Risking AI Prompts | voyAIge strategy

De-Risking AI Prompts How to Make AI Use Safer for Business By Tommy Cooke, fueled by caffeine and curiosity Aug 8, 2025 Key Points: Small, well-intentioned actions can quietly introduce risk when staff lack clear guidance and boundaries De-risking AI isn’t about restricting use. It’s about educating staff, adopting prompt training into workflows, and developing a support team Safe and effective AI use begins when leadership models responsible practices and builds a culture of clarity, not control Many moons ago, I was working with a data centre on a surveillance experiment. One of the interns was a motivated student. He was tasked with investigating third parties that we suspected were abusing access to sensitive location data within one of our experiment’s smartphones. Without telling anyone, the student sent sample data from our smartphone to an organization we were actively investigating. It was an organization whose credibility was under intense scrutiny for abusive data practices. The student wasn’t acting out of malice. They were trying to be helpful, to show responsiveness, to move the work forward. But they didn’t understand the stakes. To them, the data was “just a sample.” To us, it signaled loss of control and a risky alignment with an actor we hadn’t finished vetting. The problem wasn’t the intern. The problem was that we hadn’t taken the time to review and discuss contract terms—to find ways to guide interns on both the best practices and boundaries around their work. This is what prompting GPT looks like in many organizations today. Staff are often using AI to accelerate their work, lighten workloads, and inject some creativity into their craft.
AI is a tool that is attractive to staff for many reasons, so it is not surprising to us here at VS to hear that staff also turn to AI to respond to mounting work pressures; now that AI is available, executives increasingly expect their teams to work harder, faster, and better with it. But with less than 25 percent of organizations having an AI policy in place, and even fewer educating their staff on how to use AI, it’s not surprising both that how your staff use AI can be highly risky and that you are likely unaware of precisely what they are doing with it. To most organizations we speak to, this risk is entirely unacceptable. While we strongly advocate for having a robust AI policy in place, as well as training around that policy, let’s dive into what you can be doing to de-risk your organization’s AI use. De-Risking AI Is Not Just About Restricting Use Before we take a deeper dive, it’s important to address a common knee-jerk reaction among business leaders. There is a temptation to de-risk by locking down: restricting access to GPTs, blocking them at the firewall, or banning prompts that mention sensitive keywords. These reactions are just that: reactions. They are not responses because they are not planned, considered, and contextualized. They are rigid and inflexible, and as such, they often backfire. Just as important, they send a very clear message to your staff: AI is dangerous and not learnable. This pushes experimentation underground and creates a shadow use problem that’s harder to monitor or support. Instead, and as I mentioned earlier, the safer and more sustainable path is to educate, empower, and build clarity. It’s impossible to eliminate risk entirely, but you can reduce it by building good habits, providing effective guidance, and sharing an understanding of what safe prompting looks like.
What Team Leads and Business Owners Can Do If you lead a team or own a business, here are some steps you can take right now to start de-risking GPT use without killing its potential and promise: Create a prompt playbook. A living document that outlines safe and unsafe prompting practices, gives examples, and evolves over time. This could include do’s and don’ts, safe phrasing suggestions, and reminders about privacy, intellectual property, and any other related laws and policies relevant to the scenario at hand. It doesn't have to be long—it just has to be usable and user-friendly. Build training around real workflows. It’s quite common for organizations to bring in third parties to offer cookie-cutter training on how to use AI safely and effectively. Don’t do that. Abstraction doesn’t resonate on the front line, nor do we find it effective with executives. Bring in an organization that can offer training that reflects how your people actually use AI and the daily nuances of their work. Schedule prompt reviews. Designate an AI team. Task them with making it normal to collect, analyze, and assess how your staff talk to AI. Encourage them to ask questions like, “Is this a safe way to talk to AI?” We want to create a culture where prompt sharing and refinement is part of collaboration. Designate prompt leaders. Identify or train a few people, ideally within the aforementioned team, who can act as internal advisors on AI use. Not to gatekeep, but to support. Let staff know who to ask when they're unsure if a prompt might cause issues. Make it part of their job description and KPIs to lift up and support employees when they use AI. Develop internal guardrails. This is also something I discussed before, and something that Christina and I discuss ad nauseam in our articles. If you're using GPT through an API, platform, or organization-wide license, get AI policies in place.
Set rules, automate flags, or integrate prompt logging for sensitive areas like legal, HR, or R&D. Communicate the purpose. Let people know why prompting guidance and safe use matter. Use examples to show how good prompting helps them avoid mistakes and do better work, not just follow rules. Show the implications when things go wrong, then follow up by reassuring staff that you have contingency plans in place and that they shouldn’t be afraid to use AI if they follow their training. Signal leadership’s involvement. Executives and leaders should model good prompting habits, or at the very least acknowledge the importance of prompting. Lead by example, not just by word. The intern I mentioned earlier didn’t intend to create risk. The boundaries were drawn, but the intern was not familiar with them. We avoided damage to the project, but the risk was never about malice or recklessness. It was about misunderstanding what small mistakes could catalyze, especially when they go unrecognized by a staff member.
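To make the guardrail idea concrete, here is a minimal, hypothetical sketch rather than a production design: the pattern list, rule names, and logging destination are all assumptions. It screens a prompt against a few sensitive-content patterns and logs any hits so a prompt leader can review them before the prompt reaches an external model.

```python
# Hypothetical guardrail sketch: screen prompts for sensitive patterns
# and log flagged ones for review before they reach an external model.
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-guardrail")

# Patterns are illustrative only; a real deployment would tailor them
# to the organization's own policies (HR data, legal matters, secrets).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "legal": re.compile(r"\b(lawsuit|settlement|nda)\b", re.IGNORECASE),
}

def screen_prompt(prompt):
    """Return the names of any rules the prompt triggers; log the hits."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    if hits:
        log.info("Prompt flagged (%s); routing to a prompt leader for review.",
                 ", ".join(hits))
    return hits

print(screen_prompt("Summarize the NDA terms for jane@example.com"))
# flags the "email" and "legal" rules
```

Note that this flags and logs rather than blocks, which matches the advice above: the goal is visibility and support, not a lockdown that pushes AI use underground.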
