- voyAIge strategy | Data & AI Governance Specialists
Our governance solutions ensure your successful and safe use of data and AI. We are seasoned specialists in AI law, policy, and ethics. We make AI and people work.

voyAIge strategy (VS) helps organizations responsibly navigate AI through innovative strategic solutions, such as tailored policies, thought leadership content, and workforce and executive training. Our goal is to make AI and people work by bridging the gap between AI and people. We align AI use with real human needs and talents. Our solutions empower people to use AI confidently, safely, and effectively.
Co-Founders: Tommy Cooke and Christina Catenacci

Managed Data & AI Governance Services
Guidance, Oversight, and Leadership for your AI Journey
Our Managed Data & AI Governance Services give your organization the confidence to move forward with data and AI. Think of us as your virtual Chief Data or Chief AI Officer: someone on-call and working with you. We meet you where you are in your data and AI journey and provide the support services you need at an affordable monthly rate. From a tailored AI strategy and risk assessments to executive and staff training to communications planning, we help ensure your data and AI use is aligned with your goals and is safe, responsible, and built to last. Whether you are building your first use cases, integrating data across departments, or exploring risk mitigation, we provide the leadership and structure to make AI work for your people and your goals.
Our extensive experience in AI and related areas such as data governance, data privacy and security, intellectual property, and confidential information has helped us craft a simple yet reliable approach to guide you in your AI journey:

Data Governance: End-to-End Management
We partner with you to set up, maintain, and evolve the structures, roles, policies, and operations required to treat data as a trusted asset across the enterprise.
- Ecosystem Oversight: Define data ownership, steward roles, access controls, metadata management, and lifecycle processes
- Compliance & Risk Reduction: We ensure your data handling meets legal, regulatory, and ethical standards
- Operational Readiness: Equip your organization with the governance structure that enables reliable analytics, BI, and advanced data capabilities

AI Governance: Strategic AI Enablement
We help you govern AI within your business: from use-case identification, model selection, and deployment through to monitoring, ethics, and control frameworks.
- Use-case to Value: Identify high-impact AI opportunities, run pilots, operationalize them, and embed the capabilities in business processes
- Operational Readiness: Deploy AI governance frameworks, playbooks, and training so your teams act confidently and safely
- Compliance & Risk Reduction: We ensure your data handling and AI operations meet legal, regulatory, and ethical standards as well as industry best practices

Our Managed Data & AI Governance Services Deliver Benefits That Accelerate Safe and Successful Growth
- Clarity on strategy and direction: We help you set a focused, realistic AI roadmap
- Compliance and risk management: Your AI stays aligned with law and compliance
- Expertise without full-time cost: Access senior-level guidance at a fraction of the cost
- Support that grows with your needs: We adapt as your AI use evolves
- Faster, safer implementation: Avoid false starts with structured deployment
- Confidence across teams and stakeholders: Build trust in AI with clear guidance and communication

Most Organizations Encounter the Same AI Challenges
Most organizations encounter the same kinds of roadblocks when adopting AI. These challenges can stall progress, create risk, and leave teams overwhelmed or misaligned. VS provides solutions to address them:
- Fear of AI: Executives fear lost ROI as well as strategic or stakeholder misalignment. Employees fear replacement, uncertainty, extra work, and inadequate training. The solution is training.
- Inappropriate Use of AI: Executives worry about employee misuse, data leaks, and non-compliance, while employees often lack clarity on the rules and accidentally share sensitive information. The solution is AI policies.
- Lack of Preparedness: Organizations are unsure if they are ready for AI, lack budget clarity, and struggle to communicate effectively with stakeholders. The solutions are thought leadership and stakeholder engagement.
- No Leadership: Organizations often do not have an internal AI expert. There may be no AI direction, no coordination between departments, and no one in charge of decision-making. The solution is VS's Managed AI Services.
- No Strategy: Leaders do not know which AI tools are the right fit, or they are overwhelmed by options. There is no roadmap or strategy for AI adoption, nor is there a change management plan in place. The solution is adoption and transformation.
- Too Many Questions: "Where do we start?" "Do we need a plan?" "Is AI worth the investment?" "What AI do we need?" The solution is the AI Helpdesk.

Strategic and Critical Insights to Guide your AI Journey
- Canada's Innovation Crossroads
- New York Governor Hochul Signs AI Safety and Transparency Bill into Law
- Privacy Commissioner Investigation into Social Media Platform, X

Testimonials
"voyAIge has delivered exceptional work with their AI in the Workplace Policy for OneFeather. By centering Indigenous data sovereignty, collective growth, and the principle of 'leaving the table better set than we found it,' they've created more than just a policy, they've provided a blueprint for ethical AI implementation that protects community interests and removes systemic barriers"
Jerret Taylor / Chief Technical Officer / OneFeather Mobile Technologies Ltd.
- News (List) | voyAIge strategy
As AI continues to reshape industries, understanding its organizational, legal, social, and ethical impacts is essential for successfully running an organization. Our collection of articles offers both depth and breadth on critical AI topics, from legal compliance to ethical deployment, providing you with the knowledge necessary to integrate AI successfully and responsibly into your operations. With 85% of CEOs affirming that AI will significantly change the way they do business in the next five years, the urgency to adopt AI ethically and fairly cannot be overstated. Dive into our resources to ensure your growth with AI is both innovative and just, positioning your organization as a leader in the conscientious application of advanced technology.

Insights
Articles to increase awareness and understanding of AI adoption and integration
- Canada's Innovation Crossroads (Jan 16, 2026)
- New York Governor Hochul Signs AI Safety and Transparency Bill into Law (Jan 23, 2026)
- Privacy Commissioner Investigation into Social Media Platform, X (Jan 23, 2026)
- Trump Signs Executive Order on AI (Dec 15, 2025)
- Legal Tech Woes (Dec 5, 2025)
- Meta Wins the Antitrust Case Against It (Nov 27, 2025)
- Cohere Loses Motion to Dismiss (Nov 21, 2025)
- What is "AI Augmentation", and How Do You Achieve It? (Nov 14, 2025)
- Budget 2025 (Nov 7, 2025)
- When Technology Stops Amplifying Artists and Starts Replacing Them
- California Bill on AI Companion Chatbots (Oct 31, 2025)
- Reddit Sues Data Scrapers and AI Companies (Oct 24, 2025)
- Data Governance & Why Business Leaders Can't Ignore It (Oct 13, 2025)
- Canada's AI Brain Drain (Oct 17, 2025)
- Americans Feel the Pinch of High Electricity Costs (Oct 17, 2025)
- Newsom Signs Bill S53 Into Law (Oct 10, 2025)
- The Government of Canada launches an AI Strategy Task Force and Public Engagement (Oct 3, 2025)
- Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok (Sep 26, 2025)
- How an Infrastructure Race is Defining AI's Future (Sep 26, 2025)
- Google Decision on Remedies for Unlawful Monopolization (Sep 19, 2025)
- AI Policy | voyAIge strategy
AI Policy
- Cohere Loses Motion to Dismiss | voyAIge strategy
Cohere Loses Motion to Dismiss
Copyright and Trademark Infringement Lawsuit Must Proceed
By Christina Catenacci, human writer
Nov 21, 2025

Key Points
- A large number of news publishers (Publishers) previously sued AI company Cohere for copyright and trademark infringement
- Cohere brought a partial motion to dismiss the lawsuit, and it lost: on November 13, 2025, Judge McMahon of the United States District Court for the Southern District of New York denied Cohere's partial motion to dismiss
- This dispute is one of more than 50 lawsuits currently before the courts challenging the use of copyrighted works by AI companies to train their large language models. Each case depends on its circumstances, and we will have to wait and see

Back in March 2025, I wrote about how several news publishers (Publishers) sued AI company Cohere for copyright and trademark infringement. As a refresher, the Publishers alleged that Cohere, without permission or compensation, used scraped copies of their articles (through training, real-time use, and in outputs) to power its AI service, which in turn competed with Publisher offerings and the emerging market for AI licensing. The Publishers accused Cohere of stealing their works to the point where actual verbatim copies were produced in outputs, and of blatantly manufacturing fake pieces and attributing them to the Publishers, which misled the public and tarnished their brands. What's more, when retrieval-augmented generation (RAG) was turned off, the AI model, Command, hallucinated answers. Ultimately, the Publishers claimed that Cohere's actions amounted to "massive, systematic copyright infringement and trademark infringement, and have caused significant injury to Publishers".

Now a new development has emerged: Cohere brought a partial motion to dismiss the lawsuit, and it lost. On November 13, 2025, Judge McMahon denied Cohere's partial motion to dismiss Counts II, III, and IV of the Publishers' complaint.

Why Did the Court Deny Cohere's Motion?
In this motion, the judge found the following:
- Cohere's motion to dismiss the Publishers' direct copyright infringement claim was denied. Cohere wanted to dismiss the Publishers' claim for direct copyright infringement only to the extent it alleged that Cohere was directly liable for generating "substitutive summaries" of the Publishers' work. In a nutshell, Cohere argued that though the Publishers could show they owned valid copyrights, Command's summaries were not substantially similar to the Publishers' works. The court disagreed and concluded that the Publishers adequately alleged that Command's outputs were quantitatively and qualitatively similar: they argued that Command's output heavily paraphrased and copied phrases verbatim from the source article, and that these summaries went well beyond a limited recitation of facts. Also, the Publishers provided 75 examples of Cohere's alleged copyright infringement (50 allegedly included verbatim copies, and a further 25 examples had a mix of verbatim copying and close paraphrasing).
- Cohere's motion to dismiss the Publishers' secondary copyright infringement claim was denied. The Publishers claimed that Cohere was secondarily liable for unlawfully reproducing, displaying, distributing, and preparing derivatives of the Publishers' copyrighted works under each of three theories: contributory infringement by material contribution, contributory infringement by inducement, and vicarious infringement. Cohere argued that all three theories failed, but the court held that each of Cohere's arguments was without merit. In particular, the Publishers adequately alleged underlying direct infringement, Cohere's knowledge of direct infringement, and inducement.
- Cohere's motion to dismiss the Publishers' Lanham Act (trademark) claims was denied. Cohere argued that the Publishers failed to allege use of their marks in commerce and a plausible likelihood of consumer confusion, and that Cohere's use of the marks was lawful as nominative fair use. However, the court disagreed: the Publishers adequately alleged Cohere's use in commerce and a likelihood of confusion, and the nominative fair use doctrine did not apply on the facts of this case: "All I can and will do is conclude that the complaint adequately alleges facts that could, if proved, cause a trier of fact to reject application of that doctrine".

Needless to say, the court was simply unconvinced by Cohere's arguments and shot them all down. Since Cohere was unsuccessful, it will have to prepare for trial.

What Does This Mean for the Case?
This is not good for Cohere. At this point in the case, it is striking that the Publishers put Cohere on notice that it was not allowed to do what it was doing: the Publishers had copyright notices and terms of service on their websites, and they also signaled do-not-crawl instructions to Cohere's bots using the robots.txt protocol. In fact, the Publishers also claimed that Cohere kept unlawfully copying their works even after they sent a cease-and-desist letter. From the Publishers' perspective, this motion went well; if it is an indication of what is to come later down the line, they will be content.

What Does This Mean for the Other AI Infringement Cases?
The court noted that this dispute is one of more than 50 lawsuits currently before the courts challenging the use of copyrighted works by AI companies to train their large language models. Some may view the decision as a foreshadowing of what could transpire in some of the other cases, but it is important to note that this was just one motion in one case; the outcomes of the other cases will depend on their own circumstances. We can only wait and see.
- What Apple’s $500 Billion AI Investment Means to You | voyAIge strategy
What Apple's $500 Billion AI Investment Means to You
The Continued Normalization of AI as Core Infrastructure
By Tommy Cooke, powered by caffeine and curiosity
Mar 3, 2025

Key Points:
- Apple's $500 billion AI investment signals that AI is shifting from an innovation tool to core business infrastructure
- Increased AI integration in Apple's ecosystem will shape consumer, employee, and investor expectations, pushing businesses to adapt
- Businesses should focus on preparing for AI-driven shifts in consumer expectations, workforce dynamics, and regulatory landscapes

Apple recently announced a $500 billion investment in AI. The moment is not merely a landmark in the technology world. It is also monumental for U.S. manufacturing, U.S. talent development, and the U.S.'s foothold in the global AI economy. This news is not merely a corporate push for technology. It's a sign that AI is becoming intimately embedded in business infrastructure; quickly fading are the days of thinking of AI as merely an emerging, experimental tool. AI-capable smartphones are forecast to grow significantly over the next three years at a compound annual growth rate of nearly 63 percent, and Apple accounts for more than half of the smartphone market share in the U.S. Business leaders need to recognize that their employees, investors, partners, and customers alike (the Apple device lovers in your professional and personal networks) will be interfacing with AI at unprecedented rates in the few short years to come. Whether your organization is adopting AI or not, here's why Apple's announcement matters to you.

AI as Infrastructure, Not Merely Innovation
For years, AI has been treated as an innovation driver or a business enabler, something that enhances products, streamlines workflows, or creates new capabilities. But with Apple's recent announcement, a deeper reality is setting in: AI is increasingly recognized as an operational necessity. Apple's announcement, which includes an AI server manufacturing facility in Texas, 20,000 new research and development jobs, and a new AI academy in Michigan, signals a broader shift: AI is no longer niche, it is foundational. This reclassification matters. Apple's investment will push AI further into the mainstream. It is altering expectations for AI-readiness across multiple industries. Additionally, as Apple continues to integrate AI more deeply into its own ecosystem, more consumers, employees, partners, and investors will be regularly exposed to AI-driven interactions and functionalities. This broad exposure means that businesses need to be prepared for shifting human expectations of AI.

AI Normalization and Business Implications
As AI becomes more infrastructural, normalization will follow. What is important to recognize here is that this level of financial investment will create jobs, accelerate workforce transformation, and even generate a new AI training and research facility. This is about much more than declaring AI crucial to the company's internal operations; it will also significantly affect Apple's external ecosystem by sending a very clear message about the value of AI. Here are three reasons why Apple's investment matters to you:
- AI is Becoming More Accessible. As AI infrastructure expands, smaller enterprises will have increased access to AI capabilities. This means even organizations without extensive tech teams must begin discussing AI integration and management.
- Consumers and Employees Expect AI. With AI becoming more embedded in Apple's ecosystem (through Siri advancements, AI-enhanced applications, and automated workflows), customer and employee expectations around AI-driven interactions will evolve as well. Businesses must anticipate and meet these new expectations. Remember, whether your leadership believes in AI or not, the people working with you and for you have ideas, dreams, and visions of AI making their jobs easier. AI will be an integral, core component of Apple devices moving forward. Accordingly, expectations will change.
- Policy and Regulation Will Evolve. Large-scale AI investment is likely to accelerate regulation. As AI becomes a fundamental part of economic infrastructure, governments will refine legal frameworks around AI use, data privacy, and corporate accountability. While regulatory change is rather cumbersome in North America, it will be important to keep an eye on global regulators and civil society discourse, as there will be adjustments in the tone, frame, and focus of AI law and AI ethics concepts.

The Takeaway: A Wake-Up Call for Businesses
Regardless of where your organization is in navigating AI, it is important to start thinking about the relationship between you, your people, and their increasingly AI-driven Apple devices. Businesses should invest in AI literacy, establish decision-making plans, and, if they are on the cusp of integrating AI, lead the charge on the conversation. In this way, businesses will be better equipped to respond to the fact that people outside and inside their organizations are comparing their agility, creativity, and flexibility to new standards driven by AI models.
- Closing the Gap: from Policies and Procedures to Practice | voyAIge strategy
Closing the Gap: from Policies and Procedures to Practice
Overcoming the policy/procedure-practice paradox requires focus and commitment
By Tommy Cooke
Sep 24, 2024

Key Points:
- Having AI policies doesn't automatically ensure ethical AI practices
- Regular audits and cross-functional teams are crucial for aligning AI with ethical standards
- Explainability and stakeholder engagement are key to responsible AI implementation

Organizations pride themselves on having comprehensive AI policies and procedures. They show care and diligence, and they signal to your staff and stakeholders that you take AI use and employee behaviour seriously as part of your business plan. However, AI policies and procedures don't guarantee ethical AI. Even when multiple policies reference AI, there's often a gap between policy and procedures on the one hand, and practice on the other. This gap is a problem because it can catalyze unintended consequences and ethical breaches that undermine the very principles the policies are meant to uphold.

The Policy/Procedure-Practice Paradox
This problem is a paradox that is common in virtually every industry using AI. By paradox we mean a contradictory statement that, when investigated and explained, proves to be true. For example, say aloud to yourself, "the beginning is the end". It sounds absurd, but when you think it through, it makes sense. The same phenomenon presents itself when thinking about "policies and procedures in practice". Policies and procedures are documents, so how exactly are they practiced? The initial thought that a document practices anything is absurd. But when we read them, they guide how people ought to use and not use AI. The policy/procedure-practice paradox is a problem because failing to understand it means failing to address it. And in failing to address it, policies and procedures about AI often lead to broken and misinformed practices.

Let's consider a real-world example: despite a company having an anti-bias policy in place, a facial recognition system used in stores across the United States for over eight years exhibited significant bias. The system struggled to accurately identify people with darker skin tones, leading to higher error rates for certain demographics. This occurred because the AI was trained on datasets disproportionately representing lighter skin tones. And so, even well-intentioned policies can fail in practice.

The example above is not isolated. It's a symptom of a larger issue in AI implementation. While the example I provided was caused by biased data, there are several other reasons why the policy/procedure-practice paradox exists:
- Lack of Explainability: many AI systems operate as "black boxes," making it difficult to understand their decision-making processes, even with transparency policies in place
- Rigid Rule Adherence: AI systems may strictly follow their programmed rules without understanding the nuanced ethical priorities of an organization
- Complexity of Ethical Standards: translating abstract ethical concepts into concrete, programmable instructions is a complex task that often leaves room for interpretation and error

Closing the Gap
To mitigate the paradox, we need to close the gap that often exists between AI policies and procedures and AI practices. Here are some strategies to achieve this:
- Translate Policies into AI-Specific Guidelines: high-level policy language needs to be converted into actionable steps that can be implemented in AI systems. This translation ensures that AI operates on the same definitions of privacy, fairness, and transparency as the organization. Engage with your AI vendor to discuss how your policies can be integrated into the system's operations. Remember, AI systems often require fine-tuning to align with specific organizational needs.
- Conduct Regular Audits: periodic reviews of AI systems are essential to ensure they're behaving in line with ethical standards. These audits should be thorough and look for potential blind spots. They're also excellent at discovering and mitigating issues that an organization may have previously missed. Compare your system's training data with the data your organization provides. Analyze the differences and involve your ethics and analytics teams in prioritizing findings for policy amendments. (A minimal sketch of one such audit check appears at the end of this article.)
- Build a Cross-Functional Ethics Team: bringing together technology champions, legal experts, and individuals with strong ethical compasses can provide a well-rounded perspective on AI implementation. Ensure this team regularly communicates with your AI vendor, especially during the implementation of new systems. When building this team, diversify it. As academics say, make it multidisciplinary, meaning it combines professional specializations when approaching a problem.
- Promote Explainability: as the Electronic Frontier Foundation has advocated for years, explainability is crucial when using AI. Why? If an AI system's decisions can't be explained, it becomes difficult for an organization to claim accountability for its actions. Work with your vendor to ensure AI models are interpretable. Position the right people to explain system outputs to anyone in your organization and verify that these align with your founding principles.
- Engage External Stakeholders: as AI ethics expert Kristofer Bouchard recently argued, external perspectives, especially from customers, communities, and marginalized groups, are crucial when using AI. This is especially the case when it comes to identifying ethical blind spots. Regularly seek feedback from these groups when evaluating your AI systems. Their insights can be invaluable in uncovering unforeseen ethical implications.

The Path Forward: Ongoing Oversight and Proactive Management
Closing the gap between AI policy and ethical practice requires keeping the gap shut. Unfortunately, it's not as simple as closing a door once and for all; the gap can easily reopen many times during your AI journey. Closing it requires ongoing oversight, regular policy updates, and a commitment to aligning AI behavior with organizational values. Actively integrate the five strategies above; doing so can significantly minimize the risks associated with AI use. Being proactive not only ensures compliance with ethical standards but also builds trust with stakeholders and positions the organization as a responsible leader in AI adoption. Remember, in the world of AI, accountability and responsibility are critical. The power of these systems demands continuous vigilance and active management. By committing to this process, organizations can harness the full potential of AI while upholding their ethical principles and societal responsibilities.
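To make the audit strategy above concrete, here is a minimal, illustrative sketch of one audit metric: comparing error rates across demographic groups in a system's logged decisions. The record schema, group labels, and threshold are all hypothetical; a real audit would cover many more dimensions and involve your ethics and analytics teams.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates from logged AI decisions.

    Each record holds a demographic 'group', the model's 'prediction',
    and the verified 'actual' outcome. (Hypothetical schema.)
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit log from a face-matching system
log = [
    {"group": "lighter_skin", "prediction": "match", "actual": "match"},
    {"group": "lighter_skin", "prediction": "no_match", "actual": "no_match"},
    {"group": "darker_skin", "prediction": "match", "actual": "no_match"},
    {"group": "darker_skin", "prediction": "no_match", "actual": "no_match"},
]

rates = error_rates_by_group(log)
print(rates)  # {'lighter_skin': 0.0, 'darker_skin': 0.5}

# Flag any disparity above an agreed threshold so the ethics team can
# prioritize findings for policy amendments.
THRESHOLD = 0.1
if max(rates.values()) - min(rates.values()) > THRESHOLD:
    print("Disparity exceeds threshold: escalate to ethics review")
```

Even a check this simple would have surfaced the facial recognition disparity described earlier long before eight years had passed; the point is that the audit is a routine computation over logs your organization already controls.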
- Meta Refuses to Sign the EU’s AI Code of Practice | voyAIge strategy
Meta Refuses to Sign the EU's AI Code of Practice
A closer look at the reasons why
By Christina Catenacci, human writer
Jul 30, 2025

Key Points
- In July 2025, the European Commission released its General-Purpose AI Code of Practice (July 10) and its Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (July 18)
- Many companies have complained about the Code of Practice, and some have gone so far as to refuse to sign it, like Meta
- Businesses in the European Union, and those outside it that do business with the EU (see Article 2 regarding application), are recommended to review the AI Act, Code of Practice, and Guidelines and comply

Meta has just refused to sign the European Union's General-Purpose AI Code of Practice for the AI Act. That's right: Joel Kaplan, the Chief Global Affairs Officer of Meta, said in a LinkedIn post on July 18, 2025 that "Meta won't be signing it". By general-purpose AI, I mean an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development, and prototyping activities.

What is the purpose of the AI Act?
As you may recall, section (1) of the Preamble of the AI Act states: "The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the 'Charter'), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorized by this Regulation."

The AI Act classifies AI according to risk and prohibits unacceptable-risk uses such as social scoring systems and manipulative AI. High-risk AI is regulated, limited-risk AI has lighter obligations, and minimal-risk AI is unregulated. The AI Act entered into force on August 1, 2024, but its obligations are being phased in over time. The first set of prohibitions took effect on February 2, 2025, banning certain unacceptable-risk AI systems. After this comes a wave of obligations over the next two to three years, with full compliance for high-risk AI systems expected by 2027 (August 2, 2025, February 2, 2026, and August 2, 2027 each carry certain requirements). Those involved in general-purpose AI may have to take additional steps (e.g., development of Codes of Practice by 2025) and may be subject to specific provisions for general-purpose AI models and systems. See the timeline for particulars.

What is the Code of Practice for the AI Act?
The Code of Practice is a voluntary tool (not a binding law), prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act's obligations for providers of general-purpose AI models. More specifically, the objectives of the Code of Practice are to:
- serve as a guiding document for demonstrating compliance with the obligations provided for in the AI Act, while recognising that adherence to the Code of Practice does not constitute conclusive evidence of compliance with these obligations under the AI Act
- ensure providers of general-purpose AI models comply with their obligations under the AI Act
- enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code of Practice to demonstrate compliance with their obligations under the AI Act

Released on July 10, 2025, it has three parts:
- Transparency: commitments of Signatories include documentation (there is a Model Documentation Form containing general information, model properties, methods of distribution and licenses, use, training process, information on the data used for training, testing, and validation, computational resources, and energy consumption during training and inference)
- Copyright: commitments of Signatories include putting in place a copyright policy
- Safety and Security: commitments of Signatories include adopting a safety and security framework; systemic risk identification; systemic risk analysis; systemic risk acceptance determination; safety mitigations; security mitigations; safety and security model reports; systemic risk responsibility allocation; serious incident reporting; and additional documentation and transparency

For each Commitment that Signatories sign onto, there is a corresponding Article of the AI Act to which it relates. In this way, Signatories can understand what parts of the AI Act are being triggered and complied with. For example, the Transparency chapter deals with obligations under Articles 53(1)(a) and (b), 53(2), and 53(7), and Annexes XI and XII of the AI Act. Similarly, the Copyright chapter deals with obligations under Article 53(1)(c) of the AI Act. And the Safety and Security chapter deals with obligations under Articles 53, 55, and 56 and Recitals 110, 114, and 115 of the AI Act. In a nutshell, adhering to a Code of Practice that is assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. The plan is that the Code of Practice will be complemented by Commission guidelines on key concepts related to general-purpose AI models, also published in July. An explanation of these guidelines is set out below.

Why are tech companies not happy with the Code of Practice?
To start, we should examine the infamous LinkedIn post: "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act. Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe's largest businesses signed a letter calling for the Commission to 'Stop the Clock' in its implementation. We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."

The post criticizes the European Union for going down the wrong path. It also talks about legal uncertainties, measures which go far beyond the scope of the AI Act, and the stunting of AI models and the companies building on them. There was also mention of other companies wanting to delay the need to comply. To be sure, CEOs from more than 40 European companies, including ASML, Philips, Siemens, and Mistral, asked for a "two-year clock-stop" on the AI Act before key obligations enter into force this August. In fact, the open letter to European Commission President Ursula von der Leyen, titled "Stop the Clock", asked for more simplified and practical AI regulation and spoke of a need to postpone the enforcement of the AI Act. Essentially, the companies want a pause on obligations for high-risk AI systems that are due to take effect as of August 2026, and on obligations for general-purpose AI models that are due to enter into force as of August 2025. Contrastingly, the top of the document is entitled "EU Champions AI Initiative", with logos of over 110 organizations that have over $3 billion in market cap and over 3.7 million jobs across Europe.

In response to the feedback, the European Commission briefly mulled giving companies that sign the Code of Practice on general-purpose AI a grace period before they need to comply with the European Union's AI Act, a switch from the July 10, 2025 announcement that the EU would be moving forward notwithstanding the complaints. The final word, however, appears to be that there will be no stop-the-clock, pauses, or grace periods.

New guidelines also released July 18, 2025
In addition, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (Regulation EU 2024/1689), right before the AI Act's key compliance date, August 2, 2025. The goal is to help AI developers and downstream providers by providing clarification. For example, the Guidelines explain which providers of general-purpose AI models are in and out of scope of the AI Act's obligations. In fact, the European Commission stated that "The aim is to provide legal certainty to actors across the AI value chain by clarifying when and how they are required to comply with these obligations". The Guidelines focus on four main areas:
- General-purpose AI models
- Providers of general-purpose AI models
- Exemptions from certain obligations
- Enforcement of obligations

The intention is to use clear definitions, a pragmatic approach, and exemptions for open-source models. That said, the Guidelines consist of 36 pages of dense material that need to be reviewed and understood. For instance, the Guidelines answer the question, "When is a model a general-purpose AI model?" Examples are provided for models in scope and out of scope.

What happens next?
As we can see from the above discussion, there are serious obligations that need to be complied with, and soon. To that end, businesses in the European Union, or those that do business in the European Union (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines to ensure that they are ready for August 2, 2025. After August 2, 2025, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations. Providers of general-purpose AI models that will be classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From August 2, 2026, the Commission's enforcement powers enter into application. And by August 2, 2027, providers of general-purpose AI models placed on the market before August 2, 2025 must comply.
- Compliance | voyAIge strategy
AI policies and frameworks to help your organization meet legal and ethical standards.
Compliance
At voyAIge strategy, compliance is a foundation for our analysis of the legal, policy, and ethical dimensions of AI. We understand the intricacies of the laws of many jurisdictions and can guide you through every step of your compliance journey. Today's rapidly evolving digital landscape is fueled by the exponential rate at which AI transforms not just business practices and ways of seeing but entire industries. However, with innovation come new challenges, particularly in compliance. Governments and regulatory bodies around the world are racing to keep up, creating complex legal requirements with which businesses must comply. For businesses, navigating this complexity is not just about avoiding fines and penalties. It's about safeguarding reputation, building trust with stakeholders, and ensuring sustainability.

What's Your Compliance Challenge?
Understanding jurisdiction, sector, applicable legislation, data types, and data flows are some of the many considerations we take into account when identifying regulatory bodies relevant to your organization. The following are some examples that may apply to a business now or in the future:
- GDPR: The EU's General Data Protection Regulation applies to all businesses with clients and customers in Europe
- NYC LL 144: New York City regulates how businesses can and cannot use AI to assist in hiring employees
- AI Act: Regulates and governs the use of all artificial intelligence inside the European Union
- FDA: The US Food and Drug Administration has specific AI regulations governing medical devices, evidence curation, and market monitoring
- CCPA: The California Consumer Privacy Act has a significant impact on how AI systems used in businesses handle data
- AIDA: Canada's proposed Artificial Intelligence and Data Act would regulate the design, development, and use of AI systems in Canada
- OESA: Ontario's Employment Standards Act governs employee monitoring as well as the use of AI in recruiting employees in Ontario, Canada
- SB 1047: California's Safe and Secure Innovation for Frontier AI Models Act imposes safety restrictions on advanced AI
- PIPEDA: Canada's Personal Information Protection and Electronic Documents Act governs how companies collect, use, and share personal information

Did You Know?
- Fines of up to €30 million or 6% of global annual turnover can apply for prohibited AI practices under the EU AI Act.
- Amazon was fined €746 million by Luxembourg's data protection authority for how it processed personal data for targeted advertising using AI-driven systems.
- Canada's proposed Artificial Intelligence and Data Act would impose administrative monetary penalties of up to $10 million or 3% of global annual revenue, as well as criminal penalties, including jail time, for AI decisions causing significant harm.

Expert Insights, Experience & Resources
Book a free consultation to chat with us about correctly and accurately identifying the compliance obligations and regulations applicable to your organization. Stay informed by subscribing to our VS-AI Observer Substack, where we offer articles, whitepapers, case studies, and video content that will keep your organization ahead of emerging compliance challenges, requirements, and issues.
- Partnership Opportunities | voyAIge strategy
Collaborating with organizations to drive responsible and effective AI adoption.
Partnership Opportunities
We are passionate about partnerships and alliances. If your business or organization is interested in collaborating with us on business opportunities, CfPs, or other forms of co-operation for mutual benefit, we would like to hear from you. For all other inquiries, please contact us here.
- Governing AI by Learning from Cohere’s Mistakes | voyAIge strategy
Governing AI by Learning from Cohere's Mistakes
Why it is Crucial to Demonstrate Control
By Tommy Cooke, powered by caffeine, curiosity, and a strong desire for sunny weather
Mar 7, 2025

Key Points:
- AI governance is essential because it ensures organizations maintain transparency, accountability, and oversight over how AI systems are trained, deployed, and used
- Leaders must proactively assess where AI models source their data, ensuring compliance with intellectual property laws and mitigating risks related to unauthorized content use
- We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control. This is how we build trust with stakeholders

In my 20s, I spent a lot of time traveling to Germany to visit a close friend. He had an early GPS unit in his car. It was an outdated system that relied on CDs for updates and generated very blurry little arrows on a tiny screen nowhere near the driver's eyes. On one trip, we thought we were heading west of Frankfurt to a favourite restaurant. Instead, we ended up 75 kilometers south. We found ourselves sitting in his car at a dead end, with his high beams on, staring into a dark farmer's field. The system led us astray because we over-relied on old data inside a brand-new consumer technology.

When I speak with leaders adopting AI for the first time, I often think of getting lost in the rural German countryside. AI, like early GPS, promises efficiency, but its reliability depends entirely on its data. Organizations are under pressure to adopt AI to streamline operations, reduce costs, and drive creativity. But AI isn't a magic bullet. It's a tool. And like an unreliable GPS, AI trained on flawed or unauthorized data can take your organization in the wrong direction. It relinquishes control. Cohere, a major AI company in Canada, is facing a significant lawsuit over how it trained its models; the allegation is that Cohere used copyrighted content without permission or compensation. This case is one you should know about because it's more than just a legal battle. It's a reminder: AI adoption isn't just about capability. It's about building and maintaining responsible control. So, how exactly do you ensure you are in control? The answer begins and ends with an ethical strategy.

The Ethical Fault Line
The lawsuit against Cohere, a Toronto-based AI company, highlights the growing tension between AI developers and content creators. Major media organizations allege that AI companies are scraping and reproducing their content without consent or compensation. This raises a critical question: who controls knowledge in the era of AI? This isn't just a tech industry issue; it's a governance challenge with real consequences. AI systems generate content, provide insights, and automate decisions based on their underlying data. If that data is sourced irresponsibly, such as by using newspaper articles without publisher consent, organizations risk reputational harm, legal liability, and a breakdown of trust with employees, customers, and industry partners.

Lessons for AI Leaders: How to Stay on the Right Side of AI Ethics
As AI continues to reshape industries, its impact will depend on how it is developed and deployed. Business leaders don't need to be AI engineers, but they do need to ensure that they are using AI transparently. Here's why:
- Transparency is the Foundation of Trust. AI should not be a "black box," a technology that operates mysteriously without clear explanation. Leaders need visibility into how AI works, what data it uses, and what safeguards are in place. This means two things: first, working with AI vendors to receive clear documentation on data sources and model behaviour (if a company can't explain how its AI makes decisions, that's a red flag); second, leaders need a communication strategy, something they can reference to explain AI's role to any stakeholder.
- Respect Intellectual Property from the Start. Whether using AI to generate content, analyze trends, or assist in decision-making, stakeholders expect AI leaders to account for where AI data comes from. If an organization uses internal data from sales reports, for example, this needs to be documented. If outsourcing data from a third-party vendor, it's not enough to say that the data is external; leaders must be able to confirm the vendor's ownership of and rights to that data. (A minimal sketch of such a provenance record appears at the end of this article.)
- Governing AI Is Not Optional. Responsible AI use requires a governance framework. Companies need clear policies that define how AI is trained, where data comes from, and how the system and its outputs are monitored. Think of AI governance like driving a car: just as drivers follow traffic laws and speed limits, AI systems require rules to ensure safe and ethical operation. AI governance is a business strategy that demonstrates commitment to legal, compliant, and ethical AI development, ensuring transparency, explainability, and accountability.

Ethical AI is an Advantage, Not a Financial Burden
Much like the way a driver is expected to maintain control of their vehicle, to abide by the rules, and to ensure their own and others' safety, drivers build trust with their passengers and other drivers by continuously demonstrating that they are in control. The same holds true with AI ethics. We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control.
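As a concrete illustration of the documentation habit described above, here is a minimal sketch of an internal data-provenance register. Every field, name, and dataset here is hypothetical; the point is simply that each dataset feeding an AI system has a documented origin, an accountable owner, and confirmed usage rights before it is used.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataSource:
    """One entry in a hypothetical AI data-provenance register."""
    name: str               # e.g. "Q3 sales reports"
    origin: str             # "internal" or the vendor's name
    rights_confirmed: bool  # has legal confirmed ownership/licensing?
    owner: str              # accountable person or team
    reviewed_on: date

registry = [
    DataSource("Q3 sales reports", "internal", True, "Revenue Ops", date(2025, 3, 1)),
    DataSource("Market sentiment feed", "Acme Data Co.", False, "Analytics", date(2025, 3, 1)),
]

def unapproved_sources(reg):
    """List datasets that must not feed a model until rights are confirmed."""
    return [s.name for s in reg if not s.rights_confirmed]

print(unapproved_sources(registry))  # ['Market sentiment feed']
```

A register like this is deliberately boring: it is the paper trail that lets you answer "where did this model's data come from, and who said we could use it?" before a court or regulator asks the same question.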
- De-Risking AI Prompts | voyAIge strategy
De-Risking AI Prompts
How to Make AI Use Safer for Business
By Tommy Cooke, fueled by caffeine and curiosity
Aug 8, 2025

Key Points:
- Small, well-intentioned actions can quietly introduce risk when staff lack clear guidance and boundaries
- De-risking AI isn't about restricting use. It's about educating staff, adopting prompt training into workflows, and developing a support team
- Safe and effective AI use begins when leadership models responsible practices and builds a culture of clarity, not control

Many moons ago, I was working with a data centre on a surveillance experiment. One of the interns was a motivated student. He was tasked with investigating third parties that we suspected were abusing access to sensitive location data within one of our experiment's smartphones. Without telling anyone, the student sent sample data from our smartphone to an organization we were actively investigating. It was an organization whose credibility was under intense scrutiny for abusive data practices.

The student wasn't acting out of malice. He was trying to be helpful, to show responsiveness, to move the work forward. But he didn't understand the stakes. To him, the data was "just a sample." To us, it signaled loss of control and a risky alignment with an actor we hadn't finished vetting. The problem wasn't the intern. The problem was that we hadn't taken the time to review and discuss contract terms, or to find ways to guide interns on both the best practices and the boundaries around their work.

This is what prompting GPT looks like in many organizations today. Staff often use AI to accelerate their work, lighten workloads, and inject some creativity into their craft. AI is a tool that is attractive to staff for many reasons, so it is not surprising to us here at VS to hear that staff also turn to AI to respond to mounting work pressures; now that AI is available, executives increasingly expect their teams to work harder, faster, and better with it. But with less than 25 percent of organizations having an AI policy in place, and even fewer educating their staff on how to use AI, it's not surprising both that how your staff use AI is highly risky and that you are likely unaware of precisely what they are doing with it. To most organizations we speak to, this risk is entirely unacceptable. While we strongly advocate for having a robust AI policy in place, as well as training around that policy, let's dive into what you can be doing to de-risk your organization's AI use.

De-Risking AI Is Not Just About Restricting Use
Before we take a deeper dive, it's important to address a common knee-jerk reaction among business leaders. There is a temptation to de-risk by locking down: restricting access to GPTs, blocking them at the firewall, or banning prompts that mention sensitive keywords. These reactions are just that: reactions. They are not responses, because they are not planned, considered, and contextualized. They are rigid and inflexible, and as such, they often backfire. Just as important, they send a very clear message to your staff: AI is dangerous and not learnable. This subsequently pushes experimentation underground and creates a shadow-use problem that's harder to monitor or support. Instead, as I mentioned earlier, the safer and more sustainable path is to educate, empower, and build clarity. It's impossible to eliminate risk entirely, but you can reduce it by building good habits, providing effective guidance, and sharing an understanding of what safe prompting looks like.

What Team Leads and Business Owners Can Do
If you lead a team or own a business, here are some steps you can take right now to start de-risking GPT use without killing its potential and promise:
- Create a prompt playbook. A living document that outlines safe and unsafe prompting practices, gives examples, and evolves over time. This could include do's and don'ts, safe phrasing suggestions, and reminders about privacy, intellectual property, and any other laws and policies relevant to the scenario at hand. It doesn't have to be long; it just has to be usable and user-friendly.
- Build training around real workflows. It's quite common for organizations to bring in third parties to offer cookie-cutter training on how to use AI safely and effectively. Don't do that. Abstraction doesn't resonate on the front line, nor do we find it effective with executives. Bring in an organization that can offer training that reflects how your people actually use AI and the daily nuances of their work.
- Schedule prompt reviews. Designate an AI team. Task them with making it normal to collect, analyze, and assess how your staff talk to AI. Encourage them to ask questions like, "Is this a safe way to talk to AI?" We want to create a culture where prompt sharing and refinement is part of collaboration.
- Designate prompt leaders. Identify or train a few people, ideally within the aforementioned team, who can act as internal advisors on AI use. Not to gatekeep, but to support. Let staff know who to ask when they're unsure if a prompt might cause issues. Make it part of their job description and KPIs to lift up and support employees when they use AI.
- Develop internal guardrails. This is also something I have discussed before, and something that Christina and I discuss ad nauseam in our articles. If you're using GPT through an API, platform, or organization-wide license, get AI policies in place. Set rules, automate flags, or integrate prompt logging for sensitive areas like legal, HR, or R&D. (A minimal sketch of such a guardrail appears at the end of this article.)
- Communicate the purpose. Let people know why prompting guidance and safe use matter. Use examples to show how good prompting helps them avoid mistakes and do better work, not just follow rules. Show the implications when things go wrong, and then follow up by reassuring staff that you have contingency plans in place. Let them know that you have a plan for when things go wrong, and that they shouldn't be afraid to use AI if they follow their training.
- Signal leadership's involvement. Executives and leaders should model good prompting habits, or at least acknowledge the importance of prompting. Lead by example, not just by word.

The intern I mentioned earlier didn't intend to create risk. The boundaries were drawn, but he was not familiar with them. We avoided damage to the project, and the near-miss was never about malice or recklessness. It was about misunderstanding what small mistakes could catalyze, especially when they go unrecognized by a staff member.
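To illustrate the internal-guardrails idea from the list above, here is a minimal, hypothetical sketch of a prompt gate that logs every prompt and flags sensitive material before it leaves the organization. The patterns, log destination, and example codename are assumptions; a real deployment would tune these to your own policies and tooling.

```python
import logging
import re

logging.basicConfig(filename="prompt_audit.log", level=logging.INFO)

# Hypothetical patterns for material that should never leave the organization
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal codename": re.compile(r"\bProject Falcon\b", re.IGNORECASE),
}

def review_prompt(user: str, prompt: str) -> bool:
    """Log every prompt and block those matching a sensitive pattern.

    Returns True if the prompt may be sent to the external model.
    """
    hits = [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    if hits:
        logging.warning("BLOCKED user=%s matched=%s", user, hits)
        return False
    logging.info("ALLOWED user=%s chars=%d", user, len(prompt))
    return True

# Example: this prompt would be blocked and logged for the AI team's review
ok = review_prompt("intern01", "Summarize Project Falcon's Q3 numbers")
print("send to model" if ok else "blocked: see prompt_audit.log or ask a prompt leader")
```

Note the design choice: the gate supports rather than punishes. A blocked prompt routes the employee to a prompt leader, and the log gives the AI team real material for the prompt reviews described above.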
- The Strategic Values of Local AI | voyAIge strategy
The Strategic Values of Local AI
What it Means to Use AI In-House versus In-the-Cloud
By Tommy Cooke, powered by unusually high amounts of pollen in the air for this time of year
Jul 18, 2025

Key Points:
1. Local AI keeps sensitive data in-house, helps businesses meet regulatory requirements, and reduces risk
2. Compared to escalating cloud costs, local AI offers predictable long-term savings for organizations with consistent workloads
3. A hybrid approach (using cloud AI for scale and local AI for control) is emerging as the most strategic model for enterprise AI deployment

When you open the ChatGPT app on your phone, its AI runs in the cloud. What do I mean by that, you ask? The AI itself does not happen on your phone; it happens in a server somewhere else in the world. Your phone sends data, the data is ingested into a room filled with processors, and the output is returned to your phone. But there is another way for AI to function, and that way is called "local AI": AI that is deployed, operated, and lives entirely within an organization's walls. While the idea of local AI seemed like a far-fetched dream merely a couple of years ago (and for good reason, as it was correctly perceived to be quite expensive at the time), it is now a compelling alternative for many risk-averse organizations; privacy, control, cost predictability, and operational resilience are the qualities at the heart of local AI. Let's unpack these qualities in further detail, as I imagine many of you reading this will be highly interested in exploring local AI for your own organizations.

Sovereignty over Sensitive Data
The premier benefit of local AI is guaranteeing that sensitive data never leaves the organization. In tightly regulated industries, such as healthcare and finance, data sovereignty is critical. Using cloud-based AI, even with robust security protocols, creates uncertainties: Who audits the vendor's access logs? Where is the data stored, geographically? How is it being used to train backend models? These are unanswered questions that haunt compliance officers and auditing teams. With local AI, on the other hand, every bit of data that is processed stays within your full visibility and stewardship, which gives businesses a critical advantage, particularly during a time of proliferating regulations.

Simplified Compliance in a Complex Legal Landscape
Regulations such as the EU's GDPR and Canada's PIPEDA impose strict obligations on data transfers and processing. Local AI models, which operate entirely within the bounds of these regulations, can sidestep many of the issues that cloud AI systems are still struggling to navigate. That is, by minimizing the need to transfer data across jurisdictions, local AI reduces exposure to many legal complications. Moreover, because all operations occur in-house, audit readiness becomes more straightforward: logs, model versions, and access records remain under corporate control.

Predictable Operating Costs
Cloud-based AI is often marketed as pay-for-use, or as something you can sign up for and begin using immediately. This makes mainstream AIs like ChatGPT attractive: they are elastic, cost-efficient, and easy to access. However, as workloads grow, so do fees. Application Programming Interface (API) calls, data storage, and compute time are just some of the charges that begin to add up.
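As a rough, entirely hypothetical illustration of how those usage fees compare with a one-time hardware outlay (every figure below is invented for the example, not a quote):

```python
# Hypothetical figures: a cloud API at $0.01 per 1K tokens vs. a $20,000
# local inference server. At steady workloads, the one-time cost amortizes.
monthly_tokens = 200_000_000                            # assumed steady workload
cloud_cost_per_month = monthly_tokens / 1_000 * 0.01    # $2,000/month, assumed rate
local_hardware_cost = 20_000                            # one-time outlay, assumed

months_to_break_even = local_hardware_cost / cloud_cost_per_month
print(f"Break-even after {months_to_break_even:.0f} months")  # 10 months
```

The exact numbers will differ wildly by vendor and workload; the structure of the comparison, recurring fees against an amortizing capital cost, is the point.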
Cloud services also often carry usage-based or subscription-based pricing that tends to escalate over time. To be fair, the initial capital expenditure for local AI may be higher, but once the system is set up, those costs amortize. For bounded workloads like batch processing, document classification, and real-time inference, the cumulative total cost of ownership is considerably lower than continual cloud usage.

Latency, Resilience, and Offline Capability
Local processing also provides tremendous improvements in speed. Without the back-and-forth delays caused by network requests, turnaround times for interacting with AI shrink considerably. This is particularly attractive for real-time applications like manufacturing quality assurance or point-of-care diagnostics. Moreover, local AI continues to operate amidst network disruptions. For instance, remote sites, field offices, or secure facilities with limited connectivity can maintain uninterrupted service by using local AI. In an age where downtime translates directly into lost revenue and reputational risk, this alone makes local AI worth considering.

Customization
Although generalist cloud models have dazzling breadth, they often stumble in the face of domain-specific syntax. This is where local AI offers the opportunity to fine-tune with proprietary data: legal briefs, clinical records, manufacturing logs. Additionally, this can make local AIs considerably more reliable than their cloud counterparts in terms of avoiding hallucinations. Practically speaking, that means cleaner summaries, safer predictions, and fewer erroneous suggestions.

Enhanced Data Governance
Running models locally brings a considerable benefit by way of transparency. When you control the entire stack, from data ingestion to output, you gain visibility into model behaviour. This facilitates a higher level of explainability compared with cloud-driven AI. Local AI also means that you are no longer reliant on opaque APIs; this can be a deal breaker for many prospective clients and customers.

A Hybrid Future: Balancing Reach and Responsibility between Local and Cloud AI
It is important to stress that local AI does not necessarily need to be seen as supplanting cloud-based systems. Rather, they can be complementary. The optimal model for many organizations is modular, using both (a minimal sketch of this routing pattern follows below):
- Cloud-based AI for delivering massive-scale capabilities (think complex reasoning, multi-domain synthesis, and vast world knowledge)
- Local AI for handling sensitive tasks, private data, or immediate-response scenarios

This balanced, hybrid approach is the future of enterprise AI. It's a "precision-first" approach, one that can align AI deployment with the context, risk tolerance, and regulatory demands of your industry. Local AI is not a niche pursuit. It is a strategic investment for businesses seeking to reconcile innovation, privacy, and compliance. Through local deployment, companies gain control over data, reduce long-term costs, improve performance, tighten governance, and can converse with models in their own language or jargon. For businesses that are serious about data privacy, customer trust, and operational continuity, local AI is not just an alternative: it is a better, smarter, more principled choice. And this approach has the flexibility to be used in conjunction with cloud-driven solutions.
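To make the hybrid pattern concrete, here is a minimal, hypothetical sketch of a router that keeps sensitive prompts on a locally hosted model and sends everything else to a cloud service. The sensitivity rule, model name, and local endpoint (an Ollama-style REST API on localhost) are assumptions, not a prescription for any particular vendor.

```python
import requests  # assumes the 'requests' package is installed

SENSITIVE_MARKERS = ("patient", "account number", "confidential")

def is_sensitive(prompt: str) -> bool:
    """Crude placeholder rule; a real deployment would use policy-driven classification."""
    return any(marker in prompt.lower() for marker in SENSITIVE_MARKERS)

def call_cloud_api(prompt: str) -> str:
    """Stub for your chosen cloud vendor's SDK (hypothetical helper)."""
    raise NotImplementedError("wire up your cloud vendor's SDK here")

def ask(prompt: str) -> str:
    if is_sensitive(prompt):
        # Sensitive work stays on a model served inside the organization's
        # walls, here assumed to be an Ollama-style server on localhost.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=60,
        )
        return resp.json()["response"]
    # Non-sensitive work can go to a cloud provider for scale.
    return call_cloud_api(prompt)

# This prompt contains sensitive markers, so it never leaves the building.
print(ask("Summarize this confidential patient intake note: ..."))
```

The design choice mirrors the article's "precision-first" framing: the router defaults to the cheaper, broader cloud path and reserves the local model for the data you cannot afford to let travel.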
