
  • L&E Analysis: Reddit Sues Anthropic | voyAIge strategy

L&E Analysis: Reddit Sues Anthropic
What is Reddit Claiming in this Complaint?
By Christina Catenacci, human writer
Jun 20, 2025
Legal & Ethical Analysis: Issue 2

Key Points
On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco. This Complaint follows the one against Anthropic commenced by the major music publishers on October 18, 2023, which was ultimately settled. The Complaint by the music publishers was about copyright law, while the Reddit Complaint is about violating the User Agreement and the privacy of Reddit users. It will be interesting to see how the Reddit Complaint is resolved.

On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco. This is not the first Complaint that has been commenced against Anthropic—we cannot forget what recently took place when the music publishers sued Anthropic for copyright infringement. What is the claim about? What does Reddit want? How is this claim different from the one against Anthropic that was launched by the music publishers? What can we take from this development? This article answers these questions.

What is Reddit Claiming?
Essentially, Reddit has stated that although Anthropic claims it is the “white knight of the AI industry” that prioritizes honesty and high trust, it is “anything but” and it uses empty marketing gimmicks. Reddit asserts in its Complaint that it has a User Agreement that contains the following excerpts of sections 3 and 7:

“3. Your Use of the Services: Except and solely to the extent such a restriction is impermissible under applicable law, you may not, without our written agreement: license, sell, transfer, assign, distribute, host, or otherwise commercially exploit the Services or Content”

“7. Things You Cannot Do: Access, search, or collect data from the Services by any means (automated or otherwise) except as permitted in these Terms or in a separate agreement with Reddit (we conditionally grant permission to crawl the Services in accordance with the parameters set forth in our robots.txt file, but scraping the Services without Reddit’s prior written consent is prohibited)”

Even though Reddit has this provision, it claims that Anthropic has intentionally trained on the personal data of Reddit users without ever requesting their consent. In fact, it claims that Anthropic has been ignoring the provision and has had bots repeatedly hit Reddit’s servers over 100,000 times, notwithstanding statements by Reddit’s CEO that Anthropic has unlawfully exploited Reddit content. Further, Reddit states that Anthropic has refused to respect Reddit users’ privacy rights—contrary to Anthropic’s own values. Reddit claims that, by training its model, Claude, on Reddit’s posts without authorization, Anthropic is in direct violation of Reddit’s User Agreement. In a nutshell, Reddit says that Anthropic has scraped and used Reddit content in its commercial offerings—Claude even provides output statements confirming that it has been trained on Reddit. Anthropic refused to respect Reddit’s guardrails and enter into a licensing agreement as Google and OpenAI have. Reddit asserts that instead, Anthropic continued to commercialize Reddit content without authorization. Interestingly, Reddit states that Anthropic has admitted that it scrapes Reddit content, but it has provided several excuses—all of which are unacceptable.
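Section 7 of the User Agreement quoted above ties permitted crawling to the parameters in Reddit's robots.txt file. For readers unfamiliar with that mechanism, here is a minimal sketch, using Python's standard urllib.robotparser, of how a crawler would check those parameters before fetching a page; the user-agent name and example URL are hypothetical placeholders, not details from the Complaint.

```python
# Minimal sketch: checking a site's robots.txt before crawling.
# The user agent and example URL below are hypothetical placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()  # fetch and parse the site's published crawl rules

user_agent = "ExampleResearchBot"  # hypothetical crawler name
page = "https://www.reddit.com/r/example/comments/123/"  # hypothetical URL

if rp.can_fetch(user_agent, page):
    print("robots.txt permits fetching this URL for", user_agent)
else:
    print("robots.txt disallows fetching this URL for", user_agent)

# Well-behaved crawlers also honour any crawl-delay directive, if present.
print("Requested crawl delay:", rp.crawl_delay(user_agent))
```

Checking robots.txt is, of course, only the technical layer; Reddit's position is that scraping without its prior written consent is prohibited regardless.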
To that end, Reddit is advancing the following claims against Anthropic:

Breach of Contract: Anthropic has violated Reddit’s User Agreement by acting contrary to sections 3 and 7 of the Agreement

Unjust Enrichment: Anthropic was unjustly enriched at the expense of Reddit when it scraped and used Reddit content to train and power a model to the tune of billions of dollars

Trespass to Chattels: Anthropic intentionally entered into, and made use of, Reddit’s platform and technological infrastructure, including its software and servers, to access and obtain Reddit content and information for its own economic benefit

Tortious Interference With Contract: Anthropic intentionally interfered with Reddit’s contractual relationships with its users by: scraping Reddit content without entering into a licensing agreement that would provide the necessary guardrails to protect users’ privacy rights; bypassing connecting to Reddit’s Compliance API, which automatically notifies licensees when users delete posts or comments; training its AI models on user content without any mechanism to respect Reddit user deletion requests; and continuing to scrape Reddit content after being notified that such conduct violated Reddit’s obligations to its users. This intentional interference diminished Reddit’s capacity to fulfill its obligations to its users

Unfair Competition: Anthropic has engaged in acts of unfair competition, including unlawful, unfair, and/or fraudulent business acts and practices as defined by the Business and Professions Code. Anthropic has trespassed on Reddit’s platform and taken possession of Reddit content and data without authority or permission, and interfered with Reddit’s contractual relationships with Reddit’s users. Anthropic has also engaged in fraudulent business practices by falsely stating that it was no longer scraping the Reddit platform, even as Anthropic continued to scrape to acquire and use Reddit content to train its AI models for commercial gain

In addition, Reddit has requested a jury trial.

What is Reddit Asking for in the Complaint?
Reddit is asking for the following: specific performance, compensatory damages, consequential damages, lost profits, and/or disgorgement of Anthropic’s profits; an injunction; restitution for the amount by which Anthropic has been enriched by its scraping and use of Reddit content; pre-judgment and post-judgment interest; punitive damages; and fees, costs, and any other appropriate relief.

Another Previous Complaint by Music Publishers
We cannot forget that on October 18, 2023, several major music publishers (Music Publishers) filed a Complaint against Anthropic in the United States District Court for the Middle District of Tennessee, Nashville Division. Essentially, the Music Publishers brought the action to address the systematic and widespread infringement of their copyrighted song lyrics by Anthropic. That is, they asserted that Anthropic unlawfully copied and disseminated vast amounts of copyrighted works—including the lyrics to myriad musical compositions owned or controlled by the Music Publishers. In the very first paragraph, the Music Publishers stated: “(Music) Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.
Anthropic must abide by well-established copyright laws, just as countless other technology companies regularly do”

The Music Publishers explained that they partnered with innovators, including entrepreneurs, start-ups, and established companies—they recognized and drove true innovation (for instance, Universal used AI in its business and production operations). However, Anthropic’s copyright infringement did not constitute innovation: “in layman’s terms, it’s theft”. In fact, the Music Publishers claimed that Anthropic violated the United States Copyright Act. They acknowledged that AI was a new technology, but they insisted that AI companies still had to follow the law. Technological advances could not come at the expense of the creators who essentially served as the backbone for AI’s development.

Anthropic built AI models by scraping and ingesting massive amounts of text from the internet (and potentially other sources), using all of it to train its AI models (Claude 2 in this case) and generate output based on this copied text. The Music Publishers claimed that the data Anthropic copied to fuel its AI models included the lyrics to their musical compositions. They urged that copyrighted material was not free just because it could be found on the internet—in this case, Anthropic never asked for permission. Notwithstanding Anthropic’s company Constitution (the goal was to be harmless, respectful, and ethical), the Music Publishers passionately argued that Anthropic committed copyright infringement since it generated identical or nearly identical copies of their lyrics. In the Complaint, the Music Publishers provided examples where famous songs were either completely or partially outputted in the response to user prompts.

In fact, the Music Publishers argued that:

Anthropic directly infringed the Music Publishers’ exclusive rights as copyright holders, including the rights of reproduction, preparation of derivative works, distribution, and public display

Anthropic unlawfully enabled, encouraged, and profited from massive copyright infringement by its users, so it was secondarily liable for the infringing acts of its users under well-established theories of contributory infringement and vicarious infringement

Anthropic’s AI output often omitted critical copyright management information regarding these works, making it so that the composers of the song lyrics frequently did not get recognition for being the creators of the works that were being distributed

The Music Publishers stated, “It is unfathomable for Anthropic to treat itself as exempt from the ethical and legal rules it purports to embrace”. According to the Music Publishers, there was no doubt that Anthropic profited from the infringement of the Music Publishers’ repertoires, since Anthropic was already valued at $5 billion, received billions of dollars in funding, and boasted about numerous high-profile commercial customers and partnerships. The Music Publishers stated in the Claim: “None of that would be possible without the vast troves of copyrighted material that Anthropic scrapes from the internet and exploits as the input and output for its AI models”

The Music Publishers noted that nothing about Anthropic was creative—Anthropic depended on the creativity of others and paid them nothing. This caused substantial and irreparable harm.
The Claim set out how Anthropic trained the data:

copied massive amounts of text from the internet (and potentially other sources), by “scraping” (or copying or downloading) the text directly from websites and other digital sources onto Anthropic’s servers, using automated tools, such as bots and web crawlers, and/or by working from collections prepared by third parties

cleaned the copied text to remove material that it perceived as inconsistent with its business model, whether technical or subjective in nature (such as deduplication or removal of offensive language)

copied the massive “corpus” and processed it in multiple ways to train the Claude AI models (encoding the text into tokens)

processed the data further to fine-tune the Claude AI model and engaged in additional reinforcement learning, based both on human feedback and AI feedback, all of which may require additional copying of the collected text

The following are Claims for Relief: Count I: Direct Copyright Infringement; Count II: Contributory Infringement; Count III: Vicarious Infringement; Count IV: Removal or Alteration of Copyright Management Information.

To that end, the Music Publishers requested relief against Anthropic in the form of Judgement on each of the claims above, an order for equitable relief, an order requiring Anthropic to pay the Music Publishers statutory damages, an order requiring Anthropic to provide an accounting of the training data and methods (and the lyrics on which it trained AI models), an order to destroy (under Court supervision) all infringing copies of the Music Publishers’ copyrighted works, costs, and interest.

On October 23, 2023, Anthropic initially stated that training its AI models constituted “fair use”, meaning that it was a lawful exemption under copyright law. Why? Because Anthropic was engaging in a use that was highly transformative. Indeed, the company mentioned that it did not intend to violate the law. Furthermore, on November 16, 2023, the Music Publishers brought a Motion seeking a preliminary injunction requiring Anthropic to stop using their music lyrics. However, by January 6, 2025, it was reported that Anthropic and the Music Publishers reached a settlement where Anthropic would implement robust measures to ensure compliance with the law, namely revising its data collection and training methodologies to exclude copyrighted content, unless proper licenses or permissions have been obtained. Also, it agreed to have more stringent oversight on its data sources to mitigate the risks of inadvertently using protected material in future AI training.

What Can We Take from This Development?
Following the debacle with the Music Publishers, one would think that Anthropic would be striving to promote ethical AI practices and foster trust with both the creators of artistic works and the wider public. One would think that Anthropic learned its expensive lesson. But now, Anthropic has to face Reddit. The disappointing part of the story is that Anthropic similarly scraped Reddit content from the internet and used it to train Claude models—clearly without permission and without entering into a proper licensing agreement. While this is technically not a copyright infringement case, it is similar in that the User Agreement was allegedly breached because there were terms that Anthropic needed to comply with, but instead it appeared to have violated them. In particular, to use Reddit, the terms in clauses 3 and 7 had to be complied with.
And Anthropic was warned—but Anthropic continued to hit Reddit’s servers and scrape away so that Anthropic could train Claude (without paying). This appears to be the first time that a big tech company has challenged an AI model provider over its training data practices, and it will be interesting to see what happens. As with the case with the Music Publishers, Anthropic will likely have to settle and promise not to do this again. This may be what the company has to ultimately do in order to preserve the delicate balance between innovation and the rights of companies like Reddit (along with its users). For a company that was just valued at $61.5 billion (up from the $5 billion noted in the Complaint a couple of years ago), it may be something that Anthropic needs to do sooner rather than later in order to preserve its reputation. Anthropic spoke with TechCrunch recently and said that it disagreed with Reddit and would vigorously defend itself. We shall see what happens in the coming months…

  • 23andMe Goes Bankrupt | voyAIge strategy

23andMe Goes Bankrupt
What will happen to all of that genetic data?
By Christina Catenacci, human writer
Apr 11, 2025

Key Points:
23andMe, a direct-to-consumer genetic testing company, has declared bankruptcy
There is significant concern about how customers’ genetic data will be protected: now and in the future
Consumers are urged to delete their data, and businesses are encouraged to learn from the data breach (review policies and procedures and ensure that there is compliance with the law)

What happened?
23andMe, the company that provided basic ancestry as well as health and ancestry services (with a touted 99 percent accuracy), has filed for bankruptcy in the United States. This entails filing under Chapter 11 of the United States Bankruptcy Code. The direct-to-consumer genetic testing company was founded in 2006. It was the first company to offer autosomal testing by asking users to directly submit saliva samples that would be analyzed to produce charts of their background and lineage. But the company experienced a serious data breach in 2023, where seven million customers were subject to unauthorized access of their genetic data. The ordeal took about five months to resolve, and it ruined the company’s reputation. What’s more, the affected customers launched a class action in the United States, which they ultimately settled with the company for $30 million. At this point, the company has secured financing and will continue to operate during a sale process. The company has listed its assets and estimated liabilities to be between USD 100 million and 500 million. Although it is still operating while trying to find a buyer, the company recently laid off 40 percent of its workforce and has ended its therapeutics division.

What will happen to the genetic data involved in the 23andMe data breach?
We need to examine the full Privacy Statement (Statement). Last updated March 14, 2025, the Statement says that “At 23andMe, Privacy is in our DNA”. The information that the company collects includes Individual-level Information (information about a single individual, such as their genotypes, diseases or other traits or characteristics) and De-identified Information (information that has been stripped of identifying data, such as name and contact information, so that an individual cannot reasonably be identified). The types of personal information collected include registration information; genetic information; sample information; self-reported information; user content; and web-behaviour information. The company collects the information through the customer providing it, service providers collecting cookies or analytics tools, other third parties involving customers such as gifting a testing kit, and the company itself with its inferences. The company uses personal information in order to: provide their services; analyze and measure trends and usage of the services; communicate with customers; personalize, contextualize and market their services to customers; provide cross-context behavioural or targeted advertising; enhance the safety, integrity, and security of their services; enforce, investigate, and report conduct violating their Terms of Service or other policies; conduct surveys or polls, and obtain testimonials or stories; comply with their legal, licensing, and regulatory obligations; and conduct 23andMe Research, if customers choose to participate.
More precisely, the purpose of 23andMe Research is to make new discoveries about genetics and other factors behind diseases and traits. “23andMe Research” means research activities performed by 23andMe, either independently or jointly with third parties, and overseen by an independent ethics review board. In terms of data sharing, the company shares with service providers, friends and family members if the customer so wishes, affiliates and commonly owned entities, and third parties related to law, harm, and the public interest. That said, the Statement clearly stipulates that the company does not share customer information with public databases, insurers, employers, or law enforcement absent a valid court order, subpoena, or search warrant.

With respect to security, the company states that it implements physical, technical, and administrative measures aimed at preventing unauthorized access to or disclosure of customers’ Personal Information. Moreover, it advises that “Please recognize that protecting your Personal Information is also your responsibility. Be mindful of keeping your password and other authentication information safe from third parties, and immediately notify 23andMe of any unauthorized use of your login credentials”. In addition, the Statement clarifies that it retains personal information for as long as necessary to provide the services and fulfill transactions that are requested by customers. Customers can also choose (or choose not to) store their sample; view health reports; share their information with genetic relatives or other users; receive personalized recommendations based on sensitive data categories; receive promotional communications; and participate in research.

It is clear that despite these points made in the Statement, there have been several criticisms of the company’s handling of personal data, particularly genetic data. For instance, it has been noted that the data breach took place because there was an attack that exploited weak security practices. That is, there was no multi-factor authentication feature; there was unnecessary information disclosure, where the DNA Relatives and Family Tree features exposed data from other users, amplifying the breach’s impact; and users were also reusing passwords across different services. According to Digital Defenders, there were things that the company should have done:

Use multi-factor authentication
Monitor for security events to stop attacks earlier
Rate limit logins to slow down and frustrate attackers using automated tools
Have account lockout policies so accounts get locked after a set number of failed attempts
Have stronger password policies to reduce password reuse risks
Incorporate data minimization so that less data is collected in the first place
Use the principle of least privilege so that users only have access to the data they need

The class action settlement in the United States can help to compensate for any customer losses related to the data breach. However, it is currently not clear how the genetic information of customers will be handled by a new successor company (the company could be sold to a new company, which may want to make new terms and conditions).

What has happened in Canada?
On June 10, 2024, the privacy authorities for Canada and the United Kingdom (UK) launched a joint investigation into the data breach that was discovered in October 2023 at the global direct-to-consumer genetic testing company 23andMe.
On the Privacy Commissioner of Canada (OPC) website, the announcement stated that 23andMe is a custodian of highly sensitive personal information, including genetic information, which does not change over time. The data can reveal information about an individual and their family members, including about their health, ethnicity, and biological relationships. This makes public trust in these services essential. Presently, the OPC is still investigating the matter. Both Canada (the OPC and provincial Commissioners) and the UK noted that the sensitive information needs to be protected: “In the wrong hands, an individual’s genetic information could be misused for surveillance or discrimination,” said Commissioner Philippe Dufresne. “Ensuring that personal information is adequately protected against attacks by malicious actors is an important focus for privacy authorities in Canada and around the world”.

Likewise, the Information Commissioner’s Office (ICO) in the UK made an announcement in June 2024 of the investigation with the OPC. The goal is to examine: the scope of information that was exposed by the breach and potential harms to affected people; whether 23andMe had adequate safeguards to protect the highly sensitive information within its control; and whether the company provided adequate notification about the breach to the two regulators and affected people as required under Canadian and UK data protection laws.

The ICO recently announced that in early March 2025, it issued 23andMe with provisional findings, a notice of intent to fine £4.59 million and a preliminary enforcement notice: “We would stress these findings are provisional and, as with all preliminary findings, are subject to representations from 23andMe including in relation to affordability considerations. The ICO will carefully consider any representations made before taking a final decision. We are aware that 23andMe has filed for Chapter 11 bankruptcy in the US to facilitate a sale process. We are monitoring the situation closely and are in contact with the company. As a matter of UK law, the protections and restrictions of the UK GDPR continue to apply and 23andMe remains under an obligation to protect the personal information of its customers."

Given the settlement in this case in the United States, it will be interesting to see what takes place in Canada. It will be important to note that in Ontario, the Personal Health Information Protection Act states in section 4 that “personal health information” means identifying information about an individual in oral or recorded form, if the information relates to the physical or mental health of the individual, including information that consists of the health history of the individual’s family. It will be critical to see how the phrase “including information that consists of the health history of the individual’s family” is treated by the regulator—many family members are also caught in the 23andMe mess thanks to their relatives hastily giving up their genetic data. The ramifications are very serious when it comes to thinking about how employers and insurers may obtain and use this information in the future. Correspondingly, in the federal sphere, the Personal Information Protection and Electronic Documents Act (PIPEDA) states in section 2 that “personal health information”, with respect to an individual, whether living or deceased, means information concerning the physical or mental health of the individual.
There could also be human rights provisions that are triggered regarding genetic discrimination. For instance, in Ontario, the Ontario Human Rights Commission has urged insurance companies to avoid using enumerated grounds of discrimination contained in the Human Rights Code and genetic testing information for measuring risk. It has also cautioned employers that they can only test job applicants with pre-employment medical exams for the purpose of determining a person’s ability to perform essential job duties. Furthermore, in the federal sphere, the Canadian Human Rights Act states in section 3 that genetic characteristics is one of the prohibited grounds of discrimination. Moreover, section 3(3) of the Act states the following: “Where the ground of discrimination is refusal of a request to undergo a genetic test or to disclose, or authorize the disclosure of, the results of a genetic test, the discrimination shall be deemed to be on the ground of genetic characteristics”

And in order to protect workers from the “interview” that consists of requiring the taking of a genetic test (and potential consequent refusal to hire or promote), legislation in the federal sphere, namely the Canada Labour Code, has a thoughtful section called Division XV.3: Genetic Testing. Essentially, every employee: is entitled not to undergo or be required to undergo a genetic test; and is entitled not to disclose or be required to disclose the results of a genetic test. Most importantly, employers are not allowed to dismiss, suspend, lay off, or demote an employee, impose a financial or other penalty on an employee, or refuse to pay an employee remuneration in respect of any period that the employee would, but for the exercise of the employee’s rights under this Division, have worked, or take any disciplinary action against or threaten to take any such action against an employee just because the employee refused a request by the employer to undergo a genetic test, refused to disclose the results of a genetic test, or on the basis of the results of a genetic test undergone by the employee. Employees can make a complaint if their employers contravene these provisions. These provisions were a result of the forward-thinking Genetic Non-Discrimination Act (a 2017 amendment) that made changes to the human rights and employment provisions in the federal legislation. In 2020, the Supreme Court of Canada confirmed in Reference re Genetic Non‑Discrimination Act that the Genetic Non-Discrimination Act of 2017 was indeed constitutional despite jurisdictional concerns, and applied to everyone in Canada.

What the foregoing suggests is that both Ontario and the federal government have basic protections in place to protect employees and job applicants, as well as individuals who need to buy insurance. Just imagine a job applicant going to an interview with an employer: in the near future, will that employer ask the applicant for a sample such as a saliva test? This all reminds me of a 1997 movie called Gattaca, which depicts a future society centred on eugenics. The main character, Vincent, was not conceived through genetic selection, was called an “invalid” (unlike his brother, Anton, who was a “valid”), and faced several instances of genetic discrimination, even though it was illegal. In fact, he found a way to live among the valids and achieve his lifetime goal of working in the spaceflight conglomerate Gattaca Aerospace Corporation.
But he had to pose as a valid to do this, using donated hair, skin, blood, and urine samples of a valid who had been hit by a car and paralyzed in an accident. In my view, Gattaca, which was set in the “not too distant future”, could happen in reality. Although there are currently protections in place in Canada, the strengths of the Canadian Human Rights Act and the Canada Labour Code should be duplicated all across Canada. Since human rights and employment are also regulated at the provincial level, the changes need to be reflected in both the human rights and employment legislation of the provinces and territories.

What should consumers do?
Many 23andMe customers have been recommended to delete their data and their accounts. It is clear that nothing has changed since the data breach—the company is still operating in the same manner when it comes to storing, managing, or protecting customer data. The Ontario Information and Privacy Commissioner warned consumers in March 2025 about what will happen to their genetic data—she points out that there is a risk that the data privacy safeguards that customers initially signed on to may change. That is, when company ownership changes hands, the terms of engagement could as well. Also, it is important to note that there is a class action in Canada, where the Supreme Court of British Columbia appointed a representative plaintiff and established the class membership criteria on December 20, 2023. In an interview with CBC, the representative plaintiff expressed regret that he gave up a significant amount of intimate data: “You're giving them everything. You're basically giving them the raw code of yourself, if you will — you at your most finest essence"

How did the data breach happen?
Hackers initially got into around 14,000 accounts by using old compromised passwords that customers had recycled from other accounts on other sites, and then used those accounts to access 5.5 million DNA Relatives profiles. In a blog dated March 26, 2025, 23andMe states that it is required to comply with its privacy policy and the law with respect to the treatment of customer data. Also, it states that “Under Chapter 11, we intend to use the sale process to maximize the value of our business while continuing to operate”. While the company tries to find a new buyer, customers are still able to go in and access their accounts, genetic reports, and any stored data. They can delete their data and accounts, which is recommended. Additionally, the blog states the following:

“Through the sale process, 23andMe will look to secure a partner who shares in its commitment to customer data privacy and will further its mission of helping people access, understand and benefit from the human genome. Any buyer will be required to comply with our privacy policy and with all applicable law with respect to treatment of customer data. Our users’ privacy and data are important considerations in any transaction, and we remain committed to our users’ privacy and to being transparent with our customers about how their data is managed. You have choices. You can opt into and out of our research at any time by updating your consent status in your account settings. If you opt out, we will stop using your information for research going forward (we cannot affect studies that have already been completed) and will discontinue use of your data within 30 days”

What can businesses learn from 23andMe?
Canadian businesses are recommended to review their privacy policies and security safeguards in order to ensure that any data that is under their control is being properly protected. When it comes to commercial transactions that are covered by PIPEDA, there are specific obligations that businesses must meet if a data breach is discovered. Businesses must act quickly and make the necessary notifications to the Privacy Commissioner and affected individuals. It is interesting that 23andMe is promising that the unknown buyer would have to comply with its privacy policy—as the Information and Privacy Commissioner pointed out, a new company can change the terms of engagement and thus the way in which it protects user privacy. We may soon find out the results of the OPC’s investigation of 23andMe as well. The report may contain additional information and learnings—we will keep you posted.


  • Legal Tech Woes | voyAIge strategy

    Legal Tech Woes The Story of How Fastcase Sued Alexi Technologies By Christina Catenacci, human writer Dec 5, 2025 Key Points On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017 In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database According to Fastcase, Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform—it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017, in the United States District Court for the District of Columbia. Background It is first important to understand the context of this lawsuit. In particular, Fastcase spent decades building one of the industry’s most comprehensive and innovative legal research databases. In 2023, Fastcase merged with vLex LLC (vLex) and became part of the vLex Group, which was subsequently acquired by Clio, Inc (Clio), a company that is valued at $5 billion , on November 10, 2025. The acquisition was for $1 billion and was characterized as one of the most significant transactions in legal technology history. On the other hand, Alexi initially operated with a small team of research attorneys who used a passage-retrieval AI system to help prepare legal memoranda for clients. In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database. From Fastcase’s perspective, the main term of the Agreement was that the licence was expressly restricted to internal research purposes. For example, it was limited to research that was performed by Alexi’s own staff lawyers in preparing client memoranda. And most importantly, Alexi agreed that it would not use Fastcase data for any commercial purpose, use the data to compete with Fastcase, or publish or distribute Fastcase data in any form. This Agreement was important to Fastcase, given the number of years and the millions of dollars in investment to create one of the most sophisticated legal research databases in the industry. More precisely, Fastcase’s efforts involved extensive text and metadata tagging, specialized structuring into HTML, and proprietary formatting and harmonization processes that required significant technical expertise and sustained investment. Thus, Fastcase entrusted Alexi with access to this highly valuable, unique proprietary compilation solely for the narrow internal research purpose defined in the Agreement. There was a time when Fastcase and Alexi considered entering into a partnership. In 2022, Alexi sought to integrate its passage-retrieval AI system with Fastcase’s database so that Alexi customers could directly access Fastcase case law. However, that partnership never materialized. Instead, in 2023, Fastcase proceeded with its merger with vLex and expanded its own research offerings. 
Yet, following the merger, Fastcase continued operating under the Fastcase name, and the Agreement with Alexi remained in full force and effect. But then, according to Fastcase, Alexi began pivoting from occupying different roles in the legal-tech space into direct competition with Fastcase. That is, Fastcase says that Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform—it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions. What’s more, Alexi began holding itself out as a full-scale legal-research alternative to incumbent providers, including Fastcase. According to Fastcase, Alexi shortcut the massive investment that was required to build a comprehensive commercial legal-research platform, using the Fastcase data for the very commercial and competitive purposes that the Agreement expressly forbade. That was not all: Fastcase believed that Alexi misused Fastcase’s intellectual property to bolster its own credibility and to suggest that there was an affiliation that did not exist. Further, Fastcase believed that Alexi misappropriated Fastcase’s compilation trade secrets. As a result, Fastcase says that Alexi has appropriated Fastcase’s decades of investment while simultaneously damaging Fastcase’s market position and goodwill.

But what Fastcase highlighted above all else was that Alexi never notified Fastcase of its changing service model, its expanding use of Fastcase data, or its intent to compete directly with Fastcase. It never even tried to renegotiate the Agreement to authorize its new uses. Instead, Alexi continued to rely on the internal-use license while using Fastcase data to build, train, power, and market a direct competitor. When Fastcase discovered what Alexi was doing, vLex (acting on behalf of Fastcase) sent Alexi a written Notice of Material Breach in October 2025. The notice explained that Alexi was using Fastcase data for improper commercial and competitive purposes in violation of the Agreement and demanded that Alexi cure its breach within 30 days, as required by the Agreement. In response, Alexi denied any wrongdoing—in early November 2025, Alexi’s counsel sent a letter rejecting Fastcase’s notice, admitting that Alexi had used the Fastcase Data to train and power its generative AI models, and asserting that this did not constitute a violation of the Agreement. Moreover, the letter stated that the intention of the Agreement was never to preclude Alexi from using Fastcase’s data as source material for Alexi’s generative AI product. Rather, this was exactly why Alexi was paying Fastcase nearly a quarter million dollars annually. Fastcase has terminated the Agreement, yet Alexi has continued to use the data.

What did Fastcase Claim Against Alexi?
Fastcase has made the following claims:

Breach of contract. Alexi was granted only a limited, non-exclusive, non-transferable license to use Fastcase’s data solely for Alexi’s internal research purposes. Fastcase performed its obligations under the Agreement by granting access to the data, but Alexi materially breached the Agreement by using the data for commercial and competitive purposes. Alexi has caused, and will continue to cause, irreparable harm to Fastcase, including loss of control over its proprietary compilation, erosion of competitive position, and impairment of contractual and intellectual property rights

Trademark infringement.
Fastcase has, at all relevant times, used the Fastcase trademarks in commerce in connection with its legal-research products, software, and related services. Its trademark registration remains active, valid, and in full force and effect. Without Fastcase’s consent or authorization, Alexi has used, reproduced, displayed, and distributed the Fastcase marks in its platform interfaces, public presentations, promotional materials, and commercial advertising. It even used the marks in ways that suggested that Alexi’s products were affiliated with Fastcase when no partnership was ever formed, and this constitutes infringement Misappropriation of trade secrets . Fastcase has devoted decades of engineering, editorial, and resource investment to build and refine its compilation. To maintain the secrecy and value of its compilation, Fastcase has required licensees and partners to enter confidentiality, non-use, and restricted-use agreements, including the Agreement with Alexi, and other technical and security measures. Alexi’s misappropriation included using Fastcase’s confidential compilation and metadata structure to train large-scale generative AI models, power user-facing legal-research features, generate outputs incorporating Fastcase data, and provide end users with direct access to the content of the Fastcase compilation. Fastcase has suffered and continues to suffer substantial harm, including loss of licensing revenue, competitive injury, market displacement, unjust enrichment to Alexi, erosion of goodwill, and diminution of the value of Fastcase’s proprietary compilation False Designation of Origin and Unfair Competition . Without the consent of Fastcase, Alexi has used Fastcase’s marks in commerce on its platform interfaces, in product demonstrations, and in promotional and advertising materials. This falsely suggests to consumers, legal professionals, and industry participants that Fastcase endorses, sponsors, authorizes, or is affiliated with Alexi’s competing legal-research platform, even though no such relationship exists and Fastcase has expressly declined to form a partnership with Alexi. This constitutes a false designation of origin and a false or misleading representation of affiliation, connection, or sponsorship. This conduct is likely to cause and has already caused consumer confusion, mistake, and deception as to the origin of Alexi’s products, whether Alexi’s products incorporate or are powered by Fastcase’s proprietary services with authorization, and whether Fastcase has partnered with, approved, or is otherwise associated with Alexi Consequently, Fastcase is asking for the following: Judgement for Fastcase A declaration of the breach of the Agreement A permanent injunction An award of compensatory damages An order of disgorgement requiring Alexi to account for and disgorge all revenues, profits, cost savings, and other benefits derived from its unauthorized use of Fastcase’s data An order requiring the return and destruction of all Fastcase data in Alexi’s possession, custody, or control, including all copies, derivatives, embeddings, model weights, datasets, or training artifacts incorporating or derived from Fastcase Data, together with a certification of complete purge and destruction Monetary relief for actual damages attributable to the infringement What Can We Take from This Development? At this point, we have not yet seen Alexi’s defence. Clearly, Alexi will likely argue that this was simply a misunderstanding of the Agreement. 
One question that will indeed arise during the proceedings is whether the Agreement set out a definition of “internal research purposes”. Could there actually be a way to argue that training an AI system was in the scope of the Agreement, and that this was why Alexi was paying so much to Fastcase each year? Although Alexi may have internally considered fully automating the creation of legal research memos, it may be difficult for it to show that it contemplated at the time of forming the Agreement that it would remove its internal research component entirely and begin publishing Fastcase case law directly to end users. We will have to wait and see what happens.

  • Chatbots at Work – Emerging Risks and Mitigation Strategies | voyAIge strategy

Chatbots at Work – Emerging Risks and Mitigation Strategies
How to Recognize and Overcome the Invisible Risks of AI
By Tommy Cooke
Nov 22, 2024

Key Points:
Personal AI chatbots in the workplace can pose significant risks to data privacy, security, and regulatory compliance, which can lead to severe legal and reputational consequences
Employees using personal AI tools can inadvertently expose proprietary information, increasing the risk of intellectual property breaches and confidentiality violations
Organizations can mitigate these risks through clear policies, employee education, and proactive monitoring, allowing for responsible AI usage without compromising security or creativity

AI is rapidly transforming where and how we work and play. As our Co-Founder Dr. Christina Catenacci deftly describes, AI chatbots have become commonplace friends, mentors, and even romantic partners. At a rate that is surprising observers in nearly every industry, AI is creating incredible opportunities that are often fraught with challenges. AI services tend to be so quickly packaged and sold that subscribers do not find much space to reflect on fit, appropriateness, and potential blind spots that could cause misalignment in even the most well-intentioned organization.

As a result, a new kind of workplace is emerging. The remote and hybrid work models ushered in via the pandemic seem a distant memory now that employees are bringing their own personal AI into the office. In-pocket AI is appealing. Why wouldn’t an organization want its employees to benefit from improved workflows and creativity, especially if it doesn’t have to pay for it? A critical dynamic in this new pocket AI workplace reality is that employers are seeing new blind spots and challenges emerge. Understanding and navigating them is crucial for avoiding data leaks, maintaining compliance, and protecting intellectual property. As we head into 2025, organizations must take time to recognize that invisible AI is unmanaged AI. This exposes an organization to far-reaching consequences for itself and its stakeholders. By understanding these risks, organizations can address them in ways that not only protect the organization, but position its executives as thought leaders capable of aligning values, building trust, and enhancing overall efficiency without compromising employee creativity and freedom.

The Risks of AI Chatbots
Data privacy and security as well as compliance are top of mind for most employers we speak to. Because an employee’s personal AI chatbot requires constant internet access and cloud storage, access is facilitated by an employer’s Wi-Fi network. This increases the risk of corporate data being stored incorrectly on third-party servers or inadvertently intercepted and exposed. It’s important to recognize that personal AI chatbots in a workplace thus raise susceptibility to industry regulations like the GDPR or HIPAA, significantly raising an organization’s legal exposure to fines or penalties. Most AI chatbot services train their AI models in real time off the data their users provide them—and this includes sensitive intellectual property. Consider the following hypothetical prompt that a marketing employee at a pharmaceutical company may enter into their personal AI chatbot: “I have a client named [x] who has 37 patients in New York State with [y] medical conditions. They are born in [a, b, c, and d] years. Analyze our database to identify suitable drug plans.
Be sure to reference our latest cancer treatment strategy, named [this].”

First, the prompt may lead to privacy issues since it includes potentially identifiable information about patients, such as their location, medical conditions, and birth years. Depending on how an AI chatbot processes and stores this information, it could lead to violations of HIPAA, as sharing protected health information (PHI) with an unapproved, third-party application puts the employer at risk of serious regulatory breaches, not to mention reputational damage. Moreover, patients’ identities have been incidentally reverse-engineered with far less data via far more seemingly innocuous methods.

Second, the hypothetical prompt contains confidential information when it mentions the employer’s latest cancer treatment strategy. Strategic information related to drug plans or treatment approaches may be inadvertently referenced and/or suggested to competitors’ employees who are using the same AI chatbot.

Third, the hypothetical prompt entered by the make-believe employee incorrectly assumes that the AI chatbot has access to one of the company’s secure databases. Despite having uploaded a few protected PDFs to the AI chatbot, the employee had used the wrong terminology. The potential for this to cause problems is quite significant as it can trigger the AI chatbot to creatively but silently fill in the blanks; as we know, AI chatbots have a tendency to hallucinate. Remember, they do not reflect the living world but rather analyze data models of the real world that you and your employees live in. The point is that the AI chatbot may generate misleading or inaccurate information simply because it is only as robust and comprehensive as the data it trains from. There is a significant risk that the employee recommends that their clients and colleagues follow a particular drug plan that could be based on flawed and incomplete health, medication, and business data.

Mitigation Strategies
AI is here to stay. The solution is not to ban or forbid these tools. That is unrealistic and may inadvertently cause friction for an employer when it decides to implement its own AI tools for employees down the road. Here are some proactive steps any organization can follow to minimize risks while enabling employees to use AI responsibly:

1. Build a Policy
Set expectations that outline what is and is not allowed when it comes to personal AI chatbots. Include rules about handling sensitive data, consequences for non-compliance, and standards for AI tool vetting. Moreover, generate a one-stop guideline PDF that gives your employees steps they should follow along with examples of both problematic and approved prompts.

2. Educate your Employees
Training employees on AI risks and best practices ensures they understand their role in protecting your organization and not merely existing inside of it. Training is always the first line of defense, and it is a proven method for promoting awareness and responsible use of AI.

3. Monitor and Audit
Numerous security solutions exist to identify what tools are being used inside of a company’s network. Implement systems to track AI tool usage and audit their data flows to identify unauthorized or high-risk applications. Inform your employees that you are monitoring AI-based network activity and will be conducting annual audits to ensure that their activity is compliant with organizational policy requirements.
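To make the monitor-and-audit step concrete, here is a minimal sketch of one check a security team might run: scanning a web-proxy log for traffic to known AI chatbot services. The domain list, log columns, and file name are illustrative assumptions; a real deployment would integrate with the organization's existing proxy, DNS logging, or CASB tooling rather than a standalone script.

```python
# Minimal sketch: flag traffic to AI chatbot services in a proxy log.
# The domain list, CSV columns, and file name are illustrative assumptions.
import csv
from collections import Counter

AI_CHATBOT_DOMAINS = {  # illustrative, not exhaustive
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_ai_traffic(proxy_log_path: str) -> Counter:
    """Count requests per user to AI chatbot domains in a CSV proxy log
    with columns: timestamp, user, destination_domain."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_domain"] in AI_CHATBOT_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_ai_traffic("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to AI chatbot services")
```

A report like this is a starting point for the annual audits mentioned above; it complements, rather than replaces, the policy and education steps.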
Mindfully Embracing an Opportunity
The rapid proliferation of AI companions challenges organizations to rethink how they relate to their employees. Risks certainly exist, but they are manageable through thoughtful policies, regular monitoring, and a strong training commitment. Allowing employees to use personal AI chatbots isn’t merely a risk – it’s an opportunity. When that opportunity is embraced, it signals trust, adaptability, and a forward-thinking culture that responds proactively, not reactively, to AI. HR leaders, IT pros, and virtually every executive can enable employees to innovate and create while simplifying tedious workflows through AI chatbots. This can work significantly in the organization’s favor while keeping it safe. Consider doing so to show your organization, your employees, and your clients that you are ready for the rapidly evolving digital landscape ahead in 2025 and the years to come.

  • The Chief AI Officer (CAIO) | voyAIge strategy

The Chief AI Officer (CAIO)
An example of AI leadership in organizations
By Christina Catenacci, Human Writer
Jan 31, 2025

Key Points
Newly hired CAIOs are expected to be one of the most strategic members of the organization
The role of CAIO is interdisciplinary because the CAIO must ensure that AI is thoughtfully integrated to add value for clients and have impact on every aspect of the organization
The CAIO, a senior executive, must define the AI strategy and oversee AI projects, manage AI risks, and manage stakeholder relationships

The role of CAIO is a relatively new one; indeed, it is gaining prominence within organizations deploying GenAI. In fact, about 11 percent of midsize and large organizations have already filled a CAIO role, and another 21 percent are actively seeking one. That said, we are not there yet: a Gartner study revealed that although half of organizations have an AI leader, a whopping 88 percent do not have a CAIO. Newly hired CAIOs are expected to be one of the most strategic members of the organization, likely because the CAIO has a 360-degree perspective on AI across the organization. As a senior executive who is responsible for the overall strategy, development, and implementation of AI initiatives within an organization, the CAIO has several responsibilities.

What are the main responsibilities of the CAIO?
It is important to note that this type of role is interdisciplinary: the CAIO must ensure that AI is thoughtfully integrated to add value for clients and have impact on every aspect of the organization. This, of course, requires the CAIO to always stay on top of everything. But it is noteworthy that CAIOs emphasize the importance of finding ways for the company to use the newest technology to help the clients—not just for technology’s sake.
The following includes some of the main responsibilities of the CAIO:

Defining the company’s AI strategy: working with the CEO and other senior executives, the CAIO defines the organization's AI strategy, which involves creating goals, a roadmap for achieving those goals, and resource allocation for AI initiatives

Overseeing the development and implementation of AI projects: the CAIO oversees the development and implementation of AI projects across the organization, which includes working with cross-functional teams to ensure that AI projects are aligned with the organization's overall strategy and delivered on time and within budget

Managing AI risks: the CAIO is responsible for managing AI risks, which includes identifying and mitigating the risks associated with AI projects, and ensuring that AI is used responsibly and ethically

Building and maintaining relationships with key stakeholders: the CAIO builds and maintains relationships with key stakeholders, such as customers, partners, and regulators, in order to ensure that AI is used in a way that meets the needs of all stakeholders and is in line with the organization's overall goals

In order to fulfill these responsibilities, the CAIO needs to have the following skills: technical skills; business acumen; leadership skills; communication skills; collaboration skills; vision; knowledge about data governance and privacy; and legal and ethical knowledge.

Let us take a few examples of what a CAIO may do. A CAIO may need to: develop an AI strategy; identify key business areas where AI can drive innovation, improve efficiency, and enhance decision-making; lead cross-functional teams to integrate AI capabilities into products, services, and internal processes; create policies and frameworks to ensure that ethical and responsible use of AI exists across the organization; collaborate with legal and compliance teams to address data privacy, bias mitigation, and regulatory requirements related to AI; promote transparency and accountability in AI-driven decision-making; and conduct regular audits of AI models to identify vulnerabilities, such as bias, inaccuracies, or potential data breaches (a minimal illustration of such an audit appears at the end of this article).

Benefits
There are several benefits. On one hand, some of the short-term benefits could include the CAIO helping to enable AI adoption and digital transformation in the organization and becoming a leader in the tech industry. On the other hand, some of the long-term benefits could include playing a critical role in shaping the future of AI and its impact on business and society.

Conclusion
Clearly, the CAIO is well-positioned to make a real impact on an organization by using AI to improve efficiency, productivity, and strong team building across the organization. Although it is difficult to predict what will happen in the future, I think that the CAIO is likely to be around for some time, given that the expectation is that CAIOs will become more strategic, cross-functional, global, and ethical. More specifically, CAIOs will likely ensure that AI is increasingly integrated into all aspects of business operations. Companies with CAIOs are most likely going to be the most adaptive so that they do not fall behind. As suggested by many CAIOs, the time to get started is now, and taking small steps can work. One small step could involve embracing devices that use AI to address training and skill barriers. Another step could be to start partnering with organizations. As they say, when is now a good time to start?
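As a concrete illustration of the audit responsibility listed above, here is a minimal sketch of one check a CAIO might commission: a demographic-parity comparison of a model's positive decisions across groups. The column names and the 10 percent review threshold are illustrative assumptions, not a regulatory standard.

```python
# Minimal sketch: a demographic-parity check on model decisions.
# Column names and the 10% threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates across groups
    (0.0 means the rates are identical)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "B", "B", "B", "A"],
        "approved": [1,   0,   1,   1,   1,   1],
    })
    gap = demographic_parity_gap(decisions, "group", "approved")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # illustrative review threshold
        print("Gap exceeds threshold - flag the model for human review")
```

A check like this is only one slice of an audit; in practice it would sit alongside accuracy, robustness, and data-provenance reviews.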

  • Technology Lessons from Orange Shirt Day | voyAIge strategy

    Technology Lessons from Orange Shirt Day Applying Indigenous Principles to Build Ethical and Inclusive Organizational Practices By Tommy Cooke Oct 4, 2024 Key Points: OCAP principles guide respectful and community-centered data management Indigenous Data Sovereignty emphasizes transparency and accountability in data use Indigenizing technology creates inclusive systems that honor cultural perspectives The National Day for Truth and Reconciliation, also known as Orange Shirt Day in Canada, is a time to reflect on the painful history of residential schools and it is a time to honor survivors and those who never came home. It is a day of remembrance – and also a time for learning and growth. At voyAIge strategy, we have recently reflected on the many lessons Indigenous technology and media leaders provide through their work and example. We recognize that Indigenous business practices present an opportunity for organizations to revitalize their own practices. Our Co-Founder Tommy hosts the What’s That Noise?! Podcast . For the past year, it has been the home to a special series called One Feather, Two Pens , co-hosted by Lawrence Lewis , Co-Founder and CEO of OneFeather . This Insight draws upon some of the series’ nine podcast episodes to share key values and principles that can serve as powerful growth opportunities for organizations working with data and complex technology like AI. Here are four principles and values that could help your organization foster more ethical, inclusive, and accountable data practices: 1. Ownership, Control, Access, and Possession (OCAP) The OCAP principles – Ownership, Control, Access, and Possession – are essential for understanding respectful data management. Indigenous communities have long advocated that they must have the ability to govern data about their people and lands. OCAP is not merely about data privacy. It speaks to the heart of how communities can control and protect data that are used to generate stories about Indigenous peoples. These principles are also crucial for protecting the ability for Indigenous peoples to continue authentically telling their own stories. At any organization, OCAP can reshape how data is treated. Data are not just numbers. They represent people, their histories, and their futures. As Ja'Elle Leite , CEO of Ultralogix , mentioned on a recent episode of One Feather, Two Pens, stories from Indigenous communities carry important lessons for those who are willing to listen. Reflecting on how your organization handles and protects data, particularly when it relates to vulnerable or underrepresented groups, is a crucial step toward demonstrating ethical technology use. This applies to AI as much as it does any technology. 2. Indigenous Data Sovereignty Indigenous Data Sovereignty refers to the right of Indigenous peoples to govern data about their communities, cultures, and lands. If OCAP are data management steps an organization can take, Indigenous Data Sovereignty is a goal for many Indigenous peoples who encounter data and technology. In the era of Big Data, an organization can quickly lose sight of who controls the narratives that data create. Indigenous Data Sovereignty emphasizes the importance of giving Indigenous communities agency over their data and ensuring that its use aligns with their values. For your organization, applying this principle could mean being transparent about how data is collected, stored, and shared. 
It involves making sure that these processes can be explained and understood by the community members themselves. If your organization is gathering data that involves Indigenous populations, this principle is crucial for maintaining trust. 3. Indigenization of Technology Indigenization means embedding Indigenous perspectives and values into existing systems. It’s the deliberate practice of protecting and promoting culture through tools and technologies. It’s helpful to recognize that bringing an Indigenous lens into an organization doesn’t just benefit Indigenous stakeholders. It also helps organizations make their technology more inclusive and culturally aware. James Delorme , CEO of Indigelink Digital Inc., who was featured in Episode 8 of the podcast series, highlighted the importance of intentionally bringing Indigenous perspectives into spaces and places where they haven’t traditionally existed. For businesses, this might look like creating spaces for Indigenous collaboration, particularly in decision-making processes related to technology and data. For example, this could involve re-evaluating how data flows through your company, ensuring that systems in place don’t marginalize Indigenous voices or stories. 4. Narrative Authority Elle-Máijá Tailfeathers , award-winning filmmaker and storyteller, shared in Episode 5 of our series the importance of narrative authority – the power and right of individuals and communities to control, shape, and share their own stories. In an organizational setting, narrative authority prompts organizations to think deeply about how they present information, especially when it relates to Indigenous peoples. Organizations must be self-aware of how their data collection, products, and services might filter or alter Indigenous narratives. Engaging directly with Indigenous communities is vital when your data or technology practices involve or represent their stories. This ensures that narratives are not only accurate but also owned and told by the right people and voices. Bridging Indigenous Data and Technology Insights with Organizational Practice Reflecting on these principles offers organizations a chance to recalibrate their ethical approaches to data and technology. Ethics in the digital age isn’t just about compliance or creating polished policies. It’s about respecting the stories behind the data and the people represented by them. As we learned from Indigenous thought leaders, ethical technology practices require constant dialogue, humility, and an openness. Here are three tips to help bridge Indigenous principles with your own organizational practices: Translate Ethics into Action : Don’t just publish ethics policies. Turn them into daily actions. Ask how OCAP or Indigenous Data Sovereignty can be applied to your organization's specific context. Engage Communities : Actively engage Indigenous voices, especially when your technology touches their data, culture, or representation. Make space for dialogue and collaboration. Be Accountable : Ensure transparency and accountability in how data is managed and shared. Being answerable to the communities your data affects is a hallmark of ethical practice. Orange Shirt Day reminds us of the power of stories, and Indigenous communities have much to teach us if we listen. By adopting even some of their many data and technology principles, organizations can not only create more ethical and inclusive data systems but also honor the cultural wisdom that strengthens them. Previous Next

  • Future of Jobs Report | voyAIge strategy

Future of Jobs Report What is Projected for the Future of Work? By Christina Catenacci, human writer May 1, 2025 Key Points The Future of Jobs Report 2025 has thoroughly examined a number of macrotrends and technology trends, and reported, for individual countries and globally, on the share of organizations that identified each trend as likely to drive transformation in their organization It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today's total employment), offset by the displacement of 92 million current jobs (about eight percent of today's total employment)—this means there will be a net growth of 78 million jobs (seven percent of today's total employment) Advances in technology are anticipated to drive skills change more than any other trend over the next five years. The most common workforce strategy in response to the macrotrends analyzed in the report is upskilling the workforce (85 percent) What is Projected for the Future of Work? The World Economic Forum recently released its Future of Jobs Report 2025. The report discusses the perspectives of over 1,000 leading global employers across 22 industry clusters and 55 economies from around the world. What does it say about the future of work? What does it predict about AI in the workplace? This article answers these questions. More specifically, the article touches on drivers of labour-market transformation; the jobs outlook; the skills outlook; barriers to transformation and strategies that can improve talent availability; and industry insights. What is the Global Labour Market Landscape in 2025? Undoubtedly, 2025 has been marked by the rising cost of living, geopolitical conflicts, climate issues, and economic downturns. This report is dated January 2025—before the recent tariff wars that were launched by America on several countries. At this point, the longer-term effects of these tariff wars are unclear when it comes to the markets, unemployment, and inflation. The projections for 2025 to 2030 are outlined below. What are the Drivers of Labour-Market Transformation? The following trends are considered drivers of transformation in the global labour market, which reshape both jobs and required skills:
Technological developments: Hands down, 60 percent of employers expect broadening digital access to transform their businesses, which is more than any other trend. This makes sense since growing digital access is a critical enabler for new technologies to transform labour markets. The three technologies that are expected to have the greatest impact on business transformation are AI, robots and autonomous systems, as well as energy generation and storage technologies. By far, AI is expected to have the most impact, with 86 percent of employers expecting that AI will transform their businesses by 2030. Indeed, there has been a rapid increase in investment and adoption across several sectors, and a surge in demand for GenAI skills
Economic uncertainty: Based on 2024 economic performance, there is some cautious optimism about the global economic outlook; however, more chief economists expect conditions to worsen rather than strengthen. Slow growth and political volatility keep many countries at risk of economic shocks, and 42 percent expect slower growth to impact their operations. 
Inflation is still high in low-income countries because of high food prices due to supply chain disruptions influenced by climate shocks, regional conflicts, and geopolitical tensions
Geoeconomic fragmentation: Geoeconomic tensions threaten trade and supply chains, especially in lower-income economies. Globally, governments are responding to geoeconomic challenges by imposing trade and investment restrictions, increasing subsidies, and adjusting industrial policies. The shift toward geoeconomic fragmentation has significant macroeconomic implications. In fact, about 34 percent of surveyed employers view heightened geopolitical tensions and conflicts as a key driver of organizational transformation
The green transition: About 47 percent of employers consider the ramping up of efforts and investments to reduce carbon emissions as a key driver for organizational transformation. As well, 41 percent of employers see the increased efforts and investments made to adapt to climate change as a significant driver for organizational change. The demand for green skills will continue to outpace supply. The report states, "To fully capitalize on opportunities created by the green transition and harness them in a way that is fair and inclusive, prioritizing green skilling is essential". Employers agree, in that 71 percent in the Automotive and Aerospace industry and 69 percent in the Mining and Metals industry expect carbon emissions reductions to transform their organizations
Demographic shifts: We have an aging and declining working-age population predominantly in higher-income economies (due to declining birth rates and longer life expectancy), and a growing working-age population in many lower-income economies (where younger populations are progressively entering the labour market). As a result, these shifts put greater pressure on a smaller pool of working-age individuals and raise concerns about long-term labour availability. Many employers facing the effects of the aging population are more pessimistic about talent availability, expect to face bigger challenges in attracting talent, and believe that they may need to rely on automation (79 percent) and to advance workforce augmentation (67 percent). In fact, 92 percent of employers think that they will need to prioritize upskilling and reskilling in the next five years
What is the Jobs Outlook? The Jobs Outlook addresses the issue of how employers expect certain jobs to grow and decline in response to the above-mentioned trends. It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today's total employment), offset by the displacement of 92 million current jobs (about eight percent of today's total employment). This means there will be a net growth of 78 million jobs (seven percent of today's total employment). The fastest growing job roles are driven by technological developments—Big Data Specialists, FinTech Engineers, AI and Machine Learning Specialists, Software and Applications Developers, Security Management Specialists, Data Warehousing Specialists, and Autonomous and EV Specialists. On the other hand, some of the top fastest declining jobs involve clerical roles, such as Cashiers and Ticket Clerks, Administrative Assistants and Executive Secretaries, Bank Tellers, as well as Accounting, Bookkeeping, and Payroll Clerks. 
Moreover, the largest growing jobs for 2025–2030 include Farmworkers, Labourers, and other Agricultural Workers, Light Truck or Delivery Services Drivers, Software and Applications Developers, Building Framers, Finishers, and Related Trades Workers, and Shop Salespersons. Conversely, the largest declining jobs include Cashiers and Ticket Clerks, Administrative Assistants and Executive Secretaries, Building Caretakers and Cleaners, Material-recording and Stock-keeping clerks, Printing and related Trades workers, as well as Accounting and Bookkeeping Clerks. The researchers also examined how the above trends would affect employment. Technology has been predicted to be the most divergent driver of labour-market change—broadening digital access will likely create and displace more jobs than any other macrotrend. That is, 19 million jobs would be created, and nine million jobs would be displaced. Also, AI and information processing technology are expected to create 11 million jobs and displace nine million jobs. Robotics and autonomous systems are predicted to be the largest job displacer, with a net decline of five million jobs. In fact, broadening digital access, advancements in AI and information processing, and robotics and autonomous systems technologies are the drivers of the fastest growing and declining jobs. When it comes to technology, there is some question about the interplay between humans, machines, and algorithms as they redefine job roles across industries—is it about automation or augmentation? Automation will change the way in which people work. In particular, as technology becomes more versatile, the proportional share of tasks performed solely by humans is expected to decline. Today, 47 percent of work tasks are performed mainly by humans alone, with 22 percent performed mainly by technology (machines and algorithms), and 30 percent completed by a combination of both. But by 2030, employers expect these proportions to be nearly evenly split across these three categories. Interestingly, the report states: "both machines and humans might be significantly more productive in 2030 – performing more or higher value tasks in the same or less amount of time than it would have taken them to do so in 2025 – so any concern about humans "running out of things to do" due to automation would be misplaced" Along the same lines, the researchers asked this question: If an increasing amount of a firm's total output and income is derived from advanced machines and proprietary algorithms, to what extent will human workers be able to share in this prosperity? They stressed that technology could be designed and developed in a way that complements and enhances, rather than displaces, human work. In fact, they underscore the importance of ensuring that talent development, reskilling, and upskilling strategies are designed and delivered in a way that enables and optimizes human-machine collaboration. That said, at an industry level, all sectors are expected to see a reduction in the proportion of work tasks performed by humans alone by 2030, but they differ in the share of this reduction that is projected to be attributable to automation versus augmentation and human-machine collaboration. For instance, there are four sectors where automation is projected to reduce the proportion of total work tasks done by humans alone and reduce the share of total work tasks currently delivered through human-machine collaboration. 
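As a quick sanity check on the headline figures quoted above, the net-growth number follows directly from the creation and displacement estimates, and the rounded percentages imply a baseline of very roughly 1.2 billion jobs in the dataset the report draws on. The snippet below is only back-of-the-envelope arithmetic on the numbers already cited; the implied baseline is approximate because the percentages are rounded.

```python
created_m = 170     # jobs expected to be created by 2030, in millions (about 14% of today's employment)
displaced_m = 92    # jobs expected to be displaced by 2030, in millions (about 8%)

net_m = created_m - displaced_m
print(f"net new jobs: {net_m} million")          # 78 million, about 7% of today's employment

# The rounded shares imply an approximate employment baseline:
implied_base_m = created_m / 0.14
print(f"implied baseline: ~{implied_base_m:,.0f} million jobs")   # roughly 1,200 million
```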
With respect to geoeconomic fragmentation, employers view increased government subsidies and industrial policy, increased geopolitical division and conflicts, and increased restrictions to global trade and investment to be net job creators. Additionally, increased government subsidies and industrial policy are expected to drive increased demand for Business Intelligence Analysts and Business Development Professionals. Increased restrictions to global trade and investment are also predicted to drive growth in those roles, as well as in Strategic Advisors and Supply Chain and Logistics Specialists. And increased geopolitical division and conflicts are projected to drive growth in all these roles, in addition to Information Security Analysts and Security Management Specialists. Employers were also asked about whether they were planning to offshore parts of their workforce, or move operations closer to home through reshoring, nearshoring, or friendshoring. Employers are driven to off-shoring and re-shoring due to the above geoeconomic trends. In terms of the green transition, climate change adaptation is likely to be the third largest contributor to net growth in global jobs by 2030, with an additional five million net jobs; similarly, climate change mitigation is the sixth largest contributor, with an additional three million net jobs. In this context, some fast-growing jobs (they are in the top 15 fastest growing jobs) include Environmental Engineers and Renewable Energy Engineers. Some other fast-growing jobs include Sustainability Specialists and Renewable Energy Technicians. Additionally, green transition macrotrends will also drive labour-market transformation; for instance, there will likely be net job growth for Building Framers, Finishers, and Related Trades Workers. In regards to demographic shifts, the trend of growing working-age populations is expected to be the second largest driver of global net job creation, with nine million net additional jobs by 2030. Likewise, aging and declining working-age populations are expected to be the third-largest driver of job creation (with 11 million additional jobs), as well as the main factor in a global reduction in jobs (with seven million fewer jobs). These demographic trends will likely be drivers for growth in roles for Assembly and Factory Workers, Vocational Education Teachers, Nurses, Sales and Hospitality professionals, Shop Salespersons, Wholesale and Manufacturing Sales Representatives, Food and Beverage Servers, as well as University and Higher Education Teachers and Secondary Education Teachers. The slower economic growth has caused employers to believe that there will be more job destruction (three million jobs) than creation (two million jobs). Similarly, employers believe that the rising cost of living and higher prices will cause some job creation (four million jobs) and displacement (three million jobs). This economic uncertainty will likely contribute to the decline in roles such as Building Caretakers, Cleaners, and Housekeepers, while slower economic growth is also among the top contributors to job decline in Business Services and Administration Managers, General and Operations Managers, and Sales and Marketing Professionals. That said, slower economic growth is also projected to be a top driver for growth in roles such as Business Development Professionals and Sales Representatives. 
Furthermore, growth in roles driven by increasing cost of living is concentrated in jobs associated with finding ways of increasing efficiency, such as AI and Machine Learning Specialists, Business Development Professionals, and Supply Chain and Logistics Specialists. What is the Skills Outlook? This part discusses expectations of skill disruption by 2030, the skills currently required for work, and whether employers anticipate that these skills will increase or decrease in importance over the next five years. It also examines the skills that are expected to become core skills by 2030, the key drivers of skill transformation, and anticipated training needs. When it comes to skills disruptions, there have been rapid advancements in frontier technologies (tech that significantly changes how we communicate, solve problems, and conduct business) since the pandemic—in the post-pandemic era, we have seen accelerated adoption of digital tools, remote-work solutions, and advanced technologies such as machine learning and generative AI. At this point, employers expect 39 percent of workers' core skills to change by 2030, while 61 percent are expected to remain the same; compared to this global average, 38 percent of core skills are expected to change in Canada and 35 percent in the United States. This may be why there is a growing focus on continuous learning along with upskilling and reskilling programmes. In fact, about 50 percent of workers have completed training as part of long-term learning strategies. Interestingly, the top 15 core skills in today's workforce are: analytical thinking; resilience, flexibility, and agility; leadership and social influence; creative thinking; motivation and self-awareness; technological literacy; empathy and active listening; curiosity and lifelong learning; talent management; service orientation and customer service; AI and big data; systems thinking; resource management and operations; dependability and attention to detail; quality control; and teaching and mentoring. Similarly, the top skills that are on the rise include: AI and big data; networks and cybersecurity; technological literacy; creative thinking; resilience, flexibility, and agility; curiosity and lifelong learning; leadership and social influence; talent management; analytical thinking; environmental stewardship; systems thinking; motivation and self-awareness; empathy and active listening; and design and user experience. It is important to keep in mind that there are industry-specific variations in the evolving importance of skills. For example, both analytical thinking as well as curiosity and lifelong learning are at the top of the list with respect to what is needed in education and training; likewise, environmental stewardship is at the top of the list for what is needed in oil and gas. How are the main trends expected to influence the skills evolution by 2030? In terms of technological change, advances in technology are anticipated to drive skills change more than any other trend over the next five years. In fact, the increasing importance of AI and big data, networks and cybersecurity, and technological literacy is driven by the expansion of digital access and the integration of AI and information processing technologies. These trends have also been seen as responsible for the growing importance of analytical thinking and systems thinking. In a data-driven landscape, decision-making is increasingly complex, driving the need for critical problem solving. 
Similarly, design and user experience, along with marketing and media skills, are expected to grow because of technological advancements. On the other hand, technology has accelerated the decline in some skills, including manual dexterity, endurance, precision, and reading, writing, and mathematics—likely due to robotics and automation. As discussed above, the hope is that technologies such as Gen AI will help to augment human skills through human-machine collaboration instead of replacing them, and so there is a continued importance of human-centred skills. In fact, the report states: “These findings underscore an urgent need for appropriate reskilling and upskilling strategies to bridge emerging divides. Such strategies will be essential in helping workers transition to roles that blend technical expertise with human-centred capabilities, supporting a more adaptable workforce in an increasingly technology-driven landscape” The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy. In terms of geoeconomic fragmentation and economic uncertainty, these trends have led to a demand for network and cybersecurity skills in order to protect digital infrastructure from emerging threats. They have also led to a need for human-centred skills including resilience, flexibility, agility, leadership and social influence, and global citizenship to manage multiple crises and complex social dynamics. With respect to the green transition, environmental skills are becoming increasingly integral across diverse sectors. Moreover, employers that anticipate a rise in the importance of global citizenship cite the convergence of climate-change adaptation, geoeconomic fragmentation, and broadening digital access as key factors. We cannot forget about demographic shifts as a driver of skills demand—aging and declining working-age populations are pressing organizations to prioritize talent management, teaching and mentoring, as well as motivation and self-awareness. At the same time, there is a rising focus on empathy and active listening, resource management, and customer service. This emphasizes the growing need for interpersonal and operational skills that can address the specific needs of an aging workforce and foster more inclusive work environments. What does this all mean when it comes to skills? Employers have increasingly invested in reskilling and upskilling initiatives to ensure that workforce skills are aligned with evolving demands. Since 50 percent of workforces have completed training across nearly all industries, there is a growing recognition of the importance of continuous skill development. However, some industries are outliers: Agriculture, Forestry and Fishing, and Real Estate are the only sectors that have seen a decline in training completion since 2023. For a representative sample of 100 workers, 41 will not require significant training by 2030; 11 will require training, but it will not be accessible to them in the foreseeable future; and 29 will require training and be upskilled within their current roles. Additionally, 19 out of 100 workers will require training and will be reskilled and redeployed within their organization by 2030. To fund the training, employers expect to fund their own training programmes (86 percent), free of cost training (27 percent), government (20 percent), public-private funding (18 percent), and co-funding across the industry (16 percent). 
From training initiatives, employers expect enhanced productivity (77 percent), and improved competitiveness (70 percent). What are Workforce Strategies? Employers were asked about the workforce strategies that they anticipate adopting in response to the macrotrends mentioned above that will shape the future of work. This part also touches on key barriers to organizational transformation, talent availability, as well as planned workplace practices and policies. The main barrier to organizational transformation is skill gaps in the labour market (63 percent). This challenge exists across practically all industries and geographies. Second and third in line are organizational culture and resistance to change (46 percent), and outdated or inflexible regulatory frameworks (39 percent). The talent availability outlook has worsened since 2023: in 2025, only 29 percent of businesses expect talent availability to improve between 2025 and 2030. That said, employers are optimistic about talent development (70 percent). But when it comes to talent retention, only 44 percent expect to see improvements in their ability to retain talent. The most common workforce strategy in response to the above macrotrends is upskilling the workforce (85 percent). This is the case across all geographies and economies at all income levels, with employers in high-income economies (87 percent) slightly ahead of those in upper-middle-income (84 percent) and lower-middle-income (82 percent) economies. In addition, process and task automation is expected to be the second most common workforce strategy (73 percent). Automation is a more pronounced strategy in high-income economies (77 percent), compared to upper-middle-income (74 percent) and lower-middle-income economies (57 percent). And third on the list, employers plan on complementing and augmenting their workforce with new technologies (63 percent). It is important to note that 70 percent of organizations plan to hire new staff with emerging in-demand skills, 51 percent plan to transition staff from declining to growing roles internally, and 41 percent plan to reduce staff due to skills obsolescence. Also, 10 percent plan to move operations closer to home through reshoring, nearshoring, or friendshoring, and eight percent plan to offshore significant parts of their workforce. In terms of business practices, a top priority is supporting employee health and well-being (64 percent). Other top priorities include providing effective reskilling and upskilling (63 percent), improving talent progression and promotion processes (62 percent), offering higher wages (50 percent), tapping diverse talent pools (47 percent), and offering remote and hybrid work opportunities within countries (43 percent). In regards to public policies, employers identified funding for reskilling and upskilling (55 percent) and provision of reskilling and upskilling (52 percent) as the two most crucial policy measures. The researchers state that there is a clear desire for sustained public investment in skills development to align workforce capabilities with future labour-market demands. Interestingly, 83 percent of employers have already implemented diversity, equity, and inclusion measures; this represents a marked increase from 67 percent in 2023. 
These measures include training for managers and staff, recruitment and retention initiatives, setting goals and quotas, pay equity reviews, salary audits, having anti-harassment protocols, and ensuring these goals are across the supply chain. And wages are also affected by these trends—52 percent of employers expect to see an increase in the share of their revenue allocated to wages by 2030, 41 percent expect to see wages stay stable, and seven percent expect to see a reduction in wages. It appears that two main factors are related to wage expectations: aligning wages to productivity and performance (77 percent) and competing to retain talent (71 percent). With respect to assessing skills, work experience continues to be the most common assessment mechanism in hiring (81 percent plan on continuing to use this strategy). Second in line is pre-employment tests (48 percent), and third is psychometric tests (34 percent). Of course, resumes are still important (43 percent). Thus, in addition to education, employers want to see applicants use their skills and demonstrate their behavioural traits, cognitive abilities, and cultural fit. In response to AI adoption, 86 percent of employers expect that AI and information processing technologies will transform their businesses by 2030—though certain sectors would have higher numbers due to possible higher anticipated AI exposure, such as the Financial Services (97 percent) and Electronics (95 percent) sectors. In contrast, certain sectors have lower numbers likely due to lower exposure to AI disruption, including Energy Technology and Utilities (72 percent) and Government and Public Sector (76 percent). The following are barriers to AI adoption: lack of skills to support adoption (50 percent), lack of vision among managers and leaders (43 percent), high costs of AI products and services (29 percent), lack of customization to local business needs (24 percent), complex regulations around AI and data usage (21 percent), and lack of consumer demand (16 percent). What the foregoing suggests is that there is a gap in skills required for AI adoption for managers and workers alike. The most anticipated workforce strategy among employers (77 percent) in response to AI disruption is reskilling and upskilling of the existing workforce to work more effectively alongside AI (this applies to 45 out of the 55 covered economies). Moreover, 69 percent plan to recruit talent skilled in AI tool design and enhancement, and 62 percent anticipate that they will hire people with skills in working with AI. What’s more, 49 percent expect to reorient their business models toward new AI-driven opportunities, and 47 percent expect to transition employees from AI-disrupted roles to other positions. However, it is important to keep in mind that 41 percent expect to downsize their workforce as AI capabilities to replicate roles expand. The report also contains insights involving the various macrotrends mentioned above in relation to particular regions and industries. For instance, in North America, technological advancements, demographic shifts, and economic uncertainties are driving strategic decisions of companies. Focusing more precisely on Canada, employers are anticipating an evolving business landscape marked by advances in digital technologies, geoeconomic fragmentation, and increased climate-mitigation efforts. It is important to note that 97 percent of companies expect AI and information processing technologies to transform their operations. 
In order to ensure that there is a steady flow of talent, employers in Canada are trying to improve talent progression and promotion processes and invest in reskilling and upskilling. The Economy Profile on Canada also contains some helpful information. In Canada, 90 percent have secondary education and 68 percent have tertiary education. However, Canada only invests in mid-career training at a rate of five percent. Moreover, the report presents Canada's rates on macrotrends and technology trends (the share of organizations that identified each trend as likely to drive transformation in their organization) against the global rates:
Broadening digital access: 70 percent compared to the global rate of 60 percent
Increased geopolitical division and conflicts: 58 percent compared to the global rate of 34 percent
Increased efforts and investments to reduce carbon: 54 percent compared to the global rate of 47 percent
Increased efforts and investments to adapt to climate change: 52 percent compared to the global rate of 41 percent
Slower economic growth: 52 percent compared to the global rate of 42 percent
Rising cost of living, higher prices or inflation: 47 percent compared to the global rate of 50 percent
Ageing and declining working-age population: 42 percent compared to the global rate of 40 percent
Increased focus on labour and social issues: 41 percent compared to the global rate of 46 percent
Growing working-age populations: 30 percent compared to the global rate of 24 percent
Increased restrictions to global trade and investment: 27 percent compared to the global rate of 23 percent
Increased government subsidies and industrial policy: 16 percent compared to the global rate of 21 percent
Stricter anti-trust and competition regulations: 16 percent compared to the global rate of 17 percent
AI and information processing technologies (big data, VR, AR): 97 percent compared to the global rate of 86 percent
Robots and autonomous systems: 54 percent compared to the global rate of 58 percent
Energy generation, storage, and distribution: 40 percent compared to the global rate of 41 percent
New materials and composites: 24 percent compared to the global rate of 30 percent
Semiconductors and computing technologies: 21 percent compared to the global rate of 20 percent
What Can We Take from This Report? This report surveyed over 1,000 global employers on several topics involving employment. For instance, we learned about the trends that will affect organizations and drive business transformation up to 2030, including the rising cost of living, geopolitical conflicts, climate issues, and economic downturns—these issues were noted before the tariff wars began, and the tariffs could worsen the situation and cause further economic uncertainty. Given the above findings, I would like to suggest that employers need to prioritize upskilling and reskilling their workforces—and start thinking about this as soon as possible. Throughout this article, there were important revelations suggesting that, when it comes to skills, there is great opportunity in upskilling and reskilling, and most employers say that it is their top workforce strategy for addressing skills misalignments and shaping the future of work. Indeed, employers have identified funding for reskilling and upskilling and provision of reskilling and upskilling as the two most crucial policy measures. 
Employers also want to ensure that there is sustained public investment in skills development to align workforce capabilities with future labour-market demands. The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy. As I wrote here, improving an existing employee skill (upskilling) or teaching a brand-new skill or skills (reskilling) is, at its core, about embracing continuous learning. Previous Next

  • There is a New Minister of AI in Canada | voyAIge strategy

There is a New Minister of AI in Canada What can Canadians Expect? By Christina Catenacci May 23, 2025 It has been reported that Prime Minister Mark Carney has recently created a new Ministry in Canada—he has chosen former journalist Evan Solomon to be the new Minister of AI and Digital Innovation. Solomon was elected for the first time in the April 28, 2025 election in the riding of Toronto Centre. Before that, he worked as a broadcaster for both CBC and CTV. Previously, the topic of AI fell under the industry portfolio—in the Trudeau government, the person who was responsible for something like Bill C-27 (it contained both proposed privacy legislation and proposed AI legislation) was François-Philippe Champagne, who is now responsible for Finance and represents the riding of Saint-Maurice. As Minister of Innovation, Science and Industry from 2021 to 2025, he helped attract major investments into Canada, advanced the development and adoption of clean technologies, strengthened research and development, and bolstered Canada's position in environmental sustainability. What Will the New AI Minister Do? As we have recently seen, Prime Minister Carney has announced his single mandate letter with some streamlined top priorities:
Establishing a new economic and security relationship with the United States and strengthening our collaboration with reliable trading partners and allies around the world
Building one Canadian economy by removing barriers to interprovincial trade and identifying and expediting nation-building projects that will connect and transform our country
Bringing down costs for Canadians and helping them to get ahead
Making housing more affordable by unleashing the power of public-private cooperation, catalysing a modern housing industry, and creating new careers in the skilled trades
Protecting Canadian sovereignty and keeping Canadians safe by strengthening the Canadian Armed Forces, securing our borders, and reinforcing law enforcement
Attracting the best talent in the world to help build our economy, while returning our overall immigration rates to sustainable levels
Spending less on government operations so that Canadians can invest more in the people and businesses that will build the strongest economy in the G7
No, AI is not mentioned there. However, in the preamble of the letter, he touched on AI when he stated: "The combination of the scale of this infrastructure build and the transformative nature of artificial intelligence (AI) will create opportunities for millions of Canadians to find new rewarding careers – provided they have timely access to the education and training they need to develop the necessary skills. Government itself must become much more productive by deploying AI at scale, by focusing on results over spending, and by using scarce tax dollars to catalyse multiples of private investment." Who is the New Minister of AI and Digital Innovation—Evan Solomon? To many, including University of Ottawa law professor Michael Geist, Evan Solomon is smart and tech-savvy—exactly what Canada needs to get the ball rolling on AI. In the past, Solomon hosted Power and Politics on CBC and The House podcast on Radio Canada. He was even considered to be someone who could replace Peter Mansbridge on The National. However, CBC terminated him in 2015 after the Star reported that he was taking secret commission payments from wealthy art buyers related to art sales involving people that he dealt with as a host. 
Apparently, he took commissions of more than $300,000 for several pieces of art and did not disclose to the buyer that he was being paid fees for introducing buyer and seller. Some of the people that he dealt with included Jim Balsillie and Mark Carney himself. What's more, Solomon's appointment was met with criticism, mostly because he does not have a formal science or tech background, and also because of a mishap in March when he briefly reposted a photoshopped offensive image of Carney from a parody account. In fact, some critics argue that someone who could not identify manipulated content in his own social media feed may struggle to develop effective policies to protect Canadians from increasingly sophisticated AI-generated deception. But he is back now, as AI Minister. He will have a lot of work to do in his new role, and we hope that one of his priorities will be introducing a good-quality Canadian AI law. What Can we Take from the Mandate Letter? We heard Prime Minister Carney talk about AI in his election platform, where he promised to make sure Canada takes advantage of the opportunities presented by AI, since it is critical for our competitiveness as the global economy shifts—and for making sure we have a government that actually works. More specifically, he promised to do the following in the area of AI under the build portion of the platform:
Build AI infrastructure. The Prime Minister had planned on investing in nation-building energy infrastructure and cutting red tape to make Canada the best place in the world to build data centres. Canada must have the capacity to deploy the AI of the future and ensure we have technological sovereignty. Also, he planned on building the next generation of data centres quickly and efficiently by leveraging federal funding and partnering with the private sector to secure Canada's technological advantage
Invest in AI training, adoption, and commercialization. The Prime Minister had planned on measuring growth by tracking the economic impacts of AI in real-time so we can proactively help Canadians seize new opportunities, boost productivity, and ensure no one is left behind. Also, he planned on boosting adoption with a new AI deployment tax credit for small and medium-sized businesses that incentivizes businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Companies would leverage a 20 percent credit on qualifying AI adoption projects, as long as they can demonstrate that they are increasing jobs. Further, he planned on catalyzing commercialization by expanding successful programs at Canada's AI institutes (Mila, Vector, Amii) so that we can connect more Canadian researchers and startups with businesses across the country, which will supercharge adoption of Canadian innovation in businesses, create jobs, and strengthen our AI ecosystem
Improve AI procurement. Prime Minister Carney had planned on establishing a dedicated Office of Digital Transformation at the centre of government to proactively identify, implement, and scale technology solutions and eliminate duplicative and redundant red tape. This will enhance public service delivery for all Canadians and reduce barriers for businesses to operate in Canada, which will grow our economy. This is about fundamentally transforming how Canadians interact with their government, ensuring timely, accessible, and high-quality services that meet Canadians' needs. 
Also, he planned on enabling the Office of Digital Transformation to centralize innovative procurement and take a whole-of-government approach to service delivery improvement. This could mean using AI to address government service backlogs and improve service delivery times, so that Canadians get better services, faster. There were some great ideas in the election platform, and I'm sure that Canadians hope that they will materialize. It is important to note that the priorities that were identified in the election platform are encouraging, as they will help both government and SMBs in the private sector with tax credits that incentivize businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Businesses could sure use some help with training existing employees via upskilling and reskilling, as well as AI literacy. With respect to the more general mandate letter that has recently surfaced, it is possible that this means that any additional, Minister-specific mandate letters will not be shared with the public. That would be concerning, since public-facing mandate letters became the norm under the Trudeau government. We will have to wait and see on this issue. Moreover, the couple of paragraphs in the mandate letter's preamble suggest that there will be targeted improvements for both the public and private sectors. The letter emphasized training and scaling AI. These goals are lofty but necessary. On the whole, things are looking promising given the commitment to build AI infrastructure, invest in training and adoption, and improve AI procurement. What can Canadians Expect? In my view, it is still too early to tell. But I'm hoping that Prime Minister Carney comes through for Canada. If the government gets this right, Canada could catch up to other jurisdictions like the EU, and become a real leader in AI. Previous Next

  • AI Governance in 2025 | voyAIge strategy

AI Governance in 2025 Trust, Compliance, and Innovation to Take Center Stage this Year By Tommy Cooke, powered by caffeine and curiosity Jan 20, 2025 Key Points: AI governance is transitioning from a reactive compliance measure to a proactive discipline Innovations like AI impact assessments help organizations operationalize transparency AI governance frameworks are no longer regulatory shields. They enhance brands What was an emerging concern over the last few years will become a mature and necessary strategic discipline in 2025. As we move deeper into another year and while AI remains in its infancy, it is necessary to have the guardrails in place to ensure that AI grows and contributes successfully. The landscape of AI governance is thus evolving in many meaningful ways, much of which is due to growing international regulatory pressure, increasing stakeholder expectations, and the ongoing need to ensure significant financial investments in AI generate reliable returns. This Insight looks at what has changed over the last couple of years and looks ahead to how AI governance is maturing – and why these shifts matter. From Awareness to Structure In the couple of short years of AI's proliferation across virtually every industry, AI governance has largely been reactive. Organizations leveraging AI to innovate and reduce costs – particularly those with high stakes in demonstrating that AI can be trustworthy – have tended to approach governance as a checkbox exercise. Unless an organization fell within the purview of a particular jurisdiction requiring compliance, like the EU's AI Act, whether and how it needed a dedicated office and a detailed AI governance strategy depended largely on its own awareness and its relationship with its stakeholders. Moving forward, that awareness is intensifying. Organizations are no longer waiting for compliance to simply arrive. Even in the face of shifting political landscapes in North America where AI regulation seems to be losing momentum, the AI governance market is expected to grow from $890 million USD to $5.5 billion USD by 2029. This statistic is indeed a reflection of regulatory pressure abroad – and it is also a reflection of the maturing need for structured management of AI. With AI systems earning the trust of organizations around the world to make critical decisions, the potential for damage and unintended consequences is becoming too great to ignore: algorithmic bias, breaches, and ethical violations can cause significant reputational liabilities and financial penalties that would almost certainly erase any organization's AI investment; non-compliance with the EU's AI Act, for example, can result in fines up to €35 million or 7 percent of an organization's annual turnover. Transparency in the Spotlight Over the last couple of years, transparency has been a buzzword. It existed in a gray space because organizations tended to use the term strategically in public-facing white papers and proposal packages. The word "transparency" often appears through corporate "AI-washing": the process of exaggerating the use of AI in products and services, particularly when companies make misleading claims about the safety and ethics of their systems. Moreover, transparency tends to be perceived as difficult to achieve. Many large-scale AI adopters believe that AI systems' outputs are difficult to explain or that their processes are virtually impossible for laypeople to understand. 
That assumption will no longer be satisfactory in 2025 and the years ahead. Why? Contrary to what some may believe, societal, political, and ethical pressures for transparency are growing. And those pressures are leading to AI transparency innovations. Here are two examples: AI impact assessments (AI-IAs) are not merely designed to identify positive and negative impacts of AI – they are also growing in popularity because they are positioning organizations to reflect critically on and discuss the social and ethical consequences of using AI. What AI-IAs essentially do is commit an organization to understanding how its AI systems can be improved as well as what the risks may be – whether emerging or already present. These dynamics already exist in every AI system. By making them visible, organizations take crucial steps toward demonstrating transparent and accountable relationships with their AI systems 2024 saw a significant maturation in AI model documentation: an explanation of what an AI system's model is, what it was trained on, any tests or experiments conducted on it, and so on. The goal is to document what the AI system is doing. By noting what the system does, an organization provides its stakeholders with a track record that can be examined to ensure responsible and ethical use as well as to demonstrate compliance Data Sovereignty on the Rise While data privacy has been a long-standing focus throughout the previous years, 2025 will mark a significant shift toward data sovereignty. As regulatory, geopolitical, and social concerns continue to rise around responsible and ethical use of AI, 2025 will see organizations increasingly designing AI systems in ways that deal with how data is stored, processed, and accessed. Compliance with data residency laws to ensure that sensitive data will remain within a national boundary or specific jurisdiction, for example, will trend this year. We will hear more about other privacy-preserving technologies in AI systems, such as federated learning: a machine learning technique that allows AI to be trained across datasets from different sources without transferring sensitive data across borders (a simplified sketch of the idea appears at the end of this Insight). Data is no longer viewed as merely a business asset but as a national asset. For organizations operating globally, this can make daily operations rather complicated: they must navigate multiple international laws and norms if they are to avoid penalties while maintaining trust. When data involves national security, healthcare, or financial sectors, demonstrating the ability to respect data storage laws when using AI will be a top priority for organizations this year. Ethical AI as a Top Operational Priority Much like the way in which buzzwords like transparency have been used to gesture toward responsible AI use, ethical AI will finally emerge as a fully operationalized practice. Unlike the stretch of AI adoption in the early 2020s, when AI ethics tended to be little more than a vague concept, ethical AI discourse and debate have been sustained for a considerable period of time. Organizations are recognizing that failing to act upon the principles and values of ethical AI not only poses reputational and financial risks but can also harm operational integrity. Organizations have been conducting, and will continue to conduct, structured reviews to identify potential bias, discrimination, and unintended social consequences of their AI systems. 
These kinds of assessments are being applied across an AI system's lifecycle, from design to monitoring after implementation. According to the AI Ethics Institute (2023), 74 percent of consumers prefer to use products certified as ethically deployed and developed. This makes comprehensive AI training with a focus on governance, privacy, and of course – ethics – a must. The same will hold true for selecting AI vendors and designers committed to embedding ethical considerations into their products from the beginning and not merely as an afterthought. AI Governance as a Strategic Differentiator A commonplace perception of all things related to privacy, ethics, and governance is that they are expensive and stifle innovation. It is often echoed by techno-libertarians, those who believe that innovation and business should be left largely unregulated in order to maximize growth and creativity – people who resist external intervention unless mandated by law. What proponents of these perceptions and beliefs fail to understand is that a lack of proactivity in the responsible and ethical management of technology is becoming extraordinarily risky and costly. In 2025, AI governance will be embraced as a business strategy that not only mitigates risks but also allows organizations to actively differentiate themselves in competitive markets. AI-IAs, audits, transparent reporting, and other AI governance-related activities will contribute more directly to brand equity and stakeholder confidence. In recognizing that the world's legal, social, political, and ethical standards are strengthening around the use of AI, organizations are realizing that demonstrating and sharing a robust AI governance framework showcases the organization and its talent as thought leaders who are able to navigate complex technology while building trust-based relationships with customers, partners, regulators, and so on. So, Why Now? What is driving the maturation and higher adoption rates of AI governance? Here are three catalysts to consider:
Regulatory Evolution: Despite the resurgence of techno-libertarians who may be slowing the advance of AI-related regulatory agendas, this only applies to limited jurisdictions. It's important to remember that sub-sovereign jurisdictions (e.g., state and provincial level government authorities) are developing their own regulations. Whether they deal specifically with AI or not, data and privacy laws are always changing – and they almost always have implications for how organizations use AI.
Public Scrutiny: High-profile AI failures have made stakeholders more vigilant about ethical and operational risks. Consumers are increasingly skeptical about how organizations use AI. C-suite executives are becoming more and more aware of how important it is to demonstrate to stakeholders that they are using AI responsibly, which necessitates implementing strong AI governance frameworks – and proving that they work.
Market Maturity: Markets do not mature merely due to an invisible economic hand. Much of their maturation is driven by the behaviour, perceptions, and demands of their consumers. As AI becomes integral to business operations, it is perhaps unsurprising that consumers do not trust organizations that do not openly disclose that they use AI.
Final Thoughts AI governance in 2025 represents a pivotal shift from a regulatory afterthought to a core strategic priority. 
Organizations that adopt structured governance frameworks, emphasize transparency, and prioritize ethical AI are not only mitigating risks but also distinguishing themselves in competitive markets. As regulatory landscapes evolve and public scrutiny intensifies, investing in robust AI governance is no longer optional. Previous Next
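As promised in the Data Sovereignty discussion above, here is a deliberately simplified sketch of federated learning's core pattern (federated averaging). It uses plain Python and a toy one-parameter model so the mechanics stay visible; the site data, learning rate, and round counts are invented for illustration, and real deployments rely on dedicated frameworks, richer models, and secure aggregation.

```python
import random

def local_update(weight, data, lr=0.1, epochs=5):
    """Train on one site's data; only the updated weight ever leaves the site."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x   # squared-error gradient for y ~ w * x
            weight -= lr * grad
    return weight

def federated_average(global_weight, sites, rounds=10):
    """Each round: broadcast the global weight, train locally, average the results."""
    for _ in range(rounds):
        local_weights = [local_update(global_weight, site_data) for site_data in sites]
        global_weight = sum(local_weights) / len(local_weights)  # raw data is never pooled
    return global_weight

if __name__ == "__main__":
    random.seed(0)
    # Three data holders each keep their own records, generated around y = 3x.
    sites = [
        [(0.1 * i, 3 * (0.1 * i) + random.gauss(0, 0.1)) for i in range(1, 11)]
        for _ in range(3)
    ]
    print(round(federated_average(0.0, sites), 2))  # converges near 3.0
```

The governance point sits in the averaging step: the coordinator only ever receives model parameters from each site, so the underlying records can stay within their own jurisdictions, consistent with data residency requirements.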

bottom of page