
Search Results

123 results found with an empty search

  • voyAIge strategy | Data & AI Governance Specialists

Our governance solutions ensure your successful and safe use of data and AI. We are seasoned specialists in AI law, policy, and ethics.

voyAIge strategy (VS), co-founded by Tommy Cooke and Christina Catenacci, helps organizations responsibly navigate AI through innovative strategic solutions such as tailored policies, thought leadership content, and workforce and executive training. Our goal is to make AI and people work by bridging the gap between them: we align AI use with real human needs and talents, and our solutions empower people to use AI confidently, safely, and effectively.

Managed Data & AI Governance Services: Guidance, Oversight, and Leadership for your AI Journey

Our Managed Data & AI Governance Services offer your organization the confidence to move forward with data and AI. Think of us as your virtual Chief Data or Chief AI Officer: someone on call and working with you. We meet you where you are in your data and AI journey and provide the support services you need at an affordable monthly rate. From a tailored AI strategy and risk assessments, to executive and staff training, to communications planning, we help ensure your data and AI use is aligned with your goals and is safe, responsible, and built to last. Whether you are building your first use cases, integrating data across departments, or exploring risk mitigation, we provide the leadership and structure to make AI work for your people and your goals.

Our extensive experience in AI and related areas, such as data governance, data privacy and security, intellectual property, and confidential information, has helped us craft a simple yet reliable approach to guide you on your AI journey:

Data Governance (End-to-End Management): We partner with you to set up, maintain, and evolve the structures, roles, policies, and operations required to treat data as a trusted asset across the enterprise.
  • Ecosystem Oversight: define data ownership, steward roles, access controls, metadata management, and lifecycle processes
  • Compliance & Risk Reduction: ensure your data handling meets legal, regulatory, and ethical standards
  • Operational Readiness: equip your organization with the governance structure that enables reliable analytics, BI, and advanced data capabilities

AI Governance (Strategic AI Enablement): We help you govern AI within your business, from use-case identification, model selection, and deployment through to monitoring, ethics, and control frameworks.
  • Use-case to Value: identify high-impact AI opportunities, run pilots, operationalize them, and embed the capabilities in business processes
  • Operational Readiness: deploy AI governance frameworks, playbooks, and training so your teams act confidently and safely
  • Compliance & Risk Reduction: ensure your data handling and AI operations meet legal, regulatory, and ethical standards as well as industry best practices

Our Managed Data & AI Governance Services deliver benefits that accelerate safe and successful growth:
  • Clarity on strategy and direction: we help you set a focused, realistic AI roadmap
  • Compliance and risk management: your AI stays aligned with law and compliance
  • Expertise without full-time cost: access senior-level guidance at a fraction of the cost
  • Support that grows with your needs: we adapt as your AI use evolves
  • Faster, safer implementation: avoid false starts with structured deployment
  • Confidence across teams and stakeholders: build trust in AI with clear guidance and communication
Most Organizations Encounter the Same AI Challenges

Most organizations encounter the same kinds of roadblocks when adopting AI. These challenges can stall progress, create risk, and leave teams overwhelmed or misaligned. VS provides solutions to address them:
  • Fear of AI: executives fear lost ROI as well as strategic or stakeholder misalignment, while employees fear replacement, uncertainty, extra work, and inadequate training. The solution is Training.
  • Inappropriate Use of AI: executives worry about employee misuse, data leaks, and non-compliance, while employees often lack clarity on the rules and accidentally share sensitive information. The solution is AI Policies.
  • Lack of Preparedness: organizations are unsure if they are ready for AI, lack budget clarity, and struggle to communicate effectively with stakeholders. The solutions are thought leadership and stakeholder engagement.
  • No Leadership: organizations often do not have an internal AI expert. There may be no AI direction, no coordination between departments, and no one in charge of decision-making. The solution is VS's Managed AI Services.
  • No Strategy: leaders do not know which AI tools are the right fit, or they are overwhelmed by options. There is no roadmap or strategy for AI adoption, nor is there a change management plan in place. The solution is Adoption and Transformation.
  • Too Many Questions: "Where do we start?" "Do we need a plan?" "Is AI worth the investment?" "What AI do we need?" The solution is the AI Helpdesk.

Strategic and Critical Insights to Guide your AI Journey
  • Canada’s Innovation Crossroads
  • New York Governor Hochul Signs AI Safety and Transparency Bill into Law
  • Privacy Commissioner Investigation into Social Media Platform, X

Testimonials

"voyAIge has delivered exceptional work with their AI in the Workplace Policy for OneFeather. By centering Indigenous data sovereignty, collective growth, and the principle of 'leaving the table better set than we found it,' they've created more than just a policy; they've provided a blueprint for ethical AI implementation that protects community interests and removes systemic barriers." Jerret Taylor / Chief Technical Officer / OneFeather Mobile Technologies Ltd.

  • AI Helpdesk | voyAIge strategy

Ask us anything, no strings attached.

Welcome to the AI Helpdesk. Your Questions Answered. No Strings Attached.

How We Help. Book a session to get expert guidance on topics such as:
  • Chatbot assistance and prompt engineering: make AI work better for you
  • Data privacy & intellectual property: navigate AI-related risks and compliance
  • Safeguards and best practices: ask for advice on setting up policy, rules, and responsibilities
  • AI use ideas and vendor selection: identify the right tools for your organization
  • Auditing 101 & risk detection: discuss AI risks before they become problems

How it Works.
  1. Book a Session. Book yourself into our calendar and submit your questions.
  2. Bring Questions. In 30 minutes, we tackle your AI challenges together.
  3. Get Advice. Walk away with insights you can use right away.

Need further support? Learn more about our Virtual Chief AI Officer (VCAIO) services.

  • Clearview AI Fined by the Dutch Data Protection Authority | voyAIge strategy

Clearview AI Fined by the Dutch Data Protection Authority
Clearview AI fined by The Netherlands for violating the General Data Protection Regulation
By Christina Catenacci
Sep 6, 2024

Key Points:
  • Clearview AI was fined a significant amount of money for scraping the faces and biometric information of people on the Internet, and then failing to properly inform them that it had their data
  • If Clearview AI does not stop what it is doing, it will receive further fines of up to €5.1 million on top of the €30.5 million fine
  • Clearview AI is an American company with no establishment in the EU, yet it received a hefty fine anyhow, since some of the faces it scraped were photos of Dutch people. Thus, it is no surprise that the GDPR applied

Despite the fact that Clearview AI is an American company that does not have an establishment in the EU, the company has just received a hefty fine—about €30.5 million (plus up to €5.1 million more if there is further noncompliance)—courtesy of the Dutch Data Protection Authority (DPA).

What happened?

Clearview AI is a commercial business that offers facial recognition services to intelligence and investigative services. In fact, it has acquired 30 billion photos of people (including photos of Dutch people). How has it accumulated so many images? It has scraped them from the Internet and converted each image to a unique biometric code (a short illustrative sketch of this technique appears at the end of this article). This, of course, has been accomplished without obtaining the consent of the people whose faces were scraped.

According to Clearview AI, it provides its services to intelligence and investigative services outside the EU only. However, the DPA has concluded that Clearview AI is operating illegally. In fact, since using the services of Clearview AI is also prohibited, the DPA has warned that Dutch organizations that use Clearview AI may expect serious fines as well.

What were the violations?

The DPA found that Clearview AI violated the General Data Protection Regulation (GDPR). The DPA stated that the company never should have built its database in the first place. The main violations were that Clearview AI:
  • collected and used facial images and biometric data
  • insufficiently informed people who were in the database that the company had their data, and did not cooperate with requests for access to the information

Again, the DPA has asked Clearview AI to stop doing these things. If it does not stop, there will be further fines of up to €5.1 million—on top of the €30.5 million fine.

What can we take from this development?

The message here is clear: Clearview AI has to stop doing what it is doing, and it will be the DPA that stops it. This conclusion about Clearview AI may be reached in other jurisdictions too. Again, Article 3 of the GDPR deals with territorial scope and unequivocally states that the GDPR applies to the processing of personal data of data subjects who are in the EU where the processing relates to:
  • the offering of goods or services to data subjects in the EU (regardless of whether payment is required), or
  • the monitoring of the behaviour of data subjects where their behaviour takes place within the EU

DPA Chairman Aleid Wolfsen stated: “Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world…If there is a photo of you on the Internet – and doesn't that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film.
Nor is it something that could only be done in China…This really shouldn't go any further. We have to draw a very clear line at incorrect use of this sort of technology.”

Wolfsen pointed out that it is important for safety reasons to be able to detect criminals using facial recognition technology, but he highlighted that this should not be done by commercial businesses. Rather, he noted that facial recognition should only be used by competent authorities and only in exceptional cases. For instance, the police can manage the software and database themselves—under the watchful eye of the DPA and other supervisory authorities. There can be no appeal in this case because Clearview AI did not object to the decision.

The message is clear: scraping sensitive data from the Internet and subsequently failing to inform individuals that you hold their data is not going to work for businesses. In fact, companies that do this are likely to be fined considerably.
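The technical core of the case is the step where an ordinary photo becomes a searchable biometric code. As a rough illustration of that general technique (emphatically not Clearview's actual pipeline), here is a minimal sketch using the open-source face_recognition library; the file names are hypothetical:

```python
# Illustrative only: a minimal sketch of facial-template matching with the
# open-source face_recognition library. This is NOT Clearview AI's system;
# it simply shows how a photo becomes a comparable "biometric code".
import face_recognition

# Convert a photo into a 128-number face encoding (the "code").
image = face_recognition.load_image_file("scraped_photo.jpg")  # hypothetical file
encodings = face_recognition.face_encodings(image)

if encodings:
    probe = encodings[0]

    # A real database would hold millions of such encodings; here, just one.
    known_image = face_recognition.load_image_file("database_photo.jpg")  # hypothetical file
    known = face_recognition.face_encodings(known_image)

    # Lower distance means a more similar face; the library's default
    # match threshold is 0.6.
    distances = face_recognition.face_distance(known, probe)
    matches = face_recognition.compare_faces(known, probe)
    print(distances, matches)
```

Once millions of photos are reduced to such encodings, any new face can be matched against the entire database in moments, which is precisely why the DPA calls the technology "highly intrusive".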

  • Meta Wins the Antitrust Case Against It | voyAIge strategy

Meta Wins the Antitrust Case Against It
No Monopoly Found
By Christina Catenacci, human writer
Nov 27, 2025

Key Points:
  • On November 18, 2025, the United States District Court for the District of Columbia confirmed that Meta did not have a monopoly
  • This decision confirms that Meta will not have to break off Instagram and WhatsApp
  • This antitrust decision is markedly different from the Google antitrust decisions involving Search and online ads

On November 18, 2025, James E. Boasberg, Chief Judge of the United States District Court for the District of Columbia, confirmed that Meta did not have a monopoly. Accordingly, Meta will not have to break off Instagram and WhatsApp.

As I mentioned here, Meta had its antitrust trial about seven months ago, where the main question was whether Meta had a monopoly in social media by acquiring Instagram and WhatsApp about ten years ago (2012 and 2014, respectively). Mark Zuckerberg was the first to give testimony, and while he was on the stand, he was asked to review his own previous emails, written to associates before and after the acquisitions of Instagram and WhatsApp, to clarify his motives. More specifically, the questions were: “Was the purchase to halt Instagram’s growth and get rid of a threat? Or was it to improve Meta’s product by having WhatsApp run as an independent brand?” In short, the ultimate decision was that Meta won: it did not have a monopoly and does not have to break up Instagram and WhatsApp.

What Did the Judge Decide?

Initial Comments

The judge made a point of beginning with the comment, “The Court emphasizes that Facebook and Instagram have significantly transformed over the last several years”. In fact, the court noted that Facebook bought Instagram back in 2012, and WhatsApp in 2014. In addition, the court described two other relevant social media apps, TikTok and YouTube, which allow users to watch and upload videos.

The court traced the evolution of Meta’s apps. For example, as Meta moved to showing TikTok-style videos, TikTok moved to adding Meta-style features to share them with friends. Technological changes have made video apps more social. More specifically, smartphone usage exploded; cellular data got better; the steady progress of cellular data was followed by a massive leap in AI; and as social networks matured, the alternatives to AI-recommended content became less appealing.

The court detailed the lengthy history of proceedings, beginning with the initial Complaint filed in 2021. In ruling on Facebook’s motion to dismiss, the court stated straight away that it had doubts that the Federal Trade Commission (FTC) could state a claim for injunctive relief. The court granted Facebook’s motion to dismiss but allowed the FTC to amend its Complaint. The FTC filed an Amended Complaint alleging that Facebook held a monopoly in personal social networking and maintained that monopoly by buying both Instagram and WhatsApp to eliminate them as competitive threats. The court found that the FTC had plausibly alleged that Facebook held monopoly power and that the acquisitions of Instagram and WhatsApp constituted monopolization. That said, the court did say that the FTC might have challenges proving its allegations at trial. Subsequently, the parties each moved for summary judgment.
The court denied both motions, indicating that the FTC had met its forgiving summary judgment standard but faced hard questions about whether its claims could hold up in the crucible of trial. At trial, the court heard testimony for over six weeks and considered thousands of documents.

Decision at Trial

The court found the following:
  • Section 2 of the Sherman Act prohibits monopolization. The main elements are holding monopoly power (power over some market) and maintaining it through means other than competition on the merits. Plaintiffs typically prove monopoly power indirectly by showing that a firm holds a dominant share of a market that is protected by barriers to entry
  • A big question in this case was: when did Meta have monopoly power? The FTC had to show that Meta was violating the law now or imminently, and could only seek to enjoin conduct that currently or imminently violated the law (the FTC incorrectly argued that Meta broke the law in the past and that this violation is still harming competition)
  • The court defined the product market as the smallest set of products such that, if a hypothetical monopolist controlled them all, it would maximize its profits by raising prices significantly above competitive levels. The court confirmed that the FTC had the burden of proving the market’s bounds
  • The court found that consumers treat TikTok and YouTube as substitutes for Facebook and Instagram. For instance, this could be seen during the shutdown of TikTok in the United States: users switched to other apps like Facebook, and later Instagram, and then YouTube. The court commented, “The amount of time that TikTok seems to be taking from Meta’s apps is stunning”. In fact, the court noted that when consumers could not use Facebook and Instagram, they turned first to TikTok and YouTube, and when they could not use TikTok or YouTube, they turned to Facebook and Instagram—Meta itself had no doubt that TikTok and YouTube competed with it. Thus, even when considering only qualitative evidence, the court found that Meta’s apps were reasonably interchangeable with TikTok and YouTube
  • In assessing Meta’s monopoly power, the court considered a market comprising Facebook, Instagram, Snapchat, MeWe, TikTok, and YouTube. The court found that the best single measure of market share here was total time spent—the companies themselves often measured their market share this way. The court noted that Meta’s market share was falling, and that what counted most was the ability to maintain market share. A given market share is less likely to add up to a monopoly if it is eroding—if monopoly power is the power to control prices or exclude competition, then that power seemed to have slipped from Meta’s grip. The court concluded that YouTube and TikTok belonged in the product market, and they prevented Meta from holding a monopoly. Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case
  • Social media has moved so quickly that it has never looked the same way twice since the case began in 2021. The competitors changed significantly too. Previous decisions in motions did not even mention TikTok; yet today, TikTok is Meta’s fiercest rival. It was understandable that the FTC was unable to fix the boundaries of Meta’s product market. Accordingly, the court stated: “Whether or not Meta enjoyed monopoly power in the past, though, the [FTC] must show that it continues to hold such power now.
The Court’s verdict today determines that the FTC has not done so”. Therefore, the case against Meta was dismissed.

What Can We Take from this Development?

Meta was found not to have a monopoly in social networking and survived a very serious existential challenge—it will not have to break the company apart as a result of this decision. The result is the polar opposite of the Google decision, where there was indeed a confirmed monopoly in Google Search and online ads. Why such a different result?

The first clue comes right at the beginning of this Meta decision, when the judge noted that the question was whether Meta had monopoly power now or imminently. In particular, there was no determination about whether there had been a monopoly in the past (as the FTC incorrectly alleged), because it was irrelevant. That is, Meta may have had a monopoly in the past, but the FTC had to show that it had one now. Unlike the judge in the Google decision, the judge in the Meta case found that the test for monopoly power was not met, primarily because the FTC could not show that Meta currently held monopoly power (power over some market) and maintained it through means other than competition on the merits.

Second, unlike in the Google decision, the product market had changed considerably since the FTC launched the Complaint, to the point where Meta’s strongest competitor right now, TikTok, had not even appeared on people’s radar at the outset. The judge made an important finding that consumers treat TikTok and YouTube as substitutes for Facebook and Instagram. After considering the evidence, the court found that TikTok now had to be included in the product market. This was significant and played a large role in the court dismissing the case. Most strikingly, the judge stated, “Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case”.

Third, throughout the previous Meta decisions since 2021, the court had foreshadowed that the FTC might struggle to prove its allegations. This was not so in the Google case, which involved the company using exclusionary contracts and other means to create and maintain its monopoly, which it still has. It is not just the DOJ that thinks Google currently has a monopoly—the EU has also fined Google significantly for having and maintaining a monopoly in Search and online ads.

Fourth, it became clear that Meta’s market share had decreased, likely because of TikTok and YouTube—this made it difficult for the FTC to prove that there was a monopoly in which Meta would have the opportunity to charge more, or demand more time spent. Recall that a main measure in this sphere is time spent, and the court stated that the amount of time that TikTok seemed to be taking from Meta’s apps was stunning. In Google’s case, by contrast, Google had—and still has—89 percent of the global search engine market share. Sure, Mark Zuckerberg wrote in 2008 emails, “It is better to buy than compete”, but even if that were true, the court has just shown that the FTC can no longer meet the test for holding a monopoly.

Some may question why so much importance is placed on antitrust trials. Speaking about its competition mission, the FTC states: “Free and open markets are the foundation of a vibrant economy.
Aggressive competition among sellers in an open marketplace gives consumers — both individuals and businesses — the benefits of lower prices, higher quality products and services, more choices, and greater innovation”.

  • OECD’s AI Benchmark is the Message | voyAIge strategy

OECD’s AI Benchmark is the Message
What the OECD's AI Capability Indicators Mean for Business Leaders
By Tommy Cooke, powered by coffee and Blue Jays baseball
Jun 27, 2025

Key Points:
  • The OECD’s AI Capability Indicators mark a critical shift from AI hype to measurable, human-centered benchmarking
  • Even today’s most advanced AI systems perform unevenly, often failing to match basic human adaptability, judgment, and emotional nuance
  • For business leaders, these benchmarks are not just technical assessments: they’re a practical guide to knowing when to automate and when to elevate human talent

A quiet evolution is underway. After years of loud debates fraught with both hype and doubt surrounding AI's risks, promises, and potential, the Organisation for Economic Co-operation and Development (OECD) has done something refreshingly uncontroversial: it is measuring AI. This is not measuring in the abstract. It is happening through a concrete framework that evaluates how even the most advanced AI systems stack up against human capabilities.

The release of the OECD AI Capability Indicators earlier this month offers business leaders something valuable: a way to separate performance from perception. Moreover, business leaders ought to pay attention because the OECD’s efforts have established a benchmarking precedent that encourages people to focus on the differences between AI-generated and human-generated outputs; AI is learning, but so too are humans.

From Possibility to Performance: OECD’s AI Capability Indicators at a Glance

For years, AI discourse has been dominated by extremes. From utopian visions of self-aware machines to dystopian warnings of job-stealing automation, humans have been pulled back and forth for a long time now. The trouble is that between the excitement and fear, there is little space to comfortably find and occupy a middle ground. That middle ground is important. Why? It’s a space where we can more pragmatically and objectively reflect upon what AI is currently capable of. The OECD’s new benchmark framework is important scaffolding for that middle ground. The OECD has identified nine human-relevant capabilities, including language, vision, reasoning, creativity, and social interaction. Using a scale from zero to five, the goal of the benchmark is to measure and compare how well AI systems perform like humans.

What makes the framework so significant isn’t just the scoring—it’s the mindset shift. For the first time, governments, researchers, and businesses are using a shared scale to evaluate AI performance not against hype, but against actual, measurable human capability. The benchmark reinforces a growing recognition that intelligence isn’t binary and cannot be reduced to linear algebra. It’s gradual, contextual, and measurable. AI can be scored not just on its ability to talk like a human, but on whether it sees, manipulates, critically reflects, memorizes, problem-solves, and learns like a human.

The Verdict: What OECD’s AI Capability Indicators Reveal about the State of AI

So, how is AI performing against these indicators? The verdict is not encouraging. Even state-of-the-art systems like GPT-4 and Claude fall between Level 2 and Level 3. This means that they exhibit some general human skills, but they do not do so consistently, nor are they robust and adaptive in their ability to perform like humans. This is a sobering truth. AI is strong, but against this benchmark its strength is not very compelling.
Let’s have a look at a few key indicators and see how extant AI is doing:
  • Language. At level 3, AI generates coherent summaries, translates text, and mimics tone and even emotions like empathy. But it still hallucinates, misses nuance, and consistently struggles with factual accuracy
  • Problem-solving and Knowledge Retention. These capabilities are important: can AI find solutions and retain them consistently? This matters for structured tasks like drafting reports, generating legal summaries, or even conducting market analyses. AI performs moderately here, around level 3. Why? It still struggles to creatively synthesize, and it has a hard time judging criteria the way humans do
  • Visual Understanding. Image recognition and labeling are reasonably advanced, at level 3. However, AI struggles to interpret diagrams with text (also called multi-modal coordination). Most AI that generates images also fails to consistently learn from images that users provide
  • Social and Emotional Capability. This is perhaps where AI lags the furthest behind. Averaging level 1 or 2 means that its efforts merely emulate social and emotional intelligence. AI mimics politeness, but it cannot understand or respond to real human emotion. Empathy is still beyond the machine, contrary to what some engineers believe

The OECD’s AI Benchmark Is the Message

The most important thing that the OECD has done here isn’t simply rating AI. It’s changing how we talk about it. For far too long, AI has been treated as both mysterious and magical. It’s been treated as a black box that fires magic bullets. By establishing shared indicators, the OECD has invited the world to measure AI’s strengths and weaknesses like any other technology. It is, in effect, demystifying AI for us.

As importantly, the benchmark reminds us of something that is far too easy to forget: human standards matter. These indicators don’t merely reveal what AI can do; they also reveal what AI cannot yet do. This is more than semantics, especially if you are a business leader. When reflecting upon the OECD’s benchmark, I encourage you to start exploring and asking how your own AI performs. At the very least, it should reveal to you not only where your people are important, but also why they are so important. Consider doing the following:
  • First, map your AI efforts. What are your core business processes, and which of them demand only level 1 to 2 capability? They are likely repetitive, structured, and predictable. These are the tasks that are prime for automation. Think customer service scripts, monthly reporting, content generation, invoice verification, and so on (see the illustrative sketch at the end of this article)
  • Second, identify your human talent. This is your competitive edge, after all. Where are you relying on social insight, flexible judgment, ethical nuance, and cultural awareness? These are not just hard to automate, but also where your people offer the most value
  • Third, design integration with purpose. Don’t just deploy AI because your competitors are doing it. Deploy it because you understand where it fits, and it should fit your people and your organization like a glove—not like a raincoat
  • Lastly, build a feedback loop. The OECD indicators will evolve. Your business should too. Treat the OECD’s indicators as a maturity benchmark of living metrics: they will change and mature just like your human talent. Revisit them often and use them to evaluate vendors, assess risks, and communicate clearly with stakeholders

There’s a quiet elegance to what the OECD has done.
In a world obsessed with what’s next, they’ve grounded us in what’s now.
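To make the “map your AI efforts” step concrete, here is a minimal, hypothetical sketch of the exercise in Python. The capability levels loosely echo this article's reading of the indicators; the process names and the levels they demand are invented for illustration and are not OECD data:

```python
# Hypothetical illustration of the "map your AI efforts" exercise.
# AI capability levels (0-5 scale) loosely echo the article; the business
# processes and the capability levels they demand are invented examples.
ai_capability_levels = {
    "language": 3,            # coherent summaries, but hallucinations remain
    "problem_solving": 3,     # moderate; weak creative synthesis
    "vision": 3,              # solid labeling; weak multi-modal coordination
    "social_emotional": 1.5,  # mimics politeness; no real empathy
}

processes = [
    # (process, capability it relies on, level of that capability it demands)
    ("customer_service_scripts", "language", 2),
    ("monthly_reporting", "problem_solving", 2),
    ("invoice_verification", "vision", 2),
    ("contract_negotiation", "social_emotional", 4),
]

# Repetitive, structured tasks demand only level 1-2 capability, which AI
# already meets; tasks leaning on social insight stay human-led.
for name, capability, required in processes:
    available = ai_capability_levels[capability]
    verdict = "prime for automation" if available >= required else "keep human-led"
    print(f"{name}: needs {capability} level {required}, AI at {available} -> {verdict}")
```

Run periodically, a scorecard like this doubles as the feedback loop recommended above: as the OECD updates its indicator levels, the automation verdicts update with them.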

  • Research & Reporting | voyAIge strategy

In-depth analysis and reports to support informed AI decision-making.

Research & Reporting

We are highly experienced researchers and writers. We are passionate about generating bespoke and informative reports, government bids, and thought leadership submissions to regulators and corporate communications teams.

Your Content, Our Expertise: Whitepapers & Thought Leadership

Position your organization as an industry leader with expertly crafted whitepapers and thought leadership articles. We conduct thorough research to produce insightful, well-argued content that not only informs but also engages your audience. Our whitepapers cover the latest trends, challenges, and innovations in AI and related fields, helping you shape the conversation in your industry.

The Research Process
  1. Consultation. We start by understanding your needs, goals, and audiences.
  2. Research. We collect and analyze data from industry and academic sources.
  3. Draft & Develop. We craft clear, engaging, and structured outputs that convey your message.
  4. Review & Finalize. We collaborate with you to gather feedback and make any requested changes.
  5. Present. We love to present, and we are happy to communicate our outputs to your staff, stakeholders, and investors.

Request Our Research & Reporting Samples: We have numerous samples of our previous research and writing projects, including excerpts from whitepapers, case studies, and policy documents. Contact us to learn more.

  • California Legislature Approves AI Bill | voyAIge strategy

California Legislature Approves AI Bill
Bill 1047 passes in the California Legislative Assembly
By Christina Catenacci
Aug 30, 2024

Key Points:
  • California could be the first to enact comprehensive AI legislation in the United States
  • Some controversy has arisen in response to the AI bill
  • There are significant penalties associated with contraventions

In August 2024, Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was read for a third time and passed in the California Legislative Assembly. It was subsequently ordered to the Senate. By August 29, 2024, the bill had passed in the Senate (29-9). It now must be signed by Governor Newsom, and there are rumblings that he is taking his time weighing the pros and cons of signing a bill that has caused some controversy in Silicon Valley.

What does the bill say?

Bill 1047 defines important concepts such as advanced persistent threat, AI safety incident, covered model (and derivative), critical harm, developer, fine-tuning, full shutdown, post-training modification, and safety and security protocol.

The bill requires that developers, before beginning to initially train a covered model, comply with several requirements, including using administrative, technical, and physical cybersecurity safeguards; implementing the capability to promptly enact a full shutdown; and implementing a written and separate safety and security protocol. Moreover, the bill requires developers to retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted safety and security protocol, and must annually review the protocol and make any necessary modifications.

Additionally, Bill 1047 prohibits developers from using a covered model or derivative for a purpose that is not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law, or from making a covered model or derivative available for commercial, public, or foreseeably public use, if there is an unreasonable risk that the covered model or derivative will cause or materially enable a critical harm.

Bill 1047 also requires developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit, consistent with best practices, of compliance with these provisions. The auditor must produce an audit report, and developers must retain an unredacted copy of the audit report for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted auditor’s report upon request.

Bill 1047 requires developers of a covered model to submit to the Attorney General a statement of compliance with these provisions, and to report each AI safety incident affecting the covered model, or a derivative controlled by the developer, to the Attorney General. The bill also requires a person who operates a computing cluster to implement written policies and procedures that apply when a customer utilizes compute resources sufficient to train a covered model, including assessing whether a prospective customer intends to utilize the computing cluster to train a covered model.
There are some hefty penalties contained in Bill 1047. The bill authorizes the Attorney General to bring a civil action for a violation—including a violation that causes death or bodily harm to another human, harm to property, or theft. In this case, as of January 1, 2026, a civil penalty can be in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model, rising to 30 percent for any subsequent violation (a worked example follows at the end of this article).

The bill also contains whistleblower protections: developers, contractors, and subcontractors are not allowed to prevent an employee from disclosing information, or to retaliate against an employee for disclosing information, to the Attorney General or Labor Commissioner if the employee has reasonable cause to believe the information indicates that the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm. In this case, the civil penalty is found under the Labor Code. Other violations involving a computing cluster can result in penalties of up to $50,000 for a first violation, $100,000 for any subsequent violation, and a penalty not exceeding $10 million in the aggregate. Also, the Attorney General may seek injunctive or declaratory relief, monetary damages as well as punitive damages, fees, costs, and any other appropriate relief.

Bill 1047 creates the Board of Frontier Models within the Government Operations Agency, independent of the Department of Technology, and provides for the board’s membership. The Agency is required to, on or before January 1, 2027 and annually thereafter, issue regulations to update the definition of a “covered model,” as provided. The bill establishes in the Agency a consortium required to develop a framework for the creation of a public cloud computing cluster, to be known as “CalCompute,” that advances the development and deployment of AI that is safe, ethical, equitable, and sustainable by, among other things, fostering research and innovation that benefits the public. On or before January 1, 2026, the Agency must submit a report from the consortium to the Legislature with that framework.

What can we take from this development?

The main author of the bill, Senator Scott Wiener, has talked about the fact that the bill took a lot of work and collaboration with industry, and has emphasized that it deserves to be enacted. Though there has been some criticism arguing that the bill is overly focused on harms, Bill 1047 is the first of its kind in the United States—it requires AI companies operating in California to comply with several requirements when training AI models. And businesses will have some time to prepare so they can be in compliance.

In the preamble, it is declared that California is leading the world in AI innovation and research. One might question whether Canada is even part of the equation any longer, given the slow-moving progress of Bill C-27. And if an election takes place in Canada, there will be further delays in enacting a meaningful piece of AI (and privacy) legislation. We will have to wait and see.
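To make the penalty structure concrete, here is a small, illustrative calculation. The percentages and the cluster penalty amounts come from the bill as described above; the $200 million training-compute cost is a hypothetical input, not a real figure:

```python
# Illustrative sketch of Bill 1047's penalty caps as described in this article.
# The $200M training-compute cost is a hypothetical example input.

def covered_model_penalty_cap(compute_cost: float, subsequent: bool) -> float:
    """Civil penalty cap: 10% of training-compute cost for a first violation,
    30% for any subsequent violation."""
    return compute_cost * (0.30 if subsequent else 0.10)

def computing_cluster_penalty(violations: int) -> float:
    """Cluster-operator penalties: $50,000 for a first violation, $100,000 for
    each subsequent one, capped at $10 million in the aggregate."""
    if violations <= 0:
        return 0.0
    total = 50_000 + (violations - 1) * 100_000
    return min(total, 10_000_000)

training_cost = 200_000_000  # hypothetical: $200M spent on training compute
print(covered_model_penalty_cap(training_cost, subsequent=False))  # 20000000.0
print(covered_model_penalty_cap(training_cost, subsequent=True))   # 60000000.0
print(computing_cluster_penalty(3))                                # 250000
```

On those hypothetical numbers, a first violation tops out at $20 million and a repeat violation at $60 million, which is why the penalties are fairly described as hefty.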

  • L&E Analysis: Reddit Sues Anthropic | voyAIge strategy

L&E Analysis: Reddit Sues Anthropic
What is Reddit Claiming in this Complaint?
By Christina Catenacci, human writer
Jun 20, 2025
Legal & Ethical Analysis: Issue 2

Key Points:
  • On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco
  • This Complaint follows the one against Anthropic filed by the major music publishers on October 18, 2023, which was ultimately settled
  • The Complaint by the music publishers was about copyright law, whereas the Reddit Complaint is about violating the User Agreement and the privacy of Reddit users. It will be interesting to see how the Reddit Complaint is resolved

On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco. This is not the first Complaint that has been commenced against Anthropic—we cannot forget what recently took place when the music publishers sued Anthropic for copyright infringement. What is the claim about? What does Reddit want? How is this claim different from the one against Anthropic launched by the music publishers? What can we take from this development? This article answers these questions.

What is Reddit Claiming?

Essentially, Reddit states that although Anthropic claims it is the “white knight of the AI industry” that prioritizes honesty and high trust, it is “anything but” and relies on empty marketing gimmicks. Reddit asserts in its Complaint that it has a User Agreement containing the following excerpts of sections 3 and 7:

“3. Your Use of the Services. Except and solely to the extent such a restriction is impermissible under applicable law, you may not, without our written agreement: license, sell, transfer, assign, distribute, host, or otherwise commercially exploit the Services or Content”

“7. Things You Cannot Do. Access, search, or collect data from the Services by any means (automated or otherwise) except as permitted in these Terms or in a separate agreement with Reddit (we conditionally grant permission to crawl the Services in accordance with the parameters set forth in our robots.txt file, but scraping the Services without Reddit’s prior written consent is prohibited)”

Despite these provisions, Reddit claims that Anthropic intentionally trained on the personal data of Reddit users without ever requesting their consent. In fact, it claims that Anthropic has been ignoring the provisions and has had bots hit Reddit’s servers over 100,000 times, even after Reddit’s CEO publicly stated that Anthropic was unlawfully exploiting Reddit content. Further, Reddit states that Anthropic has refused to respect Reddit users’ privacy rights—contrary to Anthropic’s own values. Reddit claims that by training its model, Claude, on Reddit posts without authorization, Anthropic is in direct violation of Reddit’s User Agreement.

In a nutshell, Reddit says that Anthropic has scraped and used Reddit content in its commercial offerings—Claude even provides output statements confirming that it has been trained on Reddit. Anthropic refused to respect Reddit’s guardrails and to enter into a licensing agreement as Google and OpenAI have. Reddit asserts that instead, Anthropic continued to commercialize Reddit content without authorization. Interestingly, Reddit states that Anthropic has admitted that it scrapes Reddit content but has provided several excuses—all of which are unacceptable.
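Section 7 hinges on robots.txt, the standard file through which a site tells crawlers what they may fetch. As a hedged illustration of that mechanism (the bot name and URLs here are examples, not Anthropic's actual crawler), Python's standard library can check whether a given user agent is permitted to fetch a page:

```python
# Minimal sketch of the robots.txt check referenced in section 7 of the
# User Agreement. Uses only the Python standard library; the bot name and
# page URL are illustrative examples.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()  # fetch and parse the site's crawling rules

bot_name = "ExampleBot"  # hypothetical user-agent string
page = "https://www.reddit.com/r/some_subreddit/"

# A compliant crawler asks before fetching; continuing to scrape in spite
# of a disallow rule is the kind of conduct the Complaint describes.
if rp.can_fetch(bot_name, page):
    print("robots.txt permits this fetch")
else:
    print("robots.txt disallows this fetch; a compliant crawler stops here")
```

Reddit's quoted term draws exactly this distinction: crawling within the robots.txt parameters is conditionally permitted, while scraping past them without prior written consent is prohibited.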
To that end, Reddit is advancing the following claims against Anthropic:
  • Breach of Contract: Anthropic has violated Reddit’s User Agreement by acting contrary to sections 3 and 7 of the Agreement
  • Unjust Enrichment: Anthropic was unjustly enriched at the expense of Reddit when it scraped and used Reddit content to train and power a model to the tune of billions of dollars
  • Trespass to Chattels: Anthropic intentionally entered into, and made use of, Reddit’s platform and technological infrastructure, including its software and servers, to access and obtain Reddit content and information for its own economic benefit
  • Tortious Interference With Contract: Anthropic intentionally interfered with Reddit’s contractual relationships with its users by: scraping Reddit content without entering into a licensing agreement that would provide the necessary guardrails to protect users’ privacy rights; bypassing Reddit’s Compliance API, which automatically notifies licensees when users delete posts or comments; training its AI models on user content without any mechanism to respect Reddit user deletion requests; and continuing to scrape Reddit content after being notified that such conduct violated Reddit’s obligations to its users. This intentional interference diminished Reddit’s capacity to fulfill its obligations to its users
  • Unfair Competition: Anthropic has engaged in acts of unfair competition, including unlawful, unfair, and/or fraudulent business acts and practices as defined by the Business and Professions Code. Anthropic has trespassed on Reddit’s platform, taken possession of Reddit content and data without authority or permission, and interfered with Reddit’s contractual relationships with its users. Anthropic has also engaged in fraudulent business practices by falsely stating that it was no longer scraping the Reddit platform, even as it continued to scrape to acquire and use Reddit content to train its AI models for commercial gain

In addition, Reddit has requested a jury trial.

What is Reddit Asking for in the Complaint?

Reddit is asking for the following:
  • Specific performance, compensatory damages, consequential damages, lost profits, and/or disgorgement of Anthropic’s profits
  • An injunction
  • Restitution for the amount by which Anthropic has been enriched by its scraping and use of Reddit content
  • Pre-judgment and post-judgment interest
  • Punitive damages
  • Fees, costs, and any other appropriate relief

A Previous Complaint by the Music Publishers

We cannot forget that on October 18, 2023, several major music publishers (Music Publishers) filed a Complaint against Anthropic in the United States District Court for the Middle District of Tennessee, Nashville Division. Essentially, the Music Publishers brought the action to address the systematic and widespread infringement of their copyrighted song lyrics by Anthropic. That is, they asserted that Anthropic unlawfully copied and disseminated vast amounts of copyrighted works—including the lyrics to myriad musical compositions owned or controlled by the Music Publishers. In the very first paragraph, the Music Publishers stated: “(Music) Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.
Anthropic must abide by well-established copyright laws, just as countless other technology companies regularly do”

The Music Publishers explained that they partnered with innovators, including entrepreneurs, start-ups, and established companies—they recognized and drove true innovation (for instance, Universal used AI in its business and production operations). However, Anthropic’s copyright infringement did not constitute innovation: “in layman’s terms, it’s theft”. In fact, the Music Publishers claimed that Anthropic violated the United States Copyright Act. They acknowledged that AI was a new technology, but they insisted that AI companies still had to follow the law. Technological advances could not come at the expense of the creators who essentially served as the backbone of AI’s development.

Anthropic built its AI models by scraping and ingesting massive amounts of text from the internet (and potentially other sources), using all of it to train its AI models (Claude 2 in this case) and generate output based on this copied text. The Music Publishers claimed that the data Anthropic copied to fuel its AI models included the lyrics to their musical compositions. They urged that copyrighted material was not free just because it could be found on the internet—in this case, Anthropic never asked for permission. Notwithstanding Anthropic’s company constitution (whose goal was to be harmless, respectful, and ethical), the Music Publishers passionately argued that Anthropic committed copyright infringement since it generated identical or nearly identical copies of their lyrics. In the Complaint, the Music Publishers provided examples where famous songs were either completely or partially outputted in response to user prompts. In fact, the Music Publishers argued that:
  • Anthropic directly infringed the Music Publishers’ exclusive rights as copyright holders, including the rights of reproduction, preparation of derivative works, distribution, and public display
  • Anthropic unlawfully enabled, encouraged, and profited from massive copyright infringement by its users, so it was secondarily liable for the infringing acts of its users under well-established theories of contributory infringement and vicarious infringement
  • Anthropic’s AI output often omitted critical copyright management information for these works, so that the composers of the song lyrics frequently did not get recognition as the creators of the works being distributed

The Music Publishers stated, “It is unfathomable for Anthropic to treat itself as exempt from the ethical and legal rules it purports to embrace”. According to the Music Publishers, there was no doubt that Anthropic profited from the infringement of their repertoires, since Anthropic was already valued at $5 billion, had received billions of dollars in funding, and boasted numerous high-profile commercial customers and partnerships. The Music Publishers stated in the Claim: “None of that would be possible without the vast troves of copyrighted material that Anthropic scrapes from the internet and exploits as the input and output for its AI models”. The Music Publishers noted that nothing about Anthropic was creative—Anthropic depended on the creativity of others and paid them nothing. This caused substantial and irreparable harm.
The Claim set out how Anthropic trained on the data. Anthropic:
  • copied massive amounts of text from the internet (and potentially other sources) by “scraping” (copying or downloading) the text directly from websites and other digital sources onto Anthropic’s servers, using automated tools such as bots and web crawlers, and/or by working from collections prepared by third parties
  • cleaned the copied text to remove material that it perceived as inconsistent with its business model, whether technical or subjective in nature (such as deduplication or removal of offensive language)
  • copied the massive “corpus” and processed it in multiple ways to train the Claude AI models (encoding the text into tokens)
  • processed the data further to fine-tune the Claude AI models and engaged in additional reinforcement learning, based on both human feedback and AI feedback, all of which may have required additional copying of the collected text

The following were the Claims for Relief:
  • Count I: Direct Copyright Infringement
  • Count II: Contributory Infringement
  • Count III: Vicarious Infringement
  • Count IV: Removal or Alteration of Copyright Management Information

To that end, the Music Publishers requested relief against Anthropic in the form of judgment on each of the claims above, an order for equitable relief, an order requiring Anthropic to pay the Music Publishers statutory damages, an order requiring Anthropic to provide an accounting of its training data and methods (and the lyrics on which it trained its AI models), an order to destroy (under Court supervision) all infringing copies of the Music Publishers’ copyrighted works, costs, and interest.

On October 23, 2023, Anthropic initially responded that training its AI models constituted “fair use”, a lawful exemption in copyright law. Why? Because Anthropic was engaging in a use that was highly transformative. Indeed, the company maintained that it did not intend to violate the law. On November 16, 2023, the Music Publishers brought a Motion for a preliminary injunction asking that Anthropic stop using their lyrics. By January 6, 2025, however, it was reported that Anthropic and the Music Publishers had reached a settlement under which Anthropic would implement robust measures to ensure compliance with the law, namely revising its data collection and training methodologies to exclude copyrighted content unless proper licenses or permissions had been obtained. It also agreed to more stringent oversight of its data sources to mitigate the risk of inadvertently using protected material in future AI training.

What Can We Take from This Development?

Following the debacle with the Music Publishers, one would think that Anthropic would be striving to promote ethical AI practices and foster trust with both the creators of artistic works and the wider public. One would think that Anthropic had learned its expensive lesson. But now, Anthropic has to face Reddit. The disappointing part of the story is that Anthropic similarly scraped Reddit content from the internet and used it to train Claude models—clearly without permission and without entering into a proper licensing agreement. While this is technically not a copyright infringement case, it is similar: Reddit alleges that Anthropic was bound by the terms of the User Agreement, in particular clauses 3 and 7, and violated them.
And Anthropic was warned—but it continued to hit Reddit’s servers and scrape away so that it could train Claude (without paying). This appears to be the first time that a big tech company has challenged an AI model provider over its training data practices, and it will be interesting to see what happens. As in the case with the Music Publishers, Anthropic will likely have to settle and promise not to do this again. This may be what the company ultimately has to do in order to preserve the delicate balance between innovation and the rights of companies like Reddit (along with its users). For a company that was just valued at $61.5 billion (up from the $5 billion noted in the Music Publishers’ Complaint a couple of years ago), it may be something that Anthropic needs to do sooner rather than later in order to preserve its reputation. Anthropic spoke with TechCrunch recently and said that it disagreed with Reddit and would vigorously defend itself. We shall see what happens in the coming months…

  • 23andMe Goes Bankrupt | voyAIge strategy

23andMe Goes Bankrupt
What will happen to all of that genetic data?
By Christina Catenacci, human writer
Apr 11, 2025

Key Points:
  • 23andMe, a direct-to-consumer genetic testing company, has declared bankruptcy
  • There is significant concern about how customers’ genetic data will be protected, now and in the future
  • Consumers are urged to delete their data, and businesses are encouraged to learn from the data breach (review policies and procedures and ensure compliance with the law)

What happened?

23andMe, the company that provided basic ancestry as well as health and ancestry services (with touted 99 percent accuracy), has filed for bankruptcy in the United States. This entails filing under Chapter 11 of the United States Bankruptcy Code. The direct-to-consumer genetic testing company was founded in 2006. It was the first company to offer autosomal testing, asking users to directly submit saliva samples that would be analyzed to produce charts of their background and lineage. But the company experienced a serious data breach in 2023, in which the genetic data of seven million customers was accessed without authorization. The ordeal took about five months to resolve, and it ruined the company’s reputation. What’s more, the affected customers launched a class action in the United States, which they ultimately settled with the company for $30 million.

At this point, the company has secured financing and will continue to operate during a sale process. The company has listed its assets and estimated liabilities to be between USD 100 million and 500 million. Although it is still operating while trying to find a buyer, the company recently laid off 40 percent of its workforce and has ended its therapeutics division.

What will happen to the genetic data involved in the 23andMe data breach?

We need to examine the full Privacy Statement (Statement). Last updated March 14, 2025, the Statement says that “At 23andMe, Privacy is in our DNA”. The information that the company collects includes Individual-level Information (information about a single individual, such as their genotypes, diseases, or other traits or characteristics) and De-identified Information (information that has been stripped of identifying data, such as name and contact information, so that an individual cannot reasonably be identified). The types of personal information collected include registration information; genetic information; sample information; self-reported information; user content; and web-behaviour information. The company collects this information directly from customers, from service providers through cookies and analytics tools, from other third parties (for example, when someone gifts a testing kit), and through its own inferences.

The company uses personal information in order to: provide its services; analyze and measure trends and usage of the services; communicate with customers; personalize, contextualize, and market its services to customers; provide cross-context behavioural or targeted advertising; enhance the safety, integrity, and security of its services; enforce, investigate, and report conduct violating its Terms of Service or other policies; conduct surveys or polls, and obtain testimonials or stories; comply with its legal, licensing, and regulatory obligations; and conduct 23andMe Research, if customers choose to participate.
More precisely, the purpose of 23andMe Research is to make new discoveries about genetics and other factors behind diseases and traits. “23andMe Research” means research activities performed by 23andMe, either independently or jointly with third parties, and overseen by an independent ethics review board.

In terms of data sharing, the company shares with service providers, friends and family members if the customer so wishes, affiliates and commonly owned entities, and third parties in matters related to law, harm, and the public interest. That said, the Statement clearly stipulates that the company does not share customer information with public databases, insurers, employers, or law enforcement absent a valid court order, subpoena, or search warrant.

With respect to security, the company states that it implements physical, technical, and administrative measures aimed at preventing unauthorized access to or disclosure of customers’ Personal Information. Moreover, it advises: “Please recognize that protecting your Personal Information is also your responsibility. Be mindful of keeping your password and other authentication information safe from third parties, and immediately notify 23andMe of any unauthorized use of your login credentials”. In addition, the Statement clarifies that the company retains personal information for as long as necessary to provide the services and fulfill transactions requested by customers. Customers can also choose (or choose not) to store their sample; view health reports; share their information with genetic relatives or other users; receive personalized recommendations based on sensitive data categories; receive promotional communications; and participate in research.

It is clear that despite these points made in the Statement, there have been several criticisms of the company’s handling of personal data, particularly genetic data. For instance, it has been noted that the data breach took place because an attack exploited weak security practices: there was no multi-factor authentication feature; the DNA Relatives and Family Tree features disclosed unnecessary information, exposing data from other users and amplifying the breach’s impact; and users were reusing passwords across different services. According to Digital Defenders, there were things that the company should have done:
  • Use multi-factor authentication
  • Monitor for security events to stop attacks earlier
  • Rate-limit logins to slow down and frustrate attackers using automated tools
  • Have account lockout policies so accounts get locked after a set number of failed attempts
  • Have stronger password policies to reduce password reuse risks
  • Incorporate data minimization so that less data is collected in the first place
  • Use the principle of least privilege so that users only have access to the data they need

The class action settlement in the United States can help to compensate for customer losses related to the data breach. However, it is currently not clear how the genetic information of customers will be handled by a successor company (23andMe could be sold to a new owner, which may want to impose new terms and conditions).

What has happened in Canada?

On June 10, 2024, the privacy authorities for Canada and the United Kingdom (UK) launched a joint investigation into the data breach that was discovered in October 2023 at the global direct-to-consumer genetic testing company 23andMe.
The class action settlement in the United States can help to compensate for any customer losses related to the data breach. However, it is currently not clear how the genetic information of customers will be handled by a successor company (the company could be sold to a new owner, which may want to set new terms and conditions).

What has happened in Canada?
On June 10, 2024, the privacy authorities for Canada and the United Kingdom (UK) launched a joint investigation into the data breach that was discovered in October 2023 at the global direct-to-consumer genetic testing company 23andMe.

On the Privacy Commissioner of Canada (OPC) website, the announcement stated that 23andMe is a custodian of highly sensitive personal information, including genetic information, which does not change over time. The data can reveal information about an individual and their family members, including about their health, ethnicity, and biological relationships. This makes public trust in these services essential. Presently, the OPC is still investigating the matter.

Both Canada (the OPC and provincial Commissioners) and the UK noted that the sensitive information needs to be protected: “In the wrong hands, an individual’s genetic information could be misused for surveillance or discrimination,” said Commissioner Philippe Dufresne. “Ensuring that personal information is adequately protected against attacks by malicious actors is an important focus for privacy authorities in Canada and around the world”.

Likewise, the Information Commissioner’s Office (ICO) in the UK announced the joint investigation with the OPC in June 2024. The goal is to examine:
the scope of information that was exposed by the breach and the potential harms to affected people
whether 23andMe had adequate safeguards to protect the highly sensitive information within its control
whether the company provided adequate notification about the breach to the two regulators and affected people, as required under Canadian and UK data protection laws

The ICO recently announced that in early March 2025, it issued 23andMe with provisional findings, a notice of intent to fine £4.59 million, and a preliminary enforcement notice: “We would stress these findings are provisional and, as with all preliminary findings, are subject to representations from 23andMe including in relation to affordability considerations. The ICO will carefully consider any representations made before taking a final decision. We are aware that 23andMe has filed for Chapter 11 bankruptcy in the US to facilitate a sale process. We are monitoring the situation closely and are in contact with the company. As a matter of UK law, the protections and restrictions of the UK GDPR continue to apply and 23andMe remains under an obligation to protect the personal information of its customers."

Given the settlement in this case in the United States, it will be interesting to see what takes place in Canada. It will be important to note that in Ontario, the Personal Health Information Protection Act states in section 4 that “personal health information” means identifying information about an individual in oral or recorded form, if the information relates to the physical or mental health of the individual, including information that consists of the health history of the individual’s family. It will be critical to see how the phrase “including information that consists of the health history of the individual’s family” is treated by the regulator, since many family members are also caught up in the 23andMe mess thanks to their relatives hastily giving up their genetic data. The ramifications are very serious when it comes to thinking about how employers and insurers may obtain and use this information in the future.

Correspondingly, in the federal sphere, the Personal Information Protection and Electronic Documents Act (PIPEDA) states in section 2 that “personal health information”, with respect to an individual, whether living or deceased, means information concerning the physical or mental health of the individual.
There could also be human rights provisions that are triggered regarding genetic discrimination. For instance, in Ontario, the Ontario Human Rights Commission has urged insurance companies to avoid using the enumerated grounds of discrimination contained in the Human Rights Code, along with genetic testing information, for measuring risk. It has also cautioned employers that they can only test job applicants with pre-employment medical exams for the purpose of determining a person’s ability to perform essential job duties.

Furthermore, in the federal sphere, the Canadian Human Rights Act lists genetic characteristics in section 3 as one of the prohibited grounds of discrimination. Moreover, section 3(3) of the Act states the following: “Where the ground of discrimination is refusal of a request to undergo a genetic test or to disclose, or authorize the disclosure of, the results of a genetic test, the discrimination shall be deemed to be on the ground of genetic characteristics”.

And in order to protect workers from the “interview” that consists of requiring the taking of a genetic test (and a potential consequent refusal to hire or promote), federal legislation, namely the Canada Labour Code, has a considerably thoughtful section called Division XV.3: Genetic Testing. Essentially, every employee:
is entitled not to undergo or be required to undergo a genetic test
is entitled not to disclose or be required to disclose the results of a genetic test

Most importantly, employers are not allowed to dismiss, suspend, lay off, or demote an employee; impose a financial or other penalty on an employee; refuse to pay an employee remuneration in respect of any period that the employee would, but for the exercise of the employee’s rights under this Division, have worked; or take or threaten any disciplinary action against an employee just because the employee refused a request by the employer to undergo a genetic test, refused to disclose the results of a genetic test, or on the basis of the results of a genetic test undergone by the employee. Employees can make a complaint if their employers contravene these provisions.

These provisions were the result of the forward-thinking Genetic Non-Discrimination Act of 2017, which made changes to the human rights and employment provisions in federal legislation. In 2020, the Supreme Court of Canada confirmed in Reference re Genetic Non‑Discrimination Act that the Genetic Non-Discrimination Act of 2017 was indeed constitutional despite jurisdictional concerns, and that it applied to everyone in Canada.

What the foregoing suggests is that both Ontario and the federal government have basic protections in place for employees and job applicants, as well as for individuals who need to buy insurance. Just imagine a job applicant going to an interview with an employer: in the near future, will that employer ask the applicant for a sample such as a saliva test?

This all reminds me of the 1997 movie Gattaca, which depicts a future society centred on eugenics. The main character, Vincent, was not conceived through genetic selection and was called an “invalid” (unlike his brother, Anton, who was a “valid”); he faced several instances of genetic discrimination, even though it was illegal. In fact, he found a way to live among the valids and achieve his lifetime goal of working in the spaceflight conglomerate Gattaca Aerospace Corporation.
But he had to pose as a valid to do this, using donated hair, skin, blood, and urine samples from a valid who had been paralyzed after being hit by a car. In my view, Gattaca, which was set in the “not too distant future”, could happen in reality. Although there are currently protections in place in Canada, the strengths of the Canadian Human Rights Act and the Canada Labour Code should be duplicated all across Canada. Since human rights and employment are largely provincially regulated, the changes need to be reflected in the human rights and employment legislation of the provinces and territories.

What should consumers do?
Many 23andMe customers have been advised to delete their data and their accounts. It is clear that nothing has changed since the data breach: the company is still operating in the same manner when it comes to storing, managing, and protecting customer data. The Ontario Information and Privacy Commissioner warned consumers in March 2025 about what will happen to their genetic data, pointing out that there is a risk that the data privacy safeguards that customers initially signed on to may change. That is, when company ownership changes hands, the terms of engagement could as well.

Also, it is important to note that there is a class action in Canada, where the Supreme Court of British Columbia appointed a representative plaintiff and established the class membership criteria on December 20, 2023. In an interview with CBC, the representative plaintiff expressed regret that he gave up a significant amount of intimate data: “You're giving them everything. You're basically giving them the raw code of yourself, if you will — you at your most finest essence"

How did the data breach happen?
Hackers initially got into around 14,000 accounts by using old compromised passwords that customers had recycled from other accounts on other sites, and then used those accounts to access 5.5 million DNA Relatives profiles. This technique, known as credential stuffing, is exactly what stronger password policies and breached-password screening are meant to defeat (a minimal sketch of such screening appears below).

In a blog dated March 26, 2025, 23andMe states that it is required to comply with its privacy policy and the law with respect to the treatment of customer data. It also states that “Under Chapter 11, we intend to use the sale process to maximize the value of our business while continuing to operate”. While the company tries to find a new buyer, customers are still able to access their accounts, genetic reports, and any stored data. They can delete their data and accounts, which is recommended. Additionally, the blog states the following:

“Through the sale process, 23andMe will look to secure a partner who shares in its commitment to customer data privacy and will further its mission of helping people access, understand and benefit from the human genome. Any buyer will be required to comply with our privacy policy and with all applicable law with respect to treatment of customer data. Our users’ privacy and data are important considerations in any transaction, and we remain committed to our users’ privacy and to being transparent with our customers about how their data is managed. You have choices. You can opt into and out of our research at any time by updating your consent status in your account settings. If you opt out, we will stop using your information for research going forward (we cannot affect studies that have already been completed) and will discontinue use of your data within 30 days”
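Returning to the root cause of the breach, password reuse, here is the promised minimal Python sketch of breached-password screening, using the k-anonymity “range” endpoint of the public Have I Been Pwned password API (it requires the third-party requests package): only the first five characters of the password’s SHA-1 hash ever leave the client. This is one common approach, offered as an illustration rather than a recommendation of any particular service.

import hashlib
import requests

def is_breached(password):
    """Return True if the password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The API returns "SUFFIX:COUNT" lines for every breached hash
    # that shares the five-character prefix we sent.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(is_breached("password123"))  # True: this password is widely breached

A signup or password-change flow that rejects breached passwords in this way would have blunted the recycled-credential attack described above.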
What can businesses learn from 23andMe?
Canadian businesses are advised to review their privacy policies and security safeguards to ensure that any data under their control is being properly protected. When it comes to commercial transactions covered by PIPEDA, there are specific obligations that businesses must meet if a data breach is discovered. Businesses must act quickly and make the necessary notifications to the Privacy Commissioner and affected individuals.

It is interesting that 23andMe is promising that the unknown buyer would have to comply with its privacy policy; as the Information and Privacy Commissioner pointed out, a new company can change the terms of engagement and thus the way in which it protects user privacy. We may also soon find out the results of the OPC’s investigation of 23andMe. The report may contain additional information and learnings, and we will keep you posted.


  • Legal Tech Woes | voyAIge strategy

    Legal Tech Woes
The Story of How Fastcase Sued Alexi Technologies
By Christina Catenacci, human writer
Dec 5, 2025

Key Points
On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017
In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) under which Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database
According to Fastcase, Alexi expanded its use of Fastcase data beyond the licence’s narrow internal-use limitations, using that data to build and scale its own legal research platform; it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions

On November 26, 2025, Fastcase commenced its action against Alexi in the United States District Court for the District of Columbia.

Background
It is first important to understand the context of this lawsuit. Fastcase spent decades building one of the industry’s most comprehensive and innovative legal research databases. In 2023, Fastcase merged with vLex LLC (vLex) and became part of the vLex Group, which was subsequently acquired by Clio, Inc (Clio), a company valued at $5 billion, on November 10, 2025. The acquisition was for $1 billion and was characterized as one of the most significant transactions in legal technology history.

Alexi, on the other hand, initially operated with a small team of research attorneys who used a passage-retrieval AI system to help prepare legal memoranda for clients.

In 2021, Fastcase and Alexi entered into the Agreement, under which Fastcase granted Alexi limited access to its proprietary and highly curated case law database. From Fastcase’s perspective, the main term of the Agreement was that the licence was expressly restricted to internal research purposes. For example, it was limited to research performed by Alexi’s own staff lawyers in preparing client memoranda. And most importantly, Alexi agreed that it would not use Fastcase data for any commercial purpose, use the data to compete with Fastcase, or publish or distribute Fastcase data in any form.

This Agreement was important to Fastcase, given the number of years and the millions of dollars in investment required to create one of the most sophisticated legal research databases in the industry. More precisely, Fastcase’s efforts involved extensive text and metadata tagging, specialized structuring into HTML, and proprietary formatting and harmonization processes that required significant technical expertise and sustained investment. Thus, Fastcase entrusted Alexi with access to this highly valuable, unique proprietary compilation solely for the narrow internal research purpose defined in the Agreement.

There was a time when Fastcase and Alexi considered entering into a partnership. In 2022, Alexi sought to integrate its passage-retrieval AI system with Fastcase’s database so that Alexi customers could directly access Fastcase case law. However, that partnership never materialized. Instead, in 2023, Fastcase proceeded with its merger with vLex and expanded its own research offerings.
Yet, following the merger, Fastcase continued operating under the Fastcase name, and the Agreement with Alexi remained in full force and effect.

But then, according to Fastcase, Alexi began pivoting from occupying a different role in the legal-tech space into direct competition with Fastcase. That is, Fastcase says that Alexi expanded its use of Fastcase data beyond the licence’s narrow internal-use limitations, using that data to build and scale its own legal research platform: it began publishing and distributing Fastcase-sourced case law directly to its users, in clear violation of the Agreement’s core restrictions. What’s more, Alexi began holding itself out as a full-scale legal-research alternative to incumbent providers, including Fastcase. According to Fastcase, Alexi shortcut the massive investment required to build a comprehensive commercial legal-research platform, using the Fastcase data for the very commercial and competitive purposes that the Agreement expressly forbade.

That was not all: Fastcase believed that Alexi misused its intellectual property to bolster its own credibility and to suggest an affiliation that did not exist. Further, Fastcase believed that Alexi misappropriated Fastcase’s compilation trade secrets. As a result, Fastcase says that Alexi has appropriated Fastcase’s decades of investment while simultaneously damaging Fastcase’s market position and goodwill.

But what Fastcase highlighted above all else was that Alexi never notified Fastcase of its changing service model, its expanding use of Fastcase data, or its intent to compete directly with Fastcase. It never even tried to renegotiate the Agreement to authorize its new uses. Instead, Alexi continued to rely on the internal-use licence while using Fastcase data to build, train, power, and market a direct competitor.

When Fastcase discovered what Alexi was doing, vLex (acting on behalf of Fastcase) sent Alexi a written Notice of Material Breach in October 2025. The notice explained that Alexi was using Fastcase data for improper commercial and competitive purposes in violation of the Agreement and demanded that Alexi cure its breach within 30 days, as required by the Agreement. In response, Alexi denied any wrongdoing. In early November 2025, Alexi’s counsel sent a letter rejecting Fastcase’s notice; the letter admitted that Alexi had used the Fastcase data to train and power its generative AI models, but maintained that this did not constitute a violation of the Agreement. Moreover, the letter stated that the intention of the Agreement was never to preclude Alexi from using Fastcase’s data as source material for Alexi’s generative AI product; rather, this was exactly why Alexi was paying Fastcase nearly a quarter million dollars annually. Fastcase has since terminated the Agreement, yet Alexi has continued to use the data.

What did Fastcase Claim Against Alexi?
Fastcase has made the following claims:

Breach of contract. Alexi was granted only a limited, non-exclusive, non-transferable licence to use Fastcase’s data solely for Alexi’s internal research purposes. Fastcase performed its obligations under the Agreement by granting access to the data, but Alexi materially breached the Agreement by using the data for commercial and competitive purposes. Alexi has caused, and will continue to cause, irreparable harm to Fastcase, including loss of control over its proprietary compilation, erosion of competitive position, and impairment of contractual and intellectual property rights

Trademark infringement.
Fastcase has, at all relevant times, used the Fastcase trademarks in commerce in connection with its legal-research products, software, and related services. Its trademark registration remains active, valid, and in full force and effect. Without Fastcase’s consent or authorization, Alexi has used, reproduced, displayed, and distributed the Fastcase marks in its platform interfaces, public presentations, promotional materials, and commercial advertising. It even used the marks in ways that suggested that Alexi’s products were affiliated with Fastcase when no partnership was ever formed, and this constitutes infringement

Misappropriation of trade secrets. Fastcase has devoted decades of engineering, editorial, and resource investment to build and refine its compilation. To maintain the secrecy and value of its compilation, Fastcase has required licensees and partners to enter into confidentiality, non-use, and restricted-use agreements, including the Agreement with Alexi, and has employed other technical and security measures. Alexi’s misappropriation included using Fastcase’s confidential compilation and metadata structure to train large-scale generative AI models, power user-facing legal-research features, generate outputs incorporating Fastcase data, and provide end users with direct access to the content of the Fastcase compilation. Fastcase has suffered and continues to suffer substantial harm, including loss of licensing revenue, competitive injury, market displacement, unjust enrichment to Alexi, erosion of goodwill, and diminution of the value of Fastcase’s proprietary compilation

False designation of origin and unfair competition. Without the consent of Fastcase, Alexi has used Fastcase’s marks in commerce on its platform interfaces, in product demonstrations, and in promotional and advertising materials. This falsely suggests to consumers, legal professionals, and industry participants that Fastcase endorses, sponsors, authorizes, or is affiliated with Alexi’s competing legal-research platform, even though no such relationship exists and Fastcase has expressly declined to form a partnership with Alexi. This constitutes a false designation of origin and a false or misleading representation of affiliation, connection, or sponsorship. This conduct is likely to cause, and has already caused, consumer confusion, mistake, and deception as to the origin of Alexi’s products, whether Alexi’s products incorporate or are powered by Fastcase’s proprietary services with authorization, and whether Fastcase has partnered with, approved, or is otherwise associated with Alexi

Consequently, Fastcase is asking for the following:
Judgment for Fastcase
A declaration of the breach of the Agreement
A permanent injunction
An award of compensatory damages
An order of disgorgement requiring Alexi to account for and disgorge all revenues, profits, cost savings, and other benefits derived from its unauthorized use of Fastcase’s data
An order requiring the return and destruction of all Fastcase data in Alexi’s possession, custody, or control, including all copies, derivatives, embeddings, model weights, datasets, or training artifacts incorporating or derived from Fastcase data, together with a certification of complete purge and destruction
Monetary relief for actual damages attributable to the infringement

What Can We Take from This Development?
At this point, we have not yet seen Alexi’s defence. Alexi will likely argue that this was simply a misunderstanding of the Agreement.
One question that will arise during the proceedings is whether the Agreement defined “internal research purposes”. Could there be a way to argue that training an AI system was within the scope of the Agreement, and that this was why Alexi was paying Fastcase so much each year? Although Alexi may have internally contemplated fully automating the creation of legal research memos, it may be difficult for it to show that, at the time of forming the Agreement, it contemplated removing its internal research component entirely and publishing Fastcase case law directly to end users. We will have to wait and see what happens.

  • Chatbots at Work – Emerging Risks and Mitigation Strategies | voyAIge strategy

    Chatbots at Work – Emerging Risks and Mitigation Strategies
How to Recognize and Overcome the Invisible Risks of AI
By Tommy Cooke
Nov 22, 2024

Key Points:
Personal AI chatbots in the workplace can pose significant risks to data privacy, security, and regulatory compliance, which can lead to severe legal and reputational consequences
Employees using personal AI tools can inadvertently expose proprietary information, increasing the risk of intellectual property breaches and confidentiality violations
Organizations can mitigate these risks through clear policies, employee education, and proactive monitoring, allowing for responsible AI usage without compromising security or creativity

AI is rapidly transforming where and how we work and play. As our Co-Founder Dr. Christina Catenacci deftly describes, AI chatbots have become commonplace friends, mentors, and even romantic partners. At a rate that is surprising observers in nearly every industry, AI is creating incredible opportunities that are often fraught with challenges. AI services tend to be packaged and sold so quickly that subscribers do not find much space to reflect on fit, appropriateness, and potential blind spots that could cause misalignment in even the most well-intentioned organization.

As a result, a new kind of workplace is emerging. The remote and hybrid work models ushered in by the pandemic already seem a distant memory now that employees are bringing their own personal AI into the office. In-pocket AI is appealing. Why wouldn’t an organization want its employees to benefit from improved workflows and creativity, especially if it doesn’t have to pay for it?

A critical dynamic in this new pocket-AI workplace is that employers are seeing new blind spots and challenges emerge. Understanding and navigating them is crucial for avoiding data leaks, maintaining compliance, and protecting intellectual property. As we head into 2025, organizations must take time to recognize that invisible AI is unmanaged AI, and that this exposes an organization and its stakeholders to far-reaching consequences. By understanding these risks, they can be addressed in ways that not only protect an organization but also position its executives as thought leaders capable of aligning values, building trust, and enhancing overall efficiency without compromising employee creativity and freedom.

The Risks of AI Chatbots
Data privacy, security, and compliance are top of mind for most employers we speak to. Because an employee’s personal AI chatbot requires constant internet access and cloud storage, access is facilitated by an employer’s Wi-Fi network. This increases the risk of corporate data being stored incorrectly on third-party servers or inadvertently intercepted and exposed. It’s important to recognize that personal AI chatbots in a workplace thus raise exposure under industry regulations like the GDPR or HIPAA, significantly raising an organization’s legal exposure to fines or penalties.

Most AI chatbot services train their AI models in real time on the data their users provide them, and this can include sensitive intellectual property. Consider the following hypothetical prompt that a marketing employee at a pharmaceutical company might enter into their personal AI chatbot: “I have a client named [x] who has 37 patients in New York State with [y] medical conditions. They are born in [a, b, c, and d] years. Analyze our database to identify suitable drug plans.
Be sure to reference our latest cancer treatment strategy, named [this].”

First, the prompt may lead to privacy issues since it includes potentially identifiable information about patients, such as their location, medical conditions, and birth years. Depending on how an AI chatbot processes and stores this information, it could lead to violations of HIPAA: sharing protected health information (PHI) with an unapproved, third-party application puts the employer at risk of serious regulatory breaches, not to mention reputational damage. Moreover, patients’ identities have been incidentally reverse-engineered from far less data via far more seemingly innocuous methods.

Second, the hypothetical prompt contains confidential information when it mentions the employer’s latest cancer treatment strategy. Strategic information related to drug plans or treatment approaches may be inadvertently referenced and/or suggested to competitors’ employees who are using the same AI chatbot.

Third, the hypothetical prompt incorrectly assumes that the AI chatbot has access to one of the company’s secure databases. Despite having uploaded a few protected PDFs to the AI chatbot, the employee used the wrong terminology. The potential for this to cause problems is significant, as it can trigger the AI chatbot to creatively but silently fill in the blanks; as we know, AI chatbots have a tendency to hallucinate. Remember, they do not reflect the living world but rather analyze data models of the real world that you and your employees live in. The point is that the AI chatbot may generate misleading or inaccurate information simply because it is only as robust and comprehensive as the data it trains on. There is a significant risk that the employee recommends a particular drug plan to clients and colleagues based on flawed and incomplete health, medication, and business data.

Mitigation Strategies
AI is here to stay. The solution is not to ban or forbid these tools. That is unrealistic and may inadvertently cause friction for an employer when it decides to implement its own AI tools for employees down the road. Here are some proactive steps any organization can follow to minimize risks while enabling employees to use AI responsibly (a small illustrative guardrail follows these steps):

1. Build a Policy
Set expectations that outline what is and is not allowed when it comes to personal AI chatbots. Include rules about handling sensitive data, consequences for non-compliance, and standards for AI tool vetting. Moreover, generate a one-stop guideline PDF that gives your employees steps to follow, along with examples of both problematic and approved prompts.

2. Educate Your Employees
Training employees on AI risks and best practices ensures they understand their role in protecting your organization rather than merely existing inside of it. Training is always the first line of defense, and it is a proven method for promoting awareness and responsible use of AI.

3. Monitor and Audit
Numerous security solutions exist to identify what tools are being used inside a company’s network. Implement systems to track AI tool usage and audit data flows to identify unauthorized or high-risk applications. Inform your employees that you are monitoring AI-based network activity and will be conducting annual audits to ensure that their activity complies with organizational policy requirements.
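As a small illustration of the kind of guardrail such a policy and monitoring program could mandate, here is a hypothetical Python sketch that scrubs obvious identifiers from a prompt before it is sent to any external chatbot. The patterns are deliberately simple assumptions chosen for illustration; a real deployment would use a vetted PII/PHI detection tool rather than a handful of regular expressions.

import re

# Each pair is (pattern, placeholder); the patterns are illustrative only.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),     # North American phone numbers
    (re.compile(r"\b(?:19|20)\d{2}\b"), "[YEAR]"),         # birth years and other years
]

def scrub(prompt):
    """Replace obvious identifiers with placeholders before the prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("37 patients born in 1987 and 1992; contact jane.doe@example.com"))
# -> "37 patients born in [YEAR] and [YEAR]; contact [EMAIL]"

A filter like this does not make a prompt safe on its own (the hypothetical prompt above would still leak the treatment strategy’s name), which is why the policy, training, and auditing steps remain the primary controls.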
Mindfully Embracing an Opportunity
The rapid proliferation of AI companions challenges organizations to rethink how they relate to their employees. Risks certainly exist, but they are manageable through thoughtful policies, regular monitoring, and a strong training commitment.

Allowing employees to use personal AI chatbots isn’t merely a risk; it’s an opportunity. When that opportunity is embraced, it signals trust, adaptability, and a forward-thinking culture that responds proactively, not reactively, to AI. HR leaders, IT professionals, and virtually every executive can enable employees to innovate and create while simplifying tedious workflows through AI chatbots. This can work significantly in the organization’s favor while being done safely. Consider doing so to show your organization, your employees, and your clients that you are ready for the rapidly evolving digital landscape ahead in 2025 and the years to come.
