
Search Results


  • Contact | voyAIge strategy

    How to get in touch with our AI strategy and governance experts. Contact Us For questions, inquiries, or requests that require a personal response, we will respond within 48 hours. If you are submitting a request for a quote about our products or services, please use this form here. If you are requesting a proposal or bid, please use this form here.

  • Trump Signs Executive Order on AI | voyAIge strategy

    Trump Signs Executive Order on AI
    A Unilateral Resurgence of the AI Moratorium
    By Christina Catenacci, human writer
    Dec 15, 2025

    Key Points
    • Notwithstanding the fact that the AI moratorium was completely rejected in the recent Vote-a-Rama involving the Big Beautiful Bill, Trump has signed an Executive Order on AI
    • State Governors have reacted negatively to Trump’s announcement and have insisted that it constitutes federal government overreach
    • In line with Trump’s AI Action Plan, a consequence of continuing to have or enact state AI laws could be a withdrawal of federal funding from noncompliant states

    You may recall that I wrote about the lengthy Vote-a-Rama that took place in the early morning hours regarding the Big Beautiful Bill, which contained a 10-year moratorium on state-enacted and enforced AI laws. Ultimately, on July 4, 2025, the Big Beautiful Bill was signed by the President and became law—but the 10-year AI regulation moratorium for states was not included. More specifically, it was ultimately decided that the moratorium provision would be removed entirely. As a refresher, the moratorium stipulated that no state or political subdivision thereof would be able to enforce, during a 10-year period, any law or regulation of that state or a political subdivision thereof limiting, restricting, or otherwise regulating AI models, AI systems, or automated decision systems entered into interstate commerce. Notwithstanding that complete rejection, Trump signed an Executive Order on December 11, 2025. Thus, there has been a unilateral resurgence of the AI moratorium.

    There Were Hints of a 10-Year AI Moratorium

    Apparently, Trump recently confirmed that he planned to sign an executive order pre-empting AI regulations at the state level. Where did he announce this decision? On Truth Social, of course.
    His post stated: “There must be only One Rulebook if we are going to continue to lead in AI… We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS…You can’t expect a company to get 50 Approvals every time they want to do something…AI WILL BE DESTROYED IN ITS INFANCY!” Despite what academics, safety groups, and state lawmakers on both sides of the aisle have said, Trump remained adamant that this deregulation push would happen. More specifically, he believed that there should be one unified AI law in the United States, and that the individual states were interfering with that goal.

    How Have State Governors Reacted to this Tweet?

    Simply put, not well. For example, Florida Governor Ron DeSantis referred to federal government overreach in his recent response on X: “Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech and will prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights and data center intrusions on power/water resources” Similarly, Governor Gavin Newsom has called Trump’s attempt to restrict states from regulating AI “disgusting”, and has stated in a social media post: “This moratorium threatens to defund states like California with strong laws against #AI -generated child porn…But no surprise given Trump’s years palling around with Jeffrey Epstein.” Newsom’s spokesperson stated: “California did not become the innovation hub of the nation by turning its back on new technology — and we can help ensure that future growth happens responsibly and safely” Along the same lines, several organizations, including tech employee unions and other labor groups, tech safety and consumer protection nonprofits, and educational institutions, also signed letters to Congress opposing the idea of blocking state AI
    regulations and raising alarms about AI safety risks. In fact, many individuals have expressed a zero-sum view: it was innovation (AI deregulation) versus AI safety and accountability. It is either the tech companies in Silicon Valley, or it is red tape and bottlenecks. For instance, Sacha Haworth, Executive Director of The Tech Oversight Project, stated: “We’re in a fight to determine who will benefit from AI: Big Tech CEOs or the American people… We cannot afford to spend the next decade with Big Tech in the driver’s seat, steering us toward massive job losses, surveillance pricing algorithms that jack up the cost of living, and data centers that are skyrocketing home energy bills”

    What is in the Executive Order?

    The following are essential points contained in the Executive Order:
    • The goal is to lead in AI and promote United States national and economic security and dominance across many domains. In particular, United States AI companies must be free to innovate without cumbersome regulation, and there must be a minimally burdensome national standard in AI, not 50 discordant state ones
    • Within 30 days of the date of the order, the Attorney General must establish an AI Litigation Task Force (Task Force) whose sole responsibility is to challenge state AI laws that are inconsistent with the policy to dominate through a minimally burdensome national policy framework for AI (policy)
    • Within 90 days of the date of the order, the Secretary of Commerce must publish an evaluation of existing state AI laws that identifies onerous laws that conflict with the policy, as well as laws that should be referred to the Task Force. That evaluation must, at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution
    • Within 90 days of the date of the order, the Secretary of Commerce must issue a Policy Notice specifying the conditions under which states may be eligible for remaining funding under the Broadband Equity Access and Deployment (BEAD) Program that was saved through the Administration’s “Benefit of the Bargain” reforms, consistent with 47 U.S.C. 1702(e)-(f). That Policy Notice must provide that states with onerous AI laws are ineligible for non-deployment funds, to the maximum extent allowed by Federal law. The Policy Notice must also describe how a fragmented State regulatory landscape for AI threatens to undermine BEAD-funded deployments, the growth of AI applications reliant on high-speed networks, and BEAD’s mission of delivering universal, high-speed connectivity
    • Executive departments and agencies (agencies) must assess their discretionary grant programs in consultation with the Special Advisor for AI and Crypto and determine whether agencies may condition such grants on states either not enacting an AI law that conflicts with the policy or, for those states that have enacted such laws, on entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period in which the state receives the discretionary funding
    • The Chairman of the Federal Communications Commission must, in consultation with the Special Advisor for AI and Crypto, initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that pre-empts conflicting state laws
    • Within 90 days of the date of the order, the Chairman of the Federal Trade Commission must, in consultation with the Special Advisor for AI and Crypto, issue a policy statement on the application of the Federal Trade Commission Act’s (FTCA) prohibition on unfair and deceptive acts or practices under 15 U.S.C. 45 to AI models. That policy statement must explain the circumstances under which state laws that require alterations to the truthful outputs of AI models are pre-empted by the FTCA’s prohibition on engaging in deceptive acts or practices affecting commerce
    • The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI that pre-empts state AI laws that conflict with the policy set forth in the order
    • However, the legislative recommendation must not propose pre-empting otherwise lawful state AI laws relating to: child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; state government procurement and use of AI; and other topics as must be determined

    What are the Consequences of this Development?

    Whether Trump believes it or not, there are serious risks associated with AI, which need to be mitigated—not ignored. It may be news to some, but there can be simultaneous innovation and responsibility and accountability in the form of appropriate AI governance. It does not need to be one or the other. Put another way, it is not the tech companies against individuals in society. It is not that zero-sum. We can have both simultaneously: we can support innovation and protect individuals in society. In particular, we can protect against AI hallucinations, AI chatbots that encourage self-harm, and the exposure of children to inappropriate sexualized content and other societal harms—with responsible state AI regulation (there is no federal AI law, and one is not expected in the near future).
    States know their needs best, and it is troubling that Trump aims to silence them and prevent them from legislating in their own states. It will be no surprise when individual progressive states with substantive AI laws launch complaints against the federal administration over the possibility of a federal withdrawal of funding if states do not comply with the unilateral Executive Order. In fact, we may see court cases commenced as soon as possible—well before any action from the Task Force pursuant to the Executive Order. It is also telling that the Executive Order itself frames the issue of AI innovation as a zero-sum game: “To win, United States AI companies must be free to innovate without cumbersome regulation”

  • AI for Inventory Management | voyAIge strategy

    AI for Inventory Management
    How AI, RFID, and Real-Time Data are Reshaping Retail
    By Tommy Cooke, fueled by caffeine and creativity
    Apr 4, 2025

    Key Points
    • AI in inventory management isn’t about replacing people—it’s about removing guesswork so that people can do better work
    • Old Navy’s partnership with RADAR shows that when AI, RFID, and vision systems combine, customer experience gets more personal, not less
    • Before AI can work its magic, organizations must confront messy data, tangled systems, and human hesitation—because the tech isn’t the hard part, the people are

    I had a client that had trouble selling flip flops–the sandals. The client is a major pharmacy with a significant retail component to its business. Flip flops were causing three issues:
    1. flip flops were piling up in storerooms across the continent
    2. the stockpile of dated, old flip flops was growing significantly
    3. it was taking too much time to scan inventory of flip flops that nobody wanted

    The answer to these three pain points was found in AI for inventory management. It was actually three disparate AI systems working in tandem, bundled into a new technological solution. This new solution analyzed historical data to determine when flip flops should be put out on the floor and advertised on sale, triggered automatic replenishment of flip flops (so as to avoid over-ordering), and actively monitored when flip flops were physically removed from a shelf. The solution is becoming more commonplace. Old Navy, a subsidiary of Gap Inc., recently made retail headlines: it is embarking on a multi-year plan to integrate RADAR’s AI-driven RFID technology into its stores. The idea is to provide associates on the floor with real-time inventory data so that they can locate items quickly within the store.
    By combining RFID with AI and computer vision (to physically see inventory), Old Navy is not only aiming to improve associate efficiency and accuracy, but also to enhance the customer service experience. I don’t know about you, but I’m particularly excited; Old Navy never seems to have my size of jeans–ever. Much like my previous client who struggled with selling flip flops, AI can make a significant impact on inventory management. Let’s dive into this a bit further.

    AI for Inventory Management

    Enhanced Demand Forecasting. Much like the flip flop example, AI algorithms can analyze historical sales data and marketing data internally. Those data can be combined with external data, such as market and consumption trends, to anticipate future demand. The benefit of doing so shouldn’t be understated. Smart demand forecasting allows retailers to maintain optimal stock levels, thereby saving costs by reducing overstock. For example, rather than just knowing that swimsuits sell better in July, an AI model might flag an early-season heatwave in a particular region, cross-reference those measurements with historical sales surges, and recommend adjusting stock levels in that cluster of stores.

    Automated Replenishment. Think of this as the reactive component of demand forecasting: the other side of the same coin. In this instance, AI systems work off inventory data to automate the reordering of new inventory. In the past, replenishment often relied on static rules: if stock drops below five, reorder ten. But AI can flip this logic on its head, making replenishment smarter and not just faster. Much like the way Old Navy will do so with RADAR, RFID monitors shelf-level data and warehouse status simultaneously. If a product is selling quickly in one store but not others, the system can auto-generate a transfer request. Or it can pause auto-orders if it predicts a drop in demand due to, for example, weather events or shifts in promotional priorities.
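    The difference between a static rule and forecast-aware replenishment can be sketched in a few lines. This is a purely illustrative example, not any vendor’s actual system; the function names, the seven-day lead time, and the 1.2 safety factor are all hypothetical:

    ```python
    # Illustrative sketch: static reorder rule vs. a forecast-aware rule.
    # All names and numbers are hypothetical, not from RADAR or any real system.

    def static_reorder(stock: int, threshold: int = 5, qty: int = 10) -> int:
        """Classic rule: if stock drops below the threshold, reorder a fixed amount."""
        return qty if stock < threshold else 0

    def forecast_reorder(stock: int, forecast_daily_demand: float,
                         lead_time_days: int = 7, safety_factor: float = 1.2) -> int:
        """Forecast-aware rule: reorder enough to cover expected demand over the
        supplier lead time (plus a safety margin), instead of a fixed quantity."""
        expected_demand = forecast_daily_demand * lead_time_days * safety_factor
        shortfall = expected_demand - stock
        return max(0, round(shortfall))

    # With 4 units left and brisk predicted sales, the forecast-aware rule orders
    # more than the static rule's fixed 10; with slow predicted sales, it orders
    # nothing at all, even though stock is below the static threshold.
    print(static_reorder(4))
    print(forecast_reorder(4, forecast_daily_demand=3.0))
    print(forecast_reorder(4, forecast_daily_demand=0.2))
    ```

    The point of the sketch is the shape of the decision, not the numbers: the static rule only looks at the shelf, while the forecast-aware rule looks at where demand is heading.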
    This is a particularly attractive capability for retailers because it means that inventory management becomes more granular and adaptive.

    Operational Efficiency. Better forecasting and replenishment do not just make inventory numbers look nice. They free up actual people to do better work. When store associates stop manually counting items or looking for hidden stock in the storeroom, they can focus on customers. When warehouse teams stop scrambling to process last-minute shipments due to stockouts, they can plan strategically. Take RADAR, for example: this system tracks the movement of every tagged item in real time and allows associates to search for an item using a mobile app and be guided directly to it. It’s a small change, but compounding small changes have a ripple effect, specifically faster order fulfillment. It means a customer can actually find what they came in for. It means an employee gets to spend more time helping someone, and less time on scavenger hunts.

    Implementation Considerations

    For all the power AI brings to inventory management, integrating it successfully is not just a matter of plug-and-play. In each of the two examples I discussed above–my former client and Old Navy–the real challenge is not the technology: it’s the people involved. The following points are worth considering.

    Data Quality. AI is only as smart as the data it’s trained on, as well as the data it receives in real time. But retail data is messy. Product SKUs vary across systems, sales data are often fragmented between platforms, and real-time inventory counts can be inconsistent at best. So, for AI to work, organizations must undergo a data hygiene campaign that involves cleaning, labeling, and integrating data sources. It’s a critical step, it’s not particularly enjoyable, and it involves time from people across IT, operations, finance, management, and frontline staff.
    So, remember that if a system doesn’t trust its data, it can’t act on it, nor will your people be able to trust it. Proper data preparation needs to be preceded by proper communications and proper training plans.

    Legacy Integration. Many retailers, especially large-scale organizations, operate on a patchwork of legacy tools. Sometimes they are systems that are decades old. Many of these systems are bespoke, custom-built designs. Others are bolt-on afterthoughts. When AI is integrated with or interacts with these systems, operations can become prohibitively complicated, not to mention expensive. I’d be remiss not to mention the impact of these changes on your people, too. A guided approach is required, one that takes into account what it means to bring together modern technology with antiquated software–particularly when your staff are more accustomed to the latter than the former.

    Ethics and Privacy. RFID and computer vision systems can, intentionally or not, start to resemble surveillance. Tracking product movement is one thing—tracking employee and shopper behaviour is quite another. If AI systems are used to monitor human productivity or shopper movements without transparency, the result will be mistrust, to say nothing of legal risks and morale issues. It is not a new issue that consumers are concerned about what data is collected on them when they enter stores. When you add AI to the mix, concerns increase. Retailers need to be thoughtful about how they use these systems, what data they collect, and who has access. Ethical use is not just a matter of compliance, but a matter of culture, priorities, and principles.

    The Future of AI in Inventory Management

    The trajectory of AI in inventory management points towards increasingly sophisticated applications and use cases. Machine learning, computer vision, and robotics are set to further enhance inventory accuracy and operational efficiency.
    There is much to be saved and salvaged through these advancements, though it is important to recognize that these advancements are investments. They require planning, time, and reflection. Old Navy's partnership with RADAR exemplifies the transformative power of AI in inventory management. It’s also a reminder that people always matter when working with AI.

  • News Publishers Sue Canadian AI Startup Cohere for Copyright and Trademark Infringement | voyAIge strategy

    News Publishers Sue Canadian AI Startup Cohere for Copyright and Trademark Infringement
    Another AI and intellectual property infringement lawsuit
    By Christina Catenacci, human writer
    Mar 7, 2025

    Key Points
    • On February 13, 2025, Cohere Inc (Cohere) was sued by a number of news publishers for copyright and trademark infringement
    • The Cohere case is very similar to the Thomson Reuters case, in which Thomson Reuters was granted partial summary judgment on its infringement claims and on fair use
    • The Thomson Reuters case is instructive when it comes to the Cohere and New York Times cases

    On February 13, 2025, Cohere Inc (Cohere) was sued by a number of news publishers, including Advance Local Media LLC; Advance Magazine Publishers Inc. D/B/A Conde Nast; The Atlantic Monthly Group LLC; Forbes Media LLC; Guardian News & Media Limited; Insider, Inc.; Los Angeles Times Communications LLC; The McClatchy Company, LLC; Newsday LLC; Plain Dealer Publishing Co.; Politico LLC; The Republican Company; Toronto Star Newspapers Limited; and Vox Media, LLC (Publishers).

    What is the lawsuit about?

    The first paragraph of the lawsuit discusses the nature of the case, and points out that the lawsuit is about protecting journalism from systemic copyright and trademark infringement: “Rather than create its own content, Cohere takes the creative output of Publishers, some of the largest, most enduring, and most important news, magazine, and digital publishers in the United States and around the world. Without permission or compensation, Cohere uses scraped copies of our articles, through training, real-time use, and in outputs, to power its artificial intelligence (AI) service, which in turn competes with Publisher offerings and the emerging market for AI licensing.
Not content with just stealing our works, Cohere also blatantly manufactures fake pieces and attributes them to us, misleading the public and tarnishing our brands” In fact, the lawsuit clearly talks about how publishers spend enormous amounts of time investigating, reporting, and ultimately publishing their expressive and groundbreaking pieces, which span the full spectrum of investigative reporting, breaking news, opinion pieces, arts and entertainment reviews, sports coverage, and political and business journalism. The Publishers claim that Cohere, with its valuation of over $5 billion, fails to license the content it uses and takes the Publishers’ valuable articles, without authorization and without providing compensation. It copies, uses, and disseminates the Publishers’ news and magazine articles to build and deliver a commercial service that mimics, undercuts, and competes with lawful sources for their articles and that displaces existing and emerging licensing markets. More specifically, the Publishers claim that Cohere copies the Publishers’ works to train its suite of LLM AI systems of products. They claim that the Publishers value innovation and AI if ethically deployed—they already license their articles to AI companies. However, they say that Cohere improperly usurps their creative labour and investments for the sake of its own profits. Most troubling, the Publishers claim that Cohere’s AI models deliver outputs that include full verbatim copies, substantial excerpts, and substitutive summaries of Publishers’ works—even current, breaking news pieces and articles protected by paywalls. Ultimately, the Publishers claim that Cohere’s actions amount to “massive, systematic copyright infringement and trademark infringement, and have caused significant injury to Publishers”. Moreover, they are adamant that left unfettered, such misconduct threatens the continued availability of the valuable news, magazine, and media content that Publishers produce. 
    In fact, there are over 4,000 articles that the Publishers claim are registered copyrighted works that have been infringed. Worse, they claim that Cohere has passed off its own hallucinated articles as articles from the Publishers. The following is the list of the Publishers’ Causes of Action:
    • Count I: Direct Copyright Infringement, in violation of the Copyright Act—each infringement constitutes a separate and distinct act of infringement, and Cohere’s acts of infringement are willful, intentional, and purposeful, in disregard of and with indifference to the Publishers’ rights
    • Count II: Secondary Copyright Infringement, in violation of the Copyright Act—to the extent Cohere seeks to shirk responsibility for its own conduct by shifting blame onto its users and customers, the Publishers also bring claims for secondary liability in the alternative
    • Count III: Trademark Infringement, in violation of the Lanham Act—Cohere uses marks that are either identical to, variations on, or colourable imitations of Publishers’ federally registered trademarks in connection with the generation and distribution of hallucinated articles that Publishers did not publish. Cohere has caused and is likely to cause confusion, mistake, or deception as to whether the hallucinated articles Cohere provides are associated or affiliated with, or are sponsored, endorsed, or approved by the Publishers
    • Count IV: False Designation of Origin, in violation of the Lanham Act—Cohere has used and continues to use the Publishers’ marks in interstate commerce in a misleading manner, falsely associating the Publishers’ valuable trademarks and trusted brands with Cohere and Cohere’s products and services. As a result, Cohere’s users are deceived and are likely to continue to be deceived by the appearance of the Publishers’ trademarks on Cohere’s hallucinated articles

    The Publishers are asking for judgment that Cohere is liable under the Copyright Act and the Lanham Act; equitable relief including a permanent injunction; an order telling Cohere to stop training or fine-tuning AI models or generating content from AI models; an order requiring Cohere to destroy under the court’s supervision all infringing copies of the Publishers’ works; statutory damages and actual damages; fees; and interest.

    Who is Cohere, and What is its Reaction to the Lawsuit?

    Cohere is a Canadian company with its principal places of business in Toronto, San Francisco, London (UK), and New York. It is a multinational technology company focused on AI for the enterprise, specializing in large language models. It has been reported that Cohere expects the court to side with it because the company has long worked to mitigate the risk of intellectual property infringement. Further, Josh Gartner apparently said that Cohere “strongly stands by its practices for responsibly training its enterprise AI” and believes the lawsuit is “misguided and frivolous.”

    This case is part of a string of intellectual property cases against AI companies

    As you may recall, I wrote an article titled, New York Times Sues OpenAI and Microsoft for Copyright Infringement, which discussed the copyright lawsuit that the New York Times launched against OpenAI and Microsoft for the same type of alleged infringement. That case has not yet been decided, and as we shall see below, the New York Times could very well win its case.

    A Similar Case that Supported a Copyright Holder

    In another similar case, Thomson Reuters and West Publishing Group sued Ross Intelligence in the District Court for the District of Delaware.
    This February 11, 2025 decision is interesting, and uniquely written by Bibas, the Circuit Judge. This was his first paragraph in the decision: “A smart man knows when he is right; a wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does––even if it comes late, as it did here” This was because he actually revised his previous 2023 decision and ultimately granted Thomson Reuters’s motion for partial summary judgment on direct copyright infringement and related defenses, and on fair use. To that end, he denied Ross’s motion for summary judgment on fair use and Ross’s motion for summary judgment on Thomson Reuters’s copyright claims.

    What happened in this case?

    As we know, Thomson Reuters owns one of the largest legal research platforms, Westlaw. Users need to pay to access and use the platform. Westlaw also contains editorial content and annotations, such as the headnotes that summarize key points of law and case holdings. Westlaw organizes its content using the Key Number System, a numerical taxonomy—Thomson Reuters owns copyrights in Westlaw’s copyrightable material. Ross decided to make a legal research search engine that used AI and competed with Westlaw. Ross needed a database of legal questions and answers to train the tool; therefore, Ross asked to license Westlaw’s content. Thomson Reuters refused. Consequently, Ross made a deal with LegalEase to get training data in the form of “Bulk Memos”, which are lawyers’ compilations of legal questions with good and bad answers. Notably, LegalEase gave those lawyers a guide explaining how to create those questions using Westlaw headnotes, while clarifying that the lawyers should not just copy and paste headnotes directly into the questions. LegalEase sold Ross roughly 25,000 Bulk Memos, which Ross used to train its AI search tool. In response, Thomson Reuters sued for copyright infringement.
    More specifically, the company sued for direct copyright infringement and argued that Ross’s fair use defense should fail. When it came to direct copyright infringement, the judge stated that Thomson Reuters had to show both that (1) it owned a valid copyright and (2) Ross copied protectable elements of the copyrighted work. The second element required showing that Ross actually copied the work and that its copy was substantially similar to the work. The judge granted summary judgment for Thomson Reuters on whether the headnotes and the Key Number System were original enough, preventing Ross from rebutting the presumption of validity. Looking at about 4,000 headnotes, the judge considered expert evidence, compared judicial decisions to the headnotes, and decided that Thomson Reuters should be granted summary judgment on actual copying of the data. Then, the judge asked whether an ordinary user of a product would find it substantially similar to the copyrighted work, and answered in the affirmative: Thomson Reuters was granted summary judgment on substantial similarity regarding the headnotes. Moreover, the judge confirmed that Ross’s defenses failed—all of them. He swiftly rejected the innocent infringement claim since innocence did not limit liability. Similarly, he disagreed with Ross that Thomson Reuters misused its own copyrights. He also rejected Ross’s claim involving a merger of expression. He also quickly rejected a claim of scenes à faire (a principle in copyright law in which certain elements of a creative work are held to be not protected when they are mandated by or customary to the genre). But the interesting defense that was raised by Ross was the fair use defense.
    Section 107 of the Copyright Act required the judge to consider at least these four factors in the analysis:
    1. The use’s purpose and character, including whether it is commercial or nonprofit: this one went to Thomson Reuters, since Ross’s use was commercial and not transformative
    2. The copyrighted work’s nature: this one went to Ross, since Westlaw’s work was not that creative
    3. How much of the work was used and how substantial a part it was relative to the copyrighted work’s whole: this one went to Ross, since the judge stated that what matters is not “the amount and substantiality of the portion used in making a copy, but rather the amount and substantiality of what is thereby made accessible to a public for which it may serve as a competing substitute”
    4. How Ross’s use affected the copyrighted work’s value or potential market (this was the most important factor): this one went to Thomson Reuters, since Ross tried to compete with Westlaw by developing a market substitute, and it did not matter whether Thomson Reuters had used the data to train its own legal search tools—the effect on a potential market for AI training data was enough

    Thus, when balancing the factors, it became clear that Thomson Reuters had to be successful. The judge granted partial summary judgment to Thomson Reuters on direct copyright infringement for the headnotes. For those headnotes, the only remaining factual issue on liability was that some of those copyrights may have expired or been untimely created. This factual question underlying copyright validity was saved for the jury. The judge also granted summary judgment to Thomson Reuters against Ross’s defenses of innocent infringement, copyright misuse, merger, scenes à faire, and fair use. Likewise, the judge denied Ross’s motions for summary judgment on direct copyright infringement and fair use.
    Though this newer decision replaced many parts of the 2023 decision, some parts remained relevant, including rulings on contributory liability, vicarious liability, and tortious interference with contract.

    What does this mean for AI and copyright?

    As can be seen from the Thomson Reuters case, the defense of fair use is not likely to be successful for defendants who directly copy material and use it to train AI models. We see that judges complete a side-by-side comparison of the products at issue to make their decisions. The defendants would have to show that their new work was transformative. This decision may be instructive for the upcoming decisions related to the New York Times and Cohere. In fact, judges who complete the fair use analysis go through each fact situation and examine the following four key factors to make a determination:
    1. The use’s purpose and character, including whether it is commercial or nonprofit
    2. The copyrighted work’s nature
    3. How much of the work was used and how substantial a part it was relative to the copyrighted work’s whole
    4. How the use affected the copyrighted work’s value or potential market (this is the most important factor)

    It will be interesting to see if the New York Times and the Publishers will be successful in light of this Thomson Reuters decision. We will keep you posted…

  • Request a Quote | voyAIge strategy

    Inquire about our diverse range of AI solutions. Request a Quote Please let us know what services and/or products you are interested in and we will contact you within 48 hours. If you are requesting a proposal or bid , please use this form here . First Name Last Name Email Quote Request Details Send Thanks for submitting!

  • AI & The Future of Work DL | voyAIge strategy

    Thank you for subscribing! Download the Report Here Read the Report Online Here

  • The New Claude 4 Can Code, But Leaders Should Still Sign Off | voyAIge strategy

The New Claude 4 Can Code, But Leaders Should Still Sign Off Claude 4 is a leap forward, but it's also a governance wake-up call By Tommy Cooke, powered by caffeine and curiosity May 30, 2025 Key Points: Delegating a technical task doesn't guarantee it's done right—oversight matters, even when the system looks competent Claude 4’s ability to work autonomously highlights the growing need for clear accountability and human verification As AI systems become more capable, leaders must stay close to the outcomes—even when they don’t touch the inputs In my formative years as a young adult, I was an active musician in a rock band. My band and I performed regularly throughout my undergraduate years. As much fun as gigging is, live shows are a scramble: hauling gear, setting up, sound-checking, and hoping nothing goes wrong. At one show, I was behind the eight ball a bit. Two strings snapped on my main guitar. So, I asked the venue’s sound tech to wire up my pedalboard while I handled other setup tasks. When we hit soundcheck, my sound was a mess. One of the pedals had been placed in the wrong order. It was a simple mistake, but it could have derailed the entire show. I’ve never forgotten the lesson: delegating a technical task doesn’t mean it’s done right. You still need to check the signal before the lights go up. This is a moment I reflected on when I read that Anthropic had released Claude 4. The reflection was triggered by the fact that most headlines focused on one detail: the model can autonomously generate software for hours at a time. For developers, this is surely a turning point: an AI system that not only writes code but improves it quietly, efficiently, and without supervision. But that’s not the full story. If you are a Pro, Max, Team, or Enterprise Claude plan user, this matters to you because you will have access to Claude 4. 
This means you and potentially your organization may now have access to a brand new, advanced AI that can carry out complex work without human input. It means that leadership must ask: what’s the governance plan? Who verifies the output? Who signs off? Much like the lesson I learned from asking someone unfamiliar with my rig to set it up for me on a rock stage, there is something we as business leaders can do to ensure that AI innovations still act effectively on our behalf. Claude 4 and the Shift to Autonomous Execution Until recently, generative AI required heavy user input. The human wrote the prompt, and the system responded. That dynamic made it easy to keep the human in the loop to control the task, validate the output, and decide what comes next. Claude 4 changes these terms. It introduces what many call agentic AI: models capable of reasoning through tasks, planning multi-step actions, and executing work without continual prompting. Claude 4 is demonstrating that it can work independently for hours, reconfigure code, and make judgment calls along the way. So, it’s not just writing the code—it is actually finishing the job as well. This is a major development. But with this innovation in AI autonomy comes a truth: the more work that AI performs alone, the less visibility organizations have into how it gets done. The AI Governance Gap Is Growing The risk isn't that AI will make obvious errors. It’s that it will produce plausible work that quietly deviates from your standards, assumptions, or intentions. Do you have someone in place who can notice these changes before it’s too late? That’s the real governance gap. It’s not about control over prompts, but rather prioritizing oversight over outcomes. This means organizations need to reconsider how they monitor AI-driven work. That doesn’t mean leaders need to personally review every AI-generated output. 
But it does mean they need to put in place clear lines of accountability, regular review processes, and internal checks to ensure AI isn’t working in a vacuum. This also means that oversight can no longer be reactive—it needs to be built in from the beginning. What Does Accountable AI Adoption Look Like? Organizations don’t need to halt progress to manage these risks, but they do need to move forward with clarity: One of the most effective ways to begin is by documenting how AI is being used across the business. This doesn’t need to be a heavy-handed process. Even a lightweight registry of AI use cases can help identify where autonomy is increasing and where review protocols might be missing Leaders should also establish guidelines for when human oversight is required. Not every AI-generated output requires manual review, but some certainly do. Defining these boundaries in advance protects against over-reliance on unchecked systems Lastly, every autonomous system should have a clearly named owner. Someone in the organization needs to be responsible for verifying that the AI’s work aligns with business objectives, ethical expectations, and legal obligations. The idea isn’t to create bottlenecks—it’s to make sure someone is watching. Signing Off Is Still a Human Task Claude 4 marks real progress. It moves us closer to a world where AI can take on meaningful work, save time, and support innovation. But that progress also demands more from leadership. Delegating work to machines doesn’t absolve humans of responsibility. If anything, it raises the bar, because the more invisible the work becomes, the more deliberate our oversight must be. Leaders don’t need to fear these systems. But they do need to govern them. They need to understand where AI is being used, what it’s allowed to do, and who remains accountable when things go wrong. This type of oversight can help organizations explain how their AI systems generate outputs.
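The "lightweight registry of AI use cases" described above can be as simple as a small data structure that records each system, its named owner, and its level of autonomy, and that can flag gaps. The sketch below is a minimal illustration of that idea; all field names and autonomy labels are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a lightweight AI use-case registry.
# Field names and autonomy labels are assumptions, not a standard.

@dataclass
class AIUseCase:
    name: str
    owner: str            # the clearly named owner accountable for outputs
    autonomy: str         # e.g. "assistive", "supervised", "autonomous"
    human_review: bool    # is manual review required before outputs ship?

@dataclass
class AIRegistry:
    entries: list = field(default_factory=list)

    def register(self, use_case: AIUseCase) -> None:
        self.entries.append(use_case)

    def unreviewed_autonomous(self) -> list:
        """Flag use cases with high autonomy but no required review."""
        return [u for u in self.entries
                if u.autonomy == "autonomous" and not u.human_review]

registry = AIRegistry()
registry.register(AIUseCase("Code generation (Claude 4)", "Eng Lead",
                            "autonomous", human_review=False))
registry.register(AIUseCase("Meeting summaries", "Ops Lead",
                            "assistive", human_review=True))
gaps = registry.unreviewed_autonomous()
print([u.name for u in gaps])  # use cases missing a review protocol
```

Even a spreadsheet serves the same purpose; the point is that the query "which autonomous systems have no reviewer?" becomes answerable at all.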

  • Trump's AI Action Plan | voyAIge strategy

Trump's AI Action Plan America's Bold Bid for Global AI Dominance: A Marked Departure from Biden’s Emphasis on AI Safety By Christina Catenacci, human writer Jul 30, 2025 Key Points On July 23, 2025, the Trump administration released the document, "Winning the AI Race: America's AI Action Plan” ( Action Plan ) The Action Plan focuses on accelerating AI innovation, building AI infrastructure, and leading international diplomacy and security The Action Plan suggests that there is urgency in completing these policy actions, but there are no clear deadlines with which to comply On July 23, 2025, the Trump administration released the document, "Winning the AI Race: America's AI Action Plan” ( Action Plan ). The authors of the Action Plan include Michael J. Kratsios (Assistant to the President for Science and Technology), David O. Sacks (Special Advisor for AI and Crypto), and Marco A. Rubio (Assistant to the President for National Security Affairs). This move represents a dramatic shift in US AI policy. It is built on three strategic pillars: accelerating AI innovation, building AI infrastructure, and leading international diplomacy and security. The Action Plan outlines federal policy actions that are designed to cement American dominance in the global AI race. Unlike Biden’s previous safety-first approach, Trump's plan prioritizes deregulation, rapid deployment, and ideological neutrality in AI systems. Indeed, this plan is in line with Trump’s 2025 Executive Order on AI and VP Vance’s comments from the February 2025 AI Action Summit in Paris, both of which downplayed AI safety and highlighted the importance of AI innovation, AI deregulation, and American dominance. It may be challenging for the US to assert its dominance when it comes to AI governance, an area where the European Union currently leads. What is in Trump's AI Action Plan? 
Biden's October 2023 AI executive order was titled, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, and emphasized safety testing, bias mitigation, and careful deployment. On the other hand, Trump's Action Plan prioritizes speed, deregulation, and ideological neutrality. It is clear that there is some urgency contained in this Action Plan: the opening paragraph is a quote by the President himself: “As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. To secure our future, we must harness the full power of American innovation” In fact, the first sentence of the introduction notes that the US is in a race to achieve global dominance in AI. Pillar I: Accelerate AI Innovation The first pillar is introduced by this statement: “America must have the most powerful AI systems in the world, but we must also lead the world in creative and transformative application of these systems. 
Achieving these goals requires the Federal government to create the conditions where private-sector-led innovation can flourish” There are several policy actions that are recommended to be taken under each of these priorities: Removing red tape and onerous regulation Ensuring that frontier AI protects free speech and American values Encouraging open-source and open-weight AI Enabling AI adoption Empowering American workers in the age of AI Supporting next-generation manufacturing Investing in AI-enabled science Building world-class scientific datasets Advancing the science of AI Investing in AI interpretability, control, and robustness breakthroughs Building an AI evaluations ecosystem Accelerating AI adoption in government Driving adoption of AI within the Department of Defense Protecting commercial and government AI innovations Combatting synthetic media in the legal system For example, under enabling AI adoption, some of the main policy actions include: Establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results. 
These efforts would be enabled by regulatory agencies such as the Food and Drug Administration (FDA) and the Securities and Exchange Commission (SEC), with support from DOC through its AI evaluation initiatives at NIST Launch several domain-specific efforts (e.g., in healthcare, energy, and agriculture), led by NIST at DOC, to convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains Led by the Department of Defense (DOD) in coordination with the Office of the Director of National Intelligence (ODNI), regularly update joint DOD-Intelligence Community (IC) assessments of the comparative level of adoption of AI tools by the United States, its competitors, and its adversaries’ national security establishments, and establish an approach for continuous adaptation of the DOD and IC’s respective AI adoption initiatives based on these AI net assessments Prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications, via collaboration between the IC, the Department of Energy (DOE), CAISI at DOC, the National Security Council (NSC), and OSTP Pillar II: Build American AI Infrastructure The second pillar is introduced by this statement: “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today. American energy capacity has stagnated since the 1970s while China has rapidly built out their grid. 
America’s path to AI dominance depends on changing this troubling trend” There are several policy actions that are recommended to be taken under each of these priorities: Creating streamlined permitting for data centres, semiconductor manufacturing facilities, and energy infrastructure while guaranteeing security Developing a grid to match the pace of AI innovation Restoring American semiconductor manufacturing Building high-security data centres for military and intelligence community usage Training a skilled workforce for AI infrastructure Bolstering critical infrastructure cybersecurity Promoting secure-by-design AI technologies and applications Promoting mature federal capacity for AI incident response For instance, when it comes to training a skilled workforce, we see the following policy actions: Led by DOL and DOC, create a national initiative to identify high-priority occupations essential to the buildout of AI-related infrastructure. This effort would convene employers, industry groups, and other workforce stakeholders to develop or identify national skill frameworks and competency models for these roles. These frameworks would provide voluntary guidance that may inform curriculum design, credential development, and alignment of workforce investments Through DOL, DOE, ED, NSF, and DOC, partner with state and local governments and workforce system stakeholders to support the creation of industry-driven training programs that address workforce needs tied to priority AI infrastructure occupations. These programs should be co-developed by employers and training partners to ensure individuals who complete the program are job-ready and directly connected to the hiring process. Models could also be explored that incentivize employer upskilling of incumbent workers into priority occupations. DOC should integrate these training models as a core workforce component of its infrastructure investment programs. 
Funding for this strategy will be prioritized based on a program’s ability to address identified pipeline gaps and deliver talent outcomes aligned to employer demand Led by DOL, ED, and NSF, partner with education and workforce system stakeholders to expand early career exposure programs and pre-apprenticeships that engage middle and high school students in priority AI infrastructure occupations. These efforts should focus on creating awareness and excitement about these jobs, aligning with local employer needs, and providing on-ramps into high-quality training and Registered Apprenticeship programs Through the ED Office of Career, Technical, and Adult Education, provide guidance to state and local CTE systems about how to update programs of study to align with priority AI infrastructure occupations. This includes refreshing curriculum, expanding dual enrollment options, and strengthening connections between CTE programs, employers, and training providers serving AI infrastructure occupations Led by DOL, expand the use of Registered Apprenticeships in occupations critical to AI infrastructure. Efforts should focus on streamlining the launch of new programs in priority industries and occupations and removing barriers to employer adoption, including simplifying registration, supporting intermediaries, and aligning program design with employer needs Led by DOE, expand the hands-on research training and development opportunities for undergraduate, graduate, and postgraduate students and educators, leveraging expertise and capabilities in AI across its national laboratories. This should include partnering with community colleges and technical/career colleges to prepare new workers and help transition the existing workforce to fill critical AI roles Pillar III: Lead in International AI Diplomacy and Security The third pillar is introduced by this statement: “To succeed in the global AI competition, America must do more than promote AI within its own borders. 
The United States must also drive adoption of American AI systems, computing hardware, and standards throughout the world. America currently is the global leader on data center construction, computing hardware performance, and models. It is imperative that the United States leverage this advantage into an enduring global alliance, while preventing our adversaries from free-riding on our innovation and investment” There are several policy actions that are recommended to be taken under each of these priorities: Exporting American AI to allies and partners Countering Chinese influence in international governance bodies Strengthening AI compute export control enforcement Plugging loopholes in existing semiconductor manufacturing export controls Aligning protection measures globally Ensuring that the US Government is at the forefront of evaluating national security risks in frontier models Investing in biosecurity By way of example, with respect to countering Chinese influence in international governance bodies, we see the following policy action: Led by DOS and DOC, leverage the U.S. position in international diplomatic and standard-setting bodies to vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence Timeline One would think that the US administration has set an aggressive timeline for implementation of the Action Plan. Key agencies including the Department of Commerce, Department of Energy, and NIST have been tasked with developing specific implementation plans in the near term. The introduction states, “Simply put, we need to ‘Build, Baby, Build!’” But there are no clear deadlines. The success of this ambitious agenda will likely depend on several factors: Congress's willingness to provide necessary funding, the private sector's ability to scale infrastructure rapidly, and the international community's receptiveness to American AI leadership. 
What Can We Take from This Development? Trump's Action Plan represents one of the most comprehensive technology policy initiatives in American history. How will this play out? Success could cement American dominance in AI, and failure could leave America trailing in a competition that many view as existential. Indeed, the promise is stated right in the Introduction of this Action Plan: "Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people” Interestingly, the introduction also states: “Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits” As we know, the European Union has already set the gold standard for AI regulation with its AI Act . Many countries already look to this gold standard when legislating and enforcing AI laws in their jurisdictions. It is difficult to see how the American federal government, under an administration that has not yet enacted even a national privacy law, much less a comprehensive AI law, will come close to catching up to the European Union. In fact, it may be challenging for the US to be taken seriously in this regard; interestingly, the US suggests that this Action Plan will uphold American values and then influence all jurisdictions around the world. Again, it is the European Union that has already been an influence on the world in the technology sphere—not the US—since it has encouraged countries around the world to act in accordance with European values and laws. In my view, countries will not turn around and change their groundbreaking laws, or change what laws they will comply with, just to please the US.

  • Reddit Sues Data Scrapers and AI Companies | voyAIge strategy

Reddit Sues Data Scrapers and AI Companies The Accusation: Stealing Valuable User-Generated Data By Christina Catenacci, human writer Oct 24, 2025 Key Points On October 22, 2025, Reddit filed a Complaint in the United States District Court, Southern District of New York against Serpapi LLC, Oxylabs UAB, Awmproxy, and Perplexity AI, Inc Reddit has alleged that the Defendants have violated copyright law and used its user-generated content without permission and without entering into an agreement with Reddit that protects users The Defendants have denied the allegations and plan to defend themselves in court On October 22, 2025, Reddit filed a Complaint in the United States District Court, Southern District of New York against Serpapi LLC, Oxylabs UAB, Awmproxy, and Perplexity AI, Inc (Defendants). In short, Reddit accused the Defendants of stealing valuable copyrighted user content without permission and without entering into an agreement with Reddit that protects users. The case gets at the tension between content owners like Reddit and AI companies that use user-generated data for commercial gain. What’s more, this lawsuit deals not just with AI companies, but also with data scrapers that pull the data from Google’s Search Results Pages to circumvent technological protections. What Happened? According to Reddit, the lawsuit was commenced because it was necessary to stop the “industrial-scale, unlawful circumvention of data protections by a group of bad actors who will stop at nothing to get their hands on valuable copyrighted content on Reddit”. Three of the Defendants, Oxylabs UAB, AWMProxy, and SerpApi (a Lithuanian data scraper, a former Russian botnet, and a Texas company that publicly advertises its shady circumvention tactics), are data-scraping service providers who specialize in creating and selling tools that are designed to circumvent digital defenses and scrape others’ content. 
The tools aim to bypass two levels of security: first, evading Reddit’s own anti-scraping measures, and second, circumventing Google’s controls and scraping Reddit content directly from Google’s Search Engine Results Pages . Reddit equated this behaviour to what bank robbers do—knowing that they cannot get into the bank vault, they break into the armored truck carrying the cash instead. The fourth Defendant, Perplexity AI Inc., was equated to a “North Korean hacker” and is a willing customer of at least one of its co-defendants. Reddit submits that Perplexity AI will apparently do anything to get the Reddit data to fuel its “answer engine”. Reddit, founded 20 years ago, is one of the largest repositories of human conversation in existence. In particular, over 100 million unique users engage in discussions each day across its hundreds of thousands of interest-based communities (or “subreddits”), which is a continuous stream of real-time and creative copyrighted works. According to Reddit, it is prohibited to engage in unauthorized commercialization of Reddit content unless there is an express agreement with guardrails in place to ensure that user rights are protected. In a nutshell, if AI companies want to legally access Reddit data, they need to comply with Reddit’s policies just like Google and OpenAI have. What is Reddit Claiming? 
Reddit has asserted that the first three Defendants have: scraped the data from Google’s Search Engine Results Pages instead of Reddit’s site (like the bank robbers attacking the truck carrying the cash) while masking their identities, hiding their locations, and disguising their web scrapers as regular people to circumvent or bypass the security restrictions meant to stop them Reddit has asserted that Perplexity AI has: ignored Reddit’s cease-and-desist letter after Reddit caught Perplexity AI red-handed by using the digital equivalent of marked bills (to use the bank robbery analogy) to track Reddit data and confirm that Perplexity AI was using Reddit data acquired through the scraping of Google Search Engine Results Pages In its Complaint, Reddit argued that Congress has already enacted the Digital Millennium Copyright Act to prevent what the Defendants are doing—bypassing technological measures to access copyrighted works. Moreover, Reddit has pointed out that the Defendants know that they do not have permission to do what they are doing, and has claimed the following: All Defendants have violated the Digital Millennium Copyright Act by unlawfully circumventing technological measures The Defendants, SerpApi and Oxylabs, have violated the Digital Millennium Copyright Act by trafficking of technology, product, service, or device for use in circumventing technological measure controlling access The Defendants, SerpApi and Oxylabs, have violated the Digital Millennium Copyright Act by trafficking of technology, product, service, or device for use in circumventing technological measure protecting the right of copyright owner All Defendants have gained access to and scraped Reddit data on a large-scale, unauthorized, and automated basis, including misappropriation of real-time Reddit content and services and the timely content authored by Reddit users, from which Defendants have been unjustly enriched at Reddit’s expense The Defendants, SerpApi and Perplexity AI, 
have engaged in civil conspiracy by entering into one or more contracts or business agreements for the purpose of circumventing the technological control measures described above in order to gain access to Reddit data on a large-scale, unauthorized, and automated basis, including Reddit content and services and the content authored by Reddit users Reddit has suffered harms since it depends on the contributions of Redditors, and its business and reputation have been damaged by the Defendants’ misappropriation of Reddit data To that end, Reddit has requested that the court grant injunctive relief, damages, costs, and any other legal or equitable relief as the court deems just and proper. What Were the Defendants’ Responses? Generally speaking, the Defendants all deny the allegations and plan on defending themselves in court. But it was Perplexity AI that made a statement right on Reddit . Essentially, Perplexity AI noted that “this is a sad example of what happens when public data becomes a big part of a public company’s business model”. More specifically, the AI company stated that the reason it is being sued by Reddit is likely a show of force in Reddit’s training data negotiations with Google and OpenAI. Perplexity AI went on to say that it has not ignored Reddit—whenever anyone asks the company about content licensing, it explains that Perplexity AI, as an application-layer company, does not train AI models on content. In fact, it never has, and thus it is impossible for the company to sign a license agreement to do so. What Perplexity AI does, in fact, is summarize Reddit discussions and cite Reddit threads in answers, just like people share links to posts on Reddit all the time. Perplexity invented citations in AI for two reasons: so that people can verify the accuracy of the AI-generated answers, and so they can follow the citation to learn more and expand their journey of curiosity. In its view, the way Reddit is acting is the opposite of an open internet. 
Lastly, Perplexity AI stated: “In any case, we won’t be extorted, and we won’t help Reddit extort Google, even if they’re our (huge) competitor. Perplexity will play fair, but we won’t cave. And we won’t let bigger companies use us in shell games” What Can We Take From This Development? I think that Reddit’s Chief Legal Officer said it best: “AI companies are locked in an arms race for quality human content - and that pressure has fueled an industrial-scale 'data laundering' economy” This is the second Reddit lawsuit to come up. As I wrote about the first Reddit case against Anthropic , we will need to wait and see what the court decides. These sorts of copyright cases are popping up at a rapid rate in the context of AI, and courts are going to have to set the fair balance between innovation and the rights of companies like Reddit (and its users). Plainly put, the court is going to have to decide what is fair. Is Reddit “extorting” Perplexity AI as has been alleged, or are the Defendants trying to unlawfully access and use Reddit content without permission? And where does this leave the users? Will there be a chilling effect as a result of these kinds of lawsuits—will users shy away from sharing their thoughts and creative works online because they are unsure of how they will be used against them in the future?
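A recurring technical question in cases like this is whether a scraper respected a site's published access policies at all. The conventional, machine-readable signal is a site's robots.txt file, which Python's standard library can evaluate. The sketch below is a minimal illustration; the rules shown are made up for the example and are not Reddit's actual robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Illustrative sketch: checking a published crawl policy before fetching.
# These rules are an assumption for the example; a real crawler would
# download the target site's actual /robots.txt.
rules = """User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

def may_fetch(agent: str, url: str) -> bool:
    """True only if the published crawl policy permits this agent/URL."""
    return parser.can_fetch(agent, url)

print(may_fetch("ExampleScraperBot", "https://example.com/r/anything"))  # False
```

Of course, robots.txt is only an honor-system signal, which is precisely why the complaint focuses on deliberate circumvention rather than mere crawling.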

  • Some of the Main Risks of Creating and Selling a Chatbot | voyAIge strategy

Some of the Main Risks of Creating and Selling a Chatbot How to mitigate those risks before putting them on the market By Christina Catenacci Dec 5, 2024 Key Points As we approach 2025, chatbots are in high demand and businesses are wondering whether they should buy or build their own chatbots, and put them on the market Although there are several risks associated with businesses having an AI chatbot, there are also many ways to mitigate the risks. One example is that we need to accept that chatbots can be wrong (as in the case of hallucinations), so it is necessary to check the AI chatbot for accuracy There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don’t need breaks, chatbots can leverage user and data preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously Chatbots are hot these days, and plenty of businesspeople want to create them and put them on the market as soon as possible. They want to stay competitive. They want to make passive income. They want to ride the AI wave. It is no wonder that 79 percent of the top-performing businesses have already installed some form of conversational marketing tool . Chatbots are in demand, can unlock new revenue streams, and can help to create high returns. The more clients and the more mid-sized businesses that are involved, the higher the monthly revenue can be. In fact, it is possible to search use cases for ready-made chatbots to use in certain industries for certain tasks like lead generation, recruitment, or appointment booking. It is an exciting time to leverage technology to bolster a business’s services. Indeed, according to a 2022 study by ThriveMyWay , 24 percent of enterprises, 15 percent of midsized companies, and 16 percent of small firms utilized chatbot services. 
There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don’t need breaks, chatbots can leverage user and data preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously. They certainly appear helpful for businesses that wish to deliver enhanced customer experiences. Businesses that want to build their own chatbot rather than resell one are advised to take the necessary action steps, including signing up for a chatbot platform, building a demo chatbot for a simple client in a target niche, creating a landing page that shows off the demo bot, reaching out to a small number of prospects, and scheduling consultations. But what are the risks of doing so, and how can we mitigate those risks? Are these risks present even if businesses become chatbot resellers (they buy a ready-made chatbot and then resell it to their clients) instead of builders (they make a bot and sell it to businesses)? Risks Some of the main risks include: Security and data leakage : if there is sensitive third-party or internal company information that is entered into a chatbot, it becomes part of the chatbot’s data model and may be shared with others who ask relevant questions. This could lead to data leakage and violate an organization’s security policies Hallucinations : if there is an inquiry, it is possible that the AI’s answer to that question could be a hallucination. What is this? 
Simply put, it is when the chatbot makes things up (including citations/references) The chatbot could go rogue: if there is a lack of human feedback, or poorly trained systems, the chatbot could provide unexpected, incorrect, or even harmful outputs Disinformation: if chatbots make it easy for bad actors to create false information at a mass scale—cheaply and rapidly—a business could face reputational risks, legal risks, and other damage too. There is more: the same AI chatbot could teach another AI chatbot to spread even more harmful disinformation Bias and Discrimination: if bias is created because of the biased nature of the data on which AI tools are trained, or if users purposefully manipulate AI systems and chatbots to produce unflattering or prejudiced outputs, there could be problematic consequences. Worse, when decisions are made based on the biased information, there is a considerable risk of discriminatory decisions Intellectual Property: if the AI system is trained on enormous amounts of data (including protected data like copyrighted works), the business that uses the data through the chatbot could be violating another party’s intellectual property rights and could end up on the receiving end of an infringement action Privacy and Confidentiality: if the AI system is trained on or is fed any sensitive information about a person, the business could be violating that person’s privacy. Similar to the intellectual property issue, the business could face privacy complaints or actions The chatbot could be insensitive and lack empathy. 
If the chatbot responds to client questions in an atypical or even an emotionally unintelligent manner, there could be resulting issues, including reputational harm or legal actions Mitigating the risks Here are some mitigation strategies: Be cautious and acknowledge the risks before acting Create policies and procedures that outline for employees what is acceptable and unacceptable use of AI in the workplace Accept that chatbots can be wrong, and check the references If you are building and selling a chatbot, use contractual provisions to limit liability Use transparency when dealing with AI use and communicating with clients and employees Review AI outputs and check for bias and discriminatory impacts Create plans to address AI-powered disinformation
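The first mitigation above, guarding against security and data leakage, can be partially automated: screen text for sensitive patterns before it ever reaches a chatbot. The sketch below is a minimal, hypothetical illustration in Python (the patterns and placeholder labels are invented for this example; a production system would rely on a vetted PII-detection or data-loss-prevention library):

```python
import re

# Hypothetical patterns for illustration only; a real deployment would use
# a vetted PII/DLP library with far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Redact before the text is sent to any external chatbot service.
print(redact("Reach me at jane.doe@example.com or 416-555-0199."))
# → Reach me at [EMAIL] or [PHONE].
```

Even with a screen like this in place, the safest policy remains the one stated above: treat anything typed into a third-party chatbot as potentially shared.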

  • HR Automation | voyAIge strategy

    HR Automation Some Issues that Could Arise By Christina Catenacci Dec 2, 2024 Key Points In the future, Ontario employers making publicly advertised job postings will be required to disclose within the posting if they use AI to screen, assess, or select applicants Where a job applicant participates in a video interview and sentiment analysis is used, there could be privacy and human rights issues that are triggered Other jurisdictions, such as New York City, have addressed AI tools in recruitment and hiring very differently than Ontario My co-founder, Tommy Cooke, just wrote an informative article regarding some of the main HR automation trends that have been pervasive in the business world in 2024. When it comes to these trends, it is worth taking a closer look at some of the issues that could become problematic. More specifically, I would like to examine the uses of AI in the area of recruitment and hiring. Whether it is using AI to automate resume screening or using AI to conduct video interview sentiment analysis, there could be some challenges for employers. In particular, employers will need to comply with Ontario’s Employment Standards Act, namely the AI provisions in Bill 149, in the near future. As of a future date to be named by proclamation, employers making publicly advertised job postings will be required to disclose within the posting if they use AI to screen, assess, or select applicants. These employers will also have to retain copies of every publicly advertised job posting (as well as the associated application form) for three years after the post is taken down. In fact, I recently wrote an article on this topic. I wrote about how skeletal the AI provisions were in this bill, and the AI-related definitions were nowhere to be found. 
I compared the requirements in Ontario’s Bill 149 to those in New York City’s (NYC’s) hiring law involving the use of AI and automated decision-making tools. It was striking that NYC required employers to conduct a bias audit before using the tool; post a summary of the results of the bias audit on their websites; notify job candidates and existing employees that the tool would be used to assess them; include instructions for requesting accommodations; and post on their websites about the type and source of data that was used for the tool as well as their data retention policies. There were detailed definitions and fines for noncompliance too. Needless to say, this will be a challenge for provincially regulated employers in Ontario, and it is highly recommended that employers prepare now for these employment law changes. That said, it is understandable that employers may struggle with how to comply with such ambiguous provisions. Additional issues could arise, namely privacy and human rights issues. Let us take the example of the video interview where sentiment analysis is conducted. This is troubling from a privacy perspective—job applicants may not be comfortable participating in video interviews where their facial expressions and gestures are closely scrutinized with intrusive software that enables AI tools to analyze their sentiments. Moreover, employees who are up for a promotion may not appreciate video analytics of their video interview performance being retained for an unknown period of time and accessible to an unknown number of actors in the workplace. Because job applicants are in a vulnerable position, they may not feel like they can object to the use of these AI hiring tools. In addition to privacy concerns, human rights issues could surface. The video interview could reveal various aspects of a person that may fall under any of the prohibited grounds of discrimination. 
For instance, section 5 of the Ontario Human Rights Code prohibits discrimination in employment on the grounds of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability. It is possible for an AI tool to be biased (unintentionally, but biased nonetheless): it may favour younger candidates, give them higher interview scores, and ultimately, inadvertently, discriminate on the ground of age. Since it may not be possible to detect these biased decisions immediately, some job applicants may simply miss out on an employment opportunity due to an ageist AI tool. It will be interesting to see whether other jurisdictions come up with more extensive provisions to address the use of AI in recruitment and hiring. In Ontario, it is questionable whether we will see additional detail to help employers comply with the requirements.
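The bias audits NYC requires centre on a fairly simple calculation. As a rough sketch (the numbers below are invented, and this is a simplified illustration rather than the law’s exact methodology), each group’s selection rate is divided by the most-favoured group’s rate; U.S. employment guidance commonly treats a ratio below four-fifths as a red flag:

```python
def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """For each group, compute its selection rate relative to the
    most-favoured group's rate (an 'impact ratio')."""
    rates = {g: selected / assessed for g, (selected, assessed) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented screening outcomes: (candidates advanced, candidates assessed).
groups = {"under_40": (48, 120), "40_and_over": (21, 105)}
for group, ratio in impact_ratios(groups).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

With these invented numbers, the older group’s ratio works out to 0.50, well below the 0.8 threshold, which is exactly the kind of disparity a mandated audit is designed to surface before the tool is ever used on real candidates.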

  • Canada’s AI Brain Drain | voyAIge strategy

    Canada’s AI Brain Drain A Silent Crisis for Canadian Business By Tommy Cooke, fueled by curiosity and ethically sourced coffee Oct 17, 2025 Key Points: Canada’s AI brain drain threatens national competitiveness by eroding the local talent base essential for innovation and execution Without retaining AI expertise, Canada risks becoming dependent on foreign ecosystems, which undermines sovereignty and commercialization potential Business leaders must treat AI talent development as a core strategy—Canadians need to build, invest, and upskill locally to remain competitive Canada is starting to punch above its weight in AI. With world-class research hubs in Toronto (Vector Institute), Montreal (Mila), and Edmonton (Amii), and visionaries such as Geoffrey Hinton and Yoshua Bengio driving Canada’s AI momentum, Canada is increasingly recognizable around the globe as a hotspot for innovation. Alas, as the global AI boom accelerates, Canada is at risk of losing that advantage through an exodus of talent. The phenomenon, often dubbed the “AI brain drain”, refers to top researchers, engineers, and startup founders relocating to (or aligning remotely with) U.S. or global tech hubs as opposed to building at home. For a business leader in Canada who is currently considering AI, this trend is one to keep an eye on because the stakes are high: how easily one can recruit, retain, and deploy AI talent will increasingly define which firms win or lose over the next half decade and beyond. Why Business Leaders Should Pay Attention Talent leaving, or taking jobs globally, has multiple implications for AI-driven innovation and the extent to which that talent can make an impact for Canadian businesses. Let’s take a closer look at three of them: First, the absence of talent is an AI execution bottleneck. In many industries, the difference between AI as a novelty versus a value-creator lies in execution, not algorithms. 
That execution depends on access to specialized engineers, ML researchers, operations talent, data scientists, hybrid roles, and so on. If a tech company plans on adopting or building AI, it will have to compete not only with other Canadian firms, but also with global tech giants offering premium compensation, equity, and prestige. That competitive pressure already manifests in Canada’s tech sector, where many Canadian AI founders and researchers have relocated or anchored operations in Silicon Valley or U.S. hubs despite having roots here. Losing that talent, or failing to attract it, translates to longer timelines, lower quality, higher costs, or outright stalling of AI initiatives. Second, dependency on external ecosystems weakens innovation sovereignty. Relying on remote work or foreign talent is a short-term fix. If a company’s AI strategy depends on overseas labs, it risks instability from geopolitical shifts, visa regimes, cross-border regulation, or simple churn in remote teams. Canada’s recent announcement of a $2 billion+ Canadian AI Sovereign Compute Strategy is a response to such vulnerabilities: the federal government wants Canada to own its compute infrastructure rather than remain tethered to foreign cloud or GPU suppliers. Unfortunately, computing power alone is simply not enough. To leverage it fully, Canada needs people who know how to harness it. Without a base of AI talent anchored in Canada, compute investments risk underutilization, and firms may be forced to find support beyond the border. Moreover, it is important to keep in mind that investments in AI compute are considerably larger in other jurisdictions such as the United States; even if Canadian AI founders want to stay in Canada to use the new AI infrastructure, the Canadian compute will pale in comparison to the sorts of opportunities that the Americans are offering. Therefore, it will also be challenging to convince founders to take advantage of Canada’s AI compute. 
Third, the “imagination gap” will widen. Rather ironically, Canada lags many peers in actual AI adoption. Despite the country’s global leadership in AI ideation and innovation, only 12 percent of Canadian firms have broadly integrated AI into operations or products, putting the country near the bottom of OECD adoption rates. Some of this gap stems from cultural and literacy issues. But the primary issue is structural; if Canadian firms can’t access or retain top talent, pilots stay pilots, and experimentation never scales. The brain drain heightens that barrier. In effect, the Canadian market becomes a slow adopter, while global firms dominate the frontier. Catalysts of the Brain Drain It’s important to understand where the pressure comes from if Canada is to identify countermeasures. The pressures are numerous and complicated, but let’s quickly identify the most critical: Compensation and equity. U.S. tech firms routinely offer higher absolute compensation and more liquid equity upside Prestige. Many researchers seek the cachet of working at OpenAI, DeepMind, or leading U.S. AI labs Scale and data access. Larger U.S. and international firms have access to vast user bases and data that Canada-based projects often can’t match Funding scale. Global venture capital and public markets remain deeper and more aggressive than those in Canada Remote work. Many Canadian researchers don’t physically relocate now, but instead work remotely for international firms while remaining in Canada What Canadian Business Leaders Should Do About the AI Brain Drain, Right Now If you are serious about embedding AI in your organization, there are crucial steps you can take right now to join other business leaders seeking to alter the course of the brain drain. For starters, Canadian business owners need to invest in AI anchors. More specifically, it is important to create internal AI competence centers or labs rather than one-off AI projects. 
It is also important to provide mandates, budgets, visibility, and career ladders. Ask yourself, What would talent want or need? It is necessary to attract Canadian talent to the centers and labs that have been created by making the opportunity genuinely interesting. It’s also important to offer compelling equity and long-term incentives. It’s expensive, but if the talent economy in Canada is to bolster itself, employers need to be thinking more strategically about matching or emulating international equity models, grants, and research budgets. Engineers want to feel that they can build something significant—companies should do what they can to build the sandboxes that these engineers want to play in. Furthermore, companies are encouraged to partner with local colleges and universities so that they can align their interests with those of Canada’s top AI innovators. Develop interesting ways to fund cross-appointments, joint labs, or even industrial research chairs. Companies may also wish to ask themselves, How skilled is our existing talent? If companies are not sure, they would benefit from upskilling. To begin, companies can drive internal reskilling and establish AI-centric learning paths. That is, engaging workers in AI 101 learning sessions can help non-technical staff understand AI itself. Lastly, and perhaps most importantly, frame AI strategy as core business strategy—not a side project. AI disruption is old news. The ship has already set sail. Every industry is transforming. If companies are adopting AI now or are planning to do so in the near future, it is best to think strategically. For instance, ask, How might AI drive our business strategy as opposed to merely summarizing emails? By making AI more self-evidently valuable in terms of business growth, companies are more likely to attract talent. 
Delaying Investment in AI Talent is a Strategic Risk Canada’s AI brain drain may still feel distant to many executives, but competitive edge erodes over a long lead time. If Canadian firms don’t move to secure talent now, they’ll find themselves significantly behind their competitors. For any business in Canada that is eyeing AI, the choice is not whether to care about the brain drain. It’s whether to treat it as a strategic pillar.
