Search Results
- AI & The Future of Work Report | voyAIge strategy
Analyses & Recommendations on AI & The Future of Work. The future of work has arrived. AI is transforming industries at a pace that is faster than most leaders expected. This in-depth report distills global research into clear insights for: C-suite executives who are navigating AI disruption; employers and HR who are at the forefront of dealing with layoffs, mass terminations, and hiring freezes; policymakers who are trying to address these technological changes and influence legal and policy direction; and governments who are trying to make positive change for society. Inside our report, you'll learn: which jobs and skills are most at risk and where new growth is emerging as a result of AI; the economic, psychological, and sociological implications of AI disruption; the current legal and policy landscape in the United States, Canada, and the European Union; and the steps that leaders can take to balance innovation, compliance, and employee well-being. Why read it? AI is not a technology problem—it is a leadership test. Learn how to manage disruption proactively and seize new opportunities to grow with AI. Sign up to download the full report and join our monthly newsletter.
- Managed Services | voyAIge strategy
Data and AI Leadership - without the overhead. Managed Data Governance & AI Governance Services: Expert Leadership, Strategy, and Support at a Fixed Monthly Cost. How our Managed Services Help: We structure your journey. Whether you're just getting started or are already deploying tools, we help you assess readiness, define goals, and create a strategy that fits your organization's priorities. We build the right foundations. As experts in the law, policy, and ethics, we develop the nuanced solutions you need to grow safely and successfully, without stifling innovation. We stay involved. As your challenges and opportunities change, we stay in touch with you to nimbly evaluate new use cases, manage compliance, and ensure your AI remains effective and aligned. Common AI challenges we address: fear of AI, inappropriate use of AI, no AI leadership, no AI strategy, and too many questions. Book a no-strings-attached consultation session to see if our Managed AI Services can help you implement and use AI without the cost or complexity of doing it yourself. We will respond within 24 hours.
- Tesla Class Action to Move Ahead | voyAIge strategy
Tesla Class Action to Move Ahead Advanced Driver Assistance Systems Litigation Proceeds in California By Christina Catenacci, human writer Aug 22, 2025 Key Points: On August 18, 2025, a United States District Judge granted a motion for class certification and appointed a representative plaintiff of the certified classes The court narrowed the classes and considered whether it was appropriate to hear the plaintiffs’ claims together in one class action The lesson here is that businesses need to be careful about what kinds of statements they make about their technology’s capabilities, or else they could face litigation from many plaintiffs, potentially leading to a class action On August 18, 2025, a United States District Judge, Rita F. Lin, granted a motion for class certification and appointed a representative plaintiff of the certified classes. In addition, she appointed class counsel and set a pathway for next steps leading to the case management conference. Let this story serve as a warning for businesses—be careful about what statements you make about the capabilities of your technology—whether it is on Twitter, YouTube, or any other channel. If you have a goal that you are striving to achieve, then say that. If you are promoting a new product with extensive capabilities, then do that. Just try not to make claims that are untrue, unless you want to be on the hook for those misleading misrepresentations. What is the Class Action About? Tesla did not sell its vehicles through third parties and did not engage in traditional marketing or advertising; in fact, the only way one could buy a Tesla vehicle was through its website. Tesla reached consumers also through its own YouTube channel, Instagram account, press conferences, sales events, marketing newsletters, and Elon Musk’s personal Twitter account. Additionally, customers could buy optional technology packages that were designed to enable autonomous vehicle operation. For example, customers could buy the “Enhanced Autopilot Package (EAP)” that had features such as Autopark, Dumb Summon, Actually Smart Summon, and Navigate on Autopilot (highway). Also, the “Full Self-Driving (FSD) Package” had all of the Enhanced Autopilot features, plus Stop Sign and Traffic Signal recognition and Autosteer on Streets. The EAP was offered as a stand-alone package only until the first quarter of 2019, and again for a limited period from the second quarter of 2022 through the second quarter of 2024; at other times, these features were only available as part of the FSD Package. Essentially, claimants were arguing that Tesla Inc. (Tesla) made misleading statements about the full self-driving capability of its vehicles. The plaintiffs alleged that they relied on two types of misrepresentations that Tesla made: that Tesla vehicles were equipped with the hardware necessary for full self-driving capability (“Hardware Statement”) that a Tesla vehicle would be able to drive itself across the country within the following year (“Cross-Country Statement”) When it came to the Hardware Statement, in October, 2016, Musk said at a press conference that second-generation autonomous driving hardware would have hardware necessary for Level 5 Autonomy (“literally meaning hardware capable of full self-driving for driver-less capability”). These statements were also on Tesla’s website and Tesla’s November 2016 newsletter. There was even a Tesla blog post dated October 2016 and a Tesla quarterly earnings call in May 2017 containing these statements. 
Musk even made comments that the self-driving hardware would enable full self-driving capability at a safety level that was greater than a human driver. Since 2016, the hardware had been updated to version 3.0 and version 4.0—these upgrades had a more powerful computer and cameras. In a 2024 earnings call, Musk stated that a further hardware upgrade would likely be necessary for customers who bought FSD with prior hardware configurations: “I mean, I think the honest answer is that we’re going to have to upgrade people’s hardware 3 computer for those that have bought full self driving. And that is the honest answer. And that’s going to be painful and difficult, but we’ll get it done. Now I’m kind of glad that not that many people bought the FSD package” When it came to the Cross-Country Statement, Musk stated at a 2016 press conference that people would be able to go from LA to New York—going from home in LA to dropping someone off in Times Square and then having the car park itself, without the need for a single touch including the charger. Musk posted versions of this claim on his personal Twitter account three times. In January 2016, Musk tweeted that “in 2 years, summon should work anywhere connected by land & not blocked by borders, eg you’re in LA and the car is in NY”. When asked for an update on these claims in May, 2017, Musk said that the demo was still on for the end of the year, and things were “just software limited”. And in May, 2019, when asked whether there were still plans to drive from NYC to LA on full autopilot, Musk said that he could have gamed this type of journey the previous year, but when he did it in 2019, everyone with Tesla Full Self-Driving would be able to do it too. That 2019 tweet generated about 2,000 engagements compared to 300 engagements following the 2016 tweet. In October, 2016, Tesla showed a video where a Tesla vehicle was driving autonomously (it is still on the Tesla site) and a similar video was shown on YouTube. Interestingly, Tesla does not dispute that any of the statements or videos were made—it simply states that the FSD could not be obtained until the completion of validation and regulatory approval. However, the plaintiff presented evidence that Tesla had not applied for regulatory approval to deploy a Society of Automotive Engineers Level 3 or higher vehicle in California, which was a necessary step for approval of a full self-driving vehicle. In terms of the technical claims, the plaintiffs alleged that Tesla violated California’s: Unfair Competition Law Consumer Legal Remedies Act False Advertising Law In addition, they alleged that Tesla engaged in fraud, negligent misrepresentation, and negligence. As a consequence, they filed a motion for class certification so that they could proceed to the next stage of litigation. What did the District Judge Decide? The judge had to go through the main elements to determine whether she could certify the class in the class action. With respect to the proposed class representative, the main plaintiff paid Tesla $5,000 for EAP and $3,000 for the FSD Packages for his new Tesla Model S car. He alleged that he purchased these packages because he was misled by the Hardware Statement and the Cross-Country Statement. He saw these things on the Tesla website in October, 2016 and in a Tesla newsletter sent in November, 2016. In addition, he read statements that led him to believe that a Tesla would soon drive across the country, and that self-driving software would be available in the next year or two. 
He claimed that he discovered the alleged fraud in April, 2022. In fact, five customers (including the above plaintiff) brought separate lawsuits against Tesla in September, 2022. They alleged similar things and accused Tesla of violating warranties and consumer protection statutes, and of engaging in fraud, negligence, and negligent misrepresentation. The court consolidated the cases, dismissed all warranty claims, and permitted all the plaintiffs' fraud, negligence, and statutory claims to proceed to the extent that they were premised on the Hardware Statement and Cross-Country Statement. It is worth mentioning that some plaintiffs opted out of Tesla's arbitration agreement. Subsequently, the court noted that the class certification was a two-step process: first, the plaintiff had to show that four requirements were met, namely numerosity (the class was so numerous that joinder of all members was impractical), commonality (there were questions of law or fact common to the class), typicality (the claims and defenses were typical of the claims and defenses of the class), and adequacy (the representative parties would fairly and adequately protect the interests of the class); second, the plaintiff had to show that one of the bases for certification was met, such as predominance and superiority (questions of law or fact common to class members predominated over any questions affecting only individual members, and a class action was superior to other available methods for fairly and efficiently adjudicating the controversy). The judge concluded the following: There were some minor differences with the proposed classes. The judge certified two classes: (1) a California arbitration opt-out class where customers bought or leased a Tesla vehicle and bought the FSD package between May 2017 and July 2024, and opted out of Tesla's arbitration agreement; (2) a California pre-arbitration class where customers bought or leased a Tesla vehicle and paid for the FSD package from October 2016 to May 2017, and currently reside in California. Neither class dealt with the EAP, and both classes were narrowed slightly. Tesla did not contest that numerosity was met. The plaintiff was able to show that commonality and predominance were met; for the purposes of class certification, the claims were materially indistinguishable and could be analyzed together. The plaintiff could show that the Hardware Statement would be material to an FSD purchaser; however, the plaintiff could not show common exposure to the Cross-Country Statement. The plaintiff could show that the issue of whether Tesla vehicles were equipped with hardware sufficient for Full Self-Driving capability was subject to common proof. The plaintiff was able to show that damages could be established through common proof; under California law, the proper measure of restitution was the difference between what the plaintiff paid and the value of what the plaintiff received. Although Tesla argued that many claims were subject to the statute of limitations and required separate examination of each plaintiff's situation, the court disagreed and said that this was not fatal to class certification when there was a sufficient nucleus of common questions. The requirement of adequacy was met. Superiority was also established.
The economies of scale made it desirable to concentrate all of the plaintiffs’ claims in one forum, and this case was manageable as a class action The court certified a narrower class, namely all members of the California Arbitration Opt-Out class and California Pre-Arbitration class who had stated that they wanted to purchase or subscribe to FSD in the future but could not rely on the product’s future advertising or labelling The plaintiff showed that he had standing to seek injunctive relief, since he had provided the general contours of an injunction that could be given greater substance at a later stage in the case Accordingly, all elements were met, and the class certification was granted, subject to the modified class definitions. Within 14 days, the plaintiff had to amend the class definition so that the parties could move on to the case management conference. The court also appointed the main plaintiff as the representative plaintiff for the class, and appointed class counsel. What can we Take from This Development? This was simply a motion to certify the class action. The judge went through the main elements and confirmed that the class action could move forward. The examination of each of the components in the test had to do with whether it was more effective to hear the claims together in one class action instead of addressing each claim separately in the court. This was not a decision that confirmed Tesla engaged in unfair competition, false advertising, negligent misrepresentation, or negligence. This was a preliminary decision that allowed the class action to proceed. Previous Next
- When Technology Stops Amplifying Artists and Starts Replacing Them | voyAIge strategy
When Technology Stops Amplifying Artists and Starts Replacing Them AI-generated creativity forces us to confront a new cultural crossroads: if machines can make the art, what remains uniquely human in the act of creating? This matters to any business owner. By Tommy Cooke, powered by medium roast espresso Key Points: 1. AI isn't just automating routine tasks, it's beginning to replace human creativity where scale and predictability dominate 2. The future of work hinges on what only humans can bring: meaning, perspective, imperfection, and authentic connection 3. If efficiency becomes our only compass, we risk building a world rich in content but poor in humanity When I was in my 20s, I was the lead guitarist in a regularly gigging and recording rock band. At our busiest, we were performing four to six nights a month while being full-time college students and holding down part-time jobs. We produced an album and two EPs along the way. When you are working that much at your craft while having an incredibly full plate, you always hope for a break. Ours came when our music was introduced to a major record executive. He enjoyed the album and thought one of our tunes would be an instant radio hit. He suggested that we work closely with a well-known hitmaker to punch out more radio-friendly versions of our album. But, in the meantime, there was a catch: to be on the label, we were required to have a certain number of followers on MySpace. We fell exponentially short of that number. It would take us years to achieve that. To say that we were shocked would be an understatement. That was back in the late 2000s. Today, being a musician is far more difficult. Not only are targets harder to reach as a bare minimum entry point to talking to labels, but now AI has entered the scene. In a recent Global News article, the famous music journalist and historian Alan Cross speculated about a future that may not be far away: record producers have every impetus to remove the artist from art. In other words, AI-driven music could forever change the way people consume music. As AI-generated music becomes mainstream, Mr. Cross poses a worrying question: what happens to the human in the creative process? What does this shift teach us about AI, authenticity, and human purpose? At first glance, Alan's article is a story about singers, streaming royalties, and rights-owners. But for business leaders, policy makers, and organisational strategists, the underlying theme is deeper: as AI moves from tool to creator, the boundary between human value and machine delivery is shifting. The Discomfort of AI As painful as it is for me to admit, in Alan's dystopic future vision where AI drives content creation, musicians are just not that unique. They're simply the first creative class to experience what economists have been warning about for years: automation begins where scale and predictability create the highest return. Let me give you an example. Pop music is formulaic. How many times have you seen this video or one exactly like it? It's a routine that has been done time and again, ad nauseam. But they make a fascinating point. So, if you haven't seen them, take a minute to watch. It's very revealing in terms of how much the structure of popular music is replicated over, and over, and over again. This same thing can be said of social media personas and design principles, too.
My point is that the more quantifiable and structure-driven human content continues to become, the more likely machines are to inhabit and reproduce it. For years, people comforted themselves with a hopeful refrain: AI will take the routine tasks so we can focus on creative and strategic work. This is why Alan's vision is alarming. It presents us with an existential tension: one where the gap between romantic expectations of technology and its actual outcomes forces us to question what happens when efficiency and extraction are prioritized over meaning. So, Alan asks us to stop and recognize a crossroads in front of us, and he's asking us to do so by prompting ourselves with a critical question: what do we want human experience to mean when machines can perform the visible parts of it? This question will have different implications for everyone, and they matter in non-music contexts, too. What AI as a Creator means for People and Organisations While music is the most visible example, parallel dynamics are already unfolding in marketing, design, customer service, legal drafting, and more. Alan isn't merely presenting a vision anymore. He's offering a critical narrative, and it carries three key lessons: Human value must be re-defined. When an algorithm can generate content at scale, cost-effectively, and without human pain-points (sleep, illness, ego, negotiation), the "value" of human labour shifts. It's no longer just about whether a person can do the job, but also (a) what unique stance the person brings, and (b) how that person shifts from being a deliverer to a designer of meaning. In other words, business leaders that treat humans as input-machines are likely to experience high turnover. If instead they ask: "What does only a human bring?", they protect and amplify their human capital. Authenticity and trust become strategic assets. Alan's article raises a paradox: people found AI-generated music more arousing, yet found human-composed music more familiar. That suggests a gap between novelty and connection. In a world of AI production, human stories, human flaws, and human context become competitive differentiators. Organizations that lean into their human-identity, align culture, ethics, and narrative, and resist the "machine everything" push will build stronger trust and attachment. Strategy must account for the human-machine continuum, not just the machine. Leaders often frame AI as "how do we use this tool to generate faster/cheaper?" But the music story shows the existential side: "What if the machine becomes creator?" and "What if our work becomes obsolete?" The strategic imperative is two-fold: (a) define how humans and machines co-create value, and (b) define safeguards. What struck me, reading Cross' article and thinking back to that moment in my twenties when a gatekeeper told a young band we needed tens of thousands of invisible followers to be worthy, is that technology has always mediated who gets seen. What's different now is the scale: we're not just gatekeeping humans—we're replacing them in the system. And that invites a stubborn but necessary question: What role do we want people to play in a world of perfect synthesis and endless content? If we allow efficiency alone to steer the ship, we will build a culture optimized for frictionless consumption rather than lived experience; a world full of sound, but not necessarily any music. The point isn't to fear AI or resist progress. It's to remember what makes human work meaningful in the first place.
Creativity isn't merely output. It's the accumulated weight of effort, failure, identity, memory, taste, temperament, private doubt, and public courage. It's the quiet, unglamorous process of becoming someone capable of expression. So, as AI becomes a collaborator, producer, and in some cases a creator, our responsibility isn't to compete with it on volume or speed but to double down on what only people can offer: perspective, dissonance, care, imperfection, and soul. Not to mention community.
- New York Times Sues OpenAI and Microsoft for Copyright Infringement | voyAIge strategy
New York Times Sues OpenAI and Microsoft for Copyright Infringement The NYTimes lawsuit has the potential to significantly shape copyright and AI policy By Christina Catenacci Aug 2, 2024 Key Points: The Times has sued both OpenAI and Microsoft, alleging copyright infringement, trademark dilution, and unfair competition by misappropriation OpenAI has responded to the Complaint on its website stating, "We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit" A decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools On December 27, 2023, the New York Times (The Times) sued OpenAI and Microsoft (Defendants) for copyright infringement in the United States District Court in New York. In its Complaint, The Times explained that its work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure their journalism meets the highest standards of accuracy and fairness. In fact, The Times has evolved into a diversified multi-media company with readers, listeners, and viewers around the globe and more than 10 million subscribers. But according to The Times, the joint efforts of the Defendants have harmed The Times, as seen in lost advertising revenue and fewer subscriptions, to name a few. The Times alleges that OpenAI unlawfully used its works to create artificial intelligence products. The Times argued in its Complaint that unauthorized copying of The Times works without payment to train Large Language Models (LLMs) is a substitutive use that is "not justified by any transformative purpose". The Times has sued the Defendants as follows: Copyright infringement against all Defendants: by building training datasets containing millions of copies of The Times works (including by scraping copyrighted works from The Times's websites and reproducing them from third-party datasets), the Defendants have directly infringed The Times's exclusive rights in its copyrighted works. Also, by storing, processing, and reproducing the training datasets containing millions of copies of The Times works to train the GPT models on Microsoft's supercomputing platform, Microsoft and the OpenAI Defendants have jointly directly infringed The Times's exclusive rights in its copyrighted works. Vicarious copyright infringement against Microsoft and OpenAI: Microsoft controlled, directed, and profited from the infringement perpetrated by the OpenAI Defendants. Microsoft controls and directs the supercomputing platform used to store, process, and reproduce the training datasets containing millions of The Times works, the GPT models, and OpenAI's ChatGPT offerings. The Times alleges that Microsoft profited from the infringement perpetrated by the OpenAI Defendants by incorporating the infringing GPT models trained on The Times works into its own product offerings, including Bing Chat. Contributory copyright infringement against Microsoft: Microsoft materially contributed to and directly assisted in the direct infringement that is attributable to the OpenAI Defendants.
The Times alleged that Microsoft provided the supercomputing infrastructure and directly assisted the OpenAI Defendants in: building training datasets containing millions of copies of Times Works; storing, processing, and reproducing the training datasets containing millions of copies of The Times works used to train the GPT models; providing the computing resources to host, operate, and commercialize the GPT models and GenAI products; and providing the Browse with Bing plug-in to facilitate infringement and generate infringing output. The Times said that Microsoft was fully aware of the infringement and OpenAI’s capabilities regarding ChatGPT-based products Digital Millennium Copyright Act–Removal of Copyright Management Information against all Defendants : The Times included several forms of copyright-management information in each of The Times’s infringed works, including: copyright notice, title and other identifying information, terms and conditions of use, and identifying numbers or symbols referring to the copyright-management information. However, The Times claimed that without The Times’s authority, the Defendants copied The Times’s works and used them as training data for their GenAI models. The Times believed that the Defendants removed The Times’s copyright-management information in building the training datasets containing millions of copies of The Times works, including removing The Times’s copyright-management information from Times Works that were scraped directly from The Times’s websites and removing The Times’s copyright-management information from The Times works reproduced from third-party datasets. Moreover, the Times asserted that the Defendants created copies and derivative works based on The Times’s works, and by distributing these works without their copyright-management information, the Defendants violated the Copyright Act . Unfair competition by misappropriation against all Defendants : by offering content that is created by GenAI but is the same or similar to content published by The Times, the Defendants’ GPT models directly compete with The Times content. The Defendants’ use of The Times content encoded within models and live Times content processed by models produces outputs that usurp specific commercial opportunities of The Times. In addition to copying The Times’ content, it altered the content by removing links to the products, thereby depriving The Times of the opportunity to receive referral revenue and appropriating that opportunity for Defendants. The Times now competes for traffic and has lost advertising and affiliate referral revenue Trademark dilution against all Defendants : in addition, The Times has registered several trademarks and argued that the Defendants’ unauthorized use of The Times’s marks on lower quality and inaccurate writing dilutes the quality of The Times’s trademarks by tarnishment. The Times asserts that the Defendants are fully aware that their GPT-based products produce inaccurate content that is falsely attributed to The Times, and yet continue to profit commercially from creating and attributing inaccurate content to The Times. The Defendant’s unauthorized use of The Times’s trademarks has resulted in several harms including damage to reputation for accuracy, originality, and quality, which has and will continue to cause it economic loss. The Times has asked for statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity. 
Additionally, The Times has requested that there be a jury trial. What can we take from this development? This has the makings of a landmark copyright case and can go a long way to shape copyright and AI policy for years to come. In fact, some have referred to this case as, “The biggest IP case ever”. In terms of a response to the Complaint, OpenAI has made a public statement in January 2024 on its website stating, “We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit”. The company set out its position as follows: “Our position can be summed up in these four points, which we flesh out below: We collaborate with news organizations and are creating new opportunities Training is fair use, but we provide an opt-out because it’s the right thing to do “Regurgitation” is a rare bug that we are working to drive to zero The New York Times is not telling the full story” Interestingly, OpenAI has stated that training AI models using publicly available internet materials is “fair use”, as supported by long-standing and widely accepted precedents. It stated, “We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness”. However, the Defendants in this case may run into problems with this argument because The Times’ copyrighted works are behind a paywall. The Defendants are familiar with what this means—it is necessary to pay in order to read (with subscriptions) or use (with proper licensing). It is concerning that OpenAI refers to regurgitation (word-for-word memorization and presentation of content) as a bug that they are working on, but then says, “Because models learn from the enormous aggregate of human knowledge, any one sector—including news—is a tiny slice of overall training data, and any single data source—including The New York Times—is not significant for the model’s intended learning”. Essentially, OpenAI has downplayed the role that The Times’ works play in the training process, yet not addressing The Times’ arguments that the Defendants have ingested millions of copyrighted works without consent or compensation and have been outputting The Times works practically in their entirety. Another point of interest is that, in the Complaint, The Times stated that it reached out to OpenAI in order to build a partnership, but the negotiations never resulted in a resolution. However, OpenAI has stated in its website post that the discussions with The Times had appeared to be progressing constructively. It said that the negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, where The Times would gain a new way to connect with their existing and new readers, and their users would gain access to The Times reporting. It stated, “We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us”. Clearly, there are two different sides to this story, and the court will need to sort out what took place in order to make a determination. Ultimately, this case will have a significant impact on the relationship between generative AI and copyright law, particularly with respect to fair use . 
In particular, a decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools, such as OpenAI's ChatGPT and Microsoft's Bing Chat (Copilot), both of which are built on top of OpenAI's GPT model.
- Contact | voyAIge strategy
How to get in touch with our AI strategy and governance experts. Contact Us. For questions, inquiries, or requests that require a personal response, we will respond within 48 hours. If you are submitting a request for a quote about our products or services, please use this form here. If you are requesting a proposal or bid, please use this form here.
- Trump Signs Executive Order on AI | voyAIge strategy
Trump Signs Executive Order on AI A Unilateral Resurgence of the AI Moratorium By Christina Catenacci, human writer Dec 15, 2025 Key Points Notwithstanding the fact that the AI moratorium was completely rejected in the recent Vote-a-Rama involving the Big Beautiful Bill, Trump has released an Executive Order on AI State Governors have reacted negatively to Trump’s announcement, and have insisted that this would constitute federal government overreach In line with Trump’s AI Action Plan, a consequence of continuing to have or enact AI state laws could mean a withdrawal of federal funding from noncompliant states You may recall that I wrote about the lengthy Vote-a-Rama that took place in the early morning hours regarding the Big Beautiful Bill, which contained a 10-year moratorium on state-enacted and enforced AI laws. Ultimately, on July 4, 2025, the Big Beautiful Bill was signed by the President and became law—but there was no inclusion of the 10-year AI regulation moratorium for states. More specifically, it was ultimately decided that the moratorium provision was to be removed entirely. As a refresher, the moratorium stipulated that no state or political subdivision thereof would be able to enforce, during a 10-year period, any law or regulation of that state or a political subdivision thereof limiting, restricting, or otherwise regulating AI models, AI systems, or automated decision systems entered into interstate commerce. Notwithstanding the fact that the AI moratorium was completely rejected, Trump has signed an Executive Order on December 11, 2025. Thus, there has been a unilateral resurgence of the AI Moratorium. There Were Hints of a 10-Year AI Moratorium Apparently, Trump recently confirmed that he planned to sign an executive order that pre-empted AI regulations at the state level. Where did he announce this decision? On Truth Social, of course. His post stated: “There must be only One Rulebook if we are going to continue to lead in AI… We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS…You can’t expect a company to get 50 Approvals every time they want to do something…AI WILL BE DESTROYED IN ITS INFANCY!” Despite what academics, safety groups, and state lawmakers on both sides of the aisle have said, Trump remained adamant that this deregulation push would happen. More specifically, he believed that there should be one unified AI law in the United States, and the individual states were interfering with that goal. How Have State Governors Reacted to this Tweet? Simply put, not well. 
For example, Florida Governor Ron DeSantis referred to federal government overreach in his recent response on X: "Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech and will prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights and data center intrusions on power/water resources" Similarly, Governor Gavin Newsom has called Trump's attempt to restrict states from regulating AI "disgusting", and has stated in a social media post: "This moratorium threatens to defund states like California with strong laws against #AI-generated child porn…But no surprise given Trump's years palling around with Jeffrey Epstein." Newsom's spokesperson stated: "California did not become the innovation hub of the nation by turning its back on new technology — and we can help ensure that future growth happens responsibly and safely" Along the same lines, several organizations, including tech employee unions and other labor groups, tech safety and consumer protection nonprofits, and educational institutions, also signed letters to Congress opposing the idea of blocking state AI regulations and raising alarms about AI safety risks. In fact, many individuals have expressed a zero-sum view that it is innovation (AI deregulation) versus AI safety and accountability. It is either the tech companies in Silicon Valley, or it is red tape and bottlenecks. For instance, Sacha Haworth, Executive Director of The Tech Oversight Project, stated: "We're in a fight to determine who will benefit from AI: Big Tech CEOs or the American people… We cannot afford to spend the next decade with Big Tech in the driver's seat, steering us toward massive job losses, surveillance pricing algorithms that jack up the cost of living, and data centers that are skyrocketing home energy bills"
That Policy Notice must provide that states with onerous AI laws are ineligible for non-deployment funds, to the maximum extent allowed by Federal law. The Policy Notice must also describe how a fragmented State regulatory landscape for AI threatens to undermine BEAD-funded deployments, the growth of AI applications reliant on high-speed networks, and BEAD’s mission of delivering universal, high-speed connectivity Executive departments and agencies (agencies) must assess their discretionary grant programs in consultation with the Special Advisor for AI and Crypto and determine whether agencies may condition such grants on states either not enacting an AI law that conflicts with the policy, or for those states that have enacted such laws, on those states entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period in which it receives the discretionary funding The Chairman of the Federal Communications Commission must, in consultation with the Special Advisor for AI and Crypto, initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that pre-empts conflicting state laws Within 90 days of the date of this order, the Chairman of the Federal Trade Commission must, in consultation with the Special Advisor for AI and Crypto, issue a policy statement on the application of the Federal Trade Commission Act’s ( FTCA ) prohibition on unfair and deceptive acts or practices under 15 U.S.C. 45 to AI models. That policy statement must explain the circumstances under which state laws that require alterations to the truthful outputs of AI models are pre-empted by the FTCA’s prohibition on engaging in deceptive acts or practices affecting commerce The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI that pre-empts state AI laws that conflict with the policy set forth in this order However, the legislative recommendation above must not propose pre-empting otherwise lawful state AI laws relating to: child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; state government procurement and use of AI; and other topics as must be determined What are the Consequences of this Development? Whether Trump believes it or not, there are serious risks associated with AI, which need to be mitigated—not ignored. It may be news to some, but there can be simultaneous innovation and responsibility and accountability in the form of appropriate AI governance. It does not need to be one or the other. Put another way, it is not the tech companies against individuals in society. It is not that zero sum. We can have both simultaneously: we can support innovation and protect individuals in society. In particular, we can protect against AI hallucinations, AI chatbots that encourage self-harm, or exposing children to inappropriate sexualized content and other societal harms—with responsible AI state regulation (there is no federal AI law, and one is not expected to be made in the near future). States know their needs best, and it is troubling that Trump aims to silence them and prevent them from legislating in their own states. 
It will be no surprise when individual progressive states with substantive AI laws launch complaints against the federal administration for raising the possibility of a federal withdrawal of funding if states do not comply with the unilateral Executive Order. In fact, we may see court cases commenced as soon as possible—well before any action from the Task Force pursuant to the Executive Order. It is troubling that the Executive Order frames the issue of AI innovation as a zero-sum game: “To win, United States AI companies must be free to innovate without cumbersome regulation” Previous Next
- AI for Inventory Management | voyAIge strategy
AI for Inventory Management How AI, RFID, and Real-Time Data are Reshaping Retail By Tommy Cooke, fueled by caffeine and creativity Apr 4, 2025 Key Points AI in inventory management isn't about replacing people—it's about removing guesswork so that people can do better work. Old Navy's partnership with RADAR shows that when AI, RFID, and vision systems combine, customer experience gets more personal, not less. Before AI can work its magic, organizations must confront messy data, tangled systems, and human hesitation—because the tech isn't the hard part, the people are. I had a client that had trouble selling flip flops–the sandals. They are a major pharmacy with a significant retail component to the business. Flip flops were causing three issues: 1. flip flops were piling up in storerooms across the continent 2. the stockpile of dated, old flip flops was growing significantly 3. it was taking too much time to scan inventory of flip flops that nobody wanted. The answer to these three pain points was found in AI for inventory management. It was actually three disparate AI systems working in tandem, bundled into a new technological solution. This new solution analyzed historical data to determine when flip flops should be put out on the floor and advertised on sale, triggered automatic replenishment of flip flops (so as to avoid over-ordering), and actively monitored when flip flops were physically removed from a shelf. The solution is becoming more commonplace. Old Navy, a subsidiary of Gap Inc., recently made retail headlines: they are embarking on a multi-year plan to integrate RADAR's AI-driven RFID technology into its stores. The idea is to provide associates on the floor with real-time inventory data so that they can locate items quickly within the store. By combining RFID with AI and computer vision (to physically see inventory), Old Navy is not only aiming to improve associate efficiency and accuracy, but they are also aiming to enhance the customer service experience. I don't know about you, but I'm particularly excited; Old Navy never seems to have my size of jeans–ever. Much like my previous client who struggled with selling flip flops, AI can make a significant impact on inventory management. Let's dive into this a bit further. AI for Inventory Management Enhanced Demand Forecasting. Much like the flip flop example, AI algorithms can analyze historical sales data and marketing data internally. Those data can be combined with external data, such as market and consumption trends, to anticipate future demand. The benefit of doing so shouldn't be understated. Smart demand forecasting allows retailers to maintain optimal stock levels, thereby saving costs in terms of reducing overstock. For example, rather than just knowing that swimsuits sell better in July, an AI model might flag an early-season heatwave in a particular region, cross-reference those measurements with historical sales surges, and recommend adjusted stock levels in that cluster of stores. Automated Replenishment. Think of this as the reactive component of demand forecasting, the other side of the same coin. In this instance, AI systems work off inventory data to automate the reordering of new inventory. In the past, replenishment often relied on static rules: if stock drops below five, reorder ten. But AI can flip this logic on its head, making replenishment smarter and not just faster. Much like the way Old Navy will do so with RADAR, RFID monitors shelf-level data and warehouse status simultaneously. If a product is selling quickly in one store but not others, the system can auto-generate a transfer request. Or it can pause auto-orders if it predicts a drop in demand due to, for example, weather events or shifts in promotional priorities. This is a particularly attractive capability for retailers because it means that inventory management becomes more granular and adaptive.
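To make the contrast between static reorder rules and forecast-driven replenishment concrete, here is a minimal, illustrative sketch in Python. It is not code from RADAR, Old Navy, or any particular vendor; the data structure, function names, and numbers (a trailing-average forecast, a heatwave multiplier, a two-week cover target) are hypothetical stand-ins for what a trained forecasting model and live RFID counts would supply in a real system.

```python
from dataclasses import dataclass

@dataclass
class StoreInventory:
    store_id: str
    on_hand: int                     # current units, e.g. from RFID shelf + stockroom counts
    recent_weekly_sales: list[int]   # trailing weekly unit sales, oldest first

def forecast_weekly_demand(inv: StoreInventory, external_boost: float = 1.0) -> float:
    """Naive forecast: trailing average of recent sales, scaled by an external
    signal (e.g. 1.3 for a predicted regional heatwave, 0.7 for an expected dip)."""
    if not inv.recent_weekly_sales:
        return 0.0
    baseline = sum(inv.recent_weekly_sales) / len(inv.recent_weekly_sales)
    return baseline * external_boost

def replenishment_action(inv: StoreInventory, external_boost: float = 1.0,
                         weeks_of_cover: float = 2.0) -> dict:
    """Decide what to do for one SKU at one store.

    A static rule would be: if on_hand < 5, order 10. Here the order quantity
    tracks forecast demand instead, and ordering pauses when the store already
    holds more stock than the forecast justifies.
    """
    demand = forecast_weekly_demand(inv, external_boost)
    target = demand * weeks_of_cover        # stock we would like on hand
    gap = target - inv.on_hand
    if gap <= 0:
        return {"store": inv.store_id, "action": "pause", "order_qty": 0}
    return {"store": inv.store_id, "action": "reorder", "order_qty": round(gap)}

# Example: the same SKU in two stores. A heatwave lifts the forecast in store A,
# while store B's slow sales pause its auto-orders.
store_a = StoreInventory("A", on_hand=8, recent_weekly_sales=[10, 12, 15])
store_b = StoreInventory("B", on_hand=20, recent_weekly_sales=[2, 1, 3])
print(replenishment_action(store_a, external_boost=1.3))  # reorder to cover ~2 weeks
print(replenishment_action(store_b))                      # pause: already overstocked
```

Even in this toy version, the decision stops being a fixed threshold and becomes a function of forecast demand and current stock, which is what lets a system reorder aggressively in one store while pausing orders, or proposing a transfer, in another.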
Operational Efficiency. Better forecasting and replenishment do not just make inventory numbers look nice. They free up actual people to do better work. When store associates stop manually counting items or looking for hidden stock in the storeroom, they focus on customers. When warehouse teams stop scrambling to process last-minute shipments due to stockouts, they can plan strategically. Take RADAR, for example: this system tracks the movement of every tagged item in real time and allows associates to search for an item using a mobile app and be guided directly to it. It's a small change, but compounding small changes have a ripple effect, specifically faster order fulfillment. It means a customer can actually find what they came in for. It means an employee gets to spend more time helping someone, and less time on scavenger hunts. Implementation Considerations For all the power AI brings to inventory management, integrating it successfully is not just a matter of plug-and-play. In each of the two examples I discussed above–of my former client and Old Navy–the real challenge is not the technology: it's the people involved. The following points are ones to consider. Data Quality. AI is only as smart as the data it's trained on as well as the data it receives in real time. But retail data is messy. Product SKUs vary across systems, sales data are often fragmented between platforms, and real-time inventory counts can be inconsistent at best. So, for AI to work, organizations must undergo a data hygiene campaign that involves cleaning, labeling, and integrating data sources. It's a critical step, it's not particularly enjoyable, and it involves time from people across IT, operations, finance, management, and frontline staff. So, remember that if a system doesn't trust its data, it can't act on it, nor will your people be able to trust it. Proper data preparation needs to be preceded by proper communications and proper training plans.
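As a small, hypothetical illustration of what "cleaning, labeling, and integrating data sources" can look like in practice, the sketch below reconciles sales records from two imaginary systems that disagree on SKU formatting before anything is handed to a forecasting model. The feeds, field names, and normalization rules are invented for this example; a real data hygiene campaign would be driven by the retailer's actual systems, SKU catalogue, and data dictionary.

```python
import re
from collections import defaultdict

# Two source systems that disagree on SKU formatting and field names (hypothetical).
pos_sales = [  # point-of-sale export
    {"sku": "FLIPFLOP-BLU-09", "store": "A", "units": 4},
    {"sku": "flipflop_blu_09", "store": "B", "units": 2},
]
ecom_sales = [  # e-commerce export
    {"product_code": "FF BLU 9", "channel": "web", "qty": 7},
]

def normalize_sku(raw: str) -> str:
    """Map vendor-specific SKU spellings onto one canonical form.
    Rules here are illustrative: uppercase, hyphen-separated, zero-padded size."""
    parts = re.split(r"[-_ ]+", raw.strip().upper())
    if parts and parts[0] in {"FF", "FLIPFLOP"}:   # alias table would live in config
        parts[0] = "FLIPFLOP"
    if parts and parts[-1].isdigit():
        parts[-1] = parts[-1].zfill(2)
    return "-".join(parts)

def integrate(pos_rows, ecom_rows):
    """Merge both feeds into units sold per canonical SKU, tagging each source
    so discrepancies can be traced back during the data hygiene campaign."""
    totals = defaultdict(lambda: {"units": 0, "sources": set()})
    for row in pos_rows:
        key = normalize_sku(row["sku"])
        totals[key]["units"] += row["units"]
        totals[key]["sources"].add("pos")
    for row in ecom_rows:
        key = normalize_sku(row["product_code"])
        totals[key]["units"] += row["qty"]
        totals[key]["sources"].add("ecom")
    return dict(totals)

print(integrate(pos_sales, ecom_sales))
# -> {'FLIPFLOP-BLU-09': {'units': 13, 'sources': {'pos', 'ecom'}}} (set order may vary)
```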
Legacy Integration. Many retailers, especially large-scale organizations, operate on a patchwork of legacy tools. Sometimes they are systems that are decades old. Many of these systems are bespoke, custom-built designs. Others are bolt-on afterthoughts. When AI is integrated or interacts with these systems, operations can become prohibitively complicated, not to mention expensive. I'd be remiss not to mention the impact of these changes on your people, too. A guided approach is required, one that takes into account what it means to bring together modern technology with antiquated software–particularly when your staff are more accustomed to the former than the latter. Ethics and Privacy. RFID and computer vision systems can, intentionally or not, start to resemble surveillance. Tracking product movement is one thing—tracking employee and shopper behaviour is quite another. If AI systems are being used to monitor human productivity or shopper movements without transparency, it will lead to mistrust, to say nothing of legal risks and morale issues. It is not a new issue that consumers are concerned about what data is collected on them when they enter stores. When you add AI to the mix, concerns increase. Retailers need to be thoughtful about how they use these systems, what data they collect, and who has access. Ethical use is not just a matter of compliance, but a matter of culture, priorities, and principles. The Future of AI in Inventory Management The trajectory of AI in inventory management points towards increasingly sophisticated applications and use cases. Machine learning, computer vision, and robotics are set to further enhance inventory accuracy and operational efficiency. There is much to be saved and salvaged through these advancements, though it is important to recognize that these advancements are investments. They require planning, time, and reflection. Old Navy's partnership with RADAR precisely exemplifies the transformative power of AI in inventory management. It's also a reminder that people always matter when working with AI.
- News Publishers Sue Canadian AI Startup Cohere for Copyright and Trademark Infringement | voyAIge strategy
News Publishers Sue Canadian AI Startup Cohere for Copyright and Trademark Infringement Another AI and intellectual property infringement lawsuit By Christina Catenacci, human writer Mar 7, 2025 Key Points On February 13, 2025, Cohere Inc (Cohere) was sued by a number of news publishers for copyright and trademark infringement. The Cohere case is very similar to the Thomson Reuters case, where Thomson Reuters was successful and was granted partial summary judgment on the infringement and fair use claims. The Thomson Reuters case is instructive when it comes to the Cohere and the New York Times cases. On February 13, 2025, Cohere Inc (Cohere) was sued by a number of news publishers, including Advance Local Media LLC; Advance Magazine Publishers Inc. D/B/A Conde Nast; The Atlantic Monthly Group LLC; Forbes Media LLC; Guardian News & Media Limited; Insider, Inc.; Los Angeles Times Communications LLC; The McClatchy Company, LLC; Newsday LLC; Plain Dealer Publishing Co.; Politico LLC; The Republican Company; Toronto Star Newspapers Limited; and Vox Media, LLC (Publishers). What is the lawsuit about? The first paragraph of the lawsuit discusses the nature of the case, and points out that the lawsuit is about protecting journalism from systemic copyright and trademark infringement: "Rather than create its own content, Cohere takes the creative output of Publishers, some of the largest, most enduring, and most important news, magazine, and digital publishers in the United States and around the world. Without permission or compensation, Cohere uses scraped copies of our articles, through training, real-time use, and in outputs, to power its artificial intelligence (AI) service, which in turn competes with Publisher offerings and the emerging market for AI licensing. Not content with just stealing our works, Cohere also blatantly manufactures fake pieces and attributes them to us, misleading the public and tarnishing our brands" In fact, the lawsuit clearly talks about how publishers spend enormous amounts of time investigating, reporting, and ultimately publishing their expressive and groundbreaking pieces, which span the full spectrum of investigative reporting, breaking news, opinion pieces, arts and entertainment reviews, sports coverage, and political and business journalism. The Publishers claim that Cohere, with its valuation of over $5 billion, fails to license the content it uses and takes the Publishers' valuable articles, without authorization and without providing compensation. It copies, uses, and disseminates the Publishers' news and magazine articles to build and deliver a commercial service that mimics, undercuts, and competes with lawful sources for their articles and that displaces existing and emerging licensing markets. More specifically, the Publishers claim that Cohere copies the Publishers' works to train its suite of LLM AI systems and products. The Publishers say that they value innovation and AI if ethically deployed—they already license their articles to AI companies. However, they say that Cohere improperly usurps their creative labour and investments for the sake of its own profits. Most troubling, the Publishers claim that Cohere's AI models deliver outputs that include full verbatim copies, substantial excerpts, and substitutive summaries of Publishers' works—even current, breaking news pieces and articles protected by paywalls.
Ultimately, the Publishers claim that Cohere’s actions amount to “massive, systematic copyright infringement and trademark infringement, and have caused significant injury to Publishers”. Moreover, they are adamant that left unfettered, such misconduct threatens the continued availability of the valuable news, magazine, and media content that Publishers produce. In fact, there are over 4,000 articles that the Publishers claim are registered copyrighted works that have been infringed. Worse, they claim that Cohere has passed off its own hallucinated articles as articles from the Publishers. The following is the list of the Publishers’ Causes of Action: Count I: Direct Copyright Infringement, in violation of the Copyright Act —each infringement constitutes a separate and distinct act of infringement, and Cohere’s acts of infringement are willful, intentional, and purposeful, in disregard of and with indifference to the Publishers’ rights Count II: Secondary Copyright Infringement, in violation of the Copyright Act —to the extent Cohere seeks to shirk responsibility for its own conduct by shifting blame onto its users and customers, the Publishers also bring claims for secondary liability in the alternative Count III: Trademark Infringement, in violation of the Lanham Act —Cohere uses marks that are either identical to, variations on, or colourable imitations of Publishers’ federally registered trademarks in connection with the generation and distribution of hallucinated articles that Publishers did not publish. Cohere has caused and is likely to cause confusion, mistake, or deception as to whether the hallucinated articles Cohere provides are associated or affiliated with, or are sponsored, endorsed, or approved by the Publishers Count IV: False Designation of Origin, in violation of the Lanham Act —Cohere has used and continues to use the Publishers’ marks in interstate commerce in a misleading manner, falsely associating the Publishers’ valuable trademarks and trusted brands with Cohere and Cohere’s products and services. As a result, Cohere’s users are deceived and are likely to continue to be deceived by the appearance of the Publishers’ trademarks on Cohere’s hallucinated articles The Publishers are asking for judgment that Cohere is liable under the Copyright Act and the Lanham Act ; equitable relief including a permanent injunction; an order telling Cohere to stop training or fine-tuning AI models or generating content from AI models; an order requiring Cohere to destroy under the court’s supervision all infringing copies of the Publishers’ works; statutory damages and actual damages; fees; and interest. Who is Cohere, and What is its Reaction to the Lawsuit? Cohere is a Canadian company with its principal places of business in Toronto, San Francisco, London (UK), and New York. It is a multinational technology company focused on AI for the enterprise, specializing in large language models. It has been reported that Cohere’s response is that the company expects that the court will side with Cohere because it has long worked to mitigate the risk of intellectual property infringement. 
Further, Cohere’s Josh Gartner reportedly said that Cohere “strongly stands by its practices for responsibly training its enterprise AI” and believes the lawsuit is “misguided and frivolous.”
This case is part of a string of intellectual property cases against AI companies
As you may recall, I wrote an article titled New York Times Sues OpenAI and Microsoft for Copyright Infringement, which discussed the copyright lawsuit that the New York Times launched against OpenAI and Microsoft for the same type of alleged infringement. That case has not yet been decided, and, as we shall see below, the New York Times could very well win.
A Similar Case that Supported a Copyright Holder
In another similar case, Thomson Reuters and West Publishing Corp. sued Ross Intelligence in the United States District Court for the District of Delaware. The February 11, 2025 decision is notable, and it was written by Circuit Judge Stephanos Bibas, sitting by designation. This was his first paragraph in the decision:
“A smart man knows when he is right; a wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does––even if it comes late, as it did here”
He wrote this because he revised his previous 2023 decision and ultimately granted Thomson Reuters’s motion for partial summary judgment on direct copyright infringement and related defenses, and on fair use. To that end, he denied Ross’s motion for summary judgment on fair use and Ross’s motion for summary judgment on Thomson Reuters’s copyright claims.
What happened in this case?
As we know, Thomson Reuters owns one of the largest legal research platforms, Westlaw. Users need to pay to access and use the platform. Westlaw also contains editorial content and annotations, such as the headnotes that summarize key points of law and case holdings. Westlaw organizes its content using the Key Number System, a numerical taxonomy, and Thomson Reuters owns the copyrights in Westlaw’s copyrightable material.
Ross decided to make a legal research search engine that used AI and competed with Westlaw. Ross needed a database of legal questions and answers to train the tool, so it asked to license Westlaw’s content. Thomson Reuters refused. Consequently, Ross made a deal with LegalEase to get training data in the form of “Bulk Memos”, which are lawyers’ compilations of legal questions with good and bad answers. Notably, LegalEase gave those lawyers a guide explaining how to create those questions using Westlaw headnotes, while clarifying that the lawyers should not just copy and paste headnotes directly into the questions. LegalEase sold Ross roughly 25,000 Bulk Memos, which Ross used to train its AI search tool.
In response, Thomson Reuters sued Ross for direct copyright infringement and argued that Ross’s fair use defense failed. To prove direct copyright infringement, the judge stated, Thomson Reuters had to show both that (1) it owned a valid copyright and (2) Ross copied protectable elements of the copyrighted work. The second element required showing that Ross actually copied the work and that its copy was substantially similar to the work. The judge granted summary judgment for Thomson Reuters on originality, holding that the headnotes and the Key Number System were original enough that Ross could not rebut the presumption of validity.
Looking at about 4,000 headnotes, the judge considered the expert evidence, compared the headnotes to the underlying judicial decisions, and decided that Thomson Reuters should be granted summary judgment on actual copying of the data. Then, the judge asked whether an ordinary user of the product would find it substantially similar to the copyrighted work, and answered in the affirmative: Thomson Reuters was granted summary judgment on substantial similarity regarding the headnotes.
Moreover, the judge confirmed that all of Ross’s defenses failed. He swiftly rejected the innocent infringement defense, since innocence is not a defense to liability. Similarly, he disagreed with Ross’s argument that Thomson Reuters had misused its own copyrights. He also rejected Ross’s merger-of-expression argument, and he quickly rejected a claim of scenes à faire (a principle in copyright law under which certain elements of a creative work are not protected when they are mandated by or customary to the genre).
But the interesting defense raised by Ross was fair use. Section 107 of the Copyright Act required the judge to consider at least these four factors in the analysis:
The use’s purpose and character, including whether it is commercial or nonprofit: this factor went to Thomson Reuters, since Ross’s use was commercial and not transformative
The copyrighted work’s nature: this factor went to Ross, since Westlaw’s work was not that creative
How much of the work was used and how substantial a part it was relative to the copyrighted work’s whole: this factor went to Ross, since, as the judge put it, what matters is not “the amount and substantiality of the portion used in making a copy, but rather the amount and substantiality of what is thereby made accessible to a public for which it may serve as a competing substitute”
How Ross’s use affected the copyrighted work’s value or potential market (the most important factor): this factor went to Thomson Reuters, since Ross tried to compete with Westlaw by developing a market substitute, and it did not matter whether Thomson Reuters had used the data to train its own legal search tools; the effect on a potential market for AI training data was enough
Thus, when the factors were balanced, Thomson Reuters prevailed. The judge granted partial summary judgment to Thomson Reuters on direct copyright infringement for the headnotes. For those headnotes, the only remaining factual issue on liability was whether some of the copyrights had expired or been untimely created; that factual question underlying copyright validity was saved for the jury. The judge also granted summary judgment to Thomson Reuters against Ross’s defenses of innocent infringement, copyright misuse, merger, scenes à faire, and fair use. Likewise, the judge denied Ross’s motions for summary judgment on direct copyright infringement and fair use. Though this newer decision replaced many parts of the 2023 decision, some parts remained relevant, including the rulings on contributory liability, vicarious liability, and tortious interference with contract.
What does this mean for AI and copyright?
As can be seen from the Thomson Reuters case, the defense of fair use is not likely to succeed for defendants who directly copy material and use it to train AI models. We also see that judges complete a side-by-side comparison of the works at issue to make their decisions. Defendants would have to show that their use was transformative.
This decision may be instructive for the upcoming decisions involving the New York Times and Cohere. Judges conducting a fair use analysis will weigh the following four key factors against the facts of each case:
The use’s purpose and character, including whether it is commercial or nonprofit
The copyrighted work’s nature
How much of the work was used and how substantial a part it was relative to the copyrighted work’s whole
How the use affected the copyrighted work’s value or potential market (this is the most important factor)
It will be interesting to see whether the New York Times and the Publishers will be successful in light of this Thomson Reuters decision. We will keep you posted…
- Request a Quote | voyAIge strategy
Inquire about our diverse range of AI solutions. Request a Quote Please let us know what services and/or products you are interested in and we will contact you within 48 hours. If you are requesting a proposal or bid, please use this form here. First Name Last Name Email Quote Request Details Send Thanks for submitting!
- AI & The Future of Work DL | voyAIge strategy
Thank you for subscribing! Download the Report Here Read the Report Online Here
- The New Claude 4 Can Code, But Leaders Should Still Sign Off | voyAIge strategy
The New Claude 4 Can Code, But Leaders Should Still Sign Off
Claude 4 is a leap forward, but it's also a governance wake-up call
By Tommy Cooke, powered by caffeine and curiosity
May 30, 2025
Key Points:
Delegating a technical task doesn't guarantee it's done right—oversight matters, even when the system looks competent
Claude 4’s ability to work autonomously highlights the growing need for clear accountability and human verification
As AI systems become more capable, leaders must stay close to the outcomes—even when they don’t touch the inputs
In my formative years as a young adult, I was an active musician in a rock band. My band and I performed regularly throughout my undergraduate years. As much fun as gigging is, live shows are a scramble: hauling gear, setting up, sound-checking, and hoping nothing goes wrong. At one show, I was running behind. Two strings had snapped on my main guitar, so I asked the venue’s sound tech to wire up my pedalboard while I handled other setup tasks. When we hit soundcheck, my sound was a mess. One of the pedals had been wired in the wrong order. It was a simple mistake, but it could have derailed the entire show. I’ve never forgotten the lesson: delegating a technical task doesn’t mean it’s done right. You still need to check the signal before the lights go up.
This is the moment I reflected on when I read that Anthropic had released Claude 4. The reflection was triggered by the fact that most headlines focused on one detail: the model can autonomously generate software for hours at a time. For developers, this is surely a turning point: an AI system that not only writes code but improves it quietly, efficiently, and without supervision. But that’s not the full story.
If you are on a Pro, Max, Team, or Enterprise Claude plan, this matters to you because you will have access to Claude 4. This means that you, and potentially your organization, may now have access to a brand new, advanced AI that can carry out complex work without human input. It means that leadership must ask: what’s the governance plan? Who verifies the output? Who signs off? Much as I learned on that rock stage after asking someone who didn’t know my system to set it up for me, there is something we as business leaders can do to ensure that AI innovations still act effectively on our behalf.
Claude 4 and the Shift to Autonomous Execution
Until recently, generative AI required heavy user input. The human wrote the prompt, and the system responded. That dynamic made it easy to keep the human in the loop to control the task, validate the output, and decide what comes next. Claude 4 changes these terms. It introduces what many call agentic AI: models capable of reasoning through tasks, planning multi-step actions, and executing work without continual prompting. Claude 4 has demonstrated that it can work independently for hours, reconfigure code, and make judgment calls along the way. So it’s not just writing the code—it is actually finishing the job as well.
This is a major development. But with this innovation in AI autonomy comes a truth: the more work that AI performs alone, the less visibility organizations have into how it gets done.
The AI Governance Gap Is Growing
The risk isn’t that AI will make obvious errors. It’s that it will produce plausible work that quietly deviates from your standards, assumptions, or intentions. Do you have someone in place who can notice these changes before it’s too late? That’s the real governance gap.
It’s not about control over prompts; it’s about prioritizing oversight of outcomes. This means organizations need to reconsider how they monitor AI-driven work. That doesn’t mean leaders need to personally review every AI-generated output. But it does mean they need to put in place clear lines of accountability, regular review processes, and internal checks to ensure AI isn’t working in a vacuum. This also means that oversight can no longer be reactive—it needs to be built in from the beginning.
What Does Accountable AI Adoption Look Like?
Organizations don’t need to halt progress to manage these risks, but they do need to move forward with clarity:
One of the most effective ways to begin is by documenting how AI is being used across the business. This doesn’t need to be a heavy-handed process. Even a lightweight registry of AI use cases can help identify where autonomy is increasing and where review protocols might be missing (see the sketch at the end of this piece)
Leaders should also establish guidelines for when human oversight is required. Not every AI-generated output requires manual review, but some certainly do. Defining these boundaries in advance protects against over-reliance on unchecked systems
Lastly, every autonomous system should have a clearly named owner. Someone in the organization needs to be responsible for verifying that the AI’s work aligns with business objectives, ethical expectations, and legal obligations. The idea isn’t to create bottlenecks—it’s to make sure someone is watching.
Signing Off Is Still a Human Task
Claude 4 marks real progress. It moves us closer to a world where AI can take on meaningful work, save time, and support innovation. But that progress also demands more from leadership. Delegating work to machines doesn’t absolve humans of responsibility. If anything, it raises the bar, because the more invisible the work becomes, the more deliberate our oversight must be.
Leaders don’t need to fear these systems. But they do need to govern them. They need to understand where AI is being used, what it’s allowed to do, and who remains accountable when things go wrong. This type of oversight can help organizations explain how their AI systems generate outputs.
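To make the registry idea from the list above concrete, here is a minimal sketch of what a lightweight AI use-case registry could look like, written in Python purely for illustration. The field names, the example entries, and the sign-off rule are assumptions for demonstration, not a prescribed standard; adapt them to your own governance framework.

```python
# A minimal, illustrative sketch of an AI use-case registry.
# All field names and example entries below are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str              # what the system is used for
    owner: str             # the named person accountable for its outputs
    autonomy_level: str    # e.g., "assistive", "supervised", "autonomous"
    human_signoff: bool    # must a person approve outputs before they are used?
    review_cadence: str    # how often the use case and its outputs are audited

registry = [
    AIUseCase("Code generation with Claude 4", "Head of Engineering",
              "autonomous", True, "weekly"),
    AIUseCase("Marketing copy drafts", "Marketing Director",
              "assistive", False, "quarterly"),
]

# Flag any autonomous use case that currently lacks a human sign-off step.
for use_case in registry:
    if use_case.autonomy_level == "autonomous" and not use_case.human_signoff:
        print(f"Review needed: {use_case.name} (owner: {use_case.owner})")
```

Even a simple record like this, kept current, gives leadership one place to see where autonomy is growing, who owns each system, and where a sign-off step is missing.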