Search Results
- Research & Reporting | voyAIge strategy
In-depth analysis and reports to support informed AI decision-making.
Research & Reporting
We are highly experienced researchers and writers. We are passionate about generating bespoke and informative reports, government bids, and thought leadership submissions to regulators and corporate communications teams.
Your Content, Our Expertise
Whitepapers & Thought Leadership
Position your organization as an industry leader with expertly crafted whitepapers and thought leadership articles. We conduct thorough research to produce insightful, well-argued content that not only informs but also engages your audience. Our whitepapers cover the latest trends, challenges, and innovations in AI and related fields, helping you shape the conversation in your industry.
The Research Process
1. Consultation: We start by understanding your needs, goals, and audiences.
2. Research: We collect and analyze data from industry and academic sources.
3. Draft & Develop: We craft clear, engaging, and structured outputs that convey your message.
4. Review & Finalize: We collaborate with you to gather feedback and make any requested changes.
5. Present: We love to present. We are happy to communicate our outputs to your staff, stakeholders, and investors.
Request Our Research & Reporting Samples
We have numerous samples of our previous research and writing projects, including excerpts from whitepapers, case studies, and policy documents. Contact us to learn more.
- L&E Analysis: Reddit Sues Anthropic | voyAIge strategy
L&E Analysis: Reddit Sues Anthropic
What is Reddit Claiming in this Complaint?
By Christina Catenacci, human writer
Jun 20, 2025
Legal & Ethical Analysis: Issue 2
Key Points
On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco
This Complaint follows the one launched against Anthropic by the major music publishers on October 18, 2023, which was ultimately settled
The Complaint by the music publishers was about copyright law, while the Reddit Complaint is about violations of the User Agreement and the privacy of Reddit users. It will be interesting to see how the Reddit Complaint is resolved
On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco. This is not the first Complaint that has been commenced against Anthropic—we cannot forget what recently took place when the music publishers sued Anthropic for copyright infringement. What is the claim about? What does Reddit want? How is this claim different from the one against Anthropic that was launched by the music publishers? What can we take from this development? This article answers these questions.
What is Reddit Claiming?
Essentially, Reddit has stated that although Anthropic claims it is the "white knight of the AI industry" that prioritizes honesty and high trust, it is "anything but" and relies on empty marketing gimmicks. Reddit asserts in its Complaint that its User Agreement contains the following excerpts of sections 3 and 7:
"3. Your Use of the Services: Except and solely to the extent such a restriction is impermissible under applicable law, you may not, without our written agreement: license, sell, transfer, assign, distribute, host, or otherwise commercially exploit the Services or Content"
"7. Things You Cannot Do: Access, search, or collect data from the Services by any means (automated or otherwise) except as permitted in these Terms or in a separate agreement with Reddit (we conditionally grant permission to crawl the Services in accordance with the parameters set forth in our robots.txt file, but scraping the Services without Reddit's prior written consent is prohibited)"
Despite this provision, Reddit claims that Anthropic has intentionally trained on the personal data of Reddit users without ever requesting their consent. In fact, it claims that Anthropic has ignored the provision and has had bots hit Reddit's servers more than 100,000 times, even after Reddit's CEO stated that Anthropic was unlawfully exploiting Reddit content. Further, Reddit states that Anthropic has refused to respect Reddit users' privacy rights—contrary to Anthropic's own values. Reddit claims that by training its model, Claude, on Reddit's posts without authorization, Anthropic is in direct violation of Reddit's User Agreement. In a nutshell, Reddit says that Anthropic has scraped and used Reddit content in its commercial offerings—Claude even provides output statements confirming that it has been trained on Reddit. Anthropic refused to respect Reddit's guardrails and enter into a licensing agreement as Google and OpenAI have. Reddit asserts that instead, Anthropic continued to commercialize Reddit content without authorization. Interestingly, Reddit states that Anthropic has admitted that it scrapes Reddit content, but it has provided several excuses—all of which are unacceptable.
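Section 7's reference to robots.txt points to the standard mechanism by which websites signal crawl permissions to automated agents. As a minimal illustrative sketch (not Reddit's or Anthropic's actual tooling; the bot name and target URL below are hypothetical), Python's standard library shows how a compliant crawler checks robots.txt before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt file.
rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

# A compliant crawler checks permission for its user agent before each fetch.
bot_name = "ExampleBot"  # hypothetical user-agent string
url = "https://www.reddit.com/r/example/"  # hypothetical target page
if rp.can_fetch(bot_name, url):
    print("robots.txt permits fetching:", url)
else:
    print("robots.txt disallows fetching:", url)
```

Reddit's allegation, in essence, is that Anthropic's bots disregarded exactly this kind of signal and continued scraping after being told to stop.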
To that end, Reddit is advancing the following claims against Anthropic:
Breach of Contract: Anthropic has violated Reddit's User Agreement by acting contrary to sections 3 and 7 of the Agreement
Unjust Enrichment: Anthropic was unjustly enriched at the expense of Reddit when it scraped and used Reddit content to train and power a model to the tune of billions of dollars
Trespass to Chattels: Anthropic intentionally entered into, and made use of, Reddit's platform and technological infrastructure, including its software and servers, to access and obtain Reddit content and information for its own economic benefit
Tortious Interference With Contract: Anthropic intentionally interfered with Reddit's contractual relationships with its users by: scraping Reddit content without entering into a licensing agreement that would provide the necessary guardrails to protect users' privacy rights; bypassing Reddit's Compliance API, which automatically notifies licensees when users delete posts or comments; training its AI models on user content without any mechanism to respect Reddit user deletion requests; and continuing to scrape Reddit content after being notified that such conduct violated Reddit's obligations to its users. This intentional interference diminished Reddit's capacity to fulfill its obligations to its users
Unfair Competition: Anthropic has engaged in acts of unfair competition, including unlawful, unfair, and/or fraudulent business acts and practices as defined by the Business and Professions Code. Anthropic has trespassed on Reddit's platform, taken possession of Reddit content and data without authority or permission, and interfered with Reddit's contractual relationships with its users. Anthropic has also engaged in fraudulent business practices by falsely stating that it was no longer scraping the Reddit platform, even as Anthropic continued to scrape to acquire and use Reddit content to train its AI models for commercial gain
In addition, Reddit has requested a jury trial.
What is Reddit Asking for in the Complaint?
Reddit is asking for the following:
Specific performance, compensatory damages, consequential damages, lost profits, and/or disgorgement of Anthropic's profits
An injunction
Restitution for the amount by which Anthropic has been enriched by its scraping and use of Reddit content
Pre-judgment and post-judgment interest
Punitive damages
Fees, costs, and any other appropriate relief
The Earlier Complaint by the Music Publishers
We cannot forget that on October 18, 2023, several major music publishers (Music Publishers) filed a Complaint against Anthropic in the United States District Court for the Middle District of Tennessee, Nashville Division. Essentially, the Music Publishers brought the action to address the systematic and widespread infringement of their copyrighted song lyrics by Anthropic. That is, they asserted that Anthropic unlawfully copied and disseminated vast amounts of copyrighted works—including the lyrics to myriad musical compositions owned or controlled by the Music Publishers. In the very first paragraph, the Music Publishers stated:
"(Music) Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.
Anthropic must abide by well-established copyright laws, just as countless other technology companies regularly do"
The Music Publishers explained that they partnered with innovators, including entrepreneurs, start-ups, and established companies—they recognized and drove true innovation (for instance, Universal used AI in its business and production operations). However, Anthropic's copyright infringement did not constitute innovation: "in layman's terms, it's theft". In fact, the Music Publishers claimed that Anthropic violated the United States Copyright Act. They acknowledged that AI was a new technology, but they insisted that AI companies still had to follow the law. Technological advances could not come at the expense of the creators who essentially served as the backbone for AI's development. Anthropic built its AI models by scraping and ingesting massive amounts of text from the internet (and potentially other sources), using all of it to train its AI models (Claude 2 in this case) and generate output based on this copied text. The Music Publishers claimed that Anthropic copied the lyrics to their musical compositions to fuel its AI models. They urged that copyrighted material was not free just because it could be found on the internet—in this case, Anthropic never asked for permission. Notwithstanding Anthropic's company Constitution (the goal was to be harmless, respectful, and ethical), the Music Publishers passionately argued that Anthropic committed copyright infringement since it generated identical or nearly identical copies of their lyrics. In the Complaint, the Music Publishers provided examples where famous songs were either completely or partially outputted in response to user prompts. In fact, the Music Publishers argued that:
Anthropic directly infringed the Music Publishers' exclusive rights as copyright holders, including the rights of reproduction, preparation of derivative works, distribution, and public display
Anthropic unlawfully enabled, encouraged, and profited from massive copyright infringement by its users, so it was secondarily liable for the infringing acts of its users under well-established theories of contributory infringement and vicarious infringement
Anthropic's AI output often omitted critical copyright management information regarding these works, so that the composers of the song lyrics frequently did not get recognition for being the creators of the works that were being distributed
The Music Publishers stated, "It is unfathomable for Anthropic to treat itself as exempt from the ethical and legal rules it purports to embrace". According to the Music Publishers, there was no doubt that Anthropic profited from the infringement of the Music Publishers' repertoires, since Anthropic was already valued at $5 billion, had received billions of dollars in funding, and boasted numerous high-profile commercial customers and partnerships. The Music Publishers stated in the Claim: "None of that would be possible without the vast troves of copyrighted material that Anthropic scrapes from the internet and exploits as the input and output for its AI models"
The Music Publishers noted that nothing about Anthropic was creative—Anthropic depended on the creativity of others and paid them nothing. This caused substantial and irreparable harm.
The Claim set out how Anthropic trained on the data:
copied massive amounts of text from the internet (and potentially other sources) by "scraping" (copying or downloading) the text directly from websites and other digital sources onto Anthropic's servers, using automated tools such as bots and web crawlers, and/or by working from collections prepared by third parties
cleaned the copied text to remove material that it perceived as inconsistent with its business model, whether technical or subjective in nature (such as deduplication or removal of offensive language)
copied the massive "corpus" and processed it in multiple ways to train the Claude AI models (encoding the text into tokens)
processed the data further to fine-tune the Claude AI models and engaged in additional reinforcement learning, based both on human feedback and AI feedback, all of which may have required additional copying of the collected text
The following are the Claims for Relief:
Count I: Direct Copyright Infringement
Count II: Contributory Infringement
Count III: Vicarious Infringement
Count IV: Removal or Alteration of Copyright Management Information
To that end, the Music Publishers requested relief against Anthropic in the form of judgment on each of the claims above, an order for equitable relief, an order requiring Anthropic to pay the Music Publishers statutory damages, an order requiring Anthropic to provide an accounting of the training data and methods (and the lyrics on which it trained its AI models), an order to destroy (under Court supervision) all infringing copies of the Music Publishers' copyrighted works, costs, and interest.
On October 23, 2023, Anthropic initially stated that training its AI models constituted "fair use", meaning that it was a lawful exception under copyright law. Why? Because Anthropic was engaging in a use that was highly transformative. Indeed, the company maintained that it did not intend to violate the law. Furthermore, on November 16, 2023, the Music Publishers brought a Motion for a preliminary injunction, asking Anthropic to stop using their song lyrics. However, by January 6, 2025, it was reported that Anthropic and the Music Publishers had reached a settlement under which Anthropic would implement robust measures to ensure compliance with the law, namely revising its data collection and training methodologies to exclude copyrighted content unless proper licenses or permissions have been obtained. It also agreed to more stringent oversight of its data sources to mitigate the risk of inadvertently using protected material in future AI training.
What Can We Take from This Development?
Following the debacle with the Music Publishers, one would think that Anthropic would be striving to promote ethical AI practices and foster trust with both the creators of artistic works and the wider public. One would think that Anthropic had learned its expensive lesson. But now, Anthropic has to face Reddit. The disappointing part of the story is that Anthropic similarly scraped Reddit content from the internet and used it to train Claude models—clearly without permission and without entering into a proper licensing agreement. While this is technically not a copyright infringement case, it is similar in that the User Agreement was allegedly breached: there were terms that Anthropic needed to comply with, but it appears to have violated them. In particular, to use Reddit, the terms in clauses 3 and 7 had to be complied with.
And Anthropic was warned—but Anthropic continued to hit Reddit's servers and scrape away so that it could train Claude (without paying). This appears to be the first time that a big tech company has challenged an AI model provider over its training data practices, and it will be interesting to see what happens. As in the case of the Music Publishers, Anthropic will likely have to settle and promise not to do this again. This may be what the company ultimately has to do in order to preserve the delicate balance between innovation and the rights of companies like Reddit (along with its users). For a company that was just valued at $61.5 billion (up from the $5 billion noted in the Music Publishers' Complaint a couple of years ago), it may be something that Anthropic needs to do sooner rather than later in order to preserve its reputation. Anthropic recently told TechCrunch that it disagreed with Reddit's claims and would defend itself vigorously. We shall see what happens in the coming months…
- Partnership Opportunities | voyAIge strategy
Collaborating with organizations to drive responsible and effective AI adoption.
Partnership Opportunities
We are passionate about partnerships and alliances. If your business or organization is interested in collaborating with us on business opportunities, CfPs, or other forms of co-operation for mutual benefit, we would like to hear from you. For all other inquiries, please contact us here.
- Compliance | voyAIge strategy
AI policies and frameworks to help your organization meet legal and ethical standards.
Compliance
At voyAIge strategy, compliance is a foundation for our analysis on the legal, policy, and ethical dimensions of AI. We understand the intricacies of the laws of many jurisdictions and can guide you through every step of your compliance journey. Today's rapidly evolving digital landscape is fueled by the exponential rate at which AI transforms not just business practices and ways of seeing but entire industries. However, with innovation come new challenges, particularly in compliance. Governments and regulatory bodies around the world are racing to keep up. They are creating complex legal requirements with which businesses must comply. For businesses, navigating this complexity is not just about avoiding fines and penalties. It's about safeguarding reputation, building trust with stakeholders, and ensuring sustainability.
What's Your Compliance Challenge?
Understanding jurisdiction, sector, applicable legislation, data types, and data flows are some of the many considerations we take into account when identifying regulatory bodies relevant to your organization. The following are some examples that may apply to a business now or in the future:
GDPR: The General Data Protection Regulation applies to all businesses with clients and customers in Europe
NYC LL 144: New York City regulates how businesses can and cannot use AI to assist in hiring employees
AI Act: Regulates and governs the use of Artificial Intelligence inside the European Union
FDA: The US Food and Drug Administration maintains specific AI regulations governing medical devices, evidence curation, and market monitoring
CCPA: The California Consumer Privacy Act has a significant impact on how AI systems used in businesses handle data
AIDA: Canada's proposed Artificial Intelligence and Data Act would place stringent requirements on Canadian organizations using AI
OESA: Governs employee monitoring as well as the use of AI to recruit employees in Ontario, Canada
SB 1047: California's Safe and Secure Innovation for Frontier AI Models Act imposes safety restrictions on advanced AI
PIPEDA: Canada's Personal Information Protection and Electronic Documents Act governs how companies collect, use, and share personal information
Did You Know?
Fines can reach up to €30 million or 6% of global annual turnover for prohibited AI practices under the EU AI Act. Amazon was fined €746 million by Luxembourg's data protection authority for how it processed personal data for targeted advertising using AI-driven systems. Canada's proposed Artificial Intelligence and Data Act plans to impose administrative monetary penalties of up to $10 million or 3% of global annual revenue, including criminal penalties such as jail time for AI decisions causing significant harm.
Expert Insights, Experience & Resources
Book a free consultation to chat with us about correctly and accurately identifying the compliance requirements and regulations applicable to your organization. Stay informed by subscribing to our VS-AI Observer Substack, where we offer articles, whitepapers, case studies, and video content that will keep your organization ahead of emerging compliance challenges, requirements, and issues.
- Disclaimer and Terms of Use | voyAIge strategy
Disclaimer and Terms of Use - Our Policies for Working with our Clients
- HOME w/ insights | voyAIge strategy
Navigate your AI Journey with Confidence
Innovative Risk Solutions | Strategic AI Management | Law, Policy & Ethics Expertise
Trending Insights: Trump Signs Executive Order on AI; Legal Tech Woes; Meta Wins the Antitrust Case Against It
Our Services
Policy & Procedure: Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant.
For organizations who: need to establish new or update existing AI governance frameworks
Benefits organizations by: ensuring that AI is used safely and ethically within set guidelines
Example: an insurance company adopts our crafted policies to govern and manage their AI claims
Compliance: Ensure your AI systems adhere to all legal standards with our thorough compliance review, which aims to detect and minimize risk while enhancing trustworthiness.
For organizations who: need to ensure their AI systems comply with regulatory standards or anticipate forthcoming legislative changes
Benefits organizations by: reducing legal, financial, and reputational risk while enhancing organizational clarity
Example: a healthcare provider utilizes our service to align their patient data processing AI tools with PHIPA
Research & Writing: We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs.
For organizations who: require in-depth analysis of AI topics
Benefits organizations by: providing expert insight and professionally written content to support decision-making
Example: an app developer contracts us to create a market analysis report detailing AI advancements and emerging industry trends
Invited Talks: We engage your audiences with unique viewpoints that demystify the complex landscape of scholarly, political, popular, and media understandings of AI.
For organizations who: need an expert spokesperson on AI for corporate or public engagement events
Benefits organizations by: educating audiences with the latest insights in AI ethics, law, and policy
Example: our founders deliver a keynote on AI trends in neighbouring countries so as to prepare Canadians for forthcoming AI legislation in 2025
Executive & Staff Training: Empower your team with the latest AI knowledge as well as legal and ethical perspectives, designed to enhance and extend AI decision-making.
For organizations who: are aiming to elevate their team's understanding and capabilities in AI
Benefits organizations by: boosting AI literacy, enhancing both strategic and operational capacities
Example: a multinational corporation uses our services to enhance their annual leadership development program
Impact Assessments: We take deep dives into your privacy policies as well as data and AI operations to uncover and resolve otherwise hidden risks and biases.
For organizations who: want to understand the broader implications of AI projects
Benefits organizations by: identifying impacts on employees, customers, and stakeholders as well as operational processes
Example: we assess the impact of using AI in public service delivery of a location-based asset tracking system
Ethical AI Playbook: Navigate the complexities of AI with confidence as we design your own guide to implementing and strategically responding to AI issues effectively.
For organizations who: are mindful of AI's impacts on their employees, customers, and stakeholders
Benefits organizations by: guiding teams in ethical decision-making, fostering trust and transparency
Example: we create a customized playbook focusing on perceived labour reduction implications around AI adoption
AI Solution Scoping: Our team assesses your organization's needs, pain points, and opportunities. Work with us to discover the right AI solution for you.
For organizations who: are exploring potential AI solutions to address specific operational challenges
Benefits organizations by: clarifying the feasibility, scope, and value of AI solutions in alignment with business objectives
Example: a retailer engages us to scope AI solutions for automating inventory replenishment
Stakeholder Engagement: Maximize AI adoption and AI project success as we assist you in ensuring all parties are informed, involved, and invested from the outset.
For organizations who: require buy-in from internal and external stakeholders
Benefits organizations by: ensuring that parties are informed, engaged, and supportive of your AI initiative
Example: a software company utilizes our service to facilitate workshops that bring together developers and end-users in the AI adoption process
Our Products
AI Launch Packages: Kickstart your AI implementation with tools and resources for pre-, during-, and post-AI deployment; compliance and ethics frameworks; and expert guidance on best practices and strategies.
Who We Are
We are lawyers and professors with over 20 years of experience in advising private, public, and non-profit organizations on the intersection of technology with law, ethics, and policy.
Dr. Tommy Cooke, BA, MA, PhD, Co-Founder
Dr. Christina Catenacci, LLM, LLB, PhD, Co-Founder
Why Hire Us?
Canada's Artificial Intelligence and Data Act (AIDA) will launch in 2025. It will place stringent requirements on Canadian organizations using AI. Organizations will require expert guidance to prepare for AIDA.
- Governing AI by Learning from Cohere’s Mistakes | voyAIge strategy
Governing AI by Learning from Cohere's Mistakes
Why it is Crucial to Demonstrate Control
By Tommy Cooke, powered by caffeine and curiosity and a strong desire for sunny weather
Mar 7, 2025
Key Points:
AI governance is essential because it ensures organizations maintain transparency, accountability, and oversight over how AI systems are trained, deployed, and used
Leaders must proactively assess where AI models source their data, ensuring compliance with intellectual property laws and mitigating risks related to unauthorized content use
We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control. This is how we build trust with stakeholders
In my 20s, I spent a lot of time traveling to Germany to visit a close friend. He had an early GPS unit in his car. It was an outdated system that relied on CDs for updates and generated very blurry little arrows on a tiny screen nowhere near the driver's eyes. On one trip, we thought we were heading west of Frankfurt to a favourite restaurant. Instead, we ended up 75 kilometers south. We found ourselves sitting in his car at a dead end, with his high beams on, staring into a dark farmer's field. The system led us astray because we over-relied on old data inside a brand-new consumer technology.
When I speak with leaders adopting AI for the first time, I often think of getting lost in the rural German countryside. AI, like early GPS, promises efficiency, but its reliability depends entirely on its data. Organizations are under pressure to adopt AI to streamline operations, reduce costs, and drive creativity. But AI isn't a magic bullet. It's a tool. And like an unreliable GPS, AI trained on flawed or unauthorized data can take your organization in the wrong direction. It relinquishes control.
Cohere, a major AI company in Canada, is facing a significant lawsuit over how it trained its models; the plaintiffs allege that Cohere used copyrighted content without permission or compensation. This case is one you should know about because it's more than just a legal battle. It's a reminder: AI adoption isn't just about capability. It's about building and maintaining responsible control. So, how exactly do you ensure you are in control? The answer begins and ends with an ethical strategy.
The Ethical Fault Line
The lawsuit against Cohere, a Toronto-based AI company, highlights the growing tension between AI developers and content creators. Major media organizations allege that AI companies are scraping and reproducing their content without consent or compensation. This raises a critical question: Who controls knowledge in the era of AI? This isn't just a tech industry issue—it's a governance challenge with real consequences. AI systems generate content, provide insights, and automate decisions based on their underlying data. If that data is sourced irresponsibly—such as using newspaper articles without publisher consent—organizations risk reputational harm, legal liability, and a breakdown of trust with employees, customers, and industry partners.
Lessons for AI Leaders: How to Stay on the Right Side of AI Ethics
As AI continues to reshape industries, its impact will depend on how it is developed and deployed. Business leaders don't need to be AI engineers, but they do need to ensure that they are using AI transparently. Here's why:
Transparency is the Foundation of Trust. AI should not be a "black box"—a technology that operates mysteriously without clear explanation.
Leaders need visibility into how AI works, what data it uses, and what safeguards are in place. This means two things: first, working with AI vendors to receive clear documentation on data sources and model behaviour. If a company can't explain how its AI makes decisions, that's a red flag. Second, leaders need a communication strategy—something that they can reference to explain AI's role to any stakeholder
Respect Intellectual Property from the Start. Whether using AI to generate content, analyze trends, or assist in decision-making, stakeholders expect AI leaders to account for where AI data comes from. If an organization uses internal data from sales reports, for example, this needs to be documented. If outsourcing data from a third-party vendor, it's not enough to say that the data is external—leaders must be able to confirm the vendor's ownership of, and rights to, that data
Governing AI Is Not Optional. Responsible AI use requires a governance framework. Companies need clear policies that define how AI is trained, where data comes from, and how the system and its outputs are monitored. Think of AI governance like driving a car: just as drivers follow traffic laws and speed limits, AI systems require rules to ensure safe and ethical operation. AI governance is a business strategy that demonstrates commitment to legal, compliant, and ethical AI development—ensuring transparency, explainability, and accountability.
Ethical AI is an Advantage, Not a Financial Burden
Just as a driver is expected to maintain control of their vehicle, abide by the rules, and ensure their own and others' safety—building trust with passengers and other drivers by continuously demonstrating that they are in control—the same holds true for AI ethics. We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control.
- A Geography of AI Influence | voyAIge strategy
A Geography of AI Influence
By Tommy Cooke, fueled by summer sunshine
Jul 11, 2025
Key Points:
Cohere's and Shopify's leaders are calling on Canadian tech talent to stay and build domestically, framing it as a strategic imperative rather than a sentimental one
The future of Canadian AI won't hinge on billion-dollar exits, but on whether small- and mid-sized businesses adopt, invest in, and champion local AI tools
Business owners hold more power than they realize to shape Canada's AI ecosystem—through the partners they choose, the tools they deploy, and the future that they help fund
It's not often that a co-founder of one of the world's most advanced AI companies shares the stage with the president of Canada's most iconic tech firm and calls on the next generation to build in Canada. But that's exactly what happened last week in Toronto. At a fireside chat during Tech Week, Cohere's Aidan Gomez and Shopify's Harley Finkelstein made a public plea. They urged Canadian tech talent to stay and build at home, rather than taking their talent beyond Canada's borders. For Canadian business owners, whether you're running a startup or scaling a service firm, this message isn't someone else's rallying cry. It's a turning point that could change how, and where, AI is built, accessed, and governed. It matters because the AI ecosystem that emerges in Canada over the next two years could determine Canada's short- to medium-term economic outlook, not to mention the future of the AI industry in Canada.
A Geography of AI Influence
For those who are unfamiliar, Cohere is not a modest startup. It's one of OpenAI's leading global competitors, building powerful large language models (LLMs) that power tools like conversational AI, search, and workflow automation. Headquartered in Toronto, Cohere has emerged as one of the most serious Canadian contenders in the AI space. Gomez, its co-founder, was one of the authors of the famous 2017 "Attention Is All You Need" paper that launched the transformer revolution—a milestone in AI history. Despite its Canadian roots, Cohere has faced a dilemma common to many domestic tech firms: when foreign capital arrives and acquisition offers roll in, what should a Canadian founder do? For Gomez, the answer is to build in Canada. This is not a matter of sentimentality. Rather, it is about fueling the domestic tech ecosystem so that local companies are able to leverage AI and launch into international business. Finkelstein, whose company built a successful platform out of Ottawa, echoed this belief at the fireside chat in Toronto.
AI is Finding a Canadian Home
AI infrastructure (also known as an AI stack: the hardware and software needed to create and deploy AI-powered applications and solutions) and its surrounding ecosystem (compute access, data centres, model training, model governance, and so on) are not evenly distributed. They are consolidated in the hands of a few powerful firms around the globe. This means that Canadian businesses are increasingly dependent on foreign service providers and AI marketplaces when it comes to scaling with AI. This is why Gomez and Finkelstein feel strongly about AI remaining in Canada. They are calling for domestic alternatives. The core of their vision involves Canadian-built AI firms, eventually governed by Canadian law, accountable to Canadian regulators, and designed with Canadian values in mind.
Because companies like Cohere remain here, for example, their infrastructure, their partnerships, and their hiring practices influence the domestic AI marketplace. To these founders, keeping AI local means better alignment on privacy and compliance. It also means easier hiring and upskilling, accelerated domestic collaboration, and earlier access to tools. This is what I mean by a geography of AI influence—AI in Canada for Canadians.
The Capital Problem
And yet, their vision is about more than location. At the heart of Gomez and Finkelstein's appeal was a deep criticism of Canadian venture capital. As Gomez put it, too many Canadian investors are conservative. While they are quick to fund proven models, they are slow and hesitant to bet on a Canadian tech future. While their economic conservatism may be rational in a small Canadian market, it can be entirely detrimental to the growth of Canadian AI. Model training requires investment and infrastructure to scale. This is why you, as a Canadian business owner, need to pay attention. The future of Canadian AI will not be determined by billion-dollar IPOs. That is not a reality in Canada. Rather, it will be shaped by how many small- and mid-sized businesses choose to adopt local AI tools, contract with domestic firms, or participate in Canadian pilot programs. If businesses choose only the cheapest or most convenient global option, Canadian AI startups will face the same fate as so many before them: early acquisition, talent drain, and missed opportunity.
A Quiet Call to Action
If there's one thing that should stick with Canadian business owners from this story, it's this: you have more influence over the future of AI in this country than you think. By supporting local AI vendors, engaging with domestic AI development and research, and giving Canadian partners a chance, you have the ability to shape a burgeoning ecosystem. And as the stakes around AI governance, data privacy, and strategy rise, that ecosystem may be one of the most valuable assets Canadian businesses will have.
- There is a New Minister of AI in Canada | voyAIge strategy
There is a New Minister of AI in Canada
What can Canadians Expect?
By Christina Catenacci
May 23, 2025
It has been reported that Prime Minister Mark Carney has recently created a new Ministry in Canada—he has chosen former journalist Evan Solomon to be the new Minister of AI and Digital Innovation. Solomon was elected for the first time in the April 28, 2025 election in the riding of Toronto Centre. Before that, he worked as a broadcaster for both CBC and CTV. Previously, the topic of AI fell under the industry portfolio—in the Trudeau government, the person responsible for something like Bill C-27 (which contained both proposed privacy and AI legislation) was François-Philippe Champagne, who is now responsible for Finance and represents the riding of Saint-Maurice. As Minister of Innovation, Science and Industry from 2021 to 2025, he helped attract major investments into Canada, advanced the development and adoption of clean technologies, strengthened research and development, and bolstered Canada's position in environmental sustainability.
What Will the New AI Minister Do?
As we have recently seen, Prime Minister Carney has announced his single mandate letter with some streamlined top priorities:
Establishing a new economic and security relationship with the United States and strengthening our collaboration with reliable trading partners and allies around the world
Building one Canadian economy by removing barriers to interprovincial trade and identifying and expediting nation-building projects that will connect and transform our country
Bringing down costs for Canadians and helping them to get ahead
Making housing more affordable by unleashing the power of public-private cooperation, catalysing a modern housing industry, and creating new careers in the skilled trades
Protecting Canadian sovereignty and keeping Canadians safe by strengthening the Canadian Armed Forces, securing our borders, and reinforcing law enforcement
Attracting the best talent in the world to help build our economy, while returning our overall immigration rates to sustainable levels
Spending less on government operations so that Canadians can invest more in the people and businesses that will build the strongest economy in the G7
No, AI is not mentioned in there. However, in the preamble of the letter, he touched on AI when he stated:
"The combination of the scale of this infrastructure build and the transformative nature of artificial intelligence (AI) will create opportunities for millions of Canadians to find new rewarding careers – provided they have timely access to the education and training they need to develop the necessary skills. Government itself must become much more productive by deploying AI at scale, by focusing on results over spending, and by using scarce tax dollars to catalyse multiples of private investment."
Who is the New Minister of AI and Digital Innovation—Evan Solomon?
To many, including Ottawa law professor Michael Geist, Evan Solomon is smart and tech savvy—exactly what Canada needs to get the ball rolling on AI. In the past, Solomon hosted Power and Politics on CBC and The House on CBC Radio. He was even considered a possible replacement for Peter Mansbridge on The National. However, CBC terminated him in 2015 after the Star reported that he was taking secret commission payments from wealthy art buyers related to art sales involving people that he dealt with as a host.
Apparently, he took commissions of more than $300,000 for several pieces of art and did not disclose to the buyers that he was being paid fees for introducing buyer and seller. Some of the people that he dealt with included Jim Balsillie and Mark Carney himself. What's more, Solomon's appointment was met with criticism, mostly because he does not have a formal science or tech background, and also because of a mishap in March when he briefly reposted a photoshopped offensive image of Carney from a parody account. In fact, some critics argue that someone who could not identify manipulated content in his own social media feed may struggle to develop effective policies to protect Canadians from increasingly sophisticated AI-generated deception. But he is back now, as AI Minister. He will have a lot of work to do in his new role, and we hope that one thing he does is deal with the introduction of a good-quality Canadian AI law.
What Can we Take from the Mandate Letter?
We heard Prime Minister Carney talk about AI in his election platform, where he promised to make sure Canada takes advantage of the opportunities presented by AI, since it is critical for our competitiveness as the global economy shifts—and for making sure we have a government that actually works. More specifically, he promised to do the following in the area of AI under the build portion of the platform:
Build AI infrastructure. The Prime Minister had planned on investing in nation-building energy infrastructure and cutting red tape to make Canada the best place in the world to build data centres. Canada must have the capacity to deploy the AI of the future and ensure we have technological sovereignty. Also, he planned on building the next generation of data centres quickly and efficiently by leveraging federal funding and partnering with the private sector to secure Canada's technological advantage
Invest in AI training, adoption, and commercialization. The Prime Minister had planned on measuring growth by tracking the economic impacts of AI in real time so we can proactively help Canadians seize new opportunities, boost productivity, and ensure no one is left behind. Also, he planned on boosting adoption with a new AI deployment tax credit for small and medium-sized businesses that incentivizes them to leverage AI to boost their bottom lines, create jobs, and support existing employees. Companies could claim a 20 percent credit on qualifying AI adoption projects, as long as they can demonstrate that they are increasing jobs. Further, he planned on catalyzing commercialization by expanding successful programs at Canada's AI institutes (Mila, Vector, Amii) so that we can connect more Canadian researchers and startups with businesses across the country, which will supercharge adoption of Canadian innovation in businesses, create jobs, and strengthen our AI ecosystem
Improve AI procurement. Prime Minister Carney had planned on establishing a dedicated Office of Digital Transformation at the centre of government to proactively identify, implement, and scale technology solutions and eliminate duplicative and redundant red tape. This will enhance public service delivery for all Canadians and reduce barriers for businesses to operate in Canada, which will grow our economy. This is about fundamentally transforming how Canadians interact with their government, ensuring timely, accessible, and high-quality services that meet Canadians' needs.
Also, he planned on enabling the Office of Digital Transformation to centralize innovative procurement and take a whole-of-government approach to service delivery improvement. This could mean using AI to address government service backlogs and improve service delivery times, so that Canadians get better services, faster.
There were some great ideas in the election platform, and I'm sure that Canadians hope that they will materialize. It is important to note that the priorities identified in the election platform are encouraging, as they will help both government and SMBs in the private sector with tax credits that incentivize businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Businesses could sure use some help with training existing employees via upskilling and reskilling, as well as AI literacy.
With respect to the more general mandate letter that has recently surfaced, it is possible that this means that any additional and prescribed mandate letters to individual Ministers will not be shared with the public. That would be concerning, since public-facing mandate letters became the norm during the Trudeau government. We will have to wait and see on this issue. Moreover, the couple of paragraphs in the mandate letter's preamble suggest that there will be targeted improvements for both the public and private sectors. The letter emphasized training and scaling AI. These goals are lofty, but necessary. But on the whole, things are looking promising given the commitment to build, invest, and improve AI procurement.
What can Canadians Expect?
In my view, it is still too early to tell. But I'm hoping that Prime Minister Carney comes through for Canada. If the government gets this right, Canada could catch up to other jurisdictions like the EU, and become a real leader in AI.
- Americans Feel the Pinch of High Electricity Costs | voyAIge strategy
Americans Feel the Pinch of High Electricity Costs
Data Centres are Draining People's Ability to Pay to Heat or Cool their Homes
By Christina Catenacci, human writer
Oct 17, 2025
Key Points:
American residents are experiencing energy poverty, an inability to afford to keep their homes warm or cool
Though the cost of electricity is based on several factors, a main driver of the spike in energy prices is the energy required to power data centres to meet demand from AI
Many states are pushing back (passing laws or reaching settlements with large tech companies) in order to keep prices fair for residents
According to CBS News, the cost of electricity has increased from $0.14 per kilowatt hour in 2019 to $0.18 per kilowatt hour in 2024—an increase of more than 28.5 percent (a quick arithmetic check appears below). The result: the average American is now paying nearly $300 a month just in utilities. This phenomenon is referred to as energy poverty.
Why is this happening? To be sure, the cost of electricity is based on several factors, including volatile prices for natural gas, wildfire risk, electricity transmission and distribution, regulations, and inflation. That being said, there is also the heat—rising temperatures fuel extreme weather events, such as heat waves in the summer and snowstorms in the winter, which then increase energy consumption as people try to keep their homes warm or cool. Climate change has only exacerbated the frequency and intensity of these extreme weather events. But there are also data centres. In fact, data centers are projected to consume up to 12 percent of American electricity within the next three years.
How is this happening? Simply put, the expansion of the power-hungry data centers required to support the surge in AI usage is a main factor in the rising costs that Americans are experiencing. As a consequence, American states are feeling the pressure to act. However, it is unclear whether any state has a solution to the issue of data centers wreaking havoc on people's electricity bills. Many have noted that answering this question may require taking a hard line against large tech companies that are rapidly investing in and using a large number of data centres. To be clear, we are talking about data centres that may require more electricity than entire cities—large factories would pale in comparison. Ari Peskoe, who directs the Electricity Law Initiative at Harvard University, states:
"A lot of this infrastructure, billions of dollars of it, is being built just for a few customers and a few facilities and these happen to be the wealthiest companies in the world. I think some of the fundamental assumptions behind all this just kind of breaks down"
In fact, Peskoe suggests that this could open a can of worms that pits ratepayer classes against each other. Moreover, Tricia Pridemore, who sits on Georgia's Public Service Commission and is president of the National Association of Regulatory Utility Commissioners, noted that there is already a tightened electricity supply and increasing costs for power lines, utility poles, transformers, and generators as utilities replace aging equipment or harden it against extreme weather. Pridemore mentioned that the data centers required to deal with the AI boom are still in the regulatory planning stages. But it is important to keep in mind that unless utilities negotiate higher specialized rates, other ratepayer classes (residential, commercial, and industrial) are likely paying for data center power needs.
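As a quick check of the CBS News price figures cited at the start of this article, here is a minimal sketch (the two rates are from the article; the rest is plain arithmetic):

```python
# Verify the reported rise in the average US electricity price, 2019 -> 2024.
price_2019 = 0.14  # USD per kWh (CBS News figure)
price_2024 = 0.18  # USD per kWh (CBS News figure)

pct_increase = (price_2024 - price_2019) / price_2019 * 100
print(f"Increase: {pct_increase:.1f}%")  # Increase: 28.6% -- i.e., "more than 28.5 percent"
```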
For now, there is recent research conducted by Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, showing that 70 percent, or $9.3 billion, of last year's increased electricity cost was the result of data center demand.
How are states responding? In short, five governors led by Pennsylvania's Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after the amount spiked nearly sevenfold. So far, PJM Interconnection has not proposed ways that would guarantee that data centres pay their fair share. On the other hand, Monitoring Analytics is floating the idea that data centers should be required to procure their own power. The company likened residents' payments for electricity to a "massive wealth transfer" from average people to tech companies.
In addition, at least 12 states are considering ways to make data centers pay higher local transmission costs. For example, in Oregon, a law was passed in June that orders state utility regulators to develop new power rates for data centers. The Oregon Citizens' Utility Board has said that there is clear evidence that costs going to serve data centers are being spread across all customers at a time when some electric bills are up 50 percent over the past four years. By way of another example, New Jersey's governor signed legislation last month commissioning state utility regulators to study whether ratepayers are being hit with "unreasonable rate increases" in order to connect data centers, and to develop a specialized rate to charge data centers.
Some states are trying to reach settlements. For example, in Indiana, state utility regulators approved a settlement between Indiana Michigan Power Co., Amazon, Google, Microsoft, and consumer advocates that set parameters for data center payments for service. In Pennsylvania, the state utility commission is drafting a model rate structure for utilities to consider adopting.
While it is important for utilities and states to attract big customers like data centres, it is necessary to appreciate what is fair; transmission upgrades and other similar initiatives could cost millions of dollars, and it is not fair to put it all on residents. Large AI companies will need to take the above discussion into consideration when making plans to expand and add power-hungry data centres. They should anticipate being approached by various states seeking fair settlements so that the cost of energy is not transferred entirely to residents.
- How an Infrastructure Race is Defining AI’s Future | voyAIge strategy
How an Infrastructure Race is Defining AI's Future
Why Nvidia's $100B investment in OpenAI signals a shift every business leader must understand
By Tommy Cooke, powered by really great espresso
Sep 26, 2025
Key Points:
1. Access to compute, not software features, will determine who can compete in AI
2. Vendor entanglement may speed adoption but increases dependency and lock-in risks
3. The AI arms race is accelerating, shrinking the competitive window for differentiation
Nvidia has committed up to US$100 billion in a staged investment into OpenAI, with the funds intended to build massive AI data centres powered by Nvidia's own chips. The first deployments are scheduled for 2026, with each dependent on new infrastructure coming online. On the surface, this is a story about one company betting big on another. But if you are a business leader, it signals something deeper. It means that access to compute power (the chips, servers, and energy needed to run AI) will continue to determine who can compete, how fast they can innovate, and whether they can deliver reliable AI products to clients. Therefore, if you are building, selling, or integrating AI, your advantage is no longer defined by software features alone. It is defined by whether you can access and afford the infrastructure that makes those features possible.
Compute as the New Moat
OpenAI has said bluntly: "everything starts with compute". Nvidia's investment proves the point. Frontier AI models are limited not by imagination but by access to chips, data centres, and power. For businesses, this flips the equation. Software can be replicated, but compute capacity cannot be conjured overnight. The companies that secure infrastructure will enjoy a durable moat. This means faster model training, better uptime, and the ability to scale globally. Those without access risk being left behind—no matter how strong their ideas or datasets.
Vendor Financing at Unprecedented Scale
This deal is also significant to us as business leaders because it blurs the line between supplier and customer. Nvidia is both investing in OpenAI and guaranteeing that OpenAI's infrastructure will be built on Nvidia hardware. Some analysts call it vendor financing at an unprecedented scale. The lesson for business leaders is twofold:
First, expect suppliers to become more embedded in clients' strategic direction, offering capital and integration alongside products
Second, recognize the risk. Deeper vendor entanglement often accelerates adoption but reduces bargaining power. Tech vendors who become dependent on a single infrastructure partner may find themselves locked into costs and roadmaps that they cannot control
Capital Intensity as a Barrier to Entry
A single gigawatt of Nvidia systems may cost US$35 billion in hardware alone. This makes clear that the frontier of AI is not just technologically complex. It is financially punishing. For most organizations, the takeaway is not to match Nvidia or OpenAI dollar-for-dollar. Rather, it is to understand that capital intensity itself is now a barrier to entry. Competing at the frontier requires access to extraordinary financial and infrastructure resources. Vendors and enterprises need to calibrate their vision, invest in the right scale of AI for their market, partner strategically where necessary, and focus on ROI-driven deployments rather than chasing the biggest models.
Regulatory and Market Risks
Nvidia already dominates the global AI chip market.
Adding a deep financial stake in OpenAI potentially raises antitrust concerns about preferential access and market distortion. Governments are watching closely. To you, the business leader, this matters because regulation could reshape market dynamics in ways that affect everyone. Just as governments regulated telecom and energy to ensure fair access, AI infrastructure could face new rules that mandate openness, limit exclusivity, or scrutinize vertical integrations. Leaders must anticipate these shifts and avoid strategies that depend on fragile or privileged vendor relationships.
The Acceleration Effect
Perhaps the most significant implication is this: the investment accelerates the AI arms race. By de-risking OpenAI's infrastructural future, Nvidia is ensuring that larger models can be trained and deployed faster, compressing innovation cycles from years to months. For businesses, the competitive window is shrinking. The pace of AI progress means that differentiators based solely on early adoption will fade quickly. Staying competitive will require constant reinvestment and operational agility—not just one-time pilots.
What Leaders Should Do Now
We recommend that leaders do the following:
Treat Infrastructure as Strategy. AI isn't just software. It depends on access to compute, bandwidth, and energy. Executives must recognize infrastructure as a strategic variable, not an IT detail
Diversify Dependencies. Relying on a single vendor—whether for chips, cloud, or capital—is a risk. Explore multi-cloud strategies, alternative hardware, and hybrid deployments
Negotiate Beyond Cost. Vendor agreements should secure more than price. Push for supply guarantees, roadmap visibility, and exit flexibility
Anticipate Regulation. Monitor antitrust and AI policy developments. Regulation may alter vendor dynamics and market access
Build Literacy. Equip your teams with an understanding of latency, scaling costs, and compute economics. The winners will be those who can align AI ambition with operational reality
Focus on the Bigger Picture
Nvidia's $100 billion bet is more than a financial deal. It is a signal that AI's future will be shaped by who controls the foundations of compute. For business leaders, the message is clear: innovation, product design, and customer experience flow from infrastructure. The AI market will not be won by those with the cleverest algorithms alone, but by those who can reliably access the chips and data centres that make those algorithms work at scale. This is why the infrastructure race matters, not only to Nvidia and OpenAI, but to every vendor and enterprise hoping to compete in the AI-driven economy.
- L&E Analysis: What is Neural Privacy, and Why is it Important? | voyAIge strategy
L&E Analysis: What is Neural Privacy, and Why is it Important?
More US States are Regulating it
By Christina Catenacci
Mar 14, 2025
Legal & Ethical Analysis: Issue 1°

Key Points
Neural data is very sensitive and can reveal a great deal about a person
The law is starting to catch up to the tech and the ethicists’ concerns
In North America, California and Colorado are leading the way when it comes to creating mental privacy protections in relation to neurotechnology

Neural privacy is a hot topic, but what is it? Generally speaking, neural data is information that is generated by measuring activity in a person’s central or peripheral nervous systems, including brain activity (seen in EEGs, fMRIs, or implanted devices); signals from the peripheral nervous system (such as nerves that extend from the brain and spine); and data that can be used to infer mental states, emotions, and cognitive processes. Interestingly, this data has been used to create artificial neural networks. For instance, machine vision can be used to identify a person’s emotions by analyzing their facial expressions.

Some may be surprised to learn that many types of neurotechnology (neurotech) exist today. But what is neurotech? Neurotechnology bridges the gap between neuroscience, the scientific study of the nervous system, and technology. The goal of neurotech is to understand how the brain can be enhanced by technological advancements in order to create applications that improve both brain function and overall human health. In fact, some characterize this growing area as “a thrilling glimpse into the potential of human ingenuity to transform lives”. Others have noted that neurotechnology, combined with the explosion of AI, opens up a world of infinite possibilities. One simple way of explaining neurotech is to divide it into two categories: invasive (such as implants) and non-invasive (such as wearables). More specifically, invasive neurotech is mostly used in medicine to treat neurological disorders such as Parkinson’s disease.

Neural privacy has to do with being confident that we have control over access to our own neural data and to information about our mental processes. This article delves into the law of neural privacy and the ethics of neurotech.

The Law Involving Neural Privacy
In the United States, there has been a flurry of activity in this regard. Why is neural privacy important? Essentially, neural data is very sensitive personal data, as it can reveal thoughts, emotions, and intentions. Certain parties have a lot to gain if they are privy to this information (think of employers, insurers, or law enforcement), and this access could affect how employees work, how individuals apply for insurance coverage, and how citizens engage with their societies. Another aspect is data ownership: who owns one’s thoughts? Some may believe that this question is for the distant future, but it is worth mentioning that Neuralink has already had its first human patient use a brain implant to play online chess. It is here already!

This may be why the UN Special Rapporteur on the right to privacy has recently set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy.
More precisely, the UN Report deals with key definitions and establishes fundamental principles to guide regulation in this area, including the protection of human dignity, the safeguarding of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for the processing of this data. Emphasis is placed on the inclusion of ethical values and the protection of human rights in the design of these technologies.

While Canada has not yet legislated on mental privacy, the United States has in the following jurisdictions:

California: the California Consumer Privacy Act (CCPA) was amended by SB 1223, which added “neural data” to the definition of sensitive personal information and defined it as “information that is generated by measuring the activity of a consumer’s central or peripheral nervous system, and that is not inferred from nonneural information”. Governor Newsom has already approved this amendment. California also has two new bills: SB-44 (Neural Data Protection Act), which would deal with brain-computer interfaces and govern the disclosure of medical information by an employer, a provider of health care, a health care service plan, or a contractor, adding new protections for neural data; and SB-7 (Automated Decision Systems (ADS) in the Workplace), which would require an employer, or a vendor engaged by the employer, to give written notice that an ADS is in use at the workplace for the purpose of making employment-related decisions to all workers who would be directly or indirectly affected by it

Colorado: the Colorado Privacy Act was amended by HB 24-1058, which defines “neural data” as “information that is generated by the measurement of the activity of an individual’s central or peripheral nervous systems and that can be processed by or with the assistance of a device” and adds neural data to the definitions of biological data and sensitive data. This has already been signed into law

Connecticut: SB 1356 (An Act Concerning Data Privacy, Online Monitoring, Social Media, and Data Brokers) is a bill that would amend the Connecticut Data Privacy Act, define “neural data” as “any information that is generated by measuring the activity of an individual’s central or peripheral nervous system”, and include it in the definition of sensitive data

Illinois: HB 2984 is a bill that would amend the Biometric Information Privacy Act, define “neural data” as “information that is generated by the measurement of activity of an individual’s central or peripheral nervous system, and that is not inferred from non-neural information”, and add neural data to the definition of biometric identifier

Massachusetts: HD 4127 (Neural Data Privacy Protection Act) is a bill that would define “neural data” as “information that is generated by measuring the activity of an individual’s central or peripheral nervous system, and that is not inferred from non-neural information” and include it in the definition of sensitive covered data. This is a significant step, since Massachusetts has no comprehensive consumer privacy law at this point

Minnesota: SF 1240 is a bill that would not amend the state’s consumer privacy legislation but would instead stand alone, providing a right to mental data and setting out neurotech rights concerning brain-computer interfaces.
It would begin to apply in August 2025

Vermont: there are three bills involving neural data protection: H210 (Age-Appropriate Design Code Act), H208 (Data Privacy and Online Surveillance Act), and H366 (An Act Relating to Neurological Rights). In a nutshell, these bills would define “neural data” as “information that is collected through biosensors and that could be processed to infer or predict mental states”, provide individuals with the right to mental or neural data privacy, protect minors specifically, and create a comprehensive consumer privacy bill that includes specific protections for neural data

Clearly, it is becoming more important to enact mental or neurological privacy protections for neurotech and automated decision-making systems. In North America, these states are leading the way, whether by adding provisions to their consumer privacy legislation or by creating standalone statutes, and they could influence the direction of legislation in both Canada and the rest of the United States.

Ethics of Neurotechnology
Let us begin this discussion with the question: why is neural data unique? Simply put, neural data is not just a phone number or a person’s age. It is very sensitive and can reveal much more about a person. This is why Cooley lawyers refer to it as a kind of digital “source code” for an individual, potentially uncovering thoughts, emotions, and even intentions: “From EEG readings to fMRI scans, neural data allows insights into neural activity that could, in the future, decode neural data into speech, detect truthfulness or create a digital clone of an individual’s personality”

Several thinkers have asked what needs to be protected. For example, the Neurorights Foundation tackles the issue of human rights for the age of neurotech. It advocates for promoting innovation, protecting human rights, and ensuring the ethical development of neurotech. The foundation has produced a number of research reports, including Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies, which analyzed the data practices and user rights of consumer neurotechnology products. The report identified several areas of concern: access to information, data collection and storage, data sharing, user rights, and data safety and security. It concluded that the consumer neurotechnology space is growing at a rate that has outpaced research and regulation, and that most existing neurotechnology companies do not adequately inform consumers or protect their neural data from misuse and abuse. The report was created so that companies and investors can appreciate the specific further measures that are needed to responsibly expand neurotechnology into the consumer sphere.

Additionally, UNESCO has pointed out that several innovative neurotechnology techniques, such as brain stimulation and neuroimaging, have changed the face of our understanding of the nervous system. Neurotechnology has helped us address many challenges, especially in the context of neurological disorders; however, there are also ethical issues, particularly with non-invasive interventions. For example, neurotechnology can directly access, manipulate, and emulate the structure of the brain, producing information about our identities, our emotions, and our fears.
If you combine this neurotech with AI, there can be a threat to notions of human identity, human dignity, freedom of thought, autonomy, (mental) privacy, and well-being. UNESCO states that the fast-developing field of neurotechnology is promising, but we need a solid governance framework: “Combined with artificial intelligence, these techniques can enable developers, public or private, to abuse of cognitive biases and trigger reactions and emotions without consent. Consequently, this is not a technological debate, but a societal one. We need to react and tackle this together, now!”

In fact, UNESCO has drafted a Working Document on the Ethics of Neurotechnology, which includes a discussion of the following ethical principles and human rights:

Beneficence, proportionality, and do no harm: Neurotechnology should promote health, awareness, and well-being, and empower individuals to make informed decisions about their brain and mental health while fostering a better understanding of themselves. That said, restrictions on human rights need to adhere to legal principles, including legality, legitimate aim, necessity, and proportionality

Self-determination and freedom of thought: Throughout the entire lifecycle of neurotechnology, the protection and promotion of freedom of thought, mental self-determination, and mental privacy must be secured. That is, neurotechnology should never be used to exert undue influence or manipulation, whether through force, coercion, or other means that compromise cognitive liberty

Mental privacy and the protection of neural data: With neural data, there is a risk of stigmatization and discrimination, and of revealing neurobiological correlates of diseases, disorders, or general mental states without the authorization of the person from whom the data is collected. Mental privacy is fundamental to the protection of human dignity, personal identity, and agency. The collection, processing, and sharing of neural data must be conducted with free and informed consent, in ways that respect the ethical and human rights principles outlined by UNESCO, including safeguarding against the misuse or unauthorized access of neural and cognitive biometric data, particularly in contexts where such data might be aggregated with other sources

Trustworthiness: Neurotechnology systems for human use should always ensure trustworthiness across their entire lifecycle to guarantee the respect, promotion, and protection of human rights and fundamental freedoms. This requires that these systems do not replicate or amplify biases; are transparent, traceable, and explainable; are grounded in solid scientific evidence; and define clear conditions for responsibility and accountability

Epistemic and global justice: Public awareness of brain and mental health, understanding of neurotechnology, and appreciation of the importance of neural data should be promoted through open and accessible education, public engagement, training, capacity-building, and science communication

Best interests of the child and protection of future generations: It is important to balance the potential benefits of enhancing cognitive function through neurotechnology, for early diagnosis, instruction, education, and continuous learning, against a commitment to the holistic development of the child.
This includes nurturing their social life, fostering meaningful relationships, and promoting a healthy lifestyle encompassing nutrition and physical activity

Enjoying the benefits of scientific-technological progress and its applications: Access to neurotechnology that contributes to human health and wellbeing should be equitable, and the benefits of these technologies should be fairly distributed across individuals and communities globally

The document also touches on areas outside health, such as employment. As neurotechnology evolves and converges with other technologies in the workplace, it presents unique opportunities and risks in labour settings. It is necessary to develop policy frameworks that protect employees’ mental privacy and right to self-determination while also promoting their health and wellbeing, balancing the potential for human flourishing with the imperative to safeguard against practices that could infringe on mental privacy and dignity.

In Four ethical priorities for neurotechnologies and AI, the authors discussed AI and brain-computer interfaces and explored four ethical concerns with respect to neurotech:

Privacy and consent: an extraordinary level of personal information can already be obtained from people’s data trails, which is how the authors frame this concern. They stress that individuals should have the ability and the right to keep their neural data private, and that the default choice needs to be “opt out”

Agency and identity: the authors assert that as neurotechnologies develop and corporations, governments, and others start striving to endow people with new capabilities, individual identity (bodily and mental integrity) and agency (the ability to choose our actions) must be protected as basic human rights

Augmentation: there will be pressure to enhance ourselves, such as by adopting neurotechnologies that allow people to radically expand their endurance or their sensory or mental capacities. This will likely change societal norms, raise issues of equitable access, and generate new forms of discrimination. The authors note that outright bans of certain technologies could simply push them underground; thus, decisions must be made within a culture-specific context, while respecting universal rights and global guidelines

Bias: a major concern is that biases could become embedded in neural devices. It is therefore necessary to develop countermeasures to combat bias and to include probable user groups (especially those who are already marginalized) in the design of algorithms and devices, so that biases are addressed from the first stages of technology development

The paper also touched on the need for responsible neuroengineering: there was a call for industry and academic researchers to take on the responsibilities that come with devising these devices and systems. The authors suggested that researchers draw on existing frameworks that have been developed for responsible innovation.

In Philosophical foundation of the right to mental integrity in the age of neurotechnologies, the author equated neurorights such as mental privacy, freedom of thought, and mental integrity with basic human rights. The author developed a philosophical foundation for a specific right, the right to mental integrity, which included both the classical concepts of privacy and non-interference in our mind/brain.
In addition, the author considered a philosophical foundation based on certain features of the mind that cannot be reached directly from the outside: intentionality, the first-person perspective, personal autonomy in moral choices and in the construction of one’s narrative, and relational identity. The author asserted that a variety of neurotechnologies or other tools, including AI, alone or in combination, could, by their very availability, threaten our mental integrity. To that end, the author proposed philosophical foundations for a right to mental integrity that encompass both privacy and protection from direct interference in mind/brain states and processes. These foundations focus on aspects that are well known within the philosophy of mind but not commonly considered in the literature on neurotechnology and neurorights. Intentionality, the first-person perspective, moral choice, and the construction of one’s identity are concepts and processes that need as precise a theoretical definition as possible. The author stated: “In our perspective, such a right should not be understood as a guarantee against malicious uses of technologies, but as a general warning against the availability of means that potentially endanger a fundamental dimension of the human being. Therefore, the recognition of the existence of the right to mental integrity takes the form of a necessary first step, even prior to its potential specific applications”

In Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought, the author stated that progress in neurotechnology and AI has provided unprecedented insights into the human brain, along with increasing opportunities to influence and measure brain activity. These developments raise several legal and ethical questions. The author argued that the right to freedom of thought can be coherently interpreted as providing comprehensive protection of mental processes and brain data, which could offer a normative basis for the use of neurotechnologies. Moreover, an evolving interpretation of the right to freedom of thought is more convincing than introducing a new human right to mental self-determination.

What Can We Take from These Developments?
Undoubtedly, ethicists have spent a considerable amount of time thinking about mental privacy in the age of neurotech, and about exactly what is at risk if there are no privacy and AI protections in place. Fortunately, the law is starting to catch up to the tech and the ethicists’ concerns. For example, California and Colorado have already enacted provisions to add to their consumer privacy statutes, and more bills have been introduced to address the issues.