
Search Results


  • Compliance | voyAIge strategy

AI policies and frameworks to help your organization meet legal and ethical standards.

Compliance

At voyAIge strategy, compliance is a foundation for our analysis of the legal, policy, and ethical dimensions of AI. We understand the intricacies of the laws of many jurisdictions and can guide you through every step of your compliance journey. Today's rapidly evolving digital landscape is fueled by the exponential rate at which AI transforms not just business practices and ways of seeing but entire industries. With innovation, however, come new challenges - particularly in compliance. Governments and regulatory bodies around the world are racing to keep up, creating complex legal requirements with which businesses must comply. For businesses, navigating this complexity is not just about avoiding fines and penalties. It's about safeguarding reputation, building trust with stakeholders, and ensuring sustainability.

What's Your Compliance Challenge?

Understanding jurisdiction, sector, applicable legislation, data types, and data flows are some of the many considerations we take into account when identifying regulatory bodies relevant to your organization. The following are some examples that may apply to a business now or in the future:

GDPR - The General Data Protection Regulation applies to all businesses with clients and customers in Europe
NYC LL 144 - New York City regulates how businesses can and cannot use AI to assist in hiring employees
AI Act - Regulates and governs the use of all artificial intelligence inside the European Union
FDA - The US Food and Drug Administration maintains specific AI regulations governing medical devices, evidence curation, and market monitoring
CCPA - The California Consumer Privacy Act has a significant impact on how AI systems used in businesses handle data
AIDA - Canada's proposed Artificial Intelligence and Data Act would place stringent requirements on Canadian organizations using AI
OESA - Governs employee monitoring as well as using AI to recruit employees in Ontario, Canada
SB 1047 - California's Safe and Secure Innovation for Frontier AI Models Act imposes safety restrictions on advanced AI
PIPEDA - Canada's Personal Information Protection and Electronic Documents Act governs how companies collect, use, and share personal information

Did You Know?

Fines can reach up to €30 million or 6% of global annual turnover for prohibited AI practices under the EU AI Act. Amazon was fined €746 million by Luxembourg's data protection authority for how it processed personal data for targeted advertising using AI-driven systems. Canada's proposed Artificial Intelligence and Data Act plans to impose administrative monetary penalties of up to $10 million or 3% of global annual revenue - including criminal penalties such as jail time for AI decisions causing significant harm.

Expert Insights, Experience & Resources

Book a free consultation to chat with us about correctly and accurately identifying compliance and regulation applicable to your organization. Stay informed by subscribing to our VS-AI Observer Substack, where we offer articles, whitepapers, case studies, and video content that will keep your organization ahead of emerging compliance challenges, requirements, and issues.

  • Disclaimer and Terms of Use | voyAIge strategy

Disclaimer and Terms of Use - Our Policies for Working with our Clients

  • HOME w/ insights | voyAIge strategy

Navigate your AI Journey with Confidence

Innovative Risk Solutions | Strategic AI Management | Law, Policy & Ethics Expertise

Trending Insights: Trump Signs Executive Order on AI; Legal Tech Woes; Meta Wins the Antitrust Case Against It

Our Services

Policy & Procedure - Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant. For organizations that need to establish new or update existing AI governance frameworks. Benefits organizations by ensuring that AI is used safely and ethically within set guidelines. Example: an insurance company adopts our crafted policies to govern and manage their AI claims.

Compliance - Ensure your AI systems adhere to all legal standards with our thorough compliance review, which aims to detect and minimize risk while enhancing trustworthiness. For organizations that need to ensure their AI systems comply with regulatory standards or anticipate forthcoming legislative changes. Benefits organizations by reducing legal, financial, and reputational risk while enhancing organizational clarity. Example: a healthcare provider utilizes our service to align their patient data processing AI tools with PHIPA.

Research & Writing - We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs. For organizations that require in-depth analysis of AI topics. Benefits organizations by providing expert insight and professionally written content to support decision-making. Example: an app developer contracts us to create a market analysis report detailing AI advancements and emerging industry trends.

Invited Talks - We engage your audiences with unique viewpoints that demystify the complex landscape of scholarly, political, popular, and media understandings of AI. For organizations that need an expert spokesperson on AI for corporate or public engagement events. Benefits organizations by educating audiences with the latest insights in AI ethics, law, and policy. Example: our founders deliver a keynote on AI trends in neighbouring countries so as to prepare Canadians for forthcoming AI legislation in 2025.

Executive & Staff Training - Empower your team with the latest AI knowledge as well as legal and ethical perspectives, designed to enhance and extend AI decision-making. For organizations that are aiming to elevate their team's understanding and capabilities in AI. Benefits organizations by boosting AI literacy and enhancing both strategic and operational capacities. Example: a multinational corporation uses our services to enhance their annual leadership development program.

Impact Assessments - We take deep dives into your privacy policies as well as data and AI operations to uncover and resolve otherwise hidden risks and biases. For organizations that want to understand the broader implications of AI projects. Benefits organizations by identifying impacts on employees, customers, and stakeholders as well as operational processes. Example: we assess the impact of using AI in public service delivery of a location-based asset tracking system.

Ethical AI Playbook - Navigate the complexities of AI with confidence as we design your own guide to implementing and strategically responding to AI issues effectively. For organizations that are mindful of how their employees, customers, and stakeholders think of AI's impacts. Benefits organizations by guiding teams in ethical decision-making, fostering trust and transparency. Example: we create a customized playbook focusing on perceived labour reduction implications around AI adoption.

AI Solution Scoping - Our team assesses your organization's needs, pain points, and opportunities. Work with us to discover the right AI solution for you. For organizations that are exploring potential AI solutions to address specific operational challenges. Benefits organizations by clarifying the feasibility, scope, and value of AI solutions in alignment with business objectives. Example: a retailer engages us to scope AI solutions for automating inventory replenishment.

Stakeholder Engagement - Maximize AI adoption and AI project success as we assist you in ensuring all parties are informed, involved, and invested from the outset. For organizations that require buy-in from internal and external stakeholders. Benefits organizations by ensuring that parties are informed, engaged, and supportive of your AI initiative. Example: a software company utilizes our service to facilitate workshops that bring together developers and end-users in the AI adoption process.

Our Products

AI Launch Packages - Kickstart your AI implementation with: tools and resources for pre-, during-, and post-AI deployment; compliance and ethics frameworks; and expert guidance on best practices and strategies.

Who We Are

We are lawyers and professors with over 20 years of experience in advising private, public, and non-profit organizations on the intersection of technology with law, ethics, and policy. Dr. Tommy Cooke, BA, MA, PhD, Co-Founder. Dr. Christina Catenacci, LLM, LLB, PhD, Co-Founder.

Why Hire Us?

Canada's Artificial Intelligence and Data Act (AIDA) will launch in 2025. It will place stringent requirements on Canadian organizations using AI. Organizations will require expert guidance to prepare for AIDA.

Contact Us

  • Governing AI by Learning from Cohere’s Mistakes | voyAIge strategy

Governing AI by Learning from Cohere's Mistakes

Why it is Crucial to Demonstrate Control

By Tommy Cooke, powered by caffeine, curiosity, and a strong desire for sunny weather

Mar 7, 2025

Key Points:

AI governance is essential because it ensures organizations maintain transparency, accountability, and oversight over how AI systems are trained, deployed, and used
Leaders must proactively assess where AI models source their data, ensuring compliance with intellectual property laws and mitigating risks related to unauthorized content use
We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control. This is how we build trust with stakeholders

In my 20s, I spent a lot of time traveling to Germany to visit a close friend. He had an early GPS unit in his car. It was an outdated system that relied on CDs for updates and generated very blurry little arrows on a tiny screen nowhere near the driver's eyes. On one trip, we thought we were heading west of Frankfurt to a favourite restaurant. Instead, we ended up 75 kilometers south. We found ourselves sitting in his car at a dead end, with his high beams on, staring into a dark farmer's field. The system led us astray because we over-relied on old data inside a brand-new consumer technology.

When I speak with leaders adopting AI for the first time, I often think of getting lost in the rural German countryside. AI, like early GPS, promises efficiency, but its reliability depends entirely on its data. Organizations are under pressure to adopt AI to streamline operations, reduce costs, and drive creativity. But AI isn't a magic bullet. It's a tool. And like an unreliable GPS, AI trained on flawed or unauthorized data can take your organization in the wrong direction. It relinquishes control.

Cohere, a major AI company in Canada, is facing a significant lawsuit over how it trained its models; the plaintiffs allege that Cohere used copyrighted content without permission or compensation. This case is one you should know about because it's more than just a legal battle. It's a reminder: AI adoption isn't just about capability. It's about building and maintaining responsible control. So, how exactly do you ensure you are in control? The answer begins and ends with an ethical strategy.

The Ethical Fault Line

The lawsuit against Cohere, a Toronto-based AI company, highlights the growing tension between AI developers and content creators. Major media organizations allege that AI companies are scraping and reproducing their content without consent or compensation. This raises a critical question: Who controls knowledge in the era of AI? This isn't just a tech industry issue—it's a governance challenge with real consequences. AI systems generate content, provide insights, and automate decisions based on their underlying data. If that data is sourced irresponsibly—such as using newspaper articles without publisher consent—organizations risk reputational harm, legal liability, and a breakdown of trust with employees, customers, and industry partners.

Lessons for AI Leaders: How to Stay on the Right Side of AI Ethics

As AI continues to reshape industries, its impact will depend on how it is developed and deployed. Business leaders don't need to be AI engineers, but they do need to ensure that they are using AI transparently. Here's why:

Transparency is the Foundation of Trust. AI should not be a "black box"—a technology that operates mysteriously without clear explanation. Leaders need visibility into how AI works, what data it uses, and what safeguards are in place. This means two things: first, working with AI vendors to receive clear documentation on data sources and model behaviour. If a company can't explain how its AI makes decisions, that's a red flag. Second, leaders need a communication strategy—something that they can reference to explain AI's role to any stakeholder.

Respect Intellectual Property from the Start. Whether using AI to generate content, analyze trends, or assist in decision-making, stakeholders expect AI leaders to account for where AI data comes from. If an organization uses internal data from sales reports, for example, this needs to be documented. If outsourcing data from a third-party vendor, it's not enough to say that the data is external—leaders must be able to confirm the vendor's ownership and rights to that data.

Governing AI Is Not Optional. Responsible AI use requires a governance framework. Companies need clear policies that define how AI is trained, where data comes from, and how the system and its outputs are monitored. Think of AI governance like driving a car: just as drivers follow traffic laws and speed limits, AI systems require rules to ensure safe and ethical operation. AI governance is a business strategy that demonstrates commitment to legal, compliant, and ethical AI development—ensuring transparency, explainability, and accountability.

Ethical AI is an Advantage, Not a Financial Burden

Much as a driver is expected to maintain control of their vehicle, abide by the rules, and ensure their own and others' safety, building trust with passengers and other drivers by continuously demonstrating that control, the same holds true for AI ethics. We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control.

  • A Geography of AI Influence | voyAIge strategy

A Geography of AI Influence

By Tommy Cooke, fueled by summer sunshine

Jul 11, 2025

Key Points:

Cohere and Shopify's leaders are calling on Canadian tech talent to stay and build domestically, framing it as a strategic imperative rather than a sentimental one
The future of Canadian AI won't hinge on billion-dollar exits, but on whether small- and mid-sized businesses adopt, invest in, and champion local AI tools
Business owners hold more power than they realize to shape Canada's AI ecosystem—through the partners they choose, the tools they deploy, and the future that they help fund

It's not often that a co-founder of one of the world's most advanced AI companies shares the stage with the president of Canada's most iconic tech firm and calls on the next generation to build in Canada. But that's exactly what happened last week in Toronto. At a fireside chat during Tech Week, Cohere's Aidan Gomez and Shopify's Harley Finkelstein made a public plea. They urged Canadian tech talent to stay and build at home, rather than taking their talent beyond Canada's borders. For Canadian business owners, whether you're running a startup or scaling a service firm, this message isn't someone else's rallying cry. It's a turning point that could change how, and where, AI is built, accessed, and governed. It matters because the AI ecosystem that emerges in Canada over the next two years could determine Canada's short- to medium-term economic outlook, not to mention the future of the AI industry in Canada.

A Geography of AI Influence

For those who are unfamiliar, Cohere is not a modest startup. It's one of the leading global competitors to OpenAI, building powerful large language models (LLMs) that power tools like conversational AI, search, and workflow automation. Headquartered in Toronto, Cohere has emerged as one of the most serious Canadian contenders in the AI space. Gomez, its co-founder, was one of the authors of the famous 2017 "Attention Is All You Need" paper that launched the transformer revolution—a milestone in AI history. Despite its Canadian roots, Cohere has faced a dilemma common to many domestic tech firms: when foreign capital arrives and acquisition offers roll in, what should a Canadian founder do? For Gomez, the answer is to build in Canada. This is not a matter of sentimentality. Rather, it is about fueling the domestic tech ecosystem so that local companies are able to leverage AI and launch into international business. Finkelstein, whose company built a successful platform out of Ottawa, echoed this belief at the fireside chat in Toronto.

AI is Finding a Canadian Home

AI infrastructure (also known as the AI stack: the hardware and software needed to create and deploy AI-powered applications and solutions), together with its ecosystem (compute access, data centres, model training, model governance, and so on), is not evenly distributed. It's consolidated in the hands of a few powerful firms around the globe. This means that Canadian businesses are increasingly dependent on foreign service providers and AI marketplaces when it comes to scaling with AI. This is why Gomez and Finkelstein feel strongly about AI remaining in Canada. They are calling for domestic alternatives. The core components of their vision involve Canadian-built AI firms that are eventually governed by Canadian law, accountable to Canadian regulators, and designed with Canadian values in mind.
Because companies like Cohere remain here, for example, their infrastructure, their partnerships, and their hiring practices influence the domestic AI marketplace. To these founders, keeping AI local means having better alignment on privacy and compliance. It also means easier hiring and upskilling, accelerated domestic collaboration, and earlier access to tools. This is what I mean by a geography of AI influence—AI in Canada for Canadians.

The Capital Problem

And yet, their vision is about more than location. At the heart of Gomez and Finkelstein's appeal was a deep criticism of Canadian venture capital. As Gomez put it, too many Canadian investors are conservative. While they are quick to fund proven models, they are slow and hesitant to bet on a Canadian tech future. While their economic conservatism may be rational in a small Canadian market, it can be entirely detrimental to the growth of Canadian AI. Model training requires investment and infrastructure to scale. This is why you as a Canadian business owner need to pay attention. The future of Canadian AI will not be determined by billion-dollar IPOs. That is not a reality in Canada. Rather, it will be shaped by how many small- and mid-sized businesses choose to adopt local AI tools, contract with domestic firms, or participate in Canadian pilot programs. If businesses choose only the cheapest or most convenient global option, Canadian AI startups will face the same fate as so many before them: early acquisition, talent drain, and missed opportunity.

A Quiet Call to Action

If there's one thing that should stick with Canadian business owners from this story, it's this: you have more influence over the future of AI in this country than you think. By supporting local AI vendors, engaging with domestic AI development and research, and giving Canadian partners a chance, you have the ability to shape a burgeoning ecosystem. And as the stakes around AI governance, data privacy, and strategy rise, that ecosystem may be one of the most valuable assets Canadian businesses will have.

  • There is a New Minister of AI in Canada | voyAIge strategy

There is a New Minister of AI in Canada

What can Canadians Expect?

By Christina Catenacci

May 23, 2025

It has been reported that Prime Minister Mark Carney has recently created a new ministry in Canada—he has chosen former journalist Evan Solomon to be the new Minister of AI and Digital Innovation. Solomon was elected for the first time in the April 28, 2025 election in the riding of Toronto Centre. Before that, he worked as a broadcaster for both CBC and CTV. Previously, the topic of AI fell under the industry portfolio—in the Trudeau government, the minister responsible for Bill C-27 (which contained both proposed privacy and AI legislation) was François-Philippe Champagne, who is now responsible for Finance and represents the riding of Saint-Maurice—Champlain. As Minister of Innovation, Science and Industry from 2021 to 2025, he helped attract major investments into Canada, advanced the development and adoption of clean technologies, strengthened research and development, and bolstered Canada's position in environmental sustainability.

What Will the New AI Minister Do?

As we have recently seen, Prime Minister Carney has announced his single mandate letter with some streamlined top priorities:

Establishing a new economic and security relationship with the United States and strengthening our collaboration with reliable trading partners and allies around the world
Building one Canadian economy by removing barriers to interprovincial trade and identifying and expediting nation-building projects that will connect and transform our country
Bringing down costs for Canadians and helping them to get ahead
Making housing more affordable by unleashing the power of public-private cooperation, catalysing a modern housing industry, and creating new careers in the skilled trades
Protecting Canadian sovereignty and keeping Canadians safe by strengthening the Canadian Armed Forces, securing our borders, and reinforcing law enforcement
Attracting the best talent in the world to help build our economy, while returning our overall immigration rates to sustainable levels
Spending less on government operations so that Canadians can invest more in the people and businesses that will build the strongest economy in the G7

No, AI is not mentioned in there. However, in the preamble of the letter, he touched on AI when he stated: "The combination of the scale of this infrastructure build and the transformative nature of artificial intelligence (AI) will create opportunities for millions of Canadians to find new rewarding careers – provided they have timely access to the education and training they need to develop the necessary skills. Government itself must become much more productive by deploying AI at scale, by focusing on results over spending, and by using scarce tax dollars to catalyse multiples of private investment."

Who is the New Minister of AI and Digital Innovation—Evan Solomon?

To many, including Ottawa law professor Michael Geist, Evan Solomon is smart and tech savvy—exactly what Canada needs to get the ball rolling on AI. In the past, Solomon was the host of CBC's Power and Politics and The House. He was even considered to be someone who could replace Peter Mansbridge on The National. However, CBC terminated him in 2015 after the Star reported that he was taking secret commission payments from wealthy art buyers related to art sales involving people that he dealt with as a host.
Apparently, he took commissions of more than $300,000 for several pieces of art and did not disclose to the buyer that he was being paid fees for introducing buyer and seller. Some of the people that he dealt with included Jim Balsillie and Mark Carney himself. What's more, Solomon's appointment was met with criticism, mostly because he does not have a formal science or tech background, and also because of a mishap in March when he briefly reposted a photoshopped offensive image of Carney from a parody account. In fact, some critics argue that someone who could not identify manipulated content in his own social media feed may struggle to develop effective policies to protect Canadians from increasingly sophisticated AI-generated deception. But he is back now, as AI Minister. He will have a lot of work to do in his new role, and we hope that one thing he does is deal with the introduction of a good-quality Canadian AI law.

What Can we Take from the Mandate Letter?

We heard Prime Minister Carney talk about AI in his election platform, where he promised to make sure Canada takes advantage of the opportunities presented by AI, since it is critical for our competitiveness as the global economy shifts—and for making sure we have a government that actually works. More specifically, he promised to do the following in the area of AI under the build portion of the platform:

Build AI infrastructure. The Prime Minister had planned on investing in nation-building energy infrastructure and cutting red tape to make Canada the best place in the world to build data centres. Canada must have the capacity to deploy the AI of the future and ensure we have technological sovereignty. Also, he planned on building the next generation of data centres quickly and efficiently by leveraging federal funding and partnering with the private sector to secure Canada's technological advantage.

Invest in AI training, adoption, and commercialization. The Prime Minister had planned on measuring growth by tracking the economic impacts of AI in real time so we can proactively help Canadians seize new opportunities, boost productivity, and ensure no one is left behind. Also, he planned on boosting adoption with a new AI deployment tax credit for small and medium-sized businesses that incentivizes businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Companies would leverage a 20 percent credit on qualifying AI adoption projects, as long as they can demonstrate that they are increasing jobs. Further, he planned on catalyzing commercialization by expanding successful programs at Canada's AI institutes (Mila, Vector, Amii) so that we can connect more Canadian researchers and startups with businesses across the country, which will supercharge adoption of Canadian innovation in businesses, create jobs, and strengthen our AI ecosystem.

Improve AI procurement. Prime Minister Carney had planned on establishing a dedicated Office of Digital Transformation at the centre of government to proactively identify, implement, and scale technology solutions and eliminate duplicative and redundant red tape. This will enhance public service delivery for all Canadians and reduce barriers for businesses to operate in Canada, which will grow our economy. This is about fundamentally transforming how Canadians interact with their government, ensuring timely, accessible, and high-quality services that meet Canadians' needs.
Also, he planned on enabling the Office of Digital Transformation to centralize innovative procurement and take a whole-of-government approach to service delivery improvement. This could mean using AI to address government service backlogs and improve service delivery times, so that Canadians get better services, faster.

There were some great ideas in the election platform, and I'm sure that Canadians hope that they will materialize. It is important to note that the priorities identified in the election platform are encouraging, as they will help both government and SMBs in the private sector with tax credits that incentivize businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Businesses could sure use some help with training existing employees via upskilling and reskilling, as well as AI literacy.

With respect to the more general mandate letter that has recently surfaced, it is possible that this means that any additional and prescribed mandate letters to individual Ministers will not be shared with the public. That would be concerning, since public-facing mandate letters became the norm during the Trudeau government. We will have to wait and see on this issue. Moreover, the couple of paragraphs in the mandate letter's preamble suggest that there will be targeted improvements for both the public and private sectors. The letter emphasized training and scaling AI. These goals are lofty, but necessary. On the whole, things are looking promising given the commitment to build, invest, and improve AI procurement.

What can Canadians Expect?

In my view, it is still too early to tell. But I'm hoping that Prime Minister Carney comes through for Canada. If the government gets this right, Canada could catch up to other jurisdictions like the EU, and become a real leader in AI.

  • Americans Feel the Pinch of High Electricity Costs | voyAIge strategy

Americans Feel the Pinch of High Electricity Costs

Data Centres are Sapping People's Ability to Pay to Heat or Cool their Homes

By Christina Catenacci, human writer

Oct 17, 2025

Key Points:

American residents are experiencing energy poverty, an inability to afford to keep their homes warm or cool
Though the cost of electricity is based on several factors, a main driver of the spike in energy prices is the energy required to power data centres due to the demand from AI
Many states are pushing back (passing laws or reaching settlements with large tech companies) in order to keep the prices fair for residents

According to CBS News, the cost of electricity has increased from $0.14 per kilowatt hour in 2019 to $0.18 per kilowatt hour in 2024—an increase of more than 28.5 percent. The result: the average American is now paying nearly $300 a month just in utilities, and a growing number of households cannot afford to keep their homes warm or cool. This phenomenon is referred to as energy poverty.

Why is this happening?

To be sure, the cost of electricity is based on several factors, including the volatile prices for natural gas, wildfire risk, electricity transmission and distribution, regulations, and inflation. That being said, there is also the heat—rising temperatures fuel extreme weather events, such as heat waves in the summer and snowstorms in the winter, which then increase energy consumption as people try to keep their homes warm or cool. Climate change has only exacerbated the frequency and intensity of these extreme weather events. But there are also data centres. In fact, data centers are projected to consume up to 12 percent of American electricity within the next three years.

How is this happening?

Simply put, the expansion of power-hungry data centers required to support the surge in AI usage is a main factor in the rising costs that Americans are experiencing. As a consequence, American states are feeling the pressure to act. However, it is unclear whether any state has a solution to the issue of data centers wreaking havoc on people's electricity bills. Many have noted that answering this question may require taking a hard line against large tech companies that are rapidly investing in and using a large number of data centres. To be clear, we are talking about data centres that may require more electricity than entire cities—large factories would pale in comparison. Ari Peskoe, who directs the Electricity Law Initiative at Harvard University, states: "A lot of this infrastructure, billions of dollars of it, is being built just for a few customers and a few facilities and these happen to be the wealthiest companies in the world. I think some of the fundamental assumptions behind all this just kind of breaks down." In fact, Peskoe suggests that this could open a can of worms that pits ratepayer classes against each other. Moreover, Tricia Pridemore, who sits on Georgia's Public Service Commission and is president of the National Association of Regulatory Utility Commissioners, noted that there is already a tightened electricity supply and increasing costs for power lines, utility poles, transformers and generators as utilities replace aging equipment or harden it against extreme weather. Pridemore mentioned that the data centers required to deal with the AI boom are still in the regulatory planning stages. But it is important to keep in mind that unless utilities negotiate higher specialized rates, other ratepayer classes (residential, commercial and industrial) are likely paying for data center power needs.
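As a quick arithmetic check of the rate increase cited above (a sketch assuming the CBS figures of $0.14 and $0.18 per kilowatt hour; actual averages may vary by region):

\[
\frac{0.18 - 0.14}{0.14} \approx 0.286,
\]

that is, an increase of roughly 28.6 percent, consistent with the "more than 28.5 percent" change reported.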
For now, there is recent research conducted by Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, showing that 70 percent, or $9.3 billion, of last year's increased electricity cost was the result of data center demand.

How are states responding?

In short, five governors led by Pennsylvania's Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after the amount spiked nearly sevenfold. So far, however, PJM Interconnection has not proposed measures that would guarantee that data centres pay their fair share. On the other hand, Monitoring Analytics is floating the idea that data centers should be required to procure their own power. The company described residents' electricity payments as a "massive wealth transfer" from average people to tech companies.

In addition, at least 12 states are considering ways to make data centers pay higher local transmission costs. For example, in Oregon, a law was passed in June that orders state utility regulators to develop new power rates for data centers. The Oregon Citizens' Utility Board has said that there is clear evidence that costs going to serve data centers are being spread across all customers at a time when some electric bills are up 50 percent over the past four years. By way of another example, New Jersey's governor signed legislation last month commissioning state utility regulators to study whether ratepayers are being hit with "unreasonable rate increases" in order to connect data centers and to develop a specialized rate to charge data centers.

Some states are trying to reach settlements. For example, in Indiana, state utility regulators approved a settlement between Indiana Michigan Power Co., Amazon, Google, Microsoft and consumer advocates that set parameters for data center payments for service. In Pennsylvania, the state utility commission is drafting a model rate structure for utilities to consider adopting.

While it is important for utilities and states to attract big customers like data centres, it is necessary to appreciate what is fair; transmission upgrades and other similar initiatives could cost millions of dollars, and it is not fair to put it all on residents. Large AI companies will need to take the above discussion into consideration when making plans to expand and add power-hungry data centres. They should anticipate being approached by various states seeking fair settlements so that the cost of energy is not transferred entirely to residents.

  • How an Infrastructure Race is Defining AI’s Future | voyAIge strategy

How an Infrastructure Race is Defining AI's Future

Why Nvidia's $100B investment in OpenAI signals a shift every business leader must understand

By Tommy Cooke, powered by really great espresso

Sep 26, 2025

Key Points:

1. Access to compute, not software features, will determine who can compete in AI
2. Vendor entanglement may speed adoption but increases dependency and lock-in risks
3. The AI arms race is accelerating, shrinking the competitive window for differentiation

Nvidia has committed up to US$100 billion in a staged investment into OpenAI, with the funds intended to build massive AI data centres powered by Nvidia's own chips. The first deployments are scheduled for 2026, with each dependent on new infrastructure coming online. On the surface, this is a story about one company betting big on another. But if you are a business leader, it signals something deeper. It means that access to compute power (the chips, servers, and energy needed to run AI) will continue to determine who can compete, how fast they can innovate, and whether they can deliver reliable AI products to clients. Therefore, if you are building, selling, or integrating AI, your advantage is no longer defined by software features alone. It is defined by whether you can access and afford the infrastructure that makes those features possible.

Compute as the New Moat

OpenAI has said bluntly: "everything starts with compute". Nvidia's investment proves the point. Frontier AI models are limited not by imagination but by access to chips, data centres, and power. For businesses, this flips the equation. Software can be replicated, but compute capacity cannot be conjured overnight. The companies that secure infrastructure will enjoy a durable moat. This means faster model training, better uptime, and the ability to scale globally. Those without access risk being left behind—no matter how strong their ideas or datasets.

Vendor Financing at Unprecedented Scale

This deal is also significant to us as business leaders because it blurs the line between supplier and customer. Nvidia is both investing in OpenAI and guaranteeing that OpenAI's infrastructure will be built on Nvidia hardware. Some analysts call it vendor financing at an unprecedented scale. The lesson for business leaders is twofold:

First, expect suppliers to become more embedded in clients' strategic direction, offering capital and integration alongside products
Second, recognize the risk. Deeper vendor entanglement often accelerates adoption but reduces bargaining power. Tech vendors who become dependent on a single infrastructure partner may find themselves locked into costs and roadmaps that they cannot control

Capital Intensity as a Barrier to Entry

A single gigawatt of Nvidia systems may cost US$35 billion in hardware alone. This makes clear that the frontier of AI is not just technologically complex. It is financially punishing. For most organizations, the takeaway is not to match Nvidia or OpenAI dollar-for-dollar. Rather, it is to understand that capital intensity itself is now a barrier to entry. Competing at the frontier requires access to extraordinary financial and infrastructure resources. Vendors and enterprises need to calibrate their vision, invest in the right scale of AI for their market, partner strategically where necessary, and focus on ROI-driven deployments rather than chasing the biggest models.

Regulatory and Market Risks

Nvidia already dominates the global AI chip market.
Adding a deep financial stake in OpenAI potentially raises antitrust concerns about preferential access and market distortion. Governments are watching closely. To you, the business leader, this matters because regulation could reshape market dynamics in ways that affect everyone. Just as governments regulated telecom and energy to ensure fair access, AI infrastructure could face new rules that mandate openness, limit exclusivity, or scrutinize vertical integrations. Leaders must anticipate these shifts and avoid strategies that depend on fragile or privileged vendor relationships.

The Acceleration Effect

Perhaps the most significant implication is this: the investment accelerates the AI arms race. By de-risking OpenAI's infrastructural future, Nvidia is ensuring that larger models can be trained and deployed faster, compressing innovation cycles from years to months. For businesses, the competitive window is shrinking. The pace of AI progress means that differentiators based solely on early adoption will fade quickly. Staying competitive will require constant reinvestment and operational agility—not just one-time pilots.

What Leaders Should Do Now

We recommend that leaders do the following:

Treat Infrastructure as Strategy. AI isn't just software. It depends on access to compute, bandwidth, and energy. Executives must recognize infrastructure as a strategic variable, not an IT detail
Diversify Dependencies. Relying on a single vendor—whether for chips, cloud, or capital—is a risk. Explore multi-cloud strategies, alternative hardware, and hybrid deployments
Negotiate Beyond Cost. Vendor agreements should secure more than price. Push for supply guarantees, roadmap visibility, and exit flexibility
Anticipate Regulation. Monitor antitrust and AI policy developments. Regulation may alter vendor dynamics and market access
Build Literacy. Equip your teams with an understanding of latency, scaling costs, and compute economics. The winners will be those who can align AI ambition with operational reality

Focus on The Bigger Picture

Nvidia's $100 billion bet is more than a financial deal. It is a signal that AI's future will be shaped by who controls the foundations of compute. For business leaders, the message is clear: innovation, product design, and customer experience flow from infrastructure. The AI market will not be won by those with the cleverest algorithms alone, but by those who can reliably access the chips and data centres that make those algorithms work at scale. This is why the infrastructure race matters, not only to Nvidia and OpenAI, but to every vendor and enterprise hoping to compete in the AI-driven economy.

  • L&E Analysis: What is Neural Privacy, and Why is it Important? | voyAIge strategy

L&E Analysis: What is Neural Privacy, and Why is it Important?

More US States are Regulating it

By Christina Catenacci

Mar 14, 2025

Legal & Ethical Analysis: Issue 1

Key Points

Neural data is very sensitive and can reveal a great deal about a person
The law is starting to catch up to the tech and the ethicists' concerns
In North America, California and Colorado are leading the way when it comes to creating mental privacy protections in relation to neurotechnology

This is a hot topic, but what is it? Generally speaking, neural data is information that is generated by measuring activity in a person's central or peripheral nervous systems, including brain activity (seen in EEGs, fMRIs, or implanted devices); signals from the peripheral nervous system (such as nerves that extend from the brain and spine); and data that can be used to infer mental states, emotions, and cognitive processes. Interestingly, this data has been used to create artificial neural networks. For instance, machine vision can be used to identify a person's emotions by analyzing their facial expressions.

Some may be surprised to know that there are many types of neurotechnology (neurotech) in existence today, but what is that? Neurotechnology bridges the gap between neuroscience, the scientific study of the nervous system, and technology. The goal of neurotech is to understand how the brain can be enhanced by technological advancements to create applications that improve both brain function and overall human health. In fact, some may characterize this growing area as "a thrilling glimpse into the potential of human ingenuity to transform lives". Others have noted that neurotechnology, combined with the explosion of AI, opens up a world of infinite possibilities. One simple way of explaining neurotech is to divide it into two categories: invasive (such as implants) and non-invasive (such as wearables). More specifically, invasive neurotech is mostly used in the medical area to deal with conditions such as neurological disorders like Parkinson's disease.

Neural privacy has to do with being confident that we have control over access to our own neural data and to the information about our mental processes. This article delves into the law of neural privacy and the ethics of neurotech.

The Law Involving Neural Privacy

In the United States, there has been a flurry of activity in this regard. Why is neural privacy important? Essentially, this type of data is very sensitive personal data, as it can reveal thoughts, emotions, and intentions. Certain companies have a lot to gain if they are privy to this information—think about employers, insurers, or law enforcement—this could affect how workers are able to work, how individuals apply for insurance coverage, or how citizens engage with their societies. Another aspect is data ownership: who owns one's thoughts? Some may believe that this question is for the distant future, but it might be worth mentioning that Neuralink has already had its first human patient using a brain implant to play online chess. It is here already! This may be why the UN Special Rapporteur on the right to privacy has recently set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy.
More precisely, the UN Report deals with key definitions and establishes fundamental principles to guide regulation in this area, including the protection of human dignity, the safeguarding of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for the processing of this data. Emphasis is placed on the inclusion of ethical values and the protection of human rights in the design. While Canada has not yet legislated on mental privacy, we note that the United States has in the following jurisdictions:

California: the California Consumer Privacy Act (CCPA) was amended with SB 1223, which included "neural data" in the definition of sensitive personal information and defined "neural data" as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information". Governor Newsom has already approved this amendment. California also has two new bills: SB-44 (Neural Data Protection Act), which would deal with brain-computer interfaces and govern the disclosure of medical information by an employer, a provider of health care, a health care service plan, or a contractor—to include new protections for neural data; and SB-7 (Automated Decision Systems (ADS) in the Workplace), which would require an employer, or a vendor engaged by the employer, to provide a written notice to all workers that will be directly or indirectly affected by an ADS that the ADS is in use at the workplace for the purpose of making employment-related decisions

Colorado: the Colorado Privacy Act was also amended with HB 24-1058, which defines "neural data" as "information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device", and adds neural data to the definitions of biological data and sensitive data. This has already been signed into law

Connecticut: SB 1356 (An Act Concerning Data Privacy, Online Monitoring, Social Media, and Data Brokers) is a bill that would amend the Connecticut Data Privacy Act, define "neural data" as "any information that is generated by measuring the activity of an individual's central or peripheral nervous system", and include it in the definition of sensitive data

Illinois: HB 2984 is a bill that would amend the Biometric Information Privacy Act, define "neural data" as "information that is generated by the measurement of activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information", and add neural data to the definition of biometric identifier

Massachusetts: HD 4127 (Neural Data Privacy Protection Act) is a bill that would define "neural data" as "information that is generated by measuring the activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information" and include it in the definition of sensitive covered data. This is a significant step since there is no comprehensive consumer privacy law in Massachusetts at this point

Minnesota: SF 1240 is a bill that would not amend the consumer privacy legislation, but would rather be a standalone piece of legislation that provides the right to mental data and sets out neurotech rights concerning brain-computer interfaces. It would begin to apply in August 2025

Vermont: there are three bills involving neural data protection: H210 (Age-Appropriate Design Code Act), H208 (Data Privacy and Online Surveillance Act), and H366 (An Act Relating to Neurological Rights). In a nutshell, these bills would define "neural data" as "information that is collected through biosensors and that could be processed to infer or predict mental states", provide individuals with the right to mental or neural data privacy, protect minors specifically, and create a comprehensive consumer privacy bill that includes specific protections for neural data

Clearly, it is becoming more important to enact mental or neurological privacy protections when it comes to neurotech and automated decision-making systems. In North America, these states are leading the way and could influence the direction of legislation for both Canada and the entire United States. That is, they are adding provisions to their consumer privacy legislation or creating standalone statutes.

Ethics of Neurotechnology

Let us begin this discussion with the question: why is neural data unique? Simply put, neural data is not just a phone number or a person's age. It is very sensitive and can reveal much more about a person. This is why Cooley lawyers refer to it as a kind of digital "source code" for an individual, potentially uncovering thoughts, emotions and even intentions: "From EEG readings to fMRI scans, neural data allows insights into neural activity that could, in the future, decode neural data into speech, detect truthfulness or create a digital clone of an individual's personality"

Several thinkers have asked about what needs to be protected. For example, the Neurorights Foundation tackles the issue of human rights for the age of neurotech. It advocates for promoting innovation, protecting human rights, and ensuring the ethical development of neurotech. The foundation has created a number of research reports, including Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies, which analyzed the data practices and user rights of consumer neurotechnology products. In this report, there were several areas of concern: access to information, data collection and storage, data sharing, user rights, as well as data safety and security. The conclusion was that the consumer neurotechnology space is growing at a rate that has outpaced research and regulation. Further, most existing neurotechnology companies do not adequately inform consumers or protect their neural data from misuse and abuse. The report was created so that companies and investors can appreciate the kinds of specific further measures that are needed to responsibly expand neurotechnology into the consumer sphere.

Additionally, UNESCO has pointed out that there are several innovative neurotechnology techniques, such as brain stimulation or neuroimaging, which have changed the face of our understanding of the nervous system. Neurotechnology has helped us to address many challenges, especially in the context of neurological disorders; however, there are also ethical issues and problems, particularly with its use in non-invasive interventions. For example, neurotechnology can directly access, manipulate, and emulate the structure of the brain—it can produce information about our identities, our emotions, our fears.
If you combine this neurotech with AI, there can be a threat to notions of human identity, human dignity, freedom of thought, autonomy, (mental) privacy, and well-being. UNESCO states that the fast-developing field of neurotechnology is promising, but we need a solid governance framework: "Combined with artificial intelligence, these techniques can enable developers, public or private, to abuse of cognitive biases and trigger reactions and emotions without consent. Consequently, this is not a technological debate, but a societal one. We need to react and tackle this together, now!"

In fact, UNESCO has drafted a Working Document regarding the Ethics of Neurotechnology, which includes a discussion of the following ethical principles and human rights:

Beneficence, proportionality, and do no harm: Neurotechnology should promote health, awareness, and well-being, and empower individuals to make informed decisions about their brain and mental health while fostering a better understanding of themselves. That said, restrictions on human rights need to adhere to legal principles, including legality, legitimate aim, necessity, and proportionality

Self-determination and freedom of thought: Throughout the entire lifecycle of neurotechnology, the protection and promotion of freedom of thought, mental self-determination, and mental privacy must be secured. That is, neurotechnology should never be used to exert undue influence or manipulation, whether through force, coercion, or other means that compromise cognitive liberty

Mental privacy and the protection of neural data: With neural data, there is a risk of stigmatization/discrimination, and of revealing neurobiological correlates of diseases, disorders, or general mental states without the authorization of the person from whom data is collected. Mental privacy is fundamental for the protection of human dignity, personal identity, and agency. The collection, processing, and sharing of neural data must be conducted with free and informed consent, in ways that respect the ethical and human rights principles outlined by UNESCO, including safeguarding against the misuse or unauthorized access of neural and cognitive biometric data, particularly in contexts where such data might be aggregated with other sources

Trustworthiness: Neurotechnology systems for human use should always ensure trustworthiness across their entire lifecycle to guarantee the respect, promotion and protection of human rights and fundamental freedoms. This requires that these systems do not replicate or amplify biases; are transparent, traceable and explainable; are grounded on solid scientific evidence; and define clear conditions for responsibility and accountability

Epistemic and global justice: Public awareness of brain and mental health and understanding of neurotechnology and the importance of neural data should be promoted through open and accessible education, public engagement, training, capacity-building, and science communication

Best interests of the child and protection of future generations: It is important to balance the potential benefits of enhancing cognitive function through neurotechnology for early diagnosis, instruction, education, and continuous learning against a commitment to the holistic development of the child.
This includes nurturing their social life, fostering meaningful relationships, and promoting a healthy lifestyle encompassing nutrition and physical activity

Enjoying the benefits of scientific-technological progress and its applications: Access to neurotechnology that contributes to human health and wellbeing should be equitable. The benefits of these technologies should be fairly distributed across individuals and communities globally

The document also touches on areas outside health, such as employment. For instance, as neurotechnology evolves and converges with other technologies in the workplace, it presents unique opportunities and risks in labour settings. It is necessary to develop policy frameworks that protect employees' mental privacy and the right to self-determination but also promote their health and wellbeing, balancing the potential for human flourishing with the imperative to safeguard against practices that could infringe on mental privacy and dignity.

In Four ethical priorities for neurotechnologies and AI, the author discussed AI and brain-computer interfaces and explored four ethical concerns with respect to neurotech:

Privacy and consent: it is trite to say that an extraordinary level of personal information can already be obtained from people's data trails; however, this is how the concern is framed. The author stresses that individuals should have the ability and right to keep their neural data private—the default choice needs to be "opt out"

Agency and identity: the author asserts that as neurotechnologies develop and corporations, governments, and others start striving to endow people with new capabilities, individual identity (bodily and mental integrity) and agency (the ability to choose our actions) must be protected as basic human rights

Augmentation: there will be pressure to enhance ourselves, such as adopting enhancing neurotechnologies like those that allow people to radically expand their endurance or sensory or mental capacities. This will likely change societal norms, raise issues of equitable access, and generate new forms of discrimination. The author notes that outright bans of certain technologies could simply push them underground; thus, decisions must be made within a culture-specific context, while respecting universal rights and global guidelines

Bias: a major concern is that biases could become embedded in neural devices; therefore, it is necessary to develop countermeasures to combat bias and to include probable user groups (especially those who are already marginalized) in the design of algorithms and devices as another way to ensure that biases are addressed from the first stages of technology development

The paper also touched on the need for responsible neuroengineering: there was a call for industry and academic researchers to take on the responsibilities that come with devising these devices and systems. The authors suggested that researchers draw on existing frameworks that have been developed for responsible innovation.

In Philosophical foundation of the right to mental integrity in the age of neurotechnologies, the author has equated neurorights such as mental privacy, freedom of thought, and mental integrity to basic human rights. The author created a philosophical foundation for a specific right, the right to mental integrity. It included both the classical concepts of privacy and non-interference in our mind/brain.
In addition, the author grounded this foundation in certain features of the mind that could not be reached directly from the outside: intentionality, the first-person perspective, personal autonomy in moral choices and in the construction of one’s narrative, and relational identity. The author asserted that a variety of neurotechnologies or other tools, including AI, alone or in combination, could, by their very availability, threaten our mental integrity. To that end, the author proposed philosophical foundations for a right to mental integrity that encompassed both privacy and protection from direct interference in mind/brain states and processes. Such foundations focused on aspects that were well known within the philosophy of mind but not commonly considered in the literature on neurotechnology and neurorights. Intentionality, the first-person perspective, moral choice, and the construction of one’s identity were concepts and processes that needed as precise a theoretical definition as possible. The author stated: “In our perspective, such a right should not be understood as a guarantee against malicious uses of technologies, but as a general warning against the availability of means that potentially endanger a fundamental dimension of the human being. Therefore, the recognition of the existence of the right to mental integrity takes the form of a necessary first step, even prior to its potential specific applications”

In Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought, the author stated that progress in neurotechnology and AI provided unprecedented insights into the human brain, and that there were increasing opportunities to influence and measure brain activity. These developments raised several legal and ethical questions. The author argued that the right to freedom of thought could be coherently interpreted as providing comprehensive protection of mental processes and brain data, which could offer a normative basis regarding the use of neurotechnologies. Moreover, an evolving interpretation of the right to freedom of thought was more convincing than introducing a new human right to mental self-determination.

What Can We Take from These Developments?
Undoubtedly, ethicists have spent a considerable amount of time thinking about mental privacy in the age of neurotech, and about exactly what is at risk if there are no privacy and AI protections in place. Fortunately, the law is starting to catch up to the technology and to the ethicists’ concerns. For example, California and Colorado have already amended their consumer privacy statutes to cover neural data, and more bills have been introduced to address these issues.

  • Six Keys to Consider Before Implementing AI Agents | voyAIge strategy

    Six Keys to Consider Before Implementing AI Agents
Issues and Solutions to Help Prepare your Organization
By Tommy Cooke
Oct 25, 2024

Key Points:
• Identify specific tasks and processes where AI agents can make compelling contributions
• Understand and address privacy, security, and ethics vulnerabilities prior to implementation
• Communicate transparently with employees, offer training, and address their concerns

AI agents have been trending explosively this past week. AI agents, as voyAIge strategy’s Co-Founder Dr. Christina Catenacci describes, are applications that use a combination of different AI models together with conventional automation frameworks to process information without human interaction. The goal of the AI agent is thus to supplement or even supplant human workers. Unlike the AI many of us have explored over the last year, AI agents have autonomy. They make decisions, perform tasks, and create outputs based on defined goals. With companies like Microsoft and Salesforce introducing AI agents, many organizations are considering incorporating them into their operations, and rather quickly.

However, it is essential to consider precisely what is at stake in adopting AI agents. How will they change workflows? What impact will they have on workforce morale? In what ways do they, and perhaps do they not, align with your organization’s goals and growth plans? Thoughtful, informed preparation is crucial if we are to ensure that AI agents enhance rather than impede critical processes. Here are key considerations that can help generate conversations with your business line leaders and executives, and that can cultivate the strategic planning and insight your organization requires if and when it implements AI agents, especially if you are considering them for the long term:

Define Objectives, Values, and Use Cases
Before adopting AI agents, pinpoint exactly what value they can provide. A common misconception is that AI agents can immediately take over complex tasks or roles. This is not the case, especially in the early stages, where human oversight and model adjustment are required. Moreover, different departments and processes will benefit from AI agents in different ways. Begin by auditing your organization. Identify tasks that are repetitive, rule-based, and time-consuming; these are often prime candidates for automation. Basic administrative tasks, inventory management, calendar scheduling: these are all examples of time-consuming tasks that can benefit from an AI agent’s assistance. Start small. Focus on specific workflows and expand from there.

Prioritize Security and Privacy
AI agents process large amounts of data, some of it sensitive. Given that 51% of employees have tried tools like ChatGPT in the last year and only 17% of employers have AI policies that might otherwise regulate what information employees give to AI systems, there is already a risk that employees are feeding client information, data measurements, and insights valuable to your brand into an AI system. Employees are significantly outpacing their employers in using AI at work, and this blind spot can be highly problematic, opening your organization up to non-compliance and legal risks, because most AI systems that employees use unofficially at work train and model on the data anyone provides them. With AI agents, that potential risk increases quite significantly. If you do not have AI policies in place, as well as data privacy policies, these are crucial requirements. You will also need to develop and implement a clear data governance policy to ensure that AI agents handle your own and your stakeholders’ data safely and securely.
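As a concrete illustration of what such a data governance requirement might look like in practice, here is a minimal, hypothetical sketch of a guardrail that redacts obvious personal identifiers before a task is handed to an agent. The patterns and function names are illustrative assumptions only, not a complete control or a description of any particular agent platform.

```python
import re

# Illustrative patterns only: a real data governance policy would cover far
# more identifiers (names, account numbers, health data, and so on).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text is handed to an AI agent."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_agent(task: str) -> None:
    # 'send_to_agent' stands in for whatever agent platform an organization adopts;
    # the point is that redaction happens before the task ever leaves the guardrail.
    safe_task = redact(task)
    print(f"Forwarding to agent: {safe_task}")

send_to_agent("Follow up with jane.doe@example.com about invoice 4421; phone 555-123-4567.")
```

A real policy would go much further, covering retention, access logging, and the broader categories of sensitive information your organization handles.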
Establish a Clear Accountability Structure
Because AI agents act autonomously, without direct human input, it is difficult to know how their decision-making will align with, or deviate from, an organization’s values, priorities, and procedures. If and when an AI agent misses the mark, who is accountable for reporting and addressing these errors, and to whom are they accountable, exactly? Establish ethical guidelines for AI agents before they are fully implemented and deployed. How should an AI agent handle sensitive situations? What exactly defines a sensitive situation, or sensitive information? When should certain decisions be deferred to a human decision-maker? Layers of oversight, clearly articulated in AI ethics and accountability policies, ensure that AI agents can be audited and adjusted if they make inappropriate or harmful decisions. Create a governance structure that holds specific teams and roles accountable for monitoring and managing an AI agent’s actions.

Assess and Understand Workforce Impact
The introduction of AI agents will change the nature of work in your organization. While it is true that AI agents will likely return valuable time to your hardworking employees, 38% of employees are nervous that AI will replace them, while 51% worry that AI will negatively impact their mental health. There is a considerable likelihood that your workforce will have questions and concerns. Engaging employees honestly and accurately, and giving them avenues to be heard, is critical if any AI system is to work in an organization. It is recommended that businesses use transparent communication with employees. It is paramount that AI agents are clearly explained, described, and situated within specific roles and contexts. Employees need to hear that they will not be replaced. HR leaders should also be proactive in providing training opportunities that help employees adapt to these changes. Work should be undertaken to restructure roles to reflect altered workflows.

Start Small with a Pilot Program
Mass implementation of any digital solution is risky. While software and automated tools can save time, they need to be learned before they are fully understood and embraced. A pilot program is a small-scale, controlled study that allows an organization to understand feasibility, cost, roadblocks, and opportunities in isolation. Choose a single team, department, or process for your pilot project, for example automating customer complaint responses. A controlled approach, with a small team testing an AI agent over the course of a couple of months, allows you to gather data on the effectiveness of AI while minimizing disruption. Choose a team of individuals who understand and are familiar with AI, preferably employees who are excited about AI and can champion the cause and socialize it later. Use the pilot’s findings to make any necessary adjustments.
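One way to make a pilot’s findings concrete is to log a few simple measurements for every task the agent handles. The sketch below is a hypothetical example of such a log; the metric names and fields are assumptions to adapt to your own pilot, not prescriptions.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PilotLog:
    """A very small record of how an AI agent performs during a pilot."""
    outcomes: list = field(default_factory=list)

    def record(self, minutes_saved: float, needed_human_fix: bool) -> None:
        self.outcomes.append(
            {"minutes_saved": minutes_saved, "needed_human_fix": needed_human_fix}
        )

    def summary(self) -> dict:
        # Two illustrative KPIs: average time saved per task and the share of
        # outputs a human had to correct. A real pilot would also track cost,
        # error severity, and employee feedback.
        return {
            "tasks_handled": len(self.outcomes),
            "avg_minutes_saved": mean(o["minutes_saved"] for o in self.outcomes),
            "human_correction_rate": mean(o["needed_human_fix"] for o in self.outcomes),
        }

log = PilotLog()
log.record(minutes_saved=12.0, needed_human_fix=False)  # complaint answered, no edits needed
log.record(minutes_saved=4.5, needed_human_fix=True)    # draft needed a human rewrite
print(log.summary())
```

Even a log this small gives the pilot team something objective to review alongside employee feedback when deciding whether, and how, to expand.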
Develop a Monitoring Plan
AI systems learn. They change by adjusting their behaviour in an attempt to improve over time. As with a student in a classroom, how one learns is as important as the learning itself. Testing, monitoring, and providing opportunities to expand horizons are crucial to any person’s, or AI system’s, successful growth. To facilitate successful growth, establish a team that will monitor the AI agent’s performance. Regular audits should become part of your organization’s AI governance repertoire. Establish key performance indicators (KPIs) to measure the AI agent’s success in meeting its goals, adjusting along the way as needed. Continuous monitoring not only helps mitigate risks; it also ensures successful and smooth operation over the long term.

Adopt for Long-Term Success, Not Short-Term Gain
As with any new technology release, industry and media hype tremendously stimulates early adoption. A challenge of early adoption is not being aware of a technology’s limits and challenges. Left unchecked and misunderstood, those limits can derail investments and disturb workforces. Start small, stay informed, and ensure that both technology and human talent are aligned for future success.

  • The Who Why and How of Human in the Loop | voyAIge strategy

    The Who Why and How of Human in the Loop
Embracing AI Failures to Turn Mistakes into Growth
By Tommy Cooke
Oct 18, 2024

Key Points:
• Human-in-the-Loop (HITL) positions employees as guides of their own AI systems
• AI failures reveal system limitations but also offer opportunities for refinement, turning errors into a pathway for improvement
• A balanced HITL approach is essential to integrate human values, prevent biases, and ensure AI evolves responsibly and in alignment with an organization’s values

57% of American workers have tried ChatGPT at work. AI adoption at work is on the rise. As more and more employees interact regularly with tools like ChatGPT, they are becoming increasingly familiar with them. Relationships are developing between your employees and Natural Language Processors like ChatGPT. Understanding that relationship, and how to leverage it, is crucial to using AI successfully in your organization. As the relationship continues to grow, your employee is becoming a kind of AI supervisor.

Over the last year, the concept of “Human-in-the-Loop” has become a commonplace, fundamental concept in AI. It refers to a form of human oversight of an AI system, such as an engineer adjusting a large AI system behind closed doors. However, if your employees are using systems like ChatGPT regularly, they are positioned to participate in HITL in ways that can be tremendously valuable. When an employee interacts with AI, they are not merely a passive user. They can actively become a part of its learning process. Every prompt, correction, and piece of feedback they provide refines the AI, guiding it to better align with the employee’s goals. More simply, the employee becomes their own Human-in-the-Loop.

Take my recent experience as an example. I asked ChatGPT to help me refresh a jazz guitar lesson plan. I’ve been studying jazz guitar for a couple of years now, and one of my goals is to get more comfortable with inverted chord voicings (for non-musicians: if a chord is the sum of its notes, changing the order of those notes can enrich your writing and playing). Initially, ChatGPT suggested I spend an extra 15 minutes a week drilling scales (individual notes, not chords). I was confused. That was not my goal here. So, I told ChatGPT, “Thanks for the suggestion, but I’m not interested in scales right now. Let’s make sure we’re focusing on the goals I explicitly share so we can build a plan that better uses my time.” ChatGPT apologized. It then provided me with a set of inverted chord exercises that will keep me busy through April 2025.

My interaction is a common one among ChatGPT users, and it is important. As individuals, employees have the power to shape and improve AI outputs directly. AI is bound to make mistakes, like misunderstanding intentions and goals. The key is recognizing that limitations are not setbacks. They are opportunities for growth. When employees see themselves as in-the-loop with AI, they can take an active role in recognizing its limitations and pushing it to improve. They can be essential to refining AI and making it more aligned with their own or an organization’s needs. Each interaction, correction, and bit of guidance we provide helps the AI learn more effectively. Recognizing that AI failures are sources of insight and growth is an important capability for any organization. Not only do failures reveal how systems are built, what their tendencies are, and where their biases lie; they also reveal a pathway for encouraging the system to develop and perform more efficiently and accurately in the future.
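One simple way for an organization to capture this kind of insight is to keep a lightweight record of the corrections employees give an AI tool, so recurring failure patterns can be spotted and reviewed. The sketch below is purely illustrative; the field names and logging approach are assumptions, not a description of any particular product or workflow.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HumanFeedback:
    """One human-in-the-loop correction of an AI output."""
    prompt: str        # what the employee asked for
    ai_output: str     # what the tool produced
    correction: str    # how the employee redirected it
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

feedback_log: list[HumanFeedback] = []

# The kind of redirection described in the jazz guitar example above.
feedback_log.append(HumanFeedback(
    prompt="Refresh my jazz guitar lesson plan around inverted chord voicings.",
    ai_output="Suggested 15 extra minutes of scale drills per week.",
    correction="Not interested in scales; focus on the goals I explicitly shared.",
))

# Reviewing the log periodically helps reveal recurring failure patterns.
print(json.dumps([asdict(f) for f in feedback_log], indent=2))
```

Reviewing even a small log like this over time can show where a tool repeatedly misses user intent, which is exactly the kind of limitation the next section treats as an opportunity.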
Leveraging Failures as Opportunities
AI failures provide critical insight into the dynamics of how humans develop relationships with AI, and more specifically into the dynamics at play between systems that learn and human intention. Human decision-making is complex. It involves values, context, empathy, historical biases, and so on. AI can struggle to understand these subtle human complexities. This is part of what makes a Human-in-the-Loop so important. It ensures that human judgement and intent are integrated into an AI system’s learning process so that it becomes better at replicating human behaviours. For example, AI models used for job recruitment have been known to be biased against certain minority groups. It is crucial to identify why this bias exists and address it directly. A Human-in-the-Loop plays a critical role here. They can analyze data and algorithms to determine where the issue occurred and what parameters led to the unintended outcome, and begin designing a solution.

How You and Your Employees Can Be Your Own Human-in-the-Loop
Here are three actionable tips that you, your colleagues, or any employee can follow to encourage tools like ChatGPT to improve, especially when they make a mistake:
• Explain : Clearly point out what went wrong and why. Provide the rationale behind your thinking. This helps the system learn more effectively and align with your specific needs.
• Coach : If the AI seems to misunderstand your request, guide it by rephrasing or breaking down your request into smaller components. This makes it easier for the tool to understand your intentions and learn from them.
• Validate : Positive reinforcement can help. When the AI gets it right, acknowledge it. This validation encourages the AI to replicate the improved behavior in the future.

HITL as a Balanced Partnership, When Kept in Check
Building a relationship with AI is a dynamic process. It is about fostering growth, accountability, and understanding. While employees can be their own Human-in-the-Loop, it is essential that there is at least an awareness of, and an intention to align, an employee’s guidance of AI with the organization’s goals and priorities. Without this alignment, individuals adjusting and guiding AI may inadvertently reinforce old biases or create new ones that diverge from strategic objectives. Human-in-the-Loop is certainly a bridge that connects human values, expertise, and context with the computational power of AI, but it must be implemented thoughtfully. As organizations become more comfortable integrating AI, remember that Human-in-the-Loop is about creating synergy between human insight and machine learning. By maintaining human involvement, we ensure that AI evolves in a direction that benefits not only the organization but also its employees, customers, and broader stakeholders.
