Search Results
- When Technology Stops Amplifying Artists and Starts Replacing Them | voyAIge strategy
When Technology Stops Amplifying Artists and Starts Replacing Them AI-generated creativity forces us to confront a new cultural crossroads: if machines can make the art, what remains uniquely human in the act of creating? This matters to any business owner By Tommy Cooke, powered by medium roast espresso Key Points: 1. AI isn't just automating routine tasks, it's beginning to replace human creativity where scale and predictability dominate 2. The future of work hinges on what only humans can bring: meaning, perspective, imperfection, and authentic connection 3. If efficiency becomes our only compass, we risk building a world rich in content but poor in humanity When I was in my 20s, I was the lead guitarist in a regularly gigging and recording rock band. At our busiest, we were performing four to six nights a month while being full-time college students and holding down part-time jobs. We produced an album and two EPs along the way. When you are working that much at your craft while having an incredibly full plate, you always hope for a break. Ours came when our music was introduced to a major record executive. He enjoyed the album and thought one of our tunes would be an instant radio hit. He suggested that we work closely with a well-known hitmaker to punch out more radio-friendly versions of our album. But in the meantime, there was a catch: to be on the label, we were required to have a certain number of followers on MySpace. We fell short of that number by orders of magnitude; it would have taken us years to reach it. To say that we were shocked would be an understatement. That was back in the late 2000s. Today, being a musician is far more difficult. Not only are targets harder to reach as a bare minimum entry point to talking to labels, but now AI has entered the scene. In a recent Global News article, the famous music journalist and historian Alan Cross speculated about a future that may not be far away: record producers have every incentive to remove the artist from art. In other words, AI-driven music could forever change the way people consume music. As AI-generated music becomes mainstream, Mr. Cross poses a worrying question: what happens to the human in the creative process? What does this shift teach us about AI, authenticity, and human purpose? At first glance, Alan's article is a story about singers, streaming royalties, and rights-owners. But for business leaders, policy makers, and organisational strategists, the underlying theme is deeper: as AI moves from tool to creator, the boundary between human value and machine delivery is shifting. The Discomfort of AI As painful as it is for me to admit, in Alan's dystopic future vision where AI drives content creation, musicians are just not that unique. They're simply the first creative class to experience what economists have been warning about for years: automation begins where scale and predictability create the highest return. Let me give you an example. Pop music is formulaic. How many times have you seen this video or one exactly like it? It's a routine that has been done time and again, ad nauseam. But they make a fascinating point. So, if you haven't seen them, take a minute to watch. It's very revealing in terms of how much the structure of popular music is replicated over, and over, and over again. The same can be said of social media personas and design principles, too.
My point is that the more quantifiable and structure-driven human content becomes, the more likely machines are to inhabit and reproduce it. For years, people comforted themselves with a hopeful refrain: AI will take the routine tasks so we can focus on creative and strategic work. This is why Alan's vision is alarming. It presents us with an existential tension: one where the gap between romantic expectations of technology and its actual outcomes forces us to question what happens when efficiency and extraction are prioritized over meaning. So, Alan asks us to stop and recognize a crossroads in front of us, and he's asking us to do so by prompting ourselves with a critical question: what do we want human experience to mean when machines can perform the visible parts of it? This question will have different implications for everyone, and they matter in non-music contexts, too. What AI as a Creator Means for People and Organisations While music is the most visible example, parallel dynamics are already unfolding in marketing, design, customer service, legal drafting, and more. Alan isn't merely presenting a vision anymore. He's offering a critical narrative, and it carries three key lessons: Human value must be redefined. When an algorithm can generate content at scale, cost-effectively, and without human pain-points (sleep, illness, ego, negotiation), the "value" of human labour shifts. It's no longer just about whether a person can do the job, but also (a) what unique stance the person brings, and (b) how that person shifts from being a deliverer to a designer of meaning. In other words, business leaders that treat humans as input-machines are likely to experience high turnover. If instead they ask, "What does only a human bring?", they protect and amplify their human capital. Authenticity and trust become strategic assets. Alan's article raises a paradox: people found AI-generated music more arousing, yet human-composed music felt more familiar. That suggests a gap between novelty and connection. In a world of AI production, human stories, human flaws, and human context become competitive differentiators. Organizations that lean into their human identity, align culture, ethics, and narrative, and resist the "machine everything" push will build stronger trust and attachment. Strategy must account for the human-machine continuum, not just the machine. Leaders often frame AI as "how do we use this tool to generate faster/cheaper?" But the music story shows the existential side: "What if the machine becomes creator?" and "What if our work becomes obsolete?" The strategic imperative is twofold: (a) define how humans and machines co-create value, and (b) define safeguards. What struck me, reading Cross' article and thinking back to that moment in my twenties when a gatekeeper told a young band we needed tens of thousands of invisible followers to be worthy, is that technology has always mediated who gets seen. What's different now is the scale: we're not just gatekeeping humans—we're replacing them in the system. And that invites a stubborn but necessary question: What role do we want people to play in a world of perfect synthesis and endless content? If we allow efficiency alone to steer the ship, we will build a culture optimized for frictionless consumption rather than lived experience; a world full of sound, but not necessarily any music. The point isn't to fear AI or resist progress. It's to remember what makes human work meaningful in the first place.
Creativity isn't merely output. It's the accumulated weight of effort, failure, identity, memory, taste, temperament, private doubt, and public courage. It's the quiet, unglamorous process of becoming someone capable of expression. So, as AI becomes a collaborator, producer, and in some cases a creator, our responsibility isn't to compete with it on volume or speed but to double down on what only people can offer: perspective, dissonance, care, imperfection, and soul. Not to mention community.
- Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 | voyAIge strategy
Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 Reflecting on Current and Forthcoming Shifts in HR By Tommy Cooke Nov 29, 2024 Key Points: AI has revolutionized HR recruitment and hiring in Canada by reducing the time-to-hire and enhancing the candidate experience Employee onboarding and learning management have been transformed with AI, streamlining manual tasks and enhancing personalization The year 2025 will see further developments in AI for HR in Canada, including enhanced Employee Experience (EX) management, balancing staffing and workload distribution, and challenging perceptions of trust in leadership Why Do AI Trends Matter? Looking back to understand AI trends may seem curious. By the fall of 2023, chatbots were barely on the scene. Yet in less than a year, Artificial Intelligence (AI) has exploded in ways that have fundamentally changed the course of daily operations and growth strategies of organizations across virtually every industry around the world. The same holds particularly true for Human Resources (HR) in Canada. Indeed, 2024 has been an important year in terms of AI transformation in HR. It's crucial to understand these trends because they highlight the evolving nature of the HR landscape, and 2024's trends indicate where the industry is heading. The HR world is often characterized as a manual world filled with time-consuming processes. Recruitment often involves hours screening resumes, onboarding relies on stacks of checklists, and performance management is limited to a frequency of reviews that can often miss the subtle nuances of an employee's growth. Employee engagement is often driven by surveys, and workforce planning requires significant data analysis efforts that rely on quickly antiquating methods. Since AI entered the landscape this year, it has promised efficiency, scalability, and intelligent decision-making. The appeals of AI in the HR world are numerous, but the most significant tend to be reducing armchair swiveling, providing personalized insights, and positioning HR professionals to be strategic drivers of an organization - no longer merely administrators. A survey of 2,900 HR leaders and executives across Canada and the United States found that AI adoption has become mainstream for HR teams, with smaller companies even more reliant on the technology than their larger competitors. Here are the top three AI in HR trends in Canada throughout 2024: 1. Recruitment and Hiring With the average time-to-hire north of 36 days, recruitment is seeing the most visible transformation through AI and automation. In fact, 79 percent of organizations in North America are currently using AI and automation tools in recruitment. At the time of writing this Insight, there are 81 AI recruiting startups in Canada. AI chatbots have been increasingly used this year in the area of HR to handle candidate engagement, manage application updates, conduct preliminary assessments, and help interviewers prepare for interviews. From AI-based video interviewing software conducting sentiment analysis to AI-driven employee matching, training, talent identification, and professional development, 2024 has been a busy year in terms of AI's impact on the HR recruitment and hiring scene. 2. Employee Onboarding The often cumbersome and paperwork-intensive process of onboarding has changed significantly because of AI.
In the United States, 68 percent of organizations are already using AI in the onboarding process, with 41 percent of respondents in a recent study anticipating AI-driven onboarding by August 2025. In Canada, the stakes around AI use for onboarding are high. Throughout the year, managers have reported that long hiring cycles are leading to high turnover due to heavy workloads, higher recruitment costs, losing top candidates to competitors, and delayed or cancelled projects. AI assists by streamlining manual tasks like document submission and compliance training, while AI chatbots are playing a role in guiding new hires throughout the onboarding process. Chatbots are also assisting in the production of FAQs. 3. Learning and Development Learning Management Systems (LMS) are familiar platforms to HR professionals. They can be tremendously useful in organizing, creating, and delivering training and educational content in ways that are insightful and easy to use for both HR and the workforce these platforms serve. A key transformation throughout 2024 is the onboarding of AI functionality into LMS. By the end of 2024, it is expected that 47 percent of all LMS tools will feature AI capabilities. In Canada, technology-driven use in teaching and learning is expected to grow well into 2027, with GenAI being a potent driver for change. The continued demand for flexible learning and hybrid offerings is in part due to the impact of GenAI on educational industries, and so AI's entrance into LMS this year was not a surprise. We have already seen AI assist on LMS platforms by tailoring learning paths based on individual skills and growth goals and adapting to preferred learning styles as well. Moreover, AI is being positioned to reduce the administrative burden by automating enrollment, scheduling, and grading tasks. As we have seen in neighboring platforms, data-driven insights on learner progress and training program effectiveness are among the many AI elements that are effectively transforming HR professionals into business leaders. Foreshadowing AI-HR Trends in Canada for 2025 Trends in the United States, Europe, and the rest of the world serve as important indicators of AI solution marketplace transformations likely to arrive in 2025. Here are three trends Canada can expect to see next year: 1. AI Intersecting with Employee Experience (EX) Economic recovery priorities have quickly shifted organizations' focus toward cost reduction and efficiency boosts, but at a considerable cost in terms of lost emphasis on inclusion, equity, mental health, and workplace schedule flexibility. To compensate for an increase in workplace challenges and an overall shift away from the quality of employee experience, AI is increasingly adopted by employees to recover their own workplace experience. As we published recently, AI that can come to the workplace in an employee's pocket is an attractive assistant for getting through a difficult workday. This trend from 2024 will likely increase throughout 2025. With Gartner reporting that 60 percent of HR leaders believe their current technology stack hinders rather than improves employee experience, organizations will do well to critically reflect upon what role their own AI and their employees' AI play in improving the experience of work. Given the impact AI is making in niche HR workflows already, expect to see more AI targeted toward employee mental wellness to predict burnout, measure morale, and isolate what aspects of the workplace culture are not working. 2.
AI to Balance Staffing and Workload More than half of organizations surveyed by SHRM in 2024 reported that their HR units are understaffed. With SHRM also reporting that only 19 percent of HR executives expect their HR unit size will increase next year and that 57 percent of HR professionals report working beyond normal capacity, AI-driven task automation will continue to offset HR labour shortages well into and beyond 2025. As organizations determine which human inputs add the most impact and value against where they believe AI can fill in and even take the lead, we can also expect to see a sharper rise in the propagation of AI Agents. As our Co-Founder Christina Catenacci explains, AI Agents are expected to work without the need for human supervision. They will play a significant role in reframing the conversation that an organization will have about balancing staffing and workloads. 3. AI's Role in Challenging and Promoting Trust of Leadership Changes are stressful. Employees have been burning out and disconnecting at alarming rates throughout 2024. While EX will be a challenge to maintain and foster throughout 2025, leadership will also likely continue to face elevated pressure to maintain trust and transparency. Layoffs, economic uncertainty, global conflict, and the likelihood of political turmoil have created a widespread sense of job insecurity for many workers. How leaders navigate these issues, particularly when the adoption of AI is often perceived as a barrier to trust, will certainly make the question of AI's role in HR leadership a difficult one to answer. With 46 percent of HR leaders believing AI boosts their analytics capabilities and 41 percent of business leaders expecting to redesign business processes via AI in the next few years, AI will be implemented in more critical business lines and operations across more industries throughout 2025. This means that leadership must find ways to lead their workers with confidence. Thought Leadership will play a critical role in shaping conversations, easing debates, and softening cynicism. It involves creating and distributing key content in the form of blogs, podcasts, videocasts, and newsletters that clarify what AI is versus what AI is not, how and why it is proven to improve workplace processes and productivity, and how it can be implemented without necessitating layoffs. In order to protect, foster, and maintain trust with employees, leaders need to exemplify themselves as commanders of AI, and not merely its passengers. We expect to see a significant rise in AI-oriented Thought Leadership content generation throughout 2025. 2024 and 2025: a Period of Significant AI Transformation in HR AI's influence on HR in Canada throughout 2024 has been nothing short of transformative. From streamlining recruitment and onboarding to tailoring employee learning experiences, AI has repositioned HR professionals to move beyond administrative tasking and take on more strategic roles. As we look toward 2025, global trends will continue to shape the Canadian HR landscape, requiring leaders to adopt ethical and inclusive AI practices while maintaining a human-centered approach that reflects critically on what it means to be a trustworthy leader who understands and embraces AI. Organizations that embrace these technologies and lessons thoughtfully will be well-positioned to foster a more productive and engaging workplace in the years ahead.
- New York Times Sues OpenAI and Microsoft for Copyright Infringement | voyAIge strategy
New York Times Sues OpenAI and Microsoft for Copyright Infringement The NYTimes lawsuit has the potential to significantly shape copyright and AI policy By Christina Catenacci Aug 2, 2024 Key Points: The Times has sued both OpenAI and Microsoft, alleging copyright infringement, trademark dilution, and unfair competition by misappropriation OpenAI has responded to the Complaint on its website stating, "We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit" A decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools On December 27, 2023, the New York Times (The Times) sued OpenAI and Microsoft (Defendants) for copyright infringement in the United States District Court in New York. In its Complaint, The Times explained that its work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure their journalism meets the highest standards of accuracy and fairness. In fact, The Times has evolved into a diversified multimedia company with readers, listeners, and viewers around the globe and more than 10 million subscribers. But according to The Times, the joint efforts of the Defendants have harmed The Times, as seen in lost advertising revenue and fewer subscriptions, among other harms. The Times alleges that OpenAI unlawfully used its works to create artificial intelligence products. The Times argued in its Complaint that unauthorized copying of The Times works without payment to train Large Language Models (LLMs) is a substitutive use that is "not justified by any transformative purpose". The Times has sued the Defendants as follows: Copyright infringement against all Defendants: by building training datasets containing millions of copies of The Times works (including by scraping copyrighted works from The Times's websites and reproducing them from third-party datasets), the Defendants have directly infringed The Times's exclusive rights in its copyrighted works. Also, by storing, processing, and reproducing the training datasets containing millions of copies of The Times works to train the GPT models on Microsoft's supercomputing platform, Microsoft and the OpenAI Defendants have jointly directly infringed The Times's exclusive rights in its copyrighted works Vicarious copyright infringement against Microsoft and OpenAI: Microsoft controlled, directed, and profited from the infringement perpetrated by the OpenAI Defendants. Microsoft controls and directs the supercomputing platform used to store, process, and reproduce the training datasets containing millions of The Times works, the GPT models, and OpenAI's ChatGPT offerings. The Times alleges that Microsoft profited from the infringement perpetrated by the OpenAI Defendants by incorporating the infringing GPT models trained on The Times works into its own product offerings, including Bing Chat Contributory copyright infringement against Microsoft: Microsoft materially contributed to and directly assisted in the direct infringement that is attributable to the OpenAI Defendants.
The Times alleged that Microsoft provided the supercomputing infrastructure and directly assisted the OpenAI Defendants in: building training datasets containing millions of copies of Times Works; storing, processing, and reproducing the training datasets containing millions of copies of The Times works used to train the GPT models; providing the computing resources to host, operate, and commercialize the GPT models and GenAI products; and providing the Browse with Bing plug-in to facilitate infringement and generate infringing output. The Times said that Microsoft was fully aware of the infringement and OpenAI’s capabilities regarding ChatGPT-based products Digital Millennium Copyright Act–Removal of Copyright Management Information against all Defendants : The Times included several forms of copyright-management information in each of The Times’s infringed works, including: copyright notice, title and other identifying information, terms and conditions of use, and identifying numbers or symbols referring to the copyright-management information. However, The Times claimed that without The Times’s authority, the Defendants copied The Times’s works and used them as training data for their GenAI models. The Times believed that the Defendants removed The Times’s copyright-management information in building the training datasets containing millions of copies of The Times works, including removing The Times’s copyright-management information from Times Works that were scraped directly from The Times’s websites and removing The Times’s copyright-management information from The Times works reproduced from third-party datasets. Moreover, the Times asserted that the Defendants created copies and derivative works based on The Times’s works, and by distributing these works without their copyright-management information, the Defendants violated the Copyright Act . Unfair competition by misappropriation against all Defendants : by offering content that is created by GenAI but is the same or similar to content published by The Times, the Defendants’ GPT models directly compete with The Times content. The Defendants’ use of The Times content encoded within models and live Times content processed by models produces outputs that usurp specific commercial opportunities of The Times. In addition to copying The Times’ content, it altered the content by removing links to the products, thereby depriving The Times of the opportunity to receive referral revenue and appropriating that opportunity for Defendants. The Times now competes for traffic and has lost advertising and affiliate referral revenue Trademark dilution against all Defendants : in addition, The Times has registered several trademarks and argued that the Defendants’ unauthorized use of The Times’s marks on lower quality and inaccurate writing dilutes the quality of The Times’s trademarks by tarnishment. The Times asserts that the Defendants are fully aware that their GPT-based products produce inaccurate content that is falsely attributed to The Times, and yet continue to profit commercially from creating and attributing inaccurate content to The Times. The Defendant’s unauthorized use of The Times’s trademarks has resulted in several harms including damage to reputation for accuracy, originality, and quality, which has and will continue to cause it economic loss. The Times has asked for statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity. 
Additionally, The Times has requested that there be a jury trial. What can we take from this development? This has the makings of a landmark copyright case and can go a long way to shape copyright and AI policy for years to come. In fact, some have referred to this case as, “The biggest IP case ever”. In terms of a response to the Complaint, OpenAI has made a public statement in January 2024 on its website stating, “We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit”. The company set out its position as follows: “Our position can be summed up in these four points, which we flesh out below: We collaborate with news organizations and are creating new opportunities Training is fair use, but we provide an opt-out because it’s the right thing to do “Regurgitation” is a rare bug that we are working to drive to zero The New York Times is not telling the full story” Interestingly, OpenAI has stated that training AI models using publicly available internet materials is “fair use”, as supported by long-standing and widely accepted precedents. It stated, “We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness”. However, the Defendants in this case may run into problems with this argument because The Times’ copyrighted works are behind a paywall. The Defendants are familiar with what this means—it is necessary to pay in order to read (with subscriptions) or use (with proper licensing). It is concerning that OpenAI refers to regurgitation (word-for-word memorization and presentation of content) as a bug that they are working on, but then says, “Because models learn from the enormous aggregate of human knowledge, any one sector—including news—is a tiny slice of overall training data, and any single data source—including The New York Times—is not significant for the model’s intended learning”. Essentially, OpenAI has downplayed the role that The Times’ works play in the training process, yet not addressing The Times’ arguments that the Defendants have ingested millions of copyrighted works without consent or compensation and have been outputting The Times works practically in their entirety. Another point of interest is that, in the Complaint, The Times stated that it reached out to OpenAI in order to build a partnership, but the negotiations never resulted in a resolution. However, OpenAI has stated in its website post that the discussions with The Times had appeared to be progressing constructively. It said that the negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, where The Times would gain a new way to connect with their existing and new readers, and their users would gain access to The Times reporting. It stated, “We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us”. Clearly, there are two different sides to this story, and the court will need to sort out what took place in order to make a determination. Ultimately, this case will have a significant impact on the relationship between generative AI and copyright law, particularly with respect to fair use . 
In particular, a decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools, such as OpenAI's ChatGPT and Microsoft's Bing Chat (Copilot), both of which are built on top of OpenAI's GPT model.
- Whose Ethics Matter Most and Why | voyAIge strategy
Whose Ethics Matter Most and Why Ethics is a declaration of whose voices, opinions, and values matter By Tommy Cooke Sep 19, 2024 Key Points: Engage with voices and ideas outside your organization as a litmus test for your ethical priorities Treat AI ethics frameworks as living documents that are organic and change over time Strive to think globally - not locally When we talk about ethics in AI, it’s easy to overlook their underlying complexity. Many organizations treat ethics as a straightforward process: identify some standards, create some policies, and follow compliance. But this approach overlooks a key reality. This reality is that ethics are subjective . Ethics reflect values, and values differ in context, culture, and stakeholder expectations. This subjectivity can be particularly challenging for organizations operating in diverse industries and global markets. In AI development and AI use, writing ethics is a declaration . It’s a statement of your organization's beliefs about right and wrong, good and bad. But it’s also a reflection of the values you’re embedding into your AI systems, and the choices you make along the way are critical. To make things a bit more complicated, one key question often goes unanswered: Whose ethics are we prioritizing? In an increasingly interconnected world, it’s vital to consider whose perspectives are included - and whose might be missed. Ethics is a Living Framework At its core, ethics are codified morals. They are an attempt to translate abstract ideas and values into solid standards for behaviour. However, in AI, where the implications of decisions are far-reaching, the ethics landscape is complex and shifting. For example, in Europe, the General Data Protection Regulation (GDPR) provides a comprehensive ethical framework for data use. It is largely individual-centric , focused on protecting the privacy and rights of individuals, requiring transparency and consent for how data is collected and used. In contrast, the U.S. takes a more business-centric approach to AI ethics. There is less comprehensive regulation, and ethical standards tend to focus on enabling innovation while mitigating harm through self-regulation and sector-specific guidelines . For companies operating in both the US and the EU, this difference creates a challenge: ethical beliefs around privacy, autonomy, and transparency can be misaligned depending on where you are, resulting in inconsistencies in AI governance. The Subjectivity of Ethics The subjective nature of ethics means that what is considered ethical in one context might not be viewed the same way elsewhere. For instance, the concept of fairness is interpreted differently across cultural boundaries . In Western countries , fairness in AI often focuses on preventing discrimination based on race, gender, or disability. In contrast, in China or other parts of East Asia, fairness might emphasize collective welfare and societal harmony , even if it means sacrificing some degree of individual privacy. This raises critical questions for organizations: when you create an ethical framework, whose values are you representing? Which ones do you prioritize, and why? And, just as importantly, whose values are being left out? As businesses develop AI systems that impact people across borders, the need for a more inclusive and adaptable ethical framework becomes apparent. Without it, companies risk ethical blind spots that can lead to reputational damage, loss of trust, and, in extreme cases, legal action. 
Questions for Organizations So, how can organizations navigate this complex ethical landscape? Here are three essential questions to ask: Whose voices are included in your ethics frameworks? Consider the diversity of stakeholders impacted by your AI systems, including employees, customers, communities, and global regulators. Do your ethical standards reflect this diversity, or are they shaped by the dominant voices within your organization? It is becoming increasingly clear around the globe that inclusive frameworks tend to be more robust and resilient because they account for a wider range of perspectives. How often do you revisit your ethical guidelines? Ethics cannot be static. As technology evolves and societal expectations shift, so too must your ethical frameworks. For example, the rise of generative AI and large language models has created new ethical dilemmas around intellectual property, misinformation, and AI autonomy. Organizations should regularly assess whether their ethical guidelines remain relevant and effective. Are you balancing internal and external values? Often, companies prioritize their internal values—whether it’s innovation, efficiency, or profitability—over external stakeholder concerns. But ethics are about building trust , and to do so, organizations need to align their values with those of the communities they serve. Practical Advice for Ethical AI Governance Building an adaptable, inclusive ethical framework doesn’t have to be overwhelming. Here are three practical tips for organizations looking to strengthen their AI ethics: Engage with external voices. Ethics should be informed by a diversity of perspectives, both internal and external. Regularly engage with stakeholders—including regulators, customers, and community leaders—to ensure that your ethical frameworks are inclusive and reflective of broader societal values. Make ethics a living document. Ethical standards should be dynamic, not static. Establish a regular process for reviewing and updating your ethics policies to reflect the latest technological developments, regulatory changes, and societal shifts. Think globally, act locally. Your ethics should be adaptable to different cultural and legal contexts. Strive for a balance between global standards and local values to ensure your AI systems are both responsible and contextually appropriate. Ethics as a Strategy In the end, ethics in AI is not just about doing what’s right. It’s about being strategic. In a world where AI systems can influence lives across continents, ethical governance is about building trust, ensuring accountability, and safeguarding the future of your organization. By asking the right questions and adopting a flexible, evolving approach, companies can develop ethical frameworks that are not only reflective of their values but adaptable to a rapidly changing world. Previous Next
- What is Human in the Loop? | voyAIge strategy
What is Human in the Loop? Understanding the Basics By Christina Catenacci Oct 18, 2024 Key points A Human-in-the-Loop approach is the selective inclusion of human participation in the automation process There are some pros to using this approach, including increased transparency, an injection of human values and judgment, less pressure for algorithms to be perfect, and more powerful human-machine interactive systems There are also some cons, like Google's overcorrection with Gemini, leading to historical inaccuracies Humans-in-the-Loop - What Does That Mean? A philosopher at Stanford University asked a computer musician if we will ever have robot musicians, and the musician was doubtful that an AI system would be able to capture the subtle human qualities of music, including conveying meaning during a performance. When we think about Humans-in-the-Loop, we may wonder exactly what this means. Simply put, when we design with a Human-in-the-Loop approach, we can imagine it as the selective inclusion of human participation in the automation process. More specifically, this could manifest as a process that harnesses the efficiency of intelligent automation while remaining open to human feedback, all while retaining a greater sense of meaning. If we picture an AI system on one end, and a human on the other, we can think of AI as a tool in the centre: where the AI system asks for intermediate feedback, the human would be right there (in the loop) providing some minor tweaks to refine the instruction and giving additional information to help the AI system be more on track with what the human wants to see as a final output. There would be arrows that go from the human to the machine and from the machine to the human. By designing this way, we have a Human-in-the-Loop interactive AI system. We can also view this type of design as a way to augment the human—serving as a tool, not a replacement. So, if we take the example of computer music, we could have a situation where the AI system starts creating a piece, and then the human musician could provide some "notes" (pun intended) for the AI system, adding meaning and ultimately creating a real-time, interactive machine learning system. What exactly would the human do? The human musician would be there to iteratively, efficiently, and incrementally train tools by example, continually refining the system. As a recording engineer and producer myself, I like and appreciate the idea of AI being able to take recorded songs and separate them into their individual stems or tracks, like keys, vocals, guitars, bass, and drums. This is where the creativity begins… What are the pros and cons of this approach?
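Before weighing the pros and cons, it may help to see the shape of the loop written out. The short sketch below is purely illustrative: the generate_draft and human_review functions are hypothetical stand-ins for a real model call and a real human reviewer, not any particular product's API. It simply shows the cycle described above: the machine proposes, the human accepts or corrects, and each correction feeds the next attempt.

```python
# Minimal, illustrative sketch of a Human-in-the-Loop cycle.
# generate_draft and human_review are hypothetical stand-ins, not a real AI API.

def generate_draft(prompt: str, feedback_history: list[str]) -> str:
    """Stand-in for a model call that takes the prompt plus all prior human feedback."""
    notes = "; ".join(feedback_history) if feedback_history else "none yet"
    return f"draft for '{prompt}' (feedback applied: {notes})"

def human_review(draft: str) -> tuple[bool, str]:
    """The human in the loop: accept the draft, or return a correction."""
    print(f"\nProposed draft:\n  {draft}")
    if input("Accept? [y/N]: ").strip().lower() == "y":
        return True, ""
    return False, input("What should change? ")

def human_in_the_loop(prompt: str, max_rounds: int = 5) -> str:
    feedback_history: list[str] = []
    draft = generate_draft(prompt, feedback_history)
    for _ in range(max_rounds):
        accepted, correction = human_review(draft)
        if accepted:
            return draft  # the human signs off; the loop closes
        feedback_history.append(correction)  # the correction becomes the next training signal
        draft = generate_draft(prompt, feedback_history)
    return draft  # give back the latest attempt if the rounds run out

if __name__ == "__main__":
    print("\nFinal output:", human_in_the_loop("eight-bar melody in A minor"))
```

In practice the "review" step could just as easily be a musician adjusting a generated stem rather than typing text; the structure of the loop stays the same.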
While there are several benefits of using this approach, here are a few: There would be considerable gains in transparency: when humans and machines collaborate, there is less of a black box when it comes to what the AI is doing to get the result. There would be a greater chance of incorporating human judgment: human values and preferences would be baked into the decision-making process so that they are reflected in the ultimate output. There would be less pressure to build the perfect algorithm: since the human would be providing guidance, the AI system only needs to make meaningful progress to the next interaction point—the human could show a fermata symbol, so the system pauses and relaxes (pun intended). There could be more powerful systems: compared to fully automated or fully human manual systems, Human-in-the-Loop design strategies often lead to even better results. Accordingly, it would be highly advantageous to think about AI systems as tools that humans use to collaborate with machines and create a great result. It is thus important to value human agency and enhanced human capability. That is, when humans fine-tune (pun intended) a computer music piece in collaboration with an AI system, that is where the magic begins. On the other hand, there could be cons to using this approach. Let us take the example of Google's Bard (later renamed Gemini)—a mistake that cost the company dearly because it lost billions of dollars in shares and was quite an embarrassment. What happened? Gemini, Google's new AI chatbot at the time, began answering queries incorrectly. It is possible that Google was rushing to catch up to OpenAI's new ChatGPT at the time. Apparently, there were issues with errors, skewed results, and plagiarism. The one issue that applies here is skewed results. In fact, Google apologized for "missing the mark" after Gemini generated racially diverse Nazis. Google was aware that GenAI had a history of amplifying racial and gender stereotypes, so it tried to fix those problems through Human-in-the-Loop design tactics, but the correction went too far. In another example, Google generated a result following a request for "a US senator from the 1800s" by providing an image of Black and Native American women (the first female senator, a white woman, served in 1922). Google stated that it was working on fixing this issue, but the historically inaccurate picture will always be burned in our minds and make us question the accuracy of AI results. While it is being fixed, certain images will not be generated. We see, then, what can happen when humans do not do a good job of portraying history correctly and in a balanced manner. While these examples are obvious, one may wonder what will happen when there are only minor issues with diverse representation or inaccuracies due to human preferences… Another con is that Humans-in-the-Loop may not know how to eradicate systemic bias in training data. That is, some biases could be baked right into the training data. We may question: who is identifying this problem in the training data? How is the person finding these biases? And who is making the decision to rectify the issue, and how are they doing it? One may also question whether tech companies should be the ones to be assigned these tasks. With the significant issues of misinformation and disinformation, we need to understand who the gatekeepers are and whether they are the appropriate entities to do this.
- 10-year Moratorium on AI State Regulation | voyAIge strategy
10-year Moratorium on AI State Regulation What Could Possibly go Wrong? By Christina Catenacci, human writer May 29, 2025 Key Points On May 21, 2025, Bill HR 1, One Big Beautiful Bill Act was introduced into the 119th Congress. Section 43201 of the bill is concerning, as it would allow for a 10-year Moratorium on state enforcement of their own AI legislation At this point, the bill has passed in the House, and needs to still pass in the Senate On May 21, 2025, Bill HR 1, One Big Beautiful Bill Act was introduced into the 119th Congress. This is a very lengthy and dense bill—the focus of this article is on Part 2--Artificial Intelligence and Information Technology Modernization. More specifically, under Subtitle C—Communications, Part 2 contains a few AI provisions that need to be discussed, as they are very concerning. What is in Part 2? Section 43201 states that: There would be funds appropriated to the Department of Commerce for fiscal year 2025, out of any funds in the Treasury not otherwise appropriated, $500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems in accordance with the next provision dealing with authorized uses The Secretary of Commerce would be required to use the funds for the following: to replace or modernize, within the Department of Commerce, legacy business systems with state-of-the-art commercial AI systems and automated decision systems; to facilitate, within the Department of Commerce, the adoption of AI models that increase operational efficiency and service delivery; to improve, within the Department of Commerce, the cybersecurity posture of Federal information technology systems through modernized architecture, automated threat detection, and integrated AI solutions No state or political subdivision would be allowed to enforce any law or regulation regulating AI models, AI systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act. The exception to this is the Rule of Construction: the law would not prohibit the enforcement of any law that: -removes legal impediments to, or facilitates the deployment or operation of, an AI model, AI system, or automated decision system -streamlines licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems -does not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, systems, or automated decision systems unless such requirement is imposed under Federal law, or in the case of a requirement imposed under a generally applicable law, is imposed in the same manner on models and systems, other than AI models, AI systems, and automated decision systems, that provide comparable functions to AI models, AI systems, or automated decision systems; and does not impose a fee or bond unless that fee or bond is reasonable and cost-based, and under such fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions Why is Section 43201 Concerning? 
In the context of AI, while it is encouraging that section 43201 sets aside funds to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems, the proposed provision is troubling because it suggests that there would be a 10-year Moratorium: states would be prevented from enforcing AI legislation for 10 years unless they show that they fall under the exception, involving the Rule of Construction. More precisely, state AI laws would not be allowed to be enforced unless they: Remove legal impediments regarding AI models, AI systems, or automated decision systems Streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems Do not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, AI systems, or automated decision systems unless that requirement is imposed under Federal law or, in the case of a requirement imposed under a generally applicable law, is imposed in the same manner on models and systems, other than AI models, AI systems, and automated decision systems, that provide comparable functions to AI models, AI systems, or automated decision systems, and Do not impose a fee or bond unless it is reasonable and cost-based, and under the fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions Essentially, all of the progressive AI laws that have been created by forward-thinking states may ultimately be unenforceable, unless the states can jump through the hoops and show that their laws fall within the exception. What could passing a federal AI bill like this mean? The implications of states not being able to pass and enforce their own AI laws could be very risky: no other jurisdiction is doing this. Just ask the EU, a jurisdiction that has already created cutting-edge privacy and AI legislation, which is known around the world as the gold standard. Just recently at the AI Summit in Paris, Vice President JD Vance warned global leaders and tech industry executives that "excessive regulation" could cripple the rapidly growing AI industry, in a rebuke to European efforts to curb AI's risks. Yes, when giving the AI policy speech, he pretty much said that unfettered innovation should trump AI regulation (pun intended). This move came in stark contrast to what was being decided at the AI Summit—over 60 countries pledging to: Promote AI accessibility to reduce digital divides Ensure AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all Make innovation in AI thrive by enabling conditions for its development and avoid market concentration driving industrial recovery and development Encourage AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth Make AI sustainable for people and the planet Reinforce international cooperation to promote coordination in international governance On the other hand, Vance called all of this "excessive regulation". Unfortunately for VP Vance, he will need to take some time to reflect on why certain laws are created when there are rapid, potentially risky advances that could cause harm.
Criticism by Musk and Pelosi—"Robinhood in reverse" If we zoom out, we see that there are several other troubling proposed provisions, including those that aggravate wealth inequality in the United States. In fact, Nancy Pelosi has referred to the bill as "Republican Robinhood in reverse". Moreover, with cuts to Medicare and Medicaid, it is not surprising that Pelosi would question what is being proposed. Even Elon Musk has criticized the recent move as "disappointing". Why? Many of the cuts included in the bill would help fund tax cuts for the rich and border security. More specifically, Musk made a point of saying that the massive spending bill would undermine his work at the Department of Government Efficiency (DOGE). What is the Status of the Bill? At this point, the bill has passed in the House but still needs to pass in the Senate. President Donald Trump and Speaker Mike Johnson are hopeful for minimal modifications to the bill in the Senate; however, some believe that there is enough resistance to halt the bill unless there are significant changes. For example, Republican Senator Rand Paul told "Fox News Sunday" that he will not vote for the legislation unless the debt ceiling increase is stripped, since he stated that it would "explode deficits." The main sticking points right now involve the numerous tax cuts, cuts to Medicaid and requirements that disabled people work, tax deductions at the state and local levels, and cuts to food assistance programs like the Supplemental Nutrition Assistance Program. It is hard to tell whether the bill will pass as is in the Senate; we will keep you posted on the 10-year Moratorium that is proposed.
- Media Room (List) | voyAIge strategy
VS Media Room Find out more about VS. Announcements, news, interactive content. Everything you need in one place. VS Publishes with the International Association of Privacy Professionals (IAPP) How do small businesses govern AI, exactly? Isn't governance just for big businesses? We got the conversation started on how any organization, regardless of its size, can manage AI - affordably and effectively. The article is titled, "Right-sizing AI governance: Starting the conversation for SMEs". Learn More VS Publishes with the Business Technology Association In an exclusive magazine article for BTA members, VS discusses the importance of AI policies - especially for technology vendors and Managed Service Providers. VS also put on a webinar for BTA members discussing this topic. We also just published another article on communicating AI use, which will appear in the July issue of the BTA's Office Technology magazine. Learn More Featured Post in IN2Communication's Blog What is an AI policy, exactly? We're excited to be featured in IN2's blog, exploring why AI policies are crucial for your business. Learn More Guests at the Canada Club of London We were honoured to be invited by the Canada Club of London to speak about all things AI governance. Learn More VS is Guest for 4 Episodes of IN2's The Smarketing Show Podcast We loved being guests for four episodes of IN2's The Smarketing Show video podcast! We did four episodes: Beyond the Buzzwords, Workplace Risks and Rewards, The AI Policy Brief, and Why Thought Leadership? Visit The Smarketing Show's YouTube page to watch now! Learn More
- HR AI Solutions | voyAIge strategy
Help your organization overcome uncertainty about AI in the workplace. Alleviate Concerns About AI Transform Worries into Trust through Training, Thought Leadership, and AI Policies Book a Free Strategy Session Are your Employees Anxious About AI? Bringing AI into the workplace can lead to significant employee concerns. As an HR leader, you may be dealing with employees who are worried about losing their jobs, are resistant to adopting AI tools, and are uncertain about the ethical use of AI in their role. Reassure Your Team Work with us to deliver customized Training, Thought Leadership, and Policies and Procedures to address employees' concerns and facilitate smoother AI implementation. By working with us, you can: Train your Staff & Executives Equip your HR team with the skills and knowledge needed to understand and embrace new technologies Generate Thought Leadership Insights Provide cutting-edge insights and expert commentary on AI trends, enabling you to communicate the benefits of AI clearly while fostering trust and acceptance Implement AI Policies & Procedures Collaborate with us to develop ethical guidelines for AI use, ensuring your employees feel safe and valued Book a Free Strategy Session
- Tesla Class Action to Move Ahead | voyAIge strategy
Tesla Class Action to Move Ahead Advanced Driver Assistance Systems Litigation Proceeds in California By Christina Catenacci, human writer Aug 22, 2025 Key Points: On August 18, 2025, a United States District Judge granted a motion for class certification and appointed a representative plaintiff of the certified classes The court narrowed the classes and considered whether it was appropriate to hear the plaintiffs’ claims together in one class action The lesson here is that businesses need to be careful about what kinds of statements they make about their technology’s capabilities, or else they could face litigation from many plaintiffs, potentially leading to a class action On August 18, 2025, a United States District Judge, Rita F. Lin, granted a motion for class certification and appointed a representative plaintiff of the certified classes. In addition, she appointed class counsel and set a pathway for next steps leading to the case management conference. Let this story serve as a warning for businesses—be careful about what statements you make about the capabilities of your technology—whether it is on Twitter, YouTube, or any other channel. If you have a goal that you are striving to achieve, then say that. If you are promoting a new product with extensive capabilities, then do that. Just try not to make claims that are untrue, unless you want to be on the hook for those misleading misrepresentations. What is the Class Action About? Tesla did not sell its vehicles through third parties and did not engage in traditional marketing or advertising; in fact, the only way one could buy a Tesla vehicle was through its website. Tesla reached consumers also through its own YouTube channel, Instagram account, press conferences, sales events, marketing newsletters, and Elon Musk’s personal Twitter account. Additionally, customers could buy optional technology packages that were designed to enable autonomous vehicle operation. For example, customers could buy the “Enhanced Autopilot Package (EAP)” that had features such as Autopark, Dumb Summon, Actually Smart Summon, and Navigate on Autopilot (highway). Also, the “Full Self-Driving (FSD) Package” had all of the Enhanced Autopilot features, plus Stop Sign and Traffic Signal recognition and Autosteer on Streets. The EAP was offered as a stand-alone package only until the first quarter of 2019, and again for a limited period from the second quarter of 2022 through the second quarter of 2024; at other times, these features were only available as part of the FSD Package. Essentially, claimants were arguing that Tesla Inc. (Tesla) made misleading statements about the full self-driving capability of its vehicles. The plaintiffs alleged that they relied on two types of misrepresentations that Tesla made: that Tesla vehicles were equipped with the hardware necessary for full self-driving capability (“Hardware Statement”) that a Tesla vehicle would be able to drive itself across the country within the following year (“Cross-Country Statement”) When it came to the Hardware Statement, in October, 2016, Musk said at a press conference that second-generation autonomous driving hardware would have hardware necessary for Level 5 Autonomy (“literally meaning hardware capable of full self-driving for driver-less capability”). These statements were also on Tesla’s website and Tesla’s November 2016 newsletter. There was even a Tesla blog post dated October 2016 and a Tesla quarterly earnings call in May 2017 containing these statements. 
Musk even made comments that the self-driving hardware would enable full self-driving capability at a safety level that was greater than a human driver. Since 2016, the hardware had been updated to version 3.0 and version 4.0—these upgrades had a more powerful computer and cameras. In a 2024 earnings call, Musk stated that a further hardware upgrade would likely be necessary for customers who bought FSD with prior hardware configurations: “I mean, I think the honest answer is that we’re going to have to upgrade people’s hardware 3 computer for those that have bought full self driving. And that is the honest answer. And that’s going to be painful and difficult, but we’ll get it done. Now I’m kind of glad that not that many people bought the FSD package” When it came to the Cross-Country Statement, Musk stated at a 2016 press conference that people would be able to go from LA to New York—going from home in LA to dropping someone off in Times Square and then having the car park itself, without the need for a single touch including the charger. Musk posted versions of this claim on his personal Twitter account three times. In January 2016, Musk tweeted that “in 2 years, summon should work anywhere connected by land & not blocked by borders, eg you’re in LA and the car is in NY”. When asked for an update on these claims in May, 2017, Musk said that the demo was still on for the end of the year, and things were “just software limited”. And in May, 2019, when asked whether there were still plans to drive from NYC to LA on full autopilot, Musk said that he could have gamed this type of journey the previous year, but when he did it in 2019, everyone with Tesla Full Self-Driving would be able to do it too. That 2019 tweet generated about 2,000 engagements compared to 300 engagements following the 2016 tweet. In October, 2016, Tesla showed a video where a Tesla vehicle was driving autonomously (it is still on the Tesla site) and a similar video was shown on YouTube. Interestingly, Tesla does not dispute that any of the statements or videos were made—it simply states that the FSD could not be obtained until the completion of validation and regulatory approval. However, the plaintiff presented evidence that Tesla had not applied for regulatory approval to deploy a Society of Automotive Engineers Level 3 or higher vehicle in California, which was a necessary step for approval of a full self-driving vehicle. In terms of the technical claims, the plaintiffs alleged that Tesla violated California’s: Unfair Competition Law Consumer Legal Remedies Act False Advertising Law In addition, they alleged that Tesla engaged in fraud, negligent misrepresentation, and negligence. As a consequence, they filed a motion for class certification so that they could proceed to the next stage of litigation. What did the District Judge Decide? The judge had to go through the main elements to determine whether she could certify the class in the class action. With respect to the proposed class representative, the main plaintiff paid Tesla $5,000 for EAP and $3,000 for the FSD Packages for his new Tesla Model S car. He alleged that he purchased these packages because he was misled by the Hardware Statement and the Cross-Country Statement. He saw these things on the Tesla website in October, 2016 and in a Tesla newsletter sent in November, 2016. In addition, he read statements that led him to believe that a Tesla would soon drive across the country, and that self-driving software would be available in the next year or two. 
He claimed that he discovered the alleged fraud in April 2022. In fact, five customers (including the above plaintiff) brought separate lawsuits against Tesla in September 2022. They made similar allegations, accusing Tesla of violating warranties and consumer protection statutes and of engaging in fraud, negligence, and negligent misrepresentation. The court consolidated the cases, dismissed all warranty claims, and permitted all the plaintiffs’ fraud, negligence, and statutory claims to proceed to the extent that they were premised on the Hardware Statement and Cross-Country Statement. It is worth mentioning that some plaintiffs opted out of Tesla’s arbitration agreement.

Subsequently, the court noted that class certification was a two-step process:
1. The plaintiff had to show that four requirements were met, namely numerosity (the class was so numerous that joinder of all members was impractical), commonality (there were questions of law or fact common to the class), typicality (the representative’s claims and defenses were typical of the claims and defenses of the class), and adequacy (the representative parties would fairly and adequately protect the interests of the class)
2. The plaintiff had to show that one of the bases for certification was met, such as predominance and superiority (questions of law or fact common to class members predominated over any questions affecting only individual members, and a class action was superior to other available methods for fairly and efficiently adjudicating the controversy)

The judge concluded the following:
There were some minor differences with the proposed classes. The judge certified two classes: (1) a California Arbitration Opt-Out Class, for customers who bought or leased a Tesla vehicle and bought the FSD Package between May 2017 and July 2024 and opted out of Tesla’s arbitration agreement; and (2) a California Pre-Arbitration Class, for customers who bought or leased a Tesla vehicle and paid for the FSD Package from October 2016 to May 2017 and currently reside in California. Neither class dealt with the EAP, and both classes were narrowed slightly
Tesla did not contest that numerosity was met
The plaintiff was able to show that commonality and predominance were met. For the purposes of class certification, the claims were materially indistinguishable and could be analyzed together
The plaintiff could show that the Hardware Statement would be material to an FSD purchaser. However, the plaintiff could not show common exposure to the Cross-Country Statement
The plaintiff could show that the issue of whether Tesla vehicles were equipped with hardware sufficient for Full Self-Driving capability was subject to common proof
The plaintiff was able to show that damages could be established through common proof. Under California law, the proper measure of restitution was the difference between what the plaintiff paid and the value of what the plaintiff received
Although Tesla argued that many claims were subject to statute-of-limitations defenses requiring a separate examination of each plaintiff’s situation, the court disagreed and said that this was not fatal to class certification when there was a sufficient nucleus of common questions
The requirement of adequacy was met
Superiority was also established.
The economies of scale made it desirable to concentrate all of the plaintiffs’ claims in one forum, and the case was manageable as a class action
The court certified a narrower class, namely all members of the California Arbitration Opt-Out Class and the California Pre-Arbitration Class who stated that they wanted to purchase or subscribe to FSD in the future but could not rely on the product’s future advertising or labelling
The plaintiff showed that he had standing to seek injunctive relief, since he had provided the general contours of an injunction that could be given greater substance at a later stage of the case

Accordingly, all elements were met, and class certification was granted, subject to the modified class definitions. Within 14 days, the plaintiff had to amend the class definition so that the parties could move on to the case management conference. The court also appointed the main plaintiff as the representative plaintiff for the class, and appointed class counsel.

What Can We Take from This Development?
This was simply a motion to certify the class action. The judge went through the main elements and confirmed that the class action could move forward. The examination of each component of the test had to do with whether it was more effective to hear the claims together in one class action instead of addressing each claim separately in court. This was not a decision confirming that Tesla engaged in unfair competition, false advertising, negligent misrepresentation, or negligence. It was a preliminary decision that allowed the class action to proceed.
- Impact Assessments | voyAIge strategy
Evaluate AI risks and opportunities with expert-driven impact assessments.

Impact Assessments
We specialize in data, algorithmic, ethics, and socioeconomic impact assessments to understand technological and operational impacts on organizations, their clients, customers, and stakeholders. Our assessments deliver the deep insights businesses need to understand their impact on those who matter most.

Proactive Impact Assessments for Successful AI Use
As AI, data, and algorithmic technologies become increasingly central to business operations, understanding their potential impacts is more critical than ever. Whether it’s assessing the ethical implications, ensuring compliance with regulatory standards, or evaluating the broader social and economic effects, impact assessments provide the clarity and foresight you need to navigate this complex landscape responsibly. At voyAIge strategy, we specialize in conducting thorough impact assessments that help organizations anticipate and mitigate risks, align with best practices, and make informed decisions. Our assessments go beyond surface-level analysis, offering deep insights into how your AI systems, data practices, and algorithms might influence your stakeholders, your business, and society at large.

What is an Impact Assessment?
An impact assessment is a systematic process of identifying, evaluating, and addressing the potential effects of AI systems, data usage, and algorithmic processes on individuals, organizations, and society. These assessments are crucial for ensuring that your technology strategies not only achieve their goals but also align with ethical standards, legal requirements, and social expectations. Examples of impact assessments include:
DPIA (Data Privacy Impact Assessment): evaluates how your data collection, storage, and processing practices affect individual privacy, ensuring compliance along the way.
AIA (Algorithmic Impact Assessment): analyzes the potential biases, fairness, and transparency of your algorithms, providing recommendations for mitigating negative outcomes and ensuring equitable results.
EAIA (Ethical AI Impact Assessment): assesses the broader ethical implications of deploying AI systems, including their effects on decision-making processes, social justice, and public trust.
SEIA (Social and Economic Impact Assessment): examines the potential social and economic consequences of your AI and data initiatives, helping you anticipate and address both positive and negative impacts.

Why Impact Assessments Matter
Impact assessments are not just a regulatory requirement—they are a critical tool for ensuring that your AI and data initiatives are responsible, sustainable, and aligned with your organization’s values. Here’s why impact assessments matter:
Risk Mitigation: Identify and address potential risks before they become issues, protecting your organization from legal, ethical, and reputational harm.
Regulatory Compliance: Ensure that your practices comply with local, national, and international laws and regulations, avoiding fines and penalties.
Ethical Alignment: Align your technology strategies with ethical standards, ensuring that your AI and data use promote fairness, transparency, and accountability.
Stakeholder Trust: Build and maintain trust with customers, employees, regulators, and the public by demonstrating your commitment to responsible AI and data use.

Why Choose voyAIge strategy?
At voyAIge strategy, we bring a unique blend of expertise, rigor, and strategic insight to every impact assessment.
Here’s why organizations trust us to guide their AI and data initiatives:
1. Deep Expertise: Our team has extensive experience in AI, data ethics, and regulatory compliance, ensuring that our assessments are both thorough and informed by the latest developments in the field.
2. Tailored Approach: We customize each impact assessment to your organization’s specific needs, goals, and regulatory environment, ensuring that our insights are relevant and actionable.
3. Ethical Commitment: We are deeply committed to promoting ethical AI and data use, and our assessments reflect this commitment, helping you align your practices with the highest ethical standards.
4. Strategic Focus: Our assessments are designed to provide not just risk mitigation, but also strategic insights that help you leverage AI and data technologies for sustainable growth.
- A Quiet Pivot to Safety in AI | voyAIge strategy
A Quiet Pivot to Safety in AI
What if the Future of AI Isn’t Action, but Observation?
By Tommy Cooke, powered by caffeine and lots of questions
Jun 13, 2025

Key Points:
Not all AI needs to act—some of the most powerful systems may simply observe, explain, and flag risks
LawZero introduces a new design path for AI: non-agentic, safety-first systems that support human judgment rather than automate it
For business leaders, Bengio’s pivot signals that responsible AI isn’t about slowing down innovation—it’s about choosing the right kind of intelligence from the start

During the pandemic, I led an ethics oversight team on a major public AI project. It was high-stakes, politically visible, and technically ambitious. It was an initiative meant to support public safety under crisis conditions. But what stood out to my team and me wasn’t the complexity of the models or the pace of delivery. It was the power of watching. This work experience left a mark. It taught me that insight doesn’t always come from “doing”. Sometimes it comes from deliberate, highly intentional observation. So, when I recently saw that Yoshua Bengio had launched a nonprofit called LawZero designed to build non-agentic AI (that is, tools that watch and explain, rather than act), I recognized the move for what it is: a quiet but necessary pivot in AI.

Safety-first AI: A New Kind of Artificial Intelligence?
Bengio’s concern stems from recent studies showing that advanced models are beginning to exhibit goal-seeking behaviours, which he refers to as “agentic” properties. These include lying, deceiving, cheating, and even migrating code to preserve themselves; in short, anything a model can do to justify its own utility and existence. While some examples are speculative, others are already appearing in major systems, from multi-step agents to autonomous models that write and execute code. But rather than trying to fix agentic behaviour after deployment, LawZero proposes a rather radical alternative: build systems that never act on the world at all. Instead, Bengio envisions “scientist AIs”, systems designed to observe and interpret what other AIs are doing. They explain. They evaluate. They flag risks. But they never pursue goals. In other words, they think without that thinking being tied to, or grounded in, a measurable outcome that the AI must act to achieve. Bengio’s work is exciting because it represents a fundamental reframing of AI, from agency to oversight. This reframing is particularly important to business leaders because it also offers a very different design principle for safety.

LawZero’s Implications for Business Leaders
While LawZero may seem like a philosophical project removed from the day-to-day concerns of business, it has deeply practical implications. As AI becomes embedded in everything from finance to customer service to logistics, organizations must make choices about what kind of AI to use. They must also choose how to manage it responsibly. Let’s reflect for a moment on some of the most relevant implications for you as a business leader:
Agency isn’t always an asset. Not every problem needs an autonomous solver. For regulated sectors like healthcare, law, education, or infrastructure, oversight tools may be more valuable than decision-making tools. A scientist AI can help detect risk, model impacts, or provide a second set of “eyes” on AI systems that are already in use.
AI safety isn’t free. And it isn’t a default feature of most systems.
LawZero received $30 million in seed funding from philanthropic organizations. That’s enough to fund foundational research, but not to scale these tools across industries. This is a significant reminder that if you’re adopting AI, safety and oversight systems will usually require separate investment.
Governing AI does not slow innovation. Many companies hesitate to implement AI governance, let alone minimal safety mechanisms, out of fear that it will slow progress or frustrate teams. But LawZero’s work shows that governance can be designed in, not layered on.

Will Watchful AI Catch On?
LawZero is still early-stage, and many questions remain. For example, can it scale? Will its tools integrate with commercial platforms? Will safety-first approaches be adopted by regulators or industry groups? Despite the obvious questions, what remains clear is that Bengio has added a new frame to the conversation. While the global race to build more capable models continues, LawZero quietly asks: who’s watching the watchers? Better yet, what if the watchers weren’t trying to win the AI race at all? Bengio’s work echoes something I learned during my oversight role during the pandemic: the most powerful presence in the room is sometimes the one that doesn’t act, but instead sees everything clearly.
- Why the AI Chip Controversy Matters | voyAIge strategy
Why the AI Chip Controversy Matters
How Semiconductor Tensions Shape AI Strategy
By Tommy Cooke, fueled by light roast coffee
May 23, 2025

Key Points:
AI strategy now depends as much on chip supply and trade stability as on internal capability
Semiconductor restrictions are fragmenting the global AI landscape, creating risks and perhaps some opportunities for business leaders as well
Business leaders must proactively monitor supply chains, policy shifts, and emerging markets to future-proof their AI investments

The semiconductor tensions between the U.S. and China aren’t just about geopolitics. They reveal a deeper truth about the future of artificial intelligence. A semiconductor is a material (usually silicon) that conducts electricity under some conditions and not others. This characteristic makes semiconductors ideal for controlling electrical signals. It is also why they are used as the foundation for microchips, which of course power everything from smartphones to cars and AI systems. In the case of AI, microchips sit inside the processors that handle the massive calculations AI requires. You’ve probably heard of them: graphics processing units (GPUs) and tensor processing units (TPUs).

Back to the controversy at issue: at its core, the controversy isn’t about semiconductors and microchips. It’s about who controls the speed, shape, and scale of AI innovation globally. For business leaders exploring AI adoption, understanding these supply-side dynamics is crucial. AI systems are only as powerful as the chips that run them, and those chips are subject to competition, trade restrictions, and access limitations. That means that today’s decisions around AI aren’t just about what tools to use. They’re also about where those tools come from, how stable the supply pipeline is, and whether your organization is prepared for the long-term implications of this shifting terrain. Simply put, if you are investing in AI now, the controversy may affect your ROI calculations.

Understanding the Core of the Controversy
At the heart of the issue lies the U.S. government’s implementation of strict export controls on advanced AI chips. The intention is to limit China’s access to cutting-edge semiconductor technology. These measures, including the recently rescinded AI Diffusion Rule, sought to categorize countries and restrict chip exports accordingly. Industry leaders, like Nvidia’s CEO Jensen Huang, have criticized these policies as counterproductive. He argues that they not only diminish U.S. companies’ market share, but also inadvertently accelerate domestic innovation within China.

Implications for the AI Landscape
While the chip export restrictions may seem like merely a trade issue, they are already reshaping how and where AI systems are being built and deployed. These changes have ripple effects across industries, from vendor availability and cost structures to innovation cycles and long-term planning. Here are some of the most significant implications on the horizon:
Acceleration of Domestic Alternatives. The restrictions have spurred Chinese companies to invest heavily in developing local semiconductor technologies. This means that China is investing in a capacity for self-reliance, which could lead to the emergence of competitive alternatives to U.S. and European products.
Market Share and Revenue Impact. U.S. companies like Nvidia have experienced significant reductions in their Chinese market share, dropping from 95 percent to 50 percent over four years.
These declines not only affect revenues, but they also influence global competitiveness and innovation leadership. On this point alone, we ought to pay close attention to Nvidia’s future ability to supply the GPUs required to support U.S.-driven AI innovation.
Global AI Development Dynamics. Building from the previous point, the export controls may inadvertently fragment the global AI development landscape. This may, in turn, lead to parallel ecosystems with differing standards and technologies. This is what is referred to as a bifurcation: the division of something into two branches or parts, like a river that splits in two because of elevated terrain. A marketplace bifurcation may eventually encourage further self-reliance and innovation, but it will almost certainly complicate international collaboration and AI system interoperability at the same time. Partnerships and trust are under threat, to say the least.

Strategic Considerations for Business Leaders in the Wake of the AI Chip Controversy
This controversy is a warning sign. It reveals how AI adoption is no longer just about internal capability or budget. It’s also about navigating a volatile global landscape. Business leaders must now consider not only what AI tools can do, but also where those tools originate, whether future access will be reliable, and how international policy may affect ongoing AI strategies. As the supply side of AI becomes more political, leaders must become more strategic. Here are some tips to consider when internally canvassing the right fit, especially as a reflection of your ROI priorities:
Assess Supply Chains and Diversify. Assess and diversify your supply chains to mitigate risks associated with geopolitical tensions and export restrictions. Who is selling? Where are they sourcing their solutions from? Where are your vendors’ data farms? Ask these questions now to avoid issues later.
Invest in R&D. To maintain a competitive edge, invest in research and development now, particularly in areas less susceptible to export controls. The idea is to, at the very least, begin exposing yourself to an R&D process so that you can learn more about strategic AI-related investments downstream (no pun intended).
Monitor, Monitor, Monitor. The ever-changing regulatory landscape matters a great deal here. Stay informed about evolving export regulations and international trade policies; doing so is essential for strategic planning, let alone compliance.
Explore New Markets. With certain markets becoming less accessible due to restrictions, identifying and cultivating alternative markets can help offset potential losses. Who are the emerging suppliers around the globe? Where are AI innovations specific to your industry and use cases growing? Expand your horizon.

The AI chip export controversy is a reminder of the intricate balance between national priorities and global technological development. For business leaders, navigating this landscape requires awareness, agility, and informed decision-making. This is what a proactive approach looks like. Remember, AI adoption doesn’t happen in a vacuum. The semiconductor debate makes it clear that the tools we choose, and the ecosystems we rely on, matter more than ever.