- Whose Ethics Matter Most and Why | voyAIge strategy
Whose Ethics Matter Most and Why Ethics is a declaration of whose voices, opinions, and values matter By Tommy Cooke Sep 19, 2024

Key Points:
- Engage with voices and ideas outside your organization as a litmus test for your ethical priorities
- Treat AI ethics frameworks as living documents that are organic and change over time
- Strive to think globally - not locally

When we talk about ethics in AI, it’s easy to overlook their underlying complexity. Many organizations treat ethics as a straightforward process: identify some standards, create some policies, and ensure compliance. But this approach overlooks a key reality: ethics are subjective. Ethics reflect values, and values differ across contexts, cultures, and stakeholder expectations. This subjectivity can be particularly challenging for organizations operating in diverse industries and global markets.

In AI development and AI use, writing ethics is a declaration. It’s a statement of your organization's beliefs about right and wrong, good and bad. But it’s also a reflection of the values you’re embedding into your AI systems, and the choices you make along the way are critical. To make things a bit more complicated, one key question often goes unanswered: Whose ethics are we prioritizing? In an increasingly interconnected world, it’s vital to consider whose perspectives are included - and whose might be missed.

Ethics is a Living Framework

At its core, ethics are codified morals. They are an attempt to translate abstract ideas and values into solid standards for behaviour. However, in AI, where the implications of decisions are far-reaching, the ethics landscape is complex and shifting. For example, in Europe, the General Data Protection Regulation (GDPR) provides a comprehensive ethical framework for data use. It is largely individual-centric, focused on protecting the privacy and rights of individuals and requiring transparency and consent for how data is collected and used. In contrast, the U.S. takes a more business-centric approach to AI ethics. There is less comprehensive regulation, and ethical standards tend to focus on enabling innovation while mitigating harm through self-regulation and sector-specific guidelines. For companies operating in both the US and the EU, this difference creates a challenge: ethical beliefs around privacy, autonomy, and transparency can be misaligned depending on where you are, resulting in inconsistencies in AI governance.

The Subjectivity of Ethics

The subjective nature of ethics means that what is considered ethical in one context might not be viewed the same way elsewhere. For instance, the concept of fairness is interpreted differently across cultural boundaries. In Western countries, fairness in AI often focuses on preventing discrimination based on race, gender, or disability. In contrast, in China or other parts of East Asia, fairness might emphasize collective welfare and societal harmony, even if it means sacrificing some degree of individual privacy. This raises critical questions for organizations: when you create an ethical framework, whose values are you representing? Which ones do you prioritize, and why? And, just as importantly, whose values are being left out? As businesses develop AI systems that impact people across borders, the need for a more inclusive and adaptable ethical framework becomes apparent. Without it, companies risk ethical blind spots that can lead to reputational damage, loss of trust, and, in extreme cases, legal action.
Questions for Organizations

So, how can organizations navigate this complex ethical landscape? Here are three essential questions to ask:

1. Whose voices are included in your ethics frameworks? Consider the diversity of stakeholders impacted by your AI systems, including employees, customers, communities, and global regulators. Do your ethical standards reflect this diversity, or are they shaped by the dominant voices within your organization? It is becoming increasingly clear around the globe that inclusive frameworks tend to be more robust and resilient because they account for a wider range of perspectives.

2. How often do you revisit your ethical guidelines? Ethics cannot be static. As technology evolves and societal expectations shift, so too must your ethical frameworks. For example, the rise of generative AI and large language models has created new ethical dilemmas around intellectual property, misinformation, and AI autonomy. Organizations should regularly assess whether their ethical guidelines remain relevant and effective.

3. Are you balancing internal and external values? Often, companies prioritize their internal values—whether it’s innovation, efficiency, or profitability—over external stakeholder concerns. But ethics are about building trust, and to do so, organizations need to align their values with those of the communities they serve.

Practical Advice for Ethical AI Governance

Building an adaptable, inclusive ethical framework doesn’t have to be overwhelming. Here are three practical tips for organizations looking to strengthen their AI ethics:

1. Engage with external voices. Ethics should be informed by a diversity of perspectives, both internal and external. Regularly engage with stakeholders—including regulators, customers, and community leaders—to ensure that your ethical frameworks are inclusive and reflective of broader societal values.

2. Make ethics a living document. Ethical standards should be dynamic, not static. Establish a regular process for reviewing and updating your ethics policies to reflect the latest technological developments, regulatory changes, and societal shifts.

3. Think globally, act locally. Your ethics should be adaptable to different cultural and legal contexts. Strive for a balance between global standards and local values to ensure your AI systems are both responsible and contextually appropriate.

Ethics as a Strategy

In the end, ethics in AI is not just about doing what’s right. It’s about being strategic. In a world where AI systems can influence lives across continents, ethical governance is about building trust, ensuring accountability, and safeguarding the future of your organization. By asking the right questions and adopting a flexible, evolving approach, companies can develop ethical frameworks that are not only reflective of their values but adaptable to a rapidly changing world.
- What is Human in the Loop? | voyAIge strategy
What is Human in the Loop? Understanding the Basics By Christina Catenacci Oct 18, 2024

Key points:
- A Human-in-the-Loop approach is the selective inclusion of human participation in the automation process
- There are some pros to using this approach, including increased transparency, an injection of human values and judgment, less pressure for algorithms to be perfect, and more powerful human-machine interactive systems
- There are also some cons, like Google’s overcorrection with Gemini, leading to historical inaccuracies

Human-in-the-Loop - What Does That Mean?

A philosopher at Stanford University asked a computer musician if we will ever have robot musicians, and the musician was doubtful that an AI system would be able to capture the subtle human qualities of music, including conveying meaning during a performance. When we think about Human-in-the-Loop, we may wonder exactly what this means. Simply put, when we design with a Human-in-the-Loop approach, we can imagine it as the selective inclusion of human participation in the automation process. More specifically, this could manifest as a process that harnesses the efficiency of intelligent automation while remaining open to human feedback, all while retaining a greater sense of meaning.

If we picture an AI system on one end and a human on the other, we can think of AI as a tool in the centre: where the AI system asks for intermediate feedback, the human would be right there (in the loop), providing minor tweaks to refine the instruction and giving additional information to help the AI system stay on track with what the human wants to see in the final output. There would be arrows that go from the human to the machine and from the machine to the human. By designing this way, we have a Human-in-the-Loop interactive AI system. We can also view this type of design as a way to augment the human—serving as a tool, not a replacement.

So, if we take the example of computer music, we could have a situation where the AI system starts creating a piece, and then the human musician could provide some “notes” (pun intended) for the AI system that add meaning and ultimately begin creating a real-time, interactive machine learning system. What exactly would the human do? The human musician would be there to iteratively, efficiently, and incrementally train tools by example, continually refining the system. As a recording engineer and producer myself, I like and appreciate the idea of AI being able to take recorded songs and separate them into their individual stems or tracks, like keys, vocals, guitars, bass, and drums. This is where the creativity begins…

What are the pros and cons of this approach?
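Before turning to them, a minimal sketch may help make the loop concrete. This is an illustrative toy only: `generate_draft` is a hypothetical placeholder standing in for any real model call, and the feedback channel here is plain console input.

```python
# A minimal human-in-the-loop sketch. `generate_draft` is a hypothetical
# placeholder standing in for a real model API call.

def generate_draft(prompt: str, feedback_history: list[str]) -> str:
    """Stand-in for an AI model call; a real system would call a model API here."""
    notes = " | ".join(feedback_history) if feedback_history else "none yet"
    return f"Draft for '{prompt}' (feedback incorporated: {notes})"

def human_in_the_loop(prompt: str, max_rounds: int = 3) -> str:
    """The machine proposes, the human refines, and the loop repeats until accepted."""
    feedback_history: list[str] = []
    draft = generate_draft(prompt, feedback_history)
    for _ in range(max_rounds):
        print(f"\nCurrent output:\n{draft}")
        feedback = input("Your notes (press Enter to accept): ").strip()
        if not feedback:                    # human accepts: exit the loop
            break
        feedback_history.append(feedback)   # human refines the instruction...
        draft = generate_draft(prompt, feedback_history)  # ...and the machine responds
    return draft

if __name__ == "__main__":
    final = human_in_the_loop("an eight-bar piano motif")
    print(f"\nAccepted output: {final}")
```

The essential property is the pair of arrows described above: machine to human, human to machine, repeating until the human accepts the output.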
While there are several benefits of using this approach, here are a few:

- There would be considerable gains in transparency: when humans and machines collaborate, there is less of a black box when it comes to what the AI is doing to get the result
- There would be a greater chance of incorporating human judgment: human values and preferences would be baked into the decision-making process so that they are reflected in the ultimate output
- There would be less pressure to build the perfect algorithm: since the human would be providing guidance, the AI system only needs to make meaningful progress to the next interaction point—the human could show a fermata symbol, so the system pauses and relaxes (pun intended)
- There could be more powerful systems: compared to fully automated or fully human manual systems, Human-in-the-Loop design strategies often lead to even better results

Accordingly, it would be highly advantageous to think about AI systems as tools that humans use to collaborate with machines and create a great result. It is thus important to value human agency and enhanced human capability. That is, when humans fine-tune (pun intended) a computer music piece in collaboration with an AI system, that is where the magic begins.

On the other hand, there could be cons to using this approach. Let us take the example of Google’s Bard (later renamed Gemini)—a mistake that cost the company dearly because it lost billions of dollars in shares and was quite an embarrassment. What happened? Gemini, Google’s new AI chatbot at the time, began answering queries incorrectly. It is possible that Google was rushing to catch up to OpenAI’s new ChatGPT at the time. Apparently, there were issues with errors, skewed results, and plagiarism. The one issue that applies here is skewed results. In fact, Google apologized for “missing the mark” after Gemini generated racially diverse Nazis. Google was aware that GenAI had a history of amplifying racial and gender stereotypes, so it tried to fix those problems through Human-in-the-Loop design tactics, which overcorrected. In another example, Google generated a result following a request for “a US senator from the 1800s” by providing an image of Black and Native American women (the first female senator, a white woman, served in 1922). Google stated that it was working on fixing this issue, but the historically inaccurate picture will always be burned in our minds and make us question the accuracy of AI results. While it is being fixed, certain images will not be generated. We see, then, what can happen when humans do not do a good job of portraying history correctly and in a balanced manner. While these examples are obvious, one may wonder what will happen when there are only minor issues with diverse representation or inaccuracies due to human preferences…

Another con is that Humans-in-the-Loop may not know how to eradicate systemic bias in training data. That is, some biases could be baked right into the training data. We may question: who is identifying this problem in the training data? How is the person finding these biases? And who is making the decision to rectify the issue, and how are they doing it? One may also question whether tech companies should be the ones to be assigned these tasks. With the significant issues of misinformation and disinformation, we need to understand who the gatekeepers are and whether they are the appropriate entities to do this.
- 10-year Moratorium on AI State Regulation | voyAIge strategy
10-year Moratorium on AI State Regulation What Could Possibly go Wrong? By Christina Catenacci, human writer May 29, 2025

Key Points:
- On May 21, 2025, Bill HR 1, the One Big Beautiful Bill Act, was introduced into the 119th Congress
- Section 43201 of the bill is concerning, as it would allow for a 10-year Moratorium on state enforcement of their own AI legislation
- At this point, the bill has passed in the House, and still needs to pass in the Senate

On May 21, 2025, Bill HR 1, the One Big Beautiful Bill Act, was introduced into the 119th Congress. This is a very lengthy and dense bill—the focus of this article is on Part 2, Artificial Intelligence and Information Technology Modernization. More specifically, under Subtitle C (Communications), Part 2 contains a few AI provisions that need to be discussed, as they are very concerning.

What is in Part 2?

Section 43201 states that:

- There would be funds appropriated to the Department of Commerce for fiscal year 2025, out of any funds in the Treasury not otherwise appropriated, in the amount of $500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems, in accordance with the next provision dealing with authorized uses
- The Secretary of Commerce would be required to use the funds for the following: to replace or modernize, within the Department of Commerce, legacy business systems with state-of-the-art commercial AI systems and automated decision systems; to facilitate, within the Department of Commerce, the adoption of AI models that increase operational efficiency and service delivery; and to improve, within the Department of Commerce, the cybersecurity posture of Federal information technology systems through modernized architecture, automated threat detection, and integrated AI solutions
- No state or political subdivision would be allowed to enforce any law or regulation regulating AI models, AI systems, or automated decision systems during the 10-year period beginning on the date of the enactment of the Act

The exception to this is the Rule of Construction: the moratorium would not prohibit the enforcement of any law that:

- removes legal impediments to, or facilitates the deployment or operation of, an AI model, AI system, or automated decision system
- streamlines licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems
- does not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, AI systems, or automated decision systems, unless such requirement is imposed under Federal law or, in the case of a requirement imposed under a generally applicable law, is imposed in the same manner on models and systems other than AI models, AI systems, and automated decision systems that provide comparable functions; and
- does not impose a fee or bond unless that fee or bond is reasonable and cost-based, and under such fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions

Why is Section 43201 Concerning?
In the context of AI, while it is encouraging that section 43201 sets aside funds to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems, the proposed provision is troubling because it suggests that there would be a 10-year Moratorium: states would be prevented from enforcing AI legislation for 10 years unless they show that they fall under the exception involving the Rule of Construction. More precisely, state AI laws would not be allowed to be enforced unless they:

- Remove legal impediments regarding AI models, AI systems, or automated decision systems
- Streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems
- Do not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, AI systems, or automated decision systems, unless that requirement is imposed under Federal law or, if imposed under a generally applicable law, is imposed in the same manner on models and systems other than AI models, AI systems, and automated decision systems that provide comparable functions; and
- Do not impose a fee or bond unless it is reasonable and cost-based, and under the fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions

Essentially, all of the progressive AI laws that have been created by forward-thinking states may ultimately be unenforceable, unless the states can jump through the hoops and show that their laws fall within the exception.

What could passing a federal AI bill like this mean?

The implications of states not being able to pass and enforce their own AI laws could be very risky: no other jurisdiction is doing this. Just ask the EU, a jurisdiction that has already created cutting-edge privacy and AI legislation, which is known around the world as the gold standard. Just recently at the AI Summit in Paris, Vice President JD Vance warned global leaders and tech industry executives that “excessive regulation” could cripple the rapidly growing AI industry, in a rebuke to European efforts to curb AI’s risks. Yes, when giving the AI policy speech, he pretty much said that unfettered innovation should trump AI regulation (pun intended). This move came in stark contrast to what was being decided at the AI Summit—over 60 countries pledging to:

- Promote AI accessibility to reduce digital divides
- Ensure AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all
- Make innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development
- Encourage AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth
- Make AI sustainable for people and the planet
- Reinforce international cooperation to promote coordination in international governance

On the other hand, Vance called all of this “excessive regulation”. Unfortunately for VP Vance, he will need to take some time to reflect on why certain laws are created when there are rapid, potentially risky advances that could cause harm.
Criticism by Musk and Pelosi—“Robinhood in reverse”

If we zoom out, we see that there are several other troubling proposed provisions, including those that aggravate wealth inequality in the United States. In fact, Nancy Pelosi has referred to the bill as “Republican Robinhood in reverse”. Moreover, with cuts to Medicare and Medicaid, it is not surprising that Pelosi would question what is being proposed. Even Elon Musk has criticized the recent move as “disappointing”. Why? The cuts included in the bill would help fund tax cuts for the rich and secure the border. More specifically, Musk made a point of saying that the massive spending bill would undermine his work at the Department of Government Efficiency (DOGE).

What is the Status of the Bill?

At this point, the bill has passed in the House but still needs to pass in the Senate. President Donald Trump and Speaker Mike Johnson are hopeful for minimal modifications to the bill in the Senate; however, some believe that there is enough resistance to halt the bill unless there are significant changes. For example, Republican Senator Rand Paul told "Fox News Sunday" that he will not vote for the legislation unless the debt ceiling increase is stripped, since he stated that it would "explode deficits." The main sticking points right now involve the numerous tax cuts, cuts to Medicaid and requirements that disabled people work, tax deductions at the state and local levels, and cuts to food assistance programs like the Supplemental Nutrition Assistance Program. It is hard to tell whether the bill will pass as is in the Senate; we will keep you posted on the proposed 10-year Moratorium.
- Media Room (List) | voyAIge strategy
VS Media Room Find out more about VS. Announcements, news, interactive content. Everything you need in one place.

VS Publishes with the International Association of Privacy Professionals (IAPP) How do small businesses govern AI, exactly? Isn't governance just for big businesses? We got the conversation started on how any organization, regardless of its size, can manage AI - affordably and effectively. The article is titled "Right-sizing AI governance: Starting the conversation for SMEs". Learn More

VS Publishes with the Business Technology Association In an exclusive magazine article for BTA members, VS discusses the importance of AI policies - especially for technology vendors and Managed Service Providers. VS also put on a webinar for BTA members discussing this topic. We also just published another article on communicating AI use, which will appear in the July issue of the BTA's Office Technology magazine. Learn More

Featured Post in IN2Communication's Blog What is an AI policy, exactly? We're excited to be featured in IN2's blog, exploring why AI policies are crucial for your business. Learn More

Guests at the Canada Club of London We were honoured to be invited by the Canada Club of London to speak about all-things AI governance. Learn More

VS is Guest for 4 Episodes of IN2's The Smarketing Show Podcast We loved being guests for four episodes of IN2's The Smarketing Show video podcast! We did four episodes: Beyond the Buzzwords, Workplace Risks and Rewards, The AI Policy Brief, and Why Thought Leadership? Visit The Smarketing Show's YouTube page to watch now! Learn More
- HR AI Solutions | voyAIge strategy
Help your organization overcome uncertainty about AI in the workplace.

Alleviate Concerns About AI Transform Worries into Trust through Training, Thought Leadership, and AI Policies Book a Free Strategy Session

Are your Employees Anxious About AI? Bringing AI into the workplace can lead to significant employee concerns. As an HR leader, you may be dealing with employees who are worried about losing their jobs, are resistant to adopting AI tools, and are uncertain about the ethical use of AI in their role.

Reassure Your Team Work with us to deliver customized Training, Thought Leadership, and Policies and Procedures to address employees' concerns and facilitate smoother AI implementation. By working with us, we can empower you to:

- Train your Staff & Executives: Equip your HR team with the skills and knowledge needed to understand and embrace new technologies
- Generate Thought Leadership Insights: Provide cutting-edge insights and expert commentary on AI trends, enabling you to communicate the benefits of AI clearly while fostering trust and acceptance
- Implement AI Policies & Procedures: Collaborate with us to develop ethical guidelines for AI use, ensuring your employees feel safe and valued

Book a Free Strategy Session
- Tesla Class Action to Move Ahead | voyAIge strategy
Tesla Class Action to Move Ahead Advanced Driver Assistance Systems Litigation Proceeds in California By Christina Catenacci, human writer Aug 22, 2025

Key Points:
- On August 18, 2025, a United States District Judge granted a motion for class certification and appointed a representative plaintiff of the certified classes
- The court narrowed the classes and considered whether it was appropriate to hear the plaintiffs’ claims together in one class action
- The lesson here is that businesses need to be careful about what kinds of statements they make about their technology’s capabilities, or else they could face litigation from many plaintiffs, potentially leading to a class action

On August 18, 2025, United States District Judge Rita F. Lin granted a motion for class certification and appointed a representative plaintiff of the certified classes. In addition, she appointed class counsel and set a pathway for next steps leading to the case management conference. Let this story serve as a warning for businesses—be careful about what statements you make about the capabilities of your technology, whether on Twitter, YouTube, or any other channel. If you have a goal that you are striving to achieve, then say that. If you are promoting a new product with extensive capabilities, then do that. Just do not make claims that are untrue, unless you want to be on the hook for those misrepresentations.

What is the Class Action About?

Tesla did not sell its vehicles through third parties and did not engage in traditional marketing or advertising; in fact, the only way one could buy a Tesla vehicle was through its website. Tesla also reached consumers through its own YouTube channel, Instagram account, press conferences, sales events, marketing newsletters, and Elon Musk’s personal Twitter account. Additionally, customers could buy optional technology packages that were designed to enable autonomous vehicle operation. For example, customers could buy the “Enhanced Autopilot Package (EAP)”, which had features such as Autopark, Dumb Summon, Actually Smart Summon, and Navigate on Autopilot (highway). Also, the “Full Self-Driving (FSD) Package” had all of the Enhanced Autopilot features, plus Stop Sign and Traffic Signal recognition and Autosteer on Streets. The EAP was offered as a stand-alone package only until the first quarter of 2019, and again for a limited period from the second quarter of 2022 through the second quarter of 2024; at other times, these features were only available as part of the FSD Package.

Essentially, the plaintiffs argued that Tesla Inc. (Tesla) made misleading statements about the full self-driving capability of its vehicles. The plaintiffs alleged that they relied on two types of misrepresentations that Tesla made:

- that Tesla vehicles were equipped with the hardware necessary for full self-driving capability (the “Hardware Statement”)
- that a Tesla vehicle would be able to drive itself across the country within the following year (the “Cross-Country Statement”)

When it came to the Hardware Statement, in October 2016, Musk said at a press conference that second-generation autonomous driving hardware would have the hardware necessary for Level 5 Autonomy (“literally meaning hardware capable of full self-driving for driver-less capability”). These statements were also on Tesla’s website and in Tesla’s November 2016 newsletter. There was even a Tesla blog post dated October 2016 and a Tesla quarterly earnings call in May 2017 containing these statements.
Musk even made comments that the self-driving hardware would enable full self-driving capability at a safety level greater than that of a human driver. Since 2016, the hardware had been updated to version 3.0 and version 4.0—these upgrades had a more powerful computer and cameras. In a 2024 earnings call, Musk stated that a further hardware upgrade would likely be necessary for customers who bought FSD with prior hardware configurations: “I mean, I think the honest answer is that we’re going to have to upgrade people’s hardware 3 computer for those that have bought full self driving. And that is the honest answer. And that’s going to be painful and difficult, but we’ll get it done. Now I’m kind of glad that not that many people bought the FSD package”

When it came to the Cross-Country Statement, Musk stated at a 2016 press conference that people would be able to go from LA to New York—going from home in LA to dropping someone off in Times Square and then having the car park itself, without the need for a single touch, including the charger. Musk posted versions of this claim on his personal Twitter account three times. In January 2016, Musk tweeted that “in 2 years, summon should work anywhere connected by land & not blocked by borders, eg you’re in LA and the car is in NY”. When asked for an update on these claims in May 2017, Musk said that the demo was still on for the end of the year, and things were “just software limited”. And in May 2019, when asked whether there were still plans to drive from NYC to LA on full autopilot, Musk said that he could have gamed this type of journey the previous year, but that when he did it in 2019, everyone with Tesla Full Self-Driving would be able to do it too. That 2019 tweet generated about 2,000 engagements, compared to 300 engagements for the 2016 tweet. In October 2016, Tesla showed a video of a Tesla vehicle driving autonomously (it is still on the Tesla site), and a similar video was shown on YouTube.

Interestingly, Tesla does not dispute that any of the statements or videos were made—it simply states that the FSD could not be obtained until the completion of validation and regulatory approval. However, the plaintiff presented evidence that Tesla had not applied for regulatory approval to deploy a Society of Automotive Engineers Level 3 or higher vehicle in California, which was a necessary step for approval of a full self-driving vehicle.

In terms of the legal claims, the plaintiffs alleged that Tesla violated California’s:

- Unfair Competition Law
- Consumer Legal Remedies Act
- False Advertising Law

In addition, they alleged that Tesla engaged in fraud, negligent misrepresentation, and negligence. As a consequence, they filed a motion for class certification so that they could proceed to the next stage of litigation.

What did the District Judge Decide?

The judge had to go through the main elements to determine whether she could certify the class in the class action. With respect to the proposed class representative, the main plaintiff paid Tesla $5,000 for the EAP and $3,000 for the FSD Package for his new Tesla Model S. He alleged that he purchased these packages because he was misled by the Hardware Statement and the Cross-Country Statement. He saw these statements on the Tesla website in October 2016 and in a Tesla newsletter sent in November 2016. In addition, he read statements that led him to believe that a Tesla would soon drive across the country, and that self-driving software would be available in the next year or two.
He claimed that he discovered the alleged fraud in April 2022. In fact, five customers (including the above plaintiff) brought separate lawsuits against Tesla in September 2022. They alleged similar things, accusing Tesla of violating warranties and consumer protection statutes and of engaging in fraud, negligence, and negligent misrepresentation. The court consolidated the cases, dismissed all warranty claims, and permitted all of the plaintiffs’ fraud, negligence, and statutory claims to proceed to the extent that they were premised on the Hardware Statement and the Cross-Country Statement. It is worth mentioning that some plaintiffs opted out of Tesla’s arbitration agreement.

Subsequently, the court noted that class certification was a two-step process:

1. The plaintiff had to show that four requirements were met, namely numerosity (the class was so numerous that joinder of all members was impractical), commonality (there were common questions of fact and law), typicality (the claims and defenses were typical of the claims and defenses of the class), and adequacy (the representative parties would fairly and adequately protect the interests of the class)
2. The plaintiff had to show that one of the bases for certification was met, such as predominance and superiority (questions of law or fact common to class members predominated over any questions affecting only individual members, and a class action was superior to other available methods for fairly and efficiently adjudicating the controversy)

The judge concluded the following:

- There were some minor differences with the proposed classes. The judge certified two classes: (1) a California arbitration opt-out class of customers who bought or leased a Tesla vehicle and bought the FSD package between May 2017 and July 2024, and who opted out of Tesla’s arbitration agreement; and (2) a California pre-arbitration class of customers who bought or leased a Tesla vehicle and paid for the FSD package from October 2016 to May 2017, and who currently reside in California. Neither class dealt with the EAP, and both classes were narrowed slightly
- Tesla did not contest that numerosity was met
- The plaintiff was able to show that commonality and predominance were met. For the purposes of class certification, the claims were materially indistinguishable and could be analyzed together
- The plaintiff could show that the Hardware Statement would be material to an FSD purchaser. However, the plaintiff could not show common exposure to the Cross-Country Statement
- The plaintiff could show that the issue of whether Tesla vehicles were equipped with hardware sufficient for Full Self-Driving capability was subject to common proof
- The plaintiff was able to show that damages could be established through common proof. Under California law, the proper measure of restitution was the difference between what the plaintiff paid and the value of what the plaintiff received
- Tesla argued that many claims were subject to the statute of limitations and that this would require separate examinations of each plaintiff’s situation. The court disagreed, holding that this was not fatal to class certification when there was a sufficient nucleus of common questions
- The requirement of adequacy was met
- Superiority was also established.
The economies of scale made it desirable to concentrate all of the plaintiffs’ claims in one forum, and this case was manageable as a class action. The court also certified a narrower class, namely all members of the California Arbitration Opt-Out class and the California Pre-Arbitration class who stated that they wanted to purchase or subscribe to FSD in the future but could not rely on the product’s future advertising or labelling. The plaintiff showed that he had standing to seek injunctive relief, since he had provided the general contours of an injunction that could be given greater substance at a later stage in the case.

Accordingly, all elements were met, and class certification was granted, subject to the modified class definitions. Within 14 days, the plaintiff had to amend the class definition so that the parties could move on to the case management conference. The court also appointed the main plaintiff as the representative plaintiff for the class, and appointed class counsel.

What can we Take from This Development?

This was simply a motion to certify the class action. The judge went through the main elements and confirmed that the class action could move forward. The examination of each of the components in the test had to do with whether it was more effective to hear the claims together in one class action instead of addressing each claim separately in the court. This was not a decision that confirmed Tesla engaged in unfair competition, false advertising, negligent misrepresentation, or negligence. This was a preliminary decision that allowed the class action to proceed.
- Impact Assessments | voyAIge strategy
Evaluate AI risks and opportunities with expert-driven impact assessments.

Impact Assessments We specialize in data, algorithmic, ethics, and socioeconomic impact assessments to understand technological and operational impacts on organizations, their clients, customers, and stakeholders. Our assessments deliver the deep insights businesses need to understand their impact on those who matter most.

Proactive Impact Assessments for Successful AI Use As AI, data, and algorithmic technologies become increasingly central to business operations, understanding their potential impacts is more critical than ever. Whether it’s assessing the ethical implications, ensuring compliance with regulatory standards, or evaluating the broader social and economic effects, impact assessments provide the clarity and foresight you need to navigate this complex landscape responsibly. At voyAIge strategy, we specialize in conducting thorough impact assessments that help organizations anticipate and mitigate risks, align with best practices, and make informed decisions. Our assessments go beyond surface-level analysis, offering deep insights into how your AI systems, data practices, and algorithms might influence your stakeholders, your business, and society at large.

What is an Impact Assessment? An impact assessment is a systematic process of identifying, evaluating, and addressing the potential effects of AI systems, data usage, and algorithmic processes on individuals, organizations, and society. These assessments are crucial for ensuring that your technology strategies not only achieve their goals but also align with ethical standards, legal requirements, and social expectations. Examples of Impact Assessments include:

- DPIA (Data Privacy Impact Assessment): evaluates how your data collection, storage, and processing practices affect individual privacy, ensuring compliance along the way
- AIA (Algorithmic Impact Assessment): analyzes the potential biases, fairness, and transparency of your algorithms, providing recommendations for mitigating negative outcomes and ensuring equitable results
- EAIA (Ethical AI Impact Assessment): assesses the broader ethical implications of deploying AI systems, including their effects on decision-making processes, social justice, and public trust
- SEIA (Social and Economic Impact Assessment): examines the potential social and economic consequences of your AI and data initiatives, helping you anticipate and address both positive and negative impacts

Why Impact Assessments Matter Impact assessments are not just a regulatory requirement—they are a critical tool for ensuring that your AI and data initiatives are responsible, sustainable, and aligned with your organization’s values. Here’s why impact assessments matter:

- Risk Mitigation: Identify and address potential risks before they become issues, protecting your organization from legal, ethical, and reputational harm
- Regulatory Compliance: Ensure that your practices comply with local, national, and international laws and regulations, avoiding fines and penalties
- Ethical Alignment: Align your technology strategies with ethical standards, ensuring that your AI and data use promote fairness, transparency, and accountability
- Stakeholder Trust: Build and maintain trust with customers, employees, regulators, and the public by demonstrating your commitment to responsible AI and data use

Why Choose voyAIge strategy? At voyAIge strategy, we bring a unique blend of expertise, rigor, and strategic insight to every impact assessment.
Here’s why organizations trust us to guide their AI and data initiatives:

1. Deep Expertise: Our team has extensive experience in AI, data ethics, and regulatory compliance, ensuring that our assessments are both thorough and informed by the latest developments in the field.
2. Tailored Approach: We customize each impact assessment to your organization’s specific needs, goals, and regulatory environment, ensuring that our insights are relevant and actionable.
3. Ethical Commitment: We are deeply committed to promoting ethical AI and data use, and our assessments reflect this commitment, helping you align your practices with the highest ethical standards.
4. Strategic Focus: Our assessments are designed to provide not just risk mitigation, but also strategic insights that help you leverage AI and data technologies for sustainable growth.
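To give a flavour of the kind of quantitative check an Algorithmic Impact Assessment can include, here is a minimal sketch of one common fairness measure, the demographic parity difference. The data and the 0.1 review threshold are invented for illustration; a real AIA would use your system's actual outcomes and context-appropriate metrics.

```python
# A minimal sketch of one check an Algorithmic Impact Assessment might run:
# the demographic parity difference between two groups' positive-outcome rates.
# The data and the 0.1 threshold below are invented for illustration.

def positive_rate(outcomes: list[int]) -> float:
    """Share of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # e.g. loan approvals observed for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # e.g. loan approvals observed for group B

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # a commonly cited, but context-dependent, review threshold
    print("Flag for review: positive-outcome rates diverge across groups.")
```

A single metric never settles a fairness question on its own; in practice, a check like this is a starting point for the deeper contextual analysis an assessment provides.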
- A Quiet Pivot to Safety in AI | voyAIge strategy
A Quiet Pivot to Safety in AI What if the Future of AI Isn’t Action, but Observation? By Tommy Cooke, powered by caffeine and lots of questions Jun 13, 2025

Key Points:
- Not all AI needs to act—some of the most powerful systems may simply observe, explain, and flag risks
- LawZero introduces a new design path for AI: non-agentic, safety-first systems that support human judgment rather than automate it
- For business leaders, Bengio’s pivot signals that responsible AI isn’t about slowing down innovation—it’s about choosing the right kind of intelligence from the start

During the pandemic, I led an ethics oversight team on a major public AI project. It was high-stakes, politically visible, and technically ambitious. It was an initiative meant to support public safety under crisis conditions. But what stood out to my team and me wasn’t the complexity of the models or the pace of delivery. It was the power of watching. This work experience left a mark. It taught me that insight doesn’t always come from “doing”. Sometimes it comes from deliberate, highly intentional observation. So, when I recently saw that Yoshua Bengio had launched a nonprofit called LawZero designed to build non-agentic AI (that is, tools that watch and explain, rather than act), I recognized the move for what it is: a quiet but necessary pivot in AI.

Safety-first AI: A New Kind of Artificial Intelligence?

Bengio’s concern stems from recent studies showing that advanced models are beginning to exhibit goal-seeking behaviours. He refers to these as “agentic” properties. They include lying, deceiving, cheating, and even migrating code to preserve themselves: in short, anything a system can do to justify its own utility and existence. While some examples are speculative, others are already appearing in major systems, from multi-step agents to autonomous models that write and execute code. But rather than trying to fix agentic behaviour after deployment, LawZero proposes a rather radical alternative: build systems that never act on the world at all. Instead, Bengio envisions “scientist AIs”, systems that are designed to observe and interpret what other AIs are doing. They explain. They evaluate. They flag risks. But they never pursue goals. In other words, they think without that thinking being tied to or grounded in a specifically measurable outcome that justifies the AI’s actions. (A minimal code sketch of this observer pattern appears at the end of this piece.) Bengio’s work is so exciting because it represents a fundamental reframing of AI, from agency to oversight. This reframing is particularly important to business leaders because it also offers a very different design principle for safety.

LawZero’s Implications for Business Leaders

While LawZero may seem like a philosophical project removed from the day-to-day concerns of business, it has deeply practical implications. As AI becomes embedded in everything from finance to customer service to logistics, organizations must make choices about what kind of AI to use. They must also choose how to manage it responsibly. Let’s reflect for a moment on some of the most relevant implications for you as a business leader:

- Agency isn’t always an asset. Not every problem needs an autonomous solver. For regulated sectors like healthcare, law, education, or infrastructure, oversight tools may be more valuable than decision-making tools. A scientist AI can help detect risk, model impacts, or provide a second set of “eyes” on AI systems that are already in use.
- AI safety isn’t free. And it isn’t a default of most systems.
LawZero received $30 million in seed funding from philanthropic organizations. That’s enough to fund foundational research, but not to scale these tools across industries. This is a significant reminder that if you’re adopting AI, safety and oversight systems usually require separate investment.
- Governing AI does not slow innovation. Many companies hesitate to implement AI governance, let alone minimal safety mechanisms, out of fear that it will slow progress or frustrate teams. But LawZero’s work shows that governance can be designed in, not layered on.

Will Watchful AI Catch On?

LawZero is still early-stage, and many questions remain. For example, can it scale? Will its tools integrate with commercial platforms? Will safety-first approaches be adopted by regulators or industry groups? Despite the obvious questions, what remains clear is that Bengio has added a new frame to the conversation. While the global race to build more capable models continues, LawZero quietly asks: who’s watching the watchers? Better yet, what if the watchers weren’t trying to win the AI race at all? Bengio’s work echoes something that I learned during my oversight role during the pandemic: the most powerful presence in the room is sometimes the one that doesn’t act, but instead sees everything clearly.
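As promised above, here is a minimal sketch of what a non-agentic observer might look like in code. This is my toy illustration of the pattern, not LawZero's actual design: all names are hypothetical, and a real scientist AI would rely on learned models rather than keyword rules. The structural point is that the observer returns reports and deliberately exposes no way to act on the world.

```python
# Illustrative sketch of a non-agentic "observer" layer (hypothetical names;
# not LawZero's design). It inspects another system's proposed actions and
# flags risk, but has no method that performs any action itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class RiskReport:
    action: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk)
    rationale: str

# Toy risk table; a real observer would score actions with a learned model.
RISKY_PATTERNS = {"delete": 0.9, "transfer funds": 0.8, "execute code": 0.7, "send email": 0.4}

def observe(proposed_action: str) -> RiskReport:
    """Evaluate and explain a proposed action; deliberately performs nothing."""
    text = proposed_action.lower()
    score = max((s for p, s in RISKY_PATTERNS.items() if p in text), default=0.1)
    rationale = "matched a high-risk pattern" if score >= 0.7 else "no high-risk pattern matched"
    return RiskReport(proposed_action, score, rationale)

for action in ["summarize the quarterly report", "execute code to migrate the database"]:
    report = observe(action)
    print(f"{report.action!r}: risk={report.risk_score:.1f} ({report.rationale})")
```

The design choice is the whole point: the observer hands its report to a human or a separate governance process, and nothing in the observer itself can touch the world.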
- Why the AI Chip Controversy Matters | voyAIge strategy
Why the AI Chip Controversy Matters How Semiconductor Tensions Shape AI Strategy By Tommy Cooke, fueled by light roast coffee May 23, 2025

Key Points:
- AI strategy now depends as much on chip supply and trade stability as on internal capability
- Semiconductor restrictions are fragmenting the global AI landscape, creating risks and perhaps some opportunities for business leaders as well
- Business leaders must proactively monitor supply chains, policy shifts, and emerging markets to future-proof their AI investments

The semiconductor tensions between the U.S. and China aren’t just about geopolitics. They reveal a deeper truth about the future of artificial intelligence. A semiconductor is a material (usually silicon) that conducts electricity under some conditions—and not others. This characteristic makes semiconductors ideal for controlling electrical signals. It is also why they are used as the foundation for microchips, which of course power everything from smartphones to cars and AI systems. In the case of AI, microchips are used in the processors that handle the massive calculations AI requires. You’ve probably heard of them: graphics processing units (GPUs) and tensor processing units (TPUs).

Back to the controversy at issue: at its core, the controversy isn’t about semiconductors and microchips. It’s about who controls the speed, shape, and scale of AI innovation globally. For business leaders exploring AI adoption, understanding these supply-side dynamics is crucial. AI systems are only as powerful as the chips that run them, and those chips are subject to competition, trade restrictions, and access limitations. That means that today’s decisions around AI aren’t just about what tools to use. They’re also about where those tools come from, how stable the supply pipeline is, and whether your organization is prepared for the long-term implications of this shifting terrain. Simply put, if you are investing in AI now, the controversy may impact your ROI calculations.

Understanding the Core of the Controversy

At the heart of the issue lies the U.S. government's implementation of strict export controls on advanced AI chips. The intention is to limit China's access to cutting-edge semiconductor technology. These measures, including the recently rescinded AI Diffusion Rule, sought to categorize countries and restrict chip exports accordingly. Industry leaders, like Nvidia's CEO Jensen Huang, have criticized these policies as counterproductive. He argues that they not only diminish U.S. companies' market share, but also inadvertently accelerate domestic innovation within China.

Implications for the AI Landscape

While the chip export restrictions may seem like merely a trade issue, they are already reshaping how and where AI systems are being built and deployed. These changes have ripple effects across industries, from vendor availability and cost structures to innovation cycles and long-term planning. Here are some of the most prevalent implications on the horizon:

- Acceleration of Domestic Alternatives. The restrictions have spurred Chinese companies to invest heavily in developing local semiconductor technologies. This means that China is investing in a capacity for self-reliance, which could lead to the emergence of competitive alternatives to U.S. and European products.
- Market Share and Revenue Impact. U.S. companies like Nvidia have experienced significant reductions in their Chinese market share, dropping from 95 percent to 50 percent over four years.
These declines not only affect revenues, but also influence global competitiveness and innovation leadership. On this point alone, we ought to pay close attention to Nvidia’s future ability to supply the GPUs required to support U.S.-driven AI innovation.
- Global AI Development Dynamics. Building from the previous point, the export controls may inadvertently fragment the global AI development landscape. This may, in turn, lead to parallel ecosystems with differing standards and technologies. This is what is referred to as a bifurcation: the division of something into two or more branches or parts, like a river that splits in two because of elevated terrain. A marketplace bifurcation may eventually encourage further self-reliance and innovation, but it will almost certainly complicate international collaboration and AI system interoperability at the same time. Partnerships and trust are under threat, to say the least.

Strategic Considerations for Business Leaders in the Wake of the AI Chip Controversy

This controversy is a warning sign. It reveals how AI adoption is no longer just about internal capability or budget. It’s also about navigating a volatile global landscape. Business leaders must now consider not only what AI tools can do, but also where those tools originate, whether future access will be reliable, and how international policy may affect ongoing AI strategies. As the supply side of AI becomes more political, leaders must become more strategic. Here are some tips to consider when internally canvassing the right fit, especially as a reflection of your ROI priorities:

- Assess Supply Chains and Diversify. Assess and diversify your supply chains. It’s important to mitigate risks associated with geopolitical tensions and export restrictions. Who is selling? Where are they sourcing their solutions from? Where are your vendors’ data farms? Ask these questions now to avoid issues later.
- Invest in R&D. To maintain a competitive edge, invest in research and development. Start now because it will become important later, particularly in areas less susceptible to export controls. The idea is to, at the very least, begin exposing yourself to an R&D process so that you can learn more about strategic AI-related investments downstream (no pun intended).
- Monitor, Monitor, Monitor. The ever-changing regulatory landscape matters a lot here. Staying informed about evolving export regulations and international trade policies is essential for strategic planning, let alone compliance.
- Explore New Markets. With certain markets becoming less accessible due to restrictions, identifying and cultivating alternative markets can help offset potential losses. Who are the emerging suppliers around the globe? Where are AI innovations specific to your industry and use cases growing? Expand your horizon.

The AI chip export controversy is a reminder of the intricate balance between national priorities and global technological development. For business leaders, navigating this landscape requires awareness, agility, and informed decision-making. This is what a proactive approach looks like. Remember, AI adoption doesn’t happen in a vacuum. The semiconductor debate makes it clear that the tools we choose, and the ecosystems we rely on, matter more than ever.
- Future of Jobs Report | voyAIge strategy
Future of Jobs Report What is Projected for the Future of Work? By Christina Catenacci, human writer May 1, 2025

Key Points:
- The Future of Jobs Report 2025 examines a number of macrotrends and technology trends, and reports, for individual countries and globally, the share of organizations that identify each trend as likely to drive transformation in their business
- It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today’s total employment), offset by the displacement of 92 million current jobs (about eight percent of today’s total employment)—this means there will be a net growth of 78 million jobs (seven percent of today’s total employment)
- Advances in technology are anticipated to drive skills change more than any other trend over the next five years. The most common workforce strategy in response to the macrotrends analyzed in the report is upskilling the workforce (85 percent)

What is Projected for the Future of Work?

The World Economic Forum recently released its Future of Jobs Report 2025. The report discusses the perspectives of over 1,000 leading global employers across 22 industry clusters and 55 economies from around the world. What does it say about the future of work? What does it predict about AI in the workplace? This article answers these questions. More specifically, the article touches on drivers of labour-market transformation; the jobs outlook; the skills outlook; barriers to transformation and strategies that can improve talent availability; and industry insights.

What is the Global Labour Market Landscape in 2025?

Undoubtedly, 2025 has been marked by the rising cost of living, geopolitical conflicts, climate issues, and economic downturns. This report is dated January 2025—before the recent tariff wars launched by the United States against several countries. At this point, the longer-term effects of these tariff wars on markets, unemployment, and inflation are unclear. The projections for 2025 to 2030 are outlined below.

What are the Drivers of Labour-Market Transformation?

The following trends are considered drivers of transformation in the global market, reshaping both jobs and required skills:

- Technological developments: Hands down, broadening digital access is expected to transform businesses more than any other trend, with 60 percent of employers citing it. This makes sense, since growing digital access is a critical enabler for new technologies to transform labour markets. The three technologies expected to have the greatest impact on business transformation are AI, robots and autonomous systems, and energy generation and storage technologies. By far, AI is expected to have the most impact, with 86 percent of employers expecting AI to transform their businesses by 2030. Indeed, there has been a rapid increase in investment and adoption across several sectors, and a surge in demand for GenAI skills
- Economic uncertainty: Based on 2024 economic performance, there is some cautious optimism about the global economic outlook; however, more chief economists expect conditions to worsen than to strengthen. Slow growth and political volatility keep many countries at risk of economic shocks, and 42 percent of employers expect slower growth to impact their operations.
Inflation is still high in low-income countries because of high food prices, driven by supply chain disruptions linked to climate shocks, regional conflicts, and geopolitical tensions
- Geoeconomic fragmentation: Geoeconomic tensions threaten trade and supply chains, especially in lower-income economies. Globally, governments are responding to geoeconomic challenges by imposing trade and investment restrictions, increasing subsidies, and adjusting industrial policies. The shift toward geoeconomic fragmentation has significant macroeconomic implications. In fact, about 34 percent of surveyed employers view heightened geopolitical tensions and conflicts as a key driver of organizational transformation
- The green transition: About 47 percent of employers consider the ramping up of efforts and investments to reduce carbon emissions as a key driver for organizational transformation. As well, 41 percent of employers see the increased efforts and investments made to adapt to climate change as a significant driver for organizational change. The demand for green skills will continue to outpace supply. The report states, “To fully capitalize on opportunities created by the green transition and harness them in a way that is fair and inclusive, prioritizing green skilling is essential”. Employers agree: 71 percent in the Automotive and Aerospace industry and 69 percent in the Mining and Metals industry expect carbon emissions reductions to transform their organizations
- Demographic shifts: We have an aging and declining working-age population predominantly in higher-income economies (due to declining birth rates and longer life expectancy), and a growing working-age population in many lower-income economies (where younger populations are progressively entering the labour market). These shifts put greater pressure on a smaller pool of working-age individuals and raise concerns about long-term labour availability. Many employers facing the effects of an aging population are more pessimistic about talent availability, expect to face bigger challenges in attracting talent, and believe that they may need to rely on automation (79 percent) and workforce augmentation (67 percent). In fact, 92 percent of employers think that they will need to prioritize upskilling and reskilling in the next five years

What is the Jobs Outlook?

The Jobs Outlook addresses how employers expect certain jobs to grow and decline in response to the above-mentioned trends. It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today’s total employment), offset by the displacement of 92 million current jobs (about eight percent of today’s total employment). This means there will be a net growth of 78 million jobs (seven percent of today’s total employment). The fastest growing job roles are driven by technological developments: Big Data Specialists, FinTech Engineers, AI and Machine Learning Specialists, Software and Applications Developers, Security Management Specialists, Data Warehousing Specialists, and Autonomous and EV Specialists. On the other hand, some of the top fastest declining jobs involve clerical roles, such as Cashiers and Ticket Clerks, Administrative Assistants and Executive Secretaries, Bank Tellers, as well as Accounting, Bookkeeping, and Payroll Clerks.
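As a quick sanity check on those headline figures, the arithmetic below reconstructs the net number and the employment base the percentages imply. It uses only the figures quoted above; the small mismatches between the derived and quoted shares come from the report rounding its percentages.

```python
# Reconstructing the report's headline arithmetic (figures in millions of jobs;
# the percentage shares quoted in the report are rounded).
created, displaced = 170, 92
net = created - displaced             # 170 - 92 = 78 million net new jobs
implied_base = created / 0.14         # "about 14 percent of today's total employment"
print(f"Net new jobs by 2030: {net} million")
print(f"Implied employment base: ~{implied_base:,.0f} million jobs (~1.2 billion)")
```

In other words, the report's percentages imply a global employment base on the order of 1.2 billion jobs, against which the 78 million net gain is the roughly seven percent quoted.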
Moreover, the largest-growing jobs for 2025–2030 include Farmworkers, Labourers, and other Agricultural Workers; Light Truck or Delivery Services Drivers; Software and Applications Developers; Building Framers, Finishers, and Related Trades Workers; and Shop Salespersons. Conversely, the largest-declining jobs include Cashiers and Ticket Clerks; Administrative Assistants and Executive Secretaries; Building Caretakers and Cleaners; Material-Recording and Stock-Keeping Clerks; Printing and Related Trades Workers; and Accounting and Bookkeeping Clerks.

The researchers also examined how the above trends would affect employment. Technology is predicted to be the most divergent driver of labour-market change: broadening digital access will likely create and displace more jobs than any other macrotrend, creating 19 million jobs and displacing nine million. AI and information processing technology are expected to create 11 million jobs and displace nine million. Robotics and autonomous systems are predicted to be the largest net job displacer, with a net decline of five million jobs. In fact, broadening digital access, advancements in AI and information processing, and robotics and autonomous systems technologies are the drivers of the fastest-growing and fastest-declining jobs.

When it comes to technology, there is some question about the interplay between humans, machines, and algorithms as they redefine job roles across industries: is it about automation or augmentation? Automation will change the way in which people work. In particular, as technology becomes more versatile, the proportional share of tasks performed solely by humans is expected to decline. Today, 47 percent of work tasks are performed mainly by humans alone, with 22 percent performed mainly by technology (machines and algorithms), and 30 percent completed by a combination of both. But by 2030, employers expect these proportions to be nearly evenly split across the three categories. Interestingly, the report states: “both machines and humans might be significantly more productive in 2030 – performing more or higher value tasks in the same or less amount of time than it would have taken them to do so in 2025 – so any concern about humans “running out of things to do” due to automation would be misplaced”

Along the same lines, the researchers asked: if an increasing amount of a firm’s total output and income is derived from advanced machines and proprietary algorithms, to what extent will human workers be able to share in this prosperity? They stressed that technology could be designed and developed in a way that complements and enhances, rather than displaces, human work. In fact, they underscore the importance of ensuring that talent development, reskilling, and upskilling strategies are designed and delivered in a way that enables and optimizes human-machine collaboration.

That said, at an industry level, all sectors are expected to see a reduction in the proportion of work tasks performed by humans alone by 2030, but they differ in the share of this reduction that is projected to be attributable to automation versus augmentation and human-machine collaboration. For instance, there are four sectors where automation is projected to reduce both the proportion of total work tasks done by humans alone and the share of total work tasks currently delivered through human-machine collaboration.
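To put that shift in perspective, the short sketch below compares today's task split with an assumed exact three-way split in 2030. The report only says "nearly evenly split", so the 2030 thirds are an assumption for illustration (and today's shares sum to 99 because of rounding in the source):

```python
# Task shares today vs. an assumed even three-way split by 2030.
# "Nearly evenly split" is the report's wording; exact thirds are assumed.
today = {"human alone": 47, "technology alone": 22, "human + technology": 30}
by_2030 = {category: 100 / 3 for category in today}  # assumed even split

for category, share in today.items():
    shift = by_2030[category] - share
    print(f"{category}: {share}% -> {by_2030[category]:.0f}% ({shift:+.0f} pts)")
```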
With respect to geoeconomic fragmentation, employers view increased government subsidies and industrial policy, increased geopolitical division and conflicts, and increased restrictions to global trade and investment as net job creators. Additionally, increased government subsidies and industrial policy are expected to drive increased demand for Business Intelligence Analysts and Business Development Professionals. Increased restrictions to global trade and investment are also predicted to drive growth in those roles, as well as in Strategic Advisors and Supply Chain and Logistics Specialists. And increased geopolitical division and conflicts are projected to drive growth in all these roles, in addition to Information Security Analysts and Security Management Specialists. Employers were also asked whether they were planning to offshore parts of their workforce, or to move operations closer to home through reshoring, nearshoring, or friendshoring; it is these geoeconomic trends that drive employers’ offshoring and reshoring decisions.

In terms of the green transition, climate change adaptation is likely to be the third-largest contributor to net growth in global jobs by 2030, with an additional five million net jobs; similarly, climate change mitigation is the sixth-largest contributor, with an additional three million net jobs. In this context, some fast-growing jobs (in the top 15 fastest growing) include Environmental Engineers and Renewable Energy Engineers. Other fast-growing jobs include Sustainability Specialists and Renewable Energy Technicians. Green transition macrotrends will also drive labour-market transformation; for instance, there will likely be net job growth for Building Framers, Finishers, and Related Trades Workers.

With regard to demographic shifts, growing working-age populations are expected to be the second-largest driver of global net job creation, with nine million net additional jobs by 2030. Likewise, aging and declining working-age populations are expected to be the third-largest driver of overall job creation (with 11 million additional jobs), as well as the main factor in a global reduction in jobs (with seven million fewer jobs). These demographic trends will likely drive growth in roles for Assembly and Factory Workers, Vocational Education Teachers, Nurses, Sales and Hospitality Professionals, Shop Salespersons, Wholesale and Manufacturing Sales Representatives, Food and Beverage Servers, as well as University and Higher Education Teachers and Secondary Education Teachers.

Slower economic growth leads employers to expect more job destruction (three million jobs) than creation (two million jobs). Similarly, employers believe that the rising cost of living and higher prices will cause some job creation (four million jobs) and displacement (three million jobs). This economic uncertainty will likely contribute to the decline in roles such as Building Caretakers, Cleaners, and Housekeepers, while slower economic growth is also among the top contributors to job decline for Business Services and Administration Managers, General and Operations Managers, and Sales and Marketing Professionals. That said, slower economic growth is also projected to be a top driver of growth in roles such as Business Development Professionals and Sales Representatives.
Furthermore, growth in roles driven by the increasing cost of living is concentrated in jobs associated with finding ways to increase efficiency, such as AI and Machine Learning Specialists, Business Development Professionals, and Supply Chain and Logistics Specialists.

What is the Skills Outlook?
This part discusses expectations of skill disruption by 2030, the skills currently required for work, and whether employers anticipate that these skills will increase or decrease in importance over the next five years. It also examines the skills that are expected to become core skills by 2030, the key drivers of skill transformation, and anticipated training needs.

When it comes to skills disruptions, there have been rapid advancements in frontier technologies (technologies that significantly change how we communicate, solve problems, and conduct business) since the pandemic. In the post-pandemic era, we have seen accelerated adoption of digital tools, remote-work solutions, and advanced technologies such as machine learning and generative AI. At this point, employers expect 39 percent of workers’ core skills to change by 2030, with 61 percent remaining the same; compared to this global average, 38 percent of core skills are expected to change in Canada and 35 percent in the United States. This may be why there is a growing focus on continuous learning along with upskilling and reskilling programmes. In fact, about 50 percent of workers have completed training as part of long-term learning strategies.

Interestingly, the top core skills in today’s workforce are: analytical thinking; resilience, flexibility, and agility; leadership and social influence; creative thinking; motivation and self-awareness; technological literacy; empathy and active listening; curiosity and lifelong learning; talent management; service orientation and customer service; AI and big data; systems thinking; resource management and operations; dependability and attention to detail; quality control; and teaching and mentoring. Similarly, the top skills on the rise include: AI and big data; networks and cybersecurity; technological literacy; creative thinking; resilience, flexibility, and agility; curiosity and lifelong learning; leadership and social influence; talent management; analytical thinking; environmental stewardship; systems thinking; motivation and self-awareness; empathy and active listening; and design and user experience.

It is important to keep in mind that there are industry-specific variations in the evolving importance of skills. For example, both analytical thinking and curiosity and lifelong learning top the list of what is needed in education and training; likewise, environmental stewardship tops the list of what is needed in oil and gas.

How are the main trends expected to influence the skills evolution by 2030? In terms of technological change, advances in technology are anticipated to drive skills change more than any other trend over the next five years. In fact, the increasing importance of AI and big data, networks and cybersecurity, and technological literacy is driven by the expansion of digital access and the integration of AI and information processing technologies. These trends are also responsible for the growing importance of analytical thinking and systems thinking. In a data-driven landscape, decision-making is increasingly complex, raising the need for critical problem-solving.
Similarly, design and user experience, along with marketing and media skills, are expected to grow because of technological advancements. On the other hand, technology has accelerated the decline of some skills, including manual dexterity, endurance, precision, and reading, writing, and mathematics, likely due to robotics and automation. As discussed above, the hope is that technologies such as GenAI will augment human skills through human-machine collaboration instead of replacing them, so human-centred skills remain important. In fact, the report states: “These findings underscore an urgent need for appropriate reskilling and upskilling strategies to bridge emerging divides. Such strategies will be essential in helping workers transition to roles that blend technical expertise with human-centred capabilities, supporting a more adaptable workforce in an increasingly technology-driven landscape”. The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy.

In terms of geoeconomic fragmentation and economic uncertainty, these trends have led to demand for networks and cybersecurity skills to protect digital infrastructure from emerging threats. They have also led to a need for human-centred skills, including resilience, flexibility, agility, leadership and social influence, and global citizenship, to manage multiple crises and complex social dynamics. With respect to the green transition, environmental skills are becoming increasingly integral across diverse sectors. Moreover, employers that anticipate a rise in the importance of global citizenship cite the convergence of climate-change adaptation, geoeconomic fragmentation, and broadening digital access as key factors.

We cannot forget about demographic shifts as a driver of skills demand: aging and declining working-age populations are pressing organizations to prioritize talent management, teaching and mentoring, and motivation and self-awareness. At the same time, there is a rising focus on empathy and active listening, resource management, and customer service. This underlines the growing need for interpersonal and operational skills that can address the specific needs of an aging workforce and foster more inclusive work environments.

What does this all mean when it comes to skills? Employers have increasingly invested in reskilling and upskilling initiatives to ensure that workforce skills are aligned with evolving demands. Since 50 percent of workforces have completed training across nearly all industries, there is a growing recognition of the importance of continuous skill development. However, some industries are outliers: Agriculture, Forestry and Fishing, and Real Estate are the only sectors that have seen a decline in training completion since 2023.

For a representative sample of 100 workers, 41 will not require significant training by 2030; 11 will require training, but it will not be accessible to them in the foreseeable future; 29 will require training and be upskilled within their current roles; and 19 will require training and be reskilled and redeployed within their organization by 2030. To fund this training, employers expect to rely on their own training programmes (86 percent), free-of-cost training (27 percent), government funding (20 percent), public-private funding (18 percent), and co-funding across the industry (16 percent).
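Because the report frames the training outlook around a representative sample of 100 workers, it is easy to sanity-check the categories and scale them up. The sketch below uses the report's per-100 figures; the 5,000-person workforce is a hypothetical example:

```python
# The report's training outlook for a representative 100 workers.
training_outlook = {
    "no significant training required by 2030": 41,
    "training required but not accessible": 11,
    "upskilled within current role": 29,
    "reskilled and redeployed within organization": 19,
}
assert sum(training_outlook.values()) == 100  # the four categories cover everyone

# Scaled to a hypothetical 5,000-person workforce:
for outcome, per_100 in training_outlook.items():
    print(f"{outcome}: {per_100 * 50:,} workers")
```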
From training initiatives, employers expect enhanced productivity (77 percent) and improved competitiveness (70 percent).

What are Workforce Strategies?
Employers were asked about the workforce strategies that they anticipate adopting in response to the macrotrends mentioned above that will shape the future of work. This part also touches on key barriers to organizational transformation, talent availability, and planned workplace practices and policies.

The main barrier to organizational transformation is skill gaps in the labour market (63 percent). This challenge exists across practically all industries and geographies. Second and third in line are organizational culture and resistance to change (46 percent), and outdated or inflexible regulatory frameworks (39 percent). The talent availability outlook has worsened since 2023: in 2025, only 29 percent of businesses expect talent availability to improve between 2025 and 2030. That said, employers are optimistic about talent development (70 percent). But when it comes to talent retention, only 44 percent expect to see improvements in their ability to retain talent.

The most common workforce strategy in response to the above macrotrends is upskilling the workforce (85 percent). This is the case across all geographies and economies at all income levels, with employers in high-income economies (87 percent) slightly ahead of those in upper-middle-income (84 percent) and lower-middle-income (82 percent) economies. Process and task automation is expected to be the second most common workforce strategy (73 percent). Automation is a more pronounced strategy in high-income economies (77 percent), compared to upper-middle-income (74 percent) and lower-middle-income economies (57 percent). Third on the list, employers plan to complement and augment their workforces with new technologies (63 percent). It is important to note that 70 percent of organizations plan to hire new staff with emerging in-demand skills, 51 percent plan to transition staff from declining to growing roles internally, and 41 percent plan to reduce staff due to skills obsolescence. Also, 10 percent plan to bring operations under closer control through reshoring, nearshoring, or friendshoring, and eight percent plan to offshore significant parts of their workforce.

In terms of business practices, a top priority is supporting employee health and well-being (64 percent). Other top priorities include providing effective reskilling and upskilling (63 percent), improving talent progression and promotion processes (62 percent), offering higher wages (50 percent), tapping diverse talent pools (47 percent), and offering remote and hybrid work opportunities within countries (43 percent). In regard to public policies, employers identified funding for reskilling and upskilling (55 percent) and provision of reskilling and upskilling (52 percent) as the two most crucial policy measures. The researchers state that there is a clear desire for sustained public investment in skills development to align workforce capabilities with future labour-market demands. Interestingly, 83 percent of employers have already implemented diversity, equity, and inclusion measures, up markedly from 67 percent in 2023.
These measures include training for managers and staff, recruitment and retention initiatives, setting goals and quotas, pay equity reviews, salary audits, anti-harassment protocols, and extending these goals across the supply chain. Wages are also affected by these trends: 52 percent of employers expect to see an increase in the share of their revenue allocated to wages by 2030, 41 percent expect wages to stay stable, and seven percent expect a reduction in wages. Two main factors appear to be related to wage expectations: aligning wages to productivity and performance (77 percent) and competing to retain talent (71 percent).

With respect to assessing skills, work experience continues to be the most common assessment mechanism in hiring (81 percent plan to continue using this strategy). Second in line are pre-employment tests (48 percent), and third are psychometric tests (34 percent). Of course, resumes are still important (43 percent). Thus, in addition to education, employers want to see applicants use their skills and demonstrate their behavioural traits, cognitive abilities, and cultural fit.

In response to AI adoption, 86 percent of employers expect that AI and information processing technologies will transform their businesses by 2030, though sectors with higher anticipated AI exposure report higher numbers, such as Financial Services (97 percent) and Electronics (95 percent). In contrast, certain sectors report lower numbers, likely due to lower exposure to AI disruption, including Energy Technology and Utilities (72 percent) and Government and Public Sector (76 percent). The following are barriers to AI adoption: lack of skills to support adoption (50 percent), lack of vision among managers and leaders (43 percent), high costs of AI products and services (29 percent), lack of customization to local business needs (24 percent), complex regulations around AI and data usage (21 percent), and lack of consumer demand (16 percent). This suggests a gap in the skills required for AI adoption among managers and workers alike.

The most anticipated workforce strategy among employers (77 percent) in response to AI disruption is reskilling and upskilling the existing workforce to work more effectively alongside AI (this applies to 45 of the 55 covered economies). Moreover, 69 percent plan to recruit talent skilled in AI tool design and enhancement, and 62 percent anticipate hiring people with skills in working with AI. What’s more, 49 percent expect to reorient their business models toward new AI-driven opportunities, and 47 percent expect to transition employees from AI-disrupted roles to other positions. However, it is important to keep in mind that 41 percent expect to downsize their workforce as AI’s capability to replicate roles expands.

The report also contains insights involving the various macrotrends mentioned above in relation to particular regions and industries. For instance, in North America, technological advancements, demographic shifts, and economic uncertainties are driving companies’ strategic decisions. Focusing more precisely on Canada, employers anticipate an evolving business landscape marked by advances in digital technologies, geoeconomic fragmentation, and increased climate-mitigation efforts. It is important to note that 97 percent of companies expect AI and information processing technologies to transform their operations.
To ensure a steady flow of talent, employers in Canada are trying to improve talent progression and promotion processes and invest in reskilling and upskilling. The Economy Profile on Canada also contains some helpful information. In Canada, 90 percent of the workforce has secondary education and 68 percent has tertiary education. However, Canada invests in mid-career training at a rate of only five percent. Moreover, the report presents Canada’s rates on macrotrends and technology trends (the share of organizations that identified each trend as likely to drive transformation in their organization) compared to the global rates:

- Broadening digital access: 70 percent (global rate: 60 percent)
- Increased geopolitical division and conflicts: 58 percent (global rate: 34 percent)
- Increased efforts and investments to reduce carbon emissions: 54 percent (global rate: 47 percent)
- Increased efforts and investments to adapt to climate change: 52 percent (global rate: 41 percent)
- Slower economic growth: 52 percent (global rate: 42 percent)
- Rising cost of living, higher prices or inflation: 47 percent (global rate: 50 percent)
- Ageing and declining working-age population: 42 percent (global rate: 40 percent)
- Increased focus on labour and social issues: 41 percent (global rate: 46 percent)
- Growing working-age populations: 30 percent (global rate: 24 percent)
- Increased restrictions to global trade and investment: 27 percent (global rate: 23 percent)
- Increased government subsidies and industrial policy: 16 percent (global rate: 21 percent)
- Stricter anti-trust and competition regulations: 16 percent (global rate: 17 percent)
- AI and information processing technologies (big data, VR, AR): 97 percent (global rate: 86 percent)
- Robots and autonomous systems: 54 percent (global rate: 58 percent)
- Energy generation, storage, and distribution: 40 percent (global rate: 41 percent)
- New materials and composites: 24 percent (global rate: 30 percent)
- Semiconductors and computing technologies: 21 percent (global rate: 20 percent)
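Because the comparison above is just a list of paired rates, a short script can rank where Canada diverges most from the global picture. The figures below are a selection from the list above:

```python
# A selection of Canada vs. global rates (percent) from the list above.
rates = {
    "Broadening digital access": (70, 60),
    "Increased geopolitical division and conflicts": (58, 34),
    "Efforts and investments to reduce carbon emissions": (54, 47),
    "Efforts and investments to adapt to climate change": (52, 41),
    "Slower economic growth": (52, 42),
    "Rising cost of living, higher prices or inflation": (47, 50),
    "AI and information processing technologies": (97, 86),
    "Robots and autonomous systems": (54, 58),
}

# Rank trends by how far Canada sits from the global rate.
for trend, (canada, world) in sorted(
        rates.items(), key=lambda kv: abs(kv[1][0] - kv[1][1]), reverse=True):
    print(f"{trend}: Canada {canada}% vs global {world}% ({canada - world:+d} pts)")
```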
What Can We Take from This Report?
This report surveyed over 1,000 global employers on several topics involving employment. For instance, we learned about the trends that will affect organizations and drive business transformation up to 2030, including the rising cost of living, geopolitical conflicts, climate issues, and economic downturns. These issues were noted before the tariff wars began, and the tariffs could worsen the situation and cause further economic uncertainty.

Given the above findings, I would like to suggest that employers need to prioritize upskilling and reskilling their workforces, and start thinking about this as soon as possible. Throughout this article, the findings suggest that, when it comes to skills, upskilling and reskilling present a great opportunity; most employers say they are the top workforce strategy for addressing skills misalignment and shaping the future of work. Indeed, employers identified funding for reskilling and upskilling and provision of reskilling and upskilling as the two most crucial policy measures. Employers also want to ensure that there is sustained public investment in skills development to align workforce capabilities with future labour-market demands. The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy. As I wrote here, improving an existing employee skill (upskilling) and teaching a brand-new skill or skills (reskilling) both reflect the value of continuous learning.
- What is an AI Impact Assessment? | voyAIge strategy
What is an AI Impact Assessment?
A deep dive into the AIA
By Christina Catenacci
Nov 8, 2024

Key Points
- AIAs involve businesses looking at programs or activities that may have an impact on individuals, communities, or an ecosystem, assessing the risks associated with their deployment of AI, and making solid plans for mitigating those risks
- There are a few examples of AIAs in the public and the private sectors
- AIAs (assessing and mitigating risks when deploying AI) are different from PIAs (assessing and mitigating risks associated with the privacy of individuals) and AHRIAs (assessing and mitigating risks dealing with the human rights of individuals)

Some people call it an AI Impact Assessment, and others call it an Algorithmic Impact Assessment, but what is the AIA? What is the difference between AIAs and other types of assessments, like Privacy Impact Assessments and Human Rights AI Impact Assessments? This article answers these questions.

What is an AIA?
Plainly put, when people talk about AIAs, they are talking about looking at programs or activities that may have an impact on individuals, communities, or an ecosystem, assessing the risks associated with their deployment of AI, and setting out plans for mitigating those risks.

An example in the public sector
Let us take an example. In Canada, since April 1, 2020, government departments have been required to complete an AIA pursuant to the Directive on Automated Decision Making (Directive). The purpose of the Directive is to ensure that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions, and Canadian society, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law. The Directive applies to any system, tool, or statistical model used to make an administrative decision or a related assessment about a client. It only applies to automated decision systems in production and excludes systems operating in test environments.

To that end, the Algorithmic Impact Assessment tool supports the Directive. The tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 51 risk and 34 mitigation questions. Assessment scores are based on many factors, including the system's design, algorithm, decision type, impact, and data. In addition, the AIA was developed based on best practices in consultation with both internal and external stakeholders. Also, it was developed in the open and is available to the public for sharing and reuse under an open licence.

The AIA is designed to help departments and agencies better understand and manage the risks associated with automated decision systems. The AIA is composed of questions in various formats to assess the areas of risk and the mitigation measures in place to manage the risks identified. After this, there is a section on scoring with respect to both risks and mitigation measures. Once the scoring is completed, the impacts of automating an administrative decision are classified into four levels, ranging from Level I (little impact) to Level IV (very high impact). The AIA is available as an online questionnaire, which should be taken at the beginning of the design phase of a project. Additionally, the AIA should be completed a second time, prior to the production of the system, to validate that the results accurately reflect the system that was built. The revised AIA should be released on the Open Government Portal as the final results.
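To make the scoring mechanics concrete, here is a minimal Python sketch of how a questionnaire-based tool like this can map risk and mitigation scores onto the four impact levels. The mitigation threshold, the score reduction, and the level cut-offs below are illustrative assumptions for the sketch, not the tool's published parameters:

```python
def impact_level(risk_points: int, max_risk: int,
                 mitigation_points: int, max_mitigation: int) -> int:
    """Map questionnaire scores to an impact level (1 = Level I ... 4 = Level IV).

    Illustrative only: the 80% threshold, 15% reduction, and level
    cut-offs are assumptions, not the official AIA parameters.
    """
    raw = risk_points / max_risk  # raw impact score as a fraction
    # Assumed rule: strong mitigation reduces the raw impact score.
    if mitigation_points / max_mitigation >= 0.80:
        raw *= 0.85
    if raw <= 0.25:
        return 1  # Level I: little impact
    elif raw <= 0.50:
        return 2  # Level II
    elif raw <= 0.75:
        return 3  # Level III
    return 4      # Level IV: very high impact

# Example: 60/100 risk points with strong mitigation (30 of 34 points)
print(impact_level(60, 100, 30, 34))  # -> 3 (Level III)
```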
Certain requirements are connected to each impact level (from I to IV): these requirements are referred to as Impact Level Requirements. For instance, when it comes to notice, Level I has no requirements, but Levels II to IV require notice to be posted through all service delivery channels. Level IV has the extra requirement of publishing documentation on relevant websites listing details about the automated decision system. Another example is human-in-the-loop requirements. Levels I and II do not require direct human involvement, but Levels III and IV require specific human intervention points during the decision-making process, and the final decision must be made by a human (see the sketch after this example). The Assistant Deputy Minister is responsible for completing and releasing the final results of an AIA prior to the production of any automated decision system, and for applying the relevant Impact Level Requirements as determined by the AIA. Finally, the AIA should be reviewed and updated on a scheduled basis, and when the functionality or scope of the system changes.
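Since the requirements are tiered, they lend themselves to a simple lookup table. The sketch below models only the two requirement categories described above (notice and human-in-the-loop); the Directive defines other requirement categories that are not captured here:

```python
# Illustrative lookup of two Impact Level Requirement categories.
# The Directive covers more categories than are modelled here.
IMPACT_LEVEL_REQUIREMENTS = {
    1: {"notice": "none required",
        "human_in_the_loop": False},
    2: {"notice": "posted through all service delivery channels",
        "human_in_the_loop": False},
    3: {"notice": "posted through all service delivery channels",
        "human_in_the_loop": True},  # human intervention points required
    4: {"notice": "posted through all service delivery channels, plus "
                  "published documentation on relevant websites",
        "human_in_the_loop": True},  # final decision made by a human
}

def requirements_for(level: int) -> dict:
    """Return the illustrative requirements for an impact level (1-4)."""
    return IMPACT_LEVEL_REQUIREMENTS[level]

print(requirements_for(3)["human_in_the_loop"])  # -> True
```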
An example in the private sector
The above discussion dealt with AIAs and the government. But what about AIAs in the private sector? Are there any tools that can help companies complete an AIA? One example is the United Kingdom’s recent initiative to launch a platform to help businesses manage AI risks and build trust. More specifically, the platform offers guidance and resources, outlining steps for businesses to conduct AIAs, evaluate AI systems, and check data for bias. The platform was announced within days of a government report published in November 2024. The report stated that the goal of the AI Assurance Platform was to help AI developers and deployers navigate the complex AI landscape. The platform will act as a one-stop shop for AI assurance, with tools, services, frameworks, and practices in one place.

The ultimate goal of the platform is for the Department for Science, Innovation, and Technology to gradually create a set of accessible tools to enable baseline good practice for the responsible development and deployment of AI. Consequently, organizations will be supported, and building blocks for a more robust ecosystem will be established. This, in turn, will help to maintain trust in AI technologies. More precisely, the platform will identify and mitigate the potential risks and harms posed by AI. The government has focused on capitalising on the growing demand for AI assurance tools and services, and has partnered with industry to develop a new roadmap, which will help navigate international standards on AI assurance. Small and medium-sized enterprises will be able to use a self-assessment tool to implement responsible AI management practices in their organisations and make better decisions as they develop and use AI systems. Moreover, the government plans to launch a public consultation to obtain industry feedback. Indeed, this initiative is fairly new, and the details are not yet solidified.

Another example from the private sector
Microsoft has produced the Responsible AI Impact Assessment Template. Released in June 2022, Microsoft’s AIA template was an effort to define a process for assessing the impact an AI system may have on people, organizations, and society. Microsoft decided to share the final output of its research to help other companies. The sections of the document that can be completed are as follows:
- System information: system profile; system lifecycle stage; system description; system purpose; system features; geographic areas and languages; deployment mode
- Intended uses: intended uses; assessment of fitness for purpose; stakeholders, potential benefits, and potential harms; stakeholders for goal-driven requirements from the Responsible AI Standard (Goal A5: human oversight and control; Goal T1: system intelligibility for decision making; Goal T2: communication to stakeholders; Goal T3: disclosure of AI interaction); fairness considerations (Goal F1: quality of service; Goal F2: allocation of resources and opportunities; Goal F3: minimization of stereotyping, demeaning, and erasing outputs); technology readiness assessment; task complexity; role of humans; deployment environment complexity
- Adverse impact: restricted uses; unsupported uses; known limitations; potential impact of failure on stakeholders; potential impact of misuse on stakeholders; sensitive uses
- Data requirements: data requirements; existing data sets
- Summary of impact: potential harms and preliminary mitigations; goal applicability; signing off on the Impact Assessment

How are AI Impact Assessments different from other types of assessments?
A Privacy Impact Assessment (PIA) is a risk management process that helps organizations ensure that they meet legislative requirements and identify the impacts their programs and activities will have on individuals’ privacy. The AIA, on the other hand, mainly involves identifying risks associated with the deployment of AI (risks to individuals, communities, and the environment) and mitigating those risks.

Furthermore, the Law Commission of Ontario and the Ontario Human Rights Commission have just created the first AI Human Rights Impact Assessment (AHRIA), in English and in French, based on Canadian human rights law. Announced in November 2024, the purpose of the AHRIA is to:
- Strengthen knowledge and understanding of AI and human rights
- Provide practical guidance on AI and Canadian human rights law
- Identify mitigation strategies to address bias and discrimination from AI systems

The AHRIA is based on the following principles:
- Human rights must be at the forefront of AI development
- Ontario’s and Canada’s human rights laws apply to AI systems
- Assessment of human rights in AI is a multi-faceted process that requires integrated expertise
- The AHRIA is one piece of AI governance

The tool is for any organization in the public or private sector that intends to design, implement, or rely on an AI system. It applies throughout Canada and broadly to any algorithm, automated decision-making system, or AI system. The AHRIA should be completed:
- when the idea for the AI system is explored and developed
- before the AI system is made available to external parties (for example, before a vendor makes a model or application available to purchasers, or service providers deploy an AI technology for customer service)
- within ninety days of a material change in the system
- yearly, as part of regular maintenance and reviews

The AHRIA has two parts:
1. Impact and Discrimination: assesses whether the AI system presents human rights harms. It covers the purpose of the AI system; whether the system is at high risk for human rights violations; whether it shows differential treatment; whether the differential treatment is permissible; whether the system considers accommodation; and results
2. Response and Mitigation: assesses how to minimize risks. It covers mitigation strategies; an internal procedure for assessing human rights; disclosure, data quality, and explainability; consultations; and testing and review
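As a rough illustration, the AHRIA's structure can be captured as a small data structure for internal checklists. The field names below are shorthand for this sketch, not the Commissions' official terminology:

```python
# Illustrative representation of the AHRIA workflow described above.
# Field names are shorthand for this sketch, not official terminology.
AHRIA = {
    "complete_when": [
        "the idea for the AI system is explored and developed",
        "before the system is made available to external parties",
        "within ninety days of a material change in the system",
        "yearly, as part of regular maintenance and reviews",
    ],
    "part_1_impact_and_discrimination": [
        "purpose of the AI system",
        "is the system at high risk for human rights violations?",
        "does the system show differential treatment?",
        "is the differential treatment permissible?",
        "does the system consider accommodation?",
        "results",
    ],
    "part_2_response_and_mitigation": [
        "mitigation strategies",
        "internal procedure for assessing human rights",
        "disclosure, data quality, and explainability",
        "consultations",
        "testing and review",
    ],
}
```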
Thus, companies are recommended to learn more about the AIA and other types of assessments, such as the PIA and the AHRIA. Although there is some work involved in completing these assessments, doing them can go a long way toward preventing problems from surfacing in the future.
- What is “AI Augmentation”, and How Do You Achieve It? | voyAIge strategy
What is “AI Augmentation”, and How Do You Achieve It?
The New Frontier for HR
By Christina Catenacci
Nov 14, 2025

Key Points
- AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely. The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter
- AI augmentation is a safe way to carefully and gradually include AI as a collaborator
- There are several steps to achieving AI augmentation, starting with identifying the repetitive tasks that are automatable

AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely. The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter. Compared to full automation, augmentation is about giving existing valuable staff superpowers. You may have heard of collaborative robots, also known as cobots, which are industrial robots that can safely operate alongside humans in a shared workspace (unlike autonomous robots, which are hard-coded to perform one task repeatedly, work independently, and remain stationary). In short, the goal is to combine the strengths of the AI with those of the human.

What is an Example of AI Augmentation?
For example, if someone needs to draft a proposal, that person could combine their abilities with AI’s capabilities. That is, the writer can decide which reports to include in the proposal, and then ask the AI to list the five most impactful statistics from those reports. At this point, the writer could ask the AI to produce a first draft of the proposal using those five statistics. From there, the writer could edit the document and complete an ethics check at the end. Together, the AI and the human writer could synthesize data, draft a document, edit the document, and do the final ethics check.

How do HR Leaders Achieve AI Augmentation?
AI augmentation is the most responsible way to introduce AI. The reason is that it is not full automation, which can carry high risk and complexity, nor is it the traditional, fully manual approach at the other end of the spectrum, such as compiling statistics by hand from multiple reports. AI augmentation is a happy medium. In fact, it is a safe way to carefully and gradually include AI as a collaborator. The AI can do the things that it is good at, like sifting through mountains of data, finding patterns, and completing the repetitive tasks that bore most humans. This frees humans to focus on what they do best, such as using expertise to solve tricky problems, building relationships with customers, and thinking creatively and empathetically. And humans can perform final ethics checks, too.

The following steps can lead to full AI augmentation, so that humans can still be in the driver’s seat instead of watching from the sidelines (a sketch of the Level 3 escalation pattern follows the list):

Level 1: Use AI augmentation to eliminate the boring stuff. Identify the routine, automatable tasks in a job that slow everything down, and have AI start by taking on those tasks. For instance, the AI can clean up customer service tickets and thin out the queue.

Level 2: Allow workers to have AI tools that act as portable experts. Allow workers to use these experts to enhance their work quality and productivity. For example, a human customer service agent can ask the AI to read a ticket and create a first draft of a customer response. The human agent can review it, edit it, and confirm that it is an appropriate message before sending.

Level 3: Use AI augmentation for predictable tasks. Identify the more predictable tasks and allow a more autonomous AI system to deal with them. Predictable tasks could include answering the common question, “Where is my order?”, so that AI systems handle these types of tasks completely on their own. If the AI system flags a more complex issue, the task escalates and the human agent can seamlessly take over: the human is always in control.
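Here is a minimal Python sketch of that Level 3 escalation pattern. The keyword-based triage and the handler names are illustrative assumptions; a production system would use a real intent classifier and ticketing integration:

```python
# Minimal sketch of the Level 3 escalation pattern described above.
# The keyword triage and handler names are illustrative assumptions.
PREDICTABLE_INTENTS = {"where is my order", "reset my password"}

def classify_intent(text: str) -> str:
    # Stand-in classifier: a real system would use an ML model here.
    lowered = text.lower()
    for intent in PREDICTABLE_INTENTS:
        if intent in lowered:
            return intent
    return "complex"

def ai_autoresponse(intent: str) -> str:
    return f"[AI] Resolved automatically: {intent}"

def escalate_to_human(text: str) -> str:
    return f"[HUMAN QUEUE] Needs agent review: {text}"

def handle_ticket(ticket_text: str) -> str:
    """Let the AI resolve predictable tickets; escalate the rest."""
    intent = classify_intent(ticket_text)
    if intent in PREDICTABLE_INTENTS:
        return ai_autoresponse(intent)      # AI handles it end to end
    return escalate_to_human(ticket_text)   # human takes over seamlessly

print(handle_ticket("Where is my order? It was due Monday."))
print(handle_ticket("My invoice is wrong and I was double-charged."))
```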
What are Some Best Practices for Using AI Augmentation?
Here are a few tips that can help a business with AI augmentation:
- Use the knowledge and experience you have to train the AI system
- Remember to test the AI in a risk-free environment (a safe and stable sandbox)
- Make sure to roll out the AI slowly and make necessary adjustments
- Identify relevant metrics and measure the value created by AI-human collaboration
- Create training opportunities for employees with respect to AI-human collaboration

Conclusion
According to Gartner, it is necessary for HR leaders to plan for a blended workforce. This involves moving from a mindset where AI is viewed as a nice-to-have bolt-on to a regular practice of designing a human-AI workforce where both use their strengths and co-deliver work. Moreover, EY recommends blending operational gains with a people-first mindset. More specifically, the chances of sustainable business and capability growth hinge on whether organizations keep a people-first mindset while integrating new technologies. To accomplish this, EY suggests that organizations deploy the most efficient tools and processes to create sustainable value while still investing in the skills, career, and personal growth of the workforce to create a more exceptional employee experience. This means bringing a holistic, people-centred perspective to an increasingly digital world of work. While a share of every employee’s tasks may be supported by AI tools, organizations will need those employees to be the humans-in-the-loop who make final decisions.

Finally, employers are recommended to:
- Appreciate AI’s role in a comprehensive workforce strategy, and be aware of the potential challenges that lie ahead
- Determine how AI can empower workers in the organization
- Explore potential risks and security concerns
- Consider size, scope, and cost when evaluating performance and cost trade-offs
- With regard to training on the new tools, chart the path forward with people at the centre
- Implement metrics that measure workforce sentiment tied to confidence in and adoption of the new technology