
Search Results


  • Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 | voyAIge strategy

    Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 Reflecting on Current and Forthcoming Shifts in HR By Tommy Cooke Nov 29, 2024

Key Points: AI has revolutionized HR recruitment and hiring in Canada by reducing time-to-hire and enhancing the candidate experience. Employee onboarding and learning management have been transformed with AI, streamlining manual tasks and enhancing personalization. The year 2025 will see further developments in AI for HR in Canada, including enhanced Employee Experience (EX) management, balancing staffing and workload distribution, and challenging perceptions of trust in leadership.

Why Do AI Trends Matter? Looking back to understand AI trends may seem like an odd exercise. As recently as the fall of 2023, chatbots were barely on the scene. Yet in little more than a year, Artificial Intelligence (AI) has exploded in ways that have fundamentally changed the daily operations and growth strategies of organizations across virtually every industry around the world. The same holds particularly true for Human Resources (HR) in Canada. Indeed, 2024 has been an important year for AI transformation in HR. It is crucial to understand these trends because they highlight the evolving nature of the HR landscape: 2024's trends indicate where the industry is heading. The HR world is often characterized as a manual world filled with time-consuming processes. Recruitment often involves hours screening resumes, onboarding relies on stacks of checklists, and performance management is limited to a frequency of reviews that can miss the subtle nuances of an employee's growth. Employee engagement is often driven by surveys, and workforce planning requires significant data analysis efforts that rely on rapidly aging methods. Since AI entered the landscape this year, it has promised efficiency, scalability, and intelligent decision-making.
The appeals of AI in the HR world are numerous, but the most significant tend to be reducing armchair swiveling, providing personalized insights, and positioning HR professionals to be strategic drivers of an organization - and no longer merely administrators. A survey of 2,900 HR leaders and executives across Canada and the United States found that AI adoption has become mainstream for HR teams, with smaller companies even more reliant on the technology than their larger competitors. Here are the top three AI in HR trends in Canada throughout 2024:

1. Recruitment and Hiring With the average time-to-hire north of 36 days, recruitment is seeing the most visible transformation through AI and automation. In fact, 79 percent of organizations in North America are currently using AI and automation tools in recruitment. At the time of writing this Insight, there are 81 AI recruiting startups in Canada. AI chatbots have been increasingly used this year in HR to handle candidate engagement, manage application updates, conduct preliminary assessments, and help interviewers prepare for interviews. From AI-based video interviewing software conducting sentiment analysis to AI-driven employee matching, training, talent identification, and professional development, 2024 has been a busy year in terms of AI’s impact on the HR recruitment and hiring scene.

2. Employee Onboarding The often cumbersome and paperwork-intensive process of onboarding has changed significantly because of AI. In the United States, 68 percent of organizations are already using AI in the onboarding process, with 41 percent of respondents in a recent study anticipating AI-driven onboarding by August 2025. In Canada, the stakes around AI use for onboarding are high.
Throughout the year, managers have reported that long hiring cycles are leading to high turnover due to heavy workloads, higher recruitment costs, losing top candidates to competitors, and delayed or cancelled projects. AI assists by streamlining manual tasks like document submission and compliance training, while AI chatbots are playing a role in guiding new hires throughout the onboarding process. Chatbots are also assisting in the production of FAQs.

3. Learning and Development Learning Management Systems (LMS) are familiar platforms to HR professionals. They can be tremendously useful in organizing, creating, and delivering training and educational content in ways that are insightful and easy to use for both HR and the workforce these platforms serve. A key transformation throughout 2024 is the onboarding of AI functionality into LMS. By the end of 2024, it is expected that 47 percent of all LMS tools will feature AI capabilities. In Canada, technology-driven use in teaching and learning is expected to grow well into 2027, with GenAI being a potent driver for change. The continued demand for flexible learning and hybrid offerings is in part due to the impact of GenAI on educational industries, so AI’s entrance into LMS this year was not a surprise. We have already seen AI assist on LMS platforms by tailoring learning paths based on individual skills and growth goals and adapting to preferred learning styles. Moreover, AI is being positioned to reduce administrative burden by automating enrollment, scheduling, and grading tasks. As we have seen in neighboring platforms, data-driven insights on learner progress and training program effectiveness are among the many AI elements that are effectively transforming HR professionals into business leaders.
Foreshadowing AI-HR Trends in Canada for 2025 Trends in the United States, Europe, and the rest of the world serve as important indicators of AI solution marketplace transformations likely to arrive in 2025. Here are three trends Canada can expect to see next year:

1. AI Intersecting with Employee Experience (EX) Economic recovery priorities have quickly shifted organizations’ focus toward cost reduction and efficiency boosts, at a considerable cost in terms of lost emphasis on inclusion, equity, mental health, and workplace schedule flexibility. To compensate for an increase in workplace challenges and an overall shift away from the quality of employee experience, employees are increasingly adopting AI to recover their own workplace experience. As we published recently, AI that can come to the workplace in an employee’s pocket is an attractive assistant for getting through a difficult workday. This trend from 2024 will likely grow throughout 2025. With Gartner reporting that 60 percent of HR leaders believe their current technology stack hinders rather than improves employee experience, organizations will do well to critically reflect upon what role their own and their employees’ AI plays in improving the experience at work. Given the impact AI is making in niche HR workflows already, expect to see more AI targeted toward employee mental wellness to predict burnout, measure morale, and isolate what aspects of the workplace culture are not working.

2. AI to Balance Staffing and Workload More than half of organizations surveyed by SHRM in 2024 reported that their HR units are understaffed. With SHRM also reporting that only 19 percent of HR executives expect their HR unit size to increase next year, and that 57 percent of HR professionals report working beyond normal capacity, AI-driven task automation will continue to offset HR labour shortages well into and beyond 2025.
As organizations determine which human inputs add the most impact and value against where they believe AI can fill in and even take the lead, we can also expect to see a sharper rise in the propagation of AI Agents. As our Co-Founder Christina Catenacci explains, AI Agents are expected to work without the need for human supervision. They will play a significant role in reframing the conversation that an organization will have about balancing staffing and workloads.

3. AI’s Role in Challenging and Promoting Trust of Leadership Changes are stressful. Employees have been burning out and disconnecting at alarming rates throughout 2024. While EX will be a challenge to maintain and foster throughout 2025, leadership will also likely continue to face elevated pressure to maintain trust and transparency. Layoffs, economic uncertainty, global conflict, and the likelihood of political turmoil have created a pervasive sense of job insecurity for many workers. How leaders navigate these issues, particularly when the adoption of AI is often perceived as a barrier to trust, will certainly make the question of AI’s role in HR leadership a difficult one to answer. With 46 percent of HR leaders believing AI boosts their analytics capabilities and 41 percent of business leaders expecting to redesign business processes via AI in the next few years, AI will be implemented in more critical business lines and operations across more industries throughout 2025. This means that leadership must find ways to lead their workers with confidence. Thought Leadership will play a critical role in shaping conversations, easing debates, and softening cynicism. It involves creating and distributing key content in the form of blogs, podcasts, videocasts, and newsletters that clarify what AI is versus what it is not, how and why it is proven to improve workplace processes and productivity, and how it can be implemented without necessitating layoffs.
In order to protect, foster, and maintain trust with employees, leaders need to position themselves as commanders of AI, and not merely its passengers. We expect to see a significant rise in AI-oriented Thought Leadership content generation throughout 2025.

2024 and 2025: a Period of Significant AI Transformation in HR AI's influence on HR in Canada throughout 2024 has been nothing short of transformative. From streamlining recruitment and onboarding to tailoring employee learning experiences, AI has repositioned HR professionals to move beyond administrative tasks and take on more strategic roles. As we look toward 2025, global trends will continue to shape the Canadian HR landscape, requiring leaders to adopt ethical and inclusive AI practices while maintaining a human-centered approach that reflects critically on what it means to be a trustworthy leader who understands and embraces AI. Organizations that embrace these technologies and lessons thoughtfully will be well-positioned to foster a more productive and engaging workplace in the years ahead.

  • What is Human in the Loop? | voyAIge strategy

    What is Human in the Loop? Understanding the Basics By Christina Catenacci Oct 18, 2024

Key points: A Human-in-the-Loop approach is the selective inclusion of human participation in the automation process. There are some pros to using this approach, including increased transparency, an injection of human values and judgment, less pressure for algorithms to be perfect, and more powerful human-machine interactive systems. There are also some cons, like Google’s overcorrection with Gemini, which led to historical inaccuracies.

Humans-in-the-Loop - What Does That Mean? A philosopher at Stanford University asked a computer musician if we will ever have robot musicians, and the musician was doubtful that an AI system would be able to capture the subtle human qualities of music, including conveying meaning during a performance. When we think about Humans-in-the-Loop, we may wonder exactly what this means. Simply put, when we design with a Human-in-the-Loop approach, we can imagine it as the selective inclusion of human participation in the automation process. More specifically, this could manifest as a process that harnesses the efficiency of intelligent automation while remaining open to human feedback, all while retaining a greater sense of meaning. If we picture an AI system on one end and a human on the other, we can think of AI as a tool in the centre: where the AI system asks for intermediate feedback, the human would be right there (in the loop), providing minor tweaks to refine the instruction and giving additional information to help the AI system stay on track with what the human wants to see in the final output. There would be arrows going from the human to the machine and from the machine to the human. By designing this way, we have a Human-in-the-Loop interactive AI system. We can also view this type of design as a way to augment the human—serving as a tool, not a replacement.
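The loop described above, with arrows running in both directions, can be sketched in a few lines of Python. This is a toy illustration only, not a real AI system: `generate_draft` is a hypothetical stand-in for any model call, and human feedback simply accumulates into the instruction between iterations.

```python
# Toy sketch of a Human-in-the-Loop refinement cycle.
# `generate_draft` is a hypothetical placeholder for any AI model call.

def generate_draft(instruction: str) -> str:
    # Placeholder for a model call: echoes the instruction it was given.
    return f"draft based on: {instruction}"

def human_in_the_loop(instruction: str, feedback_rounds: list[str]) -> str:
    """Alternate machine output with human feedback, one loop iteration per round."""
    draft = generate_draft(instruction)
    for feedback in feedback_rounds:
        # The human reviews the intermediate output and refines the instruction;
        # the machine only needs to make progress to the next interaction point.
        instruction = f"{instruction}; {feedback}"
        draft = generate_draft(instruction)
    return draft

result = human_in_the_loop(
    "compose a piano piece",
    ["slower tempo", "add a fermata before the final bar"],
)
print(result)
```

The design choice the article describes is visible in the loop body: the machine never has to be perfect on its own, because each pass only needs to get close enough for the human to steer the next one.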
So, if we take the example of computer music, we could have a situation where the AI system starts creating a piece, and the human musician then provides some “notes” (pun intended) that add meaning, ultimately creating a real-time, interactive machine learning system. What exactly would the human do? The human musician would be there to iteratively, efficiently, and incrementally train tools by example, continually refining the system. As a recording engineer and producer myself, I like and appreciate the idea of AI being able to take recorded songs and separate them into their individual stems or tracks, like keys, vocals, guitars, bass, and drums. This is where the creativity begins…

What are the pros and cons of this approach? While there are several benefits of using this approach, here are a few: There would be considerable gains in transparency: when humans and machines collaborate, there is less of a black box when it comes to what the AI is doing to get the result. There would be a greater chance of incorporating human judgment: human values and preferences would be baked into the decision-making process so that they are reflected in the ultimate output. There would be less pressure to build the perfect algorithm: since the human would be providing guidance, the AI system only needs to make meaningful progress to the next interaction point—the human could show a fermata symbol, so the system pauses and relaxes (pun intended). There could be more powerful systems: compared to fully automated or fully manual systems, Human-in-the-Loop design strategies often lead to even better results. Accordingly, it would be highly advantageous to think about AI systems as tools that humans use to collaborate with machines and create a great result. It is thus important to value human agency and enhanced human capability.
That is, when humans fine-tune (pun intended) a computer music piece in collaboration with an AI system, that is where the magic begins.

On the other hand, there could be cons to using this approach. Let us take the example of Google’s Bard (later renamed Gemini)—a mistake that cost the company dearly, losing billions of dollars in share value and causing quite an embarrassment. What happened? Gemini, Google’s new AI chatbot at the time, began answering queries incorrectly. It is possible that Google was rushing to catch up to OpenAI’s then-new ChatGPT. Apparently, there were issues with errors, skewed results, and plagiarism. The issue that applies here is skewed results. In fact, Google apologized for “missing the mark” after Gemini generated racially diverse Nazis. Google was aware that GenAI had a history of amplifying racial and gender stereotypes, so it tried to fix those problems through Human-in-the-Loop design tactics, which overcorrected. In another example, Google responded to a request for “a US senator from the 1800s” with images of Black and Native American women (the first female senator, a white woman, served in 1922). Google stated that it was working on fixing this issue, but the historically inaccurate picture will always be burned in our minds and make us question the accuracy of AI results. While the issue is being fixed, certain images will not be generated. We see, then, what can happen when humans do not do a good job of portraying history correctly and in a balanced manner. While these examples are obvious, one may wonder what will happen when there are only minor issues with diverse representation or inaccuracies due to human preferences… Another con is that Humans-in-the-Loop may not know how to eradicate systemic bias in training data. That is, some biases could be baked right into the training data. We may question: who is identifying this problem in the training data?
How is the person finding these biases? And who is making the decision to rectify the issue, and how are they doing it? One may also question whether tech companies should be the ones assigned these tasks. With the significant issues of misinformation and disinformation, we need to understand who the gatekeepers are and whether they are the appropriate entities to do this.

  • Data Governance & Why Business Leaders Can’t Ignore It | voyAIge strategy

    Data Governance & Why Business Leaders Can’t Ignore It If you Plan to Adopt AI, Data Governance is a Must By Tommy Cooke, fueled by medium brew espresso Oct 13, 2025

Key Points: Data governance ensures reliability, trust, and efficiency and forms the foundation for business growth. Even small businesses face risks without governance, making simple practices essential for resilience. Strong data governance is the prerequisite for responsible and effective AI adoption.

Data governance isn’t the flashiest topic in the world of digital transformation. It doesn’t come with glossy demos or promises of instant breakthroughs. Yet its absence is one of the largest catalysts of failed data-driven practices like dashboarding and insight generation, and, of course, of the inevitable failure of AI itself. Even for smaller organizations, data governance is not optional. It is a framework that ensures data is reliable, locatable, streamlined, trustworthy, and safe.

What Data Governance Means At its heart, data governance is about establishing clarity and accountability for information across an organization. It sets the rules for how data is collected, stored, accessed, and shared. Practically speaking, data governance involves developing policies, assigning responsibilities, and building processes that keep data accurate and consistent over time. Done well, governance answers practical questions: Who owns this customer data? What version of this report should we trust? Which compliance rules apply to this information? Who is accountable if something goes wrong? When these questions don’t have answers, organizations waste precious energy manually searching for spreadsheets and PDFs, correcting data entry errors, and responding to breaches that could have been prevented. It also overwhelms your IT support team.

Why Business Leaders Should Care Leaders are already responsible for risk, reputation, and growth.
Data governance intersects with all three of these vital aspects of business growth. Let’s break things down a bit further: First, there is risk. Inconsistently managed data leads to serious consequences, including reporting errors, missed opportunities, and, in worst-case scenarios, regulatory penalties. Second, governance creates operational efficiency. When data is properly governed, staff don’t need to spend hours reconciling reports. They are able to focus on their actual work and have confidence that the numbers in front of them are accurate, reliable, and backed by a company policy dictating so. Third, governance is about trust. Customers, employees, and partners all want to know that information is handled responsibly. Organizations that demonstrate care with data earn credibility. This is a competitive differentiator, not a hurdle or setback. Finally, as the old adage goes: bad input equals bad output. In other words, data governance lays the groundwork for innovation. AI, predictive analytics, and advanced automation all depend on high-quality data.

Data Governance is Not Just for Big Business It is easy for smaller organizations to assume that governance is something only large enterprises need. A small business owner might feel that with a few spreadsheets, a CRM system, and basic accounting tools, there is no need for a formal framework. But this assumption is risky. Every organization, no matter the size, handles sensitive data: customer contact details, employee records, payment information, and proprietary knowledge. The damage from a mistake can hit a smaller business just as hard, if not harder, than a large one. However, this does not mean that governance for smaller firms needs to be complex. It can mean assigning a single person to oversee data practices, creating simple rules for file naming and storage, using cloud platforms with built-in compliance features, and providing staff with basic training.
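To show how lightweight such a practice can be, here is a minimal sketch of a file-naming rule check in Python. The naming convention (`department_dataset_YYYY-MM-DD.csv`) and the file names are hypothetical examples, not a recommendation from the article; the point is simply that a documented standard can be enforced in a few lines.

```python
import re

# Hypothetical convention for shared data files: department_dataset_YYYY-MM-DD.csv
NAMING_RULE = re.compile(r"^[a-z]+_[a-z0-9]+_\d{4}-\d{2}-\d{2}\.csv$")

def check_names(filenames: list[str]) -> list[str]:
    """Return the file names that violate the documented naming standard."""
    return [name for name in filenames if not NAMING_RULE.match(name)]

files = [
    "hr_payroll_2025-01-31.csv",  # follows the convention
    "Final_REPORT(2).csv",        # violates it (capitals, no date)
]
print(check_names(files))  # -> ['Final_REPORT(2).csv']
```

A check like this could run whenever files land in a shared folder, turning a written policy into something that catches drift automatically.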
These steps are straightforward but powerful, preventing errors and setting a foundation for growth.

How Data Governance Connects to AI In discussions about technology, and in conversations with our clients here at VS, leaders often hear the terms “data governance” and “AI governance.” Data governance deals with the quality, security, and compliance of the information itself. AI governance, by contrast, addresses the systems and models built on top of that data: how algorithms are designed, deployed, and monitored for fairness, accuracy, and safety. The connection between the two is direct. Without strong data governance, AI systems cannot be trusted. Poor data, as I mentioned earlier, leads to bad outputs. Strong data governance, on the other hand, gives AI governance a firm foundation. Leaders who are serious about responsibly using AI must first take their data governance responsibilities seriously.

The Human Dimension Too often, governance is described as a technical framework. But it is ultimately about people and processes. Employees need clarity about their roles in managing information. Teams need processes that make it easy to do the right thing. Leaders need to demonstrate commitment by modeling good practices and making governance part of the culture. When governance is human-centered, it feels less like red tape and more like confidence in action.

Practical Steps to Begin your Data Governance Journey The path to data governance does not need to start with a massive overhaul. Leaders can begin with simple, actionable steps: Assign ownership. Make it clear who is responsible for each critical dataset. Document standards. Establish guidelines for how data should be entered, named, and stored. Audit your systems. Identify where data resides, who has access, and whether compliance gaps exist. Train your people. Provide basic education on why governance matters and how to practice it. Start small.
Choose one domain—such as customer data or HR files—and implement governance practices there before expanding. These steps build momentum and signal leadership commitment. Over time, they evolve into a supportive framework.

Data Governance is a Strategic Advantage Data governance should not be seen as a regulatory chore. It is a way of unlocking value and protecting the future of the business. Organizations with strong governance are better positioned to innovate, comply with laws, reassure customers, and use advanced technologies responsibly. In a world where data is often described as the new oil, governance is the refinery. Without it, raw information is messy and hazardous. With it, data becomes clean fuel for decisions, strategy, and growth.

  • Why It's Easier to Become a Black Hat Than a White Hat | voyAIge strategy

    Why It's Easier to Become a Black Hat Than a White Hat How AI Lowered Crime Barriers, and a Strained Economy Simultaneously Raised Career Barriers By Matt Milne, powered by cybersecurity wizardry Jun 3, 2025

Key Points: The gap between becoming a cybercriminal and becoming a cyber security professional is widening, and AI is significantly lowering entry barriers for attackers—yet defensive careers still require extensive credentials and experience. AI is making sophisticated attack methods accessible to less skilled actors, potentially transforming the threat landscape from highly skilled APT groups to a broader base of AI-assisted attackers. To keep up with the pace of AI-enabled threats, the cyber security field needs to fundamentally rethink credentialing and training approaches.

Last week, 19-year-old Matthew Lane of Massachusetts agreed to plead guilty to hacking PowerSchool and extorting the company for $2.85 million. This hacking incident affected approximately 60 million students nationwide. Stories like this always make me reflect on my journey in the field of information security. One morning in August of 2022, I received an email from Niagara University. I had recently been accepted into their Information Security and Digital Forensics (ISDF) program, and the message was long and serious. Attached was a mandatory agreement outlining conditions for participation in the program. Because students would be granted access to state-of-the-art cyber security lab environments, we were required to affirm two key commitments: first, that we would only use the cyber tools provided for legitimate academic purposes such as labs, assignments, and projects; and second, that we would never use the skills or tools we acquired to perpetrate cybercrimes. Needless to say, I was immediately excited to dive into this field.
But that excitement was tempered by a sobering realization: the skills we were learning could be dangerous if they fell into the wrong hands. Regardless of whether the threat actor is a skilled and experienced professional, a script kiddie (a novice hacker with minimal expertise), or a curious, budding cyber security student, the penalties for certain actions, whether malicious or not, can be severe. I still vividly remember an early lesson in my ethical hacking course. After teaching us how to combine OpenVAS with the Metasploit Framework—two powerful tools often used in penetration testing—our professor paused and admitted plainly: "I've just handed you a gun and taught you how to fire it." This metaphor stuck with me. It captures the gravity of our responsibility as cyber security professionals and recognizes how rapidly the terrain is shifting under our feet in this new era. This is the phenomenon Klaus Schwab calls the Fourth Industrial Revolution. More specifically, Schwab suggests that we are at the beginning of a revolution that is fundamentally changing the way we live, work, and relate to one another, given the dramatic and rapidly moving technological change all around us.

What does the Fourth Industrial Revolution Mean for Hackers? What sets this revolution apart is the convergence of cyber security with emerging existential threats: Quantum Computing and Artificial Intelligence. While both present profound challenges, they differ in immediacy and accessibility for hackers.

Quantum Computing Due to its cost and complexity, quantum computing will remain in the hands of elite nations and major corporations, at least initially. “Q Day,” the term used to describe the day when a working quantum computer goes online, poses a danger to asymmetric encryption due to Shor's algorithm and to certain hash digests due to Grover’s algorithm.
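Grover's impact can be made concrete with simple arithmetic. The sketch below is illustrative only; it models operation counts, not a real quantum attack. Grover's square-root speedup effectively halves a symmetric key's bit strength, which is the standard back-of-the-envelope way to reason about its threat.

```python
# Back-of-the-envelope comparison of classical brute force vs. Grover search.
# Grover's algorithm searches an unstructured space of 2**n items in roughly
# 2**(n/2) quantum operations, so a key effectively loses half its bits.

def classical_bruteforce_ops(key_bits: int) -> int:
    """Worst-case classical operations to brute-force a key_bits-bit key."""
    return 2 ** key_bits

def grover_ops(key_bits: int) -> int:
    """Approximate quantum operations under Grover's square-root speedup."""
    return 2 ** (key_bits // 2)

for bits in (128, 256):
    print(f"{bits}-bit key: classical ~2^{bits}, Grover ~2^{bits // 2} operations")
# A 128-bit key falls to roughly 64-bit effective strength, which is why
# post-quantum guidance favours 256-bit symmetric keys (~128-bit under Grover).
```

Shor's algorithm is the more severe case: it is not a square-root speedup but a polynomial-time attack on the factoring and discrete-logarithm problems, which is why RSA, ECC, and Diffie-Hellman cannot be rescued by simply lengthening keys.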
More specifically, Shor’s algorithm factors large integers and computes discrete logarithms, which breaks the mathematical foundations of RSA, ECC, and Diffie-Hellman. Grover’s algorithm speeds up searching unsorted databases, reducing the time required for brute-force attacks on hash digests and digital signatures. While it is foreseeable that advanced persistent threats (APTs) could be given access to such resources, it is not all doom and gloom. In fact, steps are already being taken in post-quantum cryptography, such as quantum key distribution, which is already in use. Encryption algorithms have life cycles, and RSA, Diffie-Hellman, and Elliptic Curve are at the end of theirs.

Artificial Intelligence AI, by contrast, is easily accessible. Anyone with a capable GPU can spin up a local AI instance, design a custom graphical user interface, and experiment with various models available through platforms like Hugging Face or Ollama. Specific LLMs like FraudGPT and WormGPT highlight that LLMs can be explicitly trained for hacking assistance. Through prompt injection, AI does not even need to be hosted locally to be used maliciously. AI is also already being deployed in cyber security operations and in defense. What the foregoing suggests is that one transformative technology will remain in the hands of only a few, while AI is prolific and only requires some decent hardware and an internet connection; what’s more, you now have your own teacher who can teach you all about the gun and how to fire it.

AI-Assisted Hacking The following are features of AI-assisted hacking: It increases the speed at which hackers can discover and exploit zero-day (unpatched) vulnerabilities. It enables more sophisticated social engineering attacks. It can be leveraged on large data sets to connect and identify weaknesses.

The End of The Script Kiddie? The reality is that the skills I learned from my lab experience could easily be replicated with AI.
Script kiddies are known for their minimal knowledge, but with this tool in their arsenal, the gaps in their technical knowledge can be supplemented. Instead of waiting for their professor to answer a technical question via email, searching for a YouTube tutorial, or accessing hacker chatrooms, AI (when not hallucinating) can provide expertise and assistance instantaneously. APTs are explicitly designed to bridge the knowledge gaps of each team member and ensure that experts are professionals in their technical domains. However, a lone script kiddie can now supplement those years of knowledge with an LLM tailored to their desires and objectives.

What This Means for the Cyber Security Workforce Shortage Society is grappling with economic woes and recession, and cyber security, once considered a relatively resilient field due to its necessity, has been negatively impacted in recent years. In fact, according to the 2024 ISC2 Cybersecurity Workforce Study, budget cuts have been identified as the number one reason that new talent is not being trained or hired.

Cyber Criminal versus Cyber Security Professional: Different Requirements To be a cyber security professional, one needs: three to five years of previous IT or cyber security adjacent experience; a degree (typically), though this trend is changing; the CISSP certification (which itself requires five years of experience for full certification); and other industry certifications that each cost between $700 and $1,400 to write. To be a cyber criminal, on the other hand, one needs: a good internet connection, a laptop, and an RTX GPU for a locally hosted AI. At this point, it is unclear whether Matthew Lane used AI when he hacked the company and tried to extort it for $2.85 million. That said, I have to wonder whether the term "script kiddie" still applies.

  • Services Mobile | voyAIge strategy

    Our Services VS offers industry-leading, experienced, and comprehensive solutions to support your successful use of AI. Have a look at what we provide, and don't hesitate to contact us. Policy & Procedures Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant. Research & Writing We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs. Impact Assessments We take a deep dive into your organization's policies as well as data and AI operations to uncover hidden risks. AI Solution Scoping Our team assesses your organization's needs, pain points, and opportunities. Compliance Let our team review, detect, and eliminate risks in your AI systems and business operations. Invited Talks We engage audiences with unique viewpoints that demystify complex legal, scholarly, political, popular, media, and philosophical understandings of AI. Ethical AI Playbooks Our playbooks assist organizations in navigating and responding to internal and external crises. Stakeholder Engagement Maximize AI adoption and AI project successes as we assist you in aligning your organization's stakeholders.

  • Recent Developments in the Regulation of Deep Fakes: a Focus on California | voyAIge strategy

    Recent Developments in the Regulation of Deep Fakes: a Focus on California The Landmark Law gets Another Coat of Paint, but then Faces Challenges By Christina Catenacci Oct 11, 2024 Key Points California has attempted to address misinformation and disinformation in relation to elections by signing new AI bills into law In the political context, Deep Fakes have the potential to manipulate voters by using manipulated media, including audio, images, and video In the political context, California’s AB 2839 and 2655 could lead the way in AI regulation Deep Fakes are inauthentic because they are manipulated media, and they can easily place anyone into audio, a photo, or video. They can make that person appear to do things that they never did and would never do. AI can synthesize images more rapidly and expedite the Deep Fake creation process. In the political context, we have seen several Deep Fakes pop up, whether it is audio robocalls by President Biden, manipulated video regarding President Obama, or AI-generated images of Vice President Harris. This is a critical time: we are approaching the 2024 Presidential Election in November. What has been done in terms of regulation to quell the spread of disinformation? To answer this question, it is necessary to take a closer look at what has taken place in California. Back in 2019, California Governor Newsom signed into law the “first of its kind”, AB 730, prohibiting a person, committee, or other entity, within 60 days of an election where a political candidate appears on the ballot, from distributing with actual malice materially deceptive audio or visual media (images or video) of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that the media has been manipulated. To combat the growing distribution of political Deep Fakes, this law took effect on January 1, 2020. 
How would someone know if there was an instance of materially deceptive audio or visual media? The media would have to be shown to falsely appear to a reasonable person to be authentic, and to cause a reasonable person to have a fundamentally different understanding or impression of the expressive content than that person would have if they were hearing or seeing the unaltered, original version of the content. However, there is one exception: any media accompanied by a disclosure stating that the media has been manipulated. Likewise, media distributed for news-reporting purposes is similarly exempt, provided that those distributing the content acknowledge that its authenticity is in question or that it does not accurately represent the speech or conduct of the candidate. And media that constitutes satire or parody is also exempt. Candidates who believe they have been affected must show that a violation occurred, and can then seek injunctive relief or general or special damages against the distributing party. There was some controversy in response to the bill—to the point where the Institute for Free Speech called AB 730 “a bad omen”. The main concern was that the law might have a chilling effect on political commentary and parody (even though there was an exception built into the bill for parody). Recent Enactment In September 2024, Governor Newsom stated the following: “Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate. These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.” That month, AB 2839 was published in the Legislative Counsel’s Digest.
It would be in effect until 2027, and would similarly ban the use of materially deceptive content—audio or visual media (images and video), regularly referred to as AI-generated Deep Fakes, that is intentionally digitally created or modified such that the content would falsely appear to a reasonable person to be an authentic record of the content depicted in the media. The bill would also prohibit a person, committee, or other entity from knowingly distributing an advertisement or other election communication that contains certain materially deceptive content with malice, subject to specified exemptions. The bill would apply this prohibition within 120 days of an election in California and, in specified cases, 60 days after an election. The bill would authorize a recipient of materially deceptive content distributed in violation of this law, a candidate or committee participating in the election, or an elections official, to file a civil action to stop the distribution of the media and to seek damages against the person, committee, or other entity that distributed it, except as specified. The bill would require a court to place such proceedings on the calendar in the order of their date of filing and give the proceedings precedence. Governor Newsom recently signed the bill into law. The law took effect immediately as an urgency statute to safeguard voter trust ahead of the 2024 Presidential Election. Recent Legal Challenge to AB 2839 On October 8, 2024, it was reported that there had been a legal challenge to AB 2839. That’s right, about two weeks after Newsom signed the bill into law, a challenge was brought by a California resident and screenwriter, whose viral Deep Fake video of Vice President Kamala Harris was widely disseminated, and by a satirical website. The main concern involves First Amendment free speech rights.
California’s Attorney General defended the law and argued that AI-generated Deep Fakes constituted a global threat to elections because disinformation campaigns erode public trust. Of the numerous AI bills that Newsom signed into law, AB 2839 was expected to be instrumental as we approach the election. However, the law currently has no effect, since a federal judge halted its enforcement. The judge reasoned that the law was too broad and restrictive. The ruling was considered a “blow” to California’s intentions to rein in misleading content on social media ahead of Election Day. Apparently, Elon Musk was happy with the ruling, and shared his thoughts about it on X. It is not clear what will happen next. Companion Bill: AB 2655 Also in September 2024, the California General Assembly passed AB 2655, the Defending Democracy from Deepfake Deception Act of 2024. Essentially, the companion bill requires large online platforms to block the posting of election disinformation during specified periods before and after an election, and to label certain additional content as inauthentic, fake, or false. The bill also requires platforms to develop reporting procedures for such content. However, the bill exempts from its provisions a regularly published online newspaper, magazine, or other periodical of general circulation that routinely carries news and commentary of general interest, if the publication complies with specified disclosure requirements. The bill also exempts content that constitutes satire or parody. Despite the fact that there is an exemption for satire or parody, Elon Musk was compelled to respond to Newsom’s signing the bill into law on X: “Parody is legal in America”. After this, Newsom and Musk had a back-and-forth online that only aggravated their relationship. This bill has been subject to a legal challenge as well.
In both AB 2839 and AB 2655, the legislature declared the following: California is entering its first-ever generative AI election, in which disinformation powered by generative AI will pollute our information ecosystems like never before. Voters will not know what images, audio, or video they can trust. In a few clicks, using current technology, bad actors now have the power to create a false image of a candidate accepting a bribe or a fake video of an elections official “caught on tape” saying that voting machines are not secure, or to generate the Governor’s voice telling millions of Californians their voting site has changed. In the lead-up to the 2024 presidential elections, candidates and parties are already creating and distributing Deep Fake images and audio and video content. These fake images or files can spread to millions of Californians in seconds and skew election results or undermine trust in the ballot counting process. The labeling information required by this bill is narrowly tailored to provide consumers with factual information about the inauthenticity of particular images, audio, video, or text content in order to prevent consumer deception. In order to ensure California elections are free and fair, California must, for a limited time before and after elections, prevent the use of Deep Fakes and disinformation meant to prevent voters from voting and to deceive voters based on fraudulent content. Accordingly, the provisions of this chapter are narrowly tailored to support California's compelling interest in protecting its free and fair elections. AB 2355 Newsom also signed into law AB 2355, which requires disclosure on any campaign advertisements made in whole or in part using AI.
More specifically, the bill would require a person, committee, or other entity that creates, originally publishes, or originally distributes a qualified political advertisement to include in the advertisement a specified disclosure that the advertisement was generated, in whole or in part, using AI. Any registered voter can bring an action in court seeking a temporary or permanent restraining order or injunction against the publication, printing, circulation, posting, or distribution of any qualified political advertisement that violates these disclosure requirements. Results of the legal challenge regarding AB 2839 and 2655 US District Judge John A. Mendez held that AI and Deep Fakes pose significant risks, but that the law likely violates the First Amendment: “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.” To that, Izzy Gardon, a spokesperson for Newsom, highlighted that the laws protect democracy and preserve free speech: “We’re confident the courts will uphold the state’s ability to regulate these types of dangerous and misleading deepfakes,” he said in a statement. “Satire remains alive and well in California — even for those who miss the punchline.” Zooming out When examining the larger picture, we see that there was a recent attempt at the federal level in September 2023 with H.R.5586 - DEEPFAKES Accountability Act in the 118th Congress (2023-2024). It will be considered by committee next before it is possibly sent on to the House or Senate as a whole. Only time will tell whether this bill makes progress so that the entire country is protected.

  • APPROACH | voyAIge strategy

Our 3-pronged method for integrating and managing AI successfully in your organization. OUR APPROACH What sets us apart from our competitors is our academic background and professional experience. We deliver affordable, award-winning research, insight, and strategy. Guiding Pillars We pride ourselves on our analytical rigor and high work ethic, which has been tried, tested, and true at the intersection of law, policy, and ethics - our Guiding Pillars . Each pillar forms the foundation of every engagement, ensuring success in delivering comprehensive, forward-thinking solutions. Law Our strategies incorporate leading legal practices to protect your organization from the uncertainty caused by a rapidly evolving AI regulatory landscape Policy To ensure that your business initiatives are sustainable and forward-looking, you can rely on our expertise in policy and our AI frameworks, guidelines, and protocols Ethics Our top priority is to lead our work with moral integrity. We integrate transparency and accountability in our work to ensure that our clients' staff, employees, and stakeholders feel safe and are informed Expertise With over 20 years of experience in consulting, academia, and specializing in applicable law, we bring a wealth of interdisciplinary expertise and extensive experience in research, writing, educating, public speaking, and conference organization. We thrive in engaging diverse audiences to simplify complex ideas into practical solutions. We are committed to speaking the language of our clients, ensuring clarity and effectiveness in every interaction.
Our Founders earned PhDs in technology fields with specialized expertise in six areas: ETHICAL TECHNOLOGY We understand that diverse groups of Canadians have different ethical standards and expectations around technology. We understand that consumers, populations, and marketplaces have different ethical standards and expectations around technology. CANADIAN LAW We understand Canadian laws and legislative policies, particularly as they relate to emerging and controversial technologies PRIVACY We are experienced in analyzing, developing, and advising policy design and solutions, particularly in the areas of informational and data privacy SURVEILLANCE We are professionally trained in understanding rights and liberties issues surrounding citizen and employee monitoring POLICY We regularly work with our clients and government to improve policy, procedure, process, and protocol around human-technology interactions MEDIA & COMMUNICATION We are professionally trained in studying the effects of media and digital communications in public and private enterprise Experience Project Management We have led an ethics and privacy oversight team to support a public-private collaboration using AI to study COVID-19 Alliances & Transparency We believe in strategic alignment through collaborative problem-solving. We look forward to working with our teammates, clients, and stakeholders to develop tailored solutions. We are also open to partnering with organizations to pool expertise and creatively develop new ways of seeing and solving. As a reflection of our professional principles and values, our consultancy prioritizes understandable and honest communication. We are committed to clearly and fairly explaining client needs when scoping, assessing, and pricing engagements.

  • Accelerator Suite | voyAIge strategy

Accelerator Suite Whether you are curious about AI, searching for the right AI solution, or are using AI and require risk management solutions, our Accelerators are designed to propel you through every need. AI Explorers Your introduction to AI. For organizations that are curious about AI: AI 101 training, stakeholder engagement, AI readiness assessment, AI law and ethics training, and a vulnerability scan. AI Stewards Full Risk Management. For organizations using AI that want to be ethical and compliant: a Compliance Audit, Bias Detection & Mitigation Solutions, an AI Ethics Playbook, and tailored AI Policies. AI Adopters Finding the right AI fit. For organizations ready to implement AI: an AI Opportunities Analysis, an AI Vendor Evaluation, an Implementation Roadmap, and a Training Plan. Book a Free Consultation

  • Privacy Commissioner Investigation into Social Media Platform, X | voyAIge strategy

    Privacy Commissioner Investigation into Social Media Platform, X Complaint into the Collection, Use, and Disclosure of Personal Information By Christina Catenacci, human writer Jan 23, 2026 Key Points On February 27, 2025, the Office of the Privacy Commissioner of Canada (OPC) opened an investigation following a complaint it received questioning whether the social media platform, X, had violated the Personal Information Protection and Electronic Documents Act (PIPEDA) On January 15, 2026, the OPC decided to expand the investigation into X Corp following reports of AI-generated sexualized deepfake images—the OPC wanted to know whether X’s chatbot, Grok, was being used to create explicit images of individuals without their consent In addition, the OPC has also launched a related investigation into xAI, the AI company that is responsible for Grok On February 27, 2025, the Office of the Privacy Commissioner of Canada (OPC) opened an investigation following a complaint it received questioning whether the social media platform, X, had violated the Personal Information Protection and Electronic Documents Act (PIPEDA). Subsequently, on January 15, 2026, the OPC decided to expand the investigation into X Corp following reports of AI-generated sexualized deepfake images—the OPC wanted to know whether X’s chatbot, Grok, was being used to create explicit images of individuals without their consent. What is more, the OPC has also launched a related investigation into xAI, the AI company that is responsible for Grok. Therefore, the OPC has announced that it will examine whether X Corp and xAI are meeting their obligations under Canada’s federal private-sector privacy law, PIPEDA . At this time, the matter is being actively investigated; to that end, the OPC cannot provide further details. 
Philippe Dufresne, Privacy Commissioner of Canada stated: “The use of personal information without consent to create deepfakes, including intimate images, is a growing phenomenon that poses serious risks to individuals’ fundamental right to privacy. I have decided to expand my investigation to address this issue given its importance and the potential serious harms that it may cause to Canadians.”

  • AI Thought Leadership | voyAIge strategy

    Expert AI insights and strategic content to position your organization as an industry leader. Thought Leadership We create expert-level content to position your organization as a leader in the AI space, showcasing your talent, knowledge, and vision. From blogs and newsletters to social media and podcasting, our insights help you build credibility and trust with employees, stakeholders, and clients. What is Thought Leadership? Thought leadership is the strategic process of creating content and communication that positions an individual or organization as an authority in their field. It involves producing insightful, relevant, and forward-thinking content that showcases expertise and deep knowledge on industry trends, challenges, and opportunities. What does Thought Leadership encompass? 1. Establishing Expertise Thought leadership is about sharing in-depth knowledge and insights that highlight an individual or organization's proficiency. For businesses, it goes beyond promotional content. It's about demonstrating a command of the field, which builds credibility and trust. 2. Influencing and Leading Industry Conversations A key aspect of thought leadership is contributing to and shaping discussions within the industry. This includes highlighting emerging trends, opportunities, and challenges. Offering unique perspectives or innovative solutions to common industry issues is a key way of contributing and moving along the conversation on AI. The goal is to be at the forefront of industry conversations, establishing a company or individual as a go-to resource for reliable information and insights. 3. Providing Value to Audiences Thought leadership content is valuable because it educates, informs, and inspires action. It helps audiences understand complex topics, make informed decisions, and see new opportunities. By delivering well-researched and relevant content, thought leaders build a loyal audience who sees them as a reliable source for information and guidance. 
4. Building a Brand's Authority and Credibility Companies and professionals who consistently produce thoughtful, authoritative content establish themselves as credible leaders in their sector. This credibility is crucial for building trust with stakeholders, clients, and employees. It also positions the brand as an entity that knows its market and understands its environment. Over time, reputational gains translate into new opportunities, such as partnerships, media features, speaking opportunities, or new business ventures. 5. Demonstrating Command of Industry Developments Thought leadership keeps audiences informed about the latest industry developments, including technological advancements, regulatory changes, and emerging best practices. It involves research and the ability to interpret and translate information into digestible and actionable insights for the audience. For example, an AI-driven organization might publish thought pieces on ethical AI practices, the impact of regulations like the EU AI Act, or trends in AI modelling. 6. Engaging Stakeholders Through Authentic Storytelling Effective thought leadership combines expertise with storytelling. It’s not just about facts and data. It’s about weaving narratives that connect with stakeholders and build engagement. This can include sharing customer success stories, case studies, or experiences that showcase how the organization is tackling real-world challenges. 7. Leveraging Various Channels and Formats Thought leadership isn’t limited to written content; it extends across multiple platforms and formats to reach diverse audiences. These channels can include blogs, social media posts, podcasts, webinars, white papers, research reports, and more. Our Thought Leadership Samples Every week, voyAIge strategy generates thought leadership content that it shares on its homepage.
In a section called "Insights" , we regularly share AI-related news, offer insights and analysis, and break down what it means for any organization by providing advice and actionable steps. We also provide three key takeaways to help you find what is most relevant, straight away. Book a Free Consultation to Learn More about our Thought Leadership services

  • What the Duolingo Layoffs Reveal About People and AI | voyAIge strategy

What the Duolingo Layoffs Reveal About People and AI Keeping People in the Loop with AI Allows Organizations to Outperform Those Who Do Not By Tommy Cooke, fueled by coffee May 9, 2025 Key Points: 1. AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities 2. Mass layoffs tied to AI adoption risk damaging reputation, innovation capacity, resilience, and ethical oversight 3. Organizations that prioritize human-AI collaboration—through hybrid workflows, upskilling, and governance—position themselves for long-term success Duolingo, the world’s leading language-learning app, is getting rid of its contract employees and replacing them all with AI . Human workers who wrote lessons or devised new ways to translate phrases from one language to another are being let go. This news comes on the heels of Duolingo letting go of 10 percent of its workforce last year . Terminating employees and replacing them with AI is not new. Shopify , Expedia , and Cars24 are but a few examples of dozens of large organizations around the world following suit. The reasons? There are a few, and they are not unusual. For some, it’s about an “AI-first strategy”—prioritizing technology as a driving force for completing daily tasks. For others, it’s about cost reduction, streamlining operations, automating innovation and marketing, and so forth. For many readers, and especially us here at VS, these stories are unsettling. They are harbingers of what so many workers fear: that AI may eventually replace us. However, beneath the surface of these stories are important lessons—about the myths we tell ourselves about AI, about the real value of humans, and about the long-term consequences organizations face when they idealize technology to the extent that it removes people from the equation of work. Busting the “AI Will Replace Everyone” Myth Let’s begin where most of these media stories stop.
The assumption that AI is here to “replace” workers is a misunderstanding of what AI can actually do and what value it provides. In fact, AI can significantly boost productivity and creativity by up to 40 percent , provided that it is paired with skilled human workers. Simply put, productivity gains do not come when AI eliminates the human role. Rather, they come when AI enhances human capacity, for example, by reducing manual data entry, accelerating review processes, and freeing people up to focus on complex problem-solving, creative design, or interpersonal work. The lesson here is simple: AI does not need to be about mass replacement, because it is evidently about reshaping roles, tasks, and collaborations in ways that empower, rather than replace, people. Organizations are at Risk when they Overlook the Value of Humans Here’s where the cases of employers replacing workers with AI get really interesting. While companies like Duolingo and Expedia frame their layoffs as part of their strategic shift to AI tools, a closer look raises some important questions: What institutional knowledge was lost when long-term employees were cut? What nuances in language, culture, and humour went out the door with those workers? What risks do these companies now face when AI-generated outputs are misaligned with user expectations? One thing AI does not have is lived experience. It cannot speak from personal context because it has none. AI merely mimics patterns in data. When it does so without careful human review—that important Human-in-the-Loop —it outputs errors, biases, or tone-deaf missteps that can be extraordinarily costly to clean up financially, reputationally, and legally. Moreover, humans are simply better suited for complex tasks that require real-time adaptation to rapidly emerging changes.
As David De Cremer and Garry Kasparov eloquently put it in their co-authored article in the Harvard Business Review: “Contrary to AI abilities that are only responsive to the data available, humans have the ability to imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns. These abilities are unique to humans and do not require a steady flow of externally provided data to work as is the case with artificial intelligence.” Essentially, short-term gains in efficiency not only seed long-term structural vulnerabilities, but they also place organizations at risk of permanently losing critical capabilities that only humans can provide. The Hidden Value of Keeping People Organizations that embrace a human-centric AI approach tap into capabilities that are profoundly valuable. Let’s look at a few of them: Embedded institutional memory . Seasoned employees know the why, not just the what. Cultural fluency . People bring deep cultural awareness and ethical discernment to decisions. Creative adaptability . When AI encounters novel problems, humans are the ones who figure out how to pivot, adapt, and respond. Critical self-reflection . People are better at determining when issues need to be escalated. Whereas AI models can drift over time and become worse at detecting and solving critical issues, people remember from experience what is expected of them in high-stakes scenarios. As Christina Catenacci recently summarized in her review of the World Economic Forum (WEF)’s “The Future of Jobs” report, companies that retain, retrain, and reposition human workers alongside AI adoption outperform competitors on innovation, employee satisfaction, and customer loyalty. The Consequences of Getting It Wrong When employers misread the AI landscape and pursue mass terminations under the illusion that AI can simply replace people, several risks emerge. First is reputational fallout.
Customers and clients increasingly value ethical, human-centered brands. High-profile layoffs tied to AI spark backlash and risk tarnishing a company’s public image. Second is a loss of resilience. Hollowed-out workforces are brittle. Without internal talent, companies become overdependent on external vendors or off-the-shelf solutions, making them less adaptable in fast-changing markets. This, of course, leads to reduced quality of products and services—clients will notice. Third are innovation slowdowns. While AI can efficiently handle certain tasks such as generating images, it truly struggles with ambiguity . Without people who understand edge cases, novel demands, or cultural shifts, companies are at significant risk of losing their innovative edge. Lastly is increased risk. AI systems are only as strong as the oversight and governance structures around them . Layoffs often undercut those ever-important human guardrails, increasing the odds of ethical missteps, legal violations, or data breaches. What Should Employers Do Instead of Replacing People with AI? Instead of treating AI as a magic bullet for growth, forward-looking organizations should approach AI as a multiplier—as something that augments the capacity, creativity, and performance of humans. Here are a few ideas business leaders can consider: Invest in upskilling and reskilling . As AI takes over routine tasks, workers need new training to focus on higher-value tasks; in this way, workers can be challenged with higher-level and more complete roles. Prioritize making your people AI literate and ready to embrace a culture of AI-supported growth. Design hybrid workflows . Map out use cases and the subsequent critical operational business processes that might benefit most from human-AI collaboration rather than one-sided automation. Remember, Human-in-the-Loop is crucial for any AI to work. Build a governance framework .
Ensure AI deployments have human checkpoints, clear accountability, and robust compliance safeguards. AI is considered human-centric when people are supported and guided in how they use it; this is exactly what AI governance frameworks provide. Communicate transparently with employees . People are more willing to embrace AI when they understand how it supports their work, not threatens it. That is, employees are more willing to accept AI if employers are open, honest, and communicate regularly about AI usage in the organization. AI’s Power is Human-Centric The stories we began with may seem like a sign of things to come, but they are just one chapter in a much larger story. The deeper truth is that AI is merely a tool. It is not a replacement. Its most powerful applications are those where humans and machines collaborate side by side, amplifying each other’s strengths and compensating for each other’s weaknesses.

  • AI companions in the Workplace | voyAIge strategy

    AI companions in the Workplace An intro to BYOB (Bring your own bots) to work By Christina Catenacci Nov 20, 2024 Key Points AI companions are chatbots that talk with you, offer support, and help with various tasks There are pros and cons to bringing AI companions to work, and they need to be considered before use Employers are recommended to create strong policies and procedures for when they introduce AI companions to the workplace What exactly are AI companions? You may call it a friend. You may call it a mentor. You may even call it a romantic partner. What I’m talking about is the AI companion. Essentially, these AI companions are chatbots that talk with you, offer support, and help with various tasks . Some of the main characteristics of AI companions include: They chat : the conversation quality is important in that the chatbot needs to sound natural, understand context, and keep the conversations flowing They have several features : some of the key features include customization options, voice chat, and any special abilities They are easy to use : user-friendly elements include things like ease of set up, ease of finding features, and ease of use across different devices When evaluating AI companions, each criterion is given a score on a five-point scale, where the higher the score, the more highly rated the AI companion. What are some of the most popular AI companions? There are many websites that comment on and rank popular AI companions. It is important to keep in mind that it depends on the reasons why a person wants to use the AI companions, whether it is for friendship and emotional support, helping with school or work, or going down the romantic path. For instance, some sites rank the seven best AI companions of 2024 , others create comparison matrices for six AI companions , and others list the top 10 to chat and have fun with . 
Some of these AI companions can also be used as mentors and buddies to bounce ideas off of when thinking about work: think Star Trek and brainstorming in the Holodeck. It might be less challenging than finding a real-life mentor. For example, using AI to enhance rather than replace humans can help organizations embrace AI as a tool for growth, and therefore enhance human potential. Other AI companions are pure business and act as business tools. For example, users can “Tackle any challenge with Copilot”. This chatbot gives users straightforward answers so they can learn, grow, and gain confidence. It helps with any task so users can transform their ideas into stunning visuals, simplify dense information into clear insights, and polish their writing so their voice shines (users need a Microsoft 365 plan). There are even Copilot+ PCs. Another example of an AI companion that helps with work is Gemini for Google Workspace. Lauded as the “always-on AI assistant”, it can be used across the Google Workspace, meaning it is built right into Gmail, Docs, Sheets, and more, with enterprise-grade security and privacy (users need a Workspace plan). What are the pros and cons of bringing AI companions to the workplace? Like BYOD (Bring your own device), BYOB has several pros: increased productivity and efficiency; enhanced decision-making; more efficient and precise learning, reasoning, problem-solving, perception, and language understanding, especially in certain sectors like healthcare; a higher likelihood of innovation and a competitive edge; and the ability to automate tasks. And here are some of the main cons: serious job displacement concerns among employees; privacy and cybersecurity concerns; ethical concerns; potential for overdependency; and potential for errors. AI companions have several workplace uses; that said, it is important to remain aware that there are challenges that need to be addressed. How are AI companions being used by employers at work?
Some tasks being completed by AI companions include notetaking, summarizing meetings, and creating agendas or lists of follow-up tasks. When employees bring their own AI companions to work, individual licences may be cheaper than enterprise plans, employees can pick AI companions that they are familiar with and that work well for them, and employees who frequently use these tools end up self-training them. Employers should set rules and guardrails by creating an AI in the Workplace policy for all employees. Moreover, it is critical for employers to understand the risks and attempt to mitigate them. See my colleague’s insight article on the risks of AI companions and how to mitigate them.
