
Search Results


  • Keeping People in the Loop in the Workplace | voyAIge strategy

Keeping People in the Loop in the Workplace
Some Thoughts on Work and Meaning
By Christina Catenacci, human writer
May 16, 2025

Key Points
- We can look to the famous words of Dickson C.J. in the 1987 Alberta Reference case for some of the first judicial comments touching on the meaning of work
- When thinking about what exactly makes work meaningful, we can look to psychologists who have demonstrated through scientific studies that meaningful work can be attributed to community, contribution, and challenge—leaders are recommended to incorporate these aspects in their management strategies
- Leaders are also encouraged to note that self-actualization is the process of realizing and fulfilling one's potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose

My co-founder, Tommy Cooke, just wrote a great article that discusses the effects of the Duolingo layoffs. In that piece, he talks about how Duolingo let go of its contract workers (on top of the 10 percent of its workforce it had already cut) and replaced them with AI. Ultimately, he concludes that AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities—and it follows that Duolingo runs the risk of reducing employee morale, increasing inefficiencies, and causing other long-term negative consequences like damage to reputation. Duolingo is not alone when it comes to reducing a part of its workforce and replacing it with AI. In fact, Cooke suggests that organizations that prioritize human-AI collaboration through hybrid workflows, upskilling, and governance position themselves for long-term success.

This article got me thinking more deeply about the meaning of work. From an Employment Law perspective, I am very familiar with the following statement made by Dickson C.J. in 1987 (Alberta Reference): "Work is one of the most fundamental aspects in a person's life, providing the individual with a means of financial support and, as importantly, a contributory role in society. A person's employment is an essential component of his or her sense of identity, self-worth and emotional well-being. Accordingly, the conditions in which a person works are highly significant in shaping the whole compendium of psychological, emotional and physical elements of a person's dignity and self-respect" Furthermore, Dickson C.J. elaborated that a person's employment is an essential component of his or her sense of identity, self-worth and emotional well-being. I wrote about these concepts in my PhD dissertation, where I argued that there is an electronic surveillance gap in the employment context in Canada, a gap that is best understood as an absence of appropriate legal provisions to regulate employers' electronic surveillance of employees both inside and outside the workplace. More specifically, I argued that, through the synthesis of social theories of surveillance and privacy, together with analyses of privacy provisions and workplace privacy cases, a new and better workplace privacy regime can be designed (to improve PIPEDA). Disappointingly, we have still not seen the much-needed legislative reform, but let's move on. Thus, it is safe to say that for decades, courts have recognized the essential nature of work when deciding Employment Law cases. Economists have too.

For instance, Daniel Susskind wrote a working paper where he explored the theoretical and empirical literature that addresses this relationship between work and meaning. In fact, he explained why this relationship matters for policymakers and economists who are concerned about the impact of technology on work. He pointed out that the relationship matters for understanding how AI affects both the quantity and the quality of work, and he asserted that new technologies may erode the meaning that people get from their work. What's more, if jobs are lost because of AI adoption, the relationship between work and meaning matters significantly for designing bold policy interventions like Universal Basic Income and Job Guarantee Schemes. More precisely, he argues that policymakers must decide whether to focus simply on replacing lost income (as with a Universal Basic Income, or UBI) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a Job Guarantee Scheme, or JGS). With AI becoming more common in the workplace, Susskind points out that although traditional economic literature narrowly focuses on the economic impact of AI on the labour market (for instance, on employment earnings), there has been a change of heart in the field that has evolved into a creeping pessimism about both the quantity of work to be done and the quality of that work. In fact, he touches on the notion that paid work is not only a source of income, but of meaning as well. He also notes that classical political philosophers and sociologists have introduced some ambiguity when envisioning the relationship, but organizational psychologists have argued and successfully demonstrated through scientific studies that people do indeed get meaning from work.

What does this all mean? Traditional economic models treat work solely as a source of disutility that people endure only for wages. But it is becoming more evident that the more modern way to think about work entails thinking about meaning—something that goes beyond income. What the foregoing suggests is that, if AI ultimately leads to less work for most people, we may need to better understand what exactly is "meaningful" to people, and how we can ensure that people who are downsized still have these meaningful activities to do. Further, we would need to provide ample opportunity to pursue these meaningful activities, so people can experience the psychological, emotional, and physical elements of dignity and self-respect that Dickson C.J. referred to back in 1987. Along the same lines, we will need to reimagine policy interventions such as UBI and JGS; while advocates of JGS tend to believe that work is necessary for meaning, UBI supporters believe that people can find meaning outside formal employment. More specifically, UBI is a social welfare proposal where all citizens receive a regular cash payment without any conditions or work requirements. The goal of UBI is to alleviate poverty and provide financial security to everyone, regardless of their economic status. On the other hand, a job guarantee is presented by its advocates as a landmark policy innovation that can secure true full employment, while stabilizing the economy, producing key social assets, and securing a basic economic right. What we can be sure of is that things are changing with respect to how we see work and meaning.

For many, work is a major component of their lives and their views of themselves. Some would go further and suggest that work is the central organizing principle in their lives—they could not imagine life without work, and self-actualization would not take place without it. To be sure, self-actualization is the process of realizing and fulfilling one's potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose.

What Makes Work Meaningful?
A closer look into what makes work meaningful can help in this discussion. Meaningful work comes from:
- Community: We are human, whether we like it or not. Because of this, we are wired for connection. Studies show that employees who feel a strong sense of belonging are more engaged and productive
- Contribution: We view the ability to see how one's work benefits others as one of the strongest motivators in any job. In fact, employees who find meaning in their work are 4.5 times more engaged than those who do not
- Challenge: We thrive when we are given opportunities to learn and grow. Put another way, when leaders challenge employees to expand their capabilities and provide the support they need to succeed, those employees experience more meaningful development

When you stop and think about it, it makes sense that leaders play a considerable role in shaping workplace meaning. Since about 50 percent of employees' sense of meaning at work can be attributed to the actions of their leaders, leaders are recommended to find ways to cultivate community, contribution, and challenge so that employees and teams can thrive. More precisely, leaders in organizations are recommended to:
- foster a strong sense of belonging
- be aware of and acknowledge the impacts of employees' work
- challenge workers so that they grow in meaningful ways

Individuals can also add some other features so that they can create some meaning for themselves, namely with self-instigated learning, volunteering in the community, participating in local government, engaging in care work, and engaging in creative work.

  • AI Governance in 2025 | voyAIge strategy

AI Governance in 2025
Trust, Compliance, and Innovation to Take Center Stage this Year
By Tommy Cooke, powered by caffeine and curiosity
Jan 20, 2025

Key Points:
- AI governance is transitioning from a reactive compliance measure to a proactive discipline
- Innovations like AI impact assessments help organizations operationalize transparency
- AI governance frameworks are no longer regulatory shields. They enhance brands

What was an emerging concern over the last few years will become a mature and necessary strategic discipline in 2025. As we move deeper into another year and while AI remains in its infancy, it is necessary to have the guardrails in place to ensure that AI grows and contributes successfully. The landscape of AI governance is thus evolving in many meaningful ways, much of which is due to growing international regulatory pressure, increasing stakeholder expectations, and the ongoing need to ensure significant financial investments in AI generate reliable returns. This Insight looks at what has changed over the last couple of years and looks ahead to how AI governance is maturing – and why these shifts matter.

From Awareness to Structure
In the couple of short years of AI's proliferation across virtually every industry, AI governance could be characterized as reactive. Organizations leveraging AI to innovate and reduce costs – particularly those with high stakes in demonstrating that AI can be trustworthy – have tended to approach governance as a checkbox exercise. Unless an organization existed within the purview of a particular jurisdiction requiring compliance, like the EU's AI Act, whether and how it stood up a dedicated office with a detailed AI governance strategy depended largely on its own awareness and relationship with its stakeholders. Moving forward, that awareness is intensifying. Organizations are no longer waiting for compliance to simply arrive. Even in the face of shifting political landscapes in North America where AI regulation seems to be losing momentum, the AI governance market is expected to grow from $890 million USD to $5.5 billion USD by 2029. This statistic is indeed a reflection of regulatory pressure abroad – and it is also a reflection of the maturing need for structured management of AI. With AI systems earning the trust of organizations around the world to make critical decisions, the potential for damage and unintended consequences is becoming far too risky: algorithmic bias, breaches, and ethical violations can cause significant reputational liabilities and financial penalties that would almost certainly erase any organization's AI investment; non-compliance with the EU's AI Act, for example, can result in fines up to €35 million or 7 percent of an organization's annual turnover.

Transparency in the Spotlight
Over the last couple of years, transparency has been a buzzword. It existed in a gray space because organizations tended to use the term strategically in public-facing white papers and proposal packages. The word "transparency" often appears through corporate "AI-washing": the process of exaggerating the use of AI in products and services, particularly when companies make misleading claims about the safety and ethics of their systems. Moreover, transparency tends to be perceived as difficult to achieve. Many large-scale AI adopters believe that AI systems' outputs are difficult to explain or that their processes are virtually impossible for laypeople to understand.

This stance will no longer be satisfactory in 2025 and the years ahead. Why? Contrary to what some may believe, societal, political, and ethical pressures for transparency are growing. And those pressures are leading to AI transparency innovations. Here are two examples:
- AI impact assessments (AI-IAs) are not merely designed to identify positive and negative impacts of AI – they are also growing in popularity because they position organizations to critically reflect on and discuss the social and ethical consequences of using AI. What AI-IAs essentially do is commit an organization to comprehending how its AI systems can be improved as well as what the risks may be – whether emerging or existing in the present moment. These dynamics already exist in every AI system. By making them visible, organizations take crucial steps toward demonstrating transparent and accountable relationships with their AI systems
- 2024 saw a significant maturation in AI model documentation: an explanation of what an AI system's model is, what it was trained on, any tests or experiments conducted on it, and so forth. The goal is to document what the AI system is doing. By noting what the system does, an organization provides its stakeholders with a track record that can be examined to ensure responsible and ethical use as well as to demonstrate compliance

Data Sovereignty on the Rise
While data privacy has been a long-standing focus throughout the previous years, 2025 will mark a significant shift toward data sovereignty. As regulatory, geopolitical, and social concerns continue to rise around responsible and ethical use of AI, 2025 will see organizations increasingly designing AI systems in ways that deal with how data is stored, processed, and accessed. Compliance with data residency laws to ensure that sensitive data will remain within a national boundary or specific jurisdiction, for example, will trend this year. We will hear more about other privacy-preserving technologies in AI systems, such as federated learning: a machine learning technique that allows AI to be trained across datasets from different sources without transferring sensitive data across borders. Data is no longer viewed as merely a business asset but a national asset. For organizations operating globally, this can make daily operations rather complicated due to the existence of multiple international laws and norms if they are to avoid penalties while maintaining trust. When data involves national security, healthcare, or financial sectors, demonstrating the ability to respect data storage laws when using AI will be a top priority for organizations this year.

Ethical AI as a Top Operational Priority
Much like the way in which buzzwords like transparency have been used to gesture toward responsible AI use, ethical AI will finally emerge as a fully operationalized practice. Unlike the stretch of AI adoption in the early 2020s, when AI ethics tended to be little more than a vague concept, ethical AI discourse and debate have now been sustained for a considerable period of time. Organizations are recognizing that failures to act upon the principles and values of ethical AI not only pose reputational harms and financial risks, but can also harm operational integrity. Organizations have been and will continue to conduct structured reviews to identify potential bias, discrimination, and unintended social consequences of their AI systems.

These kinds of assessments are being applied across an AI system's lifecycle, from design to monitoring after implementation. According to the AI Ethics Institute (2023), 74 percent of consumers prefer to use products certified as ethically deployed and developed. This makes comprehensive AI training with a focus on governance, privacy, and of course – ethics – a must. The same will hold true for selecting AI vendors and designers committed to embedding ethical considerations into their products from the beginning and not merely as an afterthought.

AI Governance as a Strategic Differentiator
A commonplace perception of all things related to privacy, ethics, and governance is that they are expensive and stifle innovation. It is often echoed by techno-libertarians, those who believe that innovation and business should be left largely unregulated in order to maximize growth and creativity – people who resist external intervention unless mandated by law. What proponents of these perceptions and beliefs fail to understand is that a lack of proactivity in the realm of responsible and ethical management of technology is becoming extraordinarily risky and costly. In 2025, AI governance will be embraced as a business strategy that not only mitigates risks but also allows organizations to actively differentiate themselves in competitive markets. AI-IAs, audits, transparent reporting, and other AI governance-related activities will be more directly attributable to brand equity and stakeholder confidence. In recognizing that the world's legal, social, political, and ethical standards are strengthening around the use of AI, organizations are realizing that demonstrating and sharing a robust AI governance framework showcases the organization and its talent as thought leaders who are able to navigate complex technology while building trust-based relationships with customers, partners, regulators, and so on.

So, Why Now?
What is driving the maturation and higher adoption rates of AI governance? Here are three catalysts to consider:
- Regulatory Evolution: despite the resurgence of techno-libertarians who may be slowing the advance of AI-related regulatory agendas, this only applies in limited jurisdictions. It's important to remember that sub-sovereign jurisdictions (e.g., state and provincial level government authorities) are developing their own regulations. Whether they deal specifically with AI or not, data and privacy laws are always changing – and they almost always have implications for how organizations use AI.
- Public Scrutiny: High-profile AI failures have made stakeholders more vigilant about ethical and operational risks. Consumers are increasingly skeptical about how organizations use AI. C-suite executives are becoming more and more aware of how important it is to demonstrate to stakeholders that they are using AI responsibly, which means implementing strong AI governance frameworks – and proving that they work.
- Market Maturity: Markets do not mature merely due to an invisible economic hand. Much of their maturation is driven by the behaviour, perceptions, and demands of their consumers. As AI becomes integral to business operations, it is perhaps unsurprising that consumers do not trust organizations that do not openly disclose that they use AI.

Final Thoughts
AI governance in 2025 represents a pivotal shift from a regulatory afterthought to a core strategic priority. Organizations that adopt structured governance frameworks, emphasize transparency, and prioritize ethical AI are not only mitigating risks but also distinguishing themselves in competitive markets. As regulatory landscapes evolve and public scrutiny intensifies, investing in robust AI governance is no longer optional.
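For readers who want to see what the federated learning technique mentioned above can look like in practice, here is a minimal, hypothetical sketch: several sites each fit a simple model on data that never leaves their premises, and only the resulting model weights are shared and averaged centrally. The data, model form, and weighting scheme are illustrative assumptions, not a description of any particular product or vendor's system.

```python
# Minimal federated-averaging sketch (hypothetical, for illustration only).
# Assumption: three "sites" each hold private tabular data that never leaves
# the site; only locally fitted model weights are shared with a coordinator.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Fit a ridge-regularized linear model on one site's private data."""
    n_features = X.shape[1]
    A = X.T @ X + 0.1 * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)  # only these weights leave the site

# Simulate three sites' private datasets (in reality these records never move).
true_w = np.array([2.0, -1.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

# Federated averaging: the coordinator sees weight vectors, not raw records.
local_weights = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(y) for _, y in sites], dtype=float)
global_w = np.average(local_weights, axis=0, weights=sizes)

print("globally averaged weights:", np.round(global_w, 3))
```

The design point is simply that sensitive records stay within each jurisdiction while the organization still obtains one shared model, which is why the technique comes up in data-residency and data-sovereignty discussions.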

  • Canadian News Publishers Sue OpenAI for Copyright Infringement | voyAIge strategy

Canadian News Publishers Sue OpenAI for Copyright Infringement
Yet another lawsuit dealing with copyright and AI
By Christina Catenacci, human writer
Mar 21, 2025

Key Points
- Canadian news publishers have sued OpenAI in the Ontario Superior Court of Justice in Toronto for copyright infringement
- It has been reported that three French trade groups have accused Meta of copyright infringement
- Copyright and AI cases are increasing, and they stress the need to balance the rights of creators with those of large tech companies

On November 28, 2024, Canadian news publishers including Toronto Star Newspapers Limited, Metroland Media Group Ltd., Postmedia Network Inc., PNI Maritimes LP, The Globe and Mail Inc./Publications Globe and Mail Inc., Canadian Press Enterprises Inc./Enterprises Presse Canadienne Inc., and Canadian Broadcasting Corporation/Société Radio-Canada (Publishers) sued OpenAI and related companies (OpenAI) in the Ontario Superior Court of Justice in Toronto.

What was the lawsuit against OpenAI about?
The Publishers argued that OpenAI infringed, authorized, or induced copyright infringement of their copyrights, contrary to the Copyright Act (Act). In addition, they argued that OpenAI engaged in circumvention of technological protection measures that prevented access to and restricted the copying of their copyrighted works. The Publishers also argued that OpenAI breached the Terms of Use of their websites and unjustly enriched itself at the expense of the Publishers. To that end, the Publishers requested that the court issue Orders for damages or statutory damages, damages and an accounting and disgorgement of profits in respect of OpenAI's breach of contract and unjust enrichment, and punitive and/or exemplary damages for OpenAI's wilful and knowing infringement of the Publishers' rights. What's more, they asked for a permanent injunction stopping the infringement and a further "wide injunction" that would stop OpenAI from circumventing technological protection measures. They also asked for costs and interest.

The Publishers began their Claim by pointing out that OpenAI engaged in ongoing, deliberate, and unauthorized misappropriation of their valuable news media works. They also noted that OpenAI scraped content from their websites, web-based applications, and third-party partner websites, and then used that proprietary content to develop its ChatGPT models, without consent or authorization. OpenAI also augments its models on an ongoing basis by accessing, copying, and/or scraping their content in response to user prompts: "OpenAI has taken large swaths of valuable work, indiscriminately and without regard for copyright protection or the contractual Terms of Use applicable to the misappropriated content" The Publishers even went so far as to say that OpenAI was aware of the value of the Publishers' proprietary data and intellectual property, including the significant financial investments made to acquire the rights to publish the works, and of the need to both pay for that information and secure the express authorization of the Publishers before obtaining and using it for its own purposes. They said that rather than seek to obtain the information legally, OpenAI elected to "brazenly misappropriate" the Publishers' valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration. More specifically, the Publishers provided a chart with the number of works, owned works, and licensed works that each of the Publishers had. Owned works were ones that were either owned or exclusively licensed by one of the Publishers, and licensed works were ones that were published by the Publishers under a non-exclusive licence and with the permission of the copyright owner. Each of the Publishers had published hundreds of thousands, if not millions, of owned works across their websites, as well as hundreds of thousands of licensed works—all of which had copyright protection.

Most concerning, the Publishers alleged that OpenAI developed its GPT models by generating a data set comprising copious amounts of text data (Training Data), which the model then analyzed to learn to generate coherent and natural-sounding text without the need for explicit supervision. Worse, they said that a significant proportion of the Training Data that was used to train the GPT models was obtained by OpenAI using a process called "scraping", which involved programmatically visiting websites across the entirety of the Internet, locating the desired information, and extracting or copying it in a structured format for further use or analysis. In fact, they claimed that their copyrighted works were scraped and/or copied one or more times. Even though OpenAI generated billions of dollars in annual revenue (as of October 2024, OpenAI was valued at $157 billion), the Publishers said that they were not paid any form of consideration in exchange for their works.

We shall see what transpires in this case—at this point, OpenAI has not filed its Statement of Defence. However, if one may take a guess, one might expect OpenAI to launch a defence of fair dealing pursuant to sections 29–29.4 of the Act. This is similar to fair use in the United States. To clarify, some things do not constitute copyright infringement; for instance, section 29 states that fair dealing for the purpose of research, private study, education, parody, or satire does not infringe copyright. Furthermore, other exceptions and related criteria involve the following: section 29.1 (fair dealing and criticism or review); section 29.2 (fair dealing and news reporting); section 29.21 (non-commercial user-generated content); section 29.22 (reproduction for private purposes); section 29.23 (fixing signals and recording programs); section 29.24 (backup copies); section 29.3 (acts undertaken without motive of gain); and section 29.4 (educational institutions). In this case, it is likely that OpenAI will run into problems if it tries to use this defence because OpenAI is currently a for-profit entity with only commercial interests in mind. That is, many of the fair dealing exceptions simply do not apply in this case. On the other hand, if OpenAI were a not-for-profit entity attempting to conduct research, perhaps this would be a different situation; however, there is no doubt that OpenAI is making billions of dollars and using the Publishers' works to do so.

News of another copyright infringement case involving AI in France
There have been several similar intellectual property lawsuits against AI companies, including one that I just wrote about where Cohere was sued by news publishers in the United States and Canada. In that article, I also referred to another case involving Thomson Reuters in the United States. And if that were not enough, we recently discovered that French publishers and authors are suing Meta in Paris, in the Third Chamber of the Paris Judicial Court, accusing it of using their works without permission to train its AI model, LLaMA. In fact, it has been reported that three trade groups (the National Publishing Union, which represents book publishers; the National Union of Authors and Composers, which represents playwrights and composers; and the Société des Gens de Lettres, which represents authors) have accused Meta of copyright infringement and asserted that the company has not obtained authorization to use their copyrighted works to train its AI model. They demand that Meta completely remove the data directories that were created through the alleged infringements. Interestingly, this will be the first copyright and AI case to be tried pursuant to the EU's AI Act. We shall see what transpires in this case, as it will likely influence the direction of all future cases in the EU and beyond. I say this because the AI Act is referred to as the gold standard since it was the first regulation that dealt with AI. And there is no doubt that this Act requires compliance with EU copyright law.

What can we take from these developments?
As we can see from the above discussion, there is a growing number of accusations of copyright infringement where creators have been pitted against large, rich tech companies. Indeed, legal fights are increasingly highlighting the tension between traditional intellectual property protections and what large tech companies are referring to as the need for barrier-free innovation. In Canada, we are currently without an AI statute. In the United States, Biden's Executive Order has been rescinded by President Trump, and it is not likely that there will be any further AI regulation at the federal level. That said, there are a few States that have recently enacted AI legislation: California, Colorado, Utah, and Virginia (passed but not yet signed into law). It appears that Canada and most of the United States have not committed to AI transparency with respect to training AI models and the data sources used. In the meantime, while there is hardly any AI legislation, a strong message is being sent to creators—the government is not interested in balancing the interests of creators with those of large AI companies—it is simply not a priority to do so via strong legislation. In fact, an argument could be made that Trump has sent the exact opposite message by rescinding Biden's AI Executive Order. Economically speaking, one may argue that the lack of lawmaking and policymaking in this area in Canada and the United States could deter creators from producing novel creative works. It may even negatively impact creative markets in the long term. As creators begin to recognize that there is a lack of balance between fair compensation for creators and allowing innovation for tech companies, governments may need to respond by adapting existing copyright legislation and/or inserting new provisions in AI legislation that address these issues. What we can say for sure is that the AI and copyright debate remains unsettled at this time.

  • Upskilling and Reskilling in the Age of AI | voyAIge strategy

Upskilling and Reskilling in the Age of AI
What Organizations Need to Know
By Christina Catenacci, human writer
Jan 20, 2025

Key Points:
- Upskilling is the process of improving employee skill sets through AI training and development programs
- Reskilling is learning an entire set of new skills to do a new job
- It is not possible to have a one-time upskilling and reskilling session—rather, upskilling and reskilling is a continuous learning process

IBM's Institute for Business Value states that more than 60 percent of executives predict that Gen AI will disrupt how their organization designs experiences; even more striking, 75 percent say that competitive advantage depends on Gen AI. In a study by Boston Consulting Group where 13,000 people were surveyed, 89 percent of respondents said that their workforce needed improved AI skills—but only six percent said that they had begun upskilling in "a meaningful way". Clearly, organizations that are not beginning the process of upskilling and reskilling can be at a disadvantage in this competitive game and risk being left behind. This may be why the AI Age is commonly referred to as an era of upskilling.

What is upskilling and reskilling?
IBM notes that upskilling and reskilling are two different things. In particular, upskilling is the process of improving employee skill sets through AI training and development programs. The goal is to minimize skill gaps and prepare employees for changes in their job roles or functions. For example, it could include asking customer care representatives to learn how to use Gen AI and chatbots to answer customer questions in real time with prompt engineering. On the other hand, reskilling is learning an entire set of new skills to do a new job. For example, someone who works in data processing might need to embrace reskilling to learn web development or advanced data analytics.

Organizations Need to Prioritize Upskilling and Reskilling
According to a report by KPMG, organizations are increasingly prioritizing upskilling and reskilling their workers to harness the power of AI and realize true business value. The authors point out that the impact of AI transformation is often underestimated—AI is expected to surpass human intelligence, and organizations cannot be complacent. Only 41 percent of organizations are increasing their AI investments. This is concerning since Gen AI is not like past disruptive technology; there can be no one-time upskilling and reskilling session, but rather a continuous learning process that takes place. Leaders in organizations need to get past employee resistance and help drive AI adoption. How can this be accomplished? The authors note that leaders need to be equipped with the right mindset, knowledge, and skills to guide their AI transformation. By actively using AI in their own work and sharing their experiences with their teams, leaders can create a safe environment for exploration and experimentation, and this in turn helps to create a culture of innovation and continuous learning. Most importantly, the authors state that leaders need to communicate the benefits of AI clearly and transparently: they need to share how the technology can augment and enhance human capabilities rather than replace them.

An In-depth Study on Reskilling and Upskilling
In an instructive report by the World Economic Forum (in collaboration with Boston Consulting Group), the authors introduced an approach to mapping out job transition pathways and reskilling opportunities using the power of digital data, to help guide workers, companies, and governments to prioritize their actions, time, and investments and to focus reskilling efforts efficiently and effectively. To prepare the workforce for the Fourth Industrial Revolution, the authors stated that it was necessary to identify and systematically map out realistic job transition opportunities for workers facing declining job prospects. When mapping job transition opportunities, the authors asked whether the job transition was viable and desirable. They broke down jobs into a series of relevant, measurable component parts in order to systematically compare them and identify any gaps in knowledge, skills, and experience. Then, they calculated the "job-fit" of any one individual on the basis of objective criteria. Viable future employees were those who were equipped to perform the tasks of the new job (individuals who possessed the necessary knowledge, skills, and experience). When it came to whether the job was desirable, some jobs were simply undesirable because the number of people projected to be employed in that job category was set to decline. Using data from the United States Bureau of Labor Statistics, the authors aimed to find job transition pathways for all. Let us take an example: the authors discovered several pathways for secretaries and administrative assistants. Some provided opportunities with a pay rise, such as insurance claim clerks, and some provided opportunities with a pay cut, such as library assistants or clerical workers.

The authors emphasized that employers could no longer rely solely on new workers to fill their skills shortages. One of the main issues was the willingness to make reasonable investments in upskilling and reskilling that could bridge workers onto new jobs. Similarly, they stressed that it was not possible to begin the transformation unless there was a focus on individuals' mindsets and efforts. For instance, they reasoned that some employees would need time off work to gain additional qualifications, and some would require other supports and incentives to engage them in continuous learning. This transformation could involve a shift in the societal mindset such that individuals aspired to be more creative, curious, and comfortable with continuous change. Moreover, the authors noted that no single actor could solve the upskilling and reskilling puzzle alone; in fact, they suggested that a wide range of stakeholders (governments, employers, individuals, educational institutions, labour unions, etc.) needed to collaborate and pool resources to achieve this goal. Further, data-driven approaches were anticipated to bring speed and additional value to upskilling and reskilling. For example, it may be worth exploring the amount of time required to make the various job transitions, or nuanced evaluations of the economic benefits of these job transitions.

How do Organizations Begin Upskilling and Reskilling?
When it comes to upskilling, BCG recommends that organizations:
- assess their needs and measure outcomes
- prepare people for change
- unlock employees' willingness to learn
- make adopting AI a C-Suite priority
- use AI for AI upskilling

Moreover, IBM recommends creating a lasting strategy, communicating clearly, and investing in learning and development. Some AI tools that are critical to upskilling include computer vision, Gen AI, machine learning, natural language processing, and robotic process automation. Upskilling use cases include customer service, financial services, healthcare, HR, and web development. Organizations can use AI technologies to enhance the AI learning experience itself via online learning and development, on-the-job training, skill-gap analysis, and mentorship. AI can provide added value for organizations because it combines institutional knowledge with advanced capabilities, fills important gaps, improves employee retention, and embraces the democratization of web development.

Furthermore, McKinsey & Company recommends that organizations use a cross-collaborative, scaled approach to upskilling and reskilling workforces. More specifically, to realize the opportunity of Gen AI, a new approach is required to address employee attraction, engagement, and retention. That is, before rushing in and starting the process, it is important to clarify business outcomes and how Gen AI investments can enable or accelerate them. This involves defining the skills that are required to deliver these outcomes and identifying the groups within the organization that need to build those skills. In addition, it is necessary to use a human-centred approach—from the outset, organizations are recommended to acknowledge that many employees experience upskilling and reskilling as a threat to their well-established professional identities. To address this issue, organizations need to lead using an empathetic, human-centred approach—fostering learning and development and transforming fears into curiosity—cultivating mindsets of opportunity and continuous learning. And of course, it is necessary to make personalized learning possible at scale. This involves having tighter collaboration across the HR function, stronger business integration to embed learning experiences into working environments, and a refreshed approach to the learning and development technology ecosystem.

Benefits of Upskilling and Reskilling in an AI-Driven Environment
There are several benefits of upskilling and reskilling:
- Organizations can remain competitive
- Employees can increase engagement and job satisfaction
- Workers with enhanced skills can improve their creativity, productivity, and efficiency
- Organizations can help employees reduce the risk of job displacement
- Employees can increase wages and enjoy better job opportunities
- Organizations can increase their retention numbers

Indeed, according to an MIT study, evidence suggests that Gen AI, specifically ChatGPT, substantially raised average productivity. Moreover, exposure to ChatGPT increased job satisfaction and self-efficacy, as well as concern and excitement about automation technologies. We know that employee development programs, including upskilling and reskilling, are highly valued by workers. More precisely, employees appreciate the following:
- Skill assessment and analytics
- Personalized learning paths
- Adaptive learning platforms
- AI-powered content curation
- Virtual assistants and chatbots
- Simulation and gamification
- Predictive analytics for training ROI
- Natural language processing for feedback and coaching
- Augmented reality (AR) and virtual reality (VR) for learning, mentoring, and training
- Continuous learning and adaptation

What We Can Take From All This
Given the above, it may be in organizations' interests to start the process of upskilling and reskilling, as recommended above. No one wants to find and hire new people: turnover costs organizations a great deal of money. And no one wants to stand by and watch an employer replace them with a robot or other form of Gen AI. The solution is to take the time to create a solid plan, beginning with outlining goals and aligning them with what the business needs. It is true: HR professionals who have an upskilling and reskilling plan look a lot more enlightened than those who view AI as a threat. As seen in the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, it appears that many employees want, and even expect, this type of training and development at work. Employers need to catch up to their employees, given that 75 percent of employees are already bringing AI into the workplace.
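To make the "job-fit" idea described above a little more concrete, here is a toy sketch under stated assumptions: each job is represented as a set of skills with required proficiency levels, a worker's fit is the share of those requirements they already meet, and whatever remains is the reskilling gap. The skill names, levels, and scoring rule are invented for illustration; they are not the methodology actually used by the World Economic Forum or BCG.

```python
# Toy "job-fit" and skill-gap sketch (hypothetical, for illustration only).
# Assumption: jobs and workers are described as skill -> proficiency level (1-5).

def job_fit(required, actual):
    """Return a 0-1 fit score and the remaining skill gaps."""
    met = 0.0
    gaps = {}
    for skill, need in required.items():
        have = actual.get(skill, 0)
        met += min(have, need) / need      # partial credit for partial proficiency
        if have < need:
            gaps[skill] = need - have      # levels of upskilling still required
    return met / len(required), gaps

# Example transition from the article: administrative assistant -> insurance claims clerk.
insurance_claims_clerk = {"data entry": 3, "customer service": 3,
                          "claims software": 4, "basic analytics": 2}
admin_assistant = {"data entry": 4, "customer service": 4,
                   "claims software": 1, "basic analytics": 1}

score, gaps = job_fit(insurance_claims_clerk, admin_assistant)
print(f"fit score: {score:.2f}")  # 0.69 -> plausibly viable with targeted training
print("skill gaps:", gaps)        # {'claims software': 3, 'basic analytics': 1}
```

Even a toy scoring rule like this shows why the report emphasizes data quality: the usefulness of a pathway depends entirely on how well the skill profiles describe real jobs and real people.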

  • What the Duolingo Layoffs Reveal About People and AI | voyAIge strategy

What the Duolingo Layoffs Reveal About People and AI
Keeping People in the Loop with AI Allows Organizations to Outperform Those Who Do Not
By Tommy Cooke, fueled by coffee
May 9, 2025

Key Points:
1. AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities
2. Mass layoffs tied to AI adoption risk damaging reputation, innovation capacity, resilience, and ethical oversight
3. Organizations that prioritize human-AI collaboration—through hybrid workflows, upskilling, and governance—position themselves for long-term success

Duolingo, the world's leading language-learning app, is getting rid of its contract employees and replacing them all with AI. Human workers who wrote lessons or devised new ways to translate phrases from one language to another are being let go. This news comes on the heels of Duolingo letting go of 10 percent of its workforce last year. Terminating employees and replacing them with AI is not new. Shopify, Expedia, and Cars24 are but a few examples of dozens of large organizations around the world following suit. The reasons? There are a few, and they are not unusual. For some, it's about an "AI-first strategy"—prioritizing technology as a driving force for completing daily tasks. For others, it's about cost reduction, streamlining operations, automating innovation and marketing, and so forth. For many readers, and especially us here at VS, these stories are unsettling. They are harbingers of what so many workers fear: that AI may eventually replace us. However, beneath the surface of these stories are important lessons—about the myths we tell ourselves about AI, about the real value of humans, and about the long-term consequences organizations face when they idealize technology to the extent that it removes people from the equation of work.

Busting the "AI Will Replace Everyone" Myth
Let's begin where most of these media stories stop. The assumption that AI is here to "replace" workers is a misunderstanding of what AI can actually do and what value it provides. In fact, AI can boost productivity and creativity by up to 40 percent, provided that it is paired with skilled human workers. Simply put, productivity gains do not come when AI eliminates the human role. Rather, they come when AI enhances human capacity, for example, by reducing manual data entry, accelerating review processes, and freeing people up to focus on complex problem-solving, creative design, or interpersonal work. The lesson here is simple. AI does not need to be about mass replacement, because it is evidently about reshaping roles, tasks, and collaborations in ways that empower, rather than replace, people.

Organizations are at Risk when they Overlook the Value of Humans
Here's where the cases of employers replacing workers with AI really get interesting. While companies like Duolingo and Expedia frame their layoffs as part of a strategic shift to AI tools, a closer look raises some important questions: What institutional knowledge was lost when long-term people were cut? What nuances in language, culture, and humour went out the door with those workers? What risks do these companies now face when AI-generated outputs are misaligned with user expectations? One thing AI does not have is lived experience. It cannot speak from personal context because it has none. AI merely mimics patterns in data. When it does so without careful human review – that important Human-in-the-Loop – it produces errors, biases, or tone-deaf missteps that can be extraordinarily costly to clean up financially, reputationally, and legally. Moreover, humans are simply better suited for complex tasks that require real-time adaptation to rapidly emerging changes. As David De Cremer and Garry Kasparov eloquently put it in their co-authored article in the Harvard Business Review: "Contrary to AI abilities that are only responsive to the data available, humans have the ability to imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns. These abilities are unique to humans and do not require a steady flow of externally provided data to work as is the case with artificial intelligence." Essentially, short-term gains in efficiency not only seed long-term structural vulnerabilities, but they also place organizations at risk of permanently losing critical capabilities that only humans can provide.

The Hidden Value of Keeping People
Organizations that embrace a human-centric AI approach tap into some things that are profoundly valuable. Let's look at a few of them:
- Embedded institutional memory. Seasoned employees know the why, not just the what.
- Cultural fluency. People bring deep cultural awareness and ethical discernment to decisions.
- Creative adaptability. When AI encounters novel problems, humans are the ones who figure out how to pivot, adapt, and respond.
- Critical self-reflection. People are better at determining when issues need to be escalated. Whereas AI models can drift over time and become worse at detecting and solving critical issues, people remember from experience what is expected of them in high-stakes scenarios.

As Christina Catenacci recently summarized in her review of the World Economic Forum (WEF)'s "The Future of Jobs" report, companies that retain, retrain, and reposition human workers alongside AI adoption outperform competitors on innovation, employee satisfaction, and customer loyalty.

The Consequences of Getting It Wrong
When employers misread the AI landscape and pursue mass terminations under the illusion that AI can simply replace people, several risks emerge. First is reputational fallout. Customers and clients increasingly value ethical, human-centred brands. High-profile layoffs tied to AI spark backlash and can truly risk tarnishing a company's public image. Second is loss of resilience. Hollowed-out workforces are brittle. Without internal talent, companies become overdependent on external vendors or off-the-shelf solutions, making them less adaptable in fast-changing markets. This, of course, leads to reduced quality of products and services—clients will notice. Third are innovation slowdowns. While AI can efficiently handle certain tasks such as generating images, it truly struggles with ambiguity. Without people who understand edge cases, novel demands, or cultural shifts, companies are at significant risk of losing their innovative edge. Lastly is increased risk. AI systems are only as strong as the oversight and governance structures around them. Layoffs often undercut those ever-important human guardrails, increasing the odds of ethical missteps, legal violations, or data breaches.

What Should Employers Do Instead of Replacing People with AI?
Instead of treating AI as a magic bullet for growth, forward-looking organizations should approach AI as a multiplier—as something that augments the capacity, creativity, and performance of humans. Here are a few ideas business leaders can consider:
- Invest in upskilling and reskilling. As AI takes over routine tasks, workers need new training to focus on higher-value tasks; in this way, workers can be challenged with higher-level and more complex roles. Prioritize making your people AI literate and ready to embrace a culture of AI-supported growth.
- Design hybrid workflows. Map out the use cases and the critical operational business processes that might benefit most from human-AI collaboration rather than one-sided automation. Remember, Human-in-the-Loop is crucial for any AI to work.
- Build a governance framework. Ensure AI deployments have human checkpoints, clear accountability, and robust compliance safeguards. AI is considered to be human-centric when people are supported and guided in how they use AI. It is important to point out that this is exactly what AI governance frameworks do.
- Communicate transparently with employees. People are more willing to embrace AI when they understand how it supports their work, not threatens it. That is, employees are more willing to accept AI if employers are open, honest, and communicate regularly about AI usage in the organization.

AI's Power is Human-Centric
The stories we began with may seem like a sign of things to come, but they are just one chapter in a much larger story. The deeper truth is that AI is merely a tool. It is not a replacement. Its most powerful applications are those where humans and machines collaborate side by side, amplifying each other's strengths and compensating for each other's weaknesses.
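One way to read the "hybrid workflows" and "human checkpoints" advice above is as a simple routing rule: the AI produces drafts, and anything low-confidence or high-stakes is held for a person instead of shipping automatically. The sketch below is a deliberately simplified illustration of that pattern; the threshold, fields, and labels are assumptions, not a prescribed implementation.

```python
# Minimal Human-in-the-Loop checkpoint sketch (hypothetical, for illustration).
# Assumption: an AI step returns a draft with a confidence score; low-confidence
# or high-stakes drafts are queued for human review rather than auto-published.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # illustrative cut-off, not a recommended value

@dataclass
class Draft:
    text: str
    confidence: float
    high_stakes: bool = False

def route(draft):
    """Decide whether a draft ships automatically or goes to a reviewer."""
    if draft.high_stakes or draft.confidence < REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "publish"

print(route(Draft("Weekly usage summary", confidence=0.95)))                  # publish
print(route(Draft("Customer-facing legal wording", 0.97, high_stakes=True)))  # queue_for_human_review
```

The same gate can record who approved what, which is exactly the kind of accountability trail an AI governance framework asks for.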

  • Understanding Risk of AI Self-Assessments | voyAIge strategy

    Understanding Risk of AI Self-Assessments Balancing Self-Assessments with External Audits By Tommy Coooke Nov 8, 2024 Key Points: AI self-assessments can uncover compliance gaps and build trust but should be paired with external expertise to ensure objectivity Collaborating with an external auditor helps create a comprehensive assessment that aligns AI adoption with an organization’s goals Maintaining a feedback loop with external auditors ensures ongoing improvements, keeping AI systems compliant and aligned with evolving organizational needs The United Kingdom recently announced that it has launched a new platform to promote AI adoption in the private sector. The goal of the platform is to efficiently allow a business to have a look at its operations and organizational design to identify, assess, and mitigate risks associated with AI before getting too entrenched in adopting AI incorrectly.  The announcement is timely and encouraging given that complex generative AI models are reportedly struggling with EU legal compliance benchmarks around AI bias and cybersecurity. Moreover, the UK's initiative appears to allow UK businesses to tackle the looming uncertainty around AI adoption head-on; despite its potential impact, 31% of British-based businesses are nervous to adopt AI with 39% of businesses reporting that it would be safer to stick with technologies that they already know how to use. The Value of AI Self-Assessments A self-assessment can generate the kind of clarity and confidence an organization requires to adopt AI. Here are a few benefits: Identify Compliance Gaps Early: A self-assessment can be a quick way to identify potential compliance blind-spots internally before they galvanize into substantive risks down the road. Scanning the regulatory landscape, both at home and abroad (especially for organizations with employees and stakeholders beyond its national borders), reveals what kinds of policy and procedure preparations are required prior to adopting AI. Foster Trust and Transparency: A self-assessment process can also play a crucial role in encouraging businesses to be transparent about their AI use. Per Cisco's 2024 Consumer Privacy Survey, 75% of respondents indicated that trustworthy and transparent data practices directly influence their buying choices. As importantly, trust and transparency not only protect and foster relationships with customers, but with regulators and insurers as well. Validate Internal Stakeholders from the Bottom-Up: Enlisting a workforce to assist in implementing a self-assessment is a powerful way of building trust in AI. When employees understand AI’s impact on their daily work and have an opportunity to assess a model or product, they are more likely to embrace AI rather than resist it. This is a bottom-up engagement strategy and is one that is proven to foster a culture that prioritizes communication, adaptation, and innovation. The Risks of Self-Assessments While there is value in conducting AI impact self-assessments, the process is not without risks. At voyAIge strategy, we encourage organizations to be mindful about the potential shortcomings of self-assessments, which include: A Lack of Objectivity: Conducting a self-assessment without external input  and feedback can generate biases that may take years to discover. Despite the power of a self-assessment tool to empower a workforce, they also signal to them quite clearly that AI is coming. 
Internal stakeholders may overlook weaknesses or ethical concerns because an organization is committed to adopting AI. These two quick examples reveal how a lack of objectivity can generate problems that may undermine the credibility of the assessment altogether. Limited Expertise: While self-assessment tools tend to be designed by AI experts, they are generally one-size-fits-all approaches to understanding an organization's needs. They also tend to look at many times of AI with the same lenses. Moreover and as importantly, those conducting a self-assessment usually lack the depth of knowledge required to fully understand not only AI’s implications but the rationale behind the design of a self-assessment tool. AI systems and assessment criteria are complex, necessitating a deep understanding of technical and regulatory challenges. on the one hand, and an organization's strengths and weaknesses on the other. There are many nuances at stake on either side of the equation. Failing to recognize and meet these nuances can result in superficial assessments that fail to uncover significant issues. Global Blind Spots: One of the biggest risks of a self-assessment is its oft failure to understand the regulatory landscape of neighboring jurisdictions. The key here is recognizing that AI laws are created and updated very quickly - and they do so around the globe at different rates and frequencies. A self-assessment might not fully capture these nuances, particularly if the evaluators behind the design of a self-assessment tool do not conduct adequate research and/or fail to regularly update the self-assessment tool. Balancing the Benefits and Risks of AI Self-Assessments Successfully conduct a self-assessment requires striking a balance between internal accountability and external assistance. Here are three steps your organization can take to responsibly and compellingly conduct a self-assessment: Step One - Enlist the Assistance of an External Auditor: Rather than leaving the entirety of the responsibility of a self-assessment to your workforce, bring in an external auditor as a project manager. By doing so, the auditor can guide key stakeholders in how to identify and recognize the key areas that may have otherwise been missed. Step Two - Engage Cross-Functional Teams with the Auditor: Involve the external auditor in a cross-functional team comprised of AI- and tech-savvy individuals from as many business units as possible. By having them work with the external auditor, specialized insight can be generated in a comprehensive way that minimizes blind spots and ensures that an AI's adoption fits across multiple business lines without being disruptive. Step Three - Develop a Feedback Loop: After your self-assessment is conducted, maintain check-ins with the external auditor. Continuous monitoring and improvement is always highly recommended when onboarding an AI system of any shape or size, especially as an organization grows and changes along the way. As time passes, have the external auditor provide updates on regulatory changes and provide advice on refining an AI system's KPIs to ensure that the system remains compliant and aligned with your organization's goals. Self-assessments can be crucial for generating alignment and excitement in a workforce. They're important tools for uncovering hidden risks and also for generating trust and transparency. However, they have their limitations. 
Understanding those limitations and overcoming them through external assistance is important for ensuring the successful implementation of any AI system.
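For teams that want to operationalize the three steps above, here is a minimal, illustrative sketch of how self-assessment findings could be tracked alongside an external auditor's review. The field names and statuses are assumptions made for illustration only; they are not a standard framework or a voyAIge strategy tool.

# Minimal, illustrative tracker for self-assessment findings plus an external auditor's review.
# All field names and statuses are assumptions for demonstration purposes.
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    question: str
    internal_finding: str          # "pass", "gap", or "unknown"
    auditor_finding: str = "pending"
    notes: str = ""

@dataclass
class SelfAssessment:
    items: list[AssessmentItem] = field(default_factory=list)

    def open_gaps(self):
        # An item stays open until both the internal team and the external auditor agree it is closed.
        return [i for i in self.items
                if "gap" in (i.internal_finding, i.auditor_finding) or i.auditor_finding == "pending"]

assessment = SelfAssessment(items=[
    AssessmentItem("Is there an inventory of AI systems in use?", "pass"),
    AssessmentItem("Are cross-border data transfers documented?", "gap",
                   notes="EU employees' data processed by a US vendor."),
    AssessmentItem("Is there a process to log and review model errors?", "unknown"),
])

for item in assessment.open_gaps():
    print(f"Follow up with auditor: {item.question} ({item.internal_finding})")

The point of a structure like this is simply that no finding is treated as closed until both the internal team and the external auditor have weighed in, mirroring the feedback loop described in Step Three.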

  • Contact | voyAIge strategy

How to get in touch with our AI strategy and governance experts. Contact Us For questions, inquiries, or requests that require a personal response, we will respond within 48 hours. If you are submitting a request for a quote about our products or services, please use this form here. If you are requesting a proposal or bid, please use this form here.

  • Managed Services | voyAIge strategy

Data and AI Leadership - without the overhead. Managed Data Governance & AI Governance Services Expert Leadership, Strategy, and Support at a Fixed Monthly Cost. Book a Free Consultation How our Managed Services Help We structure your journey. Whether you're just getting started or are already deploying tools, we help you assess readiness, define goals, and create a strategy that fits your organization's priorities. We build the right foundations. As experts in law, policy, and ethics, we develop the nuanced solutions you need to grow safely and successfully, without stifling innovation. We stay involved. As your challenges and opportunities change, we stay in touch with you to nimbly evaluate new use cases, manage compliance, and ensure your AI remains effective and aligned. Common AI Challenges Fear of AI Inappropriate Use of AI No AI Leadership No AI Strategy Too Many Questions Free Consultation Book a Free Consultation Book a no-strings-attached consultation session to see if our Managed AI Services can help you implement and use AI without the cost or complexity of doing it yourself. We will respond within 24 hours.

  • Google Decision on Remedies for Unlawful Monopolization | voyAIge strategy

Google Decision on Remedies for Unlawful Monopolization Company Does Not Have to Break Up: Will Competition Be Restored? By Christina Catenacci, human writer Sep 19, 2025 On September 2, 2025, the US Department of Justice (DOJ) commented on US District Court Judge Amit P. Mehta’s Order containing several remedies and referred to it as “an important step forward in the Department of Justice’s ongoing fight to protect American consumers”. More specifically, Google was ordered not to enter or maintain exclusive contracts relating to the distribution of Google Search, Chrome, Google Assistant, and the Gemini app, and not to enter or maintain agreements that: condition the licensing of any Google application on the distribution, preloading, or placement of Google Search, Chrome, Google Assistant, or the Gemini app anywhere on a device condition the receipt of revenue share payments for the placement of one Google application on the placement of another Google application condition the receipt of revenue share payments on maintaining Google Search, Chrome, Google Assistant, or the Gemini app on any device, browser, or search access point for more than one year, or prohibit any partner from simultaneously distributing any other GSE, browser, or GenAI product In addition, Google will have to make certain search index and user-interaction data available to certain competitors and offer certain competitors search and search text ads syndication services, which will open up the market by enabling rivals and potential rivals to deliver high-quality search results and ads and compete with Google as they develop their own capacity. These remedies had to do with the previous decision that Google abused its monopoly power, which I wrote about here. What is missing here? Google was not ordered to break up the company. Nonetheless, the DOJ stated: “For years, Google accounted for approximately 90 percent of all search queries in the United States, and Google used anticompetitive tactics to maintain and extend its monopolies in search and search advertising. Google entered into a series of exclusionary agreements that collectively locked up the primary avenues through which users access online search, requiring that Google be the preset default general search engine on billions of mobile devices and computers and, in many cases, prohibiting preinstallation of a competitor. Using its monopoly profits, Google bought preferential treatment for its search engine and created a self-reinforcing cycle of monopolization — shutting out potential competitors, reducing innovation, and taking choice away from American consumers” Abigail Slater, Assistant Attorney General for the DOJ’s Antitrust Division, stated in a post on X: “We proved in court that competition had been frozen in place for two decades in internet search. Google’s tactics have excluded competition, harming consumers and slowing innovation. Today’s remedy order agreed with the need to restore competition to the long-monopolized search market, and we are now weighing our options and thinking through whether the ordered relief goes far enough in serving that goal” What was Google’s Reaction to the Order? On the same day as the Order, Google posted its reaction: “Today’s decision recognizes how much the industry has changed through the advent of AI, which is giving people so many more ways to find information. This underlines what we’ve been saying since this case was filed in 2020: Competition is intense and people can easily choose the services they want.
That’s why we disagree so strongly with the Court’s initial decision in August 2024 on liability. Now the Court has imposed limits on how we distribute Google services, and will require us to share Search data with rivals. We have concerns about how these requirements will impact our users and their privacy, and we’re reviewing the decision closely. The Court did recognize that divesting Chrome and Android would have gone beyond the case’s focus on search distribution, and would have harmed consumers and our partners. As always, we’re continuing to focus on what matters — building innovative products that people choose and love” It appears that there was no acknowledgement by Google of the significant findings in the decision on Google’s abuse of monopoly power, or of how lucky the company was not to have to break up or be forced to sell off its Android operating system. Apparently, shares in Alphabet, Google's parent company, jumped by more than eight percent after the ruling. This may be because, going forward, phone manufacturers will be free to pre-load or promote other search engines, browsers, or AI assistants alongside Google's. Tech companies will also benefit from the Order since Google will be able to continue paying distributors for default placement. In other news, Google was fined €2.95 billion by the European Commission over abusive practices in online advertising technology. A couple of days after the US antitrust remedies Order against Google, the European Commission fined Google €2.95 billion over abusive practices in online advertising technology. More specifically, Google was fined €2.95 billion for breaching EU antitrust rules by distorting competition in the advertising technology industry (AdTech). It did so by favouring its own online display advertising technology services to the detriment of competing providers of advertising technology services, advertisers, and online publishers. The Commission has ordered Google to bring these self-preferencing practices to an end, and to implement measures to cease its inherent conflicts of interest along the AdTech supply chain. Google has 60 days to inform the Commission about how it intends to do so. What happened? Google sells advertising on its own websites and applications and intermediates between advertisers that want to place their ads online and publishers (i.e., third-party websites and apps) that can supply that space. Advertisers and publishers rely on the AdTech industry's digital tools for the placement of real-time ads not linked to a search query, such as banner ads on newspaper websites (display ads). In particular, the AdTech industry provides three digital tools: publisher ad servers, used by publishers to manage the advertising space on their websites and apps; programmatic ad buying tools for the open web, used by advertisers to manage their automated advertising campaigns; and ad exchanges, where demand and supply meet in real time, typically via auctions, to buy and sell display ads. Google provides several AdTech services that intermediate between advertisers and publishers to display ads on websites or mobile apps.
It operates two ad buying tools, namely Google Ads and DV360; a publisher ad server, DoubleClick For Publishers (DFP); and an ad exchange, AdX. The Commission investigated and found that Google is dominant in the market for publisher ad servers with its service DFP, and in the market for programmatic ad buying tools for the open web with its services Google Ads and DV360. Both markets are European Economic Area-wide. The Commission found that, between at least 2014 and today, Google abused these dominant positions in breach of Article 102 of the Treaty on the Functioning of the European Union by: favouring its own ad exchange AdX in the ad selection process run by its dominant publisher ad server DFP, for example by informing AdX in advance of the value of the best bid from competitors, which it had to beat to win the auction (an advantage illustrated in the sketch at the end of this article); and favouring its ad exchange AdX in the way its ad buying tools Google Ads and DV360 place bids on ad exchanges. For example, Google Ads was avoiding competing ad exchanges and mainly placing bids on AdX, thus making it the most attractive ad exchange. The Commission concluded that this conduct was aimed at intentionally giving AdX a competitive advantage and may have foreclosed ad exchanges competing with AdX. This has reinforced AdX's central role in the AdTech supply chain as well as Google's ability to charge a high fee for its service. Therefore, the Commission ordered Google to bring these self-preferencing practices to an end. It has also ordered Google to implement measures to cease its inherent conflicts of interest along the AdTech supply chain. Google has 60 days to inform the Commission about the measures it intends to propose to that effect. Once received, the Commission will thoroughly assess them to see if they eliminate the conflicts of interest. Should they not, and subject to Google's right to be heard, the Commission will proceed to impose an appropriate remedy. The Commission has already signaled its preliminary view that only the divestment by Google of part of its services would address the situation of inherent conflicts of interest, but it first wishes to hear and assess Google's proposal. What can we take from these decisions? Although the US DOJ has been relatively lenient on Google with its Order that does not require Google to break up, we can see that the EU Commission is harsher with its €2.95 billion fine and Order to bring the self-preferencing practices to an end and implement measures to cease its inherent conflicts of interest along the AdTech supply chain. What's more, Google has 60 days to come up with a reasonable plan to carry out the remedy, or else the Commission will come up with its own. This demonstrates that there are different approaches in the US and in the EU when it comes to competition law enforcement. It also raises the question of whether Google will be able to learn from past missteps and court decisions when there seems to be a lack of accountability and remorse for its actions.
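The "last look" mechanism described above, in which the publisher ad server tells AdX the best rival bid before AdX places its own, can be illustrated with a short, purely hypothetical simulation. This is not Google's code or the Commission's economic model; the bid values and win rates below are invented solely to show why knowing the bid to beat guarantees a win.

# Illustrative sketch (hypothetical values, not any real ad exchange): why a "last look"
# at rivals' bids lets one exchange win auctions it might otherwise lose.
import random

random.seed(1)

def rival_bids():
    # Three competing exchanges bid somewhere between 0.50 and 2.00 (arbitrary units).
    return [round(random.uniform(0.5, 2.0), 2) for _ in range(3)]

def blind_auction(own_bid, rivals):
    # Everyone bids without seeing the others; highest bid wins.
    return own_bid > max(rivals)

def last_look_auction(rivals, margin=0.01):
    # The favoured exchange sees the best rival bid first and only has to beat it.
    best_rival = max(rivals)
    own_bid = best_rival + margin
    return own_bid > best_rival  # always True by construction

blind_wins = last_look_wins = 0
for _ in range(10_000):
    rivals = rival_bids()
    blind_wins += blind_auction(own_bid=1.25, rivals=rivals)   # a fixed, blind bid
    last_look_wins += last_look_auction(rivals)

print(f"blind bidding win rate: {blind_wins / 10_000:.0%}")
print(f"last-look bidding win rate: {last_look_wins / 10_000:.0%}")

Run over many simulated auctions, the blind bidder wins only when it happens to outbid everyone, while the exchange with a last look wins every time by construction; that structural advantage is the kind of self-preferencing the Commission objected to.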

  • Antitrust: Google has to divest Chrome and Android - and maybe more | voyAIge strategy

    Antitrust: Google has to divest Chrome and Android - and maybe more Google abused its monopoly power and now has to face the music By Christina Catenacci Nov 21, 2024 Key Points Google has been subject to the court’s Executive Summary of the Plaintiffs’ Proposed Remedies following the decision that Google has abused its monopoly power The plaintiffs will have until March 7, 2025 to file its revised proposal for the remedy Google has reacted by calling the court document a “staggering proposal” This case was called the largest tech antitrust case since the US government’s antitrust case against Microsoft . This was a case that everyone was watching. This case had an August, 2024 decision regarding Google’s abuse of monopoly power, and some recent updates in November, 2024 regarding the court’s comments on the DOJ’s recommendations for the remedy. This article delves into recent developments concerning the fate of Google. What was the decision in August? As I wrote previously , the United States District Court for the District of Columbia filed its decision in August, 2024. After reviewing the relevant contracts, the court concluded the following: The DOJ was able to show that Google had monopoly power in the general search services and general search text advertising The DOJ was able to show that Google engaged in exclusionary conduct regarding general search services and general search text advertising—they blocked their rivals from the most effective channels of search distribution, namely out-of-the-box default search settings The court questioned whether the exclusive distribution contracts appeared to significantly contribute to maintaining a Google monopoly. The court responded, “The answer is ‘yes.’” The court declined to impose sanctions on Google for its failure to preserve its employees’ chat messages, but it made a point of saying that it was not condoning Google’s failure to preserve chat evidence As a result of the findings, the court concluded that Google violated section 2 of the Sherman Act , 15 US Code § 2. Section 2 states that: “Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony, and, on conviction thereof, shall be punished by fine not exceeding $100,000,000 if a corporation, or, if any other person, $1,000,000, or by imprisonment not exceeding 10 years, or by both said punishments, in the discretion of the court” The court held that Google violated this provision by maintaining a monopoly in two product markets in the United States: General search services and General text advertising. Google accomplished this through its exclusive distribution Agreements. Consequently, Amit P. Mehta for the court stated that Google was liable. At that time, the court did not say what the remedy would be. People speculated, but no one knew for sure what would happen. We know that the DOJ was asking for sanctions against Google such as putting an end to exclusive agreements Google had with companies like Apple and Samsung, and prohibiting certain kinds of data tracking. The government wrote that it was also considering “behavioral and structural” remedies that would ensure that Google could not use its Chrome browser or Android phone in a way that advantages its search engine, but didn’t outline what the structural remedies would be. But now we know based on recent news that Amit P. 
Mehta for the court has spoken. What did the court decide about remedy? As can be seen from the recent Executive Summary of the Plaintiffs’ Proposed Final Judgement , Amit P. Mehta noted that Google’s ownership and control of Chrome and Android—key methods for the distribution of search engines to consumers—posed a significant challenge to effectuate a remedy that aimed to unfetter these markets from anticompetitive conduct and ensure that there remained no practices likely to result in monopolization in the future. For instance, Android had the issue of being a critical platform on which search competitors relied and for which Google had myriad obvious and not-so-obvious ways to favor its own search products. Moreover, the court stated that, to address these challenges, Google had to divest Chrome, which had fortified its dominance. Furthermore, the court stated that the DOJ’s Initial Proposed Final Judgment (remedy proposals), which were set out at the end of the court’s comments, ensured efficacy, efficiency, and administrability by deploying a Technical Committee to investigate and examine the issues that will invariably arise when dealing with implementing the remedies. The remedy had to restore incentives for innovation and disruptive entry that Google diminished. The court stated, “The remedy must prevent Google from frustrating or circumventing the Court’s Final Judgment by manipulating the development and deployment of new technologies like query-based AI solutions that provide the most likely long-term path for a new generation of search competitors, who will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats” To that end, the court took note of the DOJ’s remedy proposals and commented on them: Stop and prevent exclusionary agreements with third parties . An effective remedy had to prevent Google from entering into contracts that foreclosed or otherwise excluded competing general search engines and potential entrants, including by raising their costs, discouraging their distribution, or depriving them of competitive access to inputs. The proposed remedies were designed to end Google’s unlawful practices and open up the market for rivals and new entrants to emerge Prevent Google from self-preferencing through its ownership and control of search-related products . It was necessary to divest Chrome to safeguard against the possibility of further foreclosure and exclusion of rivals and potential entrants including via self-preferencing. Indeed, the court commented that the Chrome default was a market reality that significantly narrowed the available channels of distribution and thus disincentivized the emergence of new competition. What’s more, following its divestiture of Chrome, Google would not be able to re-enter the browser market for five years, and own or acquire any investment or interest in any search or search text ad rival, search distributor, or rival query-based AI product or ads technology. If that were not enough, it was recommended that Google divest Android to provide further structural relief to prevent Google from improperly leveraging its control of the Android ecosystem to its advantage Prevent Google from stifling or eliminating emerging competitive threats through acquisitions, minority investments, or partnerships . 
An effective remedy had to also ensure that Google could not circumvent the court’s remedy by providing its search products preferential access to related products or services that it owned or controlled, including mobile operating systems such as Android, apps such as YouTube, or AI products such as Gemini, or related data. Moreover, it would be necessary to prevent Google from engaging in conduct that undermined, frustrated, interfered with, or in any way lessened the ability of a user to discover a rival general search engine, limited the competitive capabilities of rivals, or otherwise impeded user discovery of products or services that were competitive threats to Google in the general search services or search text ads markets Disclose data critical to restoring competition . Through its unlawful behaviour, Google accumulated a staggering amount of data over many years, at the expense of its rivals. It would be necessary to make Google’s search index available at marginal cost, and on an ongoing basis, to rivals and potential rivals and provide rivals and potential rivals both user-side and ads data for a period of 10 years, at no cost, on a non-discriminatory basis, and with proper privacy safeguards in place. Google would also have to provide publishers, websites, and content creators with data crawling rights (such as the ability to opt out of having their content crawled for the index or training of large language models or displayed as AI-generated content). Moreover, Google would have to syndicate its search results, ranking signals, and query understanding information for 10 years and syndicate its search text ads for one year Increase transparency and control for advertisers . Google would have to provide advertisers with the information, options, and visibility into the performance and cost of Google Text Ads necessary to optimize their advertising across Google and its rivals. Google would need to include fulsome and necessary real-time performance information about ad performance and costs in its search query reports to advertisers and increase advertiser control by improving keyword matching options to advertisers. Google would also be prohibited from limiting the ability of advertisers to export search text ad data and information for which the advertiser bids on keywords, and Google would need to provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes End Google’s unlawful distribution . A comprehensive and unitary remedy in this case had to undo the effects on search distribution. Google would have to divest Chrome, which would permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users was a gateway to the internet. Also, Google would have to limit its distribution of general search services by contract with third-party devices and search access points and via self-distribution on Google devices and search access points to facilitate competition in the markets for general search services and search text advertising. Additionally, Google would be prohibited from preinstalling any search access point on any new Google device and required to display a choice screen on every new and existing instance of a Google browser where the user had not previously affirmatively selected a default general search engine. 
The choice screens would have to be designed not to preference Google and to be accessible, easy to use, and minimize choice friction. So that users better understand the benefits that Google’s rivals can provide, Google would have to fund a nationwide advertising and education program. The program could include short-term incentive payments to individual users as a further incentive to choosing a non-Google default on a choice screen Allow for the enforcement of the proposed remedies while preventing circumvention . A remedy that prevents and restrains monopoly maintenance would require administration as well as protections against circumvention and retaliation, including through novel paths to preserving dominance in the monopolized markets. Google would have to appoint an internal Compliance Officer and establish a Technical Committee to assist the Plaintiffs and the court in monitoring Google’s compliance Ultimately, the court stated that the DOJ’s proposed remedies reflected extensive efforts to engage with market participants, use formal discovery, and collaborate with experts. As for the court’s plan, the plaintiffs will continue to investigate and evaluate the remedies necessary to restore competition to the affected markets; further, they reserve the right to add, remove, or modify the proposals as needed following further engagement with market participants and additional remedies discovery. They will have until March 7, 2025, a date on which they will need to file a revised proposal. Reactions to the court The next day, Google called the DOJ’s proposals a “ staggering proposal ” that would hurt consumers and America’s global technological leadership. More specifically, Google has stated that this extreme proposal would: Endanger the security and privacy of millions of Americans, and undermine the quality of products people love, by forcing the sale of Chrome and potentially Android Require disclosure to unknown foreign and domestic companies of not just Google’s innovations and results, but even more troublingly, Americans’ personal search queries. Chill our investment in artificial intelligence, perhaps the most important innovation of our time, where Google plays a leading role Hurt innovative services, like Mozilla’s Firefox, whose businesses depend on charging Google for Search placement Deliberately hobble people’s ability to access Google Search Mandate government micromanagement of Google Search and other technologies by appointing a “Technical Committee” with enormous power over your online experience. Lastly, Google stated the following: “DOJ’s approach would result in unprecedented government overreach that would harm American consumers, developers, and small businesses — and jeopardize America’s global economic and technological leadership at precisely the moment it’s needed most. As the Court said, Google offers “the industry’s highest quality search engine, which has earned Google the trust of hundreds of millions of daily users.” We’re still at the early stages of a long process and many of these demands are clearly far afield from what even the Court’s order contemplated. We’ll file our own proposals next month, and will make our broader case next year” In terms of other industry reactions , some liken Chrome to losing a left foot—selling it is valuable to the person, but not very valuable to anyone on its own. Others were very negative and called the court’s decision “dumb”. Some think that Google will create a workaround. 
Others, like DuckDuckGo, are more optimistic that the decision will lower barriers to competition. What happens next? There is no question that Google will be appealing the August 2024 decision along with any decision regarding remedies. It will be hoping that it has more luck with the new administration. It said as much in its reaction to the news. Indeed, the new administration may not take a similar stance against Big Tech going forward, and may reverse such a decision on remedy, similar to what happened in the past with Microsoft.

  • Some of the Main Risks of Creating and Selling a Chatbot | voyAIge strategy

Some of the Main Risks of Creating and Selling a Chatbot How to mitigate those risks before putting them on the market By Christina Catenacci Dec 5, 2024 Key Points As we approach 2025, chatbots are in high demand and businesses are wondering whether they should buy or build their own chatbots, and put them on the market Although there are several risks associated with businesses having an AI chatbot, there are also many ways to mitigate the risks. One example is that we need to accept that chatbots can be wrong (as in the case of hallucinations), so it is necessary to check the AI chatbot for accuracy There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don’t need breaks, chatbots can leverage user and data preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously Chatbots are hot these days, and plenty of businesspeople want to create them and put them on the market as soon as possible. They want to stay competitive. They want to make passive income. They want to ride the AI wave. It is no wonder that 79 percent of the top-performing businesses have already installed some form of conversational marketing tool. Chatbots are in demand, can unlock new revenue streams, and can help to create high returns. The more clients and the more mid-sized businesses that are involved, the higher the monthly revenue can be. In fact, it is possible to search use cases for ready-made chatbots to use in certain industries for certain tasks like lead generation, recruitment, or appointment booking. It is an exciting time to leverage technology to bolster a business’s services. Indeed, according to a 2022 study by ThriveMyWay, 24 percent of enterprises, 15 percent of midsized companies, and 16 percent of small firms utilized chatbot services. There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don’t need breaks, chatbots can leverage user and data preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously. They certainly appear helpful for businesses that wish to deliver enhanced customer experiences. Interested businesses that want to build their own chatbot instead of reselling one are recommended to explore the necessary action steps, including signing up for a chatbot platform, building a demo chatbot for a simple client in a target niche, creating a landing page that shows off the demo bot, reaching out to a small number of prospects, and scheduling consultations. But what are the risks of doing so, and how can we mitigate those risks? Are these risks present even if businesses become chatbot resellers (they buy a ready-made chatbot and then resell it to their clients) instead of builders (they make a bot and sell it to businesses)? Risks Some of the main risks include: Security and data leakage: if sensitive third-party or internal company information is entered into a chatbot, it becomes part of the chatbot’s data model and may be shared with others who ask relevant questions. This could lead to data leakage and violate an organization’s security policies Hallucinations: if there is an inquiry, it is possible that the AI’s answer to that question could be a hallucination. What is this?
Simply put, it is when the chatbot makes up stuff (including citations/references). The chatbot could go rogue: if there is a lack of human feedback, or poorly trained systems, the chatbots could provide unexpected, incorrect, or even harmful outputs Disinformation: if chatbots are making it easy for bad actors to create false information at a mass scale—cheaply and rapidly—a business could face reputational risks, legal risks, and other damage too. There is more: the same AI chatbot could teach another AI chatbot to spread even more harmful disinformation Bias and Discrimination: if bias is created because of the biased nature of the data on which AI tools are trained, or if users purposefully manipulate AI systems and chatbots to produce unflattering or prejudiced outputs, there could be problematic consequences. Worse, when decisions are made based on the biased information, a considerable risk of discriminatory decisions could occur Intellectual Property: if the AI system is trained on enormous amounts of data (including protected data like copyrighted data), the business that uses the data through the chatbot could be violating another business’s intellectual property and could end up on the receiving end of an infringement action Privacy and Confidentiality: if the AI system is trained on or is fed any sensitive information about a person, the business could be violating a person’s privacy. Similar to the Intellectual Property issue, the business could face privacy complaints or actions The chatbot could be insensitive and lack empathy: if the chatbot responds to client questions in an atypical or even an emotionally unintelligent manner, there could be some resulting issues including reputational harms or various actions Mitigating the risks Here are some mitigation strategies: Be cautious and acknowledge the risks before acting Create policies and procedures that outline for employees what is acceptable and unacceptable use of AI in the workplace Accept that chatbots can be wrong, and check the references If you are a builder of a chatbot and you are selling it, use contractual provisions to limit liability Use transparency when dealing with AI use and communicating with clients and employees Review AI outputs and check for bias and discriminatory impacts Create plans to address AI-powered disinformation
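To make one of these mitigations concrete: the data-leakage risk described above is often reduced by screening prompts before they ever reach a chatbot. The following is a minimal, illustrative sketch of such a pre-submission filter in Python; the regular expressions are simple assumptions for demonstration, not a complete data-loss-prevention solution, and real policies would cover far more categories of sensitive information.

# Minimal, illustrative pre-submission guard (assumed patterns, not an exhaustive DLP tool):
# strip obvious personal identifiers before a prompt is sent to any third-party chatbot.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{3}\)?[\s.-]?)\d{3}[\s.-]?\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    # Replace each match with a labelled placeholder so the prompt stays readable.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Customer Jane Doe (jane.doe@example.com, 613-555-0199) is asking about invoice 4412."
    print(redact(raw))
    # -> Customer Jane Doe ([REDACTED EMAIL], [REDACTED PHONE]) is asking about invoice 4412.

A filter like this does not replace the policies and procedures recommended above; it simply gives employees a safer default while those policies are being written, communicated, and enforced.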

  • Deep Fakes, Elections, and Safeguarding Democracy | voyAIge strategy

Deep Fakes, Elections, and Safeguarding Democracy Understanding and Preventing the Threat of Deep Fakes in Elections By Tommy Cooke Oct 11, 2024 Key Points: Deep Fakes use AI to create hyper-realistic fake media, posing serious risks to elections They can manipulate voter perceptions and erode trust in democratic institutions Organizations and governments must invest in detection tools, media literacy, and rapid response protocols to combat misinformation As the 2024 US election approaches, the integrity of democratic voting processes is under threat from Deep Fakes. In January 2024, a robocall impersonating Joe Biden spread misinformation about the election process. It demonstrated how easily AI can be used to manipulate voters. The incident highlights the urgency with which governments, organizations, and citizens need to understand Deep Fakes – and undertake proactive measures to combat them. What Are Deep Fakes? Deep Fakes are AI-generated content, usually in the form of images, videos, or audio recordings. They are designed to closely replicate the appearance, voice, and mannerisms of real people. Deep Fakes are often generated through a class of machine learning models called Generative Adversarial Networks (GANs), in which two neural networks compete with one another. One neural network (the generator) creates a fake image, recording, or video. The other (the discriminator) tries to spot whether the content is fake. The two networks continue competing with one another, improving over time until the generator produces fake content that the discriminator finds difficult to distinguish from real footage (a minimal illustrative sketch of this generator/discriminator loop appears at the end of this article). The process often results in uncanny resemblances that are hyper-realistic, misleading the viewer into witnessing and believing a statement or action that never factually occurred. The ongoing proliferation of Deep Fakes raises considerable ethical and security concerns, particularly during elections. Why Deep Fakes Threaten Democracy Deep Fakes are not merely technological curiosities. They can be a powerful weapon for misinformation, designed to: Manipulate Public Perception: Deep Fakes falsely portray political figures making inflammatory statements or engaging in unethical behavior, leading to confusion while eroding voter trust. The January 2024 robocall that mimicked Joe Biden is a clear example. Erode Trust in Democratic Institutions: As more Deep Fakes are produced, they generate increasingly higher levels of suspicion and confusion around what is real and what is not. The atmosphere of doubt and uncertainty that Deep Fakes create thus undermines the credibility of both political candidates and the electoral process. In an age where misinformation already spreads rapidly on social media platforms, Deep Fakes are particularly lethal in their ability to sow distrust in an otherwise informed and engaged electorate. Distort Voter Behavior: When Deep Fakes are curated for specific audiences (e.g., traditionally Democratic voters), adversaries of the democratic process specifically manipulate certain voters into questioning the validity of the voting process or even the value of their own vote. This targeted approach can significantly alter or shift voter behavior in ways that are detrimental to both political parties. The 2024 Election: A New Era for Deep Fakes As we head into the 2024 United States presidential election in November, the threat of Deep Fakes is unprecedentedly high.
In today’s polarized political climate, Deep Fakes have the potential to escalate tensions by spreading false narratives that align with partisan biases. The current political climate is divisive, and Deep Fakes threaten to make divisions deeper and wider. It is imperative that governments and organizations adhere to best practices to combat Deep Fakes: Monitoring and Detection Governments and organizations can invest in advanced AI tools capable of detecting Deep Fakes in real time. Detection algorithms can reverse engineer how Deep Fakes are created to flag suspicious content before it gains widespread traction. Media Literacy One of the most effective ways to mitigate the impact of Deep Fakes is through public education. Media literacy campaigns equip voters with the skills they need to critically evaluate the content they encounter. Rapid Response Governments and election bodies must have clear strategies in place for when Deep Fakes emerge. Rapid response protocols can include issuing statements to correct misinformation, collaborating with tech companies to remove malicious content, and engaging the public through verified communication channels. Cross-Sector Collaboration Governments, tech companies, and media organizations should work together to create transparent systems that verify the authenticity of election-related media. Public-private partnerships can help develop common standards for content verification and share expertise in detecting Deep Fakes. Proactive Measures for Future Elections As Deep Fakes become more commonplace on social media platforms, organizations, governments, and citizens need to recognize the urgency of this issue – and act proactively: Seek Independent Fact-Checking Organizations: Governments should work closely with fact-checking bodies to ensure swift and accurate debunking of manipulated content. Establish Digital Forensics Units: Election bodies and governments can create teams focused on monitoring and analyzing digital content for manipulation. These units can serve as first responders when Deep Fakes are detected, in turn assisting with coordinating responses. Promote Research and Development: Supporting innovation in the Deep Fake detection and prevention space is crucial. Governments, organizations, and citizens can stay ahead of emerging threats by investing in tools that protect the democratic process. The risks to democratic integrity are real. Without concerted efforts, the impact of Deep Fakes can reshape the political landscape in exceptionally harmful ways. By investing in monitoring systems, educating the public, and establishing rapid response protocols, we can mitigate these risks and protect the foundations of our democratic processes.
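As a companion to the GAN description in the "What Are Deep Fakes?" section above, here is a minimal, illustrative generator/discriminator training loop on toy one-dimensional data. It assumes PyTorch is installed; it is a conceptual sketch of the adversarial competition, not a deep-fake generator, and the network sizes, learning rates, and data distribution are arbitrary choices for illustration.

# Minimal, illustrative GAN training loop on toy 1-D data (assumes PyTorch).
# A conceptual sketch of the generator/discriminator competition, not a deep-fake system.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples from a normal distribution centred at 4.0
    return torch.randn(n, 1) * 0.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(256, 8)).mean().item())

After a couple of thousand alternating updates, the generator's samples drift toward the "real" distribution because the only signal it receives is whether the discriminator was fooled; the same dynamic, scaled up to images and audio, is what makes Deep Fakes so convincing.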
