
  • AI in Health Care | voyAIge strategy

AI in Health Care
Some Mitigation Strategies, Use Cases, and 2025 Predictions
Christina Catenacci, Human Writer and Editor
Dec 13, 2024

Key Points
- This is an exciting time for using AI in the medical field
- Both the Canadian and the American Medical Associations have provided guiding principles for the use of AI by physicians
- Some of the main use cases in the medical field involve research, medical education, administrative assistance for medical professionals, diagnosis, treatment, monitoring, and more
- The use cases that are predicted to be especially useful in 2025 are striking

AI is becoming pervasive in medicine, and many in the health care field predict that it will continue to proliferate in this realm well into the future.

Input from the Canadian and American Medical Associations

The Canadian Medical Association (CMA) notes that the rapid evolution of AI technologies is expected to improve health care and change the way it is delivered. In fact, the CMA states that AI is being explored, along with other tools, as a means of increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care. There has been promise for the following:
- clinical application in image-intensive fields, including radiology, pathology, ophthalmology, dermatology, and image-guided surgery
- broader public health purposes, such as disease surveillance

Interestingly, Health Canada has already approved several AI applications, but it is worth noting that the CMA advises doctors that: "Before deciding to use an AI-based technology in your medical practice, it is important to evaluate any findings, recommendations, or diagnoses suggested by the tool. Most AI applications are designed to be clinical aids used by clinicians as appropriate to complement other relevant and reliable clinical information and tools. Medical care provided to the patient should continue to reflect your own recommendations based on objective evidence and sound medical judgment"

Moreover, the CMA stresses that physicians do the following:
- Ensure that AI is used to complement clinical care. Medical care should reflect doctors' own recommendations based on objective evidence and sound medical judgment
- Critically review and assess whether the AI tool is suited for its intended use and the nature of your practice
- Consider the measures that are in place to ensure the AI tool's continued effectiveness and reliability
- Be mindful of legal and medical professional obligations, including privacy, confidentiality, and how patient data is transferred, stored, and used (and whether reasonable safeguards are in place)
- Be aware of bias and try to mitigate it as much as possible
- Have regard to the best interests of the patient

The American Medical Association (AMA) similarly recognizes the immense potential of AI in health care for enhancing diagnostic accuracy, treatment outcomes, and patient care; simultaneously, it appreciates that there are ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI. To that end, the AMA created principles that call for comprehensive policies that mitigate risks to patients and physicians, ensuring that the benefits of AI in health care are maximized while potential harms are minimized.
These key principles include:
- Oversight: the AMA encourages a whole-of-government approach to implement governance policies that mitigate risks associated with health care AI, but also acknowledges that non-government entities have a role in appropriate oversight and governance of health care AI
- Transparency: the AMA emphasizes that transparency is essential for the use of AI in health care to establish trust among patients and physicians. Key characteristics and information regarding the design, development, and deployment processes should be mandated by law where possible, including potential sources of inequity in problem formulation, inputs, and implementation
- Disclosure and Documentation: the AMA calls for appropriate disclosure and documentation when AI directly impacts patient care, access to care, medical decision making, communications, or the medical record
- Generative AI: to manage risk, the AMA calls on health care organizations to develop and adopt appropriate policies that anticipate and minimize negative impacts associated with generative AI. Governance policies should be in place prior to its adoption and use
- Privacy and Security: built upon the AMA's Privacy Principles, the AMA prioritizes robust measures to protect patient privacy and data security. AI developers have a responsibility to design their systems from the ground up with privacy in mind. Developers and health care organizations must implement safeguards to instill confidence in patients that personal information is handled responsibly. Strengthening AI systems against cybersecurity threats is crucial to their reliability, resiliency, and safety
- Bias Mitigation: to promote equitable health care outcomes, the AMA advocates for the proactive identification and mitigation of bias in AI algorithms so that the health care system is fair, inclusive, and free from discrimination
- Liability: the AMA will continue to advocate to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current legal approaches to medical liability

Furthermore, the AMA principles address when payors use AI and algorithm-based decision-making to determine coverage limits, make claim determinations, and engage in benefit design. The AMA urges that payors' use of automated decision-making systems not reduce access to needed care, nor systematically withhold care from specific groups. It states that steps should be taken to ensure that these systems do not override clinical judgement and do not eliminate human review of individual circumstances. There should be stronger regulatory oversight, transparency, and audits when payors use these systems for coverage, claim determinations, and benefit design. Another thing to consider is that the AMA has released Principles for Augmented Intelligence Development, Deployment, and Use, which provides explanatory information that elaborates on the above principles.

Examples of Use Cases

There are several examples of AI use in health care. Here are some of the main ones we came across:
- Early warning systems: this type of AI tool has reduced unexpected deaths in hospital by 26 percent. An AI-based early warning system flagged incoming results showing that a patient's white blood cell count was very high and caught an instance of cellulitis (a bacterial skin infection that can cause extensive tissue damage). Another example has been seen in detecting instances of breast cancer—AI is becoming a member of the medical team. A toy illustration of this kind of threshold check appears at the end of this article
- Optimizing chemotherapy treatment plans and monitoring treatment response: oncologists rely on imprecise methods to design chemotherapy regimens, leading to suboptimal medication choices. AI models that assess clinical data, genomic biomarkers, and population outcomes help determine optimal treatment plans for patients. Also, cancer treatment plans require frequent adjustment, but quantifying how patients respond to interventions remains challenging. AI imaging algorithms track meaningful changes in tumors over the course of therapy to determine next steps
- Robotic surgery: AI is enabling surgical robots to perform complex operations with greater precision and control, resulting in reduced recovery times, fewer complications, and better patient outcomes. These AI systems are used for minimally invasive surgeries as well
- Medical research and training: AI is being used for new and repurposed drug discovery and clinical trials. Additionally, medical students are receiving some feedback from AI tutors as they learn to remove brain tumors and practice skills on AI-based virtual patients
- Improving precision for Computed Tomography (CT) and Magnetic Resonance (MR) image reconstruction: radiology departments are not being replaced—they are being improved through greater precision and speed. AI-enabled camera technology can automatically detect anatomical landmarks in a patient to enable fast, accurate, and consistent patient positioning. Also, AI-enabled image reconstruction can help to reduce radiation dose and improve CT image quality, thereby supporting diagnostic confidence. This all helps radiologists read images faster and more accurately. For instance, AI assessed Alzheimer's disease brain atrophy rates with 99% accuracy using longitudinal MRI scans
- Precision oncology: AI allows for the development of highly personalized treatment plans based on a patient's individual health data, including their genetics, lifestyle, and treatment history
- Remote medicine: with wearable devices and mobile health applications, AI can continuously monitor patients remotely. The data collected is analyzed in real time to provide updates on the patient's condition, making it easier for healthcare providers to intervene early if something goes wrong
- Administration, professional support, and patient engagement: AI can help professionals identify and reduce fraud, receive necessary supports, and support patients

What is in Store for 2025?

Indeed, it is an exciting time for medical professionals. AI is fundamentally reimagining our approach to human health. Here are some AI trends that are expected to dominate the medical field in 2025:
- Predictive healthcare: machine learning algorithms now analyze complex datasets from genetic profiles, wearable devices, electronic health records, and environmental factors to create comprehensive health risk assessments. There are platforms that can predict disease onset and recommend preventative interventions and treatment plans
- Advanced precision medicine and genomic engineering: driven by remarkable advances in genomic engineering and CRISPR technologies, this is becoming standard practice. The ability to precisely edit genetic codes has opened up revolutionary treatment possibilities for previously untreatable genetic disorders, including correcting genetic mutations, developing targeted therapies, and making customized treatment plans
- Immersive telemedicine and extended reality healthcare: extended reality (XR) technologies, including augmented reality (AR) and virtual reality (VR), have transformed remote medical consultations and patient care. Surgeons can now perform complex procedures using haptic feedback robotic systems controlled remotely, while patients can receive comprehensive medical consultations through hyper-realistic virtual environments. This is important when dealing with patients in rural areas and underserved regions
- Internet of medical things and continuous health monitoring: this has matured into a robust, interconnected ecosystem of smart medical devices that provide continuous, non-invasive health monitoring. Wearable and implantable devices now offer real-time, comprehensive health insights that go far beyond simple fitness tracking. It is important for monitoring, detecting, and transmitting data to healthcare providers
- Sustainable and regenerative biotechnologies: some of these technologies include biodegradable medical implants that naturally integrate with human tissue; regenerative therapies that can repair or replace damaged organs; sustainable production of medical treatments with minimal environmental impact; and bioengineered solutions for addressing climate-related health challenges
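To make the early-warning use case above a little more concrete, here is a minimal, purely illustrative Python sketch of a threshold-based flag. The feature names, reference limits, and scoring weights are hypothetical assumptions, not drawn from any of the systems mentioned in this article; real early-warning tools are validated clinical models, and their output feeds into clinician review rather than automated decisions, in line with the CMA and AMA guidance above.

# Minimal sketch (not a clinical tool): a toy early-warning check of the kind
# described above. All thresholds, feature names, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class LabResult:
    wbc_k_per_ul: float      # white blood cell count, thousands per microlitre
    temperature_c: float
    heart_rate_bpm: float

def early_warning_score(lab: LabResult) -> int:
    """Return a crude additive score; a higher score means escalate sooner."""
    score = 0
    if lab.wbc_k_per_ul > 11.0:    # hypothetical upper reference limit
        score += 2
    if lab.temperature_c >= 38.0:
        score += 1
    if lab.heart_rate_bpm > 100:
        score += 1
    return score

def should_flag(lab: LabResult, threshold: int = 3) -> bool:
    """Flag the patient for review when the score meets the threshold."""
    return early_warning_score(lab) >= threshold

if __name__ == "__main__":
    incoming = LabResult(wbc_k_per_ul=18.4, temperature_c=38.6, heart_rate_bpm=112)
    if should_flag(incoming):
        print("Early-warning flag raised: notify the care team for review.")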

  • DeepSeek in Focus | voyAIge strategy

DeepSeek in Focus
What Leaders Need to Know
By Tommy Cooke, fueled by caffeine and curiosity
Feb 14, 2025

Key Points:
- DeepSeek is a major disruptor in the AI market, rapidly gaining adoption due to its affordability and open-source appeal
- Despite being open-source, DeepSeek's data is stored in China, raising security, compliance, and censorship concerns
- Organizations must weigh the benefits of open-source AI against the risks of data privacy, geopolitical scrutiny, and regulatory uncertainty

In just over a year, DeepSeek has gone from an emerging AI model to leaving an everlasting imprint on the global AI market. Developed in China as an open-source large language model (LLM), it is rapidly gaining attention. In fact, as of January 2025 it has overtaken ChatGPT as the most downloaded free app on Apple iPhones in the U.S. DeepSeek's meteoric rise signals a shift in AI adoption trends and the AI industry itself, and that warrants awareness and conversation among organization leaders; as people gravitate toward alternative AI models outside the traditional Western ecosystem, it is important to understand the what, why, and how of this recent AI phenomenon. As of February 2025, it is critically important to ensure that you are prepared to respond to DeepSeek in your organization. Leaders must accept the likelihood that DeepSeek is already being used by their workforce for work purposes.

DeepSeek is a startup based in Hangzhou city, China. It was founded in 2023 by Liang Wenfeng, an entrepreneur who also founded the $7bn USD hedge fund group High-Flyer in 2016. In January 2025, DeepSeek released its latest AI model, DeepSeek R1. It is a free AI chatbot that looks, feels, sounds, and responds very similarly to ChatGPT. Unlike proprietary AI models developed in the West, like ChatGPT, Claude, and Gemini, DeepSeek is freely available for organizations to customize and use at will. Part of the reason it is making waves is not only how quickly and easily it can be adopted and used, but also that it is significantly cheaper to build than its competitors' designs. While the exact figures are currently being debated, there is general agreement that OpenAI - the company that owns, produced, and maintains ChatGPT - spent at least two to three times more to train its AI models.

This point is very important to understand because it explains a lot about economic fallout, the balance of global AI development, market disruption, as well as accessibility and control. The implications stretch beyond cost alone. They affect how organizations plan AI adoption, determine their budgets, and structure their technology ecosystems. If AI models can be produced at a fraction of the cost of the development norm while maintaining competitive performance, organizations must consider how this changes their long-term investment in AI. Are proprietary solutions worth the price if open-source alternatives are rapidly closing the gap? As importantly, what are the hidden risks and trade-offs that come with choosing a model purely on affordability?

Security & Compliance Concerns with DeepSeek

DeepSeek's rapid rise comes with critical questions for organizations, especially regarding security, governance, and compliance. First, DeepSeek was developed in China, and that is where its data is stored as well. The Western world is thus concerned about how data are processed, who has access to them, and whether companies using DeepSeek are exposing themselves to regulatory or cybersecurity risks.
For organizations bound by stringent data privacy regulations, this is likely a major red flag. Secondly, DeepSeek is receiving considerable criticism for its censorship policies. It will not discuss certain political topics, and it was trained on filtered datasets. This impacts the reliability of its responses and raises concerns about bias in AI-generated content. This alone, at least in part, explains why South Korea, Australia, and Taiwan have banned it. Third, today's turbulent geopolitical climate means that Western governments are increasingly wary of foreign influence. AI is no exception. DeepSeek is being closely monitored by governments and organizations around the globe, which are asking whether the company and its AI should be restricted or even outright banned. Organizations looking for a cost-effective entry to powerful AI are certainly attracted to and interested in DeepSeek - and they are considering the long-term viability and potential implications of adopting a tool in the face of regulatory and political scrutiny.

Is Your Staff Using DeepSeek? Guidance for Leaders

Given the incredible rate at which AI is being installed on the personal devices of your employees - with DeepSeek clearly being no exception - there are things we feel strongly that you should consider:
- Audit your AI Usage. Find out who in your company is using chatbots, especially DeepSeek - and how. Are employees feeding sensitive data into the model? Have they uploaded business plans, client data, or personal information of their patients or coworkers? Do they understand the risks? (A rough illustrative sketch of one way to begin this audit appears at the end of this article.)
- Assess Risk. What do your technology and AI use policies say? Do you have them yet? Has your organization established clear policies and procedures on AI tools that store data outside your legal jurisdiction? Ask yourself, would using DeepSeek put your organization at risk of legal noncompliance or even reputational harm? Who are your external stakeholders and investors? It's critical that you start thinking about their perceptions, expectations, and needs.
- Engage and Communicate. One of our clients recently told us that an executive in their organization instructed their respective staff to freely use AI chatbots at will - without discussing the decision and announcement with legal counsel. As you might imagine, this raised many concerns about understanding and mitigating AI-related risks. If you have not done so already, now is the time to articulate clearly your organization's stance on AI to employees, stakeholders, and partners. Organization leaders need to be strategizing not only about how they communicate to staff about AI, but also about communication along the lines of organizational engagement. What are your employees thinking about AI, truly? Do they silently use it to keep up with the creative or time-consuming demands of their jobs? Are they afraid that you will find out and that they will be punished? Do they feel supported by you, and are they willing to provide honest feedback?

How Open is Open-source AI?

Industry observers are debating how and whether DeepSeek's biggest strength is also its biggest risk: it is open-source. What that means is that companies can see, download, and edit the AI's code. This opens interesting and valuable doors for many users and organizations. For example, openly readable code means that it is openly verifiable and openly scrutinized.
If something in the code can be deemed a fatal flaw, a security concern, or a path toward bias and harmful inference or decision-making, it can be detected more easily because the global community of talented volunteer programmers and engineers can find and address any such issues. In theory, this means that managing security, compliance, and governance yields more flexible and transparent control. With a proprietary AI vendor, by contrast, the code is not disclosed or opened to public scrutiny; if something goes wrong, it is often your problem to address without that visibility.

On the other hand, industry observers are also questioning how "open" DeepSeek truly is. By conventional understandings, open-source means that code is openly available for anyone to inspect, modify, and use. However, when it comes to AI, it is much more than code. AI must be trained, and training requires data. DeepSeek does not provide full transparency on what data it was trained on, nor has it been entirely forthcoming about the details of its training process. These points are important because they are forcing organizations and governments to question DeepSeek's transparency and trustworthiness. As an organizational leader, you need to ask yourself: is open-source AI a strategic advantage or a risk? Who controls the AI for your organization? You, or vendors outside of your jurisdiction?

DeepSeek is more than just another AI model. It's a disruptor in the AI industry. Many regard marketplace disruption as a positive - something that challenges norms, standards, and best-in-class models. However, there is much more that is potentially disrupted here. Those disruptions are not merely global, economic, and political; they reach into your organization. Leaders must recognize that AI strategy is no longer just about choosing the most powerful model. It's about choosing the right balance between control, risk, and innovation.
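As a rough starting point for the usage audit suggested earlier in this article, the sketch below scans an exported network or proxy log for hits on well-known AI chatbot domains. The file name, log format, and domain list are assumptions chosen for illustration; a real audit would rely on your own tooling, retention policies, and legal and privacy review.

# Illustrative only: count log lines that mention well-known AI chatbot domains.
# "proxy_export.log" is a hypothetical export; adapt to whatever logs you lawfully hold.
from collections import Counter

AI_CHATBOT_DOMAINS = {
    "chat.deepseek.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def count_chatbot_hits(log_path: str) -> Counter:
    """Count occurrences of each known chatbot domain, assuming one log entry per line."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            for domain in AI_CHATBOT_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_chatbot_hits("proxy_export.log").most_common():
        print(f"{domain}: {count} requests")

A tally like this only shows that traffic exists; it says nothing about what data was shared, which is why the policy and communication steps above matter just as much.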

  • EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA) | voyAIge strategy

EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA)
The fines were huge—Apple was fined €500 million, and Meta was fined €200 million
By Christina Catenacci, human writer
May 9, 2025

Key Points:
- Apple and Meta were fined by the EU Commission for violating the DMA—Apple was fined €500 million, and Meta was fined €200 million
- The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy
- Noncompliance with the DMA carries serious consequences in the form of fines, penalties, and additional fines in the case of continued noncompliance

On April 22, 2025, the EU Commission announced that Apple breached its anti-steering obligation under the DMA, and that Meta breached the DMA obligation to give consumers the choice of a service that uses less of their personal data. As a result, the Commission fined Apple €500 million and Meta €200 million. But what is the DMA? What were these obligations that Apple and Meta violated? Why were the fines so high? Does this affect businesses in Canada or the United States? This article answers these questions.

What is the DMA?

The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy. The main goal is to regulate large online platforms, called gatekeepers (big companies like Apple, Meta, or Google), so that these large companies do not abuse their market power. Essentially, the purpose of the DMA is to make the markets in the digital sector fairer and more contestable (a contestable market is one that is fairly easy for new companies to enter). In other words, the market is more competitive thanks to the DMA.

More specifically, gatekeepers have to comply with the do's (i.e. obligations) and don'ts (i.e. prohibitions) listed in the DMA. For example, gatekeepers have to:
- allow third parties to inter-operate with the gatekeeper's own services in certain specific situations;
- allow their business users to access the data that they generate in their use of the gatekeeper's platform;
- provide companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements hosted by the gatekeeper; and
- allow their business users to promote their offer and conclude contracts with their customers outside the gatekeeper's platform.

Also, gatekeepers must not:
- treat services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform;
- prevent consumers from linking up to businesses outside their platforms;
- prevent users from uninstalling any pre-installed software or app if they wish to do so; and
- track end users outside of the gatekeepers' core platform service for the purpose of targeted advertising, without effective consent having been granted.

As a result of the DMA, consumers have more choice of digital services and can install preferred apps (with choice screens), gain more control over their personal data (users decide whether the companies can use their data), can port their data easily to the platform of their choice, have streamlined access, and have unbiased search results.

As we have just seen, the consequences of noncompliance can be quite costly. In particular, there can be fines of up to 10 percent of the company's total worldwide annual turnover, or up to 20 percent in the event of repeated infringements. Moreover, there could be periodic penalty payments of up to five percent of the average daily turnover. Furthermore, in the case of systematic infringements by gatekeepers, additional remedies may be imposed on the gatekeepers after a market investigation (these remedies have to be proportionate to the offence committed). And if necessary as a last resort option, non-financial remedies can be imposed, including behavioural and structural remedies like divestiture of (parts of) a business.
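To put the penalty ceilings just described into rough numbers, here is a small, purely illustrative Python sketch. The turnover figure is hypothetical, and actual fines are set case by case with regard to gravity and duration; this is simple arithmetic on the stated ceilings, not legal guidance.

# Illustrative arithmetic for the DMA penalty ceilings described above.
def dma_fine_ceiling(worldwide_annual_turnover_eur: float,
                     repeated_infringement: bool = False) -> float:
    """Upper bound on a DMA fine: 10% of worldwide annual turnover, or 20% for repeat infringements."""
    rate = 0.20 if repeated_infringement else 0.10
    return rate * worldwide_annual_turnover_eur

def periodic_penalty_ceiling(average_daily_turnover_eur: float) -> float:
    """Upper bound on a periodic penalty payment: 5% of average daily turnover."""
    return 0.05 * average_daily_turnover_eur

if __name__ == "__main__":
    turnover = 100_000_000_000  # hypothetical €100 billion in worldwide annual turnover
    print(f"First-infringement fine ceiling: €{dma_fine_ceiling(turnover):,.0f}")
    print(f"Repeat-infringement fine ceiling: €{dma_fine_ceiling(turnover, repeated_infringement=True):,.0f}")
    print(f"Daily periodic penalty ceiling: €{periodic_penalty_ceiling(turnover / 365):,.0f}")

For a company of that hypothetical size, the ceilings would be €10 billion for a first infringement and €20 billion for repeated infringements, which helps explain why the €500 million and €200 million fines, while large, sit well below the maximum.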
In Canada, we have the Competition Act; similarly, the United States has antitrust laws such as the Sherman Antitrust Act. For example, in Canada, there was a recent court action brought by the Competition Bureau against Google for abusing a monopoly in search. Likewise, there was an antitrust action brought against Meta by the Antitrust Division of the Department of Justice in the United States regarding its acquisition of Instagram and WhatsApp.

I'd be remiss not to mention that Canadian and American businesses could be subject to the DMA in certain circumstances. This is because the DMA applies to core platform services provided or offered by gatekeepers to business users established in the EU or end users established or located in the EU, irrespective of the place of establishment or residence of the gatekeepers and irrespective of the law otherwise applicable to the provision of service. What this means is that, regardless of the location or residence of gatekeepers, if they offer their services to users in the EU, they are subject to the DMA. This requirement can be found in Article 1 of the DMA.

Why was Apple fined €500 million?

Under the DMA, app developers distributing their apps on Apple's App Store should be able to inform customers (free of charge) of alternative offers outside the App Store, steer them to those offers, and allow them to make purchases. However, Apple does not allow this. Due to a number of restrictions imposed by Apple, app developers cannot fully benefit from the advantages of alternative distribution channels outside the App Store. Similarly, consumers cannot fully benefit from alternative and cheaper offers since Apple prevents app developers from directly informing consumers about such offers. The company has failed to demonstrate that these restrictions are objectively necessary and proportionate. Therefore, the Commission has ordered Apple to remove the technical and commercial restrictions on steering, and to refrain from perpetuating the non-compliant conduct in the future, which includes adopting conduct with an equivalent object or effect. When imposing the €500 million fine, the Commission took into account the gravity and duration of the non-compliance. At this point, the Commission has closed the investigation on Apple's user choice obligations, thanks to early and proactive engagement by Apple on a compliance solution.

And why was Meta fined €200 million?

Under the DMA, gatekeepers must seek users' consent for combining their personal data between services. Users who do not consent must have access to a less personalised but equivalent alternative. But Meta did not do this. Instead, it introduced a binary 'Consent or Pay' advertising model. Under this model, EU users of Facebook and Instagram had a choice between consenting to personal data combination for personalised advertising, or paying a monthly subscription for an ad-free service.
As a result, the Commission found that Meta's model was not compliant with the DMA, because it did not give users the required specific choice to opt for a service that used less of their personal data but was otherwise equivalent to the 'personalised ads' service. Meta's model also did not allow users to exercise their right to freely consent to the combination of their personal data. Subsequently (after numerous exchanges with the Commission), Meta introduced another version of the free personalised ads model, offering a new option that allegedly uses less personal data to display advertisements. The Commission is currently assessing this new option and continues its dialogue with Meta. The Commission is requesting that the company provide evidence of the impact that this new ads model has in practice. To that end, the non-compliance decision covers the time period during which users in the EU were only offered the binary 'Consent or Pay' option, between March 2024 (when the DMA obligations became legally binding) and November 2024 (when Meta's new ads model was introduced). When imposing the fine, the Commission took into account the gravity and duration of the non-compliance.

What's more, the Commission also found that Meta's online intermediation service, Facebook Marketplace, should no longer be designated under the DMA, mostly because Marketplace had fewer than 10,000 business users in 2024. Meta therefore no longer met the threshold giving rise to a presumption that Marketplace was an important gateway for business users to reach end users.

What can we take from this development?

It is important to note that these decisions against Apple and Meta are the first noncompliance decisions adopted under the DMA. Both Apple and Meta are required to comply with the Commission's decisions within 60 days, or else they risk periodic penalty payments. It is clear that the DMA is a serious regulation—businesses that offer products and services to consumers in the EU need to be aware of this and act accordingly if they want to avoid serious fines and penalties. In like manner, businesses that fall under the DMA will need to be aware that fines and penalties continue and worsen over time if the noncompliance continues. Businesses that are subject to domestic competition/antitrust legislation in Canada and the United States should also note that the consequences, albeit less severe than under the DMA, are grave where businesses abuse their monopoly power and ignore regulators.

Why is competition so important?

The goal of these laws is to protect the competitiveness of markets and to protect consumers by ensuring that they have choice and are not subject to pressure by companies that abuse monopoly power. Take a look at an article that I wrote about antitrust woes here.

Indeed, some companies are watching what is happening to Apple and Meta, and are responding in a positive, proactive, and cooperative manner—for instance, Microsoft President Brad Smith has announced a landmark set of digital commitments aimed at strengthening the company's relationship with Europe, expanding its cloud and AI infrastructure, and reinforcing its respect for European laws and values. Likely attempting to learn from past antitrust mistakes (think about the antitrust case back in the late 90s), Brad Smith stated: "We respect European values, comply with European laws, and actively defend Europe's cybersecurity.
Our support for Europe has always been–and always will be–steadfast."

  • The Canadian Cyber Security Job Market is Far From NICE | voyAIge strategy

The Canadian Cyber Security Job Market is Far From NICE
Main challenges and what to do about them
By Matt Milne
Jul 25, 2025

Key Points
- The cyber security system is broken, to the point that some may assert that cyber security degrees are "useless"
- One of the main reasons for the broken system is that organizations are not investing in new talent and training, and AI adds further complications
- Some proposals for rectifying the situation are: eliminate the experience gap through mandatory training investment; mandate industry-education-government coordination for work placements; and strengthen government regulation and skills-job alignment review

At this point, we can all agree that cyber security has a serious problem, and it's not Advanced Persistent Threats or quantum computing; it's the HR firewall rule set that denies access without experience, and poor government policy and automation that exacerbate an already broken system.

The job market in Canada is challenging, which is not particularly significant news to recent graduates, long-time job seekers, those over the age of 45, and those who have recently become unemployed. The job market competition in Canada is fierce. This is particularly true for 15 to 19-year-olds, who are now at a 22 percent unemployment rate. Due to a variety of factors, one could conclude that education in Canada either exists as a pretext to scam people or is itself a scam. These days, some might say that a Master's degree in Canada is helpful if one wants to pursue origami or needs some kindling to start a small fire. This is not entirely the fault of Applicant Tracking Systems (ATS) (software that helps companies manage the recruitment and hiring process), biased recruiters, or the infamous catch-22 of needing experience to get initial experience. As I mentioned in a previous article, and as the 2024 ISC2 Cybersecurity Workforce Study shows, budget cuts are the most significant reason why new cyber security talent is not being hired or trained.

Why Are Cyber Security Degrees "Useless"?

Yes, some may deduce that degrees are useless, but not in the way your tough, long-disillusioned older relative warned you about. Of course, dance theory, art, or sociology don't mesh with the brutal demands of the late-stage neoliberal job market. However, the truth is that while STEM degrees on average pay better than humanities degrees, a quick observation of Statistics Canada's Interactive Labour Market Tool reveals that the data is from 2018 and shouldn't be considered relevant due to the unprecedented disruptions to labour markets caused by the pandemic.

Why exactly can one be certain that cyber security degrees are useless? Are they not in demand? Is cyber security not a STEM field that requires intense knowledge? Well, that is half-true. Cyber security is in high demand, but the degree is distinct from traditional STEM degrees. Where doctors and engineers secure placements and gain work experience to verify the validity of their degrees, cyber security degrees will hopefully include lab work or projects. In my view, the reality is that the crucial experience component that employers desire is absent. Although this lack of work placement is shifting, it remains challenging to find undergraduate or Master's-level cyber security programs in Canada that include a work experience component.
For instance, according to the Canadian Centre for Cyber Security's Post-Secondary Cyber Security Related Programs Guide, only ten bachelor's programs and four master's programs offer a work placement option out of a total of 147 entries.

Moreover, according to the 2024 ISC2 Cybersecurity Workforce Study, organizations surveyed around the world have experienced a significant increase in risk and disruption, yet economic pressures, exacerbated by geopolitical uncertainties, have led to budget and workforce reductions in a number of sectors, while cyber security threats and data security incidents have only continued to grow. Resources are strained, and this impacts cyber security teams and their ability to adopt new technologies and to protect against the nuanced threats those technologies pose to their organizations. The conclusion of the study was that in 2024, economic conditions significantly impacted the workforce, leading to both talent shortages and skills gaps at a time when need has never been greater. On top of this, the introduction of AI to drive transformation, cope with demand, and shape strategic decisions has come with its own challenges:

"We found that while cybersecurity teams have ambitious plans for AI within the cybersecurity function, they anticipate the biggest return on investment will occur in two or more years. As a result, they are not immediately overhauling their practices to adopt AI. Cybersecurity professionals are also conscious of the additional risks AI will introduce across the organization. As different departments adopt AI tools, cybersecurity teams are encouraging their organizations to create comprehensive AI strategies"

Interestingly, some of the key findings are:
- Cybersecurity professionals don't believe their cybersecurity teams have sufficient numbers or the right range of skills to meet their goals
- Cybersecurity professionals are still focused on higher education and professional development once in the workforce, but they increasingly prioritize work-life balance
- Many believe that diverse backgrounds can help solve the talent gap
- The expected advancements of AI will change the way cyber respondents view their skills shortage (certain skills may be automated), yet cyber professionals are confident that Gen AI will not replace their roles
- It was found that 45 percent of cyber security teams have implemented Gen AI into their tools to bridge skills gaps, improve threat detection, and provide vast benefits to cybersecurity
- Organizations need a Gen AI strategy to responsibly implement the technology

How HR is Adding to the Problem & Is Far From NICE

As I mentioned above, budget cuts are the primary reason organizations are not investing in new talent and training; however, it would be inaccurate to suggest that is the only reason cyber security hiring is broken. During my undergraduate degree in world conflict and peace studies, I observed that most conflicts stem from a lack of communication or a shared language. At a fundamental level, there is a significant gap in cyber security hiring because of the lack of a standardized language. To rectify this, the National Institute of Standards and Technology (NIST) published Special Publication 800-181, The National Initiative for Cybersecurity Education (NICE) Framework, in 2017. Canada has since adopted the NICE framework to create the Canadian Cyber Security Skills Framework in 2023.
The NICE framework categorizes cyber security competencies for various roles in terms of Knowledge, Skills, and Abilities (KSAs). I note that while Canadian cyber security degree programs effectively teach knowledge and foundational skills, they fall short on the "abilities" component, which can only be developed through practical experience. HR departments, however, treat all three components as requirements, creating a catch-22 experience-gap requirement. It follows, then, that the combination of HR departments' risk aversion and tight budgets creates a perfect storm, leading to a talent shortage.

Bad Policy and Government Decisions Have Ruined the Credibility of Postsecondary Education

International students, especially those from South Asia, have created significant business for some private colleges, which often lure students with false promises. Immigration Minister Marc Miller referred to these institutions as "puppy mill" schools. The pattern is as follows: students are charged four times what Canadians pay to attend college in Ontario while receiving a substandard education that doesn't prepare them for meaningful employment. Unfortunately, this systematic exploitation has created a credibility crisis that affects all postsecondary education in Ontario. When HR departments and employers see degrees from Canadian institutions, they now face the challenge of distinguishing between legitimate educational institutions and those "puppy mills." The credibility crisis in Ontario's postsecondary education stems from government policy decisions that have systematically reduced funding to legitimate educational institutions.

How AI is Poised to Make The Job Market Worse

The automation of entry-level cyber security and IT help desk roles is creating a significant career progression problem that will likely exacerbate the experience gap. The fundamental issue is that AI will deepen the entry-level crisis by eliminating precisely the positions that traditionally served as stepping stones to senior roles, or even served as entry points at all. The menial tasks that AI is designed to automate—basic incident response, routine monitoring, simple troubleshooting, and repetitive security assessments—are the same daily activities that historically proved to employers that candidates had developed practical competencies beyond their theoretical education.

The Path Forward

Eliminate the Experience Gap Through Mandatory Training Investment. Organizations must abandon the false economy of demanding pre-existing experience over investing in on-the-job training. While tight budgets drive risk-averse hiring, the cost of a single cyber security incident far exceeds the investment required to train motivated graduates. It might be worth reminding these companies that refusing to train entry-level talent is like gambling their entire business on an increasingly shrinking pool of experienced professionals, creating a strategic vulnerability that threat actors can exploit more easily than any technical system.

Mandate Industry-Education-Government Coordination for Work Placements. Canadian educational institutions must be required to coordinate with government and private industry to create robust work placement programs that directly funnel graduates into in-demand positions.
This cannot remain optional—with only ten bachelor's programs and four master's programs offering work placement out of 147 total entries, the current system is systemically failing students and employers alike. These partnerships must be structured to provide real-world experience that develops the "abilities" component of the NICE framework.

Strengthen Government Regulation and Skills-Job Alignment Review. The Canadian government must implement stricter regulation of educational institutions and conduct a thorough review of the mismatch between job-ready skills and student abilities. This includes shutting down the diploma mills that have destroyed credential credibility, establishing minimum standards for cyber security program outcomes, and creating accountability mechanisms that tie institutional funding to graduate employment rates and employer satisfaction. That is, educational institutions should be required to demonstrate that their curricula align with current industry needs and that graduates possess demonstrable competencies, not just theoretical knowledge.

  • Canada’s AI Brain Drain | voyAIge strategy

Canada's AI Brain Drain
A Silent Crisis for Canadian Business
By Tommy Cooke, fueled by curiosity and ethically sourced coffee
Oct 17, 2025

Key Points:
- Canada's AI brain drain threatens national competitiveness by eroding the local talent base essential for innovation and execution
- Without retaining AI expertise, Canada risks becoming dependent on foreign ecosystems, which undermines sovereignty and commercialization potential
- Business leaders must treat AI talent development as a core strategy—Canadians need to build, invest, and upskill locally to remain competitive

Canada is starting to punch above its weight in AI. With world-class research hubs in Toronto (Vector Institute), Montreal (Mila), and Edmonton (Amii), and visionaries such as Geoffrey Hinton and Yoshua Bengio driving Canada's AI momentum, Canada is increasingly recognized around the globe as a hotspot for innovation. Alas, as the global AI boom accelerates, Canada is at risk of losing that advantage through an exodus of talent. The phenomenon, often dubbed the "AI brain drain", refers to top researchers, engineers, and startup founders relocating to (or aligning remotely with) U.S. or global tech hubs as opposed to building at home. For a business leader in Canada who is currently considering AI, this trend is one to keep an eye on because the stakes are high: how easily one can recruit, retain, and deploy AI talent will increasingly define which firms win or lose over the next half decade and beyond.

Why Business Leaders Should Pay Attention

Seeing talent leave or take jobs globally has multiple implications for AI-driven innovation and the extent to which Canadian businesses can benefit from it. Let's take a closer look at three of them:

First, the absence of talent is an AI execution bottleneck. In many industries, the difference between AI as a novelty versus a value-creator lies in execution, not algorithms. That execution depends on access to specialized engineers, ML researchers, operations talent, data scientists, hybrid roles, and so on. If a tech company plans on adopting or building AI, it will have to compete not only with other Canadian firms, but also with global tech giants offering premium compensation, equity, and prestige. That competitive pressure already manifests in Canada's tech sector, where many former Canadian AI founders and researchers have relocated or anchored operations in Silicon Valley or other U.S. hubs despite having roots here. Losing that talent, or failing to attract it, translates to longer timelines, lower quality, higher costs, or outright stalling of AI initiatives.

Second, dependency on external ecosystems weakens innovation sovereignty. Relying on remote work or foreign talent is a short-term fix. If a company's AI strategy depends on overseas labs, it risks instability from geopolitical shifts, visa regimes, cross-border regulation, or simple churn in remote teams. Canada's recent announcement of a $2 billion+ Canadian AI Sovereign Compute Strategy is a response to such vulnerabilities: the federal government wants Canada to own its compute infrastructure rather than remain tethered to foreign cloud or GPU suppliers. Unfortunately, computing power alone is simply not enough. To leverage it fully, Canada needs people who know how to harness it. Without a base of AI talent anchored in Canada, compute investments risk underutilization and may force firms to seek support beyond the border.
Moreover, it is important to keep in mind that investments in AI compute are considerably larger in other jurisdictions such as the United States; even if Canadian AI founders want to stay in Canada to use the new AI infrastructure, Canadian compute will pale in comparison to the sorts of opportunities that the Americans are offering. It will therefore also be challenging to convince founders to take advantage of Canada's AI compute.

Third, the "imagination gap" will widen. Rather ironically, Canada lags many peers in actual AI adoption. Despite the country being a global leader in AI ideation and innovation, only 12 percent of Canadian firms have broadly integrated AI into operations or products, putting the country near the bottom of OECD adoption rates. Some of this gap stems from cultural and literacy issues. But the primary issue is structural; if Canadian firms can't access or retain top talent, pilots stay pilots, and experimentation never scales. The brain drain heightens that barrier. In effect, the Canadian market becomes a slow adopter, while global firms dominate the frontier.

Catalysts of the Brain Drain

It's important to understand where the pressure comes from if Canada is to begin recognizing countermeasures. The catalysts are ubiquitous and complicated, but let's quickly identify the most critical:
- Compensation and equity. U.S. tech firms routinely offer higher absolute compensation and more liquid equity upside
- Prestige. Many researchers seek the cachet of working at OpenAI, DeepMind, or leading U.S. AI labs
- Scale and data access. Larger U.S. and international firms have access to vast user bases and data that Canada-based projects often can't match
- Funding scale. Global venture capital and public markets remain deeper and more aggressive than those in Canada
- Remote work. Many Canadian researchers don't physically relocate now, but instead work remotely for international firms while remaining in Canada

What Canadian Business Leaders Should Do About the AI Brain Drain, Right Now

If you are serious about embedding AI in your organization, there are crucial steps you can take right now to join other business leaders seeking to alter the course of the brain drain.

For starters, Canadian business owners need to invest in AI anchors. More specifically, it is important to create internal AI competence centers or labs rather than one-off AI projects. It is also important to provide mandates, budgets, visibility, and career ladders. Ask yourself, what would talent want or need? It is necessary to attract Canadian talent to these new centers and labs by making the opportunity genuinely interesting.

It's also important to offer compelling equity and long-term incentives. It's expensive, but if the talent economy in Canada is to bolster itself, employers need to think more strategically about matching or emulating international-style equity models, grants, and research budgets. Engineers want to feel that they can build something significant—companies should do what they can to build the sandboxes that these engineers want to play in.

Furthermore, companies are encouraged to partner with local colleges and universities so that they can align their interests with those of Canada's top AI innovators. Develop interesting ways to fund cross-appointments, joint labs, or even industrial research chairs.

Companies may also wish to ask themselves, how skilled is our existing talent? If companies are not sure, they would benefit from upskilling.
To begin, companies can drive internal reskilling and establish AI-centric learning paths. That is, engaging workers in AI 101 learning sessions can help non-technical staff understand AI itself.

Lastly, and perhaps most importantly, frame AI strategy as core business strategy—not a side project. AI disruption is old news. The ship has already set sail. Every industry is transforming. If companies are adopting AI now or are planning to do so in the near future, it is best to think strategically. For instance, ask, how might AI drive our business strategy, as opposed to merely summarizing emails? By making AI more self-evidently valuable in terms of business growth, companies are more likely to attract talent.

Delaying Investment in AI Talent is a Strategic Risk

Canada's AI brain drain may still feel distant to many executives, but the lead time for losing competitive edge is long. If Canadian firms don't move to secure talent now, they'll find themselves significantly behind their competitors. For any business in Canada that is eyeing AI, the choice is not whether to care about the brain drain. It's whether to treat it as a strategic pillar.

  • Meta Refuses to Sign the EU’s AI Code of Practice | voyAIge strategy

Meta Refuses to Sign the EU's AI Code of Practice
A closer look at the reasons why
By Christina Catenacci, human writer
Jul 30, 2025

Key Points
- In July 2025, the European Commission released its General-Purpose AI Code of Practice and its Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act
- Many companies have complained about the Code of Practice, and some have gone so far as to refuse to sign it—like Meta
- Businesses in the European Union, and those outside it that do business with the EU (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines and comply

Meta has just refused to sign the European Union's General-Purpose AI Code of Practice for the AI Act. That's right—Joel Kaplan, the Chief Global Affairs Officer of Meta, said in a LinkedIn post on July 18, 2025 that "Meta won't be signing it".

By general-purpose AI, I mean an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development, and prototyping activities.

What is the purpose of the AI Act?

As you may recall, recital (1) of the Preamble of the AI Act states: "The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the 'Charter'), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorized by this Regulation"

The AI Act classifies AI according to risk and prohibits unacceptable risk, like social scoring systems and manipulative AI. High-risk AI is regulated, limited-risk AI has lighter obligations, and minimal-risk AI is unregulated. The AI Act entered into force on August 1, 2024, but its obligations are being phased in over time. The first set of rules, which took effect on February 2, 2025, bans certain unacceptable-risk AI systems. After this, a wave of obligations follows over the next two to three years, with full compliance for high-risk AI systems expected by 2027 (August 2, 2025, February 2, 2026, and August 2, 2027 each carry certain requirements). Those involved in general-purpose AI may have to take additional steps (e.g., the development of Codes of Practice by 2025), and may be subject to specific provisions for general-purpose AI models and systems. See the timeline for particulars (a simple illustrative sketch of these milestones also appears at the end of this article).

What is the Code of Practice for the AI Act?
The Code of Practice is a voluntary tool (not a binding law), prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act's obligations for providers of general-purpose AI models. More specifically, the objectives of the Code of Practice are to:
- serve as a guiding document for demonstrating compliance with the obligations provided for in the AI Act, while recognising that adherence to the Code of Practice does not constitute conclusive evidence of compliance with these obligations under the AI Act
- ensure providers of general-purpose AI models comply with their obligations under the AI Act, and enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code of Practice to demonstrate compliance with their obligations under the AI Act

Released on July 10, 2025, it has three parts:
- Transparency: commitments of Signatories include Documentation (there is a Model Documentation Form containing general information, model properties, methods of distribution and licenses, use, training process, information on the data used for training, testing, and validation, computational resources, and energy consumption during training and inference)
- Copyright: commitments of Signatories include putting in place a copyright policy
- Safety and Security: commitments of Signatories include adopting a safety and security framework; systemic risk identification; systemic risk analysis; systemic risk acceptance determination; safety mitigations; security mitigations; safety and security model reports; systemic risk responsibility allocation; serious incident reporting; and additional documentation and transparency

For each Commitment that Signatories sign onto, there is a corresponding Article of the AI Act to which it relates. In this way, Signatories can understand what parts of the AI Act are being triggered and complied with. For example, the Transparency chapter deals with obligations under Article 53(1)(a) and (b), 53(2), 53(7), and Annexes XI and XII of the AI Act. Similarly, the Copyright chapter deals with obligations under Article 53(1)(c) of the AI Act. And the Safety and Security chapter deals with obligations under Articles 53, 55, and 56 and Recitals 110, 114, and 115 of the AI Act.

In a nutshell, adhering to the Code of Practice that is assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. The plan is that the Code of Practice will be complemented by Commission guidelines on key concepts related to general-purpose AI models, also published in July. An explanation of these guidelines is set out below.

Why are tech companies not happy with the Code of Practice?

To start, we should examine the infamous LinkedIn post:

"Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act. Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe's largest businesses signed a letter calling for the Commission to 'Stop the Clock' in its implementation.
We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”

The post criticizes the European Union for going down the wrong path. It also talks about legal uncertainties, measures which go far beyond the scope of the AI Act, as well as stunting the development of AI models and of the companies building on them. There was also mention of other companies wanting to delay the need to comply.

To be sure, CEOs from more than 40 European companies, including ASML, Philips, Siemens, and Mistral, asked for a “two-year clock-stop” on the AI Act before key obligations enter into force this August. In fact, the bottom part of the open letter to European Commission President Ursula von der Leyen, called “Stop the Clock”, asked for more simplified and practical AI regulation and spoke of a need to postpone the enforcement of the AI Act. Essentially, the companies want a pause on obligations for high-risk AI systems that are due to take effect as of August 2026, and on obligations for general-purpose AI models that are due to enter into force as of August 2025. By contrast, the top of the document is entitled “EU Champions AI Initiative”, with logos of over 110 organizations that have over $3 billion in market cap and over 3.7 million jobs across Europe.

In response to the feedback, the European Commission was reported to be mulling giving companies who sign the Code of Practice on general-purpose AI a grace period before they need to comply with the European Union’s AI Act. That would be a switch from the July 10, 2025 announcement that the EU would be moving forward notwithstanding the complaints. The final word, however, appears to be that there is no stop the clock, no pause, and no grace period, period.

New guidelines also released July 18, 2025
In addition, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (Regulation EU 2024/1689)—right before the AI Act’s key compliance date, August 2, 2025. The goal is to help AI developers and downstream providers by providing clarification. For example, the Guidelines explain which providers of general-purpose AI models are in and out of scope of the AI Act’s obligations. In fact, the European Commission stated that “The aim is to provide legal certainty to actors across the AI value chain by clarifying when and how they are required to comply with these obligations”.

The Guidelines focus on four main areas:
General-purpose AI model
Providers of general-purpose AI models
Exemptions from certain obligations
Enforcement of obligations

The intention is to use clear definitions, a pragmatic approach, and exemptions for open-source models. That said, the Guidelines consist of 36 pages of dense material that need to be reviewed and understood. For instance, the Guidelines answer the question, “When is a model a general-purpose AI model?” Examples are provided for models in scope and out of scope.

What happens next?
As we can see from the above discussion, there are serious obligations that need to be complied with—soon. To that end, businesses in the European Union, or businesses who do business in the European Union (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines to ensure that they are ready for August 2, 2025. After August 2, 2025, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations.
Providers of general-purpose AI models that will be classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From August 2, 2026, the Commission’s enforcement powers enter into application. And by August 2, 2027, providers of general-purpose AI models placed on the market before August 2, 2025 must comply.
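To give a more concrete sense of the Transparency chapter mentioned above, here is a minimal, purely illustrative sketch of the kinds of fields the Model Documentation Form covers. The field names and all of the values are invented and paraphrase the categories listed earlier; this is not the official form and should not be treated as one.

```python
# Purely illustrative: invented field names and values that paraphrase the
# Model Documentation Form categories described above; not the official form.
model_documentation = {
    "general_information": {
        "provider": "ExampleAI B.V.",            # hypothetical provider
        "model_name": "example-gpai-1",
        "release_date": "2025-08-02",
    },
    "model_properties": {
        "architecture": "decoder-only transformer",
        "parameters": "7B",
        "modalities": ["text"],
    },
    "distribution_and_licences": ["API", "downloadable weights (proprietary licence)"],
    "use": "general-purpose text generation integrated into downstream applications",
    "training_process": "self-supervised pre-training followed by instruction tuning",
    "training_testing_validation_data": "summary of sources, collection, and curation methods",
    "computational_resources": {"approx_training_gpu_hours": 1_000_000},
    "energy_consumption": {"training_mwh": 500, "inference_wh_per_1k_tokens": 0.3},
}

# A downstream reviewer might simply check that every expected category is filled in.
for section, content in model_documentation.items():
    print(f"{section}: {'present' if content else 'missing'}")
```

The point is only to show the shape of the documentation exercise, not its legal content.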

  • California Bill on AI Companion Chatbots | voyAIge strategy

    California Bill on AI Companion Chatbots A New Law Emerges due to Concerns About the Impacts on Mental Health and Real-World Relationships By Christina Catenacci, human writer Oct 31, 2025 Key Points On October 13, 2025, California SB 243, Companion chatbots, was signed into law by Governor Newsom SB 243 addresses concerns about teen suicide and other impacts on mental health and real-world relationships since people have used companion chatbots as romantic partners California is the first state to enact this law—this law is a welcome development On October 13, 2025, California SB 243 , Companion chatbots, was signed into law by Governor Newsom. As can be seen in the recent Bill Analyses on the Senate Floor , AI companion chatbots that are created through genAI have become increasingly prevalent since they seek to offer consumers the benefits of convenience and personalized interaction. These chatbots learn intimate details and preferences of users based on their interactions and user customization. Millions of consumers use these chatbots as friends, mentors, and even romantic partners. However, there are serious concerns about their effects on users, including impacts on mental health and real-world relationships. In fact, many studies and reports point to the addictive nature of these chatbots and call for more research into their effects and for meaningful guardrails. Unfortunately, incidents resulting in users harming themselves and even committing suicide have been reported in the last year. To that end, SB 243 addresses these concerns by requiring operators of companion chatbot platforms to maintain certain protocols aimed at preventing some of the worst outcomes. What Does the New Law Say? The law defines a “companion chatbot” as an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. However, it does not include any of the following: A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user The law also defines an “operator” as a person who makes a companion chatbot platform available to a user in the state. A “companion chatbot platform” is a platform that allows a user to engage with companion chatbots. 
Beginning on July 1, 2027, the law requires the following:
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, the operator must issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human
Operators must prevent a companion chatbot on its companion chatbot platform from engaging with users unless they maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. Operators must publish the details of this protocol on the operator’s internet website
For a user that the operator knows is a minor, operators must do all of the following: (1) Disclose to the user that the user is interacting with AI; (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human (a small illustrative sketch of this reminder logic appears at the end of this piece); and (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct
Operators must annually report to the Office of Suicide Prevention all of the following: (1) The number of times the operator has issued a crisis service provider referral notification in the preceding calendar year; (2) Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and (3) Protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user. This report must not include any identifiers or personal information about users
Operators must disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors

A person who suffers injury as a result of a violation of this law may bring a civil action to recover all of the following relief:
injunctive relief
damages in an amount equal to the greater of actual damages or $1,000 per violation
reasonable attorney’s fees and costs

What Can We Take from This Development?
This landmark bill is the first law in the United States to regulate AI companions. Given that teenagers have committed suicide following questionable conversations with these AI companion chatbots, the new transparency requirements are a welcome development.
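As a purely illustrative sketch of the break-reminder and disclosure logic described above (invented class and method names, simplified timing, and not legal advice or a compliance implementation), an operator-side helper might look something like this:

```python
# Illustrative only: invented names and simplified logic; not legal advice and
# not a compliance implementation of SB 243.
from datetime import datetime, timedelta
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=3)  # "at least every three hours" for known minors

class CompanionSessionNotices:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder = datetime.now()

    def session_start_notice(self) -> str:
        # Clear and conspicuous disclosure that the user is interacting with AI.
        return "You are chatting with an AI companion. It is artificially generated, not a human."

    def maybe_break_reminder(self) -> Optional[str]:
        # For users the operator knows are minors, remind them to take a break
        # at least every three hours of continuing interaction.
        if not self.user_is_minor:
            return None
        if datetime.now() - self.last_reminder >= REMINDER_INTERVAL:
            self.last_reminder = datetime.now()
            return ("Reminder: please take a break. This companion chatbot is "
                    "artificially generated and not human.")
        return None

# Usage sketch: call maybe_break_reminder() on every message from a minor user.
session = CompanionSessionNotices(user_is_minor=True)
print(session.session_start_notice())
print(session.maybe_break_reminder())  # None until three hours have elapsed
```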

  • Meta Wins the Antitrust Case Against It | voyAIge strategy

Meta Wins the Antitrust Case Against It
No Monopoly Found
By Christina Catenacci, human writer
Nov 27, 2025

Key Points
On November 18, 2025, the United States District Court for the District of Columbia confirmed that Meta did not have a monopoly
This decision confirms that Meta will not have to break off Instagram and WhatsApp
This antitrust decision is markedly different from the Google antitrust decisions involving Search and online ads

On November 18, 2025, James E. Boasberg, Chief Judge of the United States District Court for the District of Columbia, confirmed that Meta did not have a monopoly. Accordingly, Meta will not have to break off Instagram and WhatsApp.

As I mentioned here, Meta had its antitrust trial about seven months ago, where the main question was whether Meta had a monopoly in social media by acquiring Instagram and WhatsApp about ten years ago (2012 and 2014 respectively). Essentially, Mark Zuckerberg was the first to give testimony and apparently, while he was on the stand, he was asked to look at previous emails he had written to associates before and after the acquisition of Instagram and WhatsApp to clarify his motives. More specifically, the questions were, “Was the purchase to halt Instagram’s growth and get rid of a threat? Or was it to improve Meta’s product by having WhatsApp run as an independent brand?” In short, the ultimate decision was that Meta won: it did not have a monopoly and did not have to break up Instagram and WhatsApp.

What Did the Judge Decide?

Initial Comments
The judge made a point of beginning with the comment, “The Court emphasizes that Facebook and Instagram have significantly transformed over the last several years”. In fact, it noted that Facebook bought Instagram back in 2012, and WhatsApp in 2014. In addition, the court described two other relevant social media apps, TikTok and YouTube, which allowed users to watch and upload videos.

The court explained the history of the evolution of Meta’s apps. For example, as Meta moved to showing TikTok-style videos, TikTok moved to adding Meta-style features to share them with friends. Technological changes have made video apps more social. More specifically, smartphone usage exploded; cellphone data got better; the steady progress of cellular data was followed by a massive leap in AI; and as social networks matured, the alternatives to AI-recommended content have become less appealing.

The court detailed the lengthy history of proceedings, beginning with the initial Complaint that was filed in 2021. The court noted that, in deciding Facebook’s motion to dismiss, it had stated straight away that it had doubts that the Federal Trade Commission (FTC) could state a claim for injunctive relief. The court granted Facebook’s motion to dismiss but allowed the FTC to amend its Complaint. The FTC indeed created an Amended Complaint and alleged that Facebook held a monopoly in personal social networking and that Facebook maintained the monopoly by buying both Instagram and WhatsApp to eliminate them as competitive threats. The court found that the FTC had plausibly alleged that Facebook held monopoly power and that the acquisitions of Instagram and WhatsApp constituted monopolization. That said, the court did say that the FTC may have challenges proving its allegations in court. Subsequently, the parties each moved for summary judgment.
The court denied both motions and indicated that the FTC had met its forgiving summary judgment standard, but the FTC faced hard questions about whether its claims could hold up in the crucible of trial. At trial, the court heard testimony for over six weeks and considered thousands of documents. Decision at Trial The court found the following: Section 2 of the Sherman Act prohibited monopolization. The main elements included holding monopoly power (power over some market) and maintaining it through means other than competition on the merits. Plaintiffs typically proved monopoly power indirectly by showing that a firm had a dominant share of a market that was protected by barriers to entry A big question in this case was, When did Meta have monopoly power? The FTC had to show that Meta was violating the law now or imminently and could only seek to enjoin the conduct that currently or imminently violated the law (the FTC incorrectly argued that Meta broke the law in the past, and this violation is still harming competition) The court defined the product market as the smallest set of products such that if a hypothetical monopolist controlled them all, then it would maximize its profits by raising prices significantly above competitive levels. The court confirmed that the FTC had the burden of proving the market’s bounds The court found that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. For instance, this could be seen when there was a shutdown of TikTok in the United States: users switched to other apps like Facebook, and later Instagram, and then YouTube. The court commented, “The amount of time that TikTok seems to be taking from Meta’s apps is stunning”. In fact, the court noted that when consumers could not use Facebook and Instagram, they turned first to TikTok and YouTube, and when they could not use TikTok or YouTube, they turned to Facebook and Instagram—Meta itself had no doubt that TikTok and YouTube competed with it. Thus, even when considering only qualitative evidence, the court found that Meta’s apps were reasonably interchangeable with TikTok and YouTube In assessing Meta’s monopoly power, the court considered a market that comprised Facebook, Instagram, Snapchat, MeWe, TikTok, and YouTube. Also, the court found that the best single measure of market share here was total time spent—the companies themselves often measured their market share using this measure. The court noted that Meta’s market share was falling, and what counted most regarding market share was the ability to maintain market share. A given market share was less likely to add up to a monopoly if it was eroding—if monopoly power was the power to control prices or exclude competition, then that power seemed to have slipped from Meta’s grip. The court concluded that YouTube and TikTok belonged in the product market, and they prevented Meta from holding a monopoly. Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case Social media moved so quickly that it never looked the same way twice since the case began in 2021. The competitors changed significantly too. Previous decisions in motions did not even mention TikTok. Yet today, TikTok was Meta’s fiercest rival. It was understandable that the FTC was unable to fix the boundaries of Meta’s product market. Accordingly, the court stated: “Whether or not Meta enjoyed monopoly power in the past, though, the [FTC] must show that it continues to hold such power now. 
The Court’s verdict today determines that the FTC has not done so”

Therefore, the case against Meta was dismissed.

What Can We Take from This Development?
Meta did not have a monopoly in social networking and survived a very serious existential challenge—it will not have to break the company apart as a result of this decision. The results of this decision were the polar opposite of the Google decision, where there was indeed a confirmed monopoly in Google Search and online ads. Why such a different result?

The first clue came right at the beginning of this Meta decision, when the judge noted that the question was whether Meta had monopoly power now or imminently. In particular, there was no determination about whether there had been a monopoly in the past (as the FTC incorrectly alleged), because it was irrelevant. That is, Meta may have had a monopoly in the past, but the FTC had to show that it had one now. Unlike the judge in the Google decision, the judge in the Meta case found that the test for monopoly power was not met, primarily because the FTC could not show that Meta currently had monopoly power (power over some market) and maintained it through means other than competition on the merits.

Second, unlike in the Google decision, the product market had changed considerably since the FTC launched the Complaint, to the point where Meta’s strongest competitor right now, TikTok, had not even been on people’s radar. The judge made an important finding that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. After considering the evidence, the court found that TikTok now had to be included in the product market. This was significant and played a large role in the court dismissing the case. Most strikingly, the judge stated, “Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case”.

Third, throughout the previous Meta decisions since 2021, there was foreshadowing by the court that the FTC may struggle to prove its allegations in court. This was not so in the Google case, which involved the company using exclusionary contracts and other means to create and maintain its monopoly, which it still has. It is not just the DOJ that thinks Google currently has a monopoly—the EU has also fined Google significantly for having and maintaining a monopoly in Search and online ads.

Fourth, it became clear that Meta’s market share had decreased, likely because of TikTok and YouTube—this made it difficult for the FTC to prove that there was a monopoly under which Meta would have the opportunity to charge more, or demand more time spent. Recall that a main measure in this sphere was time spent, and the court stated that the amount of time that TikTok seemed to be taking from Meta’s apps was stunning (a simple numerical illustration of how widening the market dilutes a share appears at the end of this piece). On the other hand, in Google’s case, Google had—and still has—89 percent of the global search engine market share.

Sure, Mark Zuckerberg made comments in 2008 emails, “It is better to buy than compete”, but even so, the court has just shown that the FTC could no longer meet the test for holding a monopoly.

Some may question why there is such importance placed on antitrust trials. Speaking about its competition mission, the FTC states:

“Free and open markets are the foundation of a vibrant economy.
Aggressive competition among sellers in an open marketplace gives consumers — both individuals and businesses — the benefits of lower prices, higher quality products and services, more choices, and greater innovation”
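To make the court’s time-spent measure of market share concrete, here is a toy calculation. The figures are made up for illustration only (they are not from the decision or from any real usage data); the point is simply that widening the market to include TikTok and YouTube sharply dilutes Meta’s share.

```python
# Toy illustration with invented monthly time-spent figures (billions of hours).
# None of these numbers come from the decision or from real usage data.
time_spent = {"Facebook": 30, "Instagram": 25, "Snapchat": 5, "MeWe": 0.1,
              "TikTok": 40, "YouTube": 45}

def share(apps, market):
    total = sum(time_spent[a] for a in market)
    return sum(time_spent[a] for a in apps) / total

meta_apps = ["Facebook", "Instagram"]
narrow_market = ["Facebook", "Instagram", "Snapchat", "MeWe"]
broad_market = narrow_market + ["TikTok", "YouTube"]

print(f"Meta share, narrow market: {share(meta_apps, narrow_market):.0%}")  # ~92%
print(f"Meta share, broad market:  {share(meta_apps, broad_market):.0%}")   # ~38%
```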

  • Keeping People in the Loop in the Workplace | voyAIge strategy

Keeping People in the Loop in the Workplace
Some Thoughts on Work and Meaning
By Christina Catenacci, human writer
May 16, 2025

Key Points
We can look to the well-known words of Dickson C.J. in the 1987 Alberta Reference case for some of the first judicial comments touching on the meaning of work
When thinking about what exactly makes work meaningful, we can look to psychologists who have demonstrated through scientific studies that meaningful work can be attributed to community, contribution, and challenge—leaders are recommended to incorporate these aspects in their management strategies
Leaders are also encouraged to note that self-actualization is the process of realizing and fulfilling one’s potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose

My co-founder, Tommy Cooke, just wrote a great article that discusses the effects of the Duolingo layoffs. In that piece, he talks about how Duolingo replaced its contract workers with AI (on top of the 10 percent of its workforce it had already cut). Ultimately, he concludes that AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities—and it follows that Duolingo runs the risk of reducing employee morale, increasing inefficiencies, and causing other long-term negative consequences like damage to reputation. Duolingo is not alone when it comes to reducing a part of its workforce and replacing it with AI. In fact, Cooke suggests that organizations that prioritize human-AI collaboration through hybrid workflows, upskilling, and governance position themselves for long-term success.

This article got me thinking more deeply about the meaning of work. From an Employment Law perspective, I am very familiar with the following statement made by Dickson C.J. in 1987 (Alberta Reference):

“Work is one of the most fundamental aspects in a person's life, providing the individual with a means of financial support and, as importantly, a contributory role in society. A person's employment is an essential component of his or her sense of identity, self‑worth and emotional well‑being. Accordingly, the conditions in which a person works are highly significant in shaping the whole compendium of psychological, emotional and physical elements of a person's dignity and self-respect”

Dickson C.J.’s point that employment is an essential component of a person’s sense of identity, self-worth, and emotional well-being has stayed with me. I wrote about these concepts in my PhD dissertation, where I argued that there is an electronic surveillance gap in the employment context in Canada, a gap that is best understood as an absence of appropriate legal provisions to regulate employers’ electronic surveillance of employees both inside and outside the workplace. More specifically, I argued that, through the synthesis of social theories of surveillance and privacy, together with analyses of privacy provisions and workplace privacy cases, a new and better workplace privacy regime can be designed (to improve PIPEDA). Disappointingly, we have still not seen the much-needed legislative reform, but let’s move on.

Thus, it is safe to say that for decades, courts have recognized the essential nature of work when deciding Employment Law cases. Economists have too.
For instance, Daniel Susskind wrote a working paper where he explored the theoretical and empirical literature that addresses the relationship between work and meaning. In fact, he explained why this relationship matters for policymakers and economists who are concerned about the impact of technology on work. He pointed out that the relationship matters for understanding how AI affects both the quantity and the quality of work, and asserted that new technologies may erode the meaning that people get from their work. What’s more, if jobs are lost because of AI adoption, the relationship between work and meaning matters significantly for designing bold policy interventions like the 'Universal Basic Income' and 'Job Guarantee Schemes'. More precisely, he argues that policymakers must decide whether to simply focus on replacing lost income alone (as with a Universal Basic Income, or UBI) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a Job Guarantee Scheme, or JGS).

With AI becoming more common in the workplace, Susskind points out that although traditional economic literature narrowly focuses on the economic impact of AI on the labour market (for instance, on employment and earnings), there has been a change of heart in the field, evolving into a creeping pessimism about both the quantity of work to be done and the quality of that work. In fact, he touches on the notion that paid work is not only a source of income, but of meaning as well. He also notes that classical political philosophers and sociologists have introduced some ambiguity when envisioning the relationship, but organizational psychologists have argued, and successfully demonstrated through scientific studies, that people do indeed get meaning from work.

What does this all mean? Traditional economic models treat work solely as a source of disutility that people endure only for wages. But it is becoming more evident that the more modern way to think about work entails thinking about meaning—something that goes beyond income. What the foregoing suggests is that, if AI ultimately leads to less work for most people, we may need to better understand what exactly is “meaningful” to people, and how we can ensure that people who are downsized still have these meaningful activities to do. Further, we would need to provide a great deal of opportunity to do these meaningful things, so people can still experience the psychological, emotional, and physical elements of dignity and self-respect that Dickson C.J. referred to back in 1987.

Along the same lines, we will need to reimagine policy interventions such as UBI and JGS; while advocates of JGS tend to believe that work is necessary for meaning, UBI supporters believe that people can find meaning outside formal employment. More specifically, UBI is a social welfare proposal where all citizens receive a regular cash payment without any conditions or work requirements. The goal of UBI is to alleviate poverty and provide financial security to everyone, regardless of their economic status. A job guarantee, on the other hand, is a proposal under which the state offers employment to anyone who wants it; its advocates argue that it can secure true full employment while stabilizing the economy, producing key social assets, and securing a basic economic right.

What we can be sure of is that how we see work and meaning is changing.
For many, work is a major component of their lives and of how they view themselves. Some would go further and suggest that work is the central organizing principle in their lives—they could not imagine life without work, and self-actualizing would not take place without it. To be sure, self-actualization is the process of realizing and fulfilling one’s potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose.

What Makes Work Meaningful?
A closer look into what makes work meaningful can help in this discussion. Meaningful work comes from:
Community: We are human, whether we like it or not. Because of this, we are wired for connection. Studies show that employees who feel a strong sense of belonging are more engaged and productive
Contribution: The ability to see how one’s work benefits others is one of the strongest motivators in any job. In fact, employees who find meaning in their work are 4.5 times more engaged than those who do not
Challenge: We thrive when we are given opportunities to learn and grow. Put another way, when leaders challenge employees to expand their capabilities and provide the support they need to succeed, those employees experience more meaningful development

When you stop and think about it, it makes sense that leaders play a considerable role in shaping workplace meaning. Since about 50 percent of employees’ sense of meaning at work can be attributed to the actions of their leaders, leaders are recommended to find ways to cultivate community, contribution, and challenge so that employees and teams can thrive. More precisely, leaders in organizations are recommended to:
foster a strong sense of belonging
be aware of and acknowledge the impacts of employees’ work
challenge workers so that they grow in meaningful ways

Individuals can also create meaning for themselves, namely through self-instigated learning, volunteering in the community, participating in local government, engaging in care work, and engaging in creative work.

  • AI Governance in 2025 | voyAIge strategy

AI Governance in 2025
Trust, Compliance, and Innovation to Take Center Stage this Year
By Tommy Cooke, powered by caffeine and curiosity
Jan 20, 2025

Key Points:
AI governance is transitioning from a reactive compliance measure to a proactive discipline
Innovations like AI impact assessments help organizations operationalize transparency
AI governance frameworks are no longer regulatory shields. They enhance brands

What was an emerging concern over the last few years will become a mature and necessary strategic discipline in 2025. As we move deeper into another year and while AI remains in its infancy, it is necessary to have the guardrails in place to ensure that AI grows and contributes successfully. The landscape of AI governance is thus evolving in many meaningful ways, much of which is due to growing international regulatory pressure, increasing stakeholder expectations, and the ongoing need to ensure significant financial investments in AI generate reliable returns. This Insight looks at what has changed over the last couple of years and looks ahead to how AI governance is maturing – and why these shifts matter.

From Awareness to Structure
In the few short years of AI’s proliferation across virtually every industry, AI governance could best be characterized as reactive. Organizations leveraging AI to innovate and reduce costs – particularly those with high stakes in demonstrating that AI can be trustworthy – have tended to approach governance as a checkbox exercise. Unless an organization fell within the purview of a particular jurisdiction requiring compliance, like the EU’s AI Act, whether it stood up a dedicated office with a detailed AI governance strategy depended largely on its own awareness and its relationship with its stakeholders.

Moving forward, that awareness is intensifying. Organizations are no longer waiting for compliance to simply arrive. Even in the face of shifting political landscapes in North America where AI regulation seems to be losing momentum, the AI governance market is expected to grow from $890 million USD to $5.5 billion USD by 2029. This statistic is indeed a reflection of regulatory pressure abroad – and it is also a reflection of the maturing need for structured management of AI. With AI systems earning the trust of organizations around the world to make critical decisions, the potential for damage and unintended consequences is becoming far too great to ignore: algorithmic bias, breaches, and ethical violations can cause significant reputational liabilities and financial penalties that would almost certainly erase any organization’s AI investment; non-compliance with the EU’s AI Act, for example, can result in fines up to €35 million or 7 percent of an organization’s annual turnover.

Transparency in the Spotlight
Over the last couple of years, transparency has been a buzzword. It existed in a gray space because organizations tended to use the term strategically in public-facing white papers and proposal packages. The word “transparency” often appears through corporate “AI-washing”: the process of exaggerating the use of AI in products and services, particularly when companies make misleading claims about the safety and ethics of their systems. Moreover, transparency tends to be perceived as difficult to achieve. Many large-scale AI adopters believe that AI systems’ outputs are difficult to explain or that their processes are virtually impossible for laypeople to understand.
This excuse will no longer be satisfactory in 2025 and the years ahead. Why? Contrary to what some may believe, societal, political, and ethical pressures for transparency are growing. And those pressures are leading to AI transparency innovations. Here are two examples:
AI impact assessments (AI-IAs) are not merely designed to identify positive and negative impacts of AI – they are also growing in popularity because they position organizations to critically reflect on and discuss the social and ethical consequences of using AI. What AI-IAs essentially do is commit an organization to understanding how its AI systems can be improved as well as what the risks may be – whether emerging or existing in the present moment. These dynamics already exist in every AI system. By making them visible, organizations take crucial steps toward demonstrating transparent and accountable relationships with their AI systems
2024 saw a significant maturation in AI model documentation: an explanation of what an AI system’s model is, what it was trained on, any tests or experiments conducted on it, etc. The goal is to document what the AI system is doing. By noting what the system does, an organization provides its stakeholders with a track record that can be examined to ensure responsible and ethical use as well as to demonstrate compliance

Data Sovereignty on the Rise
While data privacy has been a long-standing focus throughout the previous years, 2025 will mark a significant shift toward data sovereignty. As regulatory, geopolitical, and social concerns continue to rise around responsible and ethical use of AI, 2025 will see organizations increasingly designing AI systems in ways that deal with how data is stored, processed, and accessed. Compliance with data residency laws to ensure that sensitive data will remain within a national boundary or specific jurisdiction, for example, will trend this year. We will hear more about other privacy-preserving technologies in AI systems, such as federated learning: a machine learning technique that allows AI to be trained across datasets from different sources without transferring sensitive data across borders (a small illustrative sketch of the idea appears at the end of this piece). Data is no longer viewed as merely a business asset but a national asset. For organizations operating globally, this can make daily operations rather complicated due to the existence of multiple international laws and norms if they are to avoid penalties while maintaining trust. When data involves national security, healthcare, or financial sectors, demonstrating the ability to respect data storage laws when using AI will be a top priority for organizations this year.

Ethical AI as a Top Operational Priority
Much like the way in which buzzwords like transparency have been used to gesture toward responsible AI use, ethical AI will finally emerge as a fully operationalized practice. Unlike the stretch of AI adoption in the early 2020s, when AI ethics tended to be little more than a vague concept, ethical AI discourse and debate have now been sustained for a considerable period of time. Organizations are recognizing that failing to act upon the principles and values of ethical AI not only poses reputational and financial risks, but can also harm operational integrity. Organizations have been conducting, and will continue to conduct, structured reviews to identify potential bias, discrimination, and unintended social consequences of their AI systems.
These kinds of assessments are being applied across an AI system’s lifecycle, from design to monitoring after implementation. According to the AI Ethics Institute (2023), 74 percent of consumers prefer to use products certified as ethically deployed and developed. This makes comprehensive AI training with a focus on governance, privacy, and of course – ethics – a must. The same will hold true for selecting AI vendors and designers committed to embedding ethical considerations into their products from the beginning and not merely as an afterthought.

AI Governance as a Strategic Differentiator
A commonplace perception of all things related to privacy, ethics, and governance is that they are expensive and stifle innovation. It is often echoed by techno-libertarians, those who believe that innovation and business should be left largely unregulated in order to maximize growth and creativity – people who resist external intervention unless mandated by law. What proponents of these perceptions and beliefs fail to understand is that a lack of proactivity in the responsible and ethical management of technology is becoming extraordinarily risky and costly. In 2025, AI governance will be embraced as a business strategy that not only mitigates risks but also allows organizations to actively differentiate themselves in competitive markets. AI-IAs, audits, transparent reporting, and other AI governance-related activities will contribute more directly to brand equity and stakeholder confidence. Recognizing that the world’s legal, social, political, and ethical standards are strengthening around the use of AI, organizations are realizing that demonstrating and sharing a robust AI governance framework showcases the organization and its talent as thought leaders who are able to navigate complex technology while building trust-based relationships with customers, partners, regulators, and so on.

So, Why Now?
What is driving the maturation and higher adoption rates of AI governance? Here are three catalysts to consider:
Regulatory Evolution: Despite the resurgence of techno-libertarians who may be slowing the advance of AI-related regulatory agendas, this only applies in limited jurisdictions. It’s important to remember that sub-sovereign jurisdictions (e.g., state and provincial level government authorities) are developing their own regulations. Whether they deal specifically with AI or not, data and privacy laws are always changing – and they almost always have implications for how organizations use AI.
Public Scrutiny: High-profile AI failures have made stakeholders more vigilant about ethical and operational risks. Consumers are increasingly skeptical about how organizations use AI. C-suite executives are becoming more and more aware of how important it is to demonstrate to stakeholders that they are using AI responsibly, which necessitates implementing strong AI governance frameworks – and proving that they work.
Market Maturity: Markets do not mature merely due to an invisible economic hand. Much of their maturation is driven by the behaviour, perception, and demands of their consumers. As AI becomes integral to business operations, it is perhaps unsurprising that consumers do not trust organizations that do not openly disclose that they use AI.

Final Thoughts
AI governance in 2025 represents a pivotal shift from a regulatory afterthought to a core strategic priority.
Organizations that adopt structured governance frameworks, emphasize transparency, and prioritize ethical AI are not only mitigating risks but also distinguishing themselves in competitive markets. As regulatory landscapes evolve and public scrutiny intensifies, investing in robust AI governance is no longer optional.
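As a small, purely illustrative sketch of the federated learning idea mentioned under Data Sovereignty above: each site fits a model on its own (here, simulated) data, and only the fitted coefficients leave the site, never the raw records. This is a simplified federated-averaging toy, not any vendor’s implementation.

```python
# Simplified federated-averaging toy on simulated data: raw records stay on-site;
# only locally fitted coefficients are shared and aggregated.
import numpy as np

rng = np.random.default_rng(0)
true_coef = np.array([1.5, -2.0, 0.5])

def local_fit(X, y):
    # Ordinary least squares solved locally; X and y never leave the site.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, len(y)

# Simulated datasets held by three different jurisdictions.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_coef + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

updates = [local_fit(X, y) for X, y in sites]

# The coordinator aggregates only coefficients, weighted by local sample size.
weights = np.array([n for _, n in updates], dtype=float)
global_coef = sum(w * c for (c, _), w in zip(updates, weights)) / weights.sum()
print("aggregated coefficients:", np.round(global_coef, 2))  # close to [1.5, -2.0, 0.5]
```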

  • Canadian News Publishers Sue OpenAI for Copyright Infringement | voyAIge strategy

Canadian News Publishers Sue OpenAI for Copyright Infringement
Yet another lawsuit dealing with copyright and AI
By Christina Catenacci, human writer
Mar 21, 2025

Key Points
Canadian news publishers have sued OpenAI in the Ontario Superior Court of Justice in Toronto for copyright infringement
It has been reported that three French trade groups have accused Meta of copyright infringement
Copyright and AI cases are increasing, and they stress the need to balance the rights of creators with those of large tech companies

On November 28, 2024, Canadian news publishers including Toronto Star Newspapers Limited, Metroland Media Group Ltd., Postmedia Network Inc., PNI Maritimes LP, The Globe and Mail Inc./Publications Globe and Mail Inc., Canadian Press Enterprises Inc./Enterprises Presse Canadienne Inc., and Canadian Broadcasting Corporation/Société Radio-Canada (Publishers) sued OpenAI and related companies (OpenAI) in the Ontario Superior Court of Justice in Toronto.

What was the lawsuit against OpenAI about?
The Publishers argued that OpenAI infringed, authorized, or induced infringement of their copyrights, contrary to the Copyright Act (Act). In addition, they argued that OpenAI engaged in circumvention of technological protection measures that prevented access and restricted the copying of their copyrighted works. The Publishers also argued that OpenAI breached the Terms of Use of their websites and unjustly enriched itself at the expense of the Publishers.

To that end, the Publishers requested that the court issue Orders for damages or statutory damages, damages and an accounting and disgorgement of profits in respect of OpenAI’s breach of contract and unjust enrichment, and punitive and/or exemplary damages for OpenAI’s wilful and knowing infringement of the Publishers’ rights. What’s more, they asked for a permanent injunction stopping the infringement and a further “wide injunction” that would stop OpenAI from circumventing technological protection measures. They also asked for costs and interest.

The Publishers began their Claim by pointing out that OpenAI engaged in ongoing, deliberate, and unauthorized misappropriation of their valuable news media works. They also noted that OpenAI scraped content from their websites, web-based applications, and third-party partner websites, and then used that proprietary content to develop its ChatGPT models, without consent or authorization. OpenAI also augments its models on an ongoing basis by accessing, copying, and/or scraping their content in response to user prompts:

“OpenAI has taken large swaths of valuable work, indiscriminately and without regard for copyright protection or the contractual Terms of Use applicable to the misappropriated content”

The Publishers even went so far as to say that OpenAI was aware of the value of the Publishers’ proprietary data and intellectual property, including the significant financial investments made to acquire the rights to publish the works, and of the need to both pay for that information and secure the express authorization of the Publishers before obtaining and using it for its own purposes. They said that rather than seek to obtain the information legally, OpenAI elected to “brazenly misappropriate” the Publishers’ valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration. More specifically, the Publishers provided a chart with the number of works, owned works, and licensed works that each of the Publishers had.
Owned works were ones that were either owned or exclusively licensed by one of the Publishers, and licensed works were ones that were published by the Publishers under a non-exclusive licence and with the permission of the copyright owner. Each of the Publishers had published hundreds of thousands, if not millions, of owned works across their websites, as well as hundreds of thousands of licensed works—all of which had copyright protection. Most concerning, the Publishers alleged that OpenAI developed its GPT models, by generating a data set comprised of copious amounts of text data (Training Data), which the model then analyzed to learn to generate coherent and natural-sounding text without the need for explicit supervision. Worse, they say that a significant proportion of the Training Data that was used to train the GPT models was obtained by OpenAI using a process called “scraping”, which involved programmatically visiting websites across the entirety of the Internet, locating the desired information, and extracting or copying it in a structured format for further use or analysis. In fact, they claimed that their copyrighted works were scraped and/or copied one or more times. Even though OpenAI generated billions of dollars in annual revenue (As of October 2024, OpenAI was valued at $157 billion), the Publishers said that they were not paid any form of consideration in exchange for their works. We shall see what transpires in this case—at this point, OpenAI has not filed its Statement of Defence. However, if one may take a guess, one might expect OpenAI to launch a defence of fair dealing pursuant to sections 29–29.4 of the Act . This is similar to fair use in the United States. To clarify, some things do not constitute copyright infringement; for instance, section 29 states that fair dealing for the purpose of research, private study, education, parody, or satire does not infringe copyright. Furthermore, other exceptions and related criteria involve the following: section 29.1 (fair dealing and criticism or review); section 29.2 (fair dealing and news reporting); section 29.21 (non-commercial user-generated content); section 29.22 (reproduction for private purposes); section 29.23 (fixing signals and recording programs); section 29.24 (backup copies); section 29.3 (acts undertaken without motive of gain); and section 29.4 (educational institutions). In this case, it is likely that OpenAI will run into problematic issues if it tries to use this defence because OpenAI is currently a for-profit entity with only commercial interests in mind. That is, many of the fair dealing exceptions simply do not apply in this case. On the other hand, if OpenAI were a not-for-profit entity attempting to conduct research, perhaps this would be a different situation; however, there is no doubt that OpenAI is making billions of dollars and using the Publishers’ works to do so. News of another copyright infringement case involving AI in France There have been several similar intellectual property lawsuits against AI companies, including one that I just wrote about where Cohere was sued by news publishers in the United States and Canada. In that article, I also referred to another case involving Thomson Reuters in the United States. And if that were not enough, we recently discovered that French publishers and authors are suing Meta in Paris, in the Third Chamber of the Paris Judicial Court, and accusing it of using their works without permission to train its AI model, LLaMA. 
In fact, it has been reported that three trade groups (the National Publishing Union, which represents book publishers; the National Union of Authors and Composers, which represents playwrights and composers; and the Société des Gens de Lettres, which represents authors) have accused Meta of copyright infringement and asserted that the company has not obtained authorization to use their copyrighted works to train its AI model. They demand that Meta completely remove the data directories that were created through the alleged infringements.

Interestingly, this will be the first copyright and AI case to be tried pursuant to the EU’s AI Act. We shall see what transpires in this case, as it will likely influence the direction of all future cases in the EU and beyond. I say this because the AI Act is referred to as the gold standard, since it was the first regulation that dealt with AI. And there is no doubt that this Act requires compliance with EU copyright law.

What can we take from these developments?
As we can see from the above discussion, there is a growing number of accusations of copyright infringement in which creators have been pitted against large, rich tech companies. Indeed, legal fights are increasingly highlighting the tension between traditional intellectual property protections and what large tech companies are referring to as the need for barrier-free innovation.

In Canada, we are currently without an AI statute. In the United States, Biden’s Executive Order has been rescinded by President Trump, and it is not likely that there will be any further AI regulation at the federal level. That said, there are a few States that have recently enacted AI legislation: California, Colorado, Utah, and Virginia (passed but not yet signed into law). It appears that neither Canada nor most of the United States has committed to AI transparency with respect to the training of AI models and the data sources used.

In the meantime, while there is hardly any AI legislation, a strong message is being sent to creators—the government is not interested in balancing the interests of creators with those of large AI companies—it is simply not a priority to do so via strong legislation. In fact, an argument could be made that Trump has sent the exact opposite message by rescinding Biden’s AI Executive Order.

Economically speaking, one may argue that the lack of lawmaking and policymaking in this area in Canada and the United States could deter creators from producing novel creative works. It may even negatively impact creative markets in the long term. As creators begin to recognize that there is a lack of balance between fair compensation for creators and allowing innovation for tech companies, governments may need to respond by adapting existing copyright legislation and/or inserting new provisions in AI legislation that address these issues. What we can say for sure is that the AI and copyright debate remains unsettled at this time.

  • Upskilling and Reskilling in the Age of AI | voyAIge strategy

Upskilling and Reskilling in the Age of AI
What Organizations Need to Know
Christina Catenacci, Human Writer
Jan 20, 2025

Key Points:
Upskilling is the process of improving employee skill sets through AI training and development programs
Reskilling is learning an entire set of new skills to do a new job
It is not possible to have a one-time upskilling and reskilling session—rather, upskilling and reskilling is a continuous learning process

IBM’s Institute for Business Value states that more than 60 percent of executives predict that Gen AI will disrupt how their organization designs experiences; even more striking, 75 percent say that competitive advantage depends on Gen AI. In a study by Boston Consulting Group in which 13,000 people were surveyed, 89 percent of respondents said that their workforce needed improved AI skills—but only six percent said that they had begun upskilling in “a meaningful way”. Clearly, organizations that are not beginning the process of upskilling and reskilling can be at a disadvantage in this competitive game and risk being left behind. This may be why the AI Age is commonly referred to as an era of upskilling.

What are upskilling and reskilling?
IBM notes that upskilling and reskilling are two different things. In particular, upskilling is the process of improving employee skill sets through AI training and development programs. The goal is to minimize skill gaps and prepare employees for changes in their job roles or functions. For example, it could include asking customer care representatives to learn how to use Gen AI and chatbots to answer customer questions in real time with prompt engineering. On the other hand, reskilling is learning an entire set of new skills to do a new job. For example, someone who works in data processing might need to embrace reskilling to learn web development or advanced data analytics.

Organizations Need to Prioritize Upskilling and Reskilling
According to a report by KPMG, organizations are increasingly prioritizing upskilling and reskilling their workers to harness the power of AI and realize true business value. The authors point out that the impact of AI transformation is often underestimated—AI is expected to surpass human intelligence, and organizations cannot be complacent. Yet only 41 percent of organizations are increasing their AI investments. This is concerning since Gen AI is not like past disruptive technology; there can be no one-time upskilling and reskilling session, but rather a continuous learning process.

Leaders in organizations need to get past employee resistance and help to drive AI adoption. How can this be accomplished? The authors note that leaders need to be equipped with the right mindset, knowledge, and skills to guide their AI transformation. By actively using AI in their own work and sharing their experiences with their teams, leaders can create a safe environment for exploration and experimentation, and this in turn helps to create a culture of innovation and continuous learning. Most importantly, the authors state that leaders need to communicate the benefits of AI clearly and transparently: they need to share how the technology can augment and enhance human capabilities rather than replace them.
An In-depth Study on Reskilling and Upskilling
In an instructive report by the World Economic Forum (in collaboration with Boston Consulting Group), the authors introduced an approach to mapping out job transition pathways and reskilling opportunities using the power of digital data, to help guide workers, companies, and governments to prioritize their actions, time, and investments and to focus reskilling efforts efficiently and effectively. To prepare the workforce for the Fourth Industrial Revolution, the authors stated that it was necessary to identify and systematically map out realistic job transition opportunities for workers facing declining job prospects.

When mapping job transition opportunities, the authors asked whether each job transition was viable and desirable. They broke down jobs into a series of relevant, measurable component parts in order to systematically compare them and identify any gaps in knowledge, skills, and experience. Then, they calculated the “job-fit” of any one individual on the basis of objective criteria (a simple illustrative sketch of such a score appears at the end of this piece). Viable future employees were those who were equipped to perform the tasks of the target job (individuals who possessed the necessary knowledge, skills, and experience). When it came to whether the job was desirable, some jobs were simply undesirable because the number of people projected to be employed in that job category was set to decline. Using data from the United States Bureau of Labor Statistics, the authors aimed to find job transition pathways for all.

Let us take an example: the authors discovered several pathways for secretaries and administrative assistants. Some provided opportunities with a pay rise, such as insurance claim clerks, and some provided opportunities with a pay cut, such as library assistants or clerical workers.

The authors emphasized that employers could no longer rely solely on new workers to fill their skills shortages. One of the main issues was the willingness to make reasonable investments in upskilling and reskilling that could bridge workers onto new jobs. Similarly, they stressed that it was not possible to begin the transformation unless there was a focus on individuals’ mindsets and efforts. For instance, they reasoned that some employees would need time off work to gain additional qualifications, and some would require other supports and incentives to engage them in continuous learning. This transformation could involve a shift in the societal mindset such that individuals aspired to be more creative, curious, and comfortable with continuous change. Moreover, the authors noted that no single actor could solve the upskilling and reskilling puzzle alone; in fact, they suggested that a wide range of stakeholders (governments, employers, individuals, educational institutions, labour unions, etc.) needed to collaborate and pool resources to achieve this goal. Further, data-driven approaches were anticipated to bring speed and additional value to upskilling and reskilling. For example, it may be worth exploring the amount of time required to make the various job transitions, or more nuanced evaluations of the economic benefits of these transitions.

How do Organizations Begin Upskilling and Reskilling?
How do Organizations Begin Upskilling and Reskilling?

When it comes to upskilling, BCG recommends that organizations:

assess their needs and measure outcomes

prepare people for change

unlock employees’ willingness to learn

make adopting AI a C-Suite priority

use AI for AI upskilling

Moreover, IBM recommends creating a lasting strategy, communicating clearly, and investing in learning and development. AI tools that are central to upskilling include computer vision, Gen AI, machine learning, natural language processing, and robotic process automation, and upskilling use cases span customer service, financial services, health care, HR, and web development. Organizations can also use AI technologies to enhance the learning experience itself through online learning and development, on-the-job training, skill-gap analysis, and mentorship. AI can provide added value for organizations because it combines institutional knowledge with advanced capabilities, fills important gaps, improves employee retention, and supports the democratization of web development.

Furthermore, McKinsey & Company recommends that organizations use a cross-collaborative, scaled approach to upskilling and reskilling their workforces. To realize the opportunity of Gen AI, a new approach is required to address employee attraction, engagement, and retention. Before rushing in, it is important to clarify business outcomes and how Gen AI investments can enable or accelerate them; this involves defining the skills required to deliver those outcomes and identifying the groups within the organization that need to build them. In addition, it is necessary to use a human-centred approach: from the outset, organizations should acknowledge that many employees experience upskilling and reskilling as a threat to their well-established professional identities. To address this, organizations need to lead with empathy, fostering learning and development, transforming fear into curiosity, and cultivating mindsets of opportunity and continuous learning. And of course, it is necessary to make personalized learning possible at scale, which involves tighter collaboration across the HR function, stronger business integration to embed learning experiences into working environments, and a refreshed approach to the learning and development technology ecosystem.

Benefits of Upskilling and Reskilling in an AI-Driven Environment

There are several benefits of upskilling and reskilling:

Organizations can remain competitive

Employees can increase engagement and job satisfaction

Workers with enhanced skills can improve their creativity, productivity, and efficiency

Organizations can help employees reduce the risk of job displacement

Employees can increase wages and enjoy better job opportunities

Organizations can increase their retention numbers

Indeed, according to an MIT study, evidence suggests that Gen AI, specifically ChatGPT, substantially raised average productivity. Moreover, exposure to ChatGPT increased job satisfaction and self-efficacy, as well as both concern and excitement about automation technologies. We know that employee development programs, including upskilling and reskilling, are highly valued by workers.
More precisely, employees appreciate the following:

Skill assessment and analytics

Personalized learning paths

Adaptive learning platforms

AI-powered content curation

Virtual assistants and chatbots

Simulation and gamification

Predictive analytics for training ROI

Natural language processing for feedback and coaching

Augmented reality (AR) and virtual reality (VR) for learning, mentoring, and training

Continuous learning and adaptation

What We Can Take From All This

Given the above, it may be in organizations’ interests to start the process of upskilling and reskilling, as recommended above. No organization wants to be constantly finding and hiring new people: turnover costs a great deal of money. And no employee wants to stand by and watch an employer replace them with a robot or another form of Gen AI. The solution is to take the time to create a solid plan, beginning with outlining goals and aligning them with what the business needs. It is true that HR professionals who have an upskilling and reskilling plan look a lot more enlightened than those who view AI as a threat. As seen in the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, many employees want, and even expect, this type of training and development at work. Employers need to catch up to their employees, given that 75 percent of employees are already bringing AI into the workplace.
