- HR Automation | voyAIge strategy
HR Automation: Some Issues that Could Arise
By Christina Catenacci
Dec 2, 2024

Key Points
- In the future, Ontario employers will be required to disclose within a publicly advertised job posting whether they use AI to screen, assess, or select applicants for the position
- Where a job applicant participates in a video interview and sentiment analysis is used, privacy and human rights issues could be triggered
- Other jurisdictions, such as New York City, have addressed AI tools in recruitment and hiring very differently than Ontario

My co-founder, Tommy Cooke, just wrote an informative article regarding some of the main HR automation trends that have been pervasive in the business world in 2024. When it comes to these trends, it is worth taking a closer look at some of the issues that could become problematic. More specifically, I would like to examine the uses of AI in the area of recruitment and hiring. Whether it is using AI to automate resume screening or to conduct sentiment analysis of video interviews, there could be some challenges for employers.

In particular, employers will need to comply with Ontario’s Employment Standards Act, namely the AI provisions in Bill 149, in the near future. As of a future date to be named by proclamation, employers making publicly advertised job postings will be required to disclose within the posting whether they use AI to screen, assess, or select applicants. These employers will also have to retain copies of every publicly advertised job posting (as well as the associated application form) for three years after the posting is taken down.

In fact, I recently wrote an article on this topic. I wrote about how skeletal the AI provisions were in this bill, and how the AI-related definitions were nowhere to be found. I compared the requirements in Ontario’s Bill 149 to those in New York City’s (NYC’s) hiring law involving the use of AI and automated decision-making tools. It was striking that NYC required employers to conduct a bias audit before using the tool; post a summary of the results of the bias audit on their websites; notify job candidates and existing employees that the tool would be used to assess them; include instructions for requesting accommodations; and post on their websites the type and source of data that was used for the tool as well as their data retention policies. There were detailed definitions and fines for noncompliance too.

Needless to say, this will be a challenge for provincially regulated employers in Ontario, and it is highly recommended that employers prepare now for these employment law changes. That said, it is understandable that employers may struggle with how to comply with such ambiguous provisions.

Additional issues could arise, namely privacy and human rights issues. Let us take the example of the video interview where sentiment analysis is conducted. This is troubling from a privacy perspective: job applicants may not be comfortable participating in video interviews where their facial expressions and gestures are closely scrutinized with intrusive software that enables AI tools to analyze their sentiments. Moreover, employees who are up for a promotion may not appreciate video analytics of their interview performance being retained for an unknown period of time, and accessible to an unknown number of actors in the workplace.
Because job applicants are in a vulnerable position, they may not feel like they can object to the use of these AI hiring tools.

In addition to privacy concerns, human rights issues could surface. The video interview could reveal various aspects of a person that may fall under any of the prohibited grounds of discrimination. For instance, section 5 of the Ontario Human Rights Code prohibits discrimination in employment on the grounds of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability. It is possible for an AI tool to be biased (unintentionally, but biased nonetheless), where it favours younger candidates, gives them higher interview scores, and ultimately inadvertently discriminates on the ground of age. Since it may not be possible to detect these biased decisions immediately, some job applicants may simply miss out on an employment opportunity due to an ageist AI tool.

It will be interesting to see whether other jurisdictions come up with more extensive provisions to address the use of AI in recruitment and hiring. In Ontario, it is questionable whether we will see additional detail to help employers comply with the requirements.
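To make the idea of a bias audit, such as the one NYC’s hiring law requires, a little more concrete, here is a minimal, hypothetical sketch of the kind of metric such an audit might compute: selection rates by group and the resulting impact ratios. The candidate records, the age bands, and the 0.80 threshold are illustrative assumptions only; they are not prescribed by Bill 149 or by NYC’s law.

```python
# Minimal sketch of an adverse-impact check on an AI screening tool's outcomes.
# The data, group labels, and 0.80 threshold are illustrative assumptions.
from collections import defaultdict

def impact_ratios(outcomes, threshold=0.80):
    """outcomes: list of (group, was_selected) tuples produced by the screening tool."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Impact ratio = a group's selection rate divided by the highest group's rate.
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

screening_results = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_plus", True), ("40_plus", False), ("40_plus", False),
]
for group, (ratio, passes) in impact_ratios(screening_results).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if passes else 'review for possible age bias'}")
```

A real bias audit involves independent auditors, statutory definitions, and far richer data; the point of the sketch is simply that a screening tool’s behaviour can be measured by group before it quietly screens people out on a prohibited ground such as age.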
- De-Risking AI Prompts | voyAIge strategy
De-Risking AI Prompts: How to Make AI Use Safer for Business
By Tommy Cooke, fueled by caffeine and curiosity
Aug 8, 2025

Key Points:
- Small, well-intentioned actions can quietly introduce risk when staff lack clear guidance and boundaries
- De-risking AI isn’t about restricting use. It’s about educating staff, building prompt training into workflows, and developing a support team
- Safe and effective AI use begins when leadership models responsible practices and builds a culture of clarity, not control

Many moons ago, I was working with a data centre on a surveillance experiment. One of the interns was a motivated student. He was tasked with investigating third parties that we suspected were abusing access to sensitive location data within one of our experiment’s smartphones. Without telling anyone, the student sent sample data from our smartphone to an organization we were actively investigating. It was an organization whose credibility was under intense scrutiny for abusive data practices.

The student wasn’t acting out of malice. They were trying to be helpful, to show responsiveness, to move the work forward. But they didn’t understand the stakes. To them, the data was “just a sample.” To us, it signaled loss of control and a risky alignment with an actor we hadn’t finished vetting. The problem wasn’t the intern. The problem was that we hadn’t taken the time to review and discuss contract terms, and to find ways to guide interns on both the best practices and the boundaries around their work.

This is what prompting GPT looks like in many organizations today. Staff are often using AI to accelerate their work, lighten workloads, and inject some creativity into their craft. AI is a tool that is attractive to staff for many reasons, so it is not surprising to us here at VS to hear that staff also turn to AI to respond to mounting work pressures; now that AI is available, executives increasingly expect their teams to work harder, faster, and better with it. But with less than 25 percent of organizations having an AI policy in place, and even fewer educating their staff on how to use AI, it is not surprising that your staff’s use of AI is not only highly risky, but also that you are likely unaware of precisely what they are doing with it. To most organizations we speak to, this risk is entirely unacceptable. While we strongly advocate for having a robust AI policy in place, as well as training around that policy, let’s dive into what you can be doing to de-risk your organization’s AI use.

De-Risking AI Is Not Just About Restricting Use

Before we take a deeper dive, it’s important to address a common knee-jerk reaction among business leaders. There is a temptation to de-risk by locking down: restricting access to GPTs, blocking them at the firewall, or banning prompts that mention sensitive keywords. These reactions are just that: reactions. They are not responses, because they are not planned, considered, and contextualized. They are rigid and inflexible, and as such, they often backfire. Just as important, they send a very clear message to your staff: AI is dangerous and not learnable. This pushes experimentation underground and creates a shadow use problem that’s harder to monitor or support. Instead, and as I mentioned earlier, the safer and more sustainable path is to educate, empower, and build clarity.
It’s impossible to eliminate risk entirely, but you can reduce it by building good habits, providing effective guidance, and sharing an understanding of what safe prompting looks like.

What Team Leads and Business Owners Can Do

If you lead a team or own a business, here are some steps you can take right now to start de-risking GPT use without killing its potential and promise:

Create a prompt playbook. A living document that outlines safe and unsafe prompting practices, gives examples, and evolves over time. This could include do’s and don’ts, safe phrasing suggestions, and reminders about privacy, intellectual property, and any other laws and policies relevant to the scenario at hand. It doesn’t have to be long; it just has to be usable and user-friendly.

Build training around real workflows. It’s quite common for organizations to bring in third parties to offer cookie-cutter training on how to use AI safely and effectively. Don’t do that. Abstraction doesn’t resonate on the front line, and we don’t find it effective with executives either. Bring in an organization that can offer training that reflects how your people actually use AI and the daily nuances of their work.

Schedule prompt review. Designate an AI team. Task them with making it normal to collect, analyze, and assess how your staff talk to AI. Encourage them to ask questions like, “Is this a safe way to talk to AI?” We want to create a culture where prompt sharing and refinement is part of collaboration.

Designate prompt leaders. Identify or train a few people, ideally within the aforementioned team, who can act as internal advisors on AI use. Not to gatekeep, but to support. Let staff know who to ask when they’re unsure if a prompt might cause issues. Make it part of their job description and KPIs to lift up and support employees when they use AI.

Develop internal guardrails. This is also something I have discussed before, and something that Christina and I discuss ad nauseam in our articles. If you’re using GPT through an API, platform, or organization-wide license, get AI policies in place. Set rules, automate flags, or integrate prompt logging for sensitive areas like legal, HR, or R&D (a minimal sketch of what such a flag could look like follows at the end of this piece).

Communicate the purpose. Let people know why prompting guidance and safe use matter. Use examples to show how good prompting helps them avoid mistakes and do better work, not just follow rules. Show the implications when things go wrong, and then follow up by reassuring staff that you have contingency plans in place. Let them know that you have a plan for when things go wrong, and that they shouldn’t be afraid to use AI if they follow their training.

Signal leadership’s involvement. Executives and leaders should model good prompting habits, or at the very least acknowledge the importance of prompting. Lead by example, not just by word.

The intern I mentioned earlier didn’t intend to create risk. The boundaries were drawn, but the intern was not familiar with them. We avoided damage to the project, and the near miss was never about malice or recklessness. It was about misunderstanding what small mistakes could catalyze, especially when they go unrecognized by a staff member.
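To make the internal guardrails point above a little more concrete, here is a minimal, hypothetical sketch of a prompt pre-check that flags potentially sensitive content before a prompt is sent to an external AI service. The keyword patterns, the log file, and the flag-rather-than-block behaviour are illustrative assumptions, not a prescription; a real implementation would reflect your own policies, jurisdictions, and tooling.

```python
# Illustrative sketch only: a pre-send check that flags potentially sensitive
# prompts and logs them for review. Keywords, patterns, and the log file are
# assumptions for illustration, not a recommendation of specific rules.
import re
import json
from datetime import datetime, timezone

SENSITIVE_PATTERNS = {
    "possible_client_data": re.compile(r"\b(client|customer)\s+(list|record|file)s?\b", re.I),
    "possible_personal_info": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # e.g. SIN-like digit runs
    "legal_or_hr_matter": re.compile(r"\b(termination|grievance|settlement|NDA)\b", re.I),
}

def review_prompt(prompt: str, user: str, log_path: str = "prompt_review_log.jsonl"):
    """Return (ok_to_send, flags). Flagged prompts are logged for review, not blocked."""
    flags = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if flags:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "flags": flags,
            "prompt_preview": prompt[:80],  # avoid logging the full sensitive text
        }
        with open(log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")
    return (len(flags) == 0, flags)

ok, flags = review_prompt("Summarize the grievance file for our client list", user="jdoe")
print("send as-is" if ok else f"pause and check with your prompt leader: {flags}")
```

The design choice worth noting is that the sketch flags and logs rather than blocks: consistent with the argument above, the goal is visibility and support, not a lockdown that pushes AI use underground.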
- Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok | voyAIge strategy
Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok: TikTok Must Do More to Protect Children’s Privacy
By Christina Catenacci
Sep 26, 2025

Key Points:
- On September 23, 2025, the OPC released the findings of the joint investigation of TikTok
- The measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate
- Several recommendations were made that TikTok will have to follow, and the company has already started bringing itself into compliance

On September 23, 2025, the OPC released the findings of the joint investigation of TikTok by the OPC and the Privacy Commissioners of Quebec, British Columbia, and Alberta (Privacy Commissioners). The OPC said in its Statement that the measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate.

About TikTok

TikTok is a popular short-form video sharing and streaming platform, which is available both through its website and as an app. Its content features 30–50 second videos, although it also offers other services, such as live streaming and photos. TikTok also provides multiple interactive features such as comments and direct messaging for content creators and users to connect with other users. The company’s core commercial business has been the delivery of advertising, which it enabled by using the information it collected about its users to track and profile them for the ultimate purposes of delivering targeted advertising and personalizing content. TikTok’s platform has a high level of usership, with 14 million active monthly Canadian users as of November 2024.

What was the Investigation About?

The investigation examined TikTok’s collection, use and disclosure of personal information for the purposes of ad targeting and content personalization on the platform, with particular focus on TikTok’s practices as they relate to children. More specifically, the Offices considered whether TikTok:
- engaged in these practices for purposes that a reasonable person would consider appropriate in the circumstances, that were reasonable in their nature and extent, and that fulfilled a legitimate need
- obtained valid and meaningful consent
- and, in the case of individuals in Quebec, met its transparency obligations under Quebec’s Act Respecting the Protection of Personal Information in the Private Sector

What did the Privacy Commissioners Find?

The Privacy Commissioners found the following:

Issue 1: Was TikTok collecting, using, and disclosing personal information, in particular with respect to children, for an appropriate, reasonable, and legitimate purpose?

TikTok collected and made extensive use of potentially sensitive personal information of all its users, including both adults and children. Despite TikTok’s Terms of Service stating that users under the age of 13 were not allowed to use the platform, the investigation ultimately found that TikTok had not implemented reasonable measures to prevent its collection and use of the personal information of underage users.
Therefore, the Privacy Commissioners found that TikTok’s purposes for collecting and using underage users’ data, to target advertising and personalize content (including through tracking, profiling and the use of personal information to train machine learning and refine algorithms), were not purposes that a reasonable person would consider to be appropriate, reasonable, or legitimate under the circumstances. TikTok’s collection and use of underage users’ data for these purposes did not address a legitimate issue, or fulfill a legitimate need or bona fide business interest.

More specifically, when balancing interests (an individual’s right to privacy and a corporation’s need to collect personal information), it was important to consider the sensitivity of the information. The Privacy Commissioners pointed out that information relating to children was particularly sensitive. On a site visit, they noted that the hashtags “#transgendergirl” and “#transgendersoftiktok” were displayed as options for an advertiser to use as targeting criteria. TikTok personnel were unable to explain, either during the site visit or when offered a follow-up opportunity, why these hashtags had been available on the ad manager platform as options. The company later confirmed that the hashtags should not have been available, had since been removed as options, and had not been used in any Canadian ad campaigns from 2023 to the date of the site visit in 2024. The Privacy Commissioners stated, “While TikTok resolved this specific issue after it was discovered by our investigation, we remain concerned that this sensitive information had not been caught by TikTok proactively and that individuals could potentially have been targeted based on their transgender identity.”

Even where certain elements of the information that TikTok used for profiling and targeting its users (including underage users) could be considered less sensitive when taken separately, when taken together, associated with a single user, and refined by TikTok with the use of its analytics and machine learning tools, the information could be rendered more sensitive, since insights could be inferred from it in relation to the individual, such as their habits, interests, activities, location, and preferences.

There were a large number of underage users (under 13 years), notwithstanding the rules that they were not allowed on the platform. TikTok has been banning an average of about 500,000 accounts per year in Canada; in 2023 alone, 579,306 accounts were removed for likely belonging to children under 13. The Privacy Commissioners concluded that the actual number of accounts held by underage users on the platform was likely much higher.

What’s more, TikTok used an “age gate”, which required the user to provide a date of birth during the account creation process. When a date of birth corresponded to an age under 13, account creation was denied, and the device was temporarily blocked from creating an account. The Privacy Commissioners determined that this was the only age assurance mechanism that TikTok implemented at the sign-up/registration stage to prevent underage users from creating an account and accessing the platform.

Moreover, TikTok had a moderation team to identify users who were suspected to be underage, and members of this team were provided with specific training to identify individuals under the age of 13.
The moderation team relied on user reports (where someone, such as a parent, contacted TikTok to report that a user was under the age of 13) and automated monitoring (which included scans for keywords in text inputted by the user that would suggest they could be under the age of 13, such as “I am in grade three,” and, in the case of TikTok LIVE, the use of computer vision and audio analytics to help identify individuals under 18 years). Then, moderators conducted manual reviews of accounts identified from these flags. These included a review of posted videos, comments, and biographic information. This was done to decide whether to ban an account.

In light of the deficiencies in TikTok’s age assurance mechanisms, the Privacy Commissioners found that TikTok implemented inadequate measures to prevent those users from accessing, and being tracked and profiled on, the platform. TikTok had no legitimate need or bona fide business interest for its collection and use of the sensitive personal information of these underage users in relation to all the jurisdictions involved, whether federal, Alberta, British Columbia, or Québec.

The Privacy Commissioners stated: “We are deeply concerned by the limited measures that the company has put in place to prevent children from using the platform. We find it particularly troubling that even though TikTok has implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, evidence suggests that the company did not consider using those tools or other similar tools to prevent underage users from accessing, and being tracked and profiled on, the platform.”

Issue 2: Did TikTok obtain valid and meaningful consent from its users for tracking, profiling, targeting and content personalization?

It was not necessary for the Privacy Commissioners to consider this question, since organizations were not allowed to rely on consent for the collection, use, or disclosure of personal information when its purpose was not appropriate, reasonable, or legitimate within the meaning of the legislation. They stated, “In other words, obtaining consent does not render an otherwise inappropriate purpose appropriate.” In this case, the Privacy Commissioners had already found that TikTok’s collection and use of personal information from children was not for an appropriate purpose.

That said, the Privacy Commissioners decided to continue the analysis regarding meaningful consent from adults (aged 18 and above) and youth (aged 13–17). Ultimately, the Privacy Commissioners found that TikTok did not explain its practices (related to tracking, profiling, ad targeting and content personalization) to individuals in a manner that was sufficiently clear or accessible, and therefore did not obtain meaningful consent from platform users, including youth users.

More specifically, the legislation (excluding that of Québec; see Issue 2.1) required consent for the collection, use, or disclosure of personal information, unless an exception applied. The type of consent required varied depending on the circumstances and the sensitivity of the personal information. When taken together, the personal information collected and used by TikTok via tracking and profiling for the purposes of targeting and content personalization could be sensitive. Where the personal information involved was sensitive, the organization had to obtain express consent. This is especially true since many of TikTok’s practices were invisible to the user.
Where the collection or use of personal information fell outside the reasonable expectations of an individual, or outside what they would reasonably provide voluntarily, the organization generally could not rely upon implied or deemed consent.

For consent to be meaningful, organizations had to inform individuals of their privacy practices in a comprehensive and understandable manner. In addition, organizations had to place additional emphasis on four key elements:
- What personal information is being collected;
- With which parties personal information is being shared;
- For what purposes personal information is collected, used, or disclosed; and
- Risk of harm and other consequences.

The Privacy Commissioners concluded that more needed to be done by TikTok to obtain valid and meaningful consent from its users. This was important with respect to TikTok’s privacy communications (during the account creation process, its Privacy Policy, as well as pop-ups and notifications, and supporting materials like the help centre and FAQs), and the youth-specific privacy protections, such as the default privacy settings that made accounts private by default without the ability to live stream or send and receive direct messages. Although TikTok created videos, added a youth portal, and prepared documentation aimed at youth, more needed to be done to protect their privacy.

In addition, when it came to adults 18 years and older, the Privacy Commissioners determined that TikTok did not explain its privacy practices with respect to the collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization in a manner that would result in meaningful consent being obtained from those users. Though the company made significant information available to users regarding its privacy practices, including through just-in-time notices and in a layered format, and even tried to improve its practices, the Privacy Commissioners found that: TikTok did not provide certain key information about its privacy practices up-front; its Privacy Policy did not explain its practices in sufficient detail for users to reasonably understand how their personal information would be used and for what purposes; other available documents with further details were difficult to find and not linked in the Privacy Policy; and many key documents, including TikTok’s Privacy Policy, were not made available in French. Also, TikTok failed to adequately explain its collection and use of users’ biometric information.

When it came to the meaningfulness of consent from youth users, it became clear that the same communications were used for both youth and adults, and they were similarly inadequate. The Privacy Commissioners pointed out that children were particularly vulnerable to the risks arising from the collection, use, and disclosure of their personal information. In fact, UNICEF Canada has called for a prohibition on the use of personal data in the development of targeted marketing towards children and young people because it has been established that they are extremely vulnerable to such advertising.
They also noted that there are other potential general harms to children and youth resulting from targeted advertising, including the marketing of games that can lead to the normalization of gambling, and an increased risk of identity theft and fraud through profiling associated with targeted advertising.

TikTok failed to obtain meaningful consent from youth for its collection and use of their personal information, including via tracking and profiling, for purposes of ad targeting and content personalization. More specifically, the Privacy Commissioners found that, in addition to the fact that TikTok’s privacy communications were inadequate to support consent from adults, TikTok’s youth-specific privacy measures were also inadequate to ensure meaningful consent from youth, for the following reasons:
- youth-specific communications in TikTok’s portal were not easy to find;
- none of those communications explained TikTok’s collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization; and
- TikTok provided no evidence to establish that its communications had, in fact, led to an understanding by youth users of what personal information TikTok would use, and how, for such purposes.

The Privacy Commissioners stated: “Given these risks and sensitivities, we would expect TikTok to implement a consent model and privacy communications that seek to ensure that individuals aged 13-17 can meaningfully understand and consent to TikTok’s tracking, profiling, targeting and content personalization practices when they use the platform. This includes an expectation that TikTok would develop their communications intended for users aged 13-17 in language that those users can reasonably understand, taking into account their level of cognitive development. TikTok should also make clear to those users the risk of harm and other consequences associated with use of the platform consistent with the Consent Guidelines and section 6.1 of PIPEDA. In light of the fact that younger users may not be aware of the existence and implications of targeted advertising, TikTok’s privacy communications should include prominent up-front notification that targeted ads may be delivered to them on the platform to influence their behaviour.”

Issue 2.1: Did TikTok meet its obligations to inform the persons concerned with respect to the collection and use of personal information to create user profiles for the purposes of ad targeting and content personalization?

Rather than an obligation to obtain consent, and regardless of the type of personal information, the Québec legislation provides that when personal information is collected directly from the person concerned, the company collecting the information has an obligation to inform the person concerned. A person who provides their personal information in accordance with the privacy legislation consents to its use and its communication for the purposes for which it was collected.

In this case, TikTok collects personal information from the user using technology with functions that enable it to identify, locate, or profile the user. Specifically, TikTok uses its platform (website and app) along with associated technologies such as computer vision and audio analytics, as well as the three age models, to collect and infer information about users (including their demographics, interests and location) to create a profile about them.
These profiles can in turn be used to assist in the delivery of targeted advertising and tailored content recommendations on the platform.

Since TikTok did not meet the obligation to inform the person, the Privacy Commissioners found that the collection of personal information by TikTok was not compliant with Québec’s legislation. Also, TikTok did not, by default, deactivate functions that allowed a person to be identified, located, or profiled using personal information. Since users did not have to make an active gesture to activate these specific functions, the Privacy Commissioners found that TikTok contravened the requirements of Québec’s legislation. Moreover, the fact that TikTok was not ensuring that the privacy settings of its technological product provided the highest level of privacy by default, without any intervention by the person concerned, also contravened the legislation. Consequently, TikTok’s practices did not comply with sections 8, 8.1 and 9.1 of Quebec’s private sector privacy legislation. The Privacy Commissioners stated: “Subsequent to engagement with the Offices, a new stand-alone privacy policy for Canada was published in July 2025.”

What were the Recommendations that TikTok will be working to follow?

Given the above findings, the company agreed to work with the Privacy Commissioners to resolve the matter. More specifically, TikTok committed to the following:
- Implement three new enhanced age assurance mechanisms that are to be demonstrably effective at keeping underage users off the platform
- Enhance its privacy policy to better explain its practices related to targeted advertising and content personalization, and make additional relevant privacy communications more accessible, including through links in the privacy policy and up-front notices
- Cease allowing advertisers to target under-18 users, except via generic categories such as language and approximate location
- Publish a new plain-language summary of its privacy policy for teens, and develop and distribute a video to teen users to highlight certain of TikTok’s key privacy practices, including its collection and use of personal information to target ads and personalize content
- Enhance privacy communications, including through prominent up-front notices, regarding its collection and use of biometric information and the potential for data to be processed in China
- Implement and inform users of a new “Privacy Settings Check-up” mechanism for all Canadian users, which would centralize TikTok’s “most important and tangible” privacy settings and allow users to more easily review, adjust, and confirm those setting choices

What Has TikTok Done in Response to the Findings?

In response to the joint findings and recommendations, the OPC News Release states that TikTok has agreed to strengthen privacy communications to ensure that users, and in particular younger users, understand how their data could be used, including for targeted advertising and content personalization. In addition, TikTok has also agreed to enhance age-assurance methods to keep underage users off TikTok and to provide more privacy information in French. In fact, the company quickly began making some improvements during the investigation. As a result, the matter was found to be well-founded and conditionally resolved with respect to all three issues. The Privacy Commissioners will continue to work with TikTok to ensure the final resolution of the matter through its implementation of the agreed-upon recommendations.
The Privacy Commissioner of Canada, Philippe Dufresne, stated: “TikTok is one of the most prevalent social media applications used in Canada, and it is collecting vast amounts of personal information about its users, including a large number of Canadian children. The investigation has revealed that personal data profiles of youth, including children, are used at times to target advertising content directly to them, which can have harmful impacts on their well-being. This investigation also uncovered the extent to which personal information is being collected and used, often without a user’s knowledge or consent. This underscores important considerations for any organization subject to Canadian privacy laws that designs and develops services, particularly for younger users. As technology plays an increasingly central role in the lives of young people in Canada, we must put their best interests at the forefront so that they are enabled to safely navigate the digital world.”

For more information, readers can view the Backgrounder: Investigation into TikTok and user privacy.

What Can we Take from This Development?

Although TikTok generally disagreed with the Privacy Commissioners’ findings, the company did commit to working with the Privacy Commissioners and had already started to make improvements. What we can see from this case is that when it comes to youth privacy, there can be no excuse for faulty privacy protections: it is important to get it right and provide the highest level of privacy by default. This is especially true with respect to complying with Québec’s private sector privacy legislation, mainly because Québec has already created private sector privacy legislation that closely resembles the more protective General Data Protection Regulation.

It is notable that the Privacy Commissioners said that they were deeply concerned by the limited protective measures that TikTok had in place to protect youth privacy, and found it particularly troubling that even though TikTok implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, the company did not consider using those tools or other similar tools to prevent underage users from accessing, and being tracked and profiled on, the platform.

We can see how important it is for companies like TikTok to ensure that the purposes for collection, use, and disclosure are what a reasonable person would consider to be appropriate, reasonable, or legitimate under the circumstances. What is more, companies need to make sure that they obtain valid and meaningful consent, in line with the Guidelines for obtaining meaningful consent. Also, adequate age assurance mechanisms need to be in place to ensure that underage users are not let onto the platform. And when it comes to consent to use biometric information, more needs to be done to ensure that there is proper express consent due to the sensitive nature of the information. Lastly, we cannot forget the importance of ensuring that companies communicate as clearly as possible and properly explain things such as company practices in a company Privacy Policy.

And finally, it is important to remember what the Privacy Commissioners said: obtaining consent does not render an otherwise inappropriate purpose appropriate. If the purposes are not appropriate, users will not be able to consent.
So, if you cannot protect the users with appropriate or reasonable measures, there is no point in asking for consent to collect or use the personal information.
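As a purely illustrative aside, here is a minimal sketch of the kind of self-declared age gate described in the findings above. The field names, the block duration, and the flow are assumptions for illustration and are not TikTok’s actual implementation; the sketch’s obvious weakness, that it trusts whatever date of birth the user types in, is precisely why the Privacy Commissioners viewed a sign-up age gate on its own as inadequate.

```python
# Illustrative sketch of a self-declared age gate at sign-up. Names, the
# temporary device block, and its duration are assumptions, not TikTok's code.
from datetime import date, datetime, timedelta

MIN_AGE = 13
BLOCK_DURATION = timedelta(hours=24)  # assumed value for illustration
blocked_devices = {}  # device_id -> time at which the temporary block expires

def attempt_signup(device_id, birth_date_str, today=None):
    today = today or date.today()
    if device_id in blocked_devices and datetime.now() < blocked_devices[device_id]:
        return "blocked: device recently failed the age gate"
    birth_date = datetime.strptime(birth_date_str, "%Y-%m-%d").date()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < MIN_AGE:
        blocked_devices[device_id] = datetime.now() + BLOCK_DURATION
        return "denied: under 13, device temporarily blocked"
    return "account created"

# The gate only ever sees the date of birth the user chooses to type in,
# which is why it catches some underage users and misses many others.
print(attempt_signup("device-1", "2015-06-01"))  # denied
print(attempt_signup("device-2", "1990-06-01"))  # account created
```

The contrast drawn in the findings is that TikTok already operated far more capable age-estimation analytics for its business purposes than a simple gate like this.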
- AI in Health Care | voyAIge strategy
AI in Health Care: Some Mitigation Strategies, Use Cases, and 2025 Predictions
Christina Catenacci, Human Writer and Editor
Dec 13, 2024

Key Points
- This is an exciting time for using AI in the medical field
- Both the Canadian and the American Medical Associations have provided guiding principles for the use of AI by physicians
- Some of the main use cases in the medical field involve research, medical education, administrative assistance for medical professionals, diagnosis, treatment, monitoring, and more
- The use cases that are predicted to be especially useful in 2025 are striking

AI is becoming pervasive in medicine, and many in the health care field predict that it is going to continue to proliferate in this realm well into the future.

Input from Canadian and American Medical Associations

The Canadian Medical Association (CMA) notes that the rapid evolution of AI technologies is expected to improve health care and change the way it is delivered. In fact, the CMA states that AI is being explored, along with other tools, as a means of increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care. There has been promise for the following:
- clinical application in image-intensive fields, including radiology, pathology, ophthalmology, dermatology, and image-guided surgery
- broader public health purposes, such as disease surveillance

Interestingly, Health Canada has already approved several AI applications, but it is worth noting that the CMA advises doctors that: “Before deciding to use an AI-based technology in your medical practice, it is important to evaluate any findings, recommendations, or diagnoses suggested by the tool. Most AI applications are designed to be clinical aids used by clinicians as appropriate to complement other relevant and reliable clinical information and tools. Medical care provided to the patient should continue to reflect your own recommendations based on objective evidence and sound medical judgment”

Moreover, the CMA stresses that physicians do the following:
- Ensure that AI is used to complement clinical care. Medical care should reflect doctors’ own recommendations based on objective evidence and sound medical judgment
- Critically review and assess whether the AI tool is suited for its intended use and the nature of your practice
- Consider the measures that are in place to ensure the AI tool’s continued effectiveness and reliability
- Be mindful of legal and medical professional obligations, including privacy, confidentiality, and how patient data is transferred, stored, and used (and whether reasonable safeguards are in place)
- Be aware of bias and try to mitigate it as much as possible
- Have regard to the best interests of the patient

The American Medical Association (AMA) similarly recognizes the immense potential of AI in health care in enhancing diagnostic accuracy, treatment outcomes, and patient care; simultaneously, it appreciates that there are ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI. To that end, the AMA created principles that call for comprehensive policies that mitigate risks to patients and physicians, ensuring that the benefits of AI in health care are maximized while potential harms are minimized.
These key principles include:

Oversight: The AMA encourages a whole-of-government approach to implement governance policies to mitigate risks associated with health care AI, but also acknowledges that non-government entities have a role in appropriate oversight and governance of health care AI.

Transparency: The AMA emphasizes that transparency is essential for the use of AI in health care to establish trust among patients and physicians. Key characteristics and information regarding the design, development, and deployment processes should be mandated by law where possible, including potential sources of inequity in problem formulation, inputs, and implementation.

Disclosure and Documentation: The AMA calls for appropriate disclosure and documentation when AI directly impacts patient care, access to care, medical decision making, communications, or the medical record.

Generative AI: To manage risk, the AMA calls on health care organizations to develop and adopt appropriate policies that anticipate and minimize negative impacts associated with generative AI. Governance policies should be in place prior to its adoption and use.

Privacy and Security: Built upon the AMA’s Privacy Principles, the AMA prioritizes robust measures to protect patient privacy and data security. AI developers have a responsibility to design their systems from the ground up with privacy in mind. Developers and health care organizations must implement safeguards to instill confidence in patients that personal information is handled responsibly. Strengthening AI systems against cybersecurity threats is crucial to their reliability, resiliency, and safety.

Bias Mitigation: To promote equitable health care outcomes, the AMA advocates for the proactive identification and mitigation of bias in AI algorithms to promote a health care system that is fair, inclusive, and free from discrimination.

Liability: The AMA will continue to advocate to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current legal approaches to medical liability.

Furthermore, the AMA principles address when payors use AI and algorithm-based decision-making to determine coverage limits, make claim determinations, and engage in benefit design. The AMA urges that payors’ use of automated decision-making systems not reduce access to needed care, nor systematically withhold care from specific groups. It states that steps should be taken to ensure that these systems are not overriding clinical judgement and do not eliminate human review of individual circumstances. There should be stronger regulatory oversight, transparency, and audits when payors use these systems for coverage, claim determinations, and benefit design.

Another thing to consider is that the AMA has released Principles for Augmented Intelligence Development, Deployment, and Use, which provides explanatory information that elaborates on the above principles.

Examples of Use Cases

There are several examples of AI use in health care. Here are some of the main ones we came across:

Early warning systems: This AI tool has reduced unexpected deaths in hospital by 26 percent. An AI-based early warning system flagged incoming results showing that the patient's white blood cell count was very high and caught an instance of cellulitis (a bacterial skin infection that can cause extensive tissue damage).
Another example has been seen in detecting instances of breast cancer: AI is becoming a member of the medical team.

Optimizing chemotherapy treatment plans and monitoring treatment response: Oncologists rely on imprecise methods to design chemotherapy regimens, leading to suboptimal medication choices. AI models that assess clinical data, genomic biomarkers, and population outcomes help determine optimal treatment plans for patients. Also, cancer treatment plans require frequent adjustment, but quantifying how patients respond to interventions remains challenging. AI imaging algorithms track meaningful changes in tumors over the course of therapy to determine next steps.

Robotic surgery: AI is enabling surgical robots to perform complex operations with greater precision and control, resulting in reduced recovery times, fewer complications, and better patient outcomes. These AI systems are used for minimally invasive surgeries as well.

Medical research and training: AI is being used for new and repurposed drug discovery and clinical trials. Additionally, medical students are receiving some feedback from AI tutors as they learn to remove brain tumors and practice skills on AI-based virtual patients.

Improving precision for Computed Tomography (CT) and Magnetic Resonance (MR) image reconstruction: Radiology departments are not being replaced; they are being improved by bolstering precision and speed. That is, AI-enabled camera technology can automatically detect anatomical landmarks in a patient to enable fast, accurate and consistent patient positioning. Also, AI-enabled image reconstruction can help to reduce radiation dose and improve CT image quality, thereby supporting diagnostic confidence. This all helps radiologists read images faster and more accurately. For instance, AI assessed Alzheimer’s disease brain atrophy rates with 99% accuracy using longitudinal MRI scans.

Precision oncology: AI allows for the development of highly personalized treatment plans based on a patient’s individual health data, including their genetics, lifestyle, and treatment history.

Remote medicine: With wearable devices and mobile health applications, AI can continuously monitor patients remotely. The data collected is analyzed in real time to provide updates on the patient’s condition, making it easier for healthcare providers to intervene early if something goes wrong.

Administration, professional support, and patient engagement: AI can help professionals identify and reduce fraud, receive necessary supports, and support patients.

What is in Store for 2025?

Indeed, it is an exciting time for medical professionals. AI is fundamentally reimagining our approach to human health. Here are some AI trends that are expected to dominate the medical field in 2025:

Predictive healthcare: Machine learning algorithms now analyze complex datasets from genetic profiles, wearable devices, electronic health records, and environmental factors to create comprehensive health risk assessments. There are platforms that can predict disease onset and recommend preventative interventions and treatment plans.

Advanced precision medicine and genomic engineering: Driven by remarkable advances in genomic engineering and CRISPR technologies, this is becoming standard practice.
The ability to precisely edit genetic codes has opened up revolutionary treatment possibilities for previously untreatable genetic disorders, including correcting genetic mutations, developing targeted therapies, and making customized treatment plans.

Immersive telemedicine and extended reality healthcare: Extended reality (XR) technologies, including augmented reality (AR) and virtual reality (VR), have transformed remote medical consultations and patient care. Surgeons can now perform complex procedures using haptic feedback robotic systems controlled remotely, while patients can receive comprehensive medical consultations through hyper-realistic virtual environments. This is important when dealing with patients in rural areas and underserved regions.

Internet of medical things and continuous health monitoring: This has matured into a robust, interconnected ecosystem of smart medical devices that provide continuous, non-invasive health monitoring. Wearable and implantable devices now offer real-time, comprehensive health insights that go far beyond simple fitness tracking. It is important for monitoring, detecting, and transmitting data to healthcare providers.

Sustainable and regenerative biotechnologies: Some of these technologies include: biodegradable medical implants that naturally integrate with human tissue; regenerative therapies that can repair or replace damaged organs; sustainable production of medical treatments with minimal environmental impact; and bioengineered solutions for addressing climate-related health challenges.
- DeepSeek in Focus | voyAIge strategy
DeepSeek in Focus: What Leaders Need to Know
By Tommy Cooke, fueled by caffeine and curiosity
Feb 14, 2025

Key Points:
- DeepSeek is a major disruptor in the AI market, rapidly gaining adoption due to its affordability and open-source appeal
- Despite being open-source, DeepSeek's data is stored in China, raising security, compliance, and censorship concerns
- Organizations must weigh the benefits of open-source AI against the risks of data privacy, geopolitical scrutiny, and regulatory uncertainty

In just over a year, DeepSeek has gone from an emerging AI model to leaving a lasting imprint on the global AI market. Developed in China as an open-source large language model (LLM), it is rapidly gaining attention. In fact, as of January 2025 it has overtaken ChatGPT as the most downloaded free app on Apple iPhones in the U.S. DeepSeek's meteoric rise signals a shift in AI adoption trends and the AI industry itself, and that warrants awareness and conversation among organization leaders; as people gravitate toward alternative AI models outside the traditional Western ecosystem, it is important to understand the what, why, and how of this recent AI phenomenon. As of February 2025, it is critically important to ensure that you are prepared to respond to DeepSeek in your organization. Leaders must accept the likelihood that DeepSeek is already being used by their workforce for work purposes.

DeepSeek is a startup based in Hangzhou, China. It was founded in 2023 by Liang Wenfeng, an entrepreneur who also founded the $7bn USD hedge fund group High-Flyer in 2016. In January 2025, DeepSeek released its latest AI model, DeepSeek R1. It is a free AI chatbot that looks, feels, sounds, and responds very similarly to ChatGPT. Unlike proprietary AI models developed in the West, like ChatGPT, Claude, and Gemini, DeepSeek is freely available for organizations to customize and use at will.

Part of the reason it is making waves is not only how quickly and easily it can be adopted and used, but also that it is significantly cheaper to build than its competitors' designs. While the exact figures are currently being debated, there is general agreement that OpenAI - the company that owns, produced, and maintains ChatGPT - spent at least two to three times more to train its AI models. This point is very important to understand because it explains a lot about economic fallout, the balance of global AI development, market disruption, as well as accessibility and control.

The implications stretch beyond cost alone. They affect how organizations plan AI adoption, determine their budgets, and structure their technology ecosystems. If AI models can be produced at a fraction of the cost of the development norm while maintaining competitive performance, organizations must consider how this changes their long-term investment in AI. Are proprietary solutions worth the price if open-source alternatives are rapidly closing the gap? As importantly, what are the hidden risks and trade-offs that come with choosing a model purely on affordability?

Security & Compliance Concerns with DeepSeek

DeepSeek’s rapid rise comes with critical questions for organizations, especially regarding security, governance, and compliance. First, DeepSeek was developed in China, and that is where its data is stored as well. The Western world is thus concerned about how data are processed, who has access to them, and whether companies using DeepSeek are exposing themselves to regulatory or cybersecurity risks.
For organizations bound by stringent data privacy regulations, this is likely a major red flag.

Secondly, DeepSeek is receiving considerable criticism for its censorship policies. It will not discuss certain political topics, and it was trained on filtered datasets. This impacts the reliability of its responses and raises concerns about bias in AI-generated content. This alone, at least in part, explains why South Korea, Australia, and Taiwan have banned it.

Third, today's turbulent geopolitical climate means that Western governments are increasingly wary of foreign influence. AI is no exception. DeepSeek is being closely monitored by governments and organizations around the globe, which are asking whether the company and its AI should be restricted or even outright banned. Organizations looking for a cost-effective entry to powerful AI are certainly attracted to and interested in DeepSeek - and they are considering the long-term viability and potential implications of adopting a tool in the face of regulatory and political scrutiny.

Is Your Staff Using DeepSeek? Guidance for Leaders

Given the incredible rate at which AI is being installed on the personal devices of your employees - with DeepSeek clearly being no exception - there are things we feel strongly that you should consider:

Audit your AI Usage. Find out who in your company is using chatbots, especially DeepSeek - and how. Are employees feeding sensitive data into the model? Have they uploaded business plans, client data, or personal information of their patients or coworkers? Do they understand the risks?

Assess Risk. What do your technology and AI use policies say? Do you have them yet? Has your organization established clear policies and procedures on AI tools that store data outside your legal jurisdiction? Ask yourself: would using DeepSeek put your organization at risk of legal noncompliance or even reputational harm? Who are your external stakeholders and investors? It's critical that you start thinking about their perceptions, expectations, and needs.

Engage and Communicate. One of our clients recently told us that an executive in their organization instructed their respective staff to freely use AI chatbots at will - without discussing the decision and announcement with legal counsel. As you might imagine, this raised many concerns about understanding and mitigating AI-related risks. If you have not done so already, now is the time to articulate clearly your organization's stance on AI to employees, stakeholders, and partners. Organization leaders need to be strategizing not only how they communicate to staff about AI, but they also need to be thinking about communication along the lines of organizational engagement. What are your employees thinking about AI, truly? Do they silently use it to keep up with the creative or time-consuming demands of their job? Are they afraid that you will find out and that they will be punished? Do they feel supported by you, and are they willing to provide honest feedback?

How Open is Open-source AI?

Industry observers are debating whether DeepSeek’s biggest strength is also its biggest risk: it is open-source. What that means is that companies can see, download, and edit the AI's code. This opens interesting and valuable doors to many users and organizations. For example, openly readable code means that it is openly verifiable and openly scrutinized.
If something exists in the code that can be deemed a fatal flaw, a security concern, or a path toward bias and harmful inference or decision-making, it can be detected more easily, because the global community of talented volunteer programmers and engineers can find and address any such issues. In theory, this means that managing security, compliance, and governance yields more flexible and transparent control. With a proprietary AI vendor, by contrast, the code is not disclosed or opened to public scrutiny, so if something goes wrong, it is often your problem to address.

On the other hand, industry observers are also questioning how "open" DeepSeek truly is. By conventional understandings, open-source means that code is openly available for anyone to inspect, modify, and use. However, when it comes to AI, it is much more than code. AI must be trained, and training requires data. DeepSeek does not provide full transparency on what data it was trained on, nor has it been entirely forthcoming about the details of its training process. These points are important because they are forcing organizations and governments to question DeepSeek's transparency and trustworthiness.

As an organizational leader, you need to ask yourself: is open-source AI a strategic advantage or a risk? Who controls the AI for your organization? You, or vendors outside of your jurisdiction?

DeepSeek is more than just another AI model. It’s a disruptor in the AI industry. Many see marketplace disruption as a positive - something that challenges norms, standards, and best-in-class models. However, there is much more that is potentially disrupted here. Those disruptions are not merely global, economic, and political; they are in your organization. Leaders must recognize that AI strategy is no longer just about choosing the most powerful model. It’s about choosing the right balance between control, risk, and innovation.
- EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA) | voyAIge strategy
EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA): The fines were huge. Apple was fined €500 million, and Meta was fined €200 million
By Christina Catenacci, human writer
May 9, 2025

Key Points:
- Apple and Meta were fined by the EU Commission for violating the DMA: Apple was fined €500 million, and Meta was fined €200 million
- The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy
- Noncompliance with the DMA carries serious consequences in the form of fines, penalties, and additional fines in the case of continued noncompliance

On April 22, 2025, the EU Commission announced that Apple breached its anti-steering obligation under the DMA, and that Meta breached the DMA obligation to give consumers the choice of a service that uses less of their personal data. As a result, the Commission fined Apple and Meta €500 million and €200 million respectively.

But what is the DMA? What were these obligations that Apple and Meta violated? Why were the fines so high? Does this affect businesses in Canada or the United States? This article answers these questions.

What is the DMA?

The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy. The main goal is to regulate large online platforms, called gatekeepers (big companies like Apple, Meta, or Google), so that these large companies do not abuse their market power. Essentially, the purpose of the DMA is to make the markets in the digital sector fairer and more contestable (a contestable market is one that is fairly easy for new companies to enter). In other words, the market is more competitive thanks to the DMA.

More specifically, gatekeepers have to comply with the do’s (i.e. obligations) and don’ts (i.e. prohibitions) listed in the DMA. For example, gatekeepers have to:
- allow third parties to inter-operate with the gatekeeper’s own services in certain specific situations;
- allow their business users to access the data that they generate in their use of the gatekeeper’s platform;
- provide companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements hosted by the gatekeeper; and
- allow their business users to promote their offer and conclude contracts with their customers outside the gatekeeper’s platform.

Also, gatekeepers must not:
- treat services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform;
- prevent consumers from linking up to businesses outside their platforms;
- prevent users from uninstalling any pre-installed software or app if they wish to do so; and
- track end users outside of the gatekeepers' core platform service for the purpose of targeted advertising, without effective consent having been granted.

As a result of the DMA, consumers have more choice of digital services and can install preferred apps (with choice screens), gain more control over their personal data (users decide whether the companies can use their data), can port their data easily to the platform of their choice, have streamlined access, and have unbiased search results.

As we have just seen, the consequences of noncompliance can be quite costly. In particular, there can be fines of up to 10 percent of the company’s total worldwide annual turnover, or up to 20 percent in the event of repeated infringements.
Moreover, there could be periodic penalty payments of up to five percent of the average daily turnover. Furthermore, in the case of systematic infringements by gatekeepers, additional remedies may be imposed on the gatekeepers after a market investigation (these remedies have to be proportionate to the offence committed). And if necessary as a last resort option, non-financial remedies can be imposed, including behavioural and structural remedies like divestiture of (parts of) a business. In Canada, we have the Competition Act ; similarly, the United States has antitrust laws such as the Sherman Antitrust Act . For example, in Canada, there was a recent court action brought by the Competition Bureau against Google for abusing a monopoly with search. Likewise, there was an antitrust action brought against Meta by the Antitrust Division of the Department of Justice in the United States regarding its acquisition of Instagram and WhatsApp. I’d be remiss not to mention that Canadian and American businesses could be subject to the DMA in certain circumstances. This is because the DMA applies to core platform services provided or offered by gatekeepers to business users established in the EU or end users established or located in the EU, irrespective of the place of establishment or residence of the gatekeepers and irrespective of the law otherwise applicable to the provision of service. What this means is, regardless of location or residence of gatekeepers, if they offer their services to users in the EU, they are subject to the DMA . This requirement can be found in Article 1 of the DMA . Why was Apple fined €500 million? Under the DMA, app developers distributing their apps on Apple's App Store should be able to inform customers (free of charge) of alternative offers outside the App Store, steer them to those offers, and allow them to make purchases. However, Apple does not do this. Due to a number of restrictions imposed by Apple, app developers cannot fully benefit from the advantages of alternative distribution channels outside the App Store. Similarly, consumers cannot fully benefit from alternative and cheaper offers since Apple prevents app developers from directly informing consumers about such offers. The company has failed to demonstrate that these restrictions are objectively necessary and proportionate. Therefore, the Commission has ordered Apple to remove the technical and commercial restrictions on steering, and to refrain from perpetuating the non-compliant conduct in the future, which includes adopting conduct with an equivalent object or effect. When imposing the €500 million fine, the Commission has taken into account the gravity and duration of the non-compliance. At this point, the Commission has closed the investigation on Apple's user choice obligations, thanks to early and proactive engagement by Apple on a compliance solution. And why was Meta fined €200 million? Under the DMA , gatekeepers must seek users' consent for combining their personal data between services. The users who do not consent must have access to a less personalised but equivalent alternative. But Meta did not do this. Instead, it introduced a binary ‘Consent or Pay' advertising model. Under this model, EU users of Facebook and Instagram had a choice between consenting to personal data combination for personalised advertising, or paying a monthly subscription for an ad-free service. 
As a result, the Commission found that Meta's model was not compliant with the DMA, because it did not give users the required specific choice to opt for a service that used less of their personal data but was otherwise equivalent to the 'personalised ads' service. Meta's model also did not allow users to exercise their right to freely consent to the combination of their personal data. Subsequently (after numerous exchanges with the Commission), Meta introduced another version of the free personalised ads model, offering a new option that allegedly used less personal data to display advertisements. The Commission is currently assessing this new option and continues its dialogue with Meta. The Commission is requesting that the company provide evidence of the impact that this new ads model has in practice. To that end, the decision that found non-compliance covers the time period during which users in the EU were only offered the binary 'Consent or Pay' option, between March 2024 (when the DMA obligations became legally binding) and November 2024 (when Meta's new ads model was introduced). When imposing the fines, the Commission took into account the gravity and duration of the non-compliance. What's more, the Commission also found that Meta's online intermediation service, Facebook Marketplace, should no longer be designated under the DMA, mostly because Marketplace had fewer than 10,000 business users in 2024. Meta therefore no longer met the threshold giving rise to a presumption that Marketplace was an important gateway for business users to reach end users. What can we take from this development? It is important to note that these decisions against Apple and Meta are the first noncompliance decisions adopted under the DMA. Both Apple and Meta are required to comply with the Commission's decisions within 60 days, or else they risk periodic penalty payments. It is clear that the DMA is a serious regulation—businesses that offer products and services to consumers in the EU need to be aware of this and act accordingly if they want to avoid serious fines and penalties. Similarly, businesses that fall within the scope of the DMA need to be aware that fines and penalties escalate over time if noncompliance continues. Businesses that are subject to domestic competition/antitrust legislation in Canada and the United States should also note that the consequences, albeit less severe than under the DMA, are grave where businesses abuse their monopoly power and ignore regulators. Why is competition so important? The goal of these laws is to protect the competitiveness of the markets and to protect consumers by ensuring that they have choice and are not subject to pressure by companies that abuse monopoly power. Take a look at an article that I wrote about antitrust woes here. Indeed, some companies are watching what is happening to Apple and Meta, and are responding in a positive, proactive, and cooperative manner—for instance, Microsoft President Brad Smith has announced a landmark set of digital commitments aimed at strengthening the company's relationship with Europe, expanding its cloud and AI infrastructure, and reinforcing its respect for European laws and values. Likely attempting to learn from past antitrust mistakes (think of Microsoft's antitrust case back in the late 1990s), Brad Smith stated: "We respect European values, comply with European laws, and actively defend Europe's cybersecurity.
Our support for Europe has always been–and always will be–steadfast” Previous Next
- The Canadian Cyber Security Job Market is Far From NICE | voyAIge strategy
The Canadian Cyber Security Job Market is Far From NICE Main challenges and what to do about them By Matt Milne Jul 25, 2025 Key Points The cyber security system is broken, to the point that some may assert that cyber security degrees are "useless" One of the main reasons for the broken system is that organizations are not investing in new talent and training, and AI adds further complications Some proposals for rectifying the situation are: eliminate the experience gap through mandatory training investment; mandate industry-education-government coordination for work placements; and strengthen government regulation and skills-job alignment review At this point, we can all agree that cyber security has a serious problem, and it's not Advanced Persistent Threats or quantum computing; it's the HR firewall rule set that denies access without experience, and poor government policy and automation that exacerbate an already broken system. The job market in Canada is challenging, which is not particularly significant news to recent graduates, long-time job seekers, those over the age of 45, and those who have recently become unemployed. The job market competition in Canada is fierce. This is particularly true for 15- to 19-year-olds, who are now at a 22 percent unemployment rate. Due to a variety of factors, one could conclude that education in Canada either exists as a pretext to scam people or is itself a scam. These days, some might say that a Master's degree in Canada is helpful if one wants to pursue origami or needs some kindling to start a small fire. This is not entirely the fault of Applicant Tracking System (ATS) hiring software (which helps companies manage the recruitment and hiring process), biased recruiters, or the infamous catch-22 of needing experience to get initial experience. As I mentioned in a previous article, the 2024 ISC2 Cybersecurity Workforce Study identifies budget cuts as the most significant reason why new cyber security talent is not being hired or trained. Why Are Cyber Security Degrees "Useless"? Yes, some may deduce that degrees are useless, but not in the way your tough, long-disillusioned older relative warned you about. Of course, dance theory, art, or sociology don't mesh with the brutal demands of the late-stage neoliberal job market. However, the truth is that while STEM degrees on average pay better than humanities degrees, a quick look at Statistics Canada's Interactive Labour Market Tool reveals that its data is from 2018 and shouldn't be considered relevant due to the unprecedented disruptions to labour markets caused by the pandemic. Why exactly can one be certain that cyber security degrees are useless? Are they not in demand? Is cyber security not a STEM field that requires intense knowledge? Well, that is half-true. Cyber security is in high demand, but the degree is distinct from traditional STEM degrees. Where doctors and engineers secure placements and gain work experience to verify the validity of their degrees, a cyber security degree might, at best, include lab work or projects. In my view, the reality is that the crucial experience component that employers desire is absent. Although this lack of work placement is shifting, it remains challenging to find undergraduate or Master's-level cyber security programs in Canada that include a work experience component.
For instance, according to the Canadian Centre for Cyber Security's Post-Secondary Cyber Security Related Programs Guide, only ten bachelor's programs and four master's programs offer a work placement option out of a total of 147 entries. Moreover, according to the 2024 ISC2 Cybersecurity Workforce Study, organizations surveyed around the world have experienced a significant increase in risk and disruption, yet economic pressures, exacerbated by geopolitical uncertainties, have led to budget and workforce reductions in a number of sectors, and cyber security threats and data security incidents have only continued to grow. Resources are strained, and this impacts cyber security teams and their ability to adopt new technologies and protect against the nuanced threats those technologies pose to their organizations. The conclusion of this study was that in 2024, economic conditions significantly impacted the workforce, leading to both talent shortages and skills gaps at a time when need has never been greater. On top of this, the introduction of AI to drive transformation, cope with demand, and shape strategic decisions has come with its own challenges: "We found that while cybersecurity teams have ambitious plans for AI within the cybersecurity function, they anticipate the biggest return on investment will occur in two or more years. As a result, they are not immediately overhauling their practices to adopt AI. Cybersecurity professionals are also conscious of the additional risks AI will introduce across the organization. As different departments adopt AI tools, cybersecurity teams are encouraging their organizations to create comprehensive AI strategies" Interestingly, some of the key findings are: cybersecurity professionals don't believe their teams have sufficient numbers or the right range of skills to meet their goals; cybersecurity professionals are still focused on higher education and professional development once in the workforce, but they increasingly prioritize work-life balance; many believe that diverse backgrounds can help solve the talent gap; the expected advancements of AI will change the way respondents view their skills shortage (certain skills may be automated), yet cyber professionals are confident that Gen AI will not replace their roles; 45 percent of cyber security teams have already brought Gen AI into their tools to bridge skills gaps, improve threat detection, and provide other benefits to cybersecurity; and organizations need a Gen AI strategy to responsibly implement the technology. How HR is Adding to the Problem & Is Far From NICE As I mentioned above, budget cuts are the primary reason organizations are not investing in new talent and training; however, it would be inaccurate to suggest that is the only reason cyber security hiring is broken. During my undergraduate degree in world conflict and peace studies, I observed that most conflicts stem from a lack of communication or a shared language. At a fundamental level, there is a significant gap in cyber security hiring because of the lack of a standardized language. To rectify this, the National Institute of Standards and Technology (NIST) published Special Publication 800-181, The National Initiative for Cybersecurity Education (NICE) Framework, in 2017. Canada has since adopted the NICE framework to create the Canadian Cyber Security Skills Framework in 2023.
The National Initiative for Cybersecurity Education (NICE) framework categorizes cyber security competencies for the various roles in terms of Knowledge, Skills, and Abilities (KSAs). I note that while Canadian cyber security degree programs effectively teach knowledge and foundational skills, they fall short in the "abilities" component, which can only be developed through practical experience. HR departments, however, treat all three components as a requirement, creating the catch-22 experience gap. It follows, then, that the combination of HR departments' risk aversion and tight budgets creates a perfect storm, leading to a talent shortage. Bad Policy and Government Decisions Have Ruined the Credibility of Postsecondary Education International students, especially those from South Asia, have created significant business for some private colleges, which often lure students with false promises. Immigration Minister Mark Miller referred to these institutions as "puppy mill" schools. The pattern is as follows: students are charged four times what Canadians pay to attend college in Ontario while receiving a substandard education that doesn't prepare them for meaningful employment. Unfortunately, this systematic exploitation has created a credibility crisis that affects all postsecondary education in Ontario. When HR departments and employers see degrees from Canadian institutions, they now face the challenge of distinguishing between legitimate educational institutions and those "puppy mills." The credibility crisis in Ontario's postsecondary education stems from government policy decisions that have systematically reduced funding to legitimate educational institutions. How AI is Poised to Make the Job Market Worse The automation of entry-level cybersecurity and IT help desk roles is creating a significant career progression problem that will likely exacerbate the experience gap. The fundamental issue is that AI will deepen the entry-level crisis by eliminating precisely the entry-level positions that traditionally served as stepping stones to senior roles. The menial tasks that AI is designed to automate (basic incident response, routine monitoring, simple troubleshooting, and repetitive security assessments) are the same daily activities that historically proved to employers that candidates had developed practical competencies beyond their theoretical education. The Path Forward Eliminate the Experience Gap Through Mandatory Training Investment. Organizations must abandon the false economy of demanding pre-existing experience over investing in on-the-job training. While tight budgets drive risk-averse hiring, the cost of a single cyber security incident far exceeds the investment required to train motivated graduates. It might be worth reminding these companies that refusing to train entry-level talent is like gambling their entire business on an increasingly shrinking pool of experienced professionals and creating a strategic vulnerability that threat actors can exploit more easily than any technical system. Mandate Industry-Education-Government Coordination for Work Placements. Canadian educational institutions must be required to coordinate with government and private industry to create robust work placement programs that directly funnel graduates into in-demand positions.
This cannot remain optional—with only ten bachelor's programs and four Master's programs offering work placement out of 147 total entries, the current system is systemically failing students and employers alike. These partnerships must be structured to provide real-world experience that develops the "abilities" component of the NICE framework. Strengthen Government Regulation and Skills-Job Alignment Review. The Canadian government must implement stricter regulation of educational institutions and conduct a thorough review of the mismatch between job-ready skills and student abilities. This includes shutting down diploma mills that have destroyed credential credibility, establishing minimum standards for cyber security program outcomes, and creating accountability mechanisms that tie institutional funding to graduate employment rates and employer satisfaction. That is, educational institutions should be required to demonstrate that their curricula align with current industry needs and that graduates possess demonstrable competencies, not just theoretical knowledge. Previous Next
- Canada’s AI Brain Drain | voyAIge strategy
Canada's AI Brain Drain A Silent Crisis for Canadian Business By Tommy Cooke, fueled by curiosity and ethically sourced coffee Oct 17, 2025 Key Points: Canada's AI brain drain threatens national competitiveness by eroding the local talent base essential for innovation and execution Without retaining AI expertise, Canada risks becoming dependent on foreign ecosystems, which undermines sovereignty and commercialization potential Business leaders must treat AI talent development as a core strategy—Canadians need to build, invest, and upskill locally to remain competitive Canada is starting to punch above its weight in AI. With world-class research hubs in Toronto (Vector Institute), Montreal (Mila), and Edmonton (Amii), and visionaries such as Geoffrey Hinton and Yoshua Bengio driving Canada's AI momentum, Canada is increasingly recognized around the globe as a hotspot for innovation. Alas, as the global AI boom accelerates, Canada is at risk of losing that advantage through an exodus of talent. The phenomenon, often dubbed the "AI brain drain", refers to top researchers, engineers, and startup founders relocating to, or aligning remotely with, U.S. or global tech hubs as opposed to building at home. For a business leader in Canada who is currently considering AI, this trend is one to keep an eye on because the stakes are high: how easily one can recruit, retain, and deploy AI talent will increasingly define which firms win or lose over the next half decade and beyond. Why Business Leaders Should Pay Attention Seeing talent leave or take jobs abroad has multiple implications for AI-driven innovation and for the impact that talent can make for Canadian businesses. Let's take a closer look at three of them: First, the absence of talent is an AI execution bottleneck. In many industries, the difference between AI as a novelty and AI as a value-creator lies in execution, not algorithms. That execution depends on access to specialized engineers, ML researchers, operations talent, data scientists, hybrid roles, and so on. If a tech company plans on adopting or building AI, it will have to compete not only with other Canadian firms, but also with global tech giants offering premium compensation, equity, and prestige. That competitive pressure already manifests in Canada's tech sector, where many former Canadian AI founders and researchers have relocated or anchored operations in Silicon Valley or U.S. hubs despite having roots here. Losing that talent, or failing to attract it, translates to longer timelines, lower quality, higher costs, or outright stalling of AI initiatives. Second, dependency on external ecosystems weakens innovation sovereignty. Relying on remote work or foreign talent is a short-term fix. If a company's AI strategy depends on overseas labs, it risks instability from geopolitical shifts, visa regimes, cross-border regulation, or simple churn in remote teams. Canada's recent announcement of a $2 billion+ Canadian AI Sovereign Compute Strategy is a response to such vulnerabilities: the federal government wants Canada to own its compute infrastructure rather than remain tethered to foreign cloud or GPU suppliers. Unfortunately, computing power alone is simply not enough. To leverage it fully, Canada needs people who know how to harness it. Without a base of AI talent anchored in Canada, compute investments risk underutilization, and support may have to be sought beyond the border.
Moreover, it is important to keep in mind that investments in AI compute are considerably larger in other jurisdictions such as the United States; even if Canadian AI founders want to stay in Canada to use the new AI infrastructure, the Canadian compute will pale in comparison to the sorts of opportunities that the Americans are offering. Therefore, it will also be challenging to convince founders to take advantage of Canada's AI compute. Third, the "imagination gap" will widen. Rather ironically, Canada lags many peers in actual AI adoption. Despite Canada being a global leader in AI ideation and innovation, only 12 percent of Canadian firms have broadly integrated AI into operations or products, putting the country near the bottom of OECD adoption rates. Some of this gap stems from cultural and literacy issues. But the primary issue is structural; if Canadian firms can't access or retain top talent, pilots stay pilots, and experimentation never scales. The brain drain heightens that barrier. In effect, the Canadian market becomes a slow adopter, while global firms dominate the frontier. Catalysts of the Brain Drain It's important to understand where the pressure comes from if Canada is to identify countermeasures. The pressures are ubiquitous and complicated, but let's quickly identify the most critical: Compensation and equity. U.S. tech firms routinely offer higher absolute compensation and more liquid equity upside. Prestige. Many researchers seek the cachet of working at OpenAI, DeepMind, or leading U.S. AI labs. Scale and data access. Larger U.S. and international firms have access to vast user bases and data that Canada-based projects often can't match. Funding scale. Global venture capital and public markets remain deeper and more aggressive than those in Canada. Remote work. Many Canadian researchers don't physically relocate now, but instead work remotely for international firms while remaining in Canada. What Canadian Business Leaders Should Do About the AI Brain Drain, Right Now If you are serious about embedding AI in your organization, there are crucial steps you can take right now to join other business leaders seeking to alter the course of the brain drain. For starters, Canadian business owners need to invest in AI anchors. More specifically, it is important to create internal AI competence centers or labs rather than one-off AI projects. It is also important to provide mandates, budgets, visibility, and career ladders. Ask yourself, What would talent want or need? It is necessary to attract Canadian talent to these centers and labs by making the opportunity genuinely interesting. It's also important to offer compelling equity and long-term incentives. It's expensive, but if the talent economy in Canada is to bolster itself, employers need to be thinking more strategically about matching or emulating international-style equity models, grants, and research budgets. Engineers want to feel that they can build something significant—companies are recommended to do what they can to build the sandboxes that these engineers want to play in. Furthermore, companies are encouraged to partner with local colleges and universities so that they can align their interests with those of Canada's top AI innovators. Develop interesting ways to fund cross-appointments, joint labs, or even industrial research chairs. Companies may also wish to ask themselves, How skilled is our existing talent? If companies are not sure, they would benefit from upskilling.
To begin, companies can drive internal reskilling and establish AI-centric learning paths. That is, engaging workers in AI 101 learning sessions can help non-technical staff understand AI itself. Lastly, and perhaps most importantly, frame AI strategy as core business strategy, not a side project. AI disruption is old news. The ship has already set sail. Every industry is transforming. If companies are adopting AI now or are planning to do so in the near future, it is best to think strategically. For instance, ask, How might AI drive our business strategy as opposed to merely summarizing emails? By making AI more self-evidently valuable in terms of business growth, companies are more likely to attract talent. Delaying Investment in AI Talent is a Strategic Risk Canada's AI brain drain may still feel distant to many executives, but the lead time for losing competitive edge is long. If Canadian firms don't move to secure talent now, they'll find themselves significantly behind their competitors. For any business in Canada that is eyeing AI, the choice is not whether to care about the brain drain. It's whether to treat it as a strategic pillar. Previous Next
- Meta Refuses to Sign the EU’s AI Code of Practice | voyAIge strategy
Meta Refuses to Sign the EU's AI Code of Practice A closer look at the reasons why By Christina Catenacci, human writer Jul 30, 2025 Key Points On July 18, 2025, the European Commission released its General-Purpose AI Code of Practice and its Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act Many companies have complained about the Code of Practice, and some have gone so far as to refuse to sign it—like Meta Businesses that are in the European Union, as well as those outside the EU that do business with it (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines and comply Meta has just refused to sign the European Union's General-Purpose AI Code of Practice for the AI Act. That's right—Joel Kaplan, the Chief Global Affairs Officer of Meta, said in a LinkedIn post on July 18, 2025 that "Meta won't be signing it". By general-purpose AI, I mean an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities. What is the purpose of the AI Act? As you may recall, section (1) of the Preamble of the AI Act sets out the purpose: "The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the 'Charter'), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorized by this Regulation" The AI Act classifies AI according to risk and prohibits unacceptable-risk uses like social scoring systems and manipulative AI. High-risk AI is regulated, limited-risk AI has lighter obligations, and minimal-risk AI is unregulated. The AI Act entered into force on August 1, 2024, but its prohibitions and obligations are being phased in over time. The first set of provisions, which took effect on February 2, 2025, bans certain unacceptable-risk AI systems. After this, a wave of obligations follows over the next two to three years, with full compliance for high-risk AI systems expected by 2027 (August 2, 2025, February 2, 2026, and August 2, 2027 each carry certain requirements). Those involved in general-purpose AI may have to take additional steps (e.g., the development of Codes of Practice by 2025), and may be subject to specific provisions for general-purpose AI models and systems. See the timeline for particulars. What is the Code of Practice for the AI Act?
The Code of Practice is a voluntary tool (not a binding law), prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act's obligations for providers of general-purpose AI models. More specifically, the objectives of the Code of Practice are to: serve as a guiding document for demonstrating compliance with the obligations provided for in the AI Act, while recognising that adherence to the Code of Practice does not constitute conclusive evidence of compliance with these obligations under the AI Act; ensure providers of general-purpose AI models comply with their obligations under the AI Act; and enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code of Practice to demonstrate compliance with their obligations under the AI Act. Released on July 10, 2025, it has three parts: Transparency: Commitments of Signatories include Documentation (there is a Model Documentation Form containing general information, model properties, methods of distribution and licenses, use, training process, information on the data used for training, testing, and validation, computational resources, and energy consumption during training and inference). Copyright: Commitments of Signatories include putting in place a Copyright policy. Safety and Security: Commitments of Signatories include adopting a Safety and security framework; Systemic risk identification; Systemic risk analysis; Systemic risk acceptance determination; Safety mitigations; Security mitigations; Safety and security model reports; Systemic risk responsibility allocation; Serious incident reporting; and Additional documentation and transparency. For each Commitment that Signatories sign on to, there is a corresponding Article of the AI Act to which it relates. In this way, Signatories can understand what parts of the AI Act are being triggered and complied with. For example, the Transparency chapter deals with obligations under Article 53(1)(a) and (b), 53(2), 53(7), and Annexes XI and XII of the AI Act. Similarly, the Copyright chapter deals with obligations under Article 53(1)(c) of the AI Act. And the Safety and Security chapter deals with obligations under Articles 53, 55, and 56 and Recitals 110, 114, and 115 of the AI Act. In a nutshell, adhering to a Code of Practice that is assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. The plan is that the Code of Practice will be complemented by Commission guidelines on key concepts related to general-purpose AI models, also published in July. An explanation of these guidelines is set out below. Why are tech companies not happy with the Code of Practice? To start, we should examine the infamous LinkedIn post: "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act. Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe's largest businesses signed a letter calling for the Commission to 'Stop the Clock' in its implementation.
We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them." The post criticizes the European Union for going down the wrong path. It also talks about legal uncertainties, measures which go far beyond the scope of the AI Act, as well as stunting the development of AI models and companies. There was also mention of other companies wanting to delay the need to comply. To be sure, CEOs from more than 40 European companies, including ASML, Philips, Siemens and Mistral, asked for a "two-year clock-stop" on the AI Act before key obligations enter into force this August. In fact, the bottom part of the open letter to European Commission President Ursula von der Leyen, called "Stop the Clock", asked for simpler and more practical AI regulation and spoke of a need to postpone the enforcement of the AI Act. Essentially, the companies want a pause on obligations for high-risk AI systems that are due to take effect as of August 2026, and on obligations for general-purpose AI models that are due to enter into force as of August 2025. By contrast, the top of the document is entitled "EU Champions AI Initiative", with logos of over 110 organizations that have over $3 billion in market cap and over 3.7 million jobs across Europe. In response to the feedback, the European Commission is mulling giving companies who sign a Code of Practice on general-purpose AI a grace period before they need to comply with the European Union's AI Act. This is a switch from the July 10, 2025 announcement that the EU would be moving forward notwithstanding the complaints. The final word appears to be that there will be no stop-the-clock, no pauses, and no grace periods, period. New guidelines also released July 18, 2025 In addition, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (Regulation EU 2024/1689)—right before the AI Act's key compliance date, August 2, 2025. The goal is to help AI developers and downstream providers by providing clarification. For example, the Guidelines explain which providers of general-purpose AI models are in and out of scope of the AI Act's obligations. In fact, the European Commission stated that "The aim is to provide legal certainty to actors across the AI value chain by clarifying when and how they are required to comply with these obligations". The Guidelines focus on four main areas: general-purpose AI models; providers of general-purpose AI models; exemptions from certain obligations; and enforcement of obligations. The intention is to use clear definitions, a pragmatic approach, and exemptions for open-source. That said, the Guidelines consist of 36 pages of dense material that need to be reviewed and understood. For instance, the Guidelines answer the question, "When is a model a general-purpose AI model?" Examples are provided for models in scope and out of scope. What happens next? As we can see from the above discussion, there are serious obligations that need to be complied with—soon. To that end, businesses in the European Union, or those that do business in the European Union (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines to ensure that they are ready for August 2, 2025. After August 2, 2025, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations.
Providers of general-purpose AI models that will be classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From August 2, 2026, the Commission's enforcement powers enter into application. And by August 2, 2027, providers of general-purpose AI models placed on the market before August 2, 2025 must comply. Previous Next
- California Bill on AI Companion Chatbots | voyAIge strategy
California Bill on AI Companion Chatbots A New Law Emerges due to Concerns About the Impacts on Mental Health and Real-World Relationships By Christina Catenacci, human writer Oct 31, 2025 Key Points On October 13, 2025, California SB 243, Companion chatbots, was signed into law by Governor Newsom SB 243 addresses concerns about teen suicide and other impacts on mental health and real-world relationships since people have used companion chatbots as romantic partners California is the first state to enact this law—this law is a welcome development On October 13, 2025, California SB 243 , Companion chatbots, was signed into law by Governor Newsom. As can be seen in the recent Bill Analyses on the Senate Floor , AI companion chatbots that are created through genAI have become increasingly prevalent since they seek to offer consumers the benefits of convenience and personalized interaction. These chatbots learn intimate details and preferences of users based on their interactions and user customization. Millions of consumers use these chatbots as friends, mentors, and even romantic partners. However, there are serious concerns about their effects on users, including impacts on mental health and real-world relationships. In fact, many studies and reports point to the addictive nature of these chatbots and call for more research into their effects and for meaningful guardrails. Unfortunately, incidents resulting in users harming themselves and even committing suicide have been reported in the last year. To that end, SB 243 addresses these concerns by requiring operators of companion chatbot platforms to maintain certain protocols aimed at preventing some of the worst outcomes. What Does the New Law Say? The law defines a “companion chatbot” as an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. However, it does not include any of the following: A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user The law also defines an “operator” as a person who makes a companion chatbot platform available to a user in the state. A “companion chatbot platform” is a platform that allows a user to engage with companion chatbots. 
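Before turning to the operative requirements, a rough, hypothetical sketch in Python may help make the definition and its carve-outs concrete; the field names and screening logic below are my own shorthand for the elements described above, not language drawn from SB 243, and any real scoping analysis would be a legal exercise rather than a programmatic one.

```python
# Hypothetical illustration of SB 243's "companion chatbot" definition and its
# exclusions. Field names are shorthand for the statutory elements summarized
# above; this is not the statute's text or an official compliance test.
from dataclasses import dataclass

@dataclass
class Bot:
    adaptive_humanlike_responses: bool   # natural language interface, adaptive, human-like
    meets_social_needs: bool             # sustains a relationship across interactions
    customer_service_only: bool          # exclusion: customer service / operational use only
    video_game_feature_only: bool        # exclusion: limited to video game topics
    standalone_voice_assistant: bool     # exclusion: speaker / voice-command device

def is_companion_chatbot(bot: Bot) -> bool:
    """Rough reading of whether a bot would fall within the definition."""
    excluded = (bot.customer_service_only
                or bot.video_game_feature_only
                or bot.standalone_voice_assistant)
    return bot.adaptive_humanlike_responses and bot.meets_social_needs and not excluded

# A romantic-companion app would be in scope; a smart speaker would not.
print(is_companion_chatbot(Bot(True, True, False, False, False)))  # True
print(is_companion_chatbot(Bot(True, False, False, False, True)))  # False
```

The two example calls show the point of the carve-outs: a chatbot that sustains an ongoing, human-like relationship is in scope, while a customer-service bot, a video game feature, or a stand-alone voice assistant is not.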
Beginning on July 1, 2027, the law requires the following: If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, the operator must issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human. Operators must prevent a companion chatbot on their companion chatbot platform from engaging with users unless they maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. Operators must publish the details of this protocol on the operator's internet website. For a user that the operator knows is a minor, operators must do all of the following: (1) Disclose to the user that the user is interacting with AI; (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human; and (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct. Operators must annually report to the Office of Suicide Prevention all of the following: (1) The number of times the operator has issued a crisis service provider referral notification in the preceding calendar year; (2) Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and (3) Protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user. This report must not include any identifiers or personal information about users. Operators must disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors. A person who suffers injury as a result of a violation of this law may bring a civil action to recover all of the following relief: injunctive relief; damages in an amount equal to the greater of actual damages or $1,000 per violation; and reasonable attorney's fees and costs. What Can We Take from This Development? This landmark bill is the first law in the United States to regulate AI companions. Given that teenagers have committed suicide following questionable conversations with these AI companion chatbots, the new transparency requirements are a welcome development. Previous Next
- Meta Wins the Antitrust Case Against It | voyAIge strategy
Meta Wins the Antitrust Case Against It No Monopoly Found By Christina Catenacci, human writer Nov 27, 2025 Key Points On November 18, 2025, the United States District Court for the District of Columbia confirmed that Meta did not have a monopoly This decision confirms that Meta will not have to spin off Instagram and WhatsApp This antitrust decision is markedly different from the Google antitrust decisions involving Search and online ads On November 18, 2025, James E. Boasberg, Chief Judge at the United States District Court for the District of Columbia, confirmed that Meta did not have a monopoly. Accordingly, Meta will not have to spin off Instagram and WhatsApp. As I mentioned here, Meta had its antitrust trial about seven months ago, where the main question was whether Meta had gained a monopoly in social media by acquiring Instagram and WhatsApp over a decade ago (2012 and 2014 respectively). Essentially, Mark Zuckerberg was the first to give testimony and apparently, while he was on the stand, he was asked to look at his own previous emails that he wrote to associates before and after the acquisition of Instagram and WhatsApp to clarify his motives. More specifically, the questions were, "Was the purchase to halt Instagram's growth and get rid of a threat? Or was it to improve Meta's product by having WhatsApp run as an independent brand?" In short, the ultimate decision was that Meta won: it did not have a monopoly and did not have to break up Instagram and WhatsApp. What Did the Judge Decide? Initial Comments The judge made a point of beginning with the comment, "The Court emphasizes that Facebook and Instagram have significantly transformed over the last several years". In fact, the court noted that Facebook bought Instagram back in 2012, and WhatsApp in 2014. In addition, the court described two other relevant social media apps, TikTok and YouTube, which allowed users to watch and upload videos. The court explained the evolution of Meta's apps. For example, as Meta moved to showing TikTok-style videos, TikTok moved to adding Meta-style features to share them with friends. Technological changes have made video apps more social. More specifically, smartphone usage exploded; cellphone data got better; the steady progress of cellular data was followed by a massive leap in AI; and as social networks matured, the alternatives to AI-recommended content have become less appealing. The court detailed the lengthy history of proceedings beginning with the initial Complaint that was filed in 2021. Notably, in ruling on Facebook's motion to dismiss, the court stated straight away that it had doubts that the Federal Trade Commission (FTC) could state a claim for injunctive relief. The court granted Facebook's motion to dismiss but allowed the FTC to amend its Complaint. The FTC indeed created an Amended Complaint and alleged that Facebook held a monopoly in personal social networking and that Facebook maintained the monopoly by buying both Instagram and WhatsApp to eliminate them as competitive threats. The court found that the FTC had plausibly alleged that Facebook held monopoly power and that the acquisitions of Instagram and WhatsApp constituted monopolization. That said, the court did say that the FTC may have challenges proving its allegations in court. Subsequently, the parties each moved for summary judgment.
The court denied both motions and indicated that the FTC had met its forgiving summary judgment standard, but the FTC faced hard questions about whether its claims could hold up in the crucible of trial. At trial, the court heard testimony for over six weeks and considered thousands of documents. Decision at Trial The court found the following: Section 2 of the Sherman Act prohibited monopolization. The main elements included holding monopoly power (power over some market) and maintaining it through means other than competition on the merits. Plaintiffs typically proved monopoly power indirectly by showing that a firm had a dominant share of a market that was protected by barriers to entry A big question in this case was, When did Meta have monopoly power? The FTC had to show that Meta was violating the law now or imminently and could only seek to enjoin the conduct that currently or imminently violated the law (the FTC incorrectly argued that Meta broke the law in the past, and this violation is still harming competition) The court defined the product market as the smallest set of products such that if a hypothetical monopolist controlled them all, then it would maximize its profits by raising prices significantly above competitive levels. The court confirmed that the FTC had the burden of proving the market’s bounds The court found that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. For instance, this could be seen when there was a shutdown of TikTok in the United States: users switched to other apps like Facebook, and later Instagram, and then YouTube. The court commented, “The amount of time that TikTok seems to be taking from Meta’s apps is stunning”. In fact, the court noted that when consumers could not use Facebook and Instagram, they turned first to TikTok and YouTube, and when they could not use TikTok or YouTube, they turned to Facebook and Instagram—Meta itself had no doubt that TikTok and YouTube competed with it. Thus, even when considering only qualitative evidence, the court found that Meta’s apps were reasonably interchangeable with TikTok and YouTube In assessing Meta’s monopoly power, the court considered a market that comprised Facebook, Instagram, Snapchat, MeWe, TikTok, and YouTube. Also, the court found that the best single measure of market share here was total time spent—the companies themselves often measured their market share using this measure. The court noted that Meta’s market share was falling, and what counted most regarding market share was the ability to maintain market share. A given market share was less likely to add up to a monopoly if it was eroding—if monopoly power was the power to control prices or exclude competition, then that power seemed to have slipped from Meta’s grip. The court concluded that YouTube and TikTok belonged in the product market, and they prevented Meta from holding a monopoly. Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case Social media moved so quickly that it never looked the same way twice since the case began in 2021. The competitors changed significantly too. Previous decisions in motions did not even mention TikTok. Yet today, TikTok was Meta’s fiercest rival. It was understandable that the FTC was unable to fix the boundaries of Meta’s product market. Accordingly, the court stated: “Whether or not Meta enjoyed monopoly power in the past, though, the [FTC] must show that it continues to hold such power now. 
The Court's verdict today determines that the FTC has not done so". Therefore, the case against Meta was dismissed. What Can We Take from This Development? Meta did not have a monopoly in social networking and survived a very serious existential challenge—it will not have to break the company apart as a result of this decision. The results of this decision were the polar opposite of the Google decision, where there was indeed a confirmed monopoly in Google Search and online ads. Why such a different result? The first clue came right at the beginning of this Meta decision, when the judge noted that the question was whether Meta had monopoly power now or imminently. In particular, there was no determination about whether there had been a monopoly in the past (as the FTC incorrectly alleged), because it was irrelevant. That is, Meta may have had a monopoly in the past, but the FTC had to show that it had one now. Unlike the judge in the Google decision, the judge in the Meta case found that the test for monopoly power was not met, primarily because the FTC could not show that Meta currently had monopoly power (power over some market) and maintained it through means other than competition on the merits. Second, unlike the Google decision, the product market had changed considerably since the FTC launched the Complaint, to the point where Meta's strongest competitor right now, TikTok, had not yet even come onto people's radar. The judge made an important finding that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. After considering the evidence, the court found that TikTok now had to be included in the product market. This was significant and played a large role in the court dismissing the case. Most strikingly, the judge stated, "Even if YouTube were not included in the product market, including TikTok alone defeated the FTC's case". Third, throughout the previous Meta decisions since 2021, there was foreshadowing by the court that the FTC might struggle to prove its allegations in court. This was not so in the Google case, which involved the company using exclusionary contracts and other means to create and maintain its monopoly, which it still has. It is not just the DOJ that thinks Google currently has a monopoly—the EU has also fined Google significantly for having and maintaining a monopoly in Search and online ads. Fourth, it became clear that Meta's market share had decreased, likely because of TikTok and YouTube—this made it difficult for the FTC to prove that there was a monopoly where Meta would have the opportunity to charge more, or demand more time spent. Recall that a main measure in this sphere was time spent, and the court stated that the amount of time that TikTok seemed to be taking from Meta's apps was stunning. On the other hand, in Google's case, Google had—and still has—89 percent of the global search engine market share. Sure, Mark Zuckerberg wrote in 2008 emails that "It is better to buy than compete", but even if that were true, the court has just shown that the FTC can no longer satisfy the test for establishing that Meta holds a monopoly. Some may question why there is such importance placed on antitrust trials. Speaking about its competition mission, the FTC states: "Free and open markets are the foundation of a vibrant economy.
Aggressive competition among sellers in an open marketplace gives consumers — both individuals and businesses — the benefits of lower prices, higher quality products and services, more choices, and greater innovation” Previous Next
- Keeping People in the Loop in the Workplace | voyAIge strategy
Keeping People in the Loop in the Workplace Some Thoughts on Work and Meaning By Christina Catenacci, human writer May 16, 2025 Key Points We can look to the oft-cited words of Dickson C.J. in the 1987 Alberta Reference case for some of the first judicial comments touching on the meaning of work When thinking about what exactly makes work meaningful, we can look to psychologists who have demonstrated through scientific studies that meaningful work can be attributed to community, contribution, and challenge—leaders are recommended to incorporate these aspects in their management strategies Leaders are also encouraged to note that self-actualization is the process of realizing and fulfilling one's potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose My co-founder, Tommy Cooke, just wrote a great article that discusses the effects of the Duolingo layoffs. In that piece, he talks about how Duolingo cut its contract workers (on top of the 10 percent of its workforce it had already reduced) and replaced them with AI. Ultimately, he concludes that AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities—and it follows that Duolingo runs the risk of reducing employee morale, increasing inefficiencies, and causing other long-term negative consequences like damage to reputation. Duolingo is not alone when it comes to reducing a part of its workforce and replacing it with AI. In fact, Cooke suggests that organizations that prioritize human-AI collaboration through hybrid workflows, upskilling, and governance position themselves for long-term success. This article got me thinking more deeply about the meaning of work. From an Employment Law perspective, I am very familiar with the following statement made by Dickson C.J. in 1987 (Alberta Reference): "Work is one of the most fundamental aspects in a person's life, providing the individual with a means of financial support and, as importantly, a contributory role in society. A person's employment is an essential component of his or her sense of identity, self‑worth and emotional well‑being. Accordingly, the conditions in which a person works are highly significant in shaping the whole compendium of psychological, emotional and physical elements of a person's dignity and self-respect" In other words, Dickson C.J. regarded a person's employment as an essential component of his or her sense of identity, self-worth and emotional well-being. I wrote about these concepts in my PhD dissertation, where I argued that there is an electronic surveillance gap in the employment context in Canada, a gap that is best understood as an absence of appropriate legal provisions to regulate employers' electronic surveillance of employees both inside and outside the workplace. More specifically, I argued that, through the synthesis of social theories of surveillance and privacy, together with analyses of privacy provisions and workplace privacy cases, a new and better workplace privacy regime can be designed (to improve PIPEDA). Disappointingly, we have still not seen the much-needed legislative reform, but let's move on. Thus, it is safe to say that for decades, courts have recognized the essential nature of work when deciding Employment Law cases. Economists have too.
For instance, Daniel Susskind wrote a working paper exploring the theoretical and empirical literature on the relationship between work and meaning. He explained why this relationship matters for policymakers and economists who are concerned about the impact of technology on work. He pointed out that the relationship matters for understanding how AI affects both the quantity and the quality of work, and asserted that new technologies may erode the meaning that people get from their work. What’s more, if jobs are lost because of AI adoption, the relationship between work and meaning matters significantly for designing bold policy interventions such as a Universal Basic Income (UBI) and Job Guarantee Schemes (JGS). More precisely, he argues that policymakers must decide whether to focus on replacing lost income alone (as with a UBI) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a JGS).

With AI becoming more common in the workplace, Susskind points out that although the traditional economic literature narrowly focuses on the economic impact of AI on the labour market (for instance, on employment earnings), the field has shifted toward a creeping pessimism about both the quantity of work to be done and the quality of that work. In fact, he touches on the notion that paid work is not only a source of income, but of meaning as well. He also notes that while classical political philosophers and sociologists have introduced some ambiguity when envisioning this relationship, organizational psychologists have argued, and demonstrated through scientific studies, that people do indeed get meaning from work.

What does this all mean? Traditional economic models treat work solely as a source of disutility that people endure only for wages. But it is becoming more evident that the more modern way to think about work entails thinking about meaning: something that goes beyond income. If AI ultimately leads to less work for most people, we may need to better understand what exactly is “meaningful” to people, and how we can ensure that people who are downsized still have meaningful activities to do. Further, we would need to provide ample opportunity to pursue those activities, so that people can experience the psychological, emotional, and physical elements of dignity and self-respect that Dickson C.J. referred to back in 1987.

Along the same lines, we will need to reimagine policy interventions such as UBI and JGS: while advocates of a JGS tend to believe that work is necessary for meaning, UBI supporters believe that people can find meaning outside formal employment. More specifically, UBI is a social welfare proposal under which all citizens receive a regular cash payment without any conditions or work requirements; its goal is to alleviate poverty and provide financial security to everyone, regardless of economic status. The job guarantee, on the other hand, is a proposal under which the government ensures employment for anyone who wants it; its advocates describe it as a policy innovation that can secure true full employment while stabilizing the economy, producing key social assets, and securing a basic economic right. What we can be sure of is that things are changing with respect to how we see work and meaning.
For many, work is a major component of their lives and of how they see themselves. Some would go further and suggest that work is the central organizing principle in their lives: they could not imagine life without work, and self-actualization would not take place without it. To be sure, self-actualization is the process of realizing and fulfilling one’s potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose.

What Makes Work Meaningful?
A closer look at what makes work meaningful can help in this discussion. Meaningful work comes from:
Community: We are human, whether we like it or not. Because of this, we are wired for connection. Studies show that employees who feel a strong sense of belonging are more engaged and productive
Contribution: The ability to see how one’s work benefits others is one of the strongest motivators in any job. In fact, employees who find meaning in their work are 4.5 times more engaged than those who do not
Challenge: We thrive when we are given opportunities to learn and grow. Put another way, when leaders challenge employees to expand their capabilities and provide the support they need to succeed, those employees experience more meaningful development

When you stop and think about it, it makes sense that leaders play a considerable role in shaping workplace meaning. Since about 50 percent of employees’ sense of meaning at work can be attributed to the actions of their leaders, leaders are encouraged to find ways to cultivate community, contribution, and challenge so that employees and teams can thrive. More precisely, leaders in organizations are encouraged to:
foster a strong sense of belonging
recognize and acknowledge the impact of employees’ work
challenge workers so that they grow in meaningful ways
Individuals can also create meaning for themselves in other ways, namely through self-directed learning, volunteering in the community, participating in local government, engaging in care work, and pursuing creative work.