- HR Automation | voyAIge strategy
HR Automation: Some Issues that Could Arise
By Christina Catenacci, Dec 2, 2024

Key Points:
- In the future, Ontario employers will be required to disclose within publicly advertised job postings whether they use AI to screen, assess, or select applicants
- Where a job applicant participates in a video interview and sentiment analysis is used, privacy and human rights issues could be triggered
- Other jurisdictions, such as New York City, have addressed AI tools in recruitment and hiring very differently than Ontario

My co-founder, Tommy Cooke, just wrote an informative article on some of the main HR automation trends that have been pervasive in the business world in 2024. When it comes to these trends, it is worth taking a closer look at some of the issues that could become problematic. More specifically, I would like to examine the uses of AI in recruitment and hiring. Whether it is using AI to automate resume screening or to conduct sentiment analysis of video interviews, there could be challenges for employers. In particular, employers will need to comply with Ontario’s Employment Standards Act, namely the AI provisions in Bill 149, in the near future. As of a future date to be named by proclamation, employers will be required to disclose within publicly advertised job postings whether they use AI to screen, assess, or select applicants. These employers will also have to retain copies of every publicly advertised job posting (as well as the associated application form) for three years after the posting is taken down. In fact, I recently wrote an article on this topic, noting how skeletal the AI provisions in the bill were, and that the AI-related definitions were nowhere to be found.
I compared the requirements in Ontario’s Bill 149 to those in New York City’s (NYC’s) hiring law involving the use of AI and automated decision-making tools. It was striking that NYC required employers to: conduct a bias audit before using the tool; post a summary of the results of the bias audit on their websites; notify job candidates and existing employees that the tool would be used to assess them; include instructions for requesting accommodations; and post on their websites the type and source of data used for the tool as well as their data retention policies. There were detailed definitions and fines for noncompliance too.

Needless to say, this will be a challenge for provincially regulated employers in Ontario, and it is highly recommended that employers prepare now for these employment law changes. That said, it is understandable that employers may struggle with how to comply with such ambiguous provisions.

Additional issues could arise, namely privacy and human rights issues. Take the example of a video interview where sentiment analysis is conducted. This is troubling from a privacy perspective: job applicants may not be comfortable participating in video interviews where their facial expressions and gestures are closely scrutinized by intrusive software that enables AI tools to analyze their sentiments. Moreover, employees who are up for a promotion may not appreciate video analytics of their interview performance being retained for an unknown period of time and accessible to an unknown number of actors in the workplace. Because job applicants are in a vulnerable position, they may not feel they can object to the use of these AI hiring tools.

In addition to privacy concerns, human rights issues could surface. The video interview could reveal various aspects of a person that fall under prohibited grounds of discrimination.
For instance, section 5 of the Ontario Human Rights Code prohibits discrimination in employment on the grounds of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability. It is possible for an AI tool to be biased (unintentionally, but biased nonetheless): it could favour younger candidates, give them higher interview scores, and ultimately discriminate on the ground of age. Since it may not be possible to detect these biased decisions immediately, some job applicants may simply miss out on an employment opportunity due to an ageist AI tool.

It will be interesting to see whether other jurisdictions come up with more extensive provisions to address the use of AI in recruitment and hiring. In Ontario, it is questionable whether we will see additional detail to help employers comply with the requirements.
- Canada’s AI Brain Drain | voyAIge strategy
Canada’s AI Brain Drain: A Silent Crisis for Canadian Business
By Tommy Cooke, fueled by curiosity and ethically sourced coffee, Oct 17, 2025

Key Points:
- Canada’s AI brain drain threatens national competitiveness by eroding the local talent base essential for innovation and execution
- Without retaining AI expertise, Canada risks becoming dependent on foreign ecosystems, which undermines sovereignty and commercialization potential
- Business leaders must treat AI talent development as a core strategy: Canadians need to build, invest, and upskill locally to remain competitive

Canada is starting to punch above its weight in AI. With world-class research hubs in Toronto (Vector Institute), Montreal (Mila), and Edmonton (Amii), and visionaries such as Geoffrey Hinton and Yoshua Bengio driving Canada’s AI momentum, Canada is increasingly recognized around the globe as a hotspot for innovation. Alas, as the global AI boom accelerates, Canada is at risk of losing that advantage through an exodus of talent. The phenomenon, often dubbed the “AI brain drain”, refers to top researchers, engineers, and startup founders relocating to (or aligning remotely with) U.S. or global tech hubs rather than building at home. For a business leader in Canada who is currently considering AI, this trend is one to watch because the stakes are high: how easily one can recruit, retain, and deploy AI talent will increasingly define which firms win or lose over the next half decade and beyond.

Why Business Leaders Should Pay Attention

Seeing talent leave or take jobs globally has multiple implications for AI-driven innovation and the extent to which that talent can make an impact for Canadian businesses. Let’s take a closer look at three of them:

First, the absence of talent is an AI execution bottleneck. In many industries, the difference between AI as a novelty versus a value-creator lies in execution, not algorithms.
That execution depends on access to specialized engineers, ML researchers, operations talent, data scientists, hybrid roles, and so on. If a tech company plans on adopting or building AI, it will have to compete not only with other Canadian firms, but also with global tech giants offering premium compensation, equity, and prestige. That competitive pressure already manifests in Canada’s tech sector, where many former Canadian AI founders and researchers have relocated or anchored operations in Silicon Valley or U.S. hubs despite having roots here. Losing that talent, or failing to attract it, translates to longer timelines, lower quality, higher costs, or outright stalling of AI initiatives.

Second, dependency on external ecosystems weakens innovation sovereignty. Relying on remote work or foreign talent is a short-term fix. If a company’s AI strategy depends on overseas labs, it risks instability from geopolitical shifts, visa regimes, cross-border regulation, or simple churn in remote teams. Canada’s recent announcement of a $2 billion+ Canadian AI Sovereign Compute Strategy is a response to such vulnerabilities: the federal government wants Canada to own its compute infrastructure rather than remain tethered to foreign cloud or GPU suppliers. Unfortunately, computing power alone is simply not enough. To leverage it fully, Canada needs people who know how to harness it. Without a base of AI talent anchored in Canada, compute investments risk underutilization, and firms may be forced to seek support beyond the border. Moreover, it is important to keep in mind that investments in AI compute are considerably larger in other jurisdictions such as the United States; even if Canadian AI founders want to stay in Canada to use the new AI infrastructure, Canadian compute will pale in comparison to the sorts of opportunities that the Americans are offering. Therefore, it will also be challenging to convince founders to take advantage of Canada’s AI compute.
Third, the “imagination gap” will widen. Rather ironically, Canada lags many peers in actual AI adoption. Despite being a global leader in AI ideation and innovation, only 12 percent of Canadian firms have broadly integrated AI into operations or products, putting the country near the bottom of OECD adoption rates. Some of this gap stems from cultural and literacy issues. But the primary issue is structural: if Canadian firms can’t access or retain top talent, pilots stay pilots, and experimentation never scales. The brain drain heightens that barrier. In effect, the Canadian market becomes a slow adopter, while global firms dominate the frontier.

Catalysts of the Brain Drain

It’s important to understand where the pressure comes from if Canada is to begin recognizing countermeasures. The catalysts are ubiquitous and complicated, but let’s quickly identify the most critical:

- Compensation and equity. U.S. tech firms routinely offer higher absolute compensation and more liquid equity upside
- Prestige. Many researchers seek the cachet of working at OpenAI, DeepMind, or leading U.S. AI labs
- Scale and data access. Larger U.S. and international firms have access to vast user bases and data that Canada-based projects often can’t match
- Funding scale. Global venture capital and public markets remain deeper and more aggressive than those in Canada
- Remote work. Many Canadian researchers don’t physically relocate now, but instead work remotely for international firms while remaining in Canada

What Canadian Business Leaders Should Do About the AI Brain Drain, Right Now

If you are serious about embedding AI in your organization, there are crucial steps you can take right now to join other business leaders seeking to alter the course of the brain drain. For starters, Canadian business owners need to invest in AI anchors. More specifically, it is important to create internal AI competence centers or labs rather than one-off AI projects.
It is also important to provide mandates, budgets, visibility, and career ladders. Ask yourself: what would talent want or need? Canadian talent must be attracted to the centers and labs that have been created, which means making the opportunity genuinely interesting. It’s also important to offer compelling equity and long-term incentives. It’s expensive, but if the talent economy in Canada is to bolster itself, employers need to think more strategically about matching or emulating international-style equity models, grants, and research budgets. Engineers want to feel that they can build something significant; companies should do what they can to build the sandboxes that these engineers want to play in.

Furthermore, companies are encouraged to partner with local colleges and universities so that they can align their interests with those of Canada’s top AI innovators. Develop interesting ways to fund cross-appointments, joint labs, or even industrial research chairs.

Companies may also wish to ask themselves: how skilled is our existing talent? If companies are not sure, they would benefit from upskilling. To begin, companies can drive internal reskilling and establish AI-centric learning paths. For example, engaging workers in AI 101 learning sessions can help non-technical staff understand AI itself.

Lastly, but perhaps most importantly, it is important to frame AI strategy as core business strategy, not a side project. AI disruption is old news. The ship has already set sail. Every industry is transforming. If companies are adopting AI now or are planning to do so in the near future, it is best to think strategically. For instance, ask: how might AI drive our business strategy, as opposed to merely summarizing emails? By making AI more self-evidently valuable in terms of business growth, companies are more likely to attract talent.
Delaying Investment in AI Talent is a Strategic Risk

Canada’s AI brain drain may still feel distant to many executives, but the lead time for losing competitive edge is long. If Canadian firms don’t move to secure talent now, they’ll find themselves significantly behind their competitors. For any business in Canada that is eyeing AI, the choice is not whether to care about the brain drain. It’s whether to treat it as a strategic pillar.
- LR Special Series: March 2025 AI Legislation Status | voyAIge strategy
LR Special Series: March 2025 AI Legislation Status
What is the Status of AI Statutes in Canada, the United States, and the European Union (EU)?
By Christina Catenacci, Human Writer, Mar 4, 2025
Legislative Roundup Special Series: Issue 1

Key Points:
- The jurisdictions examined are currently at different stages of legislative development when it comes to AI regulation
- The EU’s AI Act was the first of its kind and has become the gold standard; by contrast, Canada is left with only PIPEDA for privacy regulation and no legislation for AI regulation
- It is unlikely that the United States will create a comprehensive federal AI statute, but some States have created their own AI statutes, namely California, Colorado, and Utah

So many AI developments have transpired in such a short period of time, and it is important to keep on top of them. Here is the latest on the jurisdictions at issue:

Canada

Bill C-27 proposed privacy and AI legislation in one long document. It was introduced during the first session of the 44th Parliament. On January 6, 2025, Parliament was prorogued by a proclamation of the Governor General on the advice of the Prime Minister, putting an end to the parliamentary session. This means that Bill C-27 died at prorogation, along with all other legislation on the Order Paper. Similarly, the study of Bill C-27 by the Standing Committee on Industry and Technology (INDU) that began in September 2023 ceased upon prorogation. As a result, Canada is still left with PIPEDA for privacy and no statute governing AI. Given the current political situation, where the Liberals are in the process of selecting a new leader who will undoubtedly be welcomed with an election, it is not clear whether or to what extent the government will introduce either privacy or AI legislation.
United States

On his first day back in the White House, President Trump rescinded Biden’s Executive Order on AI safety, and on January 23, 2025, he signed an Executive Order removing barriers to American AI innovation. The document from the White House stated that the Biden AI Executive Order established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership. Therefore, President Trump revoked Biden’s Executive Order and called for departments and agencies to revise or rescind all policies, directives, regulations, orders, and other actions taken under the Biden AI Executive Order that are inconsistent with enhancing America’s leadership in AI. From this, we can deduce that there will likely not be a national AI bill coming any time soon.

If we turn to the States, we see from the IAPP AI Governance Tracker 2025 and a helpful map of the State jurisdictions’ requirements that three States have gone through the legislative process that ended with an AI bill being signed into law:

1. California: Introduced on January 31, 2024, AB 2013 requires, on or before January 1, 2026, a developer of an artificial intelligence system or service made available to Californians for use, regardless of whether the terms of that use include compensation, to post on the developer’s internet website documentation regarding the data used to train the artificial intelligence system or service. It takes effect on January 1, 2026. In addition, SB 942 sets out rules for businesses that provide a publicly accessible generative AI system with over 1 million monthly visitors during a 12-month period within the State’s geographic boundaries. It also takes effect on January 1, 2026.

2. Utah: On March 13, 2024, Utah enacted SB 149, which imposes certain disclosure requirements on entities using generative AI tools with their customers, and limits an entity’s ability to blame generative AI for statements or acts that constitute consumer protection violations. Interestingly, the bill defines generative artificial intelligence as an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight. SB 149 took effect on May 1, 2024.

3. Colorado: SB 205 provides consumer protections in interactions with AI systems. It was approved by the Governor and became effective on May 17, 2024. That said, the detailed requirements start to apply to developers and deployers on February 1, 2026.

It is worth noting that Virginia just saw its HB 2094 pass, but it is not yet signed into law. It creates requirements for the development, deployment, and use of high-risk AI systems, as defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill takes effect on July 1, 2026. Additionally, there are several bills that have been introduced but are still in the early stages. The potential is there for more bills to progress; in this way, critical AI guardrails can be established, at least at the State level.

The European Union

The Artificial Intelligence Act (AI Act) introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach (minimal, specific transparency, high, and unacceptable risk). The goal is to foster responsible AI development and deployment in the EU. Ahead of its time, the European Commission first proposed the AI Act in April 2021. It entered into force on August 1, 2024; however, none of its provisions applied immediately, since they were designed to start applying gradually over time.
On February 2, 2025, prohibitions on certain AI systems and requirements on AI literacy began to apply (Chapters I and II). Chapter I deals with general provisions such as scope, definitions, and AI literacy. Chapter II sets out a number of prohibitions, such as placing on the market or putting into service an AI system that exploits any of the vulnerabilities of a person or group of persons due to their age, disability, or economic situation. More provisions come into effect on August 2, 2025, involving notified bodies (Chapter III, Section 4), GPAI models (Chapter V), governance (Chapter VII), confidentiality (Article 78), and penalties (Articles 99 and 100). Ultimately, the rest of the AI provisions begin to apply on August 2, 2026, except for Article 6(1), which becomes effective on August 2, 2027.

What can we take from this status report?

As can be seen from the above discussion, the different jurisdictions are currently at different stages of legislative development when it comes to AI regulation. It is also worth mentioning that even once a bill is signed into law, it is common for AI legislation to have a delayed effective date, because businesses need time to prepare for the upcoming changes. By far, the EU is ahead of the pack, having already created the provisions that are necessary to limit out-of-control AI activity. On the other hand, Canada is in the worst position, since it is not even known who will govern, what a new AI statute might look like, or when it might be proposed in the form of a comprehensive bill. Then there is the United States. There does not appear to be any hope of further AI regulation at the federal level; this may be why States have been moving forward on their own and proposing AI bills, including the three States that recently signed AI bills into law.
- Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok | voyAIge strategy
Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok
TikTok Must Do More to Protect Children’s Privacy
By Christina Catenacci, Sep 26, 2025

Key Points:
- On September 23, 2025, the OPC released the findings of the joint investigation of TikTok
- The measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate
- Several recommendations were made that TikTok will have to follow, and the company has already started bringing itself into compliance

On September 23, 2025, the OPC released the findings of the joint investigation of TikTok by the OPC and the Privacy Commissioners of Quebec, British Columbia, and Alberta (Privacy Commissioners). The OPC said in its Statement that the measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate.

About TikTok

TikTok is a popular short-form video sharing and streaming platform, available both through its website and as an app. Its content features 30–50 second videos, although it also offers other services, such as live streaming and photos. TikTok also provides multiple interactive features, such as comments and direct messaging, for content creators and users to connect with other users. The company’s core commercial business has been the delivery of advertising, which it enabled by using the information it collected about its users to track and profile them for the ultimate purposes of delivering targeted advertising and personalizing content. TikTok’s platform has a high level of usership, with 14 million monthly active Canadian users as of November 2024.

What was the Investigation About?
The investigation examined TikTok’s collection, use, and disclosure of personal information for the purposes of ad targeting and content personalization on the platform, with particular focus on TikTok’s practices as they relate to children. More specifically, the Offices considered whether TikTok:

- engaged in these practices for purposes that a reasonable person would consider appropriate in the circumstances, that were reasonable in their nature and extent, and that fulfilled a legitimate need
- obtained valid and meaningful consent and, in the case of individuals in Quebec, met its transparency obligations under Quebec’s Act Respecting the Protection of Personal Information in the Private Sector

What did the Privacy Commissioners Find?

The Privacy Commissioners found the following:

Issue 1: Was TikTok collecting, using, and disclosing personal information, in particular with respect to children, for an appropriate, reasonable, and legitimate purpose?

TikTok collected and made extensive use of potentially sensitive personal information of all its users, including both adults and children. Despite TikTok’s Terms of Service stating that users under the age of 13 were not allowed to use the platform, the investigation ultimately found that TikTok had not implemented reasonable measures to prevent its collection and use of the personal information of underage users. Therefore, the Privacy Commissioners found that TikTok’s purposes for collecting and using underage users’ data to target advertising and personalize content (including through tracking, profiling, and the use of personal information to train machine learning models and refine algorithms) were not purposes that a reasonable person would consider to be appropriate, reasonable, or legitimate under the circumstances.
TikTok’s collection and use of underage users’ data for these purposes did not address a legitimate issue, or fulfill a legitimate need or bona fide business interest. More specifically, when balancing interests (an individual’s right to privacy and a corporation’s need to collect personal information), it was important to consider the sensitivity of the information. The Privacy Commissioners pointed out that information relating to children was particularly sensitive.

On a site visit, they noted that the hashtags “#transgendergirl” and “#transgendersoftiktok” were displayed as options for an advertiser to use as targeting criteria. TikTok personnel were unable to explain, either during the site visit or when offered a follow-up opportunity, why these hashtags had been available on the ad manager platform as options. The company later confirmed that the hashtags should not have been available, had since been removed as options, and had not been used in any Canadian ad campaigns from 2023 to the date of the site visit in 2024. The Privacy Commissioners stated, “While TikTok resolved this specific issue after it was discovered by our investigation, we remain concerned that this sensitive information had not been caught by TikTok proactively and that individuals could potentially have been targeted based on their transgender identity”.

Even where certain elements of the information that TikTok used for profiling and targeting its users (including underage users) could be considered less sensitive when taken separately, when taken together, associated with a single user, and refined by TikTok with the use of its analytics and machine learning tools, that information could be rendered more sensitive, since insights could be inferred from it in relation to the individual, such as their habits, interests, activities, location, and preferences.

There were a large number of underage users (under 13 years), notwithstanding the rules that prohibited them.
TikTok has been banning an average of about 500,000 accounts per year in Canada; in 2023 alone, 579,306 accounts were removed for likely belonging to children under 13. The Privacy Commissioners concluded that the actual number of accounts held by underage users on the platform was likely much higher.

What’s more, TikTok used an “age gate”, which required the user to provide a date of birth during the account creation process. When a date of birth corresponded to an age under 13, account creation was denied, and the device was temporarily blocked from creating an account. The Privacy Commissioners determined that this was the only age assurance mechanism that TikTok implemented at the sign-up/registration stage to prevent underage users from creating an account and accessing the platform.

Moreover, TikTok had a moderation team to identify users who were suspected to be underage, and members of this team were provided with specific training to identify individuals under the age of 13. The moderation team relied on user reports (where someone, such as a parent, contacted TikTok to report that a user was under the age of 13) and automated monitoring (which included scans for keywords in text inputted by the user that would suggest they could be under the age of 13, like “I am in grade three”, or, in the case of TikTok LIVE, the use of computer vision and audio analytics to help identify individuals under 18 years). Then, moderators conducted manual reviews of accounts identified by these flags, including a review of posted videos, comments, and biographic information, to decide whether to ban an account.

In light of the deficiencies in TikTok’s age assurance mechanisms, the Privacy Commissioners found that TikTok implemented inadequate measures to prevent those users from accessing, and being tracked and profiled on, the platform.
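The "age gate" described above amounts to a simple date-of-birth check at sign-up. As a minimal illustrative sketch only (the function names and the 13-year threshold are assumptions for illustration; this is not TikTok's actual implementation, which the report does not disclose), such a check might look like:

```python
from datetime import date

MIN_AGE = 13  # hypothetical minimum age mirroring the policy described above


def age_from_dob(dob: date, today: date) -> int:
    """Compute a person's age in whole years from a date of birth."""
    years = today.year - dob.year
    # Subtract one year if this year's birthday has not yet occurred
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years


def passes_age_gate(dob: date, today: date) -> bool:
    """Return True if the declared date of birth meets the minimum age."""
    return age_from_dob(dob, today) >= MIN_AGE
```

As the investigation notes, a self-declared date of birth is trivially circumvented, which is why the Commissioners faulted reliance on it as the sole sign-up-stage age assurance mechanism.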
TikTok had no legitimate need or bona fide business interest for its collection and use of the sensitive personal information of these underage users in relation to all the jurisdictions involved, whether federal, Alberta, British Columbia, or Québec. The Privacy Commissioners stated: “We are deeply concerned by the limited measures that the company has put in place to prevent children from using the platform. We find it particularly troubling that even though TikTok has implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, evidence suggest that the company did not consider using those tools or other similar tools to prevent underage users from accessing, and being tracked and profiled on, the platform”.

Issue 2: Did TikTok obtain valid and meaningful consent from its users for tracking, profiling, targeting and content personalization?

It was not necessary for the Privacy Commissioners to consider this question, since organizations were not allowed to rely on consent for the collection, use, or disclosure of personal information when its purpose was not appropriate, reasonable, or legitimate within the meaning of the legislation. They stated, “In other words, obtaining consent does not render an otherwise inappropriate purpose appropriate”. In this case, the Privacy Commissioners had already found that TikTok’s collection and use of personal information from children was not for an appropriate purpose. That said, the Privacy Commissioners decided to continue the analysis regarding meaningful consent from adults (aged 18 and above) and youth (aged 13–17).
Ultimately, the Privacy Commissioners found that TikTok did not explain its practices (related to tracking, profiling, ad targeting, and content personalization) to individuals in a manner that was sufficiently clear or accessible, and therefore did not obtain meaningful consent from platform users, including youth users.

More specifically, the legislation (excluding that of Québec; see Issue 2.1) required consent for the collection, use, or disclosure of personal information, unless an exception applied. The type of consent required varied depending on the circumstances and the sensitivity of the personal information. When taken together, the personal information collected and used by TikTok via tracking and profiling for the purposes of targeting and content personalization could be sensitive. Where the personal information involved was sensitive, the organization had to obtain express consent. This is especially true since many of TikTok’s practices were invisible to the user. Where the collection or use of personal information fell outside the reasonable expectations of an individual, or outside what they would reasonably provide voluntarily, the organization generally could not rely upon implied or deemed consent.

For consent to be meaningful, organizations had to inform individuals of their privacy practices in a comprehensive and understandable manner. In addition, organizations had to place additional emphasis on four key elements:

- What personal information is being collected;
- With which parties personal information is being shared;
- For what purposes personal information is collected, used, or disclosed; and
- The risk of harm and other consequences

The Privacy Commissioners concluded that more needed to be done by TikTok to obtain valid and meaningful consent from its users.
This was important with respect to TikTok’s privacy communications (during the account creation process, in its Privacy Policy, and in pop-ups, notifications, and supporting materials like the help centre and FAQs), and the youth-specific privacy protections, such as the default privacy settings that made youth accounts private by default without the ability to live stream or send and receive direct messages. Although TikTok created videos, added a youth portal, and prepared documentation aimed at youth, more needed to be done to protect their privacy.

In addition, when it came to adults 18 years and older, the Privacy Commissioners determined that TikTok did not explain its privacy practices with respect to the collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization in a manner that would result in meaningful consent being obtained from those users. Though the company made significant information available to users regarding its privacy practices, including through just-in-time notices and in a layered format, and even tried to improve its practices, the Privacy Commissioners found that:

- TikTok did not provide certain key information about its privacy practices up-front;
- its Privacy Policy did not explain its practices in sufficient detail for users to reasonably understand how their personal information would be used and for what purposes;
- other available documents with further details were difficult to find and not linked in the Privacy Policy; and
- many key documents, including TikTok’s Privacy Policy, were not made available in French.

Also, TikTok failed to adequately explain its collection and use of users’ biometric information.

When it came to the meaningfulness of consent from youth users, it became clear that the same communications were used for both youth and adults, and they were similarly inadequate.
The Privacy Commissioners pointed out that children were particularly vulnerable to the risks arising from the collection, use, and disclosure of their personal information. In fact, UNICEF Canada has called for a prohibition on the use of personal data in the development of targeted marketing towards children and young people, because it has been established that they are extremely vulnerable to such advertising. The Commissioners also noted that there are other potential general harms to children and youth resulting from targeted advertising, including the marketing of games that can lead to the normalization of gambling, and an increased risk of identity theft and fraud through profiling associated with targeted advertising. TikTok failed to obtain meaningful consent from youth for its collection and use of their personal information, including via tracking and profiling, for purposes of ad targeting and content personalization. More specifically, the Privacy Commissioners found that, in addition to the fact that TikTok’s privacy communications were inadequate to support consent from adults, TikTok’s youth-specific privacy measures were also inadequate to ensure meaningful consent from youth for the following reasons: youth-specific communications in TikTok’s portal were not easy to find; none of those communications explained TikTok’s collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization; and TikTok provided no evidence to establish that its communications had, in fact, led to an understanding by youth users of what personal information TikTok would use, and how, for such purposes. The Privacy Commissioners stated: “Given these risks and sensitivities, we would expect TikTok to implement a consent model and privacy communications that seek to ensure that individuals aged 13-17 can meaningfully understand and consent to TikTok’s tracking, profiling, targeting and content personalization practices 
when they use the platform. This includes an expectation that TikTok would develop their communications intended for users aged 13-17 in language that those users can reasonably understand, taking into account their level of cognitive development. TikTok should also make clear to those users the risk of harm and other consequences associated with use of the platform, consistent with the Consent Guidelines and section 6.1 of PIPEDA. In light of the fact that younger users may not be aware of the existence and implications of targeted advertising, TikTok’s privacy communications should include prominent up-front notification that targeted ads may be delivered to them on the platform to influence their behaviour.” Issue 2.1: Did TikTok meet its obligations to inform the persons concerned with respect to the collection and use of personal information to create user profiles for the purposes of ad targeting and content personalization? Rather than an obligation to obtain consent, and regardless of the type of personal information, the Québec legislation provides that when personal information is collected directly from the person concerned, the company collecting the information has an obligation to inform the person concerned. A person who provides their personal information in accordance with the privacy legislation consents to its use and its communication for the purposes for which it was collected. In this case, TikTok collects personal information from the user using technology with functions that enable it to identify, locate, or profile the user. Specifically, TikTok uses its platform (website and app) along with associated technologies such as computer vision and audio analytics, as well as the three age models, to collect and infer information about users (including their demographics, interests and location) to create a profile about them. 
These profiles can in turn be used to assist in the delivery of targeted advertising and tailored content recommendations on the platform. Since TikTok did not meet the obligation to inform the person, the Privacy Commissioners found that the collection of personal information by TikTok was not compliant with Québec’s legislation. Also, TikTok did not, by default, deactivate functions that allowed a person to be identified, located, or profiled using personal information. Since users did not have to make an active gesture to activate these specific functions, the Privacy Commissioners found that TikTok contravened the requirements of Québec’s legislation. Moreover, the fact that TikTok was not ensuring that the privacy settings of its technological product provided the highest level of privacy by default, without any intervention by the person concerned, also contravened the legislation. Consequently, TikTok’s practices did not comply with sections 8, 8.1 and 9.1 of Québec’s private sector privacy legislation. The Privacy Commissioners stated: “Subsequent to engagement with the Offices, a new stand-alone privacy policy for Canada was published in July 2025.” What were the Recommendations that TikTok will be working to follow? Given the above findings, the company agreed to work with the Privacy Commissioners to resolve the matter. 
More specifically, TikTok committed to the following:
- Implement three new enhanced age assurance mechanisms that are to be demonstrably effective at keeping underage users off the platform
- Enhance its privacy policy to better explain its practices related to targeted advertising and content personalization, and make additional relevant privacy communications more accessible, including by links in the privacy policy and up-front notices
- Cease allowing advertisers to target under-18 users, except via generic categories such as language and approximate location
- Publish a new plain-language summary of its privacy policy for teens, and develop and distribute a video to teen users to highlight certain of TikTok’s key privacy practices, including its collection and use of personal information to target ads and personalize content
- Enhance privacy communications, including through prominent up-front notices, regarding its collection and use of biometric information and the potential for data to be processed in China
- Implement and inform users of a new “Privacy Settings Check-up” mechanism for all Canadian users, which would centralize TikTok’s “most important and tangible” privacy settings and allow users to more easily review, adjust, and confirm those setting choices

What Has TikTok Done in Response to the Findings? In response to the joint findings and recommendations, the OPC News Release states that TikTok has agreed to strengthen privacy communications to ensure that users, and in particular younger users, understand how their data could be used, including for targeted advertising and content personalization. In addition, TikTok has also agreed to enhance age-assurance methods to keep underage users off TikTok and to provide more privacy information in French. In fact, the company quickly began making some improvements during the investigation. As a result, the matter was found to be well-founded and conditionally resolved with respect to all three issues. 
The Privacy Commissioners will continue to work with TikTok to ensure the final resolution of the matter through its implementation of the agreed-upon recommendations. The Privacy Commissioner of Canada, Philippe Dufresne, stated: “TikTok is one of the most prevalent social media applications used in Canada, and it is collecting vast amounts of personal information about its users, including a large number of Canadian children. The investigation has revealed that personal data profiles of youth, including children, are used at times to target advertising content directly to them, which can have harmful impacts on their well-being. This investigation also uncovered the extent to which personal information is being collected and used, often without a user’s knowledge or consent. This underscores important considerations for any organization subject to Canadian privacy laws that designs and develops services, particularly for younger users. As technology plays an increasingly central role in the lives of young people in Canada, we must put their best interests at the forefront so that they are enabled to safely navigate the digital world.” For more information, readers can view the Backgrounder: Investigation into TikTok and user privacy. What Can We Take from This Development? Although TikTok generally disagreed with the Privacy Commissioners’ findings, the company did commit to working with the Privacy Commissioners and had already started to make improvements. What we can see from this case is that when it comes to youth privacy, there can be no excuse for faulty privacy protections—it is important to get it right and provide the highest level of privacy by default. This is especially true with respect to complying with Québec’s private sector privacy legislation, mainly because Québec has already caught up and created private sector privacy legislation that closely resembles the more protective General Data Protection Regulation. 
It is notable that the Privacy Commissioners said that they were deeply concerned by the limited protective measures that TikTok had in place to protect youth privacy. They found it particularly troubling that even though TikTok implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, the company did not consider using those tools, or other similar tools, to prevent underage users from accessing, and being tracked and profiled on, the platform. We can see how important it is for companies like TikTok to ensure that the purposes for collection, use, and disclosure are what a reasonable person would consider to be appropriate, reasonable, or legitimate under the circumstances. What is more, companies need to make sure that they obtain valid and meaningful consent, in line with the Guidelines for obtaining meaningful consent. Also, adequate age assurance mechanisms need to be in place to ensure that underage users are not let onto the platform. And when it comes to consent to use biometric information, more needs to be done to ensure that there is proper express consent, due to the sensitive nature of the information. We also cannot forget the importance of ensuring that companies communicate as clearly as possible and properly explain things such as company practices in a company Privacy Policy. Finally, it is important to remember what the Privacy Commissioners said: obtaining consent does not render an otherwise inappropriate purpose appropriate. If the purposes are not appropriate, users will not be able to consent. So, if you cannot protect users with appropriate or reasonable measures, there is no point in asking for consent to collect or use their personal information.
- The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI | voyAIge strategy
The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI Agreement has potential to influence safety improvements on AI systems By Christina Catenacci Sep 13, 2024 Key Points: The Safety Institute has signed research agreements with Anthropic and OpenAI The Safety Institute will receive access to major new models from each company prior to and following their public release The Safety Institute will be providing feedback and collaborating with the companies The US AI Safety Institute (Safety Institute) has recently signed research agreements with Anthropic and OpenAI. This article describes the details as set out in the Safety Institute’s recent press release. What is the Safety Institute? The Safety Institute, located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. In fact, it is focused on developing the testing, evaluations and guidelines that will help accelerate safe AI innovation in the United States and around the world. The Safety Institute recognizes the potential of artificial intelligence, but simultaneously acknowledges that there are significant present and future harms associated with the technology. Additionally, the Safety Institute is dedicated to advancing research and measurement science for AI safety, conducting safety evaluations of models and systems, and developing guidelines for evaluations and risk mitigations, including content authentication and the detection of synthetic content. Then, 270 days following President Biden’s Executive Order on AI, the Safety Institute created draft guidance to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models. 
In fact, NIST released final versions of three guidance documents that were first issued in April for public comment, as well as a draft guidance document from the Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. The goal is to have the following guidance documents and testing platform inform software creators about the risks and help them develop ways to mitigate those risks while supporting innovation:
- Preventing Misuse of Dual-Use Foundation Models
- Testing How AI System Models Respond to Attacks
- Mitigating the Risks of Generative AI
- Reducing Threats to the Data Used to Train AI Systems
- Global Engagement on AI Standards

One guidance document, Managing Misuse Risk for Dual-Use Foundation Models, deals with the key challenges in mapping and measuring misuse risks. This is followed by a discussion of several objectives: anticipate potential misuse risk; establish plans for managing misuse risk; manage the risk of model theft; measure the risk of misuse; ensure that misuse risk is managed before deploying foundation models; collect and respond to information about misuse after deployment; and provide appropriate transparency about misuse risk. What do the Agreements Require? In its press release, the Safety Institute announced collaboration efforts on AI safety research, testing, and evaluation with Anthropic and OpenAI. In fact, each company’s Memorandum of Understanding establishes the framework for the Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Elizabeth Kelly, director of the U.S. AI Safety Institute, stated: “Safety is essential to fueling breakthrough technological innovation. 
With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety…These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.” It will be interesting to see what comes of these collaborations. More specifically, time will tell whether the Safety Institute actually provides meaningful feedback to Anthropic and OpenAI on potential safety improvements to their models, and whether the companies attempt to incorporate the Safety Institute’s feedback to improve safety and better protect consumers.
- The Government of Canada launches an AI Strategy Task Force and Public Engagement | voyAIge strategy
The Government of Canada launches an AI Strategy Task Force and Public Engagement Public Consultations run from October 1, 2025 to October 31, 2025 By Christina Catenacci, human writer Oct 3, 2025 Key Points On September 26, 2025, the Government of Canada announced the launch of an AI Strategy Task Force and a “30-day national sprint” that will help shape Canada’s approach to AI The government will be seeking advice on a broad range of AI-related themes, including: research and talent; AI adoption across industry and governments; commercialization of AI; scaling Canadian AI champions and attracting investments; building safe AI systems and strengthening public trust in AI; education and skills; building enabling infrastructure; and security of the Canadian infrastructure and capacity The AI Task Force is made up of technical experts in areas such as research and talent, AI adoption, commercialization, scaling and attracting investment, education and skills, infrastructure, and security On September 26, 2025, the Government of Canada, via the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, announced the launch of an AI Strategy Task Force (Task Force) and a “30-day national sprint” (Consultation) that will help shape Canada’s approach to AI. The ultimate goal of the Task Force and Consultation is to set out a renewed AI strategy for Canada. What Will the Task Force Do? The government will be seeking advice on a broad range of AI-related themes, including:
- research and talent
- AI adoption across industry and governments
- commercialization of AI
- scaling Canadian AI champions and attracting investments
- building safe AI systems and strengthening public trust in AI
- education and skills
- building enabling infrastructure
- security of the Canadian infrastructure and capacity

The Task Force, a group of experts, will consult their networks to provide actionable insights and recommendations. What are the Consultations About? 
Canadians are invited to share their perspectives to help define the next chapter of Canada’s AI. The Consultation begins October 1, 2025 and ends on October 31, 2025. Subsequently, ideally in November, the AI Strategy Task Force will share the ideas that they gather. Who is on the AI Task Force? The AI Strategy Task Force comprises several leaders of the AI ecosystem and will be consulting their networks on specific themes, as listed below:

Research and Talent
- Gail Murphy, Professor of Computer Science and Vice-President – Research & Innovation, University of British Columbia and Vice-Chair at the Digital Research Alliance of Canada
- Diane Gutiw, Vice-President – Global AI Research Lead, CGI Canada and Co-Chair of the Advisory Council on AI
- Michael Bowling, Professor of Computer Science and Principal Investigator – Reinforcement Learning & Artificial Intelligence Lab, University of Alberta and Research Fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI Chair
- Arvind Gupta, Professor of Computer Science, University of Toronto

Adoption across industry and governments
- Olivier Blais, Co-Founder and Vice-President of AI, Moov.AI and Co-Chair of the Advisory Council on AI
- Cari Covent, Strategic Data and AI Advisor, Technology Executive
- Dan Debow, Chair of the Board, Build Canada

Commercialization of AI
- Louis Têtu, Executive Chairman, Coveo
- Michael Serbinis, Founder and Chief Executive Officer, League and Board Chair of the Perimeter Institute
- Adam Keating, Founder and Chief Executive Officer, CoLab

Scaling our champions and attracting investment
- Patrick Pichette, General Partner, Inovia Capital
- Ajay Agrawal, Professor of Strategic Management, University of Toronto and Founder, Next Canada and Founder, Creative Destruction Lab
- Sonia Sennik, Chief Executive Officer, Creative Destruction Lab
- Ben Bergen, President, Council of Canadian Innovators

Building safe AI systems and public trust in AI
- Mary Wells, Dean of Engineering, University of Waterloo
- Joelle Pineau, Chief AI Officer, Cohere
- Taylor Owen, Founding Director, Center for Media, Technology and Democracy

Education and Skills
- Natiea Vinson, Chief Executive Officer, First Nations Technology Council
- Alex Laplante, Vice-President – Cash Management Technology Canada, Royal Bank of Canada and Board Member at Mitacs
- David Naylor, Professor of Medicine and President Emeritus
- Sarah Ryan, Senior Research Officer, Canadian Union of Public Employees

Infrastructure
- Garth Gibson, Chief Technology and AI Officer, VDURA
- Ian Rae, President and Chief Executive Officer, Aptum
- Marc Etienne Ouimette, Chair of the Board, Digital Moment and Member, OECD One AI Group of Experts, Affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy

Security
- Shelly Bruce, Distinguished Fellow, Centre for International Governance Innovation
- James Neufeld, Founder and Chief Executive Officer, samdesk
- Sam Ramadori, Co-President and Executive Director, LawZero

What Can We Take from this Development? First and foremost, if you are interested in providing comments, do it right away (you may need to sprint to participate in the national sprint) because the deadline is fast approaching. Second, the AI Minister was quick to point out that Canada was the first country in the world to launch a funded national AI strategy—the Pan-Canadian AI Strategy (PCAIS)—to drive adoption of AI across Canada’s economy and society. In fact, he went so far as to deny that Canada is behind other countries in AI and needs to catch up. He seemed to be of the view that Canada is and remains a leader in AI—even though we have watched other countries surpass Canada in AI (and privacy) regulation. This response will likely be disappointing for many Canadians, since it is important to first acknowledge that there is a problem before it is possible to fix it. 
Third, the announcement also referred to the recent investment of $2 billion for the Canadian Sovereign AI Compute Strategy in 2024—despite the fact that governments and companies in other countries have poured in much larger amounts and have gotten much further ahead than Canada. For example, Google invested $33 billion in 2024; it plans on investing $75 billion in 2025. Fourth, the AI Task Force announcement mentioned the list of experts, most of whom are involved in the technical aspects of AI. While it is important to have a task force made up of technical experts in areas such as research and talent, AI adoption, commercialization, scaling and attracting investment, education and skills, infrastructure, and security, it is also important to include philosophers and ethicists, legal minds, and sociologists, as they would provide myriad perspectives and create a more holistic understanding of how AI needs to be implemented in society. Fifth, the AI Minister has said that the members of the AI Task Force will consult their networks to provide actionable insights and recommendations. What does this mean, exactly? It appears that experts who have something to offer the government can only participate in this phase if they fall into certain networks. This is disappointing, given that many Canadians who are not part of these select networks could provide valuable insights into how we can create a renewed AI strategy in Canada.
- What are AI agents? | voyAIge strategy
What are AI agents? Will AI agents make our lives easier? By Christina Catenacci Oct 22, 2024 Key Points AI agents act autonomously—with agency—and can act as virtual employees throughout an organization and across industries Some companies are starting to launch AI agents; for example, Microsoft is launching Copilot Studio, and Salesforce is launching Agentforce As companies roll out AI agents, it is likely that several tasks will be included in the package, involving sales, tracking expenses, or customer service You may be wondering what all the hype is about when hearing about AI agents. What are they? How are they different from chatbots? How can they make my life easier? AI agents are not like chatbots, which need to be prompted. Rather, AI agents act autonomously—with agency. They may not be common right now, but it has been projected that by 2028, 33 percent of enterprise software applications will include agentic AI, up from less than one percent in 2024, enabling 15 percent of day-to-day work decisions to be made autonomously. Also referred to as intelligent agents, AI agents use AI techniques to complete tasks and achieve goals. In fact, AI agents do the following:
- receive instructions
- create a plan
- use AI to complete tasks
- produce outputs

Why would businesses need these AI agents? By having intelligent agents, organizations can increase the number of automatable tasks and workflows. This saves time and automates monotonous tasks so that (human) employees can be freed up to complete more interesting and intellectually stimulating work. Eventually, AI agents will likely create more complicated plans and create more stunning deliverables—all autonomously. As wonderful as these tools might be, some significant concerns are likely to arise—whether privacy, security, or ethical concerns. For instance, when an AI agent is carrying out autonomous tasks to achieve goals, there is some question about whether it will violate a privacy law in the process. 
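For readers who like to see the mechanics, the receive instructions, create a plan, complete tasks, produce outputs cycle described above can be sketched as a short loop. This is a toy illustration only: the Agent class, its stub plan and execute methods, and the "expense-bot" example are all hypothetical, and a real agent would delegate planning and execution to a language model and to external tools (email, calendars, CRMs) under human oversight.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical minimal agent: receives an instruction, plans, acts, outputs.
    name: str
    log: list = field(default_factory=list)

    def plan(self, instruction: str) -> list:
        # Stub planner: split the instruction into steps.
        # A real agent would ask a model to decompose the goal.
        return [s.strip() for s in instruction.split(",") if s.strip()]

    def execute(self, step: str) -> str:
        # Stub executor: record the step instead of calling a real tool.
        result = f"done: {step}"
        self.log.append(result)
        return result

    def run(self, instruction: str) -> list:
        # The full cycle: plan the instruction, then execute each step.
        return [self.execute(step) for step in self.plan(instruction)]

agent = Agent("expense-bot")
outputs = agent.run("collect receipts, categorize expenses, draft report")
print(outputs)  # → ['done: collect receipts', 'done: categorize expenses', 'done: draft report']
```

The point of the sketch is the shape of the loop, not the stubs: the privacy and ethics questions below arise precisely because the plan and execute steps run without a human in between.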
And the AI agent may not be ethically aware enough to consider how important it is to ensure that how one accesses information is in line with human values. When trying to accomplish certain goals, would an AI agent concern itself with human values? Even if it would, would it sufficiently understand human values in every circumstance? If it does not, who would be responsible if the AI agent did something that was not what a human would consider to be acceptable conduct? Perhaps the law of agency would apply and make the owner of the AI agent responsible for any unintended consequences. But if a large portion of users starts to use these AI agents simultaneously, how would it be possible to minimize the extent of the harm that could ensue—all at once? Interestingly, it has been reported that Microsoft will be launching these AI agents in its Copilot Studio in the very near future. In fact, Microsoft forecasts that these AI agents will carry out tasks throughout the workplace in many industries—to help with that, the company is going to be releasing 10 fine-tuned agents and giving users the ability to create their own agents. For example, some of the agents that Microsoft Copilot Studio will be releasing include:
- Sales Qualification Agent
- Supplier Communications Agent
- Customer Intent Agent
- Customer Knowledge Management Agent

Indeed, Microsoft has claimed that AI automation will remove the boring parts of jobs instead of replacing entire jobs. The company plans on achieving this goal by allowing businesses and developers to build AI-powered Copilots that can work as virtual employees. Unlike a chatbot waiting to be prompted, the Copilot would do things such as monitoring email inboxes or automating a series of tasks in the place of employees. Employees will need to appreciate that the parts of jobs that will be eliminated are the kinds of things employees do not enjoy doing. 
When this type of process takes place, HR Managers may need to reconfigure jobs so that there are more tasks that humans can do contained in each job. To be sure, Microsoft has built some controls into Copilot Studio so that it does not go rogue and complete inappropriate tasks autonomously. Managers will likely need to provide guidance to employees with respect to what types of tasks are good candidates to be automated; they will need to assign those tasks to the AI Copilot and leave more delicate or complex work tasks to human employees. Technically speaking, Copilot Studio combines the natural language understanding models already in Copilot Studio with Azure OpenAI to:
- understand what the copilot maker wants to achieve by parsing their request
- apply knowledge of how nodes within a topic work together, and how a topic should be constructed for the best effect
- generate a series of connected nodes that together form a full topic
- use plain language in any node that contains user-facing text that corresponds with the copilot maker's request

According to Microsoft, the “Create with Copilot” option in Copilot Studio allows users to simply describe what they want to achieve, and it then produces a topic path that achieves that goal. Microsoft recommends that users include granular instructions in a description and limit the scope of the description to a single topic. It is possible to modify a topic if necessary, using natural language. Apparently, Microsoft’s latest announcement has deepened its rivalry with Salesforce, especially since Salesforce just presented its “Agentforce” at its Dreamforce conference. Indeed, the competition is fierce. We shall see: AI agents are in their early stages.
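As a thought experiment, the describe-a-goal, get-a-topic-path idea that Microsoft describes can be caricatured in a few lines. Nothing here is a real Copilot Studio API: the Node class and build_topic function are hypothetical stand-ins for what, in the real product, a language model would do when parsing the maker's request and generating connected nodes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Hypothetical topic node, loosely inspired by the "connected nodes"
    # idea in the product description above.
    kind: str                          # e.g. "message", "question", "action"
    text: str
    next_node: Optional["Node"] = None

def build_topic(description: str) -> list:
    """Turn a plain-language description into a connected chain of nodes."""
    # Stub "parser": one node per clause; a real system would use a model.
    clauses = [c.strip() for c in description.split(";") if c.strip()]
    nodes = [Node(kind="message", text=c) for c in clauses]
    # Connect the nodes so each one points to the next in the topic path.
    for current, following in zip(nodes, nodes[1:]):
        current.next_node = following
    return nodes

topic = build_topic("greet the customer; ask for the order number; confirm the refund")
print([n.text for n in topic])  # → ['greet the customer', 'ask for the order number', 'confirm the refund']
```

The caricature also shows why Microsoft recommends granular, single-topic descriptions: the quality of the generated path depends entirely on how cleanly the request decomposes into steps.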
- ABOUT | voyAIge strategy
AI governance experts helping businesses navigate AI challenges with clarity and confidence. Our Founders We are lawyers, consultants, and PhDs who are passionate about demystifying the complexities of AI through the lens of law, policy, and ethics. Drawing on our extensive experience working with government and private companies, we empower organizations to embrace AI with confidence so that they can navigate its challenges with informed, ethical, compliant strategies. Christina Catenacci, BA, LLB, LLM, PhD Christina Catenacci is a member of the Law Society of Ontario. Christina worked as an editor with First Reference between 2005 and 2015, working on publications including The Human Resources Advisor (Ontario, Western and Atlantic editions), HRinfodesk, and the First Reference Talks blog discussing topics in Canadian Labour and Employment Law. She continues to contribute to First Reference Talks as a regular guest blogger, where she writes on surveillance technologies, AI, and privacy law, policy, and ethics. Since graduating with her PhD in law (with specialties in Privacy law and Employment law), Christina has worked as a law professor, senior policy advisor, and self-employed consultant. Christina has also appeared in the Montreal AI Ethics Institute's AI Brief, the International Association of Privacy Professionals’ Privacy Advisor, Tech Policy Press, and Slaw, Canada's online legal magazine. Christina is a driven, organized, and resilient thinker and lifelong learner who works at the intersection of technology, privacy, surveillance, and ethics, partnering with organizations to design creative solutions to their challenges—with integrity, enthusiasm, and passion. She has extensive knowledge of the privacy laws of three main jurisdictions: Canada, the United States, and the European Union. She also has significant knowledge and experience in the labour and employment laws of all jurisdictions in Canada. 
She has considered the privacy implications of cutting-edge surveillance technologies involving the consumer, employment, and health contexts. In her PhD dissertation, Christina created a comparative socio-legal analysis in which she synthesized privacy legislative provisions, privacy cases, and social theories of privacy and surveillance to propose new provisions in Canada’s federal privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA). In particular, she proposed closing the electronic surveillance gap with novel legislative data protection provisions by modifying and adding provisions to PIPEDA. Christina has a friendly and collaborative personality, a strong work ethic, and strong organizational and analytical skills. Christina has extensive education and experience in each of the three pillars of voyAIge strategy. First, she has a law degree at the PhD level with two specialties (Employment and Privacy Law), and she has been a law professor for a number of years. Christina has also worked as a legal editor for about 20 years. Second, Christina has written a PhD dissertation in legislative privacy policy and has worked as a senior policy advisor with the Government of Ontario. And third, Christina took a large concentration of philosophy and ethics courses when earning her first degree, and excelled in her legal philosophy course when earning her PhD. She continues to have a strong interest in philosophy, including technology ethics. Finally, Christina has taught legal ethics, where the program focused on the Ontario Law Society’s Rules of Professional Conduct. She likes to learn about the art of public speaking, organize and present at conferences, and creatively combine ideas in interdisciplinary ways. 
In fact, Christina was a finalist in the Three Minute Thesis (3MT) Competition, where she competed against students from across the entire University of Western Ontario, and she has since been the only law student from Western Law to reach this stage. Christina is interested in Design Thinking and finding innovative solutions to wicked problems. Furthermore, Christina likes to engage with thought leaders and ask critical questions about the implications of technology. Tommy Cooke, BA, MA, PhD Tommy Cooke is an experienced professor of political science, sociology, and engineering. He is also an experienced project manager, team director, and principal investigator. After completing his PhD in Communication & Culture (York University) in 2017 with a focus on privacy theory and ethical technology, he completed a Research Fellowship at the Centre for Advanced Internet Studies (CAIS) in Bochum, Germany, followed by a Social Sciences and Humanities Research Council of Canada (SSHRC) Postdoctoral Fellowship at the Surveillance Studies Centre, Queen's University. At Queen's University, Tommy received the Wicked Ideas Award for innovative research methodology. He currently leads a multinational research team (ADITLOM) that studies privacy safeguards around the collection of smartphone location data. Tommy was also the Ethics, Privacy, & Internal Threat Assessment Manager at the Centre for Advanced Computing, where he and his team provided oversight on a university-province collaboration to analyze the spread of COVID-19 using machine learning. Tommy has worked as a Policy Advisor at the Office of the Information and Privacy Commissioner of Ontario, and as a Senior Analyst on Deloitte Canada's Trustworthy AI team. Tommy has been a Professor of Political Science (UWO), Sociology (UWO), and Engineering (Queen's) and has published extensively in the areas of privacy, AI, and ethical technology. Our Talent voyAIge strategy is proud to work with the brightest strategic and analytical talent.
We are grateful to learn and grow with highly motivated individuals who listen and solve problems innovatively. Matt Milne, BA, MS Matt Milne is a cybersecurity researcher, manager, and technology specialist with a master's degree in information security and digital forensics from Niagara University, an NSA-designated center for academic excellence. His expertise includes cyber forensics, ethical hacking, software security, cryptography, and risk management. His thesis explored AI readiness and cybersecurity in public organizations, focusing on security, transparency, and social good. He also holds a bachelor's degree from Western University in sociology and political science, with a certification in critical security studies. His research has addressed topics like cell phone malware and smart city security. CompTIA Security+ certified and preparing for the CISSP exam, Matt combines over a decade of management and customer service experience with a passion for technology. He is dedicated to raising awareness of cybersecurity risks and the societal impacts of emerging technologies.
- Projects | voyAIge strategy
VS Media Room Stay on top of the latest about VS. Announcements, press information, and interactive content are found right here. 01 VS Welcomes Matt Milne VS is excited to welcome cybersecurity and AI expert Matt Milne. Learn more about Matt's impact and experience analyzing complex digitally-related issues and solutions. 02 Published with the International Association of Privacy Professionals What does it mean to govern AI as a small business? We recently got the conversation started with the International Association of Privacy Professionals. 03 Featured Blog at IN2Communications Want to learn more about AI policies? VS was featured in IN2Communication's blog series, exploring what AI policies are - and why they are crucial to any organization using AI. 04 Canada Club of London Featured Guest Speakers We were honoured to be invited by the Canada Club of London to speak on all things AI governance.
- Disclaimer and Terms of Use | voyAIge strategy
Disclaimer and Terms of Use - Our Policies for Working with our Clients
- Canada’s Innovation Crossroads | voyAIge strategy
Canada’s Innovation Crossroads Why Performance Matters Now By Tommy Cooke, fueled by curiosity and caffeine Jan 16, 2026 Key Points: Canada’s innovation challenge is no longer about talent or ideas, but about translating world-class knowledge into sustained economic and social impact Decades of declining research and development investment and slow technology adoption are eroding Canada’s competitiveness at the exact moment global innovation pressures are intensifying Reversing this trajectory requires coordinated action across business, academia, and government to treat innovation as essential infrastructure rather than optional ambition Canada is facing a moment of truth. In a world marked by rapid technological change, intensifying global competition, and ever-shifting economic power, a country’s capacity to innovate is no longer a niche advantage. It is essential. Alas, the latest assessment from the Council of Canadian Academies (CCA) paints a sobering picture. Canada’s innovation performance is declining. This speaks directly to the future of Canadian competitiveness, jobs, social systems, and the livelihoods of people across such a massive country. The fundamental tension that the report highlights is familiar to many Canadian leaders: despite exceptional research and globally recognized talent, Canada struggles to turn ideas into economic action. As the expert panel put it, we are at a “critical juncture” in Canada’s history. The Numbers Don’t Lie The CCA’s report systematically benchmarks Canada’s performance in science, technology, and innovation against other nations. In many key metrics, Canada lags significantly behind its peers. Both business and government research and development spending are well below the Organisation for Economic Co-operation and Development (OECD) average, an underinvestment that impedes productivity growth and dulls competitive edge. What’s striking here isn’t just the relative performance.
It is where Canada is heading. While research and development investment in Canada has been steadily declining over the last 30 years, the rest of the world has been trending in the opposite direction. For business leaders, this development matters because research and development, technology adoption, and innovation directly influence productivity, economic growth, export competitiveness, and the ability of Canadian firms to lead in global markets. Knowledge is Power… if it Catalyzes Change Canadian universities remain a genuine strength. They continue to produce world-class research and attract top talent with high levels of international collaboration and impact. These institutions are a national asset. And yet, excellence in knowledge creation does not automatically translate into innovation success. Many post-secondary institutions still struggle to commercialize discoveries. While Canada certainly excels at ideation, the CCA reports that it truly struggles to translate knowledge into pragmatic, measurable action and outcomes. This is precisely where other economies are significantly outperforming Canada. AI is a useful case study here. Canada indeed played an early and influential role in advancing AI research, with breakthroughs that shaped the field globally. But according to the CCA, Canada’s strength in AI research has not yet yielded proportional commercial or economic impact. While Canada’s banks and retailers lead AI adoption nationally, wider industrial adoption and commercialization lag behind the rest of the world, meaning that Canada’s early lead is eroding. The Consequences of Inaction Why does this matter to business leaders? It is self-evident that lagging innovation stifles GDP growth, limits the proliferation of high-paying jobs, and diminishes competitiveness in global markets. But among the more significant impacts are the societal implications.
Innovation isn’t just about making technology. It has a corresponding social effect that connects directly to how efficiently public services are delivered. It also affects national resilience; modern healthcare delivery and housing solutions–which are exceptionally expensive yet constant societal hurdles in Canada–depend upon innovation that streamlines population support capabilities. The report’s emphasis on “a serious and widening gap between our potential and performance” is thus a call to action. It signals that maintaining the status quo is not an option if Canada wants to preserve its standard of living and social solidarity in an era of rapid and uncertain change. What Canadian Leaders Must Wrestle With If there is a central insight from this report for Canadian business and public sector leaders, it is this: improving innovation performance requires systemic and coordinated action across the ecosystem. There is no single silver bullet here. Strengthening Canada’s innovation performance means rethinking how it funds research and development, how it supports firms in adopting new technologies, and how it helps startups scale into globally competitive businesses. For business leaders, this means investing more deliberately in technology adoption, building internal innovation capabilities, and collaborating across sectors. It also means engaging more actively with legislators and policymakers to signal where structural barriers are holding firms back. A Path Forward for Canada Despite its severe tone, the report does not mean Canada is doomed. Far from it. The foundational elements of a high-performing innovation ecosystem are present. Canada has educated citizens, excellent universities, robust industries, and a cultural openness to new ideas. Canada just needs to figure out its struggles with knowledge translation. Canadian leaders in business, academia, and government must align around innovation to make it a strategic imperative.
The CCA report is not a verdict—it shows where Canada stands today and highlights the strategic choices that lie ahead. How Canada responds will shape not just economic statistics, but the everyday lives of Canadians in jobs, public services, and opportunities for future generations.
- Newsom Vetoes California's SB 1047 | voyAIge strategy
Newsom Vetoes California's SB 1047 A missed opportunity to lead in AI regulation By Christina Catenacci Oct 1, 2024 Key Points: Governor Newsom vetoed SB 1047, California’s AI safety bill, on September 29, 2024 Many view the veto as a missed opportunity for California to lead in AI regulation in the United States The bill created significant controversy in Silicon Valley because of the concern that it was too rigid and would stifle innovation. As you may have heard, Governor Gavin Newsom of California just vetoed California’s AI safety bill, SB 1047. We wrote about this potentially landmark bill earlier, where we explained the inner workings of the text. The article also noted that the bill was the first of its kind in the United States and had the potential to influence how other states crafted their AI statutes and regulations. Professors Hinton and Bengio supported it too. So why did the bill fail? The bill, which attempted to balance AI innovation with safety, made it through readings in the State Assembly and in the Senate—and was ready to be signed by Newsom. It seemed that the author of the bill, Senator Scott Wiener, was confident that a great deal of work had gone into it and that it deserved to pass. Let us examine the veto note signed by Newsom on September 29, 2024. First, it appears that Newsom was concerned about the fact that the bill focused only on the most expensive and large-scale models—something that could give the public a false sense of security about controlling AI. He pointed out that smaller, more specialized models could be equally or even more dangerous than the models that the bill targeted. Second, Newsom called the bill well-intentioned, but also remarked that it did not take into account whether an AI system was deployed in high-risk environments, involved critical decision-making, or dealt with the use of sensitive data.
Newsom stated, “Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology”. Third, Newsom agreed that we could not afford to wait for a major catastrophe to occur before taking action to protect the public, and he emphasized that California would not abandon its responsibility—but he did not agree that, to keep the public safe, the State had to settle for a solution that was not informed by an empirical trajectory analysis of AI systems and capabilities. He stressed that any framework for effectively regulating AI had to keep pace with the technology itself. He also added that the US AI Safety Institute was developing guidance on national security risks, informed by evidence-based approaches to guard against demonstrable risks to public safety. Additionally, he noted important initiatives that agencies within his Administration were taking in the form of performing risk analyses of the potential threats and vulnerabilities to California's critical infrastructure using AI. Newsom highlighted that a California-only approach might be warranted, especially absent federal action by Congress, but it had to be based on empirical evidence and science. He concluded his remarks by saying that he was committed to working with the Legislature, federal partners, technology experts, ethicists, and academia to find the appropriate path forward. He stated, “Given the stakes—protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good—we must get this right”. He simply could not sign SB 1047. We should all keep in mind that this was a controversial bill that caused a kerfuffle in Silicon Valley.
In fact, various tech leaders had been saying that the bill was too rigid, objecting to its potential to hinder innovation and drive companies out of California. Several lobbyists and politicians had been communicating their concerns in the past few weeks, including former Speaker Nancy Pelosi. Other powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI, and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies. It makes sense that there was significant concern about the bill in Silicon Valley: SB 1047 dealt with serious harms and set out considerable consequences for noncompliance. It is no surprise that those in the tech industry were in favour of avoiding them. Indeed, the bill was called “hotly contested” given the industry's complaints about it. That said, many who supported the bill viewed SB 1047 as an opportunity to lead the way on American AI regulation. In contrast, tech industry leaders reacted positively to Newsom’s veto, and even expressed gratitude to Newsom on social media.
On September 29, 2024, Senator Wiener responded to the decision to veto the bill, noting the following: The veto was a setback for everyone who believed in oversight of massive corporations that are making critical decisions affecting the safety and welfare of the public and the future of the planet. While the large AI labs had made admirable commitments to monitor and mitigate risks, the truth was that voluntary commitments from industry were not enforceable and rarely worked out well for the public. The veto left us with the troubling reality that companies aiming to create an extremely powerful technology faced no binding restrictions from American policymakers, particularly given “Congress’s continuing paralysis” around regulating the tech industry in any meaningful way. With respect to Newsom’s criticisms, Wiener stated that “SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd”. Finally, the veto was a missed opportunity for California to once again lead on innovative tech regulation. Wiener stated that California would continue to lead the conversation, and it was not going anywhere. We will have to wait and see what is proposed in the near future. In the meantime, it is important to note that there were some smaller AI bills that Newsom did sign into law. For instance, Newsom recently signed one to crack down on the spread of deepfakes during elections, and another to protect actors against their likenesses being replicated by AI without their consent. As for Wiener, he stated, “It’s disappointing that this veto happened, but this issue isn’t going away”.