- Technology Lessons from Orange Shirt Day | voyAIge strategy
Technology Lessons from Orange Shirt Day Applying Indigenous Principles to Build Ethical and Inclusive Organizational Practices By Tommy Cooke Oct 4, 2024 Key Points: OCAP principles guide respectful and community-centered data management Indigenous Data Sovereignty emphasizes transparency and accountability in data use Indigenizing technology creates inclusive systems that honor cultural perspectives The National Day for Truth and Reconciliation, also known as Orange Shirt Day in Canada, is a time to reflect on the painful history of residential schools and it is a time to honor survivors and those who never came home. It is a day of remembrance – and also a time for learning and growth. At voyAIge strategy, we have recently reflected on the many lessons Indigenous technology and media leaders provide through their work and example. We recognize that Indigenous business practices present an opportunity for organizations to revitalize their own practices. Our Co-Founder Tommy hosts the What’s That Noise?! Podcast . For the past year, it has been the home to a special series called One Feather, Two Pens , co-hosted by Lawrence Lewis , Co-Founder and CEO of OneFeather . This Insight draws upon some of the series’ nine podcast episodes to share key values and principles that can serve as powerful growth opportunities for organizations working with data and complex technology like AI. Here are four principles and values that could help your organization foster more ethical, inclusive, and accountable data practices: 1. Ownership, Control, Access, and Possession (OCAP) The OCAP principles – Ownership, Control, Access, and Possession – are essential for understanding respectful data management. Indigenous communities have long advocated that they must have the ability to govern data about their people and lands. OCAP is not merely about data privacy. It speaks to the heart of how communities can control and protect data that are used to generate stories about Indigenous peoples. These principles are also crucial for protecting the ability for Indigenous peoples to continue authentically telling their own stories. At any organization, OCAP can reshape how data is treated. Data are not just numbers. They represent people, their histories, and their futures. As Ja'Elle Leite , CEO of Ultralogix , mentioned on a recent episode of One Feather, Two Pens, stories from Indigenous communities carry important lessons for those who are willing to listen. Reflecting on how your organization handles and protects data, particularly when it relates to vulnerable or underrepresented groups, is a crucial step toward demonstrating ethical technology use. This applies to AI as much as it does any technology. 2. Indigenous Data Sovereignty Indigenous Data Sovereignty refers to the right of Indigenous peoples to govern data about their communities, cultures, and lands. If OCAP are data management steps an organization can take, Indigenous Data Sovereignty is a goal for many Indigenous peoples who encounter data and technology. In the era of Big Data, an organization can quickly lose sight of who controls the narratives that data create. Indigenous Data Sovereignty emphasizes the importance of giving Indigenous communities agency over their data and ensuring that its use aligns with their values. For your organization, applying this principle could mean being transparent about how data is collected, stored, and shared. 
It involves making sure that these processes can be explained and understood by the community members themselves. If your organization is gathering data that involves Indigenous populations, this principle is crucial for maintaining trust.

3. Indigenization of Technology

Indigenization means embedding Indigenous perspectives and values into existing systems. It's the deliberate practice of protecting and promoting culture through tools and technologies. It's helpful to recognize that bringing an Indigenous lens into an organization doesn't just benefit Indigenous stakeholders. It also helps organizations make their technology more inclusive and culturally aware. James Delorme, CEO of Indigelink Digital Inc., who was featured in Episode 8 of the podcast series, highlighted the importance of intentionally bringing Indigenous perspectives into spaces and places where they haven't traditionally existed. For businesses, this might look like creating spaces for Indigenous collaboration, particularly in decision-making processes related to technology and data. For example, this could involve re-evaluating how data flows through your company, ensuring that the systems in place don't marginalize Indigenous voices or stories.

4. Narrative Authority

Elle-Máijá Tailfeathers, award-winning filmmaker and storyteller, shared in Episode 5 of our series the importance of narrative authority – the power and right of individuals and communities to control, shape, and share their own stories. In an organizational setting, narrative authority prompts organizations to think deeply about how they present information, especially when it relates to Indigenous peoples. Organizations must be self-aware of how their data collection, products, and services might filter or alter Indigenous narratives. Engaging directly with Indigenous communities is vital when your data or technology practices involve or represent their stories. This ensures that narratives are not only accurate but also owned and told by the right people and voices.

Bridging Indigenous Data and Technology Insights with Organizational Practice

Reflecting on these principles offers organizations a chance to recalibrate their ethical approaches to data and technology. Ethics in the digital age isn't just about compliance or creating polished policies. It's about respecting the stories behind the data and the people represented by them. As we learned from Indigenous thought leaders, ethical technology practices require constant dialogue, humility, and openness. Here are three tips to help bridge Indigenous principles with your own organizational practices:

Translate Ethics into Action: Don't just publish ethics policies. Turn them into daily actions. Ask how OCAP or Indigenous Data Sovereignty can be applied to your organization's specific context.

Engage Communities: Actively engage Indigenous voices, especially when your technology touches their data, culture, or representation. Make space for dialogue and collaboration.

Be Accountable: Ensure transparency and accountability in how data is managed and shared. Being answerable to the communities your data affects is a hallmark of ethical practice.

Orange Shirt Day reminds us of the power of stories, and Indigenous communities have much to teach us if we listen. By adopting even some of their many data and technology principles, organizations can not only create more ethical and inclusive data systems but also honor the cultural wisdom that strengthens them.
- AI in Cybersecurity | voyAIge strategy
AI in Cybersecurity A Game Changer for MSPs When People Come First Tommy Cooke Dec 13, 2024 Key Points: AI transforms cybersecurity for MSPs by enabling real-time threat detection, automating responses, and predicting vulnerabilities. Effective deployment requires tailored training, thoughtful vendor selection, and clear communication with all stakeholders through strong thought leadership. When it comes to AI in cybersecurity, trust is essential, so ensure that people always come first.

Managed Service Providers (MSPs), companies that provide ongoing technology services and maintenance for their clients, are among the many business types undergoing significant change due to AI, especially MSPs providing cybersecurity solutions. Cybersecurity is complicated. With the average data breach now costing an organization more than $4.75 million, coupled with the fact that 88 percent of breaches are caused by human error, MSPs themselves are often the targets of hackers. It is perhaps unsurprising that the industry is undergoing significant transformation: manual monitoring, static rules, and signature-based detection methods can no longer keep pace with new modalities of cyberattack driven by AI. AI isn't merely a weapon for bad actors: it is also a tool for progressive MSPs to protect you and themselves. Here are a few ways that AI is transforming cybersecurity.

How AI Is Transforming Cybersecurity for MSPs

Proactive Threat Detection

AI analyzes massive amounts of data in real time to identify anomalies or unusual patterns that could reveal malicious activity. Through machine learning models, AI can uncover subtle deviations in network activity, login behaviors, or file access patterns that might otherwise go unnoticed. How many times has your bank emailed or texted you about suspicious purchase activity? There's a good chance that AI is helping them out. Capabilities like this allow MSPs to respond faster and build more defensive strategies for their clients and themselves.

Automated Incident Response

AI is also excellent at responding to threats automatically. Rather than depending on a human to isolate compromised systems, block malicious IPs, or trigger containment protocols, these tasks can be automated and run 24/7. By reducing downtime and enhancing the ability to prevent damage, AI frees up humans so that they can focus on strategic decision-making. It also gives them more time to use AI cybersecurity tools in a sandbox, a virtual space where they can test the vulnerability of their own and their clients' systems to ensure that a given cybersecurity solution is watertight.

Predictive Intelligence

Beyond detecting threats, AI can forecast them. Fed historical incident data, whether from a client directly or from global threat intelligence indexes, AI can help a cybersecurity firm identify patterns and trends behind emerging vulnerabilities. As many of us experience on a near-daily basis, the software and systems we use are updated all the time. This is yet another instance where AI is likely assisting one of your many preferred vendors in predicting issues and patching them before they arise.

Understanding AI-Cybersecurity Challenges

While scalability, efficiency, and enhanced trust are attractive to MSPs, AI is not a silver bullet. It is crucial that MSPs understand that AI can still misidentify threats, which can lead to alert fatigue.
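To make that tuning challenge concrete, here is a minimal sketch of anomaly-based detection using scikit-learn's IsolationForest on synthetic login telemetry. The features, numbers, and contamination setting are illustrative assumptions rather than a recommendation of any particular MSP tooling; the point is simply that the detector's sensitivity knob directly trades missed threats against the false positives that cause alert fatigue.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" login telemetry: [logins per hour, failed-login ratio]
normal_sessions = np.column_stack([
    rng.normal(5, 1.5, 1000),      # typical login volume
    rng.normal(0.05, 0.02, 1000),  # typical failure ratio
])

# A handful of suspicious sessions: bursts of logins with high failure ratios
suspicious_sessions = np.column_stack([
    rng.normal(60, 10, 5),
    rng.normal(0.7, 0.1, 5),
])

X = np.vstack([normal_sessions, suspicious_sessions])

# `contamination` is the sensitivity knob: set it too high and benign sessions
# get flagged (alert fatigue); set it too low and real attacks slip through.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)   # -1 = flagged as anomalous, 1 = treated as normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(X)} sessions for analyst review")
```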
AI solutions must be constantly tweaked, and it is imperative that companies listen to customers who may grow tired of repeatedly losing access to their credit card because of false positives. Automated cybersecurity solutions are also only as effective as the data on which they train. Data lacking representation of varied attack patterns can lead to gaps in threat detection. AI must also align with privacy laws and ethical standards, especially in client environments where sensitive and personal data are handled regularly, not to mention clients' intellectual property. As we have discussed on In2Communication's video series, The Smarketing Show, many AI cybersecurity platforms have a bad habit of automatically generating new policies and procedures for an organization, including ones that are not tailored to the dynamic shape and size of a company. This can introduce myriad problems for any organization.

Overcoming AI-Cybersecurity Challenges

Start with education: it is necessary to train teams on how to understand AI cybersecurity platforms. Ask whether your teams use the platforms as efficiently and with the same precision that they use their Enterprise Resource Planning (ERP) system. Question whether teams are aware of all of the platform's ins and outs, capabilities, tools, and shortcomings. Seamless use and integration depend upon giving a team time to play with the tool—to break it and make it work more effectively for your organization and your clients as well.

This is why it is also important to vet vendors thoroughly. Choose partners with proven expertise, transparency, and strong support, and scrutinize any vendor claiming to innovate by offering automated solutions, especially those that claim to generate policies and procedures automatically.

Lastly, communicate your vision. Your team, your clients, and your stakeholders are engaging AI at varied rates of exposure. They will have different opinions. Cybersecurity is already a high-stakes application for AI, so talk to your people. Explain how and why AI benefits them. Ensure the data remains secure. Prove that you are a thought leader before you implement anything.

AI in Cybersecurity is Effective—if Managed Properly

Remember that in the world of AI, people matter most. For the foreseeable future, AI is always going to be a trust issue. Effectively deploying AI means more than just the technology; it requires planning, ethical deployment, excellent training, and superb communication. AI can propel MSPs into the future of innovative and reliable cybersecurity solutions if they are able to recognize the inherent complexities of adoption and strategic thinking—processes that always start and end with people.
- Data Governance & Why Business Leaders Can’t Ignore It | voyAIge strategy
Data Governance & Why Business Leaders Can't Ignore It If You Plan to Adopt AI, Data Governance is a Must By Tommy Cooke, fueled by medium brew espresso Oct 13, 2025 Key Points: Data governance ensures reliability, trust, and efficiency and forms the foundation for business growth. Even small businesses face risks without governance, making simple practices essential for resilience. Strong data governance is the prerequisite for responsible and effective AI adoption.

Data governance isn't the flashiest topic in the world of digital transformation. It doesn't come with glossy demos or promises of instant breakthroughs. Yet its absence is one of the largest catalysts of failed data-driven practices like dashboarding and insight generation, and, of course, of the inevitable failure of AI itself. Even for smaller organizations, data governance is not optional. It is a framework that ensures data is reliable, locatable, streamlined, trustworthy, and safe.

What Data Governance Means

At its heart, data governance is about establishing clarity and accountability for information across an organization. It sets the rules for how data is collected, stored, accessed, and shared. Practically speaking, data governance involves developing policies, assigning responsibilities, and building processes that keep data accurate and consistent over time. Done well, governance answers practical questions: Who owns this customer data? What version of this report should we trust? Which compliance rules apply to this information? Who is accountable if something goes wrong? When these questions don't have answers, organizations waste precious energy manually searching for spreadsheets and PDFs, correcting data entry errors, and responding to breaches that could have been prevented. It also overwhelms your IT support team.

Why Business Leaders Should Care

Leaders are already responsible for risk, reputation, and growth. Data governance intersects with all three of these vital aspects of the business. Let's break things down a bit further:

First, there is risk. Inconsistently managed data leads to serious consequences, including reporting errors, missed opportunities, and, in worst-case scenarios, regulatory penalties.

Second, governance creates operational efficiency. When data is properly governed, staff don't need to spend hours reconciling reports. They are able to focus on their actual work and have confidence that the numbers in front of them are accurate, reliable, and backed by company policy.

Third, governance is about trust. Customers, employees, and partners all want to know that information is handled responsibly. Organizations that demonstrate care with data earn credibility. This is a competitive differentiator, not a hurdle or setback.

Finally, and as the old adage goes: bad input equals bad output. In other words, data governance lays the groundwork for innovation. AI, predictive analytics, and advanced automation all depend on high-quality data.

Data Governance is Not Just for Big Business

It is easy for smaller organizations to assume that governance is something only large enterprises need. A small business owner might feel that with a few spreadsheets, a CRM system, and basic accounting tools, there is no need for a formal framework. But this assumption is risky. Every organization, no matter the size, handles sensitive data: customer contact details, employee records, payment information, and proprietary knowledge.
The damage from a mistake can hit a smaller business just as hard as, if not harder than, a large one. However, this does not mean that governance for smaller firms needs to be complex. It can mean assigning a single person to oversee data practices, creating simple rules for file naming and storage, using cloud platforms with built-in compliance features, and providing staff with basic training. These steps are straightforward but powerful, preventing errors and setting a foundation for growth.

How Data Governance Connects to AI

In discussions about technology, and as we've discovered talking to our clients here at VS, leaders often hear the terms "data governance" and "AI governance." Data governance deals with the quality, security, and compliance of the information itself. AI governance, by contrast, addresses the systems and models built on top of that data: how algorithms are designed, deployed, and monitored for fairness, accuracy, and safety. The connection between the two is direct. Without strong data governance, AI systems cannot be trusted. Poor data, as I mentioned earlier, leads to bad outputs. Strong data governance, on the other hand, gives AI governance a firm foundation. Leaders who are serious about responsibly using AI must first take their data governance responsibilities seriously.

The Human Dimension

Too often, governance is described as a technical framework. But it is ultimately about people and processes. Employees need clarity about their roles in managing information. Teams need processes that make it easy to do the right thing. Leaders need to demonstrate commitment by modeling good practices and making governance part of the culture. When governance is human-centered, it feels less like red tape and more like confidence in action.

Practical Steps to Begin Your Data Governance Journey

The path to data governance does not need to start with a massive overhaul. Leaders can begin with simple, actionable steps:

Assign ownership. Make it clear who is responsible for each critical dataset.

Document standards. Establish guidelines for how data should be entered, named, and stored.

Audit your systems. Identify where data resides, who has access, and whether compliance gaps exist.

Train your people. Provide basic education on why governance matters and how to practice it.

Start small. Choose one domain—such as customer data or HR files—and implement governance practices there before expanding.

These steps build momentum and signal leadership commitment. Over time, they evolve into a supportive framework.

Data Governance is a Strategic Advantage

Data governance should not be seen as a regulatory chore. It is a way of unlocking value and protecting the future of the business. Organizations with strong governance are better positioned to innovate, comply with laws, reassure customers, and use advanced technologies responsibly. In a world where data is often described as the new oil, governance is the refinery. Without it, raw information is messy and hazardous. With it, data becomes clean fuel for decisions, strategy, and growth.
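For readers who want to see what the "assign ownership" and "document standards" steps above can look like in practice, here is a minimal sketch of a lightweight dataset register. The record fields, names, and checks are illustrative assumptions rather than a prescribed schema; a shared spreadsheet with the same columns accomplishes the same thing for a small business.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    """One entry in a lightweight data-governance register (illustrative fields)."""
    name: str
    owner: str                    # the single accountable person ("assign ownership")
    location: str                 # where the data actually lives ("audit your systems")
    contains_personal_data: bool
    retention_until: date         # when the data should be reviewed or deleted
    naming_convention: str        # the documented standard this dataset follows

def governance_gaps(register: list[DatasetRecord]) -> list[str]:
    """Flag records that break the simple rules described above."""
    gaps = []
    for record in register:
        if not record.owner:
            gaps.append(f"{record.name}: no accountable owner assigned")
        if record.contains_personal_data and record.retention_until < date.today():
            gaps.append(f"{record.name}: personal data held past its retention date")
    return gaps

# Example: a small business registering a CRM export
register = [
    DatasetRecord(
        name="crm_customers_2025",
        owner="",                             # gap: nobody owns this yet
        location="SharePoint/Sales/exports",
        contains_personal_data=True,
        retention_until=date(2024, 12, 31),   # gap: past its review date
        naming_convention="system_subject_year",
    ),
]

for issue in governance_gaps(register):
    print(issue)
```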
- Newsom Signs SB 53 Into Law | voyAIge strategy
Newsom Signs SB 53 Into Law The Transparency in Frontier Artificial Intelligence Act is Born By Christina Catenacci, human writer Oct 10, 2025 Key Points On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act. The new provisions add transparency and safety requirements for large developers of AI models. State-level regulation by other states is likely to follow.

On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (Act). Essentially, the bill adds provisions to Division 8 of the Business and Professions Code. The purpose of these added provisions is to establish transparency and safety requirements for large developers of AI models.

What does the law say?

The following are the main safety and transparency features contained in the Transparency in Frontier Artificial Intelligence Act:

Definitions:

Defines a "foundation model" as an AI model that is all of the following: (1) Trained on a broad data set; (2) Designed for generality of output; and (3) Adaptable to a wide range of distinctive tasks

Defines a "frontier model" as a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. The quantity of computing power described must include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model

Defines a "frontier developer" as a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications

Defines a "large frontier developer" as a frontier developer that together with its affiliates collectively had annual gross revenues in excess of $500,000,000 in the preceding calendar year

Defines a "frontier AI framework" as documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks

Requirements:

Large frontier developers must write, implement, comply with, and clearly and conspicuously publish on their internet websites a frontier AI framework that applies to their frontier models and describes how the developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework; (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments; (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally; (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures; (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or
transfer by internal or external parties; (8) Identifying and responding to critical safety incidents; (9) Instituting internal governance practices to ensure implementation of these processes; and (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms Large frontier developers must review and, as appropriate, update its frontier AI framework at least once per year Large frontier developers must clearly and conspicuously publish a modified frontier AI framework and a justification for a modification within 30 days Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer must clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) The internet website of the frontier developer; (B) A mechanism that enables a natural person to communicate with the frontier developer; (C) The release date of the frontier model; (D) The languages supported by the frontier model; (E) The modalities of output supported by the frontier model; (F) The intended uses of the frontier model; and (G) Any generally applicable restrictions or conditions on uses of the frontier model Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer must include in the transparency report required above summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s frontier AI framework; (B) The results of those assessments; (C) The extent to which third-party evaluators were involved; and (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model A frontier developer that publishes the information described above as part of a larger document, including a system card or model card, is deemed to be in compliance with the above requirements regarding deploying the new frontier model. 
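To make the Act's definitional thresholds a little more concrete, here is a rough sketch of how an organization might screen itself against the compute and revenue tests summarized earlier. The function and variable names are assumptions for illustration only, and whether any duty actually applies is a legal question that turns on the full statutory text rather than on this simple arithmetic.

```python
# Numeric thresholds mirror the definitions summarized above; everything else
# (names, structure, the printed labels) is illustrative, not statutory text.
FRONTIER_COMPUTE_THRESHOLD_FLOPS = 1e26    # includes the original training run plus fine-tuning
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue, together with affiliates

def screen_developer(total_training_flops: float, annual_revenue_usd: float) -> str:
    """Rough self-screen against SB 53's frontier / large frontier developer thresholds."""
    if total_training_flops <= FRONTIER_COMPUTE_THRESHOLD_FLOPS:
        return "below the compute threshold: not a frontier developer under the Act"
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return ("large frontier developer: frontier AI framework, transparency report, "
                "and Office of Emergency Services reporting duties all apply")
    return "frontier developer: transparency report and incident reporting duties apply"

# Example: a model trained with 3e26 operations by a firm with $750M in revenue
print(screen_developer(3e26, 750_000_000))
```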
Moreover, frontier developers are encouraged (but not required) to make disclosures described in this part that are consistent with, or superior to, industry best practices.

The Office of Emergency Services:

Defines "catastrophic risk" as a foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people or more than $1,000,000,000 in damage to, or loss of, property arising from a single incident involving a foundation model doing any of the following: (A) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (B) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; and (C) Evading the control of its frontier developer or user

Defines a "critical safety incident" as any of the following: (1) Unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) Harm resulting from the materialization of a catastrophic risk; (3) Loss of control of a foundation model causing death or bodily injury; or (4) A foundation model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk

Large frontier developers must transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of their frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services, with written updates as appropriate

Frontier developers are not allowed to make materially false or misleading statements about catastrophic risk from their frontier models or their management of catastrophic risk. Likewise, large frontier developers are not allowed to make materially false or misleading statements about their implementation of, or compliance with, their frontier AI frameworks

When frontier developers publish documents to comply with the above requirements, they may make redactions to those documents to protect trade secrets, cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law;
in the case where a frontier developer redacts information in a document, the frontier developer must describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction, and must retain the unredacted information for five years The Office of Emergency Services must establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident that includes all of the following: (1) The date of the critical safety incident; (2) The reasons the incident qualifies as a critical safety incident; (3) A short and plain statement describing the critical safety incident; and (4) Whether the incident was associated with internal use of a frontier model Similarly, the Office of Emergency Services must establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models The Office of Emergency Services must take all necessary precautions to limit access to any reports related to internal use of frontier models to only personnel with a specific need to know the information and to protect the reports from unauthorized access Frontier developers must report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer must disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. That said, frontier developers are encouraged (but not required) to report critical safety incidents pertaining to foundation models that are not frontier models The Office of Emergency Services must review critical safety incident reports submitted by frontier developers and may review reports submitted by members of the public The Attorney General or the Office of Emergency Services may transmit reports of critical safety incidents and reports from covered employees. Either entity must strongly consider any risks related to trade secrets, public safety, cybersecurity of a frontier developer, or national security when transmitting reports Beginning January 1, 2027, and annually thereafter, the Office of Emergency Services must produce a report with anonymized and aggregated information about critical safety incidents that have been reviewed by the Office of Emergency Services since the preceding report. However, it must not include information in a report that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States or that would be prohibited by any federal or state law. The Office of Emergency Services must transmit the report to the Legislature and to the Governor Frontier developers who intend to comply with the above provisions concerning the Office of Emergency Services must declare their intent to do so to the Office of Emergency Services. 
Once they make this declaration: (A) The frontier developer must be deemed in compliance to the extent that the frontier developer meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document until the frontier developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation; and (B) The failure by a frontier developer to meet the standards of, or comply with the requirements stated by, the federal law, regulation, or guidance document designated must constitute a violation of the Act On or before January 1, 2027, and annually thereafter, the Department of Technology must assess recent relevant evidence and developments and make recommendations about whether and how to update any of the following definitions: frontier model, frontier developer, and large frontier developer, and create a report Beginning January 1, 2027, and annually thereafter, the Attorney General must produce a report with anonymized and aggregated information about reports from covered employees. It cannot include information in a report pursuant to this subdivision that would compromise the trade secrets or cybersecurity of a frontier developer, confidentiality of a covered employee, public safety, or the national security of the United States or that would be prohibited by any federal or state law. it must also transmit a report to the Legislature and to the Governor Enforcement The consequences of noncompliance are steep: A large frontier developer that fails to publish or transmit a compliant document that is required to be published or transmitted, makes a statement in violation, fails to report an incident, or fails to comply with its own frontier AI framework must be subject to a civil penalty in an amount dependent upon the severity of the violation that does not exceed $1,000,000 per violation . A civil penalty must be recovered in a civil action brought only by the Attorney General CalCompute Moreover, the Act establishes within the Government Operations Agency a consortium that must develop a framework for the creation of a public cloud computing cluster to be known as CalCompute. The consortium must develop a framework for the creation of CalCompute that advances the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation that benefits the public and enabling equitable innovation by expanding access to computational resources at minimum Additionally, the consortium must make reasonable efforts to ensure that CalCompute is established within the University of California to the extent possible. 
CalCompute must include, but not be limited to, all of the following: (1) A fully owned and hosted cloud platform; (2) Necessary human expertise to operate and maintain the platform; and (3) Necessary human expertise to support, train, and facilitate the use of CalCompute On or before January 1, 2027, the Government Operations Agency must submit a report from the consortium to the Legislature with the framework developed for the creation and operation of CalCompute, and the report must include all of the following elements: (A) A landscape analysis of California’s current public, private, and nonprofit cloud computing platform infrastructure; (B) An analysis of the cost to the state to build and maintain CalCompute and recommendations for potential funding sources; (C) Recommendations for the governance structure and ongoing operation of CalCompute; (D) Recommendations for the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute; (E) An analysis of the state’s technology workforce and recommendations for equitable pathways to strengthen the workforce, including the role of CalCompute; (F) A detailed description of any proposed partnerships, contracts, or licensing agreements with nongovernmental entities, including, but not limited to, technology-based companies, that demonstrates compliance; and (G) Recommendations regarding how the creation and ongoing management of CalCompute can prioritize the use of the current public sector workforce The consortium has to consist of 14 members as follows: (1) Four representatives of the University of California and other public and private academic research institutions and national laboratories appointed by the Secretary of Government Operations; (2) Three representatives of impacted workforce labor organizations appointed by the Speaker of the Assembly; (3) Three representatives of stakeholder groups with relevant expertise and experience, including, but not limited to, ethicists, consumer rights advocates, and other public interest advocates appointed by the Senate Rules Committee; and (4) Four experts in technology and AI to provide technical assistance appointed by the Secretary of Government Operations If CalCompute is established within the University of California, the University of California may receive private donations for the purposes of implementing CalCompute Whistleblower Protections : Frontier developers are not allowed to make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk; and (2) The frontier developer has violated the Act Frontier developers are similarly not allowed to enter into a contract that prevents a covered employee from making a disclosure that is protected under the whistleblowing provisions Frontier developers must provide a clear notice to all covered employees of their rights and responsibilities under this Act, including by doing 
either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice; and (2) At least once each year, providing written notice to each covered employee of the covered employee’s rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees Large frontier developers must provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer’s activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated the Act The disclosures and responses of the process must be shared with officers and directors of the large frontier developer at least once each quarter. However, if a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, this rule does not apply with respect to that officer or director Courts are authorized to award reasonable attorney’s fees to a plaintiff who brings a successful action for a violation of this section Once it has been demonstrated by a preponderance of the evidence that an activity proscribed by this section was a contributing factor in the alleged prohibited action against the covered employee, the frontier developer has the burden of proof to demonstrate by clear and convincing evidence that the alleged action would have occurred for legitimate, independent reasons even if the covered employee had not engaged in activities protected by this section Covered employees may petition the superior court in any county where the violation in question is alleged to have occurred, or wherein the person resides or transacts business, for appropriate temporary or preliminary injunctive relief. The court must consider the chilling effect on other covered employees asserting their rights under this section in determining whether temporary injunctive relief is just and proper An order authorizing temporary injunctive relief must remain in effect until an administrative or judicial determination or citation has been issued, or until the completion of a review under the Act, whichever is longer, or at a certain time set by the court. Thereafter, a preliminary or permanent injunction may be issued if it is shown to be just and proper. Any temporary injunctive relief shall not prohibit a frontier developer from disciplining or terminating a covered employee for conduct that is unrelated to the claim of the retaliation. The remedies provided by this section are cumulative to each other As can be seen from the above discussion, it is good news that Governor Newsom was able to successfully enact critical AI protective legislation, notwithstanding the recent (and failed) attempts by the federal government to create moratoriums on state-level AI regulation. It appears that this is a signal that more progressive states can create and enforce their own AI legislation without unnecessary interference from the current administration. 
Given this fact, it will not be surprising to see states with similar values and approaches to AI boldly moving ahead with their legislative agendas and proposing new AI bills.

What can we take from this development?

Unlike SB 1047 (California's first legislative attempt at AI safety and transparency) that I wrote about here, SB 53 made it through to the end of the legislative process. It was designed to enhance online safety by installing commonsense guardrails on the development of frontier AI models and to help build public trust while also continuing to spur innovation in these new technologies. Additionally, the bill helps to advance California's position as a national leader, and indeed a world leader, in responsible and ethical AI. California continues to dominate the AI sector: it is the birthplace of AI, and the state is home to 32 of the 50 top AI companies worldwide. Governor Newsom stated: "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves."

As a result of this development, frontier AI developers will need to be transparent, safe, accountable, and responsive. Further, the CalCompute consortium will advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation. Moreover, the bill fills the gap left by Congress, which so far has not passed broad AI legislation, and provides a model for American states (and Canada) to follow.
- Budget 2025 | voyAIge strategy
Budget 2025 Canada’s Plans for AI By Christina Catenacci, human writer Nov 7, 2025 Key Points On November 4, 2025, the Government of Canada released Budget 2025: Canada Strong The federal government made several proposals to invest in AI and quantum computing One of the first things that Canadians will likely see is fresh feedback on how the consultations went, along with an update on the status of the development of the AI Strategy On November 4, 2025, the Government of Canada announced the release of Budget 2025: Canada Strong. Generally speaking, the federal government plans to transform Canada’s economy from one that is reliant on a single trade partner, to one that is stronger, more self-sufficient, and more resilient to global shocks. Essentially, Canada has just delivered an investment budget: the goal is to spend less on government operations and invest more in the workers, businesses, and nation-building infrastructure that will grow the economy. More specifically, the budget includes a total of $60 billion in savings and revenues over five years, and makes generational investments in housing, infrastructure, defence, productivity, and competitiveness. These strategic investments will enable $1 trillion in total investments over the next five years through smarter public spending and stronger capital investment. Budget 2025 rests on two fiscal anchors: Balancing day-to-day operating spending with revenues by 2028–29, shifting spending toward investments that grow the economy Maintaining a declining deficit-to-GDP ratio to ensure disciplined fiscal management for future generations Indeed, Budget 2025 was passionately delivered by The Honourable François-Philippe Champagne, Minister of Finance and National Revenue. He noted that these are difficult times, but we need to rest assured that the government will not back down, will be there for Canadians now and for as long as it takes, and will do what Canadians do best in times of need—we look after each other and help each other. He stated, “That’s the Canadian way, our way!” That said, he acknowledged that meeting this challenge requires both ambition and discipline. To mark the day, Champagne even made some shoes for the occasion: they were made by Canadians for Canadians to hammer home the point that we need to be our own best customers. We cannot forget the ending of the speech: “Long live Canada!” The entire budget is a lengthy document; this article deals with what the budget has articulated with respect to Canada’s plans for AI. What Does Budget 2025 Say about AI? Canada wants to seize the full potential of AI. The purpose is to create opportunities for millions of Canadians, businesses, and the economy. Budget 2025 will facilitate the creation of AI compute infrastructure, including the development of a Sovereign Canadian Cloud. Ultimately, AI will help to create new jobs and economic growth. It is not only about AI: the government plans to allocate funds to foster innovation in both AI and quantum technologies. More precisely, Budget 2025 aims to: Provide $925.6 million over five years, starting in 2025-26 : this is, to support a large-scale sovereign public AI infrastructure that will boost AI compute availability and support access to sovereign AI compute capacity for public and private research. The investment will ensure that Canada has the capacity needed to be globally competitive in a secure and sovereign environment. 
Of this amount, $800 million will be sourced from funds that were previously provisioned in the fiscal framework. This means that $800 million of the $925.6-million investment will come from funds that were set aside by the last federal budget, which announced a total of $2 billion to boost domestic AI compute capacity and build public supercomputing infrastructure

Enable the Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, to engage with industry to identify new promising AI infrastructure projects and enter into Memoranda of Understanding with those projects. Along the same lines, the government intends to enable the Canada Infrastructure Bank to invest in AI infrastructure projects

Allocate $25 million over six years, starting in 2025-26, and $4.5 million ongoing for Statistics Canada to implement the Artificial Intelligence and Technology Measurement Program (TechStat). TechStat will use data and insights to measure how AI is used by organizations, and to understand its impact on Canadian society, the labour force, and the economy

Explore options for the National Research Council of Canada's Canadian Photonics Fabrication Centre to best position it to attract private capital, scale its operations, and serve as a platform for Canadian innovation and new photonic applications, including in the face of the rise of AI and related compute infrastructure

Provide, through the Defence Industrial Strategy, $334.3 million over five years to strengthen Canada's quantum ecosystem. It is important to note that computing problems currently considered intractable even with the most powerful classical computers could be solved using quantum computers

Enable Canada to unlock significant economic benefits through commercialising the associated intellectual property (IP) and being among the first to put it to use. For example, when it comes to IP, the budget plans to provide $84.4 million over four years, starting in 2026-27, to Innovation, Science and Economic Development Canada to extend the Elevate IP program, as well as $22.5 million over three years, starting in 2026-27, to renew support for the Innovation Asset Collective's Patent Collective

Establish a new Office of Digital Transformation that will lead the adoption of AI and other new technologies across government. On top of that, there will be near-term procurement of made-in-Canada sovereign AI tools for the public service, which will lead to a more efficient government

Enable Shared Services Canada (SSC), the Department of National Defence, and the Communications Security Establishment to develop a made-in-Canada AI tool that will be deployed across the federal government. The goal is to facilitate the partnership between SSC and leading Canadian AI companies to develop the internal tool

As I wrote about here, the government announced in September 2025 the launch of an AI Strategy Task Force and a "30-day national sprint" (consultations) that will help shape Canada's approach to AI. The government is set to develop a new AI strategy by the end of 2025. It will also consider whether new AI incentives and supports should be provided. Already, the government has decided to work with Cohere to use AI to improve the public service. In fact, the parties signed an agreement to set up an early-stage collaboration so that Cohere can identify areas where AI can enhance public service operations.

What Can We Take from Budget 2025?
As Champagne has highlighted, Canada is strong and has a lot going for it. AI and quantum computing are part of this. In the context of this investment budget, we see that the government has allocated significant resources to improve Canada's AI and quantum computing posture. One of the first things that Canadians will likely see is fresh feedback on how the consultations went, along with an update on the status of the development of the AI Strategy. We can only wait and see if the above proposals will come to fruition.
- What Happened to the Algorithmic Accountability Act | voyAIge strategy
What Happened to the Algorithmic Accountability Act The US's Algorithmic Accountability Act is in effect By Christina Catenacci Sep 19, 2024 Key Points: the Algorithmic Accountability Act of 2023 is indeed in effect and contains several requirements for covered entities the penalties are serious for covered entities that do not comply—the FTC can enforce the Act and has broad powers to investigate and find violations involving unfair or deceptive acts or practices. Also, States can also bring a civil action on behalf of residents in the State to obtain appropriate relief Canada does not have anything like the Algorithmic Accountability Act of 2023 or the EU’s AI Act Some may be wondering about the Algorithmic Accountability Act in the United States. What happened to it? Did it ever pass? Do companies need to comply with it? How is the American approach different than that of the EU and Canada? What is the Algorithmic Accountability Act ? Generally speaking, the Algorithmic Accountability Act requires businesses that use automated decision systems to make critical decisions to study and report on the impact of those systems on consumers. What are critical decisions? They could be any decisions that have a significant effect on a consumer’s life, including housing, educational opportunities, employment, essential utilities, healthcare, family planning, legal services, or financial services. The Act also establishes the Bureau of Technology to advise the Federal Trade Commission (FTC) about the technological aspects of its functions. What is the status of the Algorithmic Accountability Act ? This story began in the beginning of 2022, with the 117th Congress. Bill HR 6580, the Algorithmic Accountability Act of 2022 , was introduced in the House of Representatives and referred to the House Committee on Energy and Commerce. The bill was then referred to the Subcommittee on Consumer Protection and Commerce. However, nothing happened after that point. That is, it failed to gain the support it needed to become law. It was not until September 2023 that Bill HR 5628, the Algorithmic Accountability Act of 2023, was introduced in the House in the 118th Congress. Subsequently, Bill 5628 was referred to the House Committee on Energy and Commerce, and later referred to the Subcommittee on Innovation, Data, and Commerce. This time, it passed in the House and the Senate. It went to the President and then became law. What does the law require? The Algorithmic Accountability Act of 2023 contains several definitions, some of which include augmented critical decision process (process), automated decision system (system), biometrics, covered entity, critical decision (decision), deploy, develop, identifying information, impact assessment, passive computing infrastructure, and third-party decision recipient. This Act applies to covered entities. 
Under the Act, a covered entity includes any person, partnership, or corporation that deploys any process and has any of the following:

had greater than $50,000,000 in average annual gross receipts or is deemed to have greater than $250,000,000 in equity value for three tax years

possesses, manages, modifies, handles, analyzes, controls, or otherwise uses identifying information about more than 1,000,000 consumers, households, or consumer devices for developing or deploying any system or process

is substantially owned, operated, or controlled by a person, partnership, or corporation that meets the above two requirements

had greater than $5,000,000 in average annual gross receipts or is deemed to have greater than $25,000,000 in equity value for three tax years

deploys any system that is developed for implementation or use, or that the person, partnership, or corporation reasonably expects to be implemented or used, in a process

meets any of the above criteria in the last three years

Essentially, covered entities must perform impact assessments of any deployed process that was developed for implementation or use, or that the covered entity reasonably expects to be implemented or used, in an augmented critical decision process, and of any augmented critical decision process, both prior to and after deployment by the covered entity. The covered entity must also maintain documentation of any impact assessment performed for three years longer than the duration of time for which the system or process is deployed.

Some other main requirements include:

disclosing status as a covered entity

submitting to the FTC an annual summary report for ongoing impact assessment of any deployed system or process (in addition to the initial summary report that is required for new systems or processes)

consulting with relevant internal stakeholders (such as employees and ethics teams) and independent external stakeholders (such as civil society and technology experts) as frequently as necessary

attempting to eliminate or mitigate, in a timely manner, any impact that affects a consumer's life

When it comes to the impact assessment, covered entities need to consider several things, depending on whether the system or process is new or ongoing. For example, for new systems or processes, covered entities must:

provide any necessary documentation

describe the baseline process being enhanced or replaced by a process

include information regarding any known harm, shortcoming, failure case, or material negative impact on consumers of the previously existing process used to make the critical decision

include information on the intended benefits of and need for the process, and the intended purpose of the system or process

It is also important to note that covered entities must, in accordance with National Institute of Standards and Technology (NIST) or other Federal Government best practices and standards, perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the system or process. Some examples of this include assessing and documenting the data minimization practices, assessing the information security measures that are in place, and assessing and documenting the current and potential future or downstream positive and negative impacts of these systems or processes.
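Because many of these obligations only attach if an organization is a covered entity in the first place, here is a simplified screen against a subset of the dollar and consumer-count thresholds summarized above. The function and parameter names are assumptions for illustration, and the real statutory test has additional nuances (ownership, look-back periods, and the definition of an augmented critical decision process), so treat this as an arithmetic illustration rather than legal analysis.

```python
# Thresholds mirror the covered-entity summary above; the helper itself is a
# hypothetical illustration and deliberately omits several statutory nuances.
def may_be_covered_entity(
    avg_annual_gross_receipts_usd: float,
    equity_value_usd: float,
    identifying_info_subjects: int,          # consumers, households, or devices
    deploys_augmented_critical_decision_process: bool,
) -> bool:
    """Simplified screen against the covered-entity criteria summarized above."""
    if not deploys_augmented_critical_decision_process:
        return False
    large_entity = (avg_annual_gross_receipts_usd > 50_000_000
                    or equity_value_usd > 250_000_000)
    data_heavy = identifying_info_subjects > 1_000_000
    smaller_deployer = (avg_annual_gross_receipts_usd > 5_000_000
                        or equity_value_usd > 25_000_000)
    return large_entity or data_heavy or smaller_deployer

# Example: a mid-sized firm with $8M in receipts that scores loan applicants
print(may_be_covered_entity(8_000_000, 10_000_000, 50_000, True))  # True
```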
With respect to training and education, all employees, contractors, and other agents must be trained regarding any documented material negative impacts on consumers from similar systems or processes and any improved methods of developing or performing an impact assessment for such system or process based on industry best practices and relevant proposals and publications from experts, such as advocates, journalists, and academics. Covered entities must also maintain and keep updated documentation of any data or other input documents used to develop, test, maintain, or update the system or process, including things such as sourcing (metadata about the structure and type of data, an explanation of the methodology, and whether consumers provided informed consent for the inclusion and further use of data or other input information about themselves). Other factors to consider include why the data was used and what alternatives were explored, evaluations of the rights of consumers, and assessments of explainability and transparency. One cannot forget about the responsibilities of covered entities to identify any capabilities, tools, standards, datasets, security protocols, improvements to stakeholder engagement, or other resources that may be necessary or beneficial to improving the system, process, or the impact assessment of a system or process, in areas such as: performance, including accuracy, robustness, and reliability; fairness, including bias and non-discrimination; transparency, explainability, contestability, and opportunity for recourse; privacy and security; personal and public safety; efficiency and timeliness; cost; or any other area determined appropriate by the FTC. The FTC will be publishing an annual report summarizing the information in summary reports, and a public repository that is designed to publish a limited subset of the information about each system and process for which the FTC received a summary report. The FTC will also be publishing guidance and technical assistance. Most importantly, covered entities must note that the FTC can enforce the Act as it has broad powers to investigate and find violations involving unfair or deceptive acts or practices. Moreover, States can also bring a civil action on behalf of residents in the State to obtain appropriate relief. How is this Act different from the EU’s Artificial Intelligence Act ? And Canada? As discussed above, the American approach in the Act solely focuses on automated processes and systems deployed to render critical decisions. It is a standalone regime and is quite brief. On the other hand, the EU’s Artificial Intelligence Act (AI Act) covers a wider range of AI systems and provides nuanced regulatory requirements that are associated with the level of risk that an AI system poses to the public. In particular, the EU’s AI Act separates AI systems into three categories: unacceptable risk high-risk low/minimal risk Yet, there are some similarities between the two approaches. That is, both are involved in serious decisions that have a significant impact on consumers. What about Canada? Unfortunately, Canada does not have a law like the US Act described above, or the EU AI Act . That said, there is something similar to the US Act in the Canadian public sector, namely the Directive on Automated Decision-Making (Directive). This Directive requires algorithmic impact assessments for each system that is deployed by a federal institution. Again, this Directive does not apply to private sector businesses in Canada. 
When it comes to the private sector, we are still dealing with Bill C-27, which is in its infancy and combines an update to PIPEDA (the CPPA) with a brand-new AI law (AIDA). This legislative process may be delayed even further if there is an early election, which could happen at any time, now that the NDP has prematurely ripped up the 2022 supply and confidence deal it struck to support Prime Minister Justin Trudeau's minority government. Lastly, it is worth pointing out that neither the proposed CPPA nor AIDA tackles the concept of algorithmic accountability in the same way as the US or the EU. In fact, AIDA is completely lacking when it comes to algorithmic accountability. Previous Next
- 23andMe Investigation | voyAIge strategy
23andMe Investigation Findings of OPC and ICO By Christina Catenacci, human writer Jul 18, 2025 Key Points: The Privacy Commissioner of Canada (OPC) and the UK Information Commissioner (ICO) launched an investigation into the data breach that 23andMe experienced The OPC and the ICO both concluded that 23andMe contravened provisions concerning safeguards and breach notifications 23andMe has been sold for $305 million to TTAM As I wrote recently after 23andMe went bankrupt, both the Privacy Commissioner of Canada (OPC) and the UK Information Commissioner (ICO) launched an investigation into the data breach that the company experienced. This article discusses the results of that investigation. As you may recall, 23andMe, a company that provided direct-to-consumer genetic testing and ancestry services to individuals globally, confirmed that it experienced a data breach that affected almost 7 million of its customers (almost 320,000 people in Canada and 155,600 people in the UK). Given the scale of the breach and the sensitivity of the personal information involved, the OPC and the ICO launched an investigation . They tried to determine whether the company contravened Canada’s PIPEDA and the UK’s DPA 2018 and UK GDPR . What did the Commissioners find? According to the Commissioners, there were several deficiencies regarding safeguards that fell into three main areas: Prevention : there was no mandatory multi-factor authentication; inadequate minimum password requirements; inadequate compromised password checks; and no additional protections to access raw DNA data Breach detection : there were ineffective detection systems; insufficient logging and monitoring of suspicious activity; and inadequate investigation of anomalies Breach Response : there were delays in mitigation (four days to disable all active user sessions and implement a password reset for all customers) As a result, both the OPC and the ICO concluded that there was a failure to implement appropriate safeguards to ensure the protection of the highly sensitive personal information of its customers. Also, there were many deficiencies in terms of breach notifications: Notification to the Commissioners : the company’s breach reports were not made as required since they failed to include the complete information about the personal information that was involved. However, with respect to the timing, the Commissioners accepted that the company provided its breach notification as soon as feasible Notification to the affected individuals : the company’s notifications failed to provide relevant information that was known to the company when submitting the notifications, including the complete information about the personal information that was involved or likely to be involved in the breach and the fact that the personal information of some individuals had been posted for sale online by the hacker. Further, regarding the timing, individuals were not notified about their account having been accessed by the hacker until more than one month after the company had completed its forensic analysis and determined which accounts had been accessed Therefore, both the OPC and the ICO concluded that the company contravened the breach notification requirements. The Commissioners noted that on March 23, 2025, following the breach and in the face of mounting financial losses, 23andMe Holding Co. and certain of its subsidiaries, including 23andMe, filed for Chapter 11 bankruptcy under the US Bankruptcy Code . 
Both the OPC and the ICO communicated with the trustee in bankruptcy and emphasized the legal requirements for personal information relating to individuals located in Canada and the UK to be handled in compliance with their respective data protection laws. The sale approval hearing was scheduled to take place on June 17, 2025 in the US Bankruptcy Court for the Eastern District of Missouri. A bankruptcy court just approved the $305 million sale to a nonprofit organization led by the company's former CEO Anne Wojcicki. The TTAM Research Institute, a California-based nonprofit set to acquire 23andMe, plans to maintain the company's customer privacy policies and add further data security measures. What's more, the nonprofit plans to operate for "the public good". Interestingly, a company named Regeneron Pharmaceuticals had offered to buy most of 23andMe's assets for $256 million. It did not submit a higher bid during the bidding process following its assessment of the company's remaining value. This means that TTAM won out in the bidding war, and it was Wojcicki who used her own funds to purchase the company. What were the OPC's key recommendations? The OPC noted that when taking proactive steps to protect against cyber attacks, it is important to start by identifying potential threats and the risk of harm associated with them. When the personal information at issue is highly sensitive, the safeguards should be more robust as there is a heightened risk of harm. Additionally, credential-based attacks such as "credential stuffing" are one of the most common and well-known threats targeting web applications. Organizations are recommended to ensure that their customers' online accounts are protected against such attacks by using safeguards that are appropriate to the sensitivity of the personal information at risk. Some of the ways to protect against credential-based attacks include:
- Mandatory multi-factor authentication that requires customers to enter more than just a password in order to access an account
- Strong minimum password requirements to ensure that customers use a long, unique, and hard-to-guess password
- Compromised password checks to prevent customers from reusing a password that was compromised in a previous breach
- Adequate monitoring to detect abnormal activity that may be a sign of a cyber attack, including a sudden spike in failed login attempts, or logins from unfamiliar devices or unusual locations
Moreover, when considering web design, appropriate information security safeguards have to be prioritized and built into the customer experience design, since breaches could also have a significant negative impact on customer experience and trust. Last but not least, organizations need to notify the appropriate privacy regulators and affected individuals as soon as feasible after discovering a breach that creates a real risk of significant harm. In Canada, there are certain things that companies need to communicate following a breach. For instance, breach notifications must include the information that is prescribed under PIPEDA and the Breach of Security Safeguards Regulations. Companies need to report complete information about the personal information that was subject to the breach. Notifications to affected individuals must also provide sufficient information to allow them to understand the significance and potential impact of the breach.
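To make the "compromised password checks" item above concrete, here is a minimal sketch of how a sign-up or password-change flow can screen out passwords that already appear in known breach corpora. It uses the publicly documented k-anonymity "range" lookup of the Pwned Passwords service, so only the first five characters of the password's SHA-1 hash ever leave your system. This is an illustration, not something the OPC or ICO prescribe; verify the endpoint, add error handling and rate limiting, and adapt it to your own stack before relying on it.

```python
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    """Return how many times a password appears in known breach data.

    Uses the k-anonymity range endpoint of the Pwned Passwords API:
    only the first 5 hex characters of the SHA-1 hash are sent.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate_suffix, _, count = line.partition(":")
        if candidate_suffix.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Reject sign-ups or password changes that reuse a known-breached password.
    if times_password_breached("password123") > 0:
        print("Choose a different password: this one appears in known breaches.")
```

On its own this addresses only one bullet in the list above; the monitoring item would typically be handled separately, for example by alerting on unusual spikes in failed logins or logins from unfamiliar locations.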
Commissioner Philippe Dufresne of the OPC stated: “Strong data protection must be a priority for organizations, especially those that are holding sensitive personal information. With data breaches growing in severity and complexity, and ransomware and malware attacks rising sharply, any organization that is not taking steps to prioritize data protection and address these threats is increasingly vulnerable.” On the topic of collaboration between the OPC and the ICO, he stated: “Joint investigations like this one demonstrate how regulatory collaboration can more effectively address issues of global significance. By leveraging our combined powers, resources, and expertise, we are able to maximize our impact and better protect and promote the fundamental right to privacy of individuals across jurisdictions” Businesses can also refer to the helpful guidance of the OPC titled, “What you need to know about mandatory reporting of breaches of security safeguards” here for further information on how to take proactive steps to deal with data breaches. Also, the Information Bulletin on Safeguards can be found here . Previous Next
- Trump's AI Action Plan | voyAIge strategy
Trump's AI Action Plan America's Bold Bid for Global AI Dominance: A Marked Departure from Biden’s emphasis on AI Safety By Christina Catenacci, human writer Jul 30, 2025 Key Points On July 23, 2025, the Trump administration released the document, "Winning the AI Race: America's AI Action Plan” ( Action Plan ) The Action Plan focuses on accelerating AI innovation, building AI infrastructure, and leading international diplomacy and security The Action Plan suggests that there is urgency in completing these policy actions, but there are no clear deadlines with which to comply On July 23, 2025, the Trump administration released the document, "Winning the AI Race: America's AI Action Plan” ( Action Plan ). The authors of the Action Plan include Michael J. Kratsios (Assistant to the President for Science and Technology), David O. Sacks (Special Advisor for AI and Crypto), and Marco A. Rubio (Assistant to the President for National Security Affairs). This move represents a dramatic shift in US AI policy. It is built on three strategic pillars: accelerating AI innovation, building AI infrastructure, and leading international diplomacy and security. The Action Plan outlines federal policy actions that are designed to cement American dominance in the global AI race. Unlike Biden’s previous safety-first approach, Trump's plan prioritizes deregulation, rapid deployment, and ideological neutrality in AI systems. Indeed, this plan is in line with Trump’s 2025 Executive Order on AI and VP Vance’s comments from the February 2025 AI Action Summit in Paris, both of which downplayed AI safety and highlighted the importance of AI innovation, AI deregulation, and American dominance. It may be challenging for the US to assert its dominance, when the European Union is the dominant one in this regard. What is in Trump's AI Action Plan? Biden's October 2023 AI executive order was titled, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, and emphasized safety testing, bias mitigation, and careful deployment. On the other hand, Trump's Action Plan prioritizes speed, deregulation, and ideological neutrality. It is clear that there is some urgency contained in this Action Plan: the opening paragraph is a quote by the President himself: “As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. To secure our future, we must harness the full power of American innovation” In fact, the first sentence of the introduction notes that the US is in a race to achieve global dominance in AI. Pillar I: Accelerate AI Innovation The first pillar is introduced by this statement: “America must have the most powerful AI systems in the world, but we must also lead the world in creative and transformative application of these systems. 
Achieving these goals requires the Federal government to create the conditions where private-sector-led innovation can flourish” There are several policy actions that are recommended to be taken under each of these priorities:
- Removing red tape and onerous regulation
- Ensuring that frontier AI protects free speech and American values
- Encouraging open-source and open-weight AI
- Enabling AI adoption
- Empowering American workers in the age of AI
- Supporting next-generation manufacturing
- Investing in AI-enabled science
- Building world-class scientific datasets
- Advancing the science of AI
- Investing in AI interpretability, control, and robustness breakthroughs
- Building an AI evaluations ecosystem
- Accelerating AI adoption in government
- Driving adoption of AI within the Department of Defense
- Protecting commercial and government AI innovations
- Combatting synthetic media in the legal system
For example, under enabling AI adoption, some of the main policy actions include:
- Establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results. These efforts would be enabled by regulatory agencies such as the Food and Drug Administration (FDA) and the Securities and Exchange Commission (SEC), with support from DOC through its AI evaluation initiatives at NIST
- Launch several domain-specific efforts (e.g., in healthcare, energy, and agriculture), led by NIST at DOC, to convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains
- Led by the Department of Defense (DOD) in coordination with the Office of the Director of National Intelligence (ODNI), regularly update joint DOD-Intelligence Community (IC) assessments of the comparative level of adoption of AI tools by the United States, its competitors, and its adversaries’ national security establishments, and establish an approach for continuous adaptation of the DOD and IC’s respective AI adoption initiatives based on these AI net assessments
- Prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications, via collaboration between the IC, the Department of Energy (DOE), CAISI at DOC, the National Security Council (NSC), and OSTP
Pillar II: Build American AI Infrastructure The second pillar is introduced by this statement: “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today. American energy capacity has stagnated since the 1970s while China has rapidly built out their grid.
America’s path to AI dominance depends on changing this troubling trend” There are several policy actions that are recommended to be taken under each of these priorities:
- Creating streamlined permitting for data centres, semiconductor manufacturing facilities, and energy infrastructure while guaranteeing security
- Developing a grid to match the pace of AI innovation
- Restoring American semiconductor manufacturing
- Building high-security data centres for military and intelligence community usage
- Training a skilled workforce for AI infrastructure
- Bolstering critical infrastructure cybersecurity
- Promoting secure-by-design AI technologies and applications
- Promoting mature federal capacity for AI incident response
For instance, when it comes to training a skilled workforce, we see the following policy actions:
- Led by DOL and DOC, create a national initiative to identify high-priority occupations essential to the buildout of AI-related infrastructure. This effort would convene employers, industry groups, and other workforce stakeholders to develop or identify national skill frameworks and competency models for these roles. These frameworks would provide voluntary guidance that may inform curriculum design, credential development, and alignment of workforce investments
- Through DOL, DOE, ED, NSF, and DOC, partner with state and local governments and workforce system stakeholders to support the creation of industry-driven training programs that address workforce needs tied to priority AI infrastructure occupations. These programs should be co-developed by employers and training partners to ensure individuals who complete the program are job-ready and directly connected to the hiring process. Models could also be explored that incentivize employer upskilling of incumbent workers into priority occupations. DOC should integrate these training models as a core workforce component of its infrastructure investment programs. Funding for this strategy will be prioritized based on a program’s ability to address identified pipeline gaps and deliver talent outcomes aligned to employer demand
- Led by DOL, ED, and NSF, partner with education and workforce system stakeholders to expand early career exposure programs and pre-apprenticeships that engage middle and high school students in priority AI infrastructure occupations. These efforts should focus on creating awareness and excitement about these jobs, aligning with local employer needs, and providing on-ramps into high-quality training and Registered Apprenticeship programs
- Through the ED Office of Career, Technical, and Adult Education, provide guidance to state and local CTE systems about how to update programs of study to align with priority AI infrastructure occupations. This includes refreshing curriculum, expanding dual enrollment options, and strengthening connections between CTE programs, employers, and training providers serving AI infrastructure occupations
- Led by DOL, expand the use of Registered Apprenticeships in occupations critical to AI infrastructure. Efforts should focus on streamlining the launch of new programs in priority industries and occupations and removing barriers to employer adoption, including simplifying registration, supporting intermediaries, and aligning program design with employer needs
- Led by DOE, expand the hands-on research training and development opportunities for undergraduate, graduate, and postgraduate students and educators, leveraging expertise and capabilities in AI across its national laboratories.
This should include partnering with community colleges and technical/career colleges to prepare new workers and help transition the existing workforce to fill critical AI roles. Pillar III: Lead in International AI Diplomacy and Security The third pillar is introduced by this statement: “To succeed in the global AI competition, America must do more than promote AI within its own borders. The United States must also drive adoption of American AI systems, computing hardware, and standards throughout the world. America currently is the global leader on data center construction, computing hardware performance, and models. It is imperative that the United States leverage this advantage into an enduring global alliance, while preventing our adversaries from free-riding on our innovation and investment” There are several policy actions that are recommended to be taken under each of these priorities:
- Exporting American AI to allies and partners
- Countering Chinese influence in international governance bodies
- Strengthening AI compute export control enforcement
- Plugging loopholes in existing semiconductor manufacturing export controls
- Aligning protection measures globally
- Ensuring that the US Government is at the forefront of evaluating national security risks in frontier models
- Investing in biosecurity
By way of example, with respect to countering Chinese influence in international governance bodies, we see the following policy action:
- Led by DOS and DOC, leverage the U.S. position in international diplomatic and standard-setting bodies to vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence
Timeline One would think that the US administration has set an aggressive timeline for implementation of the Action Plan. Key agencies including the Department of Commerce, Department of Energy, and NIST have been tasked with developing specific implementation plans in the near term. The introduction states, “Simply put, we need to ‘Build, Baby, Build!’” But there are no clear deadlines. The success of this ambitious agenda will likely depend on several factors: Congress's willingness to provide necessary funding, the private sector's ability to scale infrastructure rapidly, and the international community's receptiveness to American AI leadership. What Can We Take from This Development? Trump's Action Plan represents one of the most comprehensive technology policy initiatives in American history. How will this play out? Its success could cement American dominance in AI, and its failure could leave America trailing in a competition that is viewed by many as existential. Indeed, the promise is stated right in the Introduction of this Action Plan: "Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people” Interestingly, the introduction also states: “Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits” As we know, the European Union has already set the gold standard for AI regulation with its AI Act. Many countries already look to this gold standard when legislating and enforcing AI laws in their jurisdictions. It is difficult to see how the American federal government, an administration that has not yet even created a national privacy regulation, much less a comprehensive law, will come even close to catching up to the European Union.
In fact, it may be challenging for the US to be taken seriously in this regard; interestingly, the US suggests that this Action Plan will uphold American values and then influence all jurisdictions around the world. Again, it is the European Union that has already been an influence on the world in the technology sphere—not the US—since it has encouraged countries around the world to act in accordance with European values and laws. In my view, countries will not turn around and change their groundbreaking laws or change what laws they will comply with just to please the US. Previous Next
- Reddit Sues Data Scrapers and AI Companies | voyAIge strategy
Reddit Sues Data Scrapers and AI Companies The Accusation: Stealing Valuable User-Generated Data By Christina Catenacci, human writer Oct 24, 2025 Key Points On October 22, 2025, Reddit filed a Complaint in the United States District Court, Southern District of New York against Serpapi LLC, Oxylabs UAB, Awmproxy, and Perplexity AI, Inc Reddit has alleged that the Defendants have violated copyright law and used its user-generated content without permission and without entering into an agreement with Reddit that protects users The Defendants have denied the allegations and plan to defend themselves in court On October 22, 2025, Reddit filed a Complaint in the United States District Court, Southern District of New York against Serpapi LLC, Oxylabs UAB, Awmproxy, and Perplexity AI, Inc (Defendants). In short, Reddit accused the Defendants of stealing valuable copyrighted user content without permission and without entering into an agreement with Reddit that protects users. The case gets at the tension between content owners like Reddit and AI companies that use user-generated data for commercial gain. What’s more, this lawsuit deals with not just AI companies, but also data scrapers that get the data from Google’s Search Results Pages to circumvent technological protections. What Happened? According to Reddit, the lawsuit was commenced because it was necessary to stop the “the industrial-scale, unlawful circumvention of data protections by a group of bad actors who will stop at nothing to get their hands on valuable copyrighted content on Reddit”. Three of the Defendants, Oxylabs UAB, AWMProxy, and SerpApi (a Lithuanian data scraper, a former Russian botnet, and a Texas company that publicly advertises its shady circumvention tactics), are data-scraping service providers who specialize in creating and selling tools that are designed to circumvent digital defenses and scrape others’ content. The tools aim to bypass two levels of security: evading Reddit’s own anti-scraping measures, and second, circumventing Google’s controls and scraping Reddit content directly from Google’s Search Engine Results Pages . Reddit equated this behaviour to what bank robbers do—knowing that they cannot get into the bank vault, they break into the armored truck carrying the cash instead. The fourth Defendant, Perplexity AI Inc., was equated to a “North Korean hacker” and is a willing customer of at least one of its co-defendants. Reddit submits that Perplexity AI will apparently do anything to get the Reddit data to fuel its “answer engine”. Reddit, founded 20 years ago, is one of the largest repositories of human conversation in existence. In particular, over 100 million unique users engage in discussions each day across its hundreds of thousands of interest-based communities (or “subreddits”), which is a continuous stream of real-time and creative copyrighted works. According to Reddit, it is prohibited to engage in unauthorized commercialization of Reddit content unless there is an express agreement with guardrails in place to ensure that user rights are protected. In a nutshell, if AI companies want to legally access Reddit data, they need to comply with Reddit’s policies just like Google and OpenAI have. What is Reddit Claiming? 
Reddit has asserted that the first three Defendants have: scraped the data from Google’s Search Engine Results Pages instead of Reddit’s site (like the bank robbers attacking the truck carrying the cash) while masking their identities, hiding their locations, and disguising their web scrapers as regular people to circumvent or bypass the security restrictions meant to stop them Reddit has asserted that Perplexity AI has: ignored Reddit’s cease-and-desist letter after Reddit caught Perplexity AI red-handed by using the digital equivalent of marked bills (to use the bank robbery analogy) to track Reddit data and confirm that Perplexity AI was using Reddit data acquired through the scraping of Google Search Engine Results Pages In its Complaint, Reddit argued that Congress has already enacted the Digital Millennium Copyright Act to prevent what the Defendants are doing—bypassing technological measures to access copyrighted works. Moreover, Reddit has pointed out that the Defendants know that they do not have permission to do what they are doing, and has claimed the following: All Defendants have violated the Digital Millennium Copyright Act by unlawfully circumventing technological measures The Defendants, SerpApi and Oxylabs, have violated the Digital Millennium Copyright Act by trafficking of technology, product, service, or device for use in circumventing technological measure controlling access The Defendants, SerpApi and Oxylabs, have violated the Digital Millennium Copyright Act by trafficking of technology, product, service, or device for use in circumventing technological measure protecting the right of copyright owner All Defendants have gained access to and scraped Reddit data on a large-scale, unauthorized, and automated basis, including misappropriation of real-time Reddit content and services and the timely content authored by Reddit users, from which Defendants have been unjustly enriched at Reddit’s expense The Defendants, SerpApi and Perplexity AI, have engaged in civil conspiracy by entering into one or more contracts or business agreements for the purpose of circumventing the technological control measures described above in order to gain access to Reddit data on a large-scale, unauthorized, and automated basis, including Reddit content and services and the content authored by Reddit users Reddit has suffered harms since it depends on the contributions of Redditors and its business and reputation has been damaged by the Defendants’ misappropriation of Reddit data To that end, Reddit has requested that the court grant injunctive relief, damages, costs, and any other legal or equitable relief as the court deems just and proper. What was the Defendants’ Responses? Generally speaking, the Defendants all deny the allegations and plan on defending themselves in court. But it was Perplexity AI that made a statement right on Reddit . Essentially, Perplexity AI noted that “this is a sad example of what happens when public data becomes a big part of a public company’s business model”. More specifically, the AI company stated that the reason that it is being sued by Reddit is likely because it is about a show of force in Reddit’s training data negotiations with Google and OpenAI. Perplexity AI went on to say that it has not ignored Reddit—whenever anyone asks the company about content licensing, it explains that Perplexity AI, as an application-layer company, does not train AI models on content. 
In fact, it never has, and thus it is impossible for the company to sign a license agreement to do so. What Perplexity AI does, in fact, is summarize Reddit discussions and cite Reddit threads in answers, just like people share links to posts on Reddit all the time. Perplexity says it invented citations in AI for two reasons: so that people can verify the accuracy of the AI-generated answers, and so they can follow the citation to learn more and expand their journey of curiosity. In Perplexity AI's view, the way Reddit is acting is the opposite of an open internet. Lastly, Perplexity AI stated: "In any case, we won't be extorted, and we won't help Reddit extort Google, even if they're our (huge) competitor. Perplexity will play fair, but we won't cave. And we won't let bigger companies use us in shell games" What Can We Take From This Development? I think that Reddit's Chief Legal Officer said it best: "AI companies are locked in an arms race for quality human content - and that pressure has fueled an industrial-scale 'data laundering' economy" So this is the second Reddit lawsuit that has come up. As I wrote about the first Reddit case against Anthropic, we will need to wait and see what the court decides. These sorts of copyright cases are popping up at a rapid rate in the context of AI, and courts are going to have to strike the fair balance between innovation and the rights of companies like Reddit (and its users). Plainly put, the court is going to have to decide what is fair. Is Reddit "extorting" Perplexity AI as has been alleged, or are the Defendants trying to unlawfully access and use Reddit content without permission? And where does this leave the users? Will there be a chilling effect as a result of these kinds of lawsuits—will users shy away from sharing their thoughts and creative works online because they are unsure of how they will be used against them in the future? Previous Next
- HR Automation | voyAIge strategy
HR Automation Some Issues that Could Arise By Christina Catenacci Dec 2, 2024 Key Points In the future, Ontario employers making publicly advertised job postings will be required to disclose within the job posting if they use AI to screen, assess or select applicants for publicly advertised job posts Where a job applicant participates in a video interview and sentiment analysis is used, there could be privacy and human rights issues that are triggered Other jurisdictions, such as New York City, have addressed AI tools in recruitment and hiring very differently than Ontario My co-founder, Tommy Cooke, just wrote an informative article regarding some of the main HR automation trends that have been pervasive in the business world in 2024. When it comes to these trends, it is worth taking a closer look at some of the issues that could become problematic. More specifically, I would like to examine the uses of AI in the area of recruitment and hiring. Whether it is using AI to automate resume screening or using AI to conduct video interviewing sentiment analysis, there could be some challenges for employers. In particular, employers will need to comply with Ontario’s Employment Standards Act , namely the AI provisions in Bill 149 , in the near future. As of some future date to be named by proclamation, employers making publicly advertised job postings will be required to disclose within the job posting if they use AI to screen, assess or select applicants for publicly advertised job posts. These employers will also have to retain copies of every publicly advertised job posting (as well as the associated application form) for three years after the post is taken down. In fact, I recently wrote an article on this topic. I wrote about how skeletal the AI provisions were in this bill. And the AI-related definitions were nowhere to be found. I compared the requirements in Ontario’s Bill 149 to those in New York City’s (NYC’s) hiring law involving using AI and automated decision-making tools. It was striking that NYC required employers to conduct a bias audit before using the tool; post a summary of the results of the bias audit on their websites; notify job candidates and existing employees that the tool would be used to assess them; include instructions for requesting accommodations; and post on their websites about the type and source of data that was used for the tool as well as their data retention policies. There were detailed definitions and fines for noncompliance too. Needless to say, this will be a challenge for provincially regulated employers in Ontario, and it is highly recommended that employers prepare now for these employment law changes. That said, it is understandable that employers may struggle with how to comply with such ambiguous provisions. Additional issues could arise, namely privacy and human rights issues. Let us take the example of the video interview where sentiment analysis is conducted. This is troubling from a privacy perspective—job applicants may not be comfortable participating in video interviews where their facial expressions and gestures are closely scrutinized with intrusive software that enables AI tools to analyze their sentiments. Moreover, employees who are up for a promotion may not appreciate video analytics of their video interview performance being retained for an unknown period of time, and accessible to an unknown number of actors in the workplace. 
Because job applicants are in a vulnerable position, they may not feel like they can object to the use of these AI hiring tools. In addition to privacy concerns, human rights issues could surface. The video interview could reveal various aspects of a person that may fall under any of the prohibited grounds of discrimination. For instance, under the Ontario Human Rights Code , section 5 of the Code prohibits discrimination in employment on the grounds of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability. It may be possible for an AI tool to be biased (unintentionally, but biased nonetheless), where it favours the younger candidates, gives them higher interview scores, and ultimately inadvertently discriminates on the ground of age. Since it may not be possible to detect these biased decisions immediately, it may be that some job applicants simply miss out on an employment opportunity due to an ageist AI tool. It will be interesting to see whether other jurisdictions come up with more extensive provisions to address the use of AI in recruitment and hiring. In Ontario, it is questionable whether we will see additional detail to help employers comply with the requirements. Previous Next
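To make the bias audits discussed in the NYC comparison above more concrete, the sketch below computes selection rates and impact ratios by age band for a hypothetical screening tool, in the spirit of the "four-fifths rule" often used as a screening heuristic. The data, the age bands, and the 0.8 threshold are illustrative assumptions only; they are not a statement of what NYC's law or Ontario's Bill 149 actually requires.

```python
from collections import defaultdict

# Hypothetical outcomes from an AI screening tool: (age_band, advanced_to_interview)
outcomes = [
    ("under_40", True), ("under_40", True), ("under_40", False), ("under_40", True),
    ("40_plus",  True), ("40_plus",  False), ("40_plus", False), ("40_plus", False),
]

def impact_ratios(rows):
    """Selection rate per group, divided by the most-selected group's rate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, advanced in rows:
        total[group] += 1
        selected[group] += int(advanced)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

if __name__ == "__main__":
    for group, (rate, ratio) in impact_ratios(outcomes).items():
        flag = "  <- review: ratio below 0.8" if ratio < 0.8 else ""
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

The point of the exercise is not the exact threshold but having a repeatable, documented calculation that can be re-run as the tool and the applicant pool change, which is the spirit of the audit, publication, and record-keeping obligations described above.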
- De-Risking AI Prompts | voyAIge strategy
De-Risking AI Prompts How to Make AI Use Safer for Business By Tommy Cooke, fueled by caffeine and curiosity Aug 8, 2025 Key Points: Small, well-intentioned actions can quietly introduce risk when staff lack clear guidance and boundaries De-risking AI isn’t about restricting use. It’s about educating staff, adopting prompt training into workflows, and developing a support team Safe and effective AI use begins when leadership models responsible practices and builds a culture of clarity, not control Many moons ago, I was working with a data centre on a surveillance experiment. One of the interns was a motivated student. He was tasked with investigating third parties that we suspected were abusing access to sensitive location data within one of our experiment’s smartphones. Without telling anyone, the student sent sample data from our smartphone to an organization we were actively investigating. It was an organization whose credibility was under intense scrutiny for abusive data practices. The student wasn’t acting out of malice. They were trying to be helpful, to show responsiveness, to move the work forward. But they didn’t understand the stakes. To them, the data was “just a sample.” To us, it signaled loss of control and a risky alignment with an actor we hadn’t finished vetting. The problem wasn’t the intern. The problem was that we hadn’t taken the time to review and discuss contract terms—to find ways to guide interns on both the best practices and boundaries around their work. This is what prompting GPT looks like in many organizations today. Staff are often using AI to accelerate their work, lighten workloads, and inject some creativity into their craft. AI is a tool that is attractive to staff for many reasons, and so it is not surprising to us here at VS to hear that staff turn to AI to also respond to mounting work pressures; now that AI is available, executives increasingly expect their teams to work harder, faster, and better with it. But with less than 25 percent of organizations having an AI policy in place, and even fewer educating their staff on how to use AI, it’s not surprising that how your staff use AI is not only highly risky but also largely invisible: you are likely unaware of precisely what they are doing with AI. To most organizations we speak to, this risk is entirely unacceptable. While we strongly advocate for having a robust AI policy in place, as well as training around that policy, let’s dive into what you can be doing to de-risk your organization’s AI use. De-Risking AI Is Not Just About Restricting Use Before we take a deeper dive, it’s important to address a common knee-jerk reaction among business leaders. There is a temptation to de-risk by locking down: restricting access to GPTs, blocking them at the firewall, or banning prompts that mention sensitive keywords. These reactions are just that: reactions. They are not responses because they are not planned, considered, and contextualized. They are rigid, inflexible, and as such, they often backfire. Just as important, they send a very clear message to your staff: AI is dangerous and not learnable. This subsequently pushes experimentation underground and creates a shadow use problem that’s harder to monitor or support. Instead, and as I mentioned earlier, the safer and more sustainable path is to educate, empower, and build clarity.
It’s impossible to eliminate risk entirely, but you can reduce it by building good habits, providing effective guidance, and a sharing an understanding of what safe prompting looks like. What Team Leads and Business Owners Can Do If you lead a team or own a business, here are some steps you can take right now to start de-risking GPT use without killing its potential and promise: Create a prompt playbook. A living document that outlines safe and unsafe prompting practices, gives examples, and evolves over time. This could include do’s and don’ts, safe phrasing suggestions, and reminders about privacy, intellectual property, and any other related laws and policies relevant to the scenario at hand. It doesn't have to be long—it just has to be usable and user-friendly. Build training around real workflows. It’s quite common for organizations to bring in third-parties to offer cookie cutter training on how to use AI safely and effectively. Don’t do that. Abstraction doesn’t resonate on the front line, nor do we find it effective in resonating with executives either. Bring in an organization that can offer training that reflects how your people actually use AI and the daily nuances of their work. Schedule prompt review . Designate an AI team. Task them with making it normal to collect, analyze, and assess how your staff talk to AI. Encourage them to ask questions like, “is this a safe way to talk to AI?” We want to create a culture where prompt sharing and refinement is part of collaboration. Designate prompt leaders . Identify or train a few people, ideally within the aforementioned team, who can act as internal advisors on AI use. Not to gatekeep, but to support. Let staff know who to ask when they're unsure if a prompt might cause issues. Make it part of their job description and KPIs to lift up and support employees when they use AI. Develop internal guardrails. This is also something I discussed before, and something that Christina and I discuss ad nauseum in our articles. If you're using GPT through an API, platform, or organization-wide license, get AI policies in place. Set rules, automate flags, or integrate prompt logging for sensitive areas like legal, HR, or R&D. Communicate the purpose. Let people know why prompting guidance and safe use matters. Use examples to show how good prompting helps them avoid mistakes and do better work, not just follow rules. Ensure that you show the implications when things go wrong, and then follow up by reassuring staff that you have contingency plans in place. Let them know that you have a plan for when things go wrong, and that they shouldn’t be afraid to use AI if they follow their training. Signal leadership’s involvement. Executives and leaders should model good prompting habits, or they should at least acknowledge the importance of prompting. Lead by example, not just by word. The intern I mentioned earlier didn’t intend to create risk. The boundaries were drawn, but the intern was not familiar with them. We avoided damage to the project, and that damage was never about malice or recklessness. It was about misunderstanding what small mistakes could catalyze, especially when they go unrecognized by a staff member. Previous Next
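As one way to picture the "internal guardrails" item above, here is a minimal sketch of a prompt pre-flight check: it flags obvious markers of sensitive content before a prompt leaves the organization and keeps a local log for the review team. The patterns, keywords, and log destination are placeholders to adapt to your own AI policy and jurisdiction; this is a starting point under the assumption of a Python-based workflow, not a complete data-loss-prevention tool.

```python
import json
import re
from datetime import datetime, timezone

# Placeholder patterns: adapt to your own AI policy and the laws that apply to you.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(r"\b(confidential|privileged|do not distribute)\b", re.I),
}

def preflight(prompt: str, user: str, log_path: str = "prompt_log.jsonl") -> list[str]:
    """Return the flags raised by a prompt and append a lightweight audit record."""
    flags = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "flags": flags,
        "chars": len(prompt),   # log size and flags, not content, to limit what the log itself exposes
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return flags

if __name__ == "__main__":
    issues = preflight("Summarize this confidential term sheet for jane@example.com", user="intern-01")
    if issues:
        print("Check with your prompt leads before sending; flagged:", ", ".join(issues))
```

Logging metadata rather than full prompt text is a deliberate choice in this sketch, so the audit log does not itself become a store of sensitive content; teams with stricter requirements may prefer redacted copies reviewed by the designated AI team.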
- Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok | voyAIge strategy
Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok TikTok Must Do More to Protect Children’s Privacy By Christina Catenacci Sep 26, 2025 Key Points: On September 23, 2025, the OPC released the findings of the joint investigation of TikTok The measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate Several recommendations were made that TikTok will have to follow, and the company has already started bringing itself into compliance On September 23, 2025, the OPC released the findings of the joint investigation of TikTok by the OPC, and the Privacy Commissioners of Quebec, British Columbia, and Alberta (Privacy Commissioners). The OPC said in its Statement that the measures that TikTok had in place to keep children off the video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and content targeting purposes were inadequate. About TikTok TikTok is a popular short-form video sharing and streaming platform, which is available both through its website and as an app. Its content features 30–50 second videos, although it also offers other services, such as live streaming and photos. TikTok also provides multiple interactive features such as comments and direct messaging for content creators and users to connect with other users. The company’s core commercial business has been the delivery of advertising, which it enabled by using the information it collected about its users to track and profile them for the ultimate purposes of delivering targeted advertising and personalizing content. TikTok’s platform has a large user base, with 14 million active monthly Canadian users as of November 2024. What was the Investigation About? The investigation examined TikTok’s collection, use and disclosure of personal information for the purposes of ad targeting and content personalization on the platform, with particular focus on TikTok’s practices as they relate to children. More specifically, the Offices considered whether TikTok:
- engaged in these practices for purposes that a reasonable person would consider appropriate in the circumstances, were reasonable in their nature and extent, and fulfilled a legitimate need
- obtained valid and meaningful consent
- and, in the case of individuals in Quebec, met its transparency obligations under Quebec’s Act Respecting the Protection of Personal Information in the Private Sector
What did the Privacy Commissioners Find? The Privacy Commissioners found the following: Issue 1: Was TikTok collecting, using, and disclosing personal information, in particular with respect to children, for an appropriate, reasonable, and legitimate purpose? TikTok collected and made extensive use of potentially sensitive personal information of all its users, including both adults and children. Despite TikTok’s Terms of Service stating that users under the age of 13 were not allowed to use the platform, the investigation ultimately found that TikTok had not implemented reasonable measures to prevent its collection and use of the personal information of underage users.
Therefore, the Privacy Commissioners found that TikTok’s purposes for collecting and using underage users’ data, to target advertising and personalize content (including through tracking, profiling and the use of personal information to train machine learning and refine algorithms), were not purposes that a reasonable person would consider to be appropriate, reasonable, or legitimate under the circumstances. TikTok’s collection and use of underage users’ data for these purposes did not address a legitimate issue, or fulfill a legitimate need or bona fide business interest. More specifically, when balancing interests (an individual’s right to privacy and a corporation’s need to collect personal information), it was important to consider the sensitivity of the information. The Privacy Commissioners pointed out that information relating to children was particularly sensitive. On a site visit, they noted that the hashtags “#transgendergirl” and “#transgendersoftiktok” were displayed as options for an advertiser to use as targeting criteria. TikTok personnel were unable to explain, either during the site visit or when offered a follow-up opportunity, why these hashtags had been available on the ad manager platform as options. The company later confirmed that the hashtags should not have been available, had since been removed as options, and had not been used in any Canadian ad campaigns from 2023 to the date of the site visit in 2024. The Privacy Commissioners stated, “While TikTok resolved this specific issue after it was discovered by our investigation, we remain concerned that this sensitive information had not been caught by TikTok proactively and that individuals could potentially have been targeted based on their transgender identity” Even where certain elements of the information that TikTok used for profiling and targeting its users (including underage users) could be considered less sensitive when taken separately, when taken together and associated with a single user and refined by TikTok with the use of its analytics and machine learning tools, it could be rendered more sensitive given the insights that could be inferred from that information about the individual, such as their habits, interests, activities, location, and preferences. There was a large number of underage users (under 13 years) on the platform, notwithstanding the rule that they were not allowed. TikTok has been banning an average of about 500,000 accounts per year in Canada—just in 2023, there were 579,306 children who were removed for likely being under 13. The Privacy Commissioners concluded that the actual number of accounts held by underage users on the platform was likely much higher. What’s more, TikTok used an “age gate”, which required the user to provide a date of birth during the account creation process. When a date of birth corresponded to an age under 13, account creation was denied, and the device was temporarily blocked from creating an account. The Privacy Commissioners determined that this was the only age assurance mechanism that TikTok implemented at the sign-up/registration stage to prevent underage users from creating an account and accessing the platform. Moreover, TikTok had a moderation team to identify users who were suspected to be underage, and members of this team were provided with specific training to identify individuals under the age of 13.
The moderation team relied on user reports (where someone, such as a parent, contacted TikTok to report that a user was under the age of 13), and automated monitoring (which included scans for keywords in text inputted by the user that would suggest that they could be under the age of 13, like “I am in grade three,” or in the case of TikTok LIVE, the use of computer vision and audio analytics to help identify individuals under 18 years). Then, moderators conducted manual reviews of accounts identified from these flags. These included a review of posted videos, comments, and biographic information. This was done to decide whether to ban an account. In light of the deficiencies in TikTok’s age assurance mechanisms, the Privacy Commissioners found that TikTok implemented inadequate measures to prevent those users from accessing, and being tracked and profiled on, the platform. TikTok had no legitimate need or bona fide business interest for its collection and use of the sensitive personal information of these underage users in relation to all the jurisdictions involved, whether it was the federal, Alberta, British Columbia, or Québec jurisdictions. The Privacy Commissioners stated: “We are deeply concerned by the limited measures that the company has put in place to prevent children from using the platform. We find it particularly troubling that even though TikTok has implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, evidence suggest that the company did not consider using those tools or other similar tools to prevent underage users from accessing, and being tracked and profiled on, the platform” Issue 2: Did TikTok obtain valid and meaningful consent from its users for tracking, profiling, targeting and content personalization? It was not necessary for the Privacy Commissioners to consider this question since organizations were not allowed to rely on consent for the collection, use, or disclosure of personal information when its purpose was not appropriate, reasonable, or legitimate within the meaning of the legislation. They stated, “In other words, obtaining consent does not render an otherwise inappropriate purpose appropriate”. In this case, the Privacy Commissioners already found that TikTok’s collection and use of personal information from children was not for an appropriate purpose. That said, the Privacy Commissioners decided to continue the analysis regarding meaningful consent from adults (aged 18 and above) and youth (aged 13–17). Ultimately, the Privacy Commissioners found that TikTok did not explain its practices (related to tracking, profiling, ad targeting and content personalization) to individuals in a manner that was sufficiently clear or accessible, and therefore did not obtain meaningful consent from platform users—including youth users. More specifically, the legislation (excluding that of Québec—see Issue 2.1) required consent for the collection, use, or disclosure of personal information, unless an exception applied. The type of consent required varied depending on the circumstances and sensitivity of the personal information. When taken together, the personal information collected and used by TikTok via tracking and profiling for the purposes of targeting and content personalization could be sensitive. Where the personal information involved was sensitive, the organization had to obtain express consent. This is especially true since many of TikTok’s practices were invisible to the user.
Where the collection or use of personal information fell outside the reasonable expectations of an individual or what they would reasonably provide voluntarily, then the organization generally could not rely upon implied or deemed consent. For consent to be meaningful, organizations had to inform individuals of their privacy practices in a comprehensive and understandable manner. In addition, organizations had to place additional emphasis on four key elements:
- What personal information is being collected;
- With which parties personal information is being shared;
- For what purposes personal information is collected, used, or disclosed; and
- Risk of harm and other consequences
The Privacy Commissioners concluded that more needed to be done by TikTok to obtain valid and meaningful consent from its users. This was important with respect to TikTok’s privacy communications (during the account creation process, its Privacy Policy, as well as pop-ups and notifications, and supporting materials like the help centre and FAQs), and the youth-specific privacy protections such as the default privacy settings that made the accounts private by default without the ability to live stream or send and receive direct messages. Although TikTok created videos, added a youth portal, and prepared documentation aimed at youth, more needed to be done to protect their privacy. In addition, when it came to adults 18 years and older, the Privacy Commissioners determined that TikTok did not explain its privacy practices with respect to the collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization in a manner that would result in meaningful consent being obtained from those users. Though the company made significant information available to users regarding its privacy practices, including through just-in-time notices and in a layered format, and even tried to improve its practices, the Privacy Commissioners found that: TikTok did not provide certain key information about its privacy practices up-front; its Privacy Policy did not explain its practices in sufficient detail for users to reasonably understand how their personal information would be used and for what purposes; other available documents with further details were difficult to find and not linked in the Privacy Policy; and many key documents, including TikTok’s Privacy Policy, were not made available in French. Also, TikTok failed to adequately explain its collection and use of users’ biometric information. When it came to the meaningfulness of consent from youth users, it became clear that the same communications were used for both youth and adults, and they were similarly inadequate. The Privacy Commissioners pointed out that children were particularly vulnerable to the risks arising from the collection, use, and disclosure of their personal information. In fact, UNICEF Canada has called for a prohibition on the use of personal data in the development of targeted marketing towards children and young people because it has been established that they are extremely vulnerable to such advertising.
They also noted other potential general harms to children and youth resulting from targeted advertising, including the marketing of games that can lead to the normalization of gambling, and an increased risk of identity theft and fraud through the profiling associated with targeted advertising.

TikTok failed to obtain meaningful consent from youth for its collection and use of their personal information, including via tracking and profiling, for purposes of ad targeting and content personalization. More specifically, the Privacy Commissioners found that, in addition to the fact that TikTok's privacy communications were inadequate to support consent from adults, TikTok's youth-specific privacy measures were also inadequate to ensure meaningful consent from youth, for the following reasons:

- youth-specific communications in TikTok's portal were not easy to find;
- none of those communications explained TikTok's collection and use of personal information, including via tracking and profiling, for purposes of ad targeting and content personalization; and
- TikTok provided no evidence to establish that its communications had, in fact, led youth users to understand what personal information TikTok would use, and how, for such purposes.

The Privacy Commissioners stated: "Given these risks and sensitivities, we would expect TikTok to implement a consent model and privacy communications that seek to ensure that individuals aged 13-17 can meaningfully understand and consent to TikTok's tracking, profiling, targeting and content personalization practices when they use the platform. This includes an expectation that TikTok would develop their communications intended for users aged 13-17 in language that those users can reasonably understand, taking into account their level of cognitive development. TikTok should also make clear to those users the risk of harm and other consequences associated with use of the platform consistent with the Consent Guidelines and section 6.1 of PIPEDA. In light of the fact that younger users may not be aware of the existence and implications of targeted advertising, TikTok's privacy communications should include prominent up-front notification that targeted ads may be delivered to them on the platform to influence their behaviour."

Issue 2.1: Did TikTok meet its obligations to inform the persons concerned with respect to the collection and use of personal information to create user profiles for the purposes of ad targeting and content personalization?

Rather than imposing an obligation to obtain consent, and regardless of the type of personal information involved, Québec's legislation provides that when personal information is collected directly from the person concerned, the company collecting the information has an obligation to inform that person. A person who provides their personal information in accordance with the privacy legislation consents to its use and its communication for the purposes for which it was collected.

In this case, TikTok collects personal information from the user using technology with functions that enable it to identify, locate, or profile the user. Specifically, TikTok uses its platform (website and app) along with associated technologies such as computer vision and audio analytics, as well as the three age models, to collect and infer information about users (including their demographics, interests and location) to create a profile about them.
These profiles can in turn be used to assist in the delivery of targeted advertising and tailored content recommendations on the platform.

The Privacy Commissioners found that TikTok did not meet this obligation to inform, and that its collection of personal information was therefore not compliant with Québec's legislation. TikTok also did not, by default, deactivate the functions that allowed a person to be identified, located, or profiled using personal information. Because users did not have to take an active step to turn those functions on, the Privacy Commissioners found that TikTok contravened the requirements of Québec's legislation. Moreover, TikTok's failure to ensure that the privacy settings of its technological product provided the highest level of privacy by default, without any intervention by the person concerned, also contravened the legislation. Consequently, TikTok's practices did not comply with sections 8, 8.1 and 9.1 of Québec's private sector privacy legislation.

The Privacy Commissioners stated: "Subsequent to engagement with the Offices, a new stand-alone privacy policy for Canada was published in July 2025."

What Were the Recommendations That TikTok Will Be Working to Follow?

Given the above findings, the company agreed to work with the Privacy Commissioners to resolve the matter. More specifically, TikTok committed to the following:

- Implement three new enhanced age assurance mechanisms that are to be demonstrably effective at keeping underage users off the platform;
- Enhance its privacy policy to better explain its practices related to targeted advertising and content personalization, and make additional relevant privacy communications more accessible, including via links in the privacy policy and up-front notices;
- Cease allowing advertisers to target under-18 users, except via generic categories such as language and approximate location;
- Publish a new plain-language summary of its privacy policy for teens, and develop and distribute a video to teen users highlighting certain of TikTok's key privacy practices, including its collection and use of personal information to target ads and personalize content;
- Enhance privacy communications, including through prominent up-front notices, regarding its collection and use of biometric information and the potential for data to be processed in China; and
- Implement and inform users of a new "Privacy Settings Check-up" mechanism for all Canadian users, which would centralize TikTok's "most important and tangible" privacy settings and allow users to more easily review, adjust, and confirm those setting choices (a purely illustrative sketch of such a check-up appears below).

What Has TikTok Done in Response to the Findings?

In response to the joint findings and recommendations, the OPC News Release states that TikTok has agreed to strengthen privacy communications to ensure that users, and in particular younger users, understand how their data could be used, including for targeted advertising and content personalization. TikTok has also agreed to enhance age-assurance methods to keep underage users off the platform and to provide more privacy information in French. In fact, the company began making some improvements during the investigation. As a result, the matter was found to be well-founded and conditionally resolved with respect to all three issues. The Privacy Commissioners will continue to work with TikTok to ensure the final resolution of the matter through its implementation of the agreed-upon recommendations.
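To make the "highest level of privacy by default" requirement and the recommended "Privacy Settings Check-up" more concrete, here is a minimal, purely illustrative sketch. The setting names, defaults, and check-up logic are assumptions for illustration only and do not reflect TikTok's actual settings or implementation.

```python
from dataclasses import dataclass, asdict

# Purely illustrative: a settings object whose defaults reflect the
# "highest level of privacy by default" idea in Québec's legislation.
# The field names are assumptions, not TikTok's actual settings.
@dataclass
class PrivacySettings:
    account_private: bool = True          # accounts start private
    personalized_ads: bool = False        # no ad profiling until the user opts in
    precise_location: bool = False        # no precise geolocation by default
    allow_direct_messages: bool = False   # messaging off until enabled
    allow_live_streaming: bool = False    # live streaming off until enabled

def settings_checkup(current: PrivacySettings) -> list[str]:
    """List the settings a user has changed away from the most protective default,
    in the spirit of the recommended check-up mechanism."""
    defaults = asdict(PrivacySettings())
    return [name for name, value in asdict(current).items() if value != defaults[name]]

# Example: a user who enabled personalized ads would see that choice surfaced for review.
user_settings = PrivacySettings(personalized_ads=True)
print(settings_checkup(user_settings))  # ['personalized_ads']
```

The point of the sketch is that no active step by the user is needed to reach the most protective state, and anything the user later loosens is surfaced so it can be reviewed, adjusted, or confirmed.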
The Privacy Commissioner of Canada, Philippe Dufresne, stated: "TikTok is one of the most prevalent social media applications used in Canada, and it is collecting vast amounts of personal information about its users, including a large number of Canadian children. The investigation has revealed that personal data profiles of youth, including children, are used at times to target advertising content directly to them, which can have harmful impacts on their well-being. This investigation also uncovered the extent to which personal information is being collected and used, often without a user's knowledge or consent. This underscores important considerations for any organization subject to Canadian privacy laws that designs and develops services, particularly for younger users. As technology plays an increasingly central role in the lives of young people in Canada, we must put their best interests at the forefront so that they are enabled to safely navigate the digital world."

For more information, readers can view the Backgrounder: Investigation into TikTok and user privacy.

What Can We Take from This Development?

Although TikTok generally disagreed with the Privacy Commissioners' findings, the company did commit to working with them and had already started to make improvements. What this case makes clear is that when it comes to youth privacy, there is no excuse for weak privacy protections: it is important to get it right and to provide the highest level of privacy by default. This is especially true for compliance with Québec's private sector privacy legislation, which has been modernized to closely resemble the more protective General Data Protection Regulation.

It is notable that the Privacy Commissioners said they were deeply concerned by the limited measures TikTok had in place to protect youth privacy, and found it particularly troubling that, even though TikTok had implemented many sophisticated analytics tools for age estimation to serve its various other business purposes, the company did not consider using those tools or similar ones to prevent underage users from accessing, and being tracked and profiled on, the platform.

The case shows how important it is for companies like TikTok to ensure that their purposes for collection, use, and disclosure are ones a reasonable person would consider appropriate, reasonable, or legitimate under the circumstances. Companies also need to obtain valid and meaningful consent, in line with the Guidelines for obtaining meaningful consent, and adequate age assurance mechanisms need to be in place to ensure that underage users are not let onto the platform. When it comes to biometric information, more needs to be done to ensure proper express consent, given the sensitive nature of that information. Lastly, companies must communicate as clearly as possible and properly explain their practices, including in their Privacy Policy.

Finally, it is important to remember what the Privacy Commissioners said: obtaining consent does not render an otherwise inappropriate purpose appropriate. If the purposes are not appropriate, users will not be able to consent.
So, if you cannot protect users with appropriate or reasonable measures, there is no point in asking for consent to collect or use their personal information.