
Search Results


  • voyAIge strategy | Data & AI Governance Specialists

    Our governance solutions ensure your successful and safe use of data and AI. We are seasoned specialists in AI law, policy, and ethics. We make AI and people work Tommy Cooke Co-Founder voyAIge strategy (VS) helps organizations to responsibly navigate AI through innovative strategic solutions, such as tailored policies, thought leadership content, and workforce and executive training. Our goal is to make AI and people work by bridging the gap between AI and people. We align AI use with real human needs and talents. Our solutions empower people to use AI confidently, safely, and effectively. Christina Catenacci Co-Founder Book a Free Consultation Managed Data & AI Governance Services Guidance, Oversight, and Leadership for your AI Journey Our Managed Data & AI Governance Services offer your organization the confidence to move forward with data and AI. Think of us as your virtual Chief Data or Chief AI Officer: someone on-call and working with you. We meet you where you are in your data and AI journey and provide the support services you need at an affordable monthly rate that fits your budget. From a tailored AI strategy and risk assessments to executive and staff training to communications planning, we help ensure your data and AI use is aligned with your goals and is safe, responsible, and built to last. Whether you are building your first use cases, integrating data across departments, or exploring risk mitigation, we provide the leadership and structure to make AI work for your people and your goals. Our extensive experience in AI and related areas such as data governance, data privacy and security, intellectual property, and confidential information has helped us craft a simple yet reliable three-step approach to guide you in your AI journey: Data Governance End-to-End Management We partner to set up, maintain and evolve the structures, roles, policies and operations required to treat data as a trusted asset across the enterprise AI Governance Strategic AI Enablement We help you govern AI within your business: from use-case identification, model selection, deployment, through to monitoring, ethics & control frameworks Ecosystem Oversight Define data ownership, steward roles, access controls, metadata management and lifecycle processes Use-case to Value Identify high-impact AI opportunities, run pilots, operationalize them and embed the capabilities in business processes Compliance & Risk Reduction We ensure your data handling meets legal, regulatory and ethical standards Operational Readiness Equip your organisation with the governance structure that enables reliable analytics, BI and advanced data capabilities Operational Readiness Deploy AI governance frameworks, playbooks and training so your teams act confidently and safely Compliance & Risk Reduction We ensure your data handling and AI operations meet legal, regulatory and ethical standards as well as industry best practices Our Managed Data & AI Governance Services Deliver Benefits That Accelerate Safe and Successful Growth Clarity on strategy and direction We help you set a focused, realistic AI roadmap Compliance and risk management Your AI stays aligned with law and compliance Expertise without full-time cost Access senior-level guidance at a fraction of the cost Support that grows with your needs We adapt as your AI use evolves Faster, Safer Implementation Avoid false starts with structured deployment Confidence across teams and stakeholders Build trust in AI with clear guidance and communication
Most Organizations Encounter the Same AI Challenges Most organizations encounter the same kinds of roadblocks when adopting AI. These challenges can stall progress, create risk, and leave teams overwhelmed or misaligned. VS provides solutions to address these challenges: Fear of AI Executives fear lost ROI as well as strategic or stakeholder misalignment Employees fear replacement, uncertainty, extra work, and inadequate training The solution is Training Inappropriate Use of AI Executives worry about employee misuse, data leaks, and non-compliance while employees often lack clarity on the rules and accidentally share sensitive information The solution is AI Policies Lack of Preparedness Organizations are unsure if they are ready for AI, lack budget clarity, and struggle to communicate effectively with stakeholders The solutions are thought leadership and stakeholder engagement No Leadership Organizations often do not have an internal AI expert. There may be no AI direction and no coordination between departments. No one is in charge of decision-making. The solution is VS's Managed AI Services No Strategy Leaders do not know what AI tools are the right fit, or they are overwhelmed by options. There is no roadmap or strategy for AI adoption, nor is there a change management plan in place. The solution is Adoption and Transformation Too Many Questions "Where do we start?" "Do we need a plan?" "Is AI worth the investment?" "What AI do we need?" The solution is the AI Helpdesk Strategic and Critical Insights to Guide your AI Journey Canada’s Innovation Crossroads Read More New York Governor Hochul Signs AI Safety and Transparency Bill into Law Read More Privacy Commissioner Investigation into Social Media Platform, X Read More Testimonials "voyAIge has delivered exceptional work with their AI in the Workplace Policy for OneFeather. By centering Indigenous data sovereignty, collective growth, and the principle of 'leaving the table better set than we found it,' they've created more than just a policy; they've provided a blueprint for ethical AI implementation that protects community interests and removes systemic barriers" Jerret Taylor / Chief Technical Officer / OneFeather Mobile Technologies Ltd. Contact Us Partnership Opportunities Submit RFP Stay Informed Get expert perspectives on AI risks, solutions, and strategies

  • Closing the Gap: from Policies and Procedures to Practice | voyAIge strategy

    Closing the Gap: from Policies and Procedures to Practice Overcoming the policy/procedure-practice paradox requires focus and commitment By Tommy Cooke Sep 24, 2024 Key Points: Having AI policies doesn't automatically ensure ethical AI practices Regular audits and cross-functional teams are crucial for aligning AI with ethical standards Explainability and stakeholder engagement are key to responsible AI implementation Closing the Gap: from Policies and Procedures to Practice Organizations pride themselves on having comprehensive AI policies and procedures. They show care and diligence, and they signal to your staff and stakeholders that you take AI use and employee behaviour seriously as part of your business plan. However, AI policies and procedures don’t guarantee ethical AI. Even when multiple policies reference AI, there's often a gap between policy and procedures on the one hand, and practice on the other. This gap is a problem because it can catalyze unintended consequences and ethical breaches that undermine the very principles those policies are meant to uphold. The Policy/Procedure-Practice Paradox This problem is a paradox that is common in virtually every industry using AI. By paradox we mean a contradictory statement that, when investigated and explained, proves to be true. For example, say aloud to yourself, “the beginning is the end”. It sounds absurd, but when you think it through, it makes sense. This same phenomenon presents itself when thinking about “policies and procedures in practice”. Policies and procedures are documents, so how exactly are they practiced? The initial thought that a document practices anything is absurd. But when we read them, they guide how people ought to use and not use AI. The policy/procedure-practice paradox is a problem because failing to understand it means failing to address it. And in failing to address it, policies and procedures about AI often lead to broken and misinformed practices. Let’s consider a real-world example: Despite a company having an anti-bias policy in place, a facial recognition system used in stores across the United States for over eight years exhibited significant bias. The system struggled to accurately identify people with darker skin tones, leading to higher error rates for certain demographics. This occurred because the AI was trained on datasets disproportionately representing lighter skin tones. And so, even well-intentioned policies can fail in practice. The example above is not isolated. It’s a symptom of a larger issue in AI implementation. While the example I provided was caused by biased data, there are several other reasons why the policy/procedure-practice paradox exists: Lack of Explainability: many AI systems operate as "black boxes," making it difficult to understand their decision-making processes, even with transparency policies in place Rigid Rule Adherence: AI systems may strictly follow their programmed rules without understanding the nuanced ethical priorities of an organization Complexity of Ethical Standards: Translating abstract ethical concepts into concrete, programmable instructions is a complex task that often leaves room for interpretation and error Closing the Gap To mitigate the paradox, we need to close the gap that often exists between AI policies and procedures and AI practices. Here are some strategies to achieve this: Translate Policies into AI-Specific Guidelines: high-level policy language needs to be converted into actionable steps that can be implemented in AI systems.
This translation ensures that AI operates on the same definitions of privacy, fairness, and transparency as the organization. Engage with your AI vendor to discuss how your policies can be integrated into the system's operations. Remember, AI systems often require fine-tuning to align with specific organizational needs. Conduct Regular Audits: periodic reviews of AI systems are essential to ensure they're behaving in line with ethical standards. These audits should be thorough and look for potential blind spots. They’re also excellent at discovering and mitigating issues that an organization may have previously missed. Compare your system's training data with the data your organization provides. Analyze the differences and involve your ethics and analytics teams in prioritizing findings for policy amendments (a minimal sketch of such a comparison appears at the end of this article). Build a Cross-Functional Ethics Team: bringing together technology champions, legal experts, and individuals with strong ethical compasses can provide a well-rounded perspective on AI implementation. Ensure this team regularly communicates with your AI vendor, especially during the implementation of new systems. When building this team, diversify it. As academics say, make it multidisciplinary, meaning it combines professional specializations when approaching a problem. Promote Explainability: as the Electronic Frontier Foundation has advocated for years, explainability is crucial when using AI. Why? If an AI system's decisions can't be explained, it becomes difficult for an organization to claim accountability for its actions. Work with your vendor to ensure AI models are interpretable. Position the right people to explain system outputs to anyone in your organization and verify that these align with your founding principles. Engage External Stakeholders: as AI ethics expert Kristofer Bouchard recently argued, external perspectives, especially from customers, communities, and marginalized groups, are crucial when using AI. This is especially the case when it comes to identifying ethical blind spots. Regularly seek feedback from these groups when evaluating your AI systems. Their insights can be invaluable in uncovering unforeseen ethical implications. The Path Forward: Ongoing Oversight and Proactive Management Closing the gap between AI policy and ethical practice requires keeping it shut. Unfortunately, it’s not as simple as closing a door once and for all: the gap can easily reopen many times during your AI journey. Closing it requires ongoing oversight, regular policy updates, and a commitment to aligning AI behavior with organizational values. Actively integrate the five strategies above, as doing so can significantly minimize risks associated with AI use. Being proactive not only ensures compliance with ethical standards but also builds trust with stakeholders and positions the organization as a responsible leader in AI adoption. Remember, in the world of AI, accountability and responsibility are critical. The power of these systems demands continuous vigilance and active management. By committing to this process, organizations can harness the full potential of AI while upholding their ethical principles and societal responsibilities.
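To make the "Conduct Regular Audits" strategy above more concrete, here is a minimal, hypothetical sketch of the kind of comparison an ethics and analytics team might run: it contrasts how a demographic attribute is distributed in a vendor's training data versus your organization's own data and flags large gaps for review. The field name, sample data, and tolerance are illustrative assumptions, not details from the article.

```python
from collections import Counter

def representation_gaps(training_records, org_records, attribute, tolerance=0.10):
    """Flag categories whose share of the training data differs from their share
    of the organization's own data by more than `tolerance` (absolute difference)."""
    def shares(records):
        counts = Counter(r[attribute] for r in records if attribute in r)
        total = sum(counts.values()) or 1
        return {category: count / total for category, count in counts.items()}

    train_shares, org_shares = shares(training_records), shares(org_records)
    gaps = {}
    for category in set(train_shares) | set(org_shares):
        diff = org_shares.get(category, 0.0) - train_shares.get(category, 0.0)
        if abs(diff) > tolerance:
            gaps[category] = round(diff, 3)  # positive = under-represented in training data
    return gaps

# Hypothetical example: a group that is 40% of your population but 10% of the training set
training = [{"skin_tone": "lighter"}] * 90 + [{"skin_tone": "darker"}] * 10
operational = [{"skin_tone": "lighter"}] * 60 + [{"skin_tone": "darker"}] * 40
print(representation_gaps(training, operational, "skin_tone"))
# gaps of roughly +/-0.3 for both groups -> flag for the cross-functional ethics team
```

Output like this does not by itself prove bias, but it gives the audit a concrete artefact to discuss with the vendor and to feed into policy amendments.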

  • Meta Refuses to Sign the EU’s AI Code of Practice | voyAIge strategy

    Meta Refuses to Sign the EU’s AI Code of Practice A closer look at the reasons why By Christina Catenacci, human writer Jul 30, 2025 Key Points On July 18, 2025, the European Commission released its General-Purpose AI Code of Practice and its Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act Many companies have complained about the Code of Practice, and some have gone so far as to refuse to sign it—like Meta Businesses in the European Union, and those outside the EU that do business with it (see Article 2 regarding application), are recommended to review the AI Act, Code of Practice, and Guidelines and comply Meta has just refused to sign the European Union’s General-Purpose AI Code of Practice for the AI Act. That’s right—Joel Kaplan, the Chief Global Affairs Officer of Meta, said in a LinkedIn post on July 18, 2025 that “Meta won’t be signing it”. By general-purpose AI, I mean an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities. What is the purpose of the AI Act? As you may recall, section (1) of the Preamble of the AI Act states: “The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorized by this Regulation” The AI Act classifies AI according to risk and prohibits unacceptable risk like social scoring systems and manipulative AI. High-risk AI is regulated, limited risk has lighter obligations, and minimal risk is unregulated. The AI Act entered into force on August 1, 2024, but its requirements are being phased in over time. The first set of rules, which took effect on February 2, 2025, bans certain unacceptable-risk AI systems. After this, a wave of obligations follows over the next two to three years, with full compliance for high-risk AI systems expected by 2027 (August 2, 2025, February 2, 2026, and August 2, 2027 each carry certain requirements). Those involved in general-purpose AI may have to take additional steps (e.g., development of Codes of Practice by 2025), and may be subject to specific provisions for general-purpose AI models and systems. See the timeline for particulars. What is the Code of Practice for the AI Act?
The Code of Practice is a voluntary tool (not a binding law), prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models. More specifically, the specific objectives of the Code of Practice are to: serve as a guiding document for demonstrating compliance with the obligations provided for in the AI Act, while recognising that adherence to the Code of Practice does not constitute conclusive evidence of compliance with these obligations under the AI Act ensure providers of general-purpose AI models comply with their obligations under the AI Act and enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code of Practice to demonstrate compliance with their obligations under the AI Act Released on July 10, 2025, it has three parts: Transparency: Commitments of Signatories include Documentation (there is a Model Documentation Form containing general information, model properties, methods of distribution and licenses, use, training process, information on the data used for training, testing, and validation, computational resources, and energy consumption during training and inference) Copyright: Commitments of Signatories include putting in place a Copyright policy Safety and Security: Commitments of Signatories include adopting a Safety and security framework; Systemic risk identification; Systemic risk analysis; Systemic risk acceptance determination; Safety mitigations; Security mitigations; Safety and security model reports; Systemic risk responsibility allocation; Serious incident reporting; Additional documentation and transparency For each Commitment that Signatories sign onto, there is a corresponding Article of the AI Act to which it relates. In this way, Signatories can understand what parts of the AI Act are being triggered and complied with. For example, the Transparency chapter deals with obligations under Article 53(1)(a) and (b), 53(2), 53(7), and Annexes XI and XII of the AI Act. Similarly, the Copyright chapter deals with obligations under Article 53(1)(c) of the AI Act. And the Safety and Security chapter deals with obligations under Articles 53, 55, and 56 and Recitals 110, 114, and 115 of the AI Act. In a nutshell, adhering to the Code of Practice that is assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. The plan is that the Code of Practice will be complemented by Commission guidelines on key concepts related to general-purpose AI models, also published in July. An explanation of these guidelines is set out below. Why are tech companies not happy with the Code of Practice? To start, we should examine the infamous LinkedIn post: “Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act. Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe’s largest businesses signed a letter calling for the Commission to ‘Stop the Clock’ in its implementation.
We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.” The post criticizes the European Union for going down the wrong path. It also talks about legal uncertainties, measures which go far beyond the scope of the AI Act, as well as stunting development of AI models and companies. There was also mention of other companies wanting to delay the need to comply. To be sure, CEOs from more than 40 European companies including ASML, Philips, Siemens and Mistral, asked for a “two-year clock-stop” on the AI Act before key obligations enter into force this August. In fact, the bottom part of the open letter to European Commission President Ursula von der Leyen called “Stop the Clock” asked for more simplified and practical AI regulation and spoke of a need to postpone the enforcement of the AI Act. Essentially, the companies want a pause on obligations for high-risk AI systems that are due to take effect as of August 2026, and on obligations for general-purpose AI models that are due to enter into force as of August 2025. By contrast, the top of the document is entitled “EU Champions AI Initiative”, with logos of over 110 organizations that have over $3 billion in market cap and over 3.7 million jobs across Europe. In response to the feedback, the European Commission is mulling giving companies who sign a Code of Practice on general-purpose AI a grace period before they need to comply with the European Union's AI Act. This is a switch from the July 10, 2025 announcement that the EU would be moving forward notwithstanding the complaints. The final word appears to be that there is no stop-the-clock, pause, or grace period, period. New guidelines also released July 18, 2025 In addition, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI models under the AI Act (Regulation EU 2024/1689)—right before the AI Act’s key compliance date, August 2, 2025. The goal is to help AI developers and downstream providers by providing clarification. For example, it explains which providers of general-purpose AI models are in and out of scope of the AI Act’s obligations. In fact, the European Commission stated that “The aim is to provide legal certainty to actors across the AI value chain by clarifying when and how they are required to comply with these obligations”. The Guidelines focus on four main areas: General-purpose AI model Providers of general-purpose AI models Exemptions from certain obligations Enforcement of obligations The intention is to use clear definitions, a pragmatic approach, and exemptions for open-source. That said, the Guidelines consist of 36 pages of dense material that need to be reviewed and understood. For instance, the Guidelines answer the question, “When is a model a general-purpose AI model?” Examples are provided for models in scope and out of scope. What happens next? As we can see from the above discussion, there are serious obligations that need to be complied with—soon. To that end, businesses in the European Union, or those who do business in the European Union (see Article 2 regarding application), are recommended to review the AI Act, the Code of Practice, and the Guidelines to ensure that they are ready for August 2, 2025. After August 2, 2025, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations.
Providers of general-purpose AI models that will be classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From 2 August 2026, the Commission’s enforcement powers enter into application. And by August 2, 2027, providers of general-purpose AI models placed on the market before August 2, 2025 must comply.

  • Compliance | voyAIge strategy

    AI policies and frameworks to help your organization meet legal and ethical standards. Compliance At voyAIge strategy, compliance is a foundation of our analysis of the legal, policy, and ethical dimensions of AI. We understand the intricacies of the laws of many jurisdictions and can guide you through every step of your compliance journey. Today's rapidly evolving digital landscape is fueled by the exponential rate at which AI transforms not just business practices and ways of seeing but entire industries. However, with innovation comes new challenges - particularly in compliance. Governments and regulatory bodies around the world are racing to keep up. They are creating complex legal requirements with which businesses must comply. For businesses, navigating this complexity is not just about avoiding fines and penalties. It's about safeguarding reputation, building trust with stakeholders, and ensuring sustainability. What's Your Compliance Challenge? Jurisdiction, sector, applicable legislation, data types, and data flows are some of the many considerations we take into account when identifying regulatory bodies relevant to your organization. The following are some examples that may apply to a business now or in the future: GDPR The General Data Protection Regulation applies to all businesses with clients and customers in Europe NYC LL 144 New York City regulates how businesses can and cannot use AI to assist in hiring employees AI ACT Regulates and governs the use of Artificial Intelligence inside the European Union FDA The US Food and Drug Administration has specific AI regulations governing medical devices, evidence curation, and market monitoring CCPA The California Consumer Privacy Act has a significant impact on how AI systems used in businesses handle data AIDA Canada's proposed Artificial Intelligence and Data Act would regulate the design, development, and use of high-impact AI systems in the private sector OESA Governs employee monitoring as well as using AI to recruit employees in Ontario, Canada SB 1047 California's Safe and Secure Innovation for Frontier AI Models Act imposes safety restrictions on advanced AI PIPEDA Canada's Personal Information Protection and Electronic Documents Act governs how companies collect, use, and share personal information Did You Know? Fines can reach up to €30 million or 6% of global annual turnover for prohibited AI practices under the EU AI Act. Amazon was fined €746 million by Luxembourg’s data protection authority for how it processed personal data for targeted advertising using AI-driven systems. Canada's proposed Artificial Intelligence and Data Act would impose administrative monetary penalties of up to $10 million or 3% of global annual revenue, as well as criminal penalties such as jail time for AI decisions causing significant harm. Expert Insights, Experience & Resources Book a free consultation to chat with us about correctly and accurately identifying compliance and regulation applicable to your organization. Stay informed by subscribing to our VS-AI Observer Substack, where we offer articles, whitepapers, case studies, and video content that will keep your organization ahead of emerging compliance challenges, requirements, and issues. Book a Free Consultation

  • Partnership Opportunities | voyAIge strategy

    Collaborating with organizations to drive responsible and effective AI adoption. Partnership Opportunities We are passionate about partnerships and alliances. If your business or organization is interested in collaborating with us on business opportunities, CfPs, or other forms of co-operation for mutual benefit, we would like to hear from you. For all other inquiries, please contact us here.

  • Governing AI by Learning from Cohere’s Mistakes | voyAIge strategy

    Governing AI by Learning from Cohere’s Mistakes Why it is Crucial to Demonstrate Control By Tommy Cooke, powered by caffeine and curiosity and a strong desire for sunny weather Mar 7, 2025 Key Points: AI governance is essential because it ensures organizations maintain transparency, accountability, and oversight over how AI systems are trained, deployed, and used Leaders must proactively assess where AI models source their data, ensuring compliance with intellectual property laws and mitigating risks related to unauthorized content use We don’t govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control. This is how we build trust with stakeholders In my 20s, I spent a lot of time traveling to Germany to visit a close friend. He had an early GPS unit in his car. It was an outdated system that relied on CDs for updates and generated very blurry little arrows on a tiny screen nowhere near the driver's eyes. On one trip, we thought we were heading west of Frankfurt to a favourite restaurant. Instead, we ended up 75 kilometers south. We found ourselves sitting in his car at a dead end, with his high beams on, staring into a dark farmer's field. The system led us astray because we over-relied on old data inside a brand-new consumer technology. When I speak with leaders adopting AI for the first time, I often think of getting lost in the rural German countryside. AI, like early GPS, promises efficiency, but its reliability depends entirely on its data. Organizations are under pressure to adopt AI to streamline operations, reduce costs, and drive creativity. But AI isn’t a magic bullet. It’s a tool. And like an unreliable GPS, AI trained on flawed or unauthorized data can take your organization in the wrong direction. It relinquishes control. Cohere, a major AI company in Canada, is facing a significant lawsuit over how it trained its models; the company is alleged to have used copyrighted content without permission or compensation. This case is one you should know about because it's more than just a legal battle. It’s a reminder: AI adoption isn’t just about capability. It’s about building and maintaining responsible control. So, how exactly do you ensure you are in control? The answer begins and ends with an ethical strategy. The Ethical Fault Line The lawsuit against Cohere, a Toronto-based AI company, highlights the growing tension between AI developers and content creators. Major media organizations allege that AI companies are scraping and reproducing their content without consent or compensation. This raises a critical question: Who controls knowledge in the era of AI? This isn’t just a tech industry issue—it’s a governance challenge with real consequences. AI systems generate content, provide insights, and automate decisions based on their underlying data. If that data is sourced irresponsibly—such as using newspaper articles without publisher consent—organizations risk reputational harm, legal liability, and a breakdown of trust with employees, customers, and industry partners. Lessons for AI Leaders: How to Stay on the Right Side of AI Ethics As AI continues to reshape industries, its impact will depend on how it is developed and deployed. Business leaders don’t need to be AI engineers, but they do need to ensure that they are using AI transparently. Here's why: Transparency is the Foundation of Trust. AI should not be a "black box"—a technology that operates mysteriously without clear explanation.
Leaders need visibility into how AI works, what data it uses, and what safeguards are in place. This means two things: first, working with AI vendors to receive clear documentation on data sources and model behaviour. If a company can’t explain how its AI makes decisions, that’s a red flag. Second, leaders need a communication strategy—something that they can reference to explain AI’s role to any stakeholder Respect Intellectual Property from the Start. Whether using AI to generate content, analyze trends, or assist in decision-making, stakeholders expect AI leaders to account for where AI data comes from. If an organization uses internal data from sales reports, for example, this needs to be documented. If outsourcing data from a third-party vendor, it’s not enough to say that the data is external—leaders must be able to confirm the vendor’s ownership and rights to that data Governing AI Is Not Optional. Responsible AI use requires a governance framework. Companies need clear policies that define how AI is trained, where data comes from, and how the system and its outputs are monitored. Think of AI governance like driving a car: just as drivers follow traffic laws and speed limits, AI systems require rules to ensure safe and ethical operation. AI governance is a business strategy that demonstrates commitment to legal, compliant, and ethical AI development—ensuring transparency, explainability, and accountability. Ethical AI is an Advantage, Not a Financial Burden Much as a driver is expected to maintain control of their vehicle, abide by the rules, and ensure their own and others' safety, drivers build trust with their passengers and other drivers by continuously demonstrating that they are in control. The same holds true with AI ethics. We don't govern AI to avoid lawsuits. We govern AI to demonstrate that we are in control.

  • De-Risking AI Prompts | voyAIge strategy

    De-Risking AI Prompts How to Make AI Use Safer for Business By Tommy Cooke, fueled by caffeine and curiosity Aug 8, 2025 Key Points: Small, well-intentioned actions can quietly introduce risk when staff lack clear guidance and boundaries De-risking AI isn’t about restricting use. It’s about educating staff, adopting prompt training into workflows, and developing a support team Safe and effective AI use begins when leadership models responsible practices and builds a culture of clarity, not control Many moons ago, I was working with a data centre on a surveillance experiment. One of the interns was a motivated student. He was tasked with investigating third parties that we suspected were abusing access to sensitive location data within one of our experiment’s smartphones. Without telling anyone, the student sent sample data from our smartphone to an organization we were actively investigating. It was an organization whose credibility was under intense scrutiny for abusive data practices. The student wasn’t acting out of malice. He was trying to be helpful, to show responsiveness, to move the work forward. But he didn’t understand the stakes. To him, the data was “just a sample.” To us, it signaled loss of control and a risky alignment with an actor we hadn’t finished vetting. The problem wasn’t the intern. The problem was that we hadn’t taken the time to review and discuss contract terms—to find ways to guide interns on both the best practices and boundaries around their work. This is what prompting GPT looks like in many organizations today. Staff often use AI to accelerate their work, lighten workloads, and inject some creativity into their craft. AI is a tool that is attractive to staff for many reasons, and so it is not surprising to us here at VS to hear that staff also turn to AI to respond to mounting work pressures; now that AI is available, executives increasingly expect their teams to work harder, faster, and better with it. But with fewer than 25 percent of organizations having an AI policy in place, and even fewer educating their staff on how to use AI, it’s not surprising that how your staff use AI is not only risky, but also largely invisible: you are likely unaware of precisely what they are doing with it. To most organizations we speak to, this risk is entirely unacceptable. While we strongly advocate for putting a robust AI policy in place, along with training around that policy, let’s dive into what you can be doing to de-risk your organization’s AI use. De-Risking AI Is Not Just About Restricting Use Before we take a deeper dive, it’s important to address a common knee-jerk reaction among business leaders. There is a temptation to de-risk by locking down: restricting access to GPTs, blocking them at the firewall, or banning prompts that mention sensitive keywords. These reactions are just that: reactions. They are not responses because they are not planned, considered, and contextualized. They are rigid, inflexible, and as such, they often backfire. Just as important, they send a very clear message to your staff: AI is dangerous and not learnable. This pushes experimentation underground and creates a shadow use problem that’s harder to monitor or support. Instead, and as I mentioned earlier, the safer and more sustainable path is to educate, empower, and build clarity.
It’s impossible to eliminate risk entirely, but you can reduce it by building good habits, providing effective guidance, and sharing an understanding of what safe prompting looks like. What Team Leads and Business Owners Can Do If you lead a team or own a business, here are some steps you can take right now to start de-risking GPT use without killing its potential and promise: Create a prompt playbook. A living document that outlines safe and unsafe prompting practices, gives examples, and evolves over time. This could include do’s and don’ts, safe phrasing suggestions, and reminders about privacy, intellectual property, and any other related laws and policies relevant to the scenario at hand. It doesn't have to be long—it just has to be usable and user-friendly. Build training around real workflows. It’s quite common for organizations to bring in third parties to offer cookie-cutter training on how to use AI safely and effectively. Don’t do that. Abstract training doesn’t resonate on the front line, and we don’t find it effective with executives either. Bring in an organization that can offer training that reflects how your people actually use AI and the daily nuances of their work. Schedule prompt reviews. Designate an AI team. Task them with making it normal to collect, analyze, and assess how your staff talk to AI. Encourage them to ask questions like, “Is this a safe way to talk to AI?” We want to create a culture where prompt sharing and refinement is part of collaboration. Designate prompt leaders. Identify or train a few people, ideally within the aforementioned team, who can act as internal advisors on AI use. Not to gatekeep, but to support. Let staff know who to ask when they're unsure if a prompt might cause issues. Make it part of their job description and KPIs to lift up and support employees when they use AI. Develop internal guardrails. This is also something I discussed before, and something that Christina and I discuss ad nauseam in our articles. If you're using GPT through an API, platform, or organization-wide license, get AI policies in place. Set rules, automate flags, or integrate prompt logging for sensitive areas like legal, HR, or R&D (a minimal sketch of such a check appears at the end of this article). Communicate the purpose. Let people know why prompting guidance and safe use matters. Use examples to show how good prompting helps them avoid mistakes and do better work, not just follow rules. Ensure that you show the implications when things go wrong, and then follow up by reassuring staff that you have contingency plans in place. Let them know that you have a plan for when things go wrong, and that they shouldn’t be afraid to use AI if they follow their training. Signal leadership’s involvement. Executives and leaders should model good prompting habits, or they should at least acknowledge the importance of prompting. Lead by example, not just by word. The intern I mentioned earlier didn’t intend to create risk. The boundaries existed, but the intern was not familiar with them. We avoided damage to the project, and the incident was never about malice or recklessness. It was about misunderstanding what small mistakes could catalyze, especially when they go unrecognized by a staff member.
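To make the "internal guardrails" step above more concrete, here is a minimal, hypothetical sketch of a pre-submission check an AI team might place in front of a GPT integration: every prompt is logged, and prompts matching sensitive patterns are held for a prompt leader to review rather than silently blocked. The pattern list, log file name, and function names are illustrative assumptions, not a specific product or the only way to do this.

```python
import logging
import re
from datetime import datetime, timezone

# Illustrative, organization-specific terms a team might treat as sensitive.
SENSITIVE_PATTERNS = [
    r"\bsocial insurance number\b",
    r"\bsalary\b",
    r"\bclient list\b",
    r"\b(confidential|privileged)\b",
]

logging.basicConfig(filename="prompt_audit.log", level=logging.INFO)
audit_log = logging.getLogger("prompt_audit")

def review_prompt(prompt: str, user: str):
    """Log every prompt and flag those that match sensitive patterns.

    Returns (allowed, matched_patterns). Flagged prompts are routed to a
    designated prompt leader for a second look instead of being blocked outright.
    """
    matches = [p for p in SENSITIVE_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    audit_log.info("%s user=%s flagged=%s",
                   datetime.now(timezone.utc).isoformat(), user, bool(matches))
    return (not matches, matches)

allowed, hits = review_prompt("Summarize the confidential client list for Q3", user="intern01")
if not allowed:
    print("Prompt held for review; matched patterns:", hits)
```

A check like this is a starting point, not a substitute for training: it supports the playbook and prompt leaders described above rather than replacing them.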

  • The Strategic Values of Local AI | voyAIge strategy

    The Strategic Values of Local AI What it Means to Use AI in-house versus in-the-cloud By Tommy Cooke, powered by unusually high amounts of pollen in the air for this time of year Jul 18, 2025 Key Points: 1. Local AI keeps sensitive data in-house, helps businesses meet regulatory requirements, and reduces risk 2. Compared to escalating cloud costs, local AI offers predictable long-term savings for organizations with consistent workloads 3. A hybrid approach (using cloud AI for scale and local AI for control) is emerging as the most strategic model for enterprise AI deployment When you open the ChatGPT app on your phone, its AI runs in the cloud. What do I mean by that, you ask? The AI itself does not run on your phone—it runs on a server somewhere else in the world. Your phone sends data, the data is ingested into a room filled with processors, and the output is returned to your phone. But there is another way for AI to function. And that way is called “local AI”: AI that is deployed, operated, and lives entirely within an organization’s walls. While the idea of local AI seemed like a far-fetched dream merely a couple of years ago (and for good reason, as it was correctly perceived to be quite expensive at the time), it is a superior alternative for many organizations that are risk averse; privacy, control, cost predictability, and operational resilience are all qualities at the heart of local AI. Let’s unpack these qualities in further detail, as I imagine many of you reading this right now will be highly interested in exploring local AI for your own organizations. Sovereignty over Sensitive Data The premier benefit of local AI is guaranteeing that sensitive data never leaves the organization. In tightly regulated industries, such as healthcare and finance, data sovereignty is critical. Using cloud-based AI, even with robust security protocols, creates uncertainties: Who audits the vendor’s access logs? Where is the data stored, geographically? How is it being used to train backend models? These are unanswered questions that haunt compliance officers and auditing teams. On the other hand, using local AI means that every bit of data that is processed stays within your full visibility and stewardship—this gives businesses a critical advantage, particularly during a time of proliferating regulations. Simplified Compliance in a Complex Legal Landscape Regulations such as the EU’s GDPR and Canada’s PIPEDA impose strict obligations on data transfers and processing. This makes local AI models, which operate entirely within the bounds of these regulations, capable of sidestepping many of the issues that cloud AI systems are still struggling to navigate. That is, by minimizing the need to transfer data across and through jurisdictions, local AI reduces exposure to many legal complications. Moreover, and because all operations occur in-house, audit readiness becomes more straightforward: logs, model versions, and access records remain under corporate control. Predictable Operating Costs Cloud-based AI is often marketed as pay-for-use or as something that you can sign up for and begin using immediately. This makes mainstream AIs like ChatGPT and the like attractive: they are elastic, cost-efficient, and easy to access. However, as workloads grow, so too do fees. Application Programming Interface (API) calls, data storage, and compute time are but some of the many costs that begin to add up.
Cloud services also often carry usage-based or subscription-based pricing that tends to escalate over time. To be fair, the initial capital expenditure for local AI may be higher, but once it is set up, those costs amortize. For bounded workloads like batch processing, document classification, and real-time inference, the cumulative total cost of ownership is considerably lower than continual cloud usage. Latency, Resilience, and Offline Capability Local processing also provides tremendous improvements in speed. Without the back-and-forth delays caused by network requests, turnaround times when interacting with AI shrink considerably. This is particularly attractive for real-time applications like manufacturing quality assurance or point-of-care diagnostics. Moreover, local AI continues to operate amidst network disruptions. For instance, remote sites, field offices, or secure facilities with limited connectivity can maintain uninterrupted service by using local AI. In an age where downtime translates directly into lost revenue and reputational risk, it is worth considering alternatives to cloud-dependent AI. Customization Although generalist cloud models have dazzling breadth, they often stumble in the face of domain-specific language. This is where local AI offers the opportunity to fine-tune with proprietary data: legal briefs, clinical records, manufacturing logs. Additionally, this makes local AIs considerably more reliable than their cloud counterparts in terms of avoiding hallucinations. Practically speaking, that means cleaner summaries, safer predictions, and fewer erroneous suggestions. Enhanced Data Governance Running models locally brings a considerable benefit by way of transparency. When you control the entire stack, from data ingestion to output, you gain visibility into model behaviour. This facilitates a higher level of explainability compared with cloud-driven AI. Local AI means that you are no longer reliant on opaque APIs; this can be a deal breaker for many prospective clients and customers. A Hybrid Future: Balancing Reach and Responsibility between Local and Cloud AI It is important to stress that local AI does not necessarily need to be seen as supplanting cloud-based systems. Rather, they can be complementary. The optimal model for many organizations is modular, using both: Cloud-based AI for delivering massive-scale capabilities (think complex reasoning, multi-domain synthesis, and vast world knowledge) Local AI for handling sensitive tasks, private data, or immediate-response scenarios This balanced, hybrid approach is the future of enterprise AI. It's a “precision-first” approach and is one that could do wonders for aligning AI deployment with the context, risk tolerance, and regulatory demands of your industry (a rough sketch of such routing appears at the end of this article). Local AI is not a niche pursuit. It is a strategic investment for businesses that are seeking to reconcile innovation, privacy, and compliance. Through local deployment, companies gain control over data, reduce long-term costs, improve performance, tighten governance, and can fine-tune models to converse in their own language and jargon. For businesses that are serious about data privacy, customer trust, and operational continuity, local AI is not just an alternative—it is a better, smarter, more principled choice. And this approach has the flexibility to enable use in conjunction with cloud-driven solutions.
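As a rough illustration of the hybrid, "precision-first" routing described above, here is a hypothetical sketch in which requests touching personal or otherwise sensitive data stay on a locally hosted model, while requests that need broad world knowledge may go to a cloud service. The classification fields and the two handler stubs are assumptions; in practice they would wrap your actual local runtime and cloud API, and the sensitivity flag would come from your own data-classification rules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    prompt: str
    contains_sensitive_data: bool      # e.g., set by an upstream data-classification step
    needs_broad_world_knowledge: bool

def route(request: Request,
          local_model: Callable[[str], str],
          cloud_model: Callable[[str], str]) -> str:
    """Keep sensitive or self-contained work in-house; send the rest to the cloud."""
    if request.contains_sensitive_data or not request.needs_broad_world_knowledge:
        return local_model(request.prompt)   # data never leaves the organization
    return cloud_model(request.prompt)       # scale and breadth when it is safe to use

# Stub handlers standing in for a local runtime and a cloud API (assumptions).
local = lambda p: f"[local model] {p}"
cloud = lambda p: f"[cloud model] {p}"

print(route(Request("Summarize this patient intake form", True, False), local, cloud))
print(route(Request("Compare 2025 AI regulations across jurisdictions", False, True), local, cloud))
```

The point of the sketch is the decision boundary, not the handlers: deciding up front which categories of work must stay local is what turns the hybrid approach into a governable policy.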

  • Impact Assessments | voyAIge strategy

    Evaluate AI risks and opportunities with expert-driven impact assessments. Impact Assessments We specialize in data, algorithmic, ethics, and socioeconomic impact assessments to understand technological and operational impacts on organizations, their clients, customers, and stakeholders. Our assessments deliver the deep insights businesses need to understand their impact on those who matter most. Proactive Impact Assessments for Successful AI Use As AI, data, and algorithmic technologies become increasingly central to business operations, understanding their potential impacts is more critical than ever. Whether it’s assessing the ethical implications, ensuring compliance with regulatory standards, or evaluating the broader social and economic effects, impact assessments provide the clarity and foresight you need to navigate this complex landscape responsibly. At voyAIge strategy, we specialize in conducting thorough impact assessments that help organizations anticipate and mitigate risks, align with best practices, and make informed decisions. Our assessments go beyond surface-level analysis, offering deep insights into how your AI systems, data practices, and algorithms might influence your stakeholders, your business, and society at large. What is an Impact Assessment? An impact assessment is a systematic process of identifying, evaluating, and addressing the potential effects of AI systems, data usage, and algorithmic processes on individuals, organizations, and society. These assessments are crucial for ensuring that your technology strategies not only achieve their goals but also align with ethical standards, legal requirements, and social expectations. Examples of Impact Assessments include: DPIA Data Privacy Impact Assessment: evaluates how your data collection, storage, and processing practices affect individual privacy, ensuring compliance along the way. AIA Algorithmic Impact Assessment: analyzes the potential biases, fairness, and transparency of your algorithms, providing recommendations for mitigating negative outcomes and ensuring equitable results (a simplified example of one such check appears at the end of this page). EAIA Ethical AI Impact Assessment: assesses the broader ethical implications of deploying AI systems, including their effects on decision-making processes, social justice, and public trust. SEIA Social and Economic Impact Assessment: examines the potential social and economic consequences of your AI and data initiatives, helping you anticipate and address both positive and negative impacts. Why Impact Assessments Matter Impact assessments are not just a regulatory requirement—they are a critical tool for ensuring that your AI and data initiatives are responsible, sustainable, and aligned with your organization’s values. Here’s why impact assessments matter: Risk Mitigation: Identify and address potential risks before they become issues, protecting your organization from legal, ethical, and reputational harm. Regulatory Compliance: Ensure that your practices comply with local, national, and international laws and regulations, avoiding fines and penalties. Ethical Alignment: Align your technology strategies with ethical standards, ensuring that your AI and data use promote fairness, transparency, and accountability. Stakeholder Trust: Build and maintain trust with customers, employees, regulators, and the public by demonstrating your commitment to responsible AI and data use. Why Choose voyAIge strategy? At voyAIge strategy, we bring a unique blend of expertise, rigor, and strategic insight to every impact assessment.
Here’s why organizations trust us to guide their AI and data initiatives: 1 Deep Expertise: Our team has extensive experience in AI, data ethics, and regulatory compliance, ensuring that our assessments are both thorough and informed by the latest developments in the field. 2 Tailored Approach: We customize each impact assessment to your organization’s specific needs, goals, and regulatory environment, ensuring that our insights are relevant and actionable. 3 Ethical Commitment: We are deeply committed to promoting ethical AI and data use, and our assessments reflect this commitment, helping you align your practices with the highest ethical standards. 4 Strategic Focus: Our assessments are designed to provide not just risk mitigation, but also strategic insights that help you leverage AI and data technologies for sustainable growth.
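One small, concrete piece of an algorithmic impact assessment is a selection-rate parity check across groups, often summarized with the "four-fifths rule" used in employment-selection contexts (the kind of tool NYC LL 144 targets). The sketch below is a simplified, hypothetical illustration with made-up numbers; a real assessment looks at many more metrics and at the context behind them.

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, assessed)} -> {group: selection rate}."""
    return {group: (sel / total if total else 0.0) for group, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Compare each group's selection rate to the most-favoured group's rate.

    Ratios below `threshold` (the conventional four-fifths rule) are flagged for
    deeper review; passing this ratio test alone does not demonstrate fairness.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values()) or 1.0
    return {group: {"rate": round(rate, 3),
                    "ratio": round(rate / best, 3),
                    "flag": rate / best < threshold}
            for group, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates assessed)
example = {"group_a": (45, 100), "group_b": (27, 90)}
print(adverse_impact_ratios(example))
# group_b advances at 0.30 vs 0.45 for group_a -> ratio 0.667, flagged under the 4/5 rule
```

A flag like this is a prompt for investigation (data, features, thresholds, and context), not a verdict; that follow-up is where the substance of an assessment lies.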

  • Upskilling and Reskilling in the Age of AI | voyAIge strategy

    Upskilling and Reskilling in the Age of AI What Organizations Need to Know Christina Catenacci, Human Writer Jan 20, 2025 Key Points: Upskilling is the process of improving employee skill sets through AI training and development programs Reskilling is learning an entire set of new skills to do a new job It is not possible to have a one-time upskilling and reskilling session—rather, upskilling and reskilling is a continuous learning process IBM’s Institute for Business Value states that more than 60 percent of executives predict that Gen AI will disrupt how their organization designs experiences; even more striking, 75 percent say that competitive advantage depends on Gen AI. In a study by Boston Consulting Group where 13,000 people were surveyed, 89 percent of respondents said that their workforce needed improved AI skills—but only six percent said that they had begun upskilling in “a meaningful way”. Clearly, organizations that are not beginning the process of upskilling and reskilling can be at a disadvantage in this competitive game and risk being left behind. This may be why the AI Age is commonly referred to as an era of upskilling. What is upskilling and reskilling? IBM notes that upskilling and reskilling are two different things. In particular, upskilling is the process of improving employee skill sets through AI training and development programs. The goal is to minimize skill gaps and prepare employees for changes in their job roles or functions. For example, it could include asking customer care representatives to learn how to use Gen AI and chatbots to answer customer questions in real time with prompt engineering. On the other hand, reskilling is learning an entire set of new skills to do a new job. For example, someone who works in data processing might need to embrace reskilling to learn web development or advanced data analytics. Organizations Need to Prioritize Upskilling and Reskilling According to a report by KPMG, organizations are increasingly prioritizing upskilling and reskilling their workers to harness the power of AI and realize true business value. The authors point out that the impact of AI transformation is often underestimated—AI is expected to surpass human intelligence, and organizations cannot be complacent. Only 41 percent of organizations are increasing their AI investments. This is concerning since Gen AI is not like past disruptive technology; there can be no one-time upskilling and reskilling session, but rather a continuous learning process. Leaders in organizations need to get past employee resistance and help to drive AI adoption. How can this be accomplished? The authors note that leaders need to be equipped with the right mindset, knowledge, and skills to guide their AI transformation. By actively using AI in their own work and sharing their experiences with their teams, leaders can create a safe environment for exploration and experimentation, and this in turn helps to create a culture of innovation and continuous learning. Most importantly, the authors state that leaders need to communicate the benefits of AI clearly and transparently: they need to share how the technology can augment and enhance human capabilities rather than replace them.
An In-depth Study on Reskilling and Upskilling In an instructive report by the World Economic Forum (in collaboration with Boston Consulting Group), the authors introduced an approach to mapping out job transition pathways and reskilling opportunities using the power of digital data, to help workers, companies, and governments prioritize their actions, time, and investments so that reskilling efforts are focused efficiently and effectively. To prepare the workforce for the Fourth Industrial Revolution, the authors stated that it was necessary to identify and systematically map out realistic job transition opportunities for workers facing declining job prospects. When mapping job transition opportunities, the authors asked whether the job transition was viable and desirable. They broke down jobs into a series of relevant, measurable component parts in order to systematically compare them and identify any gaps in knowledge, skills, and experience. Then, they calculated the “job-fit” of any one individual on the basis of objective criteria. Viable future employees were those who were equipped to perform those tasks (individuals who possessed the necessary knowledge, skills, and experience). When it came to whether the job was desirable, some jobs were simply undesirable because the number of people projected to be employed in that job category was set to decline. Using data from the United States Bureau of Labor Statistics, the authors aimed to find job transition pathways for all. Let us take an example: the authors discovered several pathways for secretaries and administrative assistants. Some provided opportunities with a pay rise, such as insurance claim clerks, and some provided opportunities with a pay cut, such as library assistants or clerical workers. The authors emphasized that employers could no longer rely solely on new workers to fill their skills shortages. One of the main issues was the willingness to make reasonable investments in upskilling and reskilling that could bridge workers into new jobs. Similarly, they stressed that it was not possible to begin the transformation unless there was a focus on individuals’ mindsets and efforts. For instance, they reasoned that some employees would need time off work to gain additional qualifications, and some would require other supports and incentives to engage them in continuous learning. This transformation could involve a shift in the societal mindset such that individuals aspired to be more creative, curious, and comfortable with continuous change. Moreover, the authors noted that no single actor could solve the upskilling and reskilling puzzle alone; in fact, they suggested that a wide range of stakeholders (governments, employers, individuals, educational institutions, labour unions, etc.) needed to collaborate and pool resources to achieve this goal. Further, data-driven approaches were anticipated to bring speed and additional value to upskilling and reskilling. For example, it may be worth exploring the amount of time required to make the various job transitions, or more nuanced evaluations of the economic benefits of these transitions. How do Organizations Begin Upskilling and Reskilling?
When it comes to upskilling, BCG recommends that organizations:
assess their needs and measure outcomes
prepare people for change
unlock employees' willingness to learn
make adopting AI a C-Suite priority
use AI for AI upskilling

Moreover, IBM recommends creating a lasting strategy, communicating clearly, and investing in learning and development. Some AI tools that are critical to upskilling include computer vision, Gen AI, machine learning, natural language processing, and robotic process automation. Upskilling use cases include customer service, financial services, healthcare, HR, and web development. Organizations can use AI technologies to enhance the AI learning experience itself via online learning and development, on-the-job training, skill-gap analysis, and mentorship. AI can provide added value for organizations because it combines institutional knowledge with advanced capabilities, fills important gaps, improves employee retention, and embraces the democratization of web development.

Furthermore, McKinsey & Company recommends that organizations use a cross-collaborative, scaled approach to upskilling and reskilling workforces. More specifically, realizing the opportunity of Gen AI requires a new approach to employee attraction, engagement, and retention. Before rushing in and starting the process, it is important to clarify business outcomes and how Gen AI investments can enable or accelerate them. This involves defining the skills required to deliver these outcomes and identifying the groups within the organization that need to build those skills. In addition, it is necessary to use a human-centred approach: from the outset, organizations should acknowledge that many employees experience upskilling and reskilling as a threat to their well-established professional identities. To address this, organizations need to lead with an empathetic, human-centred approach, foster learning and development, and transform fears into curiosity, cultivating mindsets of opportunity and continuous learning. And of course, it is necessary to make personalized learning possible at scale. This involves tighter collaboration across the HR function, stronger business integration to embed learning experiences into working environments, and a refreshed approach to the learning and development technology ecosystem.

Benefits of Upskilling and Reskilling in an AI-Driven Environment

There are several benefits of upskilling and reskilling:
Organizations can remain competitive
Employees can increase engagement and job satisfaction
Workers with enhanced skills can improve their creativity, productivity, and efficiency
Organizations can help employees reduce the risk of job displacement
Employees can increase wages and enjoy better job opportunities
Organizations can increase their retention numbers

Indeed, according to an MIT study, evidence suggests that Gen AI, specifically ChatGPT, substantially raised average productivity. Moreover, exposure to ChatGPT increased job satisfaction and self-efficacy, as well as concern and excitement about automation technologies. We know that employee development programs, including upskilling and reskilling, are highly valued by workers.
More precisely, employees appreciate the following:
Skill assessment and analytics
Personalized learning paths
Adaptive learning platforms
AI-powered content curation
Virtual assistants and chatbots
Simulation and gamification
Predictive analytics for training ROI
Natural language processing for feedback and coaching
Augmented reality (AR) and virtual reality (VR) for learning, mentoring, and training
Continuous learning and adaptation

What We Can Take From All This

Given the above, it may be in organizations' interests to start the process of upskilling and reskilling, as recommended above. No one wants to find and hire new people: turnover costs organizations a great deal of money. And no one wants to stand by and watch an employer replace them with a robot or other form of Gen AI. The solution is to take the time to create a solid plan, beginning with outlining goals and aligning them with what the business needs. It is true: HR professionals who have an upskilling and reskilling plan look a lot more enlightened than those who view AI as a threat. As seen in the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, it appears that many employees want, and even expect, this type of training and development at work. Employers need to catch up to their employees, given that 75 percent of employees are already bringing AI into the workplace.

  • Insider Threats in the Age of AI | voyAIge strategy

Insider Threats in the Age of AI
Employers deal with the dual challenge of leveraging AI for operations and defending against AI-powered internal threats
By Christina Catenacci, human writer
Mar 28, 2025

Key Points
Insiders are trusted individuals who have been given access to, or have knowledge of, any company resources, data, or systems that are not generally available to the public
Insider risks are the potential for a person to use authorized access to the organization's assets, either maliciously or unintentionally, in a way that negatively affects the organization
As AI becomes more common in the workplace and performs tasks that are not completed by humans, organizations face a growing security risk from artificial insiders as well as human ones

AI is making waves in the workplace: employers have been trying to find novel ways of implementing AI to improve their operations, while simultaneously defending against cyberattacks, including new methods of AI-powered attack, from inside their organizations. This article explores the nature of internal threats in modern workplaces.

What are insider risks?

According to Microsoft, insider risks (before they become actual threats or attacks) are the potential for a person to use authorized access to the organization's assets, either maliciously or unintentionally, in a way that negatively affects the organization. "Assets" means information, processes, systems, and facilities. In this context, an "insider" is a trusted individual who has been given access to, or has knowledge of, any company resources, data, or systems that are not generally available to the public. For example, an insider could be someone with a company computer with network access. It could be a person with a badge or device that allows them to continuously access the company's physical property. Or it could be someone who has access to the corporate network, cloud storage, or data. It could even be a person who knows the company strategy and financial information.

Some risk indicators include:
Changes in user activity, such as a person behaving in a way that is out of character
Anomalous data exfiltration, such as sharing or downloading unusual amounts of sensitive data
A sequence of related risky activities, such as renaming confidential files to look less sensitive, downloading them, saving them to a portable device, and deleting the originals from cloud storage
Data exfiltration by a departing employee, such as a resigning employee downloading a copy of a previous project file to keep a record of accomplishments (unintentional) or knowingly downloading sensitive data for personal gain or to assist them in their next position at a new company (intentional)
Abnormal system access, where employees download files that they do not need for their jobs
Intimidation or harassment, which could involve an employee making a threatening, harassing, or discriminatory statement
Privilege escalation, such as employees trying to escalate their privileges without a clear business justification

What are insider threats and attacks?

Further down the continuum, insider threats have the potential to damage a system or asset. The threat could be intentional or unintentional. And even further down, an insider attack is an intentional malicious act that causes damage to a system or asset. Unlike threats, attacks are relatively easy to detect. Not all cyberattacks are data breaches.
More specifically, a data breach is any security incident where unauthorized parties access sensitive or confidential information, including personal data like health information and corporate data like customer records, intellectual property, or financial information. The ultimate goal of these insiders could be to steal sensitive data or intellectual property, to sabotage data or systems, to conduct espionage, or even to intimidate co-workers.

What is the cost of an insider attack?

Data breaches are serious: according to the IBM Cost of a Data Breach Report 2024, the global average cost of a data breach has increased by 10 percent from 2023 and has now reached USD 4.88 million. What is striking is that malicious insider attacks cost an average of about USD 4.99 million in 2024. In this regard, expensive attack vectors included business email compromise, phishing, social engineering, and stolen or compromised credentials. The most common were phishing and stolen or compromised credentials.

What happens when you add AI?

According to IBM, AI and automation are transforming the world of cybersecurity. They make it easier for bad actors to create and launch attacks at scale. For example, AI makes it easier to produce grammatically correct and plausible phishing messages. In fact, the ThreatLabz 2025 AI Security Report revealed that threat actors are currently leveraging AI to enhance phishing campaigns, automate attacks, and create realistic deepfake content. ThreatLabz researchers discovered how DeepSeek can be manipulated to quickly generate phishing pages that mimic trusted brands, and how attackers can create a fake AI platform to exploit interest in AI and trick victims into downloading malicious software.

In addition, ThreatLabz suggests that organizations face a number of AI risks:
Shadow AI and data leakage (using AI tools without formal approval or oversight by the IT department and causing data leaks)
AI-generated phishing campaigns (a phishing page can be created in about five prompts)
AI-driven social engineering, from deepfake videos to voice impersonation used to defraud businesses
Malware campaigns exploiting interest in AI, where attackers lure victims with a fake AI platform to deliver the Rhadamanthys infostealer
The dangers of open-source AI, enabling accidental data exposure and more serious harms such as data exfiltration
The rise of agentic AI: autonomous AI systems capable of executing tasks with minimal human oversight

Indeed, Security Intelligence claims that Gen AI is expanding the insider threat surface. We're talking about chatbots, image synthesizers, voice cloning software, and deepfake video technology for creating virtual avatars. Employees are misusing AI at work to the point that some companies are starting to ban the use of Gen AI tools in the workplace. For instance, Samsung apparently introduced such a ban following an incident where employees were suspected of sharing sensitive data in conversations with OpenAI's ChatGPT. This is concerning, especially since OpenAI records and archives all conversations, potentially for use in training future generations of the large language model.
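To make the shadow AI and data leakage risk more concrete, the sketch below illustrates the kind of lightweight guardrail an organization might place in front of external Gen AI tools: outbound prompts are scanned for obviously sensitive patterns before they leave the network. This is a minimal sketch only; the pattern names, regular expressions, and example text are hypothetical assumptions rather than any vendor's product, and a real data loss prevention control would rely on the organization's own data classification rules and far richer detection.

```python
import re
from dataclasses import dataclass
from typing import List

# Hypothetical patterns for illustration only; a real policy would be tuned
# to the organization's own data classification scheme.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
}

@dataclass
class ScanResult:
    allowed: bool        # False means the prompt should be blocked or redacted
    findings: List[str]  # Names of the patterns that matched

def scan_prompt(prompt: str) -> ScanResult:
    """Flag prompts that appear to contain sensitive data before they are
    sent to an external Gen AI service."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return ScanResult(allowed=not findings, findings=findings)

if __name__ == "__main__":
    result = scan_prompt(
        "Summarize this CONFIDENTIAL design doc and email it to a.b@example.com"
    )
    print(result.allowed)   # False
    print(result.findings)  # ['email_address', 'confidential_marking']
```

A check like this does not remove the need for policy and training; it simply gives the organization a visible point of oversight where shadow AI use would otherwise go unnoticed.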
A combination of human and AI security internal threats

Organizations face many internal security threats, both traditional and AI-related. As the discussion above shows, as AI becomes more common in the workplace and performs tasks that are not completed by humans, organizations face a growing security risk from artificial insiders as well as human ones. These AI insiders could become even better at avoiding detection by ingesting more information and becoming more adept at spotting patterns within it. In fact, threat actors use AI-generated malware, exploit network traffic analysis to find weak points, manipulate AI models by injecting false data, and craft advanced phishing messages that evade detection. At the same time, AI systems can be used to detect those risks: AI and machine learning can enhance the security of systems and data by analyzing vast amounts of data, recognizing patterns, and adapting to new threats (a minimal illustration of this kind of baseline-and-anomaly approach appears at the end of this article).

Insider risk reframed

The discussion above touched on several risks that insiders, whether human or AI, could create. These risks can be boiled down and examined by looking at people, processes, and technology. The following is another way of thinking about internal risk:
People: human insiders make errors, lie about what they are doing, behave in unusual or suspicious ways, steal confidential information or intellectual property, can be manipulated, do not comply with the company's policies and procedures, have low levels of AI literacy, easily fall for phishing or give up credentials, operate without a human-in-the-loop, or lack AI governance
Processes: the company may not have AI-in-the-workplace policies and procedures in place, or, if it does, they may not be regularly updated, monitored, or enforced
Technology: there may be biased data, a lack of data hygiene leading to poor-quality data, no change management so people and systems are not supported during the transition, AI-generated malware, AI models manipulated through injected false data, advanced phishing messages that evade detection, agentic AI that goes rogue, or model drift and consequently inaccurate predictions
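As a companion to the reframing above, here is the minimal sketch referenced earlier of how pattern recognition can surface an indicator such as anomalous data exfiltration: each user's activity is compared against their own baseline, and days that deviate sharply are flagged for review. The metric (megabytes downloaded per day), the three-sigma threshold, and the function name are illustrative assumptions; production insider-risk tooling combines many more signals and typically relies on dedicated platforms rather than a few lines of Python.

```python
from statistics import mean, stdev

def flag_exfiltration(daily_mb_downloaded, threshold_sigmas=3.0):
    """Flag days whose download volume is far above one user's own baseline.

    daily_mb_downloaded: list of megabytes downloaded per day for one user.
    Returns the indices of days that exceed the baseline by more than
    `threshold_sigmas` standard deviations.
    """
    if len(daily_mb_downloaded) < 2:
        return []  # not enough history to establish a baseline
    baseline = mean(daily_mb_downloaded)
    spread = stdev(daily_mb_downloaded) or 1.0  # avoid dividing by zero on flat history
    return [
        day for day, volume in enumerate(daily_mb_downloaded)
        if (volume - baseline) / spread > threshold_sigmas
    ]

if __name__ == "__main__":
    # Thirty ordinary days followed by one very large pull,
    # e.g. a departing employee copying project files.
    history = [40, 55, 38, 60, 52, 45, 50, 47, 58, 41] * 3 + [5000]
    print(flag_exfiltration(history))  # [30]
```

The design choice here is per-user baselining: a volume that is normal for a data engineer may be highly unusual for someone in HR, so each user is compared only against their own history before any alert is raised to a human reviewer.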

  • L&E Analysis: What is Neural Privacy, and Why is it Important? | voyAIge strategy

L&E Analysis: What is Neural Privacy, and Why is it Important?
More US States are Regulating it
By Christina Catenacci
Mar 14, 2025
Legal & Ethical Analysis: Issue 1

Key Points
Neural data is very sensitive and can reveal a great deal about a person
The law is starting to catch up to the tech and the ethicists' concerns
In North America, California and Colorado are leading the way when it comes to creating mental privacy protections in relation to neurotechnology

This is a hot topic, but what is it? Generally speaking, neural data is information that is generated by measuring activity in a person's central or peripheral nervous systems, including brain activity (seen in EEGs, fMRIs, or implanted devices); signals from the peripheral nervous system (such as nerves that extend from the brain and spine); and data that can be used to infer mental states, emotions, and cognitive processes. Interestingly, this data has been used to create artificial neural networks. For instance, machine vision can be used to identify a person's emotions by analyzing their facial expressions.

Some may be surprised to know that there are many types of neurotechnology (neurotech) in existence today, but what is it? Neurotechnology bridges the gap between neuroscience, the scientific study of the nervous system, and technology. The goal of neurotech is to understand how the brain can be enhanced by technological advancements and to create applications that improve both brain function and overall human health. In fact, some may characterize this growing area as "a thrilling glimpse into the potential of human ingenuity to transform lives". Others have noted that neurotechnology, combined with the explosion of AI, opens up a world of infinite possibilities. One simple way of explaining neurotech is to divide it into two categories: invasive (such as implants) and non-invasive (such as wearables). More specifically, invasive neurotech is mostly used in medicine to treat neurological disorders such as Parkinson's disease.

Neural privacy has to do with being confident that we have control over access to our own neural data and to information about our mental processes. This article delves into the law of neural privacy and the ethics of neurotech.

The Law Involving Neural Privacy

In the United States, there has been a flurry of activity in this regard. Why is neural privacy important? Essentially, neural data is very sensitive personal data, as it can reveal thoughts, emotions, and intentions. Certain parties have a lot to gain if they are privy to this information: think about employers, insurers, or law enforcement. This could affect how workers are able to work, how individuals apply for insurance coverage, or how citizens engage with their societies. Another aspect is data ownership: who owns one's thoughts? Some may believe that this question is for the distant future, but it is worth mentioning that Neuralink has already had its first human patient use a brain implant to play online chess. It is here already! This may be why the UN Special Rapporteur on the right to privacy has recently set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy.
More precisely, the UN Report deals with key definitions and establishes fundamental principles to guide regulation in this area, including the protection of human dignity, the safeguarding of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for the processing of this data. Emphasis is placed on the inclusion of ethical values and the protection of human rights in the design.

While Canada has not yet legislated on mental privacy, the United States has in the following jurisdictions:

California: the California Consumer Privacy Act (CCPA) was amended by SB 1223, which included "neural data" in the definition of sensitive personal information and defined "neural data" as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information". Governor Newsom has already approved this amendment. California also has two new bills: SB-44 (Neural Data Protection Act), which would deal with brain-computer interfaces and govern the disclosure of medical information by an employer, a provider of health care, a health care service plan, or a contractor, adding new protections for neural data; and SB-7 (Automated Decision Systems (ADS) in the Workplace), which would require an employer, or a vendor engaged by the employer, to provide written notice that an ADS is in use at the workplace for the purpose of making employment-related decisions to all workers who will be directly or indirectly affected by the ADS

Colorado: the Colorado Privacy Act was also amended, by HB 24-1058, which defines "neural data" as "information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device", and adds neural data to the definitions of biological data and sensitive data. This has already been signed into law

Connecticut: SB 1356 (An Act Concerning Data Privacy, Online Monitoring, Social Media, and Data Brokers) is a bill that would amend the Connecticut Data Privacy Act, define "neural data" as "any information that is generated by measuring the activity of an individual's central or peripheral nervous system", and include it in the definition of sensitive data

Illinois: HB 2984 is a bill that would amend the Biometric Information Privacy Act, define "neural data" as "information that is generated by the measurement of activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information", and add neural data to the definition of biometric identifier

Massachusetts: HD 4127 (Neural Data Privacy Protection Act) is a bill that would define "neural data" as "information that is generated by measuring the activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information" and include it in the definition of sensitive covered data. This is a significant step since Massachusetts has no comprehensive consumer privacy law at this point

Minnesota: SF 1240 is a bill that would not amend the consumer privacy legislation but would rather be a standalone piece of legislation that provides the right to mental data and sets out neurotech rights concerning brain-computer interfaces.
It would begin to apply in August 2025

Vermont: there are three bills involving neural data protection: H210 (Age-Appropriate Design Code Act), H208 (Data Privacy and Online Surveillance Act), and H366 (An Act Relating to Neurological Rights). In a nutshell, these bills would define "neural data" as "information that is collected through biosensors and that could be processed to infer or predict mental states", provide individuals with the right to mental or neural data privacy, protect minors specifically, and create a comprehensive consumer privacy bill that includes specific protections for neural data

Clearly, it is becoming more important to enact mental or neurological privacy protections when it comes to neurotech and automated decision-making systems. In North America, these states are leading the way and could influence the direction of legislation for both Canada and the United States as a whole. That is, they are adding provisions to their consumer privacy legislation or creating standalone statutes.

Ethics of Neurotechnology

Let us begin this discussion with the question: why is neural data unique? Simply put, neural data is not just a phone number or a person's age. It is very sensitive and can reveal much more about a person. This is why Cooley lawyers refer to it as a kind of digital "source code" for an individual, potentially uncovering thoughts, emotions, and even intentions: "From EEG readings to fMRI scans, neural data allows insights into neural activity that could, in the future, decode neural data into speech, detect truthfulness or create a digital clone of an individual's personality"

Several thinkers have asked what needs to be protected. For example, the Neurorights Foundation tackles the issue of human rights for the age of neurotech. It advocates for promoting innovation, protecting human rights, and ensuring the ethical development of neurotech. The foundation has created a number of research reports, including Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies, which analyzed the data practices and user rights of consumer neurotechnology products. The report raised several areas of concern: access to information, data collection and storage, data sharing, user rights, and data safety and security. The conclusion was that the consumer neurotechnology space is growing at a rate that has outpaced research and regulation. Further, most existing neurotechnology companies do not adequately inform consumers or protect their neural data from misuse and abuse. The report was created so that companies and investors can appreciate the specific further measures that are needed to responsibly expand neurotechnology into the consumer sphere.

Additionally, UNESCO has pointed out that there are several innovative neurotechnology techniques, such as brain stimulation and neuroimaging, which have changed the face of our understanding of the nervous system. Neurotechnology has helped us to address many challenges, especially in the context of neurological disorders; however, there are also ethical issues, particularly with the use of non-invasive interventions. For example, neurotechnology can directly access, manipulate, and emulate the structure of the brain; it can produce information about our identities, our emotions, and our fears.
If you combine this neurotech with AI, there can be a threat to notions of human identity, human dignity, freedom of thought, autonomy, (mental) privacy, and well-being. UNESCO states that the fast-developing field of neurotechnology is promising, but we need a solid governance framework: "Combined with artificial intelligence, these techniques can enable developers, public or private, to abuse of cognitive biases and trigger reactions and emotions without consent. Consequently, this is not a technological debate, but a societal one. We need to react and tackle this together, now!"

In fact, UNESCO has drafted a Working Document on the Ethics of Neurotechnology, which includes a discussion of the following ethical principles and human rights:

Beneficence, proportionality, and do no harm: Neurotechnology should promote health, awareness, and well-being, and empower individuals to make informed decisions about their brain and mental health while fostering a better understanding of themselves. That said, restrictions on human rights need to adhere to legal principles, including legality, legitimate aim, necessity, and proportionality

Self-determination and freedom of thought: Throughout the entire lifecycle of neurotechnology, the protection and promotion of freedom of thought, mental self-determination, and mental privacy must be secured. That is, neurotechnology should never be used to exert undue influence or manipulation, whether through force, coercion, or other means that compromise cognitive liberty

Mental privacy and the protection of neural data: With neural data, there is a risk of stigmatization and discrimination, and of revealing neurobiological correlates of diseases, disorders, or general mental states without the authorization of the person from whom the data is collected. Mental privacy is fundamental to the protection of human dignity, personal identity, and agency. The collection, processing, and sharing of neural data must be conducted with free and informed consent, in ways that respect the ethical and human rights principles outlined by UNESCO, including safeguarding against the misuse or unauthorized access of neural and cognitive biometric data, particularly in contexts where such data might be aggregated with other sources

Trustworthiness: Neurotechnology systems for human use should always ensure trustworthiness across their entire lifecycle to guarantee the respect, promotion, and protection of human rights and fundamental freedoms. This requires that these systems do not replicate or amplify biases; are transparent, traceable, and explainable; are grounded in solid scientific evidence; and define clear conditions for responsibility and accountability

Epistemic and global justice: Public awareness of brain and mental health, understanding of neurotechnology, and the importance of neural data should be promoted through open and accessible education, public engagement, training, capacity-building, and science communication

Best interests of the child and protection of future generations: It is important to balance the potential benefits of enhancing cognitive function through neurotechnology for early diagnosis, instruction, education, and continuous learning with a commitment to the holistic development of the child.
This includes nurturing their social life, fostering meaningful relationships, and promoting a healthy lifestyle encompassing nutrition and physical activity

Enjoying the benefits of scientific-technological progress and its applications: Access to neurotechnology that contributes to human health and wellbeing should be equitable. The benefits of these technologies should be fairly distributed across individuals and communities globally

The document also touches on areas outside health, such as employment. For instance, as neurotechnology evolves and converges with other technologies in the workplace, it presents unique opportunities and risks in labour settings. It is necessary to develop policy frameworks that protect employees' mental privacy and right to self-determination while also promoting their health and wellbeing, balancing the potential for human flourishing with the imperative to safeguard against practices that could infringe on mental privacy and dignity.

In Four ethical priorities for neurotechnologies and AI, the author discussed AI and brain-computer interfaces and explored four ethical concerns with respect to neurotech:

Privacy and consent: an extraordinary level of personal information can already be obtained from people's data trails, and this is how the concern is framed. The author stresses that individuals should have the ability and the right to keep their neural data private; the default choice needs to be "opt out"

Agency and identity: the author asserts that, as neurotechnologies develop and corporations, governments, and others start striving to endow people with new capabilities, individual identity (bodily and mental integrity) and agency (the ability to choose our actions) must be protected as basic human rights

Augmentation: there will be pressure to enhance ourselves, such as by adopting neurotechnologies that allow people to radically expand their endurance or their sensory or mental capacities. This will likely change societal norms, raise issues of equitable access, and generate new forms of discrimination. The author notes that outright bans of certain technologies could simply push them underground; thus, decisions must be made within a culture-specific context, while respecting universal rights and global guidelines

Bias: a major concern is that biases could become embedded in neural devices. It is therefore necessary to develop countermeasures to combat bias and to include probable user groups (especially those who are already marginalized) in the design of algorithms and devices, as another way to ensure that biases are addressed from the first stages of technology development

The paper also touched on the need for responsible neuroengineering: there was a call for industry and academic researchers to take on the responsibilities that come with devising these devices and systems. The authors suggested that researchers draw on existing frameworks that have been developed for responsible innovation.

In Philosophical foundation of the right to mental integrity in the age of neurotechnologies, the author equated neurorights such as mental privacy, freedom of thought, and mental integrity with basic human rights. The author created a philosophical foundation for a specific right, the right to mental integrity, which included both the classical concepts of privacy and non-interference in our mind/brain.
In addition, the author considered a philosophical foundation built on certain features of the mind that cannot be reached directly from the outside: intentionality, the first-person perspective, personal autonomy in moral choices and in the construction of one's narrative, and relational identity. The author asserted that a variety of neurotechnologies or other tools, including AI, alone or in combination, could, by their very availability, threaten our mental integrity. To that end, the author proposed philosophical foundations for a right to mental integrity that encompassed both privacy and protection from direct interference in mind/brain states and processes. Such foundations focused on aspects that were well known within the philosophy of mind, but not commonly considered in the literature on neurotechnology and neurorights. Intentionality, the first-person perspective, moral choice, and the construction of one's identity were concepts and processes that needed as precise a theoretical definition as possible. The author stated:

"In our perspective, such a right should not be understood as a guarantee against malicious uses of technologies, but as a general warning against the availability of means that potentially endanger a fundamental dimension of the human being. Therefore, the recognition of the existence of the right to mental integrity takes the form of a necessary first step, even prior to its potential specific applications"

In Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought, the author stated that progress in neurotechnology and AI provided unprecedented insights into the human brain. Likewise, there were increasing opportunities to influence and measure brain activity. These developments raised several legal and ethical questions. The author argued that the right to freedom of thought could be coherently interpreted as providing comprehensive protection of mental processes and brain data, which could offer a normative basis for the use of neurotechnologies. Moreover, an evolving interpretation of the right to freedom of thought was more convincing than introducing a new human right to mental self-determination.

What Can We Take from These Developments?

Undoubtedly, ethicists have spent a considerable amount of time thinking about mental privacy in the age of neurotech, and about exactly what is at risk if there are no privacy and AI protections in place. Fortunately, the law is starting to catch up to the tech and the ethicists' concerns. For example, California and Colorado have already enacted provisions to add to their consumer privacy statutes, and more bills have been introduced to address the issues.
