- The Strategic Values of Local AI | voyAIge strategy
The Strategic Values of Local AI
What it Means to Use AI In-house versus In-the-cloud
By Tommy Cooke, powered by unusually high amounts of pollen in the air for this time of year
Jul 18, 2025

Key Points:
1. Local AI keeps sensitive data in-house, helps businesses meet regulatory requirements, and reduces risk
2. Compared to escalating cloud costs, local AI offers predictable long-term savings for organizations with consistent workloads
3. A hybrid approach (using cloud AI for scale and local AI for control) is emerging as the most strategic model for enterprise AI deployment

When you open the ChatGPT app on your phone, its AI runs in the cloud. What do I mean by that, you ask? The AI does not run on your phone—it runs on a server somewhere else in the world. Your phone sends data, the data is ingested into a room filled with processors, and the output is returned to your phone. But there is another way for AI to function. And that way is called "local AI": AI that is deployed, operated, and lives entirely within an organization's walls. While the idea of local AI seemed like a far-fetched dream merely a couple of years ago (and for good reason, as it was correctly perceived to be quite expensive at the time), it is now a superior alternative for many risk-averse organizations; privacy, control, cost predictability, and operational resilience are the qualities at the heart of local AI. Let's unpack these qualities in further detail, as I imagine many of you reading this right now will be highly interested in exploring local AI for your own organizations.

Sovereignty over Sensitive Data
The premier benefit of local AI is guaranteeing that sensitive data never leaves the organization. In tightly regulated industries, such as healthcare and finance, data sovereignty is critical. Using cloud-based AI, even with robust security protocols, creates uncertainties: Who audits the vendor's access logs? Where is the data stored, geographically? How is it being used to train backend models? These are unanswered questions that haunt compliance officers and auditing teams. On the other hand, using local AI means that every bit of data that is processed stays within your full visibility and stewardship—this gives businesses a critical advantage, particularly during a time of proliferating regulations.

Simplified Compliance in a Complex Legal Landscape
Regulations such as the EU's GDPR and Canada's PIPEDA impose strict obligations on data transfers and processing. Local AI models, which operate entirely within the bounds of these regulations, can sidestep many of the issues that cloud AI systems are still struggling to navigate. That is, by minimizing the need to transfer data across and through jurisdictions, local AI reduces exposure to many legal complications. Moreover, because all operations occur in-house, audit readiness becomes more straightforward: logs, model versions, and access records remain under corporate control.

Predictable Operating Costs
Cloud-based AI is often marketed as pay-for-use or as something that you can sign up for and begin using immediately. This makes mainstream AIs like ChatGPT attractive: they are elastic, cost-efficient, and easy to access. However, as workloads grow, so too do fees. Application Programming Interface (API) calls, data storage, and compute time are but some of the many costs that begin to add up. Cloud services also often carry usage-based or subscription-based pricing that tends to escalate over time. To be fair, the initial capital expenditure for local AI may be higher, but once it is set up, those costs amortize. For bounded workloads like batch processing, document classification, and real-time inference, the cumulative total cost of ownership is considerably lower than continual cloud usage.
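To make the amortization argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an assumption chosen purely for illustration (your actual cloud bill, hardware cost, and operating overhead will differ), not real vendor pricing:

```python
# A minimal, illustrative total-cost-of-ownership comparison.
# All figures are assumptions for the sake of the example, not vendor pricing.

CLOUD_COST_PER_MONTH = 4_000   # assumed average monthly API + storage spend
LOCAL_CAPEX = 60_000           # assumed up-front hardware + setup cost
LOCAL_OPEX_PER_MONTH = 800     # assumed power, maintenance, and support

def cumulative_costs(months: int) -> tuple[float, float]:
    """Return (cloud_total, local_total) after a given number of months."""
    cloud_total = CLOUD_COST_PER_MONTH * months
    local_total = LOCAL_CAPEX + LOCAL_OPEX_PER_MONTH * months
    return cloud_total, local_total

for months in (12, 24, 36):
    cloud, local = cumulative_costs(months)
    print(f"{months:>2} months: cloud ${cloud:,.0f} vs local ${local:,.0f}")

# With these assumed numbers, local AI breaks even around month 19
# (60_000 / (4_000 - 800) = 18.75) and is cheaper every month thereafter.
```

Under these assumptions the cloud is cheaper for the first year and a half, after which the local deployment pulls ahead; the break-even point shifts with workload consistency, which is why the savings case is strongest for steady, bounded workloads.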
Latency, Resilience, and Offline Capability
Local processing also provides tremendous improvements in speed. Without the back-and-forth delays caused by network requests, turnaround times when interacting with AI shrink considerably. This is particularly attractive for real-time applications like manufacturing quality assurance or point-of-care diagnostics. Moreover, local AI continues to operate amidst network disruptions. For instance, remote sites, field offices, or secure facilities with limited connectivity can maintain uninterrupted service by using local AI. In an age where downtime translates directly into lost revenue and reputational risk, it is worth considering alternatives to cloud-driven AI.

Customization
Although generalist cloud models have dazzling breadth, they often stumble in the face of domain-specific syntax. This is where local AI offers the opportunity to fine-tune with proprietary data: legal briefs, clinical records, manufacturing logs. Additionally, this makes local AIs considerably more reliable than their cloud counterparts in terms of avoiding hallucinations. Practically speaking, that means cleaner summaries, safer predictions, and fewer erroneous suggestions.

Enhanced Data Governance
Running models locally brings a considerable benefit by way of transparency. When you control the entire stack, from data ingestion to output, you gain visibility into model behaviour. This facilitates a higher level of explainability compared with cloud-driven AI. Local AI means that you are no longer reliant on opaque APIs; this can be a deal breaker for many prospective clients and customers.

A Hybrid Future: Balancing Reach and Responsibility between Local and Cloud AI
It is important to stress that local AI does not need to be seen as supplanting cloud-based systems. Rather, the two can be complementary. The optimal model for many organizations is modular, using both:
- Cloud-based AI for delivering massive-scale capabilities (think complex reasoning, multi-domain synthesis, and vast world knowledge)
- Local AI for handling sensitive tasks, private data, or immediate-response scenarios
This balanced, hybrid approach is the future of enterprise AI. It is a "precision-first" approach, one that could do wonders for aligning AI deployment with the context, risk tolerance, and regulatory demands of your industry.

Local AI is not a niche pursuit. It is a strategic investment for businesses that are seeking to reconcile innovation, privacy, and compliance. Through local deployment, companies gain control over data, reduce long-term costs, improve performance, tighten governance, and gain systems that converse in their own language and jargon. For businesses that are serious about data privacy, customer trust, and operational continuity, local AI is not just an alternative—it is a better, smarter, more principled choice. And this approach has the flexibility to be used in conjunction with cloud-driven solutions.
- Impact Assessments | voyAIge strategy
Evaluate AI risks and opportunities with expert-driven impact assessments.

Impact Assessments
We specialize in data, algorithmic, ethics, and socioeconomic impact assessments to understand technological and operational impacts on organizations, their clients, customers, and stakeholders. Our assessments deliver the deep insights businesses need to understand their impact on those who matter most.

Proactive Impact Assessments for Successful AI Use
As AI, data, and algorithmic technologies become increasingly central to business operations, understanding their potential impacts is more critical than ever. Whether it's assessing the ethical implications, ensuring compliance with regulatory standards, or evaluating the broader social and economic effects, impact assessments provide the clarity and foresight you need to navigate this complex landscape responsibly. At voyAIge strategy, we specialize in conducting thorough impact assessments that help organizations anticipate and mitigate risks, align with best practices, and make informed decisions. Our assessments go beyond surface-level analysis, offering deep insights into how your AI systems, data practices, and algorithms might influence your stakeholders, your business, and society at large.

What is an Impact Assessment?
An impact assessment is a systematic process of identifying, evaluating, and addressing the potential effects of AI systems, data usage, and algorithmic processes on individuals, organizations, and society. These assessments are crucial for ensuring that your technology strategies not only achieve their goals but also align with ethical standards, legal requirements, and social expectations. Examples of impact assessments include:
- DPIA (Data Privacy Impact Assessment): evaluates how your data collection, storage, and processing practices affect individual privacy, ensuring compliance along the way.
- AIA (Algorithmic Impact Assessment): analyzes the potential biases, fairness, and transparency of your algorithms, providing recommendations for mitigating negative outcomes and ensuring equitable results.
- EAIA (Ethical AI Impact Assessment): assesses the broader ethical implications of deploying AI systems, including their effects on decision-making processes, social justice, and public trust.
- SEIA (Social and Economic Impact Assessment): examines the potential social and economic consequences of your AI and data initiatives, helping you anticipate and address both positive and negative impacts.

Why Impact Assessments Matter
Impact assessments are not just a regulatory requirement—they are a critical tool for ensuring that your AI and data initiatives are responsible, sustainable, and aligned with your organization's values. Here's why impact assessments matter:
- Risk Mitigation: Identify and address potential risks before they become issues, protecting your organization from legal, ethical, and reputational harm.
- Regulatory Compliance: Ensure that your practices comply with local, national, and international laws and regulations, avoiding fines and penalties.
- Ethical Alignment: Align your technology strategies with ethical standards, ensuring that your AI and data use promote fairness, transparency, and accountability.
- Stakeholder Trust: Build and maintain trust with customers, employees, regulators, and the public by demonstrating your commitment to responsible AI and data use.

Why Choose voyAIge strategy?
At voyAIge strategy, we bring a unique blend of expertise, rigor, and strategic insight to every impact assessment.
Here's why organizations trust us to guide their AI and data initiatives:
1. Deep Expertise: Our team has extensive experience in AI, data ethics, and regulatory compliance, ensuring that our assessments are both thorough and informed by the latest developments in the field.
2. Tailored Approach: We customize each impact assessment to your organization's specific needs, goals, and regulatory environment, ensuring that our insights are relevant and actionable.
3. Ethical Commitment: We are deeply committed to promoting ethical AI and data use, and our assessments reflect this commitment, helping you align your practices with the highest ethical standards.
4. Strategic Focus: Our assessments are designed to provide not just risk mitigation, but also strategic insights that help you leverage AI and data technologies for sustainable growth.
- Upskilling and Reskilling in the Age of AI | voyAIge strategy
Upskilling and Reskilling in the Age of AI
What Organizations Need to Know
Christina Catenacci, Human Writer
Jan 20, 2025

Key Points:
- Upskilling is the process of improving employee skill sets through AI training and development programs
- Reskilling is learning an entire set of new skills to do a new job
- It is not possible to have a one-time upskilling and reskilling session—rather, upskilling and reskilling is a continuous learning process

IBM's Institute for Business Value states that more than 60 percent of executives predict that Gen AI will disrupt how their organization designs experiences; even more striking, 75 percent say that competitive advantage depends on Gen AI. In a study by Boston Consulting Group in which 13,000 people were surveyed, 89 percent of respondents said that their workforce needed improved AI skills—but only six percent said that they had begun upskilling in "a meaningful way". Clearly, organizations that are not beginning the process of upskilling and reskilling can be at a disadvantage in this competitive game and risk being left behind. This may be why the AI Age is commonly referred to as an era of upskilling.

What is upskilling and reskilling?
IBM notes that upskilling and reskilling are two different things. In particular, upskilling is the process of improving employee skill sets through AI training and development programs. The goal is to minimize skill gaps and prepare employees for changes in their job roles or functions. For example, it could include asking customer care representatives to learn how to use Gen AI and chatbots to answer customer questions in real time with prompt engineering. On the other hand, reskilling is learning an entire set of new skills to do a new job. For example, someone who works in data processing might need to embrace reskilling to learn web development or advanced data analytics.

Organizations Need to Prioritize Upskilling and Reskilling
According to a report by KPMG, organizations are increasingly prioritizing upskilling and reskilling their workers to harness the power of AI and realize true business value. The authors point out that the impact of AI transformation is often underestimated—AI is expected to surpass human intelligence, and organizations cannot be complacent. Yet only 41 percent of organizations are increasing their AI investments. This is concerning since Gen AI is not like past disruptive technology; there can be no one-time upskilling and reskilling session, but rather a continuous learning process. Leaders in organizations need to get past employee resistance and help to drive AI adoption. How can this be accomplished? The authors note that leaders need to be equipped with the right mindset, knowledge, and skills to guide their AI transformation. By actively using AI in their own work and sharing their experiences with their teams, leaders can create a safe environment for exploration and experimentation, and this in turn helps to create a culture of innovation and continuous learning. Most importantly, the authors state that leaders need to communicate the benefits of AI clearly and transparently: they need to share how the technology can augment and enhance human capabilities rather than replace them.
An In-depth Study on Reskilling and Upskilling
In an instructive report by the World Economic Forum (in collaboration with Boston Consulting Group), the authors introduced an approach to mapping out job transition pathways and reskilling opportunities using the power of digital data, to help guide workers, companies, and governments in prioritizing their actions, time, and investments so that reskilling efforts are focused efficiently and effectively. To prepare the workforce for the Fourth Industrial Revolution, the authors stated that it was necessary to identify and systematically map out realistic job transition opportunities for workers facing declining job prospects. When mapping job transition opportunities, the authors asked whether the job transition was viable and desirable. They broke down jobs into a series of relevant, measurable component parts in order to systematically compare them and identify any gaps in knowledge, skills, and experience. Then, they calculated the "job-fit" of any one individual on the basis of objective criteria. Viable future employees were those who were equipped to perform the tasks of the new job (individuals who possessed the necessary knowledge, skills, and experience). As for desirability, some jobs were simply undesirable because the number of people projected to be employed in that job category was set to decline. Using data from the United States Bureau of Labor Statistics, the authors aimed to find job transition pathways for all. Let us take an example: the authors discovered several pathways for secretaries and administrative assistants. Some provided opportunities with a pay rise, such as insurance claims clerk, and some provided opportunities with a pay cut, such as library assistant or clerical worker. The authors emphasized that employers could no longer rely solely on new workers to fill their skills shortages. One of the main issues was the willingness to make reasonable investments in upskilling and reskilling that could bridge workers onto new jobs. Similarly, they stressed that it was not possible to begin the transformation unless there was a focus on individuals' mindsets and efforts. For instance, they reasoned that some employees would need time off work to gain additional qualifications, and some would require other supports and incentives to engage them in continuous learning. This transformation could involve a shift in the societal mindset such that individuals aspired to be more creative, curious, and comfortable with continuous change. Moreover, the authors noted that no single actor could solve the upskilling and reskilling puzzle alone; in fact, they suggested that a wide range of stakeholders (governments, employers, individuals, educational institutions, labour unions, etc.) needed to collaborate and pool resources to achieve this goal. Further, data-driven approaches were anticipated to bring speed and additional value to upskilling and reskilling. For example, it may be worth exploring the amount of time required to make the various job transitions, or nuanced evaluations of the economic benefits of these job transitions.
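The report does not publish its exact scoring formula, but the core idea of breaking jobs into measurable components and scoring overlap is easy to illustrate. Here is a minimal, hypothetical sketch in Python; the skill sets and the simple overlap score are illustrative assumptions, not the WEF/BCG methodology:

```python
# A hypothetical illustration of the "job-fit" idea: score a worker against
# a target job by the share of required components they already possess.
# The skill lists and scoring rule are assumptions made for this example.

def job_fit(worker_skills: set[str], required_skills: set[str]) -> float:
    """Share of the target job's required components the worker already has."""
    if not required_skills:
        return 1.0
    return len(worker_skills & required_skills) / len(required_skills)

secretary = {"scheduling", "correspondence", "record keeping", "customer service"}
insurance_claims_clerk = {"record keeping", "customer service",
                          "claims processing", "policy knowledge"}

fit = job_fit(secretary, insurance_claims_clerk)
print(f"Job fit: {fit:.0%}")  # 50% with these assumed skill sets

# The complement of the overlap is the reskilling gap to close.
missing = insurance_claims_clerk - secretary
print(f"Reskilling gap: {sorted(missing)}")
```

In this toy example, the secretary already holds half of the claims clerk's components, and the remaining two skills define the reskilling investment; a real mapping would weight components and also factor in the desirability test described above.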
How do Organizations Begin Upskilling and Reskilling?
When it comes to upskilling, BCG recommends that organizations:
- assess their needs and measure outcomes
- prepare people for change
- unlock employees' willingness to learn
- make adopting AI a C-Suite priority
- use AI for AI upskilling
Moreover, IBM recommends creating a lasting strategy, communicating clearly, and investing in learning and development. Some AI tools that are critical to upskilling include computer vision, Gen AI, machine learning, natural language processing, and robotic process automation. Upskilling use cases include customer service, financial services, healthcare, HR, and web development. Organizations can use AI technologies to enhance the AI learning experience itself via online learning and development, on-the-job training, skill-gap analysis, and mentorship. AI can provide added value for organizations because it combines institutional knowledge with advanced capabilities, fills important gaps, improves employee retention, and embraces the democratization of web development.

Furthermore, McKinsey & Company recommends that organizations use a cross-collaborative, scaled approach to upskilling and reskilling workforces. More specifically, to realize the opportunity of Gen AI, a new approach is required to address employee attraction, engagement, and retention. That is, before rushing in and starting the process, it is important to clarify business outcomes and how Gen AI investments can enable or accelerate them. This involves defining the skills that are required to deliver these outcomes and identifying the groups within the organization that need to build those skills. In addition, it is necessary to use a human-centred approach—from the outset, organizations are recommended to acknowledge that many employees experience upskilling and reskilling as a threat to their well-established professional identities. To address this issue, organizations need to lead with an empathetic, human-centred approach—fostering learning and development, transforming fears into curiosity, and cultivating mindsets of opportunity and continuous learning. And of course, it is necessary to make personalized learning possible at scale. This involves tighter collaboration across the HR function, stronger business integration to embed learning experiences into working environments, and a refreshed approach to the learning and development technology ecosystem.

Benefits of Upskilling and Reskilling in an AI-Driven Environment
There are several benefits of upskilling and reskilling:
- Organizations can remain competitive
- Employees can increase engagement and job satisfaction
- Workers with enhanced skills can improve their creativity, productivity, and efficiency
- Organizations can help employees reduce the risk of job displacement
- Employees can increase wages and enjoy better job opportunities
- Organizations can increase their retention numbers
Indeed, according to an MIT study, evidence suggests that Gen AI, specifically ChatGPT, substantially raised average productivity. Moreover, exposure to ChatGPT increased job satisfaction and self-efficacy, as well as concern and excitement about automation technologies. We know that employee development programs, including upskilling and reskilling, are highly valued by workers.
More precisely, employees appreciate the following:
- Skill assessment and analytics
- Personalized learning paths
- Adaptive learning platforms
- AI-powered content curation
- Virtual assistants and chatbots
- Simulation and gamification
- Predictive analytics for training ROI
- Natural language processing for feedback and coaching
- Augmented reality (AR) and virtual reality (VR) for learning, mentoring, and training
- Continuous learning and adaptation

What We Can Take From All This
Given the above, it may be in organizations' interests to start the process of upskilling and reskilling, as recommended above. No one wants to find and hire new people: turnover costs organizations a great deal of money. And no one wants to stand by and watch an employer replace them with a robot or other form of Gen AI. The solution is to take the time to create a solid plan, beginning with outlining goals and aligning them with what the business needs. It is true: HR professionals who have an upskilling and reskilling plan look a lot more enlightened than those who view AI as a threat. As seen in the 2024 Work Trend Index Annual Report by Microsoft and LinkedIn, it appears that many employees want, and even expect, this type of training and development at work. Employers need to catch up to their employees, given that 75 percent of employees are already bringing AI into the workplace.
- Insider Threats in the Age of AI | voyAIge strategy
Insider Threats in the Age of AI
Employers deal with the dual challenge of leveraging AI for operations and defending against AI-powered internal threats
By Christina Catenacci, human writer
Mar 28, 2025

Key Points
- Insiders are trusted individuals who have been given access to, or have knowledge of, any company resources, data, or systems that are not generally available to the public
- Insider risks are the potential for a person to use authorized access to the organization's assets—either maliciously or unintentionally—in a way that negatively affects the organization
- As AI becomes more common in the workplace and performs tasks that were once completed by humans, organizations face a growing security risk from artificial insiders as well as human ones

AI is making waves in the workplace—employers have been trying to find novel ways of implementing AI to improve their operations, while simultaneously defending against cyberattacks, including new methods of AI-powered attacks, from inside their organizations. This article explores the nature of internal threats in modern workplaces.

What are insider risks?
According to Microsoft, insider risks (before they become actual threats or attacks) are the potential for a person to use authorized access to the organization's assets—either maliciously or unintentionally—in a way that negatively affects the organization. "Assets" means information, processes, systems, and facilities. In this context, an "insider" is a trusted individual who has been given access to, or has knowledge of, any company resources, data, or systems that are not generally available to the public. For example, an insider could be someone with a company computer with network access. It could be a person with a badge or device that allows them to continuously access the company's physical property. Or it could be someone who has access to the corporate network, cloud storage, or data. It could even be a person who knows the company strategy and financial information. Some risk indicators include:
- Changes in user activity, such as a person behaving in a way that is out of character
- Anomalous data exfiltration, such as sharing or downloading unusual amounts of sensitive data
- A sequence of related risky activities, which could involve renaming confidential files to look less sensitive, downloading the files, saving them to a portable device, and deleting the files from cloud storage
- Data exfiltration by a departing employee, such as a resigning employee downloading a copy of a previous project file to keep a record of accomplishments (unintentional) or knowingly downloading sensitive data for personal gain or to assist them in the next position at a new company (intentional)
- Abnormal system access, where employees download files that they do not need for their jobs
- Intimidation or harassment, which could involve an employee making a threatening, harassing, or discriminatory statement
- Privilege escalation, such as employees trying to escalate their privileges without a clear business justification

What are insider threats and attacks?
Further down the continuum, insider threats have the potential to damage the system or asset. The threat could be intentional or unintentional. And even further down, an insider attack is an intentional malicious act that causes damage to a system or asset. Unlike threats, attacks are relatively easy to detect. Not all cyberattacks are data breaches.
More specifically, a data breach is any security incident where unauthorized parties access sensitive or confidential information, including personal data like health information and corporate data like customer records, intellectual property, or financial information. The ultimate goal of these insiders could be to steal sensitive data or intellectual property, to sabotage data or systems, to conduct espionage, or even to intimidate co-workers.

What is the cost of an insider attack?
Data breaches are serious: according to the IBM Cost of a Data Breach Report 2024, the global average cost of a data breach has increased by 10 percent from 2023 and has now reached USD 4.88 million. But what is striking is that malicious insider attacks cost an average of about USD 4.99 million in 2024. In this regard, expensive attack vectors included business email compromise, phishing, social engineering, and stolen or compromised credentials. The most common were phishing and stolen or compromised credentials.

What happens when you add AI?
According to IBM, AI and automation are transforming the world of cybersecurity. They make it easier for bad actors to create and launch attacks at scale. For example, AI makes it easier to produce grammatically correct and plausible phishing messages. In fact, the ThreatLabz 2025 AI Security Report revealed that threat actors are currently leveraging AI to enhance phishing campaigns, automate attacks, and create realistic deepfake content. ThreatLabz researchers discovered how DeepSeek can be manipulated to quickly generate phishing pages that mimic trusted brands, and how attackers can create a fake AI platform to exploit interest in AI and trick victims into downloading malicious software. In addition, ThreatLabz suggests that organizations face a number of AI risks:
- Shadow AI and data leakage (using AI tools without formal approval or oversight of the IT department and causing data leaks)
- AI-generated phishing campaigns (in about five prompts, a phishing page can be created)
- AI-driven social engineering, from deepfake videos to voice impersonation used to defraud businesses
- Malware campaigns exploiting interest in AI, where attackers lure victims with a fake AI platform to deliver the Rhadamanthys infostealer
- The dangers of open-source AI enabling accidental data exposure and more serious outcomes like data exfiltration
- The rise of agentic AI, where autonomous AI systems are capable of executing tasks with minimal human oversight

Indeed, Security Intelligence claims that Gen AI is expanding the insider threat surface. We're talking about chatbots, image synthesizers, voice cloning software, and deepfake video technology for creating virtual avatars. Employees are misusing AI at work to the point that some companies are starting to ban the use of Gen AI tools in the workplace. For instance, Samsung apparently made such a ban following an incident where employees were suspected of sharing sensitive data in conversations with OpenAI's ChatGPT. This is concerning, especially since OpenAI records and archives all conversations, potentially for use in training future generations of the large language model.
A combination of human and AI security internal threats
Organizations face many internal security threats, which can be of a traditional or AI nature: we can see from the above discussion that as AI becomes more common in the workplace and performs tasks that were once completed by humans, organizations face a growing security risk from artificial insiders as well as human ones. These AI insiders would be even better at learning how to avoid detection by ingesting more information and becoming more adept at spotting patterns within that information. In fact, threat actors use AI-generated malware, exploit network traffic analysis to find weak points, manipulate AI models by injecting false data, and craft advanced phishing messages that evade detection. And AI systems can be used to detect those risks—AI and machine learning can enhance the security of systems and data by analyzing vast amounts of data, recognizing patterns, and adapting to new threats (a minimal sketch of this kind of pattern-based detection appears at the end of this article).

Insider risk reframed
The above discussion touched on several risks that could arise from insiders, whether they are human or AI. These risks can be boiled down and examined by looking at people, processes, and technology. The following could be another way of thinking about internal risk:
- People: human insiders make errors, lie about what they are doing, behave in unusual or suspicious ways, engage in theft of confidential information or intellectual property, can be manipulated, do not comply with the company's policies and procedures, have low levels of AI literacy, easily fall for phishing or give up credentials, operate with no human-in-the-loop, or lack AI governance
- Processes: the company may not have AI-in-the-workplace policies and procedures in place, or if it does have them, they may not be regularly updated, monitored, or enforced
- Technology: there may be biased data, a lack of data hygiene leading to bad-quality data, no change management so people and systems are not supported during the transition, AI-generated malware, AI models manipulated by injecting false data, advanced phishing messages that evade detection, agentic AI that goes rogue, or model drift and consequent inaccurate predictions
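As promised above, here is a minimal sketch of what pattern-based detection of one indicator, anomalous data exfiltration, could look like. The z-score rule, threshold, and figures are illustrative assumptions made for this example, not a production detector or any particular vendor's method:

```python
# A minimal sketch of one indicator discussed above: flagging anomalous
# data exfiltration by comparing today's download volume to a user's own
# history via a z-score. Data and threshold are illustrative assumptions.

from statistics import mean, stdev

def exfiltration_alert(history_mb: list[float], today_mb: float,
                       threshold: float = 3.0) -> bool:
    """Flag if today's downloads sit far outside the user's baseline."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb > mu
    z = (today_mb - mu) / sigma
    return z > threshold

# Thirty days of ordinary activity, then a sudden bulk download.
baseline = [120, 95, 140, 110, 130, 105, 90, 125, 115, 100] * 3
print(exfiltration_alert(baseline, today_mb=2_500))  # True -> investigate
print(exfiltration_alert(baseline, today_mb=135))    # False -> normal
```

A real insider-risk program would layer many such signals (file renames, off-hours access, departing-employee status) rather than rely on a single statistic, but the principle is the same: model each user's baseline and surface the deviations.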
- L&E Analysis: What is Neural Privacy, and Why is it Important? | voyAIge strategy
L&E Analysis: What is Neural Privacy, and Why is it Important?
More US States are Regulating it
By Christina Catenacci
Mar 14, 2025
Legal & Ethical Analysis: Issue 1

Key Points
- Neural data is very sensitive and can reveal a great deal about a person
- The law is starting to catch up to the tech and the ethicists' concerns
- In North America, California and Colorado are leading the way when it comes to creating mental privacy protections in relation to neurotechnology

This is a hot topic, but what is it? Generally speaking, neural data is information that is generated by measuring activity in a person's central or peripheral nervous systems, including brain activity (seen in EEGs, fMRIs, or implanted devices); signals from the peripheral nervous system (such as nerves that extend from the brain and spine); and data that can be used to infer mental states, emotions, and cognitive processes. Interestingly, this kind of data has been used to inform the design of artificial neural networks. For instance, machine vision can be used to identify a person's emotions by analyzing their facial expressions.

Some may be surprised to know that there are many types of neurotechnology (neurotech) in existence today. But what is neurotech? Neurotechnology bridges the gap between neuroscience, the scientific study of the nervous system, and technology. The goal of neurotech is to understand how the brain can be enhanced by technological advancements to create applications that improve both brain function and overall human health. In fact, some may characterize this growing area as "a thrilling glimpse into the potential of human ingenuity to transform lives". Others have noted that neurotechnology, combined with the explosion of AI, opens up a world of infinite possibilities. One simple way of explaining neurotech is to divide it into two categories: invasive (such as implants), and non-invasive (such as wearables). More specifically, invasive neurotech is mostly used in the medical area to deal with conditions such as neurological disorders like Parkinson's disease. Neural privacy has to do with being confident that we have control over access to our own neural data and to information about our mental processes. This article delves into the law of neural privacy and the ethics of neurotech.

The Law Involving Neural Privacy
In the United States, there has been a flurry of activity in this regard. Why is neural privacy important? Essentially, this type of data is very sensitive personal data, as it can reveal thoughts, emotions, and intentions. Certain parties have a lot to gain if they are privy to this information—think about employers, insurers, or law enforcement—and this could affect how workers are able to work, how individuals apply for insurance coverage, or how citizens engage with their societies. Another aspect is data ownership: who owns one's thoughts? Some may believe that this question is for the distant future, but it might be worth mentioning that Neuralink has already had its first human patient use a brain implant to play online chess. It is here already! This may be why the UN Special Rapporteur on the right to privacy has recently set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy.
More precisely, the UN Report deals with key definitions and establishes fundamental principles to guide regulation in this area, including the protection of human dignity, the safeguarding of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for the processing of this data. Emphasis is placed on the inclusion of ethical values and the protection of human rights in the design. While Canada has not yet legislated on mental privacy, we note that the United States has in the following jurisdictions:
- California: the California Consumer Privacy Act (CCPA) was amended with SB 1223, which included "neural data" in the definition of sensitive personal information and defined "neural data" as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information". Governor Newsom has already approved this amendment. California also has two new bills: SB-44 (Neural Data Protection Act), which would deal with brain-computer interfaces and govern the disclosure of medical information by an employer, a provider of health care, a health care service plan, or a contractor—to include new protections for neural data; and SB-7 (Automated Decision Systems (ADS) in the Workplace), which would require an employer, or a vendor engaged by the employer, to provide written notice that an ADS is in use at the workplace for the purpose of making employment-related decisions to all workers who will be directly or indirectly affected by the ADS
- Colorado: the Colorado Privacy Act was also amended with HB 24-1058, which defines "neural data" as "information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device", and adds neural data to the definitions of biological data and sensitive data. This has already been signed into law
- Connecticut: SB 1356 (An Act Concerning Data Privacy, Online Monitoring, Social Media, and Data Brokers) is a bill that would amend the Connecticut Data Privacy Act, define "neural data" as "any information that is generated by measuring the activity of an individual's central or peripheral nervous system", and include it in the definition of sensitive data
- Illinois: HB 2984 is a bill that would amend the Biometric Information Privacy Act, define "neural data" as "information that is generated by the measurement of activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information", and add neural data to the definition of biometric identifier
- Massachusetts: HD 4127 (Neural Data Privacy Protection Act) is a bill that would define "neural data" as "information that is generated by measuring the activity of an individual's central or peripheral nervous system, and that is not inferred from non-neural information" and include it in the definition of sensitive covered data. This is a significant step since there is no comprehensive consumer privacy law in Massachusetts at this point
- Minnesota: SF 1240 is a bill that would not amend the consumer privacy legislation, but would rather be a standalone piece of legislation that provides the right to mental data and sets out neurotech rights concerning brain-computer interfaces. It would begin to apply in August 2025
- Vermont: there are three bills involving neural data protection: H210 (Age-Appropriate Design Code Act), H208 (Data Privacy and Online Surveillance Act), and H366 (An Act Relating to Neurological Rights). In a nutshell, these bills would define "neural data" as "information that is collected through biosensors and that could be processed to infer or predict mental states", provide individuals with the right to mental or neural data privacy, protect minors specifically, and create a comprehensive consumer privacy bill that includes specific protections for neural data

Clearly, it is becoming more important to enact mental or neurological privacy protections when it comes to neurotech and automated decision-making systems. In North America, these states are leading the way and could influence the direction of legislation for both Canada and the entire United States. That is, they are adding provisions to their consumer privacy legislation or creating standalone statutes.

Ethics of Neurotechnology
Let us begin this discussion with the question: Why is neural data unique? Simply put, neural data is not just a phone number or a person's age. It is very sensitive and can reveal much more about a person. This is why Cooley lawyers refer to it as a kind of digital "source code" for an individual, potentially uncovering thoughts, emotions, and even intentions: "From EEG readings to fMRI scans, neural data allows insights into neural activity that could, in the future, decode neural data into speech, detect truthfulness or create a digital clone of an individual's personality"

Several thinkers have asked about what needs to be protected. For example, the Neurorights Foundation tackles the issue of human rights for the age of neurotech. It advocates for promoting innovation, protecting human rights, and ensuring the ethical development of neurotech. The foundation has created a number of research reports, including Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies, which analyzed the data practices and user rights of consumer neurotechnology products. In this report, there were several areas of concern: access to information, data collection and storage, data sharing, user rights, as well as data safety and security. The conclusion was that the consumer neurotechnology space is growing at a rate that has outpaced research and regulation. Further, most existing neurotechnology companies do not adequately inform consumers or protect their neural data from misuse and abuse. The report was created so that companies and investors can appreciate the kinds of specific further measures that are needed to responsibly expand neurotechnology into the consumer sphere.

Additionally, UNESCO has pointed out that there are several innovative neurotechnology techniques, such as brain stimulation and neuroimaging, which have changed the face of our understanding of the nervous system. Neurotechnology has helped us to address many challenges, especially in the context of neurological disorders; however, there are also ethical issues and problems, particularly with its use of non-invasive interventions. For example, neurotechnology can directly access, manipulate, and emulate the structure of the brain—it can produce information about our identities, our emotions, our fears.
If you combine this neurotech with AI, there can be a threat to notions of human identity, human dignity, freedom of thought, autonomy, (mental) privacy, and well-being. UNESCO states that the fast-developing field of neurotechnology is promising, but we need a solid governance framework: "Combined with artificial intelligence, these techniques can enable developers, public or private, to abuse of cognitive biases and trigger reactions and emotions without consent. Consequently, this is not a technological debate, but a societal one. We need to react and tackle this together, now!"

In fact, UNESCO has drafted a Working Document regarding the Ethics of Neurotechnology, which includes a discussion of the following ethical principles and human rights:
- Beneficence, proportionality, and do no harm: Neurotechnology should promote health, awareness, and well-being, and empower individuals to make informed decisions about their brain and mental health while fostering a better understanding of themselves. That said, restrictions on human rights need to adhere to legal principles, including legality, legitimate aim, necessity, and proportionality
- Self-determination and freedom of thought: Throughout the entire lifecycle of neurotechnology, the protection and promotion of freedom of thought, mental self-determination, and mental privacy must be secured. That is, neurotechnology should never be used to exert undue influence or manipulation, whether through force, coercion, or other means that compromise cognitive liberty
- Mental privacy and the protection of neural data: With neural data, there is a risk of stigmatization and discrimination, and of revealing neurobiological correlates of diseases, disorders, or general mental states without the authorization of the person from whom the data is collected. Mental privacy is fundamental for the protection of human dignity, personal identity, and agency. The collection, processing, and sharing of neural data must be conducted with free and informed consent, in ways that respect the ethical and human rights principles outlined by UNESCO, including safeguarding against the misuse or unauthorized access of neural and cognitive biometric data, particularly in contexts where such data might be aggregated with other sources
- Trustworthiness: Neurotechnology systems for human use should always ensure trustworthiness across their entire lifecycle to guarantee the respect, promotion, and protection of human rights and fundamental freedoms. This requires that these systems do not replicate or amplify biases; are transparent, traceable, and explainable; are grounded on solid scientific evidence; and define clear conditions for responsibility and accountability
- Epistemic and global justice: Public awareness of brain and mental health, understanding of neurotechnology, and the importance of neural data should be promoted through open and accessible education, public engagement, training, capacity-building, and science communication
- Best interests of the child and protection of future generations: It is important to balance the potential benefits of enhancing cognitive function through neurotechnology for early diagnosis, instruction, education, and continuous learning with a commitment to the holistic development of the child. This includes nurturing their social life, fostering meaningful relationships, and promoting a healthy lifestyle encompassing nutrition and physical activity
- Enjoying the benefits of scientific-technological progress and its applications: Access to neurotechnology that contributes to human health and wellbeing should be equitable. The benefits of these technologies should be fairly distributed across individuals and communities globally

The document also touches on areas outside health, such as employment. For instance, as neurotechnology evolves and converges with other technologies in the workplace, it presents unique opportunities and risks in labour settings. It is necessary to develop policy frameworks that protect employees' mental privacy and right to self-determination but also promote their health and wellbeing, balancing the potential for human flourishing with the imperative to safeguard against practices that could infringe on mental privacy and dignity.

In Four ethical priorities for neurotechnologies and AI, the author discussed AI and brain-computer interfaces and explored four ethical concerns with respect to neurotech:
- Privacy and consent: it is trite to say that an extraordinary level of personal information can already be obtained from people's data trails; however, this is how the concern is framed. The author stresses that individuals should have the ability and right to keep their neural data private—the default choice needs to be "opt out"
- Agency and identity: the author asserts that as neurotechnologies develop and corporations, governments, and others start striving to endow people with new capabilities, individual identity (bodily and mental integrity) and agency (the ability to choose our actions) must be protected as basic human rights
- Augmentation: there will be pressure to enhance ourselves, such as by adopting neurotechnologies that allow people to radically expand their endurance or sensory or mental capacities. This will likely change societal norms, raise issues of equitable access, and generate new forms of discrimination. And the author notes that outright bans of certain technologies could simply push them underground. Thus, decisions must be made within a culture-specific context, while respecting universal rights and global guidelines
- Bias: a major concern is that biases could become embedded in neural devices; therefore, it is necessary to develop countermeasures to combat bias and to include probable user groups (especially those who are already marginalized) in the design of algorithms and devices as another way to ensure that biases are addressed from the first stages of technology development

The paper also touched on the need for responsible neuroengineering: there was a call for industry and academic researchers to take on the responsibilities that come with devising these devices and systems. The authors suggested that researchers draw on existing frameworks that have been developed for responsible innovation.

In Philosophical foundation of the right to mental integrity in the age of neurotechnologies, the author equated neurorights such as mental privacy, freedom of thought, and mental integrity to basic human rights. The author created a philosophical foundation for a specific right, the right to mental integrity. It included both the classical concepts of privacy and non-interference in our mind/brain.
In addition, the author considered a philosophical foundation based on certain features of the mind that cannot be reached directly from the outside: intentionality, the first-person perspective, personal autonomy in moral choices and in the construction of one's narrative, and relational identity. The author asserted that a variety of neurotechnologies or other tools, including AI, alone or in combination, could, by their very availability, threaten our mental integrity. To that end, the author proposed philosophical foundations for a right to mental integrity that encompassed both privacy and protection from direct interference in mind/brain states and processes. Such foundations focused on aspects that were well known within the philosophy of mind, but not commonly considered in the literature on neurotechnology and neurorights. Intentionality, the first-person perspective, moral choice, and the construction of one's identity were concepts and processes that needed as precise a theoretical definition as possible. The author stated: "In our perspective, such a right should not be understood as a guarantee against malicious uses of technologies, but as a general warning against the availability of means that potentially endanger a fundamental dimension of the human being. Therefore, the recognition of the existence of the right to mental integrity takes the form of a necessary first step, even prior to its potential specific applications"

In Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought, the author stated that progress in neurotechnology and AI provided unprecedented insights into the human brain. Likewise, there were increasing opportunities to influence and measure brain activity. These developments raised several legal and ethical questions. The author argued that the right to freedom of thought could be coherently interpreted as providing comprehensive protection of mental processes and brain data, which could offer a normative basis regarding the use of neurotechnologies. Moreover, an evolving interpretation of the right to freedom of thought was more convincing than introducing a new human right to mental self-determination.

What Can We Take from These Developments?
Undoubtedly, ethicists have spent a considerable amount of time thinking about mental privacy in the age of neurotech, and about exactly what is at risk if there are no privacy and AI protections in place. Fortunately, the law is starting to catch up to the tech and the ethicists' concerns. For example, California and Colorado have already enacted provisions to add to their consumer privacy statutes, and more bills have been introduced to address the issues.
- AI Helpdesk | voyAIge strategy
Ask us anything, no strings attached.

Welcome to the AI Helpdesk. Your Questions Answered. No Strings Attached.
Talk to an Expert Now

How We Help.
Book a session to get expert guidance on topics such as:
- Chatbot assistance and prompt engineering: make AI work better for you.
- Data privacy & intellectual property: navigate AI-related risks and compliance.
- Safeguards and best practices: ask for advice on setting up policy, rules, and responsibilities.
- AI use ideas and vendor selection: identify the right tools for your organization.
- Auditing 101 & risk detection: discuss AI risks before they become problems.

How it Works.
1. Book a Session. Book yourself into our calendar. Submit your questions.
2. Bring Questions. In 30 minutes, we tackle your AI challenges together.
3. Get Advice. Walk away with insights you can use right away.

Need further support? Learn more about our Virtual Chief AI Officer (VCAIO) services.
- Clearview AI Fined by the Dutch Data Protection Authority | voyAIge strategy
Clearview AI Fined by the Dutch Data Protection Authority
Clearview AI fined by The Netherlands for violating the General Data Protection Regulation
By Christina Catenacci
Sep 6, 2024

Key Points:
- Clearview AI was fined a significant amount of money for scraping the faces and biometric information of people on the Internet, and then not properly informing them that it had their data
- If Clearview AI does not stop what it is doing, it will receive further fines of up to 5.1 million Euros on top of the 30.5 million Euro fine
- It is important to note that Clearview AI is an American company that does not have an establishment in the EU—it received a hefty fine anyhow, since some of the faces it scraped were photos of Dutch people. Thus, it is no surprise that the GDPR applied

Despite the fact that Clearview AI is an American company that does not have an establishment in the EU, the company has just received a hefty fine—about €30.5 million (plus up to €5.1 million if there is further noncompliance)—courtesy of the Dutch Data Protection Authority (DPA).

What happened?
Clearview AI is a commercial business that offers facial recognition services to intelligence and investigative services. In fact, it has acquired 30 billion photos of people (including photos of Dutch people). How has it accumulated so many images? It has scraped them from the Internet and converted each image to a unique biometric code. This, of course, has been accomplished without obtaining the consent of the people whose faces were scraped. According to Clearview AI, it provides its services to intelligence and investigative services outside the EU only. However, the DPA has concluded that Clearview AI is operating illegally. In fact, since using the services of Clearview AI is also prohibited, the DPA has warned that Dutch organizations that use Clearview AI may expect serious fines as well.

What were the violations?
The DPA found that Clearview AI has violated the General Data Protection Regulation (GDPR). The DPA has stated that the company never should have built its database in the first place. The following are the main violations by Clearview AI:
- Collected and used facial images and biometric data
- Insufficiently informed people who were in the database that the company had their data, and did not cooperate with requests for access to the information

Again, the DPA has asked Clearview AI to stop doing these things. If it does not stop, there will be further fines of up to 5.1 million Euros—on top of the 30.5 million Euro fine.

What can we take from this development?
The message here is clear: Clearview AI has to stop doing what it is doing, and it will be the DPA that stops it. This conclusion about Clearview AI may be reached in other jurisdictions too. Again, Article 3 of the GDPR deals with territorial scope and unequivocally states that the GDPR applies to the processing of personal data of data subjects who are in the EU where the processing relates to:
- The offering of goods or services to data subjects in the EU (regardless of whether payment is required), or
- The monitoring of the behaviour of data subjects where their behaviour takes place within the EU

DPA Chairman Aleid Wolfsen stated: "Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world…If there is a photo of you on the Internet – and doesn't that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film.
Nor is it something that could only be done in China…This really shouldn't go any further. We have to draw a very clear line at incorrect use of this sort of technology"

Wolfsen pointed out that it is important for safety reasons to be able to detect criminals using facial recognition technology, but he highlighted that this should not be done by commercial businesses. Rather, he noted that facial recognition should only be used by competent authorities and only in exceptional cases. For instance, the police can manage the software and database themselves—under the watchful eye of the DPA and other supervisory authorities. There can be no appeal in this case because Clearview AI did not object to the decision.

The message is clear: collecting (in this case, scraping) sensitive data from the Internet and subsequently not informing individuals that their data is in the company's control is not going to work for businesses. In fact, companies that do this are likely to be fined considerably.
- Meta Wins the Antitrust Case Against It | voyAIge strategy
Meta Wins the Antitrust Case Against It No Monopoly Found By Christina Catenacci, human writer Nov 27, 2025
Key Points
1. On November 18, 2025, the United States District Court for the District of Columbia confirmed that Meta did not have a monopoly
2. This decision confirms that Meta will not have to spin off Instagram and WhatsApp
3. This antitrust decision is markedly different from the Google antitrust decisions involving Search and online ads
On November 18, 2025, James E. Boasberg, Chief Judge at the United States District Court for the District of Columbia, confirmed that Meta did not have a monopoly. Accordingly, Meta will not have to spin off Instagram and WhatsApp.
As I mentioned here, Meta had its antitrust trial about seven months ago, where the main question was whether Meta had gained a monopoly in social media by acquiring Instagram and WhatsApp more than a decade ago (2012 and 2014, respectively). Mark Zuckerberg was the first to give testimony and, while he was on the stand, he was asked to look at his own previous emails that he wrote to associates before and after the acquisitions of Instagram and WhatsApp to clarify his motives. More specifically, the questions were, “Was the purchase to halt Instagram’s growth and get rid of a threat? Or was it to improve Meta’s product by having WhatsApp run as an independent brand?” In short, the ultimate decision was that Meta won: it did not have a monopoly and does not have to break up Instagram and WhatsApp.
What Did the Judge Decide?
Initial Comments
The judge made a point of beginning with the comment, “The Court emphasizes that Facebook and Instagram have significantly transformed over the last several years”. The court noted that Facebook bought Instagram back in 2012, and WhatsApp in 2014. In addition, the court described two other relevant social media apps, TikTok and YouTube, which allowed users to watch and upload videos.
The court explained the evolution of Meta’s apps. For example, as Meta moved to showing TikTok-style videos, TikTok moved to adding Meta-style features for sharing them with friends. Technological changes have made video apps more social. More specifically, smartphone usage exploded; cellular data got better; the steady progress of cellular data was followed by a massive leap in AI; and as social networks matured, the alternatives to AI-recommended content became less appealing.
The court detailed the lengthy history of proceedings, beginning with the initial Complaint filed in 2021. In ruling on Facebook’s motion to dismiss, the court stated straight away that it had doubts that the Federal Trade Commission (FTC) could state a claim for injunctive relief. The court granted Facebook’s motion to dismiss but allowed the FTC to amend its Complaint. The FTC filed an Amended Complaint alleging that Facebook held a monopoly in personal social networking and maintained that monopoly by buying both Instagram and WhatsApp to eliminate them as competitive threats. The court found that the FTC had plausibly alleged that Facebook held monopoly power and that the acquisitions of Instagram and WhatsApp constituted monopolization. That said, the court did say that the FTC might have challenges proving its allegations in court. Subsequently, the parties each moved for summary judgment.
The court denied both motions and indicated that the FTC had met the forgiving summary judgment standard, but the FTC faced hard questions about whether its claims could hold up in the crucible of trial. At trial, the court heard testimony for over six weeks and considered thousands of documents.
Decision at Trial
The court found the following:
- Section 2 of the Sherman Act prohibits monopolization. The main elements are holding monopoly power (power over some market) and maintaining it through means other than competition on the merits. Plaintiffs typically prove monopoly power indirectly by showing that a firm has a dominant share of a market that is protected by barriers to entry
- A big question in this case was, when did Meta have monopoly power? The FTC had to show that Meta was violating the law now or imminently, and could only seek to enjoin conduct that currently or imminently violated the law (the FTC incorrectly argued that Meta broke the law in the past and that this violation is still harming competition)
- The court defined the product market as the smallest set of products such that, if a hypothetical monopolist controlled them all, it would maximize its profits by raising prices significantly above competitive levels. The court confirmed that the FTC had the burden of proving the market’s bounds
- The court found that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. For instance, this could be seen when there was a shutdown of TikTok in the United States: users switched to other apps like Facebook, and later Instagram, and then YouTube. The court commented, “The amount of time that TikTok seems to be taking from Meta’s apps is stunning”. In fact, the court noted that when consumers could not use Facebook and Instagram, they turned first to TikTok and YouTube, and when they could not use TikTok or YouTube, they turned to Facebook and Instagram—Meta itself had no doubt that TikTok and YouTube competed with it. Thus, even when considering only qualitative evidence, the court found that Meta’s apps were reasonably interchangeable with TikTok and YouTube
- In assessing Meta’s monopoly power, the court considered a market that comprised Facebook, Instagram, Snapchat, MeWe, TikTok, and YouTube. The court found that the best single measure of market share here was total time spent—the companies themselves often measured their market share this way (see the sketch after this list). The court noted that Meta’s market share was falling, and that what counted most was the ability to maintain market share. A given market share was less likely to add up to a monopoly if it was eroding—if monopoly power is the power to control prices or exclude competition, then that power seemed to have slipped from Meta’s grip
- The court concluded that YouTube and TikTok belonged in the product market, and they prevented Meta from holding a monopoly. Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case
- Social media moved so quickly that it never looked the same way twice since the case began in 2021. The competitors changed significantly too. Previous decisions on motions did not even mention TikTok. Yet today, TikTok is Meta’s fiercest rival. It was understandable that the FTC was unable to fix the boundaries of Meta’s product market
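The court’s preferred yardstick, share of total time spent, is easy to picture with toy numbers. The following is a minimal sketch, assuming invented monthly user-hour figures (they are not from the trial record), of how such shares are computed:

```python
# Hypothetical illustration of the "total time spent" market-share measure.
# All user-hour figures below are invented for the example.
monthly_user_hours = {
    "Facebook": 900,
    "Instagram": 700,
    "Snapchat": 150,
    "MeWe": 5,
    "TikTok": 1100,
    "YouTube": 1300,
}

total = sum(monthly_user_hours.values())
for app, hours in sorted(monthly_user_hours.items(), key=lambda kv: -kv[1]):
    print(f"{app:<10} {hours / total:6.1%}")

# Meta's combined share (Facebook + Instagram) under this measure:
meta_share = (monthly_user_hours["Facebook"] + monthly_user_hours["Instagram"]) / total
print(f"Meta combined: {meta_share:.1%}")
```

On numbers like these, Meta’s combined share sits well below dominant levels once TikTok and YouTube are counted in the market, which is the arithmetic heart of the court’s conclusion.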
Accordingly, the court stated: “Whether or not Meta enjoyed monopoly power in the past, though, the [FTC] must show that it continues to hold such power now. The Court’s verdict today determines that the FTC has not done so”. Therefore, the case against Meta was dismissed.
What Can we Take from this Development?
Meta did not have a monopoly in social networking and survived a very serious existential challenge—it will not have to break the company apart as a result of this decision. The result was the polar opposite of the Google decision, where there was indeed a confirmed monopoly in Google Search and online ads. Why such a different result?
The first clue came right at the beginning of this Meta decision, when the judge noted that the question was whether Meta had monopoly power now or imminently. In particular, there was no determination about whether there had been a monopoly in the past (as the FTC incorrectly alleged), because it was irrelevant. That is, Meta may have had a monopoly in the past, but the FTC had to show that it had one now. Unlike the judge in the Google decision, the judge in the Meta case found that the test for monopoly power was not met, primarily because the FTC could not show that Meta currently had monopoly power (power over some market) and maintained it through means other than competition on the merits.
Second, unlike in the Google decision, the product market had changed considerably since the FTC launched the Complaint, to the point where Meta’s strongest competitor right now, TikTok, had not even appeared on anyone’s radar at the outset. The judge made an important finding that consumers treated TikTok and YouTube as substitutes for Facebook and Instagram. After considering the evidence, the court found that TikTok now had to be included in the product market. This was significant and played a large role in the court dismissing the case. Most strikingly, the judge stated, “Even if YouTube were not included in the product market, including TikTok alone defeated the FTC’s case”.
Third, throughout the previous Meta decisions since 2021, there was foreshadowing by the court that the FTC might struggle to prove its allegations at trial. This was not so in the Google case, which involved the company using exclusionary contracts and other means to create and maintain its monopoly, which it still has. It is not just the DOJ that thinks Google currently has a monopoly—the EU has also fined Google significantly for having and maintaining a monopoly in Search and online ads.
Fourth, it became clear that Meta’s market share had decreased, likely because of TikTok and YouTube—this made it difficult for the FTC to prove that there was a monopoly under which Meta would have the opportunity to charge more, or demand more time spent. Recall that a main measure in this sphere was time spent, and the court stated that the amount of time that TikTok seemed to be taking from Meta’s apps was stunning. In Google’s case, by contrast, Google had—and still has—89 percent of the global search engine market share.
Sure, Mark Zuckerberg wrote in 2008 emails, “It is better to buy than compete”, but even if that were true, the court just showed that the FTC could no longer meet the test for holding a monopoly.
Some may question why there is such importance placed on antitrust trials. Speaking about its competition mission, the FTC states: “Free and open markets are the foundation of a vibrant economy.
Aggressive competition among sellers in an open marketplace gives consumers — both individuals and businesses — the benefits of lower prices, higher quality products and services, more choices, and greater innovation”
- OECD’s AI Benchmark is the Message | voyAIge strategy
OECD’s AI Benchmark is the Message What the OECD's AI Capability Indicators Mean for Business Leaders By Tommy Cooke, powered by coffee and Blue Jays baseball Jun 27, 2025
Key Points:
1. The OECD’s AI Capability Indicators mark a critical shift from AI hype to measurable, human-centered benchmarking
2. Even today’s most advanced AI systems perform unevenly, often failing to match basic human adaptability, judgment, and emotional nuance
3. For business leaders, these benchmarks are not just technical assessments — they’re a practical guide to knowing when to automate and when to elevate human talent
A quiet evolution is underway. After years of loud debates fraught with both hype and doubt surrounding AI's risks, promises, and potential, the Organisation for Economic Co-operation and Development (OECD) has done something refreshingly uncontroversial: it is measuring AI. This is not measuring in the abstract. It is happening through a concrete framework that evaluates how even the most advanced AI systems stack up against human capabilities.
The release of the OECD AI Capability Indicators earlier this month offers business leaders something valuable: a way to separate performance from perception. Moreover, business leaders ought to pay attention because the OECD’s efforts have established a benchmark precedent that encourages people to really focus on the differences between AI-generated and human-generated outputs; AI is learning, but so too are humans.
From Possibility to Performance: OECD’s AI Capability Indicators at a Glance
For years, AI discourse has been dominated by extremes. From utopian visions of self-aware machines to dystopian warnings of job-stealing automation, humans have been pulled back and forth for a long time now. The trouble is that between the excitement and fear, there is little space to comfortably find and occupy a middle ground. That middle ground is important. Why? It’s a space where we can more pragmatically and objectively reflect upon what AI is currently capable of.
The OECD’s new benchmark framework is important scaffolding for that middle ground. What they have done is identify nine human-relevant capabilities, including language, vision, reasoning, creativity, and social interaction. Using a scale from zero to five, the goal of the benchmark is to measure and compare how well AI systems perform like humans.
What makes the framework so significant isn’t just the scoring—it’s the mindset shift. For the first time, governments, researchers, and businesses are using a shared scale to evaluate AI performance not against hype, but against actual, measurable human capability. This benchmark is significant because it reinforces a growing recognition that intelligence isn’t binary. It cannot be reduced to linear algebra. It’s gradual, contextual, and measurable. AI can be scored not just on its ability to talk like a human, but on whether it sees, manipulates, critically reflects, memorizes, problem-solves, and learns like a human.
The Verdict: What OECD’s AI Capability Indicators Reveal about The State of AI
So, how is AI performing against these indicators? The verdict is not encouraging. Even state-of-the-art systems like GPT-4 and Claude fall between Level 2 and Level 3. This means that they exhibit some general human skills. However, they do not do so consistently, nor are they robust and adaptive in their ability to perform like humans. This is a sobering truth. AI is strong, but against this benchmark its strength is not very compelling.
Let’s have a look at a few key indicators and see how extant AI is doing:
- Language. At level 3, AI generates coherent summaries, translates text, and mimics tone and even emotions like empathy. But it still hallucinates. It misses nuance, and it consistently struggles with factual accuracy
- Problem-solving and Knowledge Retention. These capabilities are important. Can AI find solutions and retain them consistently? This matters for structured tasks like drafting reports, generating legal summaries, or even conducting market analyses. AI performs moderately here, around level 3. Why? It still struggles to creatively synthesize. It also has a hard time judging criteria in the way that humans do
- Visual Understanding. Image recognition and labeling are reasonably advanced at level 3. However, AI struggles to interpret diagrams with text (also called multi-modal coordination). Most AI that generates images also fails to consistently learn from images that users provide
- Social and Emotional Capability. This is perhaps where AI lags behind the most. Averaging level 1 or 2 means that its efforts merely emulate social and emotional intelligence. AI mimics politeness, but it cannot understand or respond to real human emotion. Empathy is still beyond the machine, contrary to what some engineers believe
The OECD’s AI Benchmark Is the Message
The most important thing that the OECD has done here isn’t simply rating AI. It’s changing how we talk about it. For far too long, AI has been treated as both mysterious and magical. It’s been treated as a black box that fires magic bullets. By establishing shared indicators, the OECD has invited the world to measure AI’s strengths and weaknesses like any other technology. It is, in effect, demystifying AI for us.
As importantly, the benchmark reminds us of something that is far too easy to forget: human standards matter. These indicators don’t merely reveal what AI can do; they also reveal what AI cannot yet do. This is more than semantics, especially if you are a business leader. When reflecting upon the OECD’s benchmark, I encourage you to start exploring and asking how your own AI performs. At the very least, it should reveal to you not only where your people are important, but also why they are so important. Consider doing the following:
- First, map your AI efforts. What are your core business processes, and which of them require only level 1 to 2 capability? They are likely repetitive, structured, and predictable. These are the tasks that are prime for automation. Think customer service scripts. Monthly reporting. Content generation. Invoice verification, and so on (a sketch of this triage follows this list)
- Second, identify your human talent. This is your competitive edge, after all. Where are you relying on social insight, flexible judgment, ethical nuance, and cultural awareness? These are not just hard to automate, but also where your people offer the most value
- Third, design integration with purpose. Don’t just deploy AI because your competitors are doing it. Deploy it because you understand where it fits, and it should fit your people and your organization like a glove—not like a raincoat
- Lastly, build a feedback loop. The OECD indicators will evolve. Your business should too. Treat the OECD’s indicators as a living maturity benchmark. They will change and mature just like your human talent. Revisit them often and use them to evaluate vendors, assess risks, and communicate clearly with stakeholders
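One way to operationalize the first two steps is to score each process against the indicator scale. The following is a minimal sketch, assuming hypothetical tasks, required capability levels, and current AI levels (all invented for illustration; substitute your own assessments):

```python
# Minimal sketch: triage tasks by the capability level they demand versus
# the level AI currently reaches on the relevant OECD-style indicator.
# The capability levels and task entries below are invented placeholders.
ai_levels = {"Language": 3, "Problem-solving": 3, "Vision": 3, "Social": 2}

tasks = [
    ("Customer service scripts", "Language", 2),
    ("Monthly reporting", "Problem-solving", 2),
    ("Invoice verification", "Vision", 2),
    ("Client escalation calls", "Social", 4),
]

for name, capability, required_level in tasks:
    if ai_levels[capability] >= required_level:
        print(f"{name}: candidate for automation ({capability})")
    else:
        print(f"{name}: keep with human talent ({capability})")
```

Revisiting the ai_levels table as the OECD updates its indicators gives you the feedback loop described in the last step.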
There’s a quiet elegance to what the OECD has done. In a world obsessed with what’s next, they’ve grounded us in what’s now.
- Research & Reporting | voyAIge strategy
In-depth analysis and reports to support informed AI decision-making.
Research & Reporting
We are highly experienced researchers and writers. We are passionate about generating bespoke and informative reports, government bids, and thought leadership submissions for regulators and corporate communications teams.
Your Content, Our Expertise
Whitepapers & Thought Leadership
Position your organization as an industry leader with expertly crafted whitepapers and thought leadership articles. We conduct thorough research to produce insightful, well-argued content that not only informs but also engages your audience. Our whitepapers cover the latest trends, challenges, and innovations in AI and related fields, helping you shape the conversation in your industry.
The Research Process
1. Consultation: We start by understanding your needs, goals, and audiences.
2. Research: We collect and analyze data from industry and academic sources.
3. Draft & Develop: We craft clear, engaging, and structured outputs that convey your message.
4. Review & Finalize: We collaborate with you to gather feedback and make any requested changes.
5. Present: We love to present. We are happy to communicate our outputs to your staff, stakeholders, and investors.
Request Our Research & Reporting Samples
We have numerous samples of our previous research and writing projects, including excerpts from whitepapers, case studies, and policy documents. Contact us to learn more.
- California Legislature Approves AI Bill | voyAIge strategy
California Legislature Approves AI Bill Bill 1047 passes in the California State Assembly By Christina Catenacci Aug 30, 2024
Key Points:
1. California could be the first to enact comprehensive AI legislation in the United States
2. Some controversy has arisen in response to the AI bill
3. There are significant penalties associated with contraventions
In August 2024, Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was read for a third time and passed in the California State Assembly. It was subsequently ordered to the Senate. By August 29, 2024, the bill passed in the Senate (29-9). It now must be signed by Governor Newsom. There are rumblings that he is taking his time weighing the pros and cons of signing a bill that has caused some controversy in Silicon Valley.
What does the bill say?
Bill 1047 defines important concepts such as advanced persistent threat, AI safety incident, covered model (and derivative), critical harm, developer, fine-tuning, full shutdown, post-training modification, and safety and security protocol.
The bill requires that developers, before beginning to initially train a covered model, comply with several requirements, including using administrative, technical, and physical cybersecurity safeguards; implementing the capability to promptly enact a full shutdown; and implementing a written and separate safety and security protocol. Moreover, the bill requires developers to retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted safety and security protocol. Also, developers must annually review the protocol and make any necessary modifications to the policy.
Additionally, Bill 1047 prohibits developers from using a covered model or derivative for a purpose that is not exclusively related to the training or reasonable evaluation of the covered model, compliance with state or federal law, or making a covered model or derivative available for commercial, public, or foreseeably public use, if there is an unreasonable risk that the covered model or derivative will cause or materially enable a critical harm.
Bill 1047 also requires developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit, consistent with best practices, of compliance with the provisions. The auditor must produce an audit report, and developers must retain an unredacted copy of the audit report for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted auditor’s report upon request.
Bill 1047 requires developers of a covered model to submit to the Attorney General a statement of compliance with these provisions. The bill also requires developers of a covered model to report each AI safety incident affecting the covered model, or a derivative controlled by the developer, to the Attorney General.
The bill requires a person who operates a computing cluster to implement written policies and procedures for when a customer utilizes compute resources sufficient to train a covered model, including assessing whether a prospective customer intends to utilize the computing cluster to train a covered model.
There are some hefty penalties contained in Bill 1047. The bill authorizes the Attorney General to bring a civil action for a violation—this includes a violation that causes death or bodily harm to another human, harm to property, or theft. In this case, as of January 1, 2026, a civil penalty can be in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model, rising to 30 percent for any subsequent violation.
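To get a feel for the scale of those caps, consider a rough worked example; the $100 million training-compute cost below is a hypothetical figure, not taken from the bill:

```python
# Rough illustration of Bill 1047's compute-cost-based civil penalty caps.
# The training cost is a hypothetical figure chosen for the example.
training_compute_cost = 100_000_000  # USD (hypothetical)

first_violation_cap = 0.10 * training_compute_cost       # capped at 10%
subsequent_violation_cap = 0.30 * training_compute_cost  # capped at 30%

print(f"First violation cap:      ${first_violation_cap:,.0f}")      # $10,000,000
print(f"Subsequent violation cap: ${subsequent_violation_cap:,.0f}")  # $30,000,000
```

In other words, the exposure scales directly with how expensive the model was to train.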
The bill also contains whistleblower protections, whereby developers, contractors, or subcontractors are not allowed to prevent an employee from disclosing information, or to retaliate against an employee for disclosing information to the Attorney General or Labor Commissioner, if the employee has reasonable cause to believe the information indicates that the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm. In this case, the civil penalty is found under the Labor Code. Other violations involving a computing cluster can result in penalties of up to $50,000 for a first violation, $100,000 for any subsequent violation, and a penalty not exceeding $10 million in the aggregate. Also, the Attorney General is free to seek injunctive or declaratory relief, monetary damages as well as punitive damages, fees, costs, and any other relief the court deems appropriate.
Bill 1047 creates the Board of Frontier Models within the Government Operations Agency, independent of the Department of Technology, and provides for the board’s membership. The Agency is required to, on or before January 1, 2027 and annually thereafter, issue regulations to update the definition of a “covered model,” as provided. The bill establishes in the Agency a consortium required to develop a framework for the creation of a public cloud computing cluster, to be known as “CalCompute,” that advances the development and deployment of AI that is safe, ethical, equitable, and sustainable by, among other things, fostering research and innovation that benefits the public. On or before January 1, 2026, the Agency must submit a report from the consortium to the Legislature with that framework.
What can we take from this development?
The main author of the bill, Senator Scott Wiener, has talked about the fact that the bill took a lot of work and collaboration with industry, and has emphasized that it deserves to be enacted. Though there has been some criticism arguing that the bill is overly focused on harms, it can be said that Bill 1047 is the first of its kind in the United States—it requires AI companies operating in California to comply with several requirements when training AI models. And businesses will have some time to prepare so they can be in compliance.
In the preamble, it is declared that California is leading the world in AI innovation and research. One might question whether Canada is even part of the equation any longer, given the slow-moving progress of Bill C-27. And if an election takes place in Canada, there will be further delays in enacting a meaningful piece of AI (and privacy) legislation. We will have to wait and see.
- L&E Analysis: Reddit Sues Anthropic | voyAIge strategy
L&E Analysis: Reddit Sues Anthropic What is Reddit Claiming in this Complaint? By Christina Catenacci, human writer Jun 20, 2025 Legal & Ethical Analysis: Issue 2
Key Points
1. On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco
2. This Complaint follows the one launched against Anthropic by the major music publishers on October 18, 2023, which was ultimately settled
3. The Complaint by the music publishers was about copyright law, whereas the Reddit Complaint is about violating the User Agreement and the privacy of Reddit users. It will be interesting to see how the Reddit Complaint is resolved
On June 4, 2025, Reddit filed a Complaint against Anthropic in the Superior Court of the State of California in San Francisco. This is not the first Complaint that has been commenced against Anthropic—we cannot forget what recently took place when the music publishers sued Anthropic for copyright infringement.
What is the claim about? What does Reddit want? How is this claim different from the one against Anthropic that was launched by the music publishers? What can we take from this development? This article answers these questions.
What is Reddit Claiming?
Essentially, Reddit has stated that although Anthropic claims it is the “white knight of the AI industry” that prioritizes honesty and high trust, it is “anything but” and uses empty marketing gimmicks. Reddit asserts in its Complaint that it has a User Agreement that contains the following excerpts of sections 3 and 7:
“3. Your Use of the Services Except and solely to the extent such a restriction is impermissible under applicable law, you may not, without our written agreement: license, sell, transfer, assign, distribute, host, or otherwise commercially exploit the Services or Content”
“7. Things You Cannot Do Access, search, or collect data from the Services by any means (automated or otherwise) except as permitted in these Terms or in a separate agreement with Reddit (we conditionally grant permission to crawl the Services in accordance with the parameters set forth in our robots.txt file, but scraping the Services without Reddit’s prior written consent is prohibited)”
Even though Reddit has this provision, it claims that Anthropic has intentionally trained on the personal data of Reddit users without ever requesting their consent. In fact, it claims that Anthropic has been ignoring the provision and has had bots repeatedly hit Reddit’s servers over 100,000 times, even after Reddit’s CEO stated that Anthropic had unlawfully exploited Reddit content. Further, Reddit states that Anthropic has refused to respect Reddit users’ privacy rights—contrary to Anthropic’s own values. Reddit claims that, by training its model Claude on Reddit’s posts without authorization, Anthropic is in direct violation of Reddit’s User Agreement.
In a nutshell, Reddit says that Anthropic has scraped and used Reddit content in its commercial offerings—Claude even provides output statements confirming that it has been trained on Reddit. Anthropic refused to respect Reddit’s guardrails and enter into a licensing agreement as Google and OpenAI did. Reddit asserts that instead, Anthropic continued to commercialize Reddit content without authorization. Interestingly, Reddit states that Anthropic has admitted that it scrapes Reddit content, but it has provided several excuses—all of which Reddit finds unacceptable.
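For context on section 7’s reference to crawling “in accordance with the parameters set forth in our robots.txt file”: robots.txt is the standard mechanism by which websites publish crawler permissions. The following is a minimal sketch of how a well-behaved crawler would check those parameters using Python’s standard library (“ExampleBot” is a made-up user agent, and the check is illustrative, not a statement about what Reddit’s file currently allows):

```python
# Minimal sketch of a crawler honoring robots.txt via the standard library.
# "ExampleBot" is a made-up user agent used purely for illustration.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.reddit.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

url = "https://www.reddit.com/r/python/"
if parser.can_fetch("ExampleBot", url):
    print("Crawling permitted for this user agent and path")
else:
    print("Disallowed: a well-behaved crawler skips this URL")
```

Reddit’s allegation, in essence, is that Anthropic’s bots ignored exactly this kind of check.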
To that end, Reddit is advancing the following claims against Anthropic:
- Breach of Contract: Anthropic has violated Reddit’s User Agreement by acting contrary to sections 3 and 7 of the Agreement
- Unjust Enrichment: Anthropic was unjustly enriched at the expense of Reddit when it scraped and used Reddit content to train and power a model to the tune of billions of dollars
- Trespass to Chattels: Anthropic intentionally entered into, and made use of, Reddit’s platform and technological infrastructure, including its software and servers, to access and obtain Reddit content and information for its own economic benefit
- Tortious Interference With Contract: Anthropic intentionally interfered with Reddit’s contractual relationships with its users by: scraping Reddit content without entering into a licensing agreement that would provide the necessary guardrails to protect users’ privacy rights; bypassing Reddit’s Compliance API, which automatically notifies licensees when users delete posts or comments; training its AI models on user content without any mechanism to respect Reddit user deletion requests; and continuing to scrape Reddit content after being notified that such conduct violated Reddit’s obligations to its users. This intentional interference diminished Reddit’s capacity to fulfill its obligations to its users
- Unfair Competition: Anthropic has engaged in acts of unfair competition, including unlawful, unfair, and/or fraudulent business acts and practices as defined by the Business and Professions Code. Anthropic has trespassed on Reddit’s platform, taken possession of Reddit content and data without authority or permission, and interfered with Reddit’s contractual relationships with its users. Anthropic has also engaged in fraudulent business practices by falsely stating that it was no longer scraping the Reddit platform, even as it continued to scrape in order to acquire and use Reddit content to train its AI models for commercial gain
In addition, Reddit has requested a jury trial.
What is Reddit Asking for in the Complaint?
Reddit asked for the following:
- Specific performance, compensatory damages, consequential damages, lost profits, and/or disgorgement of Anthropic’s profits
- An injunction
- Restitution for the amount by which Anthropic has been enriched by its scraping and use of Reddit content
- Pre-judgment and post-judgment interest
- Punitive damages
- Fees, costs, and any other appropriate relief
A Previous Complaint by the Music Publishers
We cannot forget that on October 18, 2023, several major music publishers (Music Publishers) filed a Complaint against Anthropic in the United States District Court for the Middle District of Tennessee, Nashville Division. Essentially, the Music Publishers brought the action to address the systematic and widespread infringement of their copyrighted song lyrics by Anthropic. That is, they asserted that Anthropic unlawfully copied and disseminated vast amounts of copyrighted works—including the lyrics to myriad musical compositions owned or controlled by the Music Publishers. In the very first paragraph, the Music Publishers stated: “(Music) Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.
Anthropic must abide by well-established copyright laws, just as countless other technology companies regularly do”
The Music Publishers explained that they partnered with innovators, including entrepreneurs, start-ups, and established companies—they recognized and drove true innovation (for instance, Universal used AI in its business and production operations). However, Anthropic’s copyright infringement did not constitute innovation: “in layman’s terms, it’s theft”. In fact, the Music Publishers claimed that Anthropic violated the United States Copyright Act. They acknowledged that AI was a new technology, but they insisted that AI companies still had to follow the law. Technological advances could not come at the expense of the creators who essentially served as the backbone for AI’s development.
Anthropic built AI models by scraping and ingesting massive amounts of text from the internet (and potentially other sources), using all of it to train its AI models (Claude 2 in this case) and generate output based on this copied text. The Music Publishers claimed that Anthropic copied lyrics to their musical compositions to fuel its AI models. They urged that copyrighted material was not free just because it could be found on the internet—in this case, Anthropic never asked for permission.
Notwithstanding Anthropic’s company Constitution (the goal was to be harmless, respectful, and ethical), the Music Publishers passionately argued that Anthropic committed copyright infringement since it generated identical or nearly identical copies of their lyrics. In the Complaint, the Music Publishers provided examples where famous songs were either completely or partially outputted in response to user prompts. In fact, the Music Publishers argued that:
- Anthropic directly infringed the Music Publishers’ exclusive rights as copyright holders, including the rights of reproduction, preparation of derivative works, distribution, and public display
- Anthropic unlawfully enabled, encouraged, and profited from massive copyright infringement by its users, so it was secondarily liable for the infringing acts of its users under well-established theories of contributory infringement and vicarious infringement
- Anthropic’s AI output often omitted critical copyright management information regarding these works, so that the composers of the song lyrics frequently did not get recognition for being the creators of the works being distributed
The Music Publishers stated, “It is unfathomable for Anthropic to treat itself as exempt from the ethical and legal rules it purports to embrace”. According to the Music Publishers, there was no doubt that Anthropic profited from the infringement of the Music Publishers’ repertoires, since Anthropic was already valued at $5 billion, had received billions of dollars in funding, and boasted about numerous high-profile commercial customers and partnerships. The Music Publishers stated in the Claim: “None of that would be possible without the vast troves of copyrighted material that Anthropic scrapes from the internet and exploits as the input and output for its AI models”
The Music Publishers noted that nothing about Anthropic was creative—Anthropic depended on the creativity of others and paid them nothing. This caused substantial and irreparable harm.
The Claim set out how Anthropic trained the data. Anthropic:
- copied massive amounts of text from the internet (and potentially other sources) by “scraping” (copying or downloading) the text directly from websites and other digital sources onto Anthropic’s servers, using automated tools such as bots and web crawlers, and/or by working from collections prepared by third parties
- cleaned the copied text to remove material that it perceived as inconsistent with its business model, whether technical or subjective in nature (such as deduplication or removal of offensive language)
- copied the massive “corpus” and processed it in multiple ways to train the Claude AI models (encoding the text into tokens)
- processed the data further to fine-tune the Claude AI model and engaged in additional reinforcement learning, based both on human feedback and AI feedback, all of which may require additional copying of the collected text
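Stripped of scale, the pipeline the Claim describes follows a familiar shape: scrape, clean and deduplicate, then encode into tokens for training. The following is a toy sketch under heavy simplifying assumptions (the three-document corpus, exact-hash deduplication, and whitespace tokenization are illustrative stand-ins, not Anthropic’s actual methods):

```python
# Toy sketch of the scrape -> clean/deduplicate -> tokenize stages the
# Claim describes. Everything here is a drastic simplification.
import hashlib

raw_documents = [  # stand-in for text scraped from the web
    "Example lyrics line one",
    "Example lyrics line one",        # exact duplicate, removed below
    "A different document entirely",
]

# Clean: deduplicate by content hash (real pipelines also use fuzzy,
# near-duplicate detection and content filtering).
seen, cleaned = set(), []
for doc in raw_documents:
    digest = hashlib.sha256(doc.lower().encode()).hexdigest()
    if digest not in seen:
        seen.add(digest)
        cleaned.append(doc)

# Tokenize: naive whitespace tokens; production models use learned
# subword vocabularies instead.
tokenized = [doc.split() for doc in cleaned]
print(tokenized)
# Training and the fine-tuning/reinforcement-learning stages the Claim
# mentions would consume these token streams downstream.
```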
The following are the Claims for Relief:
- Count I: Direct Copyright Infringement
- Count II: Contributory Infringement
- Count III: Vicarious Infringement
- Count IV: Removal or Alteration of Copyright Management Information
To that end, the Music Publishers requested relief against Anthropic in the form of judgment on each of the claims above, an order for equitable relief, an order requiring Anthropic to pay the Music Publishers statutory damages, an order requiring Anthropic to provide an accounting of the training data and methods (and the lyrics on which it trained its AI models), an order to destroy (under Court supervision) all infringing copies of the Music Publishers’ copyrighted works, costs, and interest.
On October 23, 2023, Anthropic initially stated that training its AI models constituted “fair use”, meaning that it was a lawful exemption in copyright law. Why? Because Anthropic was engaging in a use that was highly transformative. Indeed, the company maintained that it did not intend to violate the law. Furthermore, on November 16, 2023, the Music Publishers brought a Motion asking for a preliminary injunction requiring Anthropic to stop using their music lyrics. However, by January 6, 2025, it was reported that Anthropic and the Music Publishers had reached a settlement under which Anthropic would implement robust measures to ensure compliance with the law, namely revising its data collection and training methodologies to exclude copyrighted content unless proper licenses or permissions have been obtained. It also agreed to more stringent oversight of its data sources to mitigate the risks of inadvertently using protected material in future AI training.
What Can We Take from This Development?
Following the debacle with the Music Publishers, one would think that Anthropic would be striving to promote ethical AI practices and foster trust with both the creators of artistic works and the wider public. One would think that Anthropic had learned its expensive lesson. But now, Anthropic has to face Reddit.
The disappointing part of the story is that Anthropic similarly scraped Reddit content from the internet and used it to train Claude models—clearly without permission and without entering into a proper licensing agreement. While this is technically not a copyright infringement case, it is similar in that Anthropic allegedly breached the terms of the User Agreement it was required to comply with. In particular, to use Reddit, the terms in sections 3 and 7 had to be complied with.
And Anthropic was warned—but Anthropic continued to hit Reddit’s servers and scrape away so that it could train Claude (without paying). This appears to be the first time that a big tech company has challenged an AI model provider over its training data practices, and it will be interesting to see what happens. As in the case with the Music Publishers, Anthropic will likely have to settle and promise not to do this again. This may be what the company ultimately has to do in order to preserve the delicate balance between innovation and the rights of companies like Reddit (along with its users). For a company that was just valued at $61.5 billion (up from the $5 billion noted in the Complaint a couple of years ago), it may be something that Anthropic needs to do sooner rather than later in order to preserve its reputation.
Anthropic spoke with TechCrunch recently and said that it disagreed with Reddit and would vigorously defend itself. We shall see what happens in the coming months…