- Understanding Risk of AI Self-Assessments | voyAIge strategy
Understanding Risk of AI Self-Assessments
Balancing Self-Assessments with External Audits
By Tommy Cooke
Nov 8, 2024

Key Points:
- AI self-assessments can uncover compliance gaps and build trust but should be paired with external expertise to ensure objectivity
- Collaborating with an external auditor helps create a comprehensive assessment that aligns AI adoption with an organization's goals
- Maintaining a feedback loop with external auditors ensures ongoing improvements, keeping AI systems compliant and aligned with evolving organizational needs

The United Kingdom recently announced that it has launched a new platform to promote AI adoption in the private sector. The goal of the platform is to let a business efficiently examine its operations and organizational design to identify, assess, and mitigate risks associated with AI before getting too entrenched in adopting AI incorrectly. The announcement is timely and encouraging given that complex generative AI models are reportedly struggling with EU legal compliance benchmarks around AI bias and cybersecurity. Moreover, the UK's initiative appears to allow UK businesses to tackle the looming uncertainty around AI adoption head-on: despite its potential impact, 31% of British-based businesses are nervous about adopting AI, with 39% of businesses reporting that it would be safer to stick with technologies they already know how to use.

The Value of AI Self-Assessments

A self-assessment can generate the kind of clarity and confidence an organization requires to adopt AI. Here are a few benefits:

Identify Compliance Gaps Early: A self-assessment can be a quick way to identify potential compliance blind spots internally before they harden into substantive risks down the road. Scanning the regulatory landscape, both at home and abroad (especially for organizations with employees and stakeholders beyond their national borders), reveals what kinds of policy and procedure preparations are required prior to adopting AI.

Foster Trust and Transparency: A self-assessment process can also play a crucial role in encouraging businesses to be transparent about their AI use. Per Cisco's 2024 Consumer Privacy Survey, 75% of respondents indicated that trustworthy and transparent data practices directly influence their buying choices. As importantly, trust and transparency protect and foster relationships not only with customers but with regulators and insurers as well.

Validate Internal Stakeholders from the Bottom-Up: Enlisting a workforce to assist in implementing a self-assessment is a powerful way of building trust in AI. When employees understand AI's impact on their daily work and have an opportunity to assess a model or product, they are more likely to embrace AI rather than resist it. This bottom-up engagement strategy is proven to foster a culture that prioritizes communication, adaptation, and innovation.

The Risks of Self-Assessments

While there is value in conducting AI impact self-assessments, the process is not without risks. At voyAIge strategy, we encourage organizations to be mindful of the potential shortcomings of self-assessments, which include:

A Lack of Objectivity: Conducting a self-assessment without external input and feedback can generate biases that may take years to discover. While self-assessment tools can empower a workforce, they also signal quite clearly that AI is coming.
Internal stakeholders may overlook weaknesses or ethical concerns because an organization is committed to adopting AI. These two quick examples reveal how a lack of objectivity can generate problems that may undermine the credibility of the assessment altogether.

Limited Expertise: While self-assessment tools tend to be designed by AI experts, they are generally one-size-fits-all approaches to understanding an organization's needs. They also tend to look at many types of AI through the same lens. Moreover, and as importantly, those conducting a self-assessment usually lack the depth of knowledge required to fully understand not only AI's implications but also the rationale behind the design of a self-assessment tool. AI systems and assessment criteria are complex, necessitating a deep understanding of technical and regulatory challenges on the one hand, and an organization's strengths and weaknesses on the other. There are many nuances at stake on either side of the equation. Failing to recognize and meet these nuances can result in superficial assessments that fail to uncover significant issues.

Global Blind Spots: One of the biggest risks of a self-assessment is its frequent failure to account for the regulatory landscape of neighboring jurisdictions. The key here is recognizing that AI laws are created and updated very quickly, and at different rates and frequencies around the globe. A self-assessment might not fully capture these nuances, particularly if the evaluators behind the design of a self-assessment tool do not conduct adequate research or fail to regularly update the tool.

Balancing the Benefits and Risks of AI Self-Assessments

Successfully conducting a self-assessment requires striking a balance between internal accountability and external assistance. Here are three steps your organization can take to conduct a self-assessment responsibly and compellingly:

Step One - Enlist the Assistance of an External Auditor: Rather than leaving the entire responsibility for a self-assessment to your workforce, bring in an external auditor as a project manager. The auditor can guide key stakeholders in identifying and recognizing key areas that may otherwise have been missed.

Step Two - Engage Cross-Functional Teams with the Auditor: Involve the external auditor in a cross-functional team comprised of AI- and tech-savvy individuals from as many business units as possible. By having them work with the external auditor, specialized insight can be generated in a comprehensive way that minimizes blind spots and ensures that an AI's adoption fits across multiple business lines without being disruptive.

Step Three - Develop a Feedback Loop: After your self-assessment is conducted, maintain check-ins with the external auditor. Continuous monitoring and improvement is always highly recommended when onboarding an AI system of any shape or size, especially as an organization grows and changes along the way. As time passes, have the external auditor provide updates on regulatory changes and advice on refining an AI system's KPIs to ensure that the system remains compliant and aligned with your organization's goals.

Self-assessments can be crucial for generating alignment and excitement in a workforce. They're important tools for uncovering hidden risks and for generating trust and transparency. However, they have their limitations.
Understanding those limitations and overcoming them through external assistance is important for ensuring the successful implementation of any AI system.
- AI in Cybersecurity | voyAIge strategy
AI in Cybersecurity
A Game Changer for MSPs when People Come First
Tommy Cooke
Dec 13, 2024

Key Points:
- AI transforms cybersecurity for MSPs by enabling real-time threat detection, automating responses, and predicting vulnerabilities
- Effective deployment requires tailored training, thoughtful vendor selection, and clear communication with all stakeholders through strong thought leadership
- When it comes to AI in cybersecurity, trust is essential, so ensure that people always come first

Managed Service Providers (MSPs), companies that provide ongoing technology services and maintenance for their clients, are one of the many business types undergoing significant change due to AI, especially MSPs providing cybersecurity solutions. Cybersecurity is complicated. With average data breaches exceeding $4.75 million per year per organization, coupled with the fact that 88 percent of these breaches are caused by human error, MSPs themselves are often the targets of hackers. It is perhaps unsurprising that the industry is undergoing significant transformation: the days of manual monitoring, static rules, and signature-based detection are ending because those methods fail to outpace new modalities of cyberattack driven by AI. AI isn't merely a weapon for bad actors: it is also a tool for progressive MSPs to protect you and themselves. Here are a few ways that AI is transforming cybersecurity.

How AI Is Transforming Cybersecurity for MSPs

Proactive Threat Detection

AI analyzes massive amounts of data in real time to identify anomalies or unusual patterns that could reveal malicious activity. Through machine learning models, AI can uncover subtle deviations in network activity, login behaviors, or file access patterns that might otherwise go unnoticed. How many times has your bank emailed or texted you about suspicious purchase activity? There's a good chance that AI is helping them out. Capabilities like this allow MSPs to respond faster and build more defensive strategies for their clients and themselves.
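To make this concrete, here is a minimal sketch of the kind of anomaly detection described above, applied to login activity. It assumes scikit-learn and NumPy are available, and the features (hour of login, data transferred, failed attempts) and numbers are hypothetical placeholders rather than anything a particular MSP platform actually uses:

```python
# A minimal sketch of anomaly detection on login activity.
# All features and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" logins: business hours, modest traffic, few failures.
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),    # hour of day
    rng.normal(50, 10, 500),   # MB transferred
    rng.poisson(0.2, 500),     # failed attempts before success
])

# Fit an isolation forest on the baseline behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login with heavy traffic and repeated failures looks anomalous.
suspicious = np.array([[3, 400, 6]])
print(model.predict(suspicious))   # -1 flags an anomaly, 1 means normal
```

In practice, a production system would train on far richer telemetry and tune the contamination rate to balance catch rate against the false positives discussed below.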
Automated Incident Response

AI is also excellent at responding to threats automatically. Rather than depending on a human to isolate compromised systems, block malicious IPs, or trigger containment protocols, these tasks can be automated and run 24/7. By reducing downtime and enhancing the ability to prevent damage, AI frees up humans to focus on strategic decision-making. It also gives them more time to use AI cybersecurity tools in a sandbox, a virtual space where they can test the vulnerability of their own and their clients' systems to ensure that a given cybersecurity solution is watertight.

Predictive Intelligence

Beyond detecting threats, AI can forecast them. Fed historical incident data, whether from a client directly or from global threat intelligence indexes, AI can help a cybersecurity firm identify the patterns and trends behind emerging vulnerabilities. As many of us experience on a near-daily basis, the software and systems we use are updated all the time. This is yet another instance where AI is likely assisting one of your many preferred vendors in predicting issues and patching them before they arise.

Understanding AI-Cybersecurity Challenges

While scalability, efficiency, and enhanced trust are attractive to MSPs, AI is not a silver bullet. It is crucial that MSPs understand that AI can still misidentify threats, which can lead to alert fatigue. AI solutions must be constantly tweaked, and it is imperative that companies listen to customers who may grow tired of constantly losing access to their credit cards because of false positives.

Automated cybersecurity solutions are also only as effective as the data on which they train. Data lacking representation of varied attack patterns can lead to gaps in threat detection. AI must also align with privacy laws and ethical standards, especially in client environments where sensitive and personal data are handled regularly, not to mention clients' intellectual property. As we have discussed on In2Communication's video series, The Smarketing Show, many AI cybersecurity platforms have a bad habit of automatically generating new policies and procedures for an organization, including ones that are not tailored to the dynamic shape and size of a company. This can introduce myriad problems for any organization.

Overcoming AI-Cybersecurity Challenges

Start with education: it is necessary to train teams on how to understand AI cybersecurity platforms. Ask whether your teams use these platforms as efficiently and with the same precision as they use your Enterprise Resource Planning (ERP) system. Question whether teams are aware of all of the platform's ins and outs, capabilities, tools, and shortcomings. Seamless use and integration depend upon giving a team time to play with the tool, to break it and make it work more effectively for yourself and your clients as well.

This is why it is also important to vet vendors thoroughly. Choose partners with proven expertise, transparency, and great support, especially among vendors claiming to innovate by offering automated solutions or automatically generated policies and procedures.

Lastly, communicate your vision. Your team, your clients, and your stakeholders are engaging AI at varied rates of exposure. They will have different opinions. Cybersecurity is already a high-stakes application for AI, so talk to your people. Explain how and why AI benefits them. Ensure the data remains secure. Prove that you are a thought leader before you implement anything.

AI in Cybersecurity is Effective, if Managed Properly

Remember that in the world of AI, people matter most. For the foreseeable future, AI is always going to be a trust issue. Effectively deploying AI means more than just the technology; it requires planning, ethical deployment, excellent training, and superb communication. AI can propel an MSP into the future of innovative and reliable cybersecurity solutions if it is able to recognize the inherent complexities of adoption and strategic thinking, processes that always start and end with people.
- Adoption & Transformation | voyAIge strategy
AI strategy and roadmaps to guide digital adoption and transformation.

Digital Adoption and Transformation

Work with voyAIge strategy to scope, select, and strategize the right AI solution for your organization. When we scope AI solutions for your organization, we ensure that we understand the problems first. We identify root causes as well as your pain points, risks, and opportunities. We consider all angles and perspectives available to then define your ideal solution. The solutions we choose and present to you align with your company's values, ambitions, and goals. While we don't implement AI systems ourselves, we leverage our expertise to provide clear, actionable direction in your AI journey so that you and your team are empowered to make informed decisions.

What is Digital Adoption & Transformation?

Adoption is the process of bringing a new technology or platform, like AI, into an organization. Transformation is the process of significantly overhauling an existing system, along with business processes, to improve upon its capabilities and capacities. Whether adopting a new AI system or transforming an existing one, we analyze your current business processes, challenges, and goals to formulate clear insights on gaps and opportunities, along with a robust strategy and recommendations for moving forward in your AI journey.

Our Digital Adoption & Transformation Process

Assessment: We start by conducting a comprehensive assessment of your current state. This includes analyzing your existing processes, technology stack, and data readiness, and identifying any challenges or opportunities. Our goal is to understand where you are now and where you want to be.

Why work with voyAIge strategy?

Choosing the right AI solution is critical to business success. We are your trusted partner in this journey. Here's why organizations choose us for AI Solution Scoping:

Deep Industry & Academic Knowledge: Our team has over 20 years of experience working in the public, private, and not-for-profit sectors as professors, lawyers, and consultants. This gives us unique access to a wide range of best practices, logics, and methods when thinking about AI.

Schedule a Free Consultation
- What is “AI Augmentation”, and How Do You Achieve It? | voyAIge strategy
What is "AI Augmentation", and How Do You Achieve It?
The New Frontier for HR
By Christina Catenacci
Nov 14, 2025

Key Points:
- AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely
- The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter
- AI augmentation is a safe way to carefully and gradually include AI as a collaborator
- There are several steps to achieving AI augmentation, starting with identifying the repetitive tasks that can be automated

AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely. The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter. Compared to full automation, augmentation is about giving existing valuable staff superpowers. You may have heard of collaborative robots, also known as cobots, which are industrial robots that can safely operate alongside humans in a shared workspace (unlike autonomous robots, which are hard-coded to repeatedly perform one task, work independently, and remain stationary). In short, the goal is to combine the strengths of the AI with those of the human.

What is an Example of AI Augmentation?

For example, if someone needs to draft a proposal, that person could combine their abilities with AI's capabilities. The writer can decide which reports to include for coverage in the proposal, and then ask the AI to list the five most impactful statistics from those reports. At this point, the writer could ask the AI to produce a first draft of the proposal using those five statistics. From there, the writer could edit the document and complete an ethics check at the end. Together, the AI and the human writer could synthesize data, draft a document, edit the document, and do the final ethics check.

How do HR Leaders Achieve AI Augmentation?

AI augmentation is the most responsible way to introduce AI. The reason is that it is not full automation, which can carry high risk and complexity, but neither is it the traditional way of doing things at the other end of the spectrum, such as compiling statistics manually from multiple reports. AI augmentation is a happy medium. In fact, this is a safe way to carefully and gradually include AI as a collaborator. The AI can do the things that it is good at, like sifting through mountains of data, finding patterns, and completing the repetitive tasks that bore most humans. This frees humans to focus on what they do best, such as using expertise to solve tricky problems, building relationships with customers, and thinking creatively and empathetically. And humans can perform final ethics checks, too.

The following steps can lead to full AI augmentation, so that humans can still be in the driver's seat instead of watching from the sidelines (a minimal sketch of the Level 3 escalation pattern follows the list):

Level 1: Use AI augmentation to eliminate the boring stuff. Identify the routine, automatable tasks in a job that slow everything down. Have AI start by taking on those tasks. For instance, the AI can clean up customer service tickets and thin out the queue.

Level 2: Allow workers to have AI tools that act as portable experts. Allow workers to use these experts to enhance their work quality and productivity. For example, the human customer service agent can ask the AI to read a ticket and respond by creating a first draft of a customer response. The human agent can review it, edit it, and confirm that it is an appropriate message before sending.

Level 3: Use AI augmentation for predictable tasks. Identify the more predictable tasks, and allow a more autonomous AI system to deal with them. Predictable tasks could include answering the common question, "Where is my order?", so that AI systems handle these types of tasks completely on their own. If the AI system flags a more complex issue, the task escalates and the human agent can seamlessly take over. The human is always in control.
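As referenced above, here is a minimal, hypothetical sketch of the Level 3 pattern: the AI resolves high-confidence, predictable tickets on its own and escalates everything else to a human. The classifier, confidence threshold, and canned answer are illustrative stand-ins, not any real product's API:

```python
# A minimal sketch of Level 3 escalation: AI handles predictable tasks,
# humans take over anything complex. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Ticket:
    text: str

# Hypothetical intents the AI may resolve on its own, with canned answers.
PREDICTABLE = {"where is my order": "You can track your order here: <tracking link>."}
CONFIDENCE_THRESHOLD = 0.8

def classify(ticket: Ticket) -> tuple[str, float]:
    """Stand-in for an AI intent classifier returning (intent, confidence)."""
    for intent in PREDICTABLE:
        if intent in ticket.text.lower():
            return intent, 0.95
    return "unknown", 0.3

def escalate_to_human(ticket: Ticket) -> str:
    """The human agent seamlessly takes over; people stay in control."""
    return f"Escalated to a human agent: {ticket.text!r}"

def handle(ticket: Ticket) -> str:
    intent, confidence = classify(ticket)
    if confidence >= CONFIDENCE_THRESHOLD and intent in PREDICTABLE:
        return PREDICTABLE[intent]    # AI resolves the predictable task
    return escalate_to_human(ticket)  # anything complex goes to a person

print(handle(Ticket("Where is my order #12345?")))
print(handle(Ticket("I was double-charged and my invoice looks wrong.")))
```

The design choice worth noting is the explicit confidence gate: whenever the AI is unsure, the default is a human, which keeps the human in the driver's seat.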
What are Some Best Practices for Using AI Augmentation?

Here are a few tips that can help a business with AI augmentation:
- Use the knowledge and experience you have to train the AI system
- Remember to test the AI in a risk-free environment (a safe and stable sandbox)
- Make sure to roll out the AI slowly and make necessary adjustments
- Note relevant metrics, measure the value created with AI augmentation, and note the value created by the AI-human collaborations
- Create training opportunities for employees with respect to AI-human collaborations

Conclusion

According to Gartner, it is necessary for HR leaders to plan for a blended workforce. This involves moving from a mindset where AI is viewed as a nice-to-have bolt-on to a regular practice of designing a human-AI workforce where both use their strengths and co-deliver work. Moreover, EY recommends blending operational gains with a people-first mindset. More specifically, the chances of sustainable business and capability growth hinge on whether organizations keep a people-first mindset while integrating new technologies. To accomplish this, EY suggests that organizations deploy the most efficient tools and processes to create sustainable value while still investing in the skills, career, and personal growth of the workforce to create a more exceptional employee experience. This means bringing a holistic, people-centered perspective to an increasingly digital world of work. While there may be a percentage of tasks for every employee that might be supported by AI tools, organizations will need those employees to be the humans-in-the-loop who make final decisions.

Finally, employers are recommended to:
- Appreciate AI's role in a comprehensive workforce strategy, and be aware of the potential challenges that lie ahead
- Determine how AI can empower workers in the organization
- Explore potential risks and security concerns
- Consider size, scope, and cost in terms of evaluating performance and cost trade-offs
- With regards to training on the new tools, chart the path forward with people at the center
- Implement metrics that measure workforce sentiment tied to confidence in and adoption of the new technology
- Deep Fakes, Elections, and Safeguarding Democracy | voyAIge strategy
Deep Fakes, Elections, and Safeguarding Democracy
Understanding and Preventing the Threat of Deep Fakes in Elections
By Tommy Cooke
Oct 11, 2024

Key Points:
- Deep Fakes use AI to create hyper-realistic fake media, posing serious risks to elections
- They can manipulate voter perceptions and erode trust in democratic institutions
- Organizations and governments must invest in detection tools, media literacy, and rapid response protocols to combat misinformation

As the 2024 US election approaches, the integrity of democratic voting processes is under threat from Deep Fakes. In January 2024, a robocall impersonating Joe Biden spread misinformation about the election process. It demonstrated how easily AI can be used to manipulate voters. The incident highlights the urgency with which governments, organizations, and citizens need to understand Deep Fakes and undertake proactive measures to combat them.

What Are Deep Fakes?

Deep Fakes are AI-generated content, usually in the form of images, videos, or audio recordings. They are designed to closely replicate the appearance, voice, and mannerisms of real people. Deep Fakes are often generated through a class of machine learning called Generative Adversarial Networks (GANs), in which two neural networks compete with one another. One neural network (the generator) creates a fake image, recording, or video. The other (the discriminator) tries to spot whether it is fake. The two networks continue competing with one another, improving over time until the generator produces fake content that the discriminator finds difficult to distinguish from real footage. The process often results in uncanny, hyper-realistic resemblances, misleading the viewer into witnessing and believing a statement or action that never factually occurred. The ongoing proliferation of Deep Fakes raises considerable ethical and security concerns, particularly during elections.
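To make the generator-versus-discriminator dynamic concrete, here is a minimal, hypothetical sketch of a GAN training loop. It is a toy example on one-dimensional data and assumes PyTorch is available; real Deep Fake systems train far larger networks on images, audio, or video, but the adversarial structure is the same:

```python
# Toy GAN sketch: the generator learns to mimic "real" data (samples near 3.0),
# while the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# The generator maps random noise to a fake 1-D "sample".
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
# The discriminator outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, latent_dim)).detach())  # samples near 3.0
```

The adversarial loop is the core idea: at each step the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.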
Why Deep Fakes Threaten Democracy

Deep Fakes are not merely technological curiosities. They can be a powerful weapon for misinformation, designed to:

Manipulate Public Perception: Deep Fakes falsely portray political figures making inflammatory statements or engaging in unethical behavior, leading to confusion while eroding voter trust. The January 2024 robocall that mimicked Joe Biden is a clear example.

Erode Trust in Democratic Institutions: As more Deep Fakes are produced, they generate increasingly higher levels of suspicion and confusion around what is real and what is not. The atmosphere of doubt and uncertainty that Deep Fakes create thus undermines the credibility of both political candidates and the electoral process. In an age where misinformation already spreads rapidly on social media platforms, Deep Fakes are particularly lethal in their ability to sow distrust in an otherwise informed and engaged electorate.

Distort Voter Behavior: When Deep Fakes are curated for specific audiences (e.g., traditionally Democratic voters), adversaries of the democratic process manipulate certain voters into questioning the validity of the voting process or even the value of their own vote. This targeted approach can significantly alter or shift voter behavior in ways that are detrimental to both political parties.

The 2024 Election: A New Era for Deep Fakes

As we head into the 2024 United States presidential election in November, the threat of Deep Fakes is unprecedentedly high. In today's polarized political climate, Deep Fakes have the potential to escalate tensions by spreading false narratives that align with partisan biases. The current political climate is divisive, and Deep Fakes threaten to make divisions deeper and wider. It is imperative that governments and organizations adhere to best practices to combat Deep Fakes:

Monitoring and Detection: Governments and organizations can invest in advanced AI tools capable of detecting Deep Fakes in real time. Detection algorithms can reverse engineer how Deep Fakes are created to flag suspicious content before it gains widespread traction.

Media Literacy: One of the most effective ways to mitigate the impact of Deep Fakes is through public education. Media literacy campaigns equip voters with the skills they need to critically evaluate the content they encounter.

Rapid Response: Governments and election bodies must have clear strategies in place for when Deep Fakes emerge. Rapid response protocols can include issuing statements to correct misinformation, collaborating with tech companies to remove malicious content, and engaging the public through verified communication channels.

Cross-Sector Collaboration: Governments, tech companies, and media organizations should work together to create transparent systems that verify the authenticity of election-related media. Public-private partnerships can help develop common standards for content verification and share expertise in detecting Deep Fakes.

Proactive Measures for Future Elections

As Deep Fakes become more commonplace on social media platforms, organizations, governments, and citizens need to recognize the urgency of this issue and act proactively:

Seek Independent Fact-Checking Organizations: Governments should work closely with fact-checking bodies to ensure swift and accurate debunking of manipulated content.

Establish Digital Forensics Units: Election bodies and governments can create teams focused on monitoring and analyzing digital content for manipulation. These units can serve as first responders when Deep Fakes are detected, in turn assisting with coordinating responses.

Promote Research and Development: Supporting innovation in the Deep Fake detection and prevention space is crucial. Governments, organizations, and citizens can stay ahead of emerging threats by investing in tools that protect the democratic process.

The risks to democratic integrity are real. Without concerted efforts, the impact of Deep Fakes can reshape the political landscape in exceptionally harmful ways. By investing in monitoring systems, educating the public, and establishing rapid response protocols, we can mitigate these risks and protect the foundations of our democratic processes.
- The Who Why and How of Human in the Loop | voyAIge strategy
The Who, Why, and How of Human-in-the-Loop
Embracing AI Failures to Turn Mistakes into Growth
By Tommy Cooke
Oct 18, 2024

Key Points:
- Human-in-the-Loop (HITL) positions employees as guides of their own AI systems
- AI failures reveal system limitations but also offer opportunities for refinement, turning errors into a pathway for improvement
- A balanced HITL approach is essential to integrate human values, prevent biases, and ensure AI evolves responsibly and in alignment with an organization's values

57% of American workers have tried ChatGPT at work. AI adoption at work is on the rise. As more and more employees interact regularly with tools like ChatGPT, they become increasingly familiar with them. Relationships are developing between your employees and Natural Language Processors like ChatGPT. Understanding that relationship and how to leverage it is crucial to using AI successfully in your organization. As the relationship continues to grow, your employee is becoming a kind of AI supervisor.

Over the last year, the concept of "Human-in-the-Loop" has become a commonplace, fundamental concept in AI. It refers to a form of human oversight of an AI system, such as an engineer adjusting a large AI system behind closed doors. However, if your employees are using systems like ChatGPT regularly, they are positioned to participate in HITL in ways that can be tremendously valuable. When an employee interacts with AI, they are not merely a passive user. They can actively become a part of its learning process. Every prompt, correction, and piece of feedback they provide refines the AI, guiding it to better align with the employee's goals. More simply, the employee becomes their own Human-in-the-Loop.

Take my recent experience as an example. I asked ChatGPT to help me refresh a jazz guitar lesson plan. I've been studying jazz guitar for a couple of years now, and one of my goals is to get more comfortable with inverted chord voicings (for non-musicians, if a chord is the sum of its notes, changing the order of those notes can enrich your writing and playing). Initially, ChatGPT suggested I spend an extra 15 minutes a week drilling scales (individual notes, not chords). I was confused. That was not my goal. So, I told ChatGPT, "Thanks for the suggestion, but I'm not interested in scales right now. Let's make sure we're focusing on the goals I explicitly share so we can build a plan that better uses my time." ChatGPT apologized. It then provided me with a set of inverted chord exercises that will keep me busy through April 2025.

My interaction is a common one among ChatGPT users, and it is important. As individuals, employees have the power to shape and improve AI outputs directly. AI is bound to make mistakes, like misunderstanding intentions and goals. The key is recognizing that limitations are not setbacks. They are opportunities for growth. When employees see themselves as in-the-loop with AI, they can take an active role in recognizing its limitations and pushing it to improve. They can be essential to refining AI and making it more aligned with their own or an organization's needs. Each interaction, correction, and bit of guidance we provide helps the AI learn more effectively. Recognizing that AI failures are sources of insight and growth is an important capability for any organization. Not only do failures reveal how systems are built, their tendencies, and their biases; they also reveal a pathway for encouraging the system to develop and perform more efficiently and more accurately in the future.

Leveraging Failures as Opportunities

AI failures provide critical insight into the dynamics of how humans develop relationships with AI; more specifically, the dynamics at play between systems that learn and human intention. Human decision-making is complex. It involves values, context, empathy, historical biases, and so on. AI can struggle to understand these subtle human complexities. This is part of what makes a Human-in-the-Loop so important. It ensures that human judgement and intent are integrated into an AI system's learning process so that it becomes better at replicating human behaviours. For example, AI models used for job recruitment have been known to be biased against certain minority groups. It is crucial to identify why this bias exists and address it directly. A Human-in-the-Loop plays a critical role here. They can analyze data and algorithms to determine where the issue occurred and what parameters led to the unintended outcome, and begin designing a solution.

How You and Your Employees Can Be Your Own Human-in-the-Loop

Here are three actionable tips that you, your colleagues, or any employee can follow to encourage tools like ChatGPT to improve, especially when they make a mistake (a short sketch of how this feedback can be captured follows the list):

Explain: Clearly point out what went wrong and why. Provide the rationale behind your thinking. This helps the system learn more effectively and align with your specific needs.

Coach: If AI seems to misunderstand your request, guide it by rephrasing or breaking down your request into smaller components. This makes it easier for the tool to understand your intentions and learn from them.

Validate: Positive reinforcement can help. When AI gets it right, acknowledge it. This validation encourages AI to replicate the improved behavior in the future.
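As referenced above, here is a minimal, hypothetical sketch of how an organization might capture these Explain, Coach, and Validate moments so they can be reviewed later and fed back into an AI system. The log format and field names are illustrative assumptions, not any vendor's actual feedback API:

```python
# A minimal, hypothetical sketch of capturing Human-in-the-Loop feedback.
# A real setup would log to the AI platform's own feedback mechanism or a
# database, for later review, auditing, and model refinement.
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "hitl_feedback.jsonl"   # hypothetical log file

def record_feedback(prompt: str, ai_output: str, action: str, note: str = "") -> None:
    """Append one employee judgment ('explain', 'coach', or 'validate') to the log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "ai_output": ai_output,
        "action": action,   # which of the three tips the employee applied
        "note": note,       # the correction or rationale given to the AI
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: an employee corrects an off-goal suggestion, as in the lesson-plan story.
record_feedback(
    prompt="Refresh my jazz guitar lesson plan around inverted chord voicings",
    ai_output="Spend an extra 15 minutes a week drilling scales.",
    action="explain",
    note="Scales are off-goal; focus on inverted chord voicings.",
)
```

Keeping such a record is one way to surface the alignment risks discussed next, because individual corrections become visible to the organization rather than disappearing into private chat histories.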
HITL as a Balanced Partnership, When Kept in Check

Building a relationship with AI is a dynamic process. It is about fostering growth, accountability, and understanding. While employees can be their own Human-in-the-Loop, it is essential that there is at least an awareness of, and intention behind, aligning an employee's guidance of AI with the organization's goals and priorities. Without this alignment, individuals adjusting and guiding AI may inadvertently trigger old biases or create new ones that diverge from strategic objectives. Human-in-the-Loop is certainly a bridge that connects human values, expertise, and context with the computational power of AI, but it must be implemented thoughtfully. As organizations become more comfortable integrating AI, remember that Human-in-the-Loop is about creating synergy between human insight and machine learning. By maintaining human involvement, we ensure that AI evolves in a direction that benefits not only the organization but also its employees, customers, and broader stakeholders.
- AI & The Future of Work Report | voyAIge strategy
Analyses & Recommendations on AI & The Future of Work

The future of work has arrived. AI is transforming industries at a pace that is faster than most leaders expected. This in-depth report distills global research into clear insights for:
- C-suite executives who are navigating AI disruption
- Employers and HR who are at the forefront of dealing with layoffs, mass terminations, and hiring freezes
- Policymakers who are trying to address these technological changes and influence legal and policy direction
- Governments who are trying to make positive change for society

Inside our report, you'll learn:
- Which jobs and skills are most at risk, and where new growth is emerging as a result of AI
- The economic, psychological, and sociological implications of AI disruption
- The current legal and policy landscape in the United States, Canada, and the European Union
- The steps that leaders can take to balance innovation, compliance, and employee well-being

Why read it? AI is not a technology problem; it is a leadership test. Learn how to manage disruption proactively and seize new opportunities to grow with AI.

Sign up to download the full report and join our monthly newsletter.
- Managed Services | voyAIge strategy
Data and AI Leadership - without the overhead.

Managed Data Governance & AI Governance Services
Expert Leadership, Strategy, and Support at a Fixed Monthly Cost.
Book a Free Consultation

How our Managed Services Help

We structure your journey. Whether you're just getting started or are already deploying tools, we help you assess readiness, define goals, and create a strategy that fits your organization's priorities.

We build the right foundations. As experts in law, policy, and ethics, we develop the nuanced solutions you need to grow safely and successfully, without stifling innovation.

We stay involved. As your challenges and opportunities change, we stay in touch with you to nimbly evaluate new use cases, manage compliance, and ensure your AI remains effective and aligned.

Common AI Challenges
- Fear of AI
- Inappropriate Use of AI
- No AI Leadership
- No AI Strategy
- Too Many Questions

Book a Free Consultation

Book a no-strings-attached consultation session to see if our Managed AI Services can help you implement and use AI without the cost or complexity of doing it yourself.
- Tesla Class Action to Move Ahead | voyAIge strategy
Tesla Class Action to Move Ahead
Advanced Driver Assistance Systems Litigation Proceeds in California
By Christina Catenacci, human writer
Aug 22, 2025

Key Points:
- On August 18, 2025, a United States District Judge granted a motion for class certification and appointed a representative plaintiff of the certified classes
- The court narrowed the classes and considered whether it was appropriate to hear the plaintiffs' claims together in one class action
- The lesson here is that businesses need to be careful about what kinds of statements they make about their technology's capabilities, or else they could face litigation from many plaintiffs, potentially leading to a class action

On August 18, 2025, United States District Judge Rita F. Lin granted a motion for class certification and appointed a representative plaintiff of the certified classes. In addition, she appointed class counsel and set a pathway for next steps leading to the case management conference.

Let this story serve as a warning for businesses: be careful about what statements you make about the capabilities of your technology, whether on Twitter, YouTube, or any other channel. If you have a goal that you are striving to achieve, then say that. If you are promoting a new product with extensive capabilities, then do that. Just try not to make claims that are untrue, unless you want to be on the hook for those misrepresentations.

What is the Class Action About?

Tesla did not sell its vehicles through third parties and did not engage in traditional marketing or advertising; in fact, the only way one could buy a Tesla vehicle was through its website. Tesla also reached consumers through its own YouTube channel, Instagram account, press conferences, sales events, marketing newsletters, and Elon Musk's personal Twitter account. Additionally, customers could buy optional technology packages that were designed to enable autonomous vehicle operation. For example, customers could buy the "Enhanced Autopilot Package (EAP)," which had features such as Autopark, Dumb Summon, Actually Smart Summon, and Navigate on Autopilot (highway). Also, the "Full Self-Driving (FSD) Package" had all of the Enhanced Autopilot features, plus Stop Sign and Traffic Signal recognition and Autosteer on Streets. The EAP was offered as a stand-alone package only until the first quarter of 2019, and again for a limited period from the second quarter of 2022 through the second quarter of 2024; at other times, these features were only available as part of the FSD Package.

Essentially, claimants were arguing that Tesla Inc. (Tesla) made misleading statements about the full self-driving capability of its vehicles. The plaintiffs alleged that they relied on two types of misrepresentations that Tesla made:
- that Tesla vehicles were equipped with the hardware necessary for full self-driving capability (the "Hardware Statement")
- that a Tesla vehicle would be able to drive itself across the country within the following year (the "Cross-Country Statement")

When it came to the Hardware Statement, in October 2016, Musk said at a press conference that second-generation autonomous driving hardware would have the hardware necessary for Level 5 Autonomy ("literally meaning hardware capable of full self-driving for driver-less capability"). These statements were also on Tesla's website and in Tesla's November 2016 newsletter. There was even a Tesla blog post dated October 2016 and a Tesla quarterly earnings call in May 2017 containing these statements.
Musk even made comments that the self-driving hardware would enable full self-driving capability at a safety level greater than that of a human driver. Since 2016, the hardware had been updated to version 3.0 and version 4.0; these upgrades had a more powerful computer and cameras. In a 2024 earnings call, Musk stated that a further hardware upgrade would likely be necessary for customers who bought FSD with prior hardware configurations: "I mean, I think the honest answer is that we're going to have to upgrade people's hardware 3 computer for those that have bought full self driving. And that is the honest answer. And that's going to be painful and difficult, but we'll get it done. Now I'm kind of glad that not that many people bought the FSD package"

When it came to the Cross-Country Statement, Musk stated at a 2016 press conference that people would be able to go from LA to New York: going from home in LA to dropping someone off in Times Square and then having the car park itself, without the need for a single touch, including the charger. Musk posted versions of this claim on his personal Twitter account three times. In January 2016, Musk tweeted that "in 2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY". When asked for an update on these claims in May 2017, Musk said that the demo was still on for the end of the year, and things were "just software limited". And in May 2019, when asked whether there were still plans to drive from NYC to LA on full autopilot, Musk said that he could have gamed this type of journey the previous year, but when he did it in 2019, everyone with Tesla Full Self-Driving would be able to do it too. That 2019 tweet generated about 2,000 engagements compared to 300 engagements following the 2016 tweet. In October 2016, Tesla showed a video where a Tesla vehicle was driving autonomously (it is still on the Tesla site), and a similar video was shown on YouTube.

Interestingly, Tesla does not dispute that any of the statements or videos were made; it simply states that the FSD could not be obtained until the completion of validation and regulatory approval. However, the plaintiff presented evidence that Tesla had not applied for regulatory approval to deploy a Society of Automotive Engineers Level 3 or higher vehicle in California, which was a necessary step for approval of a full self-driving vehicle.

In terms of the technical claims, the plaintiffs alleged that Tesla violated California's:
- Unfair Competition Law
- Consumer Legal Remedies Act
- False Advertising Law

In addition, they alleged that Tesla engaged in fraud, negligent misrepresentation, and negligence. As a consequence, they filed a motion for class certification so that they could proceed to the next stage of litigation.

What did the District Judge Decide?

The judge had to go through the main elements to determine whether she could certify the class in the class action. With respect to the proposed class representative, the main plaintiff paid Tesla $5,000 for the EAP and $3,000 for the FSD Package for his new Tesla Model S car. He alleged that he purchased these packages because he was misled by the Hardware Statement and the Cross-Country Statement. He saw these statements on the Tesla website in October 2016 and in a Tesla newsletter sent in November 2016. In addition, he read statements that led him to believe that a Tesla would soon drive across the country, and that self-driving software would be available in the next year or two.
He claimed that he discovered the alleged fraud in April 2022. In fact, five customers (including the above plaintiff) brought separate lawsuits against Tesla in September 2022. They alleged similar things and accused Tesla of violating warranties and consumer protection statutes, and of engaging in fraud, negligence, and negligent misrepresentation. The court consolidated the cases, dismissed all warranty claims, and permitted all the plaintiffs' fraud, negligence, and statutory claims to proceed to the extent that they were premised on the Hardware Statement and Cross-Country Statement. It is worth mentioning that some plaintiffs opted out of Tesla's arbitration agreement.

Subsequently, the court noted that class certification was a two-step process:
- The plaintiff had to show that four requirements were met, namely numerosity (the class was so numerous that joinder of all members was impractical), commonality (there were common questions of fact and law), typicality (the claims and defenses were typical of the claims and defenses of the class), and adequacy (the representative parties would fairly and adequately protect the interests of the class)
- The plaintiff had to show that one of the bases for certification was met, such as predominance and superiority (questions of law or fact common to class members predominated over any questions affecting only individual members, and a class action was superior to other available methods for fairly and efficiently adjudicating the controversy)

The judge concluded the following:
- There were some minor differences with the proposed classes. The judge certified two classes: (1) a California arbitration opt-out class, where customers bought or leased a Tesla vehicle and bought the FSD package between May 2017 and July 2024, and opted out of Tesla's arbitration agreement; and (2) a California pre-arbitration class, where customers bought or leased a Tesla vehicle and paid for the FSD package from October 2016 to May 2017, and currently reside in California. Neither class dealt with the EAP, and both classes were narrowed slightly
- Tesla did not contest that numerosity was met
- The plaintiff was able to show that commonality and predominance were met. For the purposes of class certification, the claims were materially indistinguishable and could be analyzed together
- The plaintiff could show that the Hardware Statement would be material to an FSD purchaser. However, the plaintiff could not show common exposure to the Cross-Country Statement
- The plaintiff could show that the issue of whether Tesla vehicles were equipped with hardware sufficient for Full Self-Driving capability was subject to common proof
- The plaintiff was able to show that damages could be established through common proof. Under California law, the proper measure of restitution was the difference between what the plaintiff paid and the value of what the plaintiff received
- Tesla argued that many claims were subject to the statute of limitations and that separate examinations of each plaintiff's situation were needed. The court disagreed, holding that this was not fatal to class certification when there was a sufficient nucleus of common questions
- The requirement of adequacy was met
- Superiority was also established. The economies of scale made it desirable to concentrate all of the plaintiffs' claims in one forum, and this case was manageable as a class action
- The court certified a narrower class, namely all members of the California Arbitration Opt-Out class and the California Pre-Arbitration class who had stated that they wanted to purchase or subscribe to FSD in the future but could not rely on the product's future advertising or labelling
- The plaintiff showed that he had standing to seek injunctive relief, since he had provided the general contours of an injunction that could be given greater substance at a later stage in the case

Accordingly, all elements were met, and the class certification was granted, subject to the modified class definitions. Within 14 days, the plaintiff had to amend the class definition so that the parties could move on to the case management conference. The court also appointed the main plaintiff as the representative plaintiff for the class, and appointed class counsel.

What can we Take from This Development?

This was simply a motion to certify the class action. The judge went through the main elements and confirmed that the class action could move forward. The examination of each component of the test had to do with whether it was more effective to hear the claims together in one class action instead of addressing each claim separately in court. This was not a decision confirming that Tesla engaged in unfair competition, false advertising, negligent misrepresentation, or negligence. It was a preliminary decision that allowed the class action to proceed.
- When Technology Stops Amplifying Artists and Starts Replacing Them | voyAIge strategy
When Technology Stops Amplifying Artists and Starts Replacing Them
AI-generated creativity forces us to confront a new cultural crossroads: if machines can make the art, what remains uniquely human in the act of creating? This matters to any business owner
By Tommy Cooke, powered by medium roast espresso

Key Points:
1. AI isn't just automating routine tasks; it's beginning to replace human creativity where scale and predictability dominate
2. The future of work hinges on what only humans can bring: meaning, perspective, imperfection, and authentic connection
3. If efficiency becomes our only compass, we risk building a world rich in content but poor in humanity

When I was in my 20s, I was the lead guitarist in a regularly gigging and recording rock band. At our busiest, we were performing four to six nights a month while being full-time college students and holding down part-time jobs. We produced an album and two EPs along the way. When you are working that much at your craft while having an incredibly full plate, you always hope for a break. Ours came when our music was introduced to a major record executive. He enjoyed the album and thought one of our tunes would be an instant radio hit. He suggested that we work closely with a well-known hitmaker to punch out more radio-friendly versions of our album. But, in the meantime, there was a catch: to be on the label, we were required to have a certain number of followers on MySpace. We came up short of that number, by orders of magnitude. It would have taken us years to achieve it. To say that we were shocked would be an understatement.

That was back in the late 2000s. Today, being a musician is far more difficult. Not only are targets harder to reach as a bare minimum entry point to talking to labels, but now AI has entered the scene. In a recent Global News article, the famous music journalist and historian Alan Cross speculated about a future that may not be far away: record producers have every impetus to remove the artist from art. In other words, AI-driven music could forever change the way people consume music. As AI-generated music becomes mainstream, Mr. Cross poses a worrying question: what happens to the human in the creative process? What does this shift teach us about AI, authenticity, and human purpose?

At first glance, Alan's article is a story about singers, streaming royalties, and rights-owners. But for business leaders, policy makers, and organisational strategists, the underlying theme is deeper: as AI moves from tool to creator, the boundary between human value and machine delivery is shifting.

The Discomfort of AI

As painful as it is for me to admit, in Alan's dystopic future vision where AI drives content creation, musicians are just not that unique. They're simply the first creative class to experience what economists have been warning about for years: automation begins where scale and predictability create the highest return. Let me give you an example. Pop music is formulaic. How many times have you seen this video, or one exactly like it? It's a routine that has been done time and again, ad nauseam. But the creators make a fascinating point, so if you haven't seen it, take a minute to watch. It's very revealing in terms of how much the structure of popular music is replicated over, and over, and over again. The same thing can be said of social media personas and design principles, too.
My point is that the more quantifiable and structure-driven human content becomes, the more likely machines are to inhabit and reproduce it. For years, people comforted themselves with a hopeful refrain: AI will take the routine tasks so we can focus on creative and strategic work. This is why Alan's vision is alarming. It presents us with an existential tension: one where romantic expectations of technology and its actual outcomes force us to question what happens when efficiency and extraction are prioritized over meaning. So, Alan asks us to stop and recognize a crossroads in front of us, and he's asking us to do so by prompting ourselves with a critical question: what do we want human experience to mean when machines can perform the visible parts of it? This question will have different implications for everyone, and it matters in non-music contexts, too.

What AI as a Creator Means for People and Organisations

While music is the most visible example, parallel dynamics are already unfolding in marketing, design, customer service, legal drafting, and more. Alan isn't merely presenting a vision anymore. He's offering a critical narrative, and it carries three key lessons:

Human value must be re-defined. When an algorithm can generate content at scale, cost-effectively, and without human pain-points (sleep, illness, ego, negotiation), the "value" of human labour shifts. It's no longer just about whether a person can do the job, but also (a) what unique stance the person brings, and (b) how that person shifts from being a deliverer to a designer of meaning. In other words, business leaders that treat humans as input-machines are likely to experience high turnover. If instead they ask, "What does only a human bring?", they protect and amplify their human capital.

Authenticity and trust become strategic assets. Alan's article raises a paradox: people found AI-generated music more arousing, yet human-composed music more familiar. That suggests a gap between novelty and connection. In a world of AI production, human stories, human flaws, and human context become competitive differentiators. Organizations that lean into their human identity, align culture, ethics, and narrative, and resist the "machine everything" push will build stronger trust and attachment.

Strategy must account for the human-machine continuum, not just the machine. Leaders often frame AI as "how do we use this tool to generate faster/cheaper?" But the music story shows the existential side: "What if the machine becomes creator?" and "What if our work becomes obsolete?" The strategic imperative is two-fold: (a) define how humans and machines co-create value, and (b) define safeguards.

What struck me, reading Cross' article and thinking back to that moment in my twenties when a gatekeeper told a young band we needed tens of thousands of invisible followers to be worthy, is that technology has always mediated who gets seen. What's different now is the scale: we're not just gatekeeping humans; we're replacing them in the system. And that invites a stubborn but necessary question: what role do we want people to play in a world of perfect synthesis and endless content? If we allow efficiency alone to steer the ship, we will build a culture optimized for frictionless consumption rather than lived experience; a world full of sound, but not necessarily any music. The point isn't to fear AI or resist progress. It's to remember what makes human work meaningful in the first place.
Creativity isn't merely output. It's the accumulated weight of effort, failure, identity, memory, taste, temperament, private doubt, and public courage. It's the quiet, unglamorous process of becoming someone capable of expression. So, as AI becomes a collaborator, producer, and in some cases a creator, our responsibility isn't to compete with it on volume or speed but to double down on what only people can offer: perspective, dissonance, care, imperfection, and soul. Not to mention, community.
- New York Times Sues OpenAI and Microsoft for Copyright Infringement | voyAIge strategy
New York Times Sues OpenAI and Microsoft for Copyright Infringement The NYTimes lawsuit has the potential to significantly shape copyright and AI policy By Christina Catenacci Aug 2, 2024 Key Points: The Times has sued both OpenAI and Microsoft, alleging copyright infringement, trademark dilution, and unfair competition by misappropriation OpenAI has responded to the Complaint on its website stating, “We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit” A decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools

On December 27, 2023, the New York Times (The Times) sued OpenAI and Microsoft (Defendants) for copyright infringement in the United States District Court in New York. In its Complaint, The Times explained that its work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure its journalism meets the highest standards of accuracy and fairness. In fact, The Times has evolved into a diversified multi-media company with readers, listeners, and viewers around the globe and more than 10 million subscribers. But according to The Times, the joint efforts of the Defendants have harmed The Times, as seen in lost advertising revenue and fewer subscriptions, to name a few. The Times alleges that OpenAI unlawfully used its works to create artificial intelligence products. The Times argued in its Complaint that unauthorized copying of The Times’s works without payment to train Large Language Models (LLMs) is a substitutive use that is “not justified by any transformative purpose”. The Times has sued the Defendants as follows:

Copyright infringement against all Defendants: by building training datasets containing millions of copies of The Times’s works (including by scraping copyrighted works from The Times’s websites and reproducing them from third-party datasets), the Defendants have directly infringed The Times’s exclusive rights in its copyrighted works. Also, by storing, processing, and reproducing those training datasets to train the GPT models on Microsoft’s supercomputing platform, Microsoft and the OpenAI Defendants have jointly directly infringed The Times’s exclusive rights in its copyrighted works

Vicarious copyright infringement against Microsoft and OpenAI: Microsoft controlled, directed, and profited from the infringement perpetrated by the OpenAI Defendants. Microsoft controls and directs the supercomputing platform used to store, process, and reproduce the training datasets containing millions of The Times’s works, the GPT models, and OpenAI’s ChatGPT offerings. The Times alleges that Microsoft profited from this infringement by incorporating the infringing GPT models trained on The Times’s works into its own product offerings, including Bing Chat

Contributory copyright infringement against Microsoft: Microsoft materially contributed to and directly assisted in the direct infringement that is attributable to the OpenAI Defendants.
The Times alleged that Microsoft provided the supercomputing infrastructure and directly assisted the OpenAI Defendants in: building training datasets containing millions of copies of The Times’s works; storing, processing, and reproducing those training datasets to train the GPT models; providing the computing resources to host, operate, and commercialize the GPT models and GenAI products; and providing the Browse with Bing plug-in to facilitate infringement and generate infringing output. The Times said that Microsoft was fully aware of the infringement and of OpenAI’s capabilities regarding ChatGPT-based products

Digital Millennium Copyright Act (removal of copyright-management information) against all Defendants: The Times included several forms of copyright-management information in each of its infringed works, including copyright notice, title and other identifying information, terms and conditions of use, and identifying numbers or symbols referring to that information. However, The Times claimed that, without its authority, the Defendants copied The Times’s works and used them as training data for their GenAI models. The Times believed that the Defendants removed its copyright-management information in building the training datasets, including from works scraped directly from The Times’s websites and from works reproduced from third-party datasets. Moreover, The Times asserted that the Defendants created copies and derivative works based on The Times’s works, and by distributing these works without their copyright-management information, the Defendants violated the Copyright Act.

Unfair competition by misappropriation against all Defendants: by offering content that is created by GenAI but is the same as or similar to content published by The Times, the Defendants’ GPT models directly compete with The Times’s content. The Defendants’ use of The Times’s content encoded within models, and of live Times content processed by models, produces outputs that usurp specific commercial opportunities of The Times. In addition to copying The Times’s content, the Defendants altered it by removing links to the products, thereby depriving The Times of the opportunity to receive referral revenue and appropriating that opportunity for themselves. The Times now competes with the Defendants for traffic and has lost advertising and affiliate referral revenue

Trademark dilution against all Defendants: The Times has registered several trademarks and argued that the Defendants’ unauthorized use of The Times’s marks on lower-quality and inaccurate writing dilutes those trademarks by tarnishment. The Times asserts that the Defendants are fully aware that their GPT-based products produce inaccurate content that is falsely attributed to The Times, yet they continue to profit commercially from creating and attributing such content to The Times. The Defendants’ unauthorized use of The Times’s trademarks has resulted in several harms, including damage to The Times’s reputation for accuracy, originality, and quality, which has caused and will continue to cause economic loss

The Times has asked for statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity.
Additionally, The Times has requested a jury trial.

What can we take from this development? This has the makings of a landmark copyright case and could go a long way toward shaping copyright and AI policy for years to come. In fact, some have referred to this case as “the biggest IP case ever”. In terms of a response to the Complaint, OpenAI made a public statement on its website in January 2024, stating, “We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit”. The company set out its position as follows:

“Our position can be summed up in these four points, which we flesh out below:
We collaborate with news organizations and are creating new opportunities
Training is fair use, but we provide an opt-out because it’s the right thing to do
‘Regurgitation’ is a rare bug that we are working to drive to zero
The New York Times is not telling the full story”

Interestingly, OpenAI has stated that training AI models using publicly available internet materials is “fair use”, as supported by long-standing and widely accepted precedents. It stated, “We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness”. However, the Defendants in this case may run into problems with this argument because The Times’s copyrighted works are behind a paywall. The Defendants are familiar with what this means: it is necessary to pay in order to read (with subscriptions) or use (with proper licensing) that content.

It is concerning that OpenAI refers to regurgitation (word-for-word memorization and reproduction of content) as a bug that it is working to fix, but then says, “Because models learn from the enormous aggregate of human knowledge, any one sector—including news—is a tiny slice of overall training data, and any single data source—including The New York Times—is not significant for the model’s intended learning”. Essentially, OpenAI has downplayed the role that The Times’s works play in the training process while not addressing The Times’s arguments that the Defendants ingested millions of copyrighted works without consent or compensation and have output The Times’s works practically in their entirety.

Another point of interest is that, in the Complaint, The Times stated that it reached out to OpenAI to build a partnership, but the negotiations never resulted in a resolution. However, OpenAI stated in its website post that the discussions with The Times had appeared to be progressing constructively. It said that the negotiations focused on a high-value partnership around real-time display of The Times’s content with attribution in ChatGPT, through which The Times would gain a new way to connect with existing and new readers, and OpenAI’s users would gain access to The Times’s reporting. It stated, “We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us”.

Clearly, there are two sides to this story, and the court will need to sort out what took place in order to make a determination. Ultimately, this case will have a significant impact on the relationship between generative AI and copyright law, particularly with respect to fair use.
In particular, a decision in this case may provide much-needed clarification regarding the use of copyrighted works in the development of generative AI tools, such as OpenAI’s ChatGPT and Microsoft’s Bing Chat (Copilot), both of which are built on top of OpenAI’s GPT models.
- Contact | voyAIge strategy
How to get in touch with our AI strategy and governance experts. Contact Us For questions, inquiries, or requests that require a personal response, we will respond within 48 hours. If you are submitting a request for a quote about our products or services, please use this form here. If you are requesting a proposal or bid, please use this form here.