
Search Results


  • Six Keys to Consider Before Implementing AI Agents | voyAIge strategy

    Six Keys to Consider Before Implementing AI Agents
    Issues and Solutions to Help Prepare Your Organization
    By Tommy Cooke, Oct 25, 2024

    Key Points:
    • Identify specific tasks and processes where AI agents can make compelling contributions
    • Understand and address privacy, security, and ethics vulnerabilities prior to implementation
    • Communicate transparently with employees, offer training, and address their concerns

    AI agents have been explosively trending this past week. AI agents, as voyAIge strategy's Co-Founder Dr. Christina Catenacci describes, are applications that use a combination of different AI models together with conventional automation frameworks to process information, without human interaction. The goal of the AI agent is thus to supplement or even supplant human workers. Unlike the AI many of us have explored over the last calendar year, AI agents have autonomy. They make decisions, perform tasks, and create outputs based on defined goals.

    With companies like Microsoft and Salesforce introducing AI agents, many organizations are considering incorporating them into their operations, and rather quickly. However, it is essential to consider precisely what is at stake in adopting AI agents. How will they change workflows? What impact will they have on workforce morale? In what ways do they, and perhaps do they not, align with your organization's goals and growth plans? Thoughtful, informed preparation is crucial if we are to ensure that AI agents enhance rather than impede critical processes. Here are key considerations that can help generate conversations with your business line leaders and executives, cultivating the strategic planning and insight your organization requires if and when implementing AI agents, especially if you are considering them for the long term:

    Define Objectives, Values, and Use Cases

    Before adopting AI agents, pinpoint exactly what value they can provide. A common misconception is that AI agents can immediately take over complex tasks or roles. This is not the case, especially in the early stages, where human oversight and model adjustment are required. Moreover, different departments and processes will benefit from AI agents in different ways. Begin by auditing your organization. Identify tasks that are repetitive, rule-based, and time-consuming. These are often prime candidates for automation. Basic administrative tasks, inventory management, calendar scheduling: these are all prime examples of time-consuming tasks that can benefit from an AI agent's assistance. Start small. Focus on specific workflows and expand from there. (A simple scoring sketch follows this section.)
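    To make the audit concrete, one lightweight approach is to score each candidate task on the criteria above. The sketch below is illustrative only; the tasks, weights, and scoring scale are assumptions for demonstration, not a prescribed methodology.

    ```python
    # Illustrative task-audit sketch: rank candidate tasks for agent automation.
    # The example tasks, weights, and 1-5 scales below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        repetitive: int        # 1-5: how often the same steps recur
        rule_based: int        # 1-5: how well-defined the rules are
        hours_per_week: float  # time currently spent on the task

    def automation_score(t: Task) -> float:
        # Weight rule-following highest: agents handle codified work best.
        return 0.3 * t.repetitive + 0.4 * t.rule_based + 0.3 * min(t.hours_per_week / 10, 5)

    tasks = [
        Task("Calendar scheduling", repetitive=5, rule_based=4, hours_per_week=6),
        Task("Inventory reorder checks", repetitive=4, rule_based=5, hours_per_week=8),
        Task("Client strategy sessions", repetitive=1, rule_based=1, hours_per_week=12),
    ]

    # Highest-scoring tasks are the most promising pilot candidates.
    for t in sorted(tasks, key=automation_score, reverse=True):
        print(f"{t.name}: {automation_score(t):.2f}")
    ```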
    Prioritize Security and Privacy

    AI agents process large amounts of data. Some of those data are sensitive. Given that 51% of employees have tried tools like ChatGPT in the last year and only 17% of employers have AI policies that might otherwise regulate what information employees give to AI systems, there is already a risk in your organization: employees may be feeding an AI system client information, data measurements, and insights valuable to your brand. Employees are significantly outpacing their employers in using AI at work, and this blind spot can be highly problematic, opening your organization up to non-compliance and legal risks, because most AI systems that employees unofficially use at work train and model on the data anyone provides them. When dealing with AI agents, the potential risk increases quite significantly. If you do not have AI policies in place, as well as data privacy policies, these are crucial requirements. You will also need to develop and implement a clear data governance policy to ensure that AI agents handle your own and your stakeholders' data safely and securely.

    Establish a Clear Accountability Structure

    Because AI agents act autonomously without direct human input, it is difficult to know how their decision-making will align with, and deviate from, an organization's values, priorities, and procedures. If and when an AI agent misses the mark, who is accountable for reporting and addressing these errors, and to whom are they accountable, exactly? Establish ethical guidelines for AI agents before they are fully implemented and deployed. How should an AI agent handle sensitive situations? What exactly defines a sensitive situation, as well as sensitive information? When should certain decisions be deferred to a human decision-maker? Layers of oversight that are clearly articulated in AI ethics and accountability policies ensure that AI agents can be audited and adjusted if they make inappropriate or harmful decisions. Create a governance structure that holds specific teams and roles accountable for monitoring and managing an AI agent's actions.

    Assess and Understand Workforce Impact

    The introduction of AI agents will change the nature of work in your organization. While it is true that AI agents will likely return valuable time to your hardworking employees, 38% of employees are nervous that AI will replace them, while 51% worry that AI will negatively impact their mental health. There is a considerable likelihood that your workforce will have questions and concerns. Engaging employees honestly and accurately, and giving them avenues to be heard, is critical if any AI system is to work in an organization. It is recommended that businesses use transparent communication with employees. It is paramount that AI agents are clearly explained, described, and situated within specific roles and contexts. Employees need to hear that they will not be replaced. HR leaders should be proactive in providing training opportunities that help employees adapt to these changes as well. Work should be undertaken to restructure roles to reflect altered workflows.

    Start Small with a Pilot Program

    Mass implementation of any digital solution is risky. While software and automated tools can save time, they need to be learned before they are fully understood and embraced. A pilot program is a small-scale, isolated, and controlled study that allows an organization to understand feasibility, cost, roadblocks, and opportunities in isolation. Choose a single team, department, or process for your pilot project, for example, automating customer complaint responses. A controlled approach, with a small team testing an AI agent over the course of a couple of months, allows you to gather data on the effectiveness of AI while minimizing disruption. Choose a team of individuals who understand and are familiar with AI, preferably employees who are excited about AI and can champion the cause and socialize it later. Use the pilot's findings to make any necessary adjustments.

    Develop a Monitoring Plan

    AI systems learn. They change by adjusting their behaviour in an attempt to improve over time. As any teacher in a classroom knows, how one learns is as important as the learning itself. Testing, monitoring, and providing opportunities to expand horizons are crucial to any person's, or AI system's, successful growth. To facilitate successful growth, establish a team that will monitor the AI agent's performance. Regular audits should become a part of your organization's AI governance repertoire. Establish key performance indicators (KPIs) to measure the AI agent's success in meeting its goals, adjusting along the way as needed; a minimal sketch of such tracking follows.
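    The sketch below shows what simple KPI tracking for an agent might look like in code. The metric names and the alert threshold are illustrative assumptions, not recommendations; a real deployment would feed these counters from the agent's logs.

    ```python
    # Illustrative KPI monitor for an AI agent. Metric names and the
    # threshold are hypothetical; adapt them to your own pilot program.
    from dataclasses import dataclass

    @dataclass
    class AgentKPIs:
        tasks_completed: int = 0
        tasks_escalated: int = 0      # deferred to a human decision-maker
        errors_flagged: int = 0
        max_error_rate: float = 0.05  # assumed audit-trigger threshold

        def record(self, completed: bool, escalated: bool = False, error: bool = False):
            self.tasks_completed += int(completed)
            self.tasks_escalated += int(escalated)
            self.errors_flagged += int(error)

        def needs_audit(self) -> bool:
            total = self.tasks_completed + self.tasks_escalated
            if total == 0:
                return False
            return self.errors_flagged / total > self.max_error_rate

    kpis = AgentKPIs()
    kpis.record(completed=True)
    kpis.record(completed=True, error=True)
    if kpis.needs_audit():
        print("Error rate above threshold: schedule a governance review.")
    ```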
    Continuous monitoring not only helps mitigate risks, but also ensures successful and smooth operation in the long term.

    Adopt for Long-Term Success, Not Short-Term Gain

    As with any release of new technology, industry and media hype tremendously stimulates early adoption. A challenge of early adoption is not being aware of a technology's limits and challenges. Left unchecked and misunderstood, they can derail investments and disturb workforces. Start small, stay informed, and ensure that both technology and human talent are aligned for future success.

  • The Who Why and How of Human in the Loop | voyAIge strategy

    The Who Why and How of Human in the Loop
    Embracing AI Failures to Turn Mistakes into Growth
    By Tommy Cooke, Oct 18, 2024

    Key Points:
    • Human-in-the-Loop (HITL) positions employees as guides of their own AI systems
    • AI failures reveal system limitations but also offer opportunities for refinement, turning errors into a pathway for improvement
    • A balanced HITL approach is essential to integrate human values, prevent biases, and ensure AI evolves responsibly and in alignment with an organization's values

    57% of American workers have tried ChatGPT at work. AI adoption at work is on the rise. As more and more employees interact regularly with tools like ChatGPT, they become increasingly familiar with them. Relationships are developing between your employees and Natural Language Processors like ChatGPT. Understanding that relationship, and how to leverage it, is crucial to using AI successfully in your organization. As the relationship continues to grow, your employee is becoming a kind of AI supervisor.

    Over the last year, the concept of "Human-in-the-Loop" has become a commonplace, fundamental concept in AI. It refers to a form of human oversight of an AI system, such as an engineer adjusting a large AI system behind closed doors. However, if your employees are using systems like ChatGPT regularly, they are positioned to participate in HITL in ways that can be tremendously valuable. When an employee interacts with AI, they are not merely a passive user. They can actively become a part of its learning process. Every prompt, correction, and piece of feedback they provide refines the AI, guiding it to better align with the employee's goals. More simply, the employee becomes their own Human-in-the-Loop.

    Take my recent experience as an example. I asked ChatGPT to help me refresh a jazz guitar lesson plan. I've been studying jazz guitar for a couple of years now, and one of my goals is to get more comfortable with inverted chord voicings (for non-musicians: if a chord is the sum of its notes, changing the order of those notes can enrich your writing and playing). Initially, ChatGPT suggested I spend an extra 15 minutes a week drilling scales (individual notes, not chords). I was confused. That was not my goal. So, I told ChatGPT, "Thanks for the suggestion, but I'm not interested in scales right now. Let's make sure we're focusing on the goals I explicitly share so we can build a plan that better uses my time." ChatGPT apologized. It then provided me with a set of inverted chord exercises that will keep me busy through April 2025.

    My interaction is a common one among ChatGPT users, and it is important. As individuals, employees have the power to shape and improve AI outputs directly. AI is bound to make mistakes, like misunderstanding intentions and goals. The key is recognizing that limitations are not setbacks. They are opportunities for growth. When employees see themselves as in-the-loop with AI, they can take an active role in recognizing its limitations and pushing it to improve. They can be essential to refining AI and making it more aligned with their own or an organization's needs. Each interaction, correction, and bit of guidance we provide helps the AI learn more effectively. Recognizing that AI failures are sources of insight and growth is an important capability for any organization. Not only do failures reveal how systems are built, their tendencies, and their biases; they also reveal a pathway for encouraging the system to develop and perform more efficiently and more accurately in the future.

    Leveraging Failures as Opportunities

    AI failures provide critical insight into the dynamics of how humans develop relationships with AI; more specifically, the dynamics at play between systems that learn and human intention. Human decision-making is complex. It involves values, context, empathy, historical biases, and so on. AI can struggle in its inability to understand these subtle human complexities. This is part of what makes a Human-in-the-Loop so important. It ensures that human judgement and intent are integrated into an AI system's learning process so that it becomes better at replicating human behaviours. For example, AI models used for job recruitment have been known to be biased against certain minority groups. It is crucial to identify why this bias exists and address it directly. A Human-in-the-Loop plays a critical role here. They can analyze data and algorithms to determine where the issue occurred and what parameters led to the unintended outcome, and begin designing a solution.

    How You and Your Employees Can Be Your Own Human-in-the-Loop

    Here are three actionable tips that you, your colleagues, or any employee can follow to encourage tools like ChatGPT to improve, especially when they make a mistake (a short code sketch follows the list):

    Explain: Clearly point out what went wrong and why. Provide the rationale behind your thinking. This helps the system learn more effectively and align with your specific needs.

    Coach: If AI seems to misunderstand your request, guide it by rephrasing or breaking down your request into smaller components. This makes it easier for the tool to understand your intentions and learn from them.

    Validate: Positive reinforcement can help. When AI gets it right, acknowledge it. This validation encourages AI to replicate the improved behavior in the future.
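    For teams that want to build these habits into tooling, the sketch below shows what an explain-coach-validate exchange might look like programmatically. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name and prompts are illustrative, and the same conversational pattern applies to any chat-style assistant.

    ```python
    # A minimal explain/coach/validate loop, assuming the OpenAI Python SDK (v1+).
    # The model name and the prompts are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = []      # running conversation, so each correction stays in context

    def ask(feedback: str) -> str:
        """Append a user turn, get the assistant's reply, keep both in context."""
        messages.append({"role": "user", "content": feedback})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    ask("Refresh my jazz guitar lesson plan; my goal is inverted chord voicings.")
    # Explain: point out what went wrong and why.
    ask("The scale drills miss my stated goal. Please focus on chord inversions.")
    # Coach: rephrase and break the request into smaller components.
    ask("Start with drop-2 voicings on the top four strings, one chord quality per week.")
    # Validate: acknowledge what worked so the format is kept.
    ask("That weekly structure is exactly what I wanted. Keep that format.")
    ```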
    HITL as a Balanced Partnership, When Kept in Check

    Building a relationship with AI is a dynamic process. It is about fostering growth, accountability, and understanding. While employees can be their own Human-in-the-Loop, it is essential that there is at least an awareness of, and an intention to align, an employee's guidance of AI with the organization's goals and priorities. Without this alignment, individuals adjusting and guiding AI may inadvertently trigger old biases, or create new ones, that diverge from strategic objectives. Human-in-the-Loop is certainly a bridge that connects human values, expertise, and context with the computational power of AI, but it must be implemented thoughtfully. As organizations become more comfortable integrating AI, remember that Human-in-the-Loop is about creating synergy between human insight and machine learning. By maintaining human involvement, we ensure that AI evolves in a direction that benefits not only the organization but also its employees, customers, and broader stakeholders.

  • Code of Conduct | voyAIge strategy

    Code of Conduct

  • Services Mobile | voyAIge strategy

    Our Services

    VS offers industry-leading, experienced, and comprehensive solutions to support your successful use of AI. Have a look at what we provide. Don't hesitate to contact us.

    Policy & Procedures: Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant.
    Research & Writing: We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs.
    Impact Assessments: We take a deep dive into your organization's policies as well as data and AI operations to uncover hidden risks.
    AI Solution Scoping: Our team assesses your organization's needs, pain points, and opportunities.
    Compliance: Let our team review, detect, and eliminate risks in your AI systems and business operations.
    Invited Talks: We engage audiences with unique viewpoints that demystify complex legal, scholarly, political, popular, media, and philosophical understandings of AI.
    Ethical AI Playbooks: Our playbooks assist organizations in navigating and responding to internal and external crises.
    Stakeholder Engagement: Maximize AI adoption and AI project successes as we assist you in aligning your organization's stakeholders.

  • Privacy Policy | voyAIge strategy

    Privacy Policy

  • AI Thought Leadership | voyAIge strategy

    Expert AI insights and strategic content to position your organization as an industry leader.

    Thought Leadership

    We create expert-level content to position your organization as a leader in the AI space, showcasing your talent, knowledge, and vision. From blogs and newsletters to social media and podcasting, our insights help you build credibility and trust with employees, stakeholders, and clients.

    What is Thought Leadership?

    Thought leadership is the strategic process of creating content and communication that positions an individual or organization as an authority in their field. It involves producing insightful, relevant, and forward-thinking content that showcases expertise and deep knowledge of industry trends, challenges, and opportunities.

    What does Thought Leadership encompass?

    1. Establishing Expertise
    Thought leadership is about sharing in-depth knowledge and insights that highlight an individual or organization's proficiency. For businesses, it goes beyond promotional content. It's about demonstrating a command of the field, which builds credibility and trust.

    2. Influencing and Leading Industry Conversations
    A key aspect of thought leadership is contributing to and shaping discussions within the industry. This includes highlighting emerging trends, opportunities, and challenges. Offering unique perspectives or innovative solutions to common industry issues is a key way of contributing to and moving along the conversation on AI. The goal is to be at the forefront of industry conversations, establishing a company or individual as a go-to resource for reliable information and insights.

    3. Providing Value to Audiences
    Thought leadership content is valuable because it educates, informs, and inspires action. It helps audiences understand complex topics, make informed decisions, and see new opportunities. By delivering well-researched and relevant content, thought leaders build a loyal audience who sees them as a reliable source of information and guidance.

    4. Building a Brand's Authority and Credibility
    Companies and professionals who consistently produce thoughtful, authoritative content establish themselves as credible leaders in their sector. This credibility is crucial for building trust with stakeholders, clients, and employees. It also positions the brand as an entity that knows its market and understands its environment. Over time, reputational gains translate into increased opportunities, such as partnerships, media features, speaking opportunities, or new business ventures.

    5. Demonstrating Command of Industry Developments
    Thought leadership keeps audiences informed about the latest industry developments, including technological advancements, regulatory changes, and emerging best practices. It involves research and the ability to interpret and translate information into digestible and actionable insights for the audience. For example, an AI-driven organization might publish thought pieces on ethical AI practices, the impact of regulations like the EU AI Act, or trends in AI modelling.

    6. Engaging Stakeholders Through Authentic Storytelling
    Effective thought leadership combines expertise with storytelling. It's not just about facts and data. It's about weaving narratives that connect with stakeholders and build engagement. This can include sharing customer success stories, case studies, or experiences that showcase how the organization is tackling real-world challenges.

    7. Leveraging Various Channels and Formats
    Thought leadership isn't limited to written content; it extends across multiple platforms and formats to reach diverse audiences. These channels can include blogs, social media posts, podcasts, webinars, white papers, research reports, and more.

    Our Thought Leadership Samples

    Every week, voyAIge strategy generates thought leadership content that it shares on its homepage. Called "Insights", these pieces regularly share AI-related news, offer insights and analysis, and break down what developments mean for any organization by providing advice and actionable steps. We also provide three key takeaways to help you find what is most relevant, straight away.

    Book a Free Consultation to Learn More about our Thought Leadership services

  • Free Consultation | voyAIge strategy

    Free Consultation: submit the contact form with your organization name, industry, contact details, the service or product you are interested in, any specific AI you are curious about, your project deadline, and any additional information or request details. We will respond within 24 hours.

  • News (List) | voyAIge strategy

    As AI continues to reshape industries, understanding its organizational, legal, social, and ethical impacts is essential for successfully running an organization. Our collection of articles offers both depth and breadth on critical AI topics, from legal compliance to ethical deployment, providing you with the knowledge necessary to integrate AI successfully and responsibly into your operations. With 85% of CEOs affirming that AI will significantly change the way they do business in the next five years, the urgency to adopt AI ethically and fairly cannot be overstated. Dive into our resources to ensure your growth with AI is both innovative and just, positioning your organization as a leader in the conscientious application of advanced technology.

    Insights
    Articles to increase awareness and understanding of AI adoption and integration

    • Trump Signs Executive Order on AI (Dec 15, 2025)
    • Legal Tech Woes (Dec 5, 2025)
    • Meta Wins the Antitrust Case Against It (Nov 27, 2025)
    • Cohere Loses Motion to Dismiss (Nov 21, 2025)
    • What is “AI Augmentation”, and How Do You Achieve It? (Nov 14, 2025)
    • Budget 2025 (Nov 7, 2025)
    • When Technology Stops Amplifying Artists and Starts Replacing Them
    • California Bill on AI Companion Chatbots (Oct 31, 2025)
    • Reddit Sues Data Scrapers and AI Companies (Oct 24, 2025)
    • Data Governance & Why Business Leaders Can’t Ignore It (Oct 13, 2025)
    • Canada’s AI Brain Drain (Oct 17, 2025)
    • Americans Feel the Pinch of High Electricity Costs (Oct 17, 2025)
    • Newsom Signs Bill S53 Into Law (Oct 10, 2025)
    • The Government of Canada launches an AI Strategy Task Force and Public Engagement (Oct 3, 2025)
    • Privacy Commissioner of Canada (OPC) Releases Findings of Joint Investigation into TikTok (Sep 26, 2025)
    • How an Infrastructure Race is Defining AI’s Future (Sep 26, 2025)
    • Google Decision on Remedies for Unlawful Monopolization (Sep 19, 2025)
    • Tesla Class Action to Move Ahead (Aug 22, 2025)
    • De-Risking AI Prompts (Aug 8, 2025)
    • The Canadian Cyber Security Job Market is Far From NICE (Jul 25, 2025)

  • Legal Tech Woes | voyAIge strategy

    Legal Tech Woes
    The Story of How Fastcase Sued Alexi Technologies
    By Christina Catenacci, human writer, Dec 5, 2025

    Key Points:
    • On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017
    • In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database
    • According to Fastcase, Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform; it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions

    On November 26, 2025, Fastcase Inc (Fastcase), an American legal publishing company founded in 1999, commenced an action against Alexi Technologies Inc (Alexi), a Canadian legal tech company that began as a research institution in 2017, in the United States District Court for the District of Columbia.

    Background

    It is first important to understand the context of this lawsuit. In particular, Fastcase spent decades building one of the industry’s most comprehensive and innovative legal research databases. In 2023, Fastcase merged with vLex LLC (vLex) and became part of the vLex Group, which was subsequently acquired by Clio, Inc (Clio), a company valued at $5 billion, on November 10, 2025. The acquisition was for $1 billion and was characterized as one of the most significant transactions in legal technology history. Alexi, on the other hand, initially operated with a small team of research attorneys who used a passage-retrieval AI system to help prepare legal memoranda for clients.

    In 2021, Fastcase and Alexi entered into a Data Licence Agreement (Agreement) where Fastcase would grant Alexi limited access to Fastcase’s proprietary and highly curated case law database. From Fastcase’s perspective, the main term of the Agreement was that the licence was expressly restricted to internal research purposes. For example, it was limited to research that was performed by Alexi’s own staff lawyers in preparing client memoranda. And most importantly, Alexi agreed that it would not use Fastcase data for any commercial purpose, use the data to compete with Fastcase, or publish or distribute Fastcase data in any form.

    This Agreement was important to Fastcase, given the number of years and the millions of dollars in investment required to create one of the most sophisticated legal research databases in the industry. More precisely, Fastcase’s efforts involved extensive text and metadata tagging, specialized structuring into HTML, and proprietary formatting and harmonization processes that required significant technical expertise and sustained investment. Thus, Fastcase entrusted Alexi with access to this highly valuable, unique proprietary compilation solely for the narrow internal research purpose defined in the Agreement.

    There was a time when Fastcase and Alexi considered entering into a partnership. In 2022, Alexi sought to integrate its passage-retrieval AI system with Fastcase’s database so that Alexi customers could directly access Fastcase case law. However, that partnership never materialized. Instead, in 2023, Fastcase proceeded with its merger with vLex and expanded its own research offerings.
    Yet, following the merger, Fastcase continued operating under the Fastcase name, and the Agreement with Alexi remained in full force and effect. But then, according to Fastcase, Alexi began pivoting from occupying different roles in the legal-tech space into direct competition with Fastcase. That is, Fastcase says that Alexi expanded its use of Fastcase data beyond the license’s narrow internal-use limitations, using that data to build and scale its own legal research platform; it began publishing and distributing Fastcase-sourced case law directly to its users in clear violation of the Agreement’s core restrictions. What’s more, Alexi began holding itself out as a full-scale legal-research alternative to incumbent providers, including Fastcase.

    According to Fastcase, Alexi shortcut the massive investment required to build a comprehensive commercial legal-research platform by using the Fastcase data for the very commercial and competitive purposes that the Agreement expressly forbade. That was not all: Fastcase believed that Alexi misused its intellectual property to bolster its own credibility and to suggest an affiliation that did not exist. Further, Fastcase believed that Alexi misappropriated Fastcase’s compilation trade secrets. As a result, Fastcase says that Alexi has appropriated Fastcase’s decades of investment while simultaneously damaging Fastcase’s market position and goodwill.

    But what Fastcase highlighted above all else was that Alexi never notified Fastcase of its changing service model, its expanding use of Fastcase data, or its intent to compete directly with Fastcase. It never even tried to renegotiate the Agreement to authorize its new uses. Instead, Alexi continued to rely on the internal-use license while using Fastcase data to build, train, power, and market a direct competitor.

    When Fastcase discovered what Alexi was doing, vLex (acting on behalf of Fastcase) sent Alexi a written Notice of Material Breach in October 2025. The notice explained that Alexi was using Fastcase data for improper commercial and competitive purposes in violation of the Agreement and demanded that Alexi cure its breach within 30 days, as required by the Agreement. In response, Alexi denied any wrongdoing. In early November 2025, Alexi’s counsel sent a letter rejecting Fastcase’s notice that actually admitted that Alexi had used the Fastcase data to train and power its generative AI models, asserting that this did not constitute a violation of the Agreement. Moreover, the letter stated that the intention of the Agreement was never to preclude Alexi from using Fastcase’s data as source material for Alexi’s generative AI product. Rather, this was exactly why Alexi was paying Fastcase nearly a quarter million dollars annually. Fastcase has terminated the Agreement, yet Alexi has continued to use the data.

    What did Fastcase Claim Against Alexi?

    Fastcase has made the following claims:

    Breach of contract. Alexi was granted only a limited, non-exclusive, non-transferable license to use Fastcase’s data solely for Alexi’s internal research purposes. Fastcase performed its obligations under the Agreement by granting access to the data, but Alexi materially breached the Agreement by using the data for commercial and competitive purposes. Alexi has caused, and will continue to cause, irreparable harm to Fastcase, including loss of control over its proprietary compilation, erosion of competitive position, and impairment of contractual and intellectual property rights.
    Trademark infringement. Fastcase has, at all relevant times, used the Fastcase trademarks in commerce in connection with its legal-research products, software, and related services. Its trademark registration remains active, valid, and in full force and effect. Without Fastcase’s consent or authorization, Alexi has used, reproduced, displayed, and distributed the Fastcase marks in its platform interfaces, public presentations, promotional materials, and commercial advertising. It even used the marks in ways that suggested that Alexi’s products were affiliated with Fastcase when no partnership was ever formed, and this constitutes infringement.

    Misappropriation of trade secrets. Fastcase has devoted decades of engineering, editorial, and resource investment to build and refine its compilation. To maintain the secrecy and value of its compilation, Fastcase has required licensees and partners to enter confidentiality, non-use, and restricted-use agreements, including the Agreement with Alexi, alongside other technical and security measures. Alexi’s misappropriation included using Fastcase’s confidential compilation and metadata structure to train large-scale generative AI models, power user-facing legal-research features, generate outputs incorporating Fastcase data, and provide end users with direct access to the content of the Fastcase compilation. Fastcase has suffered and continues to suffer substantial harm, including loss of licensing revenue, competitive injury, market displacement, unjust enrichment to Alexi, erosion of goodwill, and diminution of the value of Fastcase’s proprietary compilation.

    False designation of origin and unfair competition. Without the consent of Fastcase, Alexi has used Fastcase’s marks in commerce on its platform interfaces, in product demonstrations, and in promotional and advertising materials. This falsely suggests to consumers, legal professionals, and industry participants that Fastcase endorses, sponsors, authorizes, or is affiliated with Alexi’s competing legal-research platform, even though no such relationship exists and Fastcase has expressly declined to form a partnership with Alexi. This constitutes a false designation of origin and a false or misleading representation of affiliation, connection, or sponsorship. This conduct is likely to cause, and has already caused, consumer confusion, mistake, and deception as to the origin of Alexi’s products, whether Alexi’s products incorporate or are powered by Fastcase’s proprietary services with authorization, and whether Fastcase has partnered with, approved, or is otherwise associated with Alexi.

    Consequently, Fastcase is asking for the following:
    • Judgement for Fastcase
    • A declaration of the breach of the Agreement
    • A permanent injunction
    • An award of compensatory damages
    • An order of disgorgement requiring Alexi to account for and disgorge all revenues, profits, cost savings, and other benefits derived from its unauthorized use of Fastcase’s data
    • An order requiring the return and destruction of all Fastcase data in Alexi’s possession, custody, or control, including all copies, derivatives, embeddings, model weights, datasets, or training artifacts incorporating or derived from Fastcase data, together with a certification of complete purge and destruction
    • Monetary relief for actual damages attributable to the infringement

    What Can We Take from This Development?

    At this point, we have not yet seen Alexi’s defence. Alexi will likely argue that this was simply a misunderstanding of the Agreement.
    One question that will surely arise during the proceedings is whether the Agreement set out a definition of “internal research purposes”. Could there actually be a way to argue that training an AI system was within the scope of the Agreement, and that this was why Alexi was paying so much to Fastcase each year? Although Alexi may have internally considered fully automating the creation of legal research memos, it may be difficult for it to show that, at the time of forming the Agreement, it contemplated removing its internal research component entirely and publishing Fastcase case law directly to end users. We will have to wait and see what happens.

  • The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI | voyAIge strategy

    The US AI Safety Institute Signs Research Agreements with Anthropic and OpenAI
    Agreement has potential to influence safety improvements on AI systems
    By Christina Catenacci, Sep 13, 2024

    Key Points:
    • The Safety Institute has signed research agreements with Anthropic and OpenAI
    • The Safety Institute will receive access to major new models from each company prior to and following their public release
    • The Safety Institute will be providing feedback and collaborating with the companies

    The US AI Safety Institute (Safety Institute) has recently signed research agreements with Anthropic and OpenAI. This article describes the details as set out in the Safety Institute’s recent press release.

    What is the Safety Institute?

    The Safety Institute, located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is focused on developing the testing, evaluations, and guidelines that will help accelerate safe AI innovation in the United States and around the world. The Safety Institute recognizes the potential of artificial intelligence, but simultaneously acknowledges that there are significant present and future harms associated with the technology. Additionally, the Safety Institute is dedicated to advancing research and measurement science for AI safety, conducting safety evaluations of models and systems, and developing guidelines for evaluations and risk mitigations, including content authentication and the detection of synthetic content.

    270 days following President Biden’s Executive Order on AI, the Safety Institute created draft guidance to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models. NIST released three final guidance documents that were first released in April for public comment, as well as a draft guidance document from the Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. The goal is to have the following guidance documents and testing platform inform software creators about the risks and help them develop ways to mitigate those risks while supporting innovation:

    • Preventing Misuse of Dual-Use Foundation Models
    • Testing How AI System Models Respond to Attacks
    • Mitigating the Risks of Generative AI
    • Reducing Threats to the Data Used to Train AI Systems
    • Global Engagement on AI Standards

    One guidance document, Managing Misuse Risk for Dual-Use Foundation Models, deals with the key challenges in mapping and measuring misuse risks. This is followed by a discussion of several objectives: anticipate potential misuse risk; establish plans for managing misuse risk; manage the risk of model theft; measure the risk of misuse; ensure that misuse risk is managed before deploying foundation models; collect and respond to information about misuse after deployment; and provide appropriate transparency about misuse risk.

    What do the Agreements Require?

    In its press release, the Safety Institute announced collaboration efforts on AI safety research, testing, and evaluation with Anthropic and OpenAI.
    Each company’s Memorandum of Understanding establishes the framework for the Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Elizabeth Kelly, director of the U.S. AI Safety Institute, stated: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety…These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

    It will be interesting to see what comes of these collaborations. More specifically, time will tell whether the Safety Institute actually provides meaningful feedback to Anthropic and OpenAI on potential safety improvements to their models, and whether the companies attempt to incorporate the Safety Institute’s feedback to improve safety and better protect consumers.

  • Playbooks | voyAIge strategy

    Practical AI adoption guides with actionable steps to ensure effective implementation.

    Playbooks

    Having clear, actionable guides is essential for successful AI use. Our playbooks provide your team with the direction they need to navigate critical situations confidently and efficiently.

    What Are Playbooks, Exactly?

    Playbooks are comprehensive, scenario-based guides that outline actions, tips, best practices, case studies, and other value-added guidance for employees and managers to follow in particular situations. They serve as a reference tool that empowers your team to make informed decisions quickly, ensuring consistency and efficiency across your organization.

    Playbook Examples

    AI Incident Response Playbook: Guides your team on how to react swiftly and effectively when an AI system encounters an error or produces unexpected outcomes.
    Compliance and Regulatory Playbook: Provides step-by-step procedures for ensuring that all AI-related activities are in line with current laws and regulations, helping to avoid costly fines and reputational damage.
    Ethical AI Decision-Making Playbook: Offers a framework for navigating ethical challenges in AI development and deployment, ensuring that your AI practices align with your company’s values and ethical standards.
    Data Privacy Playbook: Details the processes for handling and protecting sensitive data within AI systems.

    The Playbook Development Process

    At voyAIge strategy, we take a systematic approach to developing your playbooks, ensuring that each one is tailored to your specific needs and organizational context. Here’s how we work with you to create effective, actionable playbooks:

    Discovery: We start by discussing your organization's specific needs, goals, and challenges. This helps us understand the scenarios your team faces and the outcomes you want to achieve. We identify the key areas where playbooks will add the most value and outline the structure of the playbooks we’ll create.

    Development: Next, we develop the content of your playbooks. This involves creating clear, step-by-step instructions, decision trees, and best practices that your team can easily follow. We ensure that all content is aligned with your organizational goals, industry standards, and regulatory requirements.

    Iteration: We work closely with your team to review the draft playbooks, incorporating feedback and making necessary adjustments. This collaborative process ensures that the playbooks are practical, relevant, and easy to use.

    Delivery: Once finalized, we deliver the playbooks in a format that suits your needs, whether it’s a digital document, an interactive guide, or a PDF manual. We also offer training sessions to help your team get the most out of the playbooks and ensure they are effectively implemented across your organization.

    Ready to Equip your Team? Book a Free Consultation
