
Search Results


  • voyAIge strategy | Data & AI Governance Specialists

Our governance solutions ensure your successful and safe use of data and AI. We are seasoned specialists in AI law, policy, and ethics. We make AI and people work.

Tommy Cooke, Co-Founder
Christina Catenacci, Co-Founder

voyAIge strategy (VS) helps organizations responsibly navigate AI through innovative strategic solutions, such as tailored policies, thought leadership content, and workforce and executive training. Our goal is to make AI and people work by bridging the gap between AI and people. We align AI use with real human needs and talents. Our solutions empower people to use AI confidently, safely, and effectively.

Book a Free Consultation

Managed Data & AI Governance Services: Guidance, Oversight, and Leadership for your AI Journey

Our Managed Data & AI Governance Services offer your organization the confidence to move forward with data and AI. Think of us as your virtual Chief Data or Chief AI Officer: someone on-call and working with you. We meet you where you are in your data and AI journey and provide the support services you need at an affordable monthly rate that fits your budget. From a tailored AI strategy and risk assessments to executive and staff training to communications planning, we help ensure your data and AI use is aligned with your goals and is safe, responsible, and built to last. Whether you are building your first use cases, integrating data across departments, or exploring risk mitigation, we provide the leadership and structure to make AI work for your people and your goals.

Our extensive experience in AI and related areas such as data governance, data privacy and security, intellectual property, and confidential information has helped us craft a simple yet reliable approach to guide you in your AI journey:

Data Governance (End-to-End Management): We partner to set up, maintain, and evolve the structures, roles, policies, and operations required to treat data as a trusted asset across the enterprise.
AI Governance (Strategic AI Enablement): We help you govern AI within your business, from use-case identification, model selection, and deployment through to monitoring, ethics, and control frameworks.
Ecosystem Oversight: Define data ownership, steward roles, access controls, metadata management, and lifecycle processes.
Use-case to Value: Identify high-impact AI opportunities, run pilots, operationalize them, and embed the capabilities in business processes.
Compliance & Risk Reduction: We ensure your data handling meets legal, regulatory, and ethical standards.
Operational Readiness: Deploy AI governance frameworks, playbooks, and training so your teams act confidently and safely.
Operational Readiness: Equip your organisation with the governance structure that enables reliable analytics, BI, and advanced data capabilities.
Compliance & Risk Reduction: We ensure your data handling and AI operations meet legal, regulatory, and ethical standards as well as industry best practices.

Our Managed Data & AI Governance Services deliver benefits that accelerate safe and successful growth:

Clarity on strategy and direction: We help you set a focused, realistic AI roadmap.
Compliance and risk management: Your AI stays aligned with law and compliance.
Expertise without full-time cost: Access senior-level guidance at a fraction of the cost.
Support that grows with your needs: We adapt as your AI use evolves.
Faster, safer implementation: Avoid false starts with structured deployment.
Confidence across teams and stakeholders: Build trust in AI with clear guidance and communication.
Most Organizations Encounter the Same AI Challenges

Most organizations encounter the same kinds of roadblocks when adopting AI. These challenges can stall progress, create risk, and leave teams overwhelmed or misaligned. VS provides solutions to address these challenges:

Fear of AI: Executives fear lost ROI as well as strategic or stakeholder misalignment, while employees fear replacement, uncertainty, extra work, and inadequate training. The solution is Training.
Inappropriate Use of AI: Executives worry about employee misuse, data leaks, and non-compliance, while employees often lack clarity on the rules and accidentally share sensitive information. The solution is AI Policies.
Lack of Preparedness: Organizations are unsure if they are ready for AI, lack budget clarity, and struggle to communicate effectively with stakeholders. The solutions are thought leadership and stakeholder engagement.
No Leadership: Organizations often do not have an internal AI expert. There may be no AI direction, no coordination between departments, and no one in charge of decision-making. The solution is VS's Managed AI Services.
No Strategy: Leaders do not know which AI tools are the right fit, or they are overwhelmed by options. There is no roadmap or strategy for AI adoption, nor is there a change management plan in place. The solution is Adoption and Transformation.
Too Many Questions: "Where do we start?" "Do we need a plan?" "Is AI worth the investment?" "What AI do we need?" The solution is the AI Helpdesk.

Strategic and Critical Insights to Guide your AI Journey: Trump Signs Executive Order on AI | Legal Tech Woes | Meta Wins the Antitrust Case Against It

  • AI & The Future of Work DL | voyAIge strategy

    Thank you for subscribing! Download the Report Here Read the Report Online Here

  • Selling Chatbots? | voyAIge strategy

Selling Chatbots? Key Steps and Considerations
By Tommy Cooke
Dec 5, 2024

Key Points
Research and understand the relevant regulatory frameworks at national and state/provincial levels
Ensure ethical deployment by diversifying training datasets, conducting regular audits for bias, and communicating chatbot capabilities and limitations clearly to staff and users
Scan vendors thoroughly to identify not only a compliant, responsive, and trusted partner but also one that provides a service that fits the diverse and nuanced shape and character of your sales and marketing team's voices, opinions, and visions
Develop a comprehensive internal and external communication plan, engage stakeholders early, and ensure your clients understand the benefits of the chatbot

Every Christmas I visit the same online music store. I have a lot of family and friends who are musicians, so I enjoy gifting music gear. Typically, I see the usual sales and marketing angles: gear on sale, high-end used equipment, discounts on software, and of course, Christmas ribbons wrapped around everything. But this year, something different caught my eye: my favorite online music store now has a chatbot.

There's a floating blue bubble at the bottom of my screen. It says, "Chat with Steve!" I know Steve personally. He's an excellent salesperson. When I click on the bubble, I see a cartoon avatar of Steve, giving me his patented thumbs-up. He asks, "How can I help you today?" It doesn't take long for me to realize that I'm not actually talking to Steve. Instead, the store modeled AI-Steve off his blogs and articles. It feels like I'm talking to Steve, and that makes me feel good. On my latest interaction, I not only found two great gifts, but I found them faster than I would have by picking up the phone and calling real Steve. After the interaction, real Steve called me anyway and said, "Thanks for your business! Do you want us to giftwrap the keyboard for your mom?"

I loved the interaction. It revealed a lot about the power of chatbots for sales and marketing teams. In a fast-paced, evolving AI landscape, chatbots are transforming how companies design and deliver their services, products, and content. They help differentiate companies from their competitors, showing they embrace change confidently and empower their customers with information that would normally require phone or email conversations. How else would I have found a keyboard with the exact synth sound of "Take On Me" by A-ha if it wasn't for AI-Steve?

My interaction was particularly telling because it demonstrated the pressures faced by sales and marketing teams: the need for consistent lead generation, faster response times, and personalized customer engagement. As technology advances, it's challenging for these teams to stay up-to-date while also meeting performance targets. A well-trained chatbot can address these challenges—reducing repetitive tasks while reflecting the team's ideas, insights, and mannerisms, thereby creating a seamless customer experience.

Chatbots are here to stay, but their implementation must be done correctly. They can make costly mistakes that damage a company's reputation and finances. In this Insight, we'll explore what businesses need to consider before rolling out an AI chatbot to ensure these tools are deployed effectively, responsibly, and with a clear understanding of their potential—and their limitations.

Questions Companies Selling Chatbots Should Consider

What Regulatory Frameworks Do We Need to Follow?
AI chatbots exist in complex legal spaces with regulations varying across regions. Consider, for example, a chatbot designed in New York State used by a company in California that services customers in Europe, the United States, and Canada. The company must comply with multiple regulatory frameworks, including the GDPR (General Data Protection Regulation) in Europe, the CCPA (California Consumer Privacy Act) in California, and PIPEDA (Personal Information Protection and Electronic Documents Act) in Canada. Each of these frameworks places unique obligations on data handling and user protection every time AI is used. Failing to meet them can lead to penalties, fines, and damaged consumer trust.

Here are three things you can do to set yourself up to sell chatbots safely:
Research the legal frameworks relevant to your target market
Consult legal experts who specialize in AI and data privacy
Implement data protection protocols in the form of AI policies and procedures that can be given to your customers to support the implementation of your chatbot

How Can We Ensure Ethical AI Deployment?

One of the primary ethical concerns with AI chatbots is bias. Chatbots trained on skewed datasets may produce biased responses, leading to unintended discrimination. At times, how a vendor trains chatbots on their own datasets can also create anomalies that may be offensive to users. This can be especially problematic in industries like customer service where respect and professionalism are critical. To make matters more complicated, when companies purchase chatbots and train them on their own data - such as newsletters, business plans, training transcripts, and so on - biases can arise when the chatbot vendor's model does not weigh new data properly. This is especially so when new datasets are too narrow and do not reflect the diversity of an entire sales or marketing team's voices, perspectives, and opinions.

The following is recommended to cover some important ethical bases before selling a chatbot:
Diversify training datasets with unique voices and perspectives to minimize biases and ensure that your chatbot does not corner itself when speaking to a wide variety of customers
Test your chatbot in a closed environment with your team prior to launching
Conduct regular audits to identify and address biases that may come up in responses
Communicate transparently with customers about the chatbot's capabilities and limitations
Give end-users the opportunity to provide feedback

What Vendor Should I Select?

Selecting the right AI vendor is crucial for the success of the chatbot. Prioritize vendors that have a proven track record of compliance and a public-facing set of documents proving that they have robust data handling practices and ethical AI development and implementation standards. The vendor should offer tools that can easily integrate with your systems, workflows, market, and sales models. They should also provide ongoing support, covering not only chatbot maintenance but also overall refinement and improvement. Here are some questions you can ask vendors during your chatbot vendor scan:

What is your Compliance History? Ensure the vendor has a proven track record of complying with relevant legal frameworks like GDPR, CCPA, or PIPEDA. Ask them when they started implementing compliance and how often they revise their own policies and procedures

How Will You Manage Data Security?
Evaluate how vendors handle sensitive data, including encryption methods, data storage practices, and access controls

How Much Support Do You Offer? Choose vendors that provide ongoing support, including updates, retraining, and bias auditing. It's important to ensure that the tool remains effective and compliant over time

What is your Communication Strategy?

Before you list a new AI chatbot product or service, it is crucial to communicate what is coming to your staff, investors, and stakeholders and, as importantly, to your long-term clients. Failing to do so can lead to confusion, alienation, or key people feeling undervalued. It is a common perception that AI replaces human beings, so it is important that people are not caught off-guard when an AI system is introduced. Moreover, failing to communicate can generate internal misalignment if staff have high expectations for a chatbot that underwhelms or underdelivers in its adoption stage. Without a clear communication strategy, staff and consumers may resist engaging with a new tool, leading to a lack of utilization and ultimately wasted investment.

Here are five things you can do to prepare yourself with a strong communication strategy:
Start with an Internal Communication Plan: Ensure that your entire organization understands why the chatbot is being introduced, its capabilities, and how it benefits their roles. Host informational chats so staff can ask questions. Provide easy-to-read documentation so everyone feels comfortable with what is coming
Engage Stakeholders Early: Who are your key stakeholders? Talk to them. Engage them early in the process. Seek their input to ensure their perspectives are considered. This will accelerate adoption and buy-in
Craft Clear External Messaging: Inform your existing clients about the introduction of the chatbot before it goes live. Highlight the benefits, such as faster response times or 24/7 availability. Clarify the scenarios where human intervention is still available, and how chatbots facilitate better human interaction when it matters most. Transparency is key
Create Training Materials: Provide staff with scripts and guides on how to introduce the chatbot to clients. Equip them with clear, consistent messaging that aligns with the company's brand as well as its tone, voice, and vision
Gather Feedback: Once the chatbot is live, actively gather feedback from staff and clients. Use the feedback to make necessary adjustments to both the chatbot and your communication strategy to address any concerns promptly

Chatbot Success through a Commitment to Research, Communication, and Transparency

One of the most critical aspects of deploying AI chatbots is managing user expectations. Users need to understand what a chatbot can and cannot do. By setting expectations clearly, companies selling chatbots can avoid frustrating users and staff who expect more than the chatbot can deliver. This is why transparency is not only best practice but also a trust-building strategy. When users know the chatbot's limitations and they are not surprised by its arrival, they are more likely to be satisfied with their interactions and more understanding when redirected to a human agent. As AI chatbots become an integral part of the business landscape, companies must also approach their deployment with a comprehensive strategy that accounts for legal, ethical, and practical considerations.
Ensuring compliance with regulatory frameworks, minimizing bias, and selecting the right vendor are all crucial steps in rolling out a chatbot successfully. Equally important is preparing a solid communication strategy to align internal teams and clients.

  • AI companions in the Workplace | voyAIge strategy

AI companions in the Workplace
An intro to BYOB (Bring your own bots) to work
By Christina Catenacci
Nov 20, 2024

Key Points
AI companions are chatbots that talk with you, offer support, and help with various tasks
There are pros and cons to bringing AI companions to work, and they need to be considered before use
Employers are recommended to create strong policies and procedures for when they introduce AI companions to the workplace

What exactly are AI companions?

You may call it a friend. You may call it a mentor. You may even call it a romantic partner. What I'm talking about is the AI companion. Essentially, these AI companions are chatbots that talk with you, offer support, and help with various tasks. Some of the main characteristics of AI companions include:
They chat: the conversation quality is important in that the chatbot needs to sound natural, understand context, and keep the conversations flowing
They have several features: some of the key features include customization options, voice chat, and any special abilities
They are easy to use: user-friendly elements include things like ease of setup, ease of finding features, and ease of use across different devices

When evaluating AI companions, each criterion is given a score on a five-point scale, where the higher the score, the more highly rated the AI companion.

What are some of the most popular AI companions?

There are many websites that comment on and rank popular AI companions. It is important to keep in mind that it depends on the reasons why a person wants to use an AI companion, whether it is for friendship and emotional support, helping with school or work, or going down the romantic path. For instance, some sites rank the seven best AI companions of 2024, others create comparison matrices for six AI companions, and others list the top 10 to chat and have fun with. Some of these AI companions can also be used as mentors and buddies to bounce ideas off of when thinking about work: think Star Trek and brainstorming in the Holodeck. It might be less challenging than finding a real-life mentor. For example, using AI to enhance, not replace, humans can help us embrace AI as a tool for growth and therefore enhance human potential.

Yet other AI companions are pure business and act as business tools. For example, users can "Tackle any challenge with Copilot". This chatbot can give users straightforward answers so they can learn, grow, and gain confidence. It helps with any task so users can transform their ideas into stunning visuals, simplify dense information into clear insights, and polish their writing so their voice shines (users need a Microsoft 365 plan). There are even Copilot+ PCs. Another example of an AI companion that helps with work is Gemini for Google Workspace. Lauded as the "always-on AI assistant", it can be used across the Google Workspace, meaning it is built right into Gmail, Docs, Sheets, and more, with enterprise-grade security and privacy (users need a Workspace plan).

What are the pros and cons of bringing AI companions to the workplace?
Like BYOD (Bring your own device), BYOBs have several pros:
Increased productivity and efficiency
Enhanced decision-making
More efficient and precise learning, reasoning, problem-solving, perception, and language understanding, especially in certain sectors like healthcare
Higher likelihood of innovation and competitive edge
Ability to automate tasks

And here are some of the main cons:
Employees have serious job displacement concerns
Privacy and cybersecurity concerns
Ethical concerns
Potential for overdependency
Potential for errors

When including AI companions in the workplace, there are several uses; that said, it is important to remain aware that there are challenges that need to be addressed.

How are AI companions being used by employers at work?

Some tasks that are being completed by AI companions are things like notetaking, summarizing meetings, and creating agendas or lists of follow-up tasks. When employees bring their own AI companions to work, the tools may be cheaper to buy individually compared to enterprise management features, employees can pick the AI companions that they are familiar with and work well with, and employees who frequently use these tools end up self-training them.

Employers are recommended to set some rules and guardrails by creating an AI in the Workplace policy for all employees. Moreover, it is critical for employers to understand the risks and attempt to mitigate those risks. See my colleague's insight article on the risks of AI companions and how to mitigate them.

  • Accelerator Suite | voyAIge strategy

Accelerator Suite

Whether you are curious about AI, searching for the right AI solution, or are using AI and require risk management solutions, our Accelerators are designed to propel you through every need.

AI Explorers: Your introduction to AI. For organizations that are curious about AI. AI 101 training, stakeholder engagement, AI readiness assessment, AI law and ethics training, and a vulnerability scan.
AI Adopters: Finding the right AI fit. For organizations ready to implement AI. An AI Opportunities Analysis, an AI Vendor Evaluation, an Implementation Roadmap, and a Training Plan.
AI Stewards: Full risk management. For organizations using AI that want to be ethical and compliant. A Compliance Audit, Bias Detection & Mitigation Solutions, an AI Ethics Playbook, and tailored AI Policies.

Book a Free Consultation

  • Convention on AI | voyAIge strategy

    Convention on AI Ten parties have signed. Will Canada? By Christina Catenacci Nov 1, 2024 Key Points In September 2024, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was released The main principles in the Convention include: human dignity and individual autonomy; transparency and oversight; accountability and reliability; equality and non-discrimination; privacy and personal data protection; reliability; and safe innovation Parties need to adopt or maintain measures to ensure that the activities within the lifecycle of AI systems are consistent with obligations to protect human rights, democracy, and the rule of law—Canada has not yet signed it In September 2024, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was released. As of the date of publishing of this article, the following have signed the Convention: the European Union, the United States, Israel, the United Kingdom, Norway, San Marino, the Republic of Moldova, Iceland, Georgia, and Andorra. Canada is not yet on this list. What is in the Convention? The goal of the Convention is to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy, and the rule of law. Parties to the Convention need to adopt or maintain appropriate legislative, administrative, or other measures to give effect to the provisions set out in the Convention. The measures need to be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy, and the rule of law throughout the lifecycle of AI systems. Under the Convention, an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The Convention applies to the activities within the lifecycle of AI systems that have the potential to interfere with human rights, democracy, and the rule of law. More specifically, each party must apply the Convention to the activities within the lifecycle of AI systems undertaken by public authorities or private actors acting on their behalf. Also, each party must address the risks and impacts arising from activities within the lifecycle of AI systems by private actors in line with the goal of the Convention. Pursuant to Article 3 that deals with scope, it is necessary for parties to specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature, or when depositing its instrument of ratification, acceptance, approval or accession, how it intends to implement this obligation. For instance, Norway has already done so: “In accordance with Article 3, paragraph 1.b, of the Convention, the Kingdom of Norway declares that it shall apply the principles and obligations set forth in Chapters II to VI of this Convention to activities of private actors” There are some exceptions in the Convention: parties are not required to apply the Convention related to the protection of national security interests including national defense. 
Similarly, the Convention does not apply to research and development activities regarding AI systems that are not yet made available for use, unless testing is undertaken.

General Obligations

Under the Convention, parties must:
adopt or maintain measures to ensure that the activities within the lifecycle of AI systems are consistent with obligations to protect human rights
adopt or maintain measures that seek to ensure that AI systems are not used to undermine the integrity, independence, and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence, and access to justice
adopt or maintain measures that seek to protect their democratic processes in the context of activities within the lifecycle of AI systems, including individuals' fair access to and participation in public debate, as well as their ability to freely form opinions

Principles

Some of the main principles that are referred to in the Convention include:
human dignity and individual autonomy
transparency and oversight
accountability and reliability
equality and non-discrimination
privacy and personal data protection
reliability
safe innovation

Remedies, Safeguards, and Mitigation of Risks

Parties must also adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of AI systems. More specifically, the measures need to: ensure that relevant information regarding AI systems that have the potential to significantly affect human rights and their relevant usage is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons; ensure that the information is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and the use of the system itself; and ensure an effective possibility for persons concerned to lodge a complaint to competent authorities.

Another important responsibility is that parties must ensure that, where an AI system significantly impacts upon the enjoyment of human rights, effective procedural guarantees, safeguards and rights, in accordance with the applicable international and domestic law, are available to persons affected thereby. More precisely, parties need to ensure that persons interacting with AI systems are notified that they are interacting with such systems rather than with a human. Moreover, parties must take into account the principles noted in the Convention and adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems by considering actual and potential impacts to human rights, democracy, and the rule of law.
These measures must be graduated and differentiated, as appropriate, and:
take due account of the context and intended use of AI systems, in particular as concerns risks to human rights, democracy, and the rule of law
take due account of the severity and probability of potential impacts
consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted
apply iteratively throughout the activities within the lifecycle of the AI system
include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law
include documentation of risks, actual and potential impacts, and the risk management approach, and
require, where appropriate, testing of AI systems before making them available for first use and when they are significantly modified

In fact, parties must adopt or maintain measures that seek to ensure that adverse impacts of AI systems to human rights, democracy, and the rule of law are adequately addressed. Such adverse impacts and measures to address them should be documented and inform the relevant risk management measures described. Parties need to assess the need for a moratorium or ban or other appropriate measures in respect of certain uses of AI systems where they consider such uses incompatible with the respect for human rights, the functioning of democracy, or the rule of law.

Implementation of the Convention

The implementation of the provisions of the Convention by the parties must be secured with a mind to the following:
non-discrimination
the rights of persons with disabilities and children
public consultation
digital literacy and skills
safeguards for existing human rights
wider protection than what is stipulated in the Convention

Furthermore, the parties need to consult periodically to facilitate the effective application and implementation of the Convention, the exchange of information on critical developments, and the cooperation of stakeholders. Parties also need to report to the parties within the first two years after becoming a party and periodically thereafter. The parties must cooperate in the realization of the purpose of the Convention. There are also oversight mechanisms in the Convention: parties must establish or designate one or more effective mechanisms to oversee their compliance with the obligations in the Convention. In fact, each party must ensure that such mechanisms exercise their duties independently and impartially and that they have the necessary powers, expertise, and resources to effectively fulfil their tasks of overseeing compliance with the obligations in this Convention.

What can we take from the Convention?

This is the first-ever international legally binding treaty in this field. Although Canada may have participated in the negotiation of the Convention, it has not yet signed the Convention. Given the situation in Canada where Bill C-27 could die on the order paper at any moment because of an election, it is not clear whether any further progress in AI will be made in Canada in the near future.

  • Antitrust Woes | voyAIge strategy

    Antitrust Woes Meta and Google Find Themselves in Hot Water By Christina Catenacci, human writer Apr 18, 2025 Key Points Mark Zuckerberg just gave his testimony in the Meta antitrust case, where he tried to convince the court that he did not buy Instagram and WhatsApp to get rid of competitors Google was found to have illegally built monopoly power with its web advertising business We will have to wait to see what happens to Meta and Google: will they have to break up their companies? This article discusses some news in the world of antitrust law, as it pertains to Meta and Google. It also discusses the importance of competition as per the Competition Bureau of Canada and the Federal Trade Commission so that we can take some important points from recent developments. Meta This was an interesting week—it was reported that Mark Zuckerberg gave his testimony in the hottest antitrust case since the Google antitrust case, which I wrote about here . Over the last few days, Zuckerberg told the U.S. District Judge James Boasberg, that he purchased Instagram and WhatsApp because he saw value in the companies, rather than to eliminate competitors. Still, the FTC alleges that Meta used a monopoly in its technology to generate massive profits. Why is this important? Meta could be forced to break off Instagram and WhatsApp—the two companies were startups when Meta bought them over 10 years ago, but now they are massive components of Meta. Apparently, while he was on the stand, Zuckerberg was asked to look at his own previous emails that he wrote to associates before and after the acquisition of Instagram and WhatsApp to clarify his motives. Was the purchase to halt Instagram’s growth and get rid of a threat? Or was it to improve Meta’s product by having WhatsApp run as an independent brand? Zuckerberg was just the first witness in the trial, so we have some time before we get the answer. The trial is supposed to last about eight weeks. On the stand, Zuckerberg testified that it was his job to understand what was going on so that he and his teams could respond quickly. In fact, he stressed that he was operating in a very competitive environment, and he was not the monopoly—in full deflective mode, he asserted that people spent more time on YouTube than on Facebook and Instagram combined. Of course, the problem with that argument is that the FTC did not consider YouTube to be a proper comparator to the friend-sharing technology, since it was involved in sharing videos. This case went back to 2020 , and was filled with preliminary motions. We will keep you posted on any developments. Google This was also an interesting week for Google—in the second antitrust case regarding the alleged advertising monopoly, Google was found to have illegally built monopoly power with its web advertising business. In fact, the Department of Justice (DOJ) just announced that the Antitrust Division of the DOJ prevailed in its second monopolization case against Google since the company violated antitrust law by monopolizing open-web digital advertising markets. The DOJ stated that this was a landmark victory in the ongoing fight to stop Google from monopolizing the digital public square. It was Judge Leonie Brinkema who made the decision. We will soon find out what will happen to Google: will it need to be broken up? How will it be broken up? How long would it take to break it up? There will likely be a penalty phase where this is determined either late this year or early next year. 
This means that Google has been found to be a monopolist for the second time in a year: one for Search, and one for online advertising. We will keep you posted on any developments about the case.

What Can We Take from These Developments?

These cases against Meta and Google demonstrate how important it is for companies to not behave in anti-competitive ways. Interestingly, it does not seem to matter who was president at the time when these cases were commenced; people really frown on anti-competitive behaviour, regardless of who is in power. Simply put, the goal is to get companies to compete without abusing monopoly power.

Why is this all important? The Competition Bureau of Canada has explained the following about competition and the link to efficiency, innovation, and productivity:

"Competition has the power to drive our productivity forward and benefit Canadian businesses and consumers alike. Competition can improve productivity in three ways:
Efficient use of resources: firms facing intense competitive pressure are likely to use their labour and resources more efficiently than those facing slack competition
Innovation: competition encourages firms to innovate and invest in new products and processes to gain a competitive edge on their rivals, and
Keep markets productive: healthy competition squeezes out lower-productivity firms and allows higher-productivity firms to thrive"

The Bureau states that competition pushes individuals, firms and markets to make the best use of their resources, and to think of new ways of doing business and winning customers. As a result, productivity and Canadians' standard of living increases. Additionally, competition makes goods and services more attractive to Canadian consumers and foreign buyers, which increases the competitiveness of Canadian exports, expands our output, and increases the economic benefits for Canadian workers, businesses, and investors. The Bureau also points out that competition is good for consumers too—it benefits Canadians by keeping prices low and keeping the quality and choice of products and services high. In this way, businesses must produce and sell the products that consumers want, and they need to offer them at prices that consumers are willing to pay. Consequently, consumers are still in the driver's seat and are not forced to buy goods at unfair prices.

The Federal Trade Commission also explains the following:

"The FTC takes action to stop and prevent unfair business practices that are likely to reduce competition and lead to higher prices, reduced quality or levels of service, or less innovation. Anticompetitive practices include activities like price fixing, group boycotts, and exclusionary exclusive dealing contracts or trade association rules, and are generally grouped into two types:
agreements between competitors, also referred to as horizontal conduct
monopolization, also referred to as single firm conduct"

  • The Strategic Values of Local AI | voyAIge strategy

The Strategic Values of Local AI
What it Means to Use AI in-house versus in-the-cloud
By Tommy Cooke, powered by unusually high amounts of pollen in the air for this time of year
Jul 18, 2025

Key Points:
1. Local AI keeps sensitive data in-house, helps businesses meet regulatory requirements, and reduces risk
2. Compared to escalating cloud costs, local AI offers predictable long-term savings for organizations with consistent workloads
3. A hybrid approach (using cloud AI for scale and local AI for control) is emerging as the most strategic model for enterprise AI deployment

When you open the ChatGPT app on your phone, its AI runs in the cloud. What do I mean by that, you ask? The AI itself does not happen in your phone—it happens on a server somewhere else in the world. Your phone sends data, the data is ingested into a room filled with processors, and the output is returned to your phone. But there is another way for AI to function. And that way is called "local AI": AI that is deployed, operated, and lives entirely within an organization's walls. While the idea of local AI seemed like a farfetched dream merely a couple of years ago (and for good reason, as it was correctly perceived to be quite expensive at the time), it is a superior alternative for many organizations that are risk averse; privacy, control, cost predictability, and operational resilience are all qualities at the heart of local AI. Let's unpack these qualities in further detail, as I imagine many of you reading this right now will be highly interested in exploring local AI for your own organizations.

Sovereignty over Sensitive Data

The premier benefit of local AI is guaranteeing that sensitive data never leaves the organization. In tightly regulated industries, such as healthcare and finance, data sovereignty is critical. Using cloud-based AI, even with robust security protocols, creates uncertainties: Who audits the vendor's access logs? Where is the data stored, geographically? How is it being used to train backend models? These are unanswered questions that haunt compliance officers and auditing teams. On the other hand, using local AI means that every bit of data that is processed stays within your full visibility and stewardship—this gives businesses a critical advantage, particularly during a time of proliferating regulations.

Simplified Compliance in a Complex Legal Landscape

Regulations such as the EU's GDPR and Canada's PIPEDA impose strict obligations on data transfers and processing. This makes local AI models, which operate entirely within the bounds of these regulations, capable of sidestepping many of the issues that cloud AI systems are still struggling to navigate. That is, by minimizing the need to transfer data across and through jurisdictions, local AI reduces exposure to many legal complications. Moreover, because all operations occur in-house, audit readiness becomes more straightforward: logs, model versions, and access records remain under corporate control.

Predictable Operating Costs

Cloud-based AI is often marketed as pay-for-use or as something that you can sign up for and begin using immediately. This makes mainstream AIs like ChatGPT attractive: they are elastic, cost-efficient, and easy to access. However, as workloads grow, so too do fees. Application Programming Interface (API) calls, data storage, and compute time are but some of the many charges that begin to add up.
Cloud services also often carry usage-based or subscription-based pricing that tends to escalate over time. To be fair, the initial capital expenditure for local AI may be higher, but once it is set up, those costs amortize. For bounded workloads like batch processing, document classification, and real-time inference, the cumulative total cost of ownership is considerably lower than continual cloud usage.

Latency, Resilience, and Offline Capability

Local processing also provides tremendous improvements in speed. Without the back-and-forth delays caused by network requests, turnaround times shrink considerably. This is particularly attractive for real-time applications like manufacturing quality assurance or point-of-care diagnostics. Moreover, local AI continues to operate amidst network disruptions. For instance, remote sites, field offices, or secure facilities with limited connectivity can maintain uninterrupted service by using local AI. In an age where downtime translates directly into lost revenue and reputational risk, it is worth considering alternatives to cloud-driven AI.

Customization

Although generalist cloud models have dazzling breadth, they often stumble in the face of domain-specific syntax. This is where local AI offers the opportunity to fine-tune with proprietary data: legal briefs, clinical records, manufacturing logs. Additionally, this makes local AIs considerably more reliable than their cloud counterparts in terms of avoiding hallucinations. Practically speaking, that means cleaner summaries, safer predictions, and fewer erroneous suggestions.

Enhanced Data Governance

Running models locally brings a considerable benefit by way of transparency. When you control the entire stack, from data ingestion to output, you gain visibility into model behaviour. This facilitates a higher level of explainability compared with cloud-driven AI. Local AI means that you are no longer reliant on opaque APIs; this can be a deal breaker for many prospective clients and customers.

A Hybrid Future: Balancing Reach and Responsibility between Local and Cloud AI

It is important to stress that local AI does not necessarily need to be seen as supplanting cloud-based systems. Rather, the two can be complementary. The optimal model for many organizations is modular, using both (a minimal routing sketch follows at the end of this piece):
Cloud-based AI for delivering massive-scale capabilities (think complex reasoning, multi-domain synthesis, and vast world knowledge)
Local AI for handling sensitive tasks, private data, or immediate-response scenarios

This balanced, hybrid approach is the future of enterprise AI. It's a "precision-first" approach, one that could do wonders for aligning AI deployment with the context, risk tolerance, and regulatory demands of your industry. Local AI is not a niche pursuit. It is a strategic investment for businesses that are seeking to reconcile innovation, privacy, and compliance. Through local deployment, companies gain control over data, reduce long-term costs, improve performance, tighten governance, and can converse using their own language or jargon. For businesses that are serious about data privacy, customer trust, and operational continuity, local AI is not just an alternative—it is a better, smarter, more principled choice. And this approach has the flexibility to enable use in conjunction with cloud-driven solutions.
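To make the hybrid split above concrete, here is a minimal sketch of one way a request router could decide between a local model and a cloud service. It assumes nothing beyond the Python standard library; the request structure, the pattern-based sensitivity check, and the two model functions are hypothetical stand-ins for illustration, not any particular vendor's API.

```python
# Hypothetical sketch of a hybrid AI router: sensitive or latency-critical
# requests stay on a local model, everything else goes to a cloud service.
# All names and patterns here are illustrative assumptions, not a product API.

import re
from dataclasses import dataclass

# Naive PII patterns used only for illustration; a real deployment would rely
# on a vetted classification/redaction step defined by organizational policy.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like identifiers
    re.compile(r"\b\d{16}\b"),                   # bare card-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),      # email addresses
]

@dataclass
class AIRequest:
    text: str
    latency_critical: bool = False   # e.g., point-of-care or shop-floor use

def contains_sensitive_data(text: str) -> bool:
    return any(p.search(text) for p in PII_PATTERNS)

def run_local_model(request: AIRequest) -> str:
    # Placeholder for an in-house model call served on on-premises hardware.
    # Data never leaves the organization on this path.
    return f"[local model] processed {len(request.text)} characters"

def run_cloud_model(request: AIRequest) -> str:
    # Placeholder for a cloud API call; this is where usage-based fees and
    # cross-border data-transfer questions arise.
    return f"[cloud model] processed {len(request.text)} characters"

def route(request: AIRequest) -> str:
    """Send sensitive or latency-critical work to the local model,
    broad general-knowledge work to the cloud."""
    if request.latency_critical or contains_sensitive_data(request.text):
        return run_local_model(request)
    return run_cloud_model(request)

if __name__ == "__main__":
    print(route(AIRequest("Summarize patient record for jane.doe@example.com")))
    print(route(AIRequest("Draft a market overview of retail trends in 2025")))
```

The point of the sketch is that the routing policy is explicit and auditable, which is what keeps a hybrid deployment consistent with the governance and audit-readiness benefits described above.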

  • AI for Inventory Management | voyAIge strategy

AI for Inventory Management
How AI, RFID, and Real-Time Data are Reshaping Retail
By Tommy Cooke, fueled by caffeine and creativity
Apr 4, 2025

Key Points
AI in inventory management isn't about replacing people—it's about removing guesswork so that people can do better work
Old Navy's partnership with RADAR shows that when AI, RFID, and vision systems combine, customer experience gets more personal, not less
Before AI can work its magic, organizations must confront messy data, tangled systems, and human hesitation—because the tech isn't the hard part, the people are

I had a client that had trouble selling flip flops–the sandals. They are a major pharmacy with a significant retail component to the business. Flip flops were causing three issues:
1. flip flops were piling up in storerooms across the continent
2. the stockpile of dated, old flip flops was growing significantly
3. it was taking too much time to scan inventory of flip flops that nobody wanted

The answer to these three pain points was found in AI for inventory management. It was actually three disparate AI systems working in tandem, bundled into a new technological solution. This new solution analyzed historical data to determine when flip flops should be put out on the floor and advertised on sale, triggered automatic replenishment of flip flops (so as to avoid over-ordering), and actively monitored when flip flops were physically removed from a shelf.

The solution is becoming more commonplace. Old Navy, a subsidiary of Gap Inc., recently made retail headlines: it is embarking on a multi-year plan to integrate RADAR's AI-driven RFID technology into its stores. The idea is to provide associates on the floor with real-time inventory data so that they can locate items quickly within the store. By combining RFID with AI and computer vision (to physically see inventory), Old Navy is not only aiming to improve associate efficiency and accuracy, but also to enhance the customer service experience. I don't know about you, but I'm particularly excited; Old Navy never seems to have my size of jeans–ever. Much like my previous client who struggled with selling flip flops, AI can make a significant impact on inventory management. Let's dive into this a bit further.

AI for Inventory Management

Enhanced Demand Forecasting. Much like the flip flop example, AI algorithms can analyze historical sales data and marketing data internally. Those data can be combined with external data, such as market and consumption trends, to anticipate future demand. The benefit of doing so shouldn't be underestimated. Smart demand forecasting allows retailers to maintain optimal stock levels, thereby saving costs in terms of reducing overstock. For example, rather than just knowing that swimsuits sell better in July, an AI model might flag an early-season heatwave in a particular region, cross-reference those measurements with historical sales surges, and recommend adjusting stock levels in that cluster of stores.

Automated Replenishment. Think of this as the reactive component of demand forecasting: the other side of the same coin. In this instance, AI systems work off inventory data to automate the reordering of new inventory. In the past, replenishment often relied on static rules: if stock drops below five, reorder ten. But AI can flip this logic on its head, making replenishment smarter and not just faster (the short sketch at the end of this piece contrasts the two approaches). Much like the way Old Navy will do so with RADAR, RFID monitors shelf-level data and warehouse status simultaneously.
If a product is selling quickly in one store but not others, the system can auto-generate a transfer request. Or it can pause auto-orders if it predicts a drop in demand due to, for example, weather events or shifts in promotional priorities. This is a particularly attractive capability for retailers because it means that inventory management becomes more granular and adaptive.

Operational Efficiency. Better forecasting and replenishment do not just make inventory numbers look nice. They free up actual people to do better work. When store associates stop manually counting items or looking for hidden stock in the storeroom, they focus on customers. When warehouse teams stop scrambling to process last-minute shipments due to stockouts, they can plan strategically. Take RADAR, for example: this system tracks the movement of every tagged item in real time and allows associates to search for an item using a mobile app and be guided directly to it. It's a small change, but small changes compound and have a ripple effect, specifically faster order fulfillment. It means a customer can actually find what they came in for. It means an employee gets to spend more time helping someone, and less time on scavenger hunts.

Implementation Considerations

For all the power AI brings to inventory management, integrating it successfully is not just a matter of plug-and-play. In each of the two examples I discussed above–my former client and Old Navy–the real challenge is not the technology: it's the people involved. The following points are ones to consider.

Data Quality. AI is only as smart as the data it's trained on as well as the data it receives in real time. But retail data is messy. Product SKUs vary across systems, sales data are often fragmented between platforms, and real-time inventory counts can be inconsistent at best. So, for AI to work, organizations must undergo a data hygiene campaign that involves cleaning, labeling, and integrating data sources. It's a critical step, it's not particularly enjoyable, and it involves time from people across IT, operations, finance, management, and frontline staff. So, remember that if a system doesn't trust its data, it can't act on it, nor will your people be able to trust it. Proper data preparation needs to be preceded by proper communications and proper training plans.

Legacy Integration. Many retailers, especially large-scale organizations, operate on a patchwork of legacy tools. Sometimes they are systems that are decades old. Many of these systems are bespoke, custom-built designs. Others are bolt-on afterthoughts. When AI is integrated or interacts with these systems, operations can become prohibitively complicated, not to mention expensive. I'd be remiss not to mention the impact of these changes on your people, too. A guided approach is required, one that takes into account what it means to bring together modern technology with antiquated software–particularly when your staff are more accustomed to the latter than the former.

Ethics and Privacy. RFID and computer vision systems can, intentionally or not, start to resemble surveillance. Tracking product movement is one thing—tracking employee and shopper behaviour is quite another. If AI systems are being used to monitor human productivity or shopper movements without transparency, it will lead to mistrust, not to mention legal risks and morale issues. It is not a new issue that consumers are concerned about what data is collected on them when they enter stores.
When you add AI to the mix, concerns increase. Retailers need to be thoughtful about how they use these systems, what data they collect, and who has access. Ethical use is not just a matter of compliance, but a matter of culture, priorities, and principles.

The Future of AI in Inventory Management

The trajectory of AI in inventory management points towards increasingly sophisticated applications and use cases. Machine learning, computer vision, and robotics are set to further enhance inventory accuracy and operational efficiency. There is much to be saved and salvaged through these advancements, though it is important to recognize that these advancements are investments. They require planning, time, and reflection. Old Navy's partnership with RADAR precisely exemplifies the transformative power of AI in inventory management. It's also a reminder that people always matter when working with AI.
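To illustrate the shift from static reorder rules to forecast-driven replenishment described earlier, here is a minimal sketch. The function names, the simple moving-average forecast, and all of the numbers are illustrative assumptions made for this article; they are not RADAR's, Old Navy's, or any retailer's actual logic.

```python
# Illustrative contrast between a static reorder rule and a forecast-driven
# one. The demand figures and thresholds are made up for the example.

def static_replenishment(on_hand: int, reorder_point: int = 5, lot_size: int = 10) -> int:
    """Old-style rule: if stock drops below a fixed point, reorder a fixed lot."""
    return lot_size if on_hand < reorder_point else 0

def forecast_replenishment(on_hand: int, recent_daily_sales: list[float],
                           lead_time_days: int = 7, demand_multiplier: float = 1.0) -> int:
    """Reorder just enough to cover forecast demand over the supplier lead time.

    recent_daily_sales: unit sales for the last N days (e.g., from POS/RFID data)
    demand_multiplier: adjustment from an external signal, e.g. 1.3 for an
                       early-season heatwave flagged by the model
    """
    avg_daily = sum(recent_daily_sales) / len(recent_daily_sales)   # naive moving average
    expected_demand = avg_daily * lead_time_days * demand_multiplier
    shortfall = expected_demand - on_hand
    return max(0, round(shortfall))

if __name__ == "__main__":
    sales_last_week = [2, 3, 1, 4, 2, 5, 3]   # hypothetical flip flops sold per day
    print(static_replenishment(on_hand=4))                                   # fixed lot of 10
    print(forecast_replenishment(on_hand=4, recent_daily_sales=sales_last_week))
    print(forecast_replenishment(on_hand=4, recent_daily_sales=sales_last_week,
                                 demand_multiplier=1.3))                     # heatwave signal
```

Even this toy version shows the shift in kind: the static rule only looks at what is on the shelf, while the forecast-driven rule folds in recent sales and an external demand signal, which is what makes replenishment granular and adaptive.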

  • California Legislature Approves AI Bill | voyAIge strategy

California Legislature Approves AI Bill
Bill 1047 passes in the California Legislative Assembly
By Christina Catenacci
Aug 30, 2024

Key Points:
California could be the first to launch comprehensive AI legislation in the United States
Some controversy has arisen in response to the AI bill
There are significant penalties associated with contraventions

In August 2024, Senate Bill 1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was read for a third time and passed in the California Legislative Assembly. It was subsequently ordered to the Senate. By August 29, 2024, the bill passed in the Senate (29-9 votes). It now must be signed by Governor Newsom. There are rumblings that he is taking his time weighing the pros and cons of signing a bill that has caused some controversy in Silicon Valley.

What does the bill say?

Bill 1047 defines important concepts such as advanced persistent threat, AI safety incident, covered model (and derivative), critical harm, developer, fine-tuning, full shutdown, post-training modification, and safety and security protocol. The bill also requires that developers, before beginning to initially train a covered model, comply with several requirements, including using administrative, technical, and physical cybersecurity safeguards; implementing the capability to promptly enact a full shutdown; and implementing a written and separate safety and security protocol. Moreover, the bill requires developers to retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted safety and security protocol. Also, developers must annually review the protocol and make any necessary modifications to the policy.

Additionally, Bill 1047 prohibits developers from using a covered model or derivative for a purpose that is not exclusively related to the training or reasonable evaluation of the covered model, compliance with State or federal law, or making a covered model or derivative available for commercial or public, or foreseeably public use, if there is an unreasonable risk that the covered model or derivative will cause or materially enable a critical harm.

Bill 1047 also requires developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit, consistent with best practices, of compliance with the provisions. The auditor must produce an audit report, and developers must retain an unredacted copy of the audit report for as long as the covered model is made available for commercial, public, or foreseeably public use, plus five years. Developers must grant the Attorney General access to the unredacted auditor's report upon request. Bill 1047 requires developers of a covered model to submit to the Attorney General a statement of compliance with these provisions. The bill also requires developers of a covered model to report each AI safety incident affecting the covered model or derivative controlled by the developer to the Attorney General. The bill requires a person who operates a computing cluster to implement written policies and procedures to do certain things when a customer utilizes compute resources that would be sufficient to train a covered model, including assessing whether a prospective customer intends to utilize the computing cluster to train a covered model.
There are some hefty penalties contained in Bill 1047. The bill authorizes the Attorney General to bring a civil action for a violation—this includes a violation that causes death or bodily harm to another human, harm to property, or theft. In this case, as of January 1, 2026, a civil penalty can be in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model, and 30 percent for any subsequent violation.

The bill also contains whistleblower protections, whereby developers, contractors, or subcontractors are not allowed to prevent an employee from disclosing information, or retaliate against an employee for disclosing information, to the Attorney General or Labor Commissioner if the employee has reasonable cause to believe the information indicates the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm. In this case, the civil penalty is found under the Labor Code. Other violations involving a computing cluster can result in penalties of up to $50,000 for a first violation, $100,000 for any subsequent violation, and a penalty not exceeding $10 million in the aggregate. Also, the Attorney General is free to recover injunctive or declaratory relief, monetary damages as well as punitive damages, fees, costs, and any other relief it deems appropriate.

Bill 1047 creates the Board of Frontier Models within the Government Operations Agency, independent of the Department of Technology, and provides for the board's membership. The Agency is required to, on or before January 1, 2027 and annually thereafter, issue regulations to update the definition of a "covered model," as provided. The bill establishes in the Agency a consortium required to develop a framework for the creation of a public cloud computing cluster to be known as "CalCompute" that advances the development and deployment of AI that is safe, ethical, equitable, and sustainable by, among other things, fostering research and innovation that benefits the public. On or before January 1, 2026, the Agency must submit a report from the consortium to the Legislature with that framework.

What can we take from this development?

The main author of the bill, Senator Scott Weiner, has talked about the fact that the bill took a lot of work and collaboration with industry, and emphasized that it deserves to be enacted. Though there has been some criticism arguing that the bill is overly focused on the harms, it can be said that Bill 1047 is the first of its kind in the United States—it requires AI companies operating in California to comply with several requirements when it comes to training AI models. And businesses will have some time to prepare so they can be in compliance. In the preamble, it is declared that California is leading the world in AI innovation and research. One might question whether Canada is even part of the equation any longer given the slow-moving progress of Bill C-27. And if an election takes place in Canada, there will be further delays in enacting a meaningful piece of AI (and privacy) legislation. We will have to wait and see.

  • Code of Conduct | voyAIge strategy

    Code of Conduct DOWNLOAD

  • Services Mobile | voyAIge strategy

Our Services

VS offers industry-leading, experienced, and comprehensive solutions to support your successful use of AI. Have a look at what we provide. Don't hesitate to contact us.

Policy & Procedures: Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant.
Research & Writing: We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs.
Impact Assessments: We take a deep dive into your organization's policies as well as data and AI operations to uncover hidden risks.
AI Solution Scoping: Our team assesses your organization's needs, pain points, and opportunities.
Compliance: Let our team review, detect, and eliminate risks in your AI systems and business operations.
Invited Talks: We engage audiences with unique viewpoints that demystify complex legal, scholarly, political, popular, media, and philosophical understandings of AI.
Ethical AI Playbooks: Our playbooks assist organizations in navigating and responding to internal and external crises.
Stakeholder Engagement: Maximize AI adoption and AI project successes as we assist you in aligning your organization's stakeholders.
