Search Results
- Disclaimer and Terms of Use | voyAIge strategy
Disclaimer and Terms of Use - Our Policies for Working with our Clients
- Canada’s Innovation Crossroads | voyAIge strategy
Canada’s Innovation Crossroads Why Performance Matters Now By Tommy Cooke, fueled by curiosity and caffeine Jan 16, 2026 Key Points: Canada’s innovation challenge is no longer about talent or ideas, but about translating world-class knowledge into sustained economic and social impact Decades of declining research and development investment and slow technology adoption are eroding Canada’s competitiveness at the exact moment global innovation pressures are intensifying Reversing this trajectory requires coordinated action across business, academia, and government to treat innovation as essential infrastructure rather than optional ambition Canada is facing a moment of truth. In a world marked by rapid technological change, intensifying global competition, and ever-shifting economic power, a country’s capacity to innovate is no longer a niche advantage. It is essential. Alas, the latest assessment from the Council of Canadian Academies (CCA) paints a sobering picture. Canada’s innovation performance is declining. This speaks directly to the future of Canadian competitiveness, jobs, social systems, and the livelihoods of people across such a massive country. The fundamental tension that the report highlights is familiar to many Canadian leaders: despite exceptional research and globally recognized talent, Canada struggles to turn ideas into economic action. As the expert panel put it, we are at a “critical juncture” in Canada’s history. The Numbers Don’t Lie The CCA’s report systematically benchmarks Canada’s performance in science, technology, and innovation against other nations. On many key metrics, Canada lags significantly behind its peers. Both business and government research and development spending are well below the Organisation for Economic Co-operation and Development (OECD) average, an underinvestment which impedes productivity growth and dulls competitive edge. What’s striking here isn’t just the relative performance. It is where Canada is heading. While research and development in Canada has been steadily declining over the last 30 years, the rest of the world has been trending in the opposite direction. For business leaders, this development matters because research and development, technology adoption, and innovation directly influence productivity, economic growth, export competitiveness, and the ability of Canadian firms to lead in global markets. Knowledge is Power… if it Catalyzes Change Canadian universities remain a genuine strength. They continue to produce world-class research and attract top talent with high levels of international collaboration and impact. These institutions are a national asset. And yet they also illustrate a core challenge: excellence in knowledge creation does not automatically translate into innovation success. Many post-secondary institutions still struggle to commercialize discoveries. While Canada certainly excels at ideation, the CCA reports that it truly struggles to translate knowledge into pragmatic, measurable action and outcomes. This is precisely where other economies are significantly outperforming Canada. AI is a useful case study here. Canada indeed played an early and influential role in advancing AI research, with breakthroughs that shaped the field globally. But according to the CCA, Canada’s strength in AI research has not yet yielded proportional commercial or economic impact.
While Canada’s banks and retailers lead AI adoption nationally, wider industrial adoption and commercialization lag behind the rest of the world, meaning that Canada’s early lead is eroding. The Consequences of Inaction Why does this matter to business leaders? It is self-evident that lagging innovation stifles GDP growth, limits the proliferation of high-paying jobs, and diminishes competitiveness in global markets. But one of the more significant impacts is its societal implications. Innovation isn’t just about making technology. It has a corresponding social effect that connects directly to how efficiently public services are delivered. It also affects national resilience; modern healthcare delivery and housing solutions–which are exceptionally expensive yet constant societal hurdles in Canada–depend upon innovation that streamlines population support capabilities. The report’s emphasis on “a serious and widening gap between our potential and performance” is thus a call to action. It signals that maintaining the status quo is not an option if Canada wants to preserve its standard of living and social solidarity in an era of rapid and uncertain change. What Canadian Leaders Must Wrestle With If there is a central insight from this report for Canadian business and public sector leaders, it is this: improving innovation performance requires systemic and coordinated action across the ecosystem. There is no single silver bullet here. Strengthening Canada’s innovation performance means rethinking how it funds research and development, how it supports firms in adopting new technologies, and how it helps startups scale into globally competitive businesses. For business leaders, this means investing more deliberately in technology adoption, building internal innovation capabilities, and collaborating across sectors. It also means engaging more actively with legislators and policymakers to signal where structural barriers are holding firms back. A Path Forward for Canada Despite its severe tone, the report does not mean Canada is doomed. Far from it. The foundational elements of a high-performing innovation ecosystem are present. Canada has educated citizens, excellent universities, robust industries, and a cultural openness to new ideas. Canada just needs to resolve its struggles with knowledge translation. Canadian leaders in business, academia, and government must align around innovation to make it a strategic imperative. The CCA report is not a verdict—it shows where Canada stands today and highlights the strategic choices that lie ahead. How Canada responds will shape not just economic statistics, but the everyday lives of Canadians in jobs, public services, and opportunities for future generations.
- Newsom Vetoes California's SB 1047 | voyAIge strategy
Newsom Vetoes California's SB 1047 A missed opportunity to lead in AI regulation By Christina Catenacci Oct 1, 2024 Key Points: Governor Newsom vetoed SB 1047, California’s AI safety bill, on September 29, 2024 Many view the veto as a missed opportunity for California to lead in AI regulation in the United States The bill created significant controversy in Silicon Valley because of the concern that it was too rigid and would stifle innovation. As you may have heard, Governor Gavin Newsom of California just vetoed California’s AI safety bill, SB 1047. We wrote about this potentially landmark bill earlier, where we explained the inner workings of the text. The article also noted that the bill was the first of its kind in the United States, and had the potential to influence how other States crafted their AI statutes and regulations. Professors Hinton and Bengio supported it too. So why did the bill fail? The bill, which attempted to balance AI innovation with safety, made it through readings in the State Assembly and in the Senate—and was ready to be signed by Newsom. It seemed that the author of the bill, Senator Scott Wiener, was confident that a great deal of work had gone into it, and that it deserved to pass. Let us examine the veto note signed by Newsom on September 29, 2024. First, it appears that Newsom was concerned about the fact that the bill focused only on the most expensive and large-scale models—something that could give the public a false sense of security about controlling AI. He pointed out that smaller, more specialized models could be equally or even more dangerous than the models that the bill targeted. Second, Newsom called the bill well-intentioned, but also remarked that it did not take into account whether an AI system was deployed in high-risk environments, involved critical decision-making, or dealt with the use of sensitive data. Newsom stated, “Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology”. Third, Newsom agreed that we could not afford to wait for a major catastrophe to occur before taking action to protect the public, and he emphasized that California would not abandon its responsibility—but he did not agree that to keep the public safe, the State had to settle for a solution that was not informed by an empirical trajectory analysis of AI systems and capabilities. He stressed that any framework for effectively regulating AI had to keep pace with the technology itself. He also added that the US AI Safety Institute was developing guidance on national security risks, informed by evidence-based approaches to guard against demonstrable risks to public safety. Additionally, he noted important initiatives that agencies within his Administration were taking in the form of performing risk analyses of the potential threats and vulnerabilities to California's critical infrastructure using AI. Newsom highlighted that a California-only approach might be warranted, especially absent federal action by Congress, but it had to be based on empirical evidence and science. He concluded his remarks by saying that he was committed to working with the Legislature, federal partners, technology experts, ethicists, and academia to find the appropriate path forward.
He stated, “Given the stakes—protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good—we must get this right”. He simply could not sign SB 1047. We should all keep in mind that this was a controversial bill that caused a kerfuffle in Silicon Valley. In fact, various tech leaders had been saying that the bill was too rigid, objecting to its potential to hinder innovation and drive companies out of California. Several lobbyists and politicians had been communicating their concerns in the past few weeks, including former Speaker Nancy Pelosi. Other powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI, and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies. It makes sense that there was significant concern about the bill in Silicon Valley: SB 1047 dealt with serious harms and set out considerable consequences for noncompliance. It is no surprise that those in the tech industry were in favour of avoiding them. Indeed, the bill was called “hotly contested” given the industry's complaints about it. That said, there were many supporters who viewed SB 1047 as an opportunity to lead the way on American AI regulation. Meanwhile, tech industry leaders reacted positively to Newsom’s veto, and even expressed gratitude to Newsom on social media. On September 29, 2024, Senator Wiener responded to the decision to veto the bill, noting the following:
The veto was a setback for everyone who believed in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet
While the large AI labs had made admirable commitments to monitor and mitigate risks, the truth was that voluntary commitments from industry were not enforceable and rarely worked out well for the public
This veto left us with the troubling reality that companies aiming to create an extremely powerful technology faced no binding restrictions from American policymakers, particularly given “Congress’s continuing paralysis” around regulating the tech industry in any meaningful way
With respect to Newsom’s criticisms, he stated that “SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd”
The veto was a missed opportunity for California to once again lead on innovative tech regulation
Wiener stated that California would continue to lead the conversation, and it was not going anywhere. We will have to wait and see what is proposed in the near future. In the meantime, it is important to note that there were some smaller AI bills that Newsom did sign into law. For instance, Newsom recently signed one to crack down on the spread of deepfakes during elections, and another to protect actors against their likenesses being replicated by AI without their consent. As for Wiener, he stated, “It’s disappointing that this veto happened, but this issue isn’t going away”.
- The Most Honest AI Yet? | voyAIge strategy
The Most Honest AI Yet? Why Admitting Uncertainty Might Be the Next Step in Responsible AI By Tommy Cooke, powered by questions and curiosity Jun 20, 2025 Key Points: MIT’s new AI model signals a deeper shift toward transparency, humility, and trust in AI systems Business leaders must recognize that unchecked confidence in AI carries serious reputational and legal risk The real competitive advantage in AI isn’t speed—it’s building cultures and systems that model integrity and caution A client recently shared his favourite moment from a tech conference he attended a few months ago. He listened to numerous speakers go on about the promises of AI, but he was not convinced. All he was hearing was the run-of-the-mill talking points: AI is great, it saves money, it increases revenue, it revolutionizes business, etc. But he was taken aback at the end of the conference. The last speaker took a radically different approach and called out the purple elephant hiding in the corner of the room: hallucinations and dishonesty. The speaker took serious issue with the fact that AI fails to admit when it’s wrong and said something to the effect of: “Artificial Intelligence, as it were, has some work to do in terms of becoming Honest Intelligence”. The reflection was prompted by the question: Will we ever get AI that simply admits when it doesn’t know the answer? Good news for my client, and for the rest of us just like him. We might be closer than we think. Researchers from MIT and its Uncertainty Labs recently revealed an AI model that recognizes when it doesn’t know the answer … and says so. Yes. You read that correctly: an AI that admits confusion. At first glance, this might seem like a modest or even trivial update in a field known for hype, bold claims, and massive ambition. But this humble innovation signals something bigger. If we squint, we can see the outlines of a bigger trend emerging here: after a gold rush of AI development characterized by confidence and speed, the market has been—as I unpacked recently—quietly shifting toward maturity, safety, and accountability. Companies, regulators, and researchers are all starting to recognize that the future of AI is not only about performance—it is about trust. MIT’s “humble AI” may be one of the clearest signs yet of where we’re headed. From Bold AI to Measured AI: What a Shift in Tone Reveals About Where AI is Heading AI tools are designed to sound confident, but they often sound more confident than they should. When they hallucinate, they do so with conviction. That’s part of what makes them so compelling. It’s also what makes them so dangerous. They often cross the line from helping users to misleading them, whether through invented citations or persuasive but false narratives. In this context, then, the MIT team’s breakthrough stands out. Their new system doesn’t merely generate content. It calculates a “belief score” indicating how confident it is in each of its answers. It can express uncertainty in natural language and even abstain from answering altogether. This is not just a technical improvement. It’s a philosophical one, and one that matters to you as a business leader. Why? It signals a new future of AI: one where the goal is not omniscience, but reliability and accuracy. One where companies don’t ask, “How fast can we scale this?” but instead, “Should we scale this at all?” Or, “Is this particular model the best investment?” This tonal shift, from bold to measured, is worth noticing as well.
It mirrors the shift we’re seeing in corporate strategy, regulation, and public sentiment: leaders are realizing that the only sustainable AI is the kind that knows its own limits. What Organizations Should Learn from Honest Artificial Intelligence When AI demonstrates the value of knowing what it doesn’t know, it holds a mirror up to our own blind spots. Companies that adopt AI without building in mechanisms to identify uncertainty are asking for trouble. They're putting tools in front of employees and customers that may sound confident while being catastrophically wrong. And there are countless examples: Air Canada ended up in court after its AI-assisted chatbot gave incorrect advice about securing a bereavement ticket fare The Dutch government, including the Prime Minister, resigned after an investigation found that 20,000 families were defrauded by a discriminatory AI that the government had endorsed A Norwegian man sued OpenAI after ChatGPT told him he had killed two of his sons and had been jailed for 21 years In 2021, the fertility tracker app Flo Health was forced to settle a lawsuit initiated by the U.S. Federal Trade Commission after it was caught sharing private health data with Facebook and Google I’d be remiss not to mention the near-endless instances of AI deepfakes and the eruption of chaos that they’ve caused, from simulating the voices of political leaders and creating fake sports news conferences to the infamous story of a finance worker at a multinational firm who was tricked into paying out $25 million to fraudsters who were using deepfake technology To the business leaders reading this: it’s no longer enough to rely on disclaimers or internal policies that live in the fine print. That’s only half of the battle. As AI integrates into more visible, consequential workflows, business leaders will need to model transparency at the product level. That’s precisely what the MIT model does. Its interface shows that uncertainty is real and accounted for. While this may seem like a setback or extra work, it could also be seen as a leadership opportunity. Why Business Leaders Ought to Normalize Uncertainty in AI I truly believe that being a business leader is less about having the answers than it is about asking the right questions. After over a decade of lecturing in university classrooms, I often found that encouraging my students to ask hard questions not only positioned them to work collaboratively to find compelling answers, but it also conditioned them to be comfortable with discomfort and uncertainty. As a society, I think that we far too often associate uncertainty with weakness, risk, and indecision. But in complex systems, uncertainty is not the enemy. It is a fact of life. Engaging with it demonstrates maturity and leadership. Conversely, avoiding it makes subordinates nervous about getting it wrong. What MIT’s new model does is provide a practical blueprint for building uncertainty into the system architecture of AI tools. It’s a lesson worth internalizing and mobilizing into an attitude or ‘posture’ for an organization. To work competently and confidently with AI, it’s crucial to foster a culture that trains employees to recognize the limits of AI outputs—not to ignore them or be afraid of them. The Coming Competitive AI Advantage: Trust As I noted in A Quiet Pivot to Safety in AI, the market is entering an age of AI responsibility. Not because it’s fashionable, but because it’s becoming foundational.
In sectors like finance, healthcare, insurance, and education, AI that can explain or qualify its results won’t be a luxury; rather, it will become a baseline expectation. We believe the real competitive advantage in AI will not come from the fastest deployment: it will come from building AI systems—and AI cultures—that can be trusted. That means leaders should stop asking only “What can this tool do for us?”, and start asking, “What signals are we sending by how we use it?” Modeling the behaviour you want to see, whether with customers, employees, or other stakeholders, is part of your brand now. To build from there, here are the next steps you should take to continue fostering a culture that embraces uncertainty and complexity: Train for skepticism. Help teams understand that AI outputs are not gospel. Teach them how to spot uncertainty, even if the tool doesn’t express it directly Invest in explainability. Use or request tools with explainable outputs and uncertainty estimates. Vendors that don’t offer this should face higher scrutiny Design escalation points. Don’t let ambiguous outputs become decision points. Build Human-in-the-Loop mechanisms so that ambiguous or low-confidence results are always reviewed (a sketch of one such mechanism follows at the end of this piece) Leaders need to communicate these things outwardly. Customers will forgive slowness, but they will not forgive false confidence. Tell them when the AI isn’t sure—and make that a point of pride. It is important to keep in mind that the best organizations won’t treat humility as a tone. They’ll build it into their infrastructure.
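To make that last recommendation concrete, here is a minimal sketch of a confidence-gated, human-in-the-loop wrapper. It is illustrative only: the ModelAnswer type, the 0.75 threshold, and the review queue are assumptions made for this example, not MIT's actual design, whose belief-score mechanism lives inside the model itself.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.75  # illustrative threshold; tune to the risk of the use case

@dataclass
class ModelAnswer:
    text: str
    belief_score: float  # confidence estimate in [0.0, 1.0] reported by the model

review_queue = []  # stand-in for a real ticketing or case-management system

def send_to_review_queue(question, answer):
    """Route a low-confidence exchange to human reviewers."""
    review_queue.append((question, answer))

def answer_with_escalation(question, answer):
    """Surface the model's answer only when it clears the confidence floor;
    otherwise abstain explicitly and escalate to a person."""
    if answer.belief_score >= CONFIDENCE_FLOOR:
        return answer.text
    send_to_review_queue(question, answer)
    return (
        "I'm not confident enough to answer this reliably "
        f"(confidence {answer.belief_score:.0%}). A person will follow up."
    )

# Example: a low-confidence answer is held back and queued for human review.
print(answer_with_escalation(
    "Is this fare refundable?",
    ModelAnswer(text="Yes, fully refundable.", belief_score=0.42),
))
```

The design choice worth noting: the abstention message states the uncertainty plainly rather than hiding it, which is exactly the posture the MIT work models.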
- What Happened to the Algorithmic Accountability Act | voyAIge strategy
What Happened to the Algorithmic Accountability Act The US's Algorithmic Accountability Act is in effect By Christina Catenacci Sep 19, 2024 Key Points: the Algorithmic Accountability Act of 2023 is indeed in effect and contains several requirements for covered entities the penalties are serious for covered entities that do not comply—the FTC can enforce the Act and has broad powers to investigate and find violations involving unfair or deceptive acts or practices. In addition, States can bring a civil action on behalf of residents in the State to obtain appropriate relief Canada does not have anything like the Algorithmic Accountability Act of 2023 or the EU’s AI Act Some may be wondering about the Algorithmic Accountability Act in the United States. What happened to it? Did it ever pass? Do companies need to comply with it? How is the American approach different from that of the EU and Canada? What is the Algorithmic Accountability Act? Generally speaking, the Algorithmic Accountability Act requires businesses that use automated decision systems to make critical decisions to study and report on the impact of those systems on consumers. What are critical decisions? They could be any decisions that have a significant effect on a consumer’s life, including housing, educational opportunities, employment, essential utilities, healthcare, family planning, legal services, or financial services. The Act also establishes the Bureau of Technology to advise the Federal Trade Commission (FTC) about the technological aspects of its functions. What is the status of the Algorithmic Accountability Act? This story began at the beginning of 2022, with the 117th Congress. Bill HR 6580, the Algorithmic Accountability Act of 2022, was introduced in the House of Representatives and referred to the House Committee on Energy and Commerce. The bill was then referred to the Subcommittee on Consumer Protection and Commerce. However, nothing happened after that point. That is, it failed to gain the support it needed to become law. It was not until September 2023 that Bill HR 5628, the Algorithmic Accountability Act of 2023, was introduced in the House in the 118th Congress. Subsequently, Bill 5628 was referred to the House Committee on Energy and Commerce, and later referred to the Subcommittee on Innovation, Data, and Commerce. This time, it passed in the House and the Senate. It went to the President and then became law. What does the law require? The Algorithmic Accountability Act of 2023 contains several definitions, some of which include augmented critical decision process (process), automated decision system (system), biometrics, covered entity, critical decision (decision), deploy, develop, identifying information, impact assessment, passive computing infrastructure, and third-party decision recipient. This Act applies to covered entities.
Under the Act, a covered entity includes any person, partnership, or corporation that deploys any process and meets any of the following criteria (sketched in code below):
had greater than $50,000,000 in average annual gross receipts or is deemed to have greater than $250,000,000 in equity value for three tax years
possesses, manages, modifies, handles, analyzes, controls, or otherwise uses identifying information about more than 1,000,000 consumers, households, or consumer devices for developing or deploying any system or process
is substantially owned, operated, or controlled by a person, partnership, or corporation that meets the above two requirements
had greater than $5,000,000 in average annual gross receipts or is deemed to have greater than $25,000,000 in equity value for three tax years
deploys any system that is developed for implementation or use, or that the person, partnership, or corporation reasonably expects to be implemented or used, in a process
meets any of the above criteria in the last three years
Essentially, covered entities must perform impact assessments of any deployed process that was developed for implementation or use, or that the covered entity reasonably expects to be implemented or used, in an augmented critical decision process, and of any augmented critical decision process, both prior to and after deployment by the covered entity. The covered entity must also maintain documentation of any impact assessment performed for three years longer than the duration of time for which the system or process is deployed. Some other main requirements include:
disclosing status as a covered entity
submitting to the FTC an annual summary report for ongoing impact assessment of any deployed system or process (in addition to the initial summary report that is required for new systems or processes)
consulting with relevant internal stakeholders (such as employees and ethics teams) and independent external stakeholders (such as civil society and technology experts) as frequently as necessary
attempting to eliminate or mitigate, in a timely manner, any impact that affects a consumer's life
When it comes to the impact assessment, covered entities need to consider several things, depending on whether the system or process is new or ongoing. For example, for new systems or processes, covered entities must:
provide any necessary documentation
describe the baseline process being enhanced or replaced by a process
include information regarding any known harm, shortcoming, failure case, or material negative impact on consumers of the previously existing process used to make the critical decision
include information on the intended benefits of and need for the process, and the intended purpose of the system or process
It is also important to note that covered entities must, in accordance with National Institute of Standards and Technology (NIST) or other Federal Government best practices and standards, perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the system or process. Some examples of this include assessing and documenting the data minimization practices, assessing the information security measures that are in place, and assessing and documenting the current and potential future or downstream positive and negative impacts of these systems or processes.
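Since the coverage thresholds above are essentially a rule set, a rough screen can be written down in a few lines of code. The sketch below is illustrative only: it collapses the statute's tiers into a simplified boolean test, the Entity field names are invented for the example, and a True result should be read as "consult counsel", not as a legal conclusion.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    avg_annual_gross_receipts: float   # USD, averaged over three tax years
    equity_value: float                # USD, deemed equity value
    consumers_with_identifying_info: int
    deploys_critical_decision_process: bool

def may_be_covered(e: Entity) -> bool:
    """Rough screen for the Act's covered-entity thresholds as summarized
    above. Simplified: the statute combines these tests (ownership chains,
    the three-year look-back) in more nuanced ways than a single boolean."""
    large_entity = (e.avg_annual_gross_receipts > 50_000_000
                    or e.equity_value > 250_000_000)
    mid_size_deployer = ((e.avg_annual_gross_receipts > 5_000_000
                          or e.equity_value > 25_000_000)
                         and e.deploys_critical_decision_process)
    heavy_data_user = e.consumers_with_identifying_info > 1_000_000
    return e.deploys_critical_decision_process and (
        large_entity or heavy_data_user or mid_size_deployer
    )

# Example: a firm with $60M in receipts that deploys such a process screens in.
print(may_be_covered(Entity(60_000_000, 0, 200_000, True)))  # True
```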
With respect to training and education, all employees, contractors, and other agents must be trained regarding any documented material negative impacts on consumers from similar systems or processes and any improved methods of developing or performing an impact assessment for such system or process based on industry best practices and relevant proposals and publications from experts, such as advocates, journalists, and academics. Covered entities must also maintain and keep updated documentation of any data or other input documents used to develop, test, maintain, or update the system or process, including sourcing (metadata about the structure and type of data, an explanation of the methodology, and whether consumers provided informed consent for the inclusion and further use of data or other input information about themselves). Other factors to consider include why the data was used and what alternatives were explored, evaluations of the rights of consumers, and assessments of explainability and transparency. Covered entities are also responsible for identifying any capabilities, tools, standards, datasets, security protocols, improvements to stakeholder engagement, or other resources that may be necessary or beneficial to improving the system, process, or the impact assessment of a system or process, in areas such as: performance, including accuracy, robustness, and reliability; fairness, including bias and non-discrimination; transparency, explainability, contestability, and opportunity for recourse; privacy and security; personal and public safety; efficiency and timeliness; cost; or any other area determined appropriate by the FTC. The FTC will publish an annual report summarizing the information in summary reports, along with a public repository designed to publish a limited subset of the information about each system and process for which the FTC received a summary report. The FTC will also publish guidance and technical assistance. Most importantly, covered entities must note that the FTC can enforce the Act, as it has broad powers to investigate and find violations involving unfair or deceptive acts or practices. Moreover, States can bring a civil action on behalf of residents in the State to obtain appropriate relief. How is this Act different from the EU's Artificial Intelligence Act? And Canada? As discussed above, the American approach in the Act focuses solely on automated processes and systems deployed to render critical decisions. It is a standalone regime and is quite brief. On the other hand, the EU's Artificial Intelligence Act (AI Act) covers a wider range of AI systems and provides nuanced regulatory requirements tied to the level of risk that an AI system poses to the public. In particular, the EU's AI Act separates AI systems into three categories: unacceptable risk, high-risk, and low/minimal risk. Yet, there are some similarities between the two approaches: both address serious decisions that have a significant impact on consumers. What about Canada? Unfortunately, Canada does not have a law like the US Act described above, or the EU AI Act. That said, there is something similar to the US Act in the Canadian public sector, namely the Directive on Automated Decision-Making (Directive). This Directive requires algorithmic impact assessments for each system that is deployed by a federal institution. However, the Directive does not apply to private sector businesses in Canada.
When it comes to the private sector, we are still dealing with Bill C-27, which is still in its infancy and combines an update to Canada's private-sector privacy law, PIPEDA (the CPPA), with a brand-new AI law (AIDA). This legislative process may be delayed even further if there is an early election, which could happen at any time, now that the NDP has prematurely ripped up the 2022 supply and confidence deal it struck to support Prime Minister Justin Trudeau's minority government. Lastly, it is worth pointing out that neither the proposed CPPA nor AIDA tackles the concept of algorithmic accountability in the same way as the US or the EU. In fact, AIDA is completely lacking when it comes to algorithmic accountability.
- How an Infrastructure Race is Defining AI’s Future | voyAIge strategy
How an Infrastructure Race is Defining AI’s Future Why Nvidia’s $100B investment in OpenAI signals a shift every business leader must understand By Tommy Cooke, powered by really great espresso Sep 26, 2025 Key Points: 1. Access to compute, not software features, will determine who can compete in AI 2. Vendor entanglement may speed adoption but increases dependency and lock-in risks 3. The AI arms race is accelerating, shrinking the competitive window for differentiation Nvidia has committed up to US$100 billion in a staged investment into OpenAI, with the funds intended to build massive AI data centres powered by Nvidia’s own chips. The first deployments are scheduled for 2026, with each dependent on new infrastructure coming online. On the surface, this is a story about one company betting big on another. But if you are a business leader, it signals something deeper. It means that access to compute power (the chips, servers, and energy needed to run AI) will continue to determine who can compete, how fast they can innovate, and whether they can deliver reliable AI products to clients. Therefore, if you are building, selling, or integrating AI, your advantage is no longer defined by software features alone. It is defined by whether you can access and afford the infrastructure that makes those features possible. Compute as the New Moat OpenAI has said bluntly: “everything starts with compute”. Nvidia’s investment proves the point. Frontier AI models are limited not by imagination but by access to chips, data centres, and power. For businesses, this flips the equation. Software can be replicated, but compute capacity cannot be conjured overnight. The companies that secure infrastructure will enjoy a durable moat. This means faster model training, better uptime, and the ability to scale globally. Those without access risk being left behind—no matter how strong their ideas or datasets. Vendor Financing at Unprecedented Scale This deal is also significant to us as business leaders because it blurs the line between supplier and customer. Nvidia is both investing in OpenAI and guaranteeing that OpenAI’s infrastructure will be built on Nvidia hardware. Some analysts call it vendor financing at an unprecedented scale. The lesson for business leaders is twofold: First, expect suppliers to become more embedded in clients’ strategic direction, offering capital and integration alongside products Second, recognize the risk. Deeper vendor entanglement often accelerates adoption but reduces bargaining power. Tech vendors who become dependent on a single infrastructure partner may find themselves locked into costs and roadmaps that they cannot control Capital Intensity as a Barrier to Entry A single gigawatt of Nvidia systems may cost US$35 billion in hardware alone. This makes clear that the frontier of AI is not just technologically complex. It is financially punishing. For most organizations, the takeaway is not to match Nvidia or OpenAI dollar-for-dollar. Rather, it is to understand that capital intensity itself is now a barrier to entry. Competing at the frontier requires access to extraordinary financial and infrastructure resources. Vendors and enterprises need to calibrate their vision, invest in the right scale of AI for their market, partner strategically where necessary, and focus on ROI-driven deployments rather than chasing the biggest models. Regulatory and Market Risks Nvidia already dominates the global AI chip market. 
Adding a deep financial stake in OpenAI potentially raises antitrust concerns about preferential access and market distortion. Governments are watching closely. To you, the business leader, this matters because regulation could reshape market dynamics in ways that affect everyone. Just as governments regulated telecom and energy to ensure fair access, AI infrastructure could face new rules that mandate openness, limit exclusivity, or scrutinize vertical integrations. Leaders must anticipate these shifts and avoid strategies that depend on fragile or privileged vendor relationships. The Acceleration Effect Perhaps the most significant implication is this: the investment accelerates the AI arms race. By de-risking OpenAI’s infrastructural future, Nvidia is ensuring that larger models can be trained and deployed faster, compressing innovation cycles from years to months. For businesses, the competitive window is shrinking. The pace of AI progress means that differentiators based solely on early adoption will fade quickly. Staying competitive will require constant reinvestment and operational agility—not just one-time pilots. What Leaders Should Do Now We recommend that leaders do the following: Treat Infrastructure as Strategy. AI isn’t just software. It depends on access to compute, bandwidth, and energy. Executives must recognize infrastructure as a strategic variable, not an IT detail Diversify Dependencies. Relying on a single vendor—whether for chips, cloud, or capital—is a risk. Explore multi-cloud strategies, alternative hardware, and hybrid deployments Negotiate Beyond Cost. Vendor agreements should secure more than price. Push for supply guarantees, roadmap visibility, and exit flexibility Anticipate Regulation. Monitor antitrust and AI policy developments. Regulation may alter vendor dynamics and market access. Build Literacy. Equip your teams with an understanding of latency, scaling costs, and compute economics. The winners will be those who can align AI ambition with operational reality. Focus on the Bigger Picture Nvidia’s $100 billion bet is more than a financial deal. It is a signal that AI’s future will be shaped by who controls the foundations of compute. For business leaders, the message is clear: innovation, product design, and customer experience flow from infrastructure. The AI market will not be won by those with the cleverest algorithms alone, but by those who can reliably access the chips and data centres that make those algorithms work at scale. This is why the infrastructure race matters, not only to Nvidia and OpenAI, but to every vendor and enterprise hoping to compete in the AI-driven economy.
- Training | voyAIge strategy
Hands-on AI training and workshopping to equip teams with the knowledge to navigate AI confidently. Training As experienced corporate trainers and professors, we have the skills to deliver comprehensive and tailored workforce and executive training. We are lifelong learners who prioritize excitement in learning. In the fast-evolving world of AI and technology, staying ahead requires more than just knowledge—it demands a deep, hands-on understanding of the latest tools, techniques, and strategies. At voyAIge strategy, we don't just teach AI; we empower your team to lead with it. Our training programs are designed to be as dynamic and forward-thinking as the technologies they cover, ensuring your team gains the skills and confidence to innovate and excel. From immersive workshops to tailored training modules, our expert-led sessions blend cutting-edge content with practical applications. Whether you're looking to upskill your workforce, implement new AI initiatives, or ensure compliance with emerging regulations, our training solutions are built to meet your specific needs and drive real results. Workshops Courses Briefings Roundtables Workshops Our immersive workshops deliver deep, hands-on learning. Led by our law, policy, and ethics experts who have more than 20 years of experience educating private and public sector audiences, our sessions showcase the latest AI technologies, trends, and methods. Participants are engaged in practical exercises, case studies, and real-world scenarios to reinforce key concepts and foster innovative thinking. Book a free consultation Roundtables Our roundtable sessions offer a unique, interactive learning experience where participants engage in open, facilitated discussions on critical AI topics. These sessions are designed to foster deep collaboration, allowing your team to share insights, challenge ideas, and explore innovative solutions in a dynamic group setting. Roundtables are ideal for organizations seeking consensus via collective expertise. Book a free consultation Briefings Designed for senior leaders and decision-makers, our executive briefings provide a strategic overview of AI's impact on your industry. These high-level sessions cover emerging trends, regulatory landscapes, and the strategic considerations necessary to drive AI adoption in your organization. Our briefings ensure that your leadership team is well-informed and ready to make data-driven decisions. Book a free consultation Courses We understand that every organization has unique needs, which is why we offer tailored training modules that can be customized to align with your specific goals. From foundational AI concepts to advanced applications, we work with you to design a curriculum that addresses your team's knowledge gaps and prepares them for the challenges ahead. Book a free consultation Innovative & Adaptive Methodologies Book a free consultation to chat with us about accurately identifying the compliance and regulatory requirements applicable to your organization. Interaction Our training includes real-world simulations. Participants apply their knowledge in a risk-free space, solving problems and making decisions that mirror actual scenarios. Collaboration We foster collaborative learning spaces where participants work together on projects and case studies. This approach encourages knowledge sharing and team cohesion. Gamification To make learning fun and motivating, we gamify elements of our sessions.
From completing modules to winning prizes, these elements add excitement and anticipation. Before every training engagement, we measure your needs and study your audience and their learning style. We adapt our delivery, tone, targets, and priorities to ensure that your organization's learning objectives are clear and attainable. Our Founders, Leading Workshops and Training Sessions Ready to Equip your Team? Book a Free Consultation
- HOME w/ insights | voyAIge strategy
Navigate your AI Journey with Confidence Innovative Risk Solutions | Strategic AI Management | Law, Policy & Ethics Expertise TALK TO AN EXPERT Trending Insights Canada’s Innovation Crossroads New York Governor Hochul Signs AI Safety and Transparency Bill into Law Privacy Commissioner Investigation into Social Media Platform, X SUBSCRIBE TO STAY INFORMED Our Services Streamline your operations with our expertly crafted policies and procedures, ensuring your AI initiatives are both effective and compliant. For organizations who: need to establish new or update existing AI governance frameworks Benefits organizations by: ensuring that AI is used safely and ethically within set guidelines Example: An insurance company adopts our crafted policies to govern and manage their AI claims processes Policy & Procedure Ensure your AI systems adhere to all legal standards with our thorough compliance review, which aims to detect and minimize risk while enhancing trustworthiness. For organizations who: need to ensure their AI systems comply with regulatory standards or anticipate forthcoming legislative changes Benefits organizations by: reducing legal, financial, and reputational risk while enhancing organizational clarity Example: A healthcare provider utilizes our service to align their patient data processing AI tools with PHIPA Compliance We lend our extensive experience in professional research and writing to provide insightful, impactful content tailored to support and drive your AI-related needs. For organizations who: require in-depth analysis of AI topics Benefits organizations by: providing expert insight and professionally written content to support decision-making Example: an App developer contracts us to create a market analysis report to detail AI advancements and emerging industry trends Research & Writing We engage your audiences with unique viewpoints that demystify the complex landscape of scholarly, political, popular, and media understandings of AI. For organizations who: need an expert spokesperson on AI for corporate or public engagement events Benefits organizations by: educating audiences with the latest insights in AI ethics, law, and policy Example: our founders deliver a keynote on AI trends in neighbouring countries so as to prepare Canadians for forthcoming AI legislation in 2025 Invited Talks Empower your team with the latest AI knowledge as well as legal and ethical perspectives, designed to enhance and extend AI decision-making. For organizations who: are aiming to elevate their team's understanding and capabilities in AI Benefits organizations by: boosting AI literacy, enhancing both strategic and operational capacities Example: a multinational corporation uses our services to enhance their annual leadership development program Executive & Staff Training We take deep dives into your privacy policies as well as data and AI operations to uncover and resolve otherwise hidden risks and biases. For organizations who: want to understand the broader implications of AI projects Benefits organizations by: identifying impacts on employees, customers, and stakeholders as well as operational processes Example: we assess the impact of using AI in public service delivery of a location-based asset tracking system Impact Assessments Navigate the complexities of AI with confidence as we design your own guide to implementing and strategically responding to AI issues effectively.
For organizations who: are mindful of how their employees, customers, and stakeholders think of AI's impacts Benefits organizations by: guiding teams in ethical decision-making, fostering trust and transparency Example: we create a customized playbook focusing on perceived labour reduction implications around AI adoption Ethical AI Playbook Our team assesses your organization's needs, pain points, and opportunities. Work with us to discover the right AI solution for you. For organizations who: are exploring potential AI solutions to address specific operational challenges Benefits organizations by: clarifying the feasibility, scope, and value of AI solutions in alignment with business objectives Example: a retailer engages us to scope AI solutions for automating inventory replenishment AI Solution Scoping Maximize AI adoption and AI project success as we assist you in ensuring all parties are informed, involved, and invested from the outset. For organizations who: require buy-in from internal and external stakeholders Benefits organizations by: ensuring that parties are informed, engaged and supportive of your AI initiative Example: a software company utilizes our service to facilitate workshops that bring together developers and end-users in the AI adoption process Stakeholder Engagement Our Products AI Launch Packages Kickstart your AI implementation with: Tools and Resources for Pre, During, and Post AI Deployment Compliance & Ethics Frameworks and Expert Guidance on Best Practices & Strategies LEARN MORE Who We Are We are lawyers and professors with over 20 years of experience in advising private, public, and non-profit organizations on the intersection of technology with law, ethics, and policy. Dr. Tommy Cooke, BA, MA, PhD Co-Founder Dr. Christina Catenacci, LLM, LLB, PhD Co-Founder LEARN MORE Why Hire Us? Canada's Artificial Intelligence and Data Act (AIDA) will launch in 2025. It will place stringent requirements on Canadian organizations using AI. Organizations will require expert guidance to prepare for AIDA. LEARN MORE Contact Us CONTACT US
- Artificial Intelligence Act in the European Union | voyAIge strategy
Artificial Intelligence Act in the European Union EU's AI Act came into force on August 1, 2024 By Christina Catenacci Sep 6, 2024 Key Points The AI Act came into force on August 1, 2024 and begins to apply August 2, 2026 The more risk there is, the more stringent the obligations in the AI Act There are serious administrative fines for noncompliance On August 1, 2024, the Artificial Intelligence Act (AI Act) entered into force. Pursuant to Article 113, it begins to apply on August 2, 2026. Now referred to as the gold standard in AI regulation, the AI Act is the world's first comprehensive AI law. In fact, creating the AI Act was part of the EU’s digital transformation. The digital transformation, one of the EU’s priorities, involves the integration of digital technologies by companies. For instance, digital platforms, the Internet of Things, cloud computing, and AI are all among the technologies that are involved in the digital revolution. In terms of AI, the EU sees several benefits, including improved healthcare, competitive advantages for businesses, and support for the green economy. First proposed in 2021, the AI Act classifies the various applications according to the risk they pose to users. The more risk, the more regulation is required. The main goal is to ensure that AI systems that are used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. One thing is clear: AI systems are to be overseen by humans. Since there are different rules for different risk levels, we can set out some of the obligations for providers and users depending on the level of risk from AI: Unacceptable risk: certain systems are just too risky since they pose a threat to humans. Therefore, the following are banned: cognitive behavioural manipulation of people or specific vulnerable groups; social scoring; biometric identification and categorisation of people; and real-time and remote biometric identification systems such as facial recognition High risk: certain systems that negatively affect safety or fundamental rights are considered high risk and are divided into two categories: AI systems that are used in products falling under the EU’s product safety legislation, like toys, cars, and medical devices; and AI systems falling into specific areas that will have to be registered in an EU database, like management and operation of critical infrastructure, employment, and law enforcement. These systems need to be assessed before being put on the market and throughout their lifecycle Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled, and systems must be prevented from generating illegal content. For example, generative AI (like ChatGPT) must comply with transparency requirements and copyright law Minimal risk: most AI systems, such as spam filters and AI-enabled video games, face no obligation under the AI Act, but companies can voluntarily adopt additional codes of conduct. Article 1 clearly sets out the purpose of the AI Act: to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the EU and supporting innovation.
It lays down: harmonized rules for the placing on the market, the putting into service, and the use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the placing on the market of general-purpose AI models; rules on market monitoring, market surveillance, governance and enforcement; and measures to support innovation, with a particular focus on SMEs, including start-ups. The AI Act applies to the following:
providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, irrespective of whether those providers are established or located within the EU or in a third country
deployers of AI systems that have their place of establishment or are located within the EU
providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU
importers and distributors of AI systems
product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark
authorized representatives of providers which are not established in the EU
affected persons that are located in the EU
This provision is very important for Canadians because, similar to the EU’s General Data Protection Regulation (GDPR), the AI Act can apply even if you are established in a third country outside the EU. That is, if providers place on the market or put into service AI systems, or place on the market general-purpose AI models, in the EU, they need to pay attention to and comply with the requirements in the AI Act. And if the output produced by a provider's or deployer's AI system is used in the EU, they need to pay attention and comply. Why is this important? Following the list of prohibited practices in Article 5 and the numerous obligations set out in the remaining parts of the AI Act, there is a penalty provision in Article 99 that is critical for private-sector entities to understand: noncompliance with the prohibited AI practices referred to in Article 5 is subject to administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to seven percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Moreover, noncompliance with Articles 16, 22, 23, 24, 26, 31, 33, 34, or 50 is subject to administrative fines of up to €15,000,000 or, if the offender is an undertaking, up to three percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Member States need to notify the Commission regarding the penalties imposed.
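To see how the "whichever is higher" mechanism works, the Article 99 ceiling for a prohibited-practice violation by an undertaking can be written as a simple formula, with the turnover figures below assumed purely for illustration:

$$\text{maximum fine} = \max\bigl(€35{,}000{,}000,\; 0.07 \times \text{worldwide annual turnover}\bigr)$$

So for an undertaking with an assumed €2 billion turnover, seven percent is €140,000,000, which exceeds €35,000,000, making €140 million the ceiling; for one with €100 million turnover, seven percent is only €7 million, so the flat €35,000,000 figure governs.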
When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following:
the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them
whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement
whether administrative fines have already been applied by other authorities to the same operator for infringements of other EU or national law, when such infringements result from the same activity or omission constituting a relevant infringement of the Act
the size, the annual turnover and market share of the operator committing the infringement
any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement
the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement
the degree of responsibility of the operator taking into account the technical and organizational measures implemented by it
the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement
the intentional or negligent character of the infringement
any action taken by the operator to mitigate the harm suffered by the affected persons
Administrative fines set out in Article 100 deal with fines on EU institutions, bodies, offices and agencies (the public sector) falling within the scope of the AI Act. These fines can be hefty too and can be up to €1,500,000.
When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account, and due regard must be given to the following:
the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by them
the degree of responsibility of the EU institution, body, office or agency, taking into account technical and organisational measures implemented by them
any action taken by the EU institution, body, office or agency to mitigate the damage suffered by affected persons
the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the EU institution, body, office or agency concerned with regard to the same subject matter
any similar previous infringements by the EU institution, body, office or agency
the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the EU institution, body, office or agency notified the infringement
the annual budget of the EU institution, body, office or agency
As can be seen from the above discussion, we all need to pay attention to the gold standard of AI regulation. And Canadians (and those in other third countries) to whom the AI Act applies should sit up and ensure that they are in compliance. The good news is that there is a grace period to allow organizations to come into compliance. It is highly recommended that businesses use this time to learn as much as possible about the regulation and hire competent professionals to help them comply.
- EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA) | voyAIge strategy
EU Commission finds Apple and Meta in breach of the Digital Markets Act (DMA) The fines were huge: Apple was fined €500 million, and Meta was fined €200 million By Christina Catenacci, human writer May 9, 2025 Key Points: Apple and Meta were fined by the EU Commission for violating the DMA: Apple was fined €500 million, and Meta €200 million The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy Noncompliance with the DMA carries serious consequences in the form of fines, penalties, and additional fines in the case of continued noncompliance

On April 22, 2025, the EU Commission announced that Apple breached its anti-steering obligation under the DMA, and that Meta breached the DMA obligation to give consumers the choice of a service that uses less of their personal data. As a result, the Commission fined Apple €500 million and Meta €200 million. But what is the DMA? What were these obligations that Apple and Meta violated? Why were the fines so high? Does this affect businesses in Canada or the United States? This article answers these questions.

What is the DMA? The DMA is an EU regulation that aims to ensure fair competition in the EU digital economy. Its main goal is to regulate large online platforms, called gatekeepers (big companies like Apple, Meta, or Google), so that these companies do not abuse their market power. Essentially, the purpose of the DMA is to make markets in the digital sector fairer and more contestable (a contestable market is one that is fairly easy for new companies to enter). In other words, the market is more competitive thanks to the DMA.

More specifically, gatekeepers have to comply with the do's (i.e., obligations) and don'ts (i.e., prohibitions) listed in the DMA. For example, gatekeepers have to:
allow third parties to inter-operate with the gatekeeper's own services in certain specific situations
allow their business users to access the data that they generate in their use of the gatekeeper's platform
provide companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements hosted by the gatekeeper
allow their business users to promote their offer and conclude contracts with their customers outside the gatekeeper's platform
Also, gatekeepers must not:
treat services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform
prevent consumers from linking up to businesses outside their platforms
prevent users from uninstalling any pre-installed software or app if they so wish
track end users outside of the gatekeepers' core platform service for the purpose of targeted advertising, without effective consent having been granted
As a result of the DMA, consumers have more choice of digital services and can install preferred apps (with choice screens), gain more control over their personal data (users decide whether companies can use their data), can port their data easily to the platform of their choice, have streamlined access, and receive unbiased search results. As we have just seen, the consequences of noncompliance can be quite costly. In particular, there can be fines of up to 10 percent of the company's total worldwide annual turnover, or up to 20 percent in the event of repeated infringements.
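To put those ceilings in perspective, here is a rough back-of-the-envelope sketch. The turnover figure is a made-up placeholder, and the function simply applies the 10 and 20 percent caps described above; it illustrates the scale of exposure, not how the Commission actually calculates a fine.

```python
# Illustrative only: the DMA sets maximum fines as a share of total worldwide
# annual turnover; the Commission weighs gravity and duration when setting the
# actual amount. The turnover figure below is hypothetical.

def dma_max_fine(worldwide_annual_turnover_eur: float,
                 repeated_infringement: bool = False) -> float:
    """Fine ceiling under the DMA: 10% of worldwide annual turnover,
    rising to 20% in the event of repeated infringements."""
    cap = 0.20 if repeated_infringement else 0.10
    return cap * worldwide_annual_turnover_eur

# A hypothetical gatekeeper with €100 billion in worldwide annual turnover:
turnover = 100e9
print(f"First-infringement ceiling:  €{dma_max_fine(turnover):,.0f}")         # €10,000,000,000
print(f"Repeat-infringement ceiling: €{dma_max_fine(turnover, True):,.0f}")   # €20,000,000,000
```

Against ceilings of that magnitude, even the €500 million and €200 million fines imposed here sit well below the maximum, which leaves the Commission considerable headroom for repeat offenders.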
Moreover, there could be periodic penalty payments of up to five percent of the average daily turnover. Furthermore, in the case of systematic infringements by gatekeepers, additional remedies may be imposed on the gatekeepers after a market investigation (these remedies have to be proportionate to the offence committed). If necessary, as a last resort, non-financial remedies can be imposed, including behavioural and structural remedies such as divestiture of (parts of) a business.

In Canada, we have the Competition Act; similarly, the United States has antitrust laws such as the Sherman Antitrust Act. For example, in Canada, the Competition Bureau recently brought a court action against Google for abusing its monopoly over search. Likewise, the Antitrust Division of the Department of Justice in the United States brought an antitrust action against Meta regarding its acquisition of Instagram and WhatsApp. I'd be remiss not to mention that Canadian and American businesses could be subject to the DMA in certain circumstances. This is because the DMA applies to core platform services provided or offered by gatekeepers to business users established in the EU or end users established or located in the EU, irrespective of the place of establishment or residence of the gatekeepers and irrespective of the law otherwise applicable to the provision of service. In other words, regardless of where gatekeepers are located or resident, if they offer their services to users in the EU, they are subject to the DMA. This requirement can be found in Article 1 of the DMA.

Why was Apple fined €500 million? Under the DMA, app developers distributing their apps on Apple's App Store should be able to inform customers (free of charge) of alternative offers outside the App Store, steer them to those offers, and allow them to make purchases. However, Apple did not allow this. Due to a number of restrictions imposed by Apple, app developers cannot fully benefit from the advantages of alternative distribution channels outside the App Store. Similarly, consumers cannot fully benefit from alternative and cheaper offers, since Apple prevents app developers from directly informing consumers about such offers. The company has failed to demonstrate that these restrictions are objectively necessary and proportionate. The Commission has therefore ordered Apple to remove the technical and commercial restrictions on steering and to refrain from perpetuating the non-compliant conduct in the future, which includes adopting conduct with an equivalent object or effect. When imposing the €500 million fine, the Commission took into account the gravity and duration of the non-compliance. At this point, the Commission has closed the investigation into Apple's user choice obligations, thanks to early and proactive engagement by Apple on a compliance solution.

And why was Meta fined €200 million? Under the DMA, gatekeepers must seek users' consent before combining their personal data between services. Users who do not consent must have access to a less personalised but equivalent alternative. But Meta did not do this. Instead, it introduced a binary 'Consent or Pay' advertising model. Under this model, EU users of Facebook and Instagram had a choice between consenting to personal data combination for personalised advertising, or paying a monthly subscription for an ad-free service.
As a result, the Commission found that Meta's model was not compliant with the DMA, because it did not give users the required specific choice to opt for a service that used less of their personal data but was otherwise equivalent to the 'personalised ads' service. Meta's model also did not allow users to exercise their right to freely consent to the combination of their personal data. Subsequently (after numerous exchanges with the Commission), Meta introduced another version of the free personalised ads model, offering a new option that allegedly uses less personal data to display advertisements. The Commission is currently assessing this new option and continues its dialogue with Meta, and it has requested that the company provide evidence of the impact that this new ads model has in practice. Accordingly, the non-compliance decision covers the period during which EU users were only offered the binary 'Consent or Pay' option, from March 2024 (when the DMA obligations became legally binding) to November 2024 (when Meta's new ads model was introduced). When imposing the fine, the Commission took into account the gravity and duration of the non-compliance.

What's more, the Commission also found that Meta's online intermediation service, Facebook Marketplace, should no longer be designated under the DMA, mostly because Marketplace had fewer than 10,000 business users in 2024. Meta therefore no longer met the threshold giving rise to a presumption that Marketplace was an important gateway for business users to reach end users.

What can we take from this development? These decisions against Apple and Meta are the first noncompliance decisions adopted under the DMA. Both Apple and Meta are required to comply with the Commission's decisions within 60 days, or else they risk periodic penalty payments. It is clear that the DMA is a serious regulation: businesses that offer products and services to consumers in the EU need to be aware of it and act accordingly if they want to avoid serious fines and penalties. In like manner, businesses that are captured by the DMA should be aware that fines and penalties continue, and worsen over time, if the noncompliance persists. Businesses that are subject to domestic competition/antitrust legislation in Canada and the United States should note that the consequences, albeit less severe than under the DMA, are also grave where businesses abuse their monopoly power and ignore regulators. Why is competition so important? The goal of these laws is to protect the competitiveness of markets and to protect consumers by ensuring that they have choice and are not subject to pressure from companies that abuse monopoly power. Take a look at an article that I wrote about antitrust woes here.

Indeed, some companies are watching what is happening to Apple and Meta and responding in a positive, proactive, and cooperative manner. For instance, Microsoft President Brad Smith has announced a landmark set of digital commitments aimed at strengthening the company's relationship with Europe, expanding its cloud and AI infrastructure, and reinforcing its respect for European laws and values. Likely attempting to learn from past antitrust mistakes (think of Microsoft's antitrust case back in the late 1990s), Brad Smith stated: "We respect European values, comply with European laws, and actively defend Europe's cybersecurity. Our support for Europe has always been–and always will be–steadfast"
- HR AI Solutions | voyAIge strategy
Help your organization overcome uncertainty about AI in the workplace. Alleviate Concerns About AI Transform Worries into Trust through Training, Thought Leadership, and AI Policies Book a Free Strategy Session

Are your Employees Anxious About AI? Bringing AI into the workplace can lead to significant employee concerns. As an HR leader, you may be dealing with employees who are worried about losing their jobs, resistant to adopting AI tools, and uncertain about the ethical use of AI in their roles.

Reassure Your Team Work with us to deliver customized Training, Thought Leadership, and Policies and Procedures that address employees' concerns and facilitate smoother AI implementation. Working with us empowers you to:
Train your Staff & Executives: Equip your HR team with the skills and knowledge needed to understand and embrace new technologies
Generate Thought Leadership Insights: Provide cutting-edge insights and expert commentary on AI trends, enabling you to communicate the benefits of AI clearly while fostering trust and acceptance
Implement AI Policies & Procedures: Collaborate with us to develop ethical guidelines for AI use, ensuring your employees feel safe and valued
Book a Free Strategy Session
- New York Governor Hochul Signs AI Safety and Transparency Bill into Law | voyAIge strategy
New York Governor Hochul Signs AI Safety and Transparency Bill into Law New Law Takes Effect January 1, 2027 By Christina Catenacci, human writer Jan 23, 2026 Key Points: On December 19, 2025, Governor Hochul signed New York's Senate Bill S6953B into law The new law requires the most basic commonsense steps to be taken when training an AI model New York has joined progressive states like California at the forefront of AI regulation

On December 19, 2025, Governor Hochul signed New York's Senate Bill S6953B into law. Similar to California's new AI law, New York's AI law focuses on safety and transparency by requiring safety reports for powerful frontier artificial intelligence models in order to limit critical harm. It is interesting to see that New York's AI law was enacted notwithstanding President Trump's recent attempt to thwart the progress of legislative reform via the December 11, 2025 Executive Order, which I wrote about here.

What Is the New AI Law About? The preamble aptly notes that the law has not kept up with this rapidly developing AI technology. It also observes that even to open a daycare center, it is necessary to have a safety plan. To this end, the new law requires the most basic commonsense steps when training an AI model:
Have a safety plan to prevent severe risks
Conspicuously publish a redacted version of the safety plan
Disclose major security incidents so that no one has to make the same mistake twice
The preamble also notes that in 2023, more than a thousand experts, including the CEOs of Google DeepMind, Anthropic, and OpenAI and many world-leading academics, signed a letter stating that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". As a result, the goal is to reduce risks in a targeted and surgical manner by limiting the law to only a small set of very severe risks; the law will not apply to most AI companies. Rather, it removes economic incentives to cut corners or abandon safety plans for companies that cause over $1 billion in damage or hundreds of deaths or injuries. Accordingly, the law does not address issues involving bias, authenticity, workforce impacts, and other concerns, which need to be handled with additional legislation.

What Does the New AI Law Require? The law defines "critical harm" as the death or serious injury of 100 or more people, or at least $1 billion in damages to rights in money or property, caused by a large developer's use, storage, or release of a frontier model through either the creation of a chemical, biological, radiological, or nuclear weapon, or an AI model acting with no meaningful human intervention in conduct that would, if committed by a human, constitute a crime requiring intent, recklessness, or gross negligence (or solicitation or aiding and abetting). However, harm inflicted by a human actor is not considered to be the result of a developer's activities unless those activities were a substantial factor in bringing about the harm, were reasonably foreseeable, and could have been prevented. The law also defines a "frontier model" as an AI model trained using more than 10^26 computational operations, the compute cost of which exceeds $100 million, or an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost exceeds $5 million. To be clear, the law only applies to frontier models deployed in whole or in part in New York.
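Since the definitions above turn on hard numeric thresholds, they can be expressed almost directly as code. The sketch below is a minimal illustration of those thresholds as the article summarizes them; the function names and parameters are my own, and none of this is legal advice.

```python
# Illustrative sketch of the statutory thresholds summarized above.
# Function and parameter names are my own inventions, not terms from the law.

def is_frontier_model(training_operations: float,
                      compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Check the definitional thresholds for a 'frontier model'."""
    if distilled_from_frontier:
        # Knowledge distillation of a frontier model: $5 million compute-cost threshold
        return compute_cost_usd > 5_000_000
    # Otherwise: more than 10**26 computational operations and a compute cost over $100 million
    return training_operations > 10**26 and compute_cost_usd > 100_000_000

def meets_critical_harm_threshold(deaths_or_serious_injuries: int,
                                  damages_usd: float) -> bool:
    """Check the 'critical harm' thresholds: 100 or more deaths/serious injuries,
    or at least $1 billion in damages to rights in money or property."""
    return deaths_or_serious_injuries >= 100 or damages_usd >= 1_000_000_000

# A hypothetical model trained with 3e26 operations at a $150 million compute cost:
print(is_frontier_model(3e26, 150_000_000))   # True
print(is_frontier_model(1e25, 150_000_000))   # False: operations threshold not met
```

The point of the sketch is simply that the statute's scope is narrow and mechanical: a model either crosses these thresholds or it does not.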
Moreover, the law defines a "safety incident" as a known incident of critical harm, or an incident that creates an increased risk of critical harm, such as:
a frontier model autonomously engaging in behaviour other than at the request of a user
theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model
the critical failure of any technical or administrative controls (including controls limiting the ability to modify a frontier model)
unauthorized use of a frontier model
The following are transparency requirements regarding frontier model training and use, which a large developer must satisfy before deploying the frontier model:
Implement a written safety and security protocol
Retain an unredacted copy of the safety and security protocol (including records and dates of any updates) for as long as the frontier model is deployed plus five years
Conspicuously publish a copy of the safety and security protocol with appropriate redactions, transmit a copy of the redacted protocol to the AG and the Division of Homeland Security and Emergency Services, and grant access to the AG
Record and, where possible, retain information on the specific tests and test results used in assessments of the frontier model for as long as the frontier model is deployed plus five years, and implement appropriate safeguards
The following are prohibitions:
A large developer must not deploy a frontier model if doing so would create an unreasonable risk of critical harm
A large developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced
The AG can bring a civil action for a violation, which can result in a civil penalty of up to $10 million for a first violation and up to $30 million for any subsequent violation. Large developers must also do the following:
Conduct an annual review of any required safety and security protocol and, if necessary, make modifications. If modifications are made, the large developer must publish the modified protocol in the same manner as before
Disclose each safety incident affecting the frontier model to the AG and the Division of Homeland Security and Emergency Services within 72 hours. The disclosure must include: the date of the safety incident; the reasons it qualifies as a safety incident; and a short statement describing the safety incident

What Can We Take from This Development? New York has joined those states that are at the forefront of AI regulation. In fact, the law has been referred to as a landmark AI safety bill because it aims to protect New Yorkers from AI risks while also supporting innovation. Indeed, the preamble of the law refers to the January 2025 AI Action Summit's International AI Safety Report, which discussed the myriad AI risks. We can only hope that other progressive states will boldly join California and New York and enact similar safety and transparency legislation.
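As a practical footnote, the 72-hour disclosure window described above is the kind of requirement compliance tooling can track automatically. The snippet below is a minimal sketch of that deadline arithmetic; it assumes the clock starts when the incident is discovered, which is an assumption on my part rather than something the summary above specifies.

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch: 72-hour disclosure window for safety incidents.
# Assumes the window runs from discovery of the incident (my assumption).
DISCLOSURE_WINDOW = timedelta(hours=72)

def disclosure_deadline(incident_discovered_at: datetime) -> datetime:
    """Deadline to disclose a safety incident to the AG and the Division of
    Homeland Security and Emergency Services."""
    return incident_discovered_at + DISCLOSURE_WINDOW

incident = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)
print(disclosure_deadline(incident))  # 2027-03-04 09:30:00+00:00
```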