
  • Antitrust Woes | voyAIge strategy

    Antitrust Woes
    Meta and Google Find Themselves in Hot Water
    By Christina Catenacci, human writer
    Apr 18, 2025

    Key Points
    • Mark Zuckerberg just gave his testimony in the Meta antitrust case, where he tried to convince the court that he did not buy Instagram and WhatsApp to get rid of competitors
    • Google was found to have illegally built monopoly power with its web advertising business
    • We will have to wait to see what happens to Meta and Google: will they have to break up their companies?

    This article discusses some news in the world of antitrust law as it pertains to Meta and Google. It also discusses the importance of competition as described by the Competition Bureau of Canada and the Federal Trade Commission, so that we can draw some important points from recent developments.

    Meta

    This was an interesting week: it was reported that Mark Zuckerberg gave his testimony in the hottest antitrust case since the Google antitrust case, which I wrote about here. Over the last few days, Zuckerberg told U.S. District Judge James Boasberg that he purchased Instagram and WhatsApp because he saw value in the companies, rather than to eliminate competitors. Still, the FTC alleges that Meta used a monopoly in its technology to generate massive profits.

    Why is this important? Meta could be forced to break off Instagram and WhatsApp. The two companies were startups when Meta bought them over 10 years ago, but now they are massive components of Meta.

    Apparently, while he was on the stand, Zuckerberg was asked to look at emails he had written to associates before and after the acquisitions of Instagram and WhatsApp to clarify his motives. Was the purchase meant to halt Instagram’s growth and get rid of a threat? Or was it to improve Meta’s product by having WhatsApp run as an independent brand? Zuckerberg was just the first witness in the trial, so we have some time before we get the answer. The trial is expected to last about eight weeks.
    On the stand, Zuckerberg testified that it was his job to understand what was going on so that he and his teams could respond quickly. In fact, he stressed that he was operating in a very competitive environment and was not a monopolist: in full deflective mode, he asserted that people spent more time on YouTube than on Facebook and Instagram combined. The problem with that argument is that the FTC does not consider YouTube a proper comparator to friend-sharing technology, since YouTube is about sharing videos. The case dates back to 2020 and was filled with preliminary motions. We will keep you posted on any developments.

    Google

    This was also an interesting week for Google: in the second antitrust case, concerning the alleged advertising monopoly, Google was found to have illegally built monopoly power with its web advertising business. In fact, the Department of Justice (DOJ) just announced that its Antitrust Division prevailed in its second monopolization case against Google, since the company violated antitrust law by monopolizing open-web digital advertising markets. The DOJ called this a landmark victory in the ongoing fight to stop Google from monopolizing the digital public square. Judge Leonie Brinkema made the decision.

    We will soon find out what will happen to Google: will it need to be broken up? How will it be broken up? How long would it take? There will likely be a penalty phase, either late this year or early next year, where this is determined. This means that Google has been found to be a monopolist for the second time in a year: once for Search, and once for online advertising. We will keep you posted on any developments in the case.

    What Can We Take from These Developments?

    These cases against Meta and Google demonstrate how important it is for companies not to behave in anti-competitive ways.
    Interestingly, it does not seem to matter who was president when these cases were commenced; people frown on anti-competitive behaviour regardless of who is in power. Simply put, the goal is to get companies to compete without abusing monopoly power.

    Why is this all important? The Competition Bureau of Canada has explained the following about competition and its link to efficiency, innovation, and productivity:

    “Competition has the power to drive our productivity forward and benefit Canadian businesses and consumers alike. Competition can improve productivity in three ways:
    • Efficient use of resources: firms facing intense competitive pressure are likely to use their labour and resources more efficiently than those facing slack competition
    • Innovation: competition encourages firms to innovate and invest in new products and processes to gain a competitive edge on their rivals, and
    • Keep markets productive: healthy competition squeezes out lower-productivity firms and allows higher-productivity firms to thrive”

    The Bureau states that competition pushes individuals, firms, and markets to make the best use of their resources and to think of new ways of doing business and winning customers. As a result, productivity and Canadians’ standard of living increase. Additionally, competition makes goods and services more attractive to Canadian consumers and foreign buyers, which increases the competitiveness of Canadian exports, expands our output, and increases the economic benefits for Canadian workers, businesses, and investors.

    The Bureau also points out that competition is good for consumers too: it benefits Canadians by keeping prices low and keeping the quality and choice of products and services high. In this way, businesses must produce and sell the products that consumers want, and they need to offer them at prices that consumers are willing to pay.
    Consequently, consumers are still in the driver’s seat and are not forced to buy goods at unfair prices.

    The Federal Trade Commission also explains the following:

    “The FTC takes action to stop and prevent unfair business practices that are likely to reduce competition and lead to higher prices, reduced quality or levels of service, or less innovation. Anticompetitive practices include activities like price fixing, group boycotts, and exclusionary exclusive dealing contracts or trade association rules, and are generally grouped into two types:
    • agreements between competitors, also referred to as horizontal conduct
    • monopolization, also referred to as single firm conduct”

  • Six Keys to Consider Before Implementing AI Agents | voyAIge strategy

    Six Keys to Consider Before Implementing AI Agents
    Issues and Solutions to Help Prepare Your Organization
    By Tommy Cooke
    Oct 25, 2024

    Key Points
    • Identify specific tasks and processes where AI agents can make compelling contributions
    • Understand and address privacy, security, and ethics vulnerabilities prior to implementation
    • Communicate transparently with employees, offer training, and address their concerns

    AI agents have been trending explosively this past week. AI agents, as voyAIge strategy’s Co-Founder Dr. Christina Catenacci describes, are applications that use a combination of different AI models together with conventional automation frameworks to process information without human interaction. The goal of the AI agent is thus to supplement or even supplant human workers. Unlike the AI many of us have explored over the last calendar year, AI agents have autonomy. They make decisions, perform tasks, and create outputs based on defined goals.

    With companies like Microsoft and Salesforce introducing AI agents, many organizations are considering incorporating them into their operations, and rather quickly. However, it is essential to consider precisely what is at stake in adopting AI agents. How will they change workflows? What impact will they have on workforce morale? In what ways do they, and perhaps do they not, align with your organization’s goals and growth plans? Thoughtful, informed preparation is crucial if we are to ensure that AI agents enhance, as opposed to impede, critical processes.

    Here are key considerations that can help generate conversations with your business line leaders and executives and cultivate the strategic planning and insight your organization requires if and when implementing AI agents, especially if you are considering them in the long term.

    Define Objectives, Values, and Use Cases

    Before adopting AI agents, pinpoint exactly what value they can provide.
    A common misconception is that AI agents can immediately take over complex tasks or roles. This is not the case, especially in the early stages, where human oversight and model adjustment are required. Moreover, different departments and processes will benefit from AI agents in different ways.

    Begin by auditing your organization. Identify tasks that are repetitive, rule-based, and time-consuming; these are often prime candidates for automation. Basic administrative tasks, inventory management, calendar scheduling: these are all prime examples of time-consuming tasks that can benefit from an AI agent’s assistance. Start small. Focus on specific workflows and expand from there.

    Prioritize Security and Privacy

    AI agents process large amounts of data, some of it sensitive. Given that 51% of employees have tried tools like ChatGPT in the last year while only 17% of employers have AI policies that might regulate what information employees give to AI systems, there is already a risk that employees in your organization are feeding client information, data measurements, and insights valuable to your brand into AI systems. Employees are significantly outpacing their employers in using AI at work, and this blind spot can be highly problematic, opening your organization up to non-compliance and legal risks, because most AI systems that employees unofficially use at work train and model on the data anyone provides them. With AI agents, the potential risk increases significantly.

    If you do not yet have AI policies and data privacy policies in place, these are crucial requirements. You will also need to develop and implement a clear data governance policy to ensure that AI agents handle your own and your stakeholders’ data safely and securely.
    Establish a Clear Accountability Structure

    Because AI agents act autonomously, without direct human input, it is difficult to know how their decision-making will align with and deviate from an organization’s values, priorities, and procedures. If and when an AI agent misses the mark, who is accountable for reporting and addressing the errors, and to whom are they accountable, exactly?

    Establish ethical guidelines for AI agents before they are fully implemented and deployed. How should an AI agent handle sensitive situations? What exactly defines a sensitive situation, or sensitive information? When should certain decisions be deferred to a human decision-maker? Layers of oversight, clearly articulated in AI ethics and accountability policies, ensure that AI agents can be audited and adjusted if they make inappropriate or harmful decisions. Create a governance structure that holds specific teams and roles accountable for monitoring and managing an AI agent’s actions.

    Assess and Understand Workforce Impact

    The introduction of AI agents will change the nature of work in your organization. While it is true that AI agents will likely return valuable time to your hardworking employees, 38% of employees are nervous that AI will replace them, while 51% worry that AI will negatively impact their mental health. There is a considerable likelihood that your workforce will have questions and concerns. Engaging employees honestly and accurately, and giving them avenues to be heard, is critical if any AI system is to work in an organization. It is recommended that businesses use transparent communication with employees. It is paramount that AI agents are clearly explained, described, and situated within specific roles and contexts. Employees need to hear that they will not be replaced. HR leaders should also be proactive in providing training opportunities that help employees adapt to these changes.
    Work should also be undertaken to restructure roles to reflect altered workflows.

    Start Small with a Pilot Program

    Mass implementation of any digital solution is risky. While software and automated tools can save time, they need to be learned before they are fully understood and embraced. A pilot program is a small-scale, isolated, and controlled study that allows an organization to understand feasibility, cost, roadblocks, and opportunities in isolation. Choose a single team, department, or process for your pilot project: for example, automating customer complaint responses. A controlled approach, with a small team testing an AI agent over the course of a couple of months, allows you to gather data on the effectiveness of AI while minimizing disruption. Choose a team of individuals who understand and are familiar with AI, preferably employees who are excited about AI and can champion the cause and socialize it later. Use the pilot’s findings to make any necessary adjustments.

    Develop a Monitoring Plan

    AI systems learn. They change by adjusting their behaviour in an attempt to improve over time. As in a classroom, how one learns is as important as the learning itself. Testing, monitoring, and providing opportunities to expand horizons are crucial to the successful growth of any person or AI system. To facilitate successful growth, establish a team that will monitor the AI agent’s performance. Regular audits should become part of your organization’s AI governance repertoire. Establish key performance indicators (KPIs) to measure the AI agent’s success in meeting its goals, adjusting along the way as needed. Continuous monitoring not only helps mitigate risks but also ensures successful and smooth operation in the long term.

    Adopt for Long-Term Success, Not Short-Term Gain

    As with any release of new technology, industry and media hype tremendously stimulates early adoption.
    A challenge of early adoption is not being aware of a technology’s limits and challenges. Left unchecked and misunderstood, these can derail investments and disturb workforces. Start small, stay informed, and ensure that both technology and human talent are aligned for future success.
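    As a supplement to the agent concept described in this article (AI models combined with conventional automation frameworks, acting autonomously toward a defined goal), here is a minimal, purely illustrative sketch of the decide-act-observe loop that such agents run. Every name in it (`call_model`, `TOOLS`, the `check_stock` tool, the `A-100` SKU) is a made-up placeholder, not part of any real framework; a production agent would replace `call_model` with a call to an actual model API and add the human oversight discussed above.

    ```python
    # Illustrative sketch of an AI agent loop. All names are hypothetical.

    def call_model(goal: str, context: list[str]) -> dict:
        # Placeholder for a real model call: decides the next action from
        # the goal and what the agent has observed so far. A real system
        # would query an LLM here and parse its structured response.
        if "inventory" in goal and not context:
            return {"tool": "check_stock", "args": {"sku": "A-100"}}
        return {"tool": "done", "args": {}}

    # Registry of conventional automation "tools" the agent may invoke.
    TOOLS = {
        "check_stock": lambda sku: f"stock level for {sku}: 42 units",
    }

    def run_agent(goal: str, max_steps: int = 5) -> list[str]:
        """Loop: ask the model for an action, execute it, feed the result back."""
        context: list[str] = []
        for _ in range(max_steps):
            action = call_model(goal, context)
            if action["tool"] == "done":
                break
            result = TOOLS[action["tool"]](**action["args"])
            context.append(result)  # the agent observes its own outputs
        return context
    ```

    Note the `max_steps` cap and the explicit tool registry: bounding what an autonomous loop can do, and how long it can run, is one concrete way to implement the accountability and oversight structures recommended above.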

  • AI in Health Care | voyAIge strategy

    AI in Health Care
    Some Mitigation Strategies, Use Cases, and 2025 Predictions
    Christina Catenacci, Human Writer and Editor
    Dec 13, 2024

    Key Points
    • This is an exciting time for using AI in the medical field
    • Both the Canadian and the American Medical Associations have provided guiding principles for physicians’ use of AI
    • Some of the main use cases in the medical field involve research, medical education, administrative assistance for medical professionals, diagnosis, treatment, monitoring, and more
    • The use cases that are predicted to be especially useful in 2025 are striking

    AI is becoming pervasive in medicine, and many in the health care field predict that it will continue to proliferate in this realm well into the future.

    Input from the Canadian and American Medical Associations

    The Canadian Medical Association (CMA) notes that the rapid evolution of AI technologies is expected to improve health care and change the way it is delivered. In fact, the CMA states that AI is being explored, along with other tools, as a means of increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care. There has been promise in the following areas:
    • clinical application in image-intensive fields, including radiology, pathology, ophthalmology, dermatology, and image-guided surgery
    • broader public health purposes, such as disease surveillance

    Interestingly, Health Canada has already approved several AI applications, but it is worth noting that the CMA advises doctors that: “Before deciding to use an AI-based technology in your medical practice, it is important to evaluate any findings, recommendations, or diagnoses suggested by the tool. Most AI applications are designed to be clinical aids used by clinicians as appropriate to complement other relevant and reliable clinical information and tools.
    Medical care provided to the patient should continue to reflect your own recommendations based on objective evidence and sound medical judgment”

    Moreover, the CMA stresses that physicians do the following:
    • Ensure that AI is used to complement clinical care; medical care should reflect doctors’ own recommendations based on objective evidence and sound medical judgment
    • Critically review and assess whether the AI tool is suited for its intended use and the nature of your practice
    • Consider the measures that are in place to ensure the AI tool’s continued effectiveness and reliability
    • Be mindful of legal and medical professional obligations, including privacy, confidentiality, and how patient data is transferred, stored, and used (and whether reasonable safeguards are in place)
    • Be aware of bias and try to mitigate it as much as possible
    • Have regard to the best interests of the patient

    The American Medical Association (AMA) similarly recognizes the immense potential of AI in health care to enhance diagnostic accuracy, treatment outcomes, and patient care; at the same time, it appreciates that there are ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI. To that end, the AMA created principles that call for comprehensive policies that mitigate risks to patients and physicians, ensuring that the benefits of AI in health care are maximized while potential harms are minimized. These key principles include:
    • Oversight: the AMA encourages a whole-of-government approach to implementing governance policies that mitigate risks associated with health care AI, but also acknowledges that non-government entities have a role in the appropriate oversight and governance of health care AI
    • Transparency: the AMA emphasizes that transparency is essential for the use of AI in health care to establish trust among patients and physicians.
    Key characteristics and information regarding the design, development, and deployment processes should be mandated by law where possible, including potential sources of inequity in problem formulation, inputs, and implementation
    • Disclosure and Documentation: the AMA calls for appropriate disclosure and documentation when AI directly impacts patient care, access to care, medical decision-making, communications, or the medical record
    • Generative AI: to manage risk, the AMA calls on health care organizations to develop and adopt appropriate policies that anticipate and minimize the negative impacts associated with generative AI. Governance policies should be in place prior to its adoption and use
    • Privacy and Security: building on the AMA’s Privacy Principles, the AMA prioritizes robust measures to protect patient privacy and data security. AI developers have a responsibility to design their systems from the ground up with privacy in mind. Developers and health care organizations must implement safeguards to instill confidence in patients that personal information is handled responsibly. Strengthening AI systems against cybersecurity threats is crucial to their reliability, resiliency, and safety
    • Bias Mitigation: to promote equitable health care outcomes, the AMA advocates for the proactive identification and mitigation of bias in AI algorithms to promote a health care system that is fair, inclusive, and free from discrimination
    • Liability: the AMA will continue to advocate to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current legal approaches to medical liability

    Furthermore, the AMA principles address payors’ use of AI and algorithm-based decision-making to determine coverage limits, make claim determinations, and engage in benefit design. The AMA urges that payors’ use of automated decision-making systems not reduce access to needed care or systematically withhold care from specific groups.
    It states that steps should be taken to ensure that these systems do not override clinical judgment and do not eliminate human review of individual circumstances. There should be stronger regulatory oversight, transparency, and audits when payors use these systems for coverage, claim determinations, and benefit design. Note also that the AMA has released Principles for Augmented Intelligence Development, Deployment, and Use, which provides explanatory information that elaborates on the above principles.

    Examples of Use Cases

    There are several examples of AI use in health care. Here are some of the main ones we came across:
    • Early warning systems: this AI tool has reduced unexpected deaths in hospital by 26 percent. An AI-based early warning system flagged incoming results showing that a patient’s white blood cell count was very high and caught an instance of cellulitis (a bacterial skin infection that can cause extensive tissue damage). Another example has been seen in detecting breast cancer: AI is becoming a member of the medical team
    • Optimizing chemotherapy treatment plans and monitoring treatment response: oncologists rely on imprecise methods to design chemotherapy regimens, leading to suboptimal medication choices. AI models that assess clinical data, genomic biomarkers, and population outcomes help determine optimal treatment plans for patients. Also, cancer treatment plans require frequent adjustment, but quantifying how patients respond to interventions remains challenging. AI imaging algorithms track meaningful changes in tumors over the course of therapy to determine next steps
    • Robotic surgery: AI is enabling surgical robots to perform complex operations with greater precision and control, resulting in reduced recovery times, fewer complications, and better patient outcomes.
    These AI systems are used for minimally invasive surgeries as well
    • Medical research and training: AI is being used for new and repurposed drug discovery and clinical trials. Additionally, medical students are receiving feedback from AI tutors as they learn to remove brain tumors and practice skills on AI-based virtual patients
    • Improving precision in Computed Tomography (CT) and Magnetic Resonance (MR) image reconstruction: radiology departments are not being replaced; they are being improved through greater precision and speed. AI-enabled camera technology can automatically detect anatomical landmarks in a patient to enable fast, accurate, and consistent patient positioning. Also, AI-enabled image reconstruction can help reduce radiation dose and improve CT image quality, thereby supporting diagnostic confidence. This all helps radiologists read images faster and more accurately. For instance, AI assessed Alzheimer’s disease brain atrophy rates with 99% accuracy using longitudinal MRI scans
    • Precision oncology: AI allows for the development of highly personalized treatment plans based on a patient’s individual health data, including their genetics, lifestyle, and treatment history
    • Remote medicine: with wearable devices and mobile health applications, AI can continuously monitor patients remotely. The data collected are analyzed in real time to provide updates on the patient’s condition, making it easier for health care providers to intervene early if something goes wrong
    • Administration, professional support, and patient engagement: AI can help professionals identify and reduce fraud, receive necessary supports, and support patients

    What Is in Store for 2025?

    Indeed, it is an exciting time for medical professionals. AI is fundamentally reimagining our approach to human health.
    Here are some AI trends that are expected to dominate the medical field in 2025:
    • Predictive healthcare: machine learning algorithms now analyze complex datasets from genetic profiles, wearable devices, electronic health records, and environmental factors to create comprehensive health risk assessments. There are platforms that can predict disease onset and recommend preventative interventions and treatment plans
    • Advanced precision medicine and genomic engineering: driven by remarkable advances in genomic engineering and CRISPR technologies, this is becoming standard practice. The ability to precisely edit genetic codes has opened up revolutionary treatment possibilities for previously untreatable genetic disorders, including correcting genetic mutations, developing targeted therapies, and making customized treatment plans
    • Immersive telemedicine and extended reality healthcare: extended reality (XR) technologies, including augmented reality (AR) and virtual reality (VR), have transformed remote medical consultations and patient care. Surgeons can now perform complex procedures using haptic-feedback robotic systems controlled remotely, while patients can receive comprehensive medical consultations through hyper-realistic virtual environments. This is important when dealing with patients in rural areas and underserved regions
    • Internet of medical things and continuous health monitoring: this has matured into a robust, interconnected ecosystem of smart medical devices that provide continuous, non-invasive health monitoring. Wearable and implantable devices now offer real-time, comprehensive health insights that go far beyond simple fitness tracking.
    It is important for monitoring, detecting, and transmitting data to health care providers
    • Sustainable and regenerative biotechnologies: some of these technologies include biodegradable medical implants that naturally integrate with human tissue; regenerative therapies that can repair or replace damaged organs; sustainable production of medical treatments with minimal environmental impact; and bioengineered solutions for addressing climate-related health challenges
