- Accelerator Suite | voyAIge strategy
Accelerator Suite
Whether you are curious about AI, searching for the right AI solution, or already using AI and in need of risk management, our Accelerators are designed to propel you through every stage.
AI Explorers: Your introduction to AI. For organizations that are curious about AI. Includes AI 101 training, stakeholder engagement, an AI readiness assessment, AI law and ethics training, and a vulnerability scan.
AI Stewards: Full risk management. For organizations using AI that want to be ethical and compliant. Includes a compliance audit, bias detection and mitigation solutions, an AI Ethics Playbook, and tailored AI policies.
AI Adopters: Finding the right AI fit. For organizations ready to implement AI. Includes an AI Opportunities Analysis, an AI Vendor Evaluation, an Implementation Roadmap, and a Training Plan.
Book a Free Consultation
- Privacy Commissioner Investigation into Social Media Platform, X | voyAIge strategy
Privacy Commissioner Investigation into Social Media Platform, X
Complaint into the Collection, Use, and Disclosure of Personal Information
By Christina Catenacci, human writer
Jan 23, 2026
Key Points
- On February 27, 2025, the Office of the Privacy Commissioner of Canada (OPC) opened an investigation following a complaint questioning whether the social media platform X had violated the Personal Information Protection and Electronic Documents Act (PIPEDA)
- On January 15, 2026, the OPC expanded the investigation into X Corp following reports of AI-generated sexualized deepfake images; the OPC wants to know whether X’s chatbot, Grok, is being used to create explicit images of individuals without their consent
- The OPC has also launched a related investigation into xAI, the AI company responsible for Grok
On February 27, 2025, the Office of the Privacy Commissioner of Canada (OPC) opened an investigation following a complaint it received questioning whether the social media platform X had violated the Personal Information Protection and Electronic Documents Act (PIPEDA). Subsequently, on January 15, 2026, the OPC decided to expand the investigation into X Corp following reports of AI-generated sexualized deepfake images; the OPC wanted to know whether X’s chatbot, Grok, was being used to create explicit images of individuals without their consent. The OPC has also launched a related investigation into xAI, the AI company responsible for Grok. Accordingly, the OPC has announced that it will examine whether X Corp and xAI are meeting their obligations under Canada’s federal private-sector privacy law, PIPEDA. At this time, the matter is being actively investigated, and the OPC cannot provide further details.
Philippe Dufresne, Privacy Commissioner of Canada, stated: “The use of personal information without consent to create deepfakes, including intimate images, is a growing phenomenon that poses serious risks to individuals’ fundamental right to privacy. I have decided to expand my investigation to address this issue given its importance and the potential serious harms that it may cause to Canadians.”
- AI Thought Leadership | voyAIge strategy
Expert AI insights and strategic content to position your organization as an industry leader.
Thought Leadership
We create expert-level content to position your organization as a leader in the AI space, showcasing your talent, knowledge, and vision. From blogs and newsletters to social media and podcasting, our insights help you build credibility and trust with employees, stakeholders, and clients.
What is Thought Leadership?
Thought leadership is the strategic process of creating content and communication that positions an individual or organization as an authority in their field. It involves producing insightful, relevant, and forward-thinking content that showcases expertise and deep knowledge of industry trends, challenges, and opportunities.
What does Thought Leadership encompass?
1. Establishing Expertise. Thought leadership is about sharing in-depth knowledge and insights that highlight an individual’s or organization’s proficiency. For businesses, it goes beyond promotional content: it’s about demonstrating a command of the field, which builds credibility and trust.
2. Influencing and Leading Industry Conversations. A key aspect of thought leadership is contributing to and shaping discussions within the industry, including highlighting emerging trends, opportunities, and challenges. Offering unique perspectives or innovative solutions to common industry issues is a key way of moving the conversation on AI forward. The goal is to be at the forefront of industry conversations, establishing a company or individual as a go-to resource for reliable information and insights.
3. Providing Value to Audiences. Thought leadership content is valuable because it educates, informs, and inspires action. It helps audiences understand complex topics, make informed decisions, and see new opportunities. By delivering well-researched and relevant content, thought leaders build a loyal audience who see them as a reliable source of information and guidance.
4. Building a Brand’s Authority and Credibility. Companies and professionals who consistently produce thoughtful, authoritative content establish themselves as credible leaders in their sector. This credibility is crucial for building trust with stakeholders, clients, and employees. It also positions the brand as an entity that knows its market and understands its environment. Over time, reputational gains translate into new opportunities, such as partnerships, media features, speaking engagements, or new business ventures.
5. Demonstrating Command of Industry Developments. Thought leadership keeps audiences informed about the latest industry developments, including technological advancements, regulatory changes, and emerging best practices. It involves research and the ability to interpret and translate information into digestible, actionable insights for the audience. For example, an AI-driven organization might publish thought pieces on ethical AI practices, the impact of regulations like the EU AI Act, or trends in AI modelling.
6. Engaging Stakeholders Through Authentic Storytelling. Effective thought leadership combines expertise with storytelling. It’s not just about facts and data; it’s about weaving narratives that connect with stakeholders and build engagement. This can include sharing customer success stories, case studies, or experiences that showcase how the organization is tackling real-world challenges.
7. Leveraging Various Channels and Formats. Thought leadership isn’t limited to written content; it extends across multiple platforms and formats to reach diverse audiences, including blogs, social media posts, podcasts, webinars, white papers, research reports, and more.
Our Thought Leadership Samples
Every week, voyAIge strategy generates thought leadership content that it shares on its homepage.
Called "Insights", this series regularly shares AI-related news, offers insights and analysis, and breaks down what each story means for any organization by providing advice and actionable steps. We also provide three key takeaways to help you find what is most relevant, straight away. Book a Free Consultation to Learn More about our Thought Leadership services
- What the Duolingo Layoffs Reveal About People and AI | voyAIge strategy
What the Duolingo Layoffs Reveal About People and AI
Keeping People in the Loop with AI Allows Organizations to Outperform Those Who Do Not
By Tommy Cooke, fueled by coffee
May 9, 2025
Key Points:
1. AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities
2. Mass layoffs tied to AI adoption risk damaging reputation, innovation capacity, resilience, and ethical oversight
3. Organizations that prioritize human-AI collaboration, through hybrid workflows, upskilling, and governance, position themselves for long-term success
Duolingo, the world’s leading language-learning app, is getting rid of its contract employees and replacing them with AI. Human workers who wrote lessons or devised new ways to translate phrases from one language to another are being let go. This news comes on the heels of Duolingo letting go of 10 percent of its workforce last year. Terminating employees and replacing them with AI is not new. Shopify, Expedia, and Cars24 are but a few examples of dozens of large organizations around the world following suit. The reasons? There are a few, and they are not unusual. For some, it’s about an “AI-first strategy”: prioritizing technology as a driving force for completing daily tasks. For others, it’s about cost reduction, streamlining operations, automating innovation and marketing, and so forth. For many readers, and especially for us here at VS, these stories are unsettling. They are harbingers of what so many workers fear: that AI may eventually replace us. However, beneath the surface of these stories are important lessons about the myths we tell ourselves about AI, about the real value of humans, and about the long-term consequences organizations face when they idealize technology to the extent that it removes people from the equation of work.
Busting the “AI Will Replace Everyone” Myth
Let’s begin where most of these media stories stop.
The assumption that AI is here to “replace” workers misunderstands what AI can actually do and what value it provides. In fact, AI can boost productivity and creativity by up to 40 percent, provided that it is paired with skilled human workers. Simply put, productivity gains do not come when AI eliminates the human role. Rather, they come when AI enhances human capacity, for example, by reducing manual data entry, accelerating review processes, and freeing people up to focus on complex problem-solving, creative design, or interpersonal work. The lesson here is simple: AI does not need to be about mass replacement, because its real value lies in reshaping roles, tasks, and collaborations in ways that empower, rather than replace, people.
Organizations Are at Risk When They Overlook the Value of Humans
Here’s where the cases of employers replacing workers with AI get really interesting. While companies like Duolingo and Expedia frame their layoffs as part of a strategic shift to AI tools, a closer look raises some important questions: What institutional knowledge was lost when long-term people were cut? What nuances in language, culture, and humour went out the door with those workers? What risks do these companies now face when AI-generated outputs are misaligned with user expectations? One thing AI does not have is lived experience. It cannot speak from personal context because it has none. AI merely mimics patterns in data. When it does so without careful human review, that important Human-in-the-Loop, it outputs errors, biases, or tone-deaf missteps that can be extraordinarily costly to clean up financially, reputationally, and legally. Moreover, humans are simply better suited to complex tasks that require real-time adaptation to rapidly emerging changes.
As David De Cremer and Garry Kasparov eloquently put it in their co-authored article in the Harvard Business Review: “Contrary to AI abilities that are only responsive to the data available, humans have the ability to imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns. These abilities are unique to humans and do not require a steady flow of externally provided data to work as is the case with artificial intelligence.” Essentially, short-term gains in efficiency not only seed long-term structural vulnerabilities, but they also place organizations at risk of permanently losing critical capabilities that only humans can provide.
The Hidden Value of Keeping People
Organizations that embrace a human-centric AI approach tap into things that are profoundly valuable. Let’s look at a few of them:
- Embedded institutional memory. Seasoned employees know the why, not just the what.
- Cultural fluency. People bring deep cultural awareness and ethical discernment to decisions.
- Creative adaptability. When AI encounters novel problems, humans are the ones who figure out how to pivot, adapt, and respond.
- Critical self-reflection. People are better at determining when issues need to be escalated. Whereas AI models can drift over time and become worse at detecting and solving critical issues, people remember from experience what is expected of them in high-stakes scenarios.
As Christina Catenacci recently summarized in her review of the World Economic Forum (WEF)’s “The Future of Jobs” report, companies that retain, retrain, and reposition human workers alongside AI adoption outperform competitors on innovation, employee satisfaction, and customer loyalty.
The Consequences of Getting It Wrong
When employers misread the AI landscape and pursue mass terminations under the illusion that AI can simply replace people, several risks emerge. First is reputational fallout.
Customers and clients increasingly value ethical, human-centered brands. High-profile layoffs tied to AI spark backlash and can truly risk tarnishing a company’s public image. Second is loss of resilience. Hollowed-out workforces are brittle. Without internal talent, companies become overdependent on external vendors or off-the-shelf solutions, making them less adaptable in fast-changing markets. This, of course, leads to reduced quality of products and services, and clients will notice. Third are innovation slowdowns. While AI can efficiently handle certain tasks such as generating images, it truly struggles with ambiguity. Without people who understand edge cases, novel demands, or cultural shifts, companies are at significant risk of losing their innovative edge. Lastly is increased risk. AI systems are only as strong as the oversight and governance structures around them. Layoffs often undercut those ever-important human guardrails, increasing the odds of ethical missteps, legal violations, or data breaches.
What Should Employers Do Instead of Replacing People with AI?
Instead of treating AI as a magic bullet for growth, forward-looking organizations should approach AI as a multiplier, as something that augments the capacity, creativity, and performance of humans. Here are a few ideas business leaders can consider:
- Invest in upskilling and reskilling. As AI takes over routine tasks, workers need new training to focus on higher-value work; in this way, workers can be challenged with higher-level and more complex roles. Prioritize making your people AI literate and ready to embrace a culture of AI-supported growth.
- Design hybrid workflows. Map out the use cases and critical operational business processes that might benefit most from human-AI collaboration rather than one-sided automation. Remember, Human-in-the-Loop is crucial for any AI to work.
- Build a governance framework.
Ensure AI deployments have human checkpoints, clear accountability, and robust compliance safeguards. AI is human-centric when people are supported and guided in how they use it, and this is exactly what AI governance frameworks provide.
Communicate transparently with employees. People are more willing to embrace AI when they understand how it supports their work rather than threatens it. That is, employees are more willing to accept AI if employers are open, honest, and communicate regularly about AI usage in the organization.
AI’s Power is Human-Centric
The stories we began with may seem like a sign of things to come, but they are just one chapter in a much larger story. The deeper truth is that AI is merely a tool, not a replacement. Its most powerful applications are those where humans and machines collaborate side by side, amplifying each other’s strengths and compensating for each other’s weaknesses.
- AI Companions in the Workplace | voyAIge strategy
AI Companions in the Workplace
An Intro to BYOB (Bring Your Own Bots) to Work
By Christina Catenacci
Nov 20, 2024
Key Points
- AI companions are chatbots that talk with you, offer support, and help with various tasks
- There are pros and cons to bringing AI companions to work, and they need to be considered before use
- Employers are recommended to create strong policies and procedures when they introduce AI companions to the workplace
What exactly are AI companions? You may call it a friend. You may call it a mentor. You may even call it a romantic partner. What I’m talking about is the AI companion. Essentially, AI companions are chatbots that talk with you, offer support, and help with various tasks. Some of the main characteristics of AI companions include:
- They chat: conversation quality is important in that the chatbot needs to sound natural, understand context, and keep conversations flowing
- They have several features: key features include customization options, voice chat, and any special abilities
- They are easy to use: user-friendly elements include ease of setup, ease of finding features, and ease of use across different devices
When evaluating AI companions, each criterion is given a score on a five-point scale, where a higher score indicates a more highly rated AI companion.
What are some of the most popular AI companions? There are many websites that comment on and rank popular AI companions. It is important to keep in mind that rankings depend on why a person wants to use an AI companion, whether for friendship and emotional support, help with school or work, or romance. For instance, some sites rank the seven best AI companions of 2024, others create comparison matrices for six AI companions, and others list the top 10 to chat and have fun with.
Some of these AI companions can also be used as mentors and buddies to bounce ideas off of when thinking about work: think Star Trek and brainstorming in the Holodeck. It might be less challenging than finding a real-life mentor. Using AI to enhance, not replace, humans can help organizations embrace AI as a tool for growth, and therefore enhance human potential. Yet other AI companions are pure business and act as business tools. For example, users can “Tackle any challenge with Copilot”. This chatbot can give users straightforward answers so they can learn, grow, and gain confidence. It helps with any task so users can transform their ideas into stunning visuals, simplify dense information into clear insights, and polish their writing so their voice shines (users need a Microsoft 365 plan). There are even Copilot+ PCs. Another example of an AI companion that helps with work is Gemini for Google Workspace. Lauded as the “always-on AI assistant”, it can be used across Google Workspace, meaning it is built right into Gmail, Docs, Sheets, and more, with enterprise-grade security and privacy (users need a Workspace plan).
What are the pros and cons of bringing AI companions to the workplace? Like BYOD (bring your own device), BYOBs have several pros:
- Increased productivity and efficiency
- Enhanced decision-making
- More efficient and precise learning, reasoning, problem-solving, perception, and language understanding, especially in certain sectors like healthcare
- Higher likelihood of innovation and a competitive edge
- Ability to automate tasks
And here are some of the main cons:
- Serious job displacement concerns among employees
- Privacy and cybersecurity concerns
- Ethical concerns
- Potential for overdependency
- Potential for errors
AI companions have several uses in the workplace; that said, it is important to remain aware that there are challenges that need to be addressed.
How are AI companions being used by employers at work?
Some tasks being completed by AI companions include notetaking, summarizing meetings, and creating agendas or lists of follow-up tasks. When employees bring their own AI companions to work, the tools may be cheaper to buy individually than enterprise licenses with management features, employees can pick AI companions that they are familiar with and that work well for them, and employees who frequently use these tools end up self-training them. Employers are recommended to set rules and guardrails by creating an AI in the Workplace policy for all employees. Moreover, it is critical for employers to understand the risks and attempt to mitigate them. See my colleague’s insight article on the risks of AI companions and how to mitigate them.
- American Attitudes on AI Adoption | voyAIge strategy
American Attitudes on AI Adoption
Navigating Benefits, Fears, and Emerging Dynamics
By Christina Catenacci
Nov 1, 2024
Key Points
- Enterprises are keen to use AI, but many of their employees have not yet used AI in their workplaces due to fear of AI
- General AI literacy training, specific job-related AI training, and clear AI policies and procedures can go a long way toward increasing employees’ comfort with AI in the workplace
- Increase the number of Maximalists in the workplace by communicating clearly and honestly with employees about the technologies used in the workplace
In August 2023, researchers from Slack’s Workforce Lab interviewed 5,000 knowledge workers in Australia, India, Ireland, Singapore, the United Kingdom, and the United States, and reported in September 2024 that it will be necessary to use a tailored approach and help set every employee up for success in the AI-powered workplace. The study found that enterprises are keen to use AI, but many of their employees have not yet used AI in their workplaces. Researchers commented that if this trend continues, both employees and their employers run the risk of missing out on benefits such as improved efficiency, elevated employee experiences, and increased performance and productivity. It is important to understand why workers are not using AI as much as employers would like. It has been observed that some workers are afraid of AI. It is not about being afraid of how the technology works; it is about being afraid that important parts of their jobs will be automated and that they may appear replaceable to their employer. Moreover, companies face a justifiable risk that resistant workers will hold back productivity and creativity, which is likely to affect the organization’s overall success.
The researchers revealed two main types of AI workplace personas that employers typically encounter:
- Maximalists: These workers make up about 30 percent of the professional workforce. They use AI multiple times per week to improve their work and are very open about it. In fact, about half of Maximalists state that the use of AI is actively encouraged at their company, with or without guidelines. It is anticipated that Maximalists could be recruited to become AI evangelists for the rest of the organization
- Undergrounds: These workers make up about 20 percent of the professional workforce. Undergrounds say they use AI but are hesitant to share that with their colleagues, either because it is discouraged at work or because they feel it could make them seem disposable
It appears that Undergrounds have something in common with knowledge hoarders: they learn, but they do not want to share what they know with anyone, so that they can gain a productivity advantage. There is another possibility: perhaps some workers are not admitting that they are Undergrounds, and the percentage of Undergrounds is actually higher than reported. Or maybe some employees are bringing their own AI to work, under the radar. But there are more kinds of employees to be found in the workplace:
- Rebels: These workers make up about 19 percent of the professional workforce. They do not trust AI, avoid using it, and consider it unfair that their co-workers use it at work
- Superfans and Observers: These workers make up 16 percent each, totaling 32 percent. They have yet to integrate AI into their work, but they are watching developments with interest and caution
Those who refuse to use AI emphasize challenges such as AI’s tendency to hallucinate at times. What this means is that employees need AI literacy training, and employers can achieve this goal by building the training into the flow of work and strategic planning.
Employers are more likely to be successful when they create a foundational training program that highlights the benefits and the risks (and ways to address those risks). Employees can become more comfortable with AI if they are trained in generating prompts and sharing best use cases. With AI training, some of the employees who are hesitant to use AI can make the leap and become Maximalists. There is nothing preventing employees from moving to another category, and training can go a long way in increasing the number of Maximalists in the organization. Another thing that can help employees gain confidence is encouraging them to experiment with AI at work. In addition to proper training, employers are recommended to provide appropriate guardrails and guidelines to employees. These can take the form of AI policies and procedures. Effective policies and procedures set out the purpose of the policy, the expectations of the company, the roles and responsibilities of all actors in the workplace, and the legal, policy, and ethics authority relied upon. Diving a level deeper, a recent survey by the American Psychological Association studied AI, monitoring technology, and psychological well-being among 2,515 employed American adults. Essentially, some workers were concerned about what the future may hold, especially when it came to AI. They worried that AI could replace their jobs and that employers could use monitoring technology that would invade their privacy. More precisely, about 38 percent reported worrying that AI might make some or all of their job duties obsolete in the future. Noteworthy was the fact that these AI worries were correlated with mental health concerns. In fact, about 51 percent of employees who were worried about AI replacing their jobs also reported that their work had a negative impact on their mental health, compared with 29 percent of those who did not report being worried about AI.
Also, 66 percent of those who said they were worried about AI reported believing that their employer thought their workplace was a lot mentally healthier than it actually was, compared with 48 percent of those who did not report being worried about AI. Moreover, 33 percent of those who reported being worried about AI also reported that their general mental health was poor or fair (as opposed to good or excellent), compared with 25 percent of those who did not report being worried about AI. Alarmingly, there were correlations between worries about AI and stress and workplace burnout. In fact, about 64 percent of those who reported being worried about AI also reported typically feeling tense or stressed during the workday, compared with 38 percent of those who did not report being worried about AI. Feelings of burnout included irritability or anger toward coworkers or customers, a desire to keep to themselves, not feeling motivated to do their very best, feelings of lower productivity, feelings of emotional exhaustion, and feelings of being ineffective. Further, workers who reported being worried about AI often reported feeling that they were not valued at work. More specifically, 37 percent of those who were worried about AI believed that they did not matter to their coworkers, compared with 17 percent of those who did not report being worried about AI. Additionally, 41 percent of those who were worried about AI believed they did not matter to their employer, compared to 23 percent of those who did not report being worried about AI. But what is striking is that 75 percent of those who worried about AI worried that new forms of technology would take over some or all work duties in the next 10 years, compared to 23 percent of those who did not report being worried about AI. This AI anxiety may affect whether an employee wants to seek a new job with a different employer.
In particular, 46 percent of those who were worried about AI intended to look for a new job in the next year, compared with only 25 percent of those who did not report being worried about AI. And there is no question that mental health is affected by monitoring technology—45 percent of those who were monitored stated that work had a negative effect on their mental health, compared to 29 percent of those who were not monitored. Additionally, 62 percent of those who were monitored stated that their employer thought that the workplace environment was a lot mentally healthier than it actually was, compared to 48 percent of those who were not monitored. In fact, there were correlations between monitoring and workplace stress. That is, 56 percent of those who were monitored also reported feeling tense or stressed during their workday, compared with 40 percent of those who were not monitored. Worse, those who were monitored also more frequently expressed feelings often associated with burnout, such as irritability or anger toward coworkers or customers, a desire to keep to themselves at work, not feeling motivated to do their very best, feelings of emotional exhaustion, and feelings of being ineffective at work. Likewise, workers who were monitored at work more often reported feeling that they were not valued at work. For example, 26 percent of those who were monitored did not believe that they were valued at work, compared with 17 percent who were not monitored. Also, 36 percent of those who were monitored believed that they did not matter to their employer, compared with 22 percent of those who were not monitored. Similarly, 32 percent of those who were monitored believed that they did not matter to their coworkers, compared with 17 percent of those who were not monitored. Interestingly, 34 percent of workers believed that their employers were spying on them, and 35 percent were uncomfortable with the way their employer used technology to track them at work. 
The thought of being monitored made them feel that their employers did not trust them (51 percent), that it was an invasion of privacy (50 percent), stressed (48 percent), and anxious (46 percent). In light of the above, what can employers do? When planning to use AI in the workplace (especially if there are plans to include monitoring features), employers are recommended to do the following:
- Be transparent and communicate honestly with the workforce about the technologies to be used in the workplace, including any monitoring technologies
- Reduce psychological distress associated with fear of the unknown by providing further details about the technologies to be used and allowing employees to provide input into the planned changes
- Increase the level of comfort with AI by providing general AI literacy training and specific AI training on working with AI to perform job tasks
- Create more Maximalists in the organization using a foundational training program that highlights the benefits and the risks (and ways to address those risks) of AI
- Provide clear guidelines and guardrails to all actors in the workplace through clear policies and procedures, so that everyone is on the same page regarding the rules, roles, responsibilities, and expectations of the employer
- Media Room (List) | voyAIge strategy
VS Media Room Find out more about VS. Announcements, news, interactive content. Everything you need in one place. VS Publishes with the International Association of Privacy Professionals (IAPP) How do small businesses govern AI, exactly? Isn't governance just for big businesses? We got the conversation started on how any organization, regardless of its size, can manage AI - affordably and effectively. The article is titled "Right-sizing AI governance: Starting the conversation for SMEs". Learn More VS Publishes with the Business Technology Association In an exclusive magazine article for BTA members, VS discusses the importance of AI policies - especially for technology vendors and Managed Service Providers. VS also put on a webinar for BTA members discussing this topic. We also just published another article on communicating AI use, which will appear in the July issue of the BTA's Office Technology magazine. Learn More Featured Post in IN2Communication's Blog What is an AI policy, exactly? We're excited to be featured in IN2's blog, exploring why AI policies are crucial for your business. Learn More Guests at the Canada Club of London We were honoured to be invited by the Canada Club of London to speak about all things AI governance. Learn More VS is Guest for 4 Episodes of IN2's The Smarketing Show Podcast We loved being guests for four episodes of IN2's The Smarketing Show video podcast: Beyond the Buzzwords, Workplace Risks and Rewards, The AI Policy Brief, and Why Thought Leadership? Visit The Smarketing Show's YouTube page to watch now! Learn More
- Understanding Risk of AI Self-Assessments | voyAIge strategy
Understanding Risk of AI Self-Assessments Balancing Self-Assessments with External Audits By Tommy Cooke Nov 8, 2024 Key Points: AI self-assessments can uncover compliance gaps and build trust but should be paired with external expertise to ensure objectivity Collaborating with an external auditor helps create a comprehensive assessment that aligns AI adoption with an organization’s goals Maintaining a feedback loop with external auditors ensures ongoing improvements, keeping AI systems compliant and aligned with evolving organizational needs The United Kingdom recently announced that it has launched a new platform to promote AI adoption in the private sector. The goal of the platform is to allow a business to efficiently examine its operations and organizational design in order to identify, assess, and mitigate risks associated with AI before getting too entrenched in adopting AI incorrectly. The announcement is timely and encouraging given that complex generative AI models are reportedly struggling with EU legal compliance benchmarks around AI bias and cybersecurity. Moreover, the UK's initiative appears to allow UK businesses to tackle the looming uncertainty around AI adoption head-on; despite its potential impact, 31% of British-based businesses are nervous about adopting AI, with 39% of businesses reporting that it would be safer to stick with technologies that they already know how to use. The Value of AI Self-Assessments A self-assessment can generate the kind of clarity and confidence an organization requires to adopt AI. Here are a few benefits: Identify Compliance Gaps Early: A self-assessment can be a quick way to identify potential compliance blind spots internally before they galvanize into substantive risks down the road. 
Scanning the regulatory landscape, both at home and abroad (especially for organizations with employees and stakeholders beyond their national borders), reveals what kinds of policy and procedure preparations are required prior to adopting AI. Foster Trust and Transparency: A self-assessment process can also play a crucial role in encouraging businesses to be transparent about their AI use. Per Cisco's 2024 Consumer Privacy Survey, 75% of respondents indicated that trustworthy and transparent data practices directly influence their buying choices. As importantly, trust and transparency not only protect and foster relationships with customers, but with regulators and insurers as well. Validate Internal Stakeholders from the Bottom-Up: Enlisting a workforce to assist in implementing a self-assessment is a powerful way of building trust in AI. When employees understand AI’s impact on their daily work and have an opportunity to assess a model or product, they are more likely to embrace AI rather than resist it. This is a bottom-up engagement strategy, and one that is proven to foster a culture that prioritizes communication, adaptation, and innovation. The Risks of Self-Assessments While there is value in conducting AI impact self-assessments, the process is not without risks. At voyAIge strategy, we encourage organizations to be mindful of the potential shortcomings of self-assessments, which include: A Lack of Objectivity: Conducting a self-assessment without external input and feedback can generate biases that may take years to discover. Despite the power of a self-assessment tool to empower a workforce, it also signals to them quite clearly that AI is coming, and internal stakeholders may overlook weaknesses or ethical concerns because the organization is committed to adopting AI. These examples reveal how a lack of objectivity can generate problems that may undermine the credibility of the assessment altogether. 
Limited Expertise: While self-assessment tools tend to be designed by AI experts, they are generally one-size-fits-all approaches to understanding an organization's needs. They also tend to look at many types of AI through the same lens. Moreover, and as importantly, those conducting a self-assessment usually lack the depth of knowledge required to fully understand not only AI’s implications but also the rationale behind the design of a self-assessment tool. AI systems and assessment criteria are complex, necessitating a deep understanding of technical and regulatory challenges on the one hand, and an organization's strengths and weaknesses on the other. There are many nuances at stake on either side of the equation. Failing to recognize and address these nuances can result in superficial assessments that fail to uncover significant issues. Global Blind Spots: One of the biggest risks of a self-assessment is its frequent failure to account for the regulatory landscape of neighboring jurisdictions. The key here is recognizing that AI laws are created and updated very quickly - and at different rates and frequencies around the globe. A self-assessment might not fully capture these nuances, particularly if the evaluators behind the design of a self-assessment tool do not conduct adequate research and/or fail to regularly update the tool. Balancing the Benefits and Risks of AI Self-Assessments Successfully conducting a self-assessment requires striking a balance between internal accountability and external assistance. Here are three steps your organization can take to responsibly and compellingly conduct a self-assessment: Step One - Enlist the Assistance of an External Auditor: Rather than leaving the entirety of the responsibility for a self-assessment to your workforce, bring in an external auditor as a project manager. By doing so, the auditor can guide key stakeholders in identifying and recognizing the key areas that may otherwise have been missed. 
Step Two - Engage Cross-Functional Teams with the Auditor: Involve the external auditor in a cross-functional team composed of AI- and tech-savvy individuals from as many business units as possible. By having them work with the external auditor, specialized insight can be generated in a comprehensive way that minimizes blind spots and ensures that an AI's adoption fits across multiple business lines without being disruptive. Step Three - Develop a Feedback Loop: After your self-assessment is conducted, maintain check-ins with the external auditor. Continuous monitoring and improvement are always highly recommended when onboarding an AI system of any shape or size, especially as an organization grows and changes along the way. As time passes, have the external auditor provide updates on regulatory changes and advice on refining an AI system's KPIs to ensure that the system remains compliant and aligned with your organization's goals. Self-assessments can be crucial for generating alignment and excitement in a workforce. They're important tools for uncovering hidden risks and also for generating trust and transparency. However, they have their limitations. Understanding those limitations and overcoming them through external assistance is important for ensuring the successful implementation of any AI system. Previous Next
- AI in Cybersecurity | voyAIge strategy
AI in Cybersecurity A Game Changer for MSPs when People Come First Tommy Cooke Dec 13, 2024 Key Points: AI transforms cybersecurity for MSPs by enabling real-time threat detection, automating responses, and predicting vulnerabilities Effective deployment requires tailored training, thoughtful vendor selection, and clear communication with all stakeholders through strong thought leadership When it comes to AI in cybersecurity, trust is essential, so ensure that people always come first Managed Service Providers (MSPs), companies that provide ongoing technology services and maintenance for their clients, are one of the many business types undergoing significant change due to AI—especially MSPs providing cybersecurity solutions. Cybersecurity is complicated. With the average cost of a data breach exceeding $4.75 million per organization, coupled with the fact that 88 percent of these breaches are caused by human error, MSPs themselves are often the targets of hackers. It is perhaps unsurprising that the industry is undergoing a significant transformation in which manual monitoring, static rules, and signature-based detection methods are failing to keep pace with new modalities of cyberattack driven by AI. AI isn't merely a weapon for bad actors: it is also a tool for progressive MSPs to protect you and themselves. Here are a few ways that AI is transforming cybersecurity. How AI Is Transforming Cybersecurity for MSPs Proactive Threat Detection AI analyzes massive amounts of data in real time to identify anomalies or unusual patterns that could reveal malicious activity. Through machine learning models, AI can uncover subtle deviations in network activity, login behaviors, or file access patterns that might otherwise go unnoticed. How many times has your bank emailed or texted you about suspicious purchase activity? There's a good chance that AI is helping them out. 
Capabilities like this allow MSPs to respond faster and build more defensive strategies for their clients and themselves. Automated Incident Response AI is also excellent at responding to threats automatically. Rather than depending on a human to isolate compromised systems, block malicious IPs, or trigger containment protocols, these tasks can be automated and run 24/7. By reducing downtime and enhancing the ability to prevent damage, AI frees up humans so that they can focus on strategic decision-making. It also gives them more time to use AI cybersecurity tools in a sandbox - a virtual space where they can test the vulnerability of their own and their clients’ systems to ensure that a given cybersecurity solution is watertight. Predictive Intelligence Beyond detecting threats, AI can forecast them. By feeding it historical incident data, whether from a client directly or from global threat intelligence indexes, AI can help a cybersecurity firm identify the patterns and trends behind emerging vulnerabilities. As many of us experience on a near-daily basis, the software and systems we use are updated all the time. This is yet another instance where AI is likely assisting one of your many preferred vendors in predicting issues and patching them before they arise. Understanding AI-Cybersecurity Challenges While scalability, efficiency, and enhanced trust are attractive to MSPs, AI is not a silver bullet. It is crucial that MSPs understand that AI can still misidentify threats, which can lead to alert fatigue. AI solutions must be constantly tweaked, and it is imperative that companies listen to customers who may grow tired of constantly losing access to their credit cards because of false positives. Automated cybersecurity solutions are also only as effective as the data on which they train. Data lacking representation of varied attack patterns can lead to gaps in threat detection. 
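To make the anomaly-detection idea above concrete, here is a deliberately minimal sketch. The data, function name, and threshold are all hypothetical, and real MSP platforms use learned models rather than a fixed statistical rule; the point is only to show what "flagging a deviation from the usual pattern" means in practice:

```python
import statistics

def flag_anomalies(history, recent, threshold=3.0):
    """Return values in `recent` that sit more than `threshold` standard
    deviations from the mean of the historical baseline (a z-score rule)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in recent if abs(x - mean) > threshold * stdev]

# Hypothetical hourly login counts: a stable baseline, then a sudden burst
# of the kind that might indicate credential-stuffing activity.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
today = [14, 13, 250, 12]
print(flag_anomalies(baseline, today))  # -> [250]
```

An automated response layer would then act on the flagged value (for example, by rate-limiting the offending source or opening a ticket) rather than waiting for a human to notice the spike.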
AI must also align with privacy laws and ethical standards, especially in client environments where sensitive and personal data are handled regularly—not to mention clients' intellectual property. As we have discussed on In2Communication's video series, The Smarketing Show, many AI cybersecurity platforms have a bad habit of automatically generating new policies and procedures for an organization, including ones that are not tailored to the dynamic shape and size of a company. This can introduce myriad problems for any organization. Overcoming AI-Cybersecurity Challenges Start with education: it is necessary to train teams on how to understand AI cybersecurity platforms. Ask whether your teams use these platforms as efficiently and with the same precision as they use your Enterprise Resource Planning (ERP) system. Question whether teams are aware of all of the platform's ins and outs, capabilities, tools, and shortcomings. Seamless use and integration depend upon giving a team time to play with the tool—to break it and make it work more effectively for yourself and your clients as well. This is why it is also important to vet vendors thoroughly. Choose partners with proven expertise, transparency, and great support, particularly when a vendor claims to innovate by offering automated solutions or automatically generated policies and procedures. Lastly, communicate your vision. Your team, your clients, and your stakeholders are engaging with AI at varied rates of exposure. They will have different opinions. Cybersecurity is already a high-stakes application for AI, so talk to your people. Explain how and why AI benefits them. Ensure their data remains secure. Prove that you are a thought leader before you implement anything. AI in Cybersecurity is Effective—if Managed Properly Remember that in the world of AI, people matter most. For the foreseeable future, AI is always going to be a trust issue. 
Effectively deploying AI means more than just the technology; it requires planning, ethical deployment, excellent training, and superb communication. AI can propel an MSP into the future of innovative and reliable cybersecurity solutions if the MSP is able to recognize the inherent complexities of adoption and engage in strategic thinking—processes that always start and end with people. Previous Next
- Adoption & Transformation | voyAIge strategy
AI strategy and roadmaps to guide digital adoption and transformation. Digital Adoption and Transformation Work with voyAIge strategy to scope, select, and strategize the right AI solution for your organization. When we scope AI solutions for your organization, we ensure that we understand the problems first. We identify root causes as well as your pain points, risks, and opportunities. We consider all angles and perspectives available to then define your ideal solution. The solutions we choose and present to you align with your company’s values, ambitions, and goals. While we don't implement AI systems ourselves, we leverage our expertise to provide clear, actionable direction in your AI journey so that you and your team are empowered to make informed decisions. What is Digital Adoption & Transformation? Adopting is the process of bringing a new technology or platform, like AI, into an organization. Transformation is the process of significantly overhauling an existing system, along with business processes, to improve upon its capabilities and capacities. Whether adopting a new AI system or transforming an existing one, we analyze your current business processes, challenges, and goals to formulate clear insights on gaps and opportunities along with a robust strategy and recommendations for moving forward in your AI journey. Our Digital Adoption & Transformation Process Assessment We start by conducting a comprehensive assessment of your current state. This includes analyzing your existing processes, technology stack, data readiness, and identifying any challenges or opportunities. Our goal is to understand where you are now and where you want to be. Why work with voyAIge strategy? Choosing the right AI solution is critical to business success. We are your trusted partner in this journey. 
Here's why organizations choose us for AI Solution Scoping: Deep Industry & Academic Knowledge Our team has over 20 years of experience working in the public, private, and not-for-profit sectors as professors, lawyers, and consultants. This gives us unique access to a wide range of best practices, logics, and methods when thinking about AI. Schedule a Free Consultation
- What is “AI Augmentation”, and How Do You Achieve It? | voyAIge strategy
What is “AI Augmentation”, and How Do You Achieve It? The New Frontier for HR By Christina Catenacci Nov 14, 2025 Key Points AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely. The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter AI augmentation is a safe way to carefully and gradually include AI as a collaborator There are several steps to achieving AI augmentation, starting with identifying the repetitive tasks that are automatable AI augmentation is the collaborative use of AI systems to enhance, support, and amplify the cognitive and physical capabilities of human workers, rather than replacing them entirely. The purpose is to increase productivity and quality of output by enabling humans to work faster and smarter. Compared to full automation, augmentation is about giving existing valuable staff superpowers. You may have heard of collaborative robots, also known as cobots, which are industrial robots that can safely operate alongside humans in a shared workspace (unlike traditional industrial robots, which are hard-coded to repeatedly perform one task, work independently, and remain stationary). In short, the goal is to combine the strengths of the AI with those of the human. What is an Example of AI Augmentation? For example, if someone needs to draft a proposal, that person could combine their abilities with AI’s capabilities. That is, the writer can decide which reports to include in the proposal, and then ask the AI to list the five most impactful statistics from those reports. At this point, the writer could ask the AI to produce a first draft of the proposal incorporating those five statistics. From there, the writer could edit the document and complete an ethics check at the end. 
Together, the AI and the human writer could synthesize data, draft a document, edit the document, and do the final ethics check. How do HR Leaders Achieve AI Augmentation? AI augmentation is the most responsible way to introduce AI. The reason is that it is not full automation, which can carry high risk and complexity, but neither is it compiling statistics manually from multiple reports, which is the traditional way of doing things at the other end of the spectrum. AI augmentation is a happy medium. In fact, this is a safe way to carefully and gradually include AI as a collaborator. The AI can do the things that it is good at, like sifting through mountains of data, finding patterns, and completing the repetitive tasks that bore most humans. This frees humans to focus on what they do best, such as using expertise to solve tricky problems, building relationships with customers, and thinking creatively and empathetically. And humans can perform final ethics checks too. The following steps can lead to full AI augmentation, so that humans can still be in the driver’s seat instead of watching from the sidelines:

Level 1: Use AI augmentation to eliminate the boring stuff. Identify the routine, automatable tasks in a job that slow everything down. Have AI start by taking on those tasks. For instance, the AI can clean up customer service tickets and thin out the queue

Level 2: Allow workers to have AI tools that act as portable experts. Allow workers to use these experts to enhance their work quality and productivity. For example, the human customer service agent can ask the AI to read a ticket and respond by creating a first draft of a customer response. The human agent can review it, edit it, and confirm that it is an appropriate message before sending

Level 3: Use AI augmentation for predictable tasks. Identify the more predictable tasks. 
Allow a more autonomous AI system to deal with specific predictable tasks. Predictable tasks could include things like answering the common question, “Where is my order?”, so that AI systems handle these types of tasks completely on their own. But if the AI system flags a more complex issue, the task escalates and the human agent can seamlessly take over—the human is always in control. What are Some Best Practices for Using AI Augmentation? Here are a few tips that can help a business with AI augmentation:

- Use the knowledge and experience you have to train the AI system
- Remember to test the AI in a risk-free environment (a safe and stable sandbox)
- Make sure to roll out the AI slowly and make necessary adjustments
- Note relevant metrics, measure the value created with AI augmentation, and note the value created by the AI-human collaborations
- Create training opportunities for employees with respect to AI-human collaborations

Conclusion According to Gartner, it is necessary for HR leaders to plan for a blended workforce. This involves moving from a mindset where AI is viewed as a nice-to-have bolt-on to a regular practice of designing a human-AI workforce where both use their strengths and co-deliver work. Moreover, EY recommends blending operational gains with a people-first mindset. More specifically, the chances of sustainable business and capability growth hinge on whether organizations keep a people-first mindset while integrating new technologies. To accomplish this, EY suggests that organizations deploy the most efficient tools and processes to create sustainable value while still investing in the skills, career, and personal growth of the workforce to create a more exceptional employee experience. This means bringing a holistic, people-centered perspective to an increasingly digital world of work. 
While there may be a percentage of tasks for every employee that might be supported by AI tools, organizations will need those employees to be the human-in-the-loop who makes the final decisions. Finally, employers are recommended to:

- Appreciate AI’s role in a comprehensive workforce strategy, and be aware of the potential challenges that lie ahead
- Determine how AI can empower workers in the organization
- Explore potential risks and security concerns
- Consider size, scope, and cost in terms of evaluating performance and cost trade-offs
- With regard to training on the new tools, chart the path forward with people at the center
- Implement metrics that measure workforce sentiment tied to confidence in and adoption of the new technology

Previous Next
- Deep Fakes, Elections, and Safeguarding Democracy | voyAIge strategy
Deep Fakes, Elections, and Safeguarding Democracy Understanding and Preventing the Threat of Deep Fakes in Elections By Tommy Cooke Oct 11, 2024 Key Points: Deep Fakes use AI to create hyper-realistic fake media, posing serious risks to elections They can manipulate voter perceptions and erode trust in democratic institutions Organizations and governments must invest in detection tools, media literacy, and rapid response protocols to combat misinformation As the 2024 US election approaches, the integrity of democratic voting processes is under threat from Deep Fakes. In January 2024, a robocall impersonating Joe Biden spread misinformation about the election process. It demonstrated how easily AI can be used to manipulate voters. The incident highlights the urgency with which governments, organizations, and citizens need to understand Deep Fakes – and undertake proactive measures to combat them. What Are Deep Fakes? Deep Fakes are AI-generated content, usually in the form of images, videos, or audio recordings. They are designed to closely replicate the appearance, voice, and mannerisms of real people. Deep Fakes are often generated through a class of machine learning models called Generative Adversarial Networks (GANs), in which two neural networks compete with one another. One neural network (the generator) creates a fake image, recording, or video. The other (the discriminator) tries to spot whether it is fake. The two networks continue competing with one another, improving over time until the generator produces fake content that the discriminator finds difficult to distinguish from real footage. The process often results in uncanny resemblances that are hyper-realistic, misleading the viewer into witnessing and believing a statement or action that never factually occurred. The ongoing proliferation of Deep Fakes raises considerable ethics and security concerns, particularly during elections. 
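The adversarial dynamic described above can be illustrated with a deliberately minimal sketch. This is a toy, not a real GAN: the "discriminator" here is just a frozen distance check on sample statistics, the generator's update rule is hand-written rather than learned by gradient descent, and every number is invented for illustration. It only shows the chase-and-converge behavior that makes generated content progressively harder to tell apart from real data:

```python
import random
import statistics

random.seed(0)
REAL_MEAN = 5.0  # "real" data: samples from a Gaussian centered at 5.0

def real_batch(n=100):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def discriminator_score(batch, learned_real_mean):
    # Lower score = harder to distinguish from real data
    return abs(statistics.mean(batch) - learned_real_mean)

# The discriminator "learns" what real data looks like from a sample
learned_real_mean = statistics.mean(real_batch())

gen_mean = 0.0  # the generator starts far from the real distribution
for _ in range(200):
    fake = [random.gauss(gen_mean, 1.0) for _ in range(100)]
    score = discriminator_score(fake, learned_real_mean)
    # Generator update: nudge its parameter toward fooling the discriminator
    gen_mean += 0.05 * (learned_real_mean - gen_mean)

# After many rounds, generated output is nearly indistinguishable from real
print(abs(gen_mean - learned_real_mean) < 0.01)  # True
```

In an actual GAN, both networks are deep neural nets trained against each other by backpropagation over images, audio, or video rather than a single number, which is what produces the hyper-realistic output described above.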
Why Deep Fakes Threaten Democracy Deep Fakes are not merely technological curiosities. They can be a powerful weapon for misinformation, designed to: Manipulate Public Perception: Deep Fakes falsely portray political figures making inflammatory statements or engaging in unethical behavior, leading to confusion while eroding voter trust. The January 2024 robocall that mimicked Joe Biden is a clear example. Erode Trust in Democratic Institutions: As more Deep Fakes are produced, they generate increasingly higher levels of suspicion and confusion around what is real and what is not. The atmosphere of doubt and uncertainty that Deep Fakes create thus undermines the credibility of both political candidates and the electoral process. In an age where misinformation already spreads rapidly on social media platforms, Deep Fakes are particularly lethal in their ability to sow distrust in an otherwise informed and engaged electorate. Distort Voter Behavior: When Deep Fakes are curated for specific audiences (e.g., traditionally Democratic voters), adversaries of the democratic process can manipulate those voters into questioning the validity of the voting process or even the value of their own vote. This targeted approach can significantly alter or shift voter behavior in ways that are detrimental to both political parties. The 2024 Election: A New Era for Deep Fakes As we head into the 2024 United States presidential election in November, the threat of Deep Fakes is unprecedentedly high. In today’s polarized political climate, Deep Fakes have the potential to escalate tensions by spreading false narratives that align with partisan biases. The current political climate is divisive, and Deep Fakes threaten to make divisions deeper and wider. 
It is imperative that governments and organizations adhere to best practices to combat Deep Fakes: Monitoring and Detection Governments and organizations can invest in advanced AI tools capable of detecting Deep Fakes in real time. Detection algorithms can reverse engineer how Deep Fakes are created to flag suspicious content before it gains widespread traction. Media Literacy One of the most effective ways to mitigate the impact of Deep Fakes is through public education. Media literacy campaigns equip voters with the skills they need to critically evaluate the content they encounter. Rapid Response Governments and election bodies must have clear strategies in place for when Deep Fakes emerge. Rapid response protocols can include issuing statements to correct misinformation, collaborating with tech companies to remove malicious content, and engaging the public through verified communication channels. Cross-Sector Collaboration Governments, tech companies, and media organizations should work together to create transparent systems that verify the authenticity of election-related media. Public-private partnerships can help develop common standards for content verification and share expertise in detecting Deep Fakes. Proactive Measures for Future Elections As Deep Fakes become more commonplace on social media platforms, organizations, governments, and citizens need to recognize the urgency of this issue – and act proactively: Seek Independent Fact-Checking Organizations: Governments should work closely with fact-checking bodies to ensure swift and accurate debunking of manipulated content. Establish Digital Forensics Units: Election bodies and governments can create teams focused on monitoring and analyzing digital content for manipulation. These units can serve as first responders when Deep Fakes are detected, in turn assisting with coordinating responses. 
Promote Research and Development : Supporting innovation in the Deep Fake detection and prevention space is crucial. Governments, organizations, and citizens can stay ahead of emerging threats by investing in tools that protect the democratic process. The risks to democratic integrity are real. Without concerted efforts, the impact of Deep Fakes can reshape the political landscape in exceptionally harmful ways. By investing in monitoring systems, educating the public, and establishing rapid response protocols, we can mitigate these risks and protect the foundations of our democratic processes. Previous Next