- Technology Lessons from Orange Shirt Day | voyAIge strategy
Technology Lessons from Orange Shirt Day

Applying Indigenous Principles to Build Ethical and Inclusive Organizational Practices

By Tommy Cooke
Oct 4, 2024

Key Points:
- OCAP principles guide respectful and community-centered data management
- Indigenous Data Sovereignty emphasizes transparency and accountability in data use
- Indigenizing technology creates inclusive systems that honor cultural perspectives

The National Day for Truth and Reconciliation, also known as Orange Shirt Day in Canada, is a time to reflect on the painful history of residential schools and to honor survivors and those who never came home. It is a day of remembrance, and also a time for learning and growth.

At voyAIge strategy, we have recently reflected on the many lessons Indigenous technology and media leaders provide through their work and example. We recognize that Indigenous business practices present an opportunity for organizations to revitalize their own practices. Our Co-Founder Tommy hosts the What’s That Noise?! Podcast. For the past year, it has been the home of a special series called One Feather, Two Pens, co-hosted by Lawrence Lewis, Co-Founder and CEO of OneFeather. This Insight draws upon the series’ nine podcast episodes to share key values and principles that can serve as powerful growth opportunities for organizations working with data and complex technology like AI.

Here are four principles and values that could help your organization foster more ethical, inclusive, and accountable data practices:

1. Ownership, Control, Access, and Possession (OCAP)

The OCAP principles, Ownership, Control, Access, and Possession, are essential for understanding respectful data management. Indigenous communities have long advocated that they must have the ability to govern data about their people and lands. OCAP is not merely about data privacy.
It speaks to the heart of how communities can control and protect data that are used to generate stories about Indigenous peoples. These principles are also crucial for protecting the ability of Indigenous peoples to continue authentically telling their own stories.

At any organization, OCAP can reshape how data is treated. Data are not just numbers. They represent people, their histories, and their futures. As Ja'Elle Leite, CEO of Ultralogix, mentioned on a recent episode of One Feather, Two Pens, stories from Indigenous communities carry important lessons for those who are willing to listen. Reflecting on how your organization handles and protects data, particularly when it relates to vulnerable or underrepresented groups, is a crucial step toward demonstrating ethical technology use. This applies to AI as much as it does to any other technology.

2. Indigenous Data Sovereignty

Indigenous Data Sovereignty refers to the right of Indigenous peoples to govern data about their communities, cultures, and lands. If the OCAP principles are data-management steps an organization can take, Indigenous Data Sovereignty is the goal those steps serve for many Indigenous peoples who encounter data and technology. In the era of Big Data, an organization can quickly lose sight of who controls the narratives that data create. Indigenous Data Sovereignty emphasizes the importance of giving Indigenous communities agency over their data and ensuring that its use aligns with their values.

For your organization, applying this principle could mean being transparent about how data is collected, stored, and shared. It involves making sure that these processes can be explained and understood by the community members themselves. If your organization is gathering data that involves Indigenous populations, this principle is crucial for maintaining trust.

3. Indigenization of Technology

Indigenization means embedding Indigenous perspectives and values into existing systems.
It’s the deliberate practice of protecting and promoting culture through tools and technologies. It’s helpful to recognize that bringing an Indigenous lens into an organization doesn’t just benefit Indigenous stakeholders. It also helps organizations make their technology more inclusive and culturally aware.

James Delorme, CEO of Indigelink Digital Inc., who was featured in Episode 8 of the podcast series, highlighted the importance of intentionally bringing Indigenous perspectives into spaces and places where they haven’t traditionally existed. For businesses, this might look like creating spaces for Indigenous collaboration, particularly in decision-making processes related to technology and data. For example, this could involve re-evaluating how data flows through your company, ensuring that the systems in place don’t marginalize Indigenous voices or stories.

4. Narrative Authority

Elle-Máijá Tailfeathers, award-winning filmmaker and storyteller, shared in Episode 5 of our series the importance of narrative authority: the power and right of individuals and communities to control, shape, and share their own stories. In an organizational setting, narrative authority prompts organizations to think deeply about how they present information, especially when it relates to Indigenous peoples. Organizations must be self-aware of how their data collection, products, and services might filter or alter Indigenous narratives. Engaging directly with Indigenous communities is vital when your data or technology practices involve or represent their stories. This ensures that narratives are not only accurate but also owned and told by the right people and voices.

Bridging Indigenous Data and Technology Insights with Organizational Practice

Reflecting on these principles offers organizations a chance to recalibrate their ethical approaches to data and technology. Ethics in the digital age isn’t just about compliance or creating polished policies.
It’s about respecting the stories behind the data and the people represented by them. As we learned from Indigenous thought leaders, ethical technology practices require constant dialogue, humility, and openness.

Here are three tips to help bridge Indigenous principles with your own organizational practices:

- Translate Ethics into Action: Don’t just publish ethics policies. Turn them into daily actions. Ask how OCAP or Indigenous Data Sovereignty can be applied in your organization’s specific context.
- Engage Communities: Actively engage Indigenous voices, especially when your technology touches their data, culture, or representation. Make space for dialogue and collaboration.
- Be Accountable: Ensure transparency and accountability in how data is managed and shared. Being answerable to the communities your data affects is a hallmark of ethical practice.

Orange Shirt Day reminds us of the power of stories, and Indigenous communities have much to teach us if we listen. By adopting even some of their many data and technology principles, organizations can not only create more ethical and inclusive data systems but also honor the cultural wisdom that strengthens them.
- Future of Jobs Report | voyAIge strategy
Future of Jobs Report

What is Projected for the Future of Work?

By Christina Catenacci, human writer
May 1, 2025

Key Points
- The Future of Jobs Report 2025 thoroughly examines a number of macrotrends and technology trends, reporting, for individual countries and globally, the share of organizations that identify each trend as likely to drive transformation in their organization
- It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today’s total employment), offset by the displacement of 92 million current jobs (about eight percent of today’s total employment), for a net growth of 78 million jobs (seven percent of today’s total employment)
- Advances in technology are anticipated to drive skills change more than any other trend over the next five years. The most common workforce strategy in response to the macrotrends analyzed in the report is upskilling the workforce (85 percent)

What is Projected for the Future of Work?

The World Economic Forum recently released its Future of Jobs Report 2025. The report discusses the perspectives of over 1,000 leading global employers across 22 industry clusters and 55 economies around the world. What does it say about the future of work? What does it predict about AI in the workplace? This article answers these questions. More specifically, it touches on drivers of labour-market transformation; the jobs outlook; the skills outlook; barriers to transformation and strategies that can improve talent availability; and industry insights.

What is the Global Labour Market Landscape in 2025?

Undoubtedly, 2025 has been marked by the rising cost of living, geopolitical conflicts, climate issues, and economic downturns. This report is dated January 2025, before the recent tariff wars launched by the United States against several countries.
At this point, the longer-term effects of these tariff wars on markets, unemployment, and inflation are unclear. The projections for 2025 through 2030 are outlined below.

What are the Drivers of Labour-Market Transformation?

The following trends are considered drivers of transformation in the global market, reshaping both jobs and required skills:

Technological developments: 60 percent of employers expect broadening digital access to transform their businesses, more than for any other trend. This makes sense, since growing digital access is a critical enabler for new technologies to transform labour markets. The three technologies expected to have the greatest impact on business transformation are AI, robots and autonomous systems, and energy generation and storage technologies. By far, AI is expected to have the most impact: 86 percent of employers expect that AI will transform their businesses by 2030. Indeed, there has been a rapid increase in investment and adoption across several sectors, and a surge in demand for GenAI skills.

Economic uncertainty: Based on 2024 economic performance, there is some cautious optimism about the global economic outlook; however, more chief economists expect conditions to worsen than to strengthen. Slow growth and political volatility keep many countries at risk of economic shocks, and 42 percent of employers expect slower growth to impact their operations. Inflation is still high in low-income countries because of high food prices driven by supply-chain disruptions, which are in turn influenced by climate shocks, regional conflicts, and geopolitical tensions.

Geoeconomic fragmentation: Geoeconomic tensions threaten trade and supply chains, especially in lower-income economies. Globally, governments are responding to geoeconomic challenges by imposing trade and investment restrictions, increasing subsidies, and adjusting industrial policies.
The shift toward geoeconomic fragmentation has significant macroeconomic implications. In fact, about 34 percent of surveyed employers view heightened geopolitical tensions and conflicts as a key driver of organizational transformation.

The green transition: About 47 percent of employers consider the ramping up of efforts and investments to reduce carbon emissions a key driver of organizational transformation. As well, 41 percent of employers see increased efforts and investments to adapt to climate change as a significant driver of organizational change. The demand for green skills will continue to outpace supply. The report states, “To fully capitalize on opportunities created by the green transition and harness them in a way that is fair and inclusive, prioritizing green skilling is essential”. Employers agree: 71 percent in the Automotive and Aerospace industry and 69 percent in the Mining and Metals industry expect carbon-emissions reductions to transform their organizations.

Demographic shifts: There is an aging and declining working-age population predominantly in higher-income economies (due to declining birth rates and longer life expectancy), and a growing working-age population in many lower-income economies (where younger populations are progressively entering the labour market). As a result, greater pressure is being placed on a smaller pool of working-age individuals, raising concerns about long-term labour availability. Many employers facing the effects of the aging population are more pessimistic about talent availability, expect bigger challenges in attracting talent, and believe they may need to rely on automation (79 percent) and advanced workforce augmentation (67 percent). In fact, 92 percent of employers think they will need to prioritize upskilling and reskilling in the next five years.

What is the Jobs Outlook?
The Jobs Outlook addresses how employers expect certain jobs to grow and decline in response to the above-mentioned trends. It is predicted that by 2030, new job creation will amount to 170 million jobs (about 14 percent of today’s total employment), offset by the displacement of 92 million current jobs (about eight percent of today’s total employment). This means there will be a net growth of 78 million jobs (seven percent of today’s total employment).

The fastest-growing job roles are driven by technological developments: Big Data Specialists, FinTech Engineers, AI and Machine Learning Specialists, Software and Applications Developers, Security Management Specialists, Data Warehousing Specialists, and Autonomous and EV Specialists. On the other hand, some of the fastest-declining jobs involve clerical roles, such as Cashiers and Ticket Clerks, Administrative Assistants and Executive Secretaries, Bank Tellers, as well as Accounting, Bookkeeping, and Payroll Clerks.

Moreover, the largest-growing jobs for 2025–2030 include Farmworkers, Labourers, and Other Agricultural Workers; Light Truck or Delivery Services Drivers; Software and Applications Developers; Building Framers, Finishers, and Related Trades Workers; and Shop Salespersons. Conversely, the largest-declining jobs include Cashiers and Ticket Clerks; Administrative Assistants and Executive Secretaries; Building Caretakers and Cleaners; Material-Recording and Stock-Keeping Clerks; Printing and Related Trades Workers; as well as Accounting and Bookkeeping Clerks.

The researchers also examined how the above trends would affect employment. Technology is predicted to be the most divergent driver of labour-market change: broadening digital access will likely create and displace more jobs than any other macrotrend, creating 19 million jobs and displacing nine million.
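The headline arithmetic above can be checked in a few lines. This is a minimal sketch using only figures stated in the report; the variable names are our own:

```python
# Headline job figures from the Future of Jobs Report 2025, in millions of jobs.
created = 170    # new jobs projected by 2030 (~14% of today's total employment)
displaced = 92   # jobs projected to be displaced by 2030 (~8%)

# Net growth is simply creation minus displacement.
net_growth = created - displaced
print(net_growth)  # 78 -- the report's "net growth of 78 million jobs"
```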
Also, AI and information-processing technology are expected to create 11 million jobs and displace nine million jobs. Robotics and autonomous systems are predicted to be the largest job displacer, with a net decline of five million jobs. In fact, broadening digital access, advancements in AI and information processing, and robotics and autonomous systems technologies are the drivers behind both the fastest-growing and the fastest-declining jobs.

When it comes to technology, there is some question about the interplay between humans, machines, and algorithms as they redefine job roles across industries: is it about automation or augmentation? Automation will change the way people work. In particular, as technology becomes more versatile, the proportional share of tasks performed solely by humans is expected to decline. Today, 47 percent of work tasks are performed mainly by humans alone, 22 percent mainly by technology (machines and algorithms), and 30 percent by a combination of both. By 2030, however, employers expect these proportions to be nearly evenly split across the three categories. Interestingly, the report states: “both machines and humans might be significantly more productive in 2030 – performing more or higher value tasks in the same or less amount of time than it would have taken them to do so in 2025 – so any concern about humans ‘running out of things to do’ due to automation would be misplaced”.

Along the same lines, the researchers asked: if an increasing amount of a firm’s total output and income is derived from advanced machines and proprietary algorithms, to what extent will human workers be able to share in this prosperity? They stressed that technology could be designed and developed in a way that complements and enhances, rather than displaces, human work.
In fact, they underscore the importance of ensuring that talent development, reskilling, and upskilling strategies are designed and delivered in a way that enables and optimizes human-machine collaboration. That said, at an industry level, all sectors are expected to see a reduction in the proportion of work tasks performed by humans alone by 2030, but they differ in the share of this reduction that is projected to be attributable to automation versus augmentation and human-machine collaboration. For instance, there are four sectors where automation is projected to reduce both the proportion of total work tasks done by humans alone and the share of total work tasks currently delivered through human-machine collaboration.

With respect to geoeconomic fragmentation, employers view increased government subsidies and industrial policy, increased geopolitical division and conflicts, and increased restrictions on global trade and investment as net job creators. Additionally, increased government subsidies and industrial policy are expected to drive increased demand for Business Intelligence Analysts and Business Development Professionals. Increased restrictions on global trade and investment are also predicted to drive growth in those roles, as well as in Strategic Advisors and Supply Chain and Logistics Specialists. And increased geopolitical division and conflicts are projected to drive growth in all of these roles, in addition to Information Security Analysts and Security Management Specialists.

Employers were also asked whether they were planning to offshore parts of their workforce, or to move operations closer to home through reshoring, nearshoring, or friendshoring. Employers are driven to both offshoring and reshoring by the geoeconomic trends described above.
In terms of the green transition, climate-change adaptation is likely to be the third-largest contributor to net growth in global jobs by 2030, with an additional five million net jobs; similarly, climate-change mitigation is the sixth-largest contributor, with an additional three million net jobs. In this context, some of the top 15 fastest-growing jobs include Environmental Engineers and Renewable Energy Engineers. Other fast-growing jobs include Sustainability Specialists and Renewable Energy Technicians. Green-transition macrotrends will also drive labour-market transformation; for instance, there will likely be net job growth for Building Framers, Finishers, and Related Trades Workers.

With regard to demographic shifts, the trend of growing working-age populations is expected to be the second-largest driver of global net job creation, with nine million net additional jobs by 2030. Likewise, aging and declining working-age populations are expected to be the third-largest driver of job creation (with 11 million additional jobs), as well as the main factor in a global reduction in jobs (with seven million fewer jobs). These demographic trends will likely drive growth in roles for Assembly and Factory Workers, Vocational Education Teachers, Nurses, Sales and Hospitality Professionals, Shop Salespersons, Wholesale and Manufacturing Sales Representatives, Food and Beverage Servers, as well as University and Higher Education Teachers and Secondary Education Teachers.

Slower economic growth has led employers to expect more job destruction (three million jobs) than creation (two million jobs). Similarly, employers believe that the rising cost of living and higher prices will cause some job creation (four million jobs) and displacement (three million jobs).
This economic uncertainty will likely contribute to declines in roles such as Building Caretakers, Cleaners, and Housekeepers, while slower economic growth is also among the top contributors to job decline for Business Services and Administration Managers, General and Operations Managers, and Sales and Marketing Professionals. That said, slower economic growth is also projected to be a top driver of growth in roles such as Business Development Professionals and Sales Representatives. Furthermore, growth in roles driven by the increasing cost of living is concentrated in jobs associated with finding ways to increase efficiency, such as AI and Machine Learning Specialists, Business Development Professionals, and Supply Chain and Logistics Specialists.

What is the Skills Outlook?

This part discusses expectations of skill disruption by 2030, the skills currently required for work, and whether employers anticipate that these skills will increase or decrease in importance over the next five years. It also examines the skills expected to become core skills by 2030, the key drivers of skill transformation, and anticipated training needs.

When it comes to skills disruption, there have been rapid advancements in frontier technologies (technologies that significantly change how we communicate, solve problems, and conduct business) since the pandemic; in the post-pandemic era, we have seen accelerated adoption of digital tools, remote-work solutions, and advanced technologies such as machine learning and generative AI. At this point, employers expect 39 percent of workers’ core skills to change by 2030, with 61 percent remaining the same; against this global average, Canada is at 38 percent and the United States at 35 percent of core skills expected to change by 2030. This may be why there is a growing focus on continuous learning along with upskilling and reskilling programmes.
In fact, about 50 percent of workers have completed training as part of long-term learning strategies.

Interestingly, the top core skills in today’s workforce are: analytical thinking; resilience, flexibility, and agility; leadership and social influence; creative thinking; motivation and self-awareness; technological literacy; empathy and active listening; curiosity and lifelong learning; talent management; service orientation and customer service; AI and big data; systems thinking; resource management and operations; dependability and attention to detail; quality control; and teaching and mentoring. Similarly, the top skills on the rise include: AI and big data; networks and cybersecurity; technological literacy; creative thinking; resilience, flexibility, and agility; curiosity and lifelong learning; leadership and social influence; talent management; analytical thinking; environmental stewardship; systems thinking; motivation and self-awareness; empathy and active listening; and design and user experience.

It is important to keep in mind that there are industry-specific variations in the evolving importance of skills. For example, both analytical thinking and curiosity and lifelong learning top the list of what is needed in education and training; likewise, environmental stewardship tops the list for oil and gas.

How are the main trends expected to influence the skills evolution by 2030? In terms of technological change, advances in technology are anticipated to drive skills change more than any other trend over the next five years. In fact, the increasing importance of AI and big data, networks and cybersecurity, and technological literacy is driven by the expansion of digital access and the integration of AI and information-processing technologies. These trends are also seen as responsible for the growing importance of analytical thinking and systems thinking.
In a data-driven landscape, decision-making is increasingly complex and critical problem solving increasingly necessary. Similarly, design and user experience, along with marketing and media skills, are expected to grow in importance because of technological advancements. On the other hand, technology has accelerated the decline of some skills, including manual dexterity, endurance, precision, and reading, writing, and mathematics, likely due to robotics and automation. As discussed above, the hope is that technologies such as GenAI will augment human skills through human-machine collaboration instead of replacing them, so human-centred skills remain important. In fact, the report states: “These findings underscore an urgent need for appropriate reskilling and upskilling strategies to bridge emerging divides. Such strategies will be essential in helping workers transition to roles that blend technical expertise with human-centred capabilities, supporting a more adaptable workforce in an increasingly technology-driven landscape”. The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy.

In terms of geoeconomic fragmentation and economic uncertainty, these trends have led to demand for network and cybersecurity skills to protect digital infrastructure from emerging threats. They have also led to a need for human-centred skills, including resilience, flexibility, agility, leadership and social influence, and global citizenship, to manage multiple crises and complex social dynamics. With respect to the green transition, environmental skills are becoming increasingly integral across diverse sectors. Moreover, employers that anticipate a rise in the importance of global citizenship cite the convergence of climate-change adaptation, geoeconomic fragmentation, and broadening digital access as key factors.
We cannot forget demographic shifts as a driver of skills demand: aging and declining working-age populations are pressing organizations to prioritize talent management, teaching and mentoring, as well as motivation and self-awareness. At the same time, there is a rising focus on empathy and active listening, resource management, and customer service. This emphasizes the growing need for interpersonal and operational skills that can address the specific needs of an aging workforce and foster more inclusive work environments.

What does this all mean when it comes to skills? Employers have increasingly invested in reskilling and upskilling initiatives to ensure that workforce skills are aligned with evolving demands. Since 50 percent of workforces have completed training across nearly all industries, there is a growing recognition of the importance of continuous skill development. However, some industries are outliers: Agriculture, Forestry and Fishing, and Real Estate are the only sectors to have seen a decline in training completion since 2023.

For a representative sample of 100 workers, 41 will not require significant training by 2030; 11 will require training, but it will not be accessible to them in the foreseeable future; 29 will require training and be upskilled within their current roles; and 19 will require training and will be reskilled and redeployed within their organization by 2030.

To fund this training, employers expect to rely on their own training programmes (86 percent), free-of-cost training (27 percent), government funding (20 percent), public-private funding (18 percent), and co-funding across the industry (16 percent). From training initiatives, employers expect enhanced productivity (77 percent) and improved competitiveness (70 percent).

What are Workforce Strategies?
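The 100-worker training breakdown above can be tallied in a short sketch. The figures are from the report; the category labels are our paraphrases:

```python
# Training outlook for a representative sample of 100 workers by 2030,
# per the Future of Jobs Report 2025 (labels paraphrased).
workers = {
    "no significant training needed": 41,
    "training needed but not accessible": 11,
    "upskilled within current role": 29,
    "reskilled and redeployed internally": 19,
}

# The four categories partition the whole sample.
total = sum(workers.values())
print(total)  # 100

# Implied share of workers requiring some form of training.
needing_training = total - workers["no significant training needed"]
print(needing_training)  # 59 of 100 workers
```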
Employers were asked about the workforce strategies they anticipate adopting in response to the macrotrends that will shape the future of work. This part also touches on key barriers to organizational transformation, talent availability, and planned workplace practices and policies.

The main barrier to organizational transformation is skill gaps in the labour market (63 percent). This challenge exists across practically all industries and geographies. Second and third in line are organizational culture and resistance to change (46 percent), and outdated or inflexible regulatory frameworks (39 percent).

The talent availability outlook has worsened since 2023: in 2025, only 29 percent of businesses expect talent availability to improve between 2025 and 2030. That said, employers are optimistic about talent development (70 percent). When it comes to talent retention, however, only 44 percent expect to see improvements in their ability to retain talent.

The most common workforce strategy in response to the above macrotrends is upskilling the workforce (85 percent). This is the case across all geographies and economies at all income levels, with employers in high-income economies (87 percent) slightly ahead of those in upper-middle-income (84 percent) and lower-middle-income (82 percent) economies. Process and task automation is expected to be the second most common workforce strategy (73 percent). Automation is a more pronounced strategy in high-income economies (77 percent) than in upper-middle-income (74 percent) and lower-middle-income (57 percent) economies. Third on the list, employers plan to complement and augment their workforce with new technologies (63 percent).
It is important to note that 70 percent of organizations plan to hire new staff with emerging in-demand skills, 51 percent plan to transition staff from declining to growing roles internally, and 41 percent plan to reduce staff due to skills obsolescence. Also, 10 percent plan to bring operations within closer control through reshoring, nearshoring, or friendshoring, and eight percent plan to offshore significant parts of their workforce.

In terms of business practices, a top priority is supporting employee health and well-being (64 percent). Other top priorities include providing effective reskilling and upskilling (63 percent), improving talent progression and promotion processes (62 percent), offering higher wages (50 percent), tapping diverse talent pools (47 percent), and offering remote and hybrid work opportunities within countries (43 percent).

With regard to public policies, employers identified funding for reskilling and upskilling (55 percent) and provision of reskilling and upskilling (52 percent) as the two most crucial policy measures. The researchers state that there is a clear desire for sustained public investment in skills development to align workforce capabilities with future labour-market demands.

Interestingly, 83 percent of employers have already implemented diversity, equity, and inclusion measures, a marked increase from 67 percent in 2023. These measures include training for managers and staff, recruitment and retention initiatives, setting goals and quotas, pay-equity reviews, salary audits, anti-harassment protocols, and extending these goals across the supply chain.

Wages are also affected by these trends: 52 percent of employers expect the share of their revenue allocated to wages to increase by 2030, 41 percent expect wages to stay stable, and seven percent expect a reduction in wages.
It appears that two main factors are related to wage expectations: aligning wages to productivity and performance (77 percent) and competing to retain talent (71 percent). With respect to assessing skills, work experience continues to be the most common assessment mechanism in hiring (81 percent plan on continuing to use this strategy). Second in line is pre-employment tests (48 percent), and third is psychometric tests (34 percent). Of course, resumes are still important (43 percent). Thus, in addition to education, employers want to see applicants use their skills and demonstrate their behavioural traits, cognitive abilities, and cultural fit. In response to AI adoption, 86 percent of employers expect that AI and information processing technologies will transform their businesses by 2030, though sectors with higher anticipated AI exposure report higher figures, such as Financial Services (97 percent) and Electronics (95 percent). In contrast, sectors with lower exposure to AI disruption report lower figures, including Energy Technology and Utilities (72 percent) and Government and Public Sector (76 percent). The barriers to AI adoption are: lack of skills to support adoption (50 percent), lack of vision among managers and leaders (43 percent), high costs of AI products and services (29 percent), lack of customization to local business needs (24 percent), complex regulations around AI and data usage (21 percent), and lack of consumer demand (16 percent). What the foregoing suggests is that there is a gap in the skills required for AI adoption among managers and workers alike. The most anticipated workforce strategy among employers (77 percent) in response to AI disruption is reskilling and upskilling the existing workforce to work more effectively alongside AI (this applies in 45 of the 55 covered economies). 
Moreover, 69 percent plan to recruit talent skilled in AI tool design and enhancement, and 62 percent anticipate hiring people with skills in working with AI. What’s more, 49 percent expect to reorient their business models toward new AI-driven opportunities, and 47 percent expect to transition employees from AI-disrupted roles to other positions. However, it is important to keep in mind that 41 percent expect to downsize their workforce as AI's capability to replicate roles expands. The report also contains insights involving the various macrotrends mentioned above in relation to particular regions and industries. For instance, in North America, technological advancements, demographic shifts, and economic uncertainties are driving companies' strategic decisions. Focusing more precisely on Canada, employers anticipate an evolving business landscape marked by advances in digital technologies, geoeconomic fragmentation, and increased climate-mitigation efforts. It is important to note that 97 percent of companies expect AI and information processing technologies to transform their operations. To ensure a steady flow of talent, employers in Canada are trying to improve talent progression and promotion processes and invest in reskilling and upskilling. The Economy Profile on Canada also contains some helpful information: in Canada, 90 percent have secondary education and 68 percent have tertiary education, yet Canada invests in mid-career training at a rate of only five percent. 
Moreover, Canada’s individual rates on macrotrends and technology trends (the share of organizations that identified the trend as likely to drive transformation in their organization) were presented alongside the global rates:

- Broadening digital access: 70 percent (global rate: 60 percent)
- Increased geopolitical division and conflicts: 58 percent (global rate: 34 percent)
- Increased efforts and investments to reduce carbon: 54 percent (global rate: 47 percent)
- Increased efforts and investments to adapt to climate change: 52 percent (global rate: 41 percent)
- Slower economic growth: 52 percent (global rate: 42 percent)
- Rising cost of living, higher prices or inflation: 47 percent (global rate: 50 percent)
- Ageing and declining working-age population: 42 percent (global rate: 40 percent)
- Increased focus on labour and social issues: 41 percent (global rate: 46 percent)
- Growing working-age populations: 30 percent (global rate: 24 percent)
- Increased restrictions to global trade and investment: 27 percent (global rate: 23 percent)
- Increased government subsidies and industrial policy: 16 percent (global rate: 21 percent)
- Stricter anti-trust and competition regulations: 16 percent (global rate: 17 percent)
- AI and information processing technologies (big data, VR, AR): 97 percent (global rate: 86 percent)
- Robots and autonomous systems: 54 percent (global rate: 58 percent)
- Energy generation, storage, and distribution: 40 percent (global rate: 41 percent)
- New materials and composites: 24 percent (global rate: 30 percent)
- Semiconductors and computing technologies: 21 percent (global rate: 20 percent)

What Can We Take from This Report? 
This report surveyed over 1,000 global employers on several topics involving employment. For instance, we learned about the trends that will affect organizations and drive business transformation up to 2030, including the rising cost of living, geopolitical conflicts, climate issues, and economic downturns. These issues were noted before the tariff wars began, and the tariffs could worsen the situation and cause further economic uncertainty. Given the above findings, I would suggest that employers need to prioritize upskilling and reskilling their workforces, and start thinking about this as soon as possible. Throughout this article, there were important revelations about skills: upskilling and reskilling present a great opportunity, and most employers say they are the top workforce strategy for addressing skills misalignments and shaping the future of work. Indeed, employers identified funding for reskilling and upskilling and provision of reskilling and upskilling as the two most crucial policy measures. Employers also want sustained public investment in skills development to align workforce capabilities with future labour-market demands. The researchers recommend that employers recognize the need for training and upskilling initiatives that focus on both advanced prompt-writing skills and broader GenAI literacy. As I wrote here, the purpose of improving an employee skill (upskilling) or teaching a brand-new skill or skills (reskilling) is to appreciate the nature of continuous learning.
- There is a New Minister of AI in Canada | voyAIge strategy
There is a New Minister of AI in Canada What can Canadians Expect? By Christina Catenacci May 23, 2025 It has been reported that Prime Minister Mark Carney has recently created a new Ministry in Canada: he has chosen former journalist Evan Solomon to be the new Minister of AI and Digital Innovation. Solomon was elected for the first time in the April 28, 2025 election in the riding of Toronto Centre. Before that, he worked as a broadcaster for both CBC and CTV. Previously, the topic of AI fell under the industry portfolio. In the Trudeau government, the person responsible for legislation like Bill C-27 (which contained both a proposed privacy law and a proposed AI law) was François-Philippe Champagne, who is now responsible for Finance and represents the riding of Saint-Maurice. As Minister of Innovation, Science and Industry from 2021 to 2025, he helped attract major investments into Canada, advanced the development and adoption of clean technologies, strengthened research and development, and bolstered Canada’s position in environmental sustainability. What Will the New AI Minister Do? 
As we have recently seen, Prime Minister Carney has announced his single mandate letter with some streamlined top priorities:

- Establishing a new economic and security relationship with the United States and strengthening our collaboration with reliable trading partners and allies around the world
- Building one Canadian economy by removing barriers to interprovincial trade and identifying and expediting nation-building projects that will connect and transform our country
- Bringing down costs for Canadians and helping them to get ahead
- Making housing more affordable by unleashing the power of public-private cooperation, catalysing a modern housing industry, and creating new careers in the skilled trades
- Protecting Canadian sovereignty and keeping Canadians safe by strengthening the Canadian Armed Forces, securing our borders, and reinforcing law enforcement
- Attracting the best talent in the world to help build our economy, while returning our overall immigration rates to sustainable levels
- Spending less on government operations so that Canadians can invest more in the people and businesses that will build the strongest economy in the G7

No, AI is not mentioned in there. However, in the preamble of the letter, he touched on AI when he stated: “The combination of the scale of this infrastructure build and the transformative nature of artificial intelligence (AI) will create opportunities for millions of Canadians to find new rewarding careers – provided they have timely access to the education and training they need to develop the necessary skills. Government itself must become much more productive by deploying AI at scale, by focusing on results over spending, and by using scarce tax dollars to catalyse multiples of private investment.” Who is the New Minister of AI and Digital Innovation—Evan Solomon? To many, including Ottawa law professor Michael Geist, Evan Solomon is smart and tech savvy—exactly what Canada needs to get the ball rolling on AI. 
In the past, Solomon was the host of Power and Politics on CBC and The House podcast on Radio Canada. He was even considered a possible replacement for Peter Mansbridge on The National. However, CBC terminated him in 2015 after the Star reported that he was taking secret commission payments related to art sales involving wealthy buyers he dealt with as a host. Apparently, he took commissions of more than $300,000 for several pieces of art and did not disclose to the buyers that he was being paid fees for introducing buyer and seller. Some of the people he dealt with included Jim Balsillie and Mark Carney himself. What’s more, Solomon’s appointment was met with criticism, mostly because he does not have a formal science or tech background, and also because of a mishap in March when he briefly reposted a photoshopped offensive image of Carney from a parody account. In fact, some critics argue that someone who could not identify manipulated content in his own social media feed may struggle to develop effective policies to protect Canadians from increasingly sophisticated AI-generated deception. But he is back now, as AI Minister. He will have a lot of work to do in his new role, and we hope that one thing he does is deal with the introduction of a good-quality Canadian AI law. What Can We Take from the Mandate Letter? We heard Prime Minister Carney talk about AI in his election platform, where he promised to make sure Canada takes advantage of the opportunities presented by AI, since it is critical for our competitiveness as the global economy shifts—and for making sure we have a government that actually works. More specifically, he promised to do the following in the area of AI under the build portion of the platform: Build AI infrastructure. The Prime Minister had planned on investing in nation-building energy infrastructure and cutting red tape to make Canada the best place in the world to build data centres. 
Canada must have the capacity to deploy the AI of the future and ensure we have technological sovereignty. Also, he planned on building the next generation of data centres quickly and efficiently by leveraging federal funding and partnering with the private sector to secure Canada’s technological advantage. Invest in AI training, adoption, and commercialization. The Prime Minister had planned on measuring growth by tracking the economic impacts of AI in real time so we can proactively help Canadians seize new opportunities, boost productivity, and ensure no one is left behind. Also, he planned on boosting adoption with a new AI deployment tax credit for small and medium-sized businesses that incentivizes them to leverage AI to boost their bottom lines, create jobs, and support existing employees. Companies would leverage a 20 percent credit on qualifying AI adoption projects, as long as they can demonstrate that they are increasing jobs. Further, he planned on catalyzing commercialization by expanding successful programs at Canada’s AI institutes (Mila, Vector, Amii) so that we can connect more Canadian researchers and startups with businesses across the country, which will supercharge adoption of Canadian innovation in businesses, create jobs, and strengthen our AI ecosystem. Improve AI procurement. Prime Minister Carney had planned on establishing a dedicated Office of Digital Transformation at the centre of government to proactively identify, implement, and scale technology solutions and eliminate duplicative and redundant red tape. This will enhance public service delivery for all Canadians and reduce barriers for businesses to operate in Canada, which will grow our economy. This is about fundamentally transforming how Canadians interact with their government, ensuring timely, accessible, and high-quality services that meet Canadians’ needs. 
Also, he planned on enabling the Office of Digital Transformation to centralize innovative procurement and take a whole-of-government approach to service delivery improvement. This could mean using AI to address government service backlogs and improve service delivery times, so that Canadians get better services, faster. There were some great ideas in the election platform, and I’m sure Canadians hope they will come to fruition. The priorities identified in the election platform are encouraging, as they will help both government and private-sector SMBs through tax credits that incentivize businesses to leverage AI to boost their bottom lines, create jobs, and support existing employees. Businesses could certainly use some help with training existing employees via upskilling and reskilling, as well as AI literacy. With respect to the more general mandate letter that has recently surfaced, it is possible that any additional, prescribed mandate letters to individual Ministers will not be shared with the public. That would be concerning, since public-facing mandate letters became the norm during the Trudeau government. We will have to wait and see on this issue. Moreover, the couple of paragraphs in the mandate letter’s preamble suggest that there will be targeted improvements for both the public and private sectors. The letter emphasized training and scaling AI. These goals are lofty, but necessary. On the whole, things are looking promising given the commitment to build, invest, and improve AI procurement. What Can Canadians Expect? In my view, it is still too early to tell. But I’m hoping that Prime Minister Carney comes through for Canada. If the government gets this right, Canada could catch up to other jurisdictions like the EU and become a real leader in AI.
- AI Governance in 2025 | voyAIge strategy
AI Governance in 2025 Trust, Compliance, and Innovation to Take Center Stage this Year By Tommy Cooke, powered by caffeine and curiosity Jan 20, 2025 Key Points: AI governance is transitioning from a reactive compliance measure to a proactive discipline Innovations like AI impact assessments help organizations operationalize transparency AI governance frameworks are no longer regulatory shields. They enhance brands What was an emerging concern over the last few years will become a mature and necessary strategic discipline in 2025. As we move deeper into another year and while AI remains in its infancy, it is necessary to have guardrails in place to ensure that AI grows and contributes successfully. The landscape of AI governance is thus evolving in many meaningful ways, much of it due to growing international regulatory pressure, increasing stakeholder expectations, and the ongoing need to ensure significant financial investments in AI generate reliable returns. This Insight looks at what has changed over the last couple of years and looks ahead to how AI governance is maturing – and why these shifts matter. From Awareness to Structure In the few short years of AI’s proliferation across virtually every industry, AI governance can be characterized as reactive. Organizations leveraging AI to innovate and reduce costs – particularly those with high stakes in demonstrating that AI can be trustworthy – have tended to approach governance as a checkbox exercise. Unless an organization existed within the purview of a jurisdiction requiring compliance, like the EU’s AI Act, whether it built a dedicated office with a detailed AI governance strategy depended largely on its own awareness and its relationship with its stakeholders. Moving forward, that awareness is intensifying. Organizations are no longer waiting for compliance requirements to simply arrive. 
Even in the face of shifting political landscapes in North America where AI regulation seems to be losing momentum, the AI governance market is expected to grow from $890 million USD to $5.5 billion USD by 2029. This statistic reflects regulatory pressure abroad – and it also reflects the maturing need for structured management of AI. With AI systems earning the trust of organizations around the world to make critical decisions, the potential for damage and unintended consequences is becoming far too risky: algorithmic bias, breaches, and ethical violations can cause significant reputational liabilities and financial penalties that would almost certainly erase any organization’s AI investment; non-compliance with the EU’s AI Act, for example, can result in fines up to €35 million or 7 percent of an organization’s annual turnover. Transparency in the Spotlight Over the last couple of years, transparency has been a buzzword. It existed in a gray space because organizations tended to use the term strategically in public-facing white papers and proposal packages. The word “transparency” often appears through corporate “AI-washing”: the practice of exaggerating the use of AI in products and services, particularly when companies make misleading claims about the safety and ethics of their systems. Moreover, transparency tends to be perceived as difficult to achieve. Many large-scale AI adopters believe that AI systems’ outputs are difficult to explain or that their processes are virtually impossible for laypeople to understand. This excuse will no longer be satisfactory in 2025 and the years ahead. Why? Contrary to what some may believe, societal, political, and ethical pressures for transparency are growing. And those pressures are leading to AI transparency innovations. 
Here are two examples:

1. AI impact assessments (AI-IAs) are not merely designed to identify positive and negative impacts of AI – they are also growing in popularity because they position organizations to critically reflect on and discuss the social and ethical consequences of using AI. What AI-IAs essentially do is commit an organization to comprehending how its AI systems can be improved as well as what the risks may be – whether emerging or existing in the present moment. These dynamics already exist in every AI system. By making them visible, organizations take crucial steps toward demonstrating transparent and accountable relationships with their AI systems.

2. 2024 saw a significant maturation in AI model documentation: an explanation of what an AI system’s model is, what it was trained on, any tests or experiments conducted on it, and so on. The goal is to document what the AI system is doing. By noting what the system does, an organization provides its stakeholders with a track record that can be examined to ensure responsible and ethical use as well as to demonstrate compliance.

Data Sovereignty on the Rise While data privacy has been a long-standing focus throughout the previous years, 2025 will mark a significant shift toward data sovereignty. As regulatory, geopolitical, and social concerns continue to rise around responsible and ethical use of AI, 2025 will see organizations increasingly designing AI systems in ways that account for how data is stored, processed, and accessed. Compliance with data residency laws to ensure that sensitive data remains within a national boundary or specific jurisdiction, for example, will trend this year. We will hear more about other privacy-preserving technologies in AI systems, such as federated learning : a machine learning technique that allows AI to be trained across datasets from different sources without transferring sensitive data across borders. 
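To make the federated learning idea concrete, here is a minimal, illustrative sketch of the federated-averaging pattern in Python with NumPy. It is not drawn from any particular framework, and all names and data are hypothetical: two simulated "clients" (say, offices in different jurisdictions) each train a small linear model on data that never leaves them, and only the resulting model weights are averaged centrally.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: plain gradient steps on a
    # linear-regression loss, using only that client's data.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    # Each client trains locally; only weights (never raw data)
    # are sent back and averaged, weighted by dataset size.
    updates = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical jurisdictions with private datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = federated_average(w, clients)

print(np.round(w, 2))  # converges close to true_w = [2., -1.]
```

The design point is that the central coordinator sees only aggregated parameters, which is what lets the technique satisfy data residency constraints while still producing one shared model.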
Data is no longer viewed as merely a business asset but as a national asset. For organizations operating globally, this can make daily operations rather complicated, given the multiple international laws and norms they must navigate to avoid penalties while maintaining trust. When data involves the national security, healthcare, or financial sectors, demonstrating the ability to respect data storage laws when using AI will be a top priority for organizations this year. Ethical AI as a Top Operational Priority Much like the way buzzwords like transparency have been used to gesture toward responsible AI use, ethical AI will finally emerge as a fully operationalized practice. Unlike the stretch of AI adoption in the early 2020s, when AI ethics tended to be little more than a vague concept, ethical AI discourse and debate have now been sustained for a considerable period of time. Organizations are recognizing that failing to act upon the principles and values of ethical AI not only poses reputational and financial risks but can also harm operational integrity. Organizations have been and will continue to conduct structured reviews to identify potential bias, discrimination, and unintended social consequences of their AI systems. These kinds of assessments are being applied across an AI system’s lifecycle, from design to monitoring after implementation. According to the AI Ethics Institute (2023), 74 percent of consumers prefer to use products certified as ethically developed and deployed. This makes comprehensive AI training with a focus on governance, privacy, and of course – ethics – a must. The same will hold true for selecting AI vendors and designers committed to embedding ethical considerations into their products from the beginning, not merely as an afterthought. AI Governance as a Strategic Differentiator A commonplace perception of all things related to privacy, ethics, and governance is that they are expensive and stifle innovation. 
It is often echoed by techno-libertarians, those who believe that innovation and business should be left largely unregulated in order to maximize growth and creativity – people who resist external intervention unless mandated by law. What proponents of these perceptions fail to understand is that a lack of proactivity in the responsible and ethical management of technology is becoming extraordinarily risky and costly. In 2025, AI governance will be embraced as a business strategy that not only mitigates risks but also allows organizations to actively differentiate themselves in competitive markets. AI-IAs, audits, transparent reporting, and other AI governance-related activities will be more directly attributable to brand equity and stakeholder confidence. Recognizing that the world’s legal, social, political, and ethical standards are strengthening around the use of AI, organizations are realizing that demonstrating and sharing a robust AI governance framework showcases the organization and its talent as thought leaders able to navigate complex technology while building trust-based relationships with customers, partners, regulators, and so on. So, Why Now? What is driving the maturation and higher adoption rates of AI governance? Here are three catalysts to consider: Regulatory Evolution: despite the resurgence of techno-libertarians who may be slowing the advance of AI-related regulatory agendas, this only applies in limited jurisdictions. It’s important to remember that sub-sovereign jurisdictions (e.g., state- and provincial-level government authorities) are developing their own regulations. Whether they deal specifically with AI or not, data and privacy laws are always changing – and they almost always have implications for how organizations use AI. Public Scrutiny: high-profile AI failures have made stakeholders more vigilant about ethical and operational risks. Consumers are increasingly skeptical about how organizations use AI. 
C-suite executives are becoming increasingly aware of how important it is to demonstrate to stakeholders that they are using AI responsibly, necessitating strong AI governance frameworks – and proof that they work. Market Maturity: markets do not mature merely due to an invisible economic hand. Much of their maturation is driven by the behaviour, perceptions, and demands of their consumers. As AI becomes integral to business operations, it is perhaps unsurprising that consumers do not trust organizations that do not openly disclose their use of AI. Final Thoughts AI governance in 2025 represents a pivotal shift from a regulatory afterthought to a core strategic priority. Organizations that adopt structured governance frameworks, emphasize transparency, and prioritize ethical AI are not only mitigating risks but also distinguishing themselves in competitive markets. As regulatory landscapes evolve and public scrutiny intensifies, investing in robust AI governance is no longer optional.
- The Canadian Cyber Security Job Market is Far From NICE | voyAIge strategy
The Canadian Cyber Security Job Market is Far From NICE Main challenges and what to do about them By Matt Milne Jul 25, 2025 Key Points The cyber security system is broken, to the point that some may assert that cyber security degrees are “useless” One of the main reasons for the broken system is that organizations are not investing in new talent and training, and AI adds further complications Some proposals for rectifying the situation are: eliminate the experience gap through mandatory training investment; mandate industry-education-government coordination for work placements; and strengthen government regulation and skills-job alignment review At this point, we can all agree that cyber security has a serious problem, and it's not Advanced Persistent Threats or quantum computing; it’s the HR firewall rule set that denies access without experience, and the poor government policy and automation that exacerbate an already broken system. The job market in Canada is challenging, which is not particularly significant news to recent graduates, long-time job seekers, those over the age of 45, and those who have recently become unemployed. The job market competition in Canada is fierce. This is particularly true for 15 to 19-year-olds, who are now at a 22 percent unemployment rate. Due to a variety of factors, one could conclude that education in Canada either exists as a pretext to scam people or is itself a scam. These days, some might say that a Master’s degree in Canada is helpful if one wants to pursue origami or needs some kindling to start a small fire. This is not entirely the fault of Applicant Tracking Systems (ATS) (software that helps companies manage the recruitment and hiring process), biased recruiters, or the infamous catch-22 of needing experience to get initial experience. 
As I mentioned in a previous article, budget cuts identified in the 2024 ISC2 Cybersecurity Workforce Study are the most significant reason why new cyber security talent is not being hired or trained. Why Are Cyber Security Degrees “Useless”? Yes, some may deduce that degrees are useless, but not in the way your tough, long-disillusioned older relative warned you about. Of course, dance theory, art, or sociology don’t mesh with the brutal demands of the late-stage neoliberal job market. However, the truth is that while STEM degrees on average pay better than humanities degrees, a quick look at Statistics Canada’s Interactive Labour Market Tool reveals that the data is from 2018 and shouldn’t be considered relevant due to the unprecedented disruptions to labour markets caused by the pandemic. Why exactly can one be certain that cyber security degrees are useless? Are they not in demand? Is cyber security not a STEM field that requires intense knowledge? Well, that is half-true. Cyber security is in high demand, but the degree is distinct from traditional STEM degrees. Where doctors and engineers secure placements and gain work experience that verifies the validity of their degrees, cyber security degrees may, at best, include lab work or projects. In my view, the reality is that the crucial experience component that employers desire is absent. Although this lack of work placement is shifting, it remains challenging to find undergraduate or Master's-level cyber security programs in Canada that include a work experience component. For instance, according to the Canadian Centre for Cyber Security’s Post-Secondary Cyber Security Related Programs Guide, only ten bachelor's programs and four master’s programs offer a work placement option out of a total of 147 entries. 
Moreover, according to the 2024 ISC2 Cybersecurity Workforce Study, organizations surveyed around the world have experienced a significant increase in risk and disruption, yet economic pressures, exacerbated by geopolitical uncertainties, have led to budget and workforce reductions in a number of sectors, while cyber security threats and data security incidents have only continued to grow. Resources are strained, and this impacts cyber security teams and their ability to adopt new technologies and protect against the nuanced threats facing their organizations. The conclusion of the study was that in 2024, economic conditions significantly impacted the workforce, leading to both talent shortages and skills gaps at a time when need has never been greater. On top of this, the introduction of AI to drive transformation, cope with demand, and shape strategic decisions has come with its own challenges: “We found that while cybersecurity teams have ambitious plans for AI within the cybersecurity function, they anticipate the biggest return on investment will occur in two or more years. As a result, they are not immediately overhauling their practices to adopt AI. Cybersecurity professionals are also conscious of the additional risks AI will introduce across the organization. 
As different departments adopt AI tools, cybersecurity teams are encouraging their organizations to create comprehensive AI strategies.” Interestingly, some of the key findings are:

- Cybersecurity professionals don’t believe their teams have sufficient numbers or the right range of skills to meet their goals
- Cybersecurity professionals are still focused on higher education and professional development once in the workforce, but they increasingly prioritize work-life balance
- Many believe that diverse backgrounds can help solve the talent gap
- The expected advancements of AI will change the way respondents view their skills shortage (certain skills may be automated), yet cyber professionals are confident that Gen AI will not replace their roles
- 45 percent of cyber security teams have implemented Gen AI into their teams’ tools to bridge skills gaps, improve threat detection, and provide vast benefits to cybersecurity
- Organizations need a Gen AI strategy to responsibly implement the technology

How HR is Adding to the Problem & Is Far From NICE As I mentioned above, budget cuts are the primary reason organizations are not investing in new talent and training; however, it would be inaccurate to suggest that is the only reason cyber security hiring is broken. During my undergraduate degree in world conflict and peace studies, I observed that most conflicts stem from a lack of communication or a shared language. At a fundamental level, there is a significant gap in cyber security hiring because of the lack of a standardized language. To rectify this, the National Institute of Standards and Technology (NIST) published Special Publication 800-181, The National Initiative for Cybersecurity Education (NICE) Framework, in 2017. Canada has since adopted the NICE framework, creating the Canadian Cyber Security Skills Framework in 2023. 
The National Initiative for Cybersecurity Education (NICE) framework categorizes cyber security competencies for various roles in terms of Knowledge, Skills, and Abilities (KSAs). I note that while Canadian cyber security degree programs effectively teach knowledge and foundational skills, they fall short in the "abilities" component, which can only be developed through practical experience. HR departments, however, treat all three components as hiring requirements, creating a catch-22: graduates cannot gain the required experience without first being hired. It follows, then, that the combination of HR departments' risk aversion and tight budgets creates a perfect storm, leading to a talent shortage. Bad Policy and Government Decisions Have Ruined the Credibility of Postsecondary Education International students, especially those from South Asia, have created significant business for some private colleges, which often lure students with false promises. Immigration Minister Mark Miller referred to these institutions as “puppy mill” schools. The pattern is as follows: students are charged four times what Canadians pay to attend college in Ontario while receiving a substandard education that doesn't prepare them for meaningful employment. Unfortunately, this systematic exploitation has created a credibility crisis that affects all postsecondary education in Ontario. When HR departments and employers see degrees from Canadian institutions, they now face the challenge of distinguishing between legitimate educational institutions and those “puppy mills.” The credibility crisis in Ontario's postsecondary education stems from government policy decisions that have systematically reduced funding to legitimate educational institutions. How AI is Poised to Make The Job Market Worse The automation of entry-level cybersecurity and IT help desk roles is creating a significant career progression problem that will likely exacerbate the experience gap. 
The fundamental issue is that AI will exacerbate the entry-level crisis by eliminating precisely the entry-level positions that traditionally served as stepping stones to more senior roles. The menial tasks that AI is designed to automate—basic incident response, routine monitoring, simple troubleshooting, and repetitive security assessments—are the same daily activities that historically proved to employers that candidates had developed practical competencies beyond their theoretical education. The Path Forward Eliminate the Experience Gap Through Mandatory Training Investment. Organizations must abandon the false economy of demanding pre-existing experience over investing in on-the-job training. While tight budgets drive risk-averse hiring, the cost of a single cyber security incident far exceeds the investment required to train motivated graduates. It might be worth reminding these companies that refusing to train entry-level talent is like gambling their entire business on an increasingly shrinking pool of experienced professionals, creating a strategic vulnerability that threat actors can exploit more easily than any technical system. Mandate Industry-Education-Government Coordination for Work Placements. Canadian educational institutions must be required to coordinate with government and private industry to create robust work placement programs that directly funnel graduates into in-demand positions. This cannot remain optional—with only ten bachelor's programs and four Master's programs offering work placement out of 147 total entries, the current system is systemically failing students and employers alike. These partnerships must be structured to provide real-world experience that develops the "abilities" component of the NICE framework. Strengthen Government Regulation and Skills-Job Alignment Review. 
The Canadian government must implement stricter regulation of educational institutions and conduct a thorough review of the mismatch between job-ready skills and student abilities. This includes shutting down diploma mills that have destroyed credential credibility, establishing minimum standards for cyber security program outcomes, and creating accountability mechanisms that tie institutional funding to graduate employment rates and employer satisfaction. That is, educational institutions should be required to demonstrate that their curricula align with current industry needs and that graduates possess demonstrable competencies, not just theoretical knowledge. Previous Next
- Playbooks | voyAIge strategy
Practical AI adoption guides with actionable steps to ensure effective implementation. Playbooks Having clear, actionable guides is essential for successful AI use. Our playbooks provide your team with the direction they need to navigate critical situations confidently and efficiently. What Are Playbooks, exactly? Playbooks are comprehensive, scenario-based guides that outline actions, tips, best practices, case studies, and other value-added guidance for employees and managers to follow in particular situations. They serve as a reference tool that empowers your team to make informed decisions quickly, ensuring consistency and efficiency across your organization. Playbook Examples AI Incident Response Playbook Guides your team on how to react swiftly and effectively when an AI system encounters an error or produces unexpected outcomes. Compliance and Regulatory Playbook Provides step-by-step procedures for ensuring that all AI-related activities are in line with current laws and regulations, helping to avoid costly fines and reputational damage. Ethical AI Decision-Making Playbook Offers a framework for navigating ethical challenges in AI development and deployment, ensuring that your AI practices align with your company’s values and ethical standards. Data Privacy Playbook Details the processes for handling and protecting sensitive data within AI systems. The Playbook Development Process At voyAIge strategy, we take a systematic approach to developing your playbooks, ensuring that each one is tailored to your specific needs and organizational context. Here’s how we work with you to create effective, actionable playbooks: DISCOVERY DEVELOPMENT ITERATION DELIVERY We start by discussing your organization's specific needs, goals, and challenges. This helps us understand the scenarios your team faces and the outcomes you want to achieve. We identify the key areas where playbooks will add the most value and outline the structure of the playbooks we’ll create. 
Next, we develop the content of your playbooks. This involves creating clear, step-by-step instructions, decision trees, and best practices that your team can easily follow. We ensure that all content is aligned with your organizational goals, industry standards, and regulatory requirements. We then work closely with your team to review the draft playbooks, incorporating feedback and making necessary adjustments. This collaborative process ensures that the playbooks are practical, relevant, and easy to use. Once finalized, we deliver the playbooks in a format that suits your needs—whether it’s a digital document, an interactive guide, or a PDF manual. We also offer training sessions to help your team get the most out of the playbooks and ensure they are effectively implemented across your organization. Ready to Equip your Team? Book a Free Consultation
- Selling Chatbots? | voyAIge strategy
Selling Chatbots? Key Steps and Considerations By Tommy Cooke Dec 5, 2024 Key Points Research and understand the relevant regulatory frameworks at national and state/provincial levels Ensure ethical deployment by diversifying training datasets, conducting regular audits for bias, and communicating chatbot capabilities and limitations clearly to staff and users Scan vendors thoroughly to identify not only a compliant, responsive, and trusted partner but also one that provides a service that fits the diverse and nuanced shape and character of your sales and marketing team’s voices, opinions, and visions Develop a comprehensive internal and external communication plan, engage stakeholders early, and ensure your clients understand the benefits of the chatbot Every Christmas, I visit the same online music store. I have a lot of family and friends who are musicians, so I enjoy gifting music gear. Typically, I see the usual sales and marketing angles: gear on sale, high-end used equipment, discounts on software, and of course, Christmas ribbons wrapped around everything. But this year, something different caught my eye: my favorite online music store now has a chatbot. There's a floating blue bubble at the bottom of my screen. It says, "Chat with Steve!" I know Steve personally. He's an excellent salesperson. When I click on the bubble, I see a cartoon avatar of Steve, giving me his patented thumbs-up. He asks, "How can I help you today?" It doesn't take long for me to realize that I'm not actually talking to Steve. Instead, the store modeled AI-Steve off his blogs and articles. It feels like I'm talking to Steve, and that makes me feel good. In my latest interaction, I not only found two great gifts, but I found them faster than I would have by picking up the phone and calling real Steve. After the interaction, real Steve called me anyway and said, "Thanks for your business! Do you want us to giftwrap the keyboard for your mom?" I loved the interaction. 
It revealed a lot about the power of chatbots for sales and marketing teams. In a fast-paced, evolving AI landscape, chatbots are transforming how companies design and deliver their services, products, and content. They help differentiate companies from their competitors, showing they embrace change confidently and empower their customers with information that would normally require phone or email conversations. How else would I have found a keyboard with the exact synth sound of "Take On Me" by A-ha if it wasn't for AI-Steve? My interaction was particularly telling because it demonstrated the pressures faced by sales and marketing teams: the need for consistent lead generation, faster response times, and personalized customer engagement. As technology advances, it's challenging for these teams to stay up-to-date while also meeting performance targets. A well-trained chatbot can address these challenges—reducing repetitive tasks while reflecting the team's ideas, insights, and mannerisms, thereby creating a seamless customer experience. Chatbots are here to stay, but their implementation must be done correctly . They can make costly mistakes that damage a company's reputation and finances. In this Insight, we'll explore what businesses need to consider before rolling out an AI chatbot to ensure these tools are deployed effectively, responsibly, and with a clear understanding of their potential—and their limitations. Questions Companies Selling Chatbots Should Consider What Regulatory Frameworks Do We Need to Follow? AI chatbots exist in complex legal spaces with regulations varying across regions. Consider, for example, a chatbot designed in New York State used by a company in California who services customers in Europe, the United States, and Canada. 
They must comply with multiple regulatory frameworks, including the GDPR (General Data Protection Regulation) in Europe, the CCPA (California Consumer Privacy Act) in California, and PIPEDA (Personal Information Protection and Electronic Documents Act) in Canada. Each of these frameworks places unique obligations on data handling and user protection every time AI is used. Failing to meet these obligations can lead to penalties, fines, and damaged consumer trust. Here are three things you can do to set yourself up to sell chatbots safely: Research the legal frameworks relevant to your target market Consult legal experts who specialize in AI and data privacy Implement data protection protocols in the form of AI policies and procedures that can be given to your customers to support the implementation of your chatbot How Can We Ensure Ethical AI Deployment? One of the primary ethical concerns with AI chatbots is bias. Chatbots trained on skewed datasets may produce biased responses, leading to unintended discrimination. At times, how a vendor trains chatbots on its own datasets can also create anomalies that may be offensive to users. This can be especially problematic in industries like customer service where respect and professionalism are critical. To make matters more complicated, when companies purchase chatbots and train them on their own data - such as newsletters, business plans, training transcripts, and so on - biases can arise when the chatbot vendor's model does not weigh new data properly. This is especially so when new datasets are too narrow and do not reflect the diversity of an entire sales or marketing team's voices, perspectives, and opinions. 
The following is recommended to cover some important ethical bases before selling a chatbot: Diversify training datasets with unique voices and perspectives to minimize biases and ensure that your chatbot does not corner itself when speaking to a wide variety of customers Test your chatbot in a closed environment with your team prior to launching Conduct regular audits to identify and address biases that may come up in responses Communicate transparently with customers about the chatbot's capabilities and limitations Provide end-users the opportunity to provide feedback What Vendor Should I Select? Selecting the right AI vendor is crucial to the success of the chatbot. Prioritize vendors with a proven track record of compliance and a public-facing set of documents proving that they have robust data handling practices and ethical AI development and implementation standards. The vendor should offer tools that can easily integrate with your systems, workflows, market, and sales models. They should also provide ongoing support for not only chatbot maintenance but also overall refinement and improvement. Here are some questions you can ask vendors during your chatbot vendor scan: What is your Compliance History? Ensure the vendor has a proven track record of complying with relevant legal frameworks like GDPR, CCPA, or PIPEDA . Ask them when they started implementing compliance and how often they revise their own policies and procedures How Will You Manage Data Security? Evaluate how vendors handle sensitive data, including encryption methods, data storage practices, and access controls How Much Support Do You Offer? Choose vendors that provide ongoing support, including updates, retraining, and bias auditing. It's important to ensure that the tool remains effective and compliant over time What is your Communication Strategy? 
Before you list a new AI chatbot product or service, it is crucial to communicate what is coming to your staff, investors, and stakeholders and, as importantly, to your long-term clients. Failing to do so can lead to confusion, alienation, or even leaving key people feeling undervalued. It is a common perception that AI replaces human beings, so it is important that people are not caught off-guard when an AI system is introduced. Moreover, failing to communicate can generate internal misalignment if staff have high expectations for a chatbot that underwhelms or underdelivers in its adoption stage. Without a clear communication strategy, staff and consumers may resist engaging with a new tool, leading to a lack of utilization and ultimately wasted investment. Here are five things you can do to prepare yourself with a strong communication strategy: Start with an Internal Communication Plan : Ensure that your entire organization understands why the chatbot is being introduced, its capabilities, and how it benefits their roles. Host informational chats so staff can ask questions. Provide easy-to-read documentation so everyone feels comfortable with what is coming Engage Stakeholders Early : Who are your key stakeholders? Talk to them. Engage them early in the process. Seek their input to ensure their perspectives are considered. This will accelerate adoption and buy-in Craft Clear External Messaging : Inform your existing clients about the introduction of the chatbot before it goes live. Highlight the benefits, such as faster response times or 24/7 availability. Clarify the scenarios where human intervention is still available, and how chatbots facilitate better human interaction when it matters most. Transparency is key Create Training Materials: Provide staff with scripts and guides on how to introduce the chatbot to clients. 
Equip them with clear, consistent messaging that aligns with the company’s brand as well as its tone, voice, and vision Gather Feedback : Once the chatbot is live, actively gather feedback from staff and clients. Use the feedback to make necessary adjustments to both the chatbot and your communication strategy to address any concerns promptly Chatbot Success through a Commitment to Research, Communication, and Transparency One of the most critical aspects of deploying AI chatbots is managing user expectations. Users need to understand what a chatbot can and cannot do. Companies selling chatbots can avoid frustrating users and staff who expect more than the chatbot can deliver. This is why transparency is not only best practice but also a trust-building strategy. When users know the chatbot’s limitations and they are not surprised by its arrival, they are more likely to be satisfied with their interactions and more understanding when redirected to a human agent. As AI chatbots become an integral part of the business landscape, companies must also approach their deployment with a comprehensive strategy that accounts for legal, ethical, and practical considerations. Ensuring compliance with regulatory frameworks, minimizing bias, and selecting the right vendor are all crucial steps in rolling out a chatbot successfully. Equally important is preparing a solid communication strategy to align internal teams and clients. Previous Next
- Keeping People in the Loop in the Workplace | voyAIge strategy
Keeping People in the Loop in the Workplace Some Thoughts on Work and Meaning By Christina Catenacci, human writer May 16, 2025 Key Points We can look to the famous words of Dickson C.J. in the 1987 Alberta Reference case for some of the first judicial comments touching on the meaning of work When thinking about what exactly makes work meaningful, we can look to psychologists who have demonstrated through scientific studies that meaningful work can be attributed to community, contribution, and challenge—leaders are recommended to incorporate these aspects in their management strategies Leaders are also encouraged to note that self-actualization is the process of realizing and fulfilling one’s potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose My co-founder, Tommy Cooke, just wrote a great article that discusses the effects of the Duolingo layoffs. In that piece, he talks about how Duolingo just cut its contract workers (on top of the 10 percent of its workforce it had already reduced) and replaced them with AI. Ultimately, he concludes that AI achieves its greatest potential not by replacing humans, but by augmenting and enhancing human capabilities—and it follows that Duolingo runs the risk of reducing employee morale, increasing inefficiencies, and causing other long-term negative consequences like damage to reputation. Duolingo is not alone when it comes to reducing a part of its workforce and replacing it with AI. In fact, Cooke suggests that organizations that prioritize human-AI collaboration through hybrid workflows, upskilling, and governance position themselves for long-term success. This article got me thinking more deeply about the meaning of work. From an Employment Law perspective, I am very familiar with the following statement made by Dickson C.J. 
in 1987 ( Alberta Reference ): “Work is one of the most fundamental aspects in a person's life, providing the individual with a means of financial support and, as importantly, a contributory role in society. A person's employment is an essential component of his or her sense of identity, self‑worth and emotional well‑being. Accordingly, the conditions in which a person works are highly significant in shaping the whole compendium of psychological, emotional and physical elements of a person's dignity and self-respect” I wrote about these concepts in my PhD dissertation , where I argued that there is an electronic surveillance gap in the employment context in Canada, a gap that is best understood as an absence of appropriate legal provisions to regulate employers’ electronic surveillance of employees both inside and outside the workplace. More specifically, I argued that, through the synthesis of social theories of surveillance and privacy, together with analyses of privacy provisions and workplace privacy cases, a new and better workplace privacy regime can be designed (to improve PIPEDA ). Disappointingly, we have still not seen the much-needed legislative reform, but let’s move on. Thus, it is safe to say that for decades, courts have recognized the essential nature of work when deciding Employment Law cases. Economists have too. For instance, Daniel Susskind wrote a working paper where he explored the theoretical and empirical literature that addressed this relationship between work and meaning. In fact, he explained why this relationship matters for policymakers and economists who are concerned about the impact of technology on work. 
He pointed out that the relationship matters for understanding how AI affects both the quantity and the quality of work and asserted that new technologies may erode the meaning that people get from their work. What’s more, if jobs are lost because of AI adoption, the relationship between work and meaning matters significantly for designing bold policy interventions like the 'Universal Basic Income' and 'Job Guarantee Schemes'. More precisely, he argued that policymakers must decide whether to simply focus on replacing lost income alone (as with a Universal Basic Income, or UBI) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a Job Guarantee Scheme, or JGS). With AI becoming more common in the workplace, Susskind points out that although traditional economic literature narrowly focuses on the economic impact of AI on the labour market (for instance, how employment earnings are considered), there has been a change of heart in the field: a creeping pessimism about both the quantity of work to be done and the quality of that work. In fact, he touches on the notion that paid work is not only a source of an income, but of meaning as well. He also notes that classical political philosophers and sociologists have introduced some ambiguity when envisioning the relationship, but organizational psychologists have argued and successfully demonstrated through scientific studies that people do indeed get meaning from work. What does this all mean? Traditional economic models treat work solely as a source of disutility that people endure only for wages. But it is becoming more evident that the more modern way to think about work entails thinking about meaning—something that goes beyond income. 
What the foregoing suggests is that, if AI ultimately leads to less work for most people, we may need to better understand what exactly is “meaningful” to people, and how we can ensure that people who are downsized still have these meaningful activities to do. Further, we would need to provide ample opportunity to pursue these meaningful activities, so people can experience the psychological, emotional, and physical elements of dignity and self-respect that Dickson C.J. referred to back in 1987. Along the same lines, we will need to reimagine policy interventions such as UBI and JGS; while advocates of JGS tend to believe that work is necessary for meaning, UBI supporters believe that people can find meaning outside formal employment. More specifically, UBI is a social welfare proposal where all citizens receive a regular cash payment without any conditions or work requirements. The goal of UBI is to alleviate poverty and provide financial security to everyone, regardless of their economic status. On the other hand, a job guarantee is a policy innovation intended to secure true full employment while stabilizing the economy, producing key social assets, and securing a basic economic right. What we can be sure of is the fact that things are changing with respect to how we see work and meaning. For many, work is a major component of their life and views of themselves. Some would go further and suggest that work is the central organizing principle in their lives—they could not imagine life without work, and self-actualizing would not take place without it. To be sure, self-actualization is the process of realizing and fulfilling one’s potential, leading to a more meaningful life. It involves personal growth, self-awareness, and the pursuit of authenticity, creativity, and purpose. What Makes Work Meaningful? A closer look into what makes work meaningful can help in this discussion. 
Meaningful work comes from: Community : We are human, whether we like it or not. Because of this, we are wired for connection. Studies show that employees who feel a strong sense of belonging are more engaged and productive Contribution : The ability to see how one’s work benefits others is one of the strongest motivators in any job. In fact, employees who find meaning in their work are 4.5 times more engaged than those who do not Challenge : We thrive when we are given opportunities to learn and grow. Put another way, when leaders challenge employees to expand their capabilities and provide the support they need to succeed, those employees experience more meaningful development When you stop and think about it, it makes sense that leaders play a considerable role in shaping workplace meaning. Since about 50 percent of employees’ sense of meaning at work can be attributed to the actions of their leaders, leaders are recommended to find ways to cultivate community, contribution, and challenge so that employees and teams can thrive. More precisely, leaders in organizations are recommended to: foster a strong sense of belonging be aware of and acknowledge the impacts of employees’ work challenge workers so that they grow in meaningful ways Individuals can also create meaning for themselves through self-instigated learning, volunteering in the community, participating in local government, engaging in care work, and engaging in creative work. Previous Next
- Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 | voyAIge strategy
Canadian AI Trends in HR: A Year in Review and Foreshadowing 2025 Reflecting on Current and Forthcoming Shifts in HR By Tommy Cooke Nov 29, 2024 Key Points: AI has revolutionized HR recruitment and hiring in Canada by reducing the time-to-hire and enhancing the candidate experience Employee onboarding and learning management have been transformed with AI, streamlining manual tasks and enhancing personalization The year 2025 will see further developments in AI for HR in Canada, including enhanced Employee Experience (EX) management, balancing staffing and workload distribution, and challenging perceptions of trust in leadership Why Do AI Trends Matter? Looking back to understand AI trends may seem curious. By the fall of 2023, chatbots were barely on the scene. Yet in less than a year, Artificial Intelligence (AI) has exploded in ways that have fundamentally changed the course of daily operations and growth strategies of organizations across virtually every industry around the world. The same holds particularly true for Human Resources (HR) in Canada. Indeed, 2024 has been an important year in terms of AI transformation in HR. It's crucial to understand these trends because they highlight the evolving nature of the HR landscape. 2024's trends indicate where the industry is heading. The HR world is often characterized as a manual world filled with time-consuming processes. Recruitment often involves hours screening resumes, onboarding relies on stacks of checklists, and performance management is limited to a frequency of reviews that can often miss the subtle nuances of an employee's growth. Employee engagement is often driven by surveys, and workforce planning requires significant data analysis efforts that rely on quickly antiquating methods. Since AI entered the landscape this year, it has promised efficiency, scalability, and intelligent decision-making. 
The appeals of AI in the HR world are numerous, but the most significant tend to be reducing armchair swiveling, providing personalized insights, and positioning HR professionals to be strategic drivers of an organization - and no longer merely administrators. A survey of 2,900 HR leaders and executives across Canada and the United States found that AI adoption has become mainstream for HR teams, with smaller companies even more reliant on the technology than their larger competitors. Here are the top three AI in HR trends in Canada throughout 2024: 1. Recruitment and Hiring With the average time-to-hire north of 36 days , recruitment is seeing the most visible transformation through AI and automation. In fact, 79 percent of organizations in North America are currently using AI and automation tools in recruitment. At the time of writing this Insight, there are 81 AI recruiting startups in Canada . AI chatbots have been increasingly used this year in the area of HR to handle candidate engagement, manage application updates, conduct preliminary assessments, and help interviewers prepare for interviews. From AI-based video interviewing software conducting sentiment analysis to AI-driven employee matching, training, talent identification, and professional development, 2024 has been a busy year in terms of AI’s impact on the HR recruitment and hiring scene. 2. Employee Onboarding The often cumbersome and paperwork-intensive process of onboarding has changed significantly because of AI. In the United States, 68 percent of organizations are already using AI in the onboarding process, with 41 percent of respondents in a recent study anticipating AI-driven onboarding by August 2025. In Canada, the stakes around AI use for onboarding are high. 
Throughout the year, managers have reported that long hiring cycles are leading to high turnover due to heavy workloads, higher recruitment costs, losing top candidates to competitors, and delayed or cancelled projects . AI assists by streamlining manual tasks like document submission and compliance training, while AI chatbots are playing a role in guiding new hires throughout the onboarding process. Chatbots are also assisting in the production of FAQs. 3. Learning and Development Learning Management Systems (LMS) are familiar platforms to HR professionals. They can be tremendously useful in organizing, creating, and delivering training and educational content in ways that are insightful and easy to use for both HR and the workforce these platforms serve. A key transformation throughout 2024 is the integration of AI functionality into LMS. By the end of 2024, it is expected that 47 percent of all LMS tools will feature AI capabilities. In Canada, technology-driven use in teaching and learning is expected to grow well into 2027 with GenAI being a potent driver for change. The continued demand for flexible learning and hybrid offerings is in part due to the impact of GenAI on educational industries, and so AI’s entrance into LMS this year was not a surprise. We have already seen AI assist on LMS platforms by tailoring learning paths based on individual skills and growth goals and adapting to preferred learning styles as well. Moreover, AI is being positioned to reduce administrative support as well by automating enrollment, scheduling, and grading tasks. As we have seen in neighboring platforms, data-driven insights on learner progress and training program effectiveness are one of the many AI elements that are effectively transforming HR professionals into business leaders. 
Foreshadowing AI-HR Trends in Canada for 2025

Trends in the United States, Europe, and the rest of the world serve as important indicators of AI solution marketplace transformations likely to arrive in 2025. Here are three trends Canada can expect to see next year:

1. AI Intersecting with Employee Experience (EX)

Economic recovery priorities have quickly shifted organizations' focus to cost reduction and efficiency gains, but at a considerable cost in lost emphasis on inclusion, equity, mental health, and workplace schedule flexibility. To compensate for growing workplace challenges and an overall shift away from the quality of employee experience, employees are increasingly adopting AI to recover their own workplace experience. As we published recently, AI that can come to the workplace in an employee's pocket is an attractive assistant for getting through a difficult workday. This trend from 2024 will likely grow throughout 2025. With Gartner reporting that 60 percent of HR leaders believe their current technology stack hinders rather than improves employee experience, organizations will do well to reflect critically on what role their own and their employees' AI plays in improving the experience of work. Given the impact AI is already making in niche HR workflows, expect to see more AI targeted toward employee mental wellness to predict burnout, measure morale, and isolate which aspects of workplace culture are not working.

2. AI to Balance Staffing and Workload

More than half of the organizations surveyed by SHRM in 2024 reported that their HR units are understaffed. With SHRM also reporting that only 19 percent of HR executives expect their HR unit to grow next year, and that 57 percent of HR professionals report working beyond normal capacity, AI-driven task automation will continue to offset HR labour shortages well into and beyond 2025.
As organizations determine which human inputs add the most impact and value, and where they believe AI can fill in or even take the lead, we can also expect a sharper rise in AI Agents. As our Co-Founder Christina Catenacci explains, AI Agents are expected to work without the need for human supervision. They will play a significant role in reframing the conversations organizations have about balancing staffing and workloads.

3. AI's Role in Challenging and Promoting Trust in Leadership

Change is stressful. Employees have been burning out and disconnecting at alarming rates throughout 2024. While EX will be a challenge to maintain and foster throughout 2025, leadership will also likely continue to face elevated pressure to maintain trust and transparency. Layoffs, economic uncertainty, global conflict, and the likelihood of political turmoil have created a widespread sense of job insecurity for many workers. How leaders navigate these issues, particularly when the adoption of AI is often perceived as a barrier to trust, will certainly make the question of AI's role in HR leadership a difficult one to answer. With 46 percent of HR leaders believing AI boosts their analytics capabilities, and 41 percent of business leaders expecting to redesign business processes via AI in the next few years, AI will be implemented in more critical business lines and operations across more industries throughout 2025. This means that leadership must find ways to lead their workers with confidence. Thought Leadership will play a critical role in shaping conversations, easing debates, and softening cynicism. It involves creating and distributing key content in the form of blogs, podcasts, videocasts, and newsletters that clarify what AI is and is not, how and why it is proven to improve workplace processes and productivity, and how it can be implemented without necessitating layoffs.
In order to protect, foster, and maintain trust with employees, leaders need to position themselves as commanders of AI, not merely its passengers. We expect to see a significant rise in AI-oriented Thought Leadership content throughout 2025.

2024 and 2025: a Period of Significant AI Transformation in HR

AI's influence on HR in Canada throughout 2024 has been nothing short of transformative. From streamlining recruitment and onboarding to tailoring employee learning experiences, AI has repositioned HR professionals to move beyond administrative tasking and take on more strategic roles. As we look toward 2025, global trends will continue to shape the Canadian HR landscape, requiring leaders to adopt ethical and inclusive AI practices while maintaining a human-centered approach that reflects critically on what it means to be a trustworthy leader who understands and embraces AI. Organizations that embrace these technologies and lessons thoughtfully will be well-positioned to foster a more productive and engaging workplace in the years ahead.
- What is Human in the Loop? | voyAIge strategy
What is Human in the Loop? Understanding the Basics

By Christina Catenacci Oct 18, 2024

Key points:
A Human-in-the-Loop approach is the selective inclusion of human participation in the automation process
There are pros to using this approach, including increased transparency, an injection of human values and judgment, less pressure for algorithms to be perfect, and more powerful human-machine interactive systems
There are also cons, like Google's overcorrection with Gemini, which led to historical inaccuracies

Humans-in-the-Loop: What Does That Mean?

A philosopher at Stanford University asked a computer musician whether we will ever have robot musicians, and the musician was doubtful that an AI system would be able to capture the subtle human qualities of music, including conveying meaning during a performance. When we think about Humans-in-the-Loop, we may wonder exactly what this means. Simply put, when we design with a Human-in-the-Loop approach, we can imagine it as the selective inclusion of human participation in the automation process. More specifically, this could manifest as a process that harnesses the efficiency of intelligent automation while remaining open to human feedback, all while retaining a greater sense of meaning. If we picture an AI system on one end and a human on the other, we can think of AI as a tool in the centre: where the AI system asks for intermediate feedback, the human would be right there (in the loop) providing minor tweaks to refine the instruction and giving additional information to help the AI system stay on track with what the human wants to see as a final output. There would be arrows that go from the human to the machine and from the machine to the human. By designing this way, we have a Human-in-the-Loop interactive AI system. We can also view this type of design as a way to augment the human: a tool, not a replacement.
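That back-and-forth can be pictured as code. The following is a minimal Python sketch of the loop described above, where `generate` and `revise` are hypothetical placeholders standing in for any AI system, and the human's feedback notes drive each pass through the loop:

```python
def generate(prompt):
    # Placeholder for the AI system producing a first draft.
    return f"draft for: {prompt}"

def revise(draft, note):
    # Placeholder for the AI incorporating one piece of human feedback.
    return f"{draft} [revised per: {note}]"

def human_in_the_loop(prompt, feedback_notes):
    """AI drafts, human reviews, AI revises; repeat until the human is done."""
    output = generate(prompt)          # arrow: machine -> human
    for note in feedback_notes:        # each note is one trip around the loop
        output = revise(output, note)  # arrow: human -> machine
    return output

result = human_in_the_loop("8-bar melody", ["slower tempo", "minor key"])
```

The key design point is that the machine never runs to completion unattended: every iteration pauses at a point where the human can steer.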
So, if we take the example of computer music, we could have a situation where the AI system starts creating a piece, and then the human musician provides some “notes” (pun intended) that add meaning and ultimately begin creating a real-time, interactive machine learning system. What exactly would the human do? The human musician would be there to iteratively, efficiently, and incrementally train tools by example, continually refining the system. As a recording engineer and producer myself, I like and appreciate the idea of AI being able to take recorded songs and separate them into their individual stems or tracks, like keys, vocals, guitars, bass, and drums. This is where the creativity begins…

What are the pros and cons of this approach?

While there are several benefits of using this approach, here are a few:

There would be considerable gains in transparency: when humans and machines collaborate, there is less of a black box when it comes to what the AI is doing to get the result

There would be a greater chance of incorporating human judgment: human values and preferences would be baked into the decision-making process so that they are reflected in the ultimate output

There would be less pressure to build the perfect algorithm: since the human would be providing guidance, the AI system only needs to make meaningful progress to the next interaction point; the human could show a fermata symbol, so the system pauses and relaxes (pun intended)

There could be more powerful systems: compared to fully automated or fully manual human systems, Human-in-the-Loop design strategies often lead to even better results

Accordingly, it would be highly advantageous to think of AI systems as tools that humans use to collaborate with machines and create a great result. It is thus important to value human agency and enhanced human capability.
That is, when humans fine-tune (pun intended) a computer music piece in collaboration with an AI system, that is where the magic begins. On the other hand, there could be cons to using this approach. Take the example of Google's Bard (later renamed Gemini), a mistake that cost the company dearly: it lost billions of dollars in share value and was quite an embarrassment. What happened? Gemini, Google's new AI chatbot at the time, began answering queries incorrectly. It is possible that Google was rushing to catch up to OpenAI's ChatGPT at the time. Apparently, there were issues with errors, skewed results, and plagiarism. The issue that applies here is skewed results. In fact, Google apologized for “missing the mark” after Gemini generated racially diverse Nazis. Google was aware that GenAI had a history of amplifying racial and gender stereotypes, so it tried to fix those problems through Human-in-the-Loop design tactics, but it overcorrected. In another example, Google responded to a request for “a US senator from the 1800s” with an image of Black and Native American women (the first female senator, a white woman, served in 1922). Google stated that it was working on fixing this issue, but the historically inaccurate picture will always be burned in our minds and make us question the accuracy of AI results. While it is being fixed, certain images will not be generated. We see, then, what can happen when humans do not do a good job of portraying history correctly and in a balanced manner. While these examples are obvious, one may wonder what will happen when there are only minor issues with diverse representation or inaccuracies due to human preferences… Another con is that Humans-in-the-Loop may not know how to eradicate systemic bias in training data. That is, some biases could be baked right into the training data. We may question: who is identifying this problem in the training data?
How is the person finding these biases? Who is making the decision to rectify the issue, and how are they doing it? One may also question whether tech companies should be the ones assigned these tasks. With the significant issues of misinformation and disinformation, we need to understand who the gatekeepers are and whether they are the appropriate entities to do this.