
Adopt AI by Appealing to People

AI leaders must understand that AI adoption is more than a technical exercise

By Tommy Cooke, powered by caffeine and curiosity

Jan 31, 2025

Key Points:


1. AI adoption is not just a technological upgrade because AI is not merely math. It’s a social force and cultural transformer that requires strong leadership.

2. Successful AI leaders prioritize trust, communication, and training to navigate employee fears and resistance.

3. Embedding AI into an organization demands a strategic, human-centered approach that aligns with business goals and workplace culture.


On January 29th, we were invited to give a fireside chat, moderated by the incredible Christina Fox, CEO of the TechAlliance of Southwestern Ontario, and hosted by the Canada Club of London (CCL). It was a fantastic opportunity to connect with people across a wide range of industries, from education and healthcare to finance and agriculture.


What really stood out to Christina Fox, VS's Co-Founder Christina Catenacci, and me was not only how much AI has impacted every single person in every industry, but also that many attendees seemed to share a common problem: how to be a leader in AI adoption.


In fact, bringing AI into an organization is not merely a technical or financial challenge. It is also a web of social and cultural hurdles that leaders must clear without tripping along the way.


While there is indeed a tendency to introduce and explain AI as merely a tool, as nothing more than linear algebra, this explanation is incomplete and dismissive of the fact that AI is built to interface with humans in deeply social and cultural ways. To be clear, I recognize that technical AI experts prefer to calm the waters by appealing to black-and-white, logical sensibilities.


Some of the most brilliant AI engineers believe that disarming the nerves and anxiety around AI is best achieved by reducing it to mathematics. However, AI is larger than the sum of its math. It functions in society and on humans, and that makes it complex rather than just a technology. It is a social force that shapes human behaviour and thinking.


Allow me to unpack this conundrum further. When we polled our audience before our recent talk, we learned that 90 percent of them use Natural Language Processing (NLP), or AI that understands, manipulates, and interprets human language.


A year ago, at the CCL's inaugural AI event, almost nobody would have answered “yes”. This is significant because it means not only that people are using AI, but that they are interfacing with it at unprecedented rates. While our audience used a variety of tools, including Claude, ChatGPT, Copilot, and even DeepSeek, what all of these tools have in common is that they:


  1. Shape perceptions of reality. Users perceive AI as capable of making decisions, demonstrating intent, and even engaging in meaningful back-and-forth dialogue. This means that human perceptions drive adoption. It also means that AI, in turn, builds trust with users over time. It can also create dependency. When humans use AI, they are not merely interacting with math. They are communicating with a technology that emulates human thinking, reflection, and problem solving.


  2. Exhibit power. AI is not conscious, but it is very good at simulating sentience, or the appearance of feelings and cognition. This is precisely how AI influences human decision-making in the first place. By appealing to human attributes, AI holds power over us: it convinces us not only that the tool is correct, but that we will eventually act on its recommendations, in both casual and more consequential ways.


  3. Exist in social and cultural contexts. AI does not exist in a vacuum. AI systems are designed and deployed by people who have curiosities, fears, and biases, and those things change over time. This is why AI is built to reflect its owners' and designers' ethics, world views, and even their personalities. The fact that AI feels human is not an accident; it is often a core feature.


So, when leaders want to bring AI into their organization, this is what they are often up against. To successfully adopt AI in any organization, it is incumbent upon executives and senior management to become AI leaders, and that starts with recognizing that AI is more than merely an IT upgrade.


The Leadership Imperative


So, you've understood that AI is more than math. What are your next steps? They are not always obvious. Who is responsible for choosing AI? Who is accountable for its decisions? How do you bring multiple leaders with different opinions into alignment about AI? Will your employees and coworkers trust it? Without clear leadership, AI efforts stall, employees resist change, and organizations miss opportunities.


Before getting into some recommended steps for becoming an AI leader in your organization, let's take inventory of some common obstacles to adoption.


Common Obstacles to AI Adoption


  1. Lack of Strategic Alignment. AI initiatives often begin in silos, and the first silo tends to be the technical team struggling to get executive buy-in. Cross-departmental collaboration complicates things further. Given what we discussed above about human curiosities, fears, and biases, it is perhaps unsurprising that people trained to do very different tasks are often the biggest obstacle to reaching consensus on AI adoption.


  2. Employee Resistance. It is well known in 2025 that people working in organizations of every kind overwhelmingly feel unsure about what AI means for them, their job description, and even their job security. After the event, multiple audience members told me that IT teams struggle to get executive buy-in because employees are afraid of what they do not understand.


  3. Training Needs. AI is not a hammer. It requires training so that individuals can use it effectively and safely. This is not merely a task for technical teams but also for executives and unit decision-makers: people with a vested interest in ensuring that if and when AI is used, people are trained in ways that speak to them, that resonate, and that facilitate productive reflection.


  4. Ethics and Trust. AI introduces risks. All systems are susceptible to biased decision-making, privacy violations, and unanticipated accountability gaps. Even as employees become more literate with AI, 62 percent of employees believe that their trust in AI will be broken by out-of-date public data in their organization's AI systems.


  5. Lack of Change Management. AI adoption requires the ability to manage the transformation taking place along the way. Without change management strategies, AI efforts fail to gain traction. Effective change management means fostering open dialogue, providing ongoing support, and ensuring that employees feel included in the transformation rather than merely subjected to it.

 

Five Leadership Priorities for Successful AI Adoption


Given that human hurdles stall or even derail AI adoption, there are critical first steps that leaders can take to calm the waters and build momentum. Remembering that AI adoption is not just about technology—that it’s about people, trust, and change—leaders need to acknowledge and address these complexities early. Here are five steps you can follow:


First, set a clear vision and take ownership. AI initiatives need leadership beyond your tech people. Appoint an AI champion or a “steering group” who can take inventory of employee sentiments, fears, and curiosities. Task them with identifying not only the challenges and opportunities of bringing in AI, but also with ensuring that adoption aligns with business objectives, especially growth plans.


Second, lead the conversation on training. AI success depends on human expertise. Figure out how your people learn best. Build learner personas of your workforce. Determine whether they prefer synchronous or asynchronous learning. Take inventory of use cases and exemplify them. Think about the long term, too: training should be an ongoing effort that evolves alongside AI. Leaders must also ensure that training covers not just technical skills but also critical thinking about AI's role, its risks, and ethical considerations. Prompt engineering, for example, is increasingly critical when using NLP tools at work.


Third, foster a culture of experimentation. Encourage your people to start wondering about the possibilities of AI. Prioritize building a sandbox environment where AI can be used on a limited dataset with no impact on existing databases or critical business operations. Invite people from across the business to play there, to break things, and to innovate along the way. 


Fourth, communicate, communicate, communicate. Do not let AI feel like a top-down decision. Engage employees early. Listen to them. Provide clear messaging on how AI supports, rather than replaces, them. Update them along the journey; don't wait until the end to tell them AI is coming. Hold monthly update meetings and encourage feedback that you respond to at those meetings.


Fifth, plan for policies and procedures early. One of the first and most important documents needed to safeguard your investment, and your people, is an AI in the Workplace policy and procedures document. It not only establishes rules, expectations, roles, and responsibilities, but also keeps everyone in efficient, productive lockstep so that your talent knows exactly how to optimize their new tools.


AI leadership isn’t about knowing the most about AI. It's about guiding people through difficult, often frightening, and unfamiliar change. Leading with clarity, empathy, and deliberate intent will turn AI from an uncertain risk into an opportunity.
