
Chatbots at Work – Emerging Risks and Mitigation Strategies

How to Recognize and Overcome the Invisible Risks of AI

By Dr. Tommy Cooke

Nov 22, 2024

Key Points:

  • Personal AI chatbots in the workplace can pose significant risks to data privacy, security, and regulatory compliance, which can lead to severe legal and reputational consequences

  • Employees using personal AI tools can inadvertently expose proprietary information, increasing the risk of intellectual property breaches and confidentiality violations

  • Organizations can mitigate these risks through clear policies, employee education, and proactive monitoring, allowing for responsible AI usage without compromising security or creativity

AI is rapidly transforming where and how we work and play. As our Co-Founder Dr. Christina Catenacci deftly describes, AI chatbots have become commonplace friends, mentors, and even romantic partners. At a pace that surprises observers in nearly every industry, AI is creating incredible opportunities that are often fraught with challenges. AI services tend to be packaged and sold so quickly that subscribers have little space to reflect on fit, appropriateness, and potential blind spots that could cause misalignment in even the most well-intentioned organization.


As a result, a new kind of workplace is emerging. The remote and hybrid work models ushered in by the pandemic already feel like a distant memory now that employees are bringing their own personal AI into the office.


In-pocket AI is appealing. Why wouldn't an organization want its employees to benefit from improved workflows and creativity, especially when it costs the organization nothing? But a critical dynamic of this new pocket-AI workplace is that employers are seeing new blind spots and challenges emerge.


Understanding and navigating them is crucial for avoiding data leaks, maintaining compliance, and protecting intellectual property. As we head into 2025, organizations must recognize that invisible AI is unmanaged AI, and unmanaged AI exposes an organization and its stakeholders to far-reaching consequences. By understanding these risks, organizations can address them in ways that not only protect the business but also position its executives as thought leaders capable of aligning values, building trust, and enhancing overall efficiency without compromising employee creativity and freedom.


The Risks of AI Chatbots 


Data privacy, security, and compliance are top of mind for most employers we speak to. Because an employee's personal AI chatbot requires constant internet access and cloud storage, that access is typically facilitated by the employer's Wi-Fi network. This increases the risk of corporate data being stored improperly on third-party servers or inadvertently intercepted and exposed. Personal AI chatbots in the workplace therefore heighten exposure to industry regulations like the GDPR or HIPAA, significantly raising an organization's legal exposure to fines or penalties.


Most AI chatbot services train their AI models in real time on the data their users provide them, and this can include sensitive intellectual property. Consider the following hypothetical prompt that a marketing employee at a pharmaceutical company might enter into their personal AI chatbot:


“I have a client named [x] who has 37 patients in New York State with [y] medical conditions. They are born in [a, b, c, and d] years. Analyze our database to identify suitable drug plans. Be sure to reference our latest cancer treatment strategy, named [this].”  


First, the prompt may lead to privacy issues since it includes potentially identifiable information about patients, such as their location, medical conditions, and birth years. Depending on how the AI chatbot processes and stores this information, it could lead to violations of HIPAA: sharing protected health information (PHI) with an unapproved third-party application puts the employer at risk of serious regulatory breaches, not to mention reputational damage. Moreover, patients' identities have been reverse-engineered from far less data using far more innocuous-seeming methods.


Second, the hypothetical prompt discloses confidential information when it mentions the employer's latest cancer treatment strategy. Strategic information related to drug plans or treatment approaches may be inadvertently surfaced or suggested to competitors' employees who are using the same AI chatbot.


Third, the hypothetical prompt incorrectly assumes that the AI chatbot has access to one of the company's secure databases. Despite having uploaded a few protected PDFs to the AI chatbot, the employee used the wrong terminology. The potential for this to cause problems is significant, as it can trigger the AI chatbot to creatively but silently fill in the blanks; as we know, AI chatbots have a tendency to hallucinate. Remember, they do not reflect the living world; they analyze data models of the real world that you and your employees live in. The point is that the AI chatbot may generate misleading or inaccurate information simply because it is only as robust and comprehensive as the data it trains on. There is a significant risk that the employee recommends a drug plan to their clients and colleagues that is based on flawed and incomplete health, medication, and business data.
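To make the first of these risks concrete, here is a minimal sketch of a pre-submission check an organization might run against outgoing prompts. The pattern categories and regular expressions are assumptions for illustration only; real PHI detection requires dedicated data loss prevention tooling, not a handful of regexes.

import re

# Illustrative PHI pattern categories; the regexes below are assumptions
# for this sketch and cover only a tiny slice of what real tooling checks.
PHI_PATTERNS = {
    "birth year": re.compile(r"\b(19|20)\d{2}\b"),
    "patient count": re.compile(r"\b\d+\s+patients?\b", re.IGNORECASE),
    "location": re.compile(r"\bNew York\b", re.IGNORECASE),  # illustrative subset
}

def flag_phi(prompt: str) -> list[str]:
    # Return the name of every pattern category found in the prompt.
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

prompt = ("I have a client who has 37 patients in New York State "
          "born in 1967, 1971, 1984, and 1990.")
print(flag_phi(prompt))  # ['birth year', 'patient count', 'location']

Even this toy check would flag the hypothetical prompt above on three counts before it ever left the network.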


Mitigation Strategies 


AI is here to stay, and the solution is not to ban these tools. Prohibition is unrealistic and may inadvertently create friction when an employer later decides to implement its own AI tools for employees. Here are some proactive steps any organization can follow to minimize risks while enabling employees to use AI responsibly:


1. Build a Policy 


Set expectations that outline what is and is not allowed when it comes to personal AI chatbots. Include rules about handling sensitive data, consequences for non-compliance, and standards for vetting AI tools. Moreover, generate a one-stop guideline PDF that gives your employees the steps they should follow, along with examples of both problematic and approved prompts.
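As a rough illustration, a policy like this can be encoded as data so it can be published alongside the guideline PDF and later enforced in tooling. The tool names and data classifications below are hypothetical.

# A minimal sketch of a personal-AI policy encoded for automated checks.
# Tool names and data classes are hypothetical examples, not a standard.
APPROVED_TOOLS = {
    "enterprise-chatbot": {"public", "internal"},  # vetted; may see internal data
    "code-assistant": {"public"},                  # vetted; public data only
}

def is_allowed(tool: str, data_class: str) -> bool:
    # An unlisted tool is unvetted, so no data class may be sent to it.
    return data_class in APPROVED_TOOLS.get(tool, set())

print(is_allowed("enterprise-chatbot", "internal"))  # True
print(is_allowed("personal-chatbot", "public"))      # False: tool not vetted
print(is_allowed("code-assistant", "confidential"))  # False: data class too sensitive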


2. Educate your Employees 


Training employees on AI risks and best practices ensures they understand their role in protecting your organization, not merely working inside it. Training is always the first line of defense, and it is a proven method for promoting awareness and responsible use of AI.


3. Monitor and Audit 


Numerous security solutions exist to identify which tools are being used inside a company's network. Implement systems to track AI tool usage and audit their data flows to identify unauthorized or high-risk applications. Inform your employees that you are monitoring AI-based network activity and will conduct annual audits to ensure their activity complies with organizational policy requirements.
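To sketch what such monitoring can look like at its simplest, the example below flags network log entries that point at known AI chatbot domains. The CSV log format and the domain list are assumptions for illustration; commercial security products perform this kind of discovery far more thoroughly.

# A minimal audit pass over proxy or DNS logs, flagging traffic to known
# AI chatbot domains. Log format (user,domain CSV) is an assumption.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_lines):
    # Yield (user, domain) pairs for every request to a known AI service.
    for line in log_lines:
        user, domain = line.strip().split(",")
        if domain in AI_DOMAINS:
            yield user, domain

sample_logs = ["alice,chat.openai.com", "bob,example.com", "carol,claude.ai"]
for user, domain in flag_ai_traffic(sample_logs):
    print(f"{user} contacted {domain}")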


Mindfully Embracing an Opportunity 


The rapid proliferation of AI companions challenges organizations to rethink how they relate to their employees. Risks certainly exist, but they are manageable through thoughtful policies, regular monitoring, and a strong commitment to training. Allowing employees to use personal AI chatbots isn't merely a risk; it's an opportunity. Embracing that opportunity signals trust, adaptability, and a forward-thinking culture that responds to AI proactively rather than reactively.


HR leaders, IT professionals, and virtually every executive can empower employees to innovate and create while simplifying tedious workflows through AI chatbots, and they can do so safely and to the organization's considerable benefit. Doing so shows your organization, your employees, and your clients that you are ready for the rapidly evolving digital landscape of 2025 and the years to come.
