
Some of the Main Risks of Creating and Selling a Chatbot

How to mitigate those risks before putting them on the market

By Christina Catenacci

Dec 5, 2024

Key Points 


  • As we approach 2025, chatbots are in high demand, and businesses are wondering whether they should buy or build their own chatbots and put them on the market 

 

  • Although there are several risks associated with businesses having an AI chatbot, there are also many ways to mitigate those risks. For example, we need to accept that chatbots can be wrong (as in the case of hallucinations), so it is necessary to check the AI chatbot's outputs for accuracy 

 

  • There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don't need breaks, chatbots can leverage user data and preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously 


Chatbots are hot these days, and plenty of businesspeople want to create them and put them on the market as soon as possible. They want to stay competitive. They want to make passive income. They want to ride the AI wave.  


It is no wonder that 79 percent of top-performing businesses have already installed some form of conversational marketing tool. Chatbots are in demand, can unlock new revenue streams, and can help to generate high returns. The more clients and the more mid-sized businesses that are involved, the higher the monthly revenue can be. 


In fact, it is possible to search for use cases for ready-made chatbots in certain industries for certain tasks like lead generation, recruitment, or appointment booking. It is an exciting time to leverage technology to bolster a business's services. Indeed, according to a 2022 study by ThriveMyWay, 24 percent of enterprises, 15 percent of midsized companies, and 16 percent of small firms utilized chatbot services.  


There are clear benefits of using chatbot services in a business: chatbots are cost effective since they can automate tasks, chatbots can work 24/7 and don't need breaks, chatbots can leverage user data and preferences to provide a tailored experience, and chatbots are scalable in that they can handle several conversations simultaneously. They certainly appear helpful for businesses that wish to deliver enhanced customer experiences. Businesses that want to build their own chatbot instead of reselling one should explore the necessary action steps, including signing up for a chatbot platform, building a demo chatbot for a simple client in a target niche, creating a landing page that shows off the demo bot, reaching out to a small number of prospects, and scheduling consultations. 


But what are the risks of doing so, and how can we mitigate them? Are these risks present even if businesses become chatbot resellers (buying a ready-made chatbot and reselling it to their clients) instead of builders (making a bot and selling it to businesses)? 


Risks 


Some of the main risks include: 


  • Security and data leakage: if sensitive third-party or internal company information is entered into a chatbot, it becomes part of the chatbot's data model and may be shared with others who ask relevant questions. This could lead to data leakage and violate an organization's security policies 

 

  • Hallucinations: if there is an inquiry, it is possible that the AI's answer to that question could be a hallucination. What is this? Simply put, it is when the chatbot makes things up (including citations/references) 

 

  • The chatbot could go rogue: if there is a lack of human feedback, or if the system is poorly trained, the chatbot could provide unexpected, incorrect, or even harmful outputs 

 

  • Disinformation: if chatbots make it easy for bad actors to create false information at mass scale—cheaply and rapidly—a business could face reputational risks, legal risks, and other damage too. There is more: the same AI chatbot could teach another AI chatbot to spread even more harmful disinformation 

 

  • Bias and Discrimination: if bias arises from the biased nature of the data on which AI tools are trained, or if users purposefully manipulate AI systems and chatbots to produce unflattering or prejudiced outputs, there could be problematic consequences. Worse, when decisions are made based on that biased information, there is a considerable risk of discriminatory decisions 

 

  • Intellectual Property: if the AI system is trained on enormous amounts of data (including protected data like copyrighted works), the business that uses the data through the chatbot could be violating another party's intellectual property and could end up on the receiving end of an infringement action 

 

  • Privacy and Confidentiality: if the AI system is trained on or is fed any sensitive information about a person, the business could be violating a person’s privacy. Similar to the Intellectual Property issue, the business could face privacy complaints or actions 

 

 

Mitigating the risks 


Here are some mitigation strategies: 


  • Be cautious and acknowledge the risks before acting 

 

  • Create policies and procedures that can outline for employees what is acceptable and unacceptable use of AI in the workplace 

 

  • Accept that chatbots can be wrong, and check the references 

 

  • If you build and sell a chatbot, use contractual provisions to limit liability 

 

  • Be transparent about AI use when communicating with clients and employees 

 

  • Review AI outputs and check for bias and discriminatory impacts 

 

  • Create plans to address AI-powered disinformation 

 
