What are AI agents?

Will AI agents make our lives easier?

By Dr. Christina Catenacci

Oct 22, 2024

Key Points 

  • AI agents act autonomously, with agency, and can serve as virtual employees throughout an organization and across industries 

  • Some companies are starting to launch AI agents; for example, Microsoft is launching agents in Copilot Studio, and Salesforce is launching Agentforce 

  • As companies roll out AI agents, the packages will likely cover several kinds of tasks, such as sales, expense tracking, and customer service 


You may be wondering what all the hype is about when hearing about AI agents. What are they? How are they different from chatbots? How can they make my life easier? 


AI agents are not like chatbots, which need to be prompted. Rather, AI agents act autonomously, with agency. They may not be common right now, but it has been projected that by 2028, 33 percent of enterprise software applications will include agentic AI, up from less than one percent in 2024, enabling 15 percent of day-to-day work decisions to be made autonomously. 

Also referred to as intelligent agents, AI agents use AI techniques to complete tasks and achieve goals. In fact, AI agents do the following:

  • receive instructions 

  • create a plan 

  • use AI to complete tasks 

  • produce outputs 
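The loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration; the function names and the plan format are assumptions for the example, not any vendor's actual API:

```python
# Illustrative agent loop: receive instructions, plan, act, produce outputs.
# All names here are hypothetical; real agent frameworks differ.

def make_plan(instruction):
    """Break an instruction into ordered steps (stubbed with a trivial split)."""
    return [step.strip() for step in instruction.split(",") if step.strip()]

def execute_step(step):
    """Stand-in for a model or tool call that completes one step."""
    return f"done: {step}"

def run_agent(instruction):
    plan = make_plan(instruction)               # 1. create a plan
    results = [execute_step(s) for s in plan]   # 2. complete each task
    return results                              # 3. produce outputs

outputs = run_agent("draft the email, track the expense, file the report")
```

The point of the sketch is the shape, not the stubs: a real agent would replace `make_plan` and `execute_step` with model calls and tool integrations, but the receive-plan-act-output cycle stays the same.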

Why would businesses need these AI agents? By having intelligent agents, organizations can increase the number of automatable tasks and workflows. This saves time and automates monotonous tasks so that human employees can be freed up to complete more interesting and intellectually stimulating work. Eventually, AI agents will likely create more complicated plans and produce more impressive deliverables, all autonomously. 


As wonderful as these tools might be, significant concerns are likely to arise around privacy, security, and ethics. For instance, when an AI agent is carrying out autonomous tasks to achieve goals, there is some question about whether it will violate a privacy law in the process. And the AI agent may not be ethically aware enough to ensure that the way it accesses information is in line with human values. 


When trying to accomplish certain goals, would an AI agent concern itself with human values? Even if it would, would it sufficiently understand human values in every circumstance? If it does not, who would be responsible if the AI agent did something that a human would not consider acceptable conduct? Perhaps the law of agency would apply and make the owner of the AI agent responsible for any unintended consequences. But if a large number of users start using these AI agents simultaneously, how would it be possible to minimize the extent of harm that could ensue all at once? 


Interestingly, it has been reported that Microsoft will be launching these AI agents in its Copilot Studio in the very near future. In fact, Microsoft forecasts that these AI agents will carry out tasks throughout the workplace in many industries. To help with that, the company is going to release ten fine-tuned agents and give users the ability to create their own. 


For example, some of the agents that Microsoft Copilot Studio will be releasing include:

  • Sales Qualification Agent 

  • Supplier Communications Agent 

  • Customer Intent Agent 

  • Customer Knowledge Management Agent 


Indeed, Microsoft has claimed that AI automation will remove the boring parts of jobs instead of replacing entire jobs. The company plans on achieving this goal by allowing businesses and developers to build AI-powered Copilots that can work as virtual employees. Unlike a chatbot waiting to be prompted, the Copilot would do things such as monitoring email inboxes or automating a series of tasks in the place of employees. 


Employees will need to appreciate that the parts of jobs being eliminated are the kinds of tasks employees do not enjoy doing. As this process takes place, HR managers may need to reconfigure jobs so that each role contains more of the tasks suited to humans. 


To be sure, Microsoft has built some controls into Copilot Studio so that it does not go rogue and complete inappropriate tasks autonomously. Managers will likely need to provide guidance to employees with respect to what types of tasks are good candidates to be automated; they will need to assign those tasks to the AI Copilot and leave more delicate or complex work tasks to human employees.  


Technically speaking, Copilot Studio combines the natural language understanding models already in Copilot Studio with Azure OpenAI to: 


  • Understand what the copilot maker wants to achieve by parsing their request 

  • Apply knowledge of how nodes within a topic work together, and how a topic should be constructed for the best effect 

  • Generate a series of connected nodes that together form a full topic 

  • Use plain language in any node that contains user-facing text that corresponds with the copilot maker's request 


According to Microsoft, the “Create with Copilot” option in Copilot Studio allows users to simply describe what they want to achieve; Copilot Studio then produces a topic path that achieves that goal. Microsoft recommends that users include granular instructions in a description and limit the scope of the description to a single topic. Topics can also be modified later, if necessary, using natural language. 
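To make the node-and-topic idea concrete, here is a hedged sketch in Python of how a generated topic might be represented as a chain of connected nodes. The classes and fields are illustrative assumptions for this article, not Copilot Studio's actual schema:

```python
from dataclasses import dataclass

# Hypothetical model of a topic as a series of connected nodes,
# loosely mirroring the description above; not Microsoft's real schema.

@dataclass
class Node:
    kind: str                        # e.g. "question", "message", "action"
    text: str                        # user-facing text in plain language
    next_node: "Node | None" = None  # connection to the following node

@dataclass
class Topic:
    name: str
    first: Node

    def walk(self):
        """Yield nodes in order, following the chain of connections."""
        node = self.first
        while node is not None:
            yield node
            node = node.next_node

# A tiny topic that might be generated for a request like
# "handle order status questions".
done = Node("message", "Your order is on its way!")
ask = Node("question", "What is your order number?", next_node=done)
topic = Topic("Order status", first=ask)

kinds = [n.kind for n in topic.walk()]   # ["question", "message"]
```

The sketch shows why Microsoft recommends limiting a description to a single topic: each topic is one connected path of nodes, so one description maps cleanly to one chain.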


Apparently, Microsoft’s latest announcement has deepened its rivalry with Salesforce, especially since Salesforce just presented its “Agentforce” at its Dreamforce conference. Indeed, the competition is fierce.  


We shall see: AI agents are in their early stages.  
