Data & AI

Agentic AI: the risks & rewards of adaptive AI

Agentic AI is one of the most hotly tipped but underexplored areas of AI. Characterised by their goal-driven behaviour, these adaptive AI systems have the potential to optimise critical business processes. But they also raise ethical questions and implementation challenges. Here we look at the most promising use cases and how to capture value from agentic systems – safely and responsibly. 


What is agentic AI? 

Agentic AI is a type of AI system capable of pursuing goals without the need for close human oversight. 
 
OpenAI describes these systems as 'characterised by the ability to take actions that consistently contribute towards achieving goals over an extended period of time, without their behaviour having been specified'. 

Building upon large language models (LLMs), agentic systems come with some notable additions. Given a written goal and a defined set of actions, LLMs can propose a list of actions that contribute towards reaching that goal. These actions are executed by an external system that interacts with the LLM (typically the backend).

Results from each action are fed back into the LLM, allowing the agentic system to interact with its environment and continuously adjust its plans based on real-time contextual information. This continuous adjustment is known as adaptability. 
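The feedback loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: the `llm_next_action` stub stands in for an actual LLM call and hard-codes a trivial plan, and the tool names are invented for the example.

```python
def llm_next_action(goal, history):
    """Stub standing in for a real LLM call: given the goal and the results
    of previous actions, propose the next action. Here we hard-code a
    trivial two-step plan purely for illustration."""
    if not history:
        return {"tool": "lookup_weather", "args": {"city": "Zurich"}}
    return {"tool": "finish", "args": {"summary": history[-1]["result"]}}

def run_agent(goal, tools, max_steps=5):
    """Observe-act loop: ask the model for an action, execute it via the
    backend's tools, feed the result back, and repeat until 'finish'."""
    history = []
    for _ in range(max_steps):
        action = llm_next_action(goal, history)
        if action["tool"] == "finish":
            return action["args"]["summary"]
        result = tools[action["tool"]](**action["args"])
        history.append({"action": action, "result": result})
    return None  # step budget exhausted without reaching the goal

tools = {"lookup_weather": lambda city: f"Sunny in {city}"}
print(run_agent("Report the weather in Zurich", tools))  # → Sunny in Zurich
```

The key point is the loop: unlike a one-shot prompt, each tool result re-enters the model's context before the next action is chosen.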

'Agentic AI is where an LLM or multi-modal model is given a goal and equipped with the tools to reach that goal autonomously, using dynamic planning to incorporate changing contexts'. 

The true extent to which LLMs can reason and dynamically plan is an area of contention. Here at Zühlke, we've certainly seen them struggle with highly complex tasks, and they have limited capability to verify their own outputs. That said, the technology is advancing at a rapid pace, and we can expect these systems to grow increasingly capable of autonomous operations. 

How agentic systems use tools to complete goals 

Agentic AI is designed to integrate seamlessly with tools such as company applications, internal databases, and web applications to achieve its goals effectively. 

How does the LLM engage these tools? It outputs a well-defined piece of text called an 'action' (typically formatted in JSON). This action tells the underlying system (typically the backend) to use the tool in a certain way and return the result to the LLM. Example actions include sending a message to a specific individual, making adjustments within a database, retrieving information, calling an external API to trigger an action, or even running some programming code. 
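To make this concrete, an action emitted by the model might look like the JSON below, which the backend parses and dispatches to the matching tool. The action schema, tool name, and email address are all invented for this sketch; real systems define their own formats.

```python
import json

# A hypothetical action, as the LLM might emit it (format is illustrative)
raw_action = (
    '{"tool": "send_message", "args": {"recipient": "anna@example.com", '
    '"text": "Your invoice is missing a signature."}}'
)

def send_message(recipient, text):
    # In a real backend this would call a messaging API; here we just echo.
    return f"Message sent to {recipient}"

TOOLS = {"send_message": send_message}

# The backend parses the action and executes the named tool
action = json.loads(raw_action)
result = TOOLS[action["tool"]](**action["args"])
print(result)  # this result is then fed back into the LLM
```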


Above: a diagram showing the elements of a typical agentic AI system. An AI agent is given a goal description and tools description. It interacts with human stakeholders and tools (and additional AI agents as required) to pursue the goal. 

In general, any LLM can use tools in this way. It just needs a description of how to do so as part of the prompt – for example, via a few-shot learning framework like ReAct. Some LLMs are fine-tuned to be especially good at this and are better at recognising when to use a tool to achieve a goal.  

OpenAI calls this capability function calling and has supported it since June 2023. For this reason, agentic systems aren’t inherently new, but most businesses are only now starting to explore their value-driving use cases.  
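To illustrate the ReAct-style prompting mentioned above: the tool description is placed in the prompt, the model interleaves reasoning ('Thought') with tool use ('Action'), and the backend parses the 'Action' line. The model output below is hard-coded for illustration, and the `search` tool is invented for the example.

```python
import re

# Tool description given to the model as part of the prompt (illustrative)
TOOL_DESCRIPTION = """You can use the following tool:
search(query): search the company wiki and return the top result.
Respond with lines of the form:
Thought: <your reasoning>
Action: <tool call, e.g. search("holiday policy")>"""

# Hypothetical model output, stubbed for this sketch
model_output = (
    'Thought: I need the onboarding checklist.\n'
    'Action: search("onboarding checklist")'
)

# The backend extracts the tool name and argument from the Action line
match = re.search(r'Action:\s*(\w+)\("([^"]*)"\)', model_output)
tool_name, argument = match.group(1), match.group(2)
print(tool_name, argument)
```

A fine-tuned function-calling model returns structured arguments directly rather than free text, which makes this parsing step more robust.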

Agentic AI examples: process optimisation use cases 

Agentic systems have the potential to optimise key business processes significantly.  

This potential extends beyond simply automating workflows. Thanks to an LLM's ability to communicate effectively in natural language, agentic systems open the door to processes that combine the strengths of humans and machines to unlock new levels of efficiency and effectiveness.  

Think about the world of healthcare, for example. If a doctor were able to hand over manual administrative work to a suitable AI agent, they would gain much more time to attend to their patients. 

Here are some examples of how agentic systems can add value across a variety of business processes:   


Internal processes including HR & accounts 

Agentic systems can support many routine internal tasks such as payroll processing, invoice management, and more: 

  • Employee onboarding: An LLM assistant, for example, can help onboard new employees, providing them with required information, updating their profile for them, tracking and reminding them of necessary e-learning modules, and recommending colleagues they can connect with. 

  • Recruitment: An agent can act as an intelligent personal assistant, helping a recruitment team to find suitable interview appointments and requesting missing documents from candidates. 

  • Processing invoices: An agent could help with routine tasks like requesting missing invoice signatures. 

Check out the screenshot below of an agentic system we built here at Zühlke. The aim of this agent is to make it easier for employees to complete routine tasks, like updating their personal data, via Microsoft Teams, a tool we use in our daily work. 


Above: a screenshot of an agentic AI system that handles typical employee inquiries. The bot can be used in Microsoft Teams and double-checks system changes with the user. 

Customer support 

One of the most exciting use cases for agentic systems is automating key areas of customer service. Far beyond simply understanding a customer's words, an agentic system should be able to understand a customer's problem and then carry out the multi-step actions needed to help the customer achieve their goals. 

Key to handling inquiries effectively is their ability to ask questions, recognise intent, gather required information, run checks with internal systems, and start processes on request. From product recommendations to technical troubleshooting, the possibilities are wide-ranging. 

Here at Zühlke, we have first-hand experience of using agentic AI to optimise customer support processes. For example, we co-created a bot with a telecoms client, using information from internal systems to solve client issues like internet connection troubleshooting and SIM card (de-)activation more effectively. 

Industry vertical processes 

AI agents can also partially automate processes specific to particular industries – including complex, regulated environments like financial services. 

In insurance, for example, an LLM can augment the claims management process by classifying damage types, collecting missing information, and automating responses based on human decisions – properly documenting every step of the process. 

The same holds true in banking for loan applications, account management, financial research, and reporting. Other potential use cases could include managing investment portfolios and fraud identification. 

Beyond process optimisation 

As these examples show, agentic systems provide opportunities beyond process optimisation. Even in complex, regulated spaces, they can help to reduce overheads, improve accuracy, and enhance overall service quality. Other opportunities include: 

  • Combining an LLM with a code interpreter and data access allows it to automate simple BI tasks. It does this by analysing the data, summarising insights, and creating plots (check out our GitHub demo of an AI code interpreter for sensitive data).

  • Agents have the potential to greatly improve GenAI RAG (retrieval augmented generation) systems by moving from a one-shot approach to an active search. They can reformulate the query, summarise findings, and combine different modalities like text and databases.

  • Natural-language-based chat interfaces can provide a low-hurdle interface to complex software applications. This could help to increase user adoption and satisfaction with these applications.
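The agentic RAG pattern described above can be sketched as a retrieve-assess-reformulate loop. The retriever and the query rewriter below are stubs with invented content; a real system would query a vector store and use an LLM to reformulate.

```python
def retrieve(query):
    # Stub retriever: a real system would query a vector store or search index.
    corpus = {"sim card activation": "Activate via the customer portal."}
    return corpus.get(query.lower())

def reformulate(query):
    # Stub for an LLM rewriting a failed query; here we just hard-code it.
    return "sim card activation"

def agentic_search(query, max_attempts=3):
    """Active search: instead of the one-shot retrieval of a plain RAG
    system, retry with reformulated queries until something is found."""
    for _ in range(max_attempts):
        result = retrieve(query)
        if result is not None:
            return result
        query = reformulate(query)
    return None

print(agentic_search("How do I activate my new SIM card?"))
```

The difference from one-shot RAG is the retry loop: a miss triggers a reformulated query rather than an empty answer.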

Agentic risks, limitations, and remaining challenges 

It’s already possible to develop useful agentic systems based on LLMs’ current capabilities. But doing so requires careful business process analysis and ethical practices underpinned by a responsible AI framework. 

As these systems grow increasingly capable of autonomous operations with limited human oversight, they will pose additional risks and ethical considerations. 

Picture this: an LLM without any coded ethics needs to get a missing piece of information from a person, so it starts spamming that individual with repeated emails to reach its goal. Failing to get the input it needs, it then starts spamming a whole host of other individuals it has access to. 

As this example shows, it’s crucial to allocate tasks and process steps to humans and machine agents in a conscious and responsible way, with the right guardrails in place. 

How to use agentic systems safely 

Follow these best practices to mitigate the risks posed by agentic systems: 

  • Least privilege: Following the principle of 'least privilege' is especially important for agentic systems. For example, you might set read-only access for an LLM user within your database based on the required actions. Or you might set rate limits for actions like sending messages.

  • Confirmation requests: Where possible, ask for confirmation before executing certain actions. This confirmation request is made by the underlying system, not the AI agent itself. The screenshot we shared previously shows an example of this.

  • Traceability: Keep an easily accessible log of every action an AI agent has executed. These logs can be extended with reasoning output from the agent itself to put the actions into context.

  • Kill switch: Involved parties must be able to switch off the AI agent easily if unexpected behaviour is observed.
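Taken together, these safeguards can be sketched as a thin wrapper around tool execution. The class below is an illustrative pattern, not a production implementation: all names are invented, and it combines a per-tool rate limit, a human-confirmation hook, an audit log, and a kill switch.

```python
import time

class GuardedToolRunner:
    """Wraps tool execution with simple guardrails: a kill switch,
    a per-tool rate limit, optional human confirmation, and an audit log."""

    def __init__(self, tools, rate_limits, confirm=lambda action: True):
        self.tools = tools              # name -> callable
        self.rate_limits = rate_limits  # name -> max calls allowed
        self.confirm = confirm          # hook asking a human to approve
        self.call_counts = {}
        self.audit_log = []
        self.enabled = True             # kill switch

    def run(self, name, **args):
        if not self.enabled:
            raise RuntimeError("Agent has been switched off")
        count = self.call_counts.get(name, 0)
        if count >= self.rate_limits.get(name, 0):
            raise RuntimeError(f"Rate limit exceeded for {name}")
        if not self.confirm({"tool": name, "args": args}):
            self.audit_log.append((time.time(), name, args, "rejected"))
            return "Action rejected by user"
        self.call_counts[name] = count + 1
        result = self.tools[name](**args)
        self.audit_log.append((time.time(), name, args, "executed"))
        return result

runner = GuardedToolRunner(
    tools={"send_message": lambda recipient, text: f"sent to {recipient}"},
    rate_limits={"send_message": 2},
)
print(runner.run("send_message", recipient="anna@example.com", text="hi"))
```

In practice the `confirm` hook would surface a prompt to the user (as in the Teams screenshot earlier), and setting `enabled = False` halts all further actions immediately.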

We’d also recommend checking out OpenAI’s best practices for safeguarding and ensuring accountability in agents’ operations. 

Besides mitigating risks posed by AI agent failure modes, we also need to consider cybersecurity risks from malicious actors.  

For example, it’s crucial that a human stakeholder is not able to misuse an AI system for privilege escalation. This is where an individual convinces the LLM to run actions they don’t have the privilege to perform themselves – either directly in chat or via prompt injection attacks through third-party channels. 
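One simple defence is to check the requesting user's own permissions before executing any action the LLM proposes, so the model can never grant more access than the user already holds, regardless of what it was persuaded to attempt. A minimal sketch with invented role and tool names:

```python
# Hypothetical role -> allowed tools mapping (names are illustrative)
PERMISSIONS = {
    "employee": {"read_profile", "update_own_profile"},
    "hr_admin": {"read_profile", "update_own_profile", "update_any_profile"},
}

def execute_action(user_role, action):
    """Refuse any tool call the *user* is not entitled to, no matter what
    the LLM proposed (whether in-chat or via prompt injection)."""
    if action["tool"] not in PERMISSIONS.get(user_role, set()):
        return "Denied: insufficient privileges"
    return f"Executed {action['tool']}"

print(execute_action("employee", {"tool": "update_any_profile"}))
```

The enforcement lives in the backend, outside the model's control, which is what makes it robust against injected instructions.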

These considerations aside, agentic systems are not all that different from other AI systems when it comes to responsible use. Our responsible AI framework lays out the steps you can take to ideate, build, run, and scale safe, ethical, and sustainable AI solutions. 

Now is the time to start exploring use cases 

The rich potential of agentic LLMs is largely untapped, most likely because companies have tended to focus on simpler applications, such as GenAI RAG systems.

But the use cases these systems could support are hard to ignore – from optimising essential business processes to enhancing service quality. What’s more, it’s early days in the agentic AI story.

'As the technology develops further, it will support increasingly complex goals and action spaces, working with growing independence to realise advanced use cases.'  

Your business can proactively meet this opportunity by starting small, identifying the use cases you can ideate and test today. As with all AI experimentation, it’s essential to focus on viable, feasible, desirable use cases that deliver tangible value. And to ideate prototypes and proofs of concept in a transparent, responsible, and sustainable way based on a responsible AI framework. Autonomous systems bring additional risks and considerations, so be sure to factor these in from the outset of ideation. 

How we can help

Here at Zühlke, we’ve been working with new and emerging technologies for more than 50 years, turning their transformative potential into value-driving solutions for our clients. Speak to us today about how we can help you ideate, create, and scale the AI-augmented models, processes, and products you need to deliver meaningful impact.