
Getting started with AI agents (part 2): Autonomy, safeguards and pitfalls



In the first installment, I outlined key strategies for leveraging AI agents to improve enterprise efficiency. I explained how, unlike standalone AI models, agents iteratively refine tasks using context and tools to enhance outcomes such as code generation. I also discussed how multi-agent systems foster communication across departments, creating a unified user experience and driving productivity, resilience and faster upgrades.

Success in building these systems hinges on mapping roles and workflows, as well as establishing safeguards such as human oversight and error checks to ensure safe operation. Let’s dive into these critical elements.

Safeguards and autonomy

Agents imply autonomy, so various safeguards must be built into an agent within a multi-agent system to reduce errors, waste, legal exposure or harm when agents are operating autonomously. Applying all of these safeguards to every agent may be overkill and pose a resource challenge, but I highly recommend considering each agent in the system and consciously deciding which of these safeguards it needs. An agent should not be allowed to continue operating autonomously if any one of the conditions described below is met.

Explicitly defined human intervention conditions

A set of predefined rules determines the conditions under which a human needs to confirm some agent behavior; triggering any one of them pauses the agent until that confirmation arrives. These rules should be defined on a case-by-case basis and can be declared in the agent’s system prompt — or, in more critical use cases, be enforced using deterministic code external to the agent. One such rule, in the case of a purchasing agent, would be: “All purchasing should first be verified and confirmed by a human. Call your ‘check_with_human’ function and do not proceed until it returns a value.”
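
To make the external, deterministic enforcement concrete, here is a minimal sketch in Python. The agent object, its propose_action and execute methods, and the “purchase” action type are hypothetical placeholders, not any particular framework’s API:

```python
def check_with_human(action: dict) -> bool:
    """Block until a human operator confirms or rejects the proposed action."""
    answer = input(f"Agent requests approval for: {action}. Approve? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_safeguard(agent, request: str):
    # The agent proposes an action; a deterministic rule outside the agent decides
    # whether human confirmation is required before the action is executed.
    action = agent.propose_action(request)    # hypothetical agent API
    if action.get("type") == "purchase":      # predefined human-intervention rule
        if not check_with_human(action):
            return {"status": "rejected_by_human", "action": action}
    return agent.execute(action)              # hypothetical agent API
```

Because the check lives outside the agent, it cannot be talked around by a cleverly phrased prompt; the agent simply never gets to execute the action without the confirmation step.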

Safeguard agents

An agent can be paired with a safeguard agent whose role is to check for risky, unethical or noncompliant behavior. The agent can be forced to always check all, or certain elements, of its behavior against the safeguard agent, and not proceed unless the safeguard agent returns a go-ahead.
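
A rough sketch of this pairing, assuming a generic call_llm() helper and a simple APPROVE/REJECT protocol between the two agents (both the helper and the protocol are illustrative assumptions, not a specific product’s interface):

```python
def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for whatever LLM client your stack uses."""
    raise NotImplementedError


SAFEGUARD_PROMPT = (
    "You review actions proposed by other agents for risky, unethical or "
    "noncompliant behavior. Reply with exactly APPROVE or REJECT plus a short reason."
)


def run_with_safeguard(worker_prompt: str, task: str):
    proposed = call_llm(worker_prompt, task)
    verdict = call_llm(SAFEGUARD_PROMPT, f"Task: {task}\nProposed action: {proposed}")
    if verdict.strip().upper().startswith("APPROVE"):
        return proposed          # go-ahead received: proceed with the action
    return None                  # blocked: escalate, retry or hand to a human
```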

Uncertainty

Our lab recently published a paper on a technique that can provide a measure of uncertainty for what a large language model (LLM) generates. Given the propensity of LLMs to confabulate (commonly known as hallucinations), giving preference to a more certain output can make an agent much more reliable. Here, too, there is a cost to be paid: Assessing uncertainty requires us to generate multiple outputs for the same request so that we can rank-order them by certainty and choose the behavior with the least uncertainty. That makes the system slower and increases costs, so it should be reserved for the more critical agents within the system.
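
As a rough illustration only (not the specific technique from the paper), one simple way to approximate this is to sample the model several times and treat agreement between samples as a confidence proxy; sample_llm() below is a placeholder for any non-deterministic (temperature > 0) LLM call:

```python
from collections import Counter


def sample_llm(prompt: str) -> str:
    """Placeholder for a non-deterministic LLM call."""
    raise NotImplementedError


def answer_with_confidence(prompt: str, n_samples: int = 5):
    # Generate several candidate outputs and use agreement between them
    # as a rough confidence score; return the most frequent answer.
    samples = [sample_llm(prompt) for _ in range(n_samples)]
    top_answer, count = Counter(samples).most_common(1)[0]
    return top_answer, count / n_samples


# Example policy (hypothetical threshold): only act autonomously when confident enough.
# answer, confidence = answer_with_confidence("Should we reorder part #1234?")
# if confidence < 0.8:
#     ...escalate to a human instead of acting...
```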

Disengage button

There may be times when we need to stop all autonomous agent-based processes. This could be because we need consistency, or we’ve detected behavior in the system that needs to stop while we figure out what is wrong and how to fix it. For more critical workflows and processes, it is important that this disengagement does not result in all processes stopping or becoming fully manual, so it is recommended that a deterministic fallback mode of operation be provisioned.
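
A minimal sketch of how such a disengage switch might be wired in, with a deterministic fallback path; the flag, the agent workflow and the rule-based fallback are all illustrative placeholders:

```python
AGENTS_DISENGAGED = False  # in practice, a shared feature flag or config value


def handle_request(request: str):
    if AGENTS_DISENGAGED:
        # Pre-provisioned deterministic path: no LLMs, no autonomy.
        return deterministic_fallback(request)
    return run_agent_workflow(request)


def deterministic_fallback(request: str) -> dict:
    # Hard-coded rules or a queue for manual handling keep the workflow moving.
    return {"status": "queued_for_manual_processing", "request": request}


def run_agent_workflow(request: str):
    raise NotImplementedError  # placeholder for the normal multi-agent workflow
```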

Agent-generated work orders

Not all agents within an agent network need to be fully integrated into apps and APIs. This can take a while and a few iterations to get right. My recommendation is to add a generic placeholder tool to agents (typically leaf nodes in the network) that simply issues a report or a work order containing suggested actions to be taken manually on behalf of the agent. This is a great way to bootstrap and operationalize your agent network in an agile manner.
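
A minimal sketch of such a placeholder tool, assuming a simple JSON-lines file as the work-order queue (the schema and the file-based storage are illustrative, not prescriptive):

```python
import datetime
import json
import uuid


def issue_work_order(agent_name: str, suggested_action: str, details: dict) -> str:
    """Record an action the agent would have taken, for a human to execute manually."""
    order = {
        "id": str(uuid.uuid4()),
        "created_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_name,
        "suggested_action": suggested_action,
        "details": details,
        "status": "pending_manual_execution",
    }
    with open("work_orders.jsonl", "a") as f:
        f.write(json.dumps(order) + "\n")
    return order["id"]
```

The same function can be registered as a tool on any leaf agent and later swapped for a real API integration without changing the agent’s instructions.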

Testing

With LLM-based agents, we are gaining robustness at the cost of consistency. Also, given the opaque nature of LLMs, we are dealing with black-box nodes in a workflow. This means that we need a different testing regime for agent-based systems than that used in traditional software. The good news, however, is that we are used to testing such systems, as we have been operating human-driven organizations and workflows since the dawn of industrialization.

While the examples I showed above have a single entry point, all agents in a multi-agent system have an LLM as their brain, and so any of them can act as the entry point for the system. We should use divide and conquer, and first test subsets of the system by starting from various nodes within the hierarchy.

We can also employ generative AI to come up with test cases that we can run against the network to analyze its behavior and push it to reveal its weaknesses.
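
For illustration, here is a sketch of asking a generator model to produce test cases and then replaying them from any node in the network; call_llm() and run_agent() stand in for whatever client and entry points your own system exposes:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for your LLM client."""
    raise NotImplementedError


def run_agent(node_name: str, query: str) -> str:
    """Placeholder: drive the agent network starting from a chosen node."""
    raise NotImplementedError


def generate_test_cases(role_description: str, n: int = 20) -> list[str]:
    prompt = (
        f"You are testing an agent with this role:\n{role_description}\n"
        f"Write {n} diverse user requests, one per line, including edge cases "
        "and requests the agent should refuse."
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]


def probe_node(node_name: str, role_description: str) -> None:
    # Start testing from an inner node rather than the system's usual entry point.
    for case in generate_test_cases(role_description):
        print(node_name, "|", case, "->", run_agent(node_name, case))
```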

Finally, I’m a big advocate for sandboxing. Such systems should be launched at a smaller scale within a controlled and safe environment first, before gradually being rolled out to replace existing workflows.

Fine-tuning

A common misconception with gen AI is that it gets better the more you use it. This is obviously wrong: LLMs are pre-trained and do not learn from use. Having said this, they can be fine-tuned to bias their behavior in various ways. Once a multi-agent system has been devised, we may choose to improve its behavior by taking the logs from each agent and labeling our preferences to build a fine-tuning corpus.
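
A minimal sketch of what assembling such a corpus might look like, assuming each log line is a JSON record with the agent’s system prompt, input, output and a human “preferred” label; the log format is an assumption, and the output follows a generic chat-style JSONL layout rather than any specific vendor’s:

```python
import json


def build_finetune_corpus(log_path: str, out_path: str) -> None:
    """Convert human-approved agent interactions into chat-style JSONL examples."""
    with open(log_path) as logs, open(out_path, "w") as out:
        for line in logs:
            record = json.loads(line)          # one agent interaction per line
            if not record.get("preferred"):    # keep only behavior we want more of
                continue
            example = {
                "messages": [
                    {"role": "system", "content": record["system_prompt"]},
                    {"role": "user", "content": record["input"]},
                    {"role": "assistant", "content": record["output"]},
                ]
            }
            out.write(json.dumps(example) + "\n")
```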

Pitfalls

Multi-agent systems can fall into a tailspin, which means that occasionally a query might never terminate, with agents perpetually talking to each other. This requires some form of timeout mechanism. For example, we can check the history of communications for the same query, and if it is growing too large or we detect repetitious behavior, we can terminate the flow and start over.
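
For example, a minimal sketch of such a guard, with a turn budget and a crude repetition check; next_agent_step() is a placeholder for the orchestration layer:

```python
MAX_TURNS = 30  # illustrative budget for agent-to-agent messages per query


def run_query(query: str) -> list[str]:
    history: list[str] = []
    for _ in range(MAX_TURNS):
        message = next_agent_step(query, history)  # placeholder orchestrator call
        if message is None:                        # agents signalled completion
            return history
        if history.count(message) >= 2:            # crude repetition detector
            raise RuntimeError("Repetitious agent chatter detected; aborting query")
        history.append(message)
    raise TimeoutError("Query exceeded its turn budget; restart or escalate")


def next_agent_step(query: str, history: list[str]):
    raise NotImplementedError
```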

Another problem that can occur is a phenomenon I will call overloading: Expecting too much of a single agent. The current state-of-the-art for LLMs does not allow us to hand agents long and detailed instructions and expect them to follow them all, all the time. Also, did I mention these systems can be inconsistent?

A mitigation for these situations is what I call granularization: Breaking agents up into multiple connected agents. This reduces the load on each agent and makes the agents more consistent in their behavior and less likely to fall into a tailspin. (An interesting area of research that our lab is undertaking is in automating the process of granularization.)

Another common problem in the way multi-agent systems are designed is the tendency to define a coordinator agent that calls different agents to complete a task. This introduces a single point of failure that can result in a rather complex set of roles and responsibilities. My suggestion in these cases is to consider the workflow as a pipeline, with one agent completing part of the work, then handing it off to the next.
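
A minimal sketch of the pipeline shape, where each stage is an agent (or a plain function) and the output of one becomes the input of the next; the agents named in the comment are hypothetical:

```python
def pipeline(stages, request: str):
    """Run a request through a sequence of agents, each handing off to the next."""
    result = request
    for stage in stages:        # each stage is an agent (or a plain function)
        result = stage(result)  # the output of one stage is the input of the next
    return result


# Example wiring with hypothetical agents:
# quote = pipeline([intake_agent, pricing_agent, approval_agent], customer_request)
```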

Multi-agent systems also have the tendency to pass the context down the chain to other agents. This can overload those other agents, can confuse them, and is often unnecessary. I suggest allowing agents to keep their own context and resetting context when we know we are dealing with a new request (sort of like how sessions work for websites).
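
A minimal sketch of session-like context handling, where each agent keeps its own history and clears it when a new request begins; this illustrates the pattern rather than any particular framework’s API:

```python
class AgentSession:
    """Per-agent context that is reset when a new request begins."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.history: list[dict] = []

    def new_request(self) -> None:
        # Start fresh, much like a new session on a website.
        self.history.clear()

    def add_turn(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})
```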

Finally, it is important to note that there’s a relatively high bar for the capabilities of the LLM used as the brain of an agent. Smaller LLMs may need a lot of prompt engineering or fine-tuning to fulfill requests. The good news is that there are already several commercial and open-source LLMs, albeit relatively large ones, that pass the bar.

This means that cost and speed need to be an important consideration when building a multi-agent system at scale. Also, expectations should be set that these systems, while faster than humans, will not be as fast as the software systems we are used to.

Babak Hodjat is CTO for AI at Cognizant.
