Dive Brief:
- Salesforce has expanded xLAM, its family of large action models, a set of language models designed to predict and execute next actions and power AI agents, the software vendor said Thursday. The company first introduced the set in September.
- The new additions include models tailored for on-device deployment, GPU-constrained environments and industrial applications. Salesforce said the expansion broadens accessibility and gives customers more flexibility in where the models can be deployed.
- Large action models are smaller than typical LLMs, offering lower cost, faster inference and improved sustainability, according to Salesforce. The vendor also introduced support for multi-turn tool calling, which lets agents ask clarifying questions and adapt their actions as new information becomes available (see the sketch below).
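The multi-turn pattern is easiest to see in code. The sketch below is a hypothetical illustration, not Salesforce's xLAM interface: a toy agent loop in which a scripted stand-in model either asks a clarifying question when a detail is missing or emits a tool call, with tool results fed back so the model's next action can adapt. Every name here (fake_model, lookup_order, the tool_call field) is an assumption made for illustration.

```python
# Minimal sketch of a multi-turn tool-calling loop (illustrative only).
# The model, tool, and message format are hypothetical stand-ins,
# not Salesforce's xLAM API.
import json

def lookup_order(order_id: str) -> str:
    """Hypothetical tool: returns order status for a given order ID."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = {"lookup_order": lookup_order}

def fake_model(messages: list[dict]) -> dict:
    """Stand-in for a large action model. A real model would decide, per
    turn, whether to call a tool, ask a clarifying question, or answer.
    Here the behavior is scripted so the loop structure is visible."""
    last = messages[-1]
    if last["role"] == "user" and "order" in last["content"] and "#" not in last["content"]:
        # Missing detail: ask a clarifying question instead of guessing.
        return {"role": "assistant", "content": "Which order number?"}
    if last["role"] == "user" and "#" in last["content"]:
        # Enough detail now: request a tool call.
        order_id = last["content"].split("#")[-1].strip()
        return {"role": "assistant",
                "tool_call": {"name": "lookup_order",
                              "arguments": {"order_id": order_id}}}
    if last["role"] == "tool":
        # Adapt the final answer to the tool result that came back.
        status = json.loads(last["content"])["status"]
        return {"role": "assistant", "content": f"Your order has {status}."}
    return {"role": "assistant", "content": "How can I help?"}

def run_agent(user_turns: list[str]) -> None:
    messages: list[dict] = []
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = fake_model(messages)
        # If the model requested a tool, execute it and feed the result
        # back so the model can choose its next action.
        while "tool_call" in reply:
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
            reply = fake_model(messages)
        messages.append(reply)
        print(f"user: {turn}\nagent: {reply['content']}")

run_agent(["Where is my order?", "It's order #A123"])
```

Running the sketch shows the two behaviors the article describes: the first user turn lacks an order number, so the agent asks a clarifying question rather than acting; the second turn supplies it, triggering a tool call whose result shapes the final answer.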
Dive Insight:
Tech vendors are working to make AI agents enterprise-ready, lowering barriers to entry, enhancing capabilities and offering guardrails as the technology catches on. But there’s still room for improvement.
“Currently, LLM intelligence is jagged,” Silvio Savarese, chief scientist at Salesforce AI Research, said during a media roundtable Wednesday. “Its performance is extremely unstable in terms of consistency. This is something that we don’t really want deployed in the enterprise.”
Today’s AI agents and generative AI tools are fallible. Enterprise adopters point to biased or insufficiently detailed AI-generated responses as common mistakes. Developers using AI-powered coding assistants also run into security flaws in the code the tools produce.
“We’ve got to find a way to build those champion agents that are highly capable and highly consistent,” Shelby Heinecke, senior AI research manager at Salesforce, said during the call. “We have our work cut out for us, but we’re making strong progress.”
In recent months, the vendor has developed several benchmarks and frameworks to better test and evaluate models and agents. Last month, the company also introduced what it calls SFR-Guard models, built to defend against prompt injection attacks and improve toxicity detection.
Security is a pressing concern for enterprises exploring agents. IT leaders cited stronger data privacy and security features as key changes they’d like to see in existing agentic tools, according to a Cloudera report published in April.
The stakes for getting security right are high: Gartner predicts that 1 in 4 enterprise breaches will be traced back to AI agent abuse within the next three years.