Immediate injection and an expired area might have been used to focus on Salesforce’s Agentforce platform for knowledge theft.
The assault technique, dubbed ForcedLeak, was found by researchers at Noma Safety, an organization that lately raised $100 million for its AI agent safety platform.
Salesforce Agentforce allows companies to construct and deploy autonomous AI brokers throughout capabilities akin to gross sales, advertising, and commerce. These brokers act independently to finish multi-step duties with out fixed human intervention.
The ForcedLeak assault technique recognized by Noma researchers concerned Agentforce’s Net-to-Lead performance, which allows the creation of an internet type that exterior customers akin to convention attendees or people focused in a advertising marketing campaign can fill out to supply lead data. This data is saved into the shopper relationship administration (CRM) system.
The researchers found that attackers can abuse varieties created with the Net-to-Lead performance to submit specifically crafted data, which when processed by Agentforce brokers causes them to hold out numerous actions on the attacker’s behalf.
The potential influence was demonstrated by submitting a payload that included innocent directions alongside directions asking the AI agent to gather electronic mail addresses and add them to the parameters of a request going to a distant server.

When an worker asks Agentforce to course of the lead that features the malicious payload, the immediate injection triggers and the information saved within the CRM is collected and exfiltrated to the attacker’s server.
The assault had vital possibilities of remaining undetected as a result of Noma researchers found {that a} trusted Salesforce area had been left to run out. An attacker might have registered that area and used it for the server receiving the exfiltrated CRM knowledge.
After being notified, Salesforce regained management of the expired area and applied modifications to stop AI agent output from being despatched to untrusted domains.
“The safety panorama for immediate injection stays a fancy and evolving space, and we proceed to spend money on robust safety controls and work carefully with the analysis group to assist defend our clients as a lot of these points floor,” a Salesforce spokesperson instructed SecurityWeek.
Most of these AI assaults are usually not unusual. Researchers in current months demonstrated a number of theoretical assaults the place integration between AI assistants and enterprise instruments had been abused for knowledge theft.
*up to date with assertion from Salesforce
Associated: ChatGPT Focused in Server-Aspect Knowledge Theft Assault
Associated: ChatGPT Tricked Into Fixing CAPTCHAs
Associated: High 25 MCP Vulnerabilities Reveal How AI Brokers Can Be Exploited