Manipulating telemetry data can lead AIOps to become AI Failures, according to recent research findings
A new study titled "When AIOps Become 'AI Oops': Subverting LLM-driven IT Operations via Telemetry Manipulation" has highlighted a significant vulnerability in AI-based IT operations (AIOps) systems. Researchers from RSAC Labs and George Mason University have found that these systems can be attacked using poisoned telemetry data.
Poisoned Telemetry Attacks
These attacks involve injecting false analytics data or "poisoned telemetry" into AIOps systems. This can deceive the AI-driven tools into performing harmful operations, such as downgrading software to vulnerable versions or executing unauthorized actions.
In tests using SocialNet and HotelReservation applications, the attack succeeded in 89.2% of attempts. Evaluations with OpenAI’s GPT-4o and GPT-4.1 models showed susceptibility rates of 97% and 82%, respectively, with GPT-4.1 being somewhat better at detecting inconsistencies.
Current Defense Mechanisms
Existing defense mechanisms against poisoned telemetry attacks are not fully effective. The paper highlights the need to evaluate and enhance these defenses to withstand telemetry manipulation.
Current defenses include prompt injection defenses, real-time monitoring, logging, sandboxing, and tools like Kong’s AI Gateway. However, these defenses have limitations. For instance, AI agents are non-deterministic, making their actions unpredictable. Continuous monitoring is crucial to ensure security.
Security Gaps
The study also points out several security gaps. The lack of deterministic behavior in AI agents, the evolving threat landscape, and the need for robust security protocols like anonymization, encryption, and least-privilege access rules are some of the areas that require attention.
The researchers caution that while their proposed defense, AIOpsShield, can sanitize harmful telemetry data, it may not defend against attackers with additional capabilities, such as the ability to poison other sources of the agent's input or compromise the supply chain.
The Attack and Its Implications
The goal of the "reward hacking" technique is to manipulate an AIOps agent into executing a malicious action under the guise of remediation. To create malicious telemetry data, the researchers use a fuzzer that enumerates the available endpoints within the target application. The application logs this malicious telemetry data, which the AIOps agent incorporates as part of its input during log analysis.
The attack does not take a long time to mount, but the exact amount of effort depends on the system/model being attacked, the implementation, and the way the model interprets logs.
The paper suggests that attackers could use the fuzzer to produce telemetry output that could see AIOps tools produce unpleasant results.
In summary, while there are some defense mechanisms in place, the effectiveness of current defenses against poisoned telemetry attacks is limited, and ongoing research and development are necessary to enhance security against these evolving threats. AIOpsShield, planned to be released as an open-source project, is one such initiative aimed at strengthening the security of AIOps systems.
Read also:
- Health Risk Warning: The Harmful Effects of Sitting Too Much, Exploring Sedentary Lifestyles
- Telecommunications company issues warning about deceptive practices
- Competition heated up: Google Pixel 10 against Samsung Galaxy S25 - a pivotal moment for Google's smartphone dominance
- Advancement from Analog to Digital: A History of Audio Cassettes