Video unavailable
SQLBits 2026
Danger in Delegation: When “Helpful” Becomes Harmful
As the industry pivots from Chatbots to Agents, the threat landscape shifts from misinformation to unauthorized action. Danger in Delegation explores the dark side of autonomous AI, dissecting the "Lethal Trifecta" of agentic risks, UI redressing (clickjacking), and the catastrophic consequences of granting LLMs the power to traverse the web and execute system commands on our behalf.
Last year, in Danger in Dialogue, we explored how LLMs could be manipulated into saying the wrong things. This year, we face a far more critical reality: LLMs are now authorized to do the wrong things.
We are witnessing a mass migration from static information retrieval to dynamic Agentic workflows. We have moved beyond "Chat" and handed these models the keys to the kingdom, granting them the autonomy to browse the live web, manipulate user interfaces, and trigger backend processes.
This session delivers a unfiltered State of the Industry on the security posture of Agentic AI. We will move past theoretical prompt injection to expose the tangible dangers of delegation, where "helpful" assistants become unwitting insider threats.
Expect a showcase of the mechanics of modern interaction attacks, including:
The "Lethal Trifecta" of Agents: The convergence of excessive agency, indirect context injection, and fragile permission boundaries.
Agentic Clickjacking & UI Redressing: How attackers can overlay invisible layers to trick autonomous agents into clicking 'approve' on malicious actions users never see.
The Execution Risks: The specific fallout when read-only vulnerabilities morph into write-access catastrophes—from data exfiltration to irreversible financial transactions.
This is not a guide on how to patch these issues, because inherently that isn't possible. It is a report to raise awareness from the front lines on the specific risks inherent in the current generation of Agentic AI.
We are witnessing a mass migration from static information retrieval to dynamic Agentic workflows. We have moved beyond "Chat" and handed these models the keys to the kingdom, granting them the autonomy to browse the live web, manipulate user interfaces, and trigger backend processes.
This session delivers a unfiltered State of the Industry on the security posture of Agentic AI. We will move past theoretical prompt injection to expose the tangible dangers of delegation, where "helpful" assistants become unwitting insider threats.
Expect a showcase of the mechanics of modern interaction attacks, including:
The "Lethal Trifecta" of Agents: The convergence of excessive agency, indirect context injection, and fragile permission boundaries.
Agentic Clickjacking & UI Redressing: How attackers can overlay invisible layers to trick autonomous agents into clicking 'approve' on malicious actions users never see.
The Execution Risks: The specific fallout when read-only vulnerabilities morph into write-access catastrophes—from data exfiltration to irreversible financial transactions.
This is not a guide on how to patch these issues, because inherently that isn't possible. It is a report to raise awareness from the front lines on the specific risks inherent in the current generation of Agentic AI.