Danger in Delegation: When “Helpful” Becomes Harmful
Proposed session for SQLBits 2026TL; DR
As the industry pivots from Chatbots to Agents, the threat landscape shifts from misinformation to unauthorized action. Danger in Delegation explores the dark side of autonomous AI, dissecting the "Lethal Trifecta" of agentic risks, UI redressing (clickjacking), and the catastrophic consequences of granting LLMs the power to traverse the web and execute system commands on our behalf.
Session Details
Last year, in Danger in Dialogue, we explored how LLMs could be manipulated into saying the wrong things. This year, we face a far more critical reality: LLMs are now authorized to do the wrong things.
We are witnessing a mass migration from static information retrieval to dynamic Agentic workflows. We have moved beyond "Chat" and handed these models the keys to the kingdom, granting them the autonomy to browse the live web, manipulate user interfaces, and trigger backend processes.
This session delivers a unfiltered State of the Industry on the security posture of Agentic AI. We will move past theoretical prompt injection to expose the tangible dangers of delegation, where "helpful" assistants become unwitting insider threats.
Expect a showcase of the mechanics of modern interaction attacks, including:
The "Lethal Trifecta" of Agents: The convergence of excessive agency, indirect context injection, and fragile permission boundaries.
Agentic Clickjacking & UI Redressing: How attackers can overlay invisible layers to trick autonomous agents into clicking 'approve' on malicious actions users never see.
The Execution Risks: The specific fallout when read-only vulnerabilities morph into write-access catastrophes—from data exfiltration to irreversible financial transactions.
This is not a guide on how to patch these issues, because inherently that isn't possible. It is a report to raise awareness from the front lines on the specific risks inherent in the current generation of Agentic AI.
We are witnessing a mass migration from static information retrieval to dynamic Agentic workflows. We have moved beyond "Chat" and handed these models the keys to the kingdom, granting them the autonomy to browse the live web, manipulate user interfaces, and trigger backend processes.
This session delivers a unfiltered State of the Industry on the security posture of Agentic AI. We will move past theoretical prompt injection to expose the tangible dangers of delegation, where "helpful" assistants become unwitting insider threats.
Expect a showcase of the mechanics of modern interaction attacks, including:
The "Lethal Trifecta" of Agents: The convergence of excessive agency, indirect context injection, and fragile permission boundaries.
Agentic Clickjacking & UI Redressing: How attackers can overlay invisible layers to trick autonomous agents into clicking 'approve' on malicious actions users never see.
The Execution Risks: The specific fallout when read-only vulnerabilities morph into write-access catastrophes—from data exfiltration to irreversible financial transactions.
This is not a guide on how to patch these issues, because inherently that isn't possible. It is a report to raise awareness from the front lines on the specific risks inherent in the current generation of Agentic AI.
3 things you'll get out of this session
Gain a realistic view of current agentic vulnerabilities in the wild, independent of marketing hype or unproven defensive tools.
Understand the fundamental security differences between conversational LLMs (Dialogue) and goal-oriented Agents (Delegation).
Comprehend the three critical converging risk factors that make AI Agents uniquely dangerous in enterprise environments.
Speakers
Scott Bell's other proposed sessions for 2026
AI Security in Practice: How to run Threat Modeling Workshops - 2026
Scott Bell's previous sessions
Navigating Data Governance in the Age of Generative AI
In the rapidly evolving world of data analytics, the emergence of Large Language Models (LLMs) has sparked a debate: Are LLMs signaling the end of traditional data analytics? This session delves into the heart of this question, exploring the fundamental workings of LLMs and their transformative impact on the analytics landscape. Attendees will gain insights into the advantages and potential pitfalls of integrating LLMs into their data strategies. We'll discuss the innovative use cases LLMs unlock and emphasize the paramount importance of governance and lineage in harnessing their full potential. Whether you're intrigued by the brilliance of LLMs or wary of their implications, this session will equip you with a balanced perspective to navigate the future of data analytics.
Cosmos 101
Find out everything you need to know to get started with Azure Cosmos DB in 20 minutes or less.
Is HTAP the future?
Hybrid Transactional Analytical Processing solve the age old problem of integrating operational processes with analytical capabilities within a single system. Find out what they're and how they deliver value