AI Security in Practice: How to run Threat Modeling Workshops
Regular 50-minute session for SQLBits 2026
TL;DR
As GenAI moves from experiment to production, security cannot be an afterthought. This session empowers data professionals to lead the charge in securing LLMs without needing to be cybersecurity experts. Learn how to facilitate a practical Threat Modeling Workshop, a collaborative exercise to identify risks like prompt injection and data leakage, and seamlessly integrate these security checks into an AIOps strategy.
Session Details
Data professionals are under immense pressure to explore new Generative AI and LLM solutions. However, these new potential upsides come with significant security risks, and data teams frequently lack the vocabulary to articulate those risks to their business or security counterparts. This disconnect leaves solutions vulnerable to prompt injection, training data poisoning, and more.
You don’t need a background in cybersecurity, or even technology, to start taking practical steps to secure your model; you just need the right framework. This talk introduces Threat Modeling: a structured, collaborative session format designed to address this gap in your team.
In this practical session, we will demystify AI security. We will focus on how to facilitate a workshop that uncovers architectural flaws before these solutions go near production.
We will adapt standard frameworks (like STRIDE) specifically for Generative AI, giving you a repeatable process to identify "what can go wrong" and "what are we going to do about it."
Who Should Attend: Data Scientists, Data Engineers, and anyone with a passion for building better solutions for end users.
3 things you'll get out of this session
Articulate LLM vulnerabilities (such as the OWASP Top 10 for LLMs) in language that both data teams and security stakeholders understand.
Plan and execute a Threat Modeling session that can identify attack surfaces in GenAI applications.
Utilize a modified threat framework (e.g., STRIDE) to systematically categorize risks related to hallucinations, non-determinism, and third-party API dependencies.
Embed security into your strategic operating models (e.g. Data Governance/MLOps/AIOps) so that threat modeling becomes a continuous improvement process.
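To make the adapted framework concrete, a facilitator might capture STRIDE categories mapped to GenAI-specific threats as a simple checklist generator. This is a hypothetical sketch, not the session's official material; the specific threat examples shown are illustrative assumptions.

```python
# Hypothetical sketch: mapping the classic STRIDE categories to example
# GenAI/LLM threats as a workshop starting point. The threat examples
# are illustrative assumptions, not an official framework.

STRIDE_FOR_GENAI = {
    "Spoofing": ["Prompt injection impersonating a trusted system message"],
    "Tampering": ["Training data poisoning", "Manipulated RAG documents"],
    "Repudiation": ["Missing audit logs for prompts and completions"],
    "Information Disclosure": ["Sensitive data leakage via model outputs"],
    "Denial of Service": ["Token-flooding requests exhausting API quotas"],
    "Elevation of Privilege": ["Jailbreaks bypassing system-prompt guardrails"],
}

def workshop_checklist(component: str) -> list[str]:
    """Generate 'what can go wrong?' questions for one architecture component."""
    return [
        f"{component} / {category}: could this happen here? e.g. {example}"
        for category, examples in STRIDE_FOR_GENAI.items()
        for example in examples
    ]

# Walk each component of the architecture through every category.
for question in workshop_checklist("RAG retrieval layer"):
    print(question)
```

Running the checklist per component (retrieval layer, model endpoint, orchestration code) gives the workshop a repeatable structure for answering "what can go wrong" before moving on to "what are we going to do about it."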
Speakers
Scott Bell's other proposed sessions for 2026
AI for Developers, Not End Users: Master Agentic BI Development Workflows - 2026
Building Context, Not Vibes: Practical AI Augmented Data Engineering - 2026
Building Context, Not Vibes: Practical AI Augmented Data Engineering - Part 1 - 2026
Building Context, Not Vibes: Practical AI Augmented Data Engineering - Part 2 - 2026
Danger in Delegation: When “Helpful” Becomes Harmful - 2026
Optimizing Your Delta Lake: Beyond the Defaults - 2026
Scott Bell's previous sessions
Navigating Data Governance in the Age of Generative AI
In the rapidly evolving world of data analytics, the emergence of Large Language Models (LLMs) has sparked a debate: Are LLMs signaling the end of traditional data analytics? This session delves into the heart of this question, exploring the fundamental workings of LLMs and their transformative impact on the analytics landscape. Attendees will gain insights into the advantages and potential pitfalls of integrating LLMs into their data strategies. We'll discuss the innovative use cases LLMs unlock and emphasize the paramount importance of governance and lineage in harnessing their full potential. Whether you're intrigued by the brilliance of LLMs or wary of their implications, this session will equip you with a balanced perspective to navigate the future of data analytics.
Cosmos 101
Find out everything you need to know to get started with Azure Cosmos DB in 20 minutes or less.
Is HTAP the future?
Hybrid Transactional Analytical Processing (HTAP) systems solve the age-old problem of integrating operational processes with analytical capabilities within a single system. Find out what they are and how they deliver value.