22-25 April 2026

AI Security in Practice: How to run Threat Modeling Workshops

Regular 50-minute session for SQLBits 2026

TL;DR

As GenAI moves from experiment to production, security cannot be an afterthought. This session empowers data professionals to lead the charge in securing LLMs without needing to be cybersecurity experts. Learn how to facilitate a practical Threat Modeling Workshop, a collaborative exercise to identify risks like prompt injection and data leakage, and seamlessly integrate these security checks into an AIOps strategy.

Session Details

Data professionals are under immense pressure to explore new Generative AI and LLM solutions. However, these potential upsides come with significant security risks, and data teams frequently lack the vocabulary to articulate these specific risks to their business or security counterparts. This disconnect leaves solutions vulnerable to prompt injection, training data poisoning, and more.

You don’t need a background in cybersecurity or even technology to start taking practical steps to secure your model; you just need the right framework. This talk introduces Threat Modeling techniques that offer structured, collaborative sessions designed to address this gap in your team.

In this practical session, we will demystify AI security. We will focus on how to facilitate a workshop that uncovers architectural flaws before these solutions go near production.

We will adapt standard frameworks (like STRIDE) specifically for Generative AI, giving you a repeatable process to identify "what can go wrong" and "what are we going to do about it."

Who Should Attend: Data Scientists, Data Engineers, and anyone with a passion for building better solutions for end users.

4 things you'll get out of this session

Articulate LLM vulnerabilities (such as the OWASP Top 10 for LLMs) in language that both data teams and security stakeholders understand.

Plan and execute a Threat Modeling session that identifies attack surfaces in GenAI applications.

Utilize a modified threat framework (e.g., STRIDE) to systematically categorize risks related to hallucinations, non-determinism, and third-party API dependencies.

Embed security into your strategic operating models (e.g. Data Governance/MLOps/AIOps) to make threat modeling a continuous improvement process.

Speakers

Scott Bell

myyearindata.com