Tori Tompkins
Proposed Sessions for 2026
Retrieval-Augmented Generation (RAG) is revolutionizing the capabilities of Generative AI by addressing critical limitations such as knowledge cut-offs, hallucinations, and lack of domain specificity. By integrating external knowledge sources with LLMs, RAG ensures outputs are more accurate, dynamic, and contextually relevant than ever before. In this session, we’ll begin with why RAG is essential for building scalable, trustworthy AI systems before diving into advanced patterns like Modular RAG for adaptable designs, Graph RAG for structured data handling, and Voice RAG for audio-driven retrieval. Additionally, we’ll explore Corrective RAG, Branched RAG, and RAG-Fusion to tackle complex, multi-modal challenges. Whether you’re new to RAG or looking to refine your approach, this session will equip you with the tools and strategies to harness the full power of RAG workflows.
As organisations move from traditional machine learning to large language model (LLM) applications and AI agents, the operational landscape changes dramatically. While MLOps provided a foundation for deploying and managing ML models, the emergence of LLMs introduces new challenges in scale, architecture, governance, and observability. This session explores the critical differences between MLOps and LLMOps, highlighting why GenAI applications require dedicated tooling, processes, and design patterns. We’ll walk through the key components of a robust LLMOps pipeline, from data preparation, prompt management, and fine-tuning, to retrieval-augmented generation (RAG), evaluation, monitoring, and cost optimisation. Real-world examples and architectural patterns will demonstrate how organisations are evolving their ML infrastructure to meet the demands of production-grade LLM systems. By the end of this session, you’ll understand what it takes to operationalise GenAI at scale and why extending your MLOps stack simply isn’t enough.
Databricks now offers two paths for building enterprise AI agents: AgentBricks and Mosaic AI. Join Tori and Gavi, two AI MVPs, as they compare their capabilities, architectures, and ideal use cases. We’ll explore how AgentBricks simplifies agent creation with automated evaluation, synthetic data, and low‑code workflows, while Mosaic AI provides the broader infrastructure for model governance, routing, observability, and production‑grade agent systems. Attendees will learn when to use each, how they work together, and how to build scalable agentic applications across the Databricks platform.
AgentBricks is a new framework for designing, testing, and deploying production-grade AI agents natively on the Databricks Lakehouse. This session introduces the architecture behind AgentBricks and shows how it enables teams to create reliable, auditable, and enterprise-ready agents. We’ll explore how AgentBricks leverages Unity Catalog governance, Lakehouse data pipelines, and Model Serving to orchestrate multi-step agent workflows with full observability.
Databricks’ new Lakebase-powered Online Feature Stores provide scalable, low‑latency feature serving for real-time ML and GenAI. This session covers the architecture and key differences from existing feature stores, with demos of publishing, streaming, and serving features.