22-25 April 2026

Mastering Direct Lake

Full day training session for SQLBits 2026

TL;DR

In this immersive, hands-on workshop, we'll cover every facet of Direct Lake, from getting the basics right to advanced modeling and performance tuning, turning you into a pro Direct Lake modeler.

Session Details

This training day is for anyone who wants to master all aspects of Direct Lake. Direct Lake revolutionizes the way we do data modeling, but also comes with its own set of limitations, quirks, and principles to understand.

Mastering Direct Lake is a key skill that sets you apart from other data modelers, enabling you to blend Import-like query speed with constantly updated data: performant, flexible, and fast, even for very large tables.

In this immersive, hands-on workshop, we'll cover every facet of Direct Lake, from getting the basics right to advanced modeling and performance tuning, turning you into a pro Direct Lake modeler along the way.

Modules:

1. Getting started with Direct Lake
- Direct Lake prerequisites
- Variations and use cases for Direct Lake
- Creating a Direct Lake model

2. Data prep for Direct Lake
- Examining Parquet, Delta, and compression engines
- Pushing transformations upstream
- Analyzing Delta Lake for optimal Direct Lake performance

3. From basics to advanced data modeling with Direct Lake
- Data modeling best practices
- Massive data volumes in Direct Lake
- Composite models (Import + Direct Lake)

4. Performance tuning for Direct Lake
- Transcoding
- Cold/hot cache management
- Framing, partitioning, and incremental updates
- Fine-tuning table performance

5. Security for Direct Lake
- Row-level security for Direct Lake
- Lakehouse security implications for Direct Lake
- Setting up security in practice

6. Migration pathways
- From Import to Direct Lake
- From Direct Lake on SQL to Direct Lake on OneLake

3 things you'll get out of this session

- Understand the full scope of Direct Lake
- Practice developing and fine-tuning Direct Lake models
- Learn about migration and security implications

Previous experience recommended

A solid, intermediate (or higher) understanding of data modeling. You should already be comfortable with concepts such as facts and dimensions, star schemas, grain, and basic modeling trade-offs. This session builds on that foundation rather than introducing the basics.

Access to a Microsoft Fabric capacity (recommended, not mandatory). Having access to a Fabric capacity will allow you to follow along more closely with real examples and scenarios. If you don't have access, that's okay: we'll compensate with explanations, screenshots, and walkthroughs so you can still fully benefit from the session.

A basic understanding of lakehouses. You should have a basic understanding of what a lakehouse is, how data is typically stored and organized within it, and how it fits into a modern analytics architecture.

Speakers

Mathias Halkjaer

fluxbi.com