22-25 April 2026

Practical lessons in optimizing Data Engineering with Spark - Part 2

Proposed session for SQLBits 2026

TL;DR

Part 2: Optimizing Spark in large-scale Fabric deployments. Learn to maximize execution efficiency through Spark Job Definitions, Livy API, and Native Execution Engine. Demo-centric session showcasing real-world trade-offs and optimization strategies.

Session Details

Based on Luke's experience working with some of the largest Fabric deployments, this session walks through practical tips for optimizing data-engineering-focused Fabric deployments. The session is very demo-centric, providing real-world examples of the trade-offs that arise and showing how simple changes can have a large impact - especially as a deployment scales.

In part 2 we focus on optimizing Spark, and in particular on:
1 - orchestrating for scale and minimizing start-up time
2 - walking through and demoing Spark Job Definitions and the Livy API as ways to optimize for specific developer-centric experiences
3 - explaining how notebookutils can be used to unlock additional scenarios
4 - evaluating how the Native Execution Engine works, and when it will aid your workloads
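To make points 2 and 4 above concrete, here is a minimal, hypothetical sketch of a batch submission payload in the shape of the open-source Apache Livy `POST /batches` body, which the Fabric Livy API resembles. The file path, arguments, and the `spark.native.enabled` conf key are assumptions for illustration - verify the exact field and setting names against the Fabric documentation before use.

```python
# Sketch of a Livy-style batch payload for running a PySpark job,
# opting in to the Native Execution Engine via Spark conf.
# Field names follow the Apache Livy batch API; the conf key for the
# Native Execution Engine is an assumption to confirm in your tenant.
import json


def build_batch_payload(entry_file: str, args: list[str]) -> dict:
    """Assemble a Livy batch payload for a PySpark entry script."""
    return {
        "file": entry_file,   # main PySpark script (an abfss:// path in practice)
        "args": args,         # command-line arguments passed to the script
        "conf": {
            # Assumed setting name for enabling the Native Execution Engine.
            "spark.native.enabled": "true",
        },
    }


payload = build_batch_payload("jobs/etl_main.py", ["--date", "2026-04-22"])
print(json.dumps(payload, indent=2))
```

In a real deployment this payload would be POSTed to the workspace's Livy batch endpoint with an Entra ID bearer token; the sketch stops short of the HTTP call since endpoints and auth are environment-specific.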

3 things you'll get out of this session

1 - a detailed understanding of the different ways to run Spark and their trade-offs
2 - knowledge of best practices for orchestrating Spark at scale for performance
3 - an understanding of how the Native Execution Engine works, when you'd want to use it, and when you might need to be careful