22-25 April 2026

A Guide to Community Tools for Data Engineering on Azure

2022

TL;DR

This session looks at some of the great tools produced by the data community that can help speed up building a data engineering project on Azure, provide great examples of how to build reliable components, and show simple ways to solve common data engineering problems.

Session Details

This session looks at some of the great tools produced by the data community that can help speed up building a data engineering project on Azure. These can really help people both new to and experienced with data engineering in Azure, as they provide great examples of how to build solid, reliable components and show simple ways to solve common problems. You'll hopefully learn about a project or tool that could save you some time, or be inspired to contribute to an existing tool and help the community grow.

3 things you'll get out of this session

Speakers

Niall Langley

niall-langley.me

Niall Langley's previous sessions

Better ETL with Managed Airflow in ADF
Building complex data workflows using Azure Data Factory can get a little clunky, especially as your orchestration needs get more complex. Recently another option became available: Managed Airflow in ADF, which brings Apache Airflow to Azure as a PaaS service. In this session we discover what Airflow is, why we might want to use it for ETL orchestration, and see how it works with lots of demos.
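The heart of what Airflow adds over ADF pipelines is DAG-based orchestration: you declare which tasks depend on which, and the scheduler works out a valid execution order. As a rough illustration of that idea (not real Airflow code, which is authored with the airflow library and run by a scheduler), here is a minimal sketch of dependency resolution using only the Python standard library; the task names are made up:

```python
# Sketch of DAG-style task ordering, as an orchestrator like Airflow
# performs it. Hypothetical ETL task names; graphlib is Python 3.9+ stdlib.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
etl_dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform"},
}

# static_order() yields tasks so every dependency runs before its dependents.
run_order = list(TopologicalSorter(etl_dag).static_order())
```

In a real Airflow DAG the same dependencies would be expressed with operators and the `>>` syntax, and the scheduler would also handle retries, backfills and parallelism.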
 
Introduction to Databricks Delta Live Tables
Delta Live Tables is a new framework available in Databricks that aims to accelerate building data pipelines by providing out-of-the-box scheduling, dependency resolution, data validation and logging. We'll cover the basics, and then get into the demos to show how we can:
- Set up a notebook to hold our code and queries
- Ingest quickly and easily into bronze tables using Auto Loader
- Create views and tables on top of the ingested data using SQL and/or Python to build our silver and gold layers
- Create a pipeline to run the notebook
- Run the pipeline as either a batch job, or as a continuous job for low-latency updates
- Use APPLY CHANGES INTO to upsert changed data into a live table
- Apply data validation rules to our live table definition queries, and get detailed logging info on how many records caused problems on each execution
By the end of the session you should have a good view of whether this can help you build out your next data project faster, and make it more reliable.
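The data validation step above is worth a closer look: the framework applies declared rules to incoming records and logs how many failed each rule. The following is only a hedged, stand-alone illustration of that pattern in plain Python (record shapes and rule names are invented); the real feature is declared with expectation decorators inside a Databricks notebook:

```python
# Illustration of expectation-style validation: apply named rules to
# records, keep the passing rows, and count failures per rule so they
# can be logged. Rows and rules here are hypothetical.

def apply_expectations(records, rules):
    """Return (passing_records, failure_counts_per_rule)."""
    failures = {name: 0 for name in rules}
    passing = []
    for rec in records:
        ok = True
        for name, rule in rules.items():
            if not rule(rec):
                failures[name] += 1
                ok = False
        if ok:
            passing.append(rec)
    return passing, failures

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # fails valid_id
    {"id": 3, "amount": -2.0},     # fails positive_amount
]
rules = {
    "valid_id": lambda r: r["id"] is not None,
    "positive_amount": lambda r: r["amount"] > 0,
}
clean, failed = apply_expectations(rows, rules)
```

Only the first row survives; the per-rule failure counts are what a pipeline would surface in its execution logs.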
 
Slowly Changing Dimensions made Easy with Durable Keys
In this session we look at a simple way to implement Kimball durable keys on an SCD2 dimension. This provides an easy, performant way to support reporting on data using both historical and current hierarchies.
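The core idea is that each dimension row carries two keys: a surrogate key identifying one historical version of a member, and a durable key identifying the member across all its versions. Facts joined on the surrogate key report against the hierarchy as it was at transaction time; hopping from the surrogate key to the durable key and then to the current row reports against today's hierarchy. A minimal sketch in plain Python (all table, column and key values are illustrative, not from the session):

```python
# Hypothetical SCD2 dimension: two versions of the same customer (same
# durable key dk, different surrogate keys sk) after a region change.
dim_customer = [
    {"sk": 1, "dk": 100, "region": "North", "is_current": False},
    {"sk": 2, "dk": 100, "region": "South", "is_current": True},
]
fact_sales = [
    {"customer_sk": 1, "amount": 50.0},  # booked while "North" was current
    {"customer_sk": 2, "amount": 75.0},
]

sk_to_row = {d["sk"]: d for d in dim_customer}
current_by_dk = {d["dk"]: d for d in dim_customer if d["is_current"]}

# Historical view: join facts on the surrogate key (region as-at sale).
historical = [(sk_to_row[f["customer_sk"]]["region"], f["amount"])
              for f in fact_sales]

# Current view: surrogate key -> durable key -> current dimension row.
current = [(current_by_dk[sk_to_row[f["customer_sk"]]["dk"]]["region"],
            f["amount"])
           for f in fact_sales]
```

The first sale reports under "North" historically but under "South" in the current view, without rewriting any fact rows, which is the reporting flexibility durable keys buy you.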
 
SQL Server Encryption for the Layman
With GDPR and the number of data breaches we see in the news, encrypting sensitive data is incredibly important. In this talk we start with the basics of encryption, moving on to look at the ways we can encrypt data in SQL Server and Azure SQL DB.