Metadata Driven Pipelines with Python and Databricks
TL;DR
In this demo-filled session we'll take a journey from data pipelines built with notebooks to building scalable, metadata-driven pipelines using Python functions.
Session Details
In this demo-filled session we'll take a journey, starting with some simple data transforms in a notebook.
- We'll look at what a Spark transformation function actually does, and how to build our own transformation functions.
- We'll see how combining generic functions with some metadata can allow us to perform common data engineering tasks such as data cleansing and validation using less code, in a more testable way.
- Finally, we'll see just how simple it is to deploy these functions into Databricks and get them to production.
Attendees should come away with some ideas of how to build more scalable, metadata-driven data pipelines using Databricks and Spark.
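The core idea above, generic functions driven by metadata, can be sketched in a few lines. This is a minimal, hypothetical illustration using plain Python dicts in place of Spark DataFrames; the function and column names are assumptions, not code from the session, but the pattern is the same: new cleansing or validation rules become a metadata change rather than new pipeline code.

```python
# Hypothetical sketch: generic transformation functions driven by metadata.
# Plain dicts stand in for Spark DataFrames to keep the example self-contained.

def trim_strings(rows, columns):
    """Generic cleansing step: strip whitespace from the named columns."""
    return [{k: (v.strip() if k in columns and isinstance(v, str) else v)
             for k, v in row.items()} for row in rows]

def drop_nulls(rows, columns):
    """Generic validation step: drop rows with nulls in required columns."""
    return [row for row in rows if all(row.get(c) is not None for c in columns)]

# The metadata decides which steps run, in what order, on which columns.
PIPELINE_METADATA = [
    {"step": trim_strings, "columns": ["name", "email"]},
    {"step": drop_nulls, "columns": ["email"]},
]

def run_pipeline(rows, metadata):
    """Apply each configured step in turn."""
    for entry in metadata:
        rows = entry["step"](rows, entry["columns"])
    return rows

raw = [
    {"name": "  Ada ", "email": " ada@example.com "},
    {"name": "Grace", "email": None},
]
clean = run_pipeline(raw, PIPELINE_METADATA)
```

Because each step is an ordinary function taking explicit inputs, every step can be unit-tested in isolation, which is the testability benefit the session describes.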
Niall Langley's previous sessions
Better ETL with Managed Airflow in ADF
Building complex data workflows with Azure Data Factory can get a little clunky, especially as your orchestration needs grow more complex. Recently another option became available: Managed Airflow in ADF.
Managed Airflow brings Apache Airflow to Azure as a PaaS service. In this session we discover what Airflow is, why we might want to use it for ETL orchestration, and see how it works with lots of demos.
Introduction to Databricks Delta Live Tables
Delta Live Tables is a new framework available in Databricks that aims to accelerate building data pipelines by providing out-of-the-box scheduling, dependency resolution, data validation and logging.
We'll cover the basics, and then get into the demos to show how we can:
- Set up a notebook to hold our code and queries
- Ingest quickly and easily into bronze tables using Auto Loader
- Create views and tables on top of the ingested data using SQL and/or Python to build our silver and gold layers
- Create a pipeline to run the notebook
- See how we can run the pipeline as either a batch job, or as a continuous job for low latency updates
- Use APPLY CHANGES INTO to upsert changed data into a live table
- Apply data validation rules to our live table definition queries, and get detailed logging info on how many records caused problems on each execution.
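The bronze-ingestion and validation steps above might look roughly like the following. This is a hypothetical sketch of a DLT notebook fragment: it only runs inside a Databricks Delta Live Tables pipeline, where the `dlt` module and the `spark` session are provided, and the table names, landing path, and expectation rule are illustrative assumptions.

```python
# Hypothetical DLT notebook fragment; runnable only inside a Databricks
# Delta Live Tables pipeline, which supplies `dlt` and `spark`.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Bronze: raw files ingested with Auto Loader")
def orders_bronze():
    return (spark.readStream
                 .format("cloudFiles")                 # Auto Loader
                 .option("cloudFiles.format", "json")
                 .load("/mnt/raw/orders"))             # assumed landing path

@dlt.table(comment="Silver: validated orders")
@dlt.expect_or_drop("valid_amount", "amount > 0")      # failures are logged per execution
def orders_silver():
    return dlt.read_stream("orders_bronze").where(col("order_id").isNotNull())
```

DLT resolves the dependency between the two tables from the `dlt.read_stream` reference, and the expectation on the silver table is what produces the per-execution record counts the session demos.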
By the end of the session you should have a good view of whether this can help you build out your next data project faster, and make it more reliable.
Slowly Changing Dimensions made Easy with Durable Keys
In this session we look at a simple way to implement Kimball durable keys on an SCD2 dimension. This provides an easy, performant way to support reporting on data using historical and current hierarchies.
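The durable-key pattern can be sketched in a few lines. This is a minimal, hypothetical illustration using a plain Python list as the dimension table (a real implementation would use SQL MERGE or Spark); the table and column names are assumptions. The durable key stays fixed for a customer across versions while each SCD2 version gets a new surrogate key, so facts can report against either the historical version or, via the durable key and current flag, the current hierarchy.

```python
# Hypothetical sketch of SCD2 with Kimball durable keys.
# A plain list stands in for the dimension table.

dim_customer = []      # the dimension "table"
_next_surrogate = [1]  # simple surrogate-key generator

def apply_scd2_change(durable_key, attributes):
    """Close the current version for this durable key, insert a new current row."""
    for row in dim_customer:
        if row["durable_key"] == durable_key and row["is_current"]:
            row["is_current"] = False
    dim_customer.append({
        "surrogate_key": _next_surrogate[0],  # new key per version
        "durable_key": durable_key,           # stable per business entity
        **attributes,
        "is_current": True,
    })
    _next_surrogate[0] += 1

apply_scd2_change(100, {"name": "Ada", "region": "North"})
apply_scd2_change(100, {"name": "Ada", "region": "South"})  # region changed

# Current-hierarchy reporting: join facts on durable_key, filter is_current.
current = [r for r in dim_customer if r["is_current"]]
```

Historical reporting joins facts on `surrogate_key` as usual; the durable key simply gives a second, stable join path to whatever version is current.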
SQL Server Encryption for the Layman
With GDPR and the number of data breaches we see in the news, encrypting sensitive data is incredibly important. In this talk we start with the basics of encryption, moving on to look at the ways we can encrypt data in SQL Server and Azure SQL DB.