Building a Modern Data Platform Using a Test Driven Approach
2022

TL;DR
In data engineering, testing our pipelines and data flows is all too often a difficult, manual, and time-consuming process. In this session we will look at how we can use Azure DevOps and Azure Dev Test Labs to auto-magically build out a whole data platform environment, both for developers to use and for automated tests to run.
Session Details
In data engineering, testing our pipelines and data flows is all too often a difficult, manual, and time-consuming process.
In this session we will look at how we can use Azure DevOps and Azure Dev Test Labs to auto-magically build out a whole data platform environment in minutes, including ADLS, Data Factory, Databricks and Key Vault. We'll then use that foundation to see how we can build development environments and set up automated testing with DevOps that adds real value to the Modern Data Platform.
Speakers
Niall Langley's previous sessions
Better ETL with Managed Airflow in ADF
Building complex data workflows using Azure Data Factory can get a little clunky, especially as your orchestration needs get more complex. Recently another option has become available: Managed Airflow in ADF.
Managed Airflow brings Apache Airflow to Azure as a PaaS service. In this session we discover what Airflow is, why we might want to use it for ETL orchestration, and see how it works with lots of demos.
Introduction to Databricks Delta Live Tables
Delta Live Tables is a new framework available in Databricks that aims to accelerate building data pipelines by providing out of the box scheduling, dependency resolution, data validation and logging.
We'll cover the basics, and then get into the demos to show how we can:
- Setup a notebook to hold our code and queries
- Ingest quickly and easily into bronze tables using Auto Loader
- Create views and tables on top of the ingested data using SQL and/or Python to build our silver and gold layers
- Create a pipeline to run the notebook
- See how we can run the pipeline as either a batch job, or as a continuous job for low latency updates
- Use APPLY CHANGES INTO to upsert changed data into a live table
- Apply data validation rules to our live table definition queries, and get detailed logging info on how many records caused problems on each execution.
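To give a flavour of the steps above, here is a minimal Delta Live Tables sketch in SQL. The table names, source path, and columns (`customer_id`, `updated_at`) are illustrative placeholders, not taken from the session itself:

```sql
-- Bronze: ingest raw files incrementally with Auto Loader (cloud_files)
CREATE OR REFRESH STREAMING LIVE TABLE customers_bronze
AS SELECT * FROM cloud_files('/mnt/raw/customers', 'json');

-- Silver: validate as we go; rows failing the expectation are dropped
-- and counted in the pipeline's event log
CREATE OR REFRESH STREAMING LIVE TABLE customers_clean (
  CONSTRAINT valid_id EXPECT (customer_id IS NOT NULL) ON VIOLATION DROP ROW
)
AS SELECT * FROM STREAM(live.customers_bronze);

-- Upsert changed records into a live target with APPLY CHANGES INTO
CREATE OR REFRESH STREAMING LIVE TABLE customers_silver;

APPLY CHANGES INTO live.customers_silver
FROM STREAM(live.customers_clean)
KEYS (customer_id)
SEQUENCE BY updated_at
STORED AS SCD TYPE 1;
```

The same pipeline definition can then be scheduled as a triggered batch run or left running continuously for low-latency updates.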
By the end of the session you should have a good view of whether this can help you build out your next data project faster, and make it more reliable.
Slowly Changing Dimensions made Easy with Durable Keys
In this session we look at a simple way to implement Kimball durable keys on an SCD2 dimension. This provides an easy, performant way to support reporting on data using both historical and current hierarchies.
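As a rough sketch of the pattern (table and column names here are hypothetical, not from the session): each dimension row carries a per-version surrogate key plus a durable key that stays constant across all versions of the same member, so current-hierarchy reporting is just an extra self-join.

```sql
-- Illustrative SCD2 dimension: customer_key is the per-version surrogate key,
-- customer_durable_key stays the same across every version of a customer
CREATE TABLE dim_customer (
    customer_key         INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key (one per version)
    customer_durable_key INT NOT NULL,                  -- durable key (one per customer)
    customer_id          NVARCHAR(20) NOT NULL,         -- business key from the source
    region               NVARCHAR(50) NOT NULL,
    valid_from           DATE NOT NULL,
    valid_to             DATE NULL,
    is_current           BIT NOT NULL
);

-- Historical reporting: join facts to the version in effect at the time
SELECT f.sales_amount, d.region
FROM fact_sales f
JOIN dim_customer d ON d.customer_key = f.customer_key;

-- Current reporting: hop via the durable key to pick up today's hierarchy
SELECT f.sales_amount, d_cur.region
FROM fact_sales f
JOIN dim_customer d     ON d.customer_key = f.customer_key
JOIN dim_customer d_cur ON d_cur.customer_durable_key = d.customer_durable_key
                       AND d_cur.is_current = 1;
```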
SQL Server Encryption for the Layman
With GDPR and the number of data breaches we see in the news, encrypting sensitive data is incredibly important. In this talk we start with the basics of encryption, moving on to look at the ways we can encrypt data in SQL Server and Azure SQL DB.
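One of those options is Transparent Data Encryption (TDE), which encrypts the database files at rest. A minimal T-SQL sketch, assuming a database called `SalesDb` (the database, certificate name, and password are placeholders):

```sql
-- Create a master key and certificate in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

-- Create the database encryption key and turn encryption on
USE SalesDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDb SET ENCRYPTION ON;
```

Note that TDE only protects data at rest; column-level options such as Always Encrypted are needed to protect sensitive values from privileged users, which is the kind of trade-off the session explores.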