Databricks is the Swiss Army Knife of the Azure data analytics environment. Databricks is built on Apache Spark, a robust in-memory cluster data processing framework. On top of Spark Core sit libraries for SQL, DataFrames, Streaming, Graph Processing and Machine Learning.

In this session we focus on how Spark implements Machine Learning at scale with Spark ML. Spark ML can do a lot, but not everything. We go end-to-end: building a model iteratively, tracking our improvements in MLflow, and training, testing and evaluating a Machine Learning pipeline. We will also discuss how to migrate your models to perform at scale. If you're working with Deep Learning, let me know and we can discuss how Databricks can scale Deep Learning too.

Presented by Terry McCann at SQLBits XX