These are the sessions submitted so far for SQLBits 2019.

SQL Server and Azure SQL Database support a multitude of data types and functions related to dates and times, and chances are you've never heard of some of them.

In this session, we'll start with the science, explained in an easy-to-understand way. Then you'll get a rundown of each of the data types and how they work. We'll finish with the most useful date and time system functions, using practical examples.
There's life beyond DATETIME and GETDATE(). Start that journey here and end up in a world that embraces the latest versions of SQL Server and Azure SQL Database in just over an hour.
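As a small taste of the functions the session promises (a sketch only; all values are made up for illustration):

```sql
-- Time-zone-aware current time instead of GETDATE()
DECLARE @now datetimeoffset(3) = SYSDATETIMEOFFSET();

-- Convert between time zones without manual offset math (SQL Server 2016+)
SELECT @now AT TIME ZONE 'UTC' AS utc_time,
       @now AT TIME ZONE 'Central European Standard Time' AS cet_time;

-- Build dates from parts instead of string concatenation
SELECT DATEFROMPARTS(2019, 2, 27) AS conference_day,
       EOMONTH(SYSDATETIME())     AS end_of_this_month;
```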
The lines between who manages what in Data Platform technologies are blurring, and the traditional role of the DBA is evolving. Are the skills and techniques that we have used over the last decade still applicable, or do we need to update our thinking and look at how we manage Data Platforms with a fresh pair of eyes?
In this talk, we will cover the top 7 things that a DBA needs to know in order to manage a modern Data Platform solution. We will cover key skills, technologies, and useful applications that ensure data is secure and systems perform at levels that are acceptable for our end users.
What are Azure SQL Database Managed Instances?
The range of options for storing data in Microsoft Azure keeps growing; the most notable recent addition is the Managed Instance. But what is it, and why is it there? Join John as he walks through what these options are and how you might start using them.
Managed Instances add a new option for running workloads in the cloud, allowing near parity with a traditional on-premises SQL Server, including SQL Agent, cross-database queries, Service Broker, CDC, and many more, and overcoming many of the challenges of using Azure SQL Databases.
But what is the reality, how do we make use of it, and are there any gotchas that we need to be aware of? This is what we will cover: going beyond the hype and looking at how we can make use of this new technology, working through a full migration, including workload analysis, selecting the appropriate migration pathway and then putting it in place.
Whether you are a Developer or DBA, managing infrastructure as code is becoming more common, but what if you need to manage hybrid or multi-cloud deployments? Having one tool to do this can simplify management, and this is where Terraform comes in.

Together we will look at what Terraform is and how we can use it to simplify managing our infrastructure needs alongside our application code. Join me as I talk through everything from defining a single VM through to building production-ready infrastructure with the open-source tool Terraform.
In the age of GDPR having accurate data is vital, but how do we achieve this? Simple, we test it.
Testing data might seem like a simple task, but there are many levels of complexity, from defining and understanding the rules for the tests all the way through to how to implement them: are you looking at one test at a time, or composite tests for the data entity?
Together we will explore the approaches that we can take for defining test strategies as well as how testing fits into a larger Master Data Management solution.
We all hate writing documentation, but it is essential for the stringent compliance requirements that we, as data professionals, have to support for the businesses we work for. We will look at ways that we can speed up and minimise the overhead of documentation.
We will look at the key types of documentation, how we can use the data model to do a lot of the work for us, and the features and capabilities within SQL Server that can help. Doing the work up-front allows us to automate and simplify the creation and maintenance of system documentation.
You’ve heard about the “R” language and its growing popularity for data analysis. Do you need a walk-through of what is possible when analyzing your data? Then this session is for you:

You’ll get a short introduction to how R came to be, and what the R ecosystem looks like today. Then we will extract sales data from different companies out of a Navision ERP database on SQL Server.
Our data will be cleaned, aggregated and enriched in the RStudio environment. We’ll generate different diagrams on-the-fly to gain first insights.
Finally we’ll see how to use the Shiny framework to display our data on a map, interactively changing our criteria, and showing us where the white spots really are.
By now, all the data pro world should have heard about the R language, especially since Microsoft is committed to integrate it into their data platform products. So you installed the R base system and the IDE of your choice. But it's like buying a new car - nobody is content with the standard. You know there are packages to get you started with analysis and visualization, but which ones?

A bundle called The Tidyverse comes in handy, consisting of a philosophy of tidy data and some packages mostly (co-)authored by Hadley Wickham, one of the brightest minds in the R ecosystem. We will take a look at the most popular Tidyverse ingredients like tidyr, ggplot2, dplyr and readr, and we'll have lots of code demos on real world examples.
Microsoft introduced geospatial data types back in SQL Server 2008, and some enhancements followed in version 2012. Today, geo data is used almost everywhere. Time to refresh your memories of geometry and geography!

We'll walk through which data types are supported - from 0 to 2 dimensions, from points to polygons and some more - and see how to get spatial data into and out of SQL Server tables.

Then there are built-in functions to determine relationships between geo objects, such as intersection, inclusion or shortest distance.
And of course there will be examples of practical applications of geospatial data.
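As a flavour of the built-in functions mentioned above, a minimal sketch (coordinates are illustrative; note that geography::Point takes latitude, longitude, SRID in that order):

```sql
-- Two points as geography instances on the WGS 84 ellipsoid (SRID 4326)
DECLARE @london     geography = geography::Point(51.5074, -0.1278, 4326);
DECLARE @manchester geography = geography::Point(53.4808, -2.2426, 4326);

-- Built-in method: shortest distance between the points, in metres
SELECT @london.STDistance(@manchester) AS distance_in_metres;
```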
Are you prepared if a tornado filled with sharks destroys your primary data center? You may have backed up your databases, but what about logins, agent jobs, availability groups or extended events? 

Join SQL Server and PowerShell MVP Chrissy LeMaire for this session as she demos how disaster recovery can be simplified using dbatools, the SQL Server Community's PowerShell module.
High availability (HA) can be highly important to DBAs. Whether it be log shipping, classic mirroring or availability groups, HA can also be a pain to set up. Until now.

Join MVPs Chrissy LeMaire and Rob Sewell for this session as they demo how high availability can be simplified using dbatools, the SQL Server Community's PowerShell module.
SQL Server 2019 is a major new update for the data professional, and there's a lot more to it than the big data stuff.

This session will highlight certain features that busy DBAs and developers will find useful. We'll look at UTF-8 support, row-mode query processing improvements, and Always Encrypted with Secure Enclaves. Finally, we'll look at a better sqlcmd by covering the command-line tool mssql-cli.

At the end of the session, you will have a better understanding of several new features in SQL Server 2019 which will help you as a database administrator or developer.

Demos will be included for most of the features covered.
Do you suffer from fear and loathing of sqlcmd? Let's look at a better way with the mssql-cli command line tool. Syntax highlighting? Check. Autocomplete? Check. Better formatting? Check. Cross-platform support? Check. SQL Server 2019 support? Of course!

In this session, we will first cover the basics of the mssql-cli tool, including how to install it. Then we'll see how to replace sqlcmd with mssql-cli and see some nifty tips and tricks to get the best out of your SQL Server instances from Windows, Linux and macOS.

This demo-heavy session will make you love the command line again, and regain control of your batch scripts.
SQL Server has a lot of different execution plan operators. By far the most interesting, and the most versatile, has to be the Hash Match operator. 

Hash Match is the only operator that can have either one or two inputs. It is the only operator that can either block, stream, or block partially. And it is one of just a few operators that contribute to the total memory grant of an execution plan. 

If you have ever looked at execution plans, you will have seen this operator. And you probably have a rough idea of what it does. But do you know EXACTLY what happens when this operator is used? In this two-hour 500-level session, we will dive deep into the bowels of the operator to learn how it performs. 

It is going to be a wild ride, so keep your hands, arms, and legs inside the conference room at all times; and please remain seated until the presenter has come to a full stop. 

Topics covered in this session include:
* What is an in-memory hash table and how exactly is it built?
* The logical operations supported by Hash Match: what do they do and how do they work?
* Memory usage: what is a memory grant, which factors are used to compute/estimate it? What exactly happens when a Hash Match operator has to spill? (Dynamic destaging, dynamic role-reversal, bail-out, bit-vector filtering)
* How is memory divided when multiple operators in a single plan use a memory grant?
* Hash teams: What are they, when are they used, what is the benefit?
As announced in September 2018, SQL Server 2019 expands the "adaptive query processing" features of SQL 2017 and relabels them as "intelligent query processing". This name now covers many features, such as batch mode on rowstore, memory grant feedback, interleaved execution, adaptive joins, deferred compilation, and approximate query processing. 

In this high-paced session, we will look at all these features and cover some use cases where they might help - or hurt! - you.
SQL (the language) is not a third generation language, where the developer tells the computer every step it needs to take. It is a declarative language that specifies the required results. SQL Server itself will figure out what steps it takes to get to those results. Most of the time, that works very well.

But sometimes it doesn't. Sometimes a query takes too much time. You need to find out why, so you can fix it. That's where the execution plan comes in. In the execution plan, SQL Server exposes exactly which steps it took for your query, so you can see why it's slow.

However, execution plans can be daunting to the uninitiated. Especially for complex queries. Where do you even start?

In this session you will learn how to obtain execution plans, and how to start reading and understanding them.

Prerequisites: Attendees are expected to be well-versed in SQL and to have a fair understanding of indexes.
You’ve just been given a server that is having problems and you need to diagnose it quickly. This session will take you through designing your own toolkit to help you quickly diagnose a wide array of problems. We will walk through scripts that will help you pinpoint various issues quickly and efficiently. This session will take you through:

- What’s on fire? – These scripts will help you diagnose what’s happening right now
- Specs – What hardware are you dealing with here (you’ll need to know this to make the appropriate decisions)?
- Settings – are the most important settings correct for your workload?
- Bottlenecks – We’ll see if there are any areas of the system that are throttling us.

By the end of this session, you should know what you need to do in order to start on your own kit. This kit is designed to be your lifeline to fix servers quickly and get them working. All the code we’ll go through is either provided as part of this presentation or comes from open-source/community tools.
When your development team is up to a certain size, and often no matter what size it is, you want to start following best development practices. 

These include things like source control, multiple environments, deployment processes, and governance.

As Power BI content is developed using Power BI Desktop and not Visual Studio as most Microsoft BI solutions are, these things can get tricky. In this session we will look at what Power BI has to offer when it comes to development lifecycle. 

We will look at the different options available to the developer when it comes to source control, multiple environments, deployment and distribution of Power BI content. Lastly, we will look at governance and see how it is possible to secure the content and audit the usage of Power BI.

For all these topics we will look at the capabilities Power BI offers and how we, in the company where I work, decided to implement them.
In this session we will create a Power App that will allow users to check in their location. We will then create a Flow that will take that location, write it to a Power BI data source and refresh it. We will then create a Power BI report that will display the data on a map.

Power Apps is a great tool that allows you to create a desktop or mobile app with minimal coding. The app we are creating in this session uses the Bing location services to get the user's location when a button is pressed. 

The Microsoft Flow we create in this session will take the location and user information and write it to an Excel file. We will also look at a custom connector in Flow that will allow us to refresh a Power BI data set. 

In the Power BI report we will create we will connect to an Excel file with the location information in it and display it in a report including the location on a map.

The audience will take away useful information about Power Apps, Flow and Power BI, including all the code used.
Poor data quality has a cost. 
Examples of data quality challenges and their impact.
Having correct data is very important in order to make correct decisions. 
Data quality goes hand in hand with proper data modelling.
Knowledge about use cases and workload is important input to your data modelling. 
Different kinds of compression can be relevant depending on your usage scenario.
Having focus on deadlines without having (data) quality in mind will hit you hard at a later point.

I will show a simple way of how you can get attention, but also how to integrate with an existing
monitoring system.

Delivering good query performances and reports in time is important to business users,
but how do you measure it from their perspective?
Row-Level Security (RLS) can be based on a foreign key relationship. This allows you to keep the
data model unchanged.

Why you should not use SQL logins, but integrated security instead:
Passwords of SQL logins can be "recovered" - a guide on how to do it with VM in Azure.

What is the performance impact?
It is no different from the RLS solution with views joining on table-valued function(s).
But the use of is_member can cause trouble. I will present a solution that caches AD-role membership information.
This includes a tiny PowerShell script and some advanced settings in the job.

I will show you how to take care of implicit knowledge: knowledge which could be extracted.
In addition, I will demonstrate how to write tests to check that TVFs are working correctly.
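As a rough sketch of the is_member-based RLS approach described above (table, column and role names are made up for illustration):

```sql
-- Predicate function: a row is visible only if the current user belongs
-- to the AD role whose name is stored in the row's Region column
CREATE FUNCTION dbo.fn_RegionPredicate (@Region AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed WHERE IS_MEMBER(@Region) = 1;
GO

-- Bind the predicate to the table; the data model itself stays unchanged
CREATE SECURITY POLICY dbo.RegionFilter
ADD FILTER PREDICATE dbo.fn_RegionPredicate(Region) ON dbo.Sales
WITH (STATE = ON);
```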
Some examples on how to read and write different types of files
and why this is relevant for automation of data analysis.

Write XLS file using CLR as well as doing it manually.
Write CSV file using CLR to save output from Rscript connecting to cube.
Read XML file stored in file table.
Writing and reading through external tables (Polybase).
Plotting JPG inside SQL Serve"R" 2016/2017.

My experience using CLR (statistical functions, comma-list concatenation, downsampling)
and how to debug it.

* the value of data wrangling skills.
* more about the Excel data file format.
* when and when not to use CLR.
Traditionally, when a server starts to reach its limit we have simply thrown more resources at it: more CPU, memory or disk. However, there comes a point, especially in the cloud, where it is no longer possible to add more resources to a database. Here we need a different solution. Instead of scaling up we must scale out, sometimes called horizontal scaling or sharding. In this talk we will look at how to scale out an Azure SQL database using the Azure Elastic Database tools. We will look at the requirements and options for horizontal scaling in Azure, and then we will have a go at sharding an Azure SQL database and querying and updating the different shards. We will be using T-SQL, PowerShell and C#, so come prepared for some serious coding.
Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA
Today, SQL Server DBAs are more than likely at some point in their careers to come across Oracle and Oracle DBAs. To the unwary this can be very daunting and, at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.

This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server and hopefully make Oracle not look quite so intimidating.

At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server. 
Although you won’t be ready to be an Oracle DBA it will give you a foundation to build on.
The word Kerberos can strike fear into a SQL DBA as well as many Windows Server Administrators.
What should be a straightforward and simple process can lead to all sorts of issues, and trying to resolve them can turn into a nightmare. This talk looks at the principles of Kerberos, how it applies to SQL Server and what we need to do to ensure it works.
We will look at
  • What is the purpose of Kerberos in relation to SQL Server?
  • When do we need to use it?  Do we need to worry about it at all?
  • How do we configure it?  What tools can we use?
  • Who can configure it?  Is it the DBA's job to manage and configure Kerberos?
  • Why does it cause so many issues?
Because, on the face of it, setting up Kerberos for a SQL Server is actually straightforward, but it is very easy to get wrong, and then sometimes very difficult to see what is wrong.
It seems like every month we hear about another company having a major data breach. GDPR raises the stakes with huge fines for those that lose or don't keep data safe. Ensuring that your data is secure has become more important than ever. With this in mind, in SQL Server 2016, Microsoft gave us three new features that have the potential to improve the security of your SQL database, either on premises or in the cloud:
  • Dynamic Data Masking, which allows us to obfuscate data in real time
  • Always Encrypted, which helps protect data both at rest and in motion with a master key
  • Row-Level Security, which gives us control over who can see which rows in a table based on the user's rights
In this session we will have an overview of these important security features and demos on how to configure them and make them work.
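As a taste of the first of those features, Dynamic Data Masking, a minimal sketch (table, column and user names are made up for illustration):

```sql
-- Mask an email column; non-privileged users see values like aXXX@XXXX.com
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Privileged users can be granted the right to see the real values
GRANT UNMASK TO ReportingUser;
```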
It is very common to use temporary data structures in the database.
In SQL Server, we can choose between temporary tables (#MyTable) and table
variables (@MyTable). There are many differences between these two structures,
some are obvious and well known, and some might surprise you.

The main difference in terms of performance is statistics, which
exist for a temporary table, but do not exist for a table variable. For that
reason, there can be a huge difference in performance of a stored procedure
that uses one data structure or the other.

In this session, we will demonstrate the differences and analyze
performance for various use cases. We will cover all kinds of ways to work with
these data structures, such as OPTION (RECOMPILE) and trace flag 2453.

By the end of this session you will know exactly when and how to use
each one in order to achieve the desired functionality with the best performance.
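One of the techniques mentioned above, OPTION (RECOMPILE) on a table variable, can be sketched like this (the source table is arbitrary; the point is the row estimate):

```sql
-- Table variables have no statistics, so the optimizer normally
-- assumes a very low row count when compiling the query
DECLARE @Orders TABLE (OrderId int PRIMARY KEY);
INSERT INTO @Orders (OrderId)
SELECT object_id FROM sys.objects;

SELECT * FROM @Orders;                      -- plan built with a fixed low estimate
SELECT * FROM @Orders OPTION (RECOMPILE);   -- recompile lets the optimizer see the actual row count
```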
Big Data is everywhere these days. There are a lot of Big Data
scenarios, from IoT (Internet of Things) to AI (Artificial Intelligence). The
common challenge to all of these scenarios is to handle very large volume,
velocity and variety of data (a.k.a “the 3 V’s”).

While there are plenty of other data platforms to handle Big Data,
SQL Server 2017 offers a lot of features and capabilities to handle Big Data
scenarios, and in many cases, it can be the best solution.

In this session, we will first identify these scenarios, and we will
try to answer the question: “Is SQL Server the right tool for the job?”. We
will then cover many features in SQL Server that can be used together to handle
Big Data scenarios, such as: Delayed Durability, Machine Learning Services,
Partitioning and Parallelism, XML & JSON Support, Real-Time Operational
Analytics, Polybase, and more.
If you install a SQL Server instance by just clicking
Next->Next->Next, it will be set up with a default configuration, which
is pretty good for many environments. But this doesn’t mean that it is good for
your environment. Some of the default configurations in SQL Server are
actually not optimal. Some might be optimal for one environment, but not for
another. And there are many recommended configurations that aren’t even part of
the setup process.

In this session, we will present a recipe for installing SQL Server
the right way (not the Next->Next->Next way). This recipe includes many
recommendations for better performance, availability, security and
manageability. We will explain and demonstrate each recommendation. By the end
of the session, you will have a list of recommendations to apply to your
existing as well as new SQL Server instances.
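Two commonly cited examples of defaults worth revisiting, as a sketch (the values shown are illustrative, not recommendations for your environment):

```sql
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- The default of 5 is widely considered too low for modern hardware
EXEC sys.sp_configure 'cost threshold for parallelism', 50;

-- Cap SQL Server's memory to leave headroom for the OS
EXEC sys.sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```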
Monitoring page splits in SQL Server is useful. It can be very
useful, for example, in order to choose the appropriate fill factor for an
index. By monitoring the number of page splits for a particular index over a
period of time, we can determine whether the fill factor for that index should
be adjusted in order to reduce the number of page splits and improve the
overall performance.

In this session we are going to demonstrate all the possible options
to monitor page splits in SQL Server, and we are going to show that all of these
options are wrong. SQL Server is lying, and we're going to prove it. We will
then demonstrate the only (documented) method to correctly monitor page splits
in SQL Server as of today.
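Without spoiling the session: the documented method referred to is generally understood to be the transaction_log Extended Event, filtered to the LOP_DELETE_SPLIT log operation. A sketch, with the caveat that the numeric map value (11 here) should be verified against sys.dm_xe_map_values on your build:

```sql
CREATE EVENT SESSION PageSplits ON SERVER
ADD EVENT sqlserver.transaction_log (
    WHERE operation = 11  -- LOP_DELETE_SPLIT: a "real" mid-page split
)
ADD TARGET package0.histogram (
    SET filtering_event_name = 'sqlserver.transaction_log',
        source = 'database_id', source_type = 0
);
ALTER EVENT SESSION PageSplits ON SERVER STATE = START;
```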
There are many methods and options in SSIS to handle errors during a
package execution. You can use event handlers, event propagation, package
transactions, precedence constraints, error rows, and more. Things get more
complicated when your package has multiple levels of containers, or even when
one package executes another. It is very easy (and common) to get lost and do
things the wrong way.

In this session we will learn about all the options available to us
to handle errors in SSIS, but more importantly, we will learn about the best
practices and the way to do it right.
A common use case in many databases is a very large table, which
serves as some kind of activity log, with an ever-increasing date/time column.
This table is usually partitioned, and it suffers from heavy load of reads and
writes. Such a table presents a challenge in terms of maintenance and
performance. Activities such as loading data into the table, querying the
table, rebuilding indexes or updating statistics become quite challenging.

The latest versions of SQL Server, including 2017, offer several new
features that can make all these challenges go away. In this session we will
analyze a use case involving such a large table. We will examine features such
as Incremental Statistics, New Cardinality Estimation, Delayed Durability and
Stretch Database, and we will apply them to our challenging table and see what happens.
Parameters are a fundamental part of T-SQL programming, whether they are used in stored
procedures, in dynamic statements or in ad-hoc queries. Although widely used,
most people aren’t aware of the crucial influence they have on query
performance. In fact, wrong use of parameters is one of the common reasons for
poor application performance.

Does your query sometimes run fast and sometimes slow – even when nothing’s changed?
Did it happen to you that a stored procedure, which had always been running for
less than a second, suddenly started to run for more than 5 seconds
consistently – even when nothing had changed?

In this session we will learn about plan caching and how the query optimizer
handles parameters. We will talk about the pros and cons of parameter sniffing
(don’t worry if you don’t know what that means) as well as about simple vs.
forced parameterization. But most importantly, we will learn how to identify
performance problems caused by poor parameter handling, and we will also learn
many techniques for solving these problems and boosting your application performance.
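One of the symptoms described above, a procedure whose plan was sniffed for an unrepresentative parameter value, has several fixes; here is a sketch of one of them (the procedure and table are made up for illustration):

```sql
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    -- One possible fix: pay a compile on every execution in exchange
    -- for a plan optimized for the actual parameter value
    OPTION (RECOMPILE);
END;
```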
If you are releasing database changes, new reports, cubes or SSIS packages on a regular basis, you've probably offered up your share of blood, toil, tears and sweat on getting them delivered into production in working condition. DevOps is a way to bridge the gap between developers and IT professionals, and for that we need to address the toolchain to support the practices. Microsoft offers a set of tools that'll help you on your journey towards the end goal: maximize predictability, efficiency, security and maintainability of operational processes.
We will look in detail at:
  • Agile Development Frame of Mind
  • Visual Studio Online (tool)
  • Feature/Backlog/Work Item (concept)
  • Build Agents (tool)
  • PowerShell, Microsoft's Glue (tool)
  • How to set up a pipeline
In this session we will look at all the important topics that are needed to get your Power BI modelling skills to the next level. We will cover the in-memory engine, relationships, DAX, composite models and aggregations. This will set you on the path to master any modelling challenge with Power BI or Analysis Services.
How can AI make my BI applications smarter and more intelligent? Everybody is talking about AI, but what can you do with it today? Currently there are tools like Cognitive Services, Azure ML or the Azure Bot Framework, which can help in the classic ETL process to prepare or enrich data. Examples are the analysis of large data streams from the IoT area, improving the demand planning of a call center, or the analysis of social media regarding trends. If you already have your BI application in the cloud, Azure Data Factory, Logic Apps or Azure Stream Analytics would be the right components for an AI extension.
Microsoft Cognitive Services (formerly Project Oxford) are a set of APIs, SDKs and Services. They are available to developers to make their applications smarter, more engaging and easier to find. Cognitive services extend Microsoft's AI platform. 

This is a great playground for young and old. Here you can try out to your heart's content what might be in use tomorrow. With the various building blocks such as QnA Maker, Vision, Emotion, Face, Text Analytics or Recommendations (to name just a few), impressive applications can be put together in a short time.
In this session, we will explore real-world DAX and model performance scenarios. These scenarios range from memory pressure to high CPU load. Once we identify them using the monitoring tools, watch as we dive into the model and improve performance by going through concrete examples. Will it be fixed by optimizing the DAX statement or by making changes to the model? Come watch to find out!
One hot topic with Power BI is security. In this deep-dive session we will look at all aspects of Power BI security, from users and logging to where and how your data is stored, and we will even look at how to leverage additional Azure services to secure it even more.
How can you bring your existing on-premises data warehouse and reporting into the cloud? That is precisely what the question was more than a year ago. The aim was to use everything as a service. Azure offers many possibilities with Azure SQL DB, Azure SQL DW, Azure Data Factory, Logic Apps, Stream Analytics and Power BI. Now, after more than a year in live operation, a short summary and evaluation on the subject of BI in Azure.
Azure offers a wide range of services that can be combined into a BI solution in the cloud. What possibilities does Azure currently offer to create a modern BI architecture? The components currently offered range from Azure SQL DB and SQL DWH to Data Factory, Stream Analytics, Logic App, Analysis Services and Power BI, to name a few. This is a very good toolbox, with which you can achieve your first successes very quickly. Step by step you will learn how to create the classic ETL process in the cloud and analyze the results in Power BI.
Who doesn't know the problem: you sit in the bar and just don't know which cocktail to order?
The Cognitive Services offer three APIs here, Face, Emotion and Recommendations, that can help you. How do you best combine these services to get a suggestion for your cocktail?
You use different apps to organize your work: Outlook, OneDrive, OneNote, SharePoint, Dynamics 365, Power BI and so on, all for different tasks. Microsoft introduced Flow to let these apps talk to each other. This allows us to create new automated workflows in an easy way. 

In these workflows Power BI can play an important role. Power BI generates data alerts which can be used to create emails, work tasks or even start a new flow. Also, you can automatically publish data to Power BI from apps like Outlook and SharePoint to analyze your email and documents.

In this session, we’ll introduce Flow and look at use cases to integrate apps with Power BI. Through different demos, you will get a good understanding of automating new work processes with Flow and Power BI.
PowerApps, Power BI and Microsoft Flow... see how they can work together! If you’re using one of these services, you’ve probably heard of the other two but haven’t used them yet. Am I right? You can use PowerApps, Power BI and Microsoft Flow independently, though combining these services gives you many more possibilities.

In this session, we’ll create multiple solutions together from scratch. This will give you a good understanding of combining PowerApps, Power BI and Microsoft Flow.
In this talk you will learn how to use Power BI to prototype/develop a BI solution in days and then (if needed) evolve it into a fully scalable Azure BI solution. The goal of the session is to show, with a real-world example, how to use Power BI as a prototype solution (with real data) and the process to scale it up to a fully scalable Azure BI solution using Azure AS and Azure SQL DB. In the process I will share a few tips & tools that you can use to help you.
In this talk you will learn about a lot of tools, tips & hacks on the Power BI platform that will be very useful in your daily Power BI projects.
It's a very demo oriented session with tips on:
  • Governance
  • Deploy
  • DevOps
  • Productivity
  • Real Time
You don't have to be a superhero to make a difference to our planet; just a bit of programming skill will suffice. We can't imagine our lives without electricity. We use it to have light and heat, keep our food fresh, work with computers, use mobile phones, and don't forget entertainment! We need electricity for driving cars, even more now with EVs. Electricity is generated mostly with fossil fuels; we can use nuclear power and hope that nobody makes a mistake, but people do make mistakes. That's why we build wind farms, solar panels and hydro power plants, but we can't force them to generate electricity when we want. They are not aware of World Cup finals. So how can we make sure we use green power more? We need to store electricity when they can generate it, and use it when they produce less, but storing electricity is hard. We have to change the way we consume energy, but it has to be automatic, so people wouldn't even know. That's what we do at OVO Energy: using IoT devices to change power usage patterns and create a virtual power plant, which can be used when demand exceeds supply. I will show you how we use Azure IoT Hub to do that; you don't have to be a C or C++ developer to work with IoT.
There are cases where you create indexes to support query elements that rely on order, but the query optimizer seems to not realize this, and ends up applying a sort. This session demonstrates a number of such situations and provides workarounds that enable the optimizer to rely on index order and this way avoid unnecessary sorting.
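For illustration, a minimal sketch of the kind of scenario the session explores (the table, index, and query are hypothetical):

```sql
-- Hypothetical table and index; names are illustrative.
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY PRIMARY KEY,
    CustID    INT  NOT NULL,
    OrderDate DATE NOT NULL
);

CREATE INDEX IX_Orders_CustID_OrderDate
    ON dbo.Orders (CustID, OrderDate);

-- Ideally this query reads the index in order, with no Sort operator;
-- the session covers variations where the optimizer adds one anyway.
SELECT CustID, OrderDate
FROM dbo.Orders
ORDER BY CustID, OrderDate;
```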
Has your manager come to you and said "I expect the SQL Server machines to have zero downtime?" Have you been told to make your environment "Always On" without any guidance (or budget) as to how to do that or what that means? This session will walk you through the high availability options in on-premises SQL Server, the high availability options in Azure SQL Database, and how those can be combined to enable you to achieve the ambitious goals of your management. Beyond the academic knowledge, we'll discuss real world case studies covering exactly how your on-premises environments and Azure services can work together to keep your phone quiet at night.
Have you ever wanted to travel back through time and fix problems using modern techniques? Are you frustrated by having to design tables to look at data across time when it seems so simple to query "tell me this answer from this time"? Are you annoyed at having to build complicated security schemes to protect data? We can't time travel yet, but temporal tables allow us to traverse our data throughout time in a far easier fashion than before this feature was brought to SQL Server and Azure SQL Database. Row-level security allows us to secure that same data in a more direct and flexible way as well, freeing data modelers and developers from dealing with the complicated syntax and table topologies that previous homebuilt iterations of these features required. Come learn how your job as a DBA or database developer has gotten a bit easier (and how you too can travel through time)!
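As a small taste of the feature, a minimal system-versioned (temporal) table sketch; the names and the date queried are illustrative:

```sql
-- A minimal temporal table: SQL Server maintains the history rows.
CREATE TABLE dbo.Product
(
    ProductID INT NOT NULL PRIMARY KEY,
    Price     DECIMAL(10,2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- "Tell me this answer from this time":
SELECT ProductID, Price
FROM dbo.Product
FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00'
WHERE ProductID = 1;
```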
Long gone are the days where the only architecture decision you had to make when scaling an environment was deciding which part of the datacenter would store your new server. There is a dizzying array of options available in the SQL Server and Azure ecosystems and those are evolving by the day. Is “the cloud” a fad? Are private datacenters a thing of the past? Could both questions have a kernel of truth in them? In this session I will go over real world scenarios and walk you through real world solutions that utilize your datacenter, cloud providers, and everything in between to keep your data highly available and your customers happy.
Have you ever wondered what it takes to keep an Always On availability group running and the users and administrators who depend on it happy? Let my experience maintaining several production Always On Availability Groups provide you some battle-tested information and hopefully save you some sleepless nights. From security tips to maintenance advice, come hear about some less than obvious tips that will keep users happy and the DBA’s phone quiet.
The job of a data professional is evolving rapidly, driving many of us to platforms and technologies that were not on our radar screen a few months ago. I am certainly no exception to that trend. Most of us aren't just monitoring backups and tuning queries - we are collaborating with teams throughout the company to provide them data and insights that drive decisions. Cloud providers are democratizing technologies and techniques that were complicated and proprietary just a few months ago. This presentation walks you through how a silly idea from a soccer podcast got me thinking about how Azure Logic Apps, the Cognitive Services API, Azure SQL DB, and Power BI combine to provide potentially powerful insights to any company with a social media and sales presence. Join me as I walk you through building a solution that can impact your company's bottom line - and potentially yours too!
Are you considering becoming a speaker, but feel nervous about getting on stage for the first time? Have you already presented a few sessions and want advice on how to improve? Do you learn more from seeing examples of what you should NOT do during a presentation instead of reading a list of bullet points on how to become a better speaker?

Don't worry! I have made plenty of presentation mistakes over the years so you won't have to :)

In this session, we will go through common presentation mistakes and how you can avoid them, as well as how you can prepare for those dreaded worst-case scenarios. Don't let those "uhms" and "uhhs" dominate your presentation, help the audience focus on the key message you're delivering instead of making them read a wall of text in your slides, recover gracefully from any demo failures, and stop distracting your attendees with floppy bunny hands.

All it takes is a little preparation and practice. You can do this!
In this session we'll go from zero to a fully working ADF v2 data-driven ELT process. We'll look at dynamic task properties and how to set these to get the most out of ADF v2.

We'll focus on moving from SQL Server to SQL Server but the principles can be expanded to move between a variety of data sources.

Using lots of demos and examples, attendees will leave with a good foundation and enough understanding to go and build their own flavour of a data driven ELT process in ADF v2
Have you ever wanted to add some element of application-style navigation and interface into your Power BI reports?
Then this session is for you!

In this session we'll cover a variety of techniques for creating things such as pop up dialogues, drop down menus and some clever navigation tricks, to name a few

Using lots of demos and examples, this fun and light-hearted session will leave attendees with a variety of useful techniques that can be applied to their Power BI work immediately.
In this session we'll examine the ups and downs of being a consultant and whether it's for you.

Full disclaimer: I LOVE being a consultant and I work for a fantastic company, but I do understand it's not for everyone.

I've been a consultant for many years and this session will be based on my experiences and the opinions I've formed over the years, both from my own observations and from seeing many people come and go from the field.

This will be a fun and light-hearted session with lots of discussion and interaction with the attendees. Everyone who attends will leave with a good idea of how consultancy works and whether it's for them.
PowerApps is an exciting and easy to pick up application development platform offered by Microsoft.
In this session we'll take an overview look at PowerApps from both the development and deployment sides.

We'll go through the process of building and publishing an app and show the rapid value that PowerApps can offer.

Using plenty of demos and examples, attendees will leave with a well-rounded understanding of the PowerApps offering and how it could be used in their organisations.
This session will cover the basics of dynamic SQL; how, why and when you may wish to use it with demos of use cases and scenarios where it can really save the day (trying to perform a search with a variable number of optional search terms, anyone?). We will also cover the performance and security impacts touching on the effect on query plans, index usage and security (SQL injection!) along with some best practices.
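As a flavour of the optional-search-terms scenario, a hedged sketch; the procedure, table, and columns are hypothetical:

```sql
-- Build only the predicates the caller supplied, then execute with
-- sp_executesql so the values stay parameterized (no SQL injection).
CREATE PROCEDURE dbo.SearchCustomers
    @Name NVARCHAR(100) = NULL,
    @City NVARCHAR(100) = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT CustomerID, Name, City
          FROM dbo.Customers
          WHERE 1 = 1';

    IF @Name IS NOT NULL SET @sql += N' AND Name = @Name';
    IF @City IS NOT NULL SET @sql += N' AND City = @City';

    EXEC sys.sp_executesql @sql,
         N'@Name NVARCHAR(100), @City NVARCHAR(100)',
         @Name = @Name, @City = @City;
END;
```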
Have you ever had a requirement to provide a method for many users to insert or change data in a database with minimal time and/or tools in order to do this? Have you ever considered Excel for the job? Whilst probably not the most recommended way of interacting with SQL, an Excel tool is a highly customisable method you can develop quickly with little prior VBA knowledge and be able to use on most Windows computers instantly. Sounds good right?! During this session I will explain and demonstrate how to create an Excel application that enables multiple users to make changes to the database and show you how easy this is to distribute quickly to the users.

Despite several years as a SQL/BI Developer I was completely unaware that this was even a possibility; so if like me you are also surprised to find out that Excel can be used in such a manner then this demo is for you!

Note this session is primarily using VBA with very basic SQL strings to perform the INSERT/UPDATE/DELETE.
When using Azure SQL Database, you're paying for performance. In this session, you'll learn what tools and techniques are now available to help you be cost-effective. You'll see how to use features such as scaling, in-memory OLTP, and columnstore to minimize query run times and optimize resource use. Query Performance Insight and Automatic Tuning will be covered so you know how to monitor your environment and automate tuning. You'll be ready to get the most performance for the least amount of money from SQL Database. 
“I heard about this new feature, and I think it sounds great. I want to use it in my database.” Have you heard – or said – those words? You’re not alone. Too often, we dive into using a new feature without exploring what business or technical problem it will solve, then run into problems once we’re using it. In this session, I cover what problems can be solved using the major performance features of SQL Server – columnstore, in-memory OLTP, partitioning, Resource Governor, and more.
"Time and tide wait for no man", but SQL Server must wait for resources when executing a query. The lifecycle of a query is full of waiting, and SQL Server records when it's waiting, why, and for how long. By learning how each query is executed, and observing how waits can accumulate along the way, you can unlock the secrets to what resource or code changes are needed to improve your query and server performance. 
Do your users click run on a report or dashboard and then go for a cup of coffee in another city?  Optimization exercises can be fraught with disappointment if a scientific approach isn't part of the solution.  This session will teach you a long-standing scientific approach to optimization and how to diagnose performance problems in Power BI.

It will begin by ensuring your Power BI environment is configured optimally, covering tips and tricks, followed by a methodology for performing an optimization review of a Power BI environment and the steps (along with demonstrations) of how to track log, performance, and other data to diagnose the source of issues instead of guessing at the cause.
Join me to learn about Azure Cosmos DB. I will cover the following topics in this session
  • Why do we need another database system?
  • How to set up Cosmos DB
  • How much does it cost?
  • Multi-model APIs
  • Cosmos DB vs SQL Server
  • How to Import Data
  • How to use Cosmos DB Emulator
  • Cosmos DB Limitations
I will cover the following topics in this session :
  • What is Spatial Data
  • Available SQL Server Spatial Data types/Objects
  • Common SQL Server Spatial Functions
  • Definition of Spatial Reference Identifier
  • Selecting the right Spatial Data type
  • How to find distance between two points in SQL Server
  • How to search by radius in SQL Server
  • Common Spatial Formats
  • How to import Spatial Data into a SQL Server database for free
  • Use SQL Server 2016 to create GeoJSON
  • Use SQL Server 2017 to cache Spatial Data
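As a taste of the distance and radius-search topics above, a short sketch; the Shops table and its columns are hypothetical:

```sql
-- Distance in metres between two points (SRID 4326 = WGS 84).
DECLARE @London GEOGRAPHY = GEOGRAPHY::Point(51.5074, -0.1278, 4326);
DECLARE @Paris  GEOGRAPHY = GEOGRAPHY::Point(48.8566,  2.3522, 4326);

SELECT @London.STDistance(@Paris) AS DistanceInMetres;

-- Radius search: all shops within 10 km of a point.
SELECT ShopID, Location.STDistance(@London) AS Metres
FROM dbo.Shops
WHERE Location.STDistance(@London) <= 10000;
```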
SQL Server 2017 has great new additions and features. Join me to learn what's new in SQL Server 2017, with many demos. I will cover the following features:
  • Linux Support
  • Graph Tables
  • Intelligent Query Processing
  • Resumable Online Index Rebuild
  • How to run R/Python with Machine Learning Services
  • In-Memory Tables (NoSQL in SQL Server)
Authoring SSAS tabular models using the standard tools (SSDT) can be a pain when working with large models. This is because SSDT keeps a connection open to a live workspace database, which needs to be synchronized with changes in the UI. This makes the developer experience slow and buggy at times, especially when working with larger models. Tabular Editor is an open source alternative that relies only on the Model.bim JSON metadata and the Tabular Object Model (TOM), thus providing an offline developer experience. Compared to SSDT, making changes to measures, calculated columns, display folders, etc. is lightning fast, and the UI provides a "what-you-see-is-what-you-get" model tree that lets you view Display Folders, Perspectives and Translations, making it much easier to manage and author large models. Combined with scripting functionality, a Best Practice Analyzer, command-line build and deployment, and much more, Tabular Editor is a must for every SSAS Tabular developer. The tool is completely free, and feedback, questions or feature requests are more than welcome. This session will keep the PowerPoint slides to a minimum, focusing on demoing the capabilities of the tool. Attendees are assumed to be familiar with Tabular Model development.
Do you really need server backups? Yes. Unless you don't care about your data, or you don't mind having to completely recreate your database from scratch and lose all your data, you need a way to restore your data to a useful point in time.
In this session we will walk you through what happens inside SQL Server and give you details on each action performed by the database engine when you are running your backups (Full, Differential and T-Log), allowing you to better understand and optimize the performance and trustworthiness of the backups of your data.
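For reference, the three backup types in their simplest form; the database name and paths are illustrative:

```sql
-- Full backup: the complete baseline.
BACKUP DATABASE Sales
    TO DISK = N'D:\Backups\Sales_full.bak';

-- Differential: everything changed since the last full backup.
BACKUP DATABASE Sales
    TO DISK = N'D:\Backups\Sales_diff.bak'
    WITH DIFFERENTIAL;

-- Transaction log: enables point-in-time recovery (FULL recovery model).
BACKUP LOG Sales
    TO DISK = N'D:\Backups\Sales_log.trn';
```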
I've been working with SQL Server on Windows for well over a decade, but now it can run on Linux. I had to learn a lot of things to ramp up. Let me share those with you, so you can successfully manage SQL Server on Linux! In this session, I'll cover basic Linux commands, what to prep for installation, how to install, how to configure, and what you need to know to monitor and troubleshoot. 
In recent years, the idea of source control has become inextricably linked with git, the version control system created for the development of the Linux kernel.

Whilst the primitives of git are very simple, certain operations, including but not limited to branching, merging, resetting, rebasing, and reverting can be confusing to the uninitiated.

We will look at the most common developer interactions with git version control, using a mixture of command line tools, graphical clients, and IDE integrations, as well as covering how to extract ourselves from a few common difficulties.

We'll also discuss workflows for using git as part of a team, using the context of repository hosting services such as GitHub and Visual Studio Team Services.
It's uncontroversial to suggest that developers work more effectively given isolated development environments, which are not subject to "surprises" caused by the rest of the team or "change freezes" imposed by the release process.

In many circumstances, these can be provided using a myriad of Virtual Machines, but the case of applications which rely on a range of PaaS services such as Azure Functions, Azure Data Factory or Azure SQL Data Warehouse can be more complicated.

This session will discuss a way of dynamically creating and destroying an isolated end-to-end environment for every individual feature branch using the facilities provided by VSTS and Azure Resource Manager, as well as some reasons why you might or might not want to adopt such an approach.
In this session Ben will walk you through a home-made IoT project with the data ultimately landing in Power BI for visualisation. 

A Raspberry Pi is the IoT device, with sensors and camera attached to give an end-to-end streaming solution. 
You will see Python used to run the program and process the images. Microsoft Azure plays its part where Microsoft Cognitive Services enriches the images with facial attributes and recognition, Azure SQL stores the metadata, Azure Blob storage holds the images, Power BI visualises the activity and Microsoft Flow sends mobile notifications. 

You'll see enough to walk out and get your own project started straight away!
You know locking and blocking very well in SQL Server? You know how the isolation level influences locking? Perfect! Join me in this session for a deeper dive into how SQL Server implements physical locking with lightweight synchronization objects like Latches and Spinlocks. We will cover the differences between both, and their use cases in SQL Server. You will learn best practices for analyzing and resolving Latch and Spinlock contention in your performance-critical workload. At the end we will talk about lock-free data structures: what they are, and how they are used by the In-Memory OLTP technology that has been part of SQL Server since SQL Server 2014.
Have you ever looked at an execution plan that performs a join between two tables and wondered what a "Left Anti Semi Join" is? Joining two tables in SQL Server isn't always straightforward! Join me in this session for a deep dive into how join processing happens in SQL Server. In the first step we lay out the foundation of logical join processing. We will then dive further into physical join processing in the execution plan, where we will also encounter the "Left Anti Semi Join". After attending this session you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will be the easiest part for you.
UNIQUEIDENTIFIERs as Primary Keys in SQL Server - a good or bad best practice? They have a lot of pros for DEVs, but DBAs just cry when they see them enforced by default as unique Clustered Indexes. In this session we will cover the basics of UNIQUEIDENTIFIERs, why they are bad and sometimes even good, and how you can find out whether they affect the performance of your performance-critical database. If they are affecting your database negatively, you will also learn some best practices for resolving those performance limitations without changing your underlying application.
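As a flavour of the problem and one possible mitigation, a hedged sketch; the table names are illustrative:

```sql
-- Random GUIDs as a clustered PK: inserts land on random pages,
-- causing page splits and fragmentation.
CREATE TABLE dbo.BadOrders
(
    OrderID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
        PRIMARY KEY CLUSTERED
);

-- One common mitigation that needs no application change:
-- sequential GUIDs keep new rows at the end of the index.
CREATE TABLE dbo.BetterOrders
(
    OrderID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY CLUSTERED
);
```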
SQL Server needs its locking mechanism to provide the isolation aspect of transactions. As a side-effect, your workload can run into deadlock situations - a headache for you as a DBA is guaranteed! In this session we will look into the basics of locking and blocking in SQL Server. Based on that knowledge you will learn about the various kinds of deadlocks that can occur in SQL Server, how to troubleshoot them, and how you can resolve them by changing your queries, your indexing strategy, and your database settings.

There are so many different concepts and things that you have to know to configure memory settings for SQL Server correctly. In this session you will get an overview of the memory architecture of SQL Server and how SQL Server uses memory internally. You will also learn how you can track memory consumption inside SQL Server, and which memory configuration options are most relevant for fine-tuning and troubleshooting SQL Server.
One of the newest buzzwords of the last few months and years is definitely Kubernetes - or K8s. With SQL Server 2019, Microsoft has made a huge investment in Kubernetes and provides a Docker container image that can run on top of it. In this session we will explore the key ideas of Kubernetes. You will learn the basic concepts around Kubernetes, and how you can run SQL Server 2019 successfully with it.
Power BI Premium enables you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will dive deep into exciting, new and upcoming features including aggregations for big data, unlocking petabyte-scale datasets that were not possible before! We will uncover how the trillion-row demo was built in Power BI on top of HDI Spark. The session will focus on performance, scalability, and application lifecycle management (ALM). Learn how to use Power BI Premium to create semantic models that are reused throughout large enterprise organizations.
Database and application development need to be synchronized in order to provide the proper behavior; otherwise something will be broken.

Database unit tests can be the contract between database and application. This contract not only prevents the database from breaking its agreement with the application, but also ensures that the database presents the expected behavior.

This talk will address the basic steps to introduce unit testing to SQL (using tSQLt, a database unit testing framework), and to create a deployment pipeline able to create a test environment (local machine, database as a service, Docker), run tests, create test reports, and deploy if the build succeeds.

So, the plan is to show everything from writing the first test to adding a full set of database tests to the deployment pipeline.
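As a taste of tSQLt, a minimal sketch of a first test; the test class, the dbo.OrderLines table, and the dbo.GetOrderTotal function are all hypothetical:

```sql
-- Create a test class (a schema that groups related tests).
EXEC tSQLt.NewTestClass 'testOrders';
GO
CREATE PROCEDURE testOrders.[test that order total sums line amounts]
AS
BEGIN
    -- FakeTable isolates the table under test from real data.
    EXEC tSQLt.FakeTable 'dbo.OrderLines';

    INSERT INTO dbo.OrderLines (OrderID, Amount)
    VALUES (1, 10.00), (1, 15.00);

    DECLARE @actual DECIMAL(10,2) = dbo.GetOrderTotal(1);

    EXEC tSQLt.AssertEquals @Expected = 25.00, @Actual = @actual;
END;
GO
-- Run every test in the class.
EXEC tSQLt.Run 'testOrders';
```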
As a SQL DBA you want to know that your SQL Server estate is compliant with the rules that you have set up. Now there is a simple method using PowerShell, and you can get the results in Power BI or embed them into your CI/CD solution.

Details such as:

How long since your last backup?
How long since your last DBCC Check?
Are your Agent Operators Correct?
Is AutoClose, AutoShrink, Auto Update Stats set up correctly?
Is DAC Allowed?
Are your file growth settings correct, what about the number of VLFs?
Is your latency, TCP Port, PS remoting as you expect?
Is Page Verify, Data Purity, Compression correctly set up?
And many more checks (even your own) can be achieved using the dbachecks PowerShell module brought to you by the SQL Collaborative team.

And it's all configurable, so that you can validate YOUR settings.

Join one of the founders of the module, Rob Sewell MVP, and he will show you how easy it is to use this module and free up time for more important things, whilst keeping the confidence that your estate is as you would expect it.
A live demo of assembling a Raspberry Pi, breadboard, and temperature sensor will be performed to extract current temperature data and push it to IoT Hub, Stream Analytics, and a live streaming Power BI dashboard, all with Python.
No, not that sort of slacking - the Slack type. We'll be ignoring the gifs and looking at how, using Slack, PoshBot, dbatools and a little bit of PowerShell glue, you can build a simple solution that enables you to quickly respond to and fix problems from anywhere without having to carry anything more specialised than your smart phone. And we'll see how you can then extend that to allow you to hand off tasks to other users and teams in a safe, secure manner.
A backup that's not been tested isn't worth the disk space it's wasting.

To make sure you can recover from a disaster you need to have confidence that your backups are working properly, and that you can restore them in the time you've promised your management. Testing this used to be really tricky, but with the dbatools project we've tried to make it as simple as possible.

In this session I'll show why testing your backups regularly and frequently is a vital part of a Disaster Recovery plan. I'll explain how this can be used to ensure you can meet your RTO and RPO, giving you strategies that will stop you worrying about how long a recovery will take and let you prove to your boss and auditors that your recovery strategy is rock solid and reliable.
Struggling to cope with extremely large tables? Is performance tuning a nightmare? Struggling to delete old data? 

The solution you're looking for could be partitioning. Properly introduced to SQL Server with the 2005 release, and available in all editions since SQL Server 2016 Service Pack 1, partitioning allows you to split your tables into manageable chunks without having to rebuild your code and applications.

In this session I'll take you through the benefits of partitioning, implementation best practices, pitfalls to avoid and strategies you can take away to use.
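As a flavour of the implementation, a minimal partitioning sketch; the names and boundary dates are illustrative:

```sql
-- Partition by year: boundary values define the splits.
CREATE PARTITION FUNCTION pfOrderYear (DATE)
    AS RANGE RIGHT FOR VALUES ('2017-01-01', '2018-01-01', '2019-01-01');

-- Map every partition to a filegroup (all to PRIMARY here).
CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderID   INT  NOT NULL,
    OrderDate DATE NOT NULL
) ON psOrderYear (OrderDate);

-- Deleting old data becomes a fast metadata operation (SQL 2016+).
TRUNCATE TABLE dbo.Orders WITH (PARTITIONS (1));
```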
Where to start when your SQL Server is under pressure? If your server is misconfigured or strange things are happening, there are a lot of free tools and scripts available online. These tools will help you decide whether you have a problem you can fix yourself or whether you really need a specialized DBA to solve it. Those scripts and tools are written by renowned SQL Server specialists and provide you with insights into what might be wrong on your SQL Server in a quick and easy manner. You don't need extensive knowledge of SQL Server, nor do you need expensive tools, to do your primary analysis of what is going wrong. And in a lot of instances these tools will tell you that you can fix the problem yourself.
In this presentation we’ll go through Microsoft’s new tool, Azure Data Studio (formerly SQL Operations Studio), what it can do, and whether it can help in your environment;

    – Inbuilt T-SQL Editor (with IntelliSense) – Could you replace SSMS?
    – Smart T-SQL Code Snippets – This is new and a massive time saver
    – Customizable Dashboards for your server estate – great if you don’t have a monitoring solution currently
    – Connection Management – Group your servers to help you organise what’s important.
    – Integrated Terminal (run your PowerShell, sqlcmd, bcp etc directly in Ops Studio)

We’ll show real life uses for this tool and leave you with the ability to see if it’s something you want to jump into and start using (to make your life easier).
The ability for multiple processes to query and update a database concurrently has long-been a hallmark of database technology, but this feature can be implemented in many ways. This session will explore the different isolation levels supported by SQL Server and Azure SQL Database, why they exist, how they work, how they differ, and how In-Memory OLTP fits in. Demonstrations will also show how different isolation levels can determine not only the performance, but also the result set returned by a query. Additionally, attendees will learn how to choose the optimal isolation level for a given workload, and see how easy it can be to improve performance by adjusting isolation settings. An understanding of SQL Server's isolation levels can help relieve bottlenecks that no amount of query tuning or indexing can address - attend this session and gain Senior DBA-level skills on how to maximize your database's ability to process transactions concurrently.
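As a small taste, a sketch of running the same query under two isolation levels; the Accounts table is hypothetical:

```sql
-- Default: readers can block behind writers.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;
SELECT SUM(Balance) FROM dbo.Accounts;   -- may wait on concurrent updates
COMMIT;

-- Row versioning: a consistent point-in-time view, no blocking of readers.
-- Requires first enabling it at the database level:
-- ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT SUM(Balance) FROM dbo.Accounts;
COMMIT;
```

The same query can return different results (and exhibit very different blocking behaviour) depending on which level is in effect.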
Azure Cosmos DB has quickly become a buzzword in database circles over the past year, but what exactly is it, and why does it matter? This session will cover the basics of Azure Cosmos DB, how it works, and what it can do for your organization. You will learn how it differs from SQL Server and Azure SQL Database, what its strengths are, and how to leverage them. We will also discuss Azure Cosmos DB's partitioning, distribution, and consistency methods to gain an understanding of how they contribute to its unprecedented scalability. Finally we will demonstrate how to provision, connect to, and query Azure Cosmos DB. If you're wondering what Azure Cosmos DB is and why you should care, attend this session and learn why Azure Cosmos DB is an out-of-this-world tool you'll want in your data toolbox!
"Learn DAX", they said. "It'll be easy with your background", they said. Well, it turns out that it wasn't. Transitioning from SQL to DAX gave me nightmares and ulcers, and this session is for everyone who is looking over the edge and considering undertaking the challenge. It is in no way impossible, only frustrating, as DAX has a very different approach to data from what a SQL programmer is used to. In this introduction to the DAX language, I'll be putting a somewhat different spin on DAX from a beginners' standpoint.  I'll be going over the basic mental mistakes that many people trying to learn DAX do, how to solve them and how to put your brain on the right track!
We've all been there - the client wants a report, gives you some random requirements and expect you to "sort it out". When the report is done and presented to the client, the silence heralds yet another less-than-successful report delivery. It's time to turn this on its head: build the report as they watch, while having a constant discussion about the content! In this session I go through a new approach to capturing requirements and creating a report on the fly, all with the benefit of a happier client, a more relevant report and - perhaps surprisingly - less work for the report developer!
Technical speakers are most often very good at the "technical" part of speaking but sometimes leave the "speaking" part with something to be desired. The body language may be stilted (or in some cases nonexistent), the voice can be a dull monotone or the font size can make the demos unreadable from more than five feet away. We have all been there - the technical content is great but the presenter just won't cut it. This session is for anyone who either is or aspires to become a technical speaker, and I will go through tips and techniques for better understanding and using body language, how to use your voice for maximum benefit and finally give some pointers, ideas and software tips to make sure your demos are as clear and readable as possible.
Special care must be taken when managing virtualized database servers. Performance bottlenecks are everywhere, but most are silent and lurk in the shadows without VM admins knowing they exist. DBAs “feel” the performance problems but cannot find them in the database, and the blame starts to grow. These areas slow down business and create strife in the IT department. Come learn from experts in database virtualization all of the areas that VM administrators should focus on to maintain and improve performance of these heavy resource consumers, such as CPU scheduling techniques, vNUMA, and improving disk queueing. Leave this session armed with the deep-dive skills to troubleshoot and improve the performance of the most demanding virtualized database servers.

Think infrastructure in the cloud is still just for sysadmins? Think again! As your organization moves into the cloud, infrastructure architecture skills are more important than ever for DBAs to master. Expert knowledge of cloud-related infrastructure will help you maintain performance and availability for databases in the cloud. For example, do you know what an IOP is? Should you use a database-as-a-service or provision a cloud-based VM? How many compute resources does your database consume during a given day? Can you secure it properly? Come learn many of the key cloud infrastructure points that you should master as the DBA role continues to evolve.
If your boss asked you for the list of the five most CPU-hungry databases in your environment six months from now for an upcoming licensing review, could you come up with an answer? Performance data can be overwhelming, but this session can help you make sense of the mess. Twisting your brain and looking at the data in different ways can help you identify resource bottlenecks that are limiting your SQL Server performance today. Painting a clear picture of what your servers should be doing can help alert you when something is abnormal. Trending this data over time will help you project how much resource consumption you will have months away. Come learn how to extract meaning from your performance trends and how to use it to proactively manage the resource consumption from your SQL Servers.
Azure Data Factory V2 came with many new capabilities and improvements. One of the biggest game-changers is the Data Flow feature, allowing you to transform data at scale without having to write a single line of code.

In this session, we will look at the new Data Flow feature in Azure Data Factory (ADF), compare it to Data Flows in SQL Server Integration Services (SSIS), and build a demo to see the new data transformation capabilities in action. Along the way, we will cover design patterns, best practices, and lessons learned.

Cloud data integration has never been easier! 
Data virtualization is an alternative to Extract, Transform and Load (ETL) processes. It handles the complexity of integrating different data sources and formats without requiring you to replicate or move the data itself. Save time, minimize effort, and eliminate duplicate data by creating a virtual data layer using PolyBase in SQL Server.

In this session, we will first go through fundamental PolyBase concepts such as external data sources and external tables. Then, we will look at the PolyBase improvements in SQL Server 2019. Finally, we will create a virtual data layer that accesses and integrates both structured and unstructured data from different sources. Along the way, we will cover lessons learned, best practices, and known limitations.
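As a rough sketch of the concepts the session starts with, an external data source and external table over a remote SQL Server might look like this (all names and connection details are placeholders, not taken from the session):

```sql
-- Illustrative PolyBase setup (SQL Server 2019 syntax): an external
-- data source pointing at another SQL Server, and an external table
CREATE EXTERNAL DATA SOURCE SqlSalesDb WITH (
    LOCATION = 'sqlserver://salesserver01',   -- remote server (placeholder)
    CREDENTIAL = SalesCredential              -- a database-scoped credential
);

CREATE EXTERNAL TABLE dbo.RemoteOrders (
    OrderID INT           NOT NULL,
    Total   DECIMAL(10,2) NOT NULL
) WITH (
    DATA_SOURCE = SqlSalesDb,
    LOCATION    = 'Sales.dbo.Orders'          -- remote three-part name
);

-- Query it like any local table; PolyBase pushes work to the source
SELECT TOP (10) * FROM dbo.RemoteOrders;
```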
Azure Data Studio is a modern, lightweight, and cross-platform editor for data professionals. It is built for data professionals who spend most of their time developing data solutions or editing queries, and works with both on-premises and cloud sources.

In this session, we will go through some of the core features of Azure Data Studio that can increase your productivity: IntelliSense, code snippets, dashboards, and extensions. We will also cover common settings, keyboard shortcuts, and the command palette. Finally, we will look at some of the similarities and differences between Azure Data Studio and SQL Server Management Studio (SSMS) to help you decide which editor is right for you.

Join us to learn about the latest Azure Data Studio updates and features, and see how you can increase your productivity!
You can have the perfect indexing strategy and go to great lengths to write optimizer- and access-path-friendly T-SQL, but there is a fundamental scalability anti-pattern that will prevent the transactional throughput of your workload from scaling. Because of this, more hardware does not equal more throughput. This is the anti-pattern whereby multiple threads require access to a resource that is singleton by nature, and a spinlock is required to ensure that only one thread can access it at a time. There are no knobs, levers or dials that can be altered to get around this problem. However, there is an innovative approach using containers that can help overcome this final frontier of scalability.
The in-memory OLTP engine has been around since version 2014 of the database engine. Do you need to be a SaaS provider processing billions of transactions a day to use it? The short answer is absolutely not, and this session will show you why by presenting a number of use cases that most people should find useful, from the bulk loading of data to scalable sequence generation. But what is wrong with the legacy engine that we all know and love? Why do we need the in-memory engine? Along the way, this session will provide an overview of what the in-memory engine is, why it is required, and why, with SQL Server 2016 Service Pack 1, it is more cost-effective to use than ever before.
Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2017 / 2019 has introduced many new features, and in this session we will learn quite a few of them. Here is a glimpse of the features we will cover:

  • Adaptive Query Plans
  • Batch Mode Adaptive Join
  • New cardinality estimator for optimal performance
  • Adaptive Query Processing
  • Indexing improvements
  • Introduction to Automatic Tuning / Intelligent Query Processing

This session will be the most productive time for any DBA or developer who wants to quickly jump-start with SQL Server 2017 / 2019 and its new features.
Not many people know that too many indexes are bad not only for INSERT, UPDATE and DELETE statements, but for SELECT statements as well. We will dive deeper into this subject and reveal the behind-the-scenes story with examples, showing how to solve a performance problem many were never even aware of. Slow-running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
“Oh! What did I do?” Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. In this session we will focus on some of the common errors and their resolution. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
Pop quiz DBA: Your developers are running rampant in production. Logic, reason, and threats have all failed. You're on the edge. What do you do? WHAT DO YOU DO?

Hint: You attend Revenge: The SQL!

This session will show you how to "correct" all those bad practices. Everyone logging in as sa? Running huge cursors? Using SELECT * and ad-hoc SQL? Stop them dead, without actually killing them. Ever dropped a table, or database, or WHERE clause? You can prevent that! And if you’re tired of folks ignoring your naming conventions, make them behave with Unicode…and take your revenge!
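As a taste of the kind of mischief on offer, here is a sketch of a database-level DDL trigger that stops table drops dead (the trigger name and message are invented for illustration, not taken from the session):

```sql
-- A database-level DDL trigger that blocks DROP TABLE outright
CREATE TRIGGER trg_NoDropTables
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR ('Tables do not get dropped here. See the DBA.', 16, 1);
    ROLLBACK;  -- undo the attempted DROP
END;
```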

Revenge: The SQL! is fun and educational and may even have some practical use, but you’ll want to attend simply to indulge your Dark Side. Revenge: The SQL! assumes no liability and is not available in all 50 states. Do not taunt Revenge: The SQL! or Happy Fun Ball.
How do you test your SSIS packages? Do you prepare them, set the parameters and variables, maybe get some sample or production data and run a few times by hand in SSDT? It’s not a bad practice when you start your ETL journey, but after some time you probably think about automation. If not – you should. It’s time for you to start automated SSIS unit and integration testing.

In this session, I will show you how to begin the adventure of testing packages using ssisUnit – the free SSIS testing library. I will build tests for example packages from scratch and then automate the whole process.
Data lakes have been the hyped technology of choice for several years, but what are they? What do they do? How do they work? And most importantly - why should I care? Depending on your job role, the answers to these questions can be very different.
With the second generation of Azure Data Lake Storage right around the corner, it's time to get to grips with this technology. Regardless of whether you're a DBA wanting to swim in a different pond, a BI architect already knee-deep in data, or a data scientist skeptical about drinking from something you know little or nothing about, this session will give you new insights and ideas. Let's wade through the hype and see where this piece of the puzzle fits in your data landscape!
Have you ever had performance tank despite the code working fine in another environment? Maybe you've heard that some SQL is bad, but not why? If so, this is the session for you!
This session starts with a walkthrough of some of the basic settings in SQL Server and how they affect you as a developer. It continues with key tips on which settings to change, why some code will wreak havoc on your performance, and how isolation levels matter (and why NOLOCK can be an exceptionally bad idea!). The session is led by a 20-year DBA veteran who decided to help developers understand performance issues by seeing things from his perspective.
If you want to explore how default settings kill your performance, investigate why harmless-looking SQL might not be quite so harmless, and gain insight into how isolation levels affect function and performance, this session will give you the tools to think outside the box and incorporate database engine knowledge into your developer prowess!
At conferences like this there are many sessions on what's new, cool new stuff, roadmaps and so on. But in real life we also have to work with existing and proven technologies. SQL Server Reporting Services is one of those. Many organizations still use it heavily and depend on it for creating paginated reports and web dashboards. And it's expected to be around for a long time.

In this session I will share miscellaneous SSRS tips and tricks: for instance, dealing with multi-language scenarios, handling corporate styles, storing user preferences, creating a good dashboard experience in the portal, and more. I will try to share as many tips and tricks as will fit in one hour.

This session assumes you have a working experience with SSRS, but you need not be a guru.
Creating a proper Tabular Model is essential to the success of your modern BI solution. If you set up the foundations properly, you will benefit when building the relationships, formulas and visualizations, and your self-service BI users will understand and use the data model better. This talk guides you through the process of creating a Tabular Model. The session will be packed with very practical tips and tricks, and the steps you should take to create a proper model. The session is based on real-life projects and backed with some theory. After this hour you will understand how to create a proper model, optimize for memory usage and speed, enhance the user experience, use some DAX expressions, and use the right tools for the job. You will go home with a very useful step-by-step guide.

Extended Support for SQL Server 2008 and 2008R2 ends in July 2019. With so many databases still using these versions, upgrade planning should be on everyone's current to-do list.

Microsoft's investment in tooling in this space will certainly help to ease the pain normally involved with upgrades. The required toolkit for any DBA planning their upgrades from SQL Server 2008 and SQL Server 2008R2 should include:

  • Database Migration Assistant (DMA) - to help you plan and identify the correct target version.
  • Database Experimentation Assistant (DEA) - to make it easier to do A/B testing between options to check performance.
  • Database Migration Service (DMS) - to move your DB to Azure SQL DB with almost zero downtime

In this session you will see how each of these tools can be used to reduce the risk involved in your upgrade or migration.

You will see how DMA hugely enhances the old Upgrade Advisor functionality and can even be used to help plan the required Service Tier if you are moving to Azure SQL Database. It can also be used to complete the migration of schema and/or data between your old database and your new one.

DMS can enhance the migration part when moving to an Azure PaaS database or Managed Instance by providing an online migration component. This can help you achieve your migration with minimal application downtime.

The DEA provides a way to perform multiple A/B tests of your database performance between your current platform and the target one being considered. The reports can highlight any changes in performance, query plans, or compatibility issues from non-schema sources such as dynamic queries.

This session will give you the confidence to move away from SQL Server 2008 and SQL Server 2008 R2.

Visualization is the most powerful way to disseminate information. Mistakes can be both intentional and unintentional, and can go unnoticed.

Humans see images 60,000x better than text but are we always seeing what is being shown? In this talk, we will look at ways a visual designer can intentionally or unintentionally confuse readers by using techniques that are common but not correct. We will discuss topics such as color theory, chart selection and placement among others. Come join us to learn what makes a visualization clear and learn how to convey your story.
We have more information available to us today than ever before.  So much so that we run the risk of not being able to tell concise stories.  How are you presenting it?  Does it convey what you think?  There's a lot more to creating that story than just getting the correct information.

Is there a better way?  Come find out! 

Come learn not just the do's and don'ts, but the whys…
Relational databases have their strengths. Ironically data relationships are not one of them. Graph databases excel in this department using nodes and edges. They are optimized to find and view relationships using graph theory.

One of the best new features of SQL Server 2017 is the graph database! It brings us the best of both worlds in one easy platform! Come learn about the history of graph databases, how they work, and why you should be using them!
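For a flavour of what the feature looks like, here is a minimal sketch of SQL Server 2017 graph syntax (table and column names are purely illustrative):

```sql
-- Graph objects in SQL Server 2017: two node rows sets and an edge
CREATE TABLE Person (ID INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE Likes AS EDGE;

-- Find who likes whom using graph pattern matching
SELECT p1.Name AS Liker, p2.Name AS Liked
FROM Person AS p1, Likes, Person AS p2
WHERE MATCH(p1-(Likes)->p2);
```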
Welcome to the dungeon. Yes, SQL Server memory concepts are like entering a dungeon where you are guaranteed to get lost. It’s dark and complex out there and not many have come back alive. Join Microsoft Certified Master of SQL Server, Amit Bansal, and find your way out from the dungeon. In this deep-dive session you will understand SQL Server memory architecture, how the database engine consumes memory and how to track memory usage. You will also see a few practical examples of memory troubleshooting. Complex concepts will be made simple and you will see some light beyond the darkness. This session will be an eye-opener for you. Assured.
BI developers use Visual Studio for SSAS, SSRS and SSIS development all the time, but tend to forget to use Visual Studio for their database development.

A database project, even when you generate all your structures, opens the road to putting your database code under source control and moving forward to continuous integration/deployment for the database part of your data warehouse. You can start from scratch, or use your current databases as a starting point with the reverse-engineering option.
With a template in Visual Studio you can generate all your tables and views in a second from a metadata solution.

I'll show how to set up your database project, how to generate tables and views from metadata directly in the database project, and where a database project can help in deploying your entire solution quickly to all your environments, including security settings and static base data.
You have probably seen the Q&A feature in Power BI demoed many times, but is it just a demo feature? Do users really want to type questions and see Power BI visuals generated for them automatically? Does it work well in real-world scenarios? In this session you’ll learn:

  • What Q&A is, where it can be accessed by users and who it might be useful to
  • How the design of your dataset and reports can influence how well Q&A works
  • How advanced features like Synonyms and editing the Linguistic Schema file can enable more complex interactions 
  • Whether it’s worth your time and effort to use Q&A in your Power BI deployment
Dataflows are an important new data preparation and loading feature in Power BI. In this session you will learn:
  • What dataflows are and when you might want to use them
  • The advantages and disadvantages of using them over Power BI Desktop's data loading features
  • Configuring incremental refresh
  • Additional features available in Power BI Premium
  • Integration with Azure Data Lake Store, the Common Data Model and other Microsoft services
If you want to get started with continuous integration and deployment using the TFS toolkit, this session is for you. It gives an overview of the end-to-end architecture of MS BI ALM and practical tips on how to make it happen with the TFS toolkit.

This presentation aims to provide a learning framework and show what is possible to achieve with TFS. It will also cover automated testing using NBi, continuous integration with SSDT, and a demo of TFS Deployment Manager for a typical BI application. This will include a Data Warehouse (DW) database project, SSIS, and SSAS projects.

The material does not assume prior knowledge of TFS administration, but some experience using TFS source control and general TFS terminology will be helpful.
I have recently had a lot of practice working on a large enterprise data warehouse project with over 40 TB of data (5 TB compressed), where most fact tables and many dimensions were created with Clustered Columnstore Indexes (CCI). I would like to share the lessons I learned about how to load, query and manage data in CCIs. Amongst other things you will discover the following:
- CCI for temporary tables: method or a madness?
- Patterns for CCI on dimensions
- Partitioning CCI: tips and tricks
- Data loading DOs and DON'Ts
- Data management DOs and DON'Ts
- How to make the most of your CCI
The session assumes familiarity with the CCI basics: delta-store, tuple mover, rowgroup.  Please familiarize yourself with these before coming to this session.
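As a small taste of the loading patterns discussed, here is a sketch of creating a CCI and bulk-loading it so that large batches bypass the delta-store (table names are illustrative):

```sql
-- Turn a fact table into a clustered columnstore index
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

-- Bulk inserts of 102,400+ rows with TABLOCK can land directly in
-- compressed rowgroups instead of passing through the delta-store
INSERT INTO dbo.FactSales WITH (TABLOCK)
SELECT * FROM staging.FactSales;
```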
This session will focus on maps in Power BI. We'll look at the most appropriate maps for the kind of data you’re visualising, with some tips and techniques for working with larger volumes of data. Through back-to-back demos, we’ll focus particularly on the Map and ShapeMap visuals that come built in to Power BI, then take a deeper look at some of the more advanced capabilities available with the MapBox and Icon Map custom visuals.

The dbatools module now has over 300 commands, and anyone who wants to get started can be overwhelmed by the amount of functionality in this module.
There are not enough hours in the day to get everything done as a DBA. We need to automate our repetitive tasks to free up time for the important and more fun tasks.
In this session I'll show you a set of commands which will help you start automating your tasks.
Database administrators have the responsibility to provision other environments with production data. This is mostly done for developers that have to query against the most recent data.

We spend a lot of time and effort with that, because we want to do it right and we're dealing with lots of data in most of the cases.

What if I told you that you can provision your environments within minutes and save lots of space in the process using PowerShell?

Do you want to spend less time provisioning databases and save a lot of space in the process?
Come to my session and I'll show you how to do that.
Customers have feelings. If you meet with all your customers regularly you probably think you know how they feel. But what if you have millions of customers? How can you even begin to get to understand how each of those customers feel about your business?

This is where cognitive APIs come in. By harnessing the power of deep neural networks, we can accurately derive emotional insight from our data and use this to improve our offering to our customers. In this session we will look at the tools needed to connect your data to the right APIs and how we can leverage their capabilities. Starting out with sentiment analysis, we move on to facial expression recognition and image tagging.

Attendees will gain knowledge of how to connect different types of data to appropriate cognitive services. This talk will be of interest to anyone who deals regularly with customer data or has a curiosity toward data science and its application within a business.
In this demo-rich whirlwind session, we will run through the Microsoft AI stack to give you an understanding of what tooling is available and the why, how and where you would use it.

If you are careful not to blink, by the end of this session you will gain an understanding of the landscape of the various AI toolings that are available and how you might utilise them.

Technologies we will cover are:

  • Azure Machine Learning Studio
  • Cognitive Services
  • Running R and Python in SQL
  • Azure Notebooks
  • Azure Databricks
  • and more

In this session we will look at the Machine Learning capabilities of SQL Server 2016, SQL Server 2017 and Azure SQL Database. We will discuss and demonstrate everything from simple in-database ML predictions to running deep neural networks for sentiment analysis and image categorisation.

We will look at how to operationalise your AI workloads with both SQL Server on-premises and in the cloud, as well as performance considerations for your workloads.

Technologies we will cover are:

  • SQL 2016/2017
  • Azure SQL Database
  • In database machine learning with R and Python

In this session we will cover three different ways to create and operationalise Machine Learning models, from drag-and-drop interfaces to open-source coding. Technologies we will cover are Azure Machine Learning Studio, SQL Server, and Azure Machine Learning Services with the new azureml Python SDK.

At the end of this session you will understand some of the toolings available to make your AI development simpler and easier, and how to easily move from raw data to productionised predictive models.

In this session we will look at how Power BI interacts with some of the more common data services, and the considerations involved in choosing one over the other. We will look at the various SKUs of Power BI and in which architectures you should choose one over the other. We will also look at the various security models and general gotchas in terms of designing a Power BI solution.
ETL development can be packed with variety or as repetitive as WHILE 1 = 1 – and when it's the latter it's time-consuming, boring and error-prone. In this session I'll get the ball rolling with some basic dynamic T-SQL before supercharging it with metadata to generate (and re-generate) a variety of ETL components in T-SQL. We'll wrap up with some thoughts about how to tackle this in the real world with a heady mixture of good practice and metadata abstraction.
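To illustrate the starting point, here is a minimal sketch of metadata-driven dynamic T-SQL (the metadata table and columns are invented for illustration):

```sql
-- Generate a truncate statement per enabled staging table,
-- driven entirely by a metadata table
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'TRUNCATE TABLE staging.' + QUOTENAME(TableName)
             + N';' + CHAR(13)
FROM etl.TableList          -- hypothetical metadata table
WHERE IsEnabled = 1;

EXEC sys.sp_executesql @sql;  -- run all generated statements
```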
SQL Server 2016 introduced an incredible monitoring and tuning technology called Query Store that provides us with insight into the optimizer’s plan choices and the history of changes in plans. Query Store allows you to revert to a previous, better-performing plan through plan forcing. In this session, we’ll see what kinds of query performance problems can be solved with Query Store and what kinds can’t be. We’ll look at how Query Store evaluates query performance information and how we can revert to an old plan. Finally, we’ll see a new SQL Server 2017 technology called Automatic Plan Correction that is built on top of Query Store.
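For orientation, here is a sketch of inspecting Query Store and forcing an earlier plan (the IDs at the end are examples; you would look up your own first):

```sql
-- Inspect the plans Query Store has captured per query
SELECT q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id;

-- Force the better-performing plan for a given query
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```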
Documentation has never been this much fun! In this session I'll be introducing Graphviz – free, open-source, graph visualisation software with relevance that extends beyond traditional graph applications. I will show how we can use it to build informative visualisations of common data management artefacts, specifically ETL pipelines and SQL Server database diagrams. Combining the approach with sources of metadata we'll see how we can quickly and automatically generate suites of interlinked diagrams to describe large and complex systems in an easy-to-navigate way. 
As data engineers we're great at putting together and managing complex process flows, but what happens if we stop trying to control the flow and start thinking about the metadata it needs instead? In this session we'll look at a variety of ETL metadata, how we can use it to drive process execution, and see benefits quickly emerge. I'll talk about design principles for metadata-first process control and show how using this approach reduces complexity, enhances resilience and allows a suite of ETL processes adaptively to reorganise itself.
Tempdb has been a source of configuration confusion since the very first version of SQL Server. Each SQL Server instance has only one tempdb database, used by all users in all databases. We’ll look at all the different uses of tempdb and how to configure your tempdb to support them. We’ll look at all the different kinds of temporary tables and see when you should use each type: global and local temp tables, table variables and work tables. We’ll explore the various tools available that let you monitor the use (and abuse) of your tempdb database, and we'll look at some best practice guidelines for how to keep your tempdb as healthy as possible.
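As a quick illustration of the object types and monitoring tools covered (names are illustrative):

```sql
-- The temporary-table flavours the session compares
CREATE TABLE #LocalTemp   (ID INT);  -- visible to this session only
CREATE TABLE ##GlobalTemp (ID INT);  -- visible to all sessions
DECLARE @TableVar TABLE   (ID INT);  -- batch-scoped, no statistics

-- One way to watch tempdb consumption per session
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage;
```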
As long as it is only a matter of SELECT, INSERT etc., you can put these statements in a stored procedure, and users only need permission to run the procedure. That is, the stored procedure acts as a container for the permission. But you will find that this no longer seems to work when you use dynamic SQL, create non-temp tables, or try other "advanced" things. The story is that the procedure can still act as a container for the permission, and users do not need to be granted elevated permissions, if you take extra steps in the form of certificate signing or EXECUTE AS. In this session you will learn how to use these techniques and why one of them is better than the other.

The session does not stop at the database level, but it also looks at how these techniques can be used to permit users to perform actions that require server-level permission in a controlled way, for instance letting a database owner see all sessions connected to his database but not other databases. As a side-effect of this, you will also learn why the TRUSTWORTHY setting for a database is dangerous.
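A minimal sketch of the EXECUTE AS approach, with invented object names:

```sql
-- The procedure executes as its owner, so users who are granted
-- EXECUTE can run dynamic SQL they could not run directly
CREATE PROCEDURE dbo.SearchOrders @col sysname
WITH EXECUTE AS OWNER
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT ' + QUOTENAME(@col) + N' FROM dbo.Orders';
    EXEC sys.sp_executesql @sql;
END;

GRANT EXECUTE ON dbo.SearchOrders TO SalesUsers;
```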
Temporal tables, introduced with SQL Server 2016, have a number of useful applications within the data warehouse. This session provides an introduction to temporal tables and what you need to know about them in order to get started using them in your data warehouse.
Learn how to create temporal tables from scratch or convert existing tables, find out what the catches are and when you should (and probably shouldn't) consider using them, see how they can simplify the code needed compared to other solutions, and take a brief look at performance compared to the alternatives. Finally, we walk through some real-world use cases.
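For a taste of the syntax, here is a sketch of a system-versioned temporal table and a point-in-time query (names and dates are illustrative):

```sql
-- A system-versioned temporal table created from scratch
CREATE TABLE dbo.Product (
    ProductID INT PRIMARY KEY,
    Price     DECIMAL(10,2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- Ask what a row looked like at a point in time
SELECT * FROM dbo.Product
FOR SYSTEM_TIME AS OF '2019-01-01'
WHERE ProductID = 1;
```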
You might have heard "don't use cursors, they are slow!". In this presentation, you will learn what this actually means: you should normally write set-based statements instead, and I will explain why they generally are faster than writing your own loops. But I will also look at situations where using a loop is, for one reason or another, preferable, and you will learn that the best way to run a loop in most cases is a cursor, provided that you implement it properly. The presentation also gives some tips on how you can troubleshoot performance problems with loops.
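As a sketch of what "implemented properly" tends to mean, here is a cursor declared LOCAL FAST_FORWARD to avoid the expensive defaults (table and column names are illustrative):

```sql
-- LOCAL and FAST_FORWARD avoid the costly defaults
-- (global scope, updatable, dynamic)
DECLARE @ID INT;
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT CustomerID FROM dbo.Customers;

OPEN cur;
FETCH NEXT FROM cur INTO @ID;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-row work goes here
    FETCH NEXT FROM cur INTO @ID;
END;
CLOSE cur;
DEALLOCATE cur;
```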
Early in your career you learnt that loops are bad and that you should use set-based statements. However, there are situations where trying to process everything at once gets you into problems. In this session we will learn what these situations are and how we can address them by splitting up the work in batches. We will learn techniques for batching, and pitfalls to watch out for so that we don't introduce new performance issues.

We will also look at batching from a different angle: problems that require a loop for, say, a single customer, but where we can process all customers side by side for better performance.
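To illustrate the basic batching pattern, here is a sketch of a batched delete (the table name and batch size are invented):

```sql
-- Delete in batches so locks and the transaction log stay small
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LoggedAt < DATEADD(YEAR, -1, SYSDATETIME());
    SET @rows = @@ROWCOUNT;  -- stop when no rows remain
END;
```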
SQL disk configuration and planning can really hurt you if you get it wrong in Azure. There is a lot more to getting SQL Server right on Azure VMs than next-next-next. Come along and dive deeper into Azure storage for SQL Server. Topics covered include:

  • SQL storage capacity planning concepts
  • Understanding storage accounts, VM limits, and disk types
  • Understanding and planning around throttling
  • Benchmarking
  • Optimal drive configuration
  • TempDB considerations in Azure
  • Hitting “max” disk throughput
Ever wondered what actually happens when a cube is “processed”, why it takes so long, how you can configure and optimise model processing, or what strategies people commonly use for improving processing? This session provides a deep dive into cube processing for tabular models, to help you understand how it works and what you can do to work with it. Come to this session for a better understanding of how to configure, optimise and tune cube processing. Included in the session are case studies from our performance lab and some sample tools to analyse processing logs.
You want to use all the new and sexy cloud-based data services like Power BI, Flow and hosted SSAS, but your data remains strictly on-premises. Come along to see how to use the data gateway to connect Azure to your on-premises data.

Content includes:
- Architecture
- Installation and Configuration
- Deployment Patterns
- Transferring Identity and UPNs.
- Implementing Row Level Security
- Monitoring and Performance
- Troubleshooting
PolyBase was the go-to technology on SQL DW for fast ingest of bulk data. Learn how you can now use this extended feature on SQL Server 2019 to modernise your data ingest and ETL, even on traditional SQL Server running on-premises.

We'll show you demos and performance best practices, as well as tools to fully automate data ingest using PolyBase. Plus the data load face-off between PolyBase, BCP, SSIS and the venerable OPENQUERY.

1 hour - No slides - 10 DAX queries and tips
From a simple calculated column to more advanced KPIs, this session will present 10 concrete use cases that need DAX calculations. We will explore the art of ratios, time intelligence measures and visual tricks, but also tools like DAX Studio.
Intended for beginners and end users; it is better if you know a little bit of DAX. For experienced users, it can be a good wrap-up too.
Discover the Advanced Analytics and Data Lake pattern in the Azure Data Platform through a complete demo: how do you get insights from text, photos and videos?
From different media files and raw data, we will analyze the sentiment of characters and get valuable information into a Power BI dashboard, using Cognitive Services, CNTK, .NET and U-SQL.
This session will mainly showcase Azure Data Lake and the U-SQL language, but the demos will involve different tools like Azure Data Factory for data supply chain and orchestration, Azure SQL Database for corporate data, and even Machine Learning techniques.
Even if this session is demo-driven, concepts and features of Azure Data technologies will be discussed.
OK, your Power BI report is ready and everything works on your desktop. Now you need to share it and work with others.
How can you collaborate easily with colleagues and partners while respecting security constraints?
This session answers this question by detailing all the options available: native sharing, groups, apps, embedding, etc.
Best practices, recommendations and comparisons will be given during this session to guide you to the best choice for your company.
"Hell is other people" Huis-Clos, Jean-Paul Sartre, French Philosopher
How to govern and administer your Power BI Tenant ?
In this session, we will review different ways and tools to administer Power BI. We will deal with common questions asked by customers regarding their Power BI Tenant.
Some demos using Power BI API and PowerShell will be shown during the session.
In an ideal world, we would not need any error handling, because there would be no errors. But in the real world we need to have error handling in our stored procedures. Error handling in SQL Server is a most confusing topic, because there are such great inconsistencies. But that does not mean that we as database developers can hide our head in the sand. This presentation starts with a horror show of the many different actions SQL Server can take in case of an error. We will then learn how we should deal with this - what we should do and what we should not. We will learn that with SET XACT_ABORT we get better consistency. We will learn how TRY-CATCH works in SQL Server, and we will get a recipe for how to write CATCH blocks. More generally, we will learn why it pays off to be simple-minded to survive in this maze. The session mainly looks at traditional T-SQL code, but the session ends with a quick look at natively compiled stored procedures, where everything is different.
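As a preview of the recipe discussed, here is a minimal sketch of a procedure with SET XACT_ABORT and a TRY-CATCH block (the procedure itself is a placeholder):

```sql
-- XACT_ABORT ON gives consistent error behaviour; the CATCH block
-- rolls back any open transaction and re-raises the original error
CREATE PROCEDURE dbo.DoWork
AS
BEGIN TRY
    SET NOCOUNT, XACT_ABORT ON;
    BEGIN TRANSACTION;
    -- real work goes here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- re-raise with the original error number and message
END CATCH;
```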
Gone are the days where a third of every page on a Power BI report needed to be dedicated to slicers.
The introduction of new features such as bookmarks, buttons, synchronised slicers and report-page tooltips provides tools that, used individually or in combination, allow you to build reports with flexible interfaces that let users focus on the most important part of any report - the data.
This session will present practical examples of techniques you can start using straight away to improve your reports' interfaces.
Ever considered giving a presentation of your own? Pondered how your favorite speakers got their start? Contemplated whether you could ever do that too, but were not sure where to begin?

In this session I will show you how to get started. We will go over how to develop your idea, create session content, and share my favorite tips & tricks. 

You will leave armed with a wealth of resources (and hopefully some inspiration) to venture forth and develop your first presentation.
Ever wonder how the SQL Server storage engine processes your T-SQL queries? Curious about what else you could be doing to improve query performance?

Having basic exposure to SQL Server Internals enables one to write more effective T-SQL. Join me as we peek into the black box of the storage engine. Topics will include an overview of the storage engine, indexing, and the query optimizer, accompanied by practical consequences and solutions.

When you leave, you will have a greater understanding of the storage engine and how to avoid T-SQL design patterns that negatively impact performance.
This certification exam prep session is designed for people with experience analysing, modelling and visualising data who are interested in taking exam 70-778.

Attendees of this session can expect to review the topics covered in this exam in a fast-paced format, as well as pick up some valuable test-taking techniques.

Attendees will leave with an understanding of how Microsoft certification works, the key topics covered in the exam, and an exhaustive look at resources for preparing and getting ready for the exam.
Power BI is one of the best BI tools ever built. R is a great statistical analysis platform, and combining the two makes for a great experience in presenting advanced R analytics in Power BI.

This session includes demos showcasing the power of Power BI with advanced R analytics.
"Empowering Every Person with Power BI" was selected as the Jury Award winner, judged by Will Thompson (Microsoft), at the Data & BI Summit in Dublin. This is a great opportunity to experience the same session at SQLBits.

Global challenges are a new arena for meaningful storytelling. The main focus is to spread awareness and empower every person with "awareness-enabled reports" built from dormant open datasets. This session is packed with plenty of storytelling and compelling custom visuals, using real-time datasets collected from across the globe, to raise awareness of global issues and empower everyone through simple storytelling dashboards and reports.
Create and deploy a website from your phone, make edits live, and learn about important web development concepts as we go along!  This introduction will teach you key web development skills using the static site generator Hugo, whilst also getting you comfortable with source control and continuous deployment.

Using Hugo, an awesome static site generator, we'll start out by getting you set up with a starter website. From there, we'll cover important concepts and give you time to follow along in creating your own awesome blog.

We'll overview:

1. The modern website and static site generators
2. HTML, CSS, and JavaScript frameworks
3. Hugo themes and templates
4. Consuming data to generate interactive visuals
5. Git workflow
6. Web infrastructure and continuous deployment
Real-time data is fundamental to the life of an organisation and to its decision-making process; building a real-time analytical model therefore becomes a business need.

In this session we will learn which processes are required to build a real-time analytical model along with batch-based workloads, which are the foundation of a Lambda architecture.

At the end of the session, you will have learned how to ingest, store, prepare and serve the data with Apache Kafka for HDInsight, Azure Data Lake Store Gen 2, Azure Databricks, SQL Data Warehouse and Power BI.
SSRS is a complex and oftentimes awkward beast, and being handed the reins and told to get to work can be a bit daunting. To make matters worse, there's often a general lack of knowledge in any given team, and training is hard to get; we end up spending most of our time frantically Googling for an answer every time we hit a roadblock. From "What is SSRS?" to managing completed reports in Report Manager, we'll go over everything you need to know to get working with SSRS.

This session aims to teach you all you need to know to be able to get to work with SSRS, so you can feel comfortable working with it, and be aware of common issues. It is aimed at people with little to no experience with SSRS.
You’ve mastered the basics of SSRS: creating charts and tables, adding datasets and data sources, publishing reports, but you know there’s more to it. How can you make your reports more interactive? How can you control the look and feel more freely? How can you stop it from looking awful in Excel and PDF? This session aims to answer these questions and more.

This session aims to build upon existing knowledge of SSRS, to enable users to get more from the platform. It is aimed at people with little to moderate experience with SSRS. 
Just started using Git? Finding it really hard to use? It's nowhere near as complicated as it looks! We'll go through an overview of Git, and dive into what you really need to know in order to be able to work effectively and easily with it; from getting set up on GitHub and learning the lingo, to working with Git GUI, Git Bash, and my personal favourite: GitKraken! We'll go over all the important functions, and learn the correct workflow for working with Git.
You've learned the basics of Excel (what's a cell? How do you write a sum? How does formatting work?) but it can be hard to make it do what you want. It's very common for new Excel users to build their sheets in a way that looks good but is useless for working with. This session aims to teach you the right way to use Excel, so you can access all it has to offer. We'll cover Excel's table functionality, pivot tables, VLOOKUPs, and even look at Power Pivot and how it can be used to link up your tables.
In this session Steve Morgan of Microsoft will aim to provide an explanation of why you need to worry less about High Availability solutions when using Azure SQL Database. He’ll also be talking about how “HA included” works under the hood and the various options associated with it.

This session will be mainly demo led with a look at how to set-up and configure the High Availability options in Azure SQL Database as well as the options for delivering zero data loss in the Azure environment.

Specifically this session will cover:
- An explanation of what Azure SQL Database "HA included" means and how it's delivered
- The different options available even with automatic HA
- How geo-replication and failover groups can be used in Azure SQL Database to deliver protection against an entire Azure data centre outage
- The options for delivering zero data loss when running SQL Server databases on the Azure platform
As an experienced data warehouse engineer or architect, Databricks may well be one of those new-fangled tools you’ve heard about but never had time to look at in any detail, let alone work out where it may (or may not) be of use.
In this hands-on session, Steve Morgan of Microsoft will provide an introduction to Azure Databricks, positioning it alongside the SQL Server stack so you can understand its purpose and make an informed decision as to where it can be used in your Modern Data Warehouse architecture.
The session will talk about what Databricks is, as well as looking at how you set it up and work with it. We’ll cover the tools and environments, as well as having a brief look at its place within various Modern Data Warehouse architectures.
Using Python from the comfort of your own SQL Server Management Studio

The noise started far away: you need to learn Python, it whispered. We ignored it for a while because we could. Besides, we were busy: busy architecting databases, busy writing T-SQL and busy learning all of the other stuff on the MS Data Platform. Real data. Then something happened: you could code Python from within SSMS. The whisper had become deafening. This was now part of T-SQL coding. This was us.

The Python had infiltrated our world, but what did it mean for us, the SQL folk? Did we need to up-skill? What now? Did we need new technology? Textbooks? New education?

Or could we just ignore it? This decade’s LINQ (sorry LINQ folks)?  This talk tackles these questions by coaxing Python out of its natural environment and debating the pros and cons of: SQL vs Python, Native Python vs Python in SSMS; and, ultimately, why we might just be able to have the best of both worlds - all from the comfort of your own SQL Server Management Studio.
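As a flavour of the SQL-vs-Python debate the talk promises, here is a hypothetical sketch (the table and data are illustrative, not from the session) of the same aggregation expressed both ways:

```python
from collections import defaultdict

# In T-SQL we'd write:
#   SELECT Product, SUM(Amount) AS Total FROM dbo.Sales GROUP BY Product;
# Below is the native-Python equivalent of that GROUP BY.
sales = [("widget", 10.0), ("gadget", 5.0), ("widget", 2.5)]

def total_by_product(rows):
    """Group rows by product and sum the amounts."""
    totals = defaultdict(float)
    for product, amount in rows:
        totals[product] += amount
    return dict(totals)

print(total_by_product(sales))  # {'widget': 12.5, 'gadget': 5.0}
```

Both forms express the same idea; which reads better is exactly the kind of question the session debates.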
Choosing the wrong data platform for your application can be an expensive mistake. There are a number of factors that should be considered when choosing a cloud data platform. In this session you will learn about all of the Microsoft cloud data offerings, and their strengths and weaknesses for different types of workloads. You will learn about application patterns and costs, and how to make the best decision for your application.
SQL Server is expensive. Licensing, hosting, personnel – it all costs money.

DBAs often focus on the Tech, overlooking the fact that they are viewed as a huge Cost Center by the Business.

In this session, I’ll be discussing ways in which you can reduce costs and improve Business value through:
  • SQL Server Consolidation
  • Getting the Licensing right
  • Using the right Edition of SQL Server
  • Architectural designs
  • On-premises and Cloud considerations
This session is suitable for all, whether you manage a SQL Server estate or pay for it. Get ready for that "gosh, you're expensive - what can we do about that then?" chat.
Containers have quietly been taking over the world of infrastructure, especially amongst developers and CI/CD practitioners. However, in the database space, container adoption has been lower. SQL Server 2017 introduced the concept of deploying databases into Docker containers. In this session, you will learn the fundamentals of creating containers, learning about Kubernetes for management, and how to further your learning in this new and emerging space.
There are three main ways to deploy Always On Availability Groups (AGs) and Always On Failover Cluster Instances (FCIs) - physical hardware, virtualized, and IaaS in the public cloud. In SQL Server 2017 or 2019, these can be on Linux, Windows Server ... or both. While some deployment aspects are the same across all platforms and locations, each of the possible permutations and combinations affects how you plan, deploy, and administer AGs and FCIs. This session cuts right to the chase and will give you the top tips and tricks for successfully deploying and administering AGs and FCIs so you can be an availability hero no matter where you are deploying or what operating system you are using.
Many organizations would like to take advantage of the benefits of using a platform-as-a-service database like Azure SQL Database. Automated backups, patching, and costs are just some of the benefits. However, Azure SQL Database is not 100% feature-compatible with SQL Server: features like SQL Agent, CLR and FILESTREAM are not supported. Migration to Azure SQL Database is also a challenge, as backup and restore and log shipping are not supported methods. Microsoft recently introduced Managed Instances, a new option that provides a bridge between on-premises or Azure VM implementations of SQL Server and Azure SQL Database. Managed Instances provide full SQL Server surface compatibility and support database sizes up to 35 TB. In this session, you will learn about migrating your databases to Managed Instances and developing applications for them. You will also learn about the underlying high availability and disaster recovery options for the solution.
An insight into the life of a DBA by recounting real-world experiences.

Amongst other topics, I'll be discussing
• Consolidation, Virtualisation and Licensing 
• Guiding Developers, such as the use of appropriate DataTypes
• Compression
• How to help, not hinder
• Facilitating success and cleaning up mess
… and when to do that thing you're told not to do

Something for everyone:
• Developers, help your work go into Prod more smoothly
• Finance? I'll tell you how your DBA can save you money
• Want to be a DBA? I can help
• Already a DBA? There's plenty of empathy and some tales to make you think "it's not so bad"
2017 marked the 10th anniversary of when I left the world of being a full time employee to being an independent consultant. Back then I was just trying to make it to year two. Ten years later, I've learned a lot and I'm still standing with a few battle scars to prove it. Join me as I discuss what it takes to be (and survive as) a consultant, lessons learned - both good and bad, and how you can apply the principles of consulting even if you are a full time employee to step up your game.
Many new SQL Server deployments suffer from using outdated concepts and "best practices" that date back 5, 10, or even 20 years. Some older concepts still apply, but the rules that apply to deployments using physical hardware, virtualization, and the various cloud deployment methods (IaaS specifically) have changed quite a bit. Whether you are a developer, DBA, IT Pro, DevOps person, or anything in between - everyone needs a solid foundation for what is going on underneath SQL Server. To make sure your skills are up to date and you are getting the best availability, reliability, and performance, attend this session to update your knowledge about things like networking and storage. Come with an open mind to learn things that may challenge what you do today.
SQL Server gives us plenty of options when it comes to encrypting our data. But have you ever wanted to write your own encryption routines? Perhaps you think that what SQL Server offers doesn’t quite fit the bill for you?

I’m going to look into the basics of how encryption works and then we’ll learn how we can go about writing our own encryption routines within SQL Server. When we’re happy that those routines are secure, we’ll look at ways that we can go about cracking those routines.

Writing our own encryption within SQL might sound like a good idea and could even be something that you’ve tried out yourself but there’s a chance you might change your mind when you see how easily amateur cryptography can be broken.

The session covers:

Bitwise Logic in T-SQL
XOR Cypher
Caesar Cypher
Brute Force Attacks
Statistical Attacks
Plain Text Attacks
Ways to Enhance and Strengthen Encryption Algorithms
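To give a flavour of why amateur cryptography breaks so easily, a repeating-key XOR cipher, and the known-plaintext weakness the session exploits, can be sketched in a few lines of Python (the message and key here are illustrative):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: the same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"attack at dawn"
key = b"sql"
ct = xor_cipher(msg, key)
assert xor_cipher(ct, key) == msg  # decrypting is just encrypting again

# Known-plaintext attack: XORing ciphertext with a correct plaintext
# guess leaks the key bytes directly.
leaked = xor_cipher(ct[:3], msg[:3])
assert leaked == key
```

Once the key length is guessed, statistical attacks recover the key even without known plaintext, which is the cautionary point of the session.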
One of the most useful tools to the DBA when we need to test new features, recreate a fault that we’ve seen in production or just want to see ‘what if…?’ is a test lab.

Some of you are going to be lucky enough to have a few servers kicking around or a chunk of the virtual environment that you can build a test lab in but not all of us do.

This session walks you through how to create a virtualised test lab right on your workstation or even laptop.

We will start by selecting a hypervisor, look at building a virtual machine, and then create a domain controller, a couple of clustered SQL Servers and finally a fully functioning availability group.

The session will cover:

Selection and installation of a hypervisor
Creating your first VM
Building a domain controller and setting up a Windows domain within your virtual environment
Setting up a Windows failover cluster
Installing a couple of SQL Servers
Creating a fully functioning availability group
One of the most highly anticipated new features in the SQL Server 2016 release was Query Store. It's referred to as the "flight data recorder" for SQL Server because it tracks query information over time – including the text, the plan, and execution statistics - and it allows you to force a plan for a query. When you include the new Automatic Plan Correction feature in SQL Server 2017, suddenly it seems like you might spend less time fighting fires and more time enjoying a lunch break that’s not at your desk. In this session, we'll walk through how to stabilize query performance with a series of demos designed to help you understand how you can immediately start to use plan forcing once you’ve upgraded to SQL Server 2016 or higher. We'll review how to force a plan, discuss the pitfalls you need to be aware of, and then dive into how you can leverage Automatic Plan Correction and reduce the time you spend on Severity 1 calls fighting fires. It’s time to embrace the future and learn how to make troubleshooting easier using the plethora of intelligent data natively captured in SQL Server and SQL Azure Database.
CROSS and OUTER APPLY are the Swiss Army knives of joins.
This session will show you how to apply these two join types to your code, not just for table-valued functions but to make your queries faster, shorter, easier to read, or simply to help sweep those data quality gremlins under the carpet and avoid errors.
Examples will be taken from situations the speaker has encountered within Business Intelligence work but will be equally applicable elsewhere.
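For readers who think in code, the semantics of the two operators can be sketched outside T-SQL as per-row flat-maps; this Python analogy is illustrative only (the data and "top 2" function are made up):

```python
def cross_apply(rows, fn):
    """CROSS APPLY semantics: evaluate fn for each row; rows whose
    function returns no results disappear, like an inner join."""
    return [(row, r) for row in rows for r in fn(row)]

def outer_apply(rows, fn):
    """OUTER APPLY semantics: rows with no results are kept and
    padded with None, like a left join."""
    out = []
    for row in rows:
        results = list(fn(row))
        if results:
            out.extend((row, r) for r in results)
        else:
            out.append((row, None))
    return out

orders = {"ann": [101, 102, 103], "bob": []}
top2 = lambda c: orders[c][:2]  # stands in for a "top 2 orders" TVF

print(cross_apply(["ann", "bob"], top2))  # [('ann', 101), ('ann', 102)]
print(outer_apply(["ann", "bob"], top2))  # ...plus ('bob', None)
```

The inner/left-join distinction shown here is exactly the difference between CROSS and OUTER APPLY.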
Manish Kumar from Microsoft will take you through the journey of IaaS vs PaaS in general and SQL Server (IaaS) vs Azure SQL DB (PaaS) in more detail. From there you will get to see the different flavours of Azure SQL DB, their features and use cases and then will get a chance to take a deep dive into Azure SQL Managed Instance where you will see and experience everything you should know about SQL MI and its migration path from on-prem/IaaS SQL Server. The session will be packed with information and backed with demos.
When things go wrong with Always On Availability Groups (AGs) or Always On Failover Cluster Instances (FCIs), it is not always apparent where to look and how to diagnose what happened. This session will help demystify where to look when you have a problem, as well as cover some of the more common issues you will see when implementing AGs and FCIs, whether you have them deployed on Windows Server or Linux.
Manish Kumar from Microsoft will take you through various performance tuning measures you should take before increasing the service tier of your database. The session will be packed with tons of information, steps and demos.

In this session we discuss SQL Server on Linux, focusing on installation and a few simple administration tasks, such as backing up and restoring databases.
In this session we give you everything you need to know to get you up and running with SQL Server on Docker. We cover installation of SQL Server on Docker, the management of SQL Server on Docker, stopping / starting containers, creating images and storage inside containers.

In this session we cover the latest changes in the Microsoft certification program.

The new AZ exams and the Microsoft Professional Program. 

Which is best for you?

Have you heard about the SQL Tiger Team? Did you know they provide a free set of SQL scripts to help you administer your SQL Server?
In this session we explore the scripts in the Tiger Toolbox.

This isn’t a session about being funny or making people laugh.

We look at a different performance art and see what lessons we can learn from it.

There are hundreds of comedians performing every night across the UK, thousands across the world.

Surely we can take the processes and methods that comedians use and adapt them for our audiences?

In this session, we look at how stand-up comedians plan their sets, the rule of 3, repetition, finding your style and most importantly how practice makes perfect.  

Running SQL Server in Docker containers brings benefits that data professionals should be exploring. Having the ability to spin up an instance of SQL Server in a very short period of time has huge implications for CI/CD processes.

However, running standalone Docker containers presents challenges so other technologies are needed to support them. This session provides an overview of the various options for running SQL Server containers in Azure.
The topics covered will be:
  • The Azure Container Registry
  • Azure Container Instances
  • Azure Container Services
This session is aimed at SQL Server DBAs and Developers who have experience with Docker and want to know more about the different options that are available in Azure.
Each topic will be backed up with demos that show how simple it is to get up and running with these technologies.
With the advent of Windows containers and SQL Server on Linux, the options for running SQL Server are growing. 

Should you run SQL Server on physical machines, virtualised, in the cloud or do you get with the cool kids and run it on Docker?

Docker is great for development, testing and stateless apps, but would you run your production SQL Server on it?

It may sound crazy to run your data tier in ephemeral containers, but I'll discuss the reasons why this might be a good idea if we can figure out the following challenges:

Data persistence
High Availability
Deploying SQL Server updates

Microsoft themselves have said they expect containerisation of applications to become as common as virtualisation. If this happens then containers are something we all need to understand.
Microsoft Power BI has evolved significantly over the last few years into a strong and mature self-service BI platform. But how does this fit into an enterprise-centric BI architecture built upon SQL Server, Hadoop and Azure? In my presentation I will explain how we at InnoGames put all these components together to build a modern enterprise BI architecture. Gathering reliable insights from a massive stream of data and making them easily available is vital for our business success. If you love data, no matter whether you are a decision maker, a BI developer or an admin, this is the right talk for you.
A demo-filled session packed with tips and tricks to show how to transform ordinary Power BI reports into stunning reports.
In this session you’ll learn about:
  • How to use background images and useful resources to create background templates
  • Use of colours, with various resources for appealing colour palettes
  • Multiple ways of using conditional formatting to highlight the specific data points 
  • How to create Power BI theme files 
  • Various DataViz resources

Azure Container Instances are Microsoft's offering allowing us to run containers in the cloud without having to manage VMs. This session will provide an overview of how to create Container Instances running SQL Server and the various options that are available.
Topics covered will be:
  • Deploying custom container images to an Azure Container Registry.
  • Using Azure Tasks to automate creation of container images.
  • Spinning up Container Instances running SQL Server.
  • Using Azure DevOps to deploy Azure Container Instances running SQL Server.

This session is aimed at SQL Server DBAs and Developers who want to learn how to run SQL Server in Container Instances and hook them into a CI process.
Each topic will be backed up with live demos to show how easy it is to get up and running with these technologies.
"PowerApps is a service that lets you build business apps that run in a browser or on a phone or tablet, and no coding experience is required. PowerApps combines visual drag-and-drop concepts from PowerPoint with Excel-like expressions for logic and working with data."
In this session, attendees will learn how to create PowerApps solutions, how to use PowerApps as a data-entry application for Power BI, and how to integrate PowerApps in Power BI and Power BI in PowerApps.
Running SQL Server in containers has huge benefits for data platform professionals, but there are challenges to running SQL Server in standalone containers. Orchestrators provide a platform and the tools to overcome these challenges.

This session will provide an overview of running SQL Server in Kubernetes.
Topics covered will be:
  • An overview of Kubernetes.
  • Definition of deployments, pods, and services.
  • Deploying SQL Server containers to Kubernetes.
  • Persisting data for SQL Server in Kubernetes.
This session is aimed at SQL Server DBAs and Developers who want to learn the what, the why, and the how to run SQL Server in Kubernetes.
Lifting and shifting your application to the cloud is extremely easy, on paper. The hard truth is that the only way to know for sure how it is going to perform is to test it. Benchmarking on premises is hard enough, but benchmarking in the cloud can get really hairy because of the restrictions in PaaS environments and the lack of tooling.

Join me in this session and learn how to capture a production workload, replay it to your cloud database and compare the performance. I will introduce you to the methodology and the tools to bring your database to the cloud without breaking a sweat.
I will also introduce you to WorkloadTools: a new Open Source project that I created specifically for this scenario. Benchmarking will be just as easy as pie.
This session is all about getting away from manual SSIS packages. Instead of reinventing the wheel every time you need to change or extend a package, let’s talk about metadata models and how we can use them to design and describe our data warehouses and packages. We will cover the gamut from initial design, to maintenance and support, and even documentation and compliance. 

In addition, we’ll see how the Business Intelligence Markup Language (Biml) can help us translate our metadata into a ready-to-use SSIS solution!

Prerequisites: Basic knowledge about building SSIS and/or ADF packages and data staging
Have you heard about the Business Intelligence Markup Language (Biml)? Maybe you’ve even seen a session about it before but you still have doubts about how easily you can make something useful out of it. In this session, we’ll use Biml to build and populate a staging area including the corresponding SSIS packages. But there won’t be any pre-compiled demos - everything is happening live! Starting with a blank staging database, we will end up building a complete solution over the course of this session to prove that you can start from scratch and still quickly be successful.

Let’s see, how that goes… :)

PS: Even if you have not heard about Biml but are still tired of manually building SSIS packages, this is the right session for you!

Prerequisites: Basic knowledge about building SSIS packages and data staging
There is a natural limit to how many dataflows you can run in parallel in SSIS. Regardless of whether your limit is on the source or destination side, you will eventually reach those limits.
You might have set up all your package orchestration in a way that made perfect sense at that time, but over time, some tables grow faster than expected and others don’t grow at all. Due to foreign key relationships, you may not be able simply to shuffle the dataflow tasks around to maximize throughput. Manual reengineering along these lines would potentially be very time consuming, and even worse, the result would be obsolete shortly thereafter.

This session is about using the Business Intelligence Markup Language (Biml) to monitor and control your orchestration patterns. By automatically analyzing the results in ETL logs, we’ll be able to automate our staging orchestration!

Prerequisites: Good understanding of dataflows with SSIS, especially with higher volumes of data
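The kind of reshuffling the session automates with Biml can be illustrated with a toy scheduler: given historical load durations from the ETL log, assign tables to parallel buckets greedily, largest first (the table names and timings below are made up):

```python
def balance_buckets(durations, n_buckets):
    """Greedy longest-processing-time scheduling: place each table
    into the currently least-loaded bucket, biggest tables first."""
    buckets = [[] for _ in range(n_buckets)]
    loads = [0.0] * n_buckets
    for table, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))  # least-loaded bucket so far
        buckets[i].append(table)
        loads[i] += secs
    return buckets

log = {"orders": 120, "lines": 300, "dims": 15, "events": 200}
print(balance_buckets(log, 2))  # [['lines', 'dims'], ['events', 'orders']]
```

A real solution would also respect foreign-key ordering, which is precisely why the session generates the orchestration from metadata rather than by hand.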
Power BI is the shiny new tech for processing and visualizing data in the Microsoft Data Platform. However, the plumbing in the background does need managing (even if it is cloud-based and supposedly automagic).

In this session we will take a look at how to manage your datasets, security, monitor licensing and more, all through the ultimate administration interface: PowerShell!

In this session we'll explore why we need to manage Power BI, what management capabilities we have at our fingertips and the power of PowerShell to simplify and expand these capabilities. We'll then tie it all together to get you started on the road to building a suite of management reports and automated processes because PowerShell loves Power BI!
Every expert has their own set of tools they use to find and fix the problem areas of queries, but SQL Server provides the necessary information to both diagnose and troubleshoot where those problems actually are, and help you fix those issues, right in the box. In this session we will examine a variety of tools to analyze and solve query performance problems.
With viruses like ransomware occurring more frequently, we need to be ready for server and even data center loss. Just like pilots who are prepared for disaster recovery through regular practice, we as Database Administrators need to actually spend time practicing recovering with those backups. Ransomware has made it critical to prepare to rebuild your datacenter at any moment. This session will focus on the kinds of situations that can dramatically affect a data center, and how to practice recovery processes to assure business continuity.
It’s an age-old problem: devs want prod data for dev and test.

It helps them write better code. Self-service access to usable test data aligns with DevOps principles such as adopting a "shift-left" mentality to testing.

Unfortunately, in the age of data breaches and tighter regulation, it's generally unwise and/or illegal to give devs access to some of the data.

So what do you do?

We’ll talk about the GDPR, anonymisation, pseudonymisation and five techniques to provide appropriate “production-like” data. I’ll demo these techniques both in raw T-SQL and using some of the Microsoft and 3rd party tools that make the task easier.

This session will equip you to discuss the problem in an informed manner and suggest several solutions and their pros and cons.
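As one example of the kind of "production-like" technique discussed, pseudonymisation by keyed hashing can be sketched in Python (the secret and field are illustrative; a real deployment would manage and rotate the key properly):

```python
import hashlib
import hmac

SECRET = b"keep-out-of-source-control"  # illustrative key, not a recommendation

def pseudonymise(value: str) -> str:
    """Keyed hash: the same input always yields the same token, so joins
    and grouping still work, but recovering the original value (or
    linking tokens across systems) requires the secret."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

masked = pseudonymise("jane@example.com")
assert masked == pseudonymise("jane@example.com")   # deterministic
assert masked != pseudonymise("john@example.com")   # distinct inputs differ
```

Note that under the GDPR pseudonymised data is still personal data if the key exists, which is one of the trade-offs the session weighs.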
Maintaining a solid set of information about our servers and their performance is critical when issues arise, and often help us see a problem before it occurs. Building a baseline of performance metrics allows us to know when something is wrong and help us to track it down and fix the problem. This session will walk you through a series of PowerShell scripts you can schedule which will capture the most important data and a set of reports to show you how to use that data to keep your server running smoothly.
Most of the time you’ll see ETL being done with a tool such as SSIS, but what if you need near real-time reporting? You need to get the updates in your OLTP database to the data warehouse quickly, but with minimal impact on your application. This session will demonstrate how to keep your data warehouse updated in near real-time using Service Broker messages from your OLTP database.
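The pattern the session describes can be sketched in miniature. The sketch below is an illustrative Python simulation (the queue, keys and column names are all made up), standing in for a Service Broker queue feeding the warehouse with upserts:

```python
from queue import Queue

# Hypothetical stand-in for a Service Broker queue: the OLTP side
# enqueues small change messages instead of the DW polling its tables.
changes = Queue()

warehouse = {}  # toy dimension table keyed by business key

def on_oltp_update(key, row):
    """Called on the OLTP side; cheap, so application impact stays minimal."""
    changes.put((key, row))

def drain_changes():
    """DW-side consumer: apply queued change messages as upserts."""
    while not changes.empty():
        key, row = changes.get()
        warehouse[key] = {**warehouse.get(key, {}), **row}

on_oltp_update(42, {"name": "Widget", "price": 9.99})
on_oltp_update(42, {"price": 8.49})  # a later price change
drain_changes()
assert warehouse[42] == {"name": "Widget", "price": 8.49}
```

The real implementation swaps the in-process queue for a Service Broker queue with an activated procedure on the receiving end, but the shape is the same: small messages, applied incrementally, instead of a batch window.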
Steph (Affirmative):
DevOps is great, but it left data folks out in the cold. To get data pros working with the business in a more collaborative, faster, and more robust manner, we need to spend dedicated time on how to do it, and the answers aren't necessarily the same as they were for developers. The term DataOps will help us explain to our various tribes what we're aiming to achieve and that they matter.

Alex (Negative):
The problem is real - but inventing a new name for data folks will exacerbate it. DevOps promotes breaking down silos. Different names divide us rather than unite us. The problem is that DevOps now carries all sorts of assumptions about tooling, job roles and processes. Instead of inventing a new DevOps, let's get back to the core of what DevOps is supposed to be about.
There’s the quick way or there’s the right way. In this session we will look at good practices and standards to follow when writing PowerShell to make it easier for you and others to trust and reuse your code.
By the end of this session you’ll have a guide to being a better PowerShell citizen, following best practices and attending to all the aspects of your code that help make it usable and readable. We'll share tools and tips to make it easy, and discuss contributing to open source and sharing your code with others.
Become a member of the PowerShell Standards Agency* (*not a real thing) and write better code for everybody.
You would like to start speaking, but you don’t think you are ready. That was me in 2014. I didn’t believe anyone wanted to hear what I had to say.

Since then I’ve spoken at over 100 events. In 2017 Microsoft awarded me my first MVP award. In 2018 I gave my first SQL Bits pre-con. Speaking has changed my life.

You can do it too.

I’m going to make a deal with you:

I’ll tell you what I’ve learned since 2014, including how to:

- create a compelling abstract that stands a real chance of getting selected

- craft a great talk that is informative, engaging and memorable

- slay the demo Gods and manage those nerves to deliver a kick-ass talk

In exchange, you are going to submit to:

- your local user group or meetup

- Data Relay 2019

- SQL Bits 2020

Are you ready? Of course you are! You can do this.
The SQLOS scheduler has been a core feature of SQL Server ever since its appearance as the User Mode Scheduler in version 7.0. In this session you will learn what makes it tick, where lines of responsibility are drawn between schedulers, workers and tasks, and how everybody has their own selfish ideas about fairness.

We'll pay particular attention to synchronisation: the need to synchronise, the balancing act between busy waiting and context switching, and examples of internal SQLOS synchronisation primitives. All of this will complement your existing mental model of SQL Server waits.

It is a very deep session (stack traces and obscure functions will be aired!), but not a broad one. As long as you have a healthy interest in either SQL Server or operating system internals, you'll have a fair chance of following along.
Database DevOps practices call on you to continuously deliver value to your customers, but there's a problem: database changes are notoriously tricky to implement. In this session, Microsoft Certified Master Kendra Little will show you patterns to design schema and data changes for SQL Server that help you maximize performance and availability. You'll get a guide to which changes may work differently in test and production, and a checklist for testing each tricky change.
How do we measure the value of DevOps? It varies, based on our perspective. In this session, Kendra Little will examine the four pillars of DevOps through three lenses: the viewpoint of the CEO, the CIO, and the IT Manager or Team Leader. We'll discuss the values and concerns that come naturally to each of these roles regarding standardization, automation, protecting data, and monitoring. You'll leave the session with a new understanding of the business advantages of DevOps, and next steps to take to bring the benefits of DevOps to your customers.
Reading execution plans is easy, right? Look for the highest-cost operators or scans and you're pretty much done. Not really. Execution plans are actually quite complicated and can hide more information than the graphical plan reveals. However, if you learn how to walk through the details of an execution plan, you will be more thoroughly prepared to understand the information in that plan. We'll unlock and decode where the information is within a plan so you know why the optimizer made certain choices. You'll be able to better understand how your T-SQL code is interpreted by the optimizer. All this knowledge will make it easier to debug and tune your T-SQL.
As estates grow in size and complexity, the process of manually monitoring them becomes untenable. Not only do manual checks take time, they are also prone to missing crucial elements that can leave your organization vulnerable and miss the historical context that can enable proactive data management. Without the right processes or tooling in place, your operations can be blind to the performance of your estate, and you may not realize a compliance breach until it’s too late. In this session learn how to monitor your SQL Server estate to maintain compliance and ensure availability.
Scaling out reads across database servers is hard enough, but when it comes to scaling out writes, you are potentially in a world of pain. In this session we take a look at Conflict-free Replicated Data Types, a piece of the technology puzzle which can relieve that pain.

CRDTs don't (yet) feature at the surface of the Microsoft stack, but they are already hard at work in the background within CosmosDB. 

Make no mistake, they aren't a panacea which make conflict issues disappear without careful upfront design. But as things stand, a database professional may well be thrust into a situation where someone else is pushing a conflict management solution. Having some understanding of common CRDTs, and their inner workings, is useful preparation for that day when someone tries to bamboozle you with what might appear to be black magic.

As such, this session is about getting to grips with the simple underlying concepts, not about taking home a new technique you will use right away. But these concepts are still fairly fresh out of academia, and the literature isn't always easy reading. I do my utmost to make the material accessible, sidestep the symbolic logic, and make it the introductory session I wish I could have attended when I needed it!
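For a concrete taste of those "simple underlying concepts", here is a minimal sketch of a grow-only counter (G-Counter), one of the common CRDTs; the node names are illustrative:

```python
# A grow-only counter (G-Counter), one of the simplest CRDTs: each
# node increments only its own slot, and merge takes the per-node
# maximum, so replicas converge to the same state regardless of the
# order in which updates and merges arrive.
def increment(counter, node, amount=1):
    counter = dict(counter)
    counter[node] = counter.get(node, 0) + amount
    return counter

def merge(a, b):
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def value(counter):
    return sum(counter.values())

# Two replicas diverge by taking writes independently...
r1 = increment({}, "node-a")
r2 = increment({}, "node-b", 2)

# ...and merging in either order gives the same state (commutative,
# idempotent), which is exactly what makes conflicts disappear.
assert merge(r1, r2) == merge(r2, r1)
assert value(merge(r1, r2)) == 3
```

Richer CRDTs (PN-counters, OR-sets, the registers used inside Cosmos DB) elaborate on this same merge-function idea, which is why a small example like this is useful armour against black-magic sales pitches.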
One of the biggest challenges to successful implementation of data encryption has been the back and forth between the application and the database.  You have to overcome the obstacle of the application decrypting the data it needs.  Microsoft tried to simplify this process when it introduced Always Encrypted (AE) into SQL Server 2016 and Azure SQL Database.  In this demo-intense session, you will learn what Always Encrypted is, how it works, and the implications for your environment. By the end you will know how to easily encrypt columns of data and, just as importantly, how to decrypt them. You will also learn about the current limitations of the feature and what your options are to work around them.
Are you the only database person at your company? Are you both the DBA and the Developer? Being the only data professional in an environment can seem overwhelming, daunting, and darn near impossible sometimes. However, it can also be extremely rewarding and empowering. This session will cover
how you can keep your sanity, get stuff done, and still love your job. We'll cover how I have survived and thrived being a Lone DBA for 15 years and how you can too. When you finish this session, you'll know what you can do to make your job easier, where to find help, and how to still be able to advance and enrich your career.
Many of us have to deal with hardware that doesn’t meet our standards or contributes to performance problems. This session will cover how to work around hardware issues when newer, faster, stronger, better hardware isn’t in the budget.  It’s time to make that existing hardware work for us. Learn tips and tricks on how to reduce IO, relieve memory pressure, and reduce blocking. Let’s see how compression, statistics, and indexes bring new life into your existing hardware.
Over a dozen tips and tricks, worked through to explain how they can help, why they work the way they do, and why you ought to be using PowerShell more.
Topics covered include ISNULL, ensuring your build scripts are secure, splatting, parsing event logs, using -Format correctly, and custom sorting.
Moving databases and workloads to the cloud has never been easier. For SQL Server there are a number of products that offer almost perfect feature parity. One of the last technical challenges is getting the security configuration right, because the security model in the public cloud is different and requires a different approach, skillset and knowledge. This session covers governance, risk management and compliance in the public cloud, and specifically focuses on Azure SQL PaaS resources. It provides practical examples of network topologies with their strengths and weaknesses, including recommendations and best practices for hybrid and cloud-only solutions. It explains the orchestration and instrumentation available in Azure, such as Security Center, Vulnerability Assessment, Threat Detection, Log Analytics/OMS, Data Classification, Key Vault and more. Finally, it shows techniques to acquire knowledge and gain an advantage over attackers, such as deception and chaos engineering.
For years we have been bombarded with AI-enabled/smart/intelligent features, tools, databases and clouds. But what does it actually mean for SQL Server developers and DBAs in practical terms? Is it just marketing hype or, on the contrary, a distinct trend that has already started and impacts how and what we do, our workplaces and future careers? This session defines what AI is, provides a framework to measure it, goes through the list and evaluates the 'latest and greatest' tools and features available in SQL Server both on-premises and in the cloud, and finally shows practical use cases of the best of them, which we need to adopt to stay relevant in an increasingly competitive market. Let's find out if maintenance-free, self-healing, auto-tuning databases that are able to detect and automatically mitigate security risks are ready for real-world workloads!
DBAs and sysadmins never have time for the fun stuff. We are always restoring a DB for a dev or setting up a new instance for that new BI project. What if I told you that you can make all that time-consuming busy-work disappear?

In this session we will learn to embrace the power of automation to allow us to sit back and relax... or rather focus on the real work of designing better, faster systems, instead of fighting for short time slots when we can do actual work.

Along the way we will see that we can benefit from the wide world of automation expertise already available to us and avoid re-inventing the wheel, again!
We've all experienced weird situations in IT - things break without any real apparent reason. Sometimes error messages can be helpful, but mostly they are cryptic and lead to no real explanations/solutions.

In this session, I will show a variety of problems that I have run into in the past and explain how I approached them. Sometimes finding simple solutions, but sometimes having to be creative and employ methods that may not be so intuitive.

You will leave the session with a better understanding of how to approach solving any technical issues you experience at work.
Any discussion about Azure SQL Data Warehouse usually involves talk of big data and enterprise scale.  At British Engineering Services, we recognised an opportunity to revolutionise our approach to BI delivery.  We’re not an enterprise customer and we don’t deal with billions of transactions per day, yet along with delivering much improved BI to our business, we have managed to save time and money by migrating our traditional on-premises capability to premium Azure services including SQL Data Warehouse.

This session will discuss our project, our approach and the things that we learned along the way. 
The query optimizer is a magical piece of architecture inside SQL Server which instructs the engine how your query is going to be executed. It does a great job… unless it doesn’t.

In this session we will take a deep look at cases where the query optimizer fails, and explain why that happens. Expect to see real code written by real developers, tuned right in front of you to perform several times faster. If you want to (or possibly need to) take control of the execution plan, this session is right for you!
Everyone wants to know if there are magic buttons you can push to make SQL Server run faster, better and more efficiently. In this session we will go over some of my go-to performance tricks that you can implement to get the biggest improvement with the least amount of change.  When it comes to performance tuning, every second counts. We will cover memory optimization, isolation levels, trace flags, statistics, configuration changes and more.  I’ll go over real-life scenarios we come across as consultants and the changes we made to fix them.
This talk covers the strategic, architectural and compliance-level considerations when designing a modern cloud data solution. I will also cover best practices, security considerations and lessons learnt. This session will be great for anyone who wants to understand the big picture of a design, or wants to architect or develop a complete solution in a greenfield or brownfield implementation. We will be covering the full stack, so I advise grabbing a coffee before this session!
The need to present analytic outcomes is ever increasing, but analytics are only ever as good as the robustness of the data collection and analysis. This session will cover the raft of research skills that can be applied in industry to improve the quality of your investigative work.

The session covers the end-to-end process of data management, the things to consider when improving data quality, and data science in industry. It also covers data collection areas that are often used by marketing teams. I share a few details of my research findings about the complexity of managing database systems, the use of the Microsoft data platform for research, and the possible future AI developments to help people manage database systems with greater ease.
A summary of security measures, practices and configurations you should consider when setting up an architecture on Azure. 
Covering best practices on applying security by architecture and how access control can be inherited down from AD/Subscription level to resource level.
We will also cover some key popular services such as Storage, Database, Lake, Databricks, etc and how you can apply security at the service level.
The world of data is moving quickly and traditional relational database technology can be a limiting factor in responding to change. Teams want to move quicker, work with a wider array of data, handle massive datasets and augment their code with open-source libraries and projects. Data delivery and demand for immediate insights mean we no longer have the luxury to extract, transform and load datasets before we need to realise the value locked within our data. SQL Server 2019 has made radical architecture changes to meet these challenges, introducing in-built data lakes, spark clusters, massive data ingestion engines and the ability to harness massively parallel processing architectures. These engines are all implemented behind a single, scalable interface that streamlines data acquisition, transparently and without costly movement operations.

In this talk we will outline the problems that can be tackled with the new SQL Server Big Data Clusters, provide an overview of how they have been implemented and discuss how SQL Server can now handle your Big Data problems. We will be drawing parallels to the Azure Data Platform and highlight where we can adopt similar patterns in our on-premises data platforms.
SQL Server Integration Services has been a good friend since its first appearance in SQL Server 2005. But now, after a slightly bumpy start, Azure Data Factory is here and ready to replace all our DTSX package capabilities. This cloud-native orchestration tool is a powerful equivalent for SSIS and the SQL Agent as a primary component within the Modern Data Warehouse. In this session we will start with the basics of Azure Data Factory. What do we need to build cloud ETL pipelines? What’s the integration runtime? Do we have an SSIS-equivalent cloud data flow engine? Can we easily lift and shift existing SSIS packages into the cloud? The answers to all these questions and more in this session.
If you have already mastered the basics of Azure Data Factory (ADF) and are now looking to advance your knowledge of the tool, this is the session for you. Yes, Data Factory can handle the orchestration of our ETL pipelines. But what about our wider Azure environment? In this session we’ll go beyond the basics, looking at how we build custom activities and metadata-driven dynamic design patterns for Data Factory. Plus, considerations for optimising compute costs by controlling other service scaling as part of normal data processing. Once we can hit a REST API with an ADF web activity anything is possible, extending our Data Factory and orchestrating everything.
What happens when you combine a cloud orchestration service with a Spark cluster?! The answer is a feature-rich, graphical, scalable data flow environment to rival any ETL tech we’ve previously had available in Azure. In this session we’ll look at Azure Data Factory v2 and how it integrates with Azure Databricks to produce a powerful abstraction over the Apache Spark analytics ecosystem. Now we can transform data in Azure using Databricks, but without the need to write a single line of Scala or Python! If you haven’t used either service yet, don’t worry, you’ll get a quick introduction to both before we go deeper into the new ADF Data Flow feature.
The desire and expectation to use real-time data is constantly growing; businesses need to react to market trends instantly. In this new data-driven age a daily ETL load/processing window isn’t enough. We need a constant stream of information and analytics achieved in real time. In this session we will look at how that can be achieved using Azure Stream Analytics, building streaming jobs that can blend and aggregate data as it arrives to drive live Power BI dashboards. Plus, we’ll explore how a complete lambda architecture can be created by combining stream and batch data together.
Yes, I said it! Full stack, from compute to DevOps, from networking to AI!
A fun, introductory-level tour of all the popular Azure services, with use-case-based explanations to help you choose the service that is right for your business needs.
For example, if your applications need cheap storage for tables, use Table Storage; if you need highly consistent, globally distributed, low-latency data storage for web/mobile apps, use Cosmos DB.
We might also drop in why Azure is better than AWS ;) as we go through them.
You're an IT professional who knows their way around an on-premises business intelligence solution.

You're also aware of the Microsoft Azure cloud platform and all the beautiful benefits it brings.

Attend this session to learn how you should march confidently into a brave new world and deploy your business intelligence solution using Azure Platform/Software-as-a-Service offerings. When should you use Data Factory, Data Lake, SQL Data Warehouse, Azure SQL Database, Azure Analysis Services, Power BI, and SQL Server Reporting Services? Leave with the knowledge of which tools you should pick and why.
How are artificial intelligence and data visualization connected? It’s the data. Data is everything to AI, and AI does not happen without the data. For the business to make use of AI, data visualization is essential because it communicates insights from the data. For artificial intelligence, data visualization is particularly important since the concepts and data are complex, and data visualization is crucial to help the business articulate and understand difficult concepts.

There are many great new technologies in Microsoft which offer opportunities for businesses to use AI, but we need to close the loop so that the insights are not lost. In this session, we will look at Microsoft Power BI, and open source technologies such as R and Python in Azure for AI and data visualization, along with best practices for visualising data for artificial intelligence.
Organisations need to know how to get started with Artificial Intelligence. This practical session offers organizations, small and large, a helping hand with practical advice and demos using Microsoft Azure with open-source technologies. For organizations who have no clue what they'd use AI for, the session will offer a practical framework: the Five 'C's of Artificial Intelligence, a framework to help stimulate ideas for using AI in your own organization. To provide a technical focus for getting started, R and Python will be shown in AzureML and Microsoft ML Server.
As the industry adopts more Big Data technologies, the industry is seeing more uptake of Data Vault 2.0, which is a framework successfully applied to data warehousing projects. In this session, learn more about the framework.
In this session, we will look at the Data Vault methodology and translate it into Azure data offerings. We will look at the methodology in practice in Azure as a basis for the foundations to create a technical data warehouse layer. 
Working in the manufacturing industry means that you must deal with product failures. As a BI developer and/or Data Scientist, your task is not only to monitor and report on a product’s health during its lifecycle, but also to predict the likelihood of a failure in the production phase or after the product has been delivered to the customer.
Machine Learning techniques can help us accomplish this task. Starting from past failure data, we can build a predictive model to forecast the likelihood of a product failing, or give an estimate of its lifespan. And it is now possible to develop an end-to-end solution in SQL Server, thanks to the introduction of in-database Machine Learning Services.
You like Power BI. You think it’s a great suite of tools for data analytics, modeling and reporting. Now you’d like to adopt it in your organization as the standard reporting tool. And here come the first questions. How many times have you been asked: “Can you share this report/dashboard with me?”; “Can we distribute our work to other users?”; “Shall we pay for it? Can we have licenses for free?”.

To make things worse, the licensing model is constantly evolving, bringing more confusion to end users. When should you use sharing? What is an App workspace? And Power BI Embedded? How can you manage permissions to reports and dashboards? Is it possible to send reports via e-mail through a subscription?

Come to this session if you want to dispel any doubt about the sharing methods in Power BI. We’ll give a clear and complete overview of all the collaborative features in Power BI, helping you to choose the solution that best fits your needs.
Everything in our world is located “somewhere” and is related to other things. Spatial analysis consists of studying these relationships to find out meaningful patterns.

Imagine you’re looking for the best position to open a new store. It’s not only a matter of “where”; there are more implications. Is the area easily accessible by customers? Is there any parking? Is it easy to reach for suppliers? Are there any competitor stores around? What is the volume of shopping for the same business in the area?

Here is where spatial analysis can help us, by collecting, comparing and matching data.

Since the 2008 release, SQL Server has supported spatial data types. Now amazing new capabilities are offered with the addition of R, which ships with a huge number of packages for performing spatial analysis, mapping, geocoding, etc. There is virtually nothing you can’t do with R: finding relationships, measuring spatial autocorrelation, interpolating point data, mapping point data, …

And, last but not least, we have Power BI, which offers a full range of mapping capabilities. Not only bubble or choropleth maps, but also visuals for performing spatial analysis, like ArcGIS, or for creating custom shape maps. And R scripts, naturally.

In the session, we will show how the joint use of these three tools empowers us to analyze and query the spatial properties of data.

We’ll showcase a real-world example for a better understanding of the endless possibilities that are now offered to us.

Come, have fun and discover a world of information inside your data with Spatial Analytics!
In this hour-long session we will attempt to include lots of advice and guidance on how to write code that will easily get approved by your DBA prior to release to production. We’ll cover unit tests, Continuous Integration, source control and some coding best practices.
This will be quite a fast-paced session, but it will aim to give you a taster of what you should include to increase the acceptance rate of your code by the approvers, and how to ensure your code does what it should and that future changes don’t break it.
In this session Mary Fealty (@Br0adtree) will take you through her personal review of Power BI by looking back at the many updates there have been since its launch. She will walk through many of the updates and enhancements to Power BI, though she may occasionally dwell on some more than others, as for folks familiar with Power BI she aims to generate some of the following responses:

• Knew that

• Forgot that

• When did that happen?

Azure Databricks brings a PaaS offering of Apache Spark, which allows for blazing-fast data processing, interactive querying and hosting of ML models, all in one place! Most of the buzz is around Data Science & AI, but what about the humble data engineer who wants to harness the in-memory processing power within their ETL pipelines?

This session focuses on Azure Databricks as your data ingestion, transformation and curation tool of choice.

We will: 
Introduce the Databricks service & language options available
Discuss the hosting & compute options available
Demonstrate a sample data processing task
Compare against alternative approaches using SSIS, U-SQL and HDInsight
Demonstrate pipeline management & orchestration
Review the wider architectures and extension patterns

The session is aimed at Data Engineers seeking to put the Azure Databricks technology in the right context and learn how to use the service.

We will not be covering the Python programming language in detail.
By extending DevOps practices to SQL Server databases, high-performing organizations are removing the bottleneck traditionally caused by the database and benefiting from faster delivery, reduced downtime and improved compliance. But with companies like Skyscanner now releasing database changes 95 times a day, rather than once every six weeks, the demands on database administrators are greater than they have ever been before. In this session, we will discuss why the DBA is fundamental to DevOps success and the steps you can take to ensure database deployments can be made as frequently as the business demands, whilst also keeping your data safe.
SSIS has been around for some 14 years now, but how it works hasn't really changed, and neither have the use patterns that we see. The flexibility of SSIS is one of its greatest features, but also one of its greatest failings: some patterns and use cases actually prevent your ETL from performing as well as it should. Join us for an explanation of why that is and what you can do about it. We'll even dissect a gnarly package and reduce its runtime from 15+ hours to a matter of seconds!
SQL Server 101, the first things you do with SQL Server, is really important, but sometimes those 'best practices' become groupthink and are not subject to challenge. To really understand WHY those settings are best practice needs further information and explanation. We'll take a look at common received wisdom for backups, indexes, transaction logs, partitioning, all sorts of stuff, and hopefully provide you with some knowledge to take back to your production environment. Who knows, you may even change stuff for the better?
Totin' ivory-stocked Winchesters, Azure Event Hubs ride into town like they owns the place, promising a better life for those that don't offer no resistance.
Service Broker, that old dude slumped in the back of the saloon, may still have some tricks up his sleeve, though - it was him in the White Hat not too long back, just some folks around here seems to have forgotten.
It's getting close to High Noon - whose side will *you* be on?
Deep learning has been used to write new Shakespearean sonnets, to imagine delicious new recipes, write hilarious Harry Potter novels and even come up with new names for beer! In this session we will understand what deep learning is, what neural nets are, and the steps required to build a deep learning model, and look at some of the great examples mentioned.

We will then turn our new skills to the problem most speakers have: writing session abstracts. Together we will develop a recurrent neural net designed to generate new session abstracts, entirely based on previously submitted sessions to SQL Server conferences. Will we be able to produce a session you would have attended? Come along and find out.
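To illustrate the generate-from-a-corpus idea in miniature, here is a character-level Markov chain sketch in Python; it is a deliberately simple stand-in for the neural net built in the session, and the toy corpus is made up:

```python
import random
from collections import defaultdict

# A character-level Markov chain: learn which character follows each
# short context in the corpus, then sample to generate new text.
# Much cruder than a neural net, but trained and sampled in the
# same spirit.
def train(corpus, order=3):
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, length=60, rng=None):
    # seed must be exactly `order` characters long
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-len(seed):])
        if not choices:
            break  # context never seen in the corpus
        out += rng.choice(choices)
    return out

corpus = "in this session we will look at sql server internals. " * 20
model = train(corpus)
abstract = generate(model, "in ")
assert abstract.startswith("in ")
```

With a real corpus of submitted abstracts the output gets entertainingly plausible; the neural version mostly improves on how much context it can remember.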

As Data Scientists we are great at machine learning, statistical modelling, visualising data and using data to tell a story. What are we not so good at? A lot of the core skills required in traditional software development. If you answer no to any of the following you need to attend this session.
  • Do you source control your models?
  • Do you test your models?
  • Do you deploy more than 10 per cent of your models to production?
  • Did you deploy the model?
In this session I will show you how to apply DevOps practices to speed up your development cycle and ensure that you have robust, deployable models. We will focus on the Azure cloud platform in particular; however, this is applicable to other cloud platforms.
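One of the practices above, testing your models, can be sketched very simply. The toy threshold "model" and held-out data below are invented for illustration; a real pipeline would run a test like this in CI against a serialised model produced by source-controlled training code:

```python
# A minimal sketch of treating a model like code: an automated test
# that gates deployment. The "model" is a toy threshold classifier.
def model_predict(temperature):
    """Toy model: predict failure (1) when temperature exceeds 80."""
    return 1 if temperature > 80 else 0

def test_model_meets_accuracy_bar():
    # A small held-out set checked on every commit, CI-style; if a
    # retrained model regresses, the deployment pipeline fails here.
    held_out = [(70, 0), (85, 1), (90, 1), (60, 0), (82, 1)]
    correct = sum(model_predict(x) == y for x, y in held_out)
    accuracy = correct / len(held_out)
    assert accuracy >= 0.8, f"model regressed: accuracy={accuracy}"

test_model_meets_accuracy_bar()
```

The same shape works under pytest or any CI runner: the accuracy bar becomes a versioned, reviewable artefact rather than a number someone once eyeballed in a notebook.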
SQL Server 2017 introduced support for Python. In this session we will look at creating a model in Python, and from there we will look at how Python integrates into SQL Server. We will build a model and then serialise it inside SQL Server, ready to be scored.

This session is more than an introduction to Python or SQL Server. In this session we will make a model and deploy it - that is a big deal. 
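The serialisation step at the heart of this approach can be sketched with the standard library alone. The toy model below is illustrative; in a real deployment the pickled bytes would typically be stored in a VARBINARY(MAX) column and deserialised for scoring in-database:

```python
import pickle

# Serialise a trained model to bytes so it can live in a SQL Server
# VARBINARY(MAX) column and be scored in-database later. A real
# pipeline would pickle e.g. a scikit-learn estimator the same way.
class ToyModel:
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x

model = ToyModel(weight=2.5)
payload = pickle.dumps(model)  # bytes, ready for a VARBINARY column
assert isinstance(payload, bytes)

# Later, on the scoring side, deserialise and predict.
restored = pickle.loads(payload)
assert restored.predict(4) == 10.0
```

Only unpickle payloads you trust (pickle executes arbitrary code on load), which is another reason to keep the model column write-locked to the training process.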

There are different ways to migrate data to an Azure SQL DW. During this session I will walk through all the steps you have to take care of if you want to use Data Factory V2,
from setting up the Data Management Gateway, Azure Data Factory and, of course, an Azure SQL DW.
After setting up these Azure components we will do the preparation, the migration itself, and the steps you have to take care of after the migration.
It will be a session with a lot of demos, and afterwards you should have a basic knowledge of how you can achieve such a migration yourself.
Azure Data Factory V2 is the newest version of ADF. With this new version you can now run your SSIS packages in the cloud without the need for a server. During this session I will walk you through the setup of the SSIS Integration Runtime, how you can use custom assemblies, how you can achieve optimal cost savings, and how you can schedule your SSIS packages. As you can see, there is a lot to talk about in 60 minutes.
Continuous Integration (CI) and Continuous Deployment (CD) have been the buzzwords of the last year. But how does this work for a BI solution? How can we build and deploy our SSIS packages and SSAS cubes in Azure DevOps? During this session I will walk you through all the necessary steps. At the end of the session you will know how to set up your own CI/CD pipelines for your projects with Azure DevOps. We are convinced that it will benefit BI projects, and I will show you how.
In this session, we will look at data science models and provide clear guidelines on how to select the best model(s) for your purposes. We will use examples in R and Python in Microsoft ML Server, and display the output in Power BI. We will also look at visualising the output of these models in Power BI, and finalizing the models for production success for the business.

This session is aimed at developers and power users who want to understand data science models better so they can become more involved in the data science process in their organization.
In this session, we will start with an overview of Azure Data Factory V2 concepts, then show you how you can use metadata to quickly build scalable serverless pipelines to move data from disparate data sources including On-Premises and Platform As A Service. Next, we will look at how to integrate the solution using continuous integration and deployment techniques. Finally, we will look at how to schedule, monitor and log our solution.
Whether you are just getting started with Azure Data Factory or looking to make your current data factory robust and enterprise-ready this session will take you to the next level.
Dimensional modeling is arguably one of the most important fundamentals of business intelligence. It is still relevant even as new technologies like Power BI and SSAS Tabular Models are becoming standard. Correctly modeling your organization's data not only protects the most important asset your company has but ensures that your data mart or data warehouse will be responsive and capable of accommodating emerging requirements. This session provides a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. Finally, we will cover physical design choices. This case study and demo based session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.
You have heard a lot about Azure SQL DW and may even be getting some pressure to move to this new platform; but what makes it so different from Azure SQL Database or even on-premises SQL Server? In this session, we will provide a solid overview of the concepts that make Azure SQL Data Warehouse different from other versions. We will also look a bit at common migration challenges and how to overcome them, as well as explore and debunk some basic myths about Azure SQL DW. Attend this dynamic session to ensure that your Azure SQL Data Warehouse has a fairy tale ending and you aren't left with a Data Warehouse pumpkin on your hands.
A common issue organizations face when developing applications is data lineage. The problem is how to correctly attribute data changes in a linear fashion that correlates to current data. Many use triggers, CDC, third-party tools, or roll their own auditing tool. Over time, these solutions become difficult to manage. In SQL Server 2016, a new feature called Temporal Tables was introduced which helps to simplify this common need. Much like the TARDIS in Doctor Who, temporal tables help us to navigate the data in time and space and bring the right data into the correct dimension. In this session, we will take a look at what temporal tables are, how they work and how you can implement them in your environment.
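In SQL Server 2016 a temporal table is declared with `PERIOD FOR SYSTEM_TIME` and `SYSTEM_VERSIONING = ON`, after which the engine maintains the history table for you. As a language-neutral illustration of what the engine does behind the scenes, here is a toy emulation in Python/SQLite: a trigger copies the old row version into a history table on every update (the table and column names are invented for the example).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE price (id INTEGER PRIMARY KEY, amount REAL,
                    valid_from TEXT DEFAULT (datetime('now')));
CREATE TABLE price_history (id INTEGER, amount REAL,
                            valid_from TEXT, valid_to TEXT);
-- Emulate system-versioning: preserve the old row on every UPDATE.
CREATE TRIGGER price_versioning BEFORE UPDATE ON price
BEGIN
    INSERT INTO price_history
    VALUES (OLD.id, OLD.amount, OLD.valid_from, datetime('now'));
END;
""")
conn.execute("INSERT INTO price (id, amount) VALUES (1, 9.99)")
conn.execute("UPDATE price SET amount = 12.49 WHERE id = 1")

# An "AS OF"-style lookup: the pre-update value survives in the history table.
(old_amount,) = conn.execute(
    "SELECT amount FROM price_history WHERE id = 1").fetchone()
print(old_amount)  # 9.99
```

The point of the SQL Server feature is that none of this plumbing is hand-written: the engine owns the history table, and `FOR SYSTEM_TIME AS OF` queries both tables transparently.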
It's not a question of whether or not the landscape for the common DBA is changing. Without a doubt, it is. Azure offers up a new world of possibilities for DBAs and we should all strive to learn it. In this session we'll cover some basic knowledge and terminology of Azure as well as how easy it is to incorporate Azure into your environment. We will stand up a new Azure virtual machine as well as set up a SQL DB. You will see how easy it is to accomplish this. This newfound knowledge will help propel your career into the new landscape.
Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level. You will walk away with a better understanding of how SQL Server stores data, and that knowledge will allow you to build better and faster databases.
This session is a comparison for DBAs with experience in Windows Server who will be taking on a new environment on Linux.
We will cover the basics of running SQL Server on Linux and Windows, side by side.
Containers allow great flexibility, which is one of the main reasons for their growing use.
We'll also see how to leverage scalable execution with parallelized tasks in containers, with up to five synchronized replicas.
Data lakes have been around for several years and there is still much hype and hyperbole surrounding their use. This session covers the basic design patterns and architectural principles to make sure you are using the data lake and underlying technologies effectively. We will cover things like best practices for data ingestion and recommendations on file formats as well as designing effective zones and folder hierarchies to prevent the dreaded data swamp. We’ll also discuss how to consume and process data from a data lake. And we will cover the often overlooked areas of governance and security best practices. This session goes beyond corny puns and broken metaphors and provides real-world guidance from dozens of successful implementations in Azure.
Azure is cheaper, Azure is faster, Azure is more secure. Azure... everywhere is Azure. Everywhere is data.
If not today then certainly in the future (yes, believe me) you will face the question: how do I move my data from an on-premises data warehouse to Azure?
This session will present ways to do that and compare those methods. I will describe potential issues and give you hints on how to avoid them.
Finally, we will see what speed we can achieve during a migration.
Microsoft's services in Azure help us leverage big data more easily and make it ever more accessible for non-technical users. With the UI in ADF version 2, Microsoft added a new feature: Data Flow, which resembles the components of SSIS. It is a very user-friendly, no-code tool-set.
But is that just a new UI? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and reveal the power of modern big data processing without knowledge of languages such as Python or Scala?
We will review this new feature of ADF V2, do a deep dive to understand the techniques mentioned, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs Scala behind the scenes.
The task seems easy: maintain a database project in the code repository, treat it as the master version, and deploy evenly and frequently. Simple? Seemingly. Things become more complex as the number of objects in the database grows; when, instead of one database, we have over a dozen; when databases reference each other. And what about dictionary tables? Where do we keep them and how do we script them? Further issues arise when we want to control instance-level objects.
I will explain all of these topics in a session focused on the practical aspects of working with Microsoft Visual Studio Data Tools.
In the beginning we were manually deploying database changes and checking them into source control after the fact, using SSDT projects. This gave us the impression we were at least using source control, as we could track changes, and that we had CI builds… However, the true benefit of source control is using it as the source for your deployment! We quickly managed to use build pipelines to deploy SSIS packages, as we didn't have to worry about the data, but with complex data structures and the risky nature of state-based deployments the mountain seemed too difficult to overcome, or at least to find the time for! New publishing options in SSDT in 2017 spurred us on to finally start deploying database changes from source, as we could deploy with lower risk and avert any data loss if we so required.

Due to the risky nature of state-based deployments, such as the dreaded renaming or reordering of columns, we also started raising awareness and embedding some best practices in all developers. Test environment rebuilds of data structures could go undetected due to the smaller volumes, so we used SQLPackage.exe to carry out comparisons at CI time. This let us check XML breakdowns and the actual scripts of what was eventually going to be deployed against production, earlier in the development process.

We then started to get some quick wins of the smaller databases and proving that it was now possible to deploy databases using SSDT from source. At this time, we were using builds in TFS, but by embedding a database team member with the configuration team we quickly started to utilise TFS release pipelines. This also introduced TFS task groups and libraries, these were utilised for parameter-driven template deployment sequences and centralised secrets respectively.

The release pipelines involved quality gates where we introduced QA as approvers through the test environments and our operations team for deployment into production. After many discussions and feedback, we were then able to remove the necessity for release tickets for the majority of releases as the proof was in the pudding (of the approved release pipeline!). Progress was slow and difficult at this stage, but other tools such as Slack and a great culture of change really helped us on.

We also started to introduce Redgate SQL Change Automation projects, which was great for reference and configuration data due to its migration-based approach.

Later, combining the deployment of databases and their associated ETLs, followed by the automated execution of the related job using PowerShell, all in the test release pipelines, meant we started getting even faster feedback on our release packets. If a non-database developer was making changes to schema only, we could still assess whether that impacted other systems by failing the release if the job began to fail.

Migrating to VSTS and utilising Key Vault to link to libraries is also making the whole system even more platform-as-a-service, giving us an even lower infrastructure overhead. More recently we have introduced ARM deployments prior to the database schema, meaning we can combine entire data architectures in our release packets (infrastructure, database and ETL).

There is still some way to go, but I know that the majority of tasks can, and now will, be automated. I therefore want to share this journey with you and convince you that manual deployments of data architecture will be a thing of the past.
We debuted dbachecks 1.0 last year at SQL Bits after just two months of development. How did we do it? We used open source tools that enabled rapid development. 

Join MVPs Chrissy LeMaire and Rob Sewell to learn how you can use a similar approach to quickly build PowerShell modules for work or play!

In this session, we will build a useful PowerShell module, including piping, WhatIf support, and more. All from scratch!
To some people, NOLOCK is the magic turbo button that makes queries run faster; to others, it means your query results are going to be incorrect.

In this talk we look at what the NOLOCK hint actually does, with lots of demos to illustrate some of the interesting ways it can return incorrect results.

We then move on to how optimistic concurrency can solve many of the issues that NOLOCK is often used to 'fix'. We look at more demos to highlight some of the common gotchas that can trip us up when implementing optimistic concurrency, and how we can avoid them.

By the end of the session attendees should have a better understanding of why we should be careful with NOLOCK, of the transaction isolation levels that provide optimistic concurrency, and of which approaches are most suitable for which workloads.
Even with the introduction of big data and streaming data systems, the traditional star schema used in many data warehouses still has plenty to offer. In this talk we will look at some design patterns, tips and tricks to make loading and querying your data simpler, faster, and more reliable.

In this demo heavy presentation we will look at:
  • Using durable keys to simplify slowly changing dimension implementation where both historical and current context is required
  • Using inferred members when loading fact tables for increased resiliency
  • Where to use Columnstore indexes
  • Making sure statistics are up to date for better query performance
  • Design patterns for making SSIS more resilient to bad data
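The durable-key idea in the first bullet is worth unpacking: each dimension member keeps one durable key across all of its type-2 versions, while each version gets its own surrogate key, so a fact row can be reported against either its historical or its current attributes. A small illustrative sketch in Python (all names and data are invented for the example):

```python
# Type-2 dimension rows: the surrogate key changes per version, the durable
# key identifies the member across versions.
dim_customer = [
    {"surrogate_key": 1, "durable_key": 100, "city": "Leeds",  "is_current": False},
    {"surrogate_key": 2, "durable_key": 100, "city": "London", "is_current": True},
]
fact_sales = [{"customer_sk": 1, "amount": 50.0},
              {"customer_sk": 2, "amount": 75.0}]

# Historical context: join on the surrogate key captured at load time.
by_sk = {r["surrogate_key"]: r for r in dim_customer}
historical = [(by_sk[f["customer_sk"]]["city"], f["amount"]) for f in fact_sales]

# Current context: hop surrogate key -> durable key -> current version.
current_by_dk = {r["durable_key"]: r for r in dim_customer if r["is_current"]}
current = [(current_by_dk[by_sk[f["customer_sk"]]["durable_key"]]["city"],
            f["amount"]) for f in fact_sales]

print(historical)  # [('Leeds', 50.0), ('London', 75.0)]
print(current)     # [('London', 50.0), ('London', 75.0)]
```

In the warehouse this second hop is just an extra join (or a current-attribute view over the dimension), which is what makes the pattern simpler than maintaining separate type-1 and type-2 dimensions.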
With GDPR and the number of data breaches we see in the news, encrypting sensitive data is becoming more and more important. In this session we will start by understanding the basics of encryption, before moving onto look at the ways we can encrypt data in SQL Server and Azure SQL DB. We will cover:
  • Certificates, symmetric & asymmetric keys
  • Encryption algorithms
  • Encryption hierarchy
  • Transparent Data Encryption (TDE)
  • Always Encrypted
  • Dynamic Data Masking (DDM)
  • Encryption functions
  • Stored procedure signing
Attendees should leave with an understanding of the options available to them, and which options are most suitable for different scenarios.
The incredible Columnstore Indexes can multiply your analytical query processing speed. They are updatable (Clustered from SQL Server 2014 and Nonclustered from SQL Server 2016, respectively), but they continue to support different sets of functionality – such as Change Data Capture (Nonclustered Columnstore) or LOBs (Clustered Columnstore) – and this brings great confusion to the table.

This session will light your path on which functionality to use and when, even when one type of Columnstore Index does not at first seem to be the obvious choice for your scenario.
Every time you see a Columnstore Index getting involved in an execution plan, do you realise that there are whole execution plans behind those Index Scans? Did you ever ask yourself what those strange and weird HT_* waits stand for? Why do we wait for seconds or minutes on something like HTBUILD while it seems that nothing is happening?

Why do we have a ROWGROUP_VERSION wait on one server, while another allows queries to run faster? This session focuses on answering those questions – helping you understand the reasons and the conditions behind every single available wait for the Columnstore Indexes and the Batch Execution Mode.
With the upcoming appearance of SQL Server 2019, Microsoft is bringing the super-fast Batch Execution Mode to the processing of big amounts of data even for traditional Rowstore Indexes on SQL Server 2019 & Azure SQL DB. Learn with me how and when it will function, and which challenges we shall meet on the path to making our workloads run blazingly faster.
During this session Jorg & Henko will show the audience how to batch-analyze enormous volumes of data using cognitive machine learning algorithms. The solution will be based on a real-world end-to-end scenario using Azure Data Lake and Azure Databricks. After this session participants can start to implement this technology in their own solutions.
During this session Jorg & Dave will show the audience how to setup an end-to-end real-time analytics solution using Azure Databricks and Power BI Premium. This will include data ingestion, data analytics and data visualization, using the best practices at hand. 
The new world of Machine Learning can sound quite overwhelming! Do I need a PhD in Machine Learning to get involved? The answer, thankfully, is no. But what you do need, which is often neglected, is a solid understanding of the theory behind the models.

This session will look to do just that. Looking at some of the most popular algorithms used today, we will explore the maths involved, to help us understand our problems and get the best results possible. We will also be diving into practical examples, using Databricks to consume a dataset and to visualise results, with Python scripts to execute the Machine Learning models.

If you would like an introduction to the world of Machine Learning and to acquire a solid grounding that will help you develop the skill, then this is the session for you.
SQL Server Management Studio is at the heart of any SQL Server DBA or developer’s day. We take it for granted but rarely do we take a look at how we can customise or improve it to make our day to day work easier and more productive.
This presentation will take a look at many of the hidden features and shortcuts that you had forgotten about or didn't know were there, including some new features in SSMS.

At the end of this session you will have learnt at least one new feature of SSMS that you can use to improve your productivity.
In this presentation, we focus on three strengths at the bottom half of the scale: Basic (5 DTU), Standard S3 (100 DTU) and Premium P2 (250 DTU). Based on our measurements we have seen:

- Significant variations in performance between the different geographical locations
- Significant variations in performance for the same database at different times
- Surprisingly poor performance for the Standard S3 and Basic services

Based on our measurements we can conclude that an Azure database with a specific DTU strength does not necessarily provide the same performance at all cloud locations. For the Azure Basic databases, there was a 20% performance difference between the three tested locations (West Europe, West Japan and West US). For Standard S3 databases the difference had grown to 25%, while on Azure Premium P2 databases the performance difference was a staggering 100%. When the same test takes twice as long on one database as on another with the same DTU strength, it's surprising that it can be sold as the same product. In other words, ordering the same service at different locations you may end up with completely different underlying hardware. In addition, there are large variations in performance at the same location, which is indicative of an immature and unstable service.
Do you really understand dating? 
In this session you will learn a new way to look at dates and how to handle them gracefully.
After this session you will have mastered some smooth new tricks for working with dates.
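The session is about T-SQL's date and time handling, but the kind of "smooth trick" it promises translates to any language. One classic example, sketched here in Python: computing a month end without hard-coding month lengths, and preferring timezone-aware values over naive ones.

```python
from datetime import date, datetime, timedelta, timezone

def month_end(d):
    """Last day of d's month: jump to the 1st of the next month, step back one day."""
    first_of_next = (d.replace(day=1) + timedelta(days=32)).replace(day=1)
    return first_of_next - timedelta(days=1)

print(month_end(date(2019, 2, 10)))  # 2019-02-28
print(month_end(date(2020, 2, 10)))  # 2020-02-29 (leap years handled for free)

# A timezone-aware "now" avoids the ambiguity of naive datetimes.
utc_now = datetime.now(timezone.utc)
print(utc_now.tzinfo)  # UTC
```

The T-SQL equivalent of the first trick is `EOMONTH()`; the point is the same in both languages: let the date type do the calendar arithmetic rather than manipulating strings.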
A real-life example of how I implemented SQL Server Policy-Based Management to manage hundreds of SQL Servers across multiple data centres.

Often overlooked and underestimated, Policy-Based Management is a very powerful feature that allows you to stay compliant with a SQL Server estate of any size.

Virtualisation makes it easier and faster than ever to provision new servers, and the constantly increasing number of databases makes it difficult for DBAs and companies to stay on top of their environments.

In an hour-long session, we will discuss what Policy-Based Management is, how to leverage its power to auto-configure newly built servers, and how to enforce the desired configuration in production and non-production environments. I will show my own approach to a centrally managed master repository and different ways of policy distribution to make sure all policies stay in sync.

All this is based on my own experience gained whilst implementing PBM in a large scale environment. 
You created a wonderful Power BI report, but when you open it you wait too long. Changing a slicer selection is also slow. Where should you start analyzing the problem? What can you do to optimize it?

This session will guide you in analyzing the possible reasons for a slow Power BI report. By using Task Manager and DAX Studio, you will be able to determine whether you should change the report layout, or if there is something in DAX formulas or in the data model that is responsible for the slow response.
Every Power BI model has dates and the need for calculations over dates to aggregate and compare data, like Year-To-Date, Same-Period-Last-Year, Moving Average, and so on. Quick measures and DAX functions can help, but how do you manage holidays, working days, week-based fiscal calendars and other non-standard calculations?

This session provides the best practices to correctly shape a data model and to implement time intelligence calculations using both built-in DAX functions and custom DAX calculations for more complex and non-standard requirements.
It became possible to code Python in SQL Server 2017, but did it become necessary? Is this a vital missing piece of the SQL estate? A new opportunity to grasp with both hands? And should we have done this yesterday? Or is this bloatware, a cost on the server, the method and our actual brains?

Using a simple-to-understand movie rating system, I will build a series of classification models in both T-SQL and native Python. I will ask: are these models any good? I will visualise the results and in doing so not only answer this but pose (and answer) a bigger question: did using SQL Server add anything? Join me, you just might be surprised by the conclusion.

Topics covered are: Python, Python in SSMS, Machine Learning 101, Data Science, Overfitting vs Bias, Data Visualisation, the Scientific Method, Resource Management.
Have you ever started a warehouse or ETL project and realized that the data wasn't as "clean" as you were told?  If only you had profiled your data before you started then you wouldn't have to rework design elements, change code or redesign your database.  In this session we will talk about what data profiling is, why you should do it and how you can do it with tools that are already included in the Microsoft BI stack.
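The session uses tools already in the Microsoft BI stack (such as the SSIS Data Profiling Task), but the core idea of a column profile is easy to sketch generically. A minimal Python version, counting rows, nulls, distinct values and value lengths (the sample data is invented):

```python
from collections import Counter

def profile_column(values):
    """Minimal profile: row count, nulls/blanks, distinct values, lengths, top values."""
    non_null = [v for v in values if v is not None and v != ""]
    lengths = [len(str(v)) for v in non_null]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "min_len": min(lengths) if lengths else 0,
        "max_len": max(lengths) if lengths else 0,
        "top_values": Counter(non_null).most_common(2),
    }

city = ["Leeds", "Leeds", None, "York", "", "Leeds"]
stats = profile_column(city)
print(stats["nulls"], stats["distinct"])  # 2 2
```

Running even this crude profile before designing the warehouse surfaces exactly the surprises the abstract warns about: unexpected nulls, blank strings masquerading as values, and wider-than-documented columns.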
They're just numbers, right?  A date's a date.  It's just string data, who cares?  I can't tell you how many times I've heard these phrases.  This session will help you understand why choosing the correct data type for your data is so important.  It affects data quality, storage and performance.  It can even produce incorrect query results.
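One of the "incorrect query results" pitfalls alluded to here is easy to demonstrate in any language: dates stored as strings compare lexically, not chronologically. A small illustrative sketch in Python:

```python
from datetime import date

# Dates stored as strings sort lexically, not chronologically...
as_strings = ["01/02/2019", "15/01/2019", "03/12/2018"]
print(sorted(as_strings)[0])   # '01/02/2019' -- the wrong "earliest" date

# ...while a proper date type sorts correctly.
as_dates = [date(2019, 2, 1), date(2019, 1, 15), date(2018, 12, 3)]
print(sorted(as_dates)[0])     # 2018-12-03
```

The same bug bites in T-SQL when a date lands in a VARCHAR column: range predicates and ORDER BY silently give wrong answers, on top of the storage and performance costs of the wrong type.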
The introduction of weak relationships in Power BI composite models enables new data modeling techniques. However, not all many-to-many relationships can be managed by using weak relationships.

The "classical" many-to-many relationship in a data warehouse is a design pattern requiring a bridge table, which is not required with a weak relationship in Power BI. The weak relationship can establish another type of many-to-many relationship that is different from the one commonly used in dimensional modeling, and it commonly solves a granularity issue when managing data coming from different data sources.

This session clarifies design patterns and best practices for using weak relationships and implementing different types of many-to-many relationships in Power BI.
Datatypes are an essential building block of relational databases, yet they are rarely given the consideration they deserve. Poor datatype choices can have a widespread impact on database design, performance, and the ability to scale to meet future needs.

This session will discuss how SQL Server stores records, and the affect that datatype choices can have on efficiency. From here, we will work through multiple scenarios showcasing issues that can silently have a significant negative impact on performance. Some of these might even be impacting your systems as we speak! Armed with this knowledge, you will leave this session with the ability to assess your systems, explain the value of proper datatyping to your colleagues and superiors, and correct sub-optimal datatypes within your environment.
Relationships are the foundation of any Power BI or Analysis Services Tabular data model with multiple entities. At first sight, this is a trivial concept, especially if one has a knowledge of relational data modeling. However, the ability to create multiple relationships between the same tables and the existence of bidirectional filters increase the complexity of this topic. In this session, we will discover the complexity behind relationships and how they work in complex and potentially ambiguous data models.
Ever struggled with a formula in DAX that does not compute what you want? It happened to us many times, and we know that the problem is always (really: ALWAYS) related to the evaluation context. Filter context, row context, and context transition are only the starting point of a deep dive into how DAX computes the evaluation context for a formula.
During the session, you will see several examples of formulas that lead to unexpected results and then you will learn the unifying theory of evaluation contexts based on expanded tables, filter context operators and blocking semantic. If you are serious about DAX, this session is a real must.
Aggregations were introduced in Power BI in 2018 as an optimization technique to manage large tables. By providing pre-aggregated tables, you can greatly improve the performance of a Tabular data model.
In this session we introduce the concept of aggregations and show several examples of their usage, examining their advantages and limitations, with the goal of building a solid understanding of how and when to use the feature in data models.
So you’re a SQL Server administrator and you just installed SQL Server on Linux. It’s a whole new world. Don’t fear, it’s just an operating system. It has all the same components Windows has, and in this session we’ll show you that. We will look at the Linux operating system architecture and show you where to look for the performance data you’re used to! Further, we'll dive into SQLPAL and how its architecture and internals enable high performance for your SQL Server. By the end of this session, you’ll be ready to go back to the office with a solid understanding of performance monitoring for Linux systems and SQL on Linux. We’ll look at the core system components of CPU, Disk, Memory, and Networking, monitoring techniques for each, and some of the new tools available, from DMVs to DBFS.

In this session, we’ll cover the following 
- System resource management concepts, CPU, memory, and disk
- Introduce SQLPAL architecture and internals and how its design enables high performance for SQL Server on Linux
You’ve heard the buzz about containers and Kubernetes, now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry and deploy it to our Azure Kubernetes Service cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced
Publishing containers to Azure Container Registry
Deploying Azure Kubernetes Service clusters
Scaling our container-based applications in Azure Kubernetes Service
In this session we will introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios such as single node, single head, and high availability designs. By the end of this session, you will understand what's needed to put your applications and data platform into production in a Kubernetes cluster.

Session Objectives:
Understand Kubernetes cluster architecture
Understand Services, Controllers, and Deployments
Designing Production Ready Kubernetes Clusters
Containers are taking over, changing the way systems are developed and deployed…and that’s NOT hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey learning container fundamentals in Docker, then look at some common container scenarios and introduce deployment automation with Kubernetes.

Prerequisites: Operating system concepts such as command line use and basic networking skills.
Connect with me to learn the basics of Power BI Embedded and how to incorporate Power BI reporting capabilities (reports and dashboards) in your own (web) applications. With the last major re-release (Summer 2017) it now has the same feature set as the Power BI Service.

And embedding a Power BI report in an application is something different than creating or modelling one: what web techniques do you or your team need? And what about security: can I use one report for all my clients?

With hands-on experience I will demonstrate in this session how to start and use Power BI Embedded to the full extent.
Connect with me to learn the basics of Microsoft Power BI’s open source visualization platform, build a custom visual from scratch, and then test and package the visual to check for full functionality on both Power BI and Excel.

With hands-on experience in creating and submitting custom visuals, I will explain and demonstrate in this session how to start creating your own visual, what the best practices are, and what extra steps are needed before submitting the visual to the Power BI Custom Visual marketplace.
In this session we will be covering the core concepts of Azure Databricks, and what you need to know to start getting great insights from your data. 
- A General Spark and Databricks overview
- Security
- Core Artifacts
- Different workload types
Here I want to share what I have learned from working with Cosmos DB in the field. I will talk about data modeling, the different modeling APIs, indexing and monitoring. I will spend most of the session focusing on the SQL API, although I will also speak about the other APIs, and some concepts apply to all of them.
What challenges are data professionals facing in understanding and protecting the data assets in their care?
Now that data protection is a C-level concern, who do you have to assure that the right thing is being done, and how should you go about it?
Data classification is an iterative, multi-phase task. Learn how to get started and get to 'done' on SQL Server, then use it to both drive policy, and to underpin compliant database devops.
We'll look at different approaches that we've seen in the real world, then deep dive on an implementation using the Redgate stack as an extension of the Microsoft platform capabilities.
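The "get started" step of a classification pass is often just pattern-matching column names against a rule set, with everything unmatched queued for human review in the next iteration. A minimal, hypothetical sketch in Python (the rules and labels are invented; the Redgate and SQL Server tooling uses richer heuristics, including data sampling):

```python
import re

# Illustrative classification rules: column-name patterns mapped to labels.
RULES = [
    (re.compile(r"email", re.I), "Contact Info"),
    (re.compile(r"(ssn|national_?id|passport)", re.I), "Highly Sensitive"),
    (re.compile(r"(dob|birth)", re.I), "Personal"),
]

def classify(column_name):
    for pattern, label in RULES:
        if pattern.search(column_name):
            return label
    return "Unclassified"  # queue for human review in a later iteration

columns = ["CustomerEmail", "DateOfBirth", "OrderTotal"]
print([classify(c) for c in columns])
# ['Contact Info', 'Personal', 'Unclassified']
```

The iterative part of the process is then shrinking the "Unclassified" bucket run over run, and feeding the confirmed labels into policy and devops tooling.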
Power BI Desktop is a versatile tool that can be used for modeling and reporting. When a model already exists, the tool can be used to simply develop a report which sources the model. However, when a model is required, it provides three development choices: Import, DirectQuery and Mixed (in preview). In this presentation, the four Power BI Desktop modes will be introduced, described and demonstrated. Also, guidance will be provided to help you choose the appropriate mode for your project.

In this session you will learn:
  • How to develop a Live Connection report
  • How to develop an import model
  • How to develop a DirectQuery model
  • How to develop a Composite model
  • How to determine the appropriate Power BI Desktop mode for your project
Power BI Desktop is a tool targeting business analysts and IT Pros. Analysts are typically concerned with smaller scale solutions, while IT Pros are likely to develop enterprise models which represent large data volumes, and which are expected to scale and deliver high performance to many users. There are many considerations and design techniques which are important to delivering a successful enterprise model.

In this session you will learn:
  • What to consider when designing an enterprise model
  • How and when to leverage Mixed Mode
  • How to accelerate performance over large data sources by using aggregations
  • How to determine the right capacity to host an enterprise model
This session will provide the know-how to deliver real-time Power BI dashboards. It will cover real-time dashboard tiles, the Power BI REST API, and Azure Stream Analytics (potentially incorporating predictive analytics). This session will be relevant to business analysts, developers and IT Pros.
Session will cover:
  • What's the big deal? - Basic overview of Power BI and its rivals, and why it's so popular
  • Where can it live? - The differences between Desktop, on-premises and Power BI Online
  • Who can it talk to? - Setting up the Power BI data gateway and data sources
  • To R or not to R - Supported R visualizations and limitations
  • F.A.Q. from developers, a few gotchas and considerations
  • Automated deployments
This session will describe how developers can deliver real-time dashboards, develop custom visuals and connectors, and embed rich interactive analytics in their apps. This presentation specifically targets experienced app developers, and also those curious to understand what developers can achieve with Power BI. Numerous demonstrations will put the theory into action.
Power BI Premium provides more consistent performance, support for large data volumes, and the flexibility of a unified self-service and enterprise BI platform for everyone in the organization. This session targets Power BI administrators, and content authors and publishers. It aims to help them understand the potential of Power BI Premium, and to explain how to design, deploy, monitor and troubleshoot scalable solutions.
Nowadays millions of people are telecommuting from either home or somewhere else than the office. 
As a small business owner, you may be working—right now—from the kitchen table, a coffee shop, hotel lounge, or a co-working space.  
By working remotely you’re part of a fast-growing trend in the modern workforce.  But no matter how great it is to be able to work from anywhere, it also takes a healthy dose of self-discipline to be as productive outside the confines of an office as you are within one. 
In other words, it’s easy to go from working remotely to, well, remotely working.
As business increases its pace, so too do governmental compliance issues. Both are leading to a place where failing to automate development and deployment will have serious negative impacts. The embrace of the automation and communication mechanisms outlined through DevOps is becoming a must-have skill for the modern DBA. This session will examine the approaches available to the data professional that result in a safe, but automated, approach to developing and deploying databases, exploring the needs of database development from self-service provisioning of development databases, to the data cleansing necessary for protection and compliance, and finally to automating a deployment to production. The faster pace, and the tighter compliance needed to meet it, demand that the DBA move to automation. This session will give you the tools to make that happen.
Depending on security, you'll have different levels of access to the source systems and databases.

The 3 most useful options to explore exactly where the data you report on is stored are:
  1. Read Only access options
    • Old fashioned manual browsing and documentation approach
  2. Read/Write access
    • Analysing stored procedures, identifying core tables, testing fields with edits
  3. SQL Profiler and tracing
    • Full admin access

An in-depth discussion of these 3 options and other effective tools, such as Redgate SQL Search and SSDT.

If you have legacy data warehouses, either third party or Microsoft, how do you approach a data migration? What patterns and practices do you need? How do you handle schema, data and business logic migration?

This session will answer the above questions and assist you in coming up with a strategy for migrating legacy data warehouses using a real life example.

We will cover the Azure services, patterns and practices: using tools to migrate schemas and then performance tuning the warehouse, how to migrate data in a repeatable pipeline, and considerations around reporting from multiple TBs of data.
Azure introduces a range of new services for transaction processing and analytics solutions which mean we don’t need to deploy virtual machines.  This session provides insight into how we see customers deploying evergreen and futureproof data solutions using services such as containers and platform data services like CosmosDB.  If you’d like to scale solutions on demand and never patch, backup or manage anti-virus on a server ever again, this session is for you!
Azure has been a great enabler for many organisations to remove barriers to deployment and scalability.  However, when cloud consumption grows unchecked, many organisations find themselves with spend spiralling, unsure how to attribute costs, and with a lack of controls and governance around spend.  Microsoft provide a number of commercial mechanisms to help customers optimise their Azure spend for data workloads, including Azure Hybrid Use Benefit and Reserved Instances.  These provide a trade-off between cost and flexibility, which may help in some circumstances.  Understanding how and when to use each is essential to optimising your Azure spend on data services.
Around 34% of SQL Server environments run on SQL Server 2008 or SQL Server 2008 R2, which reach end of support in July 2019.  Many organisations have not yet planned how to handle the end of support, and are unsure how to size the problem and plan.  Microsoft have provided options to continue running SQL Server 2008, and options to modernise to a newer SQL Server version or to Azure data services.  This session provides a robust approach to maintaining vendor support and getting the most from your data platform.
Tired of traditional DW ETLs that take hours or even days to populate your Data Warehouse, and then keep your data static for reporting until the next ETL schedule runs? Your business is asking for live reports, but your DW architecture is not designed for that and you don't know what to do? Join us to learn how we used Change Tracking to build a close-to-real-time Data Warehouse, the challenges we experienced, and how we tackled them.
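The incremental-load pattern that Change Tracking enables can be sketched in a few lines: each change carries a version number, and the ETL pulls only rows changed since the last version it synced. The table, column names and watermark here are invented for illustration (and use SQLite as a stand-in); SQL Server's real feature uses `CHANGETABLE(CHANGES ...)` and `CHANGE_TRACKING_CURRENT_VERSION()` rather than a hand-rolled version column.

```python
import sqlite3

# Toy source table with a hand-rolled change-version column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL, version INTEGER);
    INSERT INTO sales VALUES (1, 10.0, 1), (2, 20.0, 1);
""")

last_synced_version = 1  # watermark remembered from the previous ETL run

# New activity since the last load: one update, one insert.
conn.execute("UPDATE sales SET amount = 15.0, version = 2 WHERE id = 1")
conn.execute("INSERT INTO sales VALUES (3, 30.0, 2)")

# The incremental pull: only rows newer than the watermark,
# instead of re-reading the whole table on every ETL run.
changed = conn.execute(
    "SELECT id FROM sales WHERE version > ? ORDER BY id",
    (last_synced_version,),
).fetchall()
print(changed)  # [(1,), (3,)]
```

The key design point is that the warehouse load becomes proportional to the volume of change, not the size of the source table, which is what makes near-real-time refresh feasible.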
We developers spend a lot of time developing our computer / programming / data skills.  These skills are only part of what developers need to be successful. In this session I would like to talk about some of the soft skills that I have used when working with colleagues, customers and even my family.
Working as part of a team is not always easy, but there are things we can do to make it easier. In this session I will share some techniques and strategies, including:
Ways to make it easier for people to agree with you
Recognizing different people's talents
Learning how to agree
Strategies for working with people
The hidden power of listening
Working with challenging team members
And more
There are many wonderful tools, functions and special features in T-SQL.  Have you used Common Table Expressions? If you have not, then this session is for you.  In this session, we look at how to write them.  Then we will look at examples of how to use them to make your code easier to read and maintain.  Next, we will consider how they can make your code go faster, with the help of some examples.  In the last part of the session we will examine some special uses of Common Table Expressions.
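As a minimal taste of the readability point above, here is a CTE run against an in-memory SQLite database (the session itself targets T-SQL, but the `WITH` syntax is the same idea; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'alice', 50), (2, 'alice', 70), (3, 'bob', 20);
""")

# The CTE names an intermediate result set, so the outer query reads
# almost like plain English instead of a nested subquery.
rows = conn.execute("""
    WITH customer_totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer FROM customer_totals WHERE total > 100
""").fetchall()
print(rows)  # [('alice',)]
```

Without the CTE, the same query would bury the `GROUP BY` in a derived table; with it, each step has a name you can read top to bottom.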
Failing to deliver a well-designed Power BI Report can be a common reporting pitfall.  What good is quality data if it is not presented in a way that is meaningful or easily understood? Someone without any prior knowledge should be able to quickly understand a report without explanation and be quickly drawn to the key elements you want them to view.  This talk will walk through many elements of bad report design. Learn about visual cues and how certain chart types can convey data more accurately than others. Also, learn about the basic dos and don’ts of report design and layout, using easy-to-learn techniques that bring data to life.
Power BI offers a variety of ways to filter data. As a developer, it can be confusing to know when and where filters should be applied, how the client should be using them, or how to design them to work intuitively for the client. This presentation consists of two sections. Section one covers the filters that can be applied on import or in the data model; these are not filters the clients would use once the report is published. Section two covers the types of filters the client (or end user) would use once the report is published, including implicit chart filters, native slicers, and custom visual slicers from the marketplace.
Relational database engines are great, but with increasing data volumes and decreasing user patience we need to look for other options. Step forward ElasticSearch, a search engine for our data!
Our users and customers today want to ask new and interesting questions of the data that they have access to, but you don’t want the overhead of building and managing indexes for every query. Join me as I walk you through the essentials of what ElasticSearch is and how it can help you deliver a data platform that can satisfy the most demanding of users.

We will start by discussing use cases for ElasticSearch, and then we will set up a simple cluster and see how easy it is to insert, update and search for data using both a UI and API calls. We'll finish by looking at how a combination of text analysers and clever indexing strategies can help you to build a powerful, modern and highly scalable platform that can complement your existing or new infrastructure, and can continue to grow with you and your data.
Is it possible for multiple developers to work simultaneously on the same Tabular Model? What about branching strategies? As the native tooling (Visual Studio / SSDT) stores all the model metadata within just a single file, this can cause all sorts of issues in a source-controlled environment. In this session, we'll see how Tabular Editor can be used in a team setting to improve parallel development and branch handling. Using Tabular Editor's command-line options, all of this can be integrated into automated build and release pipelines, even when your developers prefer to stick to SSDT!

Attendees of this session should be familiar with Tabular Model development. Prior experience using Tabular Editor is not required.

Topics covered:

  • Breaking the Tabular Object Model into multiple files
  • Automation using Tabular Editor's command line
  • Branching strategies for Tabular Models
  • Tabular build and release pipelines in Azure DevOps
"Wait, what? Biml is not just for generating SSIS packages?"

Absolutely not! Come and see how you can use Biml (Business Intelligence Markup Language) to save time and speed up other Data Warehouse development tasks. You can generate complex T-SQL statements with Biml instead of using dynamic SQL, create test data, and even populate static dimensions.

Don't Repeat Yourself, start automating those boring, manual tasks today!
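Biml itself is XML with C# code nuggets, but the metadata-driven idea behind it can be sketched in a few lines of any language: loop over table metadata and emit the repetitive T-SQL instead of hand-writing it. The table and column names below are invented for illustration:

```python
# Hypothetical staging-table metadata: table name -> column list.
tables = {
    "DimCustomer": ["CustomerKey", "Name"],
    "DimProduct":  ["ProductKey", "Title", "Price"],
}

def truncate_and_insert(table, columns):
    # Emit a boilerplate truncate-and-load script for one table.
    cols = ", ".join(columns)
    return (f"TRUNCATE TABLE staging.{table};\n"
            f"INSERT INTO staging.{table} ({cols})\n"
            f"SELECT {cols} FROM source.{table};")

# One loop generates the script for every table in the metadata,
# which is the core of what Biml automates for SSIS and T-SQL.
scripts = [truncate_and_insert(t, c) for t, c in tables.items()]
print(scripts[0].splitlines()[0])  # TRUNCATE TABLE staging.DimCustomer;
```

Add a table to the metadata and the corresponding script appears for free; that is the "Don't Repeat Yourself" payoff the abstract describes.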
Is climate change for real? What does the data say? Join this session to see Power BI in action for good, and to see, understand and value the story that our world is telling us through data.
We will look at topics and debates, such as whether we should grow hemp as a viable alternative to cotton, and how we can make a difference. 
The biggest mistake we can make in climate change is thinking that someone else will solve the issue. But to do that, we need to understand it.
Join this session to understand the data on climate change, look at our options and learn the story of how it is impacting us now and in the future.
Over the last 10 years there has been a major shift in BI technologies. Modern BI tools like Power BI and SSAS Tabular are now more popular than ever. These tools are very easy to use and less rigid about data modeling compared to the old school SSAS MOLAP. Does that mean that data modeling has become less important? Is Kimball still the king, or is it time for some changes? In this session you will learn to identify signs that your data model needs a service check. We will discuss how traditional modeling techniques can be adjusted to fit into the modern world of BI with Power BI and SSAS Tabular. And finally, you will learn how common data modeling scenarios, like role-playing dimensions and mixed granularity, can be implemented in Power BI and SSAS Tabular.
It's no secret that a deadlock is not a good thing. The definition of a deadlock is straightforward and quite clear: it is an exceptional situation in which two concurrent queries request the same resources, but in a different order. A classic deadlock can occur when two concurrent transactions modify data in two tables in opposite orders. Unfortunately, in the real world deadlocks are often more complex and less obvious. One rule I always keep in mind is: "You cannot design a database in which deadlocks are impossible." So we have to deal with them. The algorithm is quite simple: catch, analyze, fix. In practice, the process can be challenging and can require different types of analysis.

In this session, we will learn and remind some basics about Locks and Transaction Isolation Levels and then analyze and solve as many deadlocks as we can during the session timeframe.
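The classic two-resource deadlock, and its standard fix, can be sketched outside of T-SQL using threads and locks as stand-ins for transactions and table locks (this is an illustration of the lock-ordering principle, not SQL Server itself):

```python
import threading

# Two locks stand in for two tables that both transactions touch.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transaction(name):
    # The fix: every transaction acquires the locks in the same global
    # order (A then B). If one worker took B first and the other took A
    # first, each could end up waiting on the other: a deadlock cycle.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=transaction, args=("tx1",))
t2 = threading.Thread(target=transaction, args=("tx2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['tx1', 'tx2'] - both complete, no cycle forms
```

A consistent access order is exactly the "modify the two tables in the same order" advice for the classic SQL deadlock; the session's harder cases arise when the ordering is hidden inside query plans rather than written in the code.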
When you create a new database and your database objects, do you ensure you have done everything you can to give it the chance to shine when things get tough? How many times have you seen objects created with all the defaults, which down the road become out of control due to the size, amount of data or even activity? If you really want to ensure your database and the objects contained in it can scale and perform well as it grows, you need to do the proper homework before you ever create the database. We will walk through the various factors that affect performance and scalability under real-life conditions and help you understand how to properly configure them up front to avoid issues down the road. Scalability is all about having a proper foundation to build on.
Creating an index seems to be a silver bullet for optimizing the performance of a query. If this were true, then most performance problems would be easy to avoid or fix, but they still exist. There are lots of cases where creating yet another index doesn't help, or even makes things worse. So when do indexes help, and when do you need to take a different approach, such as rewriting the query to remove a bad coding pattern? In this session I will show you when to create an index and which query patterns don't benefit from indexes. You will also see some of the reasons for plan warnings and cardinality errors, and how to isolate and avoid them.
Diving into parallel execution plans is challenging; understanding parallel plans and being able to read and troubleshoot them can be even more difficult. But it's doable! Come to this session to explore how parallelism in SQL Server works and what is important to know when you read and troubleshoot parallel plans. You will understand the CXPACKET wait, Cost Threshold for Parallelism, MAXDOP, parallel plan operators and much more. You will feel more confident in analyzing and troubleshooting SQL Server parallel executions.
In-Memory optimized tables are a feature introduced in SQL Server 2014 that is still underused. They provide significant performance gains for OLTP workloads, yet there are still a lot of concerns about their usage. The session will uncover the In-Memory OLTP architecture, the concerns about data durability, database startup and recovery, as well as some important considerations on the management of in-memory objects. The session will also go through a number of potential use cases, and how to implement them with less development effort and risk.
Different database engines are built to be good at a specific set of operations. 

Relational engines, for example, are typically optimised for transaction control and for protecting data from damage and loss during updates. They are typically not optimised for detecting fraud or making recommendations ("Customers who bought this book frequently bought...").  Graph databases are essentially the opposite: poor at transactions and good at tasks such as fraud detection and recommendations.  The key to using graph databases effectively is understanding not only how they work but why they were designed that way, in other words, understanding what underpins their strengths and weaknesses.  So this talk will explore their origins and how and why they work.
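Why graphs suit recommendations can be seen in miniature: "customers who bought this also bought" is just a two-hop traversal over a purchase graph. The data below is made up, and a real graph database would do this with an index-free traversal rather than a full scan:

```python
# Toy purchase graph: customer -> set of items bought.
purchases = {
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_c"},
    "carol": {"book_b", "book_c"},
}

def also_bought(item):
    # Hop 1: find customers who bought the item.
    # Hop 2: collect the other items those customers bought.
    recs = set()
    for customer, items in purchases.items():
        if item in items:
            recs |= items - {item}
    return recs

print(sorted(also_bought("book_a")))  # ['book_b', 'book_c']
```

In a relational engine the same question becomes a self-join over a large purchases table; in a graph engine it is a cheap walk from one node to its neighbours, which is the design trade-off the talk explores.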

Based on real-life scenarios, this audience-interactive session will cover some scenarios you might encounter whilst dealing with SQL Server databases, and you will be provided with some options about what to do. Members of the audience will then select from these options, and we will follow that path and see what the outcome is from there. Each selection will have a different outcome, and along the way you will probably learn some new things. Various topics like Azure, migrations and performance tuning might be covered.
Power BI Premium and Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will deep dive into exciting new and upcoming features. Various topics will be covered such as management of large, complex models, connectivity, programmability, performance, scalability, management of artifacts, source-control integration, and monitoring. Learn how to use Power BI Premium to create semantic models that are reused throughout large, enterprise organizations.
When someone tells you a query is slow, you need to have a process for figuring out why. 

If your process involves reboots, rebuilds, clearing caches, or digging through a folder full of dusty scripts that you're not sure of, you need to attend this talk.

We'll be looking at query plans, code, indexes, and how the way they get used can all change when you least expect it.

This is a demo heavy session, but you'll walk away with the steps and resources I use every day to solve people's worst problems.
The ability to baseline, monitor, review and compare time-correlated SQL Server performance metrics over a period of time is critical in any application. There are some excellent 3rd-party monitoring solutions available, but often, due to their cost, their use is limited to production environments only. With the free SQLWATCH.IO you can monitor not only your production SQL Server but also development, lab and test environments to ensure optimum performance before production deployment.

Join me in this hour-long session where I will take you through the concepts of SQL Server performance monitoring and what to look for, the design of SQLWATCH.IO, key functionality, installation, data collection and reports.

Some of the reports currently available include:
  • Performance Counters
  • Database Growth
  • Disk Space
  • Missing indexes
  • Index Statistics
  • Long running queries
  • and more....
Lots of sessions about indexes give yo