Voting is now open to help us choose the general sessions at SQLBits.

SQL Server and Azure SQL Database support a multitude of data types and functions related to dates and times, and chances are you've never heard of some of them.

In this session, we'll start with the science, explained in an easy-to-understand way. Then you'll get a rundown of each of the data types and how they work, finishing with the most useful date and time system functions, illustrated with practical examples.
There's a life beyond DATETIME and GETDATE(). Start that journey here and end up in a world that embraces the latest versions of SQL Server and Azure SQL Database, all in just over an hour.
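For a taste of what lies beyond DATETIME and GETDATE(), here is a minimal, hedged sketch of a few of the newer types and functions (the values are arbitrary; AT TIME ZONE needs SQL Server 2016 or later):

    -- Higher-precision and offset-aware alternatives to DATETIME/GETDATE()
    DECLARE @dt2 DATETIME2(7)      = SYSDATETIME();         -- up to 100ns precision
    DECLARE @dto DATETIMEOFFSET(7) = SYSDATETIMEOFFSET();   -- carries the UTC offset
    DECLARE @d   DATE              = GETDATE();             -- implicit conversion drops the time

    SELECT @dto AT TIME ZONE 'UTC'                      AS as_utc,        -- SQL Server 2016+
           EOMONTH(@d)                                  AS end_of_month,  -- SQL Server 2012+
           DATETIMEFROMPARTS(2019, 2, 28, 12, 30, 0, 0) AS from_parts;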
The lines between who manages what in Data Platform technologies are blurring, and the traditional role of the DBA is evolving. Are the skills and techniques that we have used over the last decade still applicable, or do we need to update our thinking and look at how we manage Data Platforms with a fresh pair of eyes?
In this talk, we will cover the Top 7 things that a DBA needs to know in order to manage a modern Data Platform solution, covering the key skills, technologies, and useful applications that ensure data is secure and systems perform at levels that are acceptable for our end users.
What are Azure SQL Database Managed Instances?
The range of options for storing data in Microsoft Azure keeps growing, and the most notable recent addition is the Managed Instance. But what is it, and why is it there? Join John as he walks through what these options are and how you might start using them.
Managed Instances add a new option for running workloads in the cloud, allowing near parity with a traditional on-premises SQL Server, including SQL Agent, cross-database queries, Service Broker, CDC, and many more, and overcoming many of the challenges of using Azure SQL Databases.
But what is the reality, how do we make use of it, and are there any gotchas we need to be aware of? That is what we will cover, going beyond the hype and looking at how we can make use of this new technology, working through a full migration, including workload analysis, selecting the appropriate migration pathway, and then putting it in place.
Whether you are a Developer or DBA, managing infrastructure as code is becoming more common, but what if you need to manage hybrid or multi-cloud deployments? Having one tool to do this can simplify management, and this is where Terraform comes in.

Together we will look at what Terraform is and how we can use it to simplify managing our infrastructure needs alongside our application code. Join me as I talk through everything from defining a single VM through to building production-ready infrastructure with the open-source tool Terraform.
In the age of GDPR having accurate data is vital, but how do we achieve this? Simple, we test it.
Testing data might seem like a simple task, but there are many levels of complexity, from defining and understanding the rules for the tests all the way through to implementing them: are you looking at one test at a time, or composite tests for the whole data entity?
Together we will explore the approaches that we can take for defining test strategies as well as how testing fits into a larger Master Data Management solution.
We all hate writing documentation, but it is essential for the stringent compliance requirements that we, as data professionals, have to support for the businesses we work for. We will look at ways to speed up documentation and minimise its overhead.
We will look at the key types of documentation, how we can use the data model to do a lot of the work for us, and the features and capabilities within SQL Server that we can use. Doing the work up-front allows us to automate and simplify the creation and maintenance of system documentation.
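As one illustration of letting the database document itself, extended properties can hold descriptions right in the metadata; a minimal sketch (the dbo.Orders table is hypothetical):

    -- Store a description against a table, then query it back from the catalog
    EXEC sys.sp_addextendedproperty
         @name = N'MS_Description',
         @value = N'One row per customer order header.',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Orders';

    SELECT t.name AS table_name, ep.value AS description
    FROM sys.tables AS t
    JOIN sys.extended_properties AS ep
      ON ep.major_id = t.object_id AND ep.minor_id = 0
    WHERE ep.class = 1 AND ep.name = N'MS_Description';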
You've heard about the “R” language and its growing popularity for data analysis. Now you need a walk-through of what is possible when analyzing your data? Then this session is for you:

You’ll get a short introduction to how R came to be, and what the R ecosystem looks like today. Then we will extract sales data for different companies from a Navision ERP database on SQL Server.
Our data will be cleaned, aggregated and enriched in the RStudio environment. We’ll generate different diagrams on-the-fly to gain first insights.
Finally we’ll see how to use the Shiny framework to display our data on a map, interactively changing our criteria, and showing us where the white spots really are.
By now, all the data pro world should have heard about the R language, especially since Microsoft is committed to integrating it into their data platform products. So you installed the R base system and the IDE of your choice. But it's like buying a new car - nobody is content with the standard configuration. You know there are packages to get you started with analysis and visualization, but which ones?

A bundle called The Tidyverse comes in handy, consisting of a philosophy of tidy data and some packages mostly (co-)authored by Hadley Wickham, one of the brightest minds in the R ecosystem. We will take a look at the most popular Tidyverse ingredients like tidyr, ggplot2, dplyr and readr, and we'll have lots of code demos on real world examples.
Microsoft introduced geospatial data types as far back as SQL Server 2008, and some enhancements followed in version 2012. Today, geo data is used almost everywhere. Time to refresh your memories of geometry and geography!

We'll walk through which data types are supported - from 0 to 2 dimensions, from points to polygons and some more - and see how to get spatial data into and out of SQL Server tables.

Then there are built-in functions to determine relationships between geo objects, such as intersection, inclusion or shortest distance.
And of course there will be examples of practical applications of geospatial data.
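To set the scene, a small sketch of the geography type and two of the built-in functions mentioned above (the coordinates are approximate and purely illustrative):

    -- Points use geography::Point(latitude, longitude, SRID); 4326 = WGS 84
    DECLARE @london  geography = geography::Point(51.5074, -0.1278, 4326);
    DECLARE @bristol geography = geography::Point(51.4545, -2.5879, 4326);

    SELECT @london.STDistance(@bristol) / 1000.0 AS shortest_distance_km;

    -- Inclusion/intersection test against a polygon (exterior ring drawn counter-clockwise)
    DECLARE @box geography = geography::STGeomFromText(
        'POLYGON((-3 51, 0 51, 0 52, -3 52, -3 51))', 4326);
    SELECT @box.STIntersects(@bristol) AS bristol_inside_box;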
Are you prepared if a tornado filled with sharks destroys your primary data center? You may have backed up your databases, but what about logins, agent jobs, availability groups or extended events? 

Join SQL Server and PowerShell MVP Chrissy LeMaire for this session as she demos how disaster recovery can be simplified using dbatools, the SQL Server Community's PowerShell module.
High availability (HA) can be highly important to DBAs. Whether it be log shipping, classic mirroring or availability groups, HA can also be a pain to set up. Until now.

Join MVPs Chrissy LeMaire and Rob Sewell for this session as they demo how high availability can be simplified using dbatools, the SQL Server Community's PowerShell module.
SQL Server 2019 is a major new update for the data professional, and there's a lot more to it than the big data stuff.

This session will highlight certain features that busy DBAs and developers will find useful. We'll look at UTF-8 support, row-mode query processing improvements, and Always Encrypted with Secure Enclaves. Finally, we'll look at a better sqlcmd by covering the command-line tool mssql-cli.

At the end of the session, you will have a better understanding of several new features in SQL Server 2019 which will help you as a database administrator or developer.

Demos will be included for most of the features covered.
Do you suffer from fear and loathing of sqlcmd? Let's look at a better way with the mssql-cli command line tool. Syntax highlighting? Check. Autocomplete? Check. Better formatting? Check. Cross-platform support? Check. SQL Server 2019 support? Of course!

In this session, we will first cover the basics of the mssql-cli tool, including how to install it. Then we'll see how to replace sqlcmd with mssql-cli and see some nifty tips and tricks to get the best out of your SQL Server instances from Windows, Linux and macOS.

This demo-heavy session will make you love the command line again, and regain control of your batch scripts.
SQL Server has a lot of different execution plan operators. By far the most interesting, and the most versatile, has to be the Hash Match operator. 

Hash Match is the only operator that can have either one or two inputs. It is the only operator that can either block, stream, or block partially. And it is one of just a few operators that contribute to the total memory grant of an execution plan. 

If you have ever looked at execution plans, you will have seen this operator, and you probably have a rough idea of what it does. But do you know EXACTLY what happens when this operator is used? In this two-hour 500-level session, we will dive deep into the bowels of the operator to learn how it performs.

It is going to be a wild ride, so keep your hands, arms, and legs inside the conference room at all times; and please remain seated until the presenter has come to a full stop.

Topics covered in this session include:
* What is an in-memory hash table and how exactly is it built?
* The logical operations supported by Hash Match: what do they do and how do they work?
* Memory usage: what is a memory grant, which factors are used to compute/estimate it? What exactly happens when a Hash Match operator has to spill? (Dynamic destaging, dynamic role-reversal, bail-out, bit-vector filtering)
* How is memory divided when multiple operators in a single plan use a memory grant?
* Hash teams: What are they, when are they used, what is the benefit?
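If you want something to experiment with beforehand, here is a minimal way to force the operator into a plan; dbo.Orders and dbo.Customers are placeholders for any two joinable tables:

    -- Force a Hash Match join so the operator shows up in the actual plan
    SELECT o.OrderId, c.Name
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c
      ON c.CustomerId = o.CustomerId
    OPTION (HASH JOIN);  -- build input hashed in memory, probe input streamed

    -- Spills to tempdb surface as hash warnings in the actual execution plan
    -- and via the sqlserver.hash_warning extended event.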
As announced in September 2018, SQL Server 2019 expands the "adaptive query processing" features of SQL 2017 and relabels them as "intelligent query processing". This name now covers many features, such as batch mode on rowstore, memory grant feedback, interleaved execution, adaptive joins, deferred compilation, and approximate query processing. 

In this high-paced session, we will look at all these features and cover some use cases where they might help - or hurt! - you.
SQL (the language) is not a third generation language, where the developer tells the computer every step it needs to take. It is a declarative language that specifies the required results. SQL Server itself will figure out what steps it takes to get to those results. Most of the time, that works very well.

But sometimes it doesn't. Sometimes a query takes too much time. You need to find out why, so you can fix it. That's where the execution plan comes in. In the execution plan, SQL Server exposes exactly which steps it took for your query, so you can see why it's slow.

However, execution plans can be daunting to the uninitiated. Especially for complex queries. Where do you even start?

In this session you will learn how to obtain execution plans, and how to start reading and understanding them.

Prerequisites: Attendees are expected to be well-versed in SQL and to have a fair understanding of indexes.
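As a small preview of the "how to obtain" part, two common ways to get at a plan (the demo queries are arbitrary):

    -- 1) Ask for the actual plan of a statement you run yourself
    SET STATISTICS XML ON;
    SELECT TOP (10) name, create_date FROM sys.objects ORDER BY create_date DESC;
    SET STATISTICS XML OFF;

    -- 2) Pull cached plans of the most expensive queries from the plan cache
    SELECT TOP (5) qs.total_elapsed_time, qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.total_elapsed_time DESC;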
You’ve just been given a server that is having problems and you need to diagnose it quickly. This session will take you through designing your own toolkit to help you quickly diagnose a wide array of problems. We will walk through scripts that will help you pinpoint various issues quickly and efficiently. This session will take you through:

- What’s on fire? – These scripts will help you diagnose what’s happening right now
- Specs – What hardware are you dealing with here (you’ll need to know this to make the appropriate decisions)?
- Settings – are the most important settings correct for your workload?
- Bottlenecks – We’ll see if there are any areas of the system that are throttling us.

By the end of this session, you should know what you need to do to start on your own kit. This kit is designed to be your lifeline, to fix servers quickly and get them working. All the code we’ll go through is either provided as part of this presentation or comes from open-source/community tools.
When your development team is up to a certain size, and often no matter what size it is, you want to start following best development practices. 

These include things like source control, multiple environments, deployment processes, and governance.

As Power BI content is developed using Power BI Desktop, and not Visual Studio as most Microsoft BI solutions are, these things can get tricky. In this session we will look at what Power BI has to offer when it comes to the development lifecycle.

We will look at the different options available to the developer when it comes to source control, multiple environments, deployment and distribution of Power BI content. Lastly, we will look at governance and see how it is possible to secure the content and audit the usage of Power BI.

For all these topics we will look at the capabilities Power BI offers and how we, in the company where I work, decided to implement them.
In this session we will create a Power App that allows users to check in their location. We will then create a Flow that takes that location, writes it to a Power BI data source and refreshes it. Finally, we will create a Power BI report that displays the data on a map.

Power Apps is a great tool that allows you to create a desktop or mobile app with minimal coding. The app we are creating in this session uses the Bing location services to get the user's location when a button is pressed.

The Microsoft Flow we create in this session will take the location and user information and write it to an Excel file. We will also look at a custom connector in Flow that will allow us to refresh a Power BI data set. 

In the Power BI report, we will connect to an Excel file with the location information in it and display it, including the location on a map.

The audience will take away useful information about Power Apps, Flow and Power BI, including all the code used.
Poor data quality has a cost. In this session I will share examples of data quality challenges and their impact:
  • Having correct data is essential for making correct decisions.
  • Data quality goes hand in hand with proper data modelling.
  • Knowledge about use cases and workload is important input to your data modelling.
  • Different kinds of compression can be relevant depending on your usage scenario.
  • Focusing on deadlines without having (data) quality in mind will hit you hard at a later point.

I will show a simple way of how you can get attention, but also how to integrate with an existing monitoring system.

Delivering good query performance and timely reports is important to business users, but how do you measure it from their perspective?
Row-Level Security (RLS) can be based on a foreign key relationship. This allows you to keep the data model unchanged.

Why you should not use SQL logins, but integrated security instead: passwords of SQL logins can be "recovered" - a guide on how to do it with a VM in Azure.

What is the performance impact? It is no different from an RLS solution with views joining on table-valued function(s). But the use of is_member can cause trouble; I will present a solution that caches AD role membership information. This includes a tiny PowerShell script and some advanced settings in the job.

I will show you how to take care of implicit knowledge, knowledge which could be extracted. In addition, I will demonstrate how to write tests to check that TVFs are working correctly.
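For orientation, a minimal sketch of the kind of predicate under discussion; the table, column and group names are hypothetical, and the session's cached variant replaces the IS_MEMBER call:

    -- Filter rows by AD group membership via a mapping table (data model stays unchanged)
    CREATE FUNCTION dbo.fn_RegionPredicate (@RegionId INT)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN
        SELECT 1 AS allowed
        FROM dbo.RegionAdGroup AS m          -- maps RegionId -> AD group name
        WHERE m.RegionId = @RegionId
          AND IS_MEMBER(m.AdGroupName) = 1;  -- the call that can cause trouble
    GO
    CREATE SECURITY POLICY dbo.SalesRls
        ADD FILTER PREDICATE dbo.fn_RegionPredicate(RegionId) ON dbo.Sales
        WITH (STATE = ON);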
Some examples of how to read and write different types of files, and why this is relevant for the automation of data analysis:

Write an XLS file using CLR, as well as doing it manually.
Write a CSV file using CLR to save output from an Rscript connecting to a cube.
Read an XML file stored in a FileTable.
Write and read through external tables (PolyBase).
Plot a JPG inside SQL Serve"R" 2016/2017.

My experience using CLR (statistical functions, comma-list concatenation, downsampling) and how to debug it.

Understand:
* the value of data wrangling skills.
* more about the Excel data file format.
* when and when not to use CLR.
Traditionally, when a server starts to reach its limit we have simply thrown more resources at it: more CPU, memory or disk. However, there comes a point, especially in the cloud, where it is no longer possible to add more resources to a database. Here we need a different solution: instead of scaling up we must scale out, sometimes called horizontal scaling or sharding. In this talk we will look at how to scale out an Azure SQL database using the Azure Elastic Database tools. We will look at the requirements and options for horizontal scaling in Azure, and then we will have a go at sharding an Azure SQL database, then querying and updating the different shards. We will be using T-SQL, PowerShell and C#, so come prepared for some serious coding.
Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA
Today, SQL Server DBAs are more likely than not, at some point in their careers, to come across Oracle and Oracle DBAs. To the unwary this can be very daunting: at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.

This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server and hopefully make Oracle not look quite so intimidating.

At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server. 
Although you won’t be ready to be an Oracle DBA it will give you a foundation to build on.
The word Kerberos can strike fear into a SQL DBA as well as many Windows Server Administrators.
What should be a straightforward and simple process can lead to all sorts of issues, and trying to resolve them can turn into a nightmare. This talk looks at the principles of Kerberos, how it applies to SQL Server, and what we need to do to ensure it works.
We will look at
  • What is the purpose of Kerberos in relation to SQL Server?
  • When do we need to use it?  Do we need to worry about it at all?
  • How  do we configure it?  What tools can we use?
  • Who can configure it?  Is it the DBA job to manage and configure Kerberos?
  • Why does it cause so many issues?
Because, on the face of it, setting up Kerberos for SQL Server is actually straightforward, but it is very easy to get wrong, and then sometimes very difficult to see what is wrong. Preview here: https://www.youtube.com/watch?v=uO9NqxizT_8
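One quick sanity check worth knowing before the session: ask SQL Server which authentication scheme your own connection actually negotiated:

    -- KERBEROS means the SPNs are working; NTLM on a domain connection
    -- usually points to a missing or misregistered SPN
    SELECT session_id, auth_scheme
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;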
It seems like every month we hear about another company having a major data breach, and GDPR raises the stakes with huge fines for those that lose or don't keep data safe. Ensuring that your data is secure has become more important than ever. With this in mind, in SQL Server 2016, Microsoft gave us three new features that have the potential to improve the security of your SQL database, either on-premises or in the cloud:
  • Dynamic Data Masking, which allows us to obfuscate data in real time (sketched in the example below)
  • Always Encrypted, which helps protect data both at rest and in motion with a master key
  • Row Level Security, which gives us control over who can see which rows in a table based on the user's rights
In this session we will have an overview of these important security features, with demos on how to configure them and make them work.
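As a taste of the first of those three features, a minimal Dynamic Data Masking sketch (the table and the demo_reader user are hypothetical):

    -- Masked columns return obfuscated values to users without the UNMASK permission
    CREATE TABLE dbo.MaskingDemo
    (
        Id    INT IDENTITY PRIMARY KEY,
        Email NVARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
        Card  VARCHAR(19)   MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')
    );

    GRANT SELECT ON dbo.MaskingDemo TO demo_reader;  -- sees aXXX@XXXX.com etc.
    -- GRANT UNMASK TO demo_reader;                  -- would reveal the real values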
It is very common to use temporary data structures in the database. In SQL Server, we can choose between temporary tables (#MyTable) and table variables (@MyTable). There are many differences between these two structures, some are obvious and well known, and some might surprise you.

The main difference in terms of performance is statistics, which exist for a temporary table, but do not exist for a table variable. For that reason, there can be a huge difference in performance of a stored procedure that uses one data structure or the other.

In this session, we will demonstrate the differences and analyze performance for various use cases. We will cover all kinds of ways to work with these data structures, such as OPTION (RECOMPILE) and trace flag 2453.

By the end of this session you will know exactly when and how to use each one in order to achieve the desired functionality with the best performance.
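A minimal sketch of the statistics difference at the heart of this session (sys.all_objects stands in for real data):

    -- Temporary table: gets statistics, so the optimizer sees realistic row counts
    CREATE TABLE #ids (id INT PRIMARY KEY);
    INSERT #ids SELECT object_id FROM sys.all_objects;
    SELECT COUNT(*) FROM #ids AS i
    JOIN sys.all_objects AS o ON o.object_id = i.id;

    -- Table variable: no statistics; estimated at 1 row unless recompiled
    DECLARE @ids TABLE (id INT PRIMARY KEY);
    INSERT @ids SELECT object_id FROM sys.all_objects;
    SELECT COUNT(*) FROM @ids AS i
    JOIN sys.all_objects AS o ON o.object_id = i.id
    OPTION (RECOMPILE);   -- with the hint, at least the actual row count is visible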
Big Data is everywhere these days. There are a lot of Big Data scenarios, from IoT (Internet of Things) to AI (Artificial Intelligence). The common challenge to all of these scenarios is to handle very large volume, velocity and variety of data (a.k.a “the 3 V’s”).

While there are plenty of other data platforms to handle Big Data, SQL Server 2017 offers a lot of features and capabilities to handle Big Data scenarios, and in many cases, it can be the best solution.

In this session, we will first identify these scenarios, and we will try to answer the question: “Is SQL Server the right tool for the job?”. We will then cover many features in SQL Server that can be used together to handle Big Data scenarios, such as: Delayed Durability, Machine Learning Services, Partitioning and Parallelism, XML & JSON Support, Real-Time Operational Analytics, Polybase, and more.
If you install a SQL Server instance by just clicking Next->Next->Next, it will be set up with a default configuration, which is pretty good for many environments. But this doesn’t mean that it is good for your environment. Some of the default configurations in SQL Server are actually not optimal. Some might be optimal for one environment, but not for another. And there are many recommended configurations that aren’t even part of the setup process.

In this session, we will present a recipe for installing SQL Server the right way (not the Next->Next->Next way). This recipe includes many recommendations for better performance, availability, security and manageability. We will explain and demonstrate each recommendation. By the end of the session, you will have a list of recommendations to apply to your existing as well as new SQL Server instances.
Monitoring page splits in SQL Server is useful. It can be very useful, for example, in order to choose the appropriate fill factor for an index. By monitoring the number of page splits for a particular index over a period of time, we can determine whether the fill factor for that index should be adjusted in order to reduce the number of page splits and improve the overall performance.

In this session we are going to demonstrate all the possible options to monitor page splits in SQL Server, and we are going to show that all of these options are wrong. SQL Server is lying, and we're going to prove it. We will then demonstrate the only (documented) method to correctly monitor page splits in SQL Server as of today.
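For reference, the approach most often published for this, sketched after Jonathan Kehayias' widely shared pattern (whether it matches the method the session reveals, you'll have to attend to find out; verify the map value for your build in sys.dm_xe_map_values):

    -- Count "bad" page splits per allocation unit via Extended Events
    CREATE EVENT SESSION [bad_page_splits] ON SERVER
    ADD EVENT sqlserver.transaction_log
    (
        WHERE operation = 11   -- 11 = LOP_DELETE_SPLIT
    )
    ADD TARGET package0.histogram
    (
        SET filtering_event_name = N'sqlserver.transaction_log',
            source = N'alloc_unit_id',
            source_type = 0    -- 0 = event column
    );
    ALTER EVENT SESSION [bad_page_splits] ON SERVER STATE = START;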
There are many methods and options in SSIS to handle errors during a package execution. You can use event handlers, event propagation, package transactions, precedence constraints, error rows, and more. Things get more complicated when your package has multiple levels of containers, or even when one package executes another. It is very easy (and common) to get lost and do things the wrong way.

In this session we will learn about all the options available for us to handle errors in SSIS, but more importantly, we will learn about the best practices and the way to do it right.
A common use case in many databases is a very large table, which serves as some kind of activity log, with an ever-increasing date/time column. This table is usually partitioned, and it suffers from heavy load of reads and writes. Such a table presents a challenge in terms of maintenance and performance. Activities such as loading data into the table, querying the table, rebuilding indexes or updating statistics become quite challenging.

The latest versions of SQL Server, including 2017, offer several new features that can make all these challenges go away. In this session we will analyze a use case involving such a large table. We will examine features such as Incremental Statistics, New Cardinality Estimation, Delayed Durability and Stretch Database, and we will apply them on our challenging table and see what happens...
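Two of those features in one hedged sketch; dbo.ActivityLog, its LogDate column and the partition number are hypothetical:

    -- Delayed Durability: trade a little durability for much cheaper log writes
    ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

    -- Incremental Statistics: maintain statistics per partition instead of whole-table
    CREATE STATISTICS s_LogDate ON dbo.ActivityLog (LogDate)
    WITH INCREMENTAL = ON;

    UPDATE STATISTICS dbo.ActivityLog (s_LogDate)
    WITH RESAMPLE ON PARTITIONS (42);   -- refresh only the hot partition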
Parameters are a fundamental part of T-SQL programming, whether they are used in stored procedures, in dynamic statements or in ad-hoc queries. Although widely used, most people aren’t aware of the crucial influence they have on query performance. In fact, wrong use of parameters is one of the common reasons for poor application performance.

Does your query sometimes run fast and sometimes slow, even when nothing’s changed? Has it ever happened to you that a stored procedure, which had always been running for less than a second, suddenly started to run for more than 5 seconds consistently, even when nothing had changed?

In this session we will learn about plan caching and how the query optimizer handles parameters. We will talk about the pros and cons of parameter sniffing (don’t worry if you don’t know what that means) as well as about simple vs. forced parameterization. But most importantly, we will learn how to identify performance problems caused by poor parameter handling, and we will also learn many techniques for solving these problems and boosting your application performance.
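A hedged sketch of the classic symptom and two of the standard countermeasures (dbo.Orders is a placeholder):

    CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
        @CustomerId INT
    AS
    BEGIN
        -- The plan is compiled for the first @CustomerId sniffed, then reused;
        -- great for some value distributions, terrible for others
        SELECT OrderId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
        --OPTION (RECOMPILE)                          -- fix 1: compile per execution
        --OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN)) -- fix 2: use the average density
    END;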
If you are releasing database changes, new reports, cubes or SSIS packages on a regular basis, you've probably offered up your share of blood, toil, tears and sweat on getting them delivered into production in working condition. DevOps is a way to bridge the gap between developers and IT professionals, and for that we need to address the toolchain to support the practices. Microsoft offers a set of tools that'll help you on your journey towards the end goal: maximize predictability, efficiency, security and maintainability of operational processes.

We will in detail be looking at:
  • Agile Development Frame of Mind
  • Visual Studio Online (tool)
  • Feature/Backlog/Work Item (concept)
  • Build Agents (tool)
  • PowerShell, Microsoft's Glue (tool)
  • How to set up a pipeline
In this session we will look at all the important topics needed to get your Power BI modelling skills to the next level. We will cover the in-memory engine, relationships, DAX, composite models and aggregations. This will set you on the path to mastering any modelling challenge with Power BI or Analysis Services.
How can AI make my BI applications smarter and more intelligent? Everybody is talking about AI, but what can you do with it today? Currently there are tools like Cognitive Services, Azure ML and the Azure Bot Framework, which can help in the classic ETL process to prepare or enrich data. Examples include analyzing large data streams from the IoT area, improving the demand planning of a call center, and analyzing social media for trends. If you already have your BI application in the cloud, Azure Data Factory, Logic Apps or Azure Stream Analytics would be the right components for an AI extension.
Microsoft Cognitive Services (formerly Project Oxford) are a set of APIs, SDKs and Services. They are available to developers to make their applications smarter, more engaging and easier to find. Cognitive services extend Microsoft's AI platform. 

This is a great playground for young and old. Here you can try out to your heart's content what might be in use tomorrow. With the various building blocks such as QnA Maker, Vision, Emotion, Face, Text Analytics or Recommendations (to name just a few), impressive applications can be put together in a short time.
In this session, we will explore real-world DAX and model performance scenarios, ranging from memory pressure to high CPU load. Once we identify them using the monitoring tools, watch as we dive into the model and improve performance by working through concrete examples. Will it be fixed by optimizing the DAX statement, or by making changes to the model? Come watch to find out!
One hot topic with Power BI is security. In this deep-dive session we will look at all aspects of Power BI security, from users and logging to where and how your data is stored, and we will even look at how to leverage additional Azure services to secure it even more.
How can you bring your existing on-premises data warehouse and reporting into the cloud? That was precisely the question more than a year ago. The aim was to use everything as a service. Azure offers many possibilities, with Azure SQL DB, Azure SQL DW, Azure Data Factory, Logic Apps, Stream Analytics and Power BI. Now, after more than a year in live operation, here is a short summary and evaluation on the subject of BI in Azure.
Azure offers a wide range of services that can be combined into a BI solution in the cloud. What possibilities does Azure currently offer to create a modern BI architecture? The components currently offered range from Azure SQL DB and SQL DWH to Data Factory, Stream Analytics, Logic App, Analysis Services and Power BI, to name a few. This is a very good toolbox, with which you can achieve your first successes very quickly. Step by step you will learn how to create the classic ETL in the cloud and analyze the results in Power BI.
Who doesn't know the problem: you sit in the bar and just don't know which cocktail to order?
The Cognitive Services offer three APIs, Face, Emotion and Recommendations, that can help you. How do you best combine these services to get a suggestion for your cocktail?
You use different apps to organize your work: Outlook, OneDrive, OneNote, SharePoint, Dynamics 365, Power BI and so on, all for different tasks. Microsoft introduced Flow to let these apps talk to each other. This allows us to create new automated workflows in an easy way.

In these workflows Power BI can play an important role. Power BI generates data alerts which can be used to create emails, work tasks or even start a new flow. Also, you can automatically publish data to Power BI from apps like Outlook and SharePoint to analyze your email and documents.

In this session, we’ll introduce Flow and look at use cases for integrating apps with Power BI. Through different demos, you will get a good understanding of automating new work processes with Flow and Power BI.
PowerApps, Power BI and Microsoft Flow... see how they can work together! If you’re using one of these services, you’ve probably heard of the other two but haven't used them yet. Am I right? You can use PowerApps, Power BI and Microsoft Flow independently, though combining these services gives you many more possibilities.

In this session, we’ll create multiple solutions together from scratch. This will give you a good understanding of combining PowerApps, Power BI and Microsoft Flow.
In this talk you will learn how to use Power BI to prototype/develop a BI solution in days and then (if needed) evolve it into a fully scalable Azure BI solution. The goal of the session is to show, with a real-world example, how to use Power BI as a prototype solution (with real data) and the process to scale it up to a fully scalable Azure BI solution using Azure AS and Azure SQL DB. Along the way I will share a few tips & tools that you can use to help you in that process.
In this talk you will learn a lot of tools, tips & hacks for the Power BI platform that will be very useful in your daily Power BI projects.
It's a very demo-oriented session, with tips on:
  • Governance
  • Deploy
  • DevOps
  • Productivity
  • Real Time
You don't have to be a superhero to make a difference to our planet; just a bit of programming skill will suffice. We can't imagine our lives without electricity: we use it for light and heat, to keep our food fresh, to work with computers and use mobile phones, and don't forget entertainment! We need electricity for driving cars, even more now with EVs. Electricity is generated mostly from fossil fuels. We can use nuclear power and hope that nobody makes a mistake, but people do make mistakes. That's why we build wind farms, solar panels and hydro power plants, but we can't force them to generate electricity when we want; they are not aware of World Cup finals. So how can we make sure we use more green power? We need to store electricity when these sources can generate it and use it when they produce less, but storing electricity is hard. We have to change the way we consume energy, and it has to be automatic, so people wouldn't even know. That's what we do at OVO Energy: using IoT devices to change power usage patterns, we create a virtual power plant which can be used when demand exceeds supply. I will show you how we use Azure IoT Hub to do that; you don't have to be a C or C++ developer to work with IoT.
There are cases where you create indexes to support query elements that rely on order, but the query optimizer seems to not realize this, and ends up applying a sort. This session demonstrates a number of such situations and provides workarounds that enable the optimizer to rely on index order and this way avoid unnecessary sorting.
Has your manager come to you and said "I expect the SQL Server machines to have zero downtime?" Have you been told to make your environment "Always On" without any guidance (or budget) as to how to do that or what that means? This session will walk you through the high availability options in on-premises SQL Server, the high availability options in Azure SQL Database, and how those can be combined to enable you to achieve the ambitious goals of your management. Beyond the academic knowledge, we'll discuss real world case studies covering exactly how your on-premises environments and Azure services can work together to keep your phone quiet at night.
Have you ever wanted to travel back through time and fix problems using modern techniques? Are you frustrated by having to design tables to look at data across time when it seems so simple to query "tell me this answer from this time"? Are you annoyed at having to build complicated security schemes to protect data? We can't time travel yet, but temporal tables allow us to traverse our data throughout time in a far easier fashion than before this feature was brought to SQL Server and Azure SQL Database. Row-level security allows us to secure that same data in a more direct and flexible way as well, freeing data modelers and developers from dealing with the complicated syntax and table topologies that previous homebuilt iterations of these features required. Come learn how your job as a DBA or database developer has gotten a bit easier (and how you too can travel through time)!
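A minimal sketch of the time travel in question, using a system-versioned temporal table (SQL Server 2016+; the table names are hypothetical):

    CREATE TABLE dbo.ProductPrice
    (
        ProductId INT PRIMARY KEY,
        Amount    DECIMAL(10,2) NOT NULL,
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductPriceHistory));

    -- "Tell me this answer from this time", with no homebuilt history tables
    SELECT ProductId, Amount
    FROM dbo.ProductPrice FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00';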
Long gone are the days where the only architecture decision you had to make when scaling an environment was deciding which part of the datacenter would store your new server. There is a dizzying array of options available in the SQL Server and Azure ecosystems and those are evolving by the day. Is “the cloud” a fad? Are private datacenters a thing of the past? Could both questions have a kernel of truth in them? In this session I will go over real world scenarios and walk you through real world solutions that utilize your datacenter, cloud providers, and everything in between to keep your data highly available and your customers happy.
Have you ever wondered what it takes to keep an Always On availability group running and the users and administrators who depend on it happy? Let my experience maintaining several production Always On Availability Groups provide you some battle-tested information and hopefully save you some sleepless nights. From security tips to maintenance advice, come hear about some less than obvious tips that will keep users happy and the DBA’s phone quiet.
The job of a data professional is evolving rapidly, driving many of us to platforms and technologies that were not on our radar screen a few months ago. I am certainly no exception to that trend. Most of us aren't just monitoring backups and tuning queries - we are collaborating with teams throughout the company to provide them data and insights that drive decisions. Cloud providers are democratizing technologies and techniques that were complicated and proprietary just a few months ago. This presentation walks you through how a silly idea from a soccer podcast got me thinking about how Azure Logic Apps, the Cognitive Services API, Azure SQL DB, and Power BI combine to provide potentially powerful insights to any company with a social media and sales presence. Join me as I walk you through building a solution that can impact your company's bottom line - and potentially yours too!
Are you considering becoming a speaker, but feel nervous about getting on stage for the first time? Have you already presented a few sessions and want advice on how to improve? Do you learn more from seeing examples of what you should NOT do during a presentation instead of reading a list of bullet points on how to become a better speaker?

Don't worry! I have made plenty of presentation mistakes over the years so you won't have to :)

In this session, we will go through common presentation mistakes and how you can avoid them, as well as how you can prepare for those dreaded worst-case scenarios. Don't let those "uhms" and "uhhs" dominate your presentation, help the audience focus on the key message you're delivering instead of making them read a wall of text in your slides, recover gracefully from any demo failures, and stop distracting your attendees with floppy bunny hands.

All it takes is a little preparation and practice. You can do this!
In this session we'll go from zero to a fully working ADF v2 data-driven ELT process. We'll look at dynamic task properties and how to set these to get the most out of ADF v2.


We'll focus on moving from SQL Server to SQL Server but the principles can be expanded to move between a variety of data sources.

Using lots of demos and examples, attendees will leave with a good foundation and enough understanding to go and build their own flavour of a data-driven ELT process in ADF v2.
Have you ever wanted to add some element of application-style navigation and interface to your Power BI reports?
Then this session is for you!

In this session we'll cover a variety of techniques for creating things such as pop-up dialogues, drop-down menus and some clever navigation tricks, to name a few.

Using lots of demos and examples, this fun and light-hearted session will leave attendees with a variety of useful techniques that can be applied to their Power BI work immediately.
In this session we'll examine the ups and downs of being a consultant, and whether it's for you.

Full disclaimer: I LOVE being a consultant and I work for a fantastic company, but I do understand it's not for everyone.

I've been a consultant for many years, and this session will be based on my experiences and the opinions I've formed over the years, both from my own observations and from seeing many people come and go from the field.

This will be a fun and light-hearted session with lots of discussion and interaction with the attendees. Everyone who attends will leave with a good idea of how consultancy works and whether it's for them.
PowerApps is an exciting and easy to pick up application development platform offered by Microsoft.
In this session we'll take an overview look at PowerApps from both the development and deployment sides.

We'll go through the process of building and publishing an app and show the rapid value that PowerApps can offer.

Using plenty of demos and examples, attendees will leave with a well-rounded understanding of the PowerApps offering and how it could be used in their organisations.
This session will cover the basics of dynamic SQL: how, why and when you may wish to use it, with demos of use cases and scenarios where it can really save the day (trying to perform a search with a variable number of optional search terms, anyone?). We will also cover the performance and security impacts, touching on the effect on query plans, index usage and security (SQL injection!), along with some best practices.
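As a preview of the optional-search-terms trick done safely, a minimal sketch (dbo.Customers is a placeholder): build only the predicates you need, but keep the values parameterized.

    DECLARE @name NVARCHAR(50) = N'smith',
            @city NVARCHAR(50) = NULL;      -- optional term, left out of the WHERE clause

    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT Id, Name, City FROM dbo.Customers WHERE 1 = 1';
    IF @name IS NOT NULL SET @sql += N' AND Name = @name';
    IF @city IS NOT NULL SET @sql += N' AND City = @city';

    -- sp_executesql keeps the values as parameters: plan reuse, no SQL injection
    EXEC sys.sp_executesql @sql,
         N'@name NVARCHAR(50), @city NVARCHAR(50)',
         @name = @name, @city = @city;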
Have you ever had a requirement to provide a method for many users to insert or change data in a database with minimal time and/or tools in order to do this? Have you ever considered Excel for the job? Whilst probably not the most recommended way of interacting with SQL, an Excel tool is a highly customisable method you can develop quickly with little prior VBA knowledge and be able to use on most Windows computers instantly. Sounds good right?! During this session I will explain and demonstrate how to create an Excel application that enables multiple users to make changes to the database and show you how easy this is to distribute quickly to the users.

Despite several years as a SQL/BI Developer I was completely unaware that this was even a possibility; so if like me you are also surprised to find out that Excel can be used in such a manner then this demo is for you!

Note this session is primarily using VBA with very basic SQL strings to perform the INSERT/UPDATE/DELETE.
When using Azure SQL Database, you're paying for performance. In this session, you'll learn what tools and techniques are now available to help you be cost-effective. You'll see how to use features such as scaling, in-memory OLTP, and columnstore to minimize query run times and optimize resource use. Query Performance Insight and Automatic Tuning will be covered so you know how to monitor your environment and automate tuning. You'll be ready to get the most performance for the least amount of money from SQL Database. 
“I heard about this new feature, and I think it sounds great. I want to use it in my database.” Have you heard – or said – those words? You’re not alone. Too often, we dive into using a new feature without exploring what business or technical problem it will solve, then run into problems once we’re using it. In this session, I cover what problems can be solved using the major performance features of SQL Server – columnstore, in-memory OLTP, partitioning, Resource Governor, and more.
"Time and tide wait for no man", but SQL Server must wait for resources when executing a query. The lifecycle of a query is full of waiting, and SQL Server records when it's waiting, why, and for how long. By learning how each query is executed, and observing how waits can accumulate along the way, you can unlock the secrets to what resource or code changes are needed to improve your query and server performance. 
Do your users click run on a report or dashboard and then go for a cup of coffee... in another city? Optimization exercises can be fraught with disappointment if a scientific approach isn't part of the solution. This session will teach you a long-standing scientific approach to optimization and how to diagnose performance problems in Power BI.

It will begin with ensuring your Power BI environment is configured optimally, covering tips and tricks, followed by a methodology for performing an optimization review of a Power BI environment and the steps (along with demonstrations) of how to track logs, performance data and other information to diagnose the source of issues instead of guessing at the cause.
Join me to learn about Azure Cosmos DB. I will cover the following topics in this session:
  • Why do we need another database system?
  • How to setup Cosmos DB
  • How much does it cost?
  • Multi-Model APIs
  • Cosmos DB vs SQL Server
  • How to Import Data
  • How to use Cosmos DB Emulator
  • Cosmos DB Limitations
I will cover the following topics in this session:
  • What is Spatial Data
  • Available SQL Server Spatial Data types/objects
  • Common SQL Server Spatial Functions
  • Definition of a Spatial Reference Identifier
  • Selecting the right Spatial Data type
  • How to find the distance between two points in SQL Server
  • How to search by radius in SQL Server (both sketched in the example after this list)
  • Common Spatial Formats
  • How to import Spatial Data into a SQL Server database for free
  • Use SQL Server 2016 to create GeoJSON
  • Use SQL Server 2017 to cache Spatial Data
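The promised sketch of the distance and radius items (dbo.Shops, its Location column and its spatial index are hypothetical):

    -- All shops within 5 km of a point, nearest first
    DECLARE @here geography = geography::Point(51.5074, -0.1278, 4326);

    SELECT ShopId, Location.STDistance(@here) AS meters
    FROM dbo.Shops
    WHERE Location.STDistance(@here) <= 5000   -- a predicate a spatial index can support
    ORDER BY meters;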
SQL Server 2017 has great new additions and features. Join me to learn what's new in SQL Server 2017, with many demos. I will cover the following features:
  • Linux Support
  • Graph Tables
  • Intelligent Query Processing
  • Resumable Online Index Rebuild
  • How to run R/Python with Machine Learning Services
  • In-Memory Tables (NoSQL in SQL Server)
Authoring SSAS Tabular models using the standard tools (SSDT) can be a pain when working with large models. This is because SSDT keeps a connection open to a live workspace database, which needs to be synchronized with changes in the UI. This makes the developer experience slow and buggy at times, especially when working with larger models.

Tabular Editor is an open-source alternative that relies only on the Model.bim JSON metadata and the Tabular Object Model (TOM), thus providing an offline developer experience. Compared to SSDT, making changes to measures, calculated columns, display folders, etc. is lightning fast, and the UI provides a "what-you-see-is-what-you-get" model tree that lets you view Display Folders, Perspectives and Translations, making it much easier to manage and author large models. Combined with scripting functionality, a Best Practice Analyzer, command-line build and deployment, and much more, Tabular Editor is a must for every SSAS Tabular developer. The tool is completely free, and feedback, questions or feature requests are more than welcome.

This session will keep the PowerPoint slides to a minimum, focusing on demoing the capabilities of the tool. Attendees are assumed to be familiar with Tabular model development. https://tabulareditor.github.io/
Do you really need server backups? Yes. Unless you don't care about your data, or you don't mind having to completely recreate your database from scratch and losing all your data, you need a way to restore your data to a useful point in time.
In this session we will walk you through what happens inside SQL Server, with details on each action performed by the database engine when you run your backups (full, differential and t-log), allowing you to better understand and optimize the performance and trustworthiness of the backups of your data.
I've been working with SQL Server on Windows for well over a decade, but now it can run on Linux. I had to learn a lot of things to ramp up. Let me share those with you, so you can successfully manage SQL Server on Linux! In this session, I'll cover basic Linux commands, what to prep for installation, how to install, how to configure, and what you need to know to monitor and troubleshoot. 
In recent years, the idea of source control has become inextricably linked with git, the version control system created for the development of the Linux kernel.

Whilst the primitives of git are very simple, certain operations, including but not limited to branching, merging, resetting, rebasing, and reverting can be confusing to the uninitiated.

We will look at the most common developer interactions with git version control, using a mixture of command line tools, graphical clients, and IDE integrations, as well as covering how to extract ourselves from a few common difficulties.

We'll also discuss workflows for using git as part of a team, using the context of repository hosting services such as GitHub and Visual Studio Team Services.
It's uncontroversial to suggest that developers work more effectively given isolated development environments, which are not subject to "surprises" caused by the rest of the team or "change freezes" imposed by the release process.

In many circumstances, these can be provided using a myriad of Virtual Machines, but the case of applications which rely on a range of PaaS services such as Azure Functions, Azure Data Factory or Azure SQL Data Warehouse can be more complicated.

This session will discuss a way of dynamically creating and destroying an isolated end-to-end environment for every individual feature branch using the facilities provided by VSTS and Azure Resource Manager, as well as some reasons why you might or might not want to adopt such an approach.
In this session Ben will walk you through a home-made IoT project with the data ultimately landing in Power BI for visualisation. 

A Raspberry Pi is the IoT device, with sensors and camera attached to give an end-to-end streaming solution. 
You will see Python used to run the program and process the images. Microsoft Azure plays its part where Microsoft Cognitive Services enriches the images with facial attributes and recognition, Azure SQL stores the metadata, Azure Blob storage holds the images, Power BI visualises the activity and Microsoft Flow sends mobile notifications. 

You'll see enough to walk out and get your own project started straight away!
You know locking and blocking in SQL Server very well? You know how the isolation level influences locking? Perfect! Join me in this session for a deeper dive into how SQL Server implements physical locking with lightweight synchronization objects like latches and spinlocks. We will cover the differences between both, and their use cases in SQL Server. You will learn best practices for analyzing and resolving latch and spinlock contention for your performance-critical workload. At the end we will talk about lock-free data structures: what they are, and how they are used by the In-Memory OLTP technology that has been part of SQL Server since SQL Server 2014.
Have you ever looked at an execution plan that performs a join between two tables and wondered what a "Left Anti Semi Join" is? Joining two tables in SQL Server isn't the easiest part! Join me in this session where we will deep dive into how join processing happens in SQL Server. In the first step we lay out the foundation of logical join processing. We will then dive further into physical join processing in the execution plan, where we will also see the "Left Anti Semi Join". After attending this session you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will now be the easiest part for you.
UNIQUEIDENTIFIERs as primary keys in SQL Server - a good or bad best practice? They have a lot of pros for DEVs, but DBAs just cry when they see them enforced by default as unique clustered indexes. In this session we will cover the basics of UNIQUEIDENTIFIERs, why they are bad and sometimes even good, and how you can find out whether they affect the performance of your performance-critical database. If they are affecting your database negatively, you will also learn some best practices for resolving those performance limitations without changing your underlying application.
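The pattern in question, plus one common mitigation, in a minimal sketch (the table names are hypothetical):

    -- Random GUIDs as a clustered key: inserts land on random pages -> splits and fragmentation
    CREATE TABLE dbo.EventsRandom
    (
        Id      UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
        Payload NVARCHAR(200) NOT NULL
    );

    -- NEWSEQUENTIALID() keeps inserts (mostly) ascending without touching the application
    CREATE TABLE dbo.EventsSequential
    (
        Id      UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
        Payload NVARCHAR(200) NOT NULL
    );

    -- Compare the damage after loading both tables
    SELECT OBJECT_NAME(object_id) AS table_name, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
    WHERE object_id IN (OBJECT_ID('dbo.EventsRandom'), OBJECT_ID('dbo.EventsSequential'));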
SQL Server needs its locking mechanism to provide the isolation aspect of transactions. As a side-effect, your workload can run into deadlock situations - headache for you as a DBA is guaranteed! In this session we will look at the basics of locking and blocking in SQL Server. Based on that knowledge, you will learn about the various kinds of deadlocks that can occur in SQL Server, how to troubleshoot them, and how you can resolve them by changing your queries, your indexing strategy, and your database settings.

There are many different concepts and things you have to know to configure memory settings for SQL Server correctly. In this session you will get an overview of the memory architecture of SQL Server and how SQL Server uses memory internally. You will also learn how to track memory consumption inside SQL Server, and which memory configuration options are the most relevant for fine-tuning and troubleshooting SQL Server.
One of the newest buzzwords of the last few months and years is definitely Kubernetes, or K8s. With SQL Server 2019, Microsoft has made a huge investment in Kubernetes and provides a Docker container image that can run on top of it. In this session we will explore the key ideas of Kubernetes. You will learn the basic concepts around Kubernetes, and how you can run SQL Server 2019 successfully on it.
Power BI Premium enables you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will dive deep into exciting, new and upcoming features, including aggregations for big data, unlocking petabyte-scale datasets that were not possible before! We will uncover how the trillion-row demo was built in Power BI on top of HDI Spark. The session will focus on performance, scalability, and application lifecycle management (ALM). Learn how to use Power BI Premium to create semantic models that are reused throughout large enterprise organizations.
Database and application development need to be synchronized in order to provide the proper behavior; otherwise something will be broken.

Database unit tests can be the contract between database and application. This contract not only prevents the database from breaking its contract with the application, but also ensures that the database presents the expected behavior.

This talk will address the basic steps of introducing unit testing to SQL with tSQLt (a database unit testing framework), and of creating a deployment pipeline able to create a test environment (local machine, database as a service, Docker), run tests, create test reports and deploy if the build succeeds.

So, the plan is to show everything from writing the first test through to adding a full set of database tests to the deployment pipeline.
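For the curious, a minimal tSQLt test, assuming the framework is installed; dbo.Orders and the dbo.OrderTotal function under test are hypothetical:

    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test total is quantity times price]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.Orders';   -- isolate the test from real data
        INSERT INTO dbo.Orders (OrderId, Quantity, Price) VALUES (1, 3, 2.50);

        DECLARE @actual DECIMAL(10,2) = dbo.OrderTotal(1);

        EXEC tSQLt.AssertEquals @Expected = 7.50, @Actual = @actual;
    END;
    GO
    EXEC tSQLt.Run 'OrderTests';   -- the deployment pipeline gates on this result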
As a SQL DBA, you want to know that your SQL Server estate is compliant with the rules you have set up. Now there is a simple method using PowerShell, and you can get the results in Power BI or embed them into your CI/CD solution.

Details such as:

How long since your last backup?
How long since your last DBCC Check?
Are your Agent Operators Correct?
Are AutoClose, AutoShrink and Auto Update Stats set up correctly?
Is DAC Allowed?
Are your file growth settings correct, what about the number of VLFs?
Is your latency, TCP Port, PS remoting as you expect?
Is Page Verify, Data Purity, Compression correctly set up?
And many more checks (even your own) can be achieved using the dbachecks PowerShell module brought to you by the SQL Collaborative team.

and all configurable so that you can validate YOUR settings

Join one of the founders of the module, MVP Rob Sewell, and he will show you how easy it is to use this module and free up time for more important things, whilst keeping the confidence that your estate is as you would expect it to be.
A live demo of assembling a Raspberry Pi, breadboard and temperature sensor will be performed, extracting current temperature data and pushing it to IoT Hub, Stream Analytics and a live-streaming Power BI dashboard with Python.
No, not that sort of slacking. The Slack.com type of slacking. We'll be ignoring the gifs and looking at how using Slack, PoshBot, dbatools and a little bit of PowerShell glue you can build a simple solution that enables you to quickly respond to and fix problems from anywhere without having to carry anything more specialised than your smart phone. And we'll see how you can then extend that to allow you to hand off tasks to other users and teams in a safe secure manner.
A backup that's not been tested isn't worth the disk space it's wasting.

To make sure you can recover from a disaster you need to have confidence that your backups are working properly, and that you can restore them in the time you've promised your management. Testing this used to be really tricky, but with the dbatools project we've tried to make it as simple as possible.

In this session I'll show why testing your backups regularly and frequently is a vital part of a disaster recovery plan. I'll explain how this can be used to ensure you can meet your RTO and RPO, along with strategies that will stop you worrying about how long a recovery will take, and help you prove to your boss and auditors that your recovery strategy is rock solid and reliable.
Struggling to cope with extremely large tables? Is performance tuning a nightmare? Struggling to delete old data? 

The solution you're looking for could be partitioning. Properly introduced with SQL Server 2005, and available in all editions since SQL Server 2016 Service Pack 1, partitioning allows you to split your tables into manageable chunks without having to rebuild your code and applications.

In this session I'll take you through the benefits of partitioning, implementation best practices, pitfalls to avoid and strategies you can take away to use.
Where to start when your SQL Server is under pressure? If your server is misconfigured or strange things are happening, there are a lot of free tools and scripts available online. These tools will help you decide whether you have a problem you can fix yourself or whether you really need a specialized DBA to solve it. Those scripts and tools are written by renowned SQL Server specialists, and they provide you with insights into what might be wrong on your SQL Server in a quick and easy manner. You don't need extensive knowledge of SQL Server, nor do you need expensive tools, to do your primary analysis of what is going wrong. And in a lot of instances these tools will tell you that you can fix the problem yourself.
In this presentation we’ll go through Microsoft’s new tool, Azure Data Studio (formerly SQL Operations Studio), what it can do, and whether it can help in your environment:

    – Inbuilt T-SQL Editor (with IntelliSense) – Could you replace SSMS?
    – Smart T-SQL Code Snippets – This is new and a massive time saver
    – Customizable Dashboards for your server estate – great if you don’t have a monitoring solution currently
    – Connection Management – Group your servers to help you organise what’s important.
    – Integrated Terminal (run your PowerShell, sqlcmd, bcp etc. directly in Azure Data Studio)

We’ll show real life uses for this tool and leave you with the ability to see if it’s something you want to jump into and start using (to make your life easier).
The ability for multiple processes to query and update a database concurrently has long been a hallmark of database technology, but this feature can be implemented in many ways. This session will explore the different isolation levels supported by SQL Server and Azure SQL Database, why they exist, how they work, how they differ, and how In-Memory OLTP fits in. Demonstrations will also show how different isolation levels can determine not only the performance, but also the result set returned by a query. Additionally, attendees will learn how to choose the optimal isolation level for a given workload, and see how easy it can be to improve performance by adjusting isolation settings. An understanding of SQL Server's isolation levels can help relieve bottlenecks that no amount of query tuning or indexing can address - attend this session and gain Senior DBA-level skills on how to maximize your database's ability to process transactions concurrently.
Azure Cosmos DB has quickly become a buzzword in database circles over the past year, but what exactly is it, and why does it matter? This session will cover the basics of Azure Cosmos DB, how it works, and what it can do for your organization. You will learn how it differs from SQL Server and Azure SQL Database, what its strengths are, and how to leverage them. We will also discuss Azure Cosmos DB's partitioning, distribution, and consistency methods to gain an understanding of how they contribute to its unprecedented scalability. Finally we will demonstrate how to provision, connect to, and query Azure Cosmos DB. If you're wondering what Azure Cosmos DB is and why you should care, attend this session and learn why Azure Cosmos DB is an out-of-this-world tool you'll want in your data toolbox!
"Learn DAX", they said. "It'll be easy with your background", they said. Well, it turns out that it wasn't. Transitioning from SQL to DAX gave me nightmares and ulcers, and this session is for everyone who is looking over the edge and considering undertaking the challenge. It is in no way impossible, only frustrating, as DAX has a very different approach to data from what a SQL programmer is used to. In this introduction to the DAX language, I'll be putting a somewhat different spin on DAX from a beginners' standpoint.  I'll be going over the basic mental mistakes that many people trying to learn DAX do, how to solve them and how to put your brain on the right track!
We've all been there - the client wants a report, gives you some random requirements and expects you to "sort it out". When the report is done and presented to the client, the silence heralds yet another less-than-successful report delivery. It's time to turn this on its head: build the report as they watch, while having a constant discussion about the content! In this session I go through a new approach to capturing requirements and creating a report on the fly, all with the benefit of a happier client, a more relevant report and - perhaps surprisingly - less work for the report developer!
Technical speakers are most often very good at the "technical" part of speaking but sometimes leave the "speaking" part with something to be desired. The body language may be stilted (or in some cases nonexistent), the voice can be a dull monotone or the font size can make the demos unreadable from more than five feet away. We have all been there - the technical content is great but the presenter just won't cut it. This session is for anyone who either is or aspires to become a technical speaker, and I will go through tips and techniques for better understanding and using body language, how to use your voice for maximum benefit and finally give some pointers, ideas and software tips to make sure your demos are as clear and readable as possible.
Special care must be taken when managing virtualized database servers. Performance bottlenecks are everywhere, but most are silent and lurk in the shadows without VM admins knowing they exist. DBAs “feel” the performance problems but cannot find them in the database, and the blame starts to grow. These areas slow down business and create strife in the IT department. Come learn from experts in database virtualization all of the areas that VM administrators should focus on to maintain and improve performance of these heavy resource consumers, such as CPU scheduling techniques, vNUMA, and improving disk queueing. Leave this session armed with the deep-dive skills to troubleshoot and improve the performance of the most demanding virtualized database servers.

Think infrastructure in the cloud is still just for sysadmins? Think again! As your organization moves into the cloud, infrastructure architecture skills are more important than ever for DBAs to master. Expert knowledge of cloud-related infrastructure will help you maintain performance and availability for databases in the cloud. For example, know what an IOP is? Should you use a database-as-a-service or provision a cloud-based VM? How many compute resources does your database consume during a given day? Can you secure it properly? Come learn many of the key cloud infrastructure points that you should master as the DBA role continues to evolve!
If your boss asked you for the list of the five most CPU-hungry databases in your environment six months from now for an upcoming licensing review, could you come up with an answer? Performance data can be overwhelming, but this session can help you make sense of the mess. Twisting your brain and looking at the data in different ways can help you identify resource bottlenecks that are limiting your SQL Server performance today. Painting a clear picture of what your servers should be doing can help alert you when something is abnormal. Trending this data over time will help you project how much resource consumption you will have months away. Come learn how to extract meaning from your performance trends and how to use it to proactively manage the resource consumption from your SQL Servers.
Azure Data Factory V2 came with many new capabilities and improvements. One of the biggest game-changers is the Data Flow feature, allowing you to transform data at scale without having to write a single line of code.

In this session, we will look at the new Data Flow feature in Azure Data Factory (ADF), compare it to Data Flows in SQL Server Integration Services (SSIS), and build a demo to see the new data transformation capabilities in action. Along the way, we will cover design patterns, best practices, and lessons learned.

Cloud data integration has never been easier! 
Data virtualization is an alternative to Extract, Transform and Load (ETL) processes. It handles the complexity of integrating different data sources and formats without requiring you to replicate or move the data itself. Save time, minimize effort, and eliminate duplicate data by creating a virtual data layer using PolyBase in SQL Server.

In this session, we will first go through fundamental PolyBase concepts such as external data sources and external tables. Then, we will look at the PolyBase improvements in SQL Server 2019. Finally, we will create a virtual data layer that accesses and integrates both structured and unstructured data from different sources. Along the way, we will cover lessons learned, best practices, and known limitations.
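As a taste of those fundamentals, here is a hedged sketch of an external data source and external table using SQL Server 2019 PolyBase syntax; server names, credentials and locations are placeholders, and the exact LOCATION format depends on the source:

    -- Assumes a database master key already exists.
    CREATE DATABASE SCOPED CREDENTIAL SqlCred
    WITH IDENTITY = 'remote_user', SECRET = 'StrongPassword!1';

    CREATE EXTERNAL DATA SOURCE RemoteSql
    WITH (LOCATION = 'sqlserver://remotehost:1433', CREDENTIAL = SqlCred);

    CREATE EXTERNAL TABLE dbo.SalesExternal
    (
        SaleId int,
        Amount decimal(10, 2)
    )
    WITH (LOCATION = 'SalesDb.dbo.Sales', DATA_SOURCE = RemoteSql);

    -- Query it like a local table; the data stays where it is.
    SELECT TOP (10) * FROM dbo.SalesExternal;
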
Azure Data Studio is a modern, lightweight, and cross-platform editor for data professionals. It is built for data professionals who spend most of their time developing data solutions or editing queries, and works with both on-premises and cloud sources.

In this session, we will go through some of the core features of Azure Data Studio that can increase your productivity: IntelliSense, code snippets, dashboards, and extensions. We will also cover common settings, keyboard shortcuts, and the command palette. Finally, we will look at some of the similarities and differences between Azure Data Studio and SQL Server Management Studio (SSMS) to help you decide which editor is right for you.

Join us to learn about the latest Azure Data Studio updates and features, and see how you can increase your productivity!
You can have the perfect indexing strategy and go to great lengths to write optimizer- and access-path-friendly T-SQL, but there is a fundamental scalability anti-pattern that will prevent the transactional throughput of your workload from scaling. Because of this, more hardware does not equal more throughput. This is the anti-pattern whereby multiple threads require access to a resource that is singleton by nature, and a spinlock is required to ensure that only one thread can access it at a time. There are no knobs, levers or dials that can be altered to get around this problem. However, there is an innovative approach using containers that can help overcome this final frontier of scalability.
The in-memory OLTP engine has been around since the 2014 version of the database engine. Do you need to be a SaaS provider processing billions of transactions a day to use it? The short answer is absolutely not, and this session will show you why by presenting a number of use cases that most people should find useful, from the bulk loading of data to scalable sequence generation. But what is wrong with the legacy engine that we all know and love? Why do we need the in-memory engine? Along the way this session will provide an overview of what the in-memory engine is, why it is required, and why, with SQL Server 2016 Service Pack 1, it is more cost effective to use than ever before.
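For a flavour of the syntax, here is a minimal memory-optimised table sketch, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup; names are illustrative:

    CREATE TABLE dbo.StagingOrders
    (
        OrderId  int NOT NULL
            CONSTRAINT PK_StagingOrders PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Amount   money     NOT NULL,
        LoadedAt datetime2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON,
          DURABILITY = SCHEMA_ONLY);  -- non-durable: contents skip the log, ideal for bulk-load scratch data
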
Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2017 / 2019 has introduced many new features. In this session we will be learning quite a few of them. Here is a glimpse of the features we will cover: • Adaptive Query Plans • Batch Mode Adaptive Join • New cardinality estimation for optimal performance • Adaptive Query Processing • Indexing improvements • Introduction to Automatic Tuning / Intelligent Query Processing. This will be a most productive session for any DBA or developer who wants to quickly jump-start with SQL Server 2017 / 2019 and its new features.
Not many people know that too many indexes are bad not only for Insert, Update and Delete statements but for Select statements as well. We will dive deeper into this subject and show the behind-the-scenes stories with examples. We will also show how to solve a performance problem many were never even aware of. Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
“Oh! What did I do?” Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. In this session we will focus on some of the common errors and their resolution. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
Pop quiz DBA: Your developers are running rampant in production. Logic, reason, and threats have all failed. You're on the edge. What do you do? WHAT DO YOU DO?

Hint: You attend Revenge: The SQL!

This session will show you how to "correct" all those bad practices. Everyone logging in as sa? Running huge cursors? Using SELECT * and ad-hoc SQL? Stop them dead, without actually killing them. Ever dropped a table, or database, or WHERE clause? You can prevent that! And if you’re tired of folks ignoring your naming conventions, make them behave with Unicode…and take your revenge!

Revenge: The SQL! is fun and educational and may even have some practical use, but you’ll want to attend simply to indulge your Dark Side. Revenge: The SQL! assumes no liability and is not available in all 50 states. Do not taunt Revenge: The SQL! or Happy Fun Ball.
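As a hedged taste of the sort of "correction" involved, one way to stop dropped tables dead is a DDL trigger; this is a sketch only, so adapt the message and scope to your own environment:

    CREATE TRIGGER trg_StopTheDrops
    ON DATABASE
    FOR DROP_TABLE, DROP_PROCEDURE
    AS
    BEGIN
        RAISERROR ('Nice try. Objects here are not dropped without the DBA''s blessing.', 16, 1);
        ROLLBACK;  -- undo the DROP itself
    END;
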
How do you test your SSIS packages? Do you prepare them, set the parameters and variables, maybe get some sample or production data, and run them a few times by hand in SSDT? It's not a bad practice when you start your ETL journey, but after some time you'll probably think about automation. If not – you should. It's time for you to start automated SSIS unit and integration testing.

In this session, I will show you how to begin the adventure of testing packages using ssisUnit – the free SSIS testing library. I will build tests for example packages from scratch and then automate the whole process.
Data lakes have been the hyped technology of choice for several years, but what are they? What do they do? How do they work? And most importantly - why should I care? Depending on your job role, the answers to these questions can be very different.
With the second generation of Azure Data Lake Storage right around the corner, it's time to get to grips with this technology. Regardless of whether you're a DBA wanting to swim in a different pond, a BI architect already knee-deep in data, or a data scientist skeptical about drinking from something you know little or nothing about, this session will give you new insights and ideas. Let's wade through the hype and see where this piece of the puzzle fits in your data landscape!
Have you had performance tank despite the code working fine in another environment? Maybe heard that some SQL is bad but not why? If so, this is the session for you!
This session will start with a walkthrough of some of the basic settings in SQL Server and how they affect you as a developer. It follows with key tips on what settings to change, why some code will wreak havoc on your performance, and how isolation levels matter (and why NOLOCK can be an exceptionally bad idea!). The session is led by a 20-year DBA veteran who decided to try to help developers understand performance issues by seeing things from his perspective.
If you want to explore how default settings kill your performance, investigate why harmless SQL might not be quite so harmless and gain insight into how isolation levels affect function and performance, then this session will provide you with the tools to think outside the box and incorporate database engine knowledge into your developer prowess!
At conferences like this there are many sessions on what's new, cool new stuff, roadmaps and so on. But in real life we also have to work with existing and proven technologies. SQL Server Reporting Services is one of those. Many organizations still use it heavily and are dependent on it for creating paginated reports and web dashboards. And it's expected to be there for a long time.

In this session I will share miscellaneous SSRS tips and tricks. For instance, dealing with multi language scenarios, handling corporate styles, storing user preferences, creating a good dashboard experience in the portal, and more. I will try to share as many tips and tricks as will fit in one hour.

This session assumes you have a working experience with SSRS, but you need not be a guru.
Creating a proper Tabular Model is essential for the success of your modern BI solution. If you set up the foundations properly, you will benefit when building the relationships, formulas and visualizations. Also your Self-Service BI users will understand and use the data model better. This talk guides you through the process of creating a Tabular Model. The session will be packed with very practical tips and tricks and the steps you should do to create a proper model. The session is based on “real life” projects, and will be backed with some theory. After this hour you will understand how to create a proper model, how to optimize for memory usage and speed, enhance the user experience, use some DAX expressions and to use the right tools for the job. You will go home with a very useful step-by-step-guide.
Extended Support for SQL Server 2008 and 2008R2 ends in July 2019. With so many databases still using these versions, upgrade planning should be on everyone's current to-do list.

Microsoft's investment in tooling in this space will certainly help to ease the pain normally involved with upgrades.

This session will look at the tools available including:
  • Database Migration Assistant - to help you plan and identify the correct target version.
  • Database Experimentation Assistant - to make it easier to do A/B testing between options to check performance.
  • Database Migration Service - to move your DB to Azure SQL DB with almost zero downtime
Visualization is the most powerful way to disseminate information. Mistakes can be both intentional and unintentional, and can go unnoticed.

Humans are said to process images 60,000 times faster than text, but are we always seeing what is being shown? In this talk, we will look at ways a visual designer can intentionally or unintentionally confuse readers by using techniques that are common but not correct. We will discuss topics such as color theory, and chart selection and placement, among others. Come join us to learn what makes a visualization clear, and how to convey your story.
We have more information available to us today than ever before.  So much so that we run the risk of not being able to tell concise stories.  How are you presenting it?  Does it convey what you think?  There's a lot more to creating that story than just getting the correct information.

Is there a better way?  Come find out! 

Come learn not just the do's and don'ts, but the whys…
Relational databases have their strengths. Ironically, data relationships are not one of them. Graph databases excel in this department using nodes and edges. They are optimized to find and view relationships using graph theory.

One of the best new features of SQL Server 2017 is the graph database! It brings us the best of both worlds in one easy platform! Come learn about the history of graph databases, how they work and why you should be using them!
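To give a flavour of the syntax, here is a minimal sketch of SQL Server 2017 graph tables and a MATCH query; the tables and data are hypothetical:

    CREATE TABLE dbo.Person (PersonId int PRIMARY KEY, Name nvarchar(100)) AS NODE;
    CREATE TABLE dbo.Knows AS EDGE;

    INSERT INTO dbo.Person VALUES (1, N'Alice'), (2, N'Bob');

    INSERT INTO dbo.Knows ($from_id, $to_id)
    SELECT a.$node_id, b.$node_id
    FROM dbo.Person AS a, dbo.Person AS b
    WHERE a.PersonId = 1 AND b.PersonId = 2;

    -- Who does Alice know?
    SELECT friend.Name
    FROM dbo.Person AS p, dbo.Knows AS k, dbo.Person AS friend
    WHERE MATCH(p-(k)->friend) AND p.Name = N'Alice';
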
Welcome to the dungeon. Yes, SQL Server memory concepts are like entering a dungeon where you are guaranteed to get lost. It’s dark and complex out there and not many have come back alive. Join Microsoft Certified Master of SQL Server, Amit Bansal, and find your way out from the dungeon. In this deep-dive session you will understand SQL Server memory architecture, how the database engine consumes memory and how to track memory usage. You will also see a few practical examples of memory troubleshooting. Complex concepts will be made simple and you will see some light beyond the darkness. This session will be an eye-opener for you. Assured.
BI developers use Visual Studio for SSAS, SSRS and SSIS development all the time, but tend to forget to use Visual Studio for their database development.

A database project, even when you generate all your structures, opens the road to putting your database code under source control and moving forward to continuous integration/deployment for the database part of your data warehouse. You can start from scratch, or use your current databases as a starting point and move forward with the reverse-engineering option.
With a template in Visual Studio you can generate all your tables and views in a second from a metadata solution.

I'll show how to set up your database project, how to generate tables and views from metadata directly in the database project, and where a database project can help in deploying your entire solution quickly to all your environments, including security settings and static base data.
You have probably seen the Q&A feature in Power BI demoed many times, but is it just a demo feature? Do users really want to type questions and see Power BI visuals generated for them automatically? Does it work well in real-world scenarios? In this session you’ll learn:

  • What Q&A is, where it can be accessed by users and who it might be useful to
  • How the design of your dataset and reports can influence how well Q&A works
  • How advanced features like Synonyms and editing the Linguistic Schema file can enable more complex interactions 
  • Whether it’s worth your time and effort to use Q&A in your Power BI deployment
Dataflows are an important new data preparation and loading feature in Power BI. In this session you will learn:
  • What dataflows are and when you might want to use them
  • The advantages and disadvantages of using them over Power BI Desktop's data loading features
  • Configuring incremental refresh
  • Additional features available in Power BI Premium
  • Integration with Azure Data Lake Store, the Common Data Model and other Microsoft services
Microsoft's offering has seen many improvements lately in terms of providing suitable tools for managing the entire lifecycle (ALM) of Business Intelligence (BI) applications. However, for many reasons, uptake of these tools has not been particularly great. One such reason is a steep learning curve for many BI professionals, who came from analytical or admin backgrounds with little experience of .NET development with Team Foundation Server (TFS).

This presentation aims to provide a learning framework and show what is possible to achieve with TFS. It gives an overview of end-to-end architecture of MS BI ALM and practical tips on how to make it happen with TFS toolkit. The presentation will also cover unit testing using MS Test, continuous integration with MS Build and a demo of TFS Deployment Manager for a typical BI application. This will include a database project, SSIS, and SSAS projects.

The material does not assume prior knowledge of TFS administration, but some experience using TFS source control and general TFS terminology will be helpful.
Come and learn from my experience of using columnstore in anger for a 3TB compressed data warehouse and, amongst other things, discover the following:
- columnstore for temporary tables - method or madness
- patterns for columnstore dimensions
- data loading patterns and anti-patterns
- data management patterns and anti-patterns


The session assumes knowledge of the basic columnstore concepts: delta store, tuple mover, rowgroup and segment. Please familiarize yourself with these before coming to this session.
This session will focus on maps in Power BI. We'll look at the most appropriate maps for the kind of data you're visualising, with some tips and techniques for working with larger volumes of data. Through back-to-back demos, we'll focus particularly on the Map and ShapeMap visuals that come built in to Power BI, then take a deeper look at some of the more advanced capabilities available with the MapBox and Icon Map custom visuals.

The dbatools module now has over 300 commands, and anyone starting out is overwhelmed by the amount of functionality in this module.
There are not enough hours in the day to get everything done as a DBA. We need to automate our repetitive tasks to free up time for the important and more fun tasks.
In this session I'll show you a set of commands which will help you start automating your tasks.
Database administrators have the responsibility to provision other environments with production data. This is mostly done for developers that have to query against the most recent data.

We spend a lot of time and effort with that, because we want to do it right and we're dealing with lots of data in most of the cases.

What if I told you that you can provision your environments within minutes, and save lots of space in the process, using PowerShell?

Do you want to spend less time provisioning databases and save a lot of space in the process?
Come to my session and I'll show you how to do that.
Customers have feelings. If you meet with all your customers regularly you probably think you know how they feel. But what if you have millions of customers? How can you even begin to get to understand how each of those customers feel about your business?

This is where cognitive APIs come in. By harnessing the power of deep neural networks, we can accurately derive emotional insight from our data and use this to improve our offering to our customers. In this session we will look at the tools needed to connect your data to the right APIs and how we can leverage their capability. Starting out with sentiment analysis, we move to facial expression recognition and image tagging.

Attendees will gain the knowledge of how to connect different types of data to appropriate cognitive services. This talk would be of interest to anyone that deals regularly with customer data or has a curiosity toward data science and its application within a business.
In this demo-rich whirlwind session, we will run through the Microsoft AI stack to give you an understanding of what tooling is available and the why, how and where you would use it.

If you are careful not to blink, by the end of this session you will gain an understanding of the landscape of the various AI tools that are available and how you might utilise them.

Technologies we will cover are:

  • Azure Machine Learning Studio
  • Cognitive Services
  • Running R and Python in SQL
  • Azure Notebooks
  • Azure Databricks
  • and more


In this session we will look at the Machine Learning capabilities of SQL Server 2016, SQL Server 2017 and Azure SQL Database. We will discuss and demonstrate everything from simple in-database ML predictions to running deep neural networks for sentiment analysis and image categorisation.

We will look at how to operationalise your AI workloads with both SQL Server on-premises and in the cloud, as well as performance considerations for your workloads.

Technologies we will cover are 

  • SQL 2016/2017
  • Azure SQL Database
  • In-database machine learning with R and Python

In this session we will cover three different ways to create and operationalise Machine Learning models, from drag-and-drop interfaces to open source coding. Technologies we will cover are Azure Machine Learning Studio, SQL Server, and Azure Machine Learning Services with the new azureml Python SDK.

At the end of this session you will understand some of the tools available to make your AI development simpler and easier, and how to easily move from raw data to productionised predictive models.

In this session we will look at how Power BI interacts with some of the more common data services, and the considerations involved in choosing one over the other. We will look at the various SKUs of Power BI and the architectures in which you should choose one over the other. We will also look at the various security models and general gotchas in terms of designing a Power BI architecture.

ETL development can be packed with variety or as repetitive as WHILE 1 = 1 – and when it's the latter it's time-consuming, boring and error-prone. In this session I'll get the ball rolling with some basic dynamic T-SQL before supercharging it with metadata to generate (and re-generate) a variety of ETL components in T-SQL. We'll wrap up with some thoughts about how to tackle this in the real world with a heady mixture of good practice and metadata abstraction.
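As a tiny sketch of the starting point, metadata can drive statement generation instead of hand-writing each component; the meta.EtlTables driving table here is hypothetical:

    DECLARE @sql nvarchar(max) = N'';

    -- One TRUNCATE + INSERT pair per enabled table in the (hypothetical) metadata.
    SELECT @sql += N'TRUNCATE TABLE stg.' + QUOTENAME(TableName) + N';' + CHAR(10)
                 + N'INSERT INTO stg.'   + QUOTENAME(TableName)
                 + N' SELECT * FROM src.' + QUOTENAME(TableName) + N';' + CHAR(10)
    FROM meta.EtlTables
    WHERE IsEnabled = 1;

    EXEC sys.sp_executesql @sql;
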
SQL Server 2016 introduced an incredible monitoring and tuning technology called Query Store that provides us with insight into the optimizer's plan choices and the history of changes in plans. Query Store allows you to revert to a previous, better-performing plan through plan forcing. In this session, we'll see what kinds of query performance problems can be solved with Query Store and what kinds can't be. We'll look at how Query Store evaluates query performance information and how we can revert to an old plan. Finally, we'll see a new SQL Server 2017 technology called Automatic Plan Correction that is built on top of Query Store.
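For orientation, the core moves look roughly like this; the ids are illustrative, while the catalog views and procedure are the documented Query Store interface:

    -- Find the plans Query Store has captured for a suspect query.
    SELECT q.query_id, p.plan_id, rs.avg_duration
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    ORDER BY rs.avg_duration DESC;

    -- Force the previously good plan (and unforce it later if needed).
    EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
    -- EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
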
Documentation has never been this much fun! In this session I'll be introducing Graphviz – free, open-source, graph visualisation software with relevance that extends beyond traditional graph applications. I will show how we can use it to build informative visualisations of common data management artefacts, specifically ETL pipelines and SQL Server database diagrams. Combining the approach with sources of metadata we'll see how we can quickly and automatically generate suites of interlinked diagrams to describe large and complex systems in an easy-to-navigate way. 
As data engineers we're great at putting together and managing complex process flows, but what happens if we stop trying to control the flow and start thinking about the metadata it needs instead? In this session we'll look at a variety of ETL metadata, how we can use it to drive process execution, and see benefits quickly emerge. I'll talk about design principles for metadata-first process control and show how using this approach reduces complexity, enhances resilience and allows a suite of ETL processes adaptively to reorganise itself.
Tempdb has been a source of configuration confusion since the very first version of SQL Server. Each SQL Server instance has only one tempdb database, used by all users in all databases. We'll look at all the different uses of tempdb and how to configure your tempdb to support them. We'll look at all the different kinds of temporary tables and see when you should use each type: global and local temp tables, table variables and worktables. We'll explore the various tools available that let you monitor the use (and abuse) of your tempdb database, and we'll look at some best practice guidelines for how to keep your tempdb as healthy as possible.
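As a quick reference, the explicit temporary-object flavours the session compares look like this (worktables are created internally by the engine, for example for spools and sorts, so there is no syntax for them):

    CREATE TABLE #local   (i int);  -- local temp table: visible to this session only
    CREATE TABLE ##global (i int);  -- global temp table: visible to every session until all close
    DECLARE @tv TABLE     (i int);  -- table variable: batch-scoped, no statistics
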
As long as it is only a matter of SELECT, INSERT etc., you can put these statements in a stored procedure and users only need permission to run the procedure. That is, the stored procedure acts as a container for the permission. But you will find that this no longer seems to work when you use dynamic SQL, create non-temp tables or try other "advanced" things. The story is that the procedure can still act as a container for the permission, and users do not need to be granted elevated permissions, if you take extra steps in the form of certificate signing or EXECUTE AS. In this session you will learn how to use these techniques and why one of them is better than the other.

The session does not stop at the database level, but it also looks at how these techniques can be used to permit users to perform actions that require server-level permission in a controlled way, for instance letting a database owner see all sessions connected to his database but not other databases. As a side-effect of this, you will also learn why the TRUSTWORTHY setting for a database is dangerous.
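Here is a minimal sketch of the simpler of the two techniques, EXECUTE AS; the procedure and user names are hypothetical, and the session explains why certificate signing is often the better choice:

    -- The dynamic SQL inside runs with the owner's permissions,
    -- so the caller needs nothing beyond EXECUTE on the procedure.
    CREATE PROCEDURE dbo.CountRows @tbl sysname
    WITH EXECUTE AS OWNER
    AS
    BEGIN
        DECLARE @sql nvarchar(max) = N'SELECT COUNT(*) FROM ' + QUOTENAME(@tbl) + N';';
        EXEC sys.sp_executesql @sql;
    END;
    GO
    GRANT EXECUTE ON dbo.CountRows TO SomeUser;
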
Temporal tables, introduced with SQL Server 2016, have a number of useful applications within the data warehouse. This session provides an introduction to temporal tables and what you need to know about them in order to get started using them in your data warehouse.
Learn how to create temporal tables from scratch or convert existing tables, find out what the catches are and when you should (and probably shouldn't) consider using them, and how they can simplify the code needed in comparison to other solutions; we'll also take a brief look at performance in comparison to their alternatives. Finally, we walk through some real-world use cases.
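For a first look, creating a temporal table from scratch takes only a few extra lines; this is a minimal sketch with hypothetical names:

    CREATE TABLE dbo.Customer
    (
        CustomerId int           NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL,
        ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

    -- Point-in-time queries replace hand-rolled history joins:
    SELECT * FROM dbo.Customer FOR SYSTEM_TIME AS OF '2019-01-01';
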
You might have heard "don't use cursors, they are slow!". In this presentation, you will learn what this actually means: you should normally write set-based statements instead, and I will explain why they generally are faster than writing your own loops. But I will also look at situations where using a loop for one reason or another is preferable, and you will learn that the best way to run a loop in most cases is a cursor, provided that you implement it properly. The presentation also gives some tips on how you can troubleshoot performance problems with loops.
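When a loop really is warranted, a properly declared cursor is usually the best way to run it; a minimal sketch of the options that matter:

    DECLARE @name sysname;

    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR  -- the two options that keep a cursor cheap
        SELECT name FROM sys.databases;

    OPEN cur;
    FETCH NEXT FROM cur INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @name;  -- per-row work goes here
        FETCH NEXT FROM cur INTO @name;
    END;
    CLOSE cur;
    DEALLOCATE cur;
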
Early in your career you learnt that loops are bad and that you should use set-based statements. However, there are situations when trying to process everything at once gets you into trouble. In this session we will learn what these situations are and how we can address them by splitting up the work in batches. We will learn techniques for batching, and pitfalls to watch out for so that we don't introduce new performance issues.

We will also look at batching from a different angle: problems that require a loop for, say, a single customer, but where we can process all customers abreast for better performance.
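One classic batching shape, sketched with a hypothetical table: keep each transaction small so the log stays manageable and locks stay short:

    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000)
        FROM dbo.AuditLog
        WHERE LoggedAt < DATEADD(year, -1, SYSDATETIME());

        SET @rows = @@ROWCOUNT;  -- stop once a pass deletes nothing
    END;
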
SQL disk configuration and planning can really hurt you if you get it wrong in Azure. There is a lot more to getting SQL Server right on Azure VMs than next-next-next. Come along and dive deeper into Azure storage for SQL Server. Topics covered include:
  • SQL storage capacity planning concepts
  • Understanding storage accounts, VM limits, and disk types
  • Understanding and planning around throttling
  • Benchmarking
  • Optimal drive configuration
  • TempDB considerations in Azure
  • Hitting "max" disk throughput
Ever wondered what actually happens when a cube is “processed”, why it takes so long, how you can configure and optimise model processing, or what strategies people commonly use for improving processing? This session provides a deep dive into cube processing for tabular models, to help you understand how it works and what you can do to work with it. Come to this session for a better understanding of how to configure, optimise and tune cube processing. Included in the session are case studies from our performance lab and some sample tools to analyse processing logs.
You want to use all the new and sexy cloud-based data services like Power BI, Flow and hosted SSAS, but your data remains strictly on-premises. Come along to see how to use the data gateway to connect Azure to your on-premises data.

Content includes:
- Architecture
- Installation and Configuration
- Deployment Patterns
- Transferring Identity and UPNs.
- Implementing Row Level Security
- Monitoring and Performance
- Troubleshooting
Polybase was the go-to technology on SQL DW for fast ingest of bulk data. Learn how you can now use this extended feature in SQL Server 2019 to modernise your data ingest and ETL, even on traditional SQL Server running on-premises or in a VM.

We'll show you demos and performance best practices, as well as tools to fully automate data ingest using Polybase. Plus the data load face-off between Polybase, BCP, SSIS and the venerable OPENQUERY.

1 hour - no slides - 10 DAX queries and tips
From a simple calculated column to more advanced KPIs, this session will present 10 concrete use cases that need DAX calculation. We will explore the art of ratios, time intelligence measures and visual tricks, but also tools like DAX Studio.
Intended for beginners and end users, it is better if you know a little bit of DAX. For experienced users, it can be a good wrap-up too.
Discover the Advanced Analytics and Data Lake pattern in the Azure Data Platform through a complete demo: how do you get insights from text, photos and videos?
From different media files and raw data, we will analyse the sentiment of characters and extract valuable information into a Power BI dashboard, using Cognitive Services, CNTK, .NET and U-SQL.
This session will mainly showcase Azure Data Lake and the U-SQL language, but the demos will involve different tools such as Azure Data Factory for data supply chain and orchestration, Azure SQL Database for corporate data, and even Machine Learning techniques.
Even though this session is demo-driven, concepts and features of Azure Data technologies will be discussed.
OK, your Power BI report is ready and everything works on your desktop. Now you need to share it and work with others. How can you collaborate easily with colleagues and partners while respecting security constraints?
This session answers this question by detailing all the available options: native sharing, groups, apps, embedding, etc.
Best practices, recommendations and comparisons will be given during this session to guide you to the best choice for your company.
"Hell is other people" Huis-Clos, Jean-Paul Sartre, French Philosopher
How do you govern and administer your Power BI tenant?
In this session, we will review different ways and tools to administer Power BI. We will deal with common questions asked by customers regarding their Power BI tenant.
Some demos using Power BI API and PowerShell will be shown during the session.
In an ideal world, we would not need any error handling, because there would be no errors. But in the real world we need to have error handling in our stored procedures. Error handling in SQL Server is a most confusing topic, because there are such great inconsistencies. But that does not mean that we as database developers can hide our head in the sand. This presentation starts with a horror show of the many different actions SQL Server can take in case of an error. We will then learn how we should deal with this - what we should do and what we should not. We will learn that with SET XACT_ABORT we get better consistency. We will learn how TRY-CATCH works in SQL Server, and we will get a recipe for how to write CATCH blocks. More generally, we will learn why it pays off to be simple-minded to survive in this maze. The session mainly looks at traditional T-SQL code, but the session ends with a quick look at natively compiled stored procedures, where everything is different.
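The recipe boils down to a template along these lines (one pattern among several the session covers):

    CREATE PROCEDURE dbo.DoWork
    AS
    BEGIN
        SET XACT_ABORT, NOCOUNT ON;  -- XACT_ABORT gives the consistent behaviour discussed above
        BEGIN TRY
            BEGIN TRANSACTION;
            -- ... the real work goes here ...
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- re-raise the original error to the caller
        END CATCH;
    END;
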
Gone are the days where a third of every page on a Power BI report needed to be dedicated to slicers.
The introduction of new features such as bookmarks, buttons, synchronised slicers and report page tooltips provides tools that, used either individually or in combination, allow you to build reports with flexible interfaces that let users focus on the most important part of any report - the data.
This session will present practical examples of techniques you can start using to improve your reports' interfaces straight away.
Ever considered giving a presentation of your own? Pondered how your favorite speakers got their start? Contemplated whether you could ever do that too, but were not sure where to begin?

In this session I will show you how to get started. We will go over how to develop your idea, create session content, and share my favorite tips & tricks. 

You will leave armed with a wealth of resources (and hopefully some inspiration) to venture forth and develop your first presentation.
Ever wonder how the SQL Server storage engine processes your T-SQL queries? Curious about what else you could be doing to improve query performance?


Having basic exposure to SQL Server Internals enables one to write more effective T-SQL. Join me as we peek into the black box of the storage engine. Topics will include an overview of the storage engine, indexing, and the query optimizer, accompanied by practical consequences and solutions.


When you leave, you will have a greater understanding of the storage engine and how to avoid T-SQL design patterns that negatively impact performance.
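One example of such a pattern, sketched with hypothetical tables: a predicate that wraps the column in a function blinds the storage engine to an otherwise useful index:

    -- Non-sargable: the function on OrderDate forces a scan.
    SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2018;

    -- Sargable equivalent: a range predicate can seek an index on OrderDate.
    SELECT OrderId FROM dbo.Orders
    WHERE OrderDate >= '2018-01-01' AND OrderDate < '2019-01-01';
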
This certification exam prep session is designed for people with knowledge and expertise in analysing, modelling and visualizing data who are interested in taking Exam 70-778.

Attendees of this session can expect to review the topics covered in this exam in a fast-paced format, as well as receive some valuable test-taking techniques.

Attendees will leave with an understanding of how Microsoft certification works, what the key topics covered in the exam are, and an exhaustive look at resources for finalising preparation and getting ready for the exam.
Power BI is one of the best BI tools ever built. R is a great statistical analysis platform, and combining the two makes for a great experience in presenting advanced R analytics in Power BI.

This session includes a great demo to witness the power of Power BI with advanced R analytics.
"Empowering Every Person with Power BI" was selected as the Jury Award winner, with a jury headed by Will Thompson (Microsoft), at the Data & BI Summit in Dublin. This is a great opportunity to experience the same at SQLBits.

Global challenges are a new area for meaningful storytelling. The main focus is to spread awareness and empower every person with "Awareness Enabled Reports" built from sleeping open datasets. This session is packed with plenty of storytelling, compelling custom visuals and real-time datasets, collected across the globe, to empower every person on the planet by targeting global issues and creating awareness with simple storytelling dashboards and reports.
Create and deploy a website from your phone, make edits live, and learn about important web development concepts as we go along! This introduction will teach you key web development skills using the static site generator Hugo, whilst also getting you comfortable with source control and continuous deployment.



Using Hugo, an awesome static site generator, we'll start out by getting you set up with a starter website. From there, we'll start covering important concepts and give you time to follow along in creating your own awesome blog.



We'll overview:

1. The modern website and static site generators

2. HTML, CSS, and JavaScript frameworks

3. Hugo themes and templates

4. Consuming data to generate interactive visuals

5. Git workflow

6. Web infrastructure and continuous deployment
Real-time data is fundamental to the life of an organization and to the decision-making process; therefore, building a real-time analytical model becomes a business need.

In this session we will learn which processes are required to build a real-time analytical model along with batch-based workloads, which are the foundation of a Lambda architecture.

At the end of the session, you will have learned how to ingest, store, prepare and serve the data with Apache Kafka for HDInsight, Azure Data Lake Store Gen 2, Azure Databricks, SQL Data Warehouse and Power BI.
SSRS is a complex and oftentimes awkward beast, and being handed the reins and told to get to work can be a bit daunting. To make matters worse, there's often a general lack of knowledge in any given team, and training is hard to get; we end up spending most of our time frantically Googling for an answer every time we hit a roadblock. From "What is SSRS?" to managing completed reports in Report Manager, we'll go over everything you need to know to get working with SSRS.

This session aims to teach you all you need to know to be able to get to work with SSRS, so you can feel comfortable working with it, and be aware of common issues. It is aimed at people with little to no experience with SSRS.
You’ve mastered the basics of SSRS: creating charts and tables, adding datasets and data sources, publishing reports, but you know there’s more to it. How can you make your reports more interactive? How can you control the look and feel more freely? How can you stop it from looking awful in Excel and PDF? This session aims to answer these questions and more.

This session aims to build upon existing knowledge of SSRS, to enable users to get more from the platform. It is aimed at people with little to moderate experience with SSRS. 
Just started using Git? Finding it really hard to use? It's nowhere near as complicated as it looks! We'll go through an overview of Git, and dive into what you really need to know in order to be able to work effectively and easily with it; from getting set up on Git Hub and learning the lingo, to working with Git Gui, Git Bash, and my personal favourite: Git Kraken! We'll go over all the important functions, and learn the correct workflow for working with Git.
You've learned the basics of Excel (what's a cell? How do you write a sum? How does formatting work?) but it can be hard to make it do what you want. It's very common for new Excel users to build their sheets in a way that looks good but is useless for working with. This session aims to teach you the right way to use Excel, so you can access all it has to offer. We'll cover Excel's table functionality, pivot tables, VLOOKUPs, and even look at Power Pivot and how it can be used to link up your tables.
In this session Steve Morgan of Microsoft will aim to provide an explanation of why you need to worry less about High Availability solutions when using Azure SQL Database. He’ll also be talking about how “HA included” works under the hood and the various options associated with it.

This session will be mainly demo-led, with a look at how to set up and configure the High Availability options in Azure SQL Database, as well as the options for delivering zero data loss in the Azure environment.

Specifically this session will cover:
- An explanation of what Azure SQL Database “HA included” means and how it’s delivered
- The different options available even with automatic HA
- How geo-replication and failover groups can be used in Azure SQL Database to deliver protection against an entire Azure data centre outage
- The options for delivering zero data loss when running SQL Server databases on the Azure platform
As an experienced Data Warehouse engineer or architect, Databricks may well be one of those new-fangled tools you’ve heard about but never had time to look at in any detail, let alone work out where it may (or may not) be of use.
In this hands-on session Steve Morgan of Microsoft will provide an introduction to Azure Databricks, positioning it alongside the SQL Server stack and allowing you to understand its purpose, and therefore make an informed decision as to where it can be used in your Modern Data Warehouse architecture.
The session will talk about what Databricks is, as well as looking at how you set it up and work with it. We’ll cover the tools and environments, as well as having a brief look at its place within various Modern Data Warehouse architectures.
Using Python from the comfort of your own SQL Server Management Studio


The noise started far away: you need to learn Python, it whispered. We ignored it for a while because we could. Besides, we were busy: busy architecting databases, busy writing T-SQL and busy learning all of the other stuff on the MS Data Platform. Real data. Then something happened: you could code Python from within SSMS. The whisper had become deafening. This was now part of T-SQL coding. This was us.


Python had infiltrated our world, but what did it mean for us, the SQL folk? Did we need to up-skill? What now? Did we need new technology? Textbooks? New education?


Or could we just ignore it? This decade’s LINQ (sorry LINQ folks)?  This talk tackles these questions by coaxing Python out of its natural environment and debating the pros and cons of: SQL vs Python, Native Python vs Python in SSMS; and, ultimately, why we might just be able to have the best of both worlds - all from the comfort of your own SQL Server Management Studio.
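The T-SQL hook that made this possible is sp_execute_external_script; here is a minimal round trip, assuming Machine Learning Services (Python) is installed and external scripts are enabled:

    EXEC sys.sp_execute_external_script
        @language = N'Python',
        @script = N'OutputDataSet = InputDataSet  # hand the rows straight back',
        @input_data_1 = N'SELECT TOP (5) name FROM sys.objects';
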
Choosing the wrong data platform for your application can be an expensive mistake. There are a number of factors that should be considered when choosing a cloud data platform. In this session you will learn about all of the Microsoft cloud data offerings, and their strengths and weaknesses for different types of workloads. You will learn about application patterns and costs, and how to make the best decision for your application.
SQL Server is expensive. Licensing, hosting, personnel – it all costs money.

DBAs often focus on the Tech, overlooking the fact that they are viewed as a huge Cost Center by the Business.

In this session, I’ll be discussing ways in which you can reduce costs and improve Business value through:
  • SQL Server Consolidation
  • Getting the Licensing right
  • Using the right Edition of SQL Server
  • Architectural designs
  • On-premises and Cloud considerations
This session is suitable for all, whether you manage a SQL Server estate or pay for it. Get ready for that "gosh, you're expensive - what can we do about that then?" chat.
Containers have quietly been taking over the world of infrastructure, especially amongst developers and CI/CD practitioners. However, in the database space, container adoption has been lower. SQL Server 2017 introduced the concept of deploying databases into Docker containers. In this session, you will learn the fundamentals of creating containers, managing them with Kubernetes, and how to further your learning in this new and emerging space.
There are three main ways to deploy Always On Availability Groups (AGs) and Always On Failover Cluster Instances (FCIs) - physical hardware, virtualized, and IaaS in the public cloud. In SQL Server 2017 or 2019, these can be on Linux, Windows Server ... or both. While some deployment aspects are the same across all platforms and locations, each of the possible permutations and combinations affects how you plan, deploy, and administer AGs and FCIs. This session cuts right to the chase and will give you the top tips and tricks for successfully deploying and administering AGs and FCIs so you can be an availability hero no matter where you are deploying or what operating system you are using.
Many organizations would like to take advantage of the benefits of using a platform-as-a-service database like Azure SQL Database. Automated backups, patching, and costs are just some of the benefits. However, Azure SQL Database is not 100% feature-compatible with SQL Server—features like SQL Agent, CLR and Filestream are not supported. Migration to Azure SQL Database is also a challenge, as backup-and-restore and log shipping are not supported methods. Microsoft recently introduced Managed Instances—a new option that provides a bridge between on-premises or Azure VM implementations of SQL Server and Azure SQL Database. Managed Instances provide full SQL Server surface compatibility and support database sizes up to 35 TB. In this session, you will learn about migrating your databases to Managed Instances and developing applications for Managed Instances. You will also learn about the underlying high availability and disaster recovery options for the solution.
An insight into the life of a DBA by recounting real-world experiences.

Amongst other topics, I'll be discussing
• Consolidation, Virtualisation and Licensing 
• Guiding Developers, such as the use of appropriate DataTypes
• Compression
• How to help, not hinder
• Facilitating success and cleaning up mess
… and when to do that thing you're told not to do

Something for everyone:
• Developers, help your work go into Prod more smoothly
• Finance? I'll tell you how your DBA can save you money
• Want to be a DBA? I can help
• Already a DBA? There's plenty of empathy and some tales to make you think "it's not so bad"
2017 marked the 10th anniversary of when I left the world of being a full-time employee to become an independent consultant. Back then I was just trying to make it to year two. Ten years later, I've learned a lot and I'm still standing, with a few battle scars to prove it. Join me as I discuss what it takes to be (and survive as) a consultant, lessons learned - both good and bad - and how you can apply the principles of consulting, even if you are a full-time employee, to step up your game.
Many new SQL Server deployments suffer from using outdated concepts and "best practices" that date back 5, 10, or even 20 years. Some older concepts still apply, but the rules that apply to deployments using physical hardware, virtualization, and the various cloud deployment methods (IaaS specifically) have changed quite a bit. Whether you are a developer, DBA, IT pro, DevOps person, or anything in between - everyone needs a solid foundation for what is going on underneath SQL Server. To make sure your skills are up to date and you are getting the best availability, reliability, and performance, attend this session to update your knowledge about things like networking and storage. Come with an open mind to learn things that may challenge what you do today.
SQL Server gives us plenty of options when it comes to encrypting our data. But have you ever wanted to write your own encryption routines, perhaps you think that what SQL Server offers us doesn’t quite fit the bill for you?

I’m going to look into the basics of how encryption works and then we’ll learn how we can go about writing our own encryption routines within SQL Server. When we’re happy that those routines are secure, we’ll look at ways that we can go about cracking those routines.

Writing our own encryption within SQL might sound like a good idea and could even be something that you’ve tried out yourself but there’s a chance you might change your mind when you see how easily amateur cryptography can be broken.

The session covers:

Binary
Bitwise Logic in T-SQL
XOR Cypher
Caesar Cypher
Brute Force Attacks
Statistical Attacks
Plain Text Attacks
Ways to Enhance and Strengthen Encryption Algorithms
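As a toy taste of the bitwise logic involved, here is a single-byte XOR "cipher" in T-SQL - exactly the kind of amateur scheme the attacks above tear apart, since running it again with the same key decrypts it:

    DECLARE @plain varbinary(128) = CAST('Attack at dawn' AS varbinary(128)),
            @key   tinyint        = 0x5A,
            @out   varbinary(128) = 0x,
            @i     int            = 1;

    -- XOR each byte of the plaintext with the key byte.
    WHILE @i <= DATALENGTH(@plain)
    BEGIN
        SET @out = @out + CAST(CAST(SUBSTRING(@plain, @i, 1) AS tinyint) ^ @key AS binary(1));
        SET @i += 1;
    END;

    SELECT @out AS CipherText;  -- XOR this with the same key to get the plaintext back
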
One of the most useful tools to the DBA when we need to test new features, recreate a fault that we’ve seen in production or just want to see ‘what if…?’ is a test lab.

Some of you are going to be lucky enough to have a few servers kicking around or a chunk of the virtual environment that you can build a test lab in but not all of us do.

This session walks you through how to create a virtualised test lab right on your workstation or even laptop.

We will start by selecting a hypervisor, look at building a virtual machine, and then create a domain controller, a couple of clustered SQL Servers and finally a fully functioning availability group.

The session will cover:

Selection and installation of a hypervisor
Creating your first VM
Building a domain controller and setting up a Windows domain within your virtual environment
Setting up a Windows failover cluster
Installing a couple of SQL Servers
Creating a fully functioning availability group
One of the most highly anticipated new features in the SQL Server 2016 release was Query Store. It's referred to as the "flight data recorder" for SQL Server because it tracks query information over time – including the text, the plan, and execution statistics - and it allows you to force a plan for a query. When you include the new Automatic Plan Correction feature in SQL Server 2017, suddenly it seems like you might spend less time fighting fires and more time enjoying a lunch break that’s not at your desk. In this session, we'll walk through how to stabilize query performance with a series of demos designed to help you understand how you can immediately start to use plan forcing once you’ve upgraded to SQL Server 2016 or higher. We'll review how to force a plan, discuss the pitfalls you need to be aware of, and then dive into how you can leverage Automatic Plan Correction and reduce the time you spend on Severity 1 calls fighting fires. It’s time to embrace the future and learn how to make troubleshooting easier using the plethora of intelligent data natively captured in SQL Server and SQL Azure Database.
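If you want to peek ahead, turning on Automatic Plan Correction is a one-liner once Query Store is enabled, and the engine's recommendations are queryable:

    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

    -- What regressions has the engine spotted, and what did it do about them?
    SELECT reason, state, details
    FROM sys.dm_db_tuning_recommendations;
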
CROSS and OUTER APPLY are the Swiss Army knives of joins.
This session will show you how to apply these two join types to your code, not just for table-valued functions but to make your queries faster, shorter, easier to read, or simply to help sweep those data quality gremlins under the carpet and avoid errors.
Examples will be taken from situations the speaker has encountered within Business Intelligence work but will be equally applicable elsewhere.
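A minimal sketch of the pattern (table and column names are illustrative): the classic "latest row per parent" query.

    -- Most recent order per customer, via a correlated TOP (1)
    SELECT c.CustomerName, o.OrderDate, o.Total
    FROM dbo.Customers AS c
    CROSS APPLY (SELECT TOP (1) OrderDate, Total
                 FROM dbo.Orders
                 WHERE CustomerID = c.CustomerID
                 ORDER BY OrderDate DESC) AS o;
    -- Swap CROSS APPLY for OUTER APPLY to keep customers with no orders at all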
When things go wrong with Always On Availability Groups (AGs) or Always On Failover Cluster Instances (FCIs), it is not always apparent where to look and how to diagnose what happened. This session will help demystify where to look when you have a problem, as well as cover some of the more common issues you will see when implementing AGs and FCIs, whether you have them deployed on Windows Server or Linux.
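As a flavour of "where to look", the Always On DMVs are usually the first stop; a minimal health check, assuming an AG is already in place:

    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id)        AS database_name,
           drs.synchronization_state_desc, -- e.g. SYNCHRONIZED / SYNCHRONIZING
           drs.synchronization_health_desc -- e.g. HEALTHY / NOT_HEALTHY
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas           AS ar
      ON ar.replica_id = drs.replica_id;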

In this session we discuss SQL Server on Linux, focusing on installation and a few simple administration tasks, such as backing up and restoring databases.
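The nice surprise is that those tasks are plain T-SQL on Linux too; only the paths change. A minimal sketch (database name and path are illustrative):

    BACKUP DATABASE [SalesDB]
    TO DISK = N'/var/opt/mssql/data/SalesDB.bak'
    WITH COMPRESSION, CHECKSUM;

    RESTORE DATABASE [SalesDB]
    FROM DISK = N'/var/opt/mssql/data/SalesDB.bak'
    WITH REPLACE, RECOVERY;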
In this session we give you everything you need to know to get you up and running with SQL Server on Docker. We cover installation of SQL Server on Docker, the management of SQL Server on Docker, stopping / starting containers, creating images and storage inside containers.
In this session we cover the latest changes in the Microsoft certification program: the new AZ exams and the Microsoft Professional Program. Which is best for you?
Have you heard about the SQL Tiger Team? Do you know they provide a free set of SQL scripts to help you administer your SQL Server?
In this session we explore the scripts in the Tiger Toolbox.
Can't wait for the session? Download the scripts from here. https://github.com/Microsoft/tigertoolbox
In this 60-minute, mostly autobiographical session, we look at the key aspects that go into making a successful presentation: writing titles and abstracts, picking topics, preparing, and what to do if it all goes wrong. I'll share my experiences so that hopefully you won't make the same mistakes I did...
Running SQL Server in Docker containers brings benefits that data professionals should be exploring. Having the ability to spin up an instance of SQL Server in a very short period of time has huge implications for CI/CD processes.

However, running standalone Docker containers presents challenges so other technologies are needed to support them. This session provides an overview of the various options for running SQL Server containers in Azure.
The topics covered will be:
  • The Azure Container Registry
  • Azure Container Instances
  • Azure Container Services
This session is aimed at SQL Server DBAs and Developers who have experience with Docker and want to know more about the different options that are available in Azure.
Each topic will be backed up with demos that show how simple it is to get up and running with these technologies.
With the advent of Windows containers and SQL Server on Linux, the options for running SQL Server are growing. 


Should you run SQL Server on physical machines, virtualised, in the cloud or do you get with the cool kids and run it on Docker?


Docker is great for development, testing and stateless apps, but would you run your production SQL Server on it?


It may sound crazy to run your data tier in ephemeral containers, but I'll discuss the reasons why this might be a good idea if we can figure out the following challenges:


Data persistence
Security
High Availability
Licensing
Monitoring
Deploying SQL Server updates


Microsoft themselves have said they expect containerisation of applications to become as common as virtualisation. If this happens then containers are something we all need to understand.
Microsoft Power BI has evolved significantly over the last years to a strong and mature Self-Service BI platform. But how does this fit into an enterprise-centric BI architecture built upon SQL Server, Hadoop and Azure? In my presentation I will explain how we at InnoGames put all these components together to build a Modern Enterprise BI architecture. Gathering reliable insights from a massive stream of data and making them easily available is vital for our business success. If you love data, no matter whether you are a decision maker, a BI developer or an admin, this is the right talk for you.
A demo-filled session packed with tips and tricks showing how to transform everyday Power BI reports into stunning ones.

In this session you’ll learn about:
  • How to use background images and useful resources to create the background templates 
  • Use of colours, and various resources to get appealing colour palettes
  • Multiple ways of using conditional formatting to highlight the specific data points 
  • How to create Power BI theme files 
  • Various DataViz resources


Azure Container Instances are Microsoft's offering allowing us to run containers in the cloud without having to manage VMs. This session will provide an overview of how to create Container Instances running SQL Server and the various options that are available.
Topics covered will be:
  • Deploying custom container images to an Azure Container Registry.
  • Using Azure Tasks to automate creation of container images.
  • Spinning up Container Instances running SQL Server.
  • Using Azure DevOps to deploy Azure Container Instances running SQL Server.

This session is aimed at SQL Server DBAs and Developers who want to learn how to run SQL Server in Container Instances and hook them into a CI process.
Each topic will be backed up with live demos to show how easy it is to get up and running with these technologies.
"PowerApps is a service that lets you build business apps that run in a browser or on a phone or tablet, and no coding experience is required. PowerApps combines visual drag-and-drop concepts from PowerPoint with Excel-like expressions for logic and working with data."
        
In this session, attendees learn how to create PowerApps solutions and how to use PowerApps as a data-entry application for Power BI, as well as how to integrate PowerApps into Power BI and Power BI into PowerApps.
Running SQL Server in containers has huge benefits for Data Platform professionals, but there are challenges to running SQL Server in standalone containers. Orchestrators provide a platform and the tools to overcome these challenges.

This session will provide an overview of running SQL Server in Kubernetes.
Topics covered will be:
  • An overview of Kubernetes.
  • Definition of deployments, pods, and services.
  • Deploying SQL Server containers to Kubernetes.
  • Persisting data for SQL Server in Kubernetes.
This session is aimed at SQL Server DBAs and Developers who want to learn the what, the why, and the how to run SQL Server in Kubernetes.
Lifting and shifting your application to the cloud is extremely easy, on paper. The hard truth is that the only way to know for sure how it is going to perform is to test it. Benchmarking on premises is hard enough, but benchmarking in the cloud can get really hairy because of the restrictions in PaaS environments and the lack of tooling.

Join me in this session and learn how to capture a production workload, replay it to your cloud database and compare the performance. I will introduce you to the methodology and the tools to bring your database to the cloud without breaking a sweat.
I will also introduce you to WorkloadTools: a new open-source project that I created specifically for this scenario. Benchmarking will be as easy as pie.
This session is all about getting away from manual SSIS packages. Instead of reinventing the wheel every time you need to change or extend a package, let’s talk about metadata models and how we can use them to design and describe our data warehouses and packages. We will cover the gamut from initial design, to maintenance and support, and even documentation and compliance. 


In addition, we’ll see how the Business Intelligence Markup Language (Biml) can help us translate our metadata into a ready-to-use SSIS solution!


Prerequisites: Basic knowledge about building SSIS and/or ADF packages and data staging
Have you heard about the Business Intelligence Markup Language (Biml)? Maybe you’ve even seen a session about it before but you still have doubts about how easily you can make something useful out of it. In this session, we’ll use Biml to build and populate a staging area including the corresponding SSIS packages. But there won’t be any pre-compiled demos - everything is happening live! Starting with a blank staging database, we will end up building a complete solution over the course of this session to prove that you can start from scratch and still quickly be successful.

Let’s see, how that goes… :)

PS: Even if you have not heard about Biml but are still tired of manually building SSIS packages, this is the right session for you!

Prerequisites: Basic knowledge about building SSIS packages and data staging
There is a natural limit to how many dataflows you can run in parallel in SSIS. Regardless of whether your limit is on the source or destination side, you will eventually reach those limits.
You might have set up all your package orchestration in a way that made perfect sense at that time, but over time, some tables grow faster than expected and others don’t grow at all. Due to foreign key relationships, you may not be able simply to shuffle the dataflow tasks around to maximize throughput. Manual reengineering along these lines would potentially be very time consuming, and even worse, the result would be obsolete shortly thereafter.

This session is about using the Business Intelligence Markup Language (Biml) to monitor and control your orchestration patterns. By automatically analyzing the results in ETL logs, we’ll be able to automate our staging orchestration!

Prerequisites: Good understanding of dataflows with SSIS, especially with higher volumes of data
Power BI is the shiny new tech for processing and visualizing data in the Microsoft Data Platform. However, the plumbing in the background does need managing (even if it is cloud-based and supposedly automagic).


In this session we will take a look at how to manage your datasets, security, monitor licensing and more, all through the ultimate administration interface: PowerShell!


In this session we'll explore why we need to manage Power BI, what management capabilities we have at our fingertips and the power of PowerShell to simplify and expand these capabilities. We'll then tie it all together to get you started on the road to building a suite of management reports and automated processes because PowerShell loves Power BI!
Every expert has their own set of tools they use to find and fix the problem areas of queries, but SQL Server provides the necessary information to both diagnose and troubleshoot where those problems actually are, and help you fix those issues, right in the box. In this session we will examine a variety of tools to analyze and solve query performance problems.
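"In the box" here means things like the execution-related DMVs; a minimal example of the kind of triage they enable (the ordering and TOP value are illustrative):

    -- Top CPU consumers from the plan cache, with their statement text
    SELECT TOP (10)
           qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END
                       - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_cpu_microseconds DESC;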
With viruses like ransomware occurring more frequently, we need to be ready for server and even data center loss. Just like pilots who are prepared for disaster recovery through regular practice, we as Database Administrators need to actually spend time practicing recovering with those backups. Ransomware has made it critical to prepare to rebuild your datacenter at any moment. This session will focus on the kinds of situations that can dramatically affect a data center, and how to practice recovery processes to assure business continuity.
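Practice starts with proving the backups themselves are usable; a minimal sketch (the path is illustrative, and VERIFYONLY is no substitute for a full test restore):

    -- What is actually in this backup file?
    RESTORE HEADERONLY FROM DISK = N'B:\Backups\SalesDB.bak';

    -- Is it readable and internally consistent?
    RESTORE VERIFYONLY FROM DISK = N'B:\Backups\SalesDB.bak' WITH CHECKSUM;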
It’s an age-old problem: devs want prod data for dev and test.

It helps them write better code. Self-service access to usable test data aligns with DevOps principles such as adopting a “shift-left” mentality to testing.

Unfortunately, in the age of data breaches and tighter regulation, it's generally unwise and/or illegal to give devs access to some of the data.

So what do you do?

We’ll talk about the GDPR, anonymisation, pseudonymisation and five techniques to provide appropriate “production-like” data. I’ll demo these techniques both in raw T-SQL and using some of the Microsoft and 3rd party tools that make the task easier.

This session will equip you to discuss the problem in an informed manner and suggest several solutions and their pros and cons.
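As one small taste of the kind of technique involved (Dynamic Data Masking is just one option among several, and on its own is not true anonymisation; table and column names are illustrative):

    -- Mask columns so non-privileged users see obfuscated values
    ALTER TABLE dbo.Customers
        ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

    ALTER TABLE dbo.Customers
        ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');

    -- Users without the UNMASK permission now get e.g. aXXX@XXXX.com back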
Maintaining a solid set of information about our servers and their performance is critical when issues arise, and often help us see a problem before it occurs. Building a baseline of performance metrics allows us to know when something is wrong and help us to track it down and fix the problem. This session will walk you through a series of PowerShell scripts you can schedule which will capture the most important data and a set of reports to show you how to use that data to keep your server running smoothly.
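The session's scripts are PowerShell, but the shape of the idea is easy to see in T-SQL: sample a few headline counters on a schedule into a history table. A minimal sketch (the baseline table and counter choice are illustrative):

    CREATE TABLE dbo.PerfBaseline
    (
        capture_time DATETIME2(0)  NOT NULL,
        counter_name NVARCHAR(128) NOT NULL,
        cntr_value   BIGINT        NOT NULL
    );

    -- Scheduled via an Agent job, this builds the history you compare against
    INSERT INTO dbo.PerfBaseline (capture_time, counter_name, cntr_value)
    SELECT SYSDATETIME(), RTRIM(counter_name), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy');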
Most of the time you’ll see ETL being done with a tool such as SSIS, but what if you need near-real-time reporting? You need to get the updates in your OLTP database to the data warehouse quickly, but with minimal impact on your application. This session will demonstrate how to keep your data warehouse updated in near real time using Service Broker messages from your OLTP database.
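A minimal sketch of the plumbing involved (all names are illustrative, and the database needs Service Broker enabled):

    -- Message type, contract, queue and service
    CREATE MESSAGE TYPE [//DW/RowChanged] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//DW/ChangeContract] ([//DW/RowChanged] SENT BY INITIATOR);
    CREATE QUEUE dbo.DwChangeQueue;
    CREATE SERVICE [//DW/ChangeService] ON QUEUE dbo.DwChangeQueue ([//DW/ChangeContract]);

    -- A trigger or procedure sends a change message; the warehouse side
    -- RECEIVEs from the queue asynchronously, so the OLTP transaction isn't held up
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//DW/ChangeService]
        TO SERVICE N'//DW/ChangeService'
        ON CONTRACT [//DW/ChangeContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h MESSAGE TYPE [//DW/RowChanged] (N'<Order Id="42" />');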
Steph (Affirmative):
DevOps is great, but it left data folks out in the cold. To get data pros working in a collaborative, faster, and robust manner with the business we need to spend dedicated time on how to do it and the answers aren't necessarily the same as they were for developers. The term DataOps will help us explain to our various tribes what we're aiming to achieve and that they matter.

Alex (Negative):
The problem is real - but inventing a new name for data folks will exacerbate it. DevOps promotes breaking down silos. Different names divide us rather than unite us. The problem is that DevOps now carries all sorts of assumptions about tooling, job roles and processes. Instead of inventing a new DevOps, let's get back to the core of what DevOps is supposed to be about.
There’s the quick way or there’s the right way. In this session we will look at good practices and standards to follow when writing PowerShell to make it easier for you and others to trust and reuse your code
By the end of this session you’ll have a guide to being a better PowerShell citizen: following best practices and attending to all the aspects of your code that help make it usable and readable. We'll share tools and tips to make this easy, and discuss contributing to open source and sharing your code with others.
Become a member of the PowerShell Standards Agency* (*not a real thing) and write better code for everybody.
You would like to start speaking, but you don’t think you are ready. That was me in 2014. I didn’t believe anyone wanted to hear what I had to say.

Since then I’ve spoken at over 100 events. In 2017 Microsoft awarded me my first MVP award. In 2018 I gave my first SQL Bits pre-con. Speaking has changed my life.

You can do it too.

I’m going to make a deal with you:

I’ll tell you what I’ve learned since 2014, including how to:

- create a compelling abstract that stands a real chance of getting selected

- craft a great talk that is informative, engaging and memorable

- slay the demo Gods and manage those nerves to deliver a kick-ass talk

In exchange, you are going to submit to:

- your local user group or meetup

- Data Relay 2019

- SQL Bits 2020

Are you ready? Of course you are! You can do this.
The SQLOS scheduler has been a core feature of SQL Server ever since its appearance as the User Mode Scheduler in version 7.0. In this session you will learn what makes it tick, where lines of responsibility are drawn between schedulers, workers and tasks, and how everybody has their own selfish ideas about fairness.

We'll pay particular attention to synchronisation: the need to synchronise, the balancing act between busy waiting and context switching, and examples of internal SQLOS synchronisation primitives. All of this will complement your existing mental model of SQL Server waits.

It is a very deep session (stack traces and obscure functions will be aired!), but not a broad one. As long as you have a healthy interest in either SQL Server or operating system internals, you'll have a fair chance of following along.
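If you want a feel for the moving parts before the session, the scheduler DMV is a good place to poke around; for example:

    -- One row per scheduler: how busy is each, and who is waiting for CPU?
    SELECT scheduler_id,
           current_tasks_count,
           runnable_tasks_count,  -- tasks that have a worker but are queued for CPU
           work_queue_count       -- pending tasks with no worker yet
    FROM sys.dm_os_schedulers
    WHERE status = N'VISIBLE ONLINE';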
Database DevOps practices call on you to continuously deliver value to your customers, but there's a problem: database changes are notoriously tricky to implement. In this session, Microsoft Certified Master Kendra Little will show you patterns to design schema and data changes for SQL Server that help you maximize performance and availability. You'll get a guide to which changes may work differently in test and production, and a checklist for testing each tricky change.
How do we measure the value of DevOps? It varies, based on our perspective. In this session, Kendra Little will examine the four pillars of DevOps through three lenses: the viewpoint of the CEO, the CIO, and the IT Manager or Team Leader. We'll discuss the values and concerns that come naturally to each of these roles regarding standardization, automation, protecting data, and monitoring. You'll leave the session with a new understanding of the business advantages of DevOps, and next steps to take to bring the benefits of DevOps to your customers.
Reading execution plans is easy, right? Look for the highest-cost operators or scans and you're pretty much done. Not really. Execution plans are actually quite complicated and can hide more information than the graphical plan reveals. However, if you learn how to walk through the details of an execution plan, you will be more thoroughly prepared to understand the information in that plan. We'll unlock and decode where the information is within a plan so that you know why the optimizer made certain choices. You'll be able to better understand how your T-SQL code is interpreted by the optimizer. All this knowledge will make it easier to debug and tune your T-SQL.
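The details in question live in the plan XML itself, which you can pull straight out of the plan cache; a minimal example:

    -- Grab the slowest cached statements together with their full plans;
    -- open query_plan in SSMS, or shred the XML for warnings, estimates and operator details
    SELECT TOP (10)
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY avg_elapsed_microseconds DESC;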
As estates grow in size and complexity, the process of manually monitoring them becomes untenable. Not only do manual checks take time, they are also prone to missing crucial elements that can leave your organization vulnerable and miss the historical context that can enable proactive data management. Without the right processes or tooling in place, your operations can be blind to the performance of your estate, and you may not realize a compliance breach until it’s too late. In this session learn how to monitor your SQL Server estate to maintain compliance and ensure availability.
Scaling out reads across database servers is hard enough, but when it comes to scaling out writes, you are potentially in a world of pain. In this session we take a look at Conflict-free Replicated Data Types, a piece of the technology puzzle which can relieve that pain.

CRDTs don't (yet) feature at the surface of the Microsoft stack, but they are already hard at work in the background within CosmosDB. 

Make no mistake, they aren't a panacea that makes conflict issues disappear without careful upfront design. But as things stand, a database professional may well be thrust into a situation where someone else is pushing a conflict management solution. Having some understanding of common CRDTs, and their inner workings, is useful preparation for the day when someone tries to bamboozle you with what might appear to be black magic.

As such, this session is about getting to grip with the simple underlying concepts, not about taking home a new technique you will use right away. But these concepts are still fairly fresh out of academia, and the literature isn't always easy reading. I do my utmost to make the material accessible, sidestep the symbolic logic, and to make it the introductory session I wish I could have attended when I needed it!
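To make the "simple underlying concepts" claim concrete, here is the simplest CRDT of all, a grow-only counter (G-Counter), sketched in T-SQL (purely illustrative; this is not CosmosDB's implementation):

    -- Each replica tracks a monotonically increasing count per node
    DECLARE @replicaA TABLE (node_id INT PRIMARY KEY, cnt BIGINT NOT NULL);
    DECLARE @replicaB TABLE (node_id INT PRIMARY KEY, cnt BIGINT NOT NULL);
    INSERT @replicaA VALUES (1, 7), (2, 3);
    INSERT @replicaB VALUES (1, 5), (2, 4), (3, 1);

    -- Merge = element-wise MAX: commutative, associative and idempotent,
    -- so replicas converge no matter how often or in what order they sync
    SELECT node_id, MAX(cnt) AS merged_cnt
    FROM (SELECT node_id, cnt FROM @replicaA
          UNION ALL
          SELECT node_id, cnt FROM @replicaB) AS s
    GROUP BY node_id;
    -- The counter's value is the SUM of merged_cnt across all nodes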
One of the biggest challenges to successful implementation of data encryption has been the back and forth between the application and the database. You have to overcome the obstacle of the application decrypting the data it needs. Microsoft tried to simplify this process when it introduced Always Encrypted (AE) into SQL Server 2016 and Azure SQL Database. In this demo-intense session, you will learn about what Always Encrypted is, how it works, and the implications for your environment. By the end you will know how to easily encrypt columns of data and, just as importantly, how to decrypt them. You will also learn about the current limitations of the feature and what your options are to work around them.
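For a flavour of what the feature looks like in DDL, here is an illustrative column definition (it assumes a column master key and the column encryption key CEK_Demo have already been provisioned, typically via SSMS or PowerShell):

    CREATE TABLE dbo.Customers
    (
        CustomerID INT IDENTITY PRIMARY KEY,
        -- Deterministic encryption allows equality lookups but requires a BIN2 collation
        NationalID NVARCHAR(20) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Demo,
                            ENCRYPTION_TYPE = DETERMINISTIC,
                            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
    );
    -- Decryption happens transparently in the client driver
    -- (with Column Encryption Setting=Enabled), never on the server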
Are you the only database person at your company? Are you both the DBA and the developer? Being the only data professional in an environment can seem overwhelming, daunting, and darn near impossible sometimes. However, it can also be extremely rewarding and empowering. This session will cover how you can keep your sanity, get stuff done, and still love your job. We'll cover how I have survived and thrived as a Lone DBA for 15 years and how you can too. When you finish this session, you'll know what you can do to make your job easier, where to find help, and how to still be able to advance and enrich your career.
Many of us have to deal with hardware that doesn’t meet our standards or contributes to performance problems. This session will cover how to work around hardware issues when there isn’t budget for newer, faster, stronger, better hardware. It’s time to make that existing hardware work for us. Learn tips and tricks on how to reduce IO, relieve memory pressure, and reduce blocking. Let’s see how compression, statistics, and indexes bring new life into your existing hardware.
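One of those tricks, as a taste (object names are illustrative): estimate what page compression would save, then rebuild to trade a little CPU for a lot less IO.

    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'SalesOrderDetail',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Happy with the estimate? Rebuild with compression enabled
    ALTER INDEX ALL ON dbo.SalesOrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);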
Over a dozen tips and tricks, worked through to explain how they can help, why they work the way they do, and why you ought to be using PowerShell more.
Topics covered include ISNULL, ensuring your build scripts are secure, splatting, parsing event logs, using -format correctly, and custom sorting.
Moving databases and workloads to the cloud has never been easier. For SQL Server there are a number of products that offer almost perfect feature parity. One of the last technical challenges is getting the security configuration right, because the security model in the public cloud is different and requires a different approach, skill set and knowledge. This session covers governance, risk management and compliance in the public cloud, and specifically focuses on Azure SQL PaaS resources. It provides practical examples of network topologies with their strengths and weaknesses, including recommendations and best practices for hybrid and cloud-only solutions. It explains the orchestration and instrumentation available in Azure, such as Security Center, Vulnerability Assessment, Threat Detection, Log Analytics/OMS, Data Classification, Key Vault and more. Finally, it shows techniques to acquire knowledge and gain advantage over attackers, such as deception and chaos engineering.
For years we have been bombarded with AI-enabled/smart/intelligent features, tools, databases and clouds. But what does it actually mean for SQL Server developers and DBAs in practical terms? Is it just marketing hype or, on the contrary, a distinct trend that has already started and impacts how and what we do, our workplaces and future careers? This session defines what AI is, provides a framework to measure it, goes through and evaluates 'the latest and greatest' tools and features available for SQL Server both on-premises and in the cloud, and finally shows practical use cases for the best of them, which we have to adopt to stay relevant in an increasingly competitive market. Let's find out if maintenance-free, self-healing, auto-tuning databases that are able to detect and automatically mitigate security risks are ready for real-world workloads!
DBAs and sysadmins never have time for the fun stuff. We are always restoring a DB for a dev or setting up a new instance for that new BI project. What if I told you that you can make all that time consuming busy-work disappear?

In this session we will learn to embrace the power of automation to allow us to sit back and relax... or rather, focus on the real work of designing better, faster systems instead of fighting for short time slots when we can do actual work.

Along the way we will see that we can benefit from the wide world of automation expertise already available to us and avoid re-inventing the wheel, again!
We've all experienced weird situations in IT - things break without any real apparent reason. Sometimes error messages can be helpful, but mostly they are cryptic and lead to no real explanations/solutions.

In this session, I will show a variety of problems that I have run into in the past and explain how I approached them. Sometimes finding simple solutions, but sometimes having to be creative and employ methods that may not be so intuitive.

You will leave the session with a better understanding on how to approach solving any technical issues you experience at work.
Any discussion about Azure SQL Data Warehouse usually involves talk of big data and enterprise scale.  At British Engineering Services, we recognised an opportunity to revolutionise our approach to BI delivery.  We’re not an enterprise customer and we don’t deal with billions of transactions per day, yet along with delivering much improved BI to our business, we have managed to save time and money by migrating our traditional on-premises capability to premium Azure services including SQL Data Warehouse.

This session will discuss our project, our approach and the things that we learned along the way. 
The query optimizer is a magical piece of architecture inside SQL Server which instructs the engine how your query is going to be executed. It does a great job... unless it doesn’t.

In this session we will take a deep look at cases where the query optimizer fails, and explain why that happens. Expect to see real code written by real developers, tuned in front of you to perform several times faster. If you want to (or possibly need to) take control of the execution plan, this session is right for you!
Everyone wants to know if there are magic buttons you can push to make SQL Server run faster, better and more efficiently. In this session we will go over some of my go-to performance tricks that you can implement to get the biggest improvement with the least amount of change. When it comes to performance tuning, every second counts. We will cover memory optimization, isolation levels, trace flags, statistics, configuration changes and more. I’ll go over real-life scenarios we come across as consultants and the changes we made to fix them.
This talk covers the strategic, architectural and compliance-level considerations involved in designing a modern cloud data solution. I will also cover best practices, security considerations and lessons learnt. This session will be great for anyone who wants to understand the big picture of a design, or wants to architect or develop a complete solution in a greenfield or brownfield implementation. We will be covering the full stack, so I advise grabbing a coffee before this session!
The need to present analytic outcomes is ever increasing, and analytics are only ever as good as the robustness of the data collection and analysis behind them. This session will cover the raft of research skills that can be applied in industry to improve the quality of your investigative work.


The session covers the end-to-end process of data management, the things to consider when improving data quality, and data science in industry. It also covers data collection areas that are often used by marketing teams. I share a few details of my research findings about the complexity of managing database systems, the use of the Microsoft data platform for research, and possible future AI developments to help people manage database systems with greater ease.
A summary of security measures, practices and configurations you should consider when setting up an architecture on Azure. 
Covering best practices on applying security by architecture and how access control can be inherited down from AD/Subscription level to resource level.
We will also cover some key services, such as Storage, Database, Lake and Databricks, and how you can apply security at the service level.
In this age of big data processing, avoiding data movement is becoming increasingly important for our analytics pipelines. Data delivery and the demand for immediate insights mean we no longer have time to extract, transform and load datasets. We need a new single, scalable interface that streamlines data acquisition, transparently and without costly movement operations. SQL Server 2019's Big Data Cluster offering answers that call: enhancing the PolyBase services first introduced in SQL Server 2016, we can now interact with, and push processing down to, many disparate data sources using scale-out compute instances from a single SQL Server head node. In this session we'll explore patterns for implementing this scale-out SQL Server architecture over relational data stores, NoSQL structures and even data lakes. Query your entire data estate using only T-SQL and let your SQL Server 2019 Big Data Cluster do the heavy lifting through a scalable PolyBase abstraction layer.
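A minimal sketch of the external-table pattern, in the SQL Server 2016-style PolyBase dialect rather than anything Big Data Cluster specific (all names and locations are illustrative):

    CREATE EXTERNAL DATA SOURCE HadoopLogs
    WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

    CREATE EXTERNAL TABLE dbo.WebLogs
    (
        LogDate    DATETIME2,
        Url        NVARCHAR(400),
        DurationMs INT
    )
    WITH (LOCATION = '/logs/web/', DATA_SOURCE = HadoopLogs, FILE_FORMAT = CsvFormat);

    -- Plain T-SQL over external data; eligible work is pushed down to the cluster
    SELECT TOP (10) Url, AVG(DurationMs) AS AvgDurationMs
    FROM dbo.WebLogs
    GROUP BY Url
    ORDER BY AvgDurationMs DESC;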
SQL Server Integration Services has been a good friend since its first appearance in SQL Server 2005. But now, after a slightly bumpy start, Azure Data Factory is here and ready to replace all our DTSX package capabilities. This cloud-native orchestration tool is a powerful equivalent for SSIS and the SQL Agent as a primary component within the Modern Data Warehouse. In this session we will start with the basics of Azure Data Factory. What do we need to build cloud ETL pipelines? What’s the integration runtime? Do we have an SSIS-equivalent cloud data flow engine? Can we easily lift and shift existing SSIS packages into the cloud? The answers to all these questions and more in this session.
If you have already mastered the basics of Azure Data Factory (ADF) and are now looking to advance your knowledge of the tool, this is the session for you. Yes, Data Factory can handle the orchestration of our ETL pipelines. But what about our wider Azure environment? In this session we’ll go beyond the basics, looking at how we build custom activities and metadata-driven, dynamic design patterns for Data Factory. Plus, considerations for optimising compute costs by controlling other services' scaling as part of normal data processing. Once we can hit a REST API with an ADF web activity anything is possible, extending our Data Factory and orchestrating everything.
What happens when you combine a cloud orchestration service with a Spark cluster?! The answer is a feature-rich, graphical, scalable data flow environment to rival any ETL tech we’ve previously had available in Azure. In this session we’ll look at Azure Data Factory v2 and how it integrates with Azure Databricks to produce a powerful abstraction over the Apache Spark analytics ecosystem. Now we can transform data in Azure using Databricks, but without the need to write a single line of Scala or Python! If you haven’t used either service yet, don’t worry, you’ll get a quick introduction to both before we go deeper into the new ADF Data Flow feature.
The desire and expectation to use real-time data is constantly growing; businesses need to react to market trends instantly. In this new data-driven age a daily ETL load/processing window isn’t enough. We need a constant stream of information and analytics achieved in real time. In this session we’ll look at how that can be achieved using Azure Stream Analytics, building streaming jobs that can blend and aggregate data as it arrives to drive live Power BI dashboards. Plus, we’ll explore how a complete lambda architecture can be created when combining stream and batch data together.
Yes I said it! Full stack, from compute to Devops, from networking to AI!
An introduction level fun coverage of all the popular Azure services. Use case based explanation to help you with choosing your desired service which is right for your business needs.
E.g. if your applications need cheap storage for tables, then use Table storage; if you need highly consistent, globally distributed, low-latency data storage for web/mobile apps, then use Cosmos DB.
Also might drop in why Azure is better than AWS ;) as we go through them.
You're an IT professional who knows their way around an on-premises business intelligence solution.

You're also aware of the Microsoft Azure cloud platform and all the beautiful benefits it brings.

Attend this session to learn how you should march confidently into a brave new world and deploy your business intelligence solution using Azure platform/software-as-a-service offerings. When should you use Data Factory, Data Lake, SQL Data Warehouse, Azure SQL Database, Azure Analysis Services, Power BI, and SQL Server Reporting Services? Leave with the knowledge of which tools you should pick and why.

How are artificial intelligence and data visualization connected? It’s the data. Data is everything to AI, and AI does not happen without the data. For the business to make use of AI, data visualization is essential because it communicates insights from the data. For artificial intelligence, data visualization is particularly important since the concepts and data are complex; visualization is crucial in leading the business to articulate and understand difficult concepts.

There are many great new technologies in Microsoft which offer opportunities for businesses to use AI, but we need to close the loop so that the insights are not lost. In this session, we will look at Microsoft Power BI, and open source technologies such as R and Python in Azure for AI and data visualization, along with best practices for visualising data for artificial intelligence.
Organisations need to know how to get started with Artificial Intelligence. This practical session offers organizations, small and large, a helping hand with practical advice and demos using Microsoft Azure with open-source technologies. For organizations that have no clue what they'd use AI for, the session will offer a practical framework: the Five 'C's of Artificial Intelligence, a framework to help stimulate ideas for using AI in your own organization. To provide a technical focus on getting started, R and Python will be shown in AzureML and Microsoft ML Server.
As the industry adopts more Big Data technologies, it is seeing more uptake of Data Vault 2.0, a framework successfully applied to data warehousing projects. In this session, learn more about the framework.
In this session, we will look at the Data Vault methodology and translate it into Azure data offerings. We will look at the methodology in practice in Azure as a basis for the foundations to create a technical data warehouse layer. 
Working in the manufacturing industry means that you must deal with product failures. As a BI developer and/or data scientist, your task is not only to monitor and report on a product’s health state during its lifecycle, but also to predict the likelihood of a failure in the production phase or once the product has been delivered to the customer.
Machine Learning techniques can help us accomplish this task. Starting from past failure data, we can build a predictive model to forecast the likelihood of a product failing, or give an estimate of its duration. And it is now possible to develop an end-to-end solution in SQL Server, thanks to the introduction of in-database machine learning services.
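The shape of that in-database approach looks like this; a minimal sketch using sp_execute_external_script (the telemetry table and model are illustrative, and external scripts must be enabled on the instance):

    EXEC sp_execute_external_script
         @language = N'R',
         @script   = N'
             model <- glm(failed ~ age_days + load_factor,
                          data = InputDataSet, family = binomial)
             OutputDataSet <- data.frame(
                 prob_fail = predict(model, InputDataSet, type = "response"))',
         @input_data_1 = N'SELECT failed, age_days, load_factor
                           FROM dbo.ProductTelemetry'
    WITH RESULT SETS ((prob_fail FLOAT NOT NULL));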
You like Power BI. You think it’s a great suite of tools for data analytics, modelling and reporting. Now you’d like to adopt it in your organization as the standard reporting tool. And here come the first questions. How many times have you been asked: “Can you share this report/dashboard with me?”; “Can we distribute our work to other users?”; “Shall we pay for it? Can we have licenses for free?”.

To make things worse, the licensing model is constantly evolving, bringing more confusion to end users. When should you use sharing? What is an App workspace? And Power BI Embedded? How can you manage permissions on reports and dashboards? Is it possible to send reports via e-mail through a subscription?

Come to this session if you want to dispel any doubt about the sharing methods in Power BI. We’ll give a clear and complete overview of all the collaborative features in Power BI, helping you to choose the solution that best fits your needs.
Everything in our world is located “somewhere” and is related to other things. Spatial analysis consists of studying these relationships to find meaningful patterns and behaviours.

Imagine you’re looking for the best position to open a new store. It’s not only a matter of “where”; there are more implications. Is the area easily accessible for customers? Is there any parking? Is it easy to reach for suppliers? Are there any competitors’ stores around? What is the volume of shopping for the same business in the area?

This is where spatial analysis can help us, collecting, comparing and matching data to build up a framework of possibilities.

SQL Server has supported spatial data types since the 2008 release. Now amazing new capabilities are offered with the addition of R, which ships with a huge number of packages for performing spatial analysis, mapping, geocoding and so on. There is virtually nothing you can’t do with R: finding relationships, measuring spatial autocorrelation, interpolating point data, mapping point data, and more.
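To ground that, the T-SQL corner of the trio is compact; a minimal sketch (coordinates are illustrative):

    -- geography::Point takes latitude, longitude, SRID (4326 = WGS 84)
    DECLARE @store     geography = geography::Point(53.4808, -2.2426, 4326);
    DECLARE @candidate geography = geography::Point(53.4084, -2.9916, 4326);

    -- STDistance returns metres for this SRID
    SELECT @store.STDistance(@candidate) / 1000.0 AS DistanceKm;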

And, last but not least,
we have Power BI that offers a full range of mapping capabilities. Not only
bubble or choropleth maps, but visual for performing spatial analysis like
ArcGIS, or for creating custom shape maps. And R scripts naturally.

 

In the session, we will show how the joint use of these three tools empowers us to analyze and query the spatial properties of data. We’ll showcase a real-world example for a better understanding of the endless possibilities that are now offered to us.

Come, have fun and discover a world of information inside your data with Spatial Analytics!
In this hour-long session we will attempt to include lots of advice and guidance on how to write code that will easily get approved by your DBA prior to release to production. We’ll cover unit tests, continuous integration, source control and some coding best practices.
This will be quite a fast-paced session, but it will aim to give you a taster of what you should include to increase the acceptance rate of your code by the approvers, and how to ensure your code does what it should and that future changes don’t break it.