When SQL Server 2016 was released, it offered a fantastic new feature with the Query Store. Long-term, statistics-based query tuning became a reality. But what about the thousands of servers that aren't upgrading to SQL 2016 or newer? The open source project Open Query Store is designed to fulfill that need.

This session will give a short introduction to the Query Store feature in SQL 2016 and then dive into the Open Query Store (OQS) solution. Enrico and William (the co-creators of the OQS project) will explain the design of OQS and demonstrate the features. You will leave this session with an understanding of the features of Query Store and Open Query Store, and a desire to implement OQS in your systems when you return to the office.

In this talk you will learn how to use Power BI to prototype/develop a BI solution in days and then (if needed) evolve it into a fully scalable Azure BI solution.

The goal of the session is to show, with a real-world example, how to use Power BI as a prototype solution (with real data) and the process to scale it up to a fully scalable Azure BI solution using Azure AS and Azure SQL DB.

In the process I will share a few tips & tools that you can use to help you in that process.

In this talk we will discuss best practices around how to design and maintain an Azure SQL Data Warehouse for best throughput and query performance. We will look at distribution types, index considerations, execution plans, workload management and loading patterns. At the end of this talk you will understand the common pitfalls and be empowered to either construct a highly performant Azure SQL Data Warehouse or address performance issues in an existing deployment.
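A minimal sketch of the distribution choices discussed above, using hypothetical table and column names (FactSales, DimDate, CustomerKey):

```sql
-- Hypothetical fact table: HASH distribution on a high-cardinality key spreads
-- rows evenly across the distributions and avoids data movement when joining
-- on that key; a clustered columnstore index maximises scan throughput.
CREATE TABLE dbo.FactSales
(
    SaleKey      BIGINT        NOT NULL,
    CustomerKey  INT           NOT NULL,
    SaleAmount   DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
);

-- Small dimension tables are often replicated to every compute node instead:
CREATE TABLE dbo.DimDate
(
    DateKey INT  NOT NULL,
    [Date]  DATE NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED INDEX (DateKey)
);
```

Picking the right distribution column for the workload's join and aggregation patterns is exactly the kind of decision the session explores.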
Hierarchies and graphs are the bread and butter of most business applications and you find them almost everywhere:

  • Product Categories
  • Sales Territories
  • Bill of Material
  • Calendar and Time

Even though the business need is significant, the solutions in relational databases tend to be awkward. The most flexible hierarchies are usually modeled as self-referencing tables. To query such self-referencing hierarchies successfully, you need either loops or recursive common table expressions. SQL Server 2017 now brings a different approach: Graph Database.

Join this session for a journey through best practices to transform your hierarchies into useful information. We will have fun playing around with a sample database based on G. R. R. Martin’s famous “Game of Thrones”.
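The classic pre-graph approach looks something like this sketch, using a hypothetical ProductCategory table:

```sql
-- Hypothetical self-referencing hierarchy: each category points to its parent.
CREATE TABLE dbo.ProductCategory
(
    CategoryID       INT PRIMARY KEY,
    ParentCategoryID INT NULL REFERENCES dbo.ProductCategory(CategoryID),
    CategoryName     NVARCHAR(100) NOT NULL
);

-- Walk the hierarchy top-down with a recursive CTE,
-- tracking the depth of each category along the way.
WITH CategoryTree AS
(
    SELECT CategoryID, ParentCategoryID, CategoryName, 0 AS Depth
    FROM dbo.ProductCategory
    WHERE ParentCategoryID IS NULL            -- anchor: the root(s)

    UNION ALL

    SELECT c.CategoryID, c.ParentCategoryID, c.CategoryName, t.Depth + 1
    FROM dbo.ProductCategory AS c
    JOIN CategoryTree AS t ON c.ParentCategoryID = t.CategoryID
)
SELECT CategoryID, CategoryName, Depth
FROM CategoryTree
ORDER BY Depth, CategoryName;
```

It works, but the query shape bears little resemblance to the question being asked, which is part of what the graph features aim to fix.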
You've probably already seen that R icon in the Power BI GUI. It shows up when creating sources, transformations and reports. But did the plain textbox you got when you clicked on those icons discourage you from proceeding? In this session you will learn just a few basic things about R that will greatly extend your data loading, transformation and reporting skills in Power BI Desktop and the Power BI service.
The query optimizer is getting smart, and computers are taking DBAs' jobs. In this session MVP Fabiano Amorim will talk about the new "automatic" optimizations in SQL Server 2017: adaptive query processing, automatic tuning and a few other features added to the product. Taking the weekend off? How about turning automatic tuning on to avoid bad queries showing up after an index rebuild or an 'unexpected' change?
Back to the Future is the greatest time travel movie ever. I'll show you how temporal tables work, in both SQL Server and Azure SQL Database, without needing a DeLorean.

We cover point-in-time analysis, reconstructing state at any time in the past, recovering from accidental data loss, calculating trends, and my personal favourite: auditing.

There's even a bit of In-Memory OLTP.

There are lots of demos at the end.
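The time travel itself is a single clause. A sketch against a hypothetical system-versioned dbo.Employees table:

```sql
-- Querying a system-versioned temporal table "as of" a moment in the past.
-- dbo.Employees is a hypothetical table created WITH (SYSTEM_VERSIONING = ON).

-- Reconstruct the state of the table exactly as it was at that instant:
SELECT EmployeeID, Salary
FROM dbo.Employees
FOR SYSTEM_TIME AS OF '2017-10-21T16:29:00';

-- Audit every version of one row over a period:
SELECT EmployeeID, Salary, ValidFrom, ValidTo
FROM dbo.Employees
FOR SYSTEM_TIME BETWEEN '2017-01-01' AND '2017-12-31'
WHERE EmployeeID = 42
ORDER BY ValidFrom;
```

No DeLorean required: the engine resolves the history table behind the scenes.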
Machine Learning is not magic. You can't just throw the data through an algorithm and expect it to provide insights. You have to prepare the data and very often you have to tune the algorithm. Some algorithms - Neural Nets, Deep Learning, Support Vector Machines and Nearest Neighbour - are starting to dominate the field. A great deal of attention is often focused on the maths behind these, and it IS fascinating.

But you don't have to understand the maths to be able to use these algorithms effectively. What you do need to know is how they work, because that is the information that allows you to tune them effectively. This talk will explain how they work from a non-mathematical standpoint.
Analysing highly connected data using SQL is hard! Relational databases were simply not designed to handle this, but graph databases were. Built from the ground up to understand interconnectivity, graph databases enable a flexible, performant way to analyse relationships - and one has just landed in SQL Server 2017! SQL Server supports two new table types, NODE and EDGE, and a new function, MATCH, which enables deeper exploration of the relationships in your data than ever before.

In this session we will explore what a graph database is, why you should be interested, which query patterns it solves, and how SQL Server compares with competitors. We will explore each of these using real data extracted from IMDB.
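As a taste of the syntax, here is a sketch with hypothetical Actor/Movie tables in the spirit of the IMDB data:

```sql
-- Hypothetical NODE and EDGE tables (SQL Server 2017 graph syntax):
CREATE TABLE dbo.Actor   (ActorID INT PRIMARY KEY, ActorName NVARCHAR(100)) AS NODE;
CREATE TABLE dbo.Movie   (MovieID INT PRIMARY KEY, Title     NVARCHAR(200)) AS NODE;
CREATE TABLE dbo.ActedIn AS EDGE;

-- MATCH traverses edges with ASCII-art patterns: find co-stars of one actor.
SELECT a2.ActorName
FROM dbo.Actor   AS a1,
     dbo.ActedIn AS e1,
     dbo.Movie   AS m,
     dbo.ActedIn AS e2,
     dbo.Actor   AS a2
WHERE MATCH(a1-(e1)->m<-(e2)-a2)
  AND a1.ActorName = N'Sean Bean'
  AND a2.ActorName <> a1.ActorName;
```

The pattern inside MATCH reads like the relationship itself, which is the whole appeal compared with chains of self-joins.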


Microsoft Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will reveal new features for large, enterprise models in the areas of performance, scalability, advanced calculations, model management, and monitoring. Learn how to use these new features to deliver tabular models of unprecedented scale, with easy data loading and simplified user consumption, enabling the best reporting experiences over corporate, managed datasets.
For years, SQL Server Reporting Services chugged along with very few updates. Although it remained a reliable and popular reporting tool, the feature set largely remained unchanged for a decade. With the most recent two major editions (2016 and the upcoming 2017), everything changed. Microsoft delivered a brand new SSRS, instantly transforming Reporting Services from a spartan reporting tool to a rich portal for at-a-glance metrics. No longer do you have to purchase a third-party reporting tool; everything you need is right here!

This session will review and demonstrate the newly-remodeled SQL Server Reporting Services. We'll walk through the essential changes in SSRS, from the all-new reporting portal to the new visualizations. We'll also discuss the SSRS ecosystem and how it fits together with mobile reports and its recent integration with Power BI.
Joins are a thing you learn on Day 1 of T-SQL 101, but they are so much more involved than what you learned then. Logical vs. physical joins, semi joins, lookup joins, redundant joins - not to mention those times when you thought you specified one kind of join and the execution plan says it's doing something else.

Luckily, it's not magic - it's all very straightforward once you understand the different types of joins and how they work. This session will cover the different types of logical and physical joins - and even look at joins that don't exist at all.
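One example of a join you never write as a JOIN: the semi join. A sketch with hypothetical Customers/Orders tables:

```sql
-- A semi join returns rows from one table where a match exists in another,
-- without duplicating rows for multiple matches. In T-SQL you usually express
-- it with EXISTS; the optimizer may implement it as a Left Semi Join operator.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE EXISTS
(
    SELECT 1
    FROM dbo.Orders AS o
    WHERE o.CustomerID = c.CustomerID      -- logical semi join
);
-- Compare the plan with an INNER JOIN + DISTINCT version: same result here,
-- but a different logical join and often a different physical operator.
```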
In a real data mining or machine learning project, you spend more than half of the time on data preparation and data understanding. The R language is extremely powerful in this area, and the Python language is a match for it. Of course, you can also work with the data using T-SQL. In this session you will learn how to gain data understanding quickly with basic graphs and descriptive statistics analysis. You can do advanced data preparation with the many data manipulation methods available out of the box and in additional packages for R and Python. After this session, you will understand what tasks data preparation involves, and what tools you have in the SQL Server suite for these tasks.
Databases that serve business applications should often support temporal data. For example, suppose
a contract with a supplier is valid for a limited time only. It can be valid from a specific point in time onward, or it can be valid for a specific time interval—from a starting time point to an ending time point. In addition, many times you need to audit all changes in one or more tables. You might also need to be able to show the state in a specific point in time, or all changes made to a table in a specific period of time. From the data integrity perspective, you might need to implement many additional temporal specific constraints.
This session introduces the temporal problems, deals with solutions that go beyond built-in SQL Server support, and shows the out-of-the-box solutions in SQL Server, including defining temporal data, application-versioned tables, system-versioned tables, and what kind of temporal support is still missing in SQL Server.
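For the out-of-the-box part, a minimal system-versioned table sketch (hypothetical contract table, matching the supplier example above):

```sql
-- Minimal system-versioned table: SQL Server maintains the history automatically.
CREATE TABLE dbo.SupplierContract
(
    ContractID INT PRIMARY KEY,
    SupplierID INT NOT NULL,
    Price      DECIMAL(18,2) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SupplierContractHistory));
```

Note this covers the system (transaction) time dimension only; application validity time, and constraints over it, are among the problems the session tackles beyond the built-in support.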
The range of options for storing data in Microsoft Azure keeps growing; the most notable recent addition is the Managed Instance. But what is it, and why is it there? Join John as he walks through what Managed Instances are and how you might start using them.

Managed Instances add a new option for running workloads in the cloud, allowing near parity with a traditional on-premises SQL Server, including SQL Agent, cross-database queries, Service Broker, CDC, and many more, and overcoming many of the challenges to using Azure SQL Database.

But what is the reality, how do we make use of it, and are there any gotchas that we need to be aware of? This is what we will cover, going beyond the hype and looking at how we can make use of this new offering.
Ever wondered if you can optimize your SQL projects so you don't have to do unnecessary work? Take a look at how I've optimized SSDT deployments through a real use case. This session will take a deep dive into the dacpac and give you ideas on how you can leverage that knowledge to write your own tools and get the best out of SSDT. The session will focus on two areas: SSDT and MSBuild.
"A picture is worth a thousand words" - well, that is especially true when it comes to analyzing data. Visualization is the quick and easy way to get the big 'picture' of your data, and the R ecosystem has a lot to offer in this regard.

They may not add up to exactly 50, but in this session I'll show you lots of compelling visualizations produced with the help of the ggplot2 package and friends - and the usually small amount of code they require. We will start beyond the usual bar, line or scatter plots.

Instead our screen will show diagrams that always made you think "How do they do that?". We will see waterfall diagrams, violins, joyplots, marginal histograms, maps and more… and you'll get the code to reproduce everything.
Open source alternatives to the SQL Server data platform are becoming more and more popular in large enterprises.

Today's marketplace means that your next project may be considering moving away from 'traditional' relational data stores - indeed, you may have already been involved in one.

This session will help you understand the Apache Cassandra ecosystem, and can help you evaluate or implement a complementary DBMS to add to your data platform toolkit.
Warning: this is not an introductory session. These are going to be tough problems.

You've been performance tuning queries and indexes for a few years, but lately, you've been running into problems you can't explain. Could it be RESOURCE_SEMAPHORE, THREADPOOL, or lock escalation? These problems only pop up under heavy load or concurrency, so they're very hard to detect in a development environment.

In a very fast-paced session, I'll show how these three performance problems pop up under load. I won't be able to teach you how to fix them for good - not inside the span of 75 minutes - but at least you'll be able to recognize the symptoms when they strike, and I'll show you where to go to learn more.
Don't like the idea of changing your data model to create a kind of virtual private database?

Come and see how foreign key relationships can be used instead. You will learn:
  • Why to avoid is_member
  • How to cache AD role membership, with an example of a job capturing output from PowerShell scripts
  • How to write tests to check that TVFs work correctly
The analysis of text documents is rapidly growing in importance, not just for social media but also for legal, academic and financial documents. We'll use a case study based on the analysis of a bank's corporate responsibility reports to understand the changing priorities of the bank over the last decade. We'll employ several analytic techniques - frequency analysis, finding words and phrases specific to one or a few documents in a collection, and many visualisations - using a variety of tools: R, text analytics web services and Power BI.
Publishing on-premises reports to external clients (non-AD users) has always been a challenge, and it has only been decently achieved for very simple requirements. Is this achievable with the latest SQL Server 2017 with RLS enabled?

How would such a solution work for a non-AD user, if an SSRS 2017 Mobile Report is published to SharePoint and ADFS plus an external IDaaS (Identity as a Service) are enabled, so that users can view the report and see only their own data?

Key learning points: SSRS 2017 Mobile Reports, ADFS configuration, and IDaaS (third-party authorization).

The software development landscape is changing. More and more, there is an increased demand for AI and cloud solutions. As a user buying cinema tickets online, I would like to simply ask "I want to buy two cinema tickets for the movie Dunkirk, tomorrow's viewing at 1pm" instead of manually following a pre-defined process. In this session, we will learn how to build, debug and deploy a chatbot using the Azure Bot Service and the Microsoft Bot Framework. We will enrich it using the Microsoft Cognitive suite to achieve human-like interactions. Will it pass the Turing test? No, but we can extend the bot service using Machine Learning (LUIS), APIs (Web Apps) and Workflows (Logic Apps).

The DBA is key when a database platform change occurs and is necessary to support the application and release processes - there is a miracle waiting to happen! Based on my experience, the DBA is often left out of the key elements of DevOps, which is unfortunate: DBAs have a lot to offer. In this session, let us review where exactly DBAs can work miracles with their magic wand, and talk about processes and procedures - evaluating each change request to ensure that it is well thought out and compliant with organizational best practices. Take away the best practices of the DevOps and DBA worlds.
Since SQL Server 2016, SSAS Tabular has included the Tabular Model Scripting Language (TMSL), which allows you to define objects in the Analysis Services model. Instead of taking days or weeks to create a tabular model, with TMSL and PowerShell it is now possible to create one in seconds.

In this session we'll go through the component parts of TMSL; the PowerShell cmdlets; and a practical demonstration of TMSL and PowerShell working together to create and deploy a tabular model. 
In this session, we'll share tips for report creation: gathering requirements, creating dashboards, understanding business drivers, implementing machine learning quickly and easily, using colour judiciously, delivering reports strategically, and knowing when it's time to retire a report. Each tip takes about sixty seconds and most have demos. All code will be available for download.
Join this session and learn everything you need to know about T-SQL windowing functions!

SQL Server 2005 and later versions introduced several T-SQL features that are like power tools in the hands of T-SQL developers. If you aren’t using these features, you’re probably writing code that doesn’t perform as well as it could. This session will teach you how to avoid cursor solutions and create simpler code by using the windowing functions that have been introduced between 2005 and 2012. You'll learn how to use the new functions and how to apply them to several design patterns that are commonly found in the real world.

You will also learn what you need to know to take full advantage of these features to get great performance. We’ll also discuss which features perform worse or better than older techniques, what to watch out for in the execution plan, and more.
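One of the classic cursor-replacement patterns, sketched against a hypothetical transactions table:

```sql
-- A running total per account: the windowed SUM replaces a cursor loop.
-- ROWS UNBOUNDED PRECEDING gives the fast "streaming" window aggregate.
SELECT AccountID,
       TranDate,
       Amount,
       SUM(Amount) OVER (PARTITION BY AccountID
                         ORDER BY TranDate
                         ROWS UNBOUNDED PRECEDING) AS RunningBalance
FROM dbo.Transactions
ORDER BY AccountID, TranDate;
```

Spelling out the ROWS frame rather than relying on the default RANGE frame is one of the execution-plan details the session digs into.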
This session is aimed at database administrators and developers who have not previously implemented partitioning within an OLTP database, and is designed to give an overview of the concepts and implementation.

The session will cover the following:
An introduction to partitioning, core concepts and benefits. 
Overview of partitioning functions & schemes. 
Considerations for selecting a partitioning column. 
Creating a partitioned table. 
Explanation of aligned and non-aligned indexes. 
Manually switching a partition. 
Manually merging a partition. 
Manually splitting a partition. 
Demo on partition implementation & maintenance, covering automatic sliding windows.

After the session, attendees will have an insight into partitioning and have a platform on which to be able to investigate further into implementing partitioning in their own environment.
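The building blocks above can be sketched in a few statements (hypothetical monthly scheme and Orders table):

```sql
-- Monthly sliding-window sketch: a RANGE RIGHT function and its scheme.
CREATE PARTITION FUNCTION pfMonthly (DATE)
AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-02-01', '2017-03-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- A table created on the scheme is partitioned by its OrderDate column:
CREATE TABLE dbo.Orders
(
    OrderID   INT  NOT NULL,
    OrderDate DATE NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderDate, OrderID)  -- aligned with the scheme
) ON psMonthly (OrderDate);

-- Extend the window at the leading edge (the manual SPLIT step from the list):
ALTER PARTITION SCHEME psMonthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfMonthly() SPLIT RANGE ('2017-04-01');
```

SWITCH and MERGE at the trailing edge complete the sliding window covered in the demo.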
T-SQL window functions allow you to perform data analysis calculations like aggregates, ranking, offset and more. When compared with alternative tools like grouping, joins and subqueries, window functions have several advantages that enable solving tasks more elegantly and efficiently. Furthermore, window functions can be used to solve a wide variety of T-SQL querying tasks well beyond their original intended use case, which is data analysis. This session introduces window functions and their evolution from SQL Server 2005 to SQL Server 2017, explains how they get optimized, and shows practical use cases.
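As one example beyond aggregates, an offset function sketch (hypothetical monthly sales table):

```sql
-- Offset functions compare each row with a neighbouring row without a self-join:
-- here, month-over-month change in sales.
SELECT SalesMonth,
       SalesAmount,
       SalesAmount
         - LAG(SalesAmount, 1, 0) OVER (ORDER BY SalesMonth) AS ChangeFromPrevMonth
FROM dbo.MonthlySales
ORDER BY SalesMonth;
```

The third argument to LAG supplies a default for the first row, avoiding a NULL in the very first month.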
This session deals with database maintenance problems (such as defragmentation) in a 24/7 system. As we walk through basic maintenance, we keep in mind that our database is big and proper maintenance takes a lot of time. We will try to solve this problem using T-SQL and the CLR.
SQL Server is a heavily used piece of software that may need to serve single requests and/or hundreds of thousands of requests per minute. Across these different kinds of workloads, Microsoft SQL Server has to handle the concurrency of tasks in an orderly fashion. This demo-driven session shows different scenarios where Microsoft SQL Server has to wait and manage hundreds of tasks. See, analyze and solve different wait stats according to their performance impact:

- CXPACKET: when a query goes parallel
- ASYNC_IO_COMPLETION: speeding up IO operations (growth / backup / restore)
- ASYNC_NETWORK_IO: what happens if your application won't consume data?
- THREADPOOL starvation: a crush of requests overwhelms Microsoft SQL Server
- PAGELATCH_xx: how Microsoft SQL Server protects data
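The starting point for any of these investigations is the wait statistics DMV; a common sketch, with a small (non-exhaustive) filter of benign wait types:

```sql
-- Top waits since the last restart (or since the stats were cleared),
-- filtering out a few common benign/idle wait types.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'BROKER_TO_FLUSH',
                        N'LAZYWRITER_SLEEP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```

Separating resource waits from signal waits, as the last column does, helps distinguish "waiting for a resource" from "waiting for a CPU".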
Big Data using Hadoop, cloud computing in Azure, self-service BI and "classic" BI with SQL Server have all grown quickly over the last years. In my presentation, I will explain how we use these components at InnoGames and how they fit into our holistic enterprise BI architecture. Gathering reliable insights from large volumes of data quickly is vital in online games: we use this data to optimise the registration of new players and retain existing ones for as long as possible. This presentation is relevant for everyone who likes to get their hands 'dirty' with data. We will walk through the components that make up our BI infrastructure, as well as giving you the big picture.
It's understandable that developers love to work in separate code branches, but this can create painful complications if not managed.

Do you dread large merge conflicts when integrating code?
Continuous Integration is a method of working where we merge and fully test our code multiple times a day. This is only possible with a high level of automation.

I'll be discussing the tools I use to achieve this automation when developing SQL Server databases.

Finding it hard to automate the deployment of database changes?
ReadyRoll is a tool that allows you to test deployments during development.

How do you know your database change won’t affect something you haven’t thought of?
tSQLt and Pester unit tests can put your mind at rest.

Having trouble keeping your test environments in sync with production?
Docker enables us to fix this with infrastructure as code.

You will see how a CI approach to database development can increase team efficiency and reduce the time to go from an idea to production.
How do we implement Azure Data Lake?
How does a lake fit into our data platform architecture? Is Data Lake going to run in isolation or be part of a larger pipeline?
How do we use and work with U-SQL?
Does size matter?!
The answers to all these questions and more in this session as we immerse ourselves in the lake, that’s in a cloud.
We'll take an end to end look at the components and understand why the compute and storage are separate services.
For the developers: what tools should we be using, and where should we deploy our U-SQL scripts? Also, what options are available for handling our C# code-behind and supporting assemblies?
We'll cover everything you need to know to get started developing data solutions with Azure Data Lake. Finally, let's extend the U-SQL capabilities with the Microsoft Cognitive Services!
If your organisation doesn't have dirty data, it's because you are not looking hard enough. How do you tackle dirty data in your business intelligence projects, data warehousing projects, or data science projects?

In this session, we will examine ways of cleaning up dirty customer data using the following SQL Server 2017 technologies:

  • R
  • Python
  • AzureML and Machine Learning
  • SSIS

We will also examine techniques for cleaning data with artificial intelligence and advanced computing such as knowledge-based systems and using algorithms such as Levenshtein distance and its various implementations.

Join this session to examine your options regarding what you can do to clean up your data properly.
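Levenshtein distance itself is not built into T-SQL, but the engine does ship a crude phonetic alternative worth knowing about; a sketch against a hypothetical customer table:

```sql
-- SOUNDEX/DIFFERENCE give built-in approximate matching
-- (DIFFERENCE returns 0-4, where 4 means the SOUNDEX codes match closely):
SELECT DIFFERENCE(N'Jonsen', N'Johnson') AS PhoneticSimilarity;

-- A typical dedupe probe: candidate pairs that sound alike
-- but are spelled differently.
SELECT c1.CustomerID, c1.Surname, c2.CustomerID, c2.Surname
FROM dbo.Customers AS c1
JOIN dbo.Customers AS c2
  ON  c1.CustomerID < c2.CustomerID
  AND SOUNDEX(c1.Surname) = SOUNDEX(c2.Surname)
  AND c1.Surname <> c2.Surname;
```

For true edit-distance matching you would reach for one of the R, Python or external implementations the session covers.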
There have been four (4!) new releases of SQL Server since the introduction of Extended Events in SQL Server 2008, and DBAs and developers alike *still* prefer Profiler. Friends, it's time to move on. If you've tried Extended Events and struggled, or if you've been thinking about it but just aren't sure where to begin, then come to this session. Using your existing knowledge and experience, we bridge the gap between Profiler and Extended Events through a series of demos, starting with the Profiler UI you know and love, and ending with an understanding of how to leverage functionality in the Extended Events UI for data analysis. By the end of this session, you’ll know how to use Extended Events in place of Profiler to continue the tasks you've been doing for years--and more. Whether you attend kicking and screaming, with resignation because you’ve finally given up, or with boundless enthusiasm for Extended Events, you'll learn practical techniques you can put to use immediately.
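To give a flavour of the destination, here is a sketch of a Profiler-style "long-running queries" trace rebuilt as an Extended Events session (session and file names are illustrative):

```sql
-- Capture statements that run longer than 5 seconds, written to a file target.
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 5000000          -- microseconds, i.e. 5 seconds
)
ADD TARGET package0.event_file
(
    SET filename = N'LongRunningQueries.xel', max_file_size = (50)
);

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```

Everything above can also be built through the Extended Events UI, which is exactly the bridge from Profiler the session walks across.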
Are you faced with complaints from users, poor performing code from developers, and regular requests to build reports? Do you uncover installation and configuration issues on your SQL Server instances? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance and we will see how to avoid making choices that turn out to be not so smart in the long run.
You are probably thinking: “Hey, wait, what about Best Practices?”. Sometimes Best Practices are not enough, especially for beginners, and it is not always clear what happens if we fail to follow them. Worst Practices can show the mistakes to avoid. I have made lots of mistakes throughout my career: come and learn from my mistakes!
As your personal Virgil, I will guide you through the circles of the SQL Server hell:
  • Design sins:
    • Undernormalizers
    • Generalizers
    • Shaky Typers
    • Anarchic Designers
    • Inconsistent Baptists
  • Development sins:
    • Environment Polluters
    • Overly Optimistic Testers
    • Indolent Developers
  • Installation sins:
    • Stingy Buyers
    • Next next finish installers
  • Maintenance sins:
    • Careless caretakers
    • Performance killers
Every new release of SQL Server brings a whole load of new features that administrators can add to their arsenal. SQL Server 2016 / 2017 has introduced many new features. In this 75-minute session we will be learning quite a few of the new features of SQL Server 2016 / 2017. Here is a glimpse of the features we will cover in this session.

• Adaptive Query Plans
• Batch Mode Adaptive Join
• New cardinality estimate for optimal performance
• Adaptive Query Processing
• Indexing Improvements
• Introduction to Automatic Tuning

These 75 minutes will be the most productive time for any DBA or developer who wants to quickly jump-start with SQL Server 2016 / 2017 and its new features.
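Automatic tuning, for example, can be switched on per database with a single statement (sketch; FORCE_LAST_GOOD_PLAN relies on the Query Store being enabled):

```sql
-- SQL Server 2017 automatic tuning: automatically force the last known good
-- plan when the Query Store detects a plan-choice regression.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review what the engine has recommended or applied:
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```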
One hot topic with Power BI is security. In this deep-dive session we will look at all aspects of Power BI security, from users and logging to where and how your data is stored, and we will even look at how to leverage additional Azure services to secure it even more.
In this session we will look at all the important topics that are needed to get your Power BI modelling skills to the next level. We will cover the in-memory engine, relationships, DAX filter context, DAX vs M, and DirectQuery. This will set you on the path to mastering any modelling challenge with Power BI or Analysis Services.
Extended Events, Dynamic Management Views, and Query Store are powerful and lightweight tools that give you a lot of data when analyzing performance problems. All this is great news for database administrators. The challenge is which tool to use for which problem and how to combine the data.

Imagine a scenario where you are getting timeouts from a business critical application, the users are complaining, and you are trying to understand what is happening. You have data from XEvents, you are looking in the execution related DMVs, and now you are trying to find the query in Query Store. How do you put it all together?

In this session you will learn techniques for combining the data from these tools to gain great insight when analyzing performance problems. We will look at common real-world problems, do the troubleshooting step by step, and visualize the data using Power BI.
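The DMV side of that triangle often starts with a query like this sketch, joining current requests to their statement text:

```sql
-- What is running right now, and what is each request waiting on?
-- Joining the execution DMVs gives the statement text alongside the wait.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;        -- skip most system sessions
```

From here, the query hash or text leads you into the Query Store and the matching Extended Events data.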
Microsoft Power BI is rich with its default visualizations and can also be extended by adding custom visuals from the Office Store (store.office.com). But besides those visuals, there is another option: you can also create your own visual to be used in your reports. How is this done? Where to start? These were also questions I had before I started creating my own visuals. Now with hands-on experience in creating and submitting custom visuals I will explain and demonstrate in this session how to start creating your own visual, what are the best practices and what are the extra next steps needed before submitting the visual to the Office Store.
You are responsible for writing or deploying SQL Server code, and want to avoid unleashing catastrophe in your databases. In this session you will learn how to install and use the tSQLt testing framework to run automated repeatable tests. You will gain an understanding of test driven development in the context of SQL Server to isolate and test the smallest unit of code. Some simple techniques can catch bugs early when they are cheapest to fix and make your development life-cycle far more robust. Remove that stomach churning feeling at release time wondering what will break and when your phone will start ringing ominously. Replace that nightmare with an optimism that your changes have been fully tested and are production ready. Relax, sleep well and get it right first time every time.
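A minimal tSQLt test follows the classic arrange/act/assert shape; in this sketch, dbo.GetCustomerTotal is a hypothetical function under test:

```sql
-- A minimal tSQLt test: fake the table, exercise the code, assert the result.
EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test total is summed per customer]
AS
BEGIN
    -- Arrange: isolate dbo.Orders so the test controls its contents
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (CustomerID, Amount) VALUES (1, 10), (1, 15);

    -- Act (dbo.GetCustomerTotal is hypothetical)
    DECLARE @actual DECIMAL(18,2) = dbo.GetCustomerTotal(1);

    -- Assert
    EXEC tSQLt.AssertEquals @Expected = 25, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'OrderTests';
```

Because FakeTable swaps in an empty, constraint-free copy of the table, the test is repeatable and independent of whatever data is lying around.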
Persistence is Futile - Implementing Delayed Durability in SQL Server

The concurrency model of most relational database systems is defined by the ACID properties, but as they aim for ever-increasing transactional throughput, those rules are bent, ignored, or even broken. In this session, we will investigate how SQL Server implements transactional durability in order to understand how Delayed Durability bends the rules to remove transactional bottlenecks and achieve improved throughput. We will take a look at how this can be used to complement In-Memory OLTP performance, and how it might impact or compromise other things. Attend this session and you will be assimilated!
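The rule-bending itself is two statements; a sketch with a hypothetical shopping-basket table:

```sql
-- Allow delayed durable transactions in the database...
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

-- ...then opt in per transaction: the commit returns before the log records
-- are flushed to disk, trading a small durability window for lower WRITELOG waits.
BEGIN TRANSACTION;
    UPDATE dbo.Baskets SET Quantity = Quantity + 1 WHERE BasketID = 7;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

A crash in that window can lose committed-but-unflushed transactions, which is exactly the compromise the session weighs up.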
Moving to the cloud in a big way? In this case study, learn about building a complex end-to-end infrastructure involving SQL Server (on-premises), Microsoft Azure SQL Data Warehouse, and Azure SQL Database.
Gain an understanding of how to use Azure Automation to reduce your costs and automate processes. You will learn about integration with Azure Active Directory, virtual networks, and data flows. Additionally, you will learn how to make decisions based on service and business requirements. 
SSDT and SSMS are the primary tools BI developers use for developing and managing SSAS Tabular. Unfortunately, their possibilities are limited, and we should look for other tools that help us automate monitoring and partitioning, understand what is going on inside the VertiPaq engine, optimize our queries, and manage our projects and code. In this session, I'm going to show you six amazing tools that must be in your developer/consultant tool belt. These tools help you develop, manage, monitor and optimize your Tabular model; in other words, they make your day-to-day job easier.
Many existing Data Factory solutions include a large number of workarounds due to limitations with the service. Now that Data Factory V2 is available, we can restructure our Data Factories to be lean, efficient data pipelines, and this session will show you how.

The initial Data Factory release was targeted at managing Hadoop clusters, with a couple of additional integrations thrown in - it was mistakenly believed to be "the new SSIS" and subsequently there were a lot of very disappointed people. The new release remedies many of these complaints, adding workflow management, expressions, ad-hoc triggers and many more features that open up a world of possibilities.

This session will run through the new features in ADFV2 and discuss how they can be used to streamline your factories, putting them in the context of real-world solutions. We will also look at the additional compute options provided by the new SSIS integration, how it works within the context of Data Factory and the flexibility it provides.

A working knowledge of ADF V1 is assumed.
DevOps and continuous integration provide huge benefits to data warehouse development. However, most BI professionals have little exposure to the tools and techniques involved. John will be showing how you can use VSTS - Visual Studio Team Services (formerly known as TFS) - to build and test your data warehouse code, and how to use Octopus Deploy to deploy everything to UAT and production. In particular the session will cover:

• Setting up Visual Studio Team Services to act as your build server
• How to use Octopus Deploy to deploy your entire data warehouse
• Developing a build-centric PowerShell script with psake
• Building and deploying SQL Server Data Tools projects with DAC Publish profiles 
• Writing and running automated unit tests 
• The many problems of automating tabular model deployments
PowerApps is an exciting and easy to pick up application development platform offered by Microsoft.
In this session we'll take an overview look at PowerApps from both the development and deployment sides.

We'll go through the process of building and publishing an app and show the rapid value that PowerApps can offer.

Using plenty of demos and examples, attendees will leave with a well-rounded understanding of the PowerApps offering and how it could be used in their organisations.
The system database TempDB has often been called a dumping ground, even the public toilet of SQL Server. (There has to be a joke about spills in there somewhere). In this session, you will learn to find those criminal activities that are going on deep in the depths of SQL Server that are causing performance issues. Not just for one session, but those that affect everybody on that instance. 
After this session, you will know how to architect TempDB for better performance, how to write more efficient code for TempDB, how space is utilized within it, and which queries and counters will help diagnose where your bottlenecks are coming from.
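For a flavour of the space-utilization side, the tempdb DMVs already break usage down by consumer (a small sketch using a documented DMV):

```sql
-- Break down tempdb space by consumer: user objects (temp tables),
-- internal objects (sorts, spools, hash joins) and the version store
SELECT
    SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
    SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
    SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
    SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage;
```

High internal-object counts typically point at sorts and spools spilling into tempdb rather than at your temp tables.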
A good DBA performs his/her morning checklist every day to verify if all the databases and SQL Servers are still in a good condition. In larger environments the DBA checklist can become really time consuming and you don’t even have the time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy Based Management to evaluate your servers and how I configured my setup. By the end of this session, you can verify your own SQL environment in no time by using this solution and have plenty of time for your morning coffee!
We live in a cloud-first world, but many organisations still run largely on-premises. How could you start your cloud journey? What services are easiest to deploy, and how can you demonstrate value, security, control and governance? This session covers how some of the organisations we work with start their journey and leverage Azure data services. Azure provides tons of capability that isn't available on-premises; this session will also highlight those services that can help illustrate and differentiate cloud services to complement your data strategy.
Microsoft announced the retirement of Power BI Embedded and recommended that everyone migrate to Power BI Premium and embed those reports. Unfortunately, the documentation on doing this is "light" to say the least. In this session we will go through a worked example from end to end so you can add Power BI to your next web project.
You are a DBA and have a few years’ experience, but you are having performance problems in your SQL Server environment.

In this demo-only session, you will learn how Premier Field Engineers at Microsoft troubleshoot performance problems and what tools and scripts they use. We will take a look at tools and scripts like SQLDiag, SQLNexus, PAL, and BPCheck.
Forget the stock answer of just clicking a button to create a service and off you go in the cloud. What if you have hundreds of SQL Servers, some legacy applications, running on older versions and no cloud subscription? Then the path is very different. This session focuses on the key steps you need prior to data migration, such as connectivity, identity, target destination, data migration and running in the cloud. What is the decision point to lift-and-shift, modernise then shift, or go for the art of the possible? Is Azure SQL Managed Instance the right target? What is it? Should it be Azure SQL DB? Should I run SQL Server on Infrastructure as a Service? What's the difference? Which pathway fits my scenario? We will take a real-world example of how you can take an application, migrate it using the Azure Data Migration Service, deep dive into Azure SQL Managed Instances and then use the built-in features to tune, protect and optimise the data tier.

Products: Azure Active Directory, Networking, Azure SQL Managed Instances, Azure Data Migration Service.

Background: During the past year, I have been working on architecting several projects which involve moving large volumes of SQL Servers to the cloud. We've hit blockers, we've had success, and during the journey we learnt that the art of the possible sometimes scares businesses who are looking at the here and now.
If you are a DBA and want to get started with Data Science, then this session is for you. This demo-packed session will show you an end-to-end Data Science project covering the core technologies in Microsoft Data + AI stack. You will learn the basics of R & Python programming languages, Machine Learning, real-world analytics and visualizations that businesses need today. In this session you will get a head start about each component in Microsoft Data + AI stack and the possibilities that can be achieved with them. You will see a real world application in action.
Is your Biml solution starting to remind you of a bowl of tangled spaghetti code? Good! That means you are solving real problems while saving a lot of time. The next step is to make sure that your solution does not grow too complex and confusing - you do not want to waste all that saved time on future maintenance!

Attend this session for an overview of Biml best practices and coding techniques. Learn how to centralize and reuse code with include files and the CallBimlScript methods. Make your code easier to read and write by utilizing LINQ (Language-Integrated Queries). Share code between files by using Annotations and ObjectTags. And finally, if standard Biml is not enough to solve your problems, you can create your own C# helper classes and extension methods to implement custom logic.

Start improving your code today and level up your Biml in no time!
You’re a DBA or Developer, and you have a gut feeling that these simple queries with an egregious number of columns in the SELECT list are dragging your server down.

You’re not quite sure why, or how to index for them. Worst of all, no one seems to be okay with you returning fewer columns.

In this session, you’ll learn why and when queries like this are a problem, your indexing options, and even query tuning methods to make them much faster.
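One of the indexing options this kind of session typically covers is a covering index, where INCLUDE columns let the engine answer a wide SELECT from the index alone instead of doing a key lookup per row (a sketch; the table and column names are invented for illustration):

```sql
-- A narrow key plus INCLUDE columns lets the query be satisfied
-- entirely from the index, avoiding a key lookup for every row
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Total);
```

The trade-off is write overhead: every INCLUDE column must be maintained on each insert and update, which is part of why "just index everything" is not an answer.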
This session will present an overview of Continuous Delivery and the benefits such approaches bring in the context of developing database applications with SQL Server. We will take a look at the features of SSDT that facilitate rapid database development, as well as the features of VSTS that can ease development in a shared environment.
As with any other language, you can write good DAX but you can also write bad DAX. Good DAX works fine, it is fast and reliable and can be updated easily. Bad DAX, on the other hand is… well, just bad.

In this session, we will show several DAX formulas, taken from our experience as consultants and teachers, analyzing (very briefly) the performances and looking for errors, or for different ways of writing them. As you will see, writing good DAX means following some simple rules and, of course, understanding well how evaluation contexts work!
The topics covered will be: naming convention, variables, error handling, ALL vs ALLEXCEPT, bidirectional filters, context transition in iterators, and FILTER vs. CALCULATE.
How do you optimize a DAX expression? In this session we analyze some DAX expressions and Tabular models and, through the usage of DAX Studio and some understanding of the VertiPaq model, we will look at how to optimize them.

As you will see, most optimizations are the direct application of best practices, but the session has the additional takeaway of understanding what kind of performance you should expect from your formulas, and the improvement you might gain from learning how to optimize the model and the code.
DirectQuery is a feature of Analysis Services that transforms a Tabular model into a semantic layer on top of a relational database, turning any MDX or DAX query into a real-time request to the underlying relational engine using the SQL language. This feature has been improved and optimized in the latest versions, including Azure Analysis Services, extending support to relational databases other than SQL Server and dramatically improving its performance.
In this session, you will learn what the features of DirectQuery are, how to implement best practices in order to obtain the best results, and the typical use cases where DirectQuery should be considered as an alternative to the in-memory engine embedded in Analysis Services.
In this session you’ll learn everything you need to know about using Analysis Services Multidimensional as a data source for Power BI. Topics covered will include the difference between importing data and live connections, how SSAS objects such as cubes and dimensions are surfaced – or not – in Power BI, how to design your cubes so that your users get the best experience in Power BI, how MDX calculations behave, and performance problems to watch out for.
Take charge of any performance issue coming your way. "SQL Server is hurting!" Turn feelings to symptoms, and become the hero that saved the day. Streamline the process of troubleshooting performance issues with new tools and capabilities, for faster insights and effective turnaround.   The agenda includes:
  • Query Performance troubleshooting fundamentals.
  • Analyzing query plan properties (Getting the execution context – what properties are available and what do they give you in showplan).
  • Analyzing query plan properties (warnings and context, runtime stats).
  • Bringing it all together with lightweight profiling and live troubleshooting scenario.
All of the above includes the latest content introduced in SQL Server 2017 since RTM.
Do you know if your database's indexes are being used to their fullest potential? Know if SQL Server wants other indexes to improve performance?


Come and learn how SQL Server tracks actual index usage, and how you can use that data to improve the utilization of your indexes. You will be shown how to use this data to identify wasteful, unused, & redundant indexes, and shown some performance penalties you pay for not addressing these inefficiencies. Finally, we will dive into the Missing Index DMVs and explore the art of evaluating its recommendations to make proper indexing decisions.
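As a preview of the kind of query involved, SQL Server exposes per-index read and write counts in a DMV (a minimal sketch):

```sql
-- Indexes that are written to often but rarely read are
-- candidates for review and possible removal
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```

These counters reset when the instance restarts, so always judge them against uptime before dropping anything.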
Times are certainly changing with Microsoft’s recent announcement to adopt the Linux operating system with the SQL Server 2017 release, and you should be prepared to support it. But what is Linux? Why run your critical databases on an unfamiliar operating system? How do I do the basics, such as backing up to a network share or adding additional drives for data, logs, and tempdb files?

This introductory session will help seasoned SQL Server DBAs understand the basics of Linux and how it differs from Windows, all the way from basic management to performance monitoring. By the end of the session, you will be able to launch your own Linux-based SQL Server instance on a production ready VM.
See how to use the latest SQL Server Integration Services (SSIS) 2017 to modernize traditional on-premises ETL workflows, transforming them into scalable hybrid ETL/ELT workflows in preparation for Big Data Analytics workloads in the cloud. We will showcase the latest additions to SSIS Azure Feature Pack, introducing/improving connectivity components for Azure Data Lake Store (ADLS), Azure SQL Data Warehouse (SQL DW), and Azure HDInsight (HDI).  We will also take a deep dive into SSIS Scale-Out feature, guiding you end-to-end from cluster installation to parallel execution, to help reduce the overall runtime of your workflows.  Finally, we will show you how to execute the SSIS packages on Azure as PaaS via Azure Data Factory V2.
Are you still asking yourself: what is big data? What is HDInsight and how can I benefit from it? Which type of HDInsight cluster should I use?
In this session I will explain in simple terms:
• When big data is needed and what it is
• What is Hadoop, its components and utilities
• What is HDInsight, its different cluster types and when to use one or the other
• How HDInsight integrates with other Azure Services
This session introduces the Receiver Operating Characteristic (ROC) curve and explains its strengths and weaknesses in evaluating models. We will then show the use of ROC curves and performance measures in Azure Machine Learning Studio.
Extended Events are much more powerful than any other monitoring technology available in SQL Server. Despite this potential, many DBAs have yet to abandon Traces and Profiler. Partially because of habit, but mostly because the tooling around Extended Events was less intuitive until recently.

Now, it's easier than ever to set up, control and inspect Extended Events sessions with dbatools! Not only does it simplify your basic interaction with XEvents, but it also helps solve your day-to-day problems, such as capturing and notifying deadlocks or blocking sessions.

Join SQL Server MVP Gianluca Sartori and PowerShell MVP Chrissy LeMaire to see how PowerShell can simplify and empower your Extended Events experience. Say goodbye to #TeamProfiler and join #TeamXEvents with the power of dbatools.
Do you have large-scale SSIS platforms which experience bottlenecks due to the number of packages or volumes of data that need to be processed?


Then you need to explore SSIS Scale Out, allowing multiple workers to process your workload. But what is SSIS Scale Out? In this session we will walk through the use cases, system setup and patterns for developing your SSIS packages to get the most out of this technology. At the end of this session you will have learned the key elements of this new technology and be in a position to assess if it can help solve some of the problems you are facing.
GDPR is coming, no matter where you are if you are handling data on European data subjects. Laying a solid foundation of data security practices is vital to avoid the potential fines and damage to reputation that being non-compliant can bring.

Practicing good data hygiene is vital to meeting compliance requirements, whether it is GDPR, PCI-DSS, HIPAA or other standards. The fundamentals around data identification, classification, and management are universal. Together we will look at some of the key areas that you can address to speed up your readiness for meeting GDPR requirements, including what data is covered, principles for gaining consent, data access requests, as well as other key recommended practices.

By the end of this session you will be able to start the groundwork on getting your organization in shape for its journey to compliance. If you want to avoid the big fines - up to EUR 20 million or 4% of global turnover, whichever is higher - it is important to act early.
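Data identification can start as simply as scanning column metadata for likely personal-data names (a deliberately crude sketch; the name patterns are illustrative only, not a complete classification scheme):

```sql
-- First-pass inventory of columns that may hold personal data,
-- as a starting point for proper classification work
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%email%'
   OR COLUMN_NAME LIKE '%name%'
   OR COLUMN_NAME LIKE '%phone%'
   OR COLUMN_NAME LIKE '%birth%'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```

A name match is only a hint; each candidate column still needs human review before being classified as in scope.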
Far too many people responsible for production data management systems are reluctant to embrace DevOps. The concepts behind DevOps can appear to be contrary to many of the established best practices for securing, maintaining and operating a reliable database. However, there is nothing inherent to a well-designed DevOps process that would preclude ensuring that the information stored within your data management system is completely protected. This session will examine the various methods and approaches available to the data professional to both embrace a DevOps approach to building, deploying, maintaining and managing their databases and protect those databases just as well as they ever have been. We will explore practices and plans that can be pursued using a variety of tooling and processes to provide DevOps methodologies to the systems under your control. You can embrace DevOps and protect your data.
Getting started reading execution plans is very straight forward. The real issue is understanding the plans as they grow in size and complexity. This session will show you how to explore the nooks and crannies of an execution plan in order to more easily find the necessary information needed to make what the plan is telling you crystal clear. The information presented here will better empower you to traverse the execution plans you’ll see on your own servers. That knowledge will make it possible to more efficiently and accurately tune and troubleshoot your queries.
Windows Server 2016 and later have two new ways to enhance SQL Server failover cluster instances (FCIs): Storage Spaces Direct (S2D) and Storage Replica (SR). S2D is one of the biggest changes for FCI deployments in quite some time and could change how you approach FCIs. Storage Replica enhances disaster recovery scenarios. This session will not only show the features in use, but also how to plan and implement them, whether you are using physical or virtual servers (on premises or in the cloud).
Provisioning dev environments is often a slow, complicated and manual process. Often devs simply don’t have the disk space. And then there is GDPR.

You can solve many of these problems with virtualisation technologies and source-controlled PowerShell scripts. We’ll show you how by talking you through:

1. Defining containers
2. Configuring Windows Server 2016 to run containers
3. Running SQL Server containers
4. Creating custom container images
5. Sharing container images

6. Defining database clones
7. Configuring the SQL Clone server
8. Creating database images from backups or live databases
9. Provisioning clones to a container in one click

The session will explain concepts via slides which will be backed up by demos.
Do you manage one or many SQL Server Reporting Services instances? Do any of them have multiple folders, dozens of reports or hundreds of subscriptions? 

Historically, managing and/or migrating these subscriptions, reports and folders has been incredibly time-consuming. But what if you could leverage an open source PowerShell module from Microsoft to simplify these and other SSRS management tasks? And what if those tasks could be accomplished 500 times faster than through the web-based GUI?

Join this session and you'll see all of this in action using real-world scenarios!
Start from nothing and use Test Driven Development to write a PowerShell function that uses the Microsoft Cognitive Services API to analyse pictures. I will take you on a journey from nothing to a complete function, making sure that all of the code works as expected, is written according to PowerShell best practices and has a complete help system. You will leave this session with a good understanding of what Pester can do and a methodology to develop your own PowerShell functions
Want to become a community speaker or involved in the SQL Community?
Don't feel confident enough to try?
Need some advice and guidance?
We want to help you, we will help you

Join Richard and Rob (and some special guests) in a gentle, conversational session where we will discuss and help to alleviate some of the common worries about joining the community in a more visible role. You can get advice and guidance on not only the methodology but also the benefits of becoming further involved in the SQL community, whether as a speaker, a volunteer or an organiser.
This session is intended as a deep dive into the Power BI Service and infrastructure to ensure that you are able to monitor your solution before it stops performing well, or when your users are already complaining. As part of the session I will advise you on how to address the main pains causing slow performance by answering the following questions:
* What are the components of the Power BI Service?
     - DirectQuery
     - Live connection
     - Import
* How do you identify a bottleneck?
* What should I do to fix performance?
* Monitoring
     - What parts to monitor and why?
* What are the report developers doing wrong?
     - How do I monitor the different parts?
* Overview of best practices and considerations for implementations
Azure is ready to receive all your event and device data for storage and analysis.
But which options in the Azure IoT portfolio should you use to receive and manage your data?
In this session I will explain the different options in the portfolio, take a closer look at how they work and what this means for you. Furthermore, I will take a closer look at the Azure Stream Analytics (ASA) language.
You will learn how to develop both simple and complex ASA queries, and how to debug. We will look at the possibilities, limitations and pitfalls in the Azure Stream Analytics language.
Finally, we will look at the different input and output choices and when to use each one. This includes a look at how to build a live-stream dashboard with Stream Analytics data in Power BI. The session is based on real-world project experiences and will use real data in the demos.
So, you think you know everything you need to know about the Power BI Report Server because you use traditional SQL Server Reporting Services.  Well, don’t believe it.  In this session we are going to discuss topics such as Configuring Kerberos to resolve connectivity issues. We will discuss different authentication types, when you need them, why you need them and how to use them.  We will then jump into configuring your report server to host Excel workbooks using Office Online Server.   Finally, we will demonstrate how to configure an SSAS Power Pivot instance for the Excel data model.  In addition to these topics, we will discuss other advanced topics such as connectivity and high availability during this demo-heavy session.
Microsoft’s Patrick and Adam answer a lot of questions. Those questions result in videos on their YouTube channel. This session combines some of the best challenges that they have dealt with including Power BI Desktop to the service, data source connectivity and Azure Analysis Services. Don’t miss out, there is a little something for everyone.
Authoring SSAS tabular models using the standard tools (SSDT) can be a pain when working with large models. This is because SSDT keeps a connection open to a live workspace database, which needs to be synchronized with changes in the UI. This makes the developer experience slow and buggy at times, especially when working with larger models. Tabular Editor is an open source alternative that relies only on the Model.bim JSON metadata and the Tabular Object Model (TOM), thus providing an offline developer experience. Compared to SSDT, making changes to measures, calculated columns, display folders, etc. is lightning fast, and the UI provides a "what-you-see-is-what-you-get" model tree that lets you view Display Folders, Perspectives and Translations, making it much easier to manage and author large models. Combined with scripting functionality, a Best Practice Analyzer, command-line build and deployment, and much more, Tabular Editor is a must for every SSAS Tabular developer. The tool is completely free, and feedback, questions or feature requests are more than welcome. This session will keep the PowerPoint slides to a minimum, focusing on demoing the capabilities of the tool. Attendees are assumed to be familiar with Tabular model development. https://tabulareditor.github.io/

A little bit of knowledge about how SQL Server works can go a long way towards making large data engineering queries run faster.  Whether you use SQL Server as a data source or as a R or Python query processing platform, knowing how it processes queries, manages memory and reads from disk is key to making it work harder and faster.

This session introduces and demonstrates how SQL Server:

  • operates internally
  • performs select queries
  • uses indexes to make queries run faster 
  • executes machine learning code to make operational predictions

It then introduces some query tuning techniques to help heavyweight analytics queries run faster.

The session uses Gavin Payne’s 20 years’ experience of working with SQL Server – mostly making it run faster, stay secure and remain available.
Machine Learning uses lots of algorithms. Things like Boosted Decision Trees, Fast Forest Quantile Regression and Multiclass Neural Network. Fortunately, you don't have to know the ins and outs of the algorithms to use them. But where's the fun in that? In this session, I'll walk you through the mathematical underpinnings of three simple algorithms, linear regression, decision trees and neural networks, showing how they work and how they generate their results. Warning: This session contains Mathematics.
Whoever coined the term "one size fits all" was not a DBA. Very large databases (VLDBs) have different needs from their smaller counterparts, and the techniques for effectively managing them need to grow along with their contents. In this session, join Microsoft Certified Master Bob Pusateri as he shares lessons learned over years of maintaining databases over 20TB in size. This talk will include techniques for speeding up maintenance operations before they start running unacceptably long, and methods for minimizing user impact for critical administrative processes. You'll also see how generally-accepted best practices aren't always the best idea for VLDB environments, and how, when, and why deviating from them can be appropriate. Just because databases are huge doesn't mean they aren't manageable. Attend this session and see for yourself!
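One technique in this space, available from SQL Server 2017, is the resumable online index rebuild, which lets long maintenance operations yield to business hours (a sketch; the index and table names are hypothetical):

```sql
-- Start an online rebuild that can be paused and resumed,
-- capped so it never runs longer than one maintenance window
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause before business hours, pick it up again the next window
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales PAUSE;
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales RESUME;
```

On a multi-terabyte table this turns a single unbroken multi-hour rebuild into a series of controlled chunks, at the cost of the operation taking longer overall.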
Internet connectivity to everyday devices such as light bulbs, thermostats, smart watches, and even voice-command devices is exploding. These connected devices and their respective applications generate large amounts of data that can be mined to enhance user-friendliness and make predictions about what a user might be likely to do next. This demo-heavy session will show how to use simple devices and sensors with the Microsoft Azure IoT suite, ideal for collecting data from connected devices, to learn about real-time data acquisition and analysis in an end-to-end, holistic approach to IoT with real-world solutions and ideas.
As data warehouses become more advanced and move to the cloud, Master Data Management is often bottom of the list. Being tied to an IaaS VM solely for MDS feels like a big step in the wrong direction! In this session, I will show you the secret of ‘app-iness’ with a cloud alternative which pieces together Azure and Office 365 services to deliver a beautifully mobile-ready front end, coupled with a serverless, scalable and artificially intelligent back end.

Attendees of this session should have a basic understanding of:

  • Azure Services (Data Lake, SQL DB, Data Factory, Logic Apps)
  • SQL Server Master Data Services
Understanding how to reduce the attack surface area of applications and SQL Server environments is imperative in today's world of constant system attacks from inside and outside threats. Learn about methodologies to improve security-related development practices, backed by real-world examples, including securely accessing a database, properly encrypting data, using SSL/TLS and certificates throughout the system, and guarding against common front-end attacks like SQL injection (SQLi) and cross-site scripting (XSS). This session will include both T-SQL and .NET code to give you an overview of how everything works together.
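As a small illustration of the SQL injection theme, the difference between concatenating input and parameterising it looks like this (a sketch; the table and variable are invented for the example):

```sql
DECLARE @input nvarchar(100) = N'O''Brien';

-- Vulnerable pattern: user input concatenated straight into the statement,
-- so crafted input can change the meaning of the query
-- EXEC('SELECT * FROM dbo.Users WHERE Name = ''' + @input + '''');

-- Safer: parameterise with sp_executesql so input is always data, never code
EXEC sys.sp_executesql
    N'SELECT * FROM dbo.Users WHERE Name = @name',
    N'@name nvarchar(100)',
    @name = @input;
```

The same principle applies on the .NET side: a parameterised SqlCommand sends values out-of-band instead of splicing them into the SQL text.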
SQL disk configuration and planning can really hurt you if you get it wrong in Azure. There is a lot more to getting SQL Server right on Azure VMs than next-next-next. Come along and dive deeper into Azure storage for SQL Server. Topics covered include:

  • SQL storage capacity planning concepts
  • Understanding storage accounts, VM limits, and disk types
  • Understanding and planning around throttling
  • Benchmarking
  • Optimal drive configuration
  • TempDB considerations in Azure
  • Hitting “max” disk throughput
Learn how to build an Azure Machine Learning model, how to use, integrate and consume the model within other applications, and learn the basic principles and statistics concepts available in the different ML algorithms. If you want to know whether to choose a 'neural net' or a 'two class boosted decision tree', this session will reveal all!
So you're thinking about implementing a data science project in your business?

You might be considering one or all of these options:
  • Hiring a data scientist
  • Using existing staff
  • Engaging a consultant
As with most things in business, if you fail to plan, you plan to fail.

Starting out on a project without adequate planning risks wasting time and money when you hit unexpected roadblocks. Additionally, putting a data science project into production without sufficient testing, monitoring, and due diligence around legal obligations can expose you to substantial problems.

I want to help you avoid as much risk as possible by taking you through my data science readiness checklist, including topics like:
  • Application development processes and capabilities
  • Data platform maturity
  • Use of data products within the business
  • Skillsets of existing business intelligence and other analytical teams
  • Analytical teams processes and capabilities
  • IT and analytical teams alignment to business goals
  • Recruitment, induction, and professional development processes
  • Legal, ethical, and regulatory considerations
Armed with the checklist, there'll be fewer "unknown unknowns" that could derail your project or cause extra cost. Let's get planning!

Power BI: Minutes to create and seconds to impress. 

Yes, it takes minutes to create and share Power BI reports, which means every user with the relevant Power BI access can create workspaces, apps, reports, dashboards and schedules. With no deployment strategy, maintaining the Power BI service can become a terrible job. 
In this session, I will cover how having a deployment strategy can make a terrible maintenance job seamless.

This session will include:
  • What happens when a user creates a Power BI app workspace
  • How to control user access
  • Working with different environments
  • Creating and sharing reports using Power BI Apps 
  • How to monitor dataset schedules and failure notifications
  • How to use the Power BI API to document your organisation's Power BI service
I'm a SQL Server DBA and a lot of my time is spent in PowerShell these days. Database environments can be quite complex, and in my attempts to automate setting up lab environments for (automated) testing I discovered the open source PowerShell library called Lability. It has a slight learning curve and leans heavily on DSC, which also has a bit of a learning curve. In this session I'll show you how to set up a fairly complex lab from start to finish and take you through my lessons learned.
SQL Server and Azure are built for each other. New hybrid scenarios between on-premises SQL Server and Azure mean they don't have to exclude each other; instead you can have the best of both worlds, reducing operational costs.

For example, by taking advantage of services like Azure Blob Storage or Azure VMs
we can increase the availability of our services or distribute data in smart
ways that benefit our performance and decrease cost.

In this demo-heavy session, you will learn the strongest use cases for hybrid scenarios
between on-premises and the cloud, and open a new horizon of what you can do
with your SQL Server infrastructure.
Being a BI professional, you need all the performance tuning the DB folks get and more. In this hour, we will go over important performance tuning tips that you can use to help make your deliverables faster and more effective. We will touch on the MSBI tools: SSIS, SSAS, SSRS and Power BI.
SQL Server 2017 is all about choice. Choice for developers to run on Windows, Linux, or Containers. In this session we will show you the experience of running SQL Server on Linux and Containers including a behind the scenes of how we built it.
Looking to upgrade your SQL Server? This session will discuss all the new features and capabilities of SQL Server 2017 including SQL Server on Linux, Docker Containers, Graph Database, Python, Adaptive Query Processing, Automatic Tuning, and new HADR capabilities. We will even talk about a few hidden gems included in the release. Walk away with content and demos that you can use to discover the value of upgrading to SQL Server 2017.
In today's fast-paced world, businesses require up to the minute information to support critical decisions. Traditional business intelligence solutions, however, are not able to keep up with this demand and a new approach is required. Azure Stream Analytics is a real-time event processing engine capable of analyzing millions of events every second. During this session, you will learn some of the key concepts needed to work with streaming data before stepping through an end-to-end streaming data solution.
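To give a flavour of what streaming queries look like, here is a minimal Stream Analytics sketch; the input/output aliases, column names, and window size are illustrative, not taken from the session:

```sql
-- Stream Analytics query language (SQL-like): count events per device
-- over 10-second tumbling windows. 'input' and 'output' are hypothetical
-- aliases defined on the job's inputs and outputs.
SELECT
    deviceId,
    COUNT(*)         AS eventCount,
    System.Timestamp AS windowEnd
INTO output
FROM input TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(second, 10)
```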
Pop quiz DBA: Your developers are running rampant in production. Logic, reason, and threats have all failed. You're on the edge. What do you do? WHAT DO YOU DO?

Hint: You attend Revenge: The SQL!

This session will show you how to "correct" all those bad practices. Everyone logging in as sa? Running huge cursors? Using SELECT * and ad-hoc SQL? Stop them dead, without actually killing them. Ever dropped a table, or database, or WHERE clause? You can prevent that! And if you’re tired of folks ignoring your naming conventions, make them behave with Unicode…and take your revenge!

Revenge: The SQL! is fun and educational and may even have some practical use, but you’ll want to attend simply to indulge your Dark Side. Revenge: The SQL! assumes no liability and is not available in all 50 states. Do not taunt Revenge: The SQL! or Happy Fun Ball.
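For a taste of the kind of "correction" on offer: a DDL trigger can stop dropped tables dead. This is a sketch, not the session's exact demo; the trigger name and message are illustrative:

```sql
-- Database-level DDL trigger that rolls back any DROP TABLE
CREATE TRIGGER trg_no_drop_table
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR ('Nice try. Tables here are not droppable.', 16, 1);
    ROLLBACK TRANSACTION;  -- undoes the DROP itself
END;
```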
Never given time or care, never forming good relationships, becoming bloated, corrupt and rife with indistinguishable copies, and all so horrifyingly pervasive in society. But enough about the Kardashians, what about YOUR DATA? If you want to straighten it out and prevent it from going too far in the first place, this session is for you. We will cover constraint basics (not null, check, primary key/unique, foreign keys), provide standard use cases, and address misconceptions about constraint use and performance. We will also look at triggers and application logic and why these are NOT substitutes for (but can effectively complement) good constraint usage. Attendees will enjoy learning how to keep THEIR data off the tabloid page!
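As a preview of the constraint basics covered, here is a sketch of a fully constrained table; the names are illustrative and it assumes a dbo.Agent table already exists:

```sql
-- Every rule below is enforced by the engine itself,
-- not by hopeful application code
CREATE TABLE dbo.Celebrity
(
    CelebrityId   int           NOT NULL
        CONSTRAINT PK_Celebrity PRIMARY KEY,
    StageName     nvarchar(100) NOT NULL
        CONSTRAINT UQ_Celebrity_StageName UNIQUE,
    FollowerCount int           NOT NULL
        CONSTRAINT CK_Celebrity_Followers CHECK (FollowerCount >= 0),
    AgentId       int           NULL
        CONSTRAINT FK_Celebrity_Agent REFERENCES dbo.Agent (AgentId)
);
```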
In this session, we will dive deeper into the art of dimensional modeling.
We will identify the different types of fact tables and dimension tables and
discuss how and when to use each type. We will also review approaches to
creating rich hierarchies that simplify complex reporting. This session will
be very interactive--bring your toughest dimensional modeling quandaries!
Developers, architects, and data professionals face unprecedented rates of change – in which businesses must elastically respond to customer demand as user populations grow and shrink dramatically and unpredictably, while functionality must rapidly evolve to meet customer needs and respond to competitive pressures. To address these realities, developers are increasingly selecting cloud-born distributed databases for massively scalable, high performance, globally distributed data storage. Come learn about the business goals and technical challenges faced by real-world customers, why they chose Azure Cosmos DB, and the patterns they used to deliver highly available, globally distributed experiences.
Azure SQL Database built-in intelligence features will help you improve the performance and security of your database and dramatically reduce the overhead of managing thousands of databases. Running millions of customer workloads, SQL Database collects, processes, and evaluates a massive amount of telemetry to come up with recommendations and alerts tailored to your workload, making your databases faster and more secure. This session covers the most advanced features, including Threat Detection, Vulnerability Assessment, Automatic Tuning and Intelligent Insights. A European customer testimonial and live demos will help you understand how SQL Database's built-in intelligence is making a major difference for their company.
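As a small taste of Automatic Tuning, enabling automatic plan correction and inspecting the engine's recommendations looks roughly like this:

```sql
-- Enable automatic plan correction on the current database
-- (Azure SQL Database and SQL Server 2017+)
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- See what the engine is recommending
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```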
These are some of the questions I will answer:

1) How do I make my patching process easier when I have AGs?

2) I want to easily back up and restore databases that are part of an Availability Group. Is that possible?

3) I want to make sure that my AGs are healthy.

4) I am just tired of using Management Studio for checking my AGs. Can I make this easier?

You will see that adding PowerShell to your life will help you work smarter, not harder!
Yes, this is a widespread problem: you expect something, but reality is a bit different. You assume a query will run in 1 second, but it runs (oh, my God!) for 1 hour. You expect your query to perform an index seek, but it performs an index scan instead. You know that your query doesn't use locks, but it uses them anyway. Let's try to understand why this happens.

This session will focus on understanding the internals of such situations and bringing our expectations closer to reality. During the session, we will look at queries which accidentally became slow, queries which can't finish, and even queries which can't start, and we will find the root causes for all those cases. And for each case, we will find a way to handle it.
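One classic example of expectation vs. reality that sessions like this dissect is the implicit conversion that silently turns a seek into a scan; the table and column names below are illustrative:

```sql
CREATE TABLE dbo.Customers
(
    CustomerId int IDENTITY PRIMARY KEY,
    AccountNo  varchar(20) NOT NULL
);
CREATE INDEX IX_Customers_AccountNo ON dbo.Customers (AccountNo);

-- N'...' is nvarchar: the varchar column is implicitly converted
-- row by row, so the index seek you expected becomes a scan
SELECT CustomerId FROM dbo.Customers WHERE AccountNo = N'AC-1001';

-- Matching the types restores the seek
SELECT CustomerId FROM dbo.Customers WHERE AccountNo = 'AC-1001';
```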
Azure SQL Data Warehouse is a massively parallel processing (MPP) cloud-based, scale-out, relational database capable of processing massive volumes of data.
SQL Data Warehouse:
Combines the SQL Server relational database with Azure cloud scale-out capabilities.
Decouples storage from compute.
Enables increasing, decreasing, pausing, or resuming compute.
Integrates across the Azure platform.
Utilizes SQL Server Transact-SQL (T-SQL) and tools.
In this session we will discuss this service from Microsoft Azure and how we can use it to help in our day-to-day work.
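To illustrate the scale-out model, a hash-distributed table in SQL Data Warehouse is declared like this; the table and column names are illustrative:

```sql
CREATE TABLE dbo.FactSales
(
    SaleId     bigint         NOT NULL,
    CustomerId int            NOT NULL,
    Amount     decimal(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),   -- rows spread by hash of the key
    CLUSTERED COLUMNSTORE INDEX
);
-- Small dimension tables are often replicated to every compute node
-- instead: DISTRIBUTION = REPLICATE
```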
Learn about Microsoft SQL Operations Studio, a new lightweight, free, multi-OS tool for SQL Server, Azure SQL Database and SQL Data Warehouse running anywhere. Hear about the current state of the art and the future of SQL Server and SQL tools from the product group leaders.
Ken Van Hynning, Principal Engineering Manager
Asad Khan, Principal Group PM Manager
Columnstore indexes can speed up the performance of analytics queries significantly, but are you getting the best performance possible? Come to this session to learn how to diagnose performance issues in queries accessing a columnstore index and the steps you can take to troubleshoot them. Some of the techniques we discuss are rowgroup elimination, statistics, partitioning, improving query plan quality, tweaking the schema, and creating one or more nonclustered btree indexes.
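A useful starting point for this kind of diagnosis is the rowgroup physical stats DMV, which exposes undersized or heavily deleted rowgroups:

```sql
-- Rowgroup health for columnstore indexes: small or heavily deleted
-- rowgroups are a common cause of poor scan performance
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_desc,
       total_rows,
       deleted_rows,
       trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
ORDER BY object_id, row_group_id;
```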
Ensuring the ongoing protection of personally identifiable information is mandatory in today's business, helping you to guard against data breaches, and comply with the GDPR.

In a climate where cyber attacks are all too frequent, and data is spread across a growing number of different environments, the challenge of protecting your data can seem daunting.

Microsoft MVP and PASS President Grant Fritchey, and Coeo’s James Boother will address the implications of the GDPR on database management, and demonstrate a privacy-first approach to controlling and protecting data as it changes and moves through your SQL Server estate.

As well as offering guidance for assessing your data estate for GDPR readiness, this session will include some great tools and tips for building data protection and privacy into your development processes, and dispel the myth that database DevOps and compliance can't go hand in hand.

With the right preparation, you can build compliance into your processes, keep sensitive data safe, and deliver value quickly to your end users.
For a time the challenging decision was about moving to the cloud. Then the challenge was whether to implement PaaS over IaaS. Now, with the introduction of Cosmos DB, what is the right storage solution for your application? 

Focusing on the micro-service/OLTP domain, this talk looks at the challenges facing developers and teams when choosing between Azure SQL Database and Cosmos DB (DocumentDB). The talk approaches this challenge through solution use cases that test each offering for appropriateness, looking at areas such as consistency, performance, security, availability and cost. 

Demo gods permitting, there will be at least one demo on how to approach a fair performance test. 

This talk is suitable for anyone wanting to get better insight into Azure SQL Database or DocumentDB, or for anyone thinking about jumping from one to the other.

At the end of the talk you will have a better understanding of how to approach this problem and arrive at the right solution in an unbiased way.

See the latest features of SSIS in ADFv2.  We will show you how to join your Azure-SSIS Integration Runtime (IR) to an ARM VNet, so you can use Azure SQL Managed Instance to host your SSISDB and access data on premises.  You will learn how to select Enterprise Edition for your IR, enabling you to use advanced/premium features, e.g. Oracle/Teradata/SAP BW connectors, CDC components, Fuzzy Grouping/Lookup transformations, etc.  You will also learn how to customize your IR via a custom setup interface to modify system configurations/install additional components, e.g. (un)licensed 3rd party/Open Source extensions, assemblies, drivers, tools, APIs, etc.  Finally, we will show you how to trigger/schedule/orchestrate SSIS package executions as first-class activities in ADFv2 pipelines.
Imagine a database system that can perform computations on sensitive data without ever having access to the data in plaintext. With such confidential computing capabilities, you could protect your sensitive data from powerful adversaries, including malicious machine admins, cloud admins, or rogue DBAs, while preserving the database system’s processing power. With Always Encrypted using enclave technologies, including Intel Software Guard Extensions (SGX), this powerful vision has become a reality. Join us for this session to learn about this game-changing technology and opportunities to preview it.
Azure Data Catalog is a flexible service for enterprise metadata and data source discovery. In this session Senior Program Manager Matthew Roche will present patterns and best practices for successfully adopting Data Catalog.

Large and small organizations use Data Catalog to help get more value from their existing data sources. Working with organizations around the world, the Data Catalog team has identified common patterns that predict success, as well as common pitfalls to avoid. Attend this session to understand the most common patterns and the most important practices for a successful Data Catalog adoption.
Come learn about the new adaptive query processor in SQL Server 2017 and Azure SQL Database. We will discuss how this feature works, how it can benefit your application and queries, and the future of intelligent query processing.
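By way of preview, the adaptive query processing features light up under database compatibility level 140, and individual features can be toggled for troubleshooting; the database name below is illustrative:

```sql
ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 140;

-- Example: switch off interleaved execution for multi-statement
-- table-valued functions while investigating a regression
ALTER DATABASE SCOPED CONFIGURATION
SET DISABLE_INTERLEAVED_EXECUTION_TVF = ON;
```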
Relational databases, graph databases, search-engine databases, key/value stores, columnar databases, time-series databases, document databases, analytics data stores, object stores... the list goes on and on. When should you choose which option?

I will walk through the database services in Azure and how to characterize a workload to make the right DBaaS selection for your use case. 

The DBaaS ecosystem can be daunting, but understanding the key strengths of each solution and how they map to specific workloads is key to making the correct choices for any data workload in Azure. 
The Azure SQL Data Warehouse service recently launched a new compute-optimized performance tier that delivers new performance capabilities and drives faster insight. With this innovation, Azure SQL Data Warehouse is a true powerhouse in the cloud data warehouse industry, leveraging the latest hardware innovations such as NVMe SSDs to deliver up to 100x performance gains on customer workloads. This session explains the innovation behind the new offering and demonstrates the value that a compute-optimized performance tier can bring to your cloud data warehouse.

In this session you will learn about the new compute optimized performance tier in Azure SQL Data Warehouse and how to maximize your performance through improved table design and columnstore optimization.
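Moving to the compute-optimized tier is, at the T-SQL level, a one-line scale operation; the database name and target DWU are illustrative:

```sql
-- Gen2 (compute-optimized) service objectives carry a 'c' suffix
ALTER DATABASE MyDw MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
```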
You want to make your IT organization better, whether that is by purchasing an important software package or piece of hardware, introducing a new team process, or making some other significant change. But your organization’s middle and executive management have been prone to saying ‘No’ to ideas like this in the past. How do you move forward with any hope of making progress? This session will teach you many of the mistakes IT pros make when pitching an idea and how to avoid them. We’ll also cover a tried-and-true workflow for IT pros that gets positive results when pitching ideas to management and executives. Don’t you want your next idea to be successfully implemented? Attend this session to learn the ropes!
Data Science, artificial intelligence and machine learning are driving much of the discussion around enhanced data analytics. Deep learning utilises neural networks with many hidden layers and is exceptional at identifying patterns in unstructured data, including image, sound, video and text data, as well as time-series data. These deep neural networks, however, require a large amount of data in order to set their weights correctly and can be slow to learn. In this presentation I will discuss the challenges to training deep neural networks, and how these can be overcome by using better design and utilising parallelisation offered by GPUs and horizontal scaling. 

The preview Azure Batch AI service enables scalable deep neural network training by maximising the compute power of CPUs or GPUs across multiple machines to minimise the challenges faced when training data models. Azure Batch AI supports an array of deep learning frameworks including TensorFlow, Caffe, Keras, and Microsoft’s own CNTK. This presentation will also examine the capabilities of Azure Batch AI and highlight use cases which can benefit from this scalable architecture.
There is a vast array of PaaS components in Azure: Azure SQL Data Warehouse, Azure SQL Database, Data Factory, Stream Analytics, Analysis Services, Machine Learning, Blob Storage, Data Lake, etc. How do we choose the
right components from this pool to build an architecture that meets the analytics needs of an organisation? What are the key decisions?

Starting from the evolution of one organisation's Azure architecture over the course of two years, and drawing on experiences with other clients, we will review the pros and cons of the alternatives, examining data storage, orchestration, real-time analytics, and environment setup options.
Take a deep dive into different types of snapshots of SQL databases
and discover the difference between crash-consistent and application-consistent
snapshots. Mix and match snapshotting with other data protection methods using
best practices and lab findings shared in this session. Want to improve
dev/test/staging workflows? .BAK files are slow and inefficient... there are
better ways to achieve the desired results. Stay safe, and add new tools to
your arsenal!
Building effective Power BI reports and dashboards requires a well-designed data model with the right relationships and measures. This session tackles the challenges you encounter during the design phase and where to get the training required for it.
We will use an example project to guide us through the design phase and the stages you will progress through to complete the data model.

Datamovements are a Silver Data Analytics & Training Microsoft partner. Joining them at SQLBits 2018 is their partner Pragmatic Works, who provide Learning-on-Demand training for Power BI and more.

Experience the Magic of Pyramid 2018!

See high-end analytics and data science on any browser or device, on any data source, using any database storage system.
See how Machine Learning, using R and Python, makes sophisticated analytics simple.
See an innovative, modern user interface and Visualisations without limits.
See unique end user data preparation and processing in one web platform.
See how reuse and collaboration across the enterprise with low cost and easy implementation fulfils the promise of self-service analytics, on premise, in the cloud or both with Pyramid PULSE™.

If you are a DBA dealing with multiple platforms like SQL Server, MySQL, MariaDB, PostgreSQL, Redshift, etc., this session is for you. The number one issue multi-platform DBAs face is that they have to learn multiple Integrated Development Environments (IDEs) when dealing with different databases. The initial curiosity of learning something new gets old very fast when juggling multiple platforms. There are often scenarios where we want simple efficiency enhancements in our daily routine, like powerful search, cloud storage and collaboration, which are often missing in our run-of-the-mill clients.

This session is not your usual tips-and-tricks session. It is rather a very different take on the life of a multi-platform DBA. We will share some real-world scenarios and life stories of DBAs who deal with different database platforms every day and their constant battle with efficiency. We will see some neat solutions, demos, and tools which can make our daily job pleasant. Everybody who attends the session will get free scripts they can use to improve their SQL Server's performance.
In recent times there has been some speculation and even a bit of doom
and gloom about the future of the DBA. While the pessimists prepare for the
worst, the rest of us knuckle down, keep our organization's databases alive and
well, and continue to do what DBAs have always done: we adapt to change and
prepare to meet the future challenges of the infinitely dynamic world of data.


This session will present real-world statistics on the evolving role of
the DBA, the concerns facing DBAs today, and valuable insights on the DBA's path
to career success. Learn how the megatrends of DevOps, Cloud, NoSQL, Big Data
and more are affecting your role today and how they are likely to shape it into
the future. Join Peter O’Connell and Martin Wild to explore:

  • What are the top trends impacting DBAs today?
  • How do you compare with other DBAs in the industry?
  • Are you a cost center or a value center?
  • How do DBAs manage their evolving role while planning for the future?
  • How to increase your value to the organization
DevOps for the data platform is often the elephant in the room: everyone thinks it's too hard, so it's left until last or not done at all.
Mike Prince, Senior Technical Consultant and Independent User of MaxParallel™ reveals how it worked for Ultima Business Solutions and details the levels of performance enhancements they now achieve.
Resolve the common problems with building an automated DevOps release process.

In one hour we will build a deployment pipeline to ensure reliable releases with controls in place. We will show you the techniques we use to make database DevOps possible.
If you attended the earlier session at https://sqlbits.com/Sessions/Event17/DevOps_from_Sabin_io, this is a practical example of DevOps in action.

We look at common problems such as three-part names, old invalid code, and managing data. We will also look at why these practices are important to succeeding and the problems they prevent.

For the demonstrations we will be looking at SSDT in Visual Studio and VSTS. If you are getting started with these tools or want to further your existing processes, please join us.
Big data processing increasingly must address not just querying big data but
also applying domain-specific algorithms to large amounts of data at scale.
This ranges from developing and applying machine learning models to custom,
domain specific processing of images, texts, geospatial data etc. Often the
domain experts and programmers have a favorite language that they use to
implement their algorithms such as Python, R, C#, etc. Microsoft Azure Data
Lake Analytics service is making it easy for customers to bring their domain
expertise and their favorite languages to address their big data processing
needs. In this session, I will showcase how you can bring your Python, R, and
.NET code and apply it at scale using U-SQL.
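The shape of that U-SQL integration, sketched here for Python; the file paths, column names, and placeholder logic are illustrative:

```sql
// U-SQL with the Python extension: the script's usqlml_main receives
// a pandas DataFrame per reduce group and returns one back
REFERENCE ASSEMBLY [ExtPython];

DECLARE @pyScript = @"
def usqlml_main(df):
    df['textlen'] = df.text.apply(len)   # placeholder domain logic
    return df
";

@tweets = EXTRACT id string, text string
          FROM "/data/tweets.csv"
          USING Extractors.Csv();

@scored = REDUCE @tweets ON id
          PRODUCE id string, text string, textlen int
          USING new Extension.Python.Reducer(pyScript:@pyScript);

OUTPUT @scored TO "/output/scored.csv" USING Outputters.Csv();
```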
More and more customers looking to modernize their analytics are exploring the data lake approach in Azure. Typically, they are most challenged by a bewildering array of poorly
integrated technologies and a variety of data formats and data types, not all of
which are conveniently handled by existing ETL technologies. In this session,
we’ll explore the basic shape of a modern ETL pipeline through the lens of
Azure Data Lake. We will explore how this pipeline can scale from one to
thousands of nodes at a moment’s notice to respond to business needs, how its
extensibility model allows pipelines to simultaneously integrate procedural
code written in .NET languages or even Python and R, how that same
extensibility model allows pipelines to deal with a variety of formats such as
CSV, XML, JSON, images, or any enterprise-specific document format, and finally
explore how the next generation of ETL scenarios is enabled through the integration
of intelligence in the data layer in the form of built-in Cognitive capabilities.
GDPR affects any business that stores data, including the SQL Server product group. 

Come and listen to how they approached it for SQL Server and Azure SQL Database.

SQL Server and Azure SQL Database are not only used by customers to hold data. In order to build, ship, monitor, maintain and enhance the product, the SQL Database product group has to capture and process data about customers. This includes some personal details, and thus they needed to address the challenge of GDPR.

Conor will talk through how they approached the challenge and the lessons learnt. 
Graphs are ubiquitous: social networks, transportation networks, fraud detection, making recommendations, predicting the propensity to purchase. But do you need a dedicated graph database to solve your graph problems? In this session we will look at how the graph engine introduced with SQL Server 2017 and Azure SQL Database can help you generate insights from your highly connected data.
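As a flavour of the syntax, node and edge tables plus a MATCH query look like this; the table and column names are illustrative:

```sql
CREATE TABLE Person (Id int PRIMARY KEY, Name nvarchar(100)) AS NODE;
CREATE TABLE Likes AS EDGE;

-- Who does Alice like?
SELECT p2.Name
FROM Person AS p1, Likes, Person AS p2
WHERE MATCH(p1-(Likes)->p2)
  AND p1.Name = N'Alice';
```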
Azure Analysis Services is a cloud-based version of Analysis Services, and in this session you’ll learn about:

• What Azure Analysis Services is
• When you should use it - including comparisons with on-premises Analysis Services and Power BI Premium
• Configuring Azure Analysis Services in the Azure portal
• Developing and deploying Azure Analysis Services models
• Building reports in Excel and Power BI using Azure Analysis Services
• Connecting to on-premises data sources from Azure Analysis Services
• Automation
• Integration with other cloud BI services
• Sizing and pricing
• Monitoring
With the emergence of SQL Server 2017 on Linux, new challenges arise for High Availability and Disaster Recovery solutions. What kinds of features and add-ons exist in Linux that provide these solutions, and what is the interoperability between instances in hybrid scenarios (with Linux and Windows)? How can we configure all the scenarios we know from Windows on Linux, and additionally how can we implement such hybrid scenarios? Join me in this session where we will discuss all these points, as well as possible architectures and best practices for implementing HA/DR scenarios in SQL Server 2017 on Linux.