Please see the Agenda for more details of sessions and times.

Moving to the cloud means that we need to take a different approach to how we design and tune databases. Managed Instance is a case in point: get the storage configuration wrong and it will not perform. Together we will look at the storage options and architecture for Managed Instance before demonstrating how to tune the storage layer to get the performance we need. Of course, none of this is any use without knowing what level of performance you need, so we will also look at how you can gather this information before you start testing.
You've heard about the “R” language and its growing popularity for data analysis. Now you'd like a walk-through of what is possible when analyzing your data? Then this session is for you:

You’ll get a short introduction to how R came to be and what the R ecosystem looks like today. Then we will extract sales data for different companies from a Navision ERP database on SQL Server.
Our data will be cleaned, aggregated and enriched in the RStudio environment. We’ll generate different diagrams on-the-fly to gain first insights.
Finally we’ll see how to use the Shiny framework to display our data on a map, interactively changing our criteria, and showing us where the white spots really are.
Are you prepared if a tornado filled with sharks destroys your primary data center? You may have backed up your databases, but what about logins, agent jobs, availability groups or extended events? 

Join SQL Server and PowerShell MVP Chrissy LeMaire for this session as she demos how disaster recovery can be simplified using dbatools, the SQL Server Community's PowerShell module.
SQL Server 2019 is a major new update for the data professional, and there's a lot more to it than the big data features.

This session will highlight certain features that busy DBAs and developers will find useful. We'll look at UTF-8 support, row-mode query processing improvements, and Always Encrypted with secure enclaves. Finally, we'll look at a better sqlcmd by covering the command-line tool mssql-cli.
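To make the UTF-8 point concrete, here is a minimal sketch (database and table names are hypothetical) using one of the new _UTF8 collations that ship with SQL Server 2019:

    CREATE DATABASE DemoUtf8 COLLATE Latin1_General_100_CI_AS_SC_UTF8;
    GO
    USE DemoUtf8;
    -- Under a _UTF8 collation, varchar holds full Unicode encoded as UTF-8,
    -- which can roughly halve storage versus nvarchar for mostly-ASCII text.
    CREATE TABLE dbo.Notes (NoteText varchar(200));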

At the end of the session, you will have a better understanding of several new features in SQL Server 2019 which will help you as a database administrator or developer.

Demos will be included for most of the features covered.
As announced in September 2018, SQL Server 2019 expands the "adaptive query processing" features of SQL 2017 and relabels them as "intelligent query processing". This name now covers many features, such as batch mode on rowstore, memory grant feedback, interleaved execution, adaptive joins, deferred compilation, and approximate query processing. 

In this high-paced session, we will look at all these features and cover some use cases where they might help - or hurt! - you.
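As a taste of what is involved: most of these features light up simply by moving a database to compatibility level 150, and each can be toggled individually for testing. A minimal sketch (the database name is a placeholder):

    ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 150;
    GO
    USE [YourDb];
    -- Switch individual features off to compare plans and performance:
    ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;
    ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;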
The word Kerberos can strike fear into a SQL DBA as well as many Windows Server Administrators.
What should be a straightforward and simple process can lead to all sorts of issues, and trying to resolve them can turn into a nightmare. This talk looks at the principles of Kerberos, how it applies to SQL Server and what we need to do to ensure it works.
We will look at
  • What is the purpose of Kerberos in relation to SQL Server?
  • When do we need to use it?  Do we need to worry about it at all?
  • How  do we configure it?  What tools can we use?
  • Who can configure it?  Is it the DBA job to manage and configure Kerberos?
  • Why does it cause so many issues?
Because, on the face of it, setting up Kerberos for SQL Server is actually straightforward, but it is very easy to get wrong and then sometimes very difficult to see what is wrong. Preview here: https://www.youtube.com/watch?v=uO9NqxizT_8
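As a taster of the diagnostics involved, one simple query tells you whether your current connection actually negotiated Kerberos or silently fell back to NTLM:

    SELECT session_id, auth_scheme   -- KERBEROS, NTLM or SQL
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;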
There are many methods and options in SSIS to handle errors during a package execution. You can use event handlers, event propagation, package transactions, precedence constraints, error rows, and more. Things get more complicated when your package has multiple levels of containers, or even when one package executes another. It is very easy (and common) to get lost and do things the wrong way.

In this session we will learn about all the options available to us for handling errors in SSIS, but more importantly, we will learn about the best practices and the right way to do it.
In this session, we will explore real-world DAX and model performance scenarios. These scenarios range from memory pressure to high CPU load. Once we identify them using the monitoring tools, watch as we dive into the model and improve performance through concrete examples. Will it be fixed by optimizing the DAX statement or by making changes to the model? Come watch to find out!
Azure offers a wide range of services that can be combined into a BI solution in the cloud. What possibilities does Azure currently offer for creating a modern BI architecture? The components currently on offer range from Azure SQL DB and SQL DWH to Data Factory, Stream Analytics, Logic Apps, Analysis Services and Power BI, to name a few. This is a powerful toolbox with which you can achieve your first successes very quickly. Step by step you will learn how to create the classic ETL in the cloud and analyze the results in Power BI.
You don't have to be a superhero to make a difference to our planet. Just a bit of programming skill will suffice. We can't imagine our lives without electricity. We use it for light and heat, to keep our food fresh; we work with computers, use mobile phones, and don't forget entertainment! We need electricity for driving cars, even more now with EVs. Electricity is generated mostly with fossil fuels; we can use nuclear power and hope that nobody makes a mistake, but people do make mistakes. That's why we build wind farms, solar panels and hydro power plants, but we can't force them to generate electricity when we want: they are not aware of World Cup finals. So how can we make sure we use green power more? We need to store electricity when these sources can generate it and use it when they produce less, but storing electricity is hard. We have to change the way we consume energy, and it has to be automatic, so people wouldn't even know. That's what we do at OVO Energy: using IoT devices to change power usage patterns, we create a virtual power plant which can be called on when demand exceeds supply. I will show you how we use Azure IoT Hub to do that; you don't have to be a C or C++ developer to work with IoT.
There are cases where you create indexes to support query elements that rely on order, but the query optimizer doesn't seem to realize this and ends up applying a sort. This session demonstrates a number of such situations and provides workarounds that enable the optimizer to rely on index order and thereby avoid unnecessary sorting.
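To give a flavour of the problem, consider this hedged sketch (table and index are hypothetical). Even though an index on (custid, orderdate) supports the requested order, the optimizer may still inject a Sort for the IN predicate; rewriting the query as separate seeks lets each one deliver rows that are already ordered:

    -- Assumed index: CREATE INDEX idx_cust_date ON dbo.Orders (custid, orderdate);
    SELECT custid, orderdate
    FROM dbo.Orders
    WHERE custid IN (1, 2)
    ORDER BY custid, orderdate;   -- plan may still contain a Sort operator

    -- Workaround sketch: one ordered seek per customer
    SELECT custid, orderdate FROM dbo.Orders WHERE custid = 1
    UNION ALL
    SELECT custid, orderdate FROM dbo.Orders WHERE custid = 2
    ORDER BY custid, orderdate;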
Have you ever wondered what it takes to keep an Always On availability group running and the users and administrators who depend on it happy? Let my experience maintaining several production Always On Availability Groups provide you some battle-tested information and hopefully save you some sleepless nights. From security tips to maintenance advice, come hear about some less than obvious tips that will keep users happy and the DBA’s phone quiet.
PowerApps is an exciting and easy-to-pick-up application development platform offered by Microsoft.
In this session we'll take an overview look at PowerApps from both the development and deployment sides.

We'll go through the process of building and publishing an app and show the rapid value that PowerApps can offer.

Using plenty of demos and examples, attendees will leave with a well-rounded understanding of the PowerApps offering and how it could be used in their organisations.
I've been working with SQL Server on Windows for well over a decade, but now it can run on Linux. I had to learn a lot of things to ramp up. Let me share those with you, so you can successfully manage SQL Server on Linux! In this session, I'll cover basic Linux commands, what to prep for installation, how to install, how to configure, and what you need to know to monitor and troubleshoot. 
It's uncontroversial to suggest that developers work more effectively given isolated development environments, which are not subject to "surprises" caused by the rest of the team or "change freezes" imposed by the release process.

In many circumstances, these can be provided using a myriad of Virtual Machines, but the case of applications which rely on a range of PaaS services such as Azure Functions, Azure Data Factory or Azure SQL Data Warehouse can be more complicated.

This session will discuss a way of dynamically creating and destroying an isolated end-to-end environment for every individual feature branch using the facilities provided by VSTS and Azure Resource Manager, as well as some reasons why you might or might not want to adopt such an approach.
In this session Ben will walk you through a home-made IoT project with the data ultimately landing in Power BI for visualisation. 

A Raspberry Pi is the IoT device, with sensors and camera attached to give an end-to-end streaming solution. 
You will see Python used to run the program and process the images. Microsoft Azure plays its part where Microsoft Cognitive Services enriches the images with facial attributes and recognition, Azure SQL stores the metadata, Azure Blob storage holds the images, Power BI visualises the activity and Microsoft Flow sends mobile notifications. 

You'll see enough to walk out and get your own project started straight away!
Power BI Premium enables you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will dive deep into exciting new and upcoming features, including aggregations for big data that unlock petabyte-scale datasets in ways that were not possible before! We will uncover how the trillion-row demo was built in Power BI on top of HDI Spark. The session will focus on performance, scalability, and application lifecycle management (ALM). Learn how to use Power BI Premium to create semantic models that are reused throughout large, enterprise organizations.
Database and application development need to be synchronized in order to provide the proper behavior; otherwise something will break.

Database unit tests can be the contract between database and application. This contract not only keeps the database from breaking its agreement with the application, but also ensures that the database exhibits the expected behavior.

This talk will address the basic steps to introduce unit testing to SQL Server (with tSQLt, a database unit testing framework) and to create a deployment pipeline able to create a test environment (local machine, database as a service, Docker), run tests, create test reports and deploy if the build succeeds.

So, the plan is to show everything from how to write the first test through to adding a whole set of database tests to the deployment pipeline.
As a SQL DBA you want to know that your SQL Server estate is compliant with the rules that you have set up. Now there is a simple method using PowerShell, and you can get the results in Power BI or embed them into your CI/CD solution.

Details such as:

How long since your last backup?
How long since your last DBCC Check?
Are your Agent Operators Correct?
Is AutoClose, AutoShrink, Auto Update Stats set up correctly?
Is DAC Allowed?
Are your file growth settings correct, what about the number of VLFs?
Is your latency, TCP Port, PS remoting as you expect?
Is Page Verify, Data Purity, Compression correctly set up?
And many more checks (even your own) can be achieved using the dbachecks PowerShell module brought to you by the SQL Collaborative team.

And it's all configurable, so that you can validate YOUR settings.

Join one of the founders of the module, Rob Sewell MVP, and he will show you how easy it is to use this module and free up time for more important things, whilst keeping the confidence that your estate is as you would expect it.
No, not that sort of slacking. The Slack.com type of slacking. We'll be ignoring the gifs and looking at how, using Slack, PoshBot, dbatools and a little bit of PowerShell glue, you can build a simple solution that enables you to quickly respond to and fix problems from anywhere, without having to carry anything more specialised than your smartphone. And we'll see how you can then extend that to allow you to hand off tasks to other users and teams in a safe, secure manner.
In this presentation we’ll go through Microsoft’s new tool, Azure Data Studio (formerly SQL Operations Studio), what it can do, and whether it can help in your environment:

    – Inbuilt T-SQL Editor (with IntelliSense) – Could you replace SSMS?
    – Smart T-SQL Code Snippets – This is new and a massive time saver
    – Customizable Dashboards for your server estate – great if you don’t have a monitoring solution currently
    – Connection Management – Group your servers to help you organise what’s important.
    – Integrated Terminal (run your PowerShell, sqlcmd, bcp etc. directly in Azure Data Studio)

We’ll show real life uses for this tool and leave you with the ability to see if it’s something you want to jump into and start using (to make your life easier).
Azure Cosmos DB has quickly become a buzzword in database circles over the past year, but what exactly is it, and why does it matter? This session will cover the basics of Azure Cosmos DB, how it works, and what it can do for your organization. You will learn how it differs from SQL Server and Azure SQL Database, what its strengths are, and how to leverage them. We will also discuss Azure Cosmos DB's partitioning, distribution, and consistency methods to gain an understanding of how they contribute to its unprecedented scalability. Finally we will demonstrate how to provision, connect to, and query Azure Cosmos DB. If you're wondering what Azure Cosmos DB is and why you should care, attend this session and learn why Azure Cosmos DB is an out-of-this-world tool you'll want in your data toolbox!
"Learn DAX", they said. "It'll be easy with your background", they said. Well, it turns out that it wasn't. Transitioning from SQL to DAX gave me nightmares and ulcers, and this session is for everyone who is looking over the edge and considering undertaking the challenge. It is in no way impossible, only frustrating, as DAX has a very different approach to data from what a SQL programmer is used to. In this introduction to the DAX language, I'll be putting a somewhat different spin on DAX from a beginners' standpoint.  I'll be going over the basic mental mistakes that many people trying to learn DAX do, how to solve them and how to put your brain on the right track!
If your boss asked you for the list of the five most CPU-hungry databases in your environment six months from now for an upcoming licensing review, could you come up with an answer? Performance data can be overwhelming, but this session can help you make sense of the mess. Twisting your brain and looking at the data in different ways can help you identify resource bottlenecks that are limiting your SQL Server performance today. Painting a clear picture of what your servers should be doing can help alert you when something is abnormal. Trending this data over time will help you project how much resource consumption you will have months away. Come learn how to extract meaning from your performance trends and how to use it to proactively manage the resource consumption of your SQL Servers.
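As a starting point for the "top five CPU-hungry databases" question, worker time from cached plans can be aggregated by database. The numbers reset as plans leave the cache, so snapshot and persist them regularly to build a trend (a sketch, not a complete solution):

    WITH db_cpu AS (
        SELECT CONVERT(int, pa.value) AS database_id,
               SUM(qs.total_worker_time) AS cpu_us
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
        WHERE pa.attribute = 'dbid'
        GROUP BY CONVERT(int, pa.value)
    )
    SELECT TOP (5) DB_NAME(database_id) AS database_name, cpu_us
    FROM db_cpu
    ORDER BY cpu_us DESC;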
The in-memory OLTP engine has been around since version 2014 of the database engine. Do you need to be a SaaS provider processing billions of transactions a day to use it? The short answer is absolutely not, and this session will show you why by presenting a number of use cases that most people should find useful, from the bulk loading of data to scalable sequence generation. But what is wrong with the legacy engine that we all know and love? Why do we need the in-memory engine? Along the way this session will provide an overview of what the in-memory engine is, why it is required and why, with SQL Server 2016 Service Pack 1, it is more cost-effective to use than ever before.
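For reference, a minimal memory-optimized setup looks like this (database, filegroup and path names are placeholders; the database needs a MEMORY_OPTIMIZED_DATA filegroup before such tables can be created):

    ALTER DATABASE SalesDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDb
        ADD FILE (NAME = 'imoltp_dir', FILENAME = 'C:\Data\imoltp_dir')
        TO FILEGROUP imoltp_fg;

    -- SCHEMA_ONLY durability skips logging entirely: ideal for bulk-load staging,
    -- but the data does not survive a restart.
    CREATE TABLE dbo.StagingOrders (
        OrderId int IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload nvarchar(4000)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);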
Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2017 / 2019 has introduced many new features. In this session we will learn quite a few of them. Here is a glimpse of the features we will cover:
  • Adaptive query plans
  • Batch mode adaptive join
  • New cardinality estimation for optimal performance
  • Adaptive query processing
  • Indexing improvements
  • Introduction to automatic tuning / intelligent query processing
This session will be the most productive time for any DBA or developer who wants to quickly jump-start with SQL Server 2017 / 2019 and its new features.
“Oh! What did I do?” Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. In this session we will focus on some of the common errors and their resolution. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
Dataflows are an important new data preparation and loading feature in Power BI. In this session you will learn:
  • What dataflows are and when you might want to use them
  • The advantages and disadvantages of using them over Power BI Desktop's data loading features
  • Configuring incremental refresh
  • Additional features available in Power BI Premium
  • Integration with Azure Data Lake Store, the Common Data Model and other Microsoft services
The dbatools module now has over 300 commands, and anyone who wants to get started is overwhelmed by the amount of functionality in this module.
There are not enough hours in the day to get everything done as a DBA. We need to automate our repetitive tasks to free up time for the important and more fun tasks.
In this session I'll show you a set of commands which will help you start automating your tasks.
Customers have feelings. If you meet with all your customers regularly you probably think you know how they feel. But what if you have millions of customers? How can you even begin to get to understand how each of those customers feel about your business?

This is where cognitive APIs come in. By harnessing the power of deep neural networks, we can accurately derive emotional insight from our data and use this to improve our offering to our customers. In this session we will look at the tools needed to connect your data to the right APIs and how we can leverage their capability. Starting out with sentiment analysis, we move to facial expression recognition and image tagging.

Attendees will gain the knowledge of how to connect different types of data to appropriate cognitive services. This talk would be of interest to anyone that deals regularly with customer data or has a curiosity toward data science and its application within a business.
SQL Server 2016 introduced an incredible monitoring and tuning technology called Query Store that provides us with insight into the optimizer’s plan choices and the history of changes in plans. Query Store allows you to revert to a previous, better-performing plan through the use of plan forcing. In this session, we’ll see what kinds of query performance problems can be solved with Query Store and what kinds can’t be. We’ll look at how Query Store evaluates query performance information and how we can revert to an old plan. Finally, we’ll see a new SQL Server 2017 technology called Automatic Plan Correction that is built on top of Query Store.
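For context, once Query Store is on, enabling the SQL Server 2017 feature is a single statement, and the engine's recommendations are exposed through a catalog view:

    ALTER DATABASE CURRENT SET QUERY_STORE = ON;
    ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

    -- What the engine recommended (and did) about plan regressions:
    SELECT type, reason, state
    FROM sys.dm_db_tuning_recommendations;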
As data engineers we're great at putting together and managing complex process flows, but what happens if we stop trying to control the flow and start thinking about the metadata it needs instead? In this session we'll look at a variety of ETL metadata, how we can use it to drive process execution, and see benefits quickly emerge. I'll talk about design principles for metadata-first process control and show how using this approach reduces complexity, enhances resilience and allows a suite of ETL processes adaptively to reorganise itself.
As long as it is only a matter of SELECT, INSERT etc., you can put these statements in a stored procedure and users only need permission to run the procedure. That is, the stored procedure acts as a container for the permission. But you find that this no longer seems to work when you use dynamic SQL, create non-temp tables or try other "advanced" things. The truth is that the procedure can still act as a container for the permission, and users do not need to be granted elevated permissions, if you take extra steps in the form of certificate signing or EXECUTE AS. In this session you will learn how to use these techniques and why one of them is better than the other.

The session does not stop at the database level, but it also looks at how these techniques can be used to permit users to perform actions that require server-level permission in a controlled way, for instance letting a database owner see all sessions connected to his database but not other databases. As a side-effect of this, you will also learn why the TRUSTWORTHY setting for a database is dangerous.
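As a small taste of the second technique, here is a minimal EXECUTE AS sketch (procedure, table and role names are hypothetical): the dynamic SQL runs in the owner's security context, so callers need nothing beyond EXECUTE. Certificate signing achieves a similar effect with ADD SIGNATURE, at the cost of a few more steps.

    CREATE PROCEDURE dbo.PurgeArchive
        @TableName sysname
    WITH EXECUTE AS OWNER
    AS
    BEGIN
        -- TRUNCATE normally needs ALTER on the table; here it runs as the owner.
        DECLARE @sql nvarchar(400) = N'TRUNCATE TABLE dbo.' + QUOTENAME(@TableName);
        EXEC sys.sp_executesql @sql;
    END;
    GO
    GRANT EXECUTE ON dbo.PurgeArchive TO ArchiveOperators;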
You want to use all the new and sexy cloud-based data services like Power BI, Flow and hosted SSAS, but your data remains strictly on premises. Come along to see how to use the data gateway to connect Azure to your on-premises data.

Content includes:
- Architecture
- Installation and Configuration
- Deployment Patterns
- Transferring Identity and UPNs.
- Implementing Row Level Security
- Monitoring and Performance
- Troubleshooting
Discover the Advanced Analytics and Data Lake pattern in the Azure Data Platform through a complete demo: how do you get insights from text, photos and videos? From different media files and raw data, we will analyze the sentiment of characters and surface valuable information in a Power BI dashboard, using Cognitive Services, CNTK, .NET and U-SQL. This session will mainly showcase Azure Data Lake and the U-SQL language, but the demos will involve different tools, like Azure Data Factory for the data supply chain and orchestration, Azure SQL Database for corporate data, and even Machine Learning techniques. Even though this session is demo-driven, concepts and features of Azure Data technologies will be discussed.
Ever considered giving a presentation of your own? Pondered how your favorite speakers got their start? Contemplated whether you could ever do that too, but were not sure where to begin?

In this session I will show you how to get started. We will go over how to develop your idea, create session content, and share my favorite tips & tricks. 

You will leave armed with a wealth of resources (and hopefully some inspiration) to venture forth and develop your first presentation.
Real-time data is fundamental to the life of an organization and to its decision-making process; therefore, building a real-time analytical model becomes a business need.

In this session we will learn which processes are required to build a real-time analytical model along with batch-based workloads, which are the foundation of a Lambda architecture.

At the end of the session, you will have learned how to ingest, store, prepare and serve the data with Apache Kafka for HDInsight, Azure Data Lake Store Gen 2, Azure Databricks, SQL Data Warehouse and Power BI.
Many organizations would like to take advantage of the benefits of using a platform-as-a-service database like Azure SQL Database. Automated backups, patching, and lower costs are just some of the benefits. However, Azure SQL Database is not 100% feature-compatible with SQL Server—features like SQL Agent, CLR and Filestream are not supported. Migration to Azure SQL Database is also a challenge, as backup and restore and log shipping are not supported methods. Microsoft recently introduced Managed Instances—a new option that provides a bridge between on-premises or Azure VM implementations of SQL Server and Azure SQL Database. Managed Instances provide full SQL Server surface compatibility and support database sizes up to 35 TB. In this session, you will learn about migrating your databases to Managed Instances and developing applications for them. You will also learn about the underlying high availability and disaster recovery options for the solution.
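For a flavour of the migration story: native restore from Azure Blob Storage is one documented path onto a Managed Instance (the storage URL, SAS token and database name below are placeholders):

    CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/backups]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
        SECRET = '<SAS token>';

    RESTORE DATABASE SalesDb
    FROM URL = 'https://myaccount.blob.core.windows.net/backups/SalesDb.bak';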
One of the most useful tools to the DBA when we need to test new features, recreate a fault that we’ve seen in production or just want to see ‘what if…?’ is a test lab.

Some of you are going to be lucky enough to have a few servers kicking around or a chunk of the virtual environment that you can build a test lab in but not all of us do.

This session walks you through how to create a virtualised test lab right on your workstation or even laptop.

We will start by selecting a hypervisor, look at building a virtual machine, and then create a domain controller, a couple of clustered SQL Servers and finally a fully functioning availability group.

The session will cover:

Selection and installation of a hypervisor
Creating your first VM
Building a domain controller and setting up a Windows domain within your virtual environment
Setting up a Windows failover cluster
Installing a couple of SQL Servers
Creating a fully functioning availability group
One of the most highly anticipated new features in the SQL Server 2016 release was Query Store. It's referred to as the "flight data recorder" for SQL Server because it tracks query information over time – including the text, the plan, and execution statistics – and it allows you to force a plan for a query. When you include the new Automatic Plan Correction feature in SQL Server 2017, suddenly it seems like you might spend less time fighting fires and more time enjoying a lunch break that’s not at your desk. In this session, we'll walk through how to stabilize query performance with a series of demos designed to help you understand how you can immediately start to use plan forcing once you’ve upgraded to SQL Server 2016 or higher. We'll review how to force a plan, discuss the pitfalls you need to be aware of, and then dive into how you can leverage Automatic Plan Correction and reduce the time you spend on Severity 1 calls fighting fires. It’s time to embrace the future and learn how to make troubleshooting easier using the plethora of intelligent data natively captured in SQL Server and Azure SQL Database.
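By way of preview, plan forcing itself is a stored procedure call once you have read the query and plan ids out of the Query Store catalog views (the ids below are placeholders):

    -- Find candidate queries and plans:
    SELECT q.query_id, p.plan_id, rs.avg_duration
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id;

    EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
    -- ...and to undo it later:
    EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;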
When things go wrong with Always On Availability Groups (AGs) or Always On Failover Cluster Instances (FCIs), it is not always apparent where to look and how to diagnose what happened. This session will help demystify where to look when you have a problem, as well as cover some of the more common issues you will see when implementing AGs and FCIs, whether you have them deployed on Windows Server or Linux.
Manish Kumar from Microsoft will take you through various performance tuning measures you should take before increasing service tier of your database. The session will be packed with tons of information, steps and demos.
With the advent of Windows containers and SQL Server on Linux, the options for running SQL Server are growing.

Should you run SQL Server on physical machines, virtualised, in the cloud or do you get with the cool kids and run it on Docker?

Docker is great for development, testing and stateless apps, but would you run your production SQL Server on it?

It may sound crazy to run your data tier in ephemeral containers, but I'll discuss the reasons why this might be a good idea if we can figure out the following challenges:

Data persistence
Security
High Availability
Licensing
Monitoring
Deploying SQL Server updates

Microsoft themselves have said they expect containerisation of applications to become as common as virtualisation. If this happens then containers are something we all need to understand.
"PowerApps is a service that lets you build business apps that run in a browser or on a phone or tablet, and no coding experience is required. PowerApps combines visual drag-and-drop concepts from PowerPoint with Excel-like expressions for logic and working with data."
        
In this session, attendees will learn how to create PowerApps solutions, how to use PowerApps as a data entry application for Power BI, and how to integrate PowerApps in Power BI and Power BI in PowerApps.
Running SQL Server in containers has huge benefits for data platform professionals, but there are challenges to running SQL Server in standalone containers. Orchestrators provide a platform and the tools to overcome these challenges.

This session will provide an overview of running SQL Server in Kubernetes.
Topics covered will be:
  • An overview of Kubernetes.
  • Definition of deployments, pods, and services.
  • Deploying SQL Server containers to Kubernetes.
  • Persisting data for SQL Server in Kubernetes.
This session is aimed at SQL Server DBAs and developers who want to learn the what, the why, and the how of running SQL Server in Kubernetes.
Lifting and shifting your application to the cloud is extremely easy, on paper. The hard truth is that the only way to know for sure how it is going to perform is to test it. Benchmarking on premises is hard enough, but benchmarking in the cloud can get really hairy because of the restrictions in PaaS environments and the lack of tooling.

Join me in this session and learn how to capture a production workload, replay it to your cloud database and compare the performance. I will introduce you to the methodology and the tools to bring your database to the cloud without breaking a sweat.
I will also introduce you to WorkloadTools: a new open-source project that I created specifically for this scenario. Benchmarking will be as easy as pie.
There is a natural limit to how many dataflows you can run in parallel in SSIS. Regardless of whether your limit is on the source or the destination side, you will eventually reach it.
You might have set up all your package orchestration in a way that made perfect sense at that time, but over time, some tables grow faster than expected and others don’t grow at all. Due to foreign key relationships, you may not be able simply to shuffle the dataflow tasks around to maximize throughput. Manual reengineering along these lines would potentially be very time consuming, and even worse, the result would be obsolete shortly thereafter.

This session is about using the Business Intelligence Markup Language (Biml) to monitor and control your orchestration patterns. By automatically analyzing the results in ETL logs, we’ll be able to automate our staging orchestration!

Prerequisites: Good understanding of dataflows with SSIS, especially with higher volumes of data
Power BI is the shiny new tech for processing and visualizing data in the Microsoft Data Platform. However, the plumbing in the background does need managing (even if it is cloud-based and supposedly automagic).

In this session we will take a look at how to manage your datasets, security, monitor licensing and more, all through the ultimate administration interface: PowerShell!

We'll explore why we need to manage Power BI, what management capabilities we have at our fingertips and the power of PowerShell to simplify and expand these capabilities. We'll then tie it all together to get you started on the road to building a suite of management reports and automated processes, because PowerShell loves Power BI!
Every expert has their own set of tools they use to find and fix the problem areas of queries, but SQL Server provides the necessary information to both diagnose and troubleshoot where those problems actually are, and help you fix those issues, right in the box. In this session we will examine a variety of tools to analyze and solve query performance problems.
Reading execution plans is easy, right? Look for the highest-cost operators or scans and you're pretty much done. Not really. Execution plans are actually quite complicated and can hide more information than the graphical plan reveals. However, if you learn how to walk through the details of an execution plan, you will be more thoroughly prepared to understand the information in that plan. We'll unlock and decode where the information is within a plan so that you know why the optimizer made certain choices. You'll be able to better understand how your T-SQL code is interpreted by the optimizer. All this knowledge will make it easier to debug and tune your T-SQL.
Many of us have to deal with hardware that doesn’t meet our standards or contributes to performance problems. This session will cover how to work around hardware issues when there isn’t budget for newer, faster, stronger, better hardware. It’s time to make that existing hardware work for us. Learn tips and tricks on how to reduce IO, relieve memory pressure, and reduce blocking. Let’s see how compression, statistics, and indexes bring new life into your existing hardware.
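As one example of the compression tip, you can estimate the savings before paying for a rebuild (schema, table and settings below are hypothetical):

    EXEC sys.sp_estimate_data_compression_savings
        @schema_name = 'dbo', @object_name = 'SalesOrderDetail',
        @index_id = NULL, @partition_number = NULL,
        @data_compression = 'PAGE';

    -- If the estimate looks good, trade some CPU for IO and buffer pool space:
    ALTER INDEX ALL ON dbo.SalesOrderDetail
    REBUILD WITH (DATA_COMPRESSION = PAGE);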
Over a dozen tips and tricks, worked through to explain how they can help, why they work the way they do, and why you ought to be using PowerShell more.
Topics covered include ISNULL, ensuring your build scripts are secure, splatting, parsing event logs, using -format correctly, and custom sorting.
Moving databases and workloads to the cloud has never been easier. For SQL Server there is a number of products that offer almost perfect feature parity. One of the last technical challenges is getting the security configuration right. That's because the security model in the public cloud is different and requires a different approach, skill set and knowledge. This session covers governance, risk management and compliance in the public cloud and specifically focuses on Azure SQL PaaS resources. It provides practical examples of network topologies with their strengths and weaknesses, including recommendations and best practices for hybrid and cloud-only solutions. It explains the orchestration and instrumentation available in Azure, like Security Center, Vulnerability Assessment, Threat Detection, Log Analytics/OMS, Data Classification, Key Vault and more. Finally, it shows techniques to acquire knowledge and gain an advantage over attackers, like deception and chaos engineering.
If you have already mastered the basics of Azure Data Factory (ADF) and are now looking to advance your knowledge of the tool, this is the session for you. Yes, Data Factory can handle the orchestration of our ETL pipelines. But what about our wider Azure environment? In this session we’ll go beyond the basics, looking at how we build custom activities and metadata-driven dynamic design patterns for Data Factory. Plus, considerations for optimising compute costs by controlling the scaling of other services as part of normal data processing. Once we can hit a REST API with an ADF web activity, anything is possible: extending our Data Factory and orchestrating everything.
Organisations need to know how to get started with Artificial Intelligence. This practical session offers organisations small and large a helping hand, with practical advice and demos using Microsoft Azure with open-source technologies. For organisations who have no clue what they'd use AI for, the session will offer a practical framework: the Five 'C's of Artificial Intelligence, designed to help stimulate ideas for using AI in your own organisation. To provide a technical focus on getting started, R and Python will be shown in AzureML and Microsoft ML Server.
In this hour-long session we will attempt to include lots of advice and guidance on how to write code that will easily get approved by your DBA prior to release to production. We’ll cover unit tests, continuous integration, source control and some coding best practices.
This will be quite a fast-paced session, but it will aim to give you a taster of what you should include to increase the acceptance rate of your code by the approvers, and how to ensure your code does what it should and that future changes don’t break it.
Azure Databricks brings a PaaS offering of Apache Spark, which allows for blazing-fast data processing, interactive querying and hosting of ML models all in one place! Most of the buzz is around Data Science & AI - what about the humble data engineer who wants to harness the in-memory processing power within their ETL pipelines?

This session focuses on Azure Databricks as your data ingestion, transformation and curation tool of choice.

We will:
Introduce the Databricks service & the language options available
Discuss the hosting & compute options available
Demonstrate a sample data processing task
Compare against alternative approaches using SSIS, U-SQL and HDInsight
Demonstrate pipeline management & orchestration
Review the wider architectures and extension patterns

The session is aimed at Data Engineers seeking to put the Azure Databricks technology in the right context and learn how to use the service.

We will not be covering the Python programming language in detail.
SSIS has been around for some 14 years now, but how it works hasn't really changed, and neither have the use patterns that we see. The flexibility of SSIS is one of its greatest features, but also one of its greatest failings - some patterns and use cases actually prevent your ETL from performing as well as it should. Join us for an explanation of why that is and what you can do about it. We'll even dissect a gnarly package and reduce its runtime from 15+ hours to a matter of seconds!
Deep learning has been used to write new Shakespearean sonnets, to imagine delicious new recipes, to write hilarious Harry Potter novels and even to come up with new names for beer! In this session we will look at what deep learning is, what neural nets are, and what steps are required to build a deep learning model, illustrated by some of the great examples mentioned.

We will then turn our new skills to the problem most speakers have: writing session abstracts. Together we will develop a recursive neural net designed to generate new session abstracts, based entirely on sessions previously submitted to SQL Server conferences. Will we be able to produce a session you would have attended? Come along and find out.

As Data Scientists we are great at machine learning, statistical modelling, visualising data and using data to tell a story. What are we not so good at? A lot of the core skills required in traditional software development. If you answer no to any of the following, you need to attend this session.
  • Do you source control your models?
  • Do you test your models?
  • Is the percentage of models deployed in production less than 10 per cent?
  • Did you deploy the model?
In this session I will show you how to apply DevOps practices to speed up your development cycle and ensure that you have robust, deployable models. We will focus on the Azure cloud platform in particular; however, this is applicable to other cloud platforms.
Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level. You will walk away with a better understanding of how SQL Server stores data, knowledge that will allow you to build better and faster databases.
Data lakes have been around for several years and there is still much hype and hyperbole surrounding their use. This session covers the basic design patterns and architectural principles to make sure you are using the data lake and underlying technologies effectively. We will cover things like best practices for data ingestion and recommendations on file formats as well as designing effective zones and folder hierarchies to prevent the dreaded data swamp. We’ll also discuss how to consume and process data from a data lake. And we will cover the often overlooked areas of governance and security best practices. This session goes beyond corny puns and broken metaphors and provides real-world guidance from dozens of successful implementations in Azure.
Microsoft's services in Azure help us leverage big data more easily and make it ever more accessible to non-technical users. With the UI in ADF version 2, Microsoft added a new feature: Data Flow, which resembles the components of SSIS. This is a very user-friendly, no-code toolset.
But is this just a new UI? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and unlock the power of modern big data processing without knowledge of languages like Python or Scala?
We will review this new feature of ADFv2, take a deep dive to understand the techniques mentioned, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs Scala behind the scenes.
With GDPR and the number of data breaches we see in the news, encrypting sensitive data is becoming more and more important. In this session we will start by understanding the basics of encryption, before moving on to look at the ways we can encrypt data in SQL Server and Azure SQL DB. We will cover:
  • Certificates, symmetric & asymmetric keys
  • Encryption algorithms
  • Encryption hierarchy
  • Transparent Data Encryption (TDE)
  • Always Encrypted
  • Dynamic Data Masking (DDM)
  • Encryption functions
  • Stored procedure signing
Attendees should leave with an understanding of the options available to them, and which options are most suitable for different scenarios.
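As a preview of one of these options, a minimal TDE sketch looks like this (certificate name, database name and password are placeholders; back up the certificate and its private key, or the database cannot be restored elsewhere):

    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
    CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
    GO
    USE SalesDb;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCert;
    ALTER DATABASE SalesDb SET ENCRYPTION ON;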
With the upcoming release of SQL Server 2019, Microsoft is bringing the super-fast batch execution mode to the processing of large amounts of data, even for traditional rowstore indexes, in SQL Server 2019 & Azure SQL DB. Learn with me how and when it will function, and which challenges we shall meet on the path to making our workloads run blazingly fast.
The new world of Machine Learning can sound quite overwhelming! Do I need a PhD in Machine Learning to get involved? The answer, thankfully, is no. But what you do need, which is often neglected, is a solid understanding of the theory behind the models.

This session will look to do just that. Looking at some of the most popular algorithms used today, we will explore the maths involved to help us understand our problems and get the best results possible. We will also be diving into practical examples, using Databricks to consume a dataset and to visualise results, with Python scripts to execute the machine learning models.

If you would like an introduction to the world of Machine Learning and to acquire a solid grounding that will help you develop the skill, then this is the session for you.
Have you ever started a warehouse or ETL project and realized that the data wasn't as "clean" as you were told?  If only you had profiled your data before you started then you wouldn't have to rework design elements, change code or redesign your database.  In this session we will talk about what data profiling is, why you should do it and how you can do it with tools that are already included in the Microsoft BI stack.
The introduction of weak relationships in Power BI composite models enables new data modeling techniques. However, not all many-to-many relationships can be managed by using weak relationships.

The "classical" many-to-many relationships in data warehouse is a design pattern requiring a bridge table, which is not required by a weak relationship in Power BI. The weak relationship can establish another type of many-to-many relationship that is different from the one commonly used in dimensional modeling, and it commonly solves a granularity issue in managing data coming from different data sources.

This session clarifies design patterns and best practices for using weak relationships and implementing different types of many-to-many relationships in Power BI.
Relationships are the foundation of any Power BI or Analysis Services Tabular data model with multiple entities. At first sight, this is a trivial concept, especially if one has knowledge of relational data modeling. However, the ability to create multiple relationships between the same tables and the existence of bidirectional filters increase the complexity of this topic. In this session, we will discover the complexity behind relationships and how they work in complex and potentially ambiguous data models.
Aggregations were introduced in Power BI in 2018 as an optimization technique for managing large tables. By providing pre-aggregated tables, you can greatly improve the performance of a Tabular data model.
In this session we introduce the concept of aggregations and show several examples of their usage, examining their advantages and limitations, with the goal of building a solid understanding of how and when to use the feature in data models.
In this session we will introduce Kubernetes, and we’ll deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments, and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios such as single node, single head, and high availability designs. By the end of this session, you will understand what's needed to put your applications and data platform into production in a Kubernetes cluster.

Session Objectives:
Understand Kubernetes cluster architecture
Understand Services, Controllers, and Deployments
Designing Production Ready Kubernetes Clusters
Azure introduces a range of new services for transaction processing and analytics solutions which mean we don’t need to deploy virtual machines. This session provides insight into how we see customers deploying evergreen and futureproof data solutions using services such as containers and platform data services like Cosmos DB. If you’d like to scale solutions on demand and never patch, back up or manage anti-virus on a server ever again, this session is for you!
Tired of traditional DW ETLs that take hours or even days to populate your Data Warehouse and then keep your data static for reporting until the next ETL schedule runs? Your business is asking for live reports, but your DW architecture is not designed for that and you don't know what to do? Join us to hear how we used Change Tracking to build a close-to-real-time Data Warehouse, the challenges we experienced, and how we tackled them.
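For a sense of the mechanics, Change Tracking needs only a couple of statements, after which the ETL pulls just the rows changed since the synchronization version it stored on its previous run (names are hypothetical):

    ALTER DATABASE SalesDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

    DECLARE @last_sync bigint = 0;   -- persisted by the ETL between runs
    SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION, o.*
    FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct
    LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;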
Relational database engines are great, but with data volumes increasing and user patience decreasing we need to look for other options. Step forward Elasticsearch, a search engine for our data!

Our users and customers today want to ask new and interesting questions of the data that they have access to, but you don’t want the overhead of building and managing indexes for every query. Join me as I walk you through the essentials of what Elasticsearch is and how it can help you deliver a data platform that can satisfy the most demanding of users.

We will start by discussing use cases for Elasticsearch, and then we will set up a simple cluster and see how easy it is to insert, update and search for data using both a UI and API calls. We'll finish by looking at how a combination of text analysers and clever indexing strategies can help you to build a powerful, modern and highly scalable platform that can complement your existing or new infrastructure and continue to grow with you and your data.
Is it possible for multiple developers to work simultaneously on the same Tabular Model? What about branching strategies? As the native tooling (Visual Studio / SSDT) stores all the model metadata within just a single file, this can cause all sorts of issues in a source-controlled environment. In this session, we'll see how Tabular Editor can be used in a team setting to improve parallel development and branch handling. Using Tabular Editor's command-line options, all of this can be integrated into automated build and release pipelines - even when your developers prefer to stick to SSDT!

Attendees of this session should be familiar with Tabular Model development. Prior experience using Tabular Editor is not required.

Topics covered:

  • Breaking the Tabular Object Model into multiple files
  • Automation using Tabular Editor's command line
  • Branching strategies for Tabular Models
  • Tabular build and release pipelines in Azure DevOps
"Wait, what? Biml is not just for generating SSIS packages?"


Absolutely not! Come and see how you can use Biml (Business Intelligence Markup Language) to save time and speed up other Data Warehouse development tasks. You can generate complex T-SQL statements with Biml instead of using dynamic SQL, create test data, and even populate static dimensions.


Don't Repeat Yourself, start automating those boring, manual tasks today!
When you create a new database and your database objects, do you ensure you have done everything you can to give it the chance to shine when things get tough? How many times have you seen objects created with all the defaults, only for things to spiral out of control down the road due to the size, amount of data or even activity? If you really want to ensure your database and the objects contained in it can scale and perform well as they grow, you need to do the proper homework before you ever create the database. We will walk through the various factors that affect performance and scalability under real-life conditions and help you understand how to properly configure them up front to avoid issues down the road. Scalability is all about having a proper foundation to build on.
Memory-optimized tables are a feature introduced in SQL Server 2014 that is still underused. It provides significant performance gains for OLTP workloads, yet there are still a lot of concerns about its usage. The session will uncover the In-Memory OLTP architecture, address the concerns about data durability, database startup and recovery, and cover some important considerations for the management of in-memory objects. The session will go through a number of potential use cases and show how the feature facilitates implementation with less development effort and risk.
Different database engines are built to be good at a specific set of operations.

Relational engines, for example, are typically optimised for transaction control and protecting data from damage and loss during update.

They are typically not optimised for detecting fraud or performing recommendations (“Customers who bought this book frequently bought…”). Graph databases are essentially the opposite: poor at transactions and good at tasks such as fraud detection and recommendations. The key to using graph databases effectively is understanding not only how they work but why they were designed that way – in other words, understanding what underpins their strengths and weaknesses. So this talk will explore their origins and how and why they work.

Based on real-life scenarios, this audience-interactive session will cover some scenarios you might encounter whilst dealing with SQL Server databases, and you will be provided with some options about what to do. Members of the audience will then select from these options, and we will follow that path and see what the outcome is from there. Each selection will have a different outcome, and along the way you will probably learn some new things. Various topics like Azure, migrations and performance tuning might be covered.
Power BI Premium and Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will deep dive into exciting new and upcoming features. Various topics will be covered such as management of large, complex models, connectivity, programmability, performance, scalability, management of artifacts, source-control integration, and monitoring. Learn how to use Power BI Premium to create semantic models that are reused throughout large, enterprise organizations.
Lots of sessions about indexes give you technical, but not practical information.

It's not that the information is bad, it's just that you can't take it and apply it to your problems.

In this session, we're gonna dive deep into how queries use indexes, and the warning signs you'll see when you have a problem indexing can solve.

We're going to be looking at query plan stuff like memory grants, spools, and lookups.

We're also going to look at how indexes can give you a leg up on locking.

And best of all, you'll learn how to stop worrying about fragmentation and start fixing real problems.
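One classic warning sign worth previewing: a seek paired with a Key Lookup executed once per row. Widening the index with INCLUDE columns removes the lookup entirely (table, index and column names are hypothetical):

    -- Narrow index: every matching row triggers a Key Lookup for OrderTotal.
    CREATE INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId);

    -- Covering index: the query below is satisfied from the index alone.
    CREATE INDEX IX_Orders_CustomerId_Covering
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, OrderTotal);

    SELECT OrderDate, OrderTotal
    FROM dbo.Orders
    WHERE CustomerId = 42;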
Azure Databricks seems to be the new sheriff in town. It promises easy but capable ETL (amongst other things). But how does it work? For an SSIS (and possibly Biml) person this all looks new and strange. Python notebooks? Scala? Spark? What is all this, and where do I start? Well, you start by coming to this presentation. We’ll take a look at how to approach making an Azure Databricks based ETL solution from start to finish. Along the way it will become clear how Azure Databricks works, and we will use our SSIS-based ETL knowledge to see if it can handle the common use cases from our daily jobs. And of course we will also have some fun and see how fast it can go!
Machine Learning is a popular buzzword, but what does it actually look like, and how can we use it?
This session will show a number of examples of using ML to do some useful and fun stuff, including creating a prediction model without writing any code, and training a deep learning neural net model to play a game.

May I introduce you to the Microsoft Power Platform!?

This new term was introduced by Satya Nadella and James Phillips as the foundation for building powerful (cloud-backed) applications: PowerApps as the data manipulation part, Microsoft Flow as the connecting workflow engine, and Power BI for analysis and reporting. In addition, the Common Data Service (CDS) and the Common Data Model (CDM) provide a solid framework for data storage and modelling.

Join me in this session if you want to get an overview of the technologies involved and how they work together, see them live and working together in many demos, and learn how they will definitely change the way we build powerful solutions in the future!

Microsoft Premier Field Engineers and Consultants are the people that go out to talk to customers and implement solutions.  

In the room will be senior people from Microsoft with a passion for sharing.

Feel free to come along and ask questions or just listen to questions from other data professionals.

Everyone who has been involved in database development has noticed how huge the impact of a bug can be, especially when these kinds of mistakes could easily be avoided through the Test-Driven Development (TDD) approach and a real implementation of unit testing for SQL Server objects such as stored procedures, functions and triggers.

When you search for how to start implementing unit testing for SQL Server, you always find tSQLt as the first result returned; however, the extremely low number of downloads tells us something about the issues with this implementation. Some possible causes can be suggested, such as the lack of a free UI tool for executing the tests, but I have been working with many full-stack developers, and one of their most important complaints is that AAA (Arrange-Act-Assert) is not native, so they don't feel a smooth transition from their regular unit-testing tasks to databases.

In this presentation I would like to focus on how to get better performance and good results through the SQL unit test feature of Visual Studio, how to overcome the main problems, and how you will be able to use a free framework (https://github.com/SimpleSqlUnitTesting/SimpleSqlUnitTesting) and extend it for your own convenience. I will also show a simple way of installing it on the local machine and how you could extend it to a centralized integration server.

Indexing presents daunting challenges for even the most seasoned professionals, as it offers countless options to choose from. With a little help you’ll see how to simplify indexing in your environment and improve the overall performance of your SQL Server applications. In this session you will learn all about the different index architectures and, from there, how to make the right choices when designing your indexes so that both the database engine and your DBA will love you for it. The session will also cover how to find missing and unused indexes, the causes of fragmentation and how to resolve it, as well as how to maintain your indexes after they have been deployed.

After attending this session you will have a much better understanding of how to create the right indexes for your entire environment, not just for that one troublesome query.
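
As a taste of the missing and unused indexes topic, a minimal sketch querying the missing-index DMVs, ordered by the engine's own estimate of potential benefit:

    -- Missing index suggestions, ranked by estimated usefulness.
    SELECT TOP (10)
           mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM   sys.dm_db_missing_index_details AS mid
    JOIN   sys.dm_db_missing_index_groups AS mig
           ON mig.index_handle = mid.index_handle
    JOIN   sys.dm_db_missing_index_group_stats AS migs
           ON migs.group_handle = mig.index_group_handle
    ORDER  BY migs.user_seeks * migs.avg_user_impact DESC;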

The Power BI Desktop is making huge waves in the Business Intelligence community.  We have seen people leverage it to build games, tell stories, solve problems and mostly to answer questions.  In today's constantly changing technological environment, just using a product is not enough.  You really need to spend some time and get intimate with it.  Join us in this session, where we will spend some time really getting to know Power BI Desktop.  During this all-demo session, we will dig deep into some of the most captivating, but less publicized features of the desktop.

DAX is a very powerful and expressive language, but during my journey of learning it, I discovered several quirks that took me a while to get my head around.

For example, the FILTER function does not create a new filter context but a row context. The ROW function is executed in a filter context without creating a row context. How does that make sense?

And then we have that CALCULATE function. At first you think you need to put it around your SUM and AVERAGE functions in order to perform those aggregations. Then you realize that it does not seem to do anything! But is this always true?

The goal of this session is to demystify these and all the other DAX gotchas that I bumped into.

Once you have killed them all, then you will start to live life to the DAX!

To build analytical models, we need to start by extracting, transforming, cleaning, preparing and loading the data. This session analyzes a set of scenarios that may happen during the ETL step using Power Query in Power BI. The ETL process is the first layer of a Business Intelligence system, and we could say it is a “hidden” layer: the final user, who looks at the analytical reports and dashboards, does not see all the work necessary for cleansing, filtering and wrangling the data before loading it.

First, we will look briefly at the definition of the ETL process in general and the ETL process in Power BI in particular.

Then we will travel through the three steps of ETL: extraction, transformation (which includes data quality and data transformation), and loading the data into the data model.

The Power Query editor in Power BI is a powerful tool for ETL purposes: data can be extracted from many different sources and transformed from different formats into a model, which means cleansed and improved according to business requirements.

During extraction, Power BI connects to a large number of data sources: File, Database, Azure, Online Services and Other. Power BI uses specific connection and navigation wizards that give us an easy way to get data from the source. In this session, we will connect to different data sources, including files, folders, SQL Server tables, R scripts, and blank queries. We will combine queries and files. Several transformations should be done before combining files, because we need to create them for the Sample File first. Besides that, we will create and use functions and parameters.

There are transformations that apply to a whole table, to any column, and specifically to text, number and date columns. Transformations for pivoting, unpivoting and transposing tables, filtering by rows and columns, and filling columns, among others, will be used to solve problems related to the structure or quality of the data. We will analyze some situations in which it is necessary to clean the data before grouping, filtering and using columns for evaluations.

We will also cover some aspects of the M programming language and how it can be used for handling the model.

We will execute many transformations that can streamline our ETL, allowing us to create the data model, additional queries, parameters, and functions. Finally, we will obtain a beautiful star schema model.

If you have made the leap to using Extended Events instead of Trace, chances are you’re using Extended Events to do the exact same thing you used to do in Trace. Maybe you use it to capture query performance over time, or find queries that exceed a specific duration, I/O, or CPU.  Maybe you want to find what login executed a query, or from what workstation or application the query originated.  But Extended Events is so much more than a replacement for trace; it is a whole new way to think about troubleshooting.

In this session we will discuss the targets, actions, and predicates available in Extended Events and see how you can leverage them to look at problems in SQL Server in ways that you never could before. We will step through as many demos as possible in the time available so you can see why joining Team XE was one of the best decisions you ever made.  You’ll walk away with a deeper understanding of Extended Events along with a new methodology for approaching and solving issues in SQL Server.
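
By way of illustration, a minimal sketch of an event session that combines an event, actions, a predicate and a target: capture statements running longer than one second, together with the login and application that issued them (session and file names are illustrative).

    -- Create and start a minimal Extended Events session.
    CREATE EVENT SESSION LongRunningQueries ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.server_principal_name, sqlserver.client_app_name)
        WHERE (duration > 1000000)   -- duration is reported in microseconds
    )
    ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

    ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;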

Power BI Premium has many great features; however, it is not possible to explore these features before committing to the more expensive Premium tier. This makes it difficult to justify the sizable cost.

Based on a suggestion by Chris Webb, in this talk we will implement a selection of Power BI Premium features for a fraction of the cost, using Azure Power BI Embedded. Since Azure Power BI Embedded does not include the same rich portal experience as the Power BI Service, we will implement all of this functionality using the Power BI REST APIs with just a dash of code.

Do you ever worry that your reports look like they were born in the 90s, or were designed by a 3rd grader? Do you feel overwhelmed when tasked with creating attractive reports which tell the customer a story? Thankfully, a few simple design concepts can have big impacts on your reports. There are easy-to-learn techniques and patterns to create a pleasing look and feel in our reports, and some well-defined principles to decide which visual is most appropriate in your situation.

Join me in this session to learn:

  • How to choose the right visual to complement the data message.
  • Best practices for visual formatting to tell a complete story.
  • Colour theory - designing a colour palette, and ensuring accessibility.
  • Layout techniques for clean and simple reports.
  • Real estate management, or how to fit everything on a report without it looking cluttered and losing effectiveness.

If you have already dipped your toe into the waters of Azure SQL Database, you'll know that there is no SQL Agent provided with the PaaS service.

Fortunately, there are a number of options available to get over this hurdle. One of these is Azure Automation, which can be used as a scheduling engine to run PowerShell or Python scripts to perform your tasks. However, Azure Automation Runbooks can also be triggered from Azure SQL Database alerts.

This session will introduce Azure Automation, from a basic manually executed Runbook to an alert-driven responsive utility that can save you considerable effort.

We can also take our alert responses a step further and use them as the trigger for an Azure Logic App workflow. Azure Logic Apps provide a robust workspace with built-in scalability and retry logic for your workflow, along with a multitude of connectors which can expand the response to your alert.

Imagine being able to capture an Alert, perform the remedial action, log the issue and response in your ticketing system and switch on your kettle so you can sit back and let Azure do it all for you.

Query Tuning is easier said than done. Here is an opportunity to learn some real-world query tuning examples. In this 100% demo, deep-dive session you will learn how you can re-write T-SQL queries using new constructs, tune indexes & deal with statistics to improve query performance. You will also learn a few advanced concepts about execution plans and iterators. This session will be an eye-opener for you and you will learn things that Google cannot find for you. Assured.

You finally got the go-ahead and now you have a nice and shiny Power BI service, or perhaps a Power BI Report Server environment.
Maybe you went all out and have even set up a deployment pipeline to automate deployments.

The one thing you probably haven't done yet is setting up that feedback loop.
You're missing metrics. The important metrics that enable you to manage your environment beyond 10 users.

So many things to think about...
Who's using your reports, how often and at what times?
Who actually needs a pro license, who doesn't need it anymore?
You need to plan for maintenance, you need impact assessments for outages or deployment failures.
And you surely need these metrics to show the validity of Power BI within your department or even the enterprise.

In this session you'll learn everything you'd ever want to know about monitoring your Power BI environments and how you too can start monitoring Power BI Report Server or the Power BI service like a pro!

You have created great cubes, Power BI and Reporting Services reports, but how do you know if they are being used? Learn how to set up the collection of usage data and how you can use this data in your decision making.

We will talk about how to collect the data, how to build something meaningful from the data and how you can report on top of the data. We will do this for SSAS cubes (Multidimensional and Tabular), for Power BI reports residing in the Power BI Service and for Reporting Services Reports and we will explore ways you can further develop this for your own organization.
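
For Reporting Services, for example, usage collection can start from the ExecutionLog3 view in the ReportServer catalog database; a minimal sketch, assuming a default installation:

    -- The twenty most recent report executions and where the time went.
    SELECT TOP (20)
           ItemPath,
           UserName,
           TimeStart,
           TimeDataRetrieval + TimeProcessing + TimeRendering AS TotalMs
    FROM   ReportServer.dbo.ExecutionLog3
    ORDER  BY TimeStart DESC;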

At the end of the session all participants will leave with all the code, as well as the know-how to get started with the collection of usage statistics for their Microsoft BI solutions.

There are numerous challenges that come with trying to automate deployments of software. One of those is the tools we use and the management of the software used to deploy the software. A pipeline that is not trusted because of its rate of failure due to unreliability is sometimes worse than no pipeline at all. In this talk we will discuss our experiences of what works and what prevents a reliable deployment process, in terms of both the pipeline and the management/configuration of the deployment software. We will cover areas such as headless build agents, software versioning, shared tooling and shared pipeline infrastructure.

Have you ever needed to learn a new database design and not known where to begin? Are you trying to find out why a query doesn't perform well? Or do you need to provide security information to auditors or your security team?

SQL Server has numerous metadata facilities available to help you with these tasks and more. Functions, dynamic management views, and system stored procedures can illuminate details from a single column up through an entire SQL Server instance. We will demonstrate metadata techniques to help you:

  • Document your database schema objects such as procedures, functions, tables, columns and indexes
  • Investigate performance and look for bottlenecks and tuning opportunities
  • Discover metadata to administer your database backups, index maintenance, and security
  • Apply your own metadata using extended properties

 We will also cover the official Microsoft documentation on these features and other resources on how to use them.
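
As a small taste of these techniques, a sketch that joins catalog views to a dynamic management view to list every index with its table, columns and usage counts:

    -- Indexes with their columns and how often they are actually used.
    SELECT t.name AS table_name,
           i.name AS index_name,
           c.name AS column_name,
           us.user_seeks,
           us.user_scans,
           us.user_updates
    FROM   sys.indexes AS i
    JOIN   sys.tables AS t ON t.object_id = i.object_id
    JOIN   sys.index_columns AS ic ON ic.object_id = i.object_id
                                  AND ic.index_id  = i.index_id
    JOIN   sys.columns AS c ON c.object_id = ic.object_id
                           AND c.column_id = ic.column_id
    LEFT JOIN sys.dm_db_index_usage_stats AS us
           ON us.object_id   = i.object_id
          AND us.index_id    = i.index_id
          AND us.database_id = DB_ID()
    ORDER  BY t.name, i.name, ic.key_ordinal;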

Do you want to install a new SQL Server instance in less time than it takes to get a latte at Starbucks?

I've worked on teams managing hundreds of SQL Server instances and tens of thousands of databases. I've also been the lone DBA, managing all things SQL Server with nobody around to help. In both cases, automation is critical to ensure you can be productive and keep the lights on. It doesn't matter what scale you work at: if you're interested in using automation to eliminate routine tasks, this session is for you.

In this session, we'll walk through the reasons you need to automate, the process you should use to determine which tasks to automate, and how I approach the automation itself, including picking the right tool for the job and convincing your boss to let you spend time writing automation.

Hive Query Language (HiveQL) is a SQL-like language which enables querying and analysis of data on Hadoop.

This session aims to introduce database professionals to the language of HiveQL.
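
To show how familiar the syntax feels to a T-SQL developer, a minimal HiveQL sketch (the sales table and its columns are hypothetical):

    -- Top ten regions by total sales for one year.
    SELECT region,
           SUM(amount) AS total_sales
    FROM   sales
    WHERE  sale_year = 2019
    GROUP  BY region
    ORDER  BY total_sales DESC
    LIMIT  10;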

Whilst Power BI is considered to be the best self-serve reporting tool in the market, it is often criticised for its lack of governance.  Many organisations are seeking the Nirvana of self-serve and enterprise reporting, but in reality, they are left with a completely unstructured, poorly governed strategy. 

Poor governance actually has nothing to do with Power BI. Yes, Power BI files and datasets do not play nicely with object-driven source control providers like GitHub; however, DevOps is not just about code automation. Let’s think about the process of report deployment and simplify our problem.

The 1-hour session focuses on a specific customer who wanted a process to create, edit, and manage their content – without the danger of unvalidated reports being published into a Power BI production app workspace. The demo includes services such as Microsoft Flow, Power BI and Outlook, working collaboratively to improve end-to-end reporting governance. Audience participation is encouraged throughout.

Whether you are hosting SQL Server in your office, in a data center, or in the cloud, SentryOne enables you to monitor, alert, and tune so that you get the most out of your servers – even servers that aren’t running SQL Server. In this session, your hosts guide you through the spectrum of SentryOne products - products that help you build, test, document, and monitor the entire Microsoft Data Platform.

SentryOne tools provide best-of-breed capabilities for SQL Server, SSRS, SSIS, and SSAS across deployments on Azure, Amazon, and on-premises installations. You’ll see time saving strategies for managing performance across hybrid environments - whether you are running physical or virtual servers, SQL Server, Azure SQL Database, APS, Amazon RDS, or Azure SQL Data Warehouse.

You will see many quick demonstrations of SentryOne software highlighting everything from data integration components for SSIS, automated testing tools for the Microsoft Data Platform, documentation and metadata management tools, and of course, the industry’s best performance monitoring and optimization product, SQL Sentry. SentryOne provides a unified management experience; it’s your one platform for physical, virtual, and cloud performance.

The Microsoft SQL Server leadership team present an interactive Q&A session

First, we were hunters. Creating and refreshing environments for dev/test/UAT by “hitting it ‘til it works”. Running a restore job using the SSMS wizard. Then running it again from T-SQL after adding the WITH MOVE option to reflect the different disk layout. Then yet again after freeing up some disk space.

That was the first age of environment provisioning.

Some of us moved on to rubbing sticks together: putting the script into a SQL Agent job, scripting out the permissions and sticking our sledgehammer masking techniques in as a job step. We gathered up our jobs, which often failed and took too long, but that was the way things were. Nobody had told us about the schema change, or the tiny disk space we were trying to land in.

We were pleased to be hunter-gatherers.

In this session, we discover the third age of environment provisioners. The farmers. Developers, analysts and others can self-serve, through repeatable, and audited processes. In today’s age, masking desensitizes environments to mitigate against breaches and ensure compliance, while enabling developers and testers to work with realistic data. Whole environments are delivered at scale with databases customized to their specific security needs.

Learn how you can join the farmers!

Microsoft Azure SQL DB is a database as a service. That means it is “fully managed”: it removes many of the set-up and administration tasks that we are used to with SQL Server on-premises. Not having to worry about an operating system, hypervisor or thread count can simplify performance management. Azure SQL DB even introduces functionality that aims to automatically tune workload performance. However, that should not make us complacent.

The performance of any database as a service is inextricably linked to cost. A poorly optimized workload can cause the provisioning of extra resources. It is great to be able to scale automatically to stay within SLA, but this costs money. If left unchecked, it can accumulate fast and hit your company's bottom line hard. The performance challenge has evolved with the database as a service, but it certainly has not gone away.

Join this session to learn how managing performance is even more critical with Azure SQL DB. Learn which key performance indicators are most applicable, what auto-tuning really means, and get some tools to help you identify performance issues and correctly size your database.

The proposed Agenda is as follows:

  • How performance is measured (throughput, response time, I/O, key KPIs, top statements)
  • Key performance indicators on-premises vs Azure (DTU vs CPU, I/O, locks, log)
  • Performance and cost
  • Using DMVs to monitor performance (see the sketch below)
  • Using Query Store to monitor performance
  • Automatic tuning (Intelligent Insights)
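
As an example of the DMV agenda item, a minimal sketch reading the rolling resource-consumption history that Azure SQL Database exposes (one row roughly every 15 seconds):

    -- Recent CPU, I/O and memory consumption for this database.
    SELECT TOP (20)
           end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent,
           avg_memory_usage_percent
    FROM   sys.dm_db_resource_stats
    ORDER  BY end_time DESC;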

The world of data warehousing is evolving quickly. Enterprises today need more agility and scalability, so they are incorporating new technologies into their data warehousing strategy to meet their goals. This includes everything from cloud services and automation tools to analytics engines like Spark. The goal of modern data warehousing is to not only deliver better insights faster to more users, but also provide a richer picture of operations afforded by a greater volume and variety of data for analysis. Attend this session to learn how to future-proof your modern data warehousing environment to meet the needs of the business for the long term, how to overcome common data warehousing challenges, and the related must-have technology solutions.

The software engineering life cycle is well established in the industry, from requirements through design, build, deployment and maintenance. Now it's time to apply this to our machine learning projects. Azure Machine Learning is a platform for developing and deploying your machine learning models on Azure.

Use your favourite frameworks, libraries and tools to build and deploy your machine learning experiments with the help of the cloud.

In this session we will look at the life cycle of your projects: from data to model, and from model to consumption via a container. We will also explore how Automated Machine Learning can support your data science projects with optimisations, pace and explainability.

All code samples will be shared so you can run them yourself after the session.

Come learn all the new capabilities of SQL Server 2019 directly from the Microsoft engineering team. This session will provide the strategy behind SQL Server 2019 as well as a deep dive into all the new features and how they can help you modernize your data estate with SQL Server.
In this session Buck Woody explains how Microsoft has implemented the SQL Server 2019 relational database engine in a big data cluster, leveraging an elastically scalable storage layer that integrates SQL Server and HDFS to scale to petabytes of data storage. You’ll see the three ways you can interact with massive amounts of data: data virtualization, data marts, and working with a complete Kubernetes cluster in SQL Server. You’ll also learn common use-case scenarios that leverage big data and the SQL Server 2019 Big Data Cluster on-premises, in the cloud, and in a hybrid architecture.
This session will showcase several improvements in SQL Server, focusing on the latest and upcoming query performance diagnostics and query processing enhancements that address some of the most common customer pain points. Ranging from new xEvents and Showplan improvements, to Intelligent Query Processing innovations, learn how you can leverage these features and streamline the process of troubleshooting query performance with faster insights.
Containers are the new virtual machines. Containers present a new way to deploy, manage, and run SQL Server that was never possible before. This session will present an internal view of how Docker containers work and how SQL Server runs in them. We will cover the architecture of containers, how we have built SQL Server to run in containers, and how they work in environments such as Kubernetes. While this is an internals-focused session, you will walk away with knowledge of practical scenarios where SQL Server in containers may be the right deployment model for you.
Learn how Machine Learning Services in SQL Server is a powerful end-to-end ML platform for customers, on both Windows and Linux. Come learn about the unique value proposition of doing your entire machine learning pipeline in-database – right from data pre-processing, feature engineering, and model training to deploying ML models and scripts to production in a secure and compliant environment without moving data out.
With SQL 2017 we entered a brave new world with cross-platform SQL Server, and with the new Azure Data Studio, we now have the tools to go with it. In this demo-packed session, learn about the newest tool for SQL Server for Windows, MacOS, and Linux, and how you can use it and extend it for your own needs. Be one of the first to see the new Notebooks feature for SQL Server, which brings the power of Jupyter notebooks to your SQL Server workflow.
We are now giving you the opportunity to securely execute Java code in SQL Server using the new 3rd party extensibility framework. Come to this session to learn how you can execute Java code in SQL Server on both Windows and Linux. This session will also cover the 3rd party extensibility architecture we are launching soon, which will enable the SQL Server community to build their own language extensions!
Upgrades should be approached with the same rigor and processes as a full software or hardware project – a solid methodology is required for success. Microsoft provides you with all the tools you need to achieve a seamless, reliable upgrade experience. In this session we demonstrate some of the free tools that Microsoft provides in order to ensure your SQL Server upgrade is a success.
SQL Server 2019 expands on the PolyBase feature introduced in SQL Server 2016 by providing a robust data virtualization solution to reduce the need for ETL and data movement. Come learn how the new data connectors work with sources like Oracle, MongoDB, Cosmos DB, Teradata, and HDFS.
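
As a hedged illustration of the data virtualization pattern, a minimal PolyBase sketch exposing a remote Oracle table as an external table; the server address, credential and object names are placeholders:

    -- Register the remote source and map one of its tables.
    CREATE DATABASE SCOPED CREDENTIAL OracleCredential
        WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

    CREATE EXTERNAL DATA SOURCE OracleSales
        WITH (LOCATION = 'oracle://oracleserver:1521',
              CREDENTIAL = OracleCredential);

    CREATE EXTERNAL TABLE dbo.RemoteOrders
    (
        OrderId INT,
        Amount  DECIMAL(10, 2)
    )
    WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

    -- Query it like any local table; no ETL or data movement required.
    SELECT TOP (10) * FROM dbo.RemoteOrders;
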
In this session, you'll learn about the various prebuilt AI/ML models that are available for your consumption through Microsoft's Cognitive Services. Additionally, you'll see examples of how these prebuilt services can be leveraged with SQL Server 2019.
In this session you'll learn how Hyperscale removes limits for cloud based VLDBs.  We'll demonstrate how to create, migrate, and do point in time restore of very large databases in Azure.
This session presents how to migrate, replicate, and synchronize data between SQL Server, SQL VM, Azure SQL Database, and Azure SQL Database Managed Instance, across on-premises, Microsoft Azure, and other cloud platforms to build a real hybrid data platform. We introduce our current technology choices, deep dive into customer scenarios and use cases, and share the product roadmap.
This session will focus on the new features and capabilities that help you meet compliance and security needs with SQL Server on-premises as well as in Azure SQL Database. This includes the new Static Data Masking, new authentication capabilities, new functionalities in Vulnerability Assessment and Threat Detection as well as Always Encrypted. If you want to know about the latest developments in SQL Security, this session is for you.
Learn about the graph functionality integrated into the SQL engine, and the unique value it provides.
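
A minimal sketch of the node and edge tables plus the MATCH predicate (table names are hypothetical):

    -- Graph tables and a pattern-matching query.
    CREATE TABLE Person (id INT PRIMARY KEY, name NVARCHAR(100)) AS NODE;
    CREATE TABLE friendOf AS EDGE;

    -- Which people does Alice know?
    SELECT p2.name
    FROM   Person AS p1, friendOf, Person AS p2
    WHERE  MATCH(p1-(friendOf)->p2)
      AND  p1.name = N'Alice';
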
SQL Server 2008/R2 is reaching the end of support very soon. Are you still running SQL Server 2008/R2 and would like to upgrade? Would you like to move to a modern SQL Server platform to achieve breakthrough performance, maintain security and compliance, and optimize your data platform infrastructure? Azure SQL Database and Managed Instance allow you to build globally scalable applications with extremely low latency, and Azure SQL Database is the best cloud database offering in the market. In this session, we will take a detailed look at the migration life cycle and show you how we have made it easy to migrate SQL Server instances to Azure with near-zero downtime by using the Azure Database Migration Service and related tools. We will also cover the most commonly seen migration-blocking scenarios and demonstrate how our service can unblock your migration to Azure SQL Database. We will also give you a deep dive into how to perform scale migrations using our CLI components.
In this session, you will learn about the latest additions to the portfolio of network security features for Azure SQL Database. This session will be heavy on the “how-to” for securing network connectivity for Managed Instance.
Modernizing is often a loaded term in a software application setting. In this session, we will break down the options for modernizing an existing application, the benefits of moving applications to a managed Azure open-source database platform, and the best practices and gotchas that you need to look out for during data migration.
Azure Cosmos DB has many use cases, and not all of them are clear to Azure Cosmos DB newcomers. If you're a relational expert and have been wondering about graph, how you'd survive without a schema, and scale-out databases, this session can help. We'll cover the Cosmos DB Gremlin API and how to set up a graph database. We'll then put multiple items into a single collection with different schemas and show you how to link them and query them, along with an explanation of partition keys for limitless scale-out.
Azure offers a vast, comprehensive data estate! While this is great for enabling users to pick the right tools for the right job, it has also increased the surface area for understanding how to integrate a vast number of components. In this session, we will show off the plug-and-play nature of Azure products by showcasing how you can write to Azure Cosmos DB with a data pipeline moving data from multiple sources, powered by Azure Data Factory and Databricks.
Do you want to know why customers chose Cosmos DB? Come learn about the business goals and technical challenges faced by real world customers, and learn about key Cosmos DB features so you can help your customers deliver their high-performance business-critical applications on Cosmos DB.
Find out how a major UK hotel chain unified their wildly different sources of data to build a supercharged analytics and pricing engine to power their business. Find out how Cosmos and Databricks helped them get to know their customers, how best to retain them, and how best to keep them happy, all while ensuring GDPR compliance and the right to be forgotten. Learn how downstream business users consumed the insights via Power BI, while app developers were able to keep a close eye on performance using Application Insights and how scale up and scale down allowed the partner to meet challenges around massive initial data loads, without blowing the project budget. Find out what best practices helped ensure that their Cosmos DB, Function, Event Hubs and Azure Databricks instances all played harmoniously together. Hear how they used partitioning, indexing, scaling and other cloud design patterns to deliver incredible performance at the lowest possible cost.
For many newcomers to Azure Cosmos DB's SQL API, the learning process starts with how to model and partition data effectively. How should I think about modeling data in Cosmos DB? When should I co-locate data in a single collection versus multiple collections? When should I de-normalize or normalize properties in the same document versus multiple documents? How should I apply a partition key to this object model? In this session, we will discuss the strategies and thought process one should adopt for modeling and partitioning data effectively in Azure Cosmos DB. We will also briefly cover related topics such as implementing optimistic concurrency control, transactions with stored procedures, batch operations, and tuning queries and indexing.
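
A minimal sketch in the Cosmos DB SQL API dialect of the single-collection, multi-entity pattern discussed here; customerId is assumed to be the partition key and c.type a discriminator property, so the query stays within one logical partition:

    SELECT c.id, c.orderTotal
    FROM   c
    WHERE  c.customerId = 'C-1001'
      AND  c.type = 'order'
    ORDER  BY c.orderTotal DESC
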
Within Azure we have a rich ecosystem of AI services that can be leveraged to gain new insights into your data. This session will give you an easy-to-digest breakdown of the key services that matter and how to approach each one. We will look at:
  • AI Developer services such as Cognitive Services and the Bot Framework
  • Democratising AI services such as Azure Machine Learning Studio
  • Data Developer services such as Databricks
  • Data Scientist services such as Notebooks, the Azure ML SDK for Python and the Azure Machine Learning Service
Azure Databricks supports both classic and deep learning ML algorithms to analyse large datasets at scale. The integrated notebook experience allows data scientists and data engineers to do exploratory data analysis, and it also feels native to Jupyter notebook users. In this session we will extract intelligence from the Higgs dataset (particle physics) by running classic and deep learning models using Azure Databricks. We will also peek into the Azure ML service's integration with Azure Databricks for managing the end-to-end machine learning lifecycle.
The drive to build systems that can increasingly hear, see, speak, understand, and even begin to reason is real, and with Microsoft Cognitive Services APIs, Azure Bot Service, and Azure Cognitive Search adding this functionality is easier than ever. In this session, we’ll look at the different options within the Cognitive Services suite, show you how to connect to the APIs using Python code, walk through a live bot demo, and build an Azure Cognitive Search index. You should leave this session feeling like you’ve had a jump start to further your AI developer skill set.
Azure offers a comprehensive set of big-data solutions that help you gather, store, process, analyse and visualise data of any variety, volume or velocity, so you can discover new opportunities and take quick action. In this overview session, we’ll look at the various components within Azure that make up the Modern Data Warehouse, enable Real-Time Analytics, and support Advanced Analytics scenarios. You should leave with a high level understanding of the capabilities and limitations of each of the products within the Azure Analytics portfolio.
Why do you need anything more than SQL Server? We will discuss what factors influence platform choices, such as data and processing scaling and open source tool chains that include Hive and Spark.  This session will include a quick overview of HDInsight and some of the tools that SQL Developers can use to interact with it, as well as some best practices and gotchas. There will be a short demo on some of the tool choices.
Ever wondered how you can add the power of ML to your existing SQL estate without the need to invest in new services? Come to this session to learn about:
  • Using sp_execute_external_script to run Python and R workloads in SQL (see the sketch below)
  • The PREDICT function and how to operationalise models in Azure SQL Database
  • Performance considerations of running your ML workloads in your production database
  • How to train and operationalise deep neural networks in the database
Take your database to the next level and make your data work for you with the power of ML.
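
For instance, a minimal sketch of sp_execute_external_script running Python in-database (requires Machine Learning Services; dbo.OrderLines is a hypothetical table):

    -- Run a Python transform over a T-SQL result set, inside the database.
    EXEC sp_execute_external_script
         @language = N'Python',
         @script = N'
    df = InputDataSet
    df["total"] = df["qty"] * df["price"]
    OutputDataSet = df
    ',
         @input_data_1 = N'SELECT qty, price FROM dbo.OrderLines'
    WITH RESULT SETS ((qty INT, price DECIMAL(10, 2), total DECIMAL(20, 4)));
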
Azure Stream Analytics is a fully managed serverless offering that enables customers to perform real-time data transforms and hot-path analytics using a simple SQL language. Recently, Azure Stream Analytics enhanced its integration with Azure SQL Database and Azure SQL DW, making SQL the ideal database to collect and further process data coming from real-time systems and IoT devices. In this session, we will notably show how to combine SQL reference data to augment data coming from devices and create real-time alerts, leverage partitioning to write data to SQL at high speed, and create real-time dashboards.
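
A minimal sketch in the Stream Analytics query language: streaming device readings joined to SQL reference data and aggregated over a tumbling window (the input and output names are aliases you would define on the job):

    -- Average temperature per device over 30-second windows, with alerts.
    SELECT d.DeviceId,
           r.DeviceName,
           AVG(d.Temperature) AS AvgTemperature,
           System.Timestamp() AS WindowEnd
    INTO   SqlAlertOutput
    FROM   DeviceInput d TIMESTAMP BY d.EventTime
    JOIN   DeviceReference r ON d.DeviceId = r.DeviceId
    GROUP  BY d.DeviceId, r.DeviceName, TumblingWindow(second, 30)
    HAVING AVG(d.Temperature) > 75;
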
SQL Data Warehouse is a cloud-based Enterprise Data Warehouse (EDW) that leverages Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data. Azure Databricks is a fast, easy and collaborative Apache Spark-based analytics service that can be used to accelerate big data analytics and artificial intelligence (AI) solutions. This session will cover the architecture patterns for using the two in synergy to create an agile, adaptable data ecosystem.
Real-time decisions enabled by massively scalable distributed event driven systems with embedded analytics are re-shaping our digital landscapes. But how should you know which technology to choose? And how do you select the right service or deployment option for a given technology? In this session, we will give a rundown of the fundamentals of streaming systems, how and why they can deliver value to your business, and a comprehensive break down of the options for streaming analytics in Azure. You should leave with an understanding of which services you can use for your next real-time analytics project.
A technical overview of Azure SQL Data Warehouse Gen 2. SQL Data Warehouse is a cloud-based Enterprise Data Warehouse (EDW) that leverages Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data. Unlock new insights from your data with Azure SQL Data Warehouse, a fully managed cloud data warehouse for enterprises of any size that combines lightning-fast query performance with industry-leading data security. Optimise workloads by elastically scaling with built-in auditing and threat detection.
Back in 2003 Microsoft released an addition to SQL Server 2000 named SQL Server Reporting Services (SSRS). This is the story of the journey of the BI engine that could: from updates, to periods of rest, and updates again. Not only has it been updated, but new features and capabilities have been added, and even a new project has been born from its humble beginnings. Now, paginated reports, the only type of report available when SSRS was initially released, are available in both on-premises and cloud-based solutions. Join this session to experience the journey of SSRS and learn how to create and deploy reports across all the different products available in the Microsoft ecosystem.
Power BI Premium and Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will deep dive into exciting new and upcoming features. Various topics will be covered such as management of large, complex models, connectivity, programmability, performance, scalability, management of artifacts, source-control integration, and monitoring. Learn how to use Power BI Premium to create semantic models that are reused throughout large, enterprise organizations.
Come to this session to learn how to enable collaboration and solution design for business users and IT specialists to build solutions that enable an organization to harness the power of their big data. This session will show you how to collaborate across business and IT, and how we can extend intelligence beyond Power BI into Azure Data Services. Once Power BI has landed in an organization, attaching and extending into Azure can be achieved using common use cases and modernization plays.
Hybrid data landscapes are common, and the on-premises data gateway enables connecting to your on-premises data sources from online services (like Power BI, PowerApps, Microsoft Flow and Logic Apps) without the need to move your data to the cloud. Come to this session to see the latest gateway features and best practices for setup and configuration, along with troubleshooting tips and tricks to investigate bottlenecks and resolve common gateway errors.
A chance for speakers new and seasoned to take the stage for 5 minutes and effuse about something/anything in SQL Server they love.

This session focuses on the deeper integration of SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) and the broad extensibility of Azure-SSIS Integration Runtime (IR). We will first show you how to provision Azure-SSIS IR - dedicated ADF servers for lifting & shifting SSIS packages – with your SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance and extend it with custom/3rd party components. Preserving your skillsets, you can then use the familiar SQL Server Data Tools (SSDT)/SQL Server Management Studio (SSMS) to design/deploy/configure/execute/monitor your SSIS packages in the cloud just like you do on premises. Next, we will guide you to modernize your ETL workloads by triggering/scheduling SSIS package executions as first-class activities in ADF pipelines and combining/chaining them with other activities, allowing you to automatically provision Azure-SSIS IR on demand/just in time, inject/splice built-in and custom/3rd party data transformations, such as Power Query Source, Data Quality (Matching/Cleansing) components, etc.  And finally, you will learn about the licensing model for ISVs to develop their paid components/extensions and join the growing 3rd party ecosystem for SSIS in ADF.

Deploying untested code to production isn't ideal, and manual testing can be slow and unreliable. In this talk we will look at the different types of automated testing we can use for our databases to give us confidence in the quality of the code.

This talk is for developers and DBAs: developers who want to understand how and where they should write tests and what the point of each testing phase is, and DBAs who want to make sure that the developers are testing their code effectively.

Do you hear terms like unit testing and integration testing and shrug your shoulders, thinking it all sounds a bit confusing? Do you see the memes on Twitter captioned “unit tests pass” while some piece of household furniture is entirely non-functional? Do you ever wonder how the blocking, cursor-riddled proc ever managed to get through any performance testing? Then come to this talk and learn about the different types of testing, and how testing can be useful in proving not only that the code functions as required but also that it meets other requirements such as performance goals.

We will cover which types of testing you can and should include in the development process. This talk is about the theory of testing, and although there aren’t any demos, you will learn where and when to apply testing techniques.

You will go away with enough knowledge to know how to start testing yourself or, for DBAs, to help your developers start testing themselves.

The Microsoft Power BI and Analytics team present an interactive Q&A session
DevOps for AI presents several challenges. In developing an AI application, there are frequently two streams of work: data scientists building machine learning models, and app developers building the application and exposing it to end users for consumption. Building a Continuous Integration (CI)/Continuous Delivery (CD) pipeline allows the decoupling of these streams and allows for quicker deployment of scalable models. In this workshop, we will implement a CI/CD pipeline for anomaly detection and predictive maintenance applications. The following modules will be covered in the presentation (objectives):
  1. Introduction to CI/CD
  2. Create and customize a CI/CD pipeline using Azure
  3. Leverage Azure Model Management to version models and discover them using a CI/CD pipeline
  4. Learn how to develop a machine learning pipeline to update models and create a service
Speaker: Mithun Prasad