These are the sessions submitted so far for SQLBits 16.

Agile BI promises to deliver value to its end users much more quickly. But how do you keep track of versions and prioritize all the demands users have?
With Visual Studio Online (the cloud version of Team Foundation Server) it is possible to start for free with 5 users, with Version Control, Work Item management and much more.
In my session you will get directions for a quick start with Visual Studio Online. You will learn the possibilities of Version Control and how to implement Scrum work item management with the available tools.
Your tabular model is done, approved and ready to be used by the user. Through Excel the user gets very excited about tabular models, and for a while uses Excel as a self-service business intelligence tool. Then, all of a sudden, the user starts asking how to extract more, and different, information from the tabular model through Excel. Now it is up to you to familiarize the user with all the possibilities of working with the tabular model by means of Excel.
Given the small amount of documented knowledge about using tabular models through Excel, I will show you how to get the best out of your tabular models by using Excel as a self-service business intelligence tool. Filters, named sets, and calculations in the pivot table: I will explain it all!
Where to start when your SQL Server is under pressure? If your server is misconfigured or strange things are happening, there are a lot of free tools and scripts available online. Written by renowned SQL Server specialists, these tools will help you decide whether you have a problem you can fix yourself or whether you really need a specialised DBA to solve it. They provide you with insight into what might be wrong on your SQL Server in a quick and easy manner. You don't need extensive knowledge of SQL Server, nor do you need expensive tools, to do your primary analysis of what is going wrong. And in a lot of instances these tools will tell you that you can fix the problem yourself.
Whether you've dabbled in PowerShell or wondered what all the fuss is about, make no mistake: PowerShell is something worth learning to make your life as a SQL Server professional easier. Whether you're a DBA, an SSIS developer, or a security professional, in this session you'll see practical, real-world examples of how you can blend SQL Server and PowerShell together - and not just a bunch of regular T-SQL tasks that have been wrapped in PowerShell code.

In this session, you'll first get a brief introduction to the SQL Server PowerShell module. From there, it's nothing but code and examples to show you what's possible and why learning PowerShell is such a fundamentally awesome skill to make part of your portfolio or CV. You don't need to know anything about the language before stopping in, but by the time you leave you'll be excited to learn more!
Want to build Analysis Services tabular models that not only perform well but also provide an outstanding user experience? Want to do it in less time, with less heartache?

In this demo-heavy session you'll be introduced to the top FREE tools that every SSAS Tabular developer should have in their toolbox. Created by passionate (and generous) developers in the community, these tools can help you create beautiful models that perform well and provide an optimal user experience in less time.

Tools covered include: BIDS Helper, DAX Studio, VertiPaq Analyzer, DAX Editor, BISM Normalizer

Note: This session assumes developer experience in designing and implementing SSAS tabular solutions.



We will kick off with a brief history of how Columnstore indexes have evolved since SQL Server 2012 and a look at their internals, because this is what made Operational Analytics possible. Then we will build up the session with tips and tricks on how to make Operational Analytics fit your environment, based on lessons learned in the field. In the end we will show you how to make it fit, using various techniques, even with heavy loads on a system.
After this session you will know how to make this fit your environment; there will be no more mystery about what your hot and cold data is, and you will know the different uses of columnstore to make this work.

If your regular SQL Server becomes too slow for running your data warehouse queries, or uploading the new data takes too long, you might benefit from the Azure Data Warehouse. Via its "divide and conquer" approach it provides significant performance improvements, yet most client applications can connect to it as if it were a regular SQL Server.

To benefit from these performance improvements we need to implement our Azure Data Warehouse in the right way. In this session - through a lot of demos - you will learn how to set up your Azure Data Warehouse (ADW), review indexing in the context of ADW, and see that monitoring is done slightly differently from what you're used to.
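
To give a flavour of the monitoring differences, here is a minimal sketch: instead of the familiar sys.dm_exec_requests, ADW exposes PDW-specific DMVs such as sys.dm_pdw_exec_requests (the filter values below are illustrative).

    -- Show the longest-running requests still active on the warehouse
    SELECT TOP (10)
           request_id,
           [status],
           submit_time,
           total_elapsed_time,   -- elapsed milliseconds
           command
    FROM   sys.dm_pdw_exec_requests
    WHERE  [status] NOT IN ('Completed', 'Failed', 'Cancelled')
    ORDER BY total_elapsed_time DESC;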

Digital Transformation is much more than just sticking a few Virtual Machines in the cloud; it is real, transformative, long-term change that benefits and impacts the whole organisation.
Digital Transformation is a hot topic with CEOs and the C-level suite, renewing their interest in data and what it can do to empower the organisation.
With the right metrics and data visualisation, Power BI can help to bring clarity and predictability to the CEO to make strategic decisions, understand how their customers behave, and measure what really matters to the organization. This session is aimed at helping you to please your CEO with insightful dashboards in Power BI that are relevant to the CxO in your organisation, or your customers’ organisations.
Using data visualisation principles in Power BI, we will demonstrate how you can help the CEO by giving her the metrics she needs to develop a guiding philosophy based on data-driven leadership. Join this session to get practical advice on how you can help drive your organisation’s short and long term future, using data and Power BI.
As an MBA student and external consultant who delivers solutions worldwide, Jen has experience in advising CEOs and C-level executives on strategic and technical direction.
Join this session to learn how to speak their language in order to meet their needs, and impress your CEO by proving it, using Power BI.

We all know about the "Vs" of Big Data: velocity, volume and variety. But what about the missing "V": visualization? What does big data look like? It has to be more than beautiful; it must convey information and insights in a way that people understand. Furthermore, people expect to derive actionable insights from their data. How can we make big data friendly to users? This session will look at a mix of technologies for visualizing big data sources and ways of achieving BigViz harmony in your Big Data. You will learn:

  • About Big Data - from machine-scale size down to human-scale understanding
  • Technologies for visualizing Big Data, both Microsoft and Open Source
  • Data Visualization principles
So Azure SQL Data Warehouse is now available and starting to be used - but what does that mean to you, and why should you care?

Reflecting on a large-scale Azure DW project, this session gathers together learnings, successes, failures and general opinions into a crash course in using Azure DataWarehouse “properly”.

We'll start by quickly putting the technology in context, so you know WHEN to use it, WHERE it’s appropriate and WHY it works the way it does.
  • Introducing the ADW technology
  • Explaining distributions & performance
  • Explaining PolyBase
Then we'll dive into HOW to use it, looking at some real-life design patterns, best practice and some “tales from the trenches” from a recent large Azure DW project.
  • Performance tips & tricks (designing for minimal data movement, managing distribution skew, CTAS, Resource classes and more)
  • ETL Patterns (Surrogate keys & Orchestration)
  • Common Mistakes & Pitfalls
  • Conclusions & Recommendations
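
Of the tips above, CTAS is the one that repays study first. As a minimal sketch (table and column names are illustrative), CREATE TABLE AS SELECT creates and loads a table in a single fully parallel statement, and hash-distributing on the join key is the classic way to design for minimal data movement:

    -- Create and load in one parallel operation, distributed on the join key
    CREATE TABLE dbo.FactSales_New
    WITH
    (
        DISTRIBUTION = HASH(CustomerKey),   -- joins on CustomerKey then need no data movement
        CLUSTERED COLUMNSTORE INDEX
    )
    AS
    SELECT SaleId, CustomerKey, SaleDate, Amount
    FROM   stg.Sales;
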
You've probably already seen that R icon in the Power BI GUI. It shows up when creating sources, transformations and reports. But the ugly textbox you got when you clicked on those icons didn't encourage you to proceed? In this session you will learn just a few basic things about R that will greatly extend your data loading, transformation and reporting skills in Power BI Desktop and PowerBI.com.

In this session, we will look at data visualisation using:

  • Apache Spark
  • R
  • Python
  • Power BI

We will look at how we can enhance the suite of offerings to your data consumers by opening your toolbox of Open Source data visualisation tools, as well as Microsoft's Power BI.

Using Microsoft SQL Server and Big Data as a source, we will look at these open source and proprietary tools as an accompaniment to Power BI - not a replacement. 

Join us for this demo-heavy session, and be sure to download the code from Jen's blog right before the session in order to try it out as we proceed through the session. The emphasis is on practical takeaways that you can apply when you go back to your office after SQLBits, and try it out for yourself.

So you have made first contact with Biml and are excited? Good!

You're wondering if Biml can do more than just transfer data from one SQL table to another? Great!

Because Biml can do so much more than just simple SSIS packages, we'll explore how to improve your existing packages using BimlScript and LINQ.

Topics covered, amongst others, are derived columns, incremental changes and how to handle flat files.

You'll leave with sample code and hopefully a couple of ideas on how to bring your Biml development to the next level.

In this demo-heavy session, you will learn about the basic concepts of increasing productivity by creating your SSIS packages using Biml.

We will look into manual Biml code to understand the general idea of Biml, then take it from there: generate a whole staging area from scratch, and end with a complete, manageable solution to maintain your staging process using SQL tables.

Have you ever spent hours fixing your SSIS packages due to a schema change on the source? Ever wanted to add a "load timestamp" to 370 tables in your staging area but refrained because it would have taken you weeks? If so, this is the session for you!

Most of the time pilots spend learning to fly is actually spent learning how to recover from emergency conditions. Yet while we as Database Administrators focus on taking backups, how much time do we actually spend practicing recovery with those backups? This session will focus on the kinds of situations that can dramatically affect a data center, and how to use checklists to practice recovery processes to assure business continuity.
Most of the time you'll see ETL being done with a tool such as SSIS, but what if you need near-real-time reporting? This session will demonstrate how to keep your data warehouse updated using Service Broker messages from your OLTP database.
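
As a hedged sketch of the plumbing involved (all names are hypothetical): Service Broker needs a message type, a contract, a queue and a service on each side, after which the OLTP side - typically from a trigger or the application - opens a conversation and sends a change message for an activated procedure on the warehouse queue to apply (not shown):

    CREATE MESSAGE TYPE [//DW/OrderChanged] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//DW/OrderContract] ([//DW/OrderChanged] SENT BY INITIATOR);

    CREATE QUEUE dbo.OltpSendQueue;
    CREATE QUEUE dbo.DwReceiveQueue;

    CREATE SERVICE [//DW/OltpService] ON QUEUE dbo.OltpSendQueue ([//DW/OrderContract]);
    CREATE SERVICE [//DW/DwService]   ON QUEUE dbo.DwReceiveQueue ([//DW/OrderContract]);
    GO
    -- Send a change notification, e.g. from an AFTER UPDATE trigger on Orders
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//DW/OltpService]
        TO SERVICE   '//DW/DwService'
        ON CONTRACT  [//DW/OrderContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//DW/OrderChanged] (N'<order id="42" status="shipped"/>');
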
Did you know you can use Biml for much more than just SSIS? No? Then this session will probably surprise you - even with some DBA-related topics! We will look into deployment, sample data creation, test cases and other ideas that will show you how powerful and flexible Biml really is - and that it does way more than you may have thought!
Maintaining a solid set of information about our servers and their performance is critical when issues arise, and often helps us see a problem before it occurs. Building a baseline of performance metrics allows us to know when something is wrong and helps us to track it down and fix the problem. This session will walk you through a series of PowerShell scripts you can schedule to capture the most important data, and a set of reports that show you how to use that data to keep your server running smoothly.

Containers are a new and exciting technology but what does it mean for DBAs?

This session will give an introduction into SQL Server running in containers and what options are available.

Attendees will be taken through the following:

Defining what containers are (benefits and limitations)
Configuring Windows Server 2016 to run containers
Installing the Docker engine
Pulling SQL Server images from the Docker repository
Running SQL Server containers
Committing new SQL Server images
Exploring 3rd party options to run containers on previous versions of Windows Server (real world example)

This session assumes that attendees have a good background in SQL Server administration and a basic knowledge of Windows Server administration.
SQL Server 2016: data quality and data cleansing have always been major challenges for any enterprise that deals with data. Those of us who dealt with Data Profiler Tasks in SQL Server 2008 were shocked yet pleasantly surprised by the great advances Microsoft made with the advent of Data Quality Services in the SQL Server 2012 release. In this hands-on presentation we shall look at how to set up a new knowledge base based upon an existing one, set up rules, do knowledge discovery within the new knowledge base, and finally cleanse the data through a data quality project. The end result: more effective data, guaranteed to keep end users and management happy.
Master Data Services can readily be employed for Rapid Application Development. We shall look at important development, data security and data maintenance aspects, all based upon a recent client implementation.
Ah, SQL Server Transactional Replication. The technology everyone loves to hate. But for all the notoriety, there's some interesting technology to be had in it that you might want to leverage. In this session, we'll explore what happens when you create a new transactional publication: what happens during a snapshot, how data gets delivered to subscribers, and how you can monitor, tweak, and tune your publications. We'll also look at a couple of "gotcha" scenarios regarding replication and high availability, and even how replication can put you out of compliance if you're not careful. There will be plenty of examples and demos and, yes, even some PowerShell!
Tired of Bar Charts? We'll build out a custom PowerBI Visual and show the power of PowerBI whilst going into a deep dive on how this is achieved. We will be exploring web technologies along with data technologies, and seeing how some very powerful constructs are used to produce PowerBI reports. 

We will be covering a variety of content; including: Typescript, Javascript, HTML5, Gulp, Visual Studio Code, the MVVM pattern, D3.js, and without giving the game away too much, Google Maps.
In this session we will look at how you can use Power BI to analyse your company's online presence.

We will look at how you can source data from Google Analytics, Facebook, Twitter, LinkedIn and Instagram.

We will discuss different ways to get the data and what the difficulties/hindrances are when working with APIs in Power BI.

We will then show how you can work with the data to get some meaningful analysis from it and ask the question: Can you get a holistic view of your online presence? 

This is a demo rich session where we will go through getting the data in and looking through how we can extend the model before visualizing the results.  

The participants will leave the session with ideas on how they can use Power BI (or indeed some other BI tool) to start analyzing their company's online presence.
Have you ever wondered if more goals are scored when it rains, or if Stoke win more games on a cold weekday evening at the bet365 stadium? This session shows how you can use open data sources and web pages to gather data.

We will then see how you can manipulate and extend it, and finally report on it. We will look into how you can use data from the Azure Data Market or other open sources and combine it with data from web pages to create the dataset you need.

When we have our dataset we will manipulate and extend it using M and DAX so that we can get meaningful insights from it. We will then dive into the data to see if there is anything to report. 

In this end-to-end Power BI Desktop demo we will use fun data that many can relate to, as the English Premier League is one of the most popular football leagues in the world. The audience will take away many nuggets of information as they see what a real-world example can look like. I will share all the obstacles I hit and the lessons I learned when creating this report, so they will see both the limitations of Power BI Desktop and open data, as well as their strengths. First of all, the audience will learn how to use Power BI Desktop to create something real with fun data.

Both the data acquisition and manipulation and the reporting are covered in this demo-rich session. The audience will learn about Power BI Desktop's strengths and weaknesses, as well as the benefits and potential problems of using open data and web pages as sources.

In the end the Power BI Desktop file will be available to download for the audience. 
Understanding how SQL Server stores your data can seem like a daunting task. In this session we'll learn how objects such as tables and indexes are stored in a data file. We’ll also look at how these concepts tie in to your work. We’ll see these concepts in action using demos and see how we can use this knowledge to better design solutions.

We'll start off by looking at the structure of a row and then move on to the concept of a data page. From there we'll cover a few special page types like the index allocation map and the GAM and SGAM pages. Then we'll look at index structures and talk about the differences between heaps and clustered indexes.
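
If you want to peek ahead, this is roughly the kind of demo involved (object names are illustrative; both DBCC PAGE and the page-allocation DMF are undocumented but widely used for exactly this kind of exploration):

    -- List the pages allocated to a table (SQL Server 2012+)
    SELECT allocated_page_file_id, allocated_page_page_id, page_type_desc
    FROM   sys.dm_db_database_page_allocations(
               DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED');

    -- Dump one of those pages; trace flag 3604 routes the output to the client
    DBCC TRACEON(3604);
    DBCC PAGE ('MyDatabase', 1, 312, 3);   -- database, file id, page id, dump style
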
With partitioning, we can break a table or index into smaller more manageable chunks. We can then perform maintenance on just part of a table or index. We can even move data in and out of tables with quick and easy metadata only operations. We’ll go over basic partitioning concepts and techniques like partitioned views and full blown table partitioning. We’ll look at how partitioning affects things under the hood. Finally you'll see some cool demos/tricks around index maintenance and data movement. At the end of this session you’ll have a firm understanding of how partitioning works and know how and when to implement it.

You'll learn…

• The components of table partitioning and how they fit together

• How to make your index maintenance partition aware

• How Partition elimination can help your queries

• How to split different parts of tables over different storage tiers

• How to manage partitions. We'll demo this by implementing the sliding window technique. 
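
As a preview of the sliding window demo, here is a minimal sketch (dates and names are illustrative). The key point is that both SPLIT and SWITCH are metadata-only operations when used correctly:

    -- Monthly partitions on a RANGE RIGHT function
    CREATE PARTITION FUNCTION pfMonthly (date)
        AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-02-01', '2017-03-01');
    CREATE PARTITION SCHEME psMonthly
        AS PARTITION pfMonthly ALL TO ([PRIMARY]);

    CREATE TABLE dbo.FactSales
    (
        SaleDate date  NOT NULL,
        Amount   money NOT NULL
    ) ON psMonthly (SaleDate);

    -- Identical structure on the same filegroup, required as a SWITCH target
    CREATE TABLE dbo.FactSales_Archive
    (
        SaleDate date  NOT NULL,
        Amount   money NOT NULL
    );

    -- Slide the window: open next month, detach the oldest month
    ALTER PARTITION SCHEME psMonthly NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pfMonthly() SPLIT RANGE ('2017-04-01');
    ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSales_Archive;
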
Microsoft have been positioned as a Leader in the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms. The "new" Microsoft shows in the "new" Business Intelligence, and SQL Server 2016 provides a plethora of new tools for Business Intelligence, including R, Datazen and Power BI. There is a new emphasis on mobile, too.

In this session, we will have an overview of these tools and learn "what to use, when" so that you have a blueprint for your mobile, cloud, hybrid and on-premises Business Intelligence strategy, using SQL Server 2016 as the supporting technology.

Takeaways: SQL Server Reporting Services (SSRS) has been given the love it deserves; learn about Datazen and how it fits with SSRS and Power BI; see the new capabilities of SQL Server Analysis Services; see Microsoft R Server in SQL Server, explained; mobile business intelligence for everyone!

Join this session to get the inside scoop on SQL Server 2016 from the Business Intelligence perspective.
Digital transformation leverages the changes and opportunities of digital technologies to transform business activities, processes, competencies and models to converge for the benefit of the organisation.

How do you go from building Business Intelligence solutions as a technical team member to helping build businesses that rely on your data to become data-driven organisations? How do you become part of the Digital Transformation story?

Today’s hype about the possibilities and opportunities in Data, and particularly Big Data, mean that organisations can find it difficult to know where to start, or even how to start. Very often, businesses simply store all of their data, rather than think proactively about the data that they have, and how they could use it. In order to survive, businesses will have to move forward.

As businesses continue to get excited about the opportunities of Big Data, they will also need Data-Driven Leadership within their organisations in order to guide them effectively towards success. Now is the time for businesses to bring their data and their strategy together, using the latest technologies.

Join this session to see how you can transition from being a business intelligence technical expert, to a data-driven leader in your organisation that is going through a process of Digital Transformation. This is a rare and valuable skill, and talking the language of the boardroom means that you can have a real impact on transforming your organisation through data.
We will look at the definitions, models and metrics that are essential for a successful Data Transformation strategy using Power BI as a way of articulating useful metrics.
The biggest barrier facing new ML practitioners is the assumption that to become a serious data science practitioner, you need deep knowledge of machine learning theory, deep mathematics, formulation or training methods. In this session, Nabeel will argue that the essential skills needed are: a love for working with data, hard work, and the devotion of a good deal of time to play with lots of data and apply various ML tools on them.

Through an actual R experiment you will learn how ML works, and how to transition from a beginner to an expert through a clear roadmap.
Database professionals are always challenged with monitoring and maintenance across the environment. Be it 10 servers or 1,000, wouldn't it be easier to have a centralized solution? In this session I will walk you through the simple steps of building a centralized database maintenance and monitoring solution.


This solution will help a database professional to query all the monitored instances, collect performance metrics, run customized scripts and perform any maintenance task needed across the database environment. Talk about an easy DBA life - this is a simple solution you should definitely build.
Have you ever struggled to build an HA and DR solution for a single instance with more than 50 applications, 100 databases and 4 TB of data? Over the years, HA and DR have been a challenge with SQL Server. As the product evolved, the options for achieving a feasible HADR solution became simpler.


In this session I will walk you through various projects I have worked on over the past 8 years: starting with a basic DR solution built using log shipping, then an alternative to AGs in SQL Server 2008 using PowerShell/.NET and SAN replication, and finally the latest 4-node, two-site cluster built with Availability Groups.
During the acrimonious US election, both sides used a combination of cherry-picked polls and misleading data visualization to paint different pictures with data.
In this session, we will use a range of Microsoft Power BI and SSRS technologies in order to examine how people can mislead with data and how to fix it. We will also look at best practices with data visualisation.
We will examine the data with Microsoft SSRS and Power BI so that you can see the differences and similarities in these reporting tools when selecting your own Data Visualisation toolkit.
Whether you are a Trump supporter, a Clinton supporter or you don't really care, join this session to spot data lies better in order to make up your own mind.
Based on the popular blog series, join me in taking a deep dive and a behind-the-scenes look at how SQL Server 2016 "Just Runs Faster", focused on scalability and performance enhancements. This talk will discuss specific engine improvements, not only for awareness, but to expose design and internal change details. The beauty behind "It Just Runs Faster" is your ability to just upgrade and gain performance without lengthy and costly application or infrastructure changes. If you are looking at why SQL Server 2016 makes sense from a technical perspective for your business, you won't want to miss this session.
We will explore the details of how In-Memory OLTP ("Hekaton") is integrated into the SQL Server 2016 Engine. This includes internals of threads, data and index design, row formats, transactions and concurrency, logging, storage, and natively compiled procedures. This talk has plenty of demos including a rich use of the Windows Debugger to learn how In-Memory OLTP works. Even though this is a deep session, you will come away with a fundamental knowledge of why In-Memory OLTP can give you a 30x performance boost for OLTP applications.
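
For orientation before the session, here is a minimal sketch of the two moving parts (names are illustrative; the database needs a MEMORY_OPTIMIZED_DATA filegroup first):

    CREATE TABLE dbo.SessionState
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   NVARCHAR(4000),
        LastSeen  DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO
    -- Natively compiled: the body is compiled to machine code at create time
    CREATE PROCEDURE dbo.TouchSession @SessionId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        UPDATE dbo.SessionState
        SET    LastSeen = SYSUTCDATETIME()
        WHERE  SessionId = @SessionId;
    END;
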
Are you a developer or a systems admin and you've just been handed a SQL Server database and you've got no idea what to do with it?  I've got some of the answers here in this session for you.  During this session we will cover a variety of topics including backup and restore, recovery models, database maintenance, compression, data corruption, database compatibility levels and indexing. While this session won't teach you everything you need to know, it will give you some insights into the SQL Server database engine and give you the ability to better know what to look for.

With the introduction of SQL Server 2014 the line between SQL Server running in your data center and running in the cloud is becoming more and more blurred.  In this session we will review the features which are available with SQL Server 2014 and which integrate with Windows Azure.  After reviewing the available features we'll look at how to configure these features and how to build these features out in the real world to reduce your data center footprint quickly and easily.
In this fun session we'll review a bunch of problem implementations that have been seen in the real world.  Most importantly we will look at why these implementations went horribly wrong so that we can learn from them and never repeat these mistakes.
In this session we will be looking at the best and worst practices for indexing tables within your SQL Server 2000-2016 databases. We will also be looking into the new indexing features that are available in SQL Server 2012/2014/2016 (and SQL Server 2005/2008) and how you, the .NET developer, can make the best use of them to get your code running its best.
In this session we will review the new enhancement to SQL Server security available in SQL Server 2016 and Azure SQL DB.  These include Always Encrypted, Row-Level Security and Dynamic Data Masking as well as whatever else Microsoft has released since I've written this abstract. We'll look at how to set these features up, how to use them, and most importantly when to use them.
In this session we will review the differences between deploying Microsoft SQL Server 2016 in Microsoft Azure and on-premises from a Security, Reliability and Scalability perspective.  We'll review the common mistakes which people make when deploying SQL Server Virtual Machines to Azure which can lead to security problems including data breaches. We'll review the common performance problems which people encounter, and how to resolve them. We'll review the common scalability misunderstandings of Azure and SQL Server Virtual Machines. Join us for this fun session and learn how to improve the security, reliability and scalability of your Azure deployments of SQL Server 2016.

Are you a:
  • Production DBA that wants to know which new xEvents and DMVs/DMFs are available to get further insights into SQL Server operations?
  • Production DBA that needs to troubleshoot a performance issue, and needs to collect performance data for others to analyze?
  • Developer that needs to analyze query performance and already has basic knowledge of how to read an execution plan?
Then this session is for you! "It just works" - performance and scale in the SQL Server 2016 database engine, and what is being added to in-market versions. This session will showcase several improvements in SQL Server, focusing on the latest enhancements that address some of the most common customer pain points in the Database Engine, involving tempdb, the new CE, memory management and T-SQL constructs, as well as diagnostics for troubleshooting query plans, memory grants, and backup/restore. Understand these changes in performance and scale, and the new and improved diagnostics for faster troubleshooting and mitigation.
BPCheck came to be back in 2011 as a way to empower Microsoft support engineers onsite with customers to gain insight into best practices not being followed, and evolved to include performance-based checks. This previously internal BPCheck script was recently released to the Microsoft SQL Server GitHub and is now free to use. In this session you will learn how to leverage it for a comprehensive performance and health check of your SQL Server instance.



We will cover how to read the information, how to interpret the results, and how to use it for a quick health check or a full, comprehensive performance check.

You have created great cubes and Reporting Services reports, but how do you know if they are being used?

Learn how to set up the collection of usage data and how you can use this data in your decision making. We will cover which methods are available and their strengths and weaknesses. We will talk about how to collect the data, how to build something meaningful from it, and how you can report on top of it. We will do this for OLAP cubes and for Reporting Services reports, and we will explore ways you can develop this further for your own organization.

At the end of the session all participants will leave with all the code, as well as the know-how to get started with the collection of usage statistics for their Microsoft BI solutions.
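
On the Reporting Services side, much of the raw material is already collected for you. A minimal sketch against the default ReportServer catalog (execution history is kept for 60 days by default):

    -- Which reports were run most often in the last month, and by how many users?
    SELECT   ItemPath,
             COUNT(*)                 AS Executions,
             COUNT(DISTINCT UserName) AS DistinctUsers,
             AVG(TimeDataRetrieval + TimeProcessing + TimeRendering) AS AvgMs
    FROM     ReportServer.dbo.ExecutionLog3
    WHERE    TimeStart >= DATEADD(MONTH, -1, GETDATE())
    GROUP BY ItemPath
    ORDER BY Executions DESC;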

SQL Server Management Studio is at the heart of any SQL Server DBA or developer's day. We take it for granted, but rarely do we take a look at how we can customise or improve it to make our day-to-day work easier and more productive.

This presentation will take a look at many of the hidden features and shortcuts that you had forgotten about or didn't know were there, including some new features in SSMS 2016.

At the end of this session you will have learnt at least one new feature of SSMS that you can use to improve your productivity.
Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA



Today, SQL Server DBAs are more than likely at some point in their careers to come across Oracle and Oracle DBAs. To the unwary this can be very daunting and, at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.



This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server and hopefully make Oracle not look quite so intimidating.



At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server. 



Although you won’t be ready to be an Oracle DBA it will give you a foundation to build on.
The word Kerberos can strike fear into a SQL DBA as well as many Windows Server Administrators.



What should be a straightforward and simple process can lead to all sorts of issues, and trying to resolve them can turn into a nightmare. This talk looks at the principles of Kerberos, how it applies to SQL Server, and what we need to do to ensure it works.



We will look at:
  • What is the purpose of Kerberos in relation to SQL Server?
  • When do we need to use it?  Do we need to worry about it at all?
  • How do we configure it? What tools can we use?
  • Who can configure it? Is it the DBA's job to manage and configure Kerberos?
  • Why does it cause so many issues?
Because, on the face of it, setting up Kerberos for a SQL Server is actually straightforward - but it is very easy to get wrong, and then sometimes very difficult to see what is wrong.
It seems like every month we hear about another company having a major data breach. Ensuring that your data is secure has become more important than ever.

With this in mind, in SQL Server 2016, Microsoft has given us three new features that have the potential to improve the security of your SQL database, either on premises or in the cloud.

These are Dynamic Data Masking, Row Level Security and Always Encrypted.

This session will provide an overview of these important new security features, how to configure them and make them work.
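
As a taste of how little code is involved, here is a minimal Dynamic Data Masking sketch (table and role names are illustrative):

    CREATE TABLE dbo.Customers
    (
        CustomerId INT IDENTITY PRIMARY KEY,
        FirstName  NVARCHAR(50),
        Email      NVARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
        CreditCard VARCHAR(19)   MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')
    );

    -- Ordinary readers see masked values; the UNMASK permission reveals real data
    GRANT SELECT ON dbo.Customers TO CallCentreRole;
    GRANT UNMASK TO AuditRole;
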
It's nearly a year since SQL Server 2016 was launched, with lots of new features and functionality such as Always Encrypted, Dynamic Data Masking, temporal tables, stretch tables, Query Store, Live Query Statistics and more. If you haven't had time to explore these new features, then this session is for you. This talk gives an overview of what's new and improved in SQL Server 2016 and demonstrates some of these features.
The session covers tested migration strategies for migrating off Teradata to SQL DW on Azure. It covers multiple approaches to the problem, dependent on size, and suggests various strategies to avoid the pitfalls associated with the migration. At Cognizant we have used this to migrate clients off Teradata, resulting in huge savings for the client as well as improving the overall efficiency of their BI and analytics capabilities.
Query Store is an exciting new feature in SQL Server 2016. It can automatically capture and store a history of queries, query execution plans and execution statistics that makes troubleshooting performance problems caused by query plan changes much easier.

In this session we will examine Query Store: its architecture, its configuration, and how it can be used to solve performance problems.
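
A small sketch of the kind of thing the session builds on: once Query Store is enabled, its catalog views expose queries whose plans have changed over time.

    ALTER DATABASE CURRENT SET QUERY_STORE = ON;
    GO
    -- Queries that have more than one captured plan (i.e. plan changes)
    SELECT q.query_id,
           qt.query_sql_text,
           multi.plan_count
    FROM  (SELECT query_id, COUNT(*) AS plan_count
           FROM   sys.query_store_plan
           GROUP BY query_id
           HAVING COUNT(*) > 1) AS multi
    JOIN   sys.query_store_query      AS q  ON q.query_id = multi.query_id
    JOIN   sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    ORDER BY multi.plan_count DESC;
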
The Internet of Things is the new kid on the block, offering a wealth of possibilities for data streaming and rich analytics. Using a Raspberry Pi 3 we will take an end-to-end look at how to interact with the physical world, collecting sensor values and feeding that data in real time into cloud services for manipulation and consumption. This will be a heavily demonstrated session looking at how such an environment can be set up using Microsoft offerings, including: Windows 10 IoT Core, a C# Universal Windows Platform application, an Azure IoT Event Hub, Azure Stream Analytics, Azure SQL DB and Power BI. This is an overview of what's possible, but showing exactly how to build such a simplified solution, with a session that will be 90% demonstrations. This will hopefully add that level of excitement to real-time data, with plenty of hardware out there showing what it can do when set up with Microsoft software. In addition, spare Raspberry Pi devices will be available for the audience to pass around and look at.
In this session we'll go beyond the Azure Data Factory copy activity normally presented using the limited portal wizard. Extract and load are never the hard parts of the pipeline. It is the ability to transform, manipulate and clean data that normally requires more effort. Sadly, this task doesn't come so naturally to Azure Data Factory as an orchestration tool, so we need to rely on its custom activities to break out the C# or VB to perform such tasks. Using Visual Studio, we'll look at how to do exactly that and see what's involved in Azure to utilise this pipeline extensibility feature: what handles the compute for the compiled .NET code, and how does this get deployed by ADF? With real-world use cases we'll learn how to fight back against those poorly formed CSV files and what we can do if Excel files are our only data source. Plus lots of useful tips and tricks along the way when working with this emerging technology.
There has been an awakening. SQL Database in Azure is no longer merely a useful tool, but essential for your continued innovation. This session outlines the offerings available in Azure SQL Database and how to move your database loads away from virtual machines and towards Azure SQL Database, and gives a glimpse of the astounding force multipliers inherent in the Azure cloud. May the cloud be with you - always.
The Azure Data Lake is one of the newest additions to the Microsoft Azure Cloud Platform, bringing together cheap storage and massive parallel processing in two quick-to-setup and, relatively, easy-to-use technologies. In the session we will dive into the Azure Data lake.

First of all, exploring:
  • Azure Data Lake Store: how is data stored? What are the best practices?
  • Azure Data Lake Analytics: how does the query optimiser work? What are the best practices?

Second, a practical demonstration of tuning U-SQL: partitioning and distribution, and job execution in Visual Studio.
Have you ever wondered how to design your data warehouse so that users can get the information they want efficiently and effectively? This session covers ways to make your fact tables work well for you and your users. It will answer such questions as:

  • How do I make it easy for users to get the information they want?
  • How can good design make my data warehouse or cube perform better?
  • What are the different types of fact tables and when should I use them?
  • What do we mean by grain? Why is it important?
  • Why and how should I use surrogate keys?
  • Which data types should I be using?

If you're not sure whether your fact table is transactional or accumulating, whether it's a bridge or factless, or if it contains degenerates, then this session is for you! "If the facts don't fit the theory, change the facts." - Albert Einstein
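
To make the vocabulary concrete, here is a minimal, hypothetical transactional fact table: one row per order line (the grain), compact integer surrogate keys into the dimensions, and a degenerate dimension kept on the fact row itself.

    CREATE TABLE dbo.FactOrderLine
    (
        DateKey     INT           NOT NULL,  -- surrogate key into DimDate
        CustomerKey INT           NOT NULL,  -- surrogate key into DimCustomer
        ProductKey  INT           NOT NULL,  -- surrogate key into DimProduct
        OrderNumber VARCHAR(20)   NOT NULL,  -- degenerate dimension
        Quantity    INT           NOT NULL,
        Amount      DECIMAL(12,2) NOT NULL
    );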

What is Azure SQL Data Warehouse, and why and how should I use it? Here we will look at the fundamental basics to answer these questions and more, giving you the foundation knowledge to get started. We will look at key design concepts, query patterns, and loading mechanisms for getting data into and out of Azure SQL Data Warehouse. By the end of this session, you will be able to start using Azure SQL Data Warehouse and avoid its major pitfalls.
Azure SQL DB has had Row-Level Security and Dynamic Data Masking for a while now; SQL Server 2016 brings them on-premises. But just how can you use them, and what changes do you need to make to your model to get the most from them? These new features have the potential to really improve application security, especially in compliance scenarios. Pushing the security restrictions down into the database layer has many benefits, notably that only the data needed leaves the database. But there are a number of gotchas that you need to be aware of that can really mess up performance if you get them wrong. In this session we will look at how you can get the most out of these features while retaining the performance of your system.
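
A minimal Row-Level Security sketch (a hypothetical multi-tenant dbo.Orders table with a TenantId column; the Security schema is created for the purpose). Note that the predicate is an inline table-valued function - swapping in a multi-statement function here is exactly the kind of performance gotcha the session covers:

    CREATE SCHEMA Security;
    GO
    CREATE FUNCTION Security.fn_TenantPredicate (@TenantId INT)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed
           WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS INT);
    GO
    CREATE SECURITY POLICY Security.TenantFilter
        ADD FILTER PREDICATE Security.fn_TenantPredicate(TenantId) ON dbo.Orders
        WITH (STATE = ON);
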
When planning your SQL Server deployments there are a lot of different options and configurations to choose from, but which ones are for you? Here I will discuss and highlight some of the key recommended practices that you should put in place when you build your SQL Servers, and ways you can automate these in order to standardize your server deployments, covering:
  • Trace Flags, Configuration Options and Design Choices
  • Maintenance and Monitoring essentials
  • How to select the right High Availability or Disaster Recovery option
  • Performance Tuning and Troubleshooting methods
SQL Server normally relies on concurrency models in order to maintain the Isolation in ACID. As systems scale, this method can go from being a benefit to a hindrance, limiting the concurrency of our applications and hurting performance. Throwing hardware at the problem can help, but does not address the fundamental issues. These can only be resolved with better database design and the correct configurations. Join John as we work through the different design patterns that exist for increasing concurrency, from using specific data types, and partitioning, to how In-Memory OLTP can help. By the end of this session you will understand the different methods available to boost concurrency regardless of the size of your environment.
Have you ever wondered how to design your data warehouse dimensions so that users can get the information they want efficiently and effectively? This session covers tried and tested ways to handle time in your dimension tables. It will answer such questions as:

  • How should I store historical values in my dimension tables?
  • How should I design my Date and Time dimensions? Should I combine or separate them?
  • What are SCD Types 0, 1, 2, 3, 6, 7? How and when should I use them?
  • My users want the warehouse to do everything and be easy to use - help!

If you can't tell your SCD Type 2 from your Type 6, or how best to distinguish this morning from last month, then this session is for you! "Time is what keeps everything from happening at once." - Ray Cummings
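
As a taste of the Type 2 pattern, a minimal two-step sketch (DimCustomer and stg.Customer are illustrative): expire the current row when a tracked attribute changes, then insert the new version.

    -- Step 1: close off changed rows
    UPDATE d
    SET    d.ValidTo = SYSDATETIME(), d.IsCurrent = 0
    FROM   dbo.DimCustomer AS d
    JOIN   stg.Customer    AS s ON s.CustomerCode = d.CustomerCode
    WHERE  d.IsCurrent = 1
      AND  (s.City <> d.City OR s.Segment <> d.Segment);

    -- Step 2: insert the new current version for new and changed customers
    INSERT dbo.DimCustomer (CustomerCode, City, Segment, ValidFrom, ValidTo, IsCurrent)
    SELECT s.CustomerCode, s.City, s.Segment, SYSDATETIME(), '9999-12-31', 1
    FROM   stg.Customer AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS d
                       WHERE d.CustomerCode = s.CustomerCode AND d.IsCurrent = 1);
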
What is the future going to look like? When are we going to reach true Artificial Intelligence? Is the Singularity going to happen? There is a lot of talk today about AI and what it means for human society. Let's forget about the future for an hour and focus on what is possible today. We are going to look at the most promising area in AI research, Deep Learning, and understand how it fits into the wider picture of Machine Learning. Be prepared for some maths, loads of graphs and deep learning in action.
Ever feel like you are just doing busy work while creating new SSIS packages? Feel like you are doing the same thing over and over while changing the names to protect the innocent? Ever wonder if there is a better way? Well wonder no more. Come learn about the magical world of BIML and how it can help transform your environment by increasing your productivity while reducing the possibility of errors. Come with intrigue and leave with a fundamental understanding of BIML!
We have more information available to us today than ever before. So much so that we run the risk of not being able to tell concise stories. There's a lot more to creating that story than just getting the correct information. Come learn not just the do's and don'ts, but the whys…
Being a BI Professional, you need all the performance tuning the DB folks get and more. In this hour, we will go over important performance tuning tips that you can use to help make your deliverables faster and more effective. We will touch on the MSBI tools of SSIS, SSAS, SSRS and PowerBI as well as some core engine stuff.
I always fancied things mechanical and electrical, but being a DBA is more about working on virtual nuts and bolts. Inspiration struck when IoT spread like wildfire with all kinds of applications: a simple thought of why not be a cool DBA and build something at the click of a button.


In this session, I will show you some cool interfacing between my Raspberry Pi and SQL Server. I will show what a click of a button (not a virtual one :)) can do for a DBA. This is just the beginning, and there are no limits to what else can be done. Take this idea away from the session and use your imagination to build something cool and innovative.
You know the situation: a query that yesterday still ran quickly and satisfactorily suffers from performance problems today. What do you do in such a situation?

- you restart SQL Server (it worked every time before)
- you drop the procedure cache (a DBA told you to)
- you get yourself a coffee and think about what you learned in this session

Microsoft SQL Server requires statistics to build ideal execution plans. If statistics are not up to date, Microsoft SQL Server may create execution plans that run a query many times slower. Beyond a basic understanding of statistics, this session shows special situations that are known only to a small group of experts.

After a brief introduction to the functionality of statistics (Level 100), the special query situations that lead to wrong decisions, unless you have the experience, become immediately apparent. The following topics are covered with a large number of demos:

- When do statistics get updated?
- Examples of estimates, and how they can go wrong
- Outdated statistics and ascending keys
- When do outdated statistics get updated for unique indexes?
- Drawbacks of statistics on empty tables

Follow me on an adventurous journey through the world of statistics in Microsoft SQL Server.
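
If you want to poke at statistics before the session, the basic tooling looks like this (object names are illustrative):

    -- Header, density vector and histogram of one statistics object
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');

    -- The ascending key problem: rows inserted beyond the last histogram step
    -- are nearly invisible to the optimizer until the statistics are refreshed
    UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;
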
In this session we'll look at a range of SQL Server tips, tricks and misconceptions, ranging from T-SQL coding and query plans to statistics, and loads in between.

We'll take a look at some clever T-SQL coding practices, and some key things to look for in query plans and how to resolve them. We'll also look at statistics and go through a list of things to look out for.

As well as this, there will be a whole load of other things to look at, from indexing to server settings.

Delivered in a fun and light-hearted manner, the emphasis of the session will be on learning lots of little takeaways that can be applied to your environments and practices when you get back to work.
This beginner-to-intermediate session is aimed at those who want to get a good working knowledge of query plans in SQL Server.

In the first half we will start off with the basics of reading query plans and understanding them. We'll learn what to look out for and how to spot issues.

In the second half we'll gradually move into more intermediate topics such as costing and encouraging SQL Server to make different choices. We'll also start to learn how the query optimiser makes its decisions. This session is ideal both for those who are just starting out with query plans and for those who are already familiar with them.

Attendees will leave the session with a good working knowledge of query plans, being able to read and interpret them and make informed decisions about their SQL Coding
SQL Server 2016 is a class-leading Tier One product, but how do you get customers to upgrade and migrate? How can you address the "so what, SQL Server does everything I need now" attitude?

For the last year, Mike Boswell has been working on migrating a leading UK public sector customer to SQL Server 2016. We have had to go deep on performance and support the business argument.

This chalk 'n' talk session will dive into how various groups in Microsoft, including SQL CAT, were used to change the customer perception of "we are just fine with legacy SQL Server", and give you ideas on what you need to look into.

We talk about the areas you need customers to focus on and what is likely to become a blocker. Performance comparisons, implementation of new features, and bringing the Oracle DBAs to understand the differences are just some of the challenges you will face. I'll tell you how we failed at first and how we turned it all around.

We will look at the new CE, where In-Memory OLTP is a good fit, indexing, testing, checkpoints, how "it just goes faster" helps workloads, performance counters, KPIs, business discussions, insight into migration planning, and how to even work with an offshore model!
SQL Server 2016 brings us many new features, but one of the most anticipated is surely the Query Store. The Query Store allows us to track query plans as they change over time, giving us a whole slew of new possibilities when it comes to tuning our queries. Even just the ability to compare a previous plan to a new plan is a huge step towards understanding what may be happening in our instance. We can even tell the optimizer which plan we want it to use. All of these things were either extremely difficult or, in some cases, impossible to do before. This session will give you the insight to get started with this new and wonderful feature set.
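
For reference, plan forcing itself is a one-liner once you have the IDs from the catalog views (the ID values below are illustrative):

    -- Which plans has Query Store captured for this query?
    SELECT plan_id, is_forced_plan, last_execution_time
    FROM   sys.query_store_plan
    WHERE  query_id = 42;

    -- Pin the plan you trust, and release it once the regression is understood
    EXEC sys.sp_query_store_force_plan   @query_id = 42, @plan_id = 7;
    EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
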
During this session we'll take an "under the bonnet" look at query plans in SQL server. We'll look at how the components work in different versions of SQL Server and the things to look out for that could cause you problems in your queries.

We'll also take a look at how the query optimiser makes its decisions and how to influence it to make better choices.
What do SQL Azure, DocumentDB, Cortana, Power BI, Skype, and the MMOG Age of Ascent all have in common? Answer: they are all built on Service Fabric, which gives limitless scale, auto-healing, auto-balancing, and zero-downtime upgrades.

Azure could not exist without Service Fabric... but this wonder sauce is free for you to use to build hyperscale apps now, and deploys seamlessly on Azure, AWS, and on-prem - so it's not just for Azure.
 
Many Service Fabric apps store their data in the cluster - this isn't "NoSQL"... it is literally "NoDB" - in stateful services achieving tiny latencies even at massive scale. Age of Ascent at peak handles 267 million messages per _second_ - impossible to imagine without Service Fabric.

In this all-demos session we'll spin up a Service Fabric cluster and showcase these capabilities - and explore the challenges of data storage without a database in apps that many of us are building now and many of us will be managing soon. A truly interesting and eye-opening session.
Where do you begin with step 1 of data science on the way to becoming a data scientist? What are these data scientists up to? This is an introductory data science session for novices: learn how easily you can step into data science and start to become a professional.

How can we see and try using a data scientist's statistical model in a day-to-day familiar tool like Microsoft SQL Server?

Thanks to Microsoft integrating Revolution R within SQL Server 2016, we all now have the opportunity to use R packages and see the results within SQL Server 2016, and to utilize them in any applications and/or Reporting Services.
When was the price of an article changed, and what was the original price? How has the price of an article developed over a period of time? Until now, developers had to build their own solution with the help of triggers and/or stored procedures. With temporal tables, an implementation is ready in a few seconds - but what are the special requirements? In addition to a brief introduction to the technology of Temporal Tables, this session provides an overview of all the special features associated with them.

- Renaming tables and columns
- Temporal Tables and triggers?
- Temporal Tables and In-Memory - can this go well?
- Can computed columns be used?
- How to configure security
- ...
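
For a first impression, a minimal sketch of a system-versioned table and a point-in-time query (names and dates are illustrative):

    CREATE TABLE dbo.Article
    (
        ArticleId INT PRIMARY KEY,
        Price     DECIMAL(10,2) NOT NULL,
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ArticleHistory));

    -- The price of article 42 as of a date in the past
    SELECT Price
    FROM   dbo.Article FOR SYSTEM_TIME AS OF '2017-03-01'
    WHERE  ArticleId = 42;
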
Ever found yourself deconstructing endless layers of nested code? Is your T-SQL codebase written in an object-oriented format with functions & views? Did you know that object-oriented code reuse can come with a significant penalty?  

In this session, learn how T-SQL is not like other common programming languages. We will peek inside the Query Optimizer to understand why applying object-oriented principles can be detrimental to your T-SQL's performance. Extensive demos will not only explore solutions to maximize performance, you will also be introduced to a T-SQL tool that will aid you in unraveling nested code.
Though datatypes are fundamental, they are often disregarded or given little thought. But did you know that poor data type choices can have a significant impact on your database design and performance? 

Attend this session to learn how database records are stored within SQL Server and why all data types are not created equal. Armed with that knowledge, we will explore several performance scenarios that may be impacting your systems right now! When you leave, you will be able to explain to your colleagues why data type choices matter, assess your own systems, and implement some best practices to mitigate these performance killers.
Do you know if your database's indexes are being used to their fullest potential? Know if SQL Server wants other indexes to improve performance? 

Come and learn how SQL Server tracks actual index usage, and how you can use that data to improve the utilization of your indexes. You will be shown how to use this data to identify wasteful, unused, and redundant indexes, and see some of the performance penalties you pay for not addressing these inefficiencies. Finally, we will dive into the Missing Index DMV and explore the art of evaluating its recommendations to make proper indexing decisions.
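
As a starting point, the core DMV involved can be queried like this (a minimal sketch; note that the counters reset on instance restart):

    -- Reads versus writes per index: heavily updated but never-read indexes
    -- are candidates for removal
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
    FROM   sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
           ON us.object_id = i.object_id AND us.index_id = i.index_id
          AND us.database_id = DB_ID()
    WHERE  OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY us.user_updates DESC;
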
SQL Server’s memory-optimized tables support two new kinds of indexes: hash and range. How are these indexes different than the B-tree indexes on your disk-based tables and how do they help in-memory OLTP achieve such blazing-fast performance? Goals:
  • Understand the internal structure of hash and range indexes
  • Discuss the pros, cons, and best practices for both types of indexes
  • Examine the metadata that shows how the indexes on memory-optimized tables are being used.
Where do the estimated rowcount values come from? Look inside SQL Server’s distribution statistics to see how they are used to come up with the estimates. We’ll also discuss changes in the cardinality estimator in recent versions and look at some new metadata that gives us more statistics information.  Goals:
  • Explore the output of DBCC SHOW_STATISTICS
  • Describe when the density information is useful
  • Look at some problem scenarios for which the statistics can’t give good estimates
  • Understand why cardinality estimation involves more than just the statistics
This session covers various T-SQL tips and tricks that illustrate creative ways to handle querying tasks elegantly and efficiently. Examples of the topics that will be covered: unusual search arguments, batch-mode processing over rowstore, string splitting and string aggregation, join reordering, and the AT TIME ZONE function.
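
Two of the constructs mentioned above in miniature (both require SQL Server 2016, and STRING_SPLIT additionally needs database compatibility level 130):

    -- String splitting without a hand-rolled splitter function
    SELECT value FROM STRING_SPLIT(N'red,green,blue', N',');

    -- Time zone conversion driven by the Windows time zone registry
    SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'Central European Standard Time';
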
We will take the top 25 most common database problems and combine them in an action-packed, demo-full session. An impartial observer picked from the crowd chooses a problem, which we will handle; then another picks a new one, until the time is up. For each problem, we will show you how it arises, what the impact is on your SQL Server, and how you can fix it, short term and long term.
For most DBAs and DEVs, TempDb is a crystal ball. Yet TempDb is the most critical component in a SQL Server installation, used by your applications and also internally by SQL Server. TempDb is also one of the performance bottlenecks by design, because it is shared across the whole SQL Server instance. In this session we will take a closer look at TempDb, how it is used by SQL Server, how you can troubleshoot performance problems inside TempDb, and how you can resolve them.
SQL Server needs its locking mechanism to provide the isolation aspect of transactions. As a side-effect, your workload can run into deadlock situations - a headache for you as a DBA is guaranteed! In this session we will look at the basics of locking and blocking in SQL Server. Building on that knowledge, you will learn about the various kinds of deadlocks that can occur in SQL Server, how to troubleshoot them, and how you can resolve them by changing your queries, your indexing strategy, and your database settings.


UNIQUEIDENTIFIERs as Primary Keys in SQL Server - a good or bad best practice? They have a lot
of pros for DEVs, but DBAs just cry when they see them enforced by default as
unique Clustered Indexes. In this session we will cover the basics of
UNIQUEIDENTIFIERs, why they are bad and sometimes even good, and how you can find
out whether they affect the performance of your performance-critical database. If
they are affecting your database negatively, you will also learn some best
practices for resolving those performance limitations without changing
your underlying application.
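Two of the usual checks and fixes, sketched with placeholder object names:

  -- How fragmented is the GUID clustered index?
  SELECT avg_fragmentation_in_percent, page_count
  FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Customers'), 1, NULL, 'LIMITED');

  -- One common mitigation: sequential GUIDs generated on the server.
  ALTER TABLE dbo.Customers
    ADD CONSTRAINT DF_Customers_Id DEFAULT NEWSEQUENTIALID() FOR CustomerId;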


You know locking and blocking very well in SQL Server? You know how the isolation
level influences locking? Perfect! Join me in this session to take a further
deep dive into how SQL Server implements physical locking with lightweight
synchronization objects like Latches and Spinlocks. We will cover the
differences between the two, and their use cases in SQL Server. You will learn
best practices for analyzing and resolving Latch and Spinlock
contention for your performance-critical workload. At the end we will talk
about lock-free data structures, what they are, and how they are used by the
new In-Memory OLTP technology that is part of SQL Server 2014/2016.
Plan Caching is one of the most powerful concepts in SQL Server. But on the other hand it's
also one of the most dangerous, because it can lead to queries that are executed with a
completely wrong Execution Plan. In this session we will take a more detailed
look into the Plan Cache of SQL Server, the different ways SQL Server can
cache Execution Plans in the Plan Cache, and how you can troubleshoot badly
performing queries directly from the Plan Cache.
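For reference, the core pattern for reading the Plan Cache looks roughly like this:

  SELECT TOP (20)
         cp.usecounts, cp.objtype, st.text, qp.query_plan
  FROM sys.dm_exec_cached_plans AS cp
  CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
  CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
  ORDER BY cp.usecounts DESC;   -- most reused plans first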
SqlCmd and PowerShell are two things a lot of SQL guys and developers shy away from. But really they are simple to pick up and often far quicker and easier to use than .NET or T-SQL coding. In this session we’ll look at the basics of PowerShell to get you up and running quickly. Then we’ll look at how to use SqlCmd from the command line, through PowerShell and also in SSMS. Finally we’ll put it all together to see how you can create as complex a script as you like to do any kind of scheduled or repetitive task on your SQL Data Platform. Lots of examples to take away and plagiarise at a later date!
One of my favorite new features in SQL Server 2016 is the Query Store. The Query Store houses valuable information on the performance of your queries and gives you great insight into your query workload. This presentation will take a look at the Query Store, how it works, the type of information it holds, and how you can use it to analyze performance issues. New DMVs, stored procedures, wait types and extended events will be introduced, and the performance impact of enabling the Query Store will be discussed. Several demos will tie it all together. Both DBAs and developers can increase their performance tuning skills by attending this session.
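To set expectations, enabling the Query Store and reading its catalog views takes a one-liner plus a join (a sketch, not the session's demo code):

  ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

  SELECT TOP (10) qt.query_sql_text, rs.count_executions, rs.avg_duration
  FROM sys.query_store_query_text AS qt
  JOIN sys.query_store_query AS q          ON q.query_text_id = qt.query_text_id
  JOIN sys.query_store_plan AS p           ON p.query_id = q.query_id
  JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
  ORDER BY rs.avg_duration DESC;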
Performance tuning can be complex. It's often hard to know which knob to turn or button to press to get the biggest performance boost. In this presentation, Janis Griffin, Database Performance Evangelist, SolarWinds, will detail 12 steps to quickly identify performance issues and resolve them. Attendees will learn how to quickly find which SQL statements to tune by using a technique called Response Time Analysis. Several DMVs will be discussed, as well as common wait types, to learn how to reduce the amount of time a query spends in the database. Participants will learn how to quickly fine-tune a SQL statement by using SQL Diagramming techniques to find the best execution plan. Also, several performance inhibitors will be discussed to help avoid future performance issues. Finally, attendees will be able to recognize and understand how new SQL Server features can help improve query performance.

Did you ever want to be a detective? Does searching for clues and deciphering hints sound like a dream come true? If you answered yes to any of the questions above, this session might just be the thing for you!

Like true detectives, we will analyze the performance clues SQL Server gives us in the form of Wait Statistics. Using these Wait Statistics we can start to unravel the mysteries surrounding some of the worst SQL Server performance crimes and learn how to solve them!

This session, filled with examples and demos, will give you insight into how SQL Server scheduling works, why requests sometimes have to wait, and what they are actually waiting for. Most importantly, you will learn various methods of analyzing this information to help you solve performance problems and bottlenecks on an instance, session and even query level!
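The raw material for this kind of analysis is a simple DMV query; a minimal sketch (the exclusion list of benign waits is abbreviated here):

  SELECT TOP (10) wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
  FROM sys.dm_os_wait_stats
  WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                          N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
  ORDER BY wait_time_ms DESC;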
When you were a kid, did you enjoy taking things apart to look at their inner workings? If you did, or perhaps still do, this session might be just the thing for you! Combining SQL Server and Windows internals with our curiosity on how things work, we will take a look underneath the hood of the SQL Server Engine by
debugging it.

What caused SQL Server to crash? How does the SQL Server Engine return rows from a table? These are just two questions we can only answer by debugging, and in this session we will go through the various methods and tools available to us to debug SQL Server. A large part of the session will be spent on demonstrations where we will walk through the SQL Server call stack together - debugging a live SQL Server process, analyzing a stack trace, or viewing the contents of a crash dump.

Don’t worry if you are not a programmer or don’t have a developer background. We will go through this session at an easy pace without causing (too many) headaches. 
With the introduction of the Query Store feature in SQL Server 2016, analyzing query performance became a whole lot easier for everyone. But where does the Query Store record its wealth of query performance metrics, and how can we access it? In this session we will focus on retrieving, interpreting and visualizing the data inside the Query Store.

Not only will we look at the built-in reports the Query Store provides by default for analyzing query performance, we will also explore alternative methods, like custom performance dashboards, the sp_WhatsupQueryStore stored procedure or replaying a query workload through the Query Store Replay script. We will use all these different tools and methods in real-world use cases, for example, to minimize the performance impact of changing database compatibility levels. 


After this session you will have all the information you need to leverage the power of the Query Store for analyzing query performance, minimizing the impact of database migrations, and developing your own custom Query Store solutions.
This session is not only for people working with Master Data, but also for everyone working with Business Intelligence. With Master Data Services 2016 it's now easy to handle all your dimension data, including Type 2 history. In this session you will get a brief introduction to the basic principles of Master Data Services, together with an overview of all the new features brought to you with Master Data Services 2016.
You will learn about features like:
  • New features and better performance in the Excel Add-In
  • Track Type 2 history
  • Many-To-Many hierarchies
  • New security and administrator capabilities
  • New approval flows
If you are using Master Data Services or are thinking about it, this is the session you cannot miss.
With the release of Azure Analysis Services we have, at long last, got a crucial piece of the Microsoft BI stack available in the cloud. Azure Analysis Services is a PaaS version of Analysis Services, and in this session you’ll learn about:
  • What Azure Analysis Services is
  • When you should use it
  • Configuring Azure Analysis Services in the Azure portal
  • Developing and deploying Azure Analysis Services models
  • Building reports in Excel and Power BI using Azure Analysis Services
  • Connecting to on-premises data sources from Azure Analysis Services
  • Automation with PowerShell
  • Integration with other cloud BI services
  • Sizing and pricing
  • Monitoring
Always Encrypted was added to SQL Server to allow developers and DBAs to easily add security to their applications. Come learn how to design and implement Always Encrypted, including a discussion of the requirements, capabilities, and limitations for your environment. We will discuss the places where this encryption mechanism makes sense, show the ease with which it can be added to an application, and point out the potential architectural issues you may encounter.
They say the first million is the hardest - I may never know.  I gave up the promise of future riches because the data told me to. I left a job I loved in a company that loved me because I wanted more.  More technology, more challenge. More data.

The start-up I built is happening *now* and I want to tell you about it warts and all. 

This is a lean start-up, making it hard to predict April in December. Whatever I tell you though, it will be lean, it will be SQL, it will be Azure and most of all it will be about the data.

Join me.
DevOps is changing today's software development world by helping us build better software, faster. However, many organizations struggle to include their database changes in their application deployment. In this session, we will examine how the concepts and principles of DevOps can be applied to database development by looking at both automated comparison analysis and migration script management. We will cover using branches and pull requests for database development while performing automated building, testing, and deployment of database changes to on-premises and cloud databases.
One of the ways we deploy our database changes in our environments is through the use of migration scripts. These are ordered scripts that replay development work on QA, Staging, UAT, Production, and other environments. This session will examine how a migration script system works and show the tips and tricks needed to smoothly integrate this into your database development work.
This session will explain and demonstrate the concepts of a migration script strategy. You will learn how to set up and maintain migration scripts in a team environment with version control. We will also cover tricks for responding to changing requirements and managing the code in your scripts.
Everyone wants a dream job that they enjoy going to each week. However, finding that job, and getting yourself hired, can be hard for most people. Steve Jones will give you practical tips and suggestions in this session that show you how to better market yourself, how to get the attention of employers, and how to improve the chances that you will get an offer for the job you want.

We will showcase the practical ways you can begin networking, blogging, demonstrating leadership and learning, and giving back. Learn ways that you can begin improving your brand and career immediately.
Many database developers aren't familiar with some of the common, modern software development practices. Come learn how git, the most popular version control system today, works. Learn about setting up branches for database development and making pull requests to merge code into a central development line for deployment to your test and production environments.

We will examine how the command line as well as Visual Studio and other IDEs can work with your database and a version control system. Learn how you can get started today on a small project with Visual Studio Team Services and TFS/git for tracking code.
When you need to extract data from the database, you write more or less complex T-SQL code. Often a simplistic, procedural approach reflects what you have in mind; however, it can negatively impact performance, because the database engine might think otherwise. Fortunately T-SQL, as a declarative language, allows us to ask for the "what" and delegate the "how" to the engine. Everything works best as long as you respect a few simple rules and use the right constructs. In this session, with few slides and a lot of real-case scenarios, you will see the advantages of writing queries for high performance, even when they are written by that "someone else" called an ORM.
Ever wanted to convince the boss to try something new, but didn't know where to start? Ever tried to lead your peers, only to fail to achieve your goals? This session teaches you the eight techniques of influencing IT professionals, so that you can innovate and achieve change in your organization.
1. Learn about the fundamental difference between influence and authority and how you can achieve a high degree of influence without explicit authority.
2. Learn the eight techniques of influencing IT professionals, when to apply them, and how to best use them.
3. Discover the communication and procedural techniques that ensure your ideas get a hearing by bosses and peers, and how to best win support for them. 
Prerequisites: Basic interpersonal communication skills and command of the English language.   

You already know a thing or two about tuning a SQL query on Microsoft SQL Server. You can read an execution plan and know the most significant red flags to look for. And you have also learned about the important information revealed by SET STATISTICS statements. But you want to take it up another level!

In this session, go even deeper into query tuning with three new lessons. First, we’ll examine a few new and seldom-used features inside SSMS specifically for query performance. Second, we’ll spend a bit of time learning query-related DMVs and how to read query plans directly in XML. Finally, we will discuss and demo a set of powerful Trace Flags, the 8600 series, that reveal additional details about how the query optimizer behaves as it processes a query.
Most companies have limited enterprise IT development resources and frequently fill the gaps by purchasing apps from Independent Software Vendors (ISVs). Most ISVs are motivated by the need to make their products work against multiple database backends, which makes specialized, high-performance code more difficult, in addition to their desire to get products to market quickly. So what are the metrics and instrumentation that a SQL Server pro can use to determine the overall quality of such products without doing lengthy code reviews? The good news is that it’s possible to gauge the overall quality of a purchased app by looking at a short list of PerfMon counters, wait statistics, and DMV queries. In this session, we’ll reveal the analysis required to rapidly determine vendor product quality, and methods, both technical and nontechnical, to deal with a poorly performing product that’s already in production, including dealing with stifling maintenance agreements / vendor support teams, and SQL Server plan substitution.
The major new features of a new release always capture the headlines of SQL Server blogs. You also probably know that countless bug fixes ship with each new release of SQL Server. But many people do not realize that very specific upgrades and improvements are put into play with new releases of SQL Server. SQL Server 2016 has loads of improved internals, in fact, too many to cover in an hour. So we’ll go on a fast-paced exploration of improvements like faster bulk insertion, Availability Groups transport and compression improvements, multiple logging improvements like LDF stamping, faster checkpoints, updates to the scheduling algorithms, DBCC improvements like MultiObjectScanner enhancements, and automatic soft NUMA.
Having multiple data visualizations doesn't make it easier to choose the right one to reveal the story. Choosing the wrong visualization can obscure the story - or worse yet, distort it! In this session, we start by exploring how telling a story with data visualizations is different from creating a report. Then we explore the vocabulary of data visualization and learn how to apply grammar (visualization design principles) to your data. Along the way, you will also learn how to evaluate the goal of your data story and how to choose the correct visualizations that communicate this story accurately and effectively.
Baffled about why or how to use Master Data Services? In this session, you learn the basics of Master Data Services (MDS) in the context of a simple case study in which users maintain a data set used to support data analysis. You start by learning how to build a development environment. Then we review your options for loading data into MDS, how to update or delete data, and how to integrate MDS with a data warehouse using SSIS. Last, you learn how to deploy a model to a production environment and how to deploy changes.

Throughout this session, we review the pros and cons of performing these tasks with the user interface. We also explore the various data structures under the covers to understand how to prepare data for loading and how to troubleshoot problems in the data. If you are using flat files or spreadsheets to obtain data from users, this session teaches you how to get started quickly and easily with an easy-to-manage, auditable, securable, and centralized solution for that data. 
It’s generally accepted that creating and maintaining ETL packages is one of the more time-consuming steps in the data warehouse development process. Which would you rather do with your time? Spend seemingly endless hours working through repetitive SSIS package development tasks and more hours tweaking those packages as schemas evolve? Or invest in a framework that gracefully adapts to changes and updates your package designs in minutes so you can spend your time solving bigger problems or expanding the scope of your data warehouse?

In this session, you’ll learn about Business Intelligence Markup Language (Biml), your secret weapon for saving time on SSIS package development. We’ll start by learning about the history of Biml, the tools you can use to work with Biml, what it looks like, and the problems it’s designed to solve.

Then we’ll dive into the syntax of Biml by building out a simple SSIS package step by step. You’ll learn the structure of a Biml file and how to generate a package that you can view in Business Intelligence Development Studio (BIDS) or SQL Server Data Tools for BI (SSDT-BI).

Next, we’ll explore how to use BimlScript to automate package development. You’ll learn how to use control blocks to conditionally generate sections of Biml or even complete Biml files. For example, you can programmatically read a collection of tables from your source and generate a set of extraction packages to extract data from those tables into a corresponding staging table, among other capabilities that you’ll learn about in this portion of the workshop. You’ll also learn how to break up Biml instructions into multiple files and how to use directives to manage the way in which Biml is interpreted across multiple files.

With these building blocks in place, we’ll examine a simple framework for using Biml to create a package to support the full ETL process from metadata. You’ll learn how easily Biml can update packages to reflect changes in your source or target schemas. In addition, you’ll have a set of example Biml files that you can adapt to fit your environment and start saving time on your package development efforts!
Have you ever considered a situation where a Columnstore Index can be quite the opposite of what one would expect from it? A slow, wasteful source of painfully slow queries, lagging performance, consuming an irresponsible amount of resources…

Setting the wrong expectations (it won’t run 100 times faster on EVERY query), selecting the wrong architecture (partitioning by hundreds of rows instead of millions), using and aggregating large strings in the fact tables – the list is actually quite long.

What about some of the lesser-known limitations for building Columnstore Indexes? The ones that will bite you suddenly in the middle of the project – when you do not expect it at all?

Let me show you how these painful mistakes are made, so you will surely know how to avoid them.
Announced at PASS Summit 2016, the Graph database engine extension for SQL Server should be in public preview before SQLBits, when more details will be shared.

Supporting CRUD operations for node and edge creation, and integrated into the SQL Server vNext engine, the query language extension will support multi-hop navigation using join-free pattern matching.
With increasing speed in relational query execution, classical analytical solutions are challenged more and more. Why lose time processing data into multi-dimensional databases? Why analyze outdated data if you can have fresh data instead?

We will analyze typical scenarios from classical multi-dimensional analysis, like YTD calculation, DistinctCount and others, with regard to their efficiency under different solution approaches: classical multi-dimensional databases in ROLAP mode, DirectQuery, T-SQL… And we will show how Columnstore indexes influence those solutions. Find out about the advantages and disadvantages of the different solutions for the problem at hand. And maybe you will discover new approaches for your own challenges.

Prerequisites: SSAS knowledge, basic SSAS performance tuning, plus relational indexing basics and a good understanding of relational Columnstore capabilities.
Learn what's new for Columnstore Indexes in SQL Server 2016, Service Pack 1 and naturally the upcoming vNext release of SQL Server!

Diving into the details of the implementations, the session will also focus on the solution scenarios where one should use which type of index, and how to tune Columnstore Indexes.
Diving into the HTAP (Hybrid Transactional Analytical Processing) advantages and limitations and some of the newest and upcoming features, the main idea of this session is to keep the SQLBits attendees up to date with what has happened in the world of Columnstore since the last edition, which took place in Liverpool in 2016.

Additionally, this session will clarify the Columnstore Index limits in the different editions of SQL Server, such as MAXDOP restrictions or the exact Batch Mode engine capabilities that are supported in Enterprise Edition only.
When R is installed on SQL Server 2016, there are new settings and configurations the DBA needs to know about to ensure that SQL Server is not adversely impacted when R is run on the server. This session will show which settings need to be changed to monitor R, and will provide tools to assist in the process. The components installed and their interaction with SQL Server will be explored to provide a better understanding of which processes are running when, and what their impact is on performance. Attendees will learn what R code needs to include to run not only in memory but also to use the ScaleR processes of R Server to swap to disk when all memory is in use. The maintenance requirements for implementing R code are reviewed so that DBAs will know what steps are involved in supporting R running on SQL Server. By default, SQL Server allows R users to utilize server resources at will without having any R code installed on the server. Learn about this process and how to restrict it. If you are planning on running R on your SQL Server 2016 instance, you need this session to configure the server to ensure optimal performance for SQL Server and R.
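For orientation, the basic switches involved look like this (a sketch; on SQL Server 2016 the 'external scripts enabled' option also requires a restart before it takes effect):

  EXEC sp_configure 'external scripts enabled', 1;
  RECONFIGURE;

  -- A trivial round-trip through the R runtime to verify the setup:
  EXEC sp_execute_external_script
       @language = N'R',
       @script   = N'OutputDataSet <- InputDataSet;',
       @input_data_1 = N'SELECT 42 AS answer;'
  WITH RESULT SETS ((answer INT));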
The visualizations available in Power BI can be expanded by using R. Some R visuals have been provided by Microsoft without you even having to write R code. In this session, attendees will learn which components need to be installed to support R in Power BI and what it takes to include them within Power BI reports. To provide the best use cases for these visuals, limitations and functionality are discussed, to ensure attendees know when these visuals are the best tool and, perhaps more importantly, when they are not. Step-by-step instructions on the R code required will be provided. Tips for ensuring that the data included can be refreshed using Power BI Gateways will be reviewed, so attendees can create reports that use the same update policies as the rest of Power BI. While it is assumed that attendees of this session have seen Power BI in action, there is no assumption or expectation that attendees will enter this session knowing anything about R. After this session, attendees will know how to use R with Power BI.
The Analysis Services model used in both Power BI Desktop and SSAS allows you to solve very complex modelling issues. In this session we will look at solving several complex modelling problems, like many-to-many relationships and RLS, using the latest version of Power BI Desktop and Analysis Services.
"It is much easier to trick someone into giving a password for a system than to spend the effort to crack into the system."
This is a common line of thought in today's world of increased cyber-security dangers.
In this interactive session we'll take a look at how social engineering works,
the psychology behind it, and why it is still the most effective way to gain access to your company's secrets.
The best attacks happen when people don't even realize they are being attacked, and
in this session we're going to try to fix that and educate you on how to recognize when someone is trying to hack you.
The proper way to store encrypted data is to encrypt it on the client and send it to a server that doesn't know how to decrypt it. However, this solution lacks a simple way of searching through the encrypted data once it's on the server. You can do equality checks, but that's where most applications stop. But what do you do if you have thousands of text documents you need to search through? Getting them all to the client and decrypting them there is simply out of the question, as the latency would slow the system down to a crawl.

In this session we'll take a look at a few algorithms that enable you to search through encrypted text data on the server, without decrypting anything, returning only the search results to the client - with performance in mind.
Just the way a search should be performed.
Hive is a data warehousing system that simplifies analyzing large datasets stored in Hadoop clusters, using a SQL-like language known as HiveQL. Hive converts queries to either MapReduce, Apache Tez or Apache Spark jobs.
To highlight how customers can efficiently leverage HDInsight Hive to analyze big data stored in Azure Blob Storage, this demo-heavy session provides an end-to-end walkthrough of analyzing the web transaction log of an imaginary book store using Hive.

After completing this session, you will learn:
  1. Different ways to execute Hive queries on an HDInsight cluster
  2. The Tez vs. MapReduce execution engines
  3. How to use joins, aggregates, analytic functions, ranking functions, GROUP BY and ORDER BY in HiveQL
  4. How to analyze Hive tables with Power BI Desktop
Apache Storm is a distributed, fault-tolerant, open-source computation system that allows you to process data in real time with Hadoop. Storm solutions can also provide guaranteed processing of data, with the ability to replay data that was not successfully processed the first time. In this two-hour, demo-heavy session, you'll learn to create a real-time Power BI sales performance dashboard using an Azure HDInsight Storm cluster for an imaginary online book store.

After completing this session, you'll be able to:
  1. Provision an Azure HDInsight Storm cluster
  2. Write a C# Storm topology to read data from Azure Event Hub
  3. Write a C# Storm topology to insert data into Power BI datasets
  4. Create Power BI datasets using C#
Clustering SQL Server still vexes many in one way or another. For some, it is even worse now that both AlwaysOn features - clustered instances (FCIs) and availability groups (AGs) - require an underlying Windows Server failover cluster. Storage, networking, Active Directory, quorum, and more are topics that come up quite often. Learn from one of the world's experts on clustering SQL Server about some of the most important considerations - both good and bad - that you need to address to be successful, whether you are creating an FCI, an AG, or combining both together in one solution.
Even though SQL Server 2016 was just released, v.Next is already in preview, and v.Next is the first time SQL Server will also be supported on Linux. As with Windows Server deployments, you will need to ensure that your SQL Server databases and instances are highly available. This session will cover the options, tips, and tricks, as they are known at the time of the conference, for making SQL Server on Linux available, including whether or not you can mix Windows Server and Linux deployments in a single solution.
When building automated deployment pipelines, relational databases are often left out because of difficult technical and cultural challenges.
In this session I’m going to explore the fundamental technical requirements of an automated database deployment pipeline.
I’ll then explain some of the open source, third party and built-in technical solutions for SQL Server and describe which sorts of teams will be better suited to which solution.
We’ll look at:
• SSDT
• Redgate SQL Toolbelt
• Redgate ReadyRoll
• FlyWay
• DbUp
• Liquibase
• Entity Framework
• DIY solutions
You’ll leave with a better understanding of the options, and which will suit you.
2009. John Allspaw and Paul Hammond deliver the session “10 deploys per day – Dev & ops cooperation at Flickr.” In forty-six minutes they change the way millions of people would think about software delivery for years to come. It didn’t have a name yet, but DevOps was born.

Automation, Azure and NoSQL begin chipping away at traditional on-prem SQL Server DBA responsibilities.

In 2013 Kenny Gorman declared “The DBA is Dead”. For the record, I don’t believe that, but a lot of people do.

I’m going to explain what DevOps is, where it came from, and its implications for databases - as well as some changes data folk need to make to stay relevant.
For several years Microsoft has promoted declarative, model-based SQL development with tools like SSDT. At the same time, people like Jez Humble, Dave Farley and Pramod Sadalage promote an iterative, migration-script-based approach, asserting that update scripts should be tested early and not generated by tools.

Presenters of “how to do database DevOps” sessions annoy me when they say one way is good and the other is bad. It depends.

I’ll illustrate the limitations of each approach with a simple scenario. I’ll describe which projects are better suited to a model or a migrations approach, and whether it’s possible to get the best of both worlds.
It used to be so simple. SQL Server sat on a SAN in some internal or external server room/broom cupboard. You knew where all the bits and bytes were kept. But things move on.
Next came VMs, virtual computers that lived inside your physical servers allowing you to use your physical resources more efficiently.
But it didn’t stop there – nowadays SQL Server can run in containers. And we have database clones too.
What is the difference between a VM, a container and a clone? How does each work? Which should a data professional use and when?
And, crucially, where does the data actually live? There’s a server under there somewhere, but where?
It isn’t the dark ages any more.
You’ve learned how to express your database in script form using something like SSDT, Flyway or Redgate. You track those scripts in source control with tools like TFS or Git. Well done.
But you haven’t written as many automated tests as you know you should. You haven’t looked at the build functionality in VSTS or gotten to grips with build servers like TeamCity or Jenkins. Even if you have, it was for C# apps and you aren’t sure how to get the same benefits for SQL Server.
I’ll explain how to unit test SQL with tSQLt and how to automate your tests with a build server, to give you confidence in the quality of your code.
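To show the shape of such a test, a minimal tSQLt sketch (the table and the dbo.GetOrderTotal function are hypothetical examples, not from the session):

  EXEC tSQLt.NewTestClass 'OrderTests';
  GO
  CREATE PROCEDURE OrderTests.[test order total sums line amounts]
  AS
  BEGIN
      EXEC tSQLt.FakeTable 'dbo.OrderLines';        -- isolate the test from real data
      INSERT INTO dbo.OrderLines (OrderId, Amount) VALUES (1, 10.00), (1, 5.00);

      DECLARE @actual MONEY = dbo.GetOrderTotal(1); -- hypothetical code under test

      EXEC tSQLt.AssertEquals @Expected = 15.00, @Actual = @actual;
  END;
  GO
  EXEC tSQLt.Run 'OrderTests';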
Practical Business, Technical and Methodology Tips & Hints taken from the Agile development of a SQL Server Data Warehouse for a Retail Bank.
During the 2016 PASS Summit, Microsoft announced that the next major release of SQL Server will see support for "adaptive query processing", a family of features aimed at correcting bad execution plans as they happen.

In this session, you get a first look at this feature family. You will see demos of all features available in public preview, as well as information on other features that have been announced for later. Both the benefits and the potential pitfalls and limitations of these exciting new features will be pointed out to you.

You might not be on an instance that supports adaptive query processing right now, but one day in the future you will be. This session will help you prepare for that future.
You are already familiar with T-SQL and eager to learn R, but do not know where to start?
Start from what you already know: T-SQL. Both languages have many things in
common on some levels, but are very different on others. This session will kick
you off into the new language by using analogies from T-SQL. You will learn how to
write your first R scripts, make use of packages, and leave this session with
a basic understanding of typical use cases of R and how to integrate it into
your existing environment with SQL Server.
R is the first choice of data scientists for a good reason: besides accessing and transforming data
and applying statistical methods and models to it, it has a wide variety of
possibilities to visualize data. As visual perception of data is the key to
understanding data, this capability is crucial. This session will give you a
broad overview of available packages and the diagram types you can build with
them on the one hand, and a deep dive into common visualizations and their
possibilities on the other. Impress yourself and your peers with stunning
visualizations which will give you insights into data you could not achieve
with other tools of Microsoft’s BI stack.
In recent years, computer systems have learned to detect new types of cancer, to uncover credit card
fraud, and to understand natural language. They help us far beyond simple sales
forecasts to offer the right product to the right customer, or to identify critical
customers before they churn. The reason for these capabilities is
less the availability of new algorithms than the sheer computing power available today. This
power shortens both the time between developing a data experiment and
deploying a data product, and the time it takes to apply those data products in real time.

The key to almost unlimited computing power is the cloud. Therefore: Machine Learning is the
killer app which will help cloud systems to their breakthrough into industries
which have avoided this technology so far.

In this session you will learn what machine learning is about, how Microsoft has approached it through
its offerings, and how you can get started tomorrow with the first
use cases for your organization.
The SQL Server Query Optimizer makes its plan choices based on estimated rowcounts. If those estimates are wrong, the optimizer will very likely produce a poor plan. And there's nothing you can do about it. Or is there?

In this session, you will learn exactly where these estimates come from. You will gain intimate knowledge of how statistics are used to estimate row counts, how filters and joins further influence those estimates, and how these algorithms changed when SQL Server 2014 introduced a new cardinality estimator.

Though the focus of this session is on understanding the cause of bad estimates, you will also learn some ways to fix the problems and get better estimates - and hence, better performing queries.

Note that, while most of the material presented in this session is (sparsely) documented, some of it is not; these topics are based on extrapolation, speculation, and extensive testing.

If you finally want to understand those bad estimates and are not afraid of diving deep into the engine, or if you simply like to geek out on implementation internals, then you do not want to miss this session!
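If you want to experiment along these lines, one documented way to flip a single query back to the legacy estimator is sketched below (SQL Server 2016 SP1 and later; the table names are placeholders):

  SELECT c.CustomerId, COUNT(*) AS order_count
  FROM dbo.Orders AS o
  JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
  GROUP BY c.CustomerId
  OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

Comparing the estimated row counts of the two resulting plans is a quick way to see how much the estimator version matters for a given query.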
We are all starting to use some parts of Azure, whether you are a DBA storing backups to Blob storage, a BI professional making use of Power BI, or a data scientist using Machine Learning to perform analytics. But what does the big picture look like? And how can you leverage the range of technologies to meet practical business data ingestion and analytics scenarios? This is where the Cortana Intelligence Suite comes in.

We’ll cover a series of technologies that guide you to understanding an analytics workload, the Cortana Intelligence Suite process and technologies that enable data transfer and storage, data source documentation, and processing data using various tools. You’ll also learn how to work through a real-world scenario using the Cortana Intelligence Suite.
The need for batch movement of data on a regular time schedule is a requirement for most analytics solutions. Within the Cortana Intelligence Suite, Azure Data Factory (ADF) is the service that can be used to fulfil such a requirement.

In this session, you will see the components required within ADF to orchestrate the movement of data within the Cortana Intelligence Suite. We will look at scenarios ranging from moving data from on-premises servers to Azure Blob storage or a database, to invoking analytical workloads such as Machine Learning model evaluations. Azure Data Factory brings all this together.
Do you believe the myths that “Third Normal Form is good enough”, or that “Higher Normal Forms are hard to understand”?
Do you believe the people who claim that these statements are myths?
Or do you prefer to form your own opinion?

If you take database design seriously, you cannot afford to miss this session. You will get a clear and easy to understand overview of all the higher Normal Forms: what they are, how to check if they are met, and what consequences their violations can have. This will arm you with the knowledge to reject the myths about higher Normal Forms. But, more important: it will make you a better designer!
Azure Data Catalog is an enterprise-wide metadata catalog that makes data asset discovery trivial. It’s a fully managed service that lets any user—from analyst to data scientist to data developer—register, enrich, discover, understand, and consume data sources.

We will use a practical example to explore how an organisation can use Azure Data Catalog to enrich its understanding of the data assets it has.
The queries used for reports are usually the least efficient, because you are dealing with huge amounts of data from tables that are sometimes joined many, many times. In this session we will go through good and bad examples of aggregating the data in your relational database. You will see tips on how to make good use of indexes, how to lower the number of actual table accesses, and other tricks that will make your queries run faster.
In an AlwaysOn world we focus on entire databases being highly available. However, replication offers another, arguably more powerful, way to make data available on multiple servers/locations that steps outside of "normal" High Availability scenarios. This session will explain what database replication is, what the different parts are that make up the replication architecture, and when/why you would use replication. You will leave the session with an understanding of how you can leverage this feature to achieve solutions that are not possible using other High Availability features. The content is valid for all versions of SQL Server from 2005 onward.
The best performing systems are finely tuned machines, with the correct settings at each level. However, many environments that I encounter are in a state of disarray when it comes to server and database settings.

Most companies have some sort of standard that was defined years ago, but was not adhered to on many servers/databases. There are plenty of other companies where "Next, Next, Next, Finish" is the "standard" used. When issues arise, these missing standards can cause all sorts of headaches and slow down troubleshooting efforts needlessly.

In this session we will cover common industry standards for SQL Server instances and databases and investigate ways to ensure that they are implemented into your environments, thereby making your systems great again!
Presenting the Prediction. In this session we will look at how we can combine machine learning technologies such as R with rich data visualization technologies such as Power BI to quickly produce impactful interactive reports. If you want to know more about Power BI, R, machine learning, and predicting the future, then this is the session for you. Technologies we will cover are:
  • Using R for visualizations and machine learning
  • Power BI
  • Azure Machine Learning
  • SQL Server Reporting Services
Performance troubleshooting is a complex subject with many factors to consider. Finding poorly performing SQL statements means using proven methodologies and evaluating the performance data available in the Dynamic Management Views and Functions.

In this session, we’ll go over a foundation of how and which DMVs to use to identify problematic statements for versions of SQL Server from 2005 through 2016 and vNext. We’ll demonstrate using practical examples, including code that can be taken away and used on attendees’ own SQL Servers. We’ll also discuss how to identify common causes of performance issues and learn how to quickly review and understand the wealth of performance data available.
Policy Based Management (PBM) is a feature that helps control the 'conditions' a SQL
Server is allowed to be in. Is it adhering to your policy standards? Are "Best Practices" being implemented? Learn how to 'Evaluate' and 'Control' your SQL estate with minimum effort.

Learn how to harness PowerShell to aid your efforts in maintaining order throughout your systems, and gain insights into potentially problematic servers before they become headaches.

Discover the depths of control available using PBM that every DBA should know about, combined with some PowerShell scripts to speed through the deployment of policies and reporting on your SQL Servers' health.
Whether you are a developer, DBA, or anything in between, chances are you are not always following best practices when you write T-SQL. Unfortunately, many so-called “bad habits” aren’t always obvious, but can lead to poor performance, maintainability issues, and compatibility problems.

In this session, you will learn about several bad habits, how they develop, and how you can avoid them. While we will briefly discuss advice you’ve probably heard before, like avoid SELECT * and don’t use NOLOCK, you will also learn some subtleties in SQL Server that might surprise you, how some shorthand can bite you in the long run, and a very easy way to improve cursor performance.

By changing your techniques and ditching some of these bad habits for best practices, you will take new techniques back to your environment that will lead to more efficient code, a more productive workflow, or both.
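As one concrete example of the cursor tip hinted at above (a sketch, not the session's exact demo): declaring intent up front avoids the expensive default cursor options.

  DECLARE @name SYSNAME;
  DECLARE c CURSOR LOCAL FAST_FORWARD FOR   -- instead of the global, updatable default
      SELECT name FROM sys.databases;
  OPEN c;
  FETCH NEXT FROM c INTO @name;
  WHILE @@FETCH_STATUS = 0
  BEGIN
      PRINT @name;
      FETCH NEXT FROM c INTO @name;
  END;
  CLOSE c;
  DEALLOCATE c;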
Discover the ins and outs of some of the newest capabilities of our favorite data language. From JSON to COMPRESS / DECOMPRESS, from AT TIME ZONE to DATEDIFF_BIG(), and from SESSION_CONTEXT() to new query hints like NO_PERFORMANCE_SPOOL and MIN / MAX_GRANT_PERCENT, as well as some other surprises, you’ll walk away with a long list of reasons to consider upgrading to the latest version  - or the next version.
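A few self-contained one-liners illustrating some of the features named above (sketches only):

  -- SESSION_CONTEXT: key/value pairs scoped to the current session.
  EXEC sp_set_session_context @key = N'TenantId', @value = 42;
  SELECT SESSION_CONTEXT(N'TenantId') AS tenant_id;

  -- COMPRESS / DECOMPRESS round-trip (GZIP):
  SELECT CAST(DECOMPRESS(COMPRESS(N'hello')) AS NVARCHAR(MAX)) AS round_trip;

  -- DATEDIFF_BIG avoids the int overflow of DATEDIFF for fine-grained units:
  SELECT DATEDIFF_BIG(MILLISECOND, '1900-01-01', SYSUTCDATETIME()) AS ms_since_1900;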
Continuous Integration attracts more and more attention throughout all areas of development. But not every area is provided with a satisfactory set of tools and support for this topic; especially in BI there is still a gap to close. During this session we will look into ways to start on your own Continuous Integration approach for your SSAS Tabular solution. After a short review of what aspects Continuous Integration actually consists of, we'll focus on automated deployment. We'll have a look at the Tabular Object Model (TOM) in C# as well as the JSON-based Tabular Model Scripting Language (TMSL), available since SQL Server 2016, as possible approaches to building your own automated deployment process. And what about Biml?
Big data, yes; dark data, perhaps; but small data – really? Small data is high-value data, often the key numbers or performance indicators that senior management uses to guide the business. It has been collected at great effort and expense, but is usually supplied in a format that is not conducive to analysis, reporting or visualisation. Using a few examples based on real-life case studies, we’ll look at how R can transform and clean this data. This is a practical session with lots of examples of R code. So if the veracity, variety and value of data are more important to you than volume and velocity, then this session will be of interest.
The R language has long been the most popular
for data processing and statistical analysis. Among R's strengths are a vibrant
community and an extensive repository of libraries for performing all kinds of
analyses. However, R's major deficiencies are that it is slow, memory-bound and
hard to operationalize.

Microsoft R Server (MRS) mitigates these limitations and can run multi-threaded analyses on large
datasets. The new release, MRS 9, goes even further.

First off, it contains the Microsoft ML (machine learning) package - a collection of
best-of-breed ML algorithms that have been battle-tested by Microsoft on a
variety of its products. It includes improved logistic regression, fast boosted
decision trees, fast random forests, GPU-accelerated Deep Neural Networks and
One-Class Support Vector Machines (for outlier detection).

Secondly, MRS 9 allows R models to be exposed as web services. Furthermore, with MRS 9, models
trained in one environment can even be moved to, and scored in, other
environments.

In this session, I will show you how to use Microsoft R Server to evaluate and prepare data, build and score predictive models, and deploy them into production.
Firms make massive efforts to collect, clean, aggregate and report on their data but this is only meaningful when managers use it to make better decisions.  Senior management often rely on a well-designed visualisation to get insight and take action.

We’ll look at three visualisations created in Power BI, with a little help from R, and discuss the challenges of preparing the data, calculating the key results and visualising them in a helpful manner. The first example uses data provided by the European Banking Authority about the resilience of 51 big European banks to economic stress. The second example is a market risk backtest chart – it compares trading profit & loss (P&L) against value-at-risk (VaR) and regulatory limits over time for different trading desks. The third example recreates in Power BI the famous visualisation of the health and wealth of nations over 200 years by Hans Rosling.
Hierarchies are the bread and butter of most business applications and you find them almost
everywhere:

* Product Categories

* Sales Territories

* Calendar and Time

Even though there is a big need from a business perspective, the solutions in relational databases are
still sort of awkward. If you want to successfully query self-referencing hierarchies,
you will need either loops or recursive Common Table Expressions. You will
learn how to master both and how to make them fast.

Join this session for a journey through best practices to transform your hierarchies into useful
information. We will have fun playing around with a sample database based on G.
R. R. Martin’s famous “Game of Thrones”.
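The recursive CTE pattern at the heart of such queries looks like this in outline (table and column names are invented for illustration, not taken from the session's sample database):

  WITH Ancestry AS
  (
      SELECT CharacterId, Name, ParentId, 0 AS Depth
      FROM dbo.Characters
      WHERE ParentId IS NULL                               -- anchor: the roots
      UNION ALL
      SELECT c.CharacterId, c.Name, c.ParentId, a.Depth + 1
      FROM dbo.Characters AS c
      JOIN Ancestry AS a ON a.CharacterId = c.ParentId     -- recursive step
  )
  SELECT * FROM Ancestry
  OPTION (MAXRECURSION 100);   -- guard against cycles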
The first thing we need to do with a new set of data is to explore it. We want to get an idea of the usefulness, relevance, completeness and quality of the data for our purposes, and to explore patterns, trends and outliers to gain some insight. In this session we’ll explore a public dataset - the Titanic passenger list. Most people know of the tragic events: the RMS Titanic sailed from Southampton on 10th April 1912, hit an iceberg in the Atlantic four days later and sank. Of the 2000 people on board, only 700 survived. We'll build some rough-and-ready visualisations and crunch some basic statistics on this dataset using four different tools: Excel & Power Query, Power BI Desktop, R and Azure Machine Learning. The purpose of the session is to introduce you to some tools that you may not be familiar with, so that you can decide which you prefer for exploring your data.
There are some massive datasets out there, and open-source (CRAN) R begins to struggle with these. This session will show how to use Microsoft R Services (MRS) to handle these big datasets. We'll start with a CRAN R script that transforms, and does machine learning on, a sample of a large dataset in a SQL Server database. We'll then look at the equivalent MRS script that can operate on the entire dataset.
From the growth of data science to the popularity of agile cloud services, the Microsoft data platform is undergoing once-in-a-generation changes.  These changes are forcing architects and developers to make difficult choices about which new megatrends to adopt first and how to then adapt to them.

This session will help attendees prioritise the adoption of these 5 data platform megatrends on both their organisational innovation and personal development roadmaps:


- Operational data science
- Cloud-only innovation
- Embracing open source
- Big data storage
- Outsourced operations
This session will help attendees understand how to use the latest generation of cloud-only Microsoft analytics services alongside their revenue-generating systems, which show no sign of leaving on-premises data centres.

Attendees will learn about the solution design patterns and technologies used in hybrid analytics platforms and what to consider when designing their own.  

It discusses common solutions that:

- Take data from on-premises SQL Server databases
- Securely transfer it to an Azure solution
- Store it in a cloud data warehouse 
- Visualise it using Power BI
This session gives an overview of five key skills that a modern digital employer needs employees to demonstrate but struggles to find, followed by a discussion of how candidates can develop them.

Delivered by Coeo, a data management and analytics consultancy, the session is based on our experience of interviewing applicants who are strong technically but lack the business and communication skills to do well in a digital workplace.

The five skills covered will be:
- Remembering the difference between business and technical benefits
- Understanding all of the problem before you create a solution
- Knowing your audience and how to present to them
Running has never been so popular – not just at the competitive level but also with people wanting to do something healthy and fun. 10km races, parkrun events and Run4Life charity runs have all become part of everyday life whether you live in a town or the country. Yet feeling prepared to do more than a short jog around your neighbourhood can be more than daunting; it can be off-putting. The reality is most runners are like me: no coaching qualifications, just tips and tricks picked up over time. In the last couple of years I’ve run with running clubs and trained with coaches, which has led me to run further and faster than I’d ever have imagined.

In this session I’ll share my experience about:

- How to get started
- The kit you’ll need to make running safe and fun
- Good first events
- What to do on race day
SQL Server AlwaysOn Availability Groups provide an integrated approach to High Availability and Disaster Recovery. It’s a technology that works really well, but do you know what to do when something goes wrong? Where do you start looking if end users start complaining or you get alerts from your monitoring system? In this session you will learn how to identify problems like Failover Cluster issues, unavailable replicas, unhealthy Availability Groups and performance issues, and you will learn what actions you can take in order to solve them. At the end of the session you’ll have the knowledge to bring your availability groups back to a healthy state!
JSON support in SQL Database was one of the top feature requests in past years. SQL Database is a general-purpose database engine that, besides relational, spatial and XML data, now includes JSON as one of the data formats that can be used. In this session we will discuss the following topics:
  • Brief overview of JSON features
  • Use cases (and when you should not use it)
  • How JSON works with other SQL Engine components and technologies 
  • Comparison with other products such as Azure DocumentDB and PostgreSQL (a few starter snippets follow below)
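A few starter snippets for the JSON features in the overview (sketches only):

  -- Produce JSON from rows:
  SELECT name, database_id FROM sys.databases FOR JSON PATH;

  -- Shred JSON back into rows:
  DECLARE @j NVARCHAR(MAX) = N'[{"name":"master","id":1},{"name":"tempdb","id":2}]';
  SELECT * FROM OPENJSON(@j)
  WITH (name NVARCHAR(128) '$.name', id INT '$.id');

  -- Extract a single scalar:
  SELECT JSON_VALUE(@j, '$[0].name') AS first_name;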
Azure SQL Database becomes more and more interesting for data professionals. The concept of a database is still the same but the migration process of an on-premises database to an Azure SQL Database can be quite challenging.  Attend this interactive session and learn how to migrate your schema and your data from the SQL Server database in your current environment into Azure SQL Database.   Attendees of this session will learn:

- How to test for Compatibility
- How to Fix Compatibility Issues
- How to Perform the Migration
SQL Server 2016 has several new integration points with Microsoft Azure. Are you curious about how this might benefit your organization? In this session, you will learn how you can use SQL Server 2016 to create a hybrid environment. We will see an overview of all the new Microsoft Azure features that are available in SQL Server 2016, like Striped Backups to Microsoft Azure Blob Storage, Stretch Database, Replication to Azure SQL Database and many more. The session is bulk loaded with demos and it will give you a good idea of which features can be helpful in your environment.
A good DBA performs his/her morning checklist every day to verify if all the databases and SQL Servers are still in a good condition. In larger environments the DBA checklist can become really time consuming and you don’t even have the time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy Based Management to evaluate your servers and how I configured my setup. By the end of this session, you can verify your own SQL environment in no time by using this solution and have plenty of time for your morning coffee!
Does your application suffer from performance problems even though you followed best practices on schema design? Have you looked at your transaction log?
There’s no doubt about it: the transaction log is treated like a poor cousin, and the poor thing does not receive much love. The transaction log, however, is an essential and misunderstood part of your database. A team of developers will create an absolutely awesome, elegant design the likes of which has never been seen before, but then leave the transaction log on default settings - as if it doesn’t matter, an afterthought, a relic of the platform architecture.
In this session you will learn to appreciate how the transaction log works and how you can improve the performance of your applications by making the right architectural choices.
Learning SQL is easy, mastering it is hard. In this session you’ll learn simple but effective tricks to design your database objects better and write more optimized code.

As an attendee you will gain a deeper understanding of common database development and administration mistakes, and how you can avoid them.

Ever thought that you were adhering to best practices but still seeing performance problems? You might well be.

In this session I will be covering why the optimizer isn’t using all available processors, when the database engine fails to report all the resources a query has used, and why the optimizer doesn’t always use the best plan. You will leave this session with a list of things that you can check for in your environment to improve performance for your users.
The system database TempDB has often been called a dumping ground, even the public toilet of SQL Server. (There has to be a joke about spills in there somewhere.) In this session, you will learn to find the criminal activities going on deep in the depths of SQL Server that are causing performance issues - not just for one session, but for everybody on that instance. You will learn how to architect TempDB for better performance, understand how you can reduce contention, troubleshoot bloating issues, and much, much more.

You use SQL a fair bit, but don't know when things are going wrong. Did you know that Microsoft SQL Server ships with alerting capability out of the box?

However, it is not enabled by default. In this session, you will learn how to set up some core basic alerts. We will then progress to more interesting ways of being notified, using buzzwords like IoT.

This session will allow you to be notified in ways that work for you and your team without being tied down to Outlook.
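As a preview of those core basic alerts, a minimal sketch using SQL Server Agent (the operator name and e-mail address are placeholders):

    -- An operator to notify
    EXEC msdb.dbo.sp_add_operator
         @name = N'DBA Team',
         @email_address = N'dba-team@example.com';

    -- Alert on every severity 19 (fatal) error...
    EXEC msdb.dbo.sp_add_alert
         @name = N'Severity 19 - Fatal error',
         @severity = 19,
         @include_event_description_in = 1;   -- 1 = include details in e-mail

    -- ...and route it to the operator by e-mail
    EXEC msdb.dbo.sp_add_notification
         @alert_name = N'Severity 19 - Fatal error',
         @operator_name = N'DBA Team',
         @notification_method = 1;            -- 1 = e-mail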
Are you faced with complaints from users, poor performing code from developers, and regular requests to build reports? Do you uncover installation and configuration issues on your SQL Server instances? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance and we will see how to avoid making choices that turn out to be not so smart in the long run.
You are probably thinking: “Hey, wait, what about Best Practices?”. Sometimes Best Practices are not enough, especially for beginners, and it is not always clear what happens if we fail to follow them. Worst Practices can show the mistakes to avoid. I have made lots of mistakes throughout my career: come and learn from my mistakes!
As your personal Virgil, I will guide you through the circles of SQL Server hell:
  • Design sins:
    • Undernormalizers
    • Generalizers
    • Shaky Typers
    • Anarchic Designers
    • Inconsistent Baptists
  • Development sins:
    • Environment Polluters
    • Overly Optimistic Testers
    • Indolent Developers
  • Installation sins:
    • Stingy Buyers
    • Next-next-finish Installers
  • Maintenance sins:
    • Careless Caretakers
    • Performance Killers
Exciting times ahead! You bought a license for SQL Server 2016 and you are going to upgrade to the new shiny version of SQL Server on a beefy new machine!
Fantastic! Except that you have no idea how your application will work on the new version. There’s a new cardinality estimator in 2016: how will it affect performance? The new features in In-Memory OLTP and Columnstore Indexes look really promising, but how will your workload take advantage of these features?
The best way to know for sure is to conduct a benchmark and compare it to your current system.
In this demo-intensive session you will discover how to capture a meaningful workload in production and how to replay it against your test system. You will also learn which performance metrics to capture and compare, and which tools can help you in the task.
Upgrading to the latest version of SQL Server is often seen as a comprehensive and difficult project. Management often fails to see the benefit of migrating to the latest version, and your end users aren’t interested in all of the extra testing. As a DBA you need to come up with a plan that earns both management and end-user support.

In this session we will cover all the information you need to collect and consider before, during, and after upgrading to SQL Server 2016. We will discuss licensing, upgrade approaches, and building out your own upgrade checklist.

If you feel stuck on an older version of SQL Server, attend this session to understand the features and benefits of SQL Server 2016 that will justify your upgrade project. Come and learn about the tools and methods that will make your upgrade project successful.
The Hybrid Datacenter has become more mainstream. Today's data professionals need to understand how to effectively manage and migrate data between on-premises and cloud servers. Attend this interactive session and learn how to create, deploy, and migrate data from your on-premises instance of SQL Server to Microsoft Azure Virtual Machines and Microsoft Azure SQL Database.

Attendees of this session will learn:
  • How to decide if IaaS or PaaS is the right option
  • How to prepare your database for migration to SQL Azure 
  • How to best migrate data to Microsoft Azure
PowerBI is one of the newest and coolest tools in the data professional’s toolbox. It allows you to quickly and easily create a data model and begin reporting against almost any data source. Whether you want to report on web site data, Hadoop data, or streaming data, you can easily merge that data with your existing data warehouse or other data sets, to create new insights. We'll look at solutions like database monitoring with PowerBI, integrating weather data with retail sales data, analyzing streaming data, and a couple of other interesting projects. Every attendee will leave with everything Josh demonstrates during the session, along with the ability to build their own projects and to build on the projects that Josh demonstrates.
SQL Server 2016 gives us many additional ways to measure performance. Service Pack 1 brings the advanced features previously restricted to Enterprise Edition to Standard, Express, Web, and even LocalDB! But how do you performance tune and manage them? We will cover regular indexes plus Columnstore and In-Memory indexes, and not just the plan cache but also the Query Store for these features. Can you use trace flags to enhance performance in Azure? The answer may surprise you. In this session we will cover end-to-end query tuning across the new features in SQL Server 2016!
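For instance, enabling the Query Store and pulling its slowest statements takes only a few lines; a minimal sketch (the database name is a placeholder):

    -- Switch the Query Store on for a database
    ALTER DATABASE YourDatabase SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

    -- Top 10 statements by average duration, straight from the Query Store views
    SELECT TOP (10) qt.query_sql_text, rs.avg_duration
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    ORDER BY rs.avg_duration DESC;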
Successful virtualization of database platforms such as SQL Server requires a different approach than other servers in your enterprise. Maintaining reliable database performance after the database has been virtualized is dependent upon providing a clear view of performance metrics beyond traditional SQL Server metrics. DBAs need insight into these performance metrics at the virtual layer in order to apply effective performance tuning methods.   Attend this session to discover key virtual performance tips and tricks, which metrics matter the most, and the architectural design choices that will help you prepare and validate your VMware environment for your database workloads.
Communication is key. In life, in work, and especially when consulting. I have been a consultant and managed teams of consultants. There is one thing that separates the elite consultants from the average: it’s not just communicating, but communicating the right way. Imagine a client or business manager who doesn’t see eye to eye with your end deliverable despite extra-long effort and work weeks, betrayed trust and missed expectations, the stress of communicating unpopular business ideas to your team, or having to tell your boss how they messed up. None of this is fun; all of it happens. Over my career I’ve learned how to communicate in a way that shapes the outcomes that need to occur in a business setting. In this session I will review experiences, give references to books that helped me understand the concepts, and help you find a way to communicate your way out of nightmare situations that you will face in your work life.
Microsoft Azure is quickly emerging as every company's new data center. Companies can use Azure to augment existing systems with cloud platforms, or expand their infrastructure without huge up-front costs. By taking advantage of Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) offerings in Azure, anyone can expand their data center quickly. Although the option is there, a lot of organizations still aren’t sure how they can integrate the power of Azure into their mainly on-premises environment. In this session, attendees will be given a breakdown of the tools available in Azure, both PaaS and IaaS, and we’ll cover many common deployment scenarios and why you would want to use them. Another big advantage of Azure is the ability to use PowerShell to automate all of your deployments, shutdowns, and architecture changes. Attendees will be introduced to the power of PowerShell when it comes to Azure Automation, along with deployment templates.

Attendees will understand:

  - Common cloud architecture scenarios
  - An overview of virtual networking in Azure
  - What PaaS options exist in Azure and what they are for

When I first started learning about SQL Server, really deeply learning, there were a few “key” concepts that you would hear repeated often by top speakers and SQL MVPs: internals, recovery models, and backups. They are interconnected. As the learning continued, it became self-evident how understanding basic data internals (pages, extents, and allocation bitmaps), database recovery models (the transaction log and VLFs), and advanced backup options (like striping and piecemeal restores) affected the use of SQL Server. They affected not just SQL Server but the way you make decisions in order to determine how best to use SQL Server to support your business. This session gives you that core set of understanding required for advanced SQL learning.
R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical techniques such as linear and nonlinear modeling, classical statistical tests, time series analysis, classification, clustering and more. R is highly extensible. One of R’s strengths is the ease with which well-designed publication-quality plots can be produced. R has been in use by the data science community for quite some time. In SQL Server 2016 we get support for R in our favorite relational engine. In this session we will give an introduction to the language and show demos, examples and recommended tutorials as we prepare to utilize and support R.
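To give a flavour, a minimal sketch of calling R from T-SQL with sp_execute_external_script (R Services must be installed, and flipping 'external scripts enabled' may require a service restart):

    EXEC sp_configure 'external scripts enabled', 1;
    RECONFIGURE;

    -- Compute a mean in R over a T-SQL result set
    EXEC sp_execute_external_script
         @language = N'R',
         @script = N'OutputDataSet <- data.frame(mean_n = mean(InputDataSet$n))',
         @input_data_1 = N'SELECT CAST(object_id AS float) AS n FROM sys.objects'
    WITH RESULT SETS ((mean_n float));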
In SQL Server 2016 you get PolyBase. What in the heck is PolyBase? Great question! It is a way to query data in Hadoop, Cloudera, or Azure Storage from SQL Server 2016. You do NOT need to understand U-SQL, HDFS, Hive, Pig, or Sqoop. You only need T-SQL. In this session we review our environment with a Hadoop cluster running on Linux and a SQL Server PolyBase scale-out cluster, and show how to retrieve data and even make it available for PowerBI.com or in-database R.
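In outline, a minimal PolyBase setup against a Hadoop cluster looks like this (the address, paths and table definition are placeholders):

    -- Point SQL Server at the Hadoop cluster
    CREATE EXTERNAL DATA SOURCE MyHadoop
    WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.10:8020');

    -- Describe the file layout
    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

    -- Expose the HDFS folder as a table
    CREATE EXTERNAL TABLE dbo.SensorReadings
    (DeviceId int, Reading float)
    WITH (LOCATION = '/data/sensors/', DATA_SOURCE = MyHadoop, FILE_FORMAT = CsvFormat);

    -- From here it is just T-SQL, joins to local tables included
    SELECT TOP (100) * FROM dbo.SensorReadings;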
This session will cover the main aspects of starting with, or migrating from, an on-premises model to one of the two architectures available for SQL Server on Azure, providing the keys to select the most suitable environment for our requirements.
We will cover:
1. Red pill or blue pill
       SQL Server on Azure VM (IaaS) vs SQL Azure DB (PaaS) Which one?
2. Always available.
       High availability in Azure
3. Now what ...
       Best practices and connectivity configuration.
Reports very often extract their data from a relational data warehouse because many believe that querying cubes with MDX is very complicated. I want to show an easy way to query cubes using parameters, along with all the other advantages cubes deliver.
We have seen SSRS getting better and better in every single release, and SQL Server 2016 is a mind-blowing upgrade of whatever we have seen thus far from Microsoft. With the SQL Server 2016 new features, we can easily embed R statistical models within SSRS reports.

As we have seen steady growth in SSRS since 2005, we now have much more advanced brand-new features in SQL Server 2016 Reporting Services, beyond our imagination. Sit tight and buckle up for an amazing roller coaster ride, to not only briefly see the advanced SSRS killer features but also some of the R statistical charts within SSRS 2016 and the new user interface of Report Builder.
Are you working to become the next great MVP? Then this session is really for you. Come and learn about all the ways and areas you can work on to become the next MVP. You will also pick up tips and guidance to help you on the way to being the next successful MVP.
The storage industry is being revolutionised by flash storage, but how does this change how a DBA should view and use storage, and does conventional DBA lore around performance and storage still stand regarding such things as:

- Compression: there are different types, and some can work for you and others against you in the new world of flash
- Testing: are sqlio and diskspd still the best tools to use for this? You may find the actual answers surprising
- Fragmentation: this is bad in the world of spinning disk, but what about flash?
- Getting the best possible performance for OLAP-style applications by optimising for sequential scans: is chasing large IO sizes and read-aheads still relevant in the new world?
You are hitting performance and scalability issues with the database engine; conventional wisdom and tools are getting you nowhere. Where do you go? The Windows Performance Toolkit has the answers, and this session will allow you to unlock the secrets of the database engine, around things such as:

- CPU saturation: this will cover how the engine is structured, how to read call stacks, and why the Windows Performance Toolkit is an order of magnitude more powerful than the debugger in this respect.

- Any behaviour which is not documented, including latching and spin locking: the likes of the developer team, Bob Dorr and his ilk, can infer a world of meaning from call stacks; this session will show how mere mortals can do this also.

- IO anomalies: is something in the path between your server and storage holding on to IOs or reordering them?

- What is going on under the covers when you see strange wait activity: this will include a dive into the Windows threading model.
DML is mostly used without thinking about the multiple operations it triggers in the database engine. This session will give a deep dive into the internal storage engine, down to record level.

After finishing the theory, the different DML commands and the tremendous number of operational tasks they create for the database engine will be investigated.

See what effect a workload that has to handle page splits and/or forwarded records will have.
This is a demo session: the different workloads will be explained in detail while the demos are executed.
Azure Machine Learning is Microsoft Azure's drag-and-drop tool for building, testing and deploying any kind of predictive model on your data set. The finalized solution is published and used by daily business within the larger stack of your Microsoft Azure services. Even with easy and interactive creation of models, the algorithms and decisions do not tend to be that simple! Especially when one has to make business decisions based on the results.

The focus of this session will be the mathematical and graphical explanation of the algorithms available for predictive analytics in the Azure Machine Learning service. Algorithms, grouped by learning type, will be examined and cross-referenced across everything available and ready to use. We will cover the basics, from data inference, data splitting and data stratification, to sweeping and SMOTE, to the logic and theory of the algorithms: regression, decision trees/forests/jungles, clustering and Naive Bayes.

This session will clarify the confusion over algorithms, which data is suitable for which algorithm, and what kind of empirical problem can be tackled with each.
Join us for a discussion of strategies and architecture options for implementing a modern data warehousing environment.  We will explore advantages of augmenting an existing data warehouse investment with a data lake, and ideas for organizing the data lake for optimal data retrieval. We will also look at situations when federated queries are appropriate within a "logical" data warehouse and how federated queries work with SQL Server, Azure SQL DB, Azure SQL DW, Azure Data Lake, and/or Azure Blob Storage. This is an intermediate session suitable for attendees who are familiar with data warehousing fundamentals.
In this session we will review sensible techniques for developing a data warehousing environment which is relevant, agile, and extensible. We will cover practical dimensional modeling fundamentals and design patterns, along with when to use techniques such as partitioning or clustered columnstore indexes in SQL Server. We'll also review tips for using a database project in SQL Server Data Tools (SSDT) effectively. The session will conclude with tips for planning the future growth of your data warehouse. This is an introductory session best suited to attendees who are new to data warehousing concepts.
Join us for a practical look at the components of Cortana Intelligence Suite for information management, data storage, analytics, and visualization. Purpose, capabilities, and use cases for each component of the suite will be discussed. If you are a technology professional who is involved with delivering business intelligence, analytics, data warehousing, or big data utilizing Azure services, this technical overview will help you gain familiarity with the components of Cortana Intelligence Suite and its potential for delivering value.
It’s fairly well known that the query optimizer is what creates execution plans. Lots of people are aware that execution plans are the thing that makes queries run fast, or slow. What seems to be less well known is that the number of rows the optimizer thinks may be returned by any given query is the primary factor driving the choices that the optimizer makes. This session focuses on how the row counts for queries are arrived at and how those row counts impact the choices made by the optimizer and, ultimately, the performance on your system. With the knowledge you gain from this session, you will make superior choices in writing T-SQL, creating indexes and maintaining your statistics. This leads to a better performing system. All thanks to counting the number of rows.
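Those row counts come from statistics, which you can inspect and refresh yourself; a minimal sketch (the table and index names are placeholders):

    -- The histogram and density information behind the optimizer's estimates
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');

    -- Refresh the statistics when estimates drift from reality
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;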
Everyone knows that Azure SQL Database only supports a small subset of SQL Server functionality, only supports small databases, and has really bad performance. Except, everyone is wrong. In fact, Azure SQL Database is ready to support many, if not most, databases within your enterprise. This session reintroduces Azure SQL Database and shows the high degree of functionality and improved performance that is now available. You’ll leave this session with a more thorough understanding of the strengths and weaknesses of Azure SQL Database so that you can make a more informed choice about when, or if, you should use it within your environment.
Managing and processing large amounts of data requires major investments in hardware and time, or, you can look to an appliance-style solution like Analytics Platform System (APS) which neatly packages a massively parallel architecture into a single solution. However, APS requires a massive outlay of cash just to get started learning and testing. You can’t possibly know if APS will solve your problems or not without that outlay. Enter Azure SQL Data Warehouse. This Platform as a Service (PaaS) offering from Microsoft helps to democratize and open the capabilities of APS to anyone. The cost of entry is low and the functionality is high. This session will walk you through Azure SQL Data Warehouse so you understand what is on offer, how it works and what it can do for you and your enterprise. You’ll attain a better understanding of the strengths and weaknesses that this PaaS offering brings to the table so that you can begin to use massively parallel operations with your own data.
For the most part, query tuning in one version of SQL Server is pretty much like query tuning in the next. SQL Server 2016 introduces a number of new functions and methods that directly impact how you’re going to do query tuning in the future. The most important change is the introduction of the Query Store. This session will explore how the Query Store works and how it’s going to change how you tune and troubleshoot performance. With the information in this session, not only will you understand how the Query Store works, but you’ll know everything you need to apply it to your own SQL Server 2016 tuning efforts as well as your Azure SQL Databases.
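As a hint of what that looks like in practice, once the Query Store has identified a regressed query you can pin a known-good plan (the ids below are placeholders you would read from the Query Store views):

    -- Force a specific plan for a specific query...
    EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

    -- ...and release it again once the regression is fixed
    EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;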
This session will present a list of "low-hanging fruit" style use cases for getting value from the in-memory engine, including:

- Staging of data
- Scalable sequence generation
- Queueing
- A more performant and scalable alternative to temporary tables
- and more

The focus of the session is on quick wins and getting value out of the in-memory engine without having to move entire databases from the legacy engine to the in-memory engine lock, stock and barrel.
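As one example of such a quick win, a memory-optimized table type can stand in for a classic temp table or table variable; a minimal sketch (assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup; names are illustrative):

    -- Requires a MEMORY_OPTIMIZED_DATA filegroup on the database
    CREATE TYPE dbo.OrderIdList AS TABLE
    (
        OrderId int NOT NULL PRIMARY KEY NONCLUSTERED
    )
    WITH (MEMORY_OPTIMIZED = ON);

    -- Use it like a normal table variable, minus most of the tempdb overhead
    DECLARE @ids dbo.OrderIdList;
    INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
    SELECT OrderId FROM @ids;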
For example, many systems use addresses. Each user inputs the address for each client/customer in the way they want. This can have serious consequences for data integrity as time goes by: one user updates their data, another does not. Now, which is correct?

QuantummData or MagickData solves this problem. It’s ideal for people looking for a way to simplify either a multi-tenanted system or one in which the data is shared between multiple applications. This talk covers the nature of the problem, implementation/solutions, how to fit this into an existing system and how this will have a positive benefit on the usage and system architecture.
Azure Data Factory is a cloud-based data movement service. It is used to ingest, transform and publish data from various sources. In this session we will explore how we can use C# to extend the capabilities of Data Factory by developing custom activities for (a) data movement and (b) data transformation. We will create a custom .NET activity to move data to Azure Blob Storage and understand how we can use it in a pipeline scheduled at regular intervals. We will also go through various security aspects.
Move on from username/password, where you try to keep the bad guys out, and embrace the modern world, where online services are always on and always vulnerable. Service Personal Endpoints solve this problem. Change the way you look at the world and suddenly the problems go away. No more hacking target. With SPE, we can remove that single point of vulnerability and simultaneously boost security. This is cross-platform and will work seamlessly with existing and new deployments and systems. It only takes a small change to make life a whole lot safer for you, your clients and users.
  • More robust
  • Reduce attack surface
  • Better BI
  • Increased security
Sometimes you are stuck with customer demands where common solutions just aren't scalable within the regular Analysis Services (SSAS) toolbox. This is where custom assemblies come to save the day! Custom assemblies allow you to extend the power of SSAS by using other technologies, in this case T-SQL and the SQL Server relational engine.

This session will demonstrate how to utilize custom assemblies in Analysis Services to leverage the power of the relational engine and use it directly in MDX. Through the .NET framework and T-SQL we will be able to inject dimension member unique names into MDX. Effectively we are replacing MDX crossjoins with T-SQL inner joins, thereby gaining scalability and consistency in query performance.
If you are releasing database changes, new reports, cubes or SSIS packages on a regular basis, you've probably offered up your share of blood, toil, tears and sweat on getting them delivered into production in working condition. DevOps is a way to bridge the gap between developers and IT professionals, and for that we need to address the toolchain that supports the practices. Microsoft offers a set of tools that'll help you on your journey towards the end goal: maximize predictability, efficiency, security and maintainability of operational processes. We will be looking in detail at:
  • Agile development frame of mind
  • Visual Studio Online (tool)
  • Feature/PBI/WI (concept)
  • Team Foundation Server
  • Code branching (concept)
  • Build agents (tool)
  • PowerShell, Microsoft's glue (tool)
Getting the techniques in your tool belt right makes a world of difference. Did you ever wonder how to deploy a cube with minimum impact on query performance? Or how to optimize processing/query performance? Are you really ready to deploy when it's required, or do you get nervous every time? Attend this session to build and improve your SSAS developer skills, by exploring:
  • Partitions
  • Aggregations
  • Synchronization
  • Integration Testing
  • Custom Assemblies 
Managing confidential data in SQL Server will lead us to encryption, certificates and keys. Many times we decide to create self-signed certificates and store them locally in SQL Server, and although this is easy to do, it is not as secure as we want. With the growing demand for regulatory compliance and concern for data privacy, using only database encryption management tools and local keys is not appropriate anymore. Is there a simple solution for this? Yes: with Azure Key Vault we can have an accessible, reliable and secure infrastructure to manage certificates and keys for SQL Server. In this session we will learn how to use Azure Key Vault and how it integrates with SQL Server.
With the release of SSMS 2016 and the continual updates to the sqlserver module, PowerShell loves SQL Server. This has also inspired the community to build a suite of useful PowerShell commands for the modern DBA. dbatools has over 100 SQL Server administration, best practice and migration commands; dbareports will gather information about your estate and enable you to report on it with SSRS, Power BI and Cortana.
Come and join me, one of the founders and major contributors, for a packed session full of demos using these commands and learn how they can ease many routine tasks
This session will be of use to DBAs who want to increase their PowerShell skills
I was required to prove that I had successfully installed and configured a backup solution across a large estate. I had a number of success criteria that had to be met. Checking all of these by hand (eye) would have been error-prone, so I wrote a test to do this for me, plus an easy-to-read HTML report for management, using PowerShell and Pester.
Pester will enable you to provide easy-to-read output to quickly and repeatedly show that infrastructure is as expected for a set of checks. There are many use cases for this type of solution: DR testing, installation, first-line checks, presentation setups.
You have written some amazing code
You have shared it on GitHub
People have forked it and now you have a pull request
What happens next?
How do you deal with pull requests, bugs, issues and criticism?
What about testing, QA and your release process?
Documentation is important as well as a web-site. Suddenly you might be wondering why you should bother
Come and listen to my experience as a founder, leader, major contributor and evangelist for two popular PowerShell SQL Server based projects, and I will answer all those questions and any others that you have
The price we pay for cheaper commodity hardware is that we cannot always guarantee the final result, normally it’s fine, but we can’t be sure. In this session we’ll look at potential failure mechanisms and see how to prevent them not just at run time. We’ll devise an architecture which guarantees correctly processed results regardless. This will be widely applicable to systems of all types and will form the basis of your future implementations. This gives end to end certainty that the data is stored, processed and transmitted correctly. Have a fun dive into high end architecture.
Support for In-Memory OLTP has recently been added to SQL Server 2016 Standard and Express editions and Azure SQL Database. In-memory high-performance processing has never been so affordable. Sounds great, but is it the best new way to do things? In this session, real-world use cases will be used to demonstrate patterns and best practices, and to explain the limitations of in-memory table types and table-valued parameters. Gain new, widely applicable development skills and get insider information about data and workload types specific to the financial services industry.
Have you ever needed to verify that best practices are in use? How do you do it when you have dozens, if not hundreds or thousands, of SQL Servers? One by one?
And what if you have to apply the best practices on those SQL Servers? Will you also do it one by one?

In this session we will see how easy, fast, precise and less error-prone it can be to validate whether a set of SQL Servers respects the best practices and, if not, how we can configure them to do so, just by using a set of commands from the dbatools module.

This module is one of the most popular tools among DBAs and is developed and maintained by more than 30 contributors from the community. We have PowerShell and SQL Server MVPs, DBAs, developers and QA people.
If you do not know this tool, or if you want to learn more, this is a great opportunity.
How easy is it to hack a SQL Server?
In this session we'll see a few examples on how to exploit SQL Server, modify data and take control, while at the same time not leaving a trace.
We'll start by gaining access to a SQL Server (using some "creative" ways of making man-in-the-middle attacks), escalating privileges and tampering with data at the TDS protocol level (e.g. changing your income level and reverting without a trace after payment), and more.
Most importantly, we'll also cover recommendations on how to avoid these attacks, and take a look at the pros and cons of new security features in SQL Server 2016.
This is a demo-driven session, suited for DBAs, developers and security consultants.
Are you a DBA or Developer and would like to get started with Azure Machine Learning the EASY WAY?
Azure ML isn't just for "data scientists"... Anyone can use it! And after this session you'll be using it too...
Disclaimer: Sadly, Azure ML still can't predict what's on your girlfriend's mind. Nothing ever will.
DevOps has revolutionised our ability to deliver customer value.

Updates aren’t held up in 6-month release cycles. We’ve automated tests and deployments collaboratively so we know updates work. A virtuous cycle of innovation and testing allows us to build better, more reliable products faster and cheaper.

However, while that’s great in theory, I’ve seen people screw up spectacularly when applying DevOps to relational databases.

In this light-hearted session I’ll present the 15 most popular ways to screw up, resulting in painful, fragile deployments and expensive legacy databases that no-one likes to maintain.

A session for developers *and* DBAs.
Gone are the days when you had one or two SQL Server instances under management. You have 30. You have 300. You need to know their state and you need to know it now. This session uses Redgate SQL Monitor as a tool to illustrate the state of your servers and offer guidance on which ones have emergencies, where performance is lagging, and where resources are underutilized. With the knowledge gained from this session, you'll better be able to manage all the servers in your estate.
The next release of SQL Server Analysis Services Tabular will allow you to use the M language and a Power Query-like UI to load data into your model. In this session you'll learn why this is such a significant change. Topics covered will include:
  • How the new functionality changes the way SSAS data loading works
  • New data sources that this supports
  • When you should transform data in M while loading and when you shouldn't
  • Creating partitions
  • Migrating from Power BI to Analysis Services
With the addition of R into SQL Server 2016, Microsoft have provided a few extra degrees of freedom for the standard SQL developer. You can now use the R language to wrangle, clean and collect external data, using libraries and functionality that simply wasn’t available to you before with regular T-SQL. Performing data enrichment at scale with SQL Server or even Microsoft R Server can provide extremely valuable new insights for your clients and start reaping the benefits of big data.

In this session, Consolidata’s Oliver Frost shows you how to develop your own application for tapping into ‘dark data’. Ollie will demonstrate how to stream live tweets, perform aggregations in R and pipe the output to a Power BI dashboard, giving you a full end-to-end experience of the importance of learning some basic R code in 2016.

This session is for anyone who is new to R and is interested in expanding their skill set beyond their comfort zone in SQL Server.
This session looks at GraphView - a Microsoft R&D project that provides a .Net API for developers to store and query data in graphs but using SQL Server as the repository. This first look session will provide an overview of how graph databases persist data and the main features in GraphView that allow developers to store and query data stored in the repository.
In this demo-focused session, I will be using a very cool Airbnb dataset to walk through most of the built-in and custom visualisations available in Power BI Desktop. I will walk through how to use and format these visualisations to create compelling dashboards. We will also look at how we can use R visualisations in Power BI with minimal knowledge of R.
After years of silence, Microsoft finally exhibited great love towards SSRS with the SQL Server 2016 release. But it doesn't stop there. There are some exciting features in SP1, and a Technical Preview of Power BI reports in SQL Server Reporting Services is now available. In this session, we will look at the top features of SSRS 2016, 2016 SP1 and Power BI reports in SQL Server Reporting Services. Also, we will look at how all these new features can help us create remarkable paginated and mobile visualisations.
In this demo-focused session, we will explore different features of Power BI by using various open data sources like Zoopla API, Bing Maps API, UK sold prices and UK Ofsted schools rating to see how easily we can extract and analyse data from these datasets and how quickly we can create stunning visualisations using Power BI.
The SSISDB Catalog is not new anymore, and most SSIS developers are familiar with it. However, there are still many people who do not take maximum benefit from it. In this session, we will see and understand how to take maximum benefit of this metadata-rich SSISDB.


We will look at 
- SSISDB Catalog Views
- Deployment
- Versioning
- Parameters 
- Environment Variables
- Execution and Logging
- DataTaps
- Self-Service ETL reporting
In this session we will learn about SQL Server enhancements in the most recent versions that can help you troubleshoot query performance.

Ranging from new xEvents to Showplan improvements, from LQS (and underlying infrastructure) to the revised Plan Comparison tool, learn how these can help you streamline the process of troubleshooting query performance and gain faster insights.
It may surprise some of you that SSAS Tabular can scale up to multi-billion-row tables. Working with these volumes requires some good development practices, and we will go through some of these today.
Tips include how to develop in Visual Studio in a timely way, managing partitions and automating processes.

We will see how one can easily develop on a limited data set but process fully after deployment.
We will look at ideas on managing partitions, both in Visual Studio and also after deployment.
We will think about how to automate adding extra tables to the model, without using Visual Studio in some cases.
We will look at the performance of queries and which types of queries are slower.

This session will provide some ideas on how to develop for these very large business requirements.
Always Encrypted is new in SQL Server 2016. It is highly effective at keeping sensitive data secure, but only if the encryption keys are managed properly. In this session we look at a major part of implementing Always Encrypted: key management. The session includes how to generate encryption keys, the options for storing keys, the encryption key lifecycle, key rotation, and the different types of keys used with SQL Server.
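To set the scene, the key hierarchy is defined as plain metadata in T-SQL; a hedged sketch (the Key Vault URL is a placeholder, and the ENCRYPTED_VALUE below is dummy bytes; in practice tools such as SSMS or PowerShell generate the real value for you):

    -- The column master key lives outside the database, e.g. in Azure Key Vault
    CREATE COLUMN MASTER KEY MyCMK
    WITH (KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
          KEY_PATH = N'https://myvault.vault.azure.net/keys/AE-CMK/0123456789abcdef');

    -- The column encryption key is stored encrypted by the master key
    CREATE COLUMN ENCRYPTION KEY MyCEK
    WITH VALUES (COLUMN_MASTER_KEY = MyCMK,
                 ALGORITHM = 'RSA_OAEP',
                 ENCRYPTED_VALUE = 0x0123456789ABCDEF);  -- dummy placeholder bytes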
Row-Level Security was introduced in SQL Server 2016 and offers a robust way of restricting access to data. In this session we look at what it is, how it works, and the benefits of using it. We compare Row-Level Security with other ways of restricting access to data, and how best to implement it.
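In outline, Row-Level Security pairs an inline table-valued predicate function with a security policy; a minimal sketch (assumes a schema named Security exists; table and column names are illustrative):

    -- Predicate: a row is visible only to the sales rep it belongs to
    CREATE FUNCTION Security.fn_FilterByRep (@RepName sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed
           WHERE @RepName = USER_NAME();

    -- Bind the predicate to the table; filtering is now automatic for every query
    CREATE SECURITY POLICY Security.SalesFilter
    ADD FILTER PREDICATE Security.fn_FilterByRep(RepName) ON dbo.Orders
    WITH (STATE = ON);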
As cybercrime grows, and regulations tighten, it is more important than ever to understand Always Encrypted, the new encryption feature in SQL Server 2016. This talk explains what Always Encrypted is, how to use it, and when to use it. This presentation compares Always Encrypted to Transparent Data Encryption (TDE) to help you make informed choices about protecting your sensitive data.
Everyone agrees that great database performance starts with a great database design. Unfortunately, not everyone agrees which design options are best. Data architects and DBAs have debated database design best practices for decades. Systems built to handle current workloads are unable to maintain performance as workloads increase.

Attend this engaging and sometimes irreverent session about the pros and cons of database design decisions. This debate includes topics such as logical design, NULLS, surrogate keys, GUIDs, primary keys, indexes, refactoring, code-first generators, and even the cloud. Learn about the contentious issues that most affect your end users and how to deal with them.
 
Prerequisites: Opinions, lots of them, even wrong ones.
Every business claims to be client-focused, but this is really not the case. When a business operates from the perspective of the client, not from how the client fits into what we do, real engagement happens. This isn’t marketing; this is a holistic approach to maximizing the potential of each and every client or engagement, no matter how small the initial contact or contract. The goal is to make the client as successful as possible, by being as useful as possible. Come and learn a new way of doing business and how you can outcompete your competition. Ultimately, it is do-or-die: choose to do!
As volumes of data continue to increase so too does our need to analyse this data in real time. In this session, you’ll be introduced to the world of real time event processing by building an extensible infrastructure monitoring solution in Azure.

During the session, you’ll learn how we can collect and submit system events to Azure Event Hub. Then, using Azure Stream Analytics, we can query this data stream to detect events of interest, such as high resource usage, slow response times and system downtime. Finally, we’ll push Key Performance Indicators and other insights from our monitoring solution to a mobile device using Power BI, where it can be monitored in real time from wherever you are.
Advanced analytics and data science are probably the most widely used (hype) terms in the world of Business Intelligence. Microsoft bets big on analytics with its Cortana Intelligence Suite. That alone is more than enough reason to come and learn how parts like Azure Data Factory, Azure Stream Analytics, Azure Machine Learning and others combine to create great analytics solutions. We will explore these components in theory and demos.
Sharding is a technique to split a large database into multiple smaller chunks, called shards, spread across a number of distributed servers. The Elastic Database client library can be used to create a sharded Azure SQL Database. In this session, we’ll develop a simple web-based book store that allows adding and searching books, with the book store data split across multiple shards. We'll also learn to query a sharded database and merge/split data across multiple shards.
With the introduction of Azure Machine Learning, predictive analytics and text analysis are within everyone’s reach. It is (relatively) easy to implement and it is easy to use when combined with Power BI. In this session you will learn how to take advantage of Azure ML from Power BI, and you will learn how powerful Power BI is as a tool for data scientists.
DAX was created as an easier alternative to MDX with the introduction of PowerPivot: multidimensional functionality with the look and feel of Excel functions and formulas. But the concept of context makes it more difficult than it might seem at first glance. In this session we will learn to handle row, query and filter context by looking at a lot of practical examples.
So you built your Azure SQL Data Warehouse; loading data into it is lightning fast and accessing the data is even faster. Now you release it to production, and chaos: hundreds of people cannot connect at the same time and everyone can see all the data in the data warehouse. What do you do? How do you address these challenges and the many more that you will face? Join this demo-heavy session to learn how to properly architect, configure and deploy an analytical topology that is flexible, performant, secure and accessible across a large organization.
This is a must-attend session for every data professional. Join me as I provide an overview of Microsoft Mobile Reporting, Microsoft's newly acquired on-premises mobile BI solution. Mobile Reporting is optimized for SQL Server and designed to enable rapid development and publishing of business intelligence in a way that delivers a premium user experience on any device. In this session I will provide an overview of Mobile Reporting, discuss its key features, provide an architectural overview and finally demonstrate how to author and publish dashboards.
Most of us are overwhelmed with data from all the different applications that we use on a daily basis. Bringing all the data together is often a very time-consuming and sometimes challenging process. Further, attempting to analyze and visualize the data poses new challenges that are sometimes difficult or impossible to overcome. Now, with Power BI, this can all be made very simple. Individuals ranging from novice information workers to advanced IT professionals can quickly and easily transform, analyze and visualize data using a single tool, Power BI Desktop. In this session we will work through four main topics: shaping data, building a data model, visualizing data and using the Power BI Server.
Join me to go under the hood with a Power BI deep dive, where I will discuss two hybrid scenarios for Power BI. I will explore the data refresh capabilities, allowing reports published to Power BI to connect to varying data sources for data refresh. I will also discuss exciting new capabilities that allow Power BI to connect directly to SQL Server Analysis Services on-premises and query it interactively, allowing customers to keep and manage data on-premises without the need to move their data to the cloud.
Join me in this demo-heavy session as I introduce Azure Data Factory, a new data orchestration and data movement service in the cloud. Using Azure Data Factory, you can ingest and move data between on-premises and cloud sources quickly and easily. I will explain and demonstrate how to schedule, orchestrate, and manage data transformations.
With the latest release of SQL Server Reporting Services (SSRS), more and more organizations are adopting it as an organizational reporting solution. As the deployments increase, so will the demand on the environment. Join this session as I explain how to architect and deploy a scalable and highly available solution. I will explain how to architect the back-end SQL Servers using technologies such as Clustering and AlwaysOn for High Availability and Disaster Recovery. In addition, I will demonstrate how to deploy scalable SSRS web front ends that leverage load balancers to properly manage scale.
All too often a forum post on erratic query performance is met with a reply ‘Oh, it’s parameter sniffing. You can fix it with <insert random solution here>.’
The problem with that answer, even if it has identified the cause, is that it’s only partly true. Parameter sniffing is not simply a problem that needs fixing; it's an essential part of well-performing queries. In most cases.

Come to this session to learn what Parameter Sniffing really is and why it’s a good thing, most of the time. Learn how to identify the scenarios where it’s not good, why a feature that is supposed to improve query performance sometimes degrades it, and what your options are for resolving the problems when they do occur.
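For context, two of the best-known (and frequently misapplied) options look like this; a minimal sketch with illustrative object names:

    CREATE PROCEDURE dbo.GetOrdersForCustomer
        @CustomerId int
    AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    -- Build the plan from average density instead of the sniffed value...
    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
    -- ...or compile a fresh plan on every execution with OPTION (RECOMPILE)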
Indexes are essential to good database performance, but it can be hard to decide what indexes to create and the ‘rules’ around indexes often appear to be vague or downright contradictory.

In this session we’ll dive deep into indexes, have a look at their architecture and internal structure and how that affects the way that indexes are used in query execution. We’ll look at why clustered indexes are recommended on almost all tables and how their architecture affects the choice of columns. We’ll look at nonclustered indexes, their architecture, and how query design affects what indexes should be created to support various queries.
While the waits and queues performance tuning methodology is well known, there can be some confusion still about what various waits mean. CXPacket, for example, is often misinterpreted.

In this session, we’ll look at the SQL Server execution model and see why waits occur, then look at some common waits and see what causes them and what they mean.
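The raw material for that discussion is a query like the following: aggregated waits since the instance last started (the excluded wait types are just a few common benign ones):

    SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'BROKER_TASK_STOP',
                            N'LAZYWRITER_SLEEP', N'XE_TIMER_EVENT')
    ORDER BY wait_time_ms DESC;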
Each new version of SQL Server sees big features, small features and minor improvements. Over the years there have been a number of minor improvements to execution plans, increasing the information they provide.

In this session we’ll look at what was added to execution plans in SQL Server 2014 and SQL Server 2016 and have a look at what improvements there are in the latest CTP of vNext, and we’ll also have a look at Live Query Statistics.
One of the new features announced for SQL Server vNext is Adaptive Query Plans, query plans that can change after the query starts executing.

This session will show why this is a radical departure from the way that things have worked until now and how it can improve the performance of some query forms. We’ll look at the places where adaptive query plans are used and compare the performance of queries using adaptive query plans to see just what kind of improvement it can make.

You have seen a lot of best practices talks on how to build proper high availability and disaster recovery solutions. In this fun but informative session, you will learn about some of the worst things seen in my 20-year IT career. The time the caretaker hit the emergency power-off switch? Or the time the vendor decommissioned the wrong SAN? There will be more stories and anti-patterns, so you can learn what not to do when designing solutions. Finally, in this session you will ensure your job is not made redundant by the process of making your data redundant.
Graph databases solve a lot of complex relationship problems that relational databases struggle to support. Relational databases are optimized for capturing data and answering transactional data questions. Graph databases are highly optimized for answering questions about data relationships. Do you, as an architect, understand which data stories need which type of technology?

  • Master Data
  • Networks & Infrastructure
  • Trees and Hierarchies
In this session you will learn which data stories are the right fit for your relational stores, and which are the right fit for graph databases. You will learn options for bringing this data together for more intelligent data solutions. You will also learn the basics of how this is implemented in the next release of SQL Server.
Drawing on over 10 years of remote working, this talk looks at the challenges and options for remote working.
By the end of it you'll know how to make the case for remote working, places to consider working from, and your work environment at home or on the go. You'll also understand the challenges of maintaining social contact, structuring your day and how to switch off.
Drawing on 15 years as an application developer working with SQL Server and different technologies, this talk gives an overview of looking at execution plans, tips for table indexes and creating stored procedures.
We also consider other things such as when to use ORDER BY, comparing object-oriented versus set-based practices, and avoiding functions in the WHERE clause.
Many want to learn new things and keep up to date with the world of technology, but few can find the time. By the end of this talk you'll have been introduced to the world of podcasts and audiobooks. You'll have learnt how to fit listening time into your regular routine and how to find options that interest you. You will know how to manage a podcast library and will have been given a few ideas for podcasts and audiobooks that might be of interest from the worlds of SQL Server, Data Science and Tech.
This talk gives an introduction to Data Science and some of the aspects such as machine learning, big data and predictive analytics. 
We'll look at the diverse skills that are useful, including statistics, data cleansing and programming.
Lastly we go through some of the resources available looking at podcasts, programming courses, conferences and options in the MS Stack.
Do you want to make your SQL Server instance stay Always On? SQL Server Always On Availability Groups provide a number of out-of-the-box enhancements in SQL Server 2012, 2014 and 2016 which help analyze common issues with relative ease. Attend this session to find out more about how you can leverage the new enhancements in Always On to improve reliability, reduce manual effort and increase uptime. This session will also do a deep dive, using demos, into the new Power BI visualizations available for common scenarios leveraging the new diagnostic and supportability improvements that were added.
Schrödinger’s Metric is an indicator which is both favorable and unfavorable until it is reported. But how do we know what to report, which models to use, and what the numbers actually mean?
Join this session to understand the concepts of Data Science, using Microsoft Data Platform technologies such as Excel, AzureML and R Services. Join us to learn, from the ground up:
 
How do we know which model to choose, and how do we understand the answers?
What is clustering, and what's its application for businesses?
What is regression, and why is it useful?
What is a neural net, and how do you interpret them?
What's the best way to visualise the results so that they are easy to understand?
...and more.

You will learn these concepts in plain English.... 

Join this session if you want to be significantly different, hold the belief that deviation can be normal, and be right 95% of the time. Or just join us if you think that, like Schrödinger’s cat, the metrics might be OK if you don't look at them.

No prior knowledge of Data Science is assumed. Come and live life to the right of the bell curve!
With the new integration features of Power BI, SSRS 2016 and Excel Online these tools certainly work better together.

So this presentation shows which reporting requirement is best delivered in Power BI, SSRS 2016 or Excel Online. There is an ever-growing number of data visualisations available within this toolset, so this presentation also covers which data visualisation to choose to meet a given business scenario …

So, choosing the best chart for combinations of the following:

• One variable, two variables, three or more variables
• Many periods, few periods
• Changing over time and static
As organizations see the benefits of the cloud, you may find yourself involved in migration projects which target the move from on-premises SQL Server to the cloud. Are you ready for this?  


In this session, we will compare and contrast different migration strategies. We will cover different ways to migrate your SQL Server database from on-premises to Azure, and how to detect and solve potential migration blockers and issues.
SQL Server and Azure are built for each other. New hybrid scenarios between on-premises SQL Server and Azure mean they don't have to exclude each other; instead you can have the best of both worlds.

For example, by taking advantage of services like Azure Blob Storage or Azure VMs we can increase the availability of our services or distribute data in smart ways that benefit our performance and decrease cost.

In this demo-heavy session, you will learn the strongest use cases for hybrid scenarios between on-premises and the cloud, and open a new horizon of what you can do with your SQL Server infrastructure.
First announced in March 2016, SQL Server on Linux is a milestone of Microsoft's history. Are you ready for SQL Server's future?

During this session we are going to understand how SQL Server works on Linux, exploring some implementation details, as well as how to perform basic management tasks to keep SQL Server running in a different operating system than Windows.

Let's dive into the "black screen" and play with Penguin together!
Operational data produced by SQL Server is only as valuable as the decisions it enables. With R Services and real-time machine learning, decisions can be better, faster and based on fresh data. Condition-based maintenance, smarter deployments, behavior and content anomaly detection: these are just a few examples, and the possibilities are endless. Come learn how to find busy tables and avoid Sch-M lock concurrency problems.
DirectQuery is a feature of Analysis Services that transforms a Tabular model into a semantic layer on top of a relational database, transforming any MDX or DAX query into a real-time request to the underlying relational engine using the SQL language. In Analysis Services 2016 this feature has been improved and optimized, removing several limitations, extending the support to relational databases other than SQL Server, and dramatically improving its performance.

In this session, you will learn what the new features of DirectQuery are, how to implement best practices in order to obtain the best results, and what the typical use cases are where DirectQuery should be considered as an alternative to the in-memory engine embedded in Analysis Services.
Tabular is a great engine that is capable of tremendous performance. That said, when your model gets bigger, you need to use the most sophisticated tools and techniques to obtain the best performance out of it. In this session we will show you how Tabular performs when you are querying a model with many billions of rows, conduct a complete analysis of the model searching for optimization ideas, and implement them on the fly, to look at the effect of using the best practices on large models. This will also give you a realistic idea of what Tabular can do for you when you need to work on large models.
How do you optimize a DAX expression? This session will introduce you to the tools you need to measure performance, gather data to find the bottlenecks, and write new optimized versions of DAX. Starting from SQL Profiler, you will learn which events are relevant for DAX and how to collect them in different environments (Analysis Services, Power Pivot, Power BI). We will show DAX Studio, which simplifies and speeds up the data collection process and makes it easy to find bottlenecks in the storage engine and formula engine. VertiPaq Analyzer and other small tools will also be used to collect other useful information. The goal of the session is to provide you with a methodology to analyze the performance of your DAX measures, find the bottleneck, and identify the main reason for a performance issue.
R is one of the most popular statistical programming languages and thus one of the most important tools for data analysts today. Microsoft has chosen to implement and embrace R across the whole BI stack in tools like SQL Server, Azure Machine Learning, Power Pivot and more.

In this session I will introduce R and place the usage of it in a context which hopefully will inspire you and encourage further immersion into the vast amount of possibilities with R. I will focus on how Microsoft has implemented R in SQL Server (R Services) and produce lots of samples on how to use R in standard T-SQL and how to reap the benefits of scalable R from other platforms.
In this session we will look at SQL Server R Services and, in particular, the new MicrosoftML package available in SQL Server vNext and how it can be used to build an in-database Predictive Analytics Model, that can scale. The session will provide a brief introduction to supervised Machine Learning before applying that to a real world scenario, taking you through each of the steps required to build an in-database predictive analytics model using R and SQL Server vNext.
This talk guides you through the process of creating a tabular model. The session will be packed with very practical tips and tricks and the steps you should take to create a proper model. The steps are based on real-life experiences, backed with a little bit of theory. After this hour you will understand how to optimize for memory usage and speed, enhance the user experience, use some DAX expressions, and pick the right tools for the job.
What is it that all these BI people are so excited about? Why does a data warehouse differ from a "normal" database? Why is creating reports not just creating reports? What Microsoft products are involved? 

In this fully packed one-hour session, everyone who is not familiar with Business Intelligence concepts is introduced to the world of BI. A BI speed-course, with some theory, and also some demos.

This will be the most productive hour of this year's SQLBits! :-)
Azure SQL DWH is based on MS SQL Server and supports T-SQL, which helps DB/DWH developers start using it without much effort. Unfortunately, there are several limitations that can make your job harder. For example, we can't use the MERGE statement for upsert tasks in the DWH, there is no IDENTITY or SEQUENCE, there are differences in implementing partition switching, and so on. In this session, I'm going to cover several tips and tricks for working around these limitations using the possibilities that are available.
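To give a taste of the kind of workaround discussed, here is one common way to emulate a MERGE-style upsert with separate UPDATE and INSERT statements; this is only a sketch, not necessarily the presenter's exact pattern, and the table names are hypothetical:

    -- ANSI JOIN syntax is not supported in UPDATE statements here,
    -- so an implicit join in the FROM/WHERE clause is used instead.
    UPDATE dbo.DimCustomer
    SET    Name = s.Name
    FROM   stg.Customer AS s
    WHERE  dbo.DimCustomer.CustomerKey = s.CustomerKey;

    -- Insert rows that do not exist in the target yet.
    INSERT INTO dbo.DimCustomer (CustomerKey, Name)
    SELECT s.CustomerKey, s.Name
    FROM   stg.Customer AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.DimCustomer AS d
                       WHERE  d.CustomerKey = s.CustomerKey);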
Job hunting practical hints and tips
Some of the topics covered in the session:
Writing CV
Tweak your CV for each job
Selling yourself, what makes you special
Elevator pitch
Dealing with job agencies

Job interview
Do your homework
Know where you are going 
Look the part
Expect the unexpected
Great answers to hard questions
Be yourself, not an imitation 
Remember the follow-up
Some will, some will not; so what, who cares, next!
As database professionals we take pride in our technical skills. We also need soft/people skills to help us succeed.
Aims of this session:
Share some tips, techniques, and ideas for working with other people, such as colleagues and customers.
What are people most interested in?
Working with a team
Communicating with others
Getting the best out of other people
What will help me to succeed?
Learn how to do text mining in R with the Monty Python movie scripts. Condensing the theory and code from the full-training day, we take a look at how you can grab text off the internet, wrangle it into an analysable format, perform sentiment analysis, and work out key topics in a body of text.

We'll be covering:
- Overview of text analysis in R
- Web scraping
- Wrangling text for analysis
- Performing sentiment analysis and visualising it
- Identifying key words
- Working with phrases
- Working out the topic of text
Data should live forever. Docker containers should be constantly killed and reborn. How do you reconcile these two opposing requirements to achieve data persistence in a Docker environment? We'll go from the basics of Docker to the ways of persisting data and the architectural decisions you need to think about.
Those maths lessons in school were long ago, and now the new hotness requires lots of statistics. It's enough to make a person think about changing careers. Tackle this worry by coming along to this session where I explain statistics simply! No scary formulae, just a practical walk through a task.

Learn about examining your data, working with samples, simple statistical models, and ways of assessing how well your model works in this live walkthrough of a challenge - predicting your age!
Learn how to do basic stuff in R via this double session. Covering the simple tasks and vital information that everyone needs to be able to analyse data in R. Topics will include data ingress, key object types, tabular data manipulation, data egress, and charting.
The package data.table is super-fast and super-powerful and can turn your long-winded and slow R code into a lean, mean, data-crunching machine. This hour takes you through the key aspects of the package and how it can make your life better.

We'll be looking at:
- Basic syntax
- Data I/O
- Joins
- Within group activities
- Pivoting data
- Cool hacks
We can all write hacky R code, but "works on my machine" (especially "when I do this, this, and this") is OK for play, not for sharing. We'll look at development best practices for R, including defensive programming techniques, testing, and package development.

We'll be covering:
- function writing
- defensive programming techniques
- package development
- unit testing
- code coverage
- continuous integration
Azure Functions are serverless, lambda-architecture dev gobbledygook. Nope; Azure Functions are cheap, cheerful, and quick ETL in Azure. Learn more in this session about how they can be used to connect to cloud and on-prem resources, what benefits and drawbacks they have, and see them in action for ETL. Learn about building distributed data platforms more quickly.
A how-to guide on choosing the most appropriate type of credential for the SQL Server service and Agent accounts covering standalone and clustered installations along with Always On availability groups.

In this presentation, I will take you through the options available to you regarding the use of SQL Server service account credentials, with regard to best practices, security concerns, and environmental considerations. This will cover versions 2008 through to 2016.

I will also be covering the topic of granting the Agent account access to local sub-systems and to external data sources using proxies and credentials.
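As a small sketch of that last point, a proxy wraps a credential and is then granted to a subsystem; all names and the secret below are placeholders:

    -- Create a credential, wrap it in an Agent proxy, and grant the
    -- proxy to the PowerShell subsystem.
    USE master;
    CREATE CREDENTIAL ETLCredential
        WITH IDENTITY = N'DOMAIN\svc_etl', SECRET = N'<password>';

    USE msdb;
    EXEC dbo.sp_add_proxy
         @proxy_name      = N'ETLProxy',
         @credential_name = N'ETLCredential';

    EXEC dbo.sp_grant_proxy_to_subsystem
         @proxy_name     = N'ETLProxy',
         @subsystem_name = N'PowerShell';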


You can solve any BI scenario in one of two ways: the right one and the wrong one. In most cases, the right one involves first building the correct data model, and then authoring some very simple DAX code. The wrong one, on the other hand, is to use a data model that just "looks the most natural" and then start writing crazy DAX code to compute your numbers. Most of our work, as consultants, is to start from wrong solutions and turn them into right ones.

In this session, we will analyze several cases where the data model matters, starting from a naïve solution for a given set of requirements, understanding where the complexity lies and why the model is the wrong one, and then solving the scenario by building the correct model. It is a very good occasion to look at different ways to model your data and to learn where to focus your attention in order to build sound BI solutions.
When moving to a cloud or hybrid environment, one of the biggest challenges is
maintaining a consistent login experience for users. Many firms have challenges
with connecting Office 365 to their Azure infrastructure, particularly in more
complex designs.  A bad design can raise security concerns or give users a bad experience. In this session you will learn about how Active Directory, Azure Active Directory, and Active Directory Federation Services work together to offer your users a single common logon experience. You will learn about the proper architecture for your security needs and scale.
Do you know all the different ways to refresh your data in the Power BI cloud service? Attend this session to get the complete overview including a lot of demos and input, so you can choose the right ways for your solution.

We will walk through the pros, cons, and limitations of all the different methods.
  1. Upload
  2. Scheduled refresh from On-prem Sources
  3. Scheduled refresh from Cloud Sources
  4. Automatic refresh from OneDrive
  5. Direct Query
  6. Live Query
  7. Realtime
Complete the loop and create reports on top of the Reporting Services database so you can answer questions like: Who is using the reports, and who is not? Are any reports not being used? What are the top 20 slowest reports? Who received the data-driven subscription? What are the dependencies between data sources, datasets, and reports? What is the specific query in a dataset or report? When was the report deployed, and by whom? Who has access to the reports, and with which permissions?

The session will walk through the creation, and all attendees will get all the reports from the demonstrations.
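As an illustration of the kind of query involved, the ReportServer catalog exposes the ExecutionLog3 view, which already answers several of the questions above; a minimal sketch:

    -- Top 20 slowest reports by average total duration (milliseconds).
    SELECT TOP (20)
           ItemPath,
           COUNT(*) AS Executions,
           AVG(TimeDataRetrieval + TimeProcessing + TimeRendering) AS AvgDurationMs
    FROM   dbo.ExecutionLog3
    GROUP  BY ItemPath
    ORDER  BY AvgDurationMs DESC;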
Power BI started out as a set of self-service BI tools in Excel and has now been merged into Power BI Desktop, with the possibility to deploy reports to the Power BI cloud service. At the same time, Power BI has become a grown-up corporate BI platform.

This session will give you the full overview of all the different ways to bring Power BI into the enterprise and use it there, including how to set up content workflow, security, auditing, and governance. The session will cover on-premises-only and hybrid scenarios, and how to combine self-service BI with enterprise reporting to gain the benefits of both control and agility.

The session is built on experience from implementing Power BI in several large Danish enterprises.
The GDPR, which comes into force in May 2018, sets strict limits on ALL businesses that collect, use, and share data from EU citizens. Understand the requirements and learn how to prepare your data architecture to provide compliance and avoid stiff fines of up to the greater of 4% of annual worldwide turnover or EUR 20 million. This session covers on-premises, in-cloud, and virtual data, and shows real-world examples and practical solutions applicable to SQL Server and EUC (end-user computing: Excel, Power BI), such as static data masking, hashing, encryption, and governance.
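As one small example of the hashing technique mentioned (only one of the several options the session covers), T-SQL's built-in HASHBYTES function can pseudonymise an identifier column; the table, column, and salt below are hypothetical:

    -- Replace a direct identifier with a salted SHA-256 pseudonym.
    -- The salt would be stored and protected separately in practice.
    DECLARE @Salt NVARCHAR(36) = N'<secret-salt>';

    SELECT CustomerID,
           HASHBYTES('SHA2_256', @Salt + Email) AS EmailPseudonym
    FROM   dbo.Customers;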
The Internet of Things (IoT) is getting more and more attention, not only on the business side but also on the consumer side. Connected fridges, cars, and smart watches: always and everywhere connected! In this session Wolfgang will show you some possibilities of consumer IoT. One ideal-looking example was the Microsoft Band, equipped with many sensors and great usability. Using the Band SDK, the heart rate sensor is read and live-streamed to Power BI. But which type of real-time functionality is best suited for that kind of data?
The different types of real-time analytics (Stream Analytics, Power BI API, real-time tiles, ...) will be presented and their pros and cons examined.
The challenge: Let's prepare a real-time dashboard of Band2 data in Power BI in 60 minutes!
SQL Server Reporting Services (SSRS) is Microsoft's grandfather technology for reporting. Although it was overhauled in SQL Server 2016, not all reporting requirements can be solved with it. Power BI, on the other side, is a young, agile, and powerful reporting technology, but it is currently only available as a cloud service serving the interactive reporting approach.

Today, many organizations are not willing or able to put their data into a cloud service for analysis, which limits the adoption of Power BI. Microsoft got onto that train and announced the on-premises version of Power BI as part of SSRS for the upcoming SQL Server (vNext). Currently available as a technical preview, the first sights look promising, but there are open questions: Which features will be available for Power BI on-prem? Can I use custom visuals? What about Power Query?
Come and join this session to get an overview of the next generation of Microsoft reporting!
With the advent of SQL Server Integration Services Catalog (SSISDB) a new place to store, execute, and monitor SSIS packages came into existence.

This session shows the different aspects of programmability in the context of SSISDB. Beginning with a short overview of the underlying database objects, a deeper look at SSISDB's stored procedures follows. A side-step from T-SQL to C# and the available SSIS SDK illustrates a different view of SSISDB access. In conclusion, the analysis and reporting aspects of SSISDB programmability are shown (SSRS reports and Power BI). What are the new objects Microsoft added in the last versions of SSIS (2016 and vNext)? How can those objects be used in your own applications? 

After this session, you will have a deeper knowledge about SSISDB's content and programming interfaces, and you will know how to start SSIS packages using T-SQL and C#. The pros and cons of these programming techniques will also be discussed.
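As a preview of the T-SQL side, starting a deployed package takes just two of the catalog's documented stored procedures; the folder, project, and package names below are placeholders:

    DECLARE @execution_id BIGINT;

    -- Create an execution instance for a package deployed to the catalog.
    EXEC SSISDB.catalog.create_execution
         @folder_name  = N'MyFolder',
         @project_name = N'MyProject',
         @package_name = N'Package.dtsx',
         @execution_id = @execution_id OUTPUT;

    -- Start it (asynchronously by default).
    EXEC SSISDB.catalog.start_execution @execution_id;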
You are using Extended Events and Dynamic Management Views (DMVs) to analyze performance problems in your databases. How do you go from there to building a performance-monitoring system that is easy to use and that works at scale? In this session, you will learn techniques for loading and parsing Extended Events into a central monitoring database in close to real time, correlating the events with query plans, indexing the data for performance, and making the information easily available.
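For a flavour of the loading step, the documented sys.fn_xe_file_target_read_file function reads .xel files straight from T-SQL; the file path below is illustrative:

    -- Read raw events from an Extended Events file target and
    -- shred a couple of fields out of the event XML.
    SELECT x.xd.value('(event/@name)[1]', 'varchar(100)')   AS event_name,
           x.xd.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    FROM  (SELECT CAST(event_data AS XML) AS xd
           FROM   sys.fn_xe_file_target_read_file(
                      N'C:\XEvents\QueryTrace*.xel', NULL, NULL, NULL)) AS x;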
Numerous techniques for concatenating string data can be found online, and even in some books. In this session you will learn two of them that are reliable, and fairly easy to understand, implement, and deploy. You will also learn how to control the order of the values being concatenated, a feature that is not natively provided by SQL Server.
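One widely used pattern of this kind (shown purely as a sketch with hypothetical table names, and not necessarily one of the two techniques the session presents) is FOR XML PATH concatenation, which also supports an ORDER BY on the concatenated values:

    -- Concatenate each customer's product names, ordered by date,
    -- into a single comma-separated string.
    SELECT c.CustomerID,
           STUFF((SELECT N', ' + o.ProductName
                  FROM   dbo.Orders AS o
                  WHERE  o.CustomerID = c.CustomerID
                  ORDER  BY o.OrderDate
                  FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                 1, 2, N'') AS ProductList
    FROM   dbo.Customers AS c;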
Learn about typical challenges in legal information retrieval, the techniques used in querying data by determining its legal relevance, and how SQL Server 2016 can be used to simplify some of these techniques.
For instance, a very important element of legal proceedings is determining which law should be applied. Legal rules change over time, resulting in multiple versions of the same law. Laws do not last forever and can be replaced by other laws (one old law can be replaced by one or more new laws). The timeline of events that are relevant to a particular court case provides one set of criteria to be used to determine the correct law and the correct version of the law.
In this session you will learn about a typical challenge in legal information management: how to determine whether two (or more) offers to sell, or purchase, products (or services) represent the basis of a legally binding contract.
You will learn which techniques are used to correlate the data in terms of the range of objects, their quantities, their qualitative properties, while also taking into account the temporal properties of the offers.
You will also learn how clustered columnstore indexes can be used to improve the efficiency of the queries needed to solve this important legal problem.
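To sketch the columnstore idea (all object names here are hypothetical), a clustered columnstore index stores the offers table in compressed, batch-mode-friendly segments, which suits the large scans behind temporal overlap queries:

    -- SQL Server 2014+ syntax for a clustered columnstore index.
    CREATE CLUSTERED COLUMNSTORE INDEX ccx_Offers ON dbo.Offers;

    -- Find pairs of offers for the same product whose validity
    -- periods overlap in time.
    SELECT a.OfferID, b.OfferID
    FROM   dbo.Offers AS a
    JOIN   dbo.Offers AS b
      ON   a.ProductID = b.ProductID
      AND  a.OfferID   < b.OfferID
      AND  a.ValidFrom < b.ValidTo
      AND  b.ValidFrom < a.ValidTo;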
XML was added to the collection of SQL Server native data types in SQL Server 2005. Being a complex data type, XML is accompanied not only by a set of dedicated T-SQL functions but also by a complete querying language. In this session you will learn about the Microsoft SQL Server 2008 implementation of the World Wide Web Consortium's XML Query Recommendation, and learn how to compose XML data (from existing SQL Server data or simply from scratch), how to retrieve relational data from XML documents to be used in a SQL Server database, and how to manipulate XML data using Transact-SQL and the XML Data Manipulation Language (XML DML). You've already mastered all the primitive SQL Server data types, so why would XML be an exception?
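A tiny sketch of the shredding direction, using the nodes() and value() methods of the xml type:

    DECLARE @x XML = N'<order id="1">
                         <line sku="A" qty="2" />
                         <line sku="B" qty="5" />
                       </order>';

    -- Turn XML attributes back into relational rows.
    SELECT l.c.value('@sku', 'varchar(10)') AS sku,
           l.c.value('@qty', 'int')         AS qty
    FROM   @x.nodes('/order/line') AS l(c);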
Have you ever had the need to access documents in your database as if they were files in the file system? SQL Server 2012 introduces a brand new method for managing large data objects (BLOBs) in a database. FILETABLEs provide access to data using Transact-SQL, just like any other table inside the database, while at the same time providing access to the data using the operating system File I/O API, i.e. just like any other folder in the file system. In this session you will learn how to upgrade your document management solutions by migrating your large data to FILETABLEs. The session covers the two most typical migration scenarios: migrating from a distributed data store, where files are stored outside the database, and migrating from a homogeneous database, where the files are stored in the database.
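Creating a FILETABLE is a one-statement affair once the prerequisites are in place; a minimal sketch with illustrative names:

    -- Prerequisites: FILESTREAM enabled on the instance, and the database
    -- created (or altered) with a FILESTREAM filegroup, a DIRECTORY_NAME,
    -- and NON_TRANSACTED_ACCESS = FULL.
    CREATE TABLE dbo.Documents AS FILETABLE
    WITH
    (
        FILETABLE_DIRECTORY = N'Documents',
        FILETABLE_COLLATE_FILENAME = database_default
    );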
SQL Server 2014 introduces Extreme Transaction Processing, a brand new memory-optimized data management feature, targeting OLTP workloads. In this session you will learn about two new, and not very well known, features that you can use to share sets of data between modules - either within the database, or between client applications and databases: - Memory-optimized Table Variables; and - Memory-optimized Table-valued Parameters.
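For illustration, a single memory-optimized table type underpins both features; a minimal sketch (the type and procedure names are hypothetical):

    -- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TYPE dbo.OrderIdList AS TABLE
    (
        OrderID INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    )
    WITH (MEMORY_OPTIMIZED = ON);

    -- Used as a memory-optimized table variable...
    DECLARE @ids dbo.OrderIdList;
    INSERT @ids VALUES (1), (2), (3);

    -- ...or passed to a module as a table-valued parameter:
    -- EXEC dbo.ProcessOrders @OrderIds = @ids;   (hypothetical procedure)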
What can we discover by exploring SQL Server’s metadata? The catalog views and related objects contain a wealth of information about how SQL Server works and how it keeps track of YOUR data. We’ll look at the catalog views, property functions and a few system procedures.  We’ll briefly touch on the dynamic management objects, as well.

Goals:

  • Describe the different types of metadata objects and how they relate to each other
  • Explore the information available in the metadata 
  • Create our own tools for providing specific information 
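A small taste of what the catalog views can answer, joining a handful of them to list every user table column with its type:

    SELECT s.name  AS schema_name,
           t.name  AS table_name,
           c.name  AS column_name,
           ty.name AS type_name
    FROM   sys.tables  AS t
    JOIN   sys.schemas AS s  ON s.schema_id     = t.schema_id
    JOIN   sys.columns AS c  ON c.object_id     = t.object_id
    JOIN   sys.types   AS ty ON ty.user_type_id = c.user_type_id
    ORDER  BY s.name, t.name, c.column_id;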
You have got your first Oracle to SQL migration project. You are already a SQL rockstar but know little about Oracle. You are pretty excited and confident that you will successfully execute the migration project. Sorry, you may be in for a lot of surprises! In a migration project, just having SQL skills is not enough; you need to know a good deal about Oracle too. Join this session to get first-hand knowledge of the technical similarities and dissimilarities between these two great database platforms. Amongst many other things, we will compare architecture, client connectivity, storage, and programmable objects. Well, gaining some Oracle skills should be your first priority in your migration project.
Not every workload can benefit from in-memory tables. Memory-optimized tables are not a magic bullet that will improve performance for all kinds of transactional workloads. Therefore, it is critical that you test and benchmark in-memory performance for your SQL Server deployments before you decide to migrate disk-based tables to memory-optimized tables. In this session, you will learn how to:
a. Baseline current performance
b. Identify the right candidates for in-memory
c. Generate sample production data
d. Create a simulated production workload
e. Test and benchmark in-memory performance with simulations
f. Compare in-memory performance with the baseline
You will also learn about a variety of tools and techniques that can be used in your proof of concept.
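For reference, migrating a candidate table means re-creating it as memory-optimized, along these lines (the table design and bucket count are illustrative only):

    -- Requires a MEMORY_OPTIMIZED_DATA filegroup in the database.
    CREATE TABLE dbo.ShoppingCart
    (
        CartID     INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerID INT       NOT NULL,
        CreatedUtc DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);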
Search might be more complicated than you think.  Doing search well in your application will make it engaging as well as give users confidence in it.  This session is going to look at getting the most from Azure Search.  We will look at

  1.  TF-IDF indexes
2.  Free/directed search
3.  Facets
4.  Lemmatization
5.  Suggestions
6.  Tokenization/filters

I will show examples of all of these and give you an insight into just how much you can do with a well-thought-out application of Azure Search.
There are many community scripts out there that DBAs can use without having to write them themselves, like Ola Hallengren's scripts, Brent Ozar's, or sp_WhoIsActive by Adam Machanic. But how about PowerShell? It turns out this world is still a bit unknown to the DBA, but there is a fast-growing community with extremely useful scripts for day-to-day SQL Server management. In this session we will take a look at my favourite scripts, and we will also write a bit of PowerShell ourselves.
In this session Richard Conway will show you, starting from grass-roots, no knowledge of Spark, how to navigate the Spark framework ecosystem and build complex batch and near-real-time applications that use Spark's machine learning library, MLlib. He'll cover everything from data shaping, basic statistics at scale, normalising, testing, and training to building services and complex pipelines underpinned by machine learning. This is a very fast-paced, demo-heavy session going from nothing to big data and machine learning superstar by virtue of Apache Spark. If you're thinking of using Hadoop in the future, this is the one session you don't want to miss.
In this talk Laura will go through the theory and practice of time series analysis, with examples in R covering concepts such as lag, serial correlation, and Box-Ljung tests. She'll focus on the practical applications of time series analysis and show how data can be modelled using libraries such as xts, zoo, and forecast to make predictions on markets and seasonal behaviours. In addition, Laura will cover an introduction to deep learning, specifically how Recurrent Neural Networks (RNNs) and LSTMs can be used to make time series predictions with R's rnn package and TensorFlow.
The next version of Microsoft SQL Server is coming to Linux and is already in preview. In this session, Program Managers from Microsoft will provide an introduction to SQL Server and how it runs on Linux and container platforms. The scope and schedule of the first release on Linux will be covered, and deployment, configuration, high availability, storage, and performance/scale will be topics for a more technical drill-down.
Microsoft Session:
The next version of Microsoft SQL Server is currently in public preview, and SQL Server on Linux is a big part of the release. In this session, we will drill down into the architecture of SQL Server on Linux. We will also look at some best practices we have learned from customer engagements in the public preview to date.
Microsoft Session:
This session will cover common DBA troubleshooting scenarios for SQL Server on Linux. In this demo-rich session, we will cover various everyday scenarios, from troubleshooting startup and configuration to performance and bottleneck analysis. We will discuss the existing and new tools that will enable DBAs to effectively troubleshoot SQL Server on Linux.
Microsoft Session:
This session will cover High Availability and Disaster Recovery solutions for SQL Server on Linux. What technology options are available, and how do you build a highly available database solution on Linux? How will the various planned maintenance and unplanned downtime scenarios work? We will discuss the design patterns and architecture blueprints for HA/DR for SQL Server on Linux.
Docker has come to Windows, and SQL Server is coming to Linux. Can you run SQL Server in Docker? Why would you? In this session I'll show you what Docker is, what you can use it for, and what the use case is regarding SQL Server. I'm using SQL Server on Docker for Windows myself in test environments, for instance; it turns out to be very useful in continuous integration and database upgrade testing scenarios. We'll discuss production scenarios as well.
This session, designed for developers and data scientists, will ramp up the attendee very quickly on Microsoft's powerful machine learning algorithm APIs as a part of Cognitive Services and chat bot development tools as part of the Bot Framework. These tools will be called out separately as independent and innovative tools-of-the-trade, as well as a big part of the session focusing on the power of combining these tools to create intelligent chat bots (think instant messaging a picture to get an intelligent caption or annotating conversations based on key phrases). This session will help attendees decide if and how they want to make a chat bot to solve a repetitive task or clever scenario they have encountered. User experience will be heavily emphasized to create the best bot experiences. Components will be laid out for the attendee so that in the end they will see a working, published bot.
Microsoft Session:
Whether you are moving existing databases to Azure or designing a new SQL-centric solution on Azure, availability and connectivity of Azure SQL Database is always a fundamental key requirement. In this session we'll share best practices for designing highly available SQL Databases in the context of cross-region deployment, and how to make your application robust enough to handle transient connectivity issues.

Microsoft Session:
Are your customers considering moving their databases to Azure SQL DB and need help figuring out the optimal migration approach? Are you in the middle of an Azure migration project? Come to this session to learn about Azure SQL DB migration approaches, challenges, and solutions from SQLCAT. We will present practical migration guidance and learnings from our engagements with high-scale and complex customer migrations. Whether you are an Azure SQL DB expert or just starting to explore the technology, this session will help you identify and implement the optimal migration path for your customers, deal with complexities, and take advantage of the latest database platform features.