We all write SQL scripts, but how do we know that what we write returns the correct results? In this session I will explain the importance of unit testing your code, what to test for, and most importantly what free tools you can use to make this easy. By the end of this session you will be equipped with the information you need to go and implement unit testing on your code, so that you can confidently carry out stress-free system releases.
Putting Your Head in the Cloud – A Beginner’s Guide to Cloud Computing and SQL Azure

Although Microsoft Azure and the concept of Cloud Computing have been around for a number of years, they are still a mystery to many.

This talk takes a look at Cloud Computing – what it is, the types of Cloud available and their advantages and disadvantages.

We’ll then look at Windows Azure, and specifically SQL Azure DB, to see how to create and manage SQL databases in the Cloud. By the end of this talk you will be ready to put your head in the cloud and start taking advantage of what the cloud has to offer.
Anyone can type commands into R, but that is not the same as actually 'doing' statistics for analytics. It is entirely possible to run statistical methods without really understanding what’s happening, and even to misuse those methods along the way.

Knowledge is what really drives each phase of your analysis and lets you create effective models that the business can use to generate actionable insights. It can be difficult to see when someone is building faulty statistical models, especially when their intentions are good and their results look pretty! Results are important, and it's down to you to create models that are sound and robust.

In this session, we will look at modeling techniques in Predictive Analytics using R, using our boozy day at the Guinness factory as a backdrop to understanding why statistical learning is important for analytics today.
Drinking Guinness is optional, but admittedly might be preferred for this intensive session.
For far too long, I thought that statistics only contained information on table row counts. While they do contain that information, there is more to them than that. In this beginner session, we’ll go over statistics – how they are created, the different types of statistics that exist, how they’re maintained, and how the Query Optimizer uses them. We will also touch on the system tables and DMVs that provide additional information about your statistics, and we'll go over the cardinality estimator changes in SQL Server 2014. At the end of this session, you should have a better idea of how the query optimizer within SQL Server makes decisions on how to gather data.
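As a taste of the kind of metadata this session covers, one way to inspect a table's statistics is through the `sys.stats` catalog view together with the `sys.dm_db_stats_properties` DMV; the table name below is purely illustrative:

```sql
-- List every statistics object on a table, with last update time and row counts
SELECT  s.name           AS stats_name,
        s.auto_created,                     -- created automatically by the optimizer?
        sp.last_updated,
        sp.rows,                            -- rows in the table when last sampled
        sp.rows_sampled,
        sp.modification_counter             -- modifications since the last update
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.Sales');  -- hypothetical table
```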
SQL Server 2016 is just around the corner, and with it come some great new features and enhancements to existing capabilities. In this session we will look at some of these and understand how they might change the way that we design and implement SQL Server solutions. We will focus on the SQL Server engine, covering topics including Operational Analytics, Temporal & Stretch Tables and Always Encrypted. We will also have a look at some of the old favorites, including enhancements to In-Memory OLTP (Hekaton) & AlwaysOn Availability Groups. By the end of this session you will have a jump-start on getting ready for the next version of SQL Server, whether you are an application developer, DBA or Business Intelligence developer.
This full day seminar covers advanced T-SQL querying and programming topics. It is recommended for T-SQL practitioners (developers and DBAs) who have at least one year of experience writing and tuning T-SQL code. The seminar explains how to solve common problems elegantly and efficiently. Following are the topics that will be covered:
  • Window Functions and their Advanced Uses
  • Advanced Uses of the APPLY Operator
  • Performance Problems with UDFs
  • Parameter Embedding with Dynamic Filtering and Sorting
  • Using the HIERARCHYID Datatype
  • In-Memory OLTP
  • Temporal Tables
  • Sequences
  • MERGE Statement Tips
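To give a flavour of the material, one classic use of the APPLY operator from the list above is the "top N per group" pattern; the table and column names here are hypothetical:

```sql
-- For each customer, return their three most recent orders
SELECT  c.custid, c.custname, o.orderid, o.orderdate, o.totaldue
FROM    dbo.Customers AS c
CROSS APPLY (SELECT TOP (3) orderid, orderdate, totaldue
             FROM   dbo.Orders AS o
             WHERE  o.custid = c.custid
             ORDER BY o.orderdate DESC) AS o;
```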
There’s so much more to running totals than meets the eye. You can apply windowed running aggregate calculations and their variants (windowed ranking calculations) to solve a wide variety of T-SQL querying tasks elegantly and efficiently. This session will show you how. Some of the solutions that rely on running total calculations are downright beautiful and inspirational.
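A minimal sketch of the kind of windowed running aggregate the session builds on (the schema is illustrative):

```sql
-- Running balance per account, ordered by transaction date
SELECT  actid, tranid, trandate, val,
        SUM(val) OVER (PARTITION BY actid
                       ORDER BY trandate, tranid
                       ROWS UNBOUNDED PRECEDING) AS running_balance
FROM    dbo.Transactions;
```

The explicit `ROWS UNBOUNDED PRECEDING` frame is deliberate: the default `RANGE` frame is both subtly different with ties and typically slower.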
You're a SQL Server Developer, and you're ready to learn how to design and tune indexes. In this day-long, example-packed seminar, Microsoft Certified Master Kendra Little will build your index design and tuning skills. You will learn:
  • Why choosing the right key column order can make a huge difference in query performance
  • How included columns can be used in more ways than you might expect
  • Good and bad formulas for clustered indexes and guidelines to keep your table schema efficient
  • When indexed views will make your queries faster and how to avoid pitfalls with them in both Standard and Enterprise Edition
  • How to find and interpret index requests from SQL Server's "missing index" feature
  • How to identify which queries need indexes the most using the execution plan cache and SQL Server 2016 Query Store
  • How to identify the paradigms where table partitioning shines
  • What partition elimination is, why it's critical to query performance, and how to tell if it's happening in your queries
  • Scenarios where Partitioned Views are the best fit
You'll get scripts for all the examples in this demo-packed course, plus a set of online exercises to further increase your knowledge after the day is over.
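For example, the "missing index" requests mentioned above can be read straight from the DMVs; this is a hedged sketch with the column list trimmed for brevity:

```sql
-- Index requests recorded by the optimizer since the last instance restart
SELECT  d.statement          AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact    -- estimated % query-cost improvement
FROM    sys.dm_db_missing_index_details     AS d
JOIN    sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```

As the session will discuss, these are raw requests, not designs: they still need to be interpreted and consolidated before creating any index.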
Sometimes bad execution plans happen to good queries. You may have heard that it's better to rewrite complex T-SQL, but is using a temporary table better than using query hints? In this session you'll learn the pros and cons of using hints, plan guides, and the new Query Store feature in SQL Server 2016 to manipulate your execution plans. You'll take away a checklist of things to do every time you decide to bully an execution plan.

With the release of the public preview versions of SQL Server 2016, we were finally able to play with what is, in my opinion, one of the most exciting new features in SQL Server 2016: the Query Store! The Query Store serves as a flight recorder for your query workload and provides valuable insights into the performance of your queries. It doesn’t stop there, however: using the performance metrics the Query Store records, we can decide which execution plan SQL Server should use when executing a specific query. If those two features aren’t enough, the Query Store exposes all this information through easy-to-use reports and Dynamic Management Views (DMVs), removing a great deal of the complexity of query performance analysis.

During this session we will take a thorough look at the Query Store: its architecture, the built-in reporting, the DMVs, and the performance impact of enabling the Query Store. Whether you are a DBA or a developer, the Query Store has information that can help you analyze performance issues or write better queries!
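For a flavour of what the session covers, enabling the Query Store and pinning a plan takes only a few statements; the database name and the query/plan IDs below are placeholders you would first look up via the reports or DMVs:

```sql
-- Turn on the Query Store for a database
ALTER DATABASE MyDatabase SET QUERY_STORE = ON;

-- Inspect captured queries and their plans
SELECT  qsq.query_id, qsp.plan_id, qst.query_sql_text
FROM    sys.query_store_query      AS qsq
JOIN    sys.query_store_plan       AS qsp ON qsp.query_id      = qsq.query_id
JOIN    sys.query_store_query_text AS qst ON qst.query_text_id = qsq.query_text_id;

-- Force SQL Server to keep using a specific plan for a query
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```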

You are a DBA or developer working on applications with mission-critical performance requirements so exacting that a level 400–500 understanding of the database engine is required in order to extract every last ounce of performance from it. This session aims to deliver on this by providing a 360-degree view of what the database engine is doing, and by showing how to crack 'hard' problems involving undocumented waits and spinlock activity that the standard out-of-the-box SQL Server tools provide little or no insight into. Techniques will be covered for performing deep analysis of CPU saturation scenarios, wait analysis at thread level, and troubleshooting bad device drivers, networking and IO, covering the full IO path from the database engine right through to device-driver level, all via the Windows Performance Toolkit.
"In memory" is a hot topic in the database world at present, but how do modern server CPUs utilise memory? Does the story end with main memory? What about NUMA and the memory hierarchy on the actual CPU? Do memory access patterns matter? Does the CPU socket that certain workloads are executed on matter, and how can all of this be leveraged in the database engine to our benefit? All these answers and more will be covered at level 400, including large memory pages, spinlocks, optimising hash joins to leverage the CPU cache, the OLTP database engine, and the LMAX queuing pattern. During this journey, everything a SQL Server professional needs to know about memory will be covered, along with deep insights into the database engine, CPU architectures, and the use of the Windows Performance Toolkit to quantify the performance-related behaviour of the database engine.
Learn Columnstore Indexes in just one day, starting with the basics of the structure, stepping into the internal details, learning how to load data into them, and finishing with advanced concepts of Batch Mode processing and performance tuning.

Microsoft added the first implementation of Columnstore Indexes in SQL Server 2012 with the Nonclustered Columnstore, and SQL Server 2014 brought us updatable Clustered Columnstore Indexes, while SQL Server 2016 adds two new areas built on updatable Nonclustered Columnstore Indexes: Operational Analytics and Operational Analytics InMemory.
The first addition relates to the traditional row-based storage, while Operational Analytics InMemory focuses on integration with the In-Memory technology in SQL Server (also known as Hekaton).

This training day is all about the differences between the implementations, their advantages and limitations, and how to get the best out of all the types and incarnations of Columnstore Indexes.
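As a minimal sketch of the day's starting point, creating each flavour of Columnstore Index is a single statement; the table and index names are illustrative:

```sql
-- Convert a rowstore fact table to clustered columnstore storage (2014+)
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Nonclustered columnstore over selected columns
-- (read-only in 2012, updatable in 2016)
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
    ON dbo.FactSales (SaleDate, ProductKey, Quantity, Amount);
```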
CISL (https://github.com/NikoNeugebauer/CISL) is a free and open-source Columnstore Indexes Scripts Library that allows any user to get advanced insights into their Columnstore Indexes. With the help of CISL you can discover which tables you should consider converting to Columnstore Indexes, and which obstacles prevent you from doing so, in a matter of a couple of clicks.

Learn how to use the Columnstore Indexes maintenance solution in CISL to keep your Columnstore performance at maximum speed. The maintenance solution will do all the necessary work for you, while letting you configure the way you want it to operate.
Tired of bar charts? We'll build out a custom Power BI visual and show the power of Power BI whilst going into a deep dive on how this is achieved. We will be exploring web technologies along with data technologies, and seeing how some very powerful constructs are used to produce Power BI reports.

We will be covering a variety of content, including: TypeScript, JavaScript, HTML5, Gulp, Visual Studio Code, the MVVM pattern, D3.js and, without giving the game away too much, Google Maps.
You have waited years for improvements to Reporting Services to create and deliver even better reports for your business users. Wait no more: Reporting Services will deliver with the latest updates in SQL Server 2016. Take a hands-on tour with Chris Testa-O'Neill of the new features of Reporting Services. In this one-day workshop, you will learn:

  • Microsoft Reporting Strategy – Understand where Reporting Services fits into this strategy in 2016
  • Report Development Features – Explore the new options available for creating reports
  • Subscription Improvements – Have greater control over what is delivered to your users
  • Power BI Integration – Seamlessly integrate your SSRS reports with Power BI reports
  • SSRS and Datazen – See how Datazen completes the reporting strategy

Additional smaller features will also be covered throughout the workshop to provide a complete picture of all the new features.

This session is for report developers, report writers and consultants who want to deliver great reports for their users in a timely manner. Also bring your laptop with Reporting Services 2016 installed, as you will be given the opportunity to get hands-on with the technology.
Do you get involved in the creation of Business Intelligence solutions within your organisation? This session will show you the new features of the SQL Server 2016 Business Intelligence components, ranging from performance improvements in SSAS, to an array of visualisations in Reporting Services, to new features of SSIS, MDS and DQS. Whether you are a new or an experienced SQL Server Business Intelligence developer, this session will provide you with knowledge of the new features of the SQL Server 2016 BI components. Join Chris for a demo-packed workshop to learn about the world of data quality using Master Data Services, Data Quality Services, Data Warehouse design patterns, and the ETL capabilities of Integration Services. During the workshop you will build a Data Warehouse to reinforce your learning.
Massively Parallel Processing (MPP) database systems are designed to deliver performance at scale. This one day session will give you an excellent foundation in the concepts underpinning MPP. It will also accelerate your development skills and practical experience using Azure SQL Data Warehouse and APS.

This pre-con will contain a number of lab exercises that you will be able to work on both on site and at home at your leisure.

JRJ is a Principal Program Manager at Microsoft. His focus and passion is MPP. He developed the APS training course material and co-authored many topics on Azure for Azure SQL Data Warehouse.

Topics covered in the day's session will include:
  • Conceptual introduction to massively parallel processing (MPP)
  • Data warehouse design
  • Query execution and optimisation
  • Data migration strategies
  • Data integration options and considerations
Technologies covered during the day will include:
  • Analytics Platform System
  • Azure SQL Data Warehouse
  • PolyBase
  • Azure Data Factory
  • Power BI
In the SQL Server and Power BI suites, you can find nearly anything you need for analysing your data. SQL Server 2016 closes one of the last gaps: support for statistics beyond the basic aggregate functions, and support for other mathematical calculations. This is done through support for R code inside the SQL Server Database Engine. This session goes beyond showing the basics, i.e. how to use R in the Database Engine and in Reporting Services reports; it also shows and explains some advanced statistical and matrix calculations.
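As a taste of the mechanics, running R inside the Database Engine in SQL Server 2016 goes through `sp_execute_external_script` (the feature must first be enabled via `sp_configure 'external scripts enabled', 1`; the input query and table are illustrative):

```sql
-- Compute summary statistics for a result set using R inside SQL Server
EXEC sp_execute_external_script
     @language = N'R',
     @script   = N'OutputDataSet <- data.frame(
                       mean_amount = mean(InputDataSet$amount),
                       sd_amount   = sd(InputDataSet$amount));',
     @input_data_1 = N'SELECT amount FROM dbo.Sales;'  -- hypothetical table
WITH RESULT SETS ((mean_amount FLOAT, sd_amount FLOAT));
```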
This seminar provides an introduction to the world of Big Data. It begins with an overview of the challenges and concepts around Big Data, as well as the various data platform types that evolved in order to address these challenges, such as key-value and columnar databases. The seminar also includes a detailed review of the current major vendors and products in the market, whether in the cloud or on-premises, including a discussion of the pros and cons of each one as well as a comparison between the different platforms. We will also cover some Big Data use cases and scenarios, choosing the best data platform for the job and designing the Big Data architecture for each case.


- Defining Big Data
- Introduction to Big Data
- Big Data Challenges
- Big Data Lifecycle Management

- Exploring the Different Big Data Platform Types
- SMP vs MPP Architectures
- The MapReduce Programming Model
- The Relational Data Model
- The Key-Value Model
- Document Databases
- Graph Databases
- Columnar Data Stores
- Search Technologies
- Streaming Analytics
- Machine Learning

- Leading Vendors and Products
- Open-Source vs Closed-Source Software
- Cloud vs. On-Premises
- The Apache Software Foundation
- Big Data Market Research
- Leading Products (Very Long List Here…)

- Big Data Use Cases
- Internet of Things
- Funnel Conversion
- Behavioral Analytics
- Predictive Analytics
- Fraud Detection
- Twitter Big Data Architecture

- Summary
- Big Data Made Simple
- Big Data Roadmap
- Additional Resources
Have you ever looked at an execution plan that performs a join between two tables and wondered what a "Left Anti Semi Join" is? Joining two tables in SQL Server isn't always as simple as it looks! Join me in this session, where we will take a deep dive into how join processing happens in SQL Server. In the first step we lay out the foundation of logical join processing. We will then dive further into physical join processing in the execution plan, where we will also meet the "Left Anti Semi Join". After attending this session you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will be the easiest part for you.
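As an illustrative example of where that operator shows up: a NOT EXISTS query like the one below (tables hypothetical) is typically implemented by the optimizer as a Left Anti Semi Join, returning rows from the outer table that have no match in the inner table:

```sql
-- Customers who have never placed an order:
-- the plan for this query usually contains a Left Anti Semi Join
SELECT c.custid, c.custname
FROM   dbo.Customers AS c
WHERE  NOT EXISTS (SELECT *
                   FROM   dbo.Orders AS o
                   WHERE  o.custid = c.custid);
```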
Locks are used by SQL Server to isolate database users from each other. Unfortunately, incompatible locks lead to blocking, and blocking situations always hurt your end users' response times. In this one-day workshop we will take a detailed look at locking and blocking in SQL Server, and at how you can influence SQL Server in this area. During the precon we will cover the following areas:
  • Transactions
  • Isolation Levels
  • Locking
  • Deadlocking
  • Latches & Spinlocks
What determines when a plan is reused? When is it better to recompile? Reusing an inappropriate plan can cause a huge increase in query response time. On the other hand, recompiling a frequently run query every time it’s executed can also cause big performance problems. In this seminar, we’ll discuss how SQL Server decides whether to recompile or reuse an existing plan, and how you can tell if that’s a good choice. We’ll also look at how you can ‘encourage’ SQL Server to do what you want it to do. The seminar includes extensive demonstrations that illustrate the details of SQL Server plan caching and recompilation, as well as the internals of the plan cache and the metadata available for examining its contents. This seminar will be presented on SQL Server 2014 and will cover features specific to that version, but most of the information is relevant to SQL Server 2012 and SQL Server 2008 as well.
Business Intelligence Markup Language (Biml) automates your BI patterns and eliminates the manual repetition that consumes most of your SQL, SSIS, and SSAS development time. In this session, Scott Currie (the creator of Biml) and Biml Heroes Andy Leonard and Cathrine Wilhelmsen will guide you through learning Biml - from installing free Biml tools to creating advanced Biml automation solutions. The session will be largely demonstration driven, and reusable sample code will be distributed for you to use in your own projects. Best practices will also be a major focus of the session, so even Biml enthusiasts may enjoy the session as a refresher.

A preliminary agenda includes:

8:15am – 9:00am Registration
9:00am – 9:30am Opening Comments
9:30am – 10:30am Biml Syntax and Structure
10:30am – 10:45am Break
10:45am – 11:30am C# & VB.NET Syntax Accelerator
11:30am – 12:15pm Create & Load A Staging Environment In One Hour
12:15pm – 1:00pm Lunch
1:00pm – 2:00pm A Real-World Metadata Framework
2:00pm – 2:15pm Break
2:15pm – 3:15pm Managing Large Solutions and Organizing Your Code
3:15pm – 3:45pm Deployment & Automation
3:45pm – 4:00pm Break
4:00pm – 4:15pm Team Development
4:15pm – 4:30pm Biml in the Cloud
4:30pm – 4:45pm The Biml Ecosystem
4:45pm – 5:00pm Ask the Experts
Let's talk about how you can get the most out of Azure DocumentDB. In this session we will dive deep in to the mechanics of DocumentDB and explain the various levers available to tune performance. From advanced query features (including some amazing new ones) to indexing to using JavaScript integrated transactions - this session will equip you with the best practices and nuggets of information that will become invaluable tools in your toolbox for building blazingly fast large scale applications.
Application developers now support unprecedented rates of change – functionality must rapidly evolve to meet customer needs and respond to competitive pressures. To address these realities, developers are increasingly selecting document-oriented databases (e.g. MongoDB, CouchDB, Azure DocumentDB) for schema-free, scalable and high performance data storage. While schema-free databases make it easy to embrace changes to your data model, you should still spend some time thinking about your data.
In this talk, you will get an overview on what to think about when storing data in a document database. What is data modeling and why should you care? How is modeling data in a document database different to a relational database? How do you express relationships in a document database? 
Server-down situations are stressful for everyone. Tempers flare and every second counts as you struggle to get things running again. Highly available SQL Server architectures are not infallible; they can experience, and sometimes even cause, outages – especially if they are not implemented properly. Troubleshooting failover cluster instances (FCIs) and availability groups (AGs) is not only a skillset anyone who deploys these features should have, but also a difficult one to acquire, since these features are tightly coupled solutions utilizing a Windows Server Failover Cluster (WSFC), networking, Active Directory, DNS, and more. DBAs are not always aware of these dependencies, nor do they usually have access to them or know when they are altered. Unfortunately, most availability issues with FCIs and AGs can rarely be fixed by concentrating only on SQL Server itself.

This full day preconference taught by Microsoft High Availability MVP Allan Hirt will teach you how to approach and deal with problems related to the different clustered configurations of SQL Server whether they are FCIs or AGs. Topics include:
  • Learning about the tools and utilities you have at your fingertips (including logs such as the WSFC log) for diagnosing problems, with tips on how, when, and where to use them
  • Common real world problems you may encounter
  • How to avoid problems by planning and implementing clustered SQL Server configurations properly
  • Ensuring your quorum configuration is optimal to guarantee uptime and not cause downtime
  • Determining go/no go points and when you should force quorum or do something more radical

If you are supporting FCIs or AGs, this session is one you should attend so you can always be the hero … or at least sleep better at night.
Released with SQL Server 2014 and highly improved in SQL Server 2016, In-memory OLTP (Hekaton) is ready to boost the performance of your database. 

In this session, we will introduce a step-by-step approach to adopting the technology, aimed at professionals who are interested in how it can fit into their own environments. 

Join us and get ready to reproduce our field-tested implementation process all the way from an essential start, to a successful end.
The last decade has brought about a great deal of change in the IT world. In that time, there have been 5 versions of SQL Server released, with a sixth on its way (not to mention the regular operating system releases). Microsoft is pushing to ever more frequent releases, with the pinnacle of this being the Azure offerings, where change is almost a daily thing. This means that upgrading and migrating is going to be a big topic for many people, especially with the impending "end of life" of SQL 2005 in April 2016.

But before we dive into actually upgrading or migrating, we need to do a little research into what we can expect from a newer version of SQL Server.

André and William will spend the day with you going through the different aspects of SQL Server lifecycle management:

- How to identify and catalogue the SQL Servers in your network (you'd be surprised at how many rogue servers are out there!)
- How to see which features you are using to decide which future version and edition will be required
- How to identify possible compatibility issues early on
- How to implement best practices for all SQL Servers and ensure those best practices remain in place
- How to decide if on-premises or cloud services is the best option
- How to identify and investigate performance issues in your SQL Server environment

We will close the day out with an open discussion about the topics discussed and how they affect your environment.

You will leave the training day with a clear understanding of how to approach managing different versions of SQL Server. You will also be equipped with the knowledge to approach any upgrade or migration in a clear and structured way.
SQL Server is a high-performance relational engine and provides a highly scalable database platform, but due to its complexity (and bad programming practices) it can be prone to serious concurrency problems, unexpected behaviors, lost updates and much more! In SQL Server 2005, two optimistic concurrency mechanisms were introduced and touted as the solution to all our problems. Now in SQL Server 2012 and 2014 even more have followed, but many challenges and problems still remain.

Let’s take a long look into the world of SQL Server concurrency and investigate pessimistic and optimistic isolation, understanding how they work, when you should use them, and more importantly when they can go very wrong. Don't be caught staring down the wrong end of SQL Server's two smoking barrels; join me for this revealing and thought-provoking presentation.
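As a minimal illustration of the optimistic side of the story, the two mechanisms introduced in SQL Server 2005 are enabled per database and selected per session; the database name is a placeholder:

```sql
-- Enable the optimistic isolation mechanisms introduced in SQL Server 2005
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;  -- may need exclusive access

-- A session opting into snapshot isolation: readers no longer block writers
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT * FROM dbo.Accounts WHERE actid = 1;  -- reads row versions, takes no shared locks
COMMIT;
```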
While you can see how to run through Setup to deploy a clustered instance of SQL Server (FCI) or an availability group (AG) in other places, what ties them together is the underlying Windows failover cluster (WSFC). Anyone who deploys FCIs or AGs needs a solid foundation of the entire clustering stack to truly be able to understand and have success with implementations. This session will demystify what lies underneath SQL Server from a DBA point of view.
In this advanced workshop we cover enhancements in SQL Server 2016 for building databases in an enterprise environment on modern project teams.

With demonstrations and several exercises, this workshop uses group labs to cover advanced database design skills…but this isn’t your average "Here's how to create a table, now go build a database" course. Our goal is to cover new features in SQL Server 2016 that are relevant to modern enterprise development practices. We’ll talk about some of the pain points designers feel as well as the costs, benefits, and risks associated with design choices. 
Discussion topics will include: 

• Advanced database design process 
• Advanced Data Types (XML, JSON, Geospatial)
• Files/Filegroups/Partitioning/Archiving/Stretching
• Security/Encryption/Data masking/Audit
  • Advanced Table design Topics (Temporal/Hekaton/Compression)
• Other Advanced Topics 

Attendees will leave this session with an understanding of the following: 
• Advanced database design process for modern enterprise development projects
• New and advanced features available in SQL Server 2016
• How to decide which design choice is the right design choice for your needs

The day will include lecture style format as well as interactive discussions and exercises.  

Attendee prerequisites:
Hands-on experience with SQL Server (any version), including basic design concepts such as normalization, constraints, indexes, datatypes and integrity features. Basic understanding of database administration concepts. Familiarity with basic Azure and Data Platform features.

Discover the ins and outs of some of the newest capabilities of our favorite data language. From JSON to COMPRESS/DECOMPRESS, from SESSION_CONTEXT() to DATEDIFF_BIG(), and new query hints like NO_PERFORMANCE_SPOOL and MIN/MAX_GRANT_PERCENT, you’ll walk away with a long list of reasons to consider upgrading to the latest version. 
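A few of the capabilities mentioned above in action (the session-context key name is made up for illustration):

```sql
-- DATEDIFF_BIG: date differences too large for a 32-bit int
SELECT DATEDIFF_BIG(millisecond, '1900-01-01', '2016-05-01') AS ms_elapsed;

-- COMPRESS / DECOMPRESS: GZip a value into VARBINARY and back
DECLARE @packed VARBINARY(MAX) = COMPRESS(N'A long string worth compressing...');
SELECT CAST(DECOMPRESS(@packed) AS NVARCHAR(MAX)) AS unpacked;

-- SESSION_CONTEXT: key/value pairs scoped to the current session
EXEC sys.sp_set_session_context @key = N'TenantId', @value = 42;
SELECT SESSION_CONTEXT(N'TenantId') AS tenant_id;
```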
In this session, you will learn about various anti-patterns and why they can be bad for performance or maintainability. You will also learn about best practices that will help you avoid falling into some of these bad habits. Come learn how these habits develop, what kind of problems they can lead to, and how you can avoid them - leading to more efficient code, a more productive work environment, and, in a lot of cases, both.
Once upon a time, your success as a Database Administrator required only that you develop expertise in SQL Server installation strategies, backup and restore operations, and perhaps performance tuning. However, now that the modern IT landscape includes technologies such as cloud computing and server virtualization, your continued success requires you to adapt and expand your skills. It is increasingly important to develop a broad understanding of modern application stacks before you can facilitate an intelligent system design to support your data management requirements.
To achieve this goal, you need to build a foundation by understanding concepts related to virtualization, storage, and software-defined networking. After all, "the cloud" is just a massive virtualization environment.
In this full-day session, you will learn:

  • How modern all-flash storage arrays affect your overall database environment
  • Optimization strategies for a virtualized SQL Server environment
  • Cloud architecture options
    - Infrastructure as a Service: Virtual Machines, Virtual Networks, Azure Storage, Windows Storage Spaces
    - Platform as a Service: SQL Database, SQL Data Warehouse, Azure Data Lake
  • Security and performance management in a cloud world
  • Overall infrastructure architecture design
  • How to keep your SQL Server environments current in an ever-changing world of releases
  • How to automate as much as humanly possible, so you can actually take a holiday
It's time to face the fact that Data Warehouses, as we know them, are changing.

The increasing power of automatically scaling systems and the rapidly dropping cost of storage are challenging how we decide which data to keep and how we push it out to those who need it.

In this session we'll run through the architecture of the modern warehouse, from our structured/unstructured Azure Data Lake architecture to our platform as a service Azure Data Warehouse and the various tools bringing the two together.

Expect a general overview of these new Azure components, a little BI theory, and practical demonstrations of setting up Data Lake, writing U-SQL, linking up Data Warehouse and exposing data to users, all from the ground up (demo gods permitting).
SSIS is a powerful tool for extracting, transforming and loading data, but creating and maintaining a large number of SSIS packages can be both tedious and time-consuming. Even if you use templates and follow best practices you often have to repeat the same steps over and over and over again. Handling metadata and schema changes is a manual process, and there are no easy ways to implement new requirements in multiple packages at the same time.

It is time to bring the Don't Repeat Yourself (DRY) software engineering principle to SSIS projects. First learn how to use Biml (Business Intelligence Markup Language) and BimlScript to generate SSIS packages from database metadata and implement changes in all packages with just a few clicks. Then take the DRY principle one step further and learn how to update all packages in multiple projects by separating and reusing common code.

Speed up your SSIS development by using Biml and BimlScript, and see how you can complete in a day what once took more than a week!
The most coveted features of SQL Server are made available in Enterprise Edition and are sometimes released into Standard Edition a few years later. This often leaves a vast group of users who "window shop" the latest and greatest features and return to the office wishing they never saw those features presented.

This session will show you how you can achieve the same, or at least a similar, outcome to some of those features without having to fork out for Enterprise Edition licenses or breaking any license agreements. You will leave the session with a set of solution concepts covering Partitioning, Data Compression and High Availability that you can build upon or extend and maybe save you and your company a nice pile of cash.

Microsoft Power BI is a cloud service that provides a complete self-service analytics solution, including tools for data extraction, transformation, modeling and visualization. Data modeling is possible using either Excel or Power BI Desktop. The Power BI Service enables you to create reports and dashboards and share them with other users. Mobile apps for several platforms also provide access from any device.

In this full-day seminar, you are guided step by step in creating a complete solution using all the features of Power BI. Starting from scratch, you will see how to create queries in a visual way to import and integrate data from many different sources, with particular attention to leveraging existing data in the company, transforming data to improve the resulting model, and sharing the queries you create so that they are easy to reuse. You will then see how to create a data model following best practices, using resources available on the web to accelerate the creation of tables and formulas that can be shared across many models. The reference data model will improve over the day, adding metadata that enhances the usability of other Power BI features. You will also learn how to connect to existing on-premises data models in Analysis Services, without copying and storing data in the cloud.

Once the data model is ready, you will see how to create reports and dashboards in Power BI, leveraging the many data visualizations available. After publishing the result to the Power BI site, you will see how to refresh the data in the cloud, including all the details of correct configuration and best practices for moving on-premises data to the cloud in a secure way, using both Personal and Enterprise gateways. At the end of the day, you will be ready to start using the entire Power BI stack in your company, choosing the right feature for each requirement and applying best practices at each step.

The Tabular model in Power Pivot for Excel, Power BI and SSAS Tabular seems to offer only plain-vanilla one-to-many relationships based on a single column. Many-to-many relationships were introduced in 2015, yet the model still seems poor when compared with SSAS Multidimensional. In reality, by leveraging the DAX language you can handle virtually any kind of relationship, no matter how complex. In this session we will analyze and solve several scenarios involving calculated relationships, virtual relationships and complex many-to-many relationships. The goal of the session is to show how to solve complex scenarios with the aid of the DAX language to build unconventional data models.
The way SQL Server estimates cardinality for a query has been updated in SQL 2014. In this session we will discuss why cardinality matters, the differences between the SQL 2014 cardinality and previous versions, and how to evaluate if your queries will benefit after upgrading from previous versions of SQL Server.
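As a taste of the kind of comparison covered, here is a minimal sketch of testing a query under both estimators on SQL Server 2014 (the database and table names are hypothetical; trace flag 9481 reverts a single statement to the legacy estimator):

```sql
-- Under compatibility level 120, the new 2014 cardinality estimator is the default:
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 120;

-- Compare the same query under the legacy (pre-2014) estimator,
-- without changing the database-level setting:
SELECT c.CustomerName, COUNT(*) AS Orders
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerName
OPTION (QUERYTRACEON 9481);  -- legacy CE for this statement only
```

Capturing the actual execution plan for each run lets you compare the estimated versus actual row counts and decide whether a query benefits from the new estimator after an upgrade.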
This session is all about putting big data to work. To do that we need to be clear on objectives and transform a proof of concept into a reliable, resilient and repeatable production process. To make that work, we'll show you how to hook up Data Lake to a variety of sources with Data Factory from Visual Studio, so that we also get source control. The trick with all of this is to use the scalability of the cloud to minimise the costs involved by turning the services we need on and off. Inevitably things will go wrong with our demo, and that is part of the plan: we want to show you how to diagnose problems and fix them.
In this session we'll look over some of the things which you should be looking at within your virtual environment to ensure that you are getting the performance out of it that you should be.  This will include how to look for CPU performance issues at the host level.  We will also be discussing the Memory Balloon drivers and what they actually do, and how you should be configuring them, and why.  We'll discuss some of the memory sharing technologies which are built into vSphere and Hyper-V and how they relate to SQL Server.  Then we will finish up with some storage configuration options to look at.
User-defined functions in SQL Server are very much like custom methods and properties in .Net languages. At first sight, they seem to be the perfect tool to introduce code encapsulation and reuse in T-SQL. So why is this feature mostly avoided by all T-SQL gurus?

The reason is performance. In this session, you will learn how user-defined functions feed the optimizer with misleading and insufficient information, how the optimizer fails to use even what little information it has, and how this can lead to shocking query performance.

However, you will also see that there is a way to avoid the problems. With just a little extra effort, you can reap the benefits of code encapsulation and reuse, and still get good performance.
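To illustrate the kind of rewrite involved, here is a minimal sketch (the function and table names are hypothetical): a scalar UDF is invoked row by row and hides its cost from the optimizer, while an inline table-valued function keeps the encapsulation but lets the optimizer expand the logic into the calling query.

```sql
-- A scalar UDF: opaque to the optimizer, executed once per row.
CREATE FUNCTION dbo.fn_FullName (@First nvarchar(50), @Last nvarchar(50))
RETURNS nvarchar(101)
AS
BEGIN
    RETURN @First + N' ' + @Last;
END;
GO
-- The inline table-valued alternative: same logic, but the optimizer
-- can inline it into the query plan.
CREATE FUNCTION dbo.tfn_FullName (@First nvarchar(50), @Last nvarchar(50))
RETURNS TABLE
AS
RETURN (SELECT @First + N' ' + @Last AS FullName);
GO
-- Usage: CROSS APPLY replaces the scalar function call.
SELECT p.PersonID, fn.FullName
FROM dbo.People AS p
CROSS APPLY dbo.tfn_FullName(p.FirstName, p.LastName) AS fn;
```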
Failover clustering is no longer the default option for making database servers highly available.  The benefits of Availability Groups coupled with the limitations of cloud platforms mean some systems can’t use failover cluster instances or need more than they can offer.  To complicate designing a solution even more, SQL Server 2016 introduces Basic Availability Groups in the standard edition that arguably could see failover clusters heading for extinction.   This session will compare Microsoft’s options for deploying highly available SQL Server data platforms in mid-2016.  It will use examples of solution designs to help data professionals understand each feature’s strengths and capabilities whether they’re deployed in on-premises data centres or using Microsoft Azure services.
Much of your ETL process flow consists of packages that are very similar in structure, capturing data from a single source and transferring that to a single destination. Creating the individual packages can be tedious and it's easy to miss something in the process of generating the same basic package over and again. BI Markup Language makes it easy to build new packages, and PowerShell makes creating the BIML scripts easy. In this session we'll show you how to use PowerShell to generate dozens of SSIS packages doing similar tasks from a defined set of ETL sources.
Need a fast data integration solution, but don't have the time or budget for heavy performance tuning? By selecting the right design, 90% of SSIS customers will achieve their ETL performance goals with little to no performance tuning. This session will teach you how to pick that design! Targeted at intermediate to advanced SSIS users, Matt Masson – a long time member of the SSIS team - will guide you through a series of proven SSIS design patterns, and teach you how to apply them to solve many common ETL problems. We’ll start off with an overview of the Data Flow internals, and review the SQLCAT team’s Top Ten Best Practices for achieving optimal performance. We’ll then dive into a series of design patterns, including Advanced Lookup Patterns, Parallel Processing, Avoiding Transactions, handling Late Arriving Facts, Cloud/On-Prem Hybrid Data Flows, and multiple ways to process Slowly Changing Dimensions. Finally, we'll take a close look at the new features and functionality provided in SQL Server 2016. 
If you’re looking for ways to maximize your SSIS ROI, then you won’t want to miss this! 
This training session includes the following modules:
Module 1 - Data Flow Internals
Module 2 - Benchmarking and Performance Best Practices
Module 3 - Common Design Patterns
Module 4 - Lookup Design Patterns
Module 5 - Parallelization Design Patterns
Module 6 - Advanced Design Patterns
Module 7 - SSIS in SQL Server 2016 
Module 8 - Combining Microsoft Technologies
Failing to design an application with concurrency in mind, and failing to test it with the maximum number of expected simultaneous users, are among the main causes of poor application performance. Locking and blocking is SQL Server’s default method of managing concurrency in a multi-user environment. In this session we’ll look at the three main aspects of locking: the type of lock, the duration of the lock and the unit of locking. We’ll also look at when locks cause blocking and examine various ways to minimize blocking. In addition, in this session you will learn:
  • What metadata is available to show you: the locks that have been acquired, the processes that are blocked and who is blocking them, and the tables that have had the most problems due to locking and blocking.
  • What other tools are available to track down other locking and blocking issues.
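As a small sample of that metadata, the following sketch queries two of the DMVs the session draws on:

```sql
-- Locks currently held or requested, per session:
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks;

-- Who is blocked right now, and by whom:
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```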
Machine Learning is the process of extracting predictive or categorical relationships from large quantities of data from multiple sources. Most often, the Data Scientist uses a series of tools and processes to gain meaning from data, and coordination with other professionals is a manual affair. Microsoft’s Azure Machine Learning (Azure ML) brings together a completely online collaborative experimentation environment to build solutions, and a simple way to publish the results so that other code and processes can use them. Buck Woody from the Microsoft Machine Learning and Data Science team (MLADS) will show you not only how to use the Azure ML tool to create solutions, but also explain the entire Cortana Analytics Suite and where Azure ML fits in. You’ll also learn the process for creating the solution. We’ll take a solution from source data to published output. At the end of this workshop, you’ll be able to:

Understand the Cortana Analytics Suite and when and where to use Azure ML 
Apply the Cortana Analytics Process (CAP) to a given solution
Understand and use the Azure ML Studio environment to create collaborative experiments and publish solutions
Get input data from on-premises, online, and cloud-based sources
Clean, transform, normalize and quantize your data
Build, score and evaluate a Predictive Model
Build and evaluate a categorical model
Publish and stage the predictive model as an Azure-based Service  


You’ll need a laptop with connectivity to the Internet
Optionally: Microsoft Excel (if you wish to consume the model in Excel)

While more and more workloads have moved to virtual environments, the data warehouse is often one of the last physical servers in the data center. However, that doesn’t need to be the case. In this webinar you will learn about the challenges of implementing your BI environments in a virtual environment.

This includes your data warehouse, and its ancillary systems like Analysis Services, Integration Services and Reporting Services. Finally, we’ll talk a little bit about how Azure VMs can change this guidance.

DBAs, developers or analysts are often asked to get involved in the process of designing and implementing a data strategy where there is no dedicated BI resource. Data modelling doesn't sound too difficult but it is something you can struggle with in the beginning. In this session we will talk through the process of gathering user requirements to provide a good starting point for your data model. We will use the Sun Modelling technique to aid us in this.
For the most part, query tuning in one version of SQL Server is pretty much like query tuning in the next. SQL Server 2016 introduces a number of new functions and methods that directly impact how you’re going to do query tuning in the future. The most important change is the introduction of the Query Store. This session will explore how the Query Store works and how it’s going to change how you tune and troubleshoot performance. With the information in this session, not only will you understand how the Query Store works, but you’ll know everything you need to apply it to your own SQL Server 2016 tuning efforts as well as your Azure SQL Databases.
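To show the starting point, here is a minimal sketch of enabling the Query Store and reading back what it has captured (the database name is hypothetical; the catalog views are the documented Query Store views in SQL Server 2016):

```sql
-- Enable the Query Store on a SQL Server 2016 database:
ALTER DATABASE MyDatabase SET QUERY_STORE = ON;
ALTER DATABASE MyDatabase
SET QUERY_STORE (OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 60);

-- Inspect captured queries and their aggregated runtime statistics:
SELECT TOP (10) qt.query_sql_text, rs.avg_duration, rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```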
This session covers the more advanced aspects of development for Azure SQL Data Warehouse. Areas such as data movement, workload concurrency and resource management will all be covered during this intense 60 minute session. Topics covered include:
  • Data Movement
  • Workload concurrency
  • Resource Management
  • Statistics
    The Internet of Things (IoT) starts with your things—the things that matter most to your business.   IoT is at an inflection point where the right technologies are coming together and we are able to connect devices to the cloud and leverage streams of data that were previously out of reach. It's a great time to take a look at game changing technologies you can use today to make your Internet of Things (IoT) ideas stand out from the rest using Microsoft Azure.     In this session we will look at an end-to-end example and demo of an IoT Architecture built using Microsoft Azure with real-time data using services like IoT Hubs, Stream Analytics and Power BI.     Welcome to the Internet of Your Intelligent Things!
Everyone tests their code, but most people run a query, execute a procedure and run another query. This ad hoc, non-repeatable testing isn't reliable, and it encourages regression bugs. In this session you will learn how to begin introducing testing into your development process using the proven tSQLt framework. You'll run tests at the click of a button, using an ever-growing test suite that improves code quality. You will see how to handle test data, exceptions, and edge cases.
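As a taste of the framework, here is a minimal sketch of a tSQLt test (it assumes tSQLt is installed and that a hypothetical dbo.AddCustomer procedure inserts into dbo.Customers):

```sql
-- Create a test class (schema) and a test within it:
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test AddCustomer inserts one row]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo.Customers';   -- isolate the test from real data
    EXEC dbo.AddCustomer @Name = N'Test';
    DECLARE @actual int = (SELECT COUNT(*) FROM dbo.Customers);
    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'CustomerTests';  -- run the whole suite with one call
```

Because FakeTable swaps the real table for an empty, constraint-free copy inside a transaction, each test is repeatable and leaves no trace behind.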
    Learning how to detect, diagnose and resolve performance problems in SQL Server is tough.  Often, years are spent learning how to use the tools and techniques that help you detect when a problem is occurring, diagnose the root-cause of the problem, and then resolve the problem. 

    In this session, attendees will see all new demos of native tools and techniques which make difficult troubleshooting scenarios faster and easier, including:

    •           XEvents, Profiler/Traces, and PerfMon
    •           Using Dynamic Management Views (DMVs)
    •           Identifying bottlenecks using Wait Stats

    Every DBA needs to know how to keep their SQL Server in tip-top condition. If you don't already have troubleshooting skills, this session covers everything you need to know to get started.
SSIS is a well-known ETL tool on-premises. Azure Data Factory is a managed cloud service that provides the ability to extract data from different sources, transform it with data-driven pipelines, and process the data. In this session you will see many demos comparing ADF (Azure Data Factory) with SSIS in different aspects. You will also learn about features that are available in ADF but not in SSIS.
The quickest way to migrate your on-premises OLTP database to Azure is to simply "lift and shift": you create a VM in Azure, size it to match your local system and move your database into it. This might not be the most cost-effective way though, and you still have to do all the database maintenance yourself.

    In this session we will investigate how we could use more of the cloud features like SQL Database, Redis Cache, Search, etc. in order to truly scale our system. And we'll see if this increases or lowers the total cost of ownership.

This exercise is based on an OLTP system, but we will also look at how the new setup affects loading our DWH.
R is a powerful language to add to the BI, analytics and data science technologies you may already be using. This session circumvents the painful experience of on-boarding a new technology and will give you the foundation needed to use R effectively. Topics covered will include effective R coding, development best practices, using R as a reporting tool, and how to build and administer a solid platform for analysis.
    We all know that correct indexing is king when it comes to achieving high levels of performance in SQL Server. When indexing combines with the enterprise features partitioning and compression, you can find substantial performance gains.

    In this session, use scripts to query dynamic management views (DMVs) to identify the right objects on which to implement strategy, measure performance gains, and identify the impact on memory and other resources. Devise a sliding-window, data-loading strategy by using partition switching. Track fragmentation at the partition level and minimize index maintenance windows. Discover partitioning improvements in SQL Server 2014. Take home an advanced script for tracking usage and details on fragmentation, memory caching, compression levels, and partitioned objects.
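In the spirit of the scripts covered, here is a minimal sketch of partition-level inspection and a sliding-window switch (the table names are hypothetical):

```sql
-- Fragmentation, compression setting and row count per partition:
SELECT p.partition_number, ps.avg_fragmentation_in_percent,
       p.data_compression_desc, p.rows
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.FactSales'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.partitions AS p
  ON p.object_id = ps.object_id
 AND p.index_id  = ps.index_id
 AND p.partition_number = ps.partition_number
ORDER BY p.partition_number;

-- Sliding window: switch the oldest partition out to a matching staging table
-- (a metadata-only operation, so it completes almost instantly):
ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSales_Archive;
```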
    If you have an AlwaysOn Availability Group setup, then come learn about the new enhancements shipped with SQL Server 2012 Service Pack 3 and above which allows you to:
    1. Troubleshoot failovers in your Availability Group easily
    2. Determine the reason for connectivity loss and timeouts
    3. Understand which part of your topology is the reason for latency
    Do you have an application running with an in-market version of SQL Server, such as 2012 or 2014? Then this session is for you!

    Microsoft is well aware of the number of customers betting their business on current versions of SQL Server, and that we need to add value to your already existing SQL Servers.

    Rather than waiting for v.next for critical feature completeness, performance, supportability and hardware adoption improvements, we have been seeking to provide enhancements for in-market releases.

    The latest results of these continued efforts shine through in SQL Server 2012 SP3, and even more so with the upcoming SQL Server 2014 SP2.

    This session will showcase several improvements in the Database Engine for SQL Server 2012 through 2016 that address some of the most common customer pain points, involving tempdb, the new CE, memory management, partitioning and ALTER COLUMN, as well as diagnostics for troubleshooting query plans, memory grants, and backup/restore.

    Come see this demo-filled session to understand these changes in performance and scale of the database engine, and the new and improved diagnostics for faster troubleshooting and mitigation.

    Learn how you can use these features to entice your customers to upgrade and run SQL Server workloads with screaming performance.  


    1. Learn about performance, scale and diagnostics enhancements in SQL Server database engine.

    2. Evangelize these enhancements to get out-of-the-box performance.

    3. With this, understand how your experience with SQL Server will improve, and why you should install the latest and greatest Service Packs
So many of us have learned data modeling and database design approaches from working with one database or data technology. We may have used only one design tool. That means our vocabularies around identifiers and keys tend to be product-specific. Do you know the difference between a unique index and a unique key? What about the difference between RI, FK and AK? Do you know if your surrogate keys have their companion alternate keys?

In this session we’ll look at the generic and proprietary terms for these concepts, as well as where they fit in the data modeling and database design process. We’ll also look at implementation options across a few commercial DBMSs and datastores. These concepts span data activities, and it’s important that your team understand each other and where they, their tools and their approaches fit to produce a successful database design.
    After this session you will:
    1. Understand the diagnostics enhancements available in SQL Server database engine in SQL Server 2012 Service Pack 3 and above
    2. Leverage the diagnostics to troubleshoot and mitigate issues quickly in mission-critical environments
    3. Simplify troubleshooting experience for common SQL Server scenarios
The analysis of raw data requires us to find and understand complex patterns in that data. We all have a toolbox of techniques and methodologies that we use; the more tools we have, the better we are at the job of analysis. Some of these tools are well known, data mining for example. This talk covers some of the less well-known techniques that are still directly applicable to this kind of analytics. Last year at SQLBits I gave a two-hour session on four such topics:
    • Monte Carlo simulations (MCS)
    • Nyquist’s Theorem
    • Benford’s Law
    • Simpson’s paradox    
    I will not be assuming that you attended last year’s talk; although if you did and enjoyed it then it is highly likely that you will enjoy this one!  This session will focus on more of these invaluable techniques.  For example, we’ll talk about:
    • Dark Data
    • Probability calculations
    • RFI    
    In each case I try to give you an understanding, not of the maths behind these techniques, but of how they work, why they work and (most importantly) why it is to your advantage to know about them.  I have genuinely chosen only techniques that I have found invaluable in my commercial work. 
Nowadays many companies don't have dedicated database developer positions, so most of the SQL code is written by application developers. They use only a subset of SQL Server features, and usually in a suboptimal manner. I have spent the last ten years working with application developers and have collected the common mistakes and misunderstandings between them and DBAs that increase development, test and deployment costs and reduce overall quality. In this session we will cover the most important things they need to know about SQL Server that cannot be easily or cheaply fixed by DBAs or consultants.
    Machine Learning can solve all your problems, it can tell you what to do better and how to improve your business processes, increase revenue, reduce waste etc.   Well, not really. Machine Learning is not magic. You don't just apply machine learning in your organisation and intelligent, innovative solutions come out of nowhere. Machine Learning has its limitations and its beauty, but it all comes down to data and questions. You need good data and the right questions and then you are good to go.   In this session, we are going to look at a typical machine learning process and how to apply it to some real world data.  We are going to use Azure Machine Learning to transform data and ideas into models that are production ready in minutes, all of this while keeping the real world in mind.
You are a BI developer with a few years’ experience developing Data Warehouses and Integration Services packages. The typical performance bottleneck in your data warehouse is using SSIS to insert massive amounts of data into SQL Server.

    In this session, you will learn how to improve the load performance with SSIS into SQL Server. You will learn about how to achieve minimal logged inserts, parallel data load into a single table, and other methods for inserting data faster.
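The core pattern can be sketched in T-SQL (the table names are hypothetical; the same principle underlies the SSIS fast-load destination options):

```sql
-- Minimally logged insert: under the simple or bulk-logged recovery model,
-- a table lock on the target allows SQL Server to log page allocations
-- instead of individual rows, dramatically speeding up large loads.
INSERT INTO dbo.TargetTable WITH (TABLOCK)
       (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM   dbo.StagingTable;
```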
    Finding the right job is critical if you're going to get the most out of life. What's also critical is finding the best way of doing that job, is it as a permanent employee, a contractor, a freelance consultant, or maybe even being a business owner.
    In this session I'll talk through my experience of the pros, cons and gotchas in my time of doing all of the above. We'll look at what you need to think about from a logistics, financial planning, tax, marketing, career and skills development perspective, to hopefully give you some pointers in the right direction and more of an idea of which option may suit you best.
Running databases on SQL Servers in your office or data centre? Considering moving them to Azure? Want to know the options and benefits of Infrastructure as a Service and Platform as a Service? This session will provide an overview of the process of moving from on-premises to Azure data platforms. It covers options, considerations, risks and mitigation, considering both technical capabilities and limitations and the commercial model supporting them, to reduce the risk of nasty surprises.
    Extended Events provide deep insight into SQL Server's behavior and allow us to gather information not available by other means. However, compared to other technologies such as SQL Trace and Event Notifications, a way to react to the events as soon as they happen seems to be lacking. In this session we will see how the Extended Events streaming API can be used to process events in a near real-time fashion. We will demonstrate how this technology enables new possibilities to solve real world problems, such as capturing and notifying deadlocks or blocking sessions.
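On the server side, the deadlock scenario mentioned above starts with an event session a streaming client can watch in near real time (the session name is illustrative):

```sql
-- Capture the deadlock graph as it happens; a live reader using the
-- streaming API can consume the event stream directly, the ring buffer
-- target simply keeps a copy for ad hoc inspection.
CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [CaptureDeadlocks] ON SERVER STATE = START;
```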
    Think you know MDX? This session will show you some advanced, little-known but nonetheless practical tips and tricks for writing complex calculations and making your queries run faster. Topics covered will include:
    * Everything there is to know about solve order and its quirks
    * Inline named sets and the problems only they can solve
    * Using the UnOrder() function to optimise calculations
    * Subselects: what they do, when to use them and how they interact with MDX calculations
    * Useful connection string properties and their effects
    ... and lots more!

The world is changing, and so is the life of the people who deal with data. Data is everywhere - on earth and in the cloud. If you are a database developer, a DBA or a DevOps engineer, your life isn't the same anymore. Database as a service is adding new dimensions to the life of data engineers. Many things are the same as before, but many things are different. You spend your day thinking about new ways to do capacity planning, new ways to monitor and troubleshoot. You use some old tricks for performance tuning and find some new tricks as well. You wonder whether your data is safe and secure now that it is in the cloud - who is touching my data, from where and when? You want audits and controls to satisfy regulatory and compliance needs. When and how should you use new features such as columnstore indexes and in-memory OLTP? When should you use Elastic pools? When should you shard and use Elastic DB?
    Join the SQLCAT team for a full day Azure SQL DB deep dive on fundamentals, performance troubleshooting, advanced practices, application patterns, customer scenarios and practical learnings. Whether you are a beginner or intermediate or advanced user of Azure SQL DB, you will find lots of useful learnings to enhance your daily life as a database developer, a DBA or DevOps.
    Speakers: Sanjay Mishra, Kun Cheng, Denzil Ribeiro
Come hear about lessons learned from early customer deployments of SQL Server security features: Row-Level Security, Always Encrypted, Dynamic Data Masking, Auditing, etc.

Speakers: Kun Cheng, Arvind Shyamsundar
    With R in SQL Server 2016, data engineers (DBAs, developers) are expanding their horizons and deriving more value for the business from their data. Advanced Analytics with R in SQL Server 2016 brings the traditional data engineers and data scientists together to generate greater business value. We will explore a few customer examples who are leveraging R with SQL Server today and how they are doing it.
    Microsoft recently released the Azure Data Lake services to allow you to prepare and process Big Data. It offers you both Hadoop clusters and the new Azure Data Lake Analytics job service with the new query language called U-SQL that evolved from Microsoft’s internal Big Data platform. 
    This seminar will give you an introduction to Azure Data Lake all up, including the Azure Data Lake Storage service and then go deeper into Azure Data Lake Analytics and U-SQL. You will learn what U-SQL is, gain some hands-on experience working through a Lab, see how the tools enable your productivity from day one and see further details on the language and its use.

    Pre-requisite: Minimum: A laptop that can connect to the Azure Portal. Alternatively: A laptop with Visual Studio (2012/13/15) community edition and the latest Azure Data Lake Tool installed.
Join the SQL Server product group for an all-day session to learn about the completely revamped and new upgrade and migration experience for SQL Server, codenamed Project “Chinook”. This session covers upgrade methodologies, process and new tooling to support SQL Server upgrades and migrations on-premises and to the cloud. The session will cover minimizing upgrade downtime and performance regressions as well as the end-to-end upgrade experience. Learn, share and collaborate with us on the future of the upgrade and migration experience for SQL Server. The session will include hands-on labs, so bring your laptop.
This seminar covers the core skills for adopting a DevOps practice for your database, establishing a Database Lifecycle Management (DLM) process using SQL Server Data Tools (SSDT) and other industry tools:
    • Putting your database under source control
    • Testing your database
    • Deploying your database
    • Managing reference/static data
    • Doing this across multiple environments in a release pipeline
This is a hands-on lab where you will bring a database and we will get you testing and deploying that database. If you have already started on the DLM journey and are deploying your database from source control, then consider the intermediate session, as it will be able to take you to the next level.

    Why should I attend
Do you have one or more databases that you want to control? That means being able to:
    1. Create the database from scratch
2. Manage that database operationally, including static data
    3. Upgrade that database without manual intervention
    Do your developers have continuous integration for code but the database is an afterthought? Do you have to edit upgrade scripts for each environment you run them in? Do your development teams provide upgrade scripts that:
    • don’t work in all environments?
    • don’t take into account production has 1 billion rows?
• just don’t work, full stop?
• need to be run at 1 am?
• would bring down your site if run on production?
    If any of the above are true, then you need to implement Database Lifecycle Management (DLM), aka Continuous Integration (CI) and Deployment (CD) for databases.

    Breakdown of Content
DevOps as a practice, when implemented, enables you to prevent all the problems described above by virtue of a controlled, reliable, repeatable set of processes which, if you choose, can be automated.

    We will be looking at the different approaches for managing your database, including gold schema and upgrade scripts (migrations): the pros and cons of each, and how these interact with common development tools such as NHibernate or Entity Framework. As anyone who has tried this will attest, it’s often not as simple as scripting all your database objects and deploying them.

    We will cover the common gotchas of the different approaches and will recommend tried and tested solutions for each of them. As well as covering the different stages we will cover the tooling available to you for each stage.

    What you will achieve by the end of the day.

    An understanding of all the components of DevOps, how Database Lifecycle Management is a part of it, and why you need them. You will understand how you can:
    • take your database and put it in source control
    • resolve the common issues with deployment
    • include data in your database schema
    • deploy your database from source control using command line tools
    • automate the deployment using tools such as TFS, VSO and Team City
    You will know the tools that you can use at each stage, and you will be prepared for the intermediate course.


    An understanding of T-SQL will help you get the most out of the course.

    The day will be led by a number of speakers from sabin.io including Simon Sabin and Simon D'Morias
    A common requirement in database applications is that users need to search a set of data using a large set of possible search conditions. The challenge is to implement such searches in a way that is both maintainable and efficient in terms of performance. This session looks at the two main techniques for implementing such searches and highlights their strengths and limitations.
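As a sketch of one of those techniques, the example below builds the WHERE clause dynamically from only the filters the caller actually supplied, always passing the values as parameters. It uses Python with SQLite purely as a portable stand-in for T-SQL, and the `products` table, its columns and the `search_products` function are hypothetical names invented for illustration; in SQL Server the same pattern is typically implemented with dynamic SQL via `sp_executesql`.

```python
import sqlite3

def search_products(conn, name=None, min_price=None, category=None):
    """Dynamic-search sketch: build the WHERE clause only from the
    filters the caller supplied, always via bound parameters."""
    clauses, params = [], []
    if name is not None:
        clauses.append("name LIKE ?")
        params.append(f"%{name}%")
    if min_price is not None:
        clauses.append("price >= ?")
        params.append(min_price)
    if category is not None:
        clauses.append("category = ?")
        params.append(category)
    sql = "SELECT name, price, category FROM products"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

# Hypothetical sample data for the sketch
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL, category TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    ("widget", 9.99, "tools"),
    ("gadget", 24.50, "tools"),
    ("gizmo", 3.25, "toys"),
])
```

The alternative, static technique keeps a single query with catch-all predicates such as `(@name IS NULL OR name LIKE @name)`, which is simpler to maintain but can produce poor plans for some filter combinations.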

    This seminar extends the DevOps for Data 101 session to build out automated solutions that are repeatable and reliable. We will look at more complex, real-world problems such as more involved testing, handling different environments or client configurations, data replication, CDC and server-level objects.

    Create/rebuild environments for development and testing on demand. Promote changes between environments and understand what and why the database has changed.

    Improve your build and release cycle through automation and testing to minimise downtime and increase confidence in releases. If you haven’t already started on the DLM journey, it is advised that you attend the DLM 101 course first to give you the foundations for this course.

    Why should I attend?

    You should attend if you meet any of the following criteria:
    • Attended the 101 and want to complete your knowledge in order to deal with the common deployment scenarios
    • Currently have source control but lack deployment automation
    • Have database releases that are too slow and/or unreliable
    • Want to create dev/test environments on demand from scratch
    • Have multiple environments/clients with different configurations
    • Have a database solution that consists of more than a single database
    Breakdown of Content
    This seminar covers the next set of skills needed to implement Database Lifecycle Management (DLM):
    • Combining deployment patterns to handle complex deployments
    • Options for managing static and setup data
    • Packaging your database and versioning
    • Creating environments on demand
    • Composite/shared schemas for large teams
    • Generating code from your schema, i.e. CRUD procs
    • Managing complex data movements
    • Improving your deployment time
    • Non-database objects: SQL Agent jobs, linked servers, logins
    What you will achieve by the end of the day
    You will understand how to:
    • Complete an automated build and deployment using VSO
    • Understand what’s needed for most database deployments
    • Promote database builds through environments
    • Deploy complex objects incrementally
    • Manage refactoring and data movements
    Prerequisites: experience of source control and SSDT, and knowledge of modifying SQL Server objects.

    The day will be led by a number of speakers from sabin.io including Simon Sabin and Simon D'Morias
    Continuous Integration is not normally associated with data warehouse projects due to the perceived complexity of implementation. John will be showing how modern tools make it simple to apply continuous integration techniques to data warehouse projects. In particular, the session will cover:
    • Setting up SQL Server Data Tools for automated deployments 
    • Creating DAC Publish profiles 
    • The many actions of SQLPackage.exe 
    • Automating your build and deployments with PowerShell and psake 
    • Guide to configuring TeamCity as your build server 
    • Automated Integration and Regression testing of your Data Warehouse 
    • Automating cube and tabular model deployment
    Today’s applications combine various forms of data processing and analysis to support multiple workloads. Traditional batch processing and data warehousing combined with the more recent “real-time” stream processing satisfy an unprecedented breadth of analysis. And now the way these applications respond to this analysis is presenting a new generation of Intelligent Applications.
    There are many potential and actual uses for applications like this:

    Fraud Detection
    Credit Risk Management
    Product Recommendations
    Operational Efficiency

    This session is going to examine how to successfully build this modern data architecture on the Microsoft Azure cloud platform. Using a canonical IoT application example, we will focus on design tenets for efficient, scalable and resilient data processing and analysis while meeting the demands of the new Intelligent Application. By the end of the session we will have a solution showing us how to:

    Allow only certain devices to send us data 
    Store potentially high frequency sensor data 
    Process the real-time data 
    Enrich the data 

    Microsoft Azure offers an array of services, but the framework of our discussion will be the next generation of the lambda architecture (http://lambda-architecture.net/), showing how we add machine learning and automated response to create the intelligent application architecture.
    Lindsey Allen will peel back the curtain on the new SQL Server 2016 feature Operational Analytics. We continue to invest heavily in our in-memory technology and have now introduced one of the more important features we are releasing as part of SQL Server 2016. Lindsey will share learnings from early customer adopters as well as the engineering team’s whiteboard thinking when designing this feature.
    Security is one of the top topics for the new release of SQL Server. Three completely new features are coming to us: Always Encrypted, Dynamic Data Masking and Row-Level Security. Microsoft Certified Solutions Master Andreas Wolter will give a close insight into the new security features in this session and will show which use cases these new technologies have been developed for, how they complement existing technologies, as well as potential security problems when relying on single features like these.
    AzureML has piqued the interest of data scientists and non-data scientists alike, inspiring a new brand of solutions for companies to unlock insights from their data. In this session Richard Conway will take you through the building blocks of time series analysis using real-world models of energy and stock prediction. Richard will show that with combinations of linear algebra, R and AzureML you can begin building predictive solutions in no time.
    You've got SQL Server and SSRS sorted, but your users are wanting a bit more intelligence in their Business Intelligence. Microsoft can help you deliver this with R integration in SQL Server 2016 and the Cortana Analytics Suite, including Machine Learning and Power BI. This hands-on pre-con will take you through the fundamentals of working with R and then show how it can easily be integrated into the Microsoft stack.
    - R + Microsoft
    - Getting started with R
    - Data manipulation fundamentals
    - Data visualisation basics
    - Interactive data visualisations
    - Making predictions with R & Azure ML
    - Using R in SQL Server - overcoming SQL limitations and making predictions
    - High end dashboards with PowerBI & R
    You’ll need to bring a laptop-sized device with you and possibly a tablet as well (so you can access the manual while you develop your code), and we’ll hand out a one-month Azure subscription with the software and services you’ll need for the labs. This will also enable you to continue your journey to being an R expert after the conference.
    In an easy introduction to machine learning and predictive analytics, we'll use AzureML to analyse the Titanic dataset and predict whether different audience members would have survived the disaster. Your opportunity to test whether or not the fates would have been kind to you, without even getting your feet wet. :-)
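To make the idea of "predicting survival from passenger features" concrete, here is a minimal sketch. This is not AzureML (whose models are built from drag-and-drop modules rather than hand-written code), and the toy records and the "women and children first" rule below are hypothetical stand-ins for the real Titanic dataset and a trained model.

```python
# Titanic-style toy records: (sex, passenger_class, age, survived)
train = [
    ("female", 1, 29, True),
    ("female", 3, 22, True),
    ("male",   1, 40, False),
    ("male",   3, 25, False),
    ("female", 2, 35, True),
    ("male",   2,  8, True),   # child
    ("male",   3, 30, False),
    ("female", 3, 45, False),
]

def predict(sex, pclass, age):
    """Baseline 'women and children first' rule standing in for a model."""
    return sex == "female" or age < 16

# Score the rule against the toy data, as a trained model would be scored
correct = sum(predict(s, c, a) == y for s, c, a, y in train)
accuracy = correct / len(train)
```

A real experiment would train a classifier (e.g. logistic regression or a decision tree) on the full dataset and evaluate it on held-out passengers, but the shape of the problem — features in, survived/not-survived out, scored for accuracy — is the same.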
    Azure SQL DB can automatically make customer apps run faster and more cost-effectively, based on analyzed workload telemetry from millions of customer databases. Come and see live demos about what the service can do for customers today (index management, optimum elastic pool recommendations, top insights into query performance) and hear what we plan for near future. 
    With a set of new charts (the first new charts since Office 98!) and our one-click forecasting feature, we’ve added more capabilities to reduce the amount of work the user has to do to get to the visualizations or analysis they need. By integrating Power Query, our modern data gathering and shaping technology, making Power Map (now called 3D Maps) available to all users and making a number of improvements to the PivotTable and PowerPivot experiences, we are focusing on helping users take greater advantage of the power and flexibility of the tool.
    Attend this full day pre-con to gain in-depth knowledge of SQL Sentry products and services. Whether you are already a customer or considering adopting our platform, you will learn how to make the most of it directly from our product and sales engineering teams. 
    CEO Greg Gonzalez will discuss the direction and future of SQL Sentry. Session topics include: SQL Sentry configuration and optimization, applying SQL Sentry software to identify and solve real world issues, setting up the best alerting environment, mining performance data from your repository, what's new in Plan Explorer, and details on our solutions for monitoring APS, Azure SQL DW, and Azure SQL DB. 
    You will also have the opportunity to interact with the sales and product teams during breaks and lunch as well as a dedicated Q&A session at the end of the day to get your own questions answered. This day of deep dive technical training will provide you with the knowledge and tools to make your own environment run as efficiently as possible. 

    Keynote (30 Minutes) - 200 
    Greg Gonzalez, CEO 
    Greg will talk about all the recent and upcoming exciting events happening at SQL Sentry. Get an inside look at what's happening with the company as well as our entire product line. 

    Configuration Personal Training (60 Minutes) - 300 
    Scott Fallen 
    Get your SQL Sentry environment configured and organized for the best results. We will focus on installation, retention thresholds, site configuration, SQL Sentry Client preferences, and integration with other tools (SCOM and ticketing). 

    Drop and Give me 5!!! Get Your SQL Server in Shape with SQL Sentry (60 Minutes) - 400 
    Richard Douglas 
    Whether physical or virtual, every SQL Server instance has its own performance profile based on available resources, workload, host configuration, and other factors. Yet many performance issues share root cause and performance metric patterns. In this session we will demonstrate how to recognize these patterns and identify the root cause for 5 common performance issues using SQL Sentry Performance Advisor and Event Manager. You will also discover the additional performance and configuration information we can provide about a virtualized SQL Server’s host, whether it be Hyper-V or VMware.

    Reveille! Wake up with SQL Sentry Advanced Alerting (60 minutes) - 300 
    Scott Fallen 
    In this session we will take a deep dive into the advanced features and options available in SQL Sentry’s alerting system. We will cover how to have your alerts follow a call schedule, filtering alerts to allow automated response to very specific situations, making the most of response rulesets, and crafting our own alerts with Custom Conditions. For those that already have alerting in place, we will discuss tips and tricks for reducing alert noise. We’ll conclude with a discussion of alerting best practices and Q&A. 

    Performance Data Surveillance (60 minutes) - 300 
    John Martin 
    Within the SQL Sentry repository there is valuable environment data for monitored systems, as well as historical performance data, all available to the end user so that they can mine this information for themselves. This session is a starting point to help get users acclimated with the data structures in the SQL Sentry repository, and some of the information that is available. This session should provide the SQL Sentry user with the building blocks to continue this journey of mining the SQL Sentry repository on their own in the future. 

    Plan Explorer Reconnaissance (60 Minutes) - 300 
    Aaron Bertrand & Greg Gonzalez
    SQL Sentry Plan Explorer has become the tool of choice for tens of thousands of SQL Server professionals working to improve query performance through execution plan analysis. The latest release brings more of what we love, and marks the evolution of Plan Explorer into a comprehensive platform that will help us address even more performance problems, as well as prevent them in the future. 

    Classified Files : APS, Azure SQL Data Warehouse, and Azure SQL Database (60 Minutes) - 300 
    John Martin 
    Whether you use an in-house Analytics Platform System or the cloud-based Azure SQL DW, we have your monitoring covered. In this session, John will show you how we can help you gain insight into data movement, troubleshoot distributed queries step-by-step, and review current and historical information about loads, backups, and restores. You will also learn about our comprehensive health alerts, our notification system, and our familiar, Outlook-style Event Manager calendar. Finally, we’ll go over the features offered in our monitoring solution for Azure SQL Database.

    Operation Debrief (60 minutes) - 200 
    SQL Sentry Team 
    Interact with the entire team to find out the answers to all of your questions about anything SQL Sentry related. This will be an open Q & A session, feedback opportunity, and chance to learn anything we don't cover in the sessions.
    Power BI is the central point for sharing data insight through advanced visualization. Excel is the best analytic tool and includes many capabilities for understanding data in detail. These two Microsoft analytic platforms share a lot of technology, and users can import and model data in either environment and use the results across both. The session will go through the different meeting points between these two platforms.
    Join me for a step-by-step overview on how to secure your data in Azure SQL DB, by leveraging the service’s built-in security features and without requiring any security expertise. The session will cover our most advanced features to-date including Threat Detection & Auditing, Always Encrypted, Transparent Data Encryption and Dynamic Data Masking. I will illustrate important concepts and real-world use cases using a live demo and examples, and where applicable I will share our future plans for these areas.
    Chances are, your team has several point-in-time backups for your databases. After all, they’re essential for recovering the system in an emergency. And, chances are, you’ve got a version control system (VCS) to provide the same capabilities for your applications. But what about version control for the database, too? Come to this session to see how you can create a more efficient database development platform by integrating your VCS with SQL Server. In real-time, you’ll see how versioning, branching, merging, and the other manual tasks you hate can fade away with just a few tips, tricks, and tools.
    Join Joseph for a fascinating look into SQL Server 2016 and its unmatched innovation across on-premises and the cloud to help you turn data into intelligent action. During this keynote Joseph will share insights achieved with advanced data analytics and how SQL Server and Cortana Analytics Suite can bring mission critical intelligence to every organization. With examples from customer organizations, Joseph will demonstrate how companies can benefit from new technologies such as Always Encrypted security, In-Memory OLTP and Operational Analytics, as well as new hybrid capabilities with Stretch Database. Joseph will also share some exciting insider views into SQL on Linux!
    This presentation discusses:
    • The principle behind Microsoft’s Fast Track program
    • How Fast Track architectures are produced and certified
    • Interpreting the certification results
    • A review of two new SQL Server 2014 Data Warehouse Fast Track Certifications – the 5TB and 7TB single socket configurations from Lenovo.
    The introduction of AlwaysOn with SQL Server 2012 Enterprise Edition allowed administrators to address the need for a simpler, more granular method of configuring High Availability without the need for traditional shared storage. The effectiveness of an AlwaysOn implementation in SQL Server 2014 was in part limited by the throughput capabilities of the log transport function. This presentation discusses:
    • A review of Enhanced Availability Groups
    • Log Transport Improvements
    • A functional comparison of SQL Server 2014 SP1 CU7 vs SQL Server 2016 RC2 using SanDisk Fusion ioMemory
    CRU International is the global commodity industry pricing and market analysis firm specialising in Mining, Metals and Fertilizers.

    Will Blake, Director of Technology & Analytics, will share how CRU International develops and deploys a series of BI Office solutions to support various business functions including Finance, Web Analytics, Supply and Pricing.

    In the second half of the presentation, Ian Macdonald, Principal Technologist at Pyramid Analytics, will present how BI Office Version 6 and Microsoft SQL Server Analysis Services provide the ideal platform for Analytics at CRU. By demonstrating how to build a BI Solution alongside real-life customer examples in this session, Will and Ian create the perfect launch pad to show you how BI Office can turn your company into a data-driven organisation heading for the stars.
    Whether you are a Rocket Data Scientist, a Starship Captain searching for insights, a Number 1 officer creating elegant visualisations or an ordinary Space Cadet seeking business enlightenment before your next Galaxy tour, BI Office from Pyramid Analytics can help you achieve your quest for insights and timely business information. By demonstrating how to build and deploy a BI Solution in minutes, this session will show you how BI Office can make your PowerBI output work on your planet!

    Ian is a data explorer who has nearly 30 years’ experience trekking the Enterprise Business Intelligence Galaxy. Starting in technical support, training and consulting with Information Builders in 1986, he subsequently developed a career in presales, product marketing and marketing, holding senior VP positions with companies such as Microsoft, Hyperion Solutions and HP. Ian can create the perfect launch pad to show you how BI Office can turn your company into a data-driven organisation heading for the stars, whilst combining the best of PowerBI on premises.
    In this demo-packed session, you’ll learn practical tips and tricks for SQL code tuning from SQL Sentry’s Richard Douglas as he walks you through some of the most problematic and troublesome SQL coding problems. Using both free and paid tools from SQL Sentry, you’ll learn tips and tricks you can take home and immediately apply to your SQL code. You’ll learn things like what’s the best way to write a cursor, a quick trick that can save you 20-30% processing time on your big stored procedures, some major T-SQL bad habits, and even some new SQL Server features. And, you might end up seeing something in the free tools that will significantly improve the way that you work!
    Today’s Business Intelligence systems are increasingly perceived as mission critical, but their value is only as good as the quality of data they contain. Learn how you can use Microsoft SQL Server Master Data Services and Profisee’s Master Data Maestro to ensure data quality and improve BI value.
    We’ve made huge investments to enhance SQL Server Management Studio (SSMS) across the board in SQL Server 2016. Join us in this session for a ‘behind the scenes’ look at the new foundation for SSMS, how we built what you see today and learn about how we will accelerate innovation for SQL client tools going forward.
    SQL Server Data Tools (SSDT) significantly boosts developer productivity across the database development lifecycle with a modern database development environment for SQL Server and Azure SQL Database. Join us in this session to learn about recent improvements made to SSDT, from support for the latest SQL Server 2016 and Azure SQL DB features to a faster release cadence, a unified installation experience and simplified connectivity. We’ll highlight key experiences through demos, discuss our ongoing work and have an active Q&A to hear from you about what issues we should prioritize going forward. 

    This session gives you a chance to directly connect with the product team and get their insights. You’ll learn a lot whether you’re new to SSDT / Visual Studio development or an experienced developer.
    In this session you will learn about the key investment areas in SQL Server 2016 and what the so called Hero Features of SQL Server 2016 are. We will cover technologies such as In-Memory OLTP, Clustered Columnstore Index, Real-Time Operational Analytics, Advanced Analytics with R Services, Always Encrypted and Stretch Database. You will also see a real customer scenario demo showcasing how SQL Server 2016 is being used to manage the Full Lifecycle of Financial Data. 
    SQL Server Engineering has made some fundamental improvements to the SQL Server 2016 core database engine that fix some performance bottlenecks. These fixes make SQL Server 2016 just run faster after the upgrade. In addition to these fixes, SQL Server 2016 has some interesting new features that help the DBA improve the performance and scalability of mission-critical OLTP workloads. Windows Server 2016 also has some new features, especially in the context of database storage architecture and new hardware support, that benefit SQL Server 2016 workloads. This session will cover all these investments.
    You have probably heard about DevOps, continuous integration and continuous delivery. This session introduces the concepts and benefits of adopting these processes.

    DevOps has become the latest trend. Development teams all over are moving to a DevOps culture where developers and operations work closer, in a single workflow to improve efficiency.

    Most companies can easily build continuous integration and continuous delivery into their application. But many struggle with the database due to the complexity of migrating rather than overwriting. In this session we will look at the DevOps movement and how it can be implemented in any company large or small to ease database development and deliver working products.

    By building a pipeline to define how code changes are tested, tracked and released into production you will see the benefits of these processes. 

    This session will include looking at tools like SSDT, RedGate, VSTS, TeamCity & Octopus to see how they can be utilised in DevOps.
    Have you managed to implement, or are you struggling to implement, automated deployments for your databases but keep hitting problems? Database deployments are usually far more complex than application rollouts, and it’s not uncommon to hit roadblocks when attempting to automate them.

    This session will look at the commonly used deployment methods and the options to prevent the most common problems, covering everything from hand-written scripts to gold schema models using SSDT or RedGate.

    The session will include problems surrounding:
    • Release order/dependencies
    • Data
    • Environment differences
    • Schema Compare Tools
    This session looks at how to minimise resolution time with some joined-up thinking between developers and DBAs. Martin Wild and Peter O Connell will run through a workflow showing how to identify poorly performing SQL, how to navigate and analyse its plan and how to optimize for better performance. They will introduce some free tools to help you on your way and help connect your developers and DBAs to resolve issues faster.
    Releasing new features every month, Power BI is moving faster than ever towards letting you experience any data, any way, anywhere. With newly added capabilities such as Publish to web, Power BI Embedded and row-level security the range of use cases for Power BI is expanding rapidly. What does this mean to you as a BI professional, developer, analyst, or journalist?
    Join this session to learn how the current Power BI suite can be leveraged by different types of users, and watch the latest and greatest updates in action!
    R is the lingua franca of Analytics. SQL is the world’s most popular database language. What magic can you make happen by combining the power of R and SQL for Data Science and Advanced Analytics? Come to this session to learn how to turn your existing applications into intelligent applications using the R integration into the SQL Server engine. 
    SQL Server always logs every change to a database. But exactly what is logged can vary based on a number of factors. Some operations are referred to as "minimally logged," but even those operations can log a different level of detail depending on your recovery model. This session looks inside the transaction log to see exactly what is logged for minimally logged operations. First, we look at some background information regarding how the log is used and managed, and then we introduce a tool that can help you actually see your log records. When you can query the log, you can determine how logging for operations such as index rebuilds and SELECT INTO differs depending on whether you are using the FULL or BULK_LOGGED recovery model. We also look at other factors in our SQL Server operations that can affect what is actually written to the log. In addition, I’ll describe the benefits and caveats for each of the recovery models.

    • Understand how SQL Server's transaction log is managed
    • Define what a 'minimally logged operation' is
    • Examine what actually gets written to the log in each of the recovery models

    This session looks inside the transaction log to see exactly what is logged for minimally logged operations in each of the recovery models. We also look at other factors in our SQL Server operations that affect what is actually written to the log.
    Making Big Data processing easy requires great developer support that hides the complexity of managing scale, provides easy integration of custom code to handle the complex processing requirements ranging from data cleanup to advanced processing of unstructured data, and provides great tool support for the developer to help in the iterative development process. Thus when we at Microsoft introduced the Azure Data Lake, we decided to also include a new language called U-SQL to make Big data processing easy. It unifies the declarative power of SQL and the extensibility of C# to make writing custom processing of big data easy. It also unifies processing over all data – structured, semistructured and unstructured data – and queries over both local data and remote SQL data sources. This presentation will give you an overview on U-SQL, why we decided to build a new language, what its core philosophical underpinnings are as well as show the language in its natural habitat – the development tooling – showing the language capabilities as well as the tool support from starting your first script to analyzing its performance.
    The support for system-versioned temporal tables, a database feature that provides information about data at any point in time rather than only at the current moment, is a must-have for any DBA! With the help of this new feature, DBAs can in minutes collect vast amounts of operational data using open-source software based on PowerShell technology and Azure. Add Power BI to the mix and get mind-blowing insights into the well-being (or lack thereof) of one’s SQL Server environments; the future will never be the same.
    Join this session for a perspective on moving a legacy enterprise organisation into the modern BI world using the power of positive disruption and the Attunity tool set.

    This real world case study will explore Zurich Insurance’s journey featuring both the challenges and successes they faced.
    Cortana Analytics – now re-branded as the Cortana Intelligence Suite – is a collection of Microsoft technologies that allow you to create intelligent actions from data, using a variety of methodologies and platform components. Buck Woody, from the Microsoft Machine Learning and Data Science team, will show you each of these components, how they fit together, and how you can develop an architecture for complex analytic solutions. 
    You work with SQL Server. Awesome. Starting today you have two choices: 1) you can stick with what you currently do and watch as the technology future passes you by, or 2) you can start learning and working with the next big industry trend in data. This session will take you on an end-to-end journey of IoT, the Internet of Things, from data generation to data movement, discussing topics such as "big data" and looking at all the services and technologies that help you understand your data. Simply, this session will help you understand the Internet of "YOUR" things.