Your tabular model is done, approved, and ready for use. Users quickly get excited about exploring it in Excel as a self-service business intelligence tool. Then, all of a sudden, they start asking how to extract more, and different, information from the tabular model through Excel. Now it is up to you to familiarize them with everything Excel can do with a tabular model.
Given the small amount of documented knowledge about using tabular models from Excel, I will show you how to get the best out of your tabular models by using Excel as a self-service business intelligence tool. Filters, named sets, and calculations in the pivot table: I will explain it all!
Digital Transformation is much more than just sticking a few Virtual Machines in the cloud; it is real, transformative, long-term change that benefits and impacts the whole organisation.
Digital Transformation is a hot topic with CEOs and the C-suite, renewing their interest in data and what it can do to empower the organisation.
With the right metrics and data visualisation, Power BI can help bring clarity and predictability to the CEO's strategic decisions, help them understand how their customers behave, and measure what really matters to the organisation. This session is aimed at helping you to please your CEO with insightful dashboards in Power BI that are relevant to the CxO in your organisation, or your customers’ organisations.
Using data visualisation principles in Power BI, we will demonstrate how you can help the CEO by giving her the metrics she needs to develop a guiding philosophy based on data-driven leadership. Join this session to get practical advice on how you can help drive your organisation’s short and long term future, using data and Power BI.
As an MBA student and external consultant who delivers solutions worldwide, Jen has experience advising CEOs and C-level executives on strategic and technical direction.
Join this session to learn how to speak their language in order to meet their needs, and impress your CEO by proving it, using Power BI.
So Azure SQL Data Warehouse is now available and starting to be used, but what does that mean for you, and why should you care?

Reflecting on a large-scale Azure DW project, this session gathers together learnings, successes, failures and general opinions into a crash course in using Azure SQL Data Warehouse “properly”.

We'll start by quickly putting the technology in context, so you know WHEN to use it, WHERE it’s appropriate and WHY it works the way it does.
  • Introducing the ADW technology
  • Explaining distributions & performance
  • Explaining PolyBase
Then we'll dive into HOW to use it, looking at some real-life design patterns, best practice and some “tales from the trenches” from a recent large Azure DW project.
  • Performance tips & tricks (designing for minimal data movement, managing distribution skew, CTAS, resource classes and more; a CTAS sketch follows this list)
  • ETL Patterns (Surrogate keys & Orchestration)
  • Common Mistakes & Pitfalls
  • Conclusions & Recommendations
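As a taster of the CTAS pattern mentioned above, here is a minimal sketch; the schema, table and column names are hypothetical, not taken from the project discussed:

    -- Rebuild a fact table hash-distributed on a join key to minimise data
    -- movement (Azure SQL Data Warehouse; all names here are hypothetical).
    CREATE TABLE dbo.FactSales_New
    WITH
    (
        DISTRIBUTION = HASH(CustomerKey),   -- co-locate rows that join on CustomerKey
        CLUSTERED COLUMNSTORE INDEX         -- default storage for large fact tables
    )
    AS
    SELECT SalesOrderId, CustomerKey, OrderDate, SalesAmount
    FROM   stg.FactSales;

    -- Swap the new table in with metadata-only renames.
    RENAME OBJECT dbo.FactSales     TO FactSales_Old;
    RENAME OBJECT dbo.FactSales_New TO FactSales;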
Most of the time you’ll see ETL being done with a tool such as SSIS, but what if you need near-real-time reporting? This session will demonstrate how to keep your data warehouse updated using Service Broker messages from your OLTP database.
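As a rough sketch of the idea (all object names are hypothetical, and the session may structure this differently), the Service Broker plumbing looks like this:

    -- OLTP side: declare the message flow (names are hypothetical).
    CREATE MESSAGE TYPE OrderChangedMsg VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT OrderChangedContract (OrderChangedMsg SENT BY INITIATOR);
    CREATE QUEUE dbo.OltpSendQueue;
    CREATE QUEUE dbo.DwReceiveQueue;
    CREATE SERVICE OltpService ON QUEUE dbo.OltpSendQueue;
    CREATE SERVICE DwService   ON QUEUE dbo.DwReceiveQueue (OrderChangedContract);

    -- Send a change message, e.g. from a trigger or stored procedure.
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE OltpService TO SERVICE 'DwService'
        ON CONTRACT OrderChangedContract WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE OrderChangedMsg (N'<Order Id="42" Status="Shipped" />');

    -- Warehouse side: an activation procedure RECEIVEs and applies the change.
    RECEIVE TOP (1) CONVERT(XML, message_body) AS Payload
    FROM dbo.DwReceiveQueue;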

Containers are a new and exciting technology but what does it mean for DBAs?

This session will give an introduction into SQL Server running in containers and what options are available.

Attendees will be taken through the following:

Defining what containers are (benefits and limitations)
Configuring Windows Server 2016 to run containers
Installing the Docker engine
Pulling SQL Server images from the Docker repository
Running SQL Server containers
Committing new SQL Server images
Exploring third-party options to run containers on previous versions of Windows Server (real-world example)

This session assumes that attendees have a good background in SQL Server administration and a basic knowledge of Windows Server administration.
Tired of bar charts? We'll build out a custom Power BI visual and show the power of Power BI while taking a deep dive into how this is achieved. We will be exploring web technologies along with data technologies, and seeing how some very powerful constructs are used to produce Power BI reports.

We will be covering a variety of content, including: TypeScript, JavaScript, HTML5, Gulp, Visual Studio Code, the MVVM pattern, D3.js, and, without giving the game away too much, Google Maps.
Have you ever wondered whether more goals are scored when it rains, or whether Stoke win more games on a cold weekday evening at the bet365 stadium? This session shows how you can use open data sources and web pages to gather data.

It then shows how you can manipulate and extend that data, and finally report on it. We will look into how you can use data from the Azure Data Market or other open sources and combine it with data from web pages to create the dataset you need.

When we have our dataset we will manipulate and extend it using M and DAX so that we can get meaningful insights from it. We will then dive into the data to see if there is anything to report. 

In this end-to-end Power BI Desktop demo we will use fun data that many can relate to, as the English Premier League is one of the most popular football leagues in the world. The audience will take away many nuggets of information as they see what a real-world example can look like. I will share all the obstacles I hit and the lessons I learned when creating this report, so they will see both the limitations of Power BI Desktop and open data as well as their strengths. First of all, the audience will learn how to use Power BI Desktop to create something real with fun data.

Both the data acquisition and manipulation as well as the reporting are covered in this demo-rich session. The audience will learn about Power BI Desktop's strengths and weaknesses, as well as the benefits and potential problems of using open data and web pages as sources.

In the end the Power BI Desktop file will be available to download for the audience. 
Microsoft Session. Based on the popular blog series, join me in taking a deep dive and a behind-the-scenes look at how SQL Server 2016 "Just Runs Faster", focused on scalability and performance enhancements. This talk will discuss specific engine improvements, not only for awareness, but to expose design and internal change details. The beauty behind "It Just Runs Faster" is your ability to just upgrade and gain performance without lengthy and costly application or infrastructure changes. If you are looking at why SQL Server 2016 makes sense from a technical perspective for your business, you won't want to miss this session.
In this fun session we'll review a bunch of problem implementations that have been seen in the real world.  Most importantly we will look at why these implementations went horribly wrong so that we can learn from them and never repeat these mistakes.
In this session we will review the new enhancement to SQL Server security available in SQL Server 2016 and Azure SQL DB.  These include Always Encrypted, Row-Level Security and Dynamic Data Masking as well as whatever else Microsoft has released since I've written this abstract. We'll look at how to set these features up, how to use them, and most importantly when to use them.
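To give a flavour of one of these features ahead of the session, here is a minimal Dynamic Data Masking sketch; the table and user names are made up for illustration:

    -- Mask sensitive columns at definition time.
    CREATE TABLE dbo.Customer
    (
        CustomerId INT IDENTITY PRIMARY KEY,
        Email      NVARCHAR(256) MASKED WITH (FUNCTION = 'email()'),
        CreditCard VARCHAR(20)   MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')
    );

    CREATE USER ReportReader WITHOUT LOGIN;
    GRANT SELECT ON dbo.Customer TO ReportReader;

    -- ReportReader sees masked values...
    EXECUTE AS USER = 'ReportReader';
    SELECT Email, CreditCard FROM dbo.Customer;
    REVERT;

    -- ...until explicitly granted the UNMASK permission.
    GRANT UNMASK TO ReportReader;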
Microsoft Session. Are you a:
  • Production DBA that wants to know which new xEvents and DMVs/DMFs are available to get further insights into SQL Server operations?
  • Production DBA that needs to troubleshoot a performance issue, and needs to collect performance data for others to analyze?
  • Developer that needs to analyze query performance and already has basic knowledge of how to read an execution plan?
Then this session is for you! It just works – performance and scale in the SQL Server 2016 database engine, and what is being added to in-market versions. This session will showcase several improvements in SQL Server, focusing on the latest enhancements that address some of the most common customer pain points in the Database Engine, involving tempdb, the new CE, memory management and T-SQL constructs, as well as diagnostics for troubleshooting query plans, memory grants, and backup/restore. Understand these changes in performance and scale, and the new and improved diagnostics for faster troubleshooting and mitigation.
The Internet of Things is the new kid on the block, offering a wealth of possibilities for data streaming and rich analytics. Using a Raspberry Pi 3, we will take an end-to-end look at how to interact with the physical world, collecting sensor values and feeding that data in real time into cloud services for manipulation and consumption. This will be a heavily demonstrated session looking at how such an environment can be set up using Microsoft offerings, including Windows 10 IoT Core, a C# Universal Windows Platform application, an Azure IoT Event Hub, Azure Stream Analytics, Azure SQL DB and Power BI. This is an overview of what’s possible, showing exactly how to build such a simplified solution in a session that will be 90% demonstrations. This will hopefully add that level of excitement to real-time data, with plenty of hardware out there showing what it can do when set up with Microsoft software. In addition, spare Raspberry Pi devices will be available for the audience to pass around and look at.
The Azure Data Lake is one of the newest additions to the Microsoft Azure Cloud Platform, bringing together cheap storage and massive parallel processing in two quick-to-set-up and, relatively, easy-to-use technologies. In this session we will dive into the Azure Data Lake.

First of all, exploring:
  • Azure Data Lake Store: how is data stored? What are the best practices?
  • Azure Data Lake Analytics: how does the query optimiser work? What are the best practices?

Second, a practical demonstration of tuning U-SQL: partitioning and distribution, and job execution in Visual Studio.
SQL Server normally relies on concurrency models in order to maintain the Isolation in ACID. As systems scale, this method can go from being a benefit to a hindrance, limiting the concurrency of our applications and hurting performance. Throwing hardware at the problem can help, but does not address the fundamental issues. These can only be resolved with better database design and the correct configurations. Join John as we work through the different design patterns that exist for increasing concurrency, from using specific data types and partitioning to how In-Memory OLTP can help. By the end of this session you will understand the different methods available to boost concurrency, regardless of the size of your environment.
What is the future going to look like? When are we going to reach true Artificial Intelligence? Is the Singularity going to happen? There is a lot of talk today about AI and what it means for the human society. Let's forget about the future for an hour and focus on what is possible today. We are going to look at the most promising area in AI research, Deep Learning and understand how it fits in the wider picture of Machine Learning. Be prepared for some maths, loads of graphs and deep learning in action.
Do you know the situation: a query that still worked quickly and satisfactorily yesterday suffers from performance problems today?

What will you do in such a situation?

- you may restart SQL Server (it worked all the other times before)
- you drop the procedure cache (a DBA once told you to)
- you get yourself a coffee and think about what you learned in this session

Microsoft SQL Server requires statistics for ideal execution plans. If statistics are not up to date, Microsoft SQL Server may create execution plans that run a query many times slower. In addition to a basic understanding of statistics, this session shows special situations that are known only to a small group of experts.

After a brief introduction to the functionality of statistics (Level 100), the special query situations, which lead to wrong decisions without experience, become immediately apparent. The following topics are covered using a large number of demos:

- When do statistics get updated?
- Examples of estimates and how they can go wrong
- Outdated statistics and ascending keys
- When are outdated statistics updated for unique indexes?
- Drawbacks of statistics on empty tables
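As a small taste of the demos, this sketch (dbo.Orders is a hypothetical table) shows how to see when a statistics object was last updated and how many modifications it has accumulated:

    SELECT  s.name,
            sp.last_updated,
            sp.rows,
            sp.rows_sampled,
            sp.modification_counter     -- changes since the last update
    FROM    sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE   s.object_id = OBJECT_ID(N'dbo.Orders');

    -- Force a refresh with a full scan when auto-update has not kicked in yet.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;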

Follow me on an adventurous journey through the world of statistics of Microsoft SQL Server.
SQL Server 2016 is a class-leading Tier One product, but how do you get customers to upgrade and migrate? How can you address the "so what? SQL Server does everything I need right now" objection?

For the last year, Mike Boswell has been working on migrating a leading UK public sector customer to SQL Server 2016. We have had to go deep on performance and support the business argument.

This chalk 'n' talk session will dive into how various groups within Microsoft, including SQLCAT, were used to change the customer perception of "we are just fine with legacy SQL Server", and give you ideas on what you need to look into.

We talk about the areas you need customers to focus on and what is likely to become a blocker. Performance comparisons, implementation of new features, and bringing the Oracle DBAs to understand the differences are just some of the challenges you will face. I'll tell you how we failed at first and how we turned it all around.

We will look at the new CE, where In-Memory OLTP is a good fit, indexing, testing, checkpoints, how "it just goes faster" helps workloads, performance counters, KPIs, business discussions, insight into migration planning, and how to even work with an offshore model!
SQL Server 2016 brings us many new features, but one of the most anticipated is surely the Query Store. The Query Store allows us to track query plans as they change over time, giving us a whole slew of new possibilities when it comes to tuning our queries. Even just the ability to compare a previous plan to a new plan is a huge step towards understanding what may be happening in our instance. We can even tell the optimizer which plan we want it to use. All of this was either extremely difficult or, in some cases, impossible to do before. This session will give you the insight to get started using this new and wonderful feature set.
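As a preview, a minimal Query Store sketch; the query and plan IDs are placeholders:

    -- Enable the Query Store for the current database.
    ALTER DATABASE CURRENT SET QUERY_STORE = ON;

    -- Which plans exist for each query, and how have they performed?
    SELECT qsq.query_id, qsp.plan_id, rs.avg_duration, rs.count_executions
    FROM   sys.query_store_query         AS qsq
    JOIN   sys.query_store_plan          AS qsp ON qsp.query_id = qsq.query_id
    JOIN   sys.query_store_runtime_stats AS rs  ON rs.plan_id   = qsp.plan_id;

    -- Tell the optimizer which plan we want it to use.
    EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;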
During this session we'll take an "under the bonnet" look at query plans in SQL Server. We'll look at how the components work in different versions of SQL Server and the things to look out for that could cause you problems in your queries.

We'll also take a look at how the query optimiser makes its decisions and how to influence it to make better choices 
When was the price of an article changed, and what was the original price?
How has the price of an article developed over a period of time?
Until now, developers had to build their own solutions with the help of triggers and/or stored procedures.
With Temporal Tables an implementation is ready in a few seconds - but what are the special requirements?
In addition to a brief introduction to the technology of Temporal Tables, this session provides an overview of all the special features associated with Temporal Tables.

- Renaming tables and columns
- How do Temporal Tables relate to triggers?
- Temporal Tables and In-Memory OLTP - can this go well?
- Can computed columns be used?
- How do you configure security?
- ...
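For orientation, a minimal system-versioned table sketch; the names are invented for illustration:

    CREATE TABLE dbo.Article
    (
        ArticleId INT           NOT NULL PRIMARY KEY,
        Price     DECIMAL(10,2) NOT NULL,
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ArticleHistory));

    -- "How has the price of an article developed over a period of time?"
    SELECT ArticleId, Price, ValidFrom, ValidTo
    FROM   dbo.Article FOR SYSTEM_TIME BETWEEN '2016-01-01' AND '2016-12-31'
    WHERE  ArticleId = 1
    ORDER BY ValidFrom;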
Ever found yourself deconstructing endless layers of nested code? Is your T-SQL codebase written in an object-oriented format with functions & views? Did you know that object-oriented code reuse can come with a significant penalty?  

In this session, learn how T-SQL is not like other common programming languages. We will peek inside the Query Optimizer to understand why applying object-oriented principles can be detrimental to your T-SQL's performance. Extensive demos will not only explore solutions to maximize performance, you will also be introduced to a T-SQL tool that will aid you in unraveling nested code.
Though data types are fundamental, they are often disregarded or given little thought. But did you know that poor data type choices can have a significant impact on your database design and performance?

Attend this session to learn how database records are stored within SQL Server and why all data types are not created equal. Armed with that knowledge, we will explore several performance scenarios that may be impacting your systems right now! When you leave, you will be able to explain to your colleagues why data type choices matter, assess your own systems, and implement some best practices to mitigate these performance killers.
Where do the estimated rowcount values come from? Look inside SQL Server’s distribution statistics to see how they are used to come up with the estimates. We’ll also discuss changes in the cardinality estimator in recent versions and look at some new metadata that gives us more statistics information.  Goals:
  • Explore the output of DBCC SHOW_STATISTICS
  • Describe when the density information is useful
  • Look at some problem scenarios for which the statistics can’t give good estimates
  • Understand why cardinality estimation involves more than just the statistics
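A starting point for exploring these goals yourself; the table and index names are hypothetical, and sys.dm_db_stats_histogram requires a recent build (SQL Server 2016 SP1 onwards):

    -- Header, density vector and histogram of one statistics object.
    DBCC SHOW_STATISTICS (N'dbo.Orders', N'IX_Orders_CustomerId');

    -- The histogram exposed as relational metadata in newer builds.
    SELECT step_number, range_high_key, range_rows, equal_rows, average_range_rows
    FROM   sys.dm_db_stats_histogram(OBJECT_ID(N'dbo.Orders'), 2);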
This session is not only for people working with Master Data, but also everyone working with Business Intelligence. With Master Data Services 2016 it's now easy to handle all your dimension data, including Type 2 history. In this session you will get a brief introduction to the basic principles of Master Data Services, together with an overview of all the new features brought to you with Master Data Services 2016.
You will learn about features like:
  • New features and better performance in the Excel Add-In
  • Track Type 2 history
  • Many-To-Many hierarchies
  • New security and administrator capabilities
  • New approval flows
If you are using Master Data Services or are thinking about it, this is the session you cannot miss.
With the release of Azure Analysis Services we have, at long last, got a crucial piece of the Microsoft BI stack available in the cloud. Azure Analysis Services is a PaaS version of Analysis Services, and in this session you’ll learn about:
  • What Azure Analysis Services is
  • When you should use it
  • Configuring Azure Analysis Services in the Azure portal
  • Developing and deploying Azure Analysis Services models
  • Building reports in Excel and Power BI using Azure Analysis Services
  • Connecting to on-premises data sources from Azure Analysis Services
  • Automation with PowerShell
  • Integration with other cloud BI services
  • Sizing and pricing
  • Monitoring
DevOps is changing today's software development world by helping us build better software, faster. However, many organizations struggle to include their database changes in their application deployment. In this session, we will examine how the concepts and principles of DevOps can be applied to database development by looking at both automated comparison analysis and migration script management. We will cover using branches and pull requests for database development, while performing automated building, testing, and deployment of database changes to on-premises and cloud databases.
When you need to extract data from the database, you write more or less complex T-SQL code. Often a simplistic, procedural approach reflects what you have in mind, but it can negatively impact performance, because the database engine might think otherwise. Fortunately T-SQL, as a declarative language, allows us to ask for the "what" and delegate the "how" to the engine. Everything works best as long as you respect a few simple rules and use the right constructs. In this session, with few slides and a lot of real-case scenarios, you will see the advantages of writing queries for high performance, even when they are written by that "someone else" called an ORM.
You already know a thing or two about tuning a SQL query on Microsoft SQL Server. You can read an execution plan and know the most significant red flags to look for. And you have also learned about the important information revealed by SET STATISTICS statements. But you want to take it up another level!

In this session, go even deeper into query tuning with three new lessons. First, we’ll examine a few new and seldom-used features inside SSMS specifically for query performance. Second, we’ll spend a bit of time learning query-related DMVs and how to read query plans directly in XML. Finally, we will discuss and demo a set of powerful trace flags, the 8600 series, that reveal additional details about how the query optimizer behaves as it processes a query.
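To illustrate that last point, a sketch of the 8600-series flags in action (they are undocumented and unsupported, so treat this as a test-system-only illustration):

    -- 3604 redirects trace output to the client; 8605 dumps the converted
    -- input tree and 8607 the optimizer's output tree for this query.
    SELECT   o.name, COUNT(*) AS column_count
    FROM     sys.objects AS o
    JOIN     sys.columns AS c ON c.object_id = o.object_id
    GROUP BY o.name
    OPTION  (RECOMPILE,
             QUERYTRACEON 3604,
             QUERYTRACEON 8605,
             QUERYTRACEON 8607);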
Having multiple data visualizations doesn't make it easier to choose the right one to reveal the story. Choosing the wrong visualization can obscure the story - or worse yet, distort it! In this session, we start by exploring how telling a story with data visualizations is different from creating a report. Then we explore the vocabulary of data visualization and learn how to apply grammar (visualization design principles) to your data. Along the way, you will also learn how to evaluate the goal of your data story and how to choose the correct visualizations that communicate this story accurately and effectively.
Have you ever considered a situation where a Columnstore Index can be quite the opposite of what one would expect from it? A slow, wasteful source of painfully slow queries, lagging performance, consuming an irresponsible amount of resources…

Setting the wrong expectations (it won’t run 100 times faster on EVERY query), selecting the wrong architecture (partitioning by hundreds of rows instead of millions), using and aggregating large strings in the fact tables – the list is actually quite long.

What about some of the less known limitations for building Columnstore Indexes? The ones that will bite you suddenly in the middle of the project – when you do not expect it at all?

Let me show you how those painful mistakes come about, so you will surely know how to avoid them.
With increasing speed in relational query execution, classical analytical solutions are challenged more and more. Why lose time processing data into multi-dimensional databases? Why analyze outdated data if you can have fresh data instead?

We are analyzing typical scenarios from classical multi-dimensional analysis, like YTD calculation, DistinctCount and others, with regard to their efficiency under different solution approaches: classical multi-dimensional databases in ROLAP mode, DirectQuery, T-SQL… And we are going to show how Columnstore indexes influence those solutions. Find out about the advantages and disadvantages of the different solutions with regard to the problem. And maybe you will discover new approaches for your own challenges.

Prerequisites: SSAS knowledge, basic SSAS performance tuning, plus relational indexing basics and a good understanding of the relational Columnstore capabilities.
When R is installed on SQL Server 2016, there are new settings and configurations the DBA needs to know about to ensure that SQL Server is not adversely impacted when R runs on the server. This session will show which settings need to be changed to monitor R, and provide tools to assist in the process. The components installed and their interaction with SQL Server will be explored to provide a better understanding of which processes are running when, and what their impact is on performance. Attendees will learn what R code needs to include to run not only in memory, but also to use the ScaleR processes of R Server to swap to disk when all memory is in use. The maintenance requirements for implementing R code are reviewed, so that DBAs will know what steps are involved to support R running on SQL Server. By default SQL Server allows R users to utilize server resources at will, without having any R code installed on the server; learn about this process and how to restrict it. If you are planning on running R on your SQL Server 2016 instance, you need this session to configure the server to ensure optimal performance for SQL Server and R.
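Two of the configuration steps discussed, sketched in T-SQL; the resource limits shown are illustrative only:

    -- Allow R scripts to run in-database (a restart of the SQL Server
    -- and Launchpad services is needed for this to take effect).
    EXEC sp_configure 'external scripts enabled', 1;
    RECONFIGURE;

    -- Cap what external (R) processes may consume via Resource Governor.
    ALTER EXTERNAL RESOURCE POOL [default]
    WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 20);
    ALTER RESOURCE GOVERNOR RECONFIGURE;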
The Analysis Services model used in both Power BI Desktop and SSAS allows solving of very complex modelling issues. In this session we will look at solving several complex modelling problems, like many-to-many and RLS, using the latest versions of Power BI Desktop and Analysis Services.
Clustering SQL Server still vexes many in one way or another. For some, it is even worse now that both AlwaysOn features - clustered instances (FCIs) and availability groups (AGs) - require an underlying Windows Server failover cluster. Storage, networking, Active Directory, quorum, and more are topics that come up quite often. Learn from one of the world's experts on clustering SQL Server about some of the most important considerations - both good and bad - that you need to do to be successful whether you are creating an FCI, AG, or combining both together in one solution.
It isn’t the dark ages any more.
You’ve learned how to express your database in script form using something like SSDT, Flyway or Redgate. You track those scripts in source control with tools like TFS or Git. Well done.
But you haven’t written as many automated tests as you know you should. You haven’t looked at the build functionality in VSTS or gotten to grips with build servers like TeamCity or Jenkins. Even if you have it was for C# apps and you aren’t sure how to get the same benefits for SQL Server.
I’ll explain how to unit test SQL with tSQLt and to automate your tests with a build server to give you confidence in the quality of your code.
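As a flavour of what that looks like, here is a minimal tSQLt test; the table and the dbo.GetOrderTotal function under test are hypothetical:

    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test order total sums line amounts]
    AS
    BEGIN
        -- Isolate the test from real data.
        EXEC tSQLt.FakeTable 'dbo.OrderLines';
        INSERT dbo.OrderLines (OrderId, Amount) VALUES (1, 10), (1, 15);

        DECLARE @actual MONEY = dbo.GetOrderTotal(1);   -- code under test

        EXEC tSQLt.AssertEquals @Expected = 25, @Actual = @actual;
    END;
    GO
    EXEC tSQLt.Run 'OrderTests';   -- the step a build server automates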
The need for batch movement of data on a regular time schedule is a requirement for most analytics solutions. Within the Cortana Intelligence Suite, Azure Data Factory (ADF) is the service that can be used to fulfil such a requirement.

In this session, you will see the components required within ADF to orchestrate the movement of data within the Cortana Intelligence Suite, looking at scenarios such as moving data from on-premises servers to Azure Blob Storage or a database, through to invoking analytical workloads such as Machine Learning model evaluations. Azure Data Factory brings all this together. You will also see a real-world case study from a Microsoft Partner on how they implemented an Azure Data Factory and Azure Data Lake business intelligence solution for a customer with 1.5 billion users.
Do you believe the myths that “Third Normal Form is good enough”, or that “Higher Normal Forms are hard to understand”?
Do you believe the people who claim that these statements are myths?
Or do you prefer to form your own opinion?

If you take database design seriously, you cannot afford to miss this session. You will get a clear and easy to understand overview of all the higher Normal Forms: what they are, how to check if they are met, and what consequences their violations can have. This will arm you with the knowledge to reject the myths about higher Normal Forms. But, more important: it will make you a better designer!
In an AlwaysOn world we focus on entire databases being highly available. However, replication offers another, arguably more powerful, way to make data available on multiple servers/locations that steps outside of "normal" High Availability scenarios. This session will explain what database replication is, what the different parts are that make up the replication architecture, and when/why you would use replication. You will leave the session with an understanding of how you can leverage this feature to achieve solutions that are not possible using other High Availability features. The content is valid for all versions of SQL Server from 2005 onward.
Whether you are a developer, DBA, or anything in between, chances are you are not always following best practices when you write T-SQL. Unfortunately, many so-called “bad habits” aren’t always obvious, but can lead to poor performance, maintainability issues, and compatibility problems.

In this session, you will learn about several bad habits, how they develop, and how you can avoid them. While we will briefly discuss advice you’ve probably heard before, like avoid SELECT * and don’t use NOLOCK, you will also learn some subtleties in SQL Server that might surprise you, how some shorthand can bite you in the long run, and a very easy way to improve cursor performance.

By changing your techniques and ditching some of these bad habits for best practices, you will take new techniques back to your environment that will lead to more efficient code, a more productive workflow, or both.
Discover the ins and outs of some of the newest capabilities of our favorite data language. From JSON to COMPRESS / DECOMPRESS, from AT TIME ZONE to DATEDIFF_BIG(), and from SESSION_CONTEXT() to new query hints like NO_PERFORMANCE_SPOOL and MIN / MAX_GRANT_PERCENT, as well as some other surprises, you’ll walk away with a long list of reasons to consider upgrading to the latest version  - or the next version.
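A few of these, sketched together; the values and the time zone name are just examples:

    DECLARE @j NVARCHAR(MAX) = N'{"who":"DBA","hours":[9,17]}';
    SELECT JSON_VALUE(@j, '$.who')                                    AS WhoIsOnCall,
           DATEDIFF_BIG(MILLISECOND, '1900-01-01', SYSDATETIME())     AS MsSince1900,
           SYSDATETIMEOFFSET() AT TIME ZONE 'W. Europe Standard Time' AS LocalTime;

    -- Round-trip compression of a value.
    SELECT CAST(DECOMPRESS(COMPRESS(N'a long string')) AS NVARCHAR(MAX)) AS RoundTrip;

    -- Per-session context that auditing or RLS predicates can read back.
    EXEC sys.sp_set_session_context @key = N'TenantId', @value = 42;
    SELECT SESSION_CONTEXT(N'TenantId') AS TenantId;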
Continuous Integration attracts more and more attention throughout all areas of development. But not every area is provided with a satisfactory amount of tools and support for this topic. Especially in BI there is still a gap to close.  During this session we will look into ways to start on your own Continuous Integration approach for your SSAS tabular solution. After a short review of what aspects Continuous Integration actually consists of we'll focus on automated deployment. Let's have a look at the Tabular Object Model (TOM) in C# as well as the JSON based Tabular Model Scripting Language (TMSL) available since SQL Server 2016 as possible approaches to build your own automated deployment process. And what about BIML?
Does your application suffer from performance problems even though you followed best practices on schema design? Have you looked at your transaction log?
There’s no doubt about it, the transaction log is treated like a poor cousin. The poor thing does not receive much love. The transaction log, however, is an essential and misunderstood part of your database. A team of developers will create an absolutely awesome, elegant design the likes of which has never been seen before, but then leave the transaction log on default settings. It’s as if it doesn’t matter: an afterthought, a relic of the platform architecture.
In this session you will learn to appreciate how the transaction log works and how you can improve the performance of your applications by making the right architectural choices.
Exciting times ahead! You bought a license for SQL Server 2016 and you are going to upgrade to the new shiny version of SQL Server on a beefy new machine!
Fantastic! Except that you have no idea how your application will work on the new version. There’s a new cardinality estimator in 2016: how will it affect performance? The new features in In-Memory OLTP and Columnstore Indexes look really promising, but how will your workload take advantage of these features?
The best way to know for sure is to conduct a benchmark and compare it to your current system.
In this demo-intensive session you will discover how to capture a meaningful workload in production and how to replay it against your test system. You will also learn which performance metrics to capture and compare, and which tools can help you in the task.
We have seen SSRS getting better and better in every single release, and SQL Server 2016 is a mind-blowing upgrade of whatever we have seen thus far from Microsoft. With the new SQL Server 2016 features, we can easily embed R statistical models within SSRS reports.

As we have seen steady growth in SSRS since 2005, we now have much more advanced, brand-new features in SQL Server 2016 Reporting Services, beyond our imagination. Sit tight and buckle up for an amazing roller coaster ride, to not only briefly see the advanced SSRS killer features, but also some of the R statistical charts within SSRS 2016 and the new user interface of Report Builder.
You are hitting performance and scalability issues with the database engine, and conventional wisdom and tools are getting you nowhere. Where do you go? The Windows Performance Toolkit has the answers, and this session will allow you to unlock the secrets of the database engine, around things such as:

- CPU saturation: this will cover how the engine is structured, how to read call stacks, and why the Windows Performance Toolkit is an order of magnitude more powerful than the debugger in this respect.

- Any behaviour which is not documented, including latching and spin locking: the likes of the developer team, Bob Dorr and his ilk, can infer a world of meaning from call stacks, and this session will show how mere mortals can do this also.

- IO anomalies: is something in the path between your server and storage holding on to IOs or reordering them?

- What is going on under the covers when you see strange wait activity: this will include a dive into the Windows threading model.
Join us for a practical look at the components of Cortana Intelligence Suite for information management, data storage, analytics, and visualization. Purpose, capabilities, and use cases for each component of the suite will be discussed. If you are a technology professional who is involved with delivering business intelligence, analytics, data warehousing, or big data utilizing Azure services, this technical overview will help you gain familiarity with the components of Cortana Intelligence Suite and its potential for delivering value.
It’s fairly well known that the query optimizer is what creates execution plans. Lots of people are aware that execution plans are the thing that makes queries run fast, or slow. What seems to be less well known is that the number of rows the optimizer thinks may be returned by any given query is the primary factor driving the choices the optimizer makes. This session focuses on how the row counts for queries are arrived at, and how those row counts impact the choices made by the optimizer and, ultimately, the performance of your system. With the knowledge you gain from this session, you will make superior choices in writing T-SQL, creating indexes and maintaining your statistics. This leads to a better-performing system. All thanks to counting the number of rows.
For the most part, query tuning in one version of SQL Server is pretty much like query tuning in the next. SQL Server 2016 introduces a number of new functions and methods that directly impact how you’re going to do query tuning in the future. The most important change is the introduction of the Query Store. This session will explore how the Query Store works and how it’s going to change how you tune and troubleshoot performance. With the information in this session, not only will you understand how the Query Store works, but you’ll know everything you need to apply it to your own SQL Server 2016 tuning efforts as well as your Azure SQL Databases.
How easy is it to hack a SQL Server?
In this session we'll see a few examples on how to exploit SQL Server, modify data and take control, while at the same time not leaving a trace.
We'll start by gaining access to a SQL Server (using some "creative" ways of making man-in-the-middle attacks), escalating privileges and tampering with data at the TDS protocol level (e.g. changing your income level and reverting without a trace after payment), and more.
Most importantly, we'll also cover recommendations on how to avoid these attacks, and take a look at the pros and cons of new security features in SQL Server 2016.
This is a demo-driven session, suited for DBAs, developers and security consultants.
The next release of SQL Server Analysis Services Tabular will allow you to use the M language and a Power Query-like UI to load data into your model. In this session you'll learn why this is such a significant change. Topics covered will include:
  • How the new functionality changes the way SSAS data loading works
  • New data sources that this supports
  • When you should transform data in M while loading and when you shouldn't
  • Creating partitions
  • Migrating from Power BI to Analysis Services
The SSISDB catalog is not new anymore, and most SSIS developers are familiar with it. However, there are still many people who do not take maximum benefit from it. In this session, we will see and understand how to take maximum benefit of this metadata-rich SSISDB.


We will look at 
- SSISDB Catalog Views
- Deployment
- Versioning
- Parameters 
- Environment Variables
- Execution and Logging
- DataTaps
- Self-Service ETL reporting
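To show the kind of insight on offer, a small sketch against the catalog views; the execution ID is a placeholder:

    -- Most recent executions and their outcome.
    SELECT TOP (20)
           e.execution_id, e.folder_name, e.project_name, e.package_name,
           e.status,        -- e.g. 4 = failed, 7 = succeeded
           e.start_time, e.end_time
    FROM   SSISDB.catalog.executions AS e
    ORDER BY e.execution_id DESC;

    -- Drill into the error messages of one execution.
    SELECT om.message_time, om.message
    FROM   SSISDB.catalog.operation_messages AS om
    WHERE  om.operation_id = 12345       -- placeholder execution_id
      AND  om.message_type = 120;        -- 120 = error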
In this session we will learn about SQL Server enhancements in the most recent versions that can help you troubleshoot query performance.

Ranging from new xEvents to Showplan improvements, from LQS (and underlying infrastructure) to the revised Plan Comparison tool, learn how these can help you streamline the process of troubleshooting query performance and gain faster insights.
Everyone agrees that great database performance starts with a great database design. Unfortunately, not everyone agrees on which design options are best. Data architects and DBAs have debated database design best practices for decades, while systems built to handle current workloads prove unable to maintain performance as workloads increase.

Attend this engaging and sometimes irreverent session about the pros and cons of database design decisions. This debate includes topics such as logical design, NULLS, surrogate keys, GUIDs, primary keys, indexes, refactoring, code-first generators, and even the cloud. Learn about the contentious issues that most affect your end users and how to deal with them.
 
Prerequisites: Opinions, lots of them, even wrong ones.
Indexes are essential to good database performance, but it can be hard to decide what indexes to create and the ‘rules’ around indexes often appear to be vague or downright contradictory.

In this session we’ll dive deep into indexes, have a look at their architecture and internal structure, and see how that affects the way indexes are used in query execution. We’ll look at why clustered indexes are recommended on almost all tables and how their architecture affects the choice of columns. We’ll look at nonclustered indexes: their architecture, and how query design affects what indexes should be created to support various queries.
Graph databases solve a lot of complex relationship problems that relational databases struggle to support. Relational databases are optimized for capturing data and answering transactional data questions. Graph databases are highly optimized for answering questions about data relationships. Do you, as an architect, understand which data stories need which type of technology?

  • Master Data
  • Networks & Infrastructure
  • Trees and Hierarchies
In this session you will learn which data stories are the right fit for your relational stores, and which are the right fit for graph databases. You will learn options for bringing this data together for more intelligent data solutions. You will learn the basics of how this is implemented in the next release of SQL Server.
Microsoft Session. Do you want to make your SQL Server instance stay Always On? SQL Server Always On Availability Groups provide a number of out-of-the-box enhancements in SQL Server 2012, 2014 and 2016 which help analyze common issues with relative ease. Attend this session to find out more about how you can leverage the new enhancements in Always On to improve reliability, reduce manual effort and increase uptime. This session will also do a deep dive, using demos, into the new Power BI visualizations available for common scenarios, leveraging the new diagnostic and supportability improvements that were added.
Tabular is a great engine that is capable of tremendous performance. That said, when your model gets bigger, you need to use the most sophisticated tools and techniques to obtain the best performance out of it. In this session we will show you how Tabular performs when you are querying a model with many billions of rows, conduct a complete analysis of the model searching for optimization ideas, and implement them on the fly, to look at the effect of using the best practices on large models. This will also give you a realistic idea of what Tabular can do for you when you need to work on large models.
In this session we will look at SQL Server R Services and, in particular, the new MicrosoftML package available in SQL Server vNext and how it can be used to build an in-database Predictive Analytics Model, that can scale. The session will provide a brief introduction to supervised Machine Learning before applying that to a real world scenario, taking you through each of the steps required to build an in-database predictive analytics model using R and SQL Server vNext.
The package data.table is super-fast and super-powerful and can turn your long-winded and slow R code into a lean, mean, data-crunching machine. This hour takes you through the key aspects of the package and how it can make your life better.

We'll be looking at:
- Basic syntax
- Data I/O
- Joins
- Within group activities
- Pivoting data
- Cool hacks
When moving to a cloud or hybrid environment, one of the biggest challenges is maintaining a consistent login experience for users. Many firms have challenges with connecting Office 365 to their Azure infrastructure, particularly in more complex designs. A bad design can raise security concerns or give users a bad experience. In this session you will learn about how Active Directory, Azure Active Directory, and Active Directory Federation Services work together to offer your users a single common logon experience. You will learn about the proper architecture for your security needs and scale.
SQL Server Reporting Services (SSRS) is Microsoft's grandfather technology for reporting. Although it was overhauled in SQL Server 2016, not all reporting requirements can be solved with it. Power BI, on the other side, is a young, agile and powerful reporting technology - but currently only available as a cloud service serving the interactive reporting approach.

Today, many organizations are not willing or able to put their data into a cloud service for analysis, which limits the adoption of Power BI. Microsoft got onto that train and announced an on-premises version of Power BI as part of SSRS for the upcoming SQL Server (vNext). Currently available as a technical preview, the first sights look promising - but there are open questions: which features will be available for Power BI on-prem? Can I use custom visuals? What about Power Query?
Come and join this session to get an overview about the next generation of Microsoft reporting! 
You are using Extended Events and Dynamic Management Views (DMVs) to analyze performance problems in your databases. How do you go from there to building a performance-monitoring system that is easy to use and that works at scale? In this session, you will learn techniques for loading and parsing Extended Events into a central monitoring database in close to real time, correlating the events with query plans, indexing the data for performance, and making the information easily available.
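The loading step can be as simple as this sketch; the file path, staging table and event fields are hypothetical:

    -- Read raw .xel files into a staging table...
    INSERT INTO dbo.XeStage (event_data)
    SELECT CONVERT(XML, event_data)
    FROM   sys.fn_xe_file_target_read_file(N'C:\xe\monitor*.xel', NULL, NULL, NULL);

    -- ...then shred the XML payload into columns for indexing.
    SELECT x.event_data.value('(event/@name)[1]',      'sysname')   AS event_name,
           x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
           x.event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration_us
    FROM   dbo.XeStage AS x;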
Not every workload can benefit from In-Memory tables. Memory Optimized Tables are not a magic bullet that will improve performance for all kinds of transactional workloads. Therefore, it is very critical that you test & benchmark in-memory performance for your SQL deployments before you decide to migrate disk-based tables to memory-optimized tables. In this session, you will learn:
a. Baselining current performance
b. How to identify the right candidates for In-Memory
c. Generate sample production data
d. Create simulated production workload
e. Test & benchmark In-Memory performance with simulations
f. Compare In-Memory performance with the baseline
You will also learn about a variety of tools and techniques that can be used in your proof of concept.
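For reference, a minimal memory-optimized candidate for such a test, to be benchmarked against its disk-based twin; the table is hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup first:

    CREATE TABLE dbo.SessionState_InMem
    (
        SessionId UNIQUEIDENTIFIER NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   VARBINARY(8000)  NOT NULL,
        LastTouch DATETIME2        NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);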
There are many community scripts out there that DBAs can use without having to write them themselves, like Ola Hallengren's scripts, Brent Ozar's, or sp_WhoIsActive by Adam Machanic. But how about PowerShell? It turns out this world is still a bit unknown to the DBA, but there is a fast-growing community with extremely useful scripts for day-to-day SQL Server management. In this session we will take a look at my favourite scripts, and we will also write a bit of PowerShell ourselves.
In this session Richard Conway will show you, from grass roots with no knowledge of Spark, how to navigate the Spark framework ecosystem and build complex batch and near-real-time applications that use Spark's machine learning library MLlib. He'll cover everything from data shaping, basic statistics at scale, normalising, testing and training, to building services and complex pipelines underpinned by machine learning. This is a very fast-paced, demo-heavy session, going from nothing to big data and machine learning superstar by virtue of Apache Spark. If you're thinking of using Hadoop in the future, this is the one session you don't want to miss.
In this talk Laura will go through the theory and practice of time series analysis, with examples in R covering concepts such as lag, serial correlation and Box-Ljung tests. She'll focus on the practical applications of time series analysis and show how data can be modelled using libraries such as xts, zoo and forecast to make predictions on markets and seasonal behaviours. In addition, Laura will cover an introduction to deep learning, and specifically how Recurrent Neural Networks (RNNs) and LSTMs can be used to make time series predictions with R's rnn package and TensorFlow.
The next version of Microsoft SQL Server is coming to Linux and is already in preview.  In this session, Program Managers from Microsoft will provide an introduction to SQL Server and how it runs on Linux and container platforms.  The scope of the first release on Linux and schedule will be covered and deployment, configuration, high availability, storage, and performance/scale will be topics for a more technical drill in.
Microsoft Session:
This session will cover common DBA troubleshooting scenarios for SQL Server on Linux. In this demo-rich session, we will cover various everyday scenarios, from troubleshooting startup and configuration to performance and bottleneck analysis. We will discuss existing and new tools that will enable DBAs to effectively troubleshoot SQL Server on Linux.
Microsoft Session:
This session will cover High Availability and Disaster Recovery solutions for SQL Server on Linux. What technology options are available, and how do you build a highly available database solution on Linux? How will the various planned maintenance and unplanned downtime scenarios work? We will discuss the design patterns and architecture blueprints for HA/DR for SQL Server on Linux.
Microsoft Session -
Whether you are moving existing databases to Azure or designing a new SQL-centric solution on Azure, availability and connectivity of Azure SQL Database is always a fundamental requirement. In this session we'll share best practices for designing highly available SQL Databases in the context of cross-region deployment, and how to make your application robust enough to handle transient connectivity issues.

Microsoft Session:
Are your customers considering moving their databases to Azure SQL DB, and need help figuring out the optimal migration approach? Are you in the middle of an Azure migration project? Come to this session to learn about Azure SQL DB migration approaches, challenges, and solutions from SQLCAT. We will present practical migration guidance and learnings from our engagements with high-scale and complex customer migrations. Whether you are an Azure SQL DB expert, or just starting to explore the technology, this session will help you identify and implement the optimal migration path for your customers, deal with complexities, and take advantage of the latest database platform features.
Microsoft Session:
The updateable clustered columnstore in Microsoft SQL Server 2016 offers a leading solution for your data warehouse workload, with order-of-magnitude better data compression and query performance over traditional B-tree based schemas. This session describes columnstore index internals, with deep insight into the data compression methodology for achieving high query performance, including improvements in columnstore investments for SQL Server 2016 and Microsoft Azure SQL Database.
Microsoft Session:
Attend this session to get an overview of the wide array of significant changes and investments in Azure SQL Database, including parity with SQL Server 2016, intelligent database capabilities that differentiate it from competitors like AWS, security enhancements, FPGA performance acceleration, increased business continuity, and more! We'll share a forward-looking roadmap of investments from both an engineering and a business perspective. In this session we will show you some of the fantastic capabilities of Azure SQL Database, as well as how to showcase them to your customers.

Database provisioning requests are the bane of many DBAs' working lives. Developers want to work with realistic data; DBAs want to protect Production and secure data.

But copying databases for development and test is a job that gets "delegated up"; DBAs, IT Managers, those with the keys to Production find themselves performing the task, and the development team is often blocked while it takes place.

The same people are often responsible for ensuring the copy is appropriate for the environment (permissions, configuration data, sensitive data), so have to perform semi-manual tasks and maintain brittle scripts to do so.

In this session Redgate's Grant Fritchey shows how the new SQL Clone tool enables self-service or easy automation of database copies that is near-instant at the time of need, uses a fraction of the disk space, and allows quick reversion to a baseline.

The storage industry is at one of the greatest inflection points in its history. In this session you will learn the basics of flash storage and what this means for the DBA and developer: what conventional wisdom needs to change, and how best to leverage what is soon to become the mainstream storage technology, in terms of:
  • IOPS/GB is going down with old protocols such as SAS; however, help is at hand in the form of the new storage protocol on the block
  • How to test flash storage
  • New capabilities that flash storage opens up such as bulk loading at speed and running OLTP and OLAP applications on the same storage
  • Compression was invented for the world of spinning disk; how does it behave with flash?
  • ...and more !!!

Pyramid Analytics BI Office’s Enterprise BI Platform and PrecisionPoint’s data warehousing solution for Microsoft Dynamics ERP systems combine to deliver a familiar user interface and simplified access to key Business and Financial data.

The platform allows Finance and Operational users fast access to one view of the truth, and IT to support, govern and manage a self-service analytics solution, providing an agile environment to create rich analytics to answer complex business problems. Capabilities for publishing reports and mashing up data also allow organisations to deliver truly scalable, on-premise or cloud-hosted Business Intelligence that meets both user and IT needs in a single, integrated platform.

Hear how organisations are embracing the prospect of:

  • A solution designed from the ground up to exploit this platform to deliver a scalable, secure and governed self-service Analytics, Reporting, Dashboard and Published reporting environment.
  • A ‘best in class’ user presentation layer combined with a data warehousing solution that produces guaranteed and reconciled reports.
  • Aggregated, highly performant data sets, allowing users to quickly and easily interrogate their data, create deep analyses or simply review KPIs in a simple to use dashboard.
  • A unified user interface across the business that will support both the needs of expert and more casual users.
  • A flexible and scalable approach to cater for business growth.

Together, Ian Macdonald and Ross Wendon will demonstrate how the combination of the Pyramid Analytics BI Office and PrecisionPoint platforms delivers an analytics and presentation layer that is easy and fast, and offers a low total cost of ownership. This has been proven by customers and validated by independent industry experts. Our solution removes the “heavy lifting” from your IT department and gets you up and running quickly and cost-effectively.

Microsoft SQL Server is the best-in-class, most widely used Business Intelligence data platform. With Pyramid Analytics' BI Office, organizations can maximize their Microsoft BI investment, eliminate security concerns, improve manageability and provide a rich analytics experience – all browser based and either on premise or in the cloud.

‘But I can get all that from Power BI!’, I hear you say? Well, yes, possibly at some point in the future. But if you’re (im)patiently waiting for the ability to:

  • Drill-down within charts
  • Create sophisticated analysis, without coding
  • Theme the interface at the touch of a button
  • Create beautiful dashboards, simply
  • Publish highly formatted Word-like documents in seconds
  • Allow users to collaborate and discuss important issues with the BI applications linking data and context
  • Leverage the power of R through simple, one-click editing
  • Deliver to thousands of users instantly, on premise

Wait no more!!

During this session, Pyramid Analytics’ Ian Macdonald will explain how Power BI (cloud and desktop), Reporting Services, and other Microsoft BI frontends struggle to meet the critical requirements threshold for enterprise-wide analytic deployments, and how, in BI Office, you have access to a compelling, feature-rich alternative that you can implement NOW!

Today’s complex IT world involves managing data from many locations, systems and environments. Data components that make up our understanding of our customers are derived from many places with complex relationships linking our information assets. The challenge is how to build a cohesive, single view of data that can then be automatically shared amongst users, systems, suppliers, partners and consumers. See how your existing investment in Microsoft SQL Server can be utilised to quickly create the authoritative view of data that can be seamlessly synchronised with enterprise systems such as Microsoft Dynamics CRM, Salesforce, SAP and many more – either on-premise in your data centre or in the cloud, using Microsoft Azure.

For further information, see www.profisee.com/mdm4360

The world’s largest enterprises run their infrastructure on Oracle, DB2 and SQL Server, and their critical business operations on SAP applications. Organisations need this data to be available in real time to conduct the necessary analytics. However, delivering this heterogeneous data at the speed it’s required can be a huge challenge, because of the complex underlying data models and structures and legacy manual processes which are prone to errors and delays.


Replicate SAP data to the Microsoft platform:
  • SQL Server
  • Azure SQL Database & SQL Data Warehouse
  • Azure Event Hubs

See a live demo, ask the expert and claim your free glow stick for Friday night’s party.

Application acceleration using DataCore software

In this session we will show an alternative IoT analytics solution, addressing real-life issues in SCADA data and Azure. The solution analyses anonymised data from a network of 600 Waste Water Treatment Plants and hundreds of Pumping Stations. We will show how we dealt with the idiosyncrasies of SCADA data, how we formulated the likelihood of spills at pumping stations, how we achieved near-real-time reporting, and still produced compelling visualisations in Power BI using various techniques. This is an end-to-end solution utilising Azure Blob Storage, Azure Data Warehouse, Azure Data Factory, Azure Batch, C#, Azure Analysis Services, and Power BI. At the end we will also compare the pros and cons of this solution against other real-time analytics options.

Data Science is concerned with the activities of processing and analysing data in a particular domain, as well as applying machine learning algorithms to automatically discover insights and interesting patterns in the data. While data scientists need tools to explore and visualize data, along with performing machine learning experiments and evaluating candidate models, operational platforms are required to productionize and maintain the resulting models and integrate them into the operational systems. In this session, we explore the main features and capabilities of various Microsoft technologies that enable such an end-to-end data science exercise, including SQL Server and Azure PaaS services, along with practical scenarios and demos.

Many people are unaware of the basic alerting capability available in SQL Server. Those that do have likely just scratched the surface of what is possible. In this session sponsored by SentryOne, we shall discuss why alerting is important and how to move from basic alerting to more advanced alerting techniques allowing you to take automated actions when problems occur. You will even learn how you can create alerts in multi-tenancy environments and increase visibility with other collaboration platforms on your mobile devices.

In a BI world where Power BI, Tableau, Qlik and their equivalents grab the headlines, Excel remains far and away the world’s most popular analytics tool. Loved by business users, it has a latent knowledge base unrivalled by any other productivity software. On the flip side, IT generally see Excel as the problem that will not go away, no matter how shiny the latest corporate dashboard tool.

This session examines Excel from a user perspective – why is it so loved in the business, why is ‘save to Excel’ still so heavily used in bespoke BI tools, and also what are some common gripes? We also consider the IT viewpoint, and why Excel is often seen as a problem.

Unlike most BI tools, XLCubed embraces Excel rather than trying to replace it. It provides a data-connected, Excel-centric model for corporate BI, where business users can leverage existing skills but work smarter, free of pivot table restrictions. Governed web and mobile deployment takes seconds and retains corporate security.

With over a decade’s expertise on Analysis Services, Version 9 adds connectors for relational databases, big data sources, and of course Power BI. Report level data Mashups mean reports can pull data from different sources without the need for a full semantic layer or the complexity of ensuring another data model refreshes as required. It provides a wealth of additional capability including intuitive user calculations, advanced number decomposition, extended data visualisations, cell-level commentary, and version control. XLCubed provides an Excel-centric model that can work for both business users and IT - it’s Excel, Jim, but not as we know it…

The SQL team has been working on next-gen query processing improvements to improve the performance of your queries and to enable new scenarios.  This talk will explain these enhancements and how they fit into the overall product roadmap for SQL.
The SQL Team delivers value monthly to both cloud and on-premises environments.  This is not easy to do.  This talk will explain:
- How the engineering model works
- How we use feedback data to drive our engineering model when building features
- How we protect any usage data we collect
- How we are protecting customer data in our public cloud offering

Martin Wild of Quest Software will lift the lid on SQL Server on Linux. He will explore the mechanics of deploying SQL Server on Linux, what is and is not in the current Linux build, and share some of the quirks, insights and observations on setup and configuration. Once the stage is set, Martin will cover tuning and performance on Linux and bring you through some new DMVs for Linux monitoring. He will then compare the plans produced on Linux vs. Windows for consistency… or not! To round off the session, get ready for a side-by-side benchmark of workloads running on Windows and Linux. It’s sure to be an informative lid lifting! Come see what’s inside.

As this is a vendor session he will round off with a [brief] segment on Quest tools including monitoring and diagnostics and some cool free tuning tools.

Q&A with Conor Cunningham and Simon Sabin
Microsoft Session
I look at how you can take advantage of an on-premises gateway to make use of local data when creating reports and dashboards for Power BI. This will start with Power BI Desktop, and the choices you have for SQL Server and Analysis Services. It will then move to the cloud and look at hosting your files on OneDrive for Business, and what this means for data freshness. We will then look at options for personal use or a more centralized use. It will finish off with looking at some troubleshooting tools available to you when working with data refresh.
This Q&A panel is here to answer your questions and help you make sense of what Data Science means and how you can fit into this discipline.
We have mixed the panel up with Data Scientists, non-Data Scientists working in a Data Science team, and professionals moving into the field from a SQL Server and Business Intelligence background.
Join us, and bring your questions…
Join industry veterans Steve Jones, Kevin Kline, and friends for a wide-ranging Q&A panel discussion covering all topics of career advice for the IT professional. We’ll discuss topics like personal branding, team leadership, trends in hiring, career growth, the trajectory of Microsoft’s data platform, and many more. The session starts with a few prepared statements from the panel. But  every Q&A panel is unique due to the questions that come from the audience. So bring your questions and topics you’ve been wrestling with to get insight from some of the industry’s most trusted thought leaders. We look forward to seeing you there!
We have assembled a hub of technical experts from SQLCAT, Tiger team, Product Group, Customer Support Services (CSS) and others.
We'll have folks from across all our data, BI and Advanced Analytics teams, plus some managers, available to answer any questions you have. You can ask us anything about our products, services, careers, our typical week, or even our teams!
Go ahead, ask us anything about our public products, the team or our work. Please note, we cannot comment on unreleased features and future plans.


Why would Microsoft do a Q & A?
We want to know how you use data. Your questions provide insights into how we can make our services better and what the community needs to be successful.


Panel
Martin Thornalley -  Data Solution Architect
Chris Lound - UK Customer Support Services
Lindsey Allen - Program Manager
Bob Ward - Principal Architect
Sunil Agarwal - Program Manager
Sanjay Mishra - AzureCAT
Mike Boswell - WW Data Platform SME
Bianca Furtuna - Technical Evangelist
Kasper de Jonge - Program Manager
In this talk, we learn the two ways that SQL R Services can be invoked: from the R IDE via the ScaleR package, and as a stored procedure directly from SQL Server Management Studio. We learn how each scenario works and what its intended use case is. We also learn R programming best practices to follow when working against data stored in SQL Server databases.
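The stored-procedure path, for example, looks roughly like this; the query against Sales.SalesOrderDetail is an illustrative AdventureWorks example, not from the talk:

    EXEC sp_execute_external_script
         @language     = N'R',
         @script       = N'OutputDataSet <- data.frame(mean_qty = mean(InputDataSet$qty))',
         @input_data_1 = N'SELECT CAST(OrderQty AS FLOAT) AS qty
                           FROM Sales.SalesOrderDetail'
    WITH RESULT SETS ((mean_qty FLOAT));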