One of the last technical challenges of cloud adoption is getting security configuration right. This session focuses on Azure SQL PaaS; it covers governance, risk management, and compliance, and provides an 8-step process for securing the public cloud.
A technical overview of Azure SQL Data Warehouse Gen 2. SQL Data Warehouse is a cloud-based Enterprise Data Warehouse (EDW) that leverages Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
Customers have feelings, and by harnessing the power of deep neural networks we can derive emotional insight from their data and use it to improve. Attendees will leave understanding how to connect different types of data to cognitive services.
This talk will address how to add the unit testing framework tSQLt to the database deployment pipeline. The purpose is to reduce the cost of validating every change to the database with a fully automated pipeline.
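tSQLt itself is written in T-SQL, but the shape of an automated database unit test can be sketched in Python using the standard-library sqlite3 module as a stand-in for SQL Server (table and test names here are hypothetical, not part of the tSQLt API):

```python
import sqlite3

# Sketch of an automated database unit test, in the spirit of tSQLt,
# using an in-memory SQLite database as a stand-in for SQL Server.
def order_totals(conn):
    """The database logic under test: a summary query over orders."""
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()

def run_test():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    # Arrange: insert known test data
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 15.5)])
    # Act + Assert: a failed assertion fails the deployment pipeline
    n, total = order_totals(conn)
    assert (n, total) == (2, 25.5)
    conn.close()
    return True

run_test()
```

In a real pipeline the equivalent tSQLt test would run inside SQL Server as part of the build, failing the deployment before a broken change reaches production.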
We introduce the concept of aggregations and show several examples of their usage, covering both the advantages and the limitations of aggregations, with the goal of building a solid understanding of how and when to use the feature in data models.
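The core idea behind aggregations can be sketched in a few lines of Python (table and column names here are illustrative, not a real model): a query is answered from a smaller pre-aggregated table whenever its grain is covered, and falls back to the detail table otherwise.

```python
# Minimal sketch of aggregation awareness: answer a query from a
# pre-aggregated table when it covers the requested grain, otherwise
# scan the detail table. Names and data are illustrative.
detail = [  # sales at (year, product) grain
    {"year": 2018, "product": "A", "amount": 100},
    {"year": 2018, "product": "B", "amount": 50},
    {"year": 2019, "product": "A", "amount": 70},
]

# Pre-computed aggregation at the coarser (year) grain
agg_by_year = {2018: 150, 2019: 70}

def total_sales(year=None, product=None):
    if product is None and year is not None:
        # Grain covered by the aggregation: use the small table
        return agg_by_year[year]
    # Grain not covered (e.g. per-product): scan the detail rows
    return sum(r["amount"] for r in detail
               if (year is None or r["year"] == year)
               and (product is None or r["product"] == product))

print(total_sales(year=2018))               # served from agg_by_year
print(total_sales(year=2018, product="A"))  # falls back to detail
```

The limitation is visible in the sketch too: any query below the aggregation's grain gains nothing from it.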
Within Azure we have a rich ecosystem of AI services that can be leveraged to gain new insights into your data. This session will give you an easy-to-digest breakdown of the key services that matter and how to approach each one: Cognitive Services, Bot Framework, Azure Machine Learning Studio, Databricks, Notebooks, the Azure ML SDK for Python, and the Azure ML Service.
This session will take a look at better Unicode support, query processing improvements for row store tables, secure enclaves, and other neat things you'll find useful as a modern database administrator or developer.
A walk-through of what is possible when analyzing your data with the R language.
Tabular Editor, an open source tool for authoring Tabular Models, makes it easier for teams of developers to work on the same model simultaneously. It also provides functionality for automated build and deployment. In short, DevOps for SSAS Tabular.
Azure offers a comprehensive set of big-data solutions that help you gather, store, process, analyse and visualise data of any variety, volume or velocity, so you can discover new opportunities and take quick action. In this overview session, we’ll look at the various components within Azure that make up the Modern Data Warehouse, enable Real-Time Analytics, and support Advanced Analytics scenarios. You should leave with a high level understanding of the capabilities and limitations of each of the products within the Azure Analytics portfolio.
Find out how a major UK hotel chain unified their wildly different sources of data to build a supercharged analytics and pricing engine to power their business. Discover how Cosmos DB and Databricks helped them get to know their customers, how best to retain them, and how best to keep them happy, all while ensuring GDPR compliance and the right to be forgotten.
Do you want to know why customers chose Cosmos DB? Come learn about the business goals and technical challenges faced by real world customers, and learn about key Cosmos DB features so you can help your customers deliver their high-performance business-critical applications on Cosmos DB.
We will review this new feature of ADFv2, take a deep dive to understand the techniques involved, compare them to SSIS and/or T-SQL, and learn how the modelled data flow runs Scala behind the scenes.
See how to analyze images in your Data Lake with Azure Data Lake Analytics, U-SQL, and custom models.
Azure Databricks supports both classic and deep learning ML algorithms to analyse large datasets at scale. The integrated notebook experience enables data scientists and data engineers to do exploratory data analysis, and feels native to Jupyter notebook users. In this session we will extract intelligence from the Higgs dataset (particle physics) by running classic and deep learning models using Azure Databricks. We will also peek into the Azure ML service's integration with Azure Databricks for managing the end-to-end machine learning lifecycle.
Azure Machine Learning is a platform for developing and deploying your machine learning models on Azure. We will look at the life cycle of ML projects: from data, to model, to consumption. This will include Automated Machine Learning capabilities.
Managed Instances can make your cloud migrations simpler, but have their own nuances. Learn about what you need to know to manage this new platform.
Learn in 75 minutes what Batch Execution Mode is, and when and how it will affect your workloads on traditional rowstore indexes (in the upcoming SQL Server 2019 and Azure SQL DB).
Look inside Query Store to see what it does and how it works.
Join me in this session and learn how to capture a production workload, replay it to your cloud database and compare the performance. I will introduce you to the methodology and the tools to bring your database to the cloud without breaking a sweat.
Biml is not just for generating SSIS packages! Come and see how you can use Biml to save time and speed up other Data Warehouse development tasks like T-SQL development, test data creation, and dimension population.
This session presents how to migrate, replicate, and synchronize data between SQL Server, SQL VM, Azure SQL Database, and Azure SQL Database Managed Instance, across on-premises, Microsoft Azure, and other cloud platforms to build a real hybrid data platform. We introduce our current technology choices, deep dive into customer scenarios and use cases, and share the product roadmap.
Azure offers a wide range of services that can be combined into a BI solution in the cloud.
In this session, we’ll look at the different options within the Cognitive Services suite, show you how to connect to the APIs using Python code, walk through a live bot demo, and build an Azure Cognitive Search index. You should leave this session feeling like you’ve had a jump start to further your AI developer skill set.
Learn how to troubleshoot Availability Groups (AGs) and Failover Cluster Instances (FCIs).
If you have already mastered the basics of Azure Data Factory (ADF) and are now looking to advance your knowledge of the tool, this is the session for you.
This session looks at creating a SQL test lab on your workstation. We start by selecting a hypervisor, look at building a virtual machine and then creating a domain controller, a Windows failover cluster and a couple of SQL Servers.
Paginated reports (SSRS) are available in both on-premises and cloud-based solutions. Join this session to experience the journey of SSRS and learn how to create and deploy reports across all the different products available in the Microsoft ecosystem.
Azure offers a vast, comprehensive data estate! While this is great for enabling users to pick the right tool for the right job, it has also increased the surface area for understanding how to integrate a vast number of components. In this session, we will show off the plug-and-play nature of Azure products by showcasing how you can write to Azure Cosmos DB with a data pipeline that moves data from multiple sources, powered by Azure Data Factory and Databricks.
Data lakes have been around for several years and there is still much hype and hyperbole surrounding their use. This session covers the basic design patterns and architectural principles.
Join the Microsoft Data Platform product group, from engineers to data scientists, and learn about the data platform's evolution and the skills you need to stay current. Full of demos and information, this fast-moving story will get you ready for another SQLBits conference.
An audience-interactive session based on real-life scenarios.
Learning DAX can be tricky, especially if you have a background in SQL.
The FILTER function does not create a new filter context but a row context. The ROW function is executed in a filter context without creating a row context. The goal of my session is to demystify these and all the other DAX gotchas I have bumped into.
Deploying untested code to production isn't ideal, and manual testing can be slow and unreliable. In this talk we will look at the different types of automated testing we can use for our databases to give us confidence in the quality of the code.
The dbatools module now has over 400 commands you can work with. How do you know where to start and which ones you can use in your day-to-day work?
In this session, we will explore real-world DAX and model performance scenarios, ranging from memory pressure to high CPU load. Watch as we dive into the model and improve performance through concrete examples.
This session focuses on the deeper integration of SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) and the broad extensibility of Azure-SSIS Integration Runtime (IR).
In this session I will show you how to apply DevOps practices to speed up your development cycle and ensure that you have robust, deployable models. We will focus on the Azure cloud platform in particular; however, this is applicable to other platforms.
In this session we'll look at ETL metadata, use it to drive process execution, and see benefits quickly emerge. I'll show how a metadata-first approach reduces complexity, enhances resilience and allows ETL processing to become self-organising.
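The metadata-first idea can be sketched in a few lines of Python (task names and the metadata shape are hypothetical): the pipeline is described as rows of metadata, and a small engine simply executes whatever the metadata describes, in dependency order.

```python
# Sketch of metadata-driven ETL: the pipeline is data, not code.
# Task names and dependencies are hypothetical examples.
metadata = [
    {"task": "load_customers", "depends_on": []},
    {"task": "load_orders",    "depends_on": []},
    {"task": "build_sales",    "depends_on": ["load_customers", "load_orders"]},
]

def run(metadata):
    """Execute tasks in dependency order, driven purely by metadata."""
    done, order = set(), []
    pending = list(metadata)
    while pending:
        ready = [t for t in pending if set(t["depends_on"]) <= done]
        if not ready:
            raise ValueError("circular or unsatisfiable dependencies")
        for t in ready:
            order.append(t["task"])  # here: record; in real life: run the step
            done.add(t["task"])
            pending.remove(t)
    return order

print(run(metadata))
```

Adding a new source becomes a new metadata row rather than new orchestration code, which is where the resilience and self-organising behaviour come from.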
In this hour-long session we will attempt to include lots of advice and guidance on how to develop code that will easily get approved by your DBA prior to release to production.
Clarify the differences between many-to-many and weak relationships in Power BI.
Using Azure DevOps and Azure RM templates to create isolated environments for testing PaaS solutions.
Monitoring for Power BI is one of the things that tends to be an afterthought. Apart from some standard reports, there's nothing that actually has an impact or helps you drive automation one step further. This session shows you the way.
Hybrid data landscapes are common, and the on-premises data gateway enables connecting to your on-premises data sources from online services (like Power BI, PowerApps, Microsoft Flow and Logic Apps) without the need to move your data to the cloud. Come to this session to see the latest gateway features and best practices related to setup and configuration, along with troubleshooting tips and tricks to investigate bottlenecks and resolve common gateway errors.
Many organisations using Power BI seek the Nirvana of self-serve and enterprise reporting, but are left with an unstructured governance strategy. This session simplifies the governance process, by utilising Microsoft Flow and Power BI together.
Power BI Premium and Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions. This session will deep dive into exciting new and upcoming features.
Various topics will be covered such as management of large, complex models, connectivity, programmability, performance, scalability, management of artifacts, source-control integration, and monitoring. Learn how to use Power BI Premium to create semantic models that are reused throughout large, enterprise organizations.
Step back through the ages and explore how database teams have approached creating environments for dev and test. Learn how, in the new age of provisioning, databases are delivered safer, faster, and more efficiently.
We’ll take a look at how to approach building an Azure Databricks-based ETL solution from start to finish. Along the way it will become clear how Azure Databricks works, and we will use our SSIS knowledge to see if it can handle common use-cases.