
Ásgeir Gunnarsson
Sessions for 2026
In this session, we will cover typical scenarios for sharing semantic models and reports with external users. We will explore different methods of granting access, considering ease of use, maintenance, and security and compliance perspectives. By the end of the session, the audience will have a clear understanding of the available options for sharing semantic models and reports with external users, along with the pros and cons of each method. This will enable them to make informed decisions on the best approach for their organization.
When building a Lakehouse in Fabric Spark (or any other analytics data store), ensuring data quality is crucial. There are many aspects to check and various methods to do so. In this session, we'll cover how to perform data quality checks using built-in functions in PySpark, how to create reusable functions for data quality checks, and how to use Python modules for data quality validation. When we look at Python modules, we will focus on the Great Expectations module for data quality validations. Additionally, we will discuss the new Fabric feature, materialized views in Spark, which includes built-in data quality checks.
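As a small taste of the reusable-function pattern this session covers, here is a minimal sketch in plain Python. The session itself uses PySpark and Great Expectations; the function names, rules, and sample data below are illustrative assumptions, not code from the session.

```python
# Minimal sketch of reusable data quality checks. The session uses
# PySpark; plain Python dictionaries stand in for DataFrame rows here
# for brevity, and all names below are illustrative.

def check_not_null(rows, column):
    """Return the indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_in_range(rows, column, low, high):
    """Return the indices of rows where `column` falls outside [low, high]."""
    return [
        i for i, row in enumerate(rows)
        if row.get(column) is not None and not (low <= row[column] <= high)
    ]

# Example: validate a small batch before loading it into the Lakehouse.
batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": -5.0},
]

null_failures = check_not_null(batch, "amount")              # row 1 fails
range_failures = check_in_range(batch, "amount", 0, 10_000)  # row 2 fails
```

The same idea scales up in PySpark by expressing each rule as a reusable function over a DataFrame and collecting the failing rows, rather than scattering ad-hoc filters through your notebooks.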
You are a data warehouse developer who has been using SQL Server on-premises or in Azure to build your data warehouses. You are comfortable writing SQL scripts, stored procedures, and views to fuel your data warehouse. You have heard about the new kid in town, Spark, but you have not yet taken the plunge and are wondering if you have to, or if you should. If that sounds like you, then this is the session for you. The session is built largely on my own experience. I had been doing data warehousing in SQL for years and had decided Spark was not for me. I felt there was enough to do in SQL, so there was no reason to add Spark to my toolbox. It seemed like a lot of learning, and when you are an old dog that can be difficult 😊.
How should you orchestrate a lakehouse flow in Fabric? Should you use Data Factory pipelines, notebook-driven orchestration, materialized views, or bring in a third-party orchestration tool? Each option has clear strengths, but also trade-offs that can become painful in production if chosen incorrectly. In this session, we take a deep, practical dive into the orchestration options available for Fabric lakehouses. We will break down how each approach actually works, where it excels, where it falls short, and, most importantly, when you should use one over the others. After attending this session, you will leave with a clear decision framework for choosing the right orchestration strategy for your Fabric lakehouse, confidently and deliberately, rather than by trial and error.
How should you orchestrate a lakehouse flow in Fabric? Should you use Data Factory pipelines, notebook-driven orchestration, materialized views, or bring in a third-party orchestration tool? Each option has clear strengths, but also trade-offs that can become painful in production if chosen incorrectly. In this session, we take a deep, practical dive into the orchestration options available for Fabric lakehouses. We will break down how each approach actually works, where it excels, where it falls short, and, most importantly, when you should use one over the others. After attending this session, you will leave with a clear decision framework for choosing the right orchestration strategy for your Fabric lakehouse, confidently and deliberately, rather than by trial and error. Part 1 introduces the available options and covers orchestration with Data Factory.
How should you orchestrate a lakehouse flow in Fabric? Should you use Data Factory pipelines, notebook-driven orchestration, materialized views, or bring in a third-party orchestration tool? Each option has clear strengths, but also trade-offs that can become painful in production if chosen incorrectly. In this session, we take a deep, practical dive into the orchestration options available for Fabric lakehouses. We will break down how each approach actually works, where it excels, where it falls short, and, most importantly, when you should use one over the others. After attending this session, you will leave with a clear decision framework for choosing the right orchestration strategy for your Fabric lakehouse, confidently and deliberately, rather than by trial and error. Part 2 covers orchestration via notebooks, materialized views, and third-party solutions.
In this panel debate, four practitioners with extensive, real-world Fabric experience come together to discuss what actually works when administering Fabric at scale. Benni de Jagere (Microsoft CAT), Just Blindbæk (Tabular Editor), Lars Andersen (Microsoft CAT), and Ásgeir Gunnarsson (data lab) will share hard-earned lessons from real production environments, covering both successes and mistakes. The discussion will focus on the most debated and misunderstood areas of Fabric administration, including tenant and capacity management, workspace strategies, governance models, monitoring, security boundaries, and operational ownership. Expect differing viewpoints, strong opinions, and honest answers.
Choosing the right workspace strategy is crucial for a successful implementation of Microsoft Fabric. Should you consolidate everything into one workspace, have separate workspaces for each stage, or create distinct workspaces for different workloads? Or perhaps something entirely different? Several factors will influence your workspace strategy, including the size and composition of your development team, your DevOps strategy, and the need for workload/data isolation. Some of these considerations stem from the way Fabric operates, while others are due to security restrictions. By the end of this session, the audience will have a better understanding of the challenges in Fabric related to workspace strategy and the possible solutions.