DirectQuery, hybrid or more? How architectural decisions balance performance, cost and maintainability
Regular 50-minute session for SQLBits 2026

TL;DR
Near real-time process analyses with Power BI and Snowflake are often implemented using DirectQuery to make operational data available as quickly as possible. In practice, however, it quickly becomes apparent that this architectural decision has far-reaching implications for performance, costs, maintainability, governance and end-user acceptance.
Using a real-world process analysis scenario, this session shows how Power BI DirectQuery works on Snowflake, when this approach scales well, and under what conditions performance or stability issues arise. It explains how query behaviour, model flexibility and large time series interact and why near real-time is not the same as true event streaming.
Another focus is the visualisation of large time series: the session shows how Deneb can be used to render very large numbers of data points performantly, and what technical and organisational requirements this entails. In addition, hybrid approaches and strategic options such as mirroring or Microsoft Fabric are put into context.
The session provides practical decision-making logic for consciously weighing up architectural options and finding solutions that are both technically feasible and acceptable to end users.
Session Details
Near real-time process analyses place special demands on data architecture, modelling and performance. In such scenarios, Power BI in combination with Snowflake is often used via DirectQuery to provide data for operational evaluations as quickly as possible. In practice, however, it quickly becomes apparent that this architectural decision creates a conflict between performance, cost and maintainability – compounded by questions of technical feasibility, governance and end-user acceptance.
In this session, I will use a real-world process analysis scenario to show how Power BI DirectQuery works on Snowflake, what typical real-world problems arise with large amounts of data, and under what conditions DirectQuery scales well – and when it does not. Among other things, this involves long time series, additional data enrichment, high flexibility for end users, and the impact on query behaviour, stability, and operation.
Another focus is on the visualisation of large time series: Native Power BI visuals quickly reach their technical or performance limits when dealing with a large number of data points. I will show how Deneb can be used as a custom visual to display large amounts of time series data in a controlled and performant manner – and what additional requirements this places on performance, governance, and maintainability.
A central component of the session is an in-depth performance analysis: how Power BI generates and optimises queries, how these are processed further in Snowflake, and why increasing flexibility in the model often leads to growing overhead. Based on this, I assess various architectural approaches – from DirectQuery ‘as is’ to optimised models and hybrid scenarios through to strategic options such as mirroring or Fabric-based architectures.
It becomes clear that architectural decisions must be evaluated not only technically, but also organisationally: Which adjustments to the existing model or transformations are realistic? What governance requirements arise during operation? And how strongly do response times and update rates influence acceptance among end users, especially in operational process analyses?
The session shows that there is no universally correct solution. Depending on data volume, growth, organisational maturity and user expectations, the optimal architecture may mean deliberately sticking with Snowflake – or identifying the right time for an expansion or a change of architecture.
This session is aimed at anyone who uses Power BI for operational near real-time process analyses and wants to make holistic architecture decisions – taking into account performance, costs, maintainability, technical feasibility, governance and user acceptance.
3 things you'll get out of this session
Advice on how to deal with DirectQuery on Snowflake
Performance optimisation techniques
Decision points for choosing the right solution
Speakers
Michael Tenner