22-25 April 2026

Building End-to-End Data Analytics Solutions with Microsoft Fabric and Open Data

Proposed session for SQLBits 2026

TL;DR

Discover how to create a comprehensive end-to-end data analytics solution using Microsoft Fabric that reflects real-world scenarios. In this session, we will utilise open data sources such as the World Bank and weather data to showcase key artifacts and techniques within Microsoft Fabric. You will gain insights into workspace architecture, naming conventions, Lakehouses, Notebooks, and more, with practical scenarios you can apply to your own work.

Session Details

In this session, we will cover:

Workspace Architecture and Naming Conventions: Best practices for organising your workspace and naming your resources.

Working with Lakehouses and Notebooks: How to effectively use Lakehouses for data storage and Notebooks for data ingestion and transformation, as well as how to utilise Notebook environments.

Data Factory Pipelines for Orchestration: Automating data workflows with Data Factory pipelines.

Logging: Implementing a logging framework for monitoring and maintaining the solution.

Developing and Maintaining Semantic Models: Using Semantic Link Labs to enhance semantic model development, particularly regarding measures and calculation groups creation.
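To make the logging topic concrete, here is a minimal sketch of the kind of run-logging helper the session builds towards. The function and field names (`log_run`, `run_id`, `status`) are illustrative assumptions, not the actual framework demonstrated in the session; in a Fabric Notebook, rows like these would typically be converted to a Spark DataFrame and appended to a Lakehouse Delta log table.

```python
from datetime import datetime, timezone
import uuid

def log_run(log_rows, item_name, status, message=""):
    """Append one run-log record to an in-memory buffer.

    In Fabric, the buffer would be written to a Lakehouse log table
    instead of kept in memory (names here are hypothetical).
    """
    row = {
        "run_id": str(uuid.uuid4()),
        "item_name": item_name,   # e.g. notebook or pipeline name
        "status": status,         # e.g. "Started", "Succeeded", "Failed"
        "message": message,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    log_rows.append(row)
    return row

# Usage: record the start and end of a (hypothetical) ingestion notebook run.
rows = []
log_run(rows, "nb_ingest_worldbank", "Started")
log_run(rows, "nb_ingest_worldbank", "Succeeded", "Loaded indicator files")
print([r["status"] for r in rows])  # → ['Started', 'Succeeded']
```

Keeping each log entry as a flat record with a run identifier and a UTC timestamp makes it straightforward to query run history later, whether the sink is a Lakehouse table or something simpler.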

3 things you'll get out of this session

Best practices for workspace architecture and naming conventions that you can apply immediately.

Hands-on patterns for Lakehouses, Notebooks, and Data Factory pipelines, covering data ingestion, transformation, and orchestration.

A practical approach to logging and semantic model development, including using Semantic Link Labs for creating measures and calculation groups.

Speakers

Prathyusha (Prathy) Kamasani

data-nova.io/blog