MDX Studio provides unique visualization of MDX expressions and allows the user to interact with different stages of MDX execution. We will also look at the behaviour of the formula engine and the storage engine in the context of various queries, and see how MDX Studio helps us peek under the hood to understand what is going wrong with a query. MDX Studio was built by Mosha Pasumansky, the inventor of MDX and one of the architects of Microsoft Analysis Services.
Microsoft SQL Server 2008 Reporting Services provides a complete, server-based platform designed to support a wide variety of reporting needs, enabling organizations to deliver relevant information wherever it is needed across the entire enterprise. This session covers the new scalability features of SQL Server Reporting Services 2008, such as on-demand processing and improved memory management. The session will demonstrate how to build a high-performance, scalable reporting platform and explain performance tuning techniques to ensure that report performance remains optimal as your platform grows.
Bad performance is often symptomatic of poor queries, which are symptomatic of bad schema design, which is symptomatic of non-relational thinking, which is symptomatic of project time constraints and a lack of understanding of database design. In this talk/tutorial I'll work my way through Normalisation, and we'll look at the Relational Model and how to think in sets - it's very important; throughout I'll be referring to Codd and Date's teachings. Theory aside, I'll do all my demonstrations in SQL Server - concurrency, indexing, good T-SQL practices and advice.
Encapsulating common code in functions is one of the first things you learn as a programmer. However, in SQL Server, functions can be very bad for performance. In this session we will examine scalar functions in both T-SQL and .NET. You will come away from this session understanding the pitfalls of T-SQL functions and how you can make them run 100 times faster.
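As a flavour of the pitfall the session explores (the table, columns and function name here are made-up illustrations): a scalar UDF is invoked once per row and blocks parallel plans, whereas the same logic written inline is optimized as an ordinary expression.

```sql
-- Scalar UDF: evaluated row-by-row and prevents a parallel plan.
CREATE FUNCTION dbo.fn_NetPrice (@price MONEY, @discount DECIMAL(4,2))
RETURNS MONEY
AS
BEGIN
    RETURN @price * (1 - @discount);
END;
GO

-- Slow: one function invocation per row.
SELECT OrderID, dbo.fn_NetPrice(Price, Discount) AS NetPrice
FROM dbo.OrderLines;

-- Faster: the same logic inlined as an expression.
SELECT OrderID, Price * (1 - Discount) AS NetPrice
FROM dbo.OrderLines;
```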
This session will cover how and why you should configure tempdb, how to troubleshoot tempdb issues, and how to detect, resolve, and mitigate allocation contention issues by creating multiple data files, optimizing temporary object reuse, and using trace flag 1118.
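As a rough sketch of the kind of configuration the session covers (file count, paths and sizes are illustrative assumptions, not recommendations):

```sql
-- Add extra, equally sized tempdb data files to spread allocation
-- contention (PFS/SGAM pages) across files.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

-- Trace flag 1118 forces uniform extent allocations, reducing SGAM contention.
DBCC TRACEON (1118, -1);
```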
So you've gone to all the trouble of building a cube, and your users are complaining that their queries are still too slow? This session will show you how to tune query performance on Analysis Services 2008, looking at partitioning, designing aggregations and identifying MDX-related bottlenecks.
When systems are under load, it is usually the database that is the bottleneck.  You can unblock the flow of data by taking the database out of the equation, but is this realistic, possible or desirable?  This session starts by calling bull on database ivory towers, exploring the options that database professionals seldom consider but that are already becoming mainstream in highly scalable and performant systems.  Some of the newer approaches, such as caching, eventual consistency, NoSQL databases and complex event processing, are covered, and their pros, cons and applications discussed.

Learn to love SCOM!

Out of the box, the SCOM Management Pack for SQL Server is not popular with DBAs.  This demo-based session will take you through the process of extending SCOM to properly monitor and analyse SQL performance, ultimately providing performance management dashboards for your CIO.

Love it or hate it - SCOM is the enterprise operational monitoring tool of choice for many organisations, so learn to embrace it and unlock the wealth of performance data it captures.

In this session with examples we will cover how to identify inefficiencies in parallel query execution. We will also investigate some invisible symptoms! Keywords: MAXDOP, CXPACKET, SLEEP_TASK & SOS_SCHEDULER_YIELD.

I don’t have a quad core laptop yet, so I will do the best with my dual core!

With examples we will discuss tips and tricks that will be useful for Developers, DBAs and Consultants.
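One simple experiment along these lines (the query and table are placeholders) is to compare a query's parallel plan against a serial one, and to inspect the wait types named in the keywords above:

```sql
-- Force a serial plan for comparison against the default parallel plan.
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 1);

-- Inspect parallelism-related waits accumulated on the instance.
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('CXPACKET', 'SOS_SCHEDULER_YIELD', 'SLEEP_TASK');
```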

This session details a typical DBA's day, demonstrating the management platform and performance monitoring improvements made in Microsoft SQL Server 2008.

We will concentrate on how best you can make use of Microsoft's pre-installed tools as they pertain to efficient systems management, multi-server management, performance tracking, and policy implementation. Take advantage of the detection and enforcement of security policies and design practices in your database environment using both graphical and command-line tools. We cover everything from the new management interface in SQL Server to the Management Data Warehouse, including a practical demonstration of PowerShell integration in SQL Server administration tasks, along with best practices.

You can read the Microsoft marketing messages about how wonderful the HA options are in SQL Server. So what are they like in the real world? This session will discuss the theory and see how it works in practice, drawing on my experience of Failover Clustering, Database Mirroring, Log Shipping and the other "HA" options that exist. The session will also cover business considerations, such as how expensive the solutions are, both for hardware purchase and software licensing. This session is held over from SQLBits 5, when the speaker was ill.

Performance troubleshooting is a key aspect of any operational DBA's role and identifying performance bottlenecks quickly enables faster problem resolution.  PAL helps identify problems quickly by processing large log files and providing a colour-coded report highlighting problem areas. Here's a summary of the session:

  • Performance troubleshooting
    • Troubleshooting methodology
    • Data capture
  • Using Performance Analysis for Logs (PAL)
    • Analysing PerfMon log files
    • Tuning thresholds
    • Working with threshold templates
  • Post-analysis steps
    • Establish a baseline
    • Next steps for troubleshooting

David Elliott and David Prime from Betfair will be presenting. This session will investigate using StreamInsight, SQL Server and Analysis Services to provide an example framework to monitor cube usage, as well as suggest a mechanism for highlighting areas for performance and security enhancements.

SQL Server 2008 provides a lot of scaleout technologies. In combination with Service Broker you can build message-based applications that can be scaled out to any required workload and size. In this session you will learn the basics of scaleout technologies available in SQL Server 2008 and in Service Broker. We’ll cover in detail:

  • Scaling out with Service Broker
  • Load balancing
  • Routing
  • Service Broker message forwarders
  • Broker Configuration Notice Service

We can think of a process memory dump as a photograph of the process memory at a given point in time – usually when something very wrong has happened. Dumps can be used to perform post-mortem debugging, to get the exact details on the nature of a problem and avoid it in the future. They are really important, but often overlooked, so this session will try to shed a little bit of light on crash dump analysis and postmortem debugging.

A brief introduction to WinDbg and adplus.vbs, as well as general postmortem debugging,  will lead to a series of examples where we will be able to determine the root cause of some SQL Server crashes (such as data corruption on a DBCC CHECKDB process) and SQL Server stability problems (such as the typical 17883 errors).

A discussion on how to debug common, but tricky, development problems will follow, with examples of debugging .NET applications that fail to dispose of their database connections properly, as well as other kinds of database-related problems.

This session will be valuable both for DBAs who want to understand why their SQL Server instance - or cluster resource, etc - is crashing unexpectedly as well as for developers, who will add a new tool and skill set to improve their troubleshooting skills.

Fast Track is a new reference data warehousing architecture provided by Microsoft. More than this, it represents a new way of thinking about data warehousing. A Fast Track system is measured by its raw compute power - not by a DBA's ability to tune an index. Fast Track is an appliance-like solution that delivers phenomenal performance from a pre-defined, balanced configuration of CPU, memory and storage, using nothing but commodity hardware.

Of particular interest in a Fast Track system is the way in which the storage and SQL Server are configured. To achieve the fantastic throughput without using SSDs requires some careful configuration.  This configuration is designed to make use of Sequential I/O to dramatically improve disk I/O performance.

Interested? If you have a large data warehouse that's seen better days or perhaps you are about to embark on a new warehousing project then you should be!  Fast Track is a great solution with a fantastic value proposition.

In this one hour session we'll aim to get under the skin of Fast Track and get some answers as to how it delivers such great throughput on commodity hardware. In the process we'll aim to answer the following questions:

  • When might I need Fast Track? 
  • What is Sequential I/O?
  • How does Sequential I/O improve performance?  
  • What do I need to do to get Sequential I/O?  
  • How can I monitor for Sequential I/O ?
  • What may I need to change in my ETL to get the benefit of sequential I/O?

Allan Mitchell will be showing how to build SSIS packages to take advantage of sequential IO performance.

Still reading?  I'll save you a front row seat....

Successful Logging of real Timber entails meticulous planning, acute selection, proper process and flawless execution, all of which are important to produce the best returns. Transaction log files in SQL Server are an extremely important component, requiring careful planning for correct sizing, careful implementation and acute management to keep the cogwheels of SQL Server well oiled and chugging away, extracting the best performance out of the system.

This session is filled with demos of real-world samples, starting with analysing the core structure of transaction log files and the role they play in different working scenarios. We will delve further into the role of the transaction log file: planning how to correctly size it, engineering manageable VLFs, choosing the correct recovery model, and the intricacies of transaction log file usage with respect to checkpoints, snapshots, replication, crash recovery, database backups, database mirroring and change data capture. We will deep dive into recovering from lost log files, a peculiar case of a runaway tempdb log file and its resolution, and troubleshooting autogrowing log files. So bring out your chainsaws, sharpen your blades and get ready to start Logging the right way!

So how does SQL Server pick which plan to run?  If you've ever wanted to get a better understanding of how the Query Optimizer works, this is the talk to attend.  Come listen to a member of the Query Processor development team explain how a query goes from SQL to the final plan.  This talk will cover conceptual topics like how query trees are built and how the optimizer stores all of the alternatives it is considering.  Additionally, the talk will examine examples in areas including index matching, parallel query generation, and update processing so that you can apply these concepts to better debug your own queries using the same techniques.
Used properly, normalisation brings huge advantages.  It optimises storage (each piece of data is stored only once), it removes an entire class of update, insert and delete anomalies, and it improves data integrity.  What more could we ask for?  Well, performance can be an issue.  Normalised databases often have a reputation for poor performance.  This talk will examine the role of normalisation in performance and focus on effective ways we can denormalise data and yet retain the data integrity that normalisation brings.  This talk complements that by Tony Rogerson beautifully and will be given by Mark and Yasmeen Ahmed, who works with him at the University of Dundee.
With the combination of Windows Server 2008 R2 and SQL Server 2008 R2 it has now become possible to run SQL Server on up to 256 Cores. Recently, the SQL CAT team had a customer in the lab to test a banking application at high scale-up. Previously, we had test runs for the same application on 32-cores – but now the time had come to stress the app to 128 Cores. In this session, I will talk about the lessons we learned from stress testing an OLTP workload at this scale. You will see some interesting bottlenecks and get an idea of what sort of numbers are achievable on a big SQL Server box. Of course, I will also provide you with the guidance on how to get those numbers.
Do you wonder about SSIS performance? Well I do, and I've compiled my research into this session. We'll cover various design patterns for solving common problems, like inserts vs. updates: is it faster to use a lookup, or can you just catch the errors and process them afterwards? As well as the richer patterns, we'll look at some straight comparisons between two components that can be used to perform the same task and ask which one is quicker.

This session will help you understand the basic concepts of building SQL Azure applications. After an overview of SQL Azure, we will walk you through details such as setting up a SQL Azure account, connecting to SQL Azure, managing logins and security, creating objects, migrating database schemas and moving data. This will be demonstrated by showing how to build a simple application.

In almost all data warehouses there is a requirement to quickly get data into SQL Server. In this talk, I will present the different options you have: SSIS vs. T-SQL. We will look at some common design patterns for scalability and take a deeper look at how the Bulk API for SQL Server works.
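As a minimal sketch of the bulk-load path discussed above (the file path, table name and options are assumptions for illustration):

```sql
-- Minimally-logged bulk load; TABLOCK enables the bulk optimizations,
-- and BATCHSIZE controls commit granularity.
BULK INSERT dbo.StagingSales
FROM 'C:\loads\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK, BATCHSIZE = 100000);
```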
This demonstration shows you how to manage database changes using source control within SQL Server Management Studio. Source controlling the database is quick, and lets you track database changes, seeing who made them, when, and why. Source control can work with your continuous integration environment so database changes kick off automated tests. This ensures any problems are caught quickly. Once these changes have been tested, deployment to your test or production servers becomes a simple operation.
Increase the "out of the box" performance and throughput of SQL Server. How can you double backup speed, improve table scans, or load a single flat file as quickly as possible into SQL Server? What benefits does solid-state storage bring? This "free format" session is led by Henk Van der Valk, manager of the Unisys EMEA SQL Server Performance Lab. It will give practical, real-world advice for improving SQL performance and scalability, revealing details of a new offering from Unisys, "SQL PowerRack", which combines high-performance server technology and solid-state storage to deliver new levels of performance for the most demanding SQL BI applications.
Presented by Quest Software's Iain Kick, Senior Technical Consultant for SQL Server. What do you do when the SQL Server service won't start? We will cover techniques to quickly alert you to service failure, and where to go to identify and fix any issues. This presentation will cover the native Microsoft solutions and Quest's Spotlight on SQL Server Enterprise 'new way'. Attend this presentation to receive your free Spotlight on SQL Server Enterprise licence; conditions apply.
In this session Greg Gonzalez, President and Product Manager for SQL Sentry, will illustrate how the SQL Sentry BI Suite provides unparalleled insight, awareness and control over the true source of performance issues across the entire SQL Server BI platform. He will highlight some unique and patented features not seen in any other monitoring product, from an animated real-time and historical Disk Activity view to pinpointing high-impact SQL and SSAS queries.