
SQL Workload Profiler Reference

Contents

  • Introduction
  • Background:
    • The SQL Server Plan Cache
    • Plan Cache Performance Counters
    • SQL Workload Profiler
  • SQL Workload Profiler Data Sources
  • Comparing Workload Profiler and the SQL Server DMVs
  • Definitions / Terminology
  • Limitations
  • Dealing with Large Workloads
  • Understanding The Statistics
  • Tabs In Workload Profiler
  • Details Window
  • Aggregation Levels
  • Display Options
  • Analysis Charts
  • Filtering
  • Data Flow
  • Comparing Data Collection Methods
  • Options Reference
  • Configuration File Settings
  • Output Table Columns
  • Version History

Introduction

This page provides a lot of reference information (= brain dump 🙂) about SQL Workload Profiler.  Please feel free to ask questions, point out where you view things differently, highlight mistakes, etc.

For a shorter overview of Workload Profiler, please see the SQL Workload Profiler page.  For step-by-step instructions covering the core usage scenarios, please see the Instructions / Tutorials page.

Background

The SQL Server Plan Cache

A Query Plan (also known as an Execution Plan) is created when SQL Server takes your SQL Query and works out how best to execute it.  Building a Query Plan is relatively resource intensive, so SQL Server will, under most circumstances, keep a copy of the plan around for a while for later re-use.  These plans are stored in an area of SQL Server memory called the Plan Cache.  Alongside each cached plan various cumulative resource usage statistics are also stored.

You can take a high-level look at the contents of the Plan Cache using the "Server Summary" tab in Workload Profiler.  This is based on a query taken from a post on Kimberly Tripp's blog.

For a quick low-level look, run the following SQL query:

-- Top 100 cached plans, together with each plan's XML and the underlying SQL text:
SELECT TOP 100 *
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;

The description of the Plan Cache above is purposefully very short and general.  There are several other very good descriptions of the Plan Cache already out there; for more information, see the resources linked from this page.

Plan Cache Performance Counters

SQL Server offers a few handy performance counters for checking how well the Plan Cache is being utilised (higher is usually better), the number of recompilations (lower is usually better), etc:

SQLServer: Plan Cache Object:

  • Cache Hit Ratio
  • Cache Object Counts

SQLServer: SQL Statistics Object:

  • SQL Compilations/sec
  • SQL Re-Compilations/sec

Check the MSDN links above for details of these and the other Plan Cache related counters too.
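
For example, these counters can be read from T-SQL via the sys.dm_os_performance_counters DMV.  A minimal sketch (note that Cache Hit Ratio is a ratio counter, so its raw cntr_value must be divided by the value of its matching "base" counter to obtain a percentage):

SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE ([object_name] LIKE '%Plan Cache%' AND counter_name LIKE 'Cache Hit Ratio%')
   OR counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');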

SQL Workload Profiler

Workload Profiler started life as a utility that I wrote to understand the dynamics of some workloads on Azure SQL Database (which doesn't support many of the usual tools).  Gradually more features have been added to make Workload Profiler more powerful/usable.

A few weeks ago, with the recent change in Azure SQL Database Service Tiers, I picked up the tool again to profile some other workloads.  Since this sort of profiling is likely to be required more over the next few months, as Azure SQL Database moves to a performance based cost model, I decided to share the tool.  To be clear, it does work for on-premise SQL Server too.

Please remember at the moment the tool is really a beta and will still have some rough edges.

SQL Workload Profiler Data Sources

Most of the data visible in Workload Profiler is taken from the following Dynamic Management Views (DMVs): sys.dm_exec_query_stats (the execution statistics) and sys.dm_exec_sql_text (the procedure/batch and query text).

Retrieving data from either of these DMVs requires that the user be granted the VIEW SERVER STATE permission.
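
For example, a minimal sketch for a dedicated monitoring login (the login name here is hypothetical):

GRANT VIEW SERVER STATE TO [WorkloadProfilerLogin];

On Azure SQL Database, where server-scoped permissions are not available, the equivalent is to grant the database-scoped VIEW DATABASE STATE permission to the database user instead.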

The Exec Query Stats DMV contains an interesting set of identifiers and hash codes.  My interpretation of how to work with them is shown in the following diagram:

[Diagram: 20140928_06_ER]

In this diagram, gold represents entities visible / partly visible in the DMVs.  Grey represents entities that don't exist as such in the DMV but whose existence can be inferred.  The blue column names hanging from the boxes denote identifiers / pseudo identifiers.

The diagram above represents the data as it exists at a fixed point in time.  Once the time dimension is added, the relationship between procedure/batch and query plan becomes one to many instead of one to one.

Comparing Workload Profiler and the SQL Server DMVs

The SQL Server sys.dm_exec_query_stats DMV contains the cumulative resource usage statistics, for each query - plan combination, since each query plan was last compiled.  The statistics are simple totals that often cover time periods of different lengths.  Using the DMVs alone is a great first step.  The resources linked above contain some example SQL queries that can yield a lot of information.
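
The since-compilation nature of these totals is easy to see directly in the DMV.  A quick sketch - note how creation_time (the start of the period each row's totals cover) varies from plan to plan:

SELECT TOP 20
    qs.creation_time,           -- when this plan was compiled, i.e. when its totals started accumulating
    qs.last_execution_time,
    qs.execution_count,
    qs.total_worker_time,       -- cumulative CPU time, in microseconds
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;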

In contrast to the DMVs alone, Workload Profiler aims to provide resource usage statistics across fixed, comparable periods of time, e.g. query P used X resources between times A and B, which were say 30 seconds apart; query R used Y resources between times A and B; and so on.  This is repeated to provide a more complete picture over time, e.g. resource usage between times B and C, C and D, D and E, etc.

In addition, if a query plan was created around time B, then deleted around time D, usage statistics will still appear in Workload Profiler, whereas a simple analysis of the cumulative statistics, just by looking at the DMV at time E, would not contain this information.  Aggregating by query over time (and hence across different generations of Query Plans) is possible with Workload Profiler, which isn't possible with the DMV alone.
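
To illustrate the delta idea in T-SQL (a minimal sketch of the concept only - this is not the tool's actual implementation):

-- Baseline: snapshot the cumulative counters for each query - plan combination.
SELECT sql_handle, statement_start_offset, statement_end_offset,
       plan_generation_num, execution_count, total_worker_time
INTO #baseline
FROM sys.dm_exec_query_stats;

WAITFOR DELAY '00:00:30';   -- one collection interval

-- Delta: current totals minus baseline totals, for queries that executed in the interval.
SELECT cur.sql_handle, cur.statement_start_offset, cur.statement_end_offset,
       cur.execution_count - ISNULL(b.execution_count, 0) AS ExecutionCountDelta,
       cur.total_worker_time - ISNULL(b.total_worker_time, 0) AS WorkerTimeDelta
FROM sys.dm_exec_query_stats AS cur
LEFT JOIN #baseline AS b
    ON  b.sql_handle = cur.sql_handle
    AND b.statement_start_offset = cur.statement_start_offset
    AND b.statement_end_offset = cur.statement_end_offset
    AND b.plan_generation_num = cur.plan_generation_num
WHERE cur.execution_count > ISNULL(b.execution_count, 0);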

Using the Analysis View in Workload Profiler, or by looking at the output SQL Server table, it is possible to view the workload performance data retrospectively, over the course of the data capture session, including drilling into the detail of individual queries and query plan statistics, as they were at the time of each data collection during the profiling session.

Of course, using Workload Profiler also requires less T-SQL!

Definitions / Terminology

Before continuing, let's clarify some terms used in Workload Profiler and in this documentation:

General SQL Server Terms

  • Workload - a general term used to describe all of the stuff that SQL Server is doing.
  • Procedure/Batch - roughly speaking, all of the SQL content of a single request that you send to SQL Server (for ad-hoc SQL queries), or the SQL body of a database object like a stored procedure.
  • Query - part of a Procedure/Batch, e.g. if you send a request to SQL Server that contains three SELECT statements, then you have sent one batch that contains three queries.  More procedure/batch vs. query examples are given below.
  • Query Plan - SQL Server compiles the SQL of each procedure/batch into a Query Plan (also known as an Execution Plan) which usually/ideally is reused for subsequent executions of the same query, to save the effort of recompiling.
  • Query Hash - a hash value based on the text of the SQL after literal values have been removed.  Grouping queries by query hash helps to identify queries that may be good candidates for parametrisation.

Workload Profiler Terms

  • Data Collection - taking a copy of part or all of the statistics in the SQL Server Plan Cache at a point in time.
  • Delta - the difference between two data collections, e.g. query P execution count at time A was 5, query P execution count at later time B was 8, so query P was executed 3 times between times A and B, i.e. query P has an execution count delta of 3.  For long running queries, the query is included in the time period that covers the time when the query finished executing.
  • Session - all of the data collections and statistics gathered since you pressed the "Start" button in Workload Profiler.
  • Cumulative/Total - the sum of all of the Deltas since the start of a Session, e.g. for a particular object or a grand total for all objects.

Quick Procedure/Batch vs. Query Examples

The following examples are designed to illustrate how a Procedure/Batch can be broken down into a set of separate (quite independent) Queries.

Example 1
SELECT abc FROM xyz
SELECT def FROM pqr
= one batch comprised of two queries.
Two rows will appear in the DMV and Workload Profiler.
Example 2
SELECT abc, def FROM xyz
INNER JOIN (SELECT def FROM pqr) pr ON ghi=jkl
= one batch comprised of one query.
i.e.:  The nested SELECT is not a separate query.
Example 3
exec dbo.MyProc1 @a=1
where the definition of dbo.MyProc1 is
CREATE PROC dbo.MyProc1
@a int
as
begin
SELECT abc FROM xyz
end
= one procedure containing one query.
Example 4
exec dbo.MyProc1 @a=1
where the definition of dbo.MyProc1 is
CREATE PROC dbo.MyProc1
@a int as
begin
INSERT INTO xzy (abc) SELECT abc FROM pqr
UPDATE pqr SET abc = ... WHERE ...
end
= one procedure containing two queries.
Example 5
declare @i int = 5
if @i < 10 begin SELECT abc FROM xyz end
else begin SELECT abc FROM xyz end
= one procedure and two queries (but only one executes).
There are no separate metrics for the control logic.
Example 6
declare @i int = 25
if @i < 10 begin select 1 end
else begin select 2 end
= no entry in Workload Profiler.
Nothing in the SQL Server DMVs for simple control logic.
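
At the DMV level, this procedure/batch vs. query breakdown is visible via the statement offsets in sys.dm_exec_query_stats.  A common pattern (a sketch) for extracting each query's text from its parent procedure/batch - the offsets are byte offsets into Unicode text, hence the division by two:

SELECT
    st.text AS procedure_or_batch_text,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)   -- -1 = the statement runs to the end of the batch
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;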

Limitations

Not every request to SQL Server will be picked up by Workload Profiler.  Workload Profiler can only capture information that is in the Plan Cache.

If your SQL Server is under heavy memory pressure, SQL Server may completely clear the Plan Cache.  If so, very little will be seen in Workload Profiler - in the worst case perhaps just a few queries, each appearing as a new Plan - i.e. PS (Plan Status) "N", with low execution counts, even when you know the server is busy.  Checking the performance counters described above can be helpful in such situations.

Other reasons the figures in Workload Profiler may be off:

  • SQL Server may replace a plan with a newer version (e.g. if statistics change).  If the old plan had any execution metrics against it that Workload Profiler had not yet collected, these will disappear into the abyss - though this happens even more if you are just looking at a static extract of the DMVs.
  • A failover will clear the Plan Cache - and this happens in Azure too, for load balancing as well as resilience reasons.
  • The SQL Server DMVs themselves are relatively complex beasts, pulling information from different places inside SQL Server.  Sometimes, a row will disappear and reappear the next time you call into the DMVs (this is probably for performance reasons, to avoid locking/blocking inside the Database Engine).
  • Using the WITH RECOMPILE option with stored procedures prevents use of the Plan Cache = no statistics for those stored procedures.
  • Procedures containing text literals longer than 8 KB don't get cached.
  • Some (most?) bulk operations don't result in a cached Query Plan.
  • Natively compiled stored procedures play differently with the DMVs used by Workload Profiler.  Check MSDN (using the links above).  I have done no testing with natively compiled stored procedures and Workload Profiler.

In terms of Workload Profiler-specific reasons why the data in the tool may not be / appear accurate:

  • Some of the settings on the "Options" tab restrict the amount of information that Workload Profiler can retrieve.  For example, if, since the last data collection, ten thousand different queries have executed, but the "Max Stats Per Collection" setting is set at 1000, then clearly a large chunk of the workload will be missing.
  • Unreliable connections to the server being monitored will result in missed data collections and incomplete information.
  • Long running queries register in the DMVs at the time they complete (i.e. in one go, not progressively as they execute).  As such, a long running query will appear in a single delta in Workload Profiler.  Large long-running queries can therefore make the deltas (and hence the charts in Workload Profiler) appear inaccurate (and "lumpy").

Dealing with Large Workloads

In the context of Workload Profiler, diverse workloads require more processing.  An example will help clarify the meaning of diverse here.  Consider the following two workloads executing between two data collections:

  1. 1000 different stored procedures each executed 10 times.
  2. 10 different stored procedures each executed 1000 times.

Assuming (for simplicity) that each stored procedure contains one query: this means workload 1 results in 1000 rows in the DMVs and Workload Profiler.  Workload 2 results in 10 rows in the DMVs and Workload Profiler.  As such, workload 2 requires much less processing Рboth in terms of work in SQL Server to retrieve the DMV data and in Workload Profiler in processing that data.

The following steps may help in working with very diverse workloads in Workload Profiler:

  • Don't use data collection method 2, since this retrieves every query in the plan cache during every data collection.
  • Consider increasing the time between data collections, to give SQL Server, Workload Profiler (and you!) time to breathe.
  • Consider decreasing the maximum length of the SQL text columns retrieved by Workload Profiler (see the Max Text Length options on the "Options" tab).  The default limit of 1000 characters is very likely too long when capturing very large diverse workloads, especially where the workloads contain many long ad-hoc SQL statements (as opposed to, say, Stored Procedures, for which the Proc/Batch text column contains only the name and not the entire SQL definition).
  • Consider disabling the Analysis View in the UI whilst collecting data.  The in-memory analysis can consume a lot of memory.  Write the performance metrics to a table in SQL Server and analyse later - either in Workload Profiler or via some other means such as SQL, Excel, etc.
  • On the other hand, if analysing whilst capturing is important to you, you may need to increase the memory-usage limit (otherwise only a small number of data points may be visible in the Analysis View) - see the Adv_LiveAnalysisMaxHistoryMemoryMB setting in the .config file to change the limit.
  • If there are a large number of single-use ad-hoc SQL queries blowing out the Plan Cache, consider using the "Use Custom Data Collection SQL Scripts" option and grouping those single-use queries to make them easier to work with.
  • If the UI is very slow (lots of pauses) during the automatic updates, consider disabling the automatic updating of the data tables on each tab.  Right-click on one of the data tables and untick the automatic updating option.
  • If outputting the performance data collected by Workload Profiler to a SQL Server table, put the database containing that table on a separate SQL Server.  That's why there is a separate connection string setting on the "Options" tab 🙂
  • If the "Analysis Details" tab is very slow, consider changing the "Adv_ShowSessionTotalsInAnalysisDeltaDetails" setting (see below).

Understanding The Statistics

Check the definitions of the underlying columns in the source DMVs (see the MSDN links above).

FAQ:  What is the difference between a Logical Read and a Physical Read?

One physical read is the action of reading one page of a database file from disk, into memory, for use by the query.
One logical read is the action of reading one page of a database file from memory, i.e. it was previously loaded into the Buffer Pool in SQL Server memory, cached for future re-use and so luckily did not need to be read from disk again.
A high number of physical reads therefore generally restricts performance more than a high number of logical reads.
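
To see these two counters for an individual query in SQL Server Management Studio, SET STATISTICS IO can be used (the table name here is just the illustrative one used elsewhere on this page):

SET STATISTICS IO ON;
SELECT abc FROM xyz;
-- The Messages pane then reports something like:
-- Table 'xyz'. Scan count 1, logical reads 42, physical reads 3, ...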

FAQ:  What is a SQL Server Page?

SQL Server database data files are divided up into 8 KB units called pages.  SQL Server reads from and writes to the data files in whole pages (i.e. in chunks 8 KB in size).
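
For example, a query that performs 12,800 logical reads has read 12,800 × 8 KB = 100 MB of data from the Buffer Pool - this is the conversion applied when selecting the KB or MB I/O Units display options described later on this page.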

Tabs In Workload Profiler

Roughly speaking, the tabs in Workload Profiler are divided as follows:

  • Current View:  "Now", "Totals" and "Server" tabs
  • Analysis View:  "Analysis", "Analysis Chart" and "Analysis Details" tabs
  • Everything else:  "Server Summary", "Options", "Log" and "About" tabs

Current View:  Now, Totals and Server tabs

[Image: 20140928_09_Tabs]

The current view describes the workload running now on the server being monitored.  The "Now" tab displays the contents of the current delta (i.e. the workload during the last data collection period - see the definitions section above to understand the terminology used here).  The "Totals" tab displays cumulative metrics since the current session was started, i.e. the sum of all of the deltas.

The content of the "Server" tab provides a view similar to querying the DMVs directly, i.e. it aims to be a current view of the plan cache statistics.  It contains the statistics for each query - plan combination since each plan was last compiled.  The exact content of this tab varies slightly, depending upon the data collection method being used.  Data collection method 2 retrieves statistics for all of the queries in the plan cache during every data collection.  As such, the "Server" tab is the same as the DMV in this scenario.

Data collection methods 1 and 3 only retrieve statistics for queries that executed during the time between the last two data collections.  Nonetheless, a copy of the last server statistics obtained for each query - plan combination is retained in memory in Workload Profiler, beyond the life of a single delta.  The "Server" tab therefore displays the server statistics for all of the queries that have executed during the current session.  As later executions of the same plan occur, statistics for these plans will be updated in the "Server" tab.  Later generations of plans will also overwrite the statistics for earlier generations in the "Server" tab (as in the DMV).

The contents of the rows in the "Server" tab will match the equivalent rows in the DMVs in SQL Server.  The DMVs may have additional rows compared to the "Server" tab in Workload Profiler, for those queries that executed before the current Workload Profiler session was started.  Since SQL Server may/will prune old plans, it is also possible/probable that the "Server" tab in Workload Profiler will contain some statistics no longer in SQL Server, since the data collection process does not retrieve details of deleted plans.

Analysis View:  Analysis, Analysis Chart and Analysis Details tabs

[Image: 20140928_10_Tabs]

The Analysis views provide a way to look back through the workload that executed during the current session (or a past session, when re-loading performance statistics that were previously written to a SQL Server table by Workload Profiler).

The "Analysis" tab contains one row for each data collection.  If the data collection interval is set at 30 seconds, there will be a row added to this tab every thirty seconds.  The table displays summary statistics for each data collection.  The "Filter" button at the top of the tab can be used to restrict the statistics, e.g. to a specific stored procedure.  The "Analysis Chart" tab plots the same information from the "Analysis" tab in the form of simple line graphs.  Again, filtering will affect the data displayed.

The "Analysis Details" tab can show the detail underneath each data collection summary line on the "Analysis" tab.  By choosing the relevant view option at the top of the tab, it is possible to see either just the delta, or cumulative statistics that cover the time since the first row in the "Analysis" tab to the selected time in the "Analysis Details" tab.

Note the careful definition above:  "...the cumulative statistics that cover the time since the first row...".  When the Analysis View is used during data collection, memory usage by Workload Profiler is by default limited to 1 GB (this can be changed in the settings file).  Once that limit is reached, the data behind the oldest row (i.e. at the top) in the "Analysis" tab is discarded.  This memory limit only applies during data collection.  No limit is enforced when loading data that was previously saved by Workload Profiler to a SQL Server table.  When loading data from a SQL Server table it is also possible to load only a subset of the data (e.g. a shorter time period, etc).

Data Table Row Colouring

[Image: 20140928_15_Tabs]

The "Totals", "Server" and "Analysis Details" tabs alter the colour of the row background depending upon how recently each query was executed.  Rows are highlighted in orange when that query executed in the current delta.  Rows are also coloured an increasingly dark grey to indicate increasing amounts of time since a query was last executed.

Data Table Columns

[Image: 20140928_11_Tabs]

Most of the columns in the data tables on each tab mirror the columns in the source DMV.  The only additions are:

  • PS = Plan Status, where N = New, E = Existing, R = Reset, F = (Lost and) Found, 0 = (Lost and) Discarded.
  • Gen = Plan Generation (when the aggregation level = Query - Plan).
  • PC = Plan Count (when the aggregation level is above Query - Plan).

Data Table Limits and Column Sorting

[Image: 20140928_12_Tabs]

By default the number of items in the data table displayed on each tab is limited to 500.  This is configurable via a setting in the config file (see below).  However, for most uses, 500 should be sufficient.  Each data table in Workload Profiler can be sorted by clicking on the column heading.  This performs a deep sort - i.e. a full sort of all of the queries in that set of queries (which could be thousands), not just those that are loaded into the on-screen table (i.e. up to 500).

Analysis Memory/Disk Requirements

Some in-memory compression of data is performed, so providing an exact formula for memory usage is not possible.  A rough guide can however be provided using the following example.

  • Data Collection Interval:  30 seconds
  • Session Duration:  4 hours = 240 minutes = 480 data collections
  • Workload:  2000 distinct queries executed during each 30 second interval (the number of executions of each query does not impact memory/disk usage)
  • Resulting number of statistics (approximately = rows from the DMVs) = 960,000 (i.e. approx. 1 million)
  • SQL Server output table size:  approx. 1 GB
  • Memory required:  approx. 1.2 GB

Scale this example to make other estimates.  E.g. changing the data collection interval to every 60 seconds would roughly halve the memory and disk space required.

Details Window

Double-clicking any row on the "Now", "Totals", "Server" or "Analysis Details" tabs will open a window containing more detailed information about the item that was double-clicked.

The exact details that are shown depend upon the level of aggregation currently selected.  More information about Aggregation Levels is provided below.

It should be noted that whilst the data tables and chart in the main window can be configured to change the numerical display calculation and display units (see Display Options below), all numbers in the Details Window are raw totals (counts, pages, milliseconds) - in the same manner as the source DMV.

Aggregation Levels

Workload Profiler can display statistics at three different levels of aggregation.  They may appear similar at first impression but are subtly different:

  • Query - Plan:  The default aggregation level of the source DMV - to be specific, the level of a query - query plan generation combination.
  • Query:  Aggregating across the different plans (or, more likely, plan generations) that exist for a given query.
  • Query Hash:  Aggregating statistics for queries that share the same query hash code (= a hash value calculated after literal values have been removed from the SQL query - so, glossing over some complexity, queries that differ only by literal values will share the same query hash code; see the example below).
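
For example, the following two ad-hoc queries would normally share the same query hash, since they differ only in a literal value (the table and column names are just illustrative):

SELECT abc FROM xyz WHERE def = 5;
SELECT abc FROM xyz WHERE def = 99;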

Each tab in Workload Profiler has a default aggregation level:

  • "Now" tab:  Query - Plan, since in a single delta only one plan for each query is retrieved.
  • "Totals" tab:  Query, since over time different plans / plan generations can exist; if we are interested in totals for each query, we want to remove the detail of specific query plans, hence the use of this aggregation level.
  • "Server" tab:  Query - Plan, since this tab is a current view of the plan cache statistics, in which only a single plan for each query exists.
  • "Analysis Details" tab:  For the same reasons as above:  when the view selected on this tab = Delta, Query - Plan is the default aggregation level; when the view selected = Cumulative, Query is the default aggregation level.

The aggregation level can be changed from the right mouse button context menu on each tab.  Each tab (and each View Option on the "Analysis Details" tab) can have a different aggregation level selected.  The choice is retained until Workload Profiler is closed.

Display Options

[Image: 20140928_13_Tabs]

A context menu (i.e. right mouse button menu) is present on each data table in Workload Profiler.  These options largely control how the statistics are calculated and displayed:

  • Aggregation Level:  See the previous section for an explanation.
  • Values:  change how the numbers are calculated for display - switch between Totals, Average per Execution and Average per Second.
  • I/O Units:  Pages, Kilobytes or Megabytes.
  • Time Units:  Microseconds (though only accurate to the nearest millisecond), Milliseconds or Seconds.
  • Automatically Update:  Suspends the automatic updating of the data tables (which can be annoying if you are looking at a data table when it refreshes and you lose your position).  This does not impact the background data collection, which continues uninterrupted - it simply stops the table in the UI being updated.  When automatic updating is disabled, an additional option "Update Now" also appears in the context menu to allow the data tables to be reloaded with the latest information on demand.
  • Add to Filter:  See the "Filtering" section below for an explanation.
  • Copy to Clipboard:  Copies the current data table in tab-delimited format to the clipboard (embedded tab, carriage return and line feed characters in the data are stripped out).

In the display options, Raw denotes the option that matches how the statistics are displayed / reported through the source DMV.

Display options are applied to all data tables together, e.g. changing the I/O units to MB on one data table changes all of the data tables.  This avoids the confusion of different units.  The one exception to this is Aggregation Level, which can be different on different tabs.

Analysis Charts

The graphs on the "Analysis Chart" tab are pretty straightforward.  They display the same information as shown on the "Analysis" tab.

The check boxes at the bottom of the tab allow the different lines to be shown/hidden.  The "Log." check box switches the axes between normal (linear) scales and logarithmic scales.

It is possible to zoom into different areas in the charts by dragging a rectangle in the chart area.  Click the small (0) buttons at the top / left to zoom back out.

The "Latest" check box locks on to the latest data as it arrives, for a permanently up-to-date view.

Filtering

Workload Profiler supports filtering in four different places, three as standard and one advanced option:

  • Data Collection Filter:  from the "Collection Filter" option on the "Options" tab it is possible to restrict the data being collected by database and/or via text criteria.
  • Analysis Filter:  from the "Edit" button on the three analysis tabs it is possible to filter the data displayed in the analysis tabs by a variety of fields.
  • Database Load Filter:  when loading statistics previously captured to a SQL Server table via the "Load" button on the "Options" tab, it is possible to filter the data by time period, database and/or via text criteria.
  • Custom Data Collection SQL:  if the standard filters do not support the filtering required, using custom SQL scripts to collect data from the Plan Cache into Workload Profiler is possible, though notably more involved than using any of the three standard filters above.

On the "Analysis Details" tab, the right mouse button context menu has an additional option compared to the other tabs:  "Add to Filter".  This takes the currently selected item and adds it into the Analysis Filter, as a quicker way of filtering.

Data Flow

The data flow diagram below summarises how data flows from the Plan Cache through Workload Profiler, paying particular attention to how the three standard data filters described above fit together.

[Diagram: 20140928_05_DataFlow]

The thickness of the arrows in the diagram above roughly indicates the relative size of the data volumes flowing around.

Comparing Data Collection Methods

Workload Profiler supports three different ways of collecting the workload statistics.  All three of the data collection methods essentially do the same thing during a data collection - which is all of the following steps:

  1. Compare the current contents of the DMVs to the contents of the DMVs from the previous data collection.
  2. Identify which queries have executed in the time since the last data collection (loosely referred to in the UI as "recently executed queries").
  3. Retrieve the performance statistics for those query executions.  I.e. subtract the previous DMV statistic totals from the current DMV statistic totals to obtain the difference.  This difference, called the delta, describes the performance of the recently executed queries identified in step 2 above.

The three data collection methods vary in how they carry out the above actions.  Each is described and compared below.

Data Collection Method 1:  Retrieve statistics for recently executed queries and calculate delta client side

Data Collection Method 1 works as follows:  the very first data collection retrieves the performance statistics of all queries in the SQL Server Plan Cache (minus the procedure/batch text and query text, to reduce the query overhead and result set size).  This is necessary to establish a baseline to compare against when calculating the deltas.  This may be a large number of rows on servers with very large plan caches (esp. when populated by lots of single-use ad-hoc queries).

Subsequent data collections are efficient and retrieve only those rows that have executed since the last data collection, restricting the retrieved rows by the last_execution_time column in the DMV.  This data collection method exerts the least overhead on SQL Server.  This method trusts that the last_execution_time field in the DMV is updated correctly/promptly.

Workload Profiler compares the data retrieved in the second and subsequent data collections to the baseline, to identify the queries that have recently executed and calculate the delta.
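
A sketch of the kind of restricted retrieval performed by the subsequent data collections (illustrative only - this is not the tool's actual SQL; here 30 seconds ago stands in for the time of the previous collection, and 1000 for the "Max Stats per Collection" setting):

DECLARE @LastCollectionTime datetime = DATEADD(SECOND, -30, GETDATE());

SELECT TOP (1000) *
FROM sys.dm_exec_query_stats
WHERE last_execution_time > @LastCollectionTime;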

Data Collection Method 2:  Retrieve statistics for all queries in Plan Cache and calculate delta client side

This method is the simplest to understand (and was originally the only method the tool supported).  All of the execution statistics for every query in the Plan Cache are read during every data collection.  The statistics retrieved during the current data collection are compared to those retrieved during the previous few data collections to determine the recently executed queries / the delta.

Since all of the plan cache statistics are retrieved every time, this method retrieves a much larger volume of data from SQL Server during each data collection.  This method is not likely to scale well and will likely perform relatively badly when used against larger and busier servers.

In addition, when the total number of rows in the DMVs exceeds the Workload Profiler "Max Stats per Collection" setting on the "Options" tab, this method will miss some executions / execution statistics.  Actually, that last sentence is true of all three data collection methods.  But since data collection method 2 retrieves statistics for every query in the cache during every data collection, the probability of hitting that limit is very much greater for data collection method 2 compared to methods 1 and 3.

Nonetheless, since everything is retrieved every time, this method provides the most complete data in the "Server" tab - this method keeps that tab completely matching the source DMVs (whereas the other data collection methods can only show queries that have executed during the current profiling session in this tab).

There are also a small number of edge cases where this data collection method can identify missed data collections that the other data collection methods do not support.

Data Collection Method 3:  Process statistics server side for all queries in Plan Cache and retrieve the delta

Method 3 works differently to methods 1 and 2.  Methods 1 and 2 maintain a statistics baseline in Workload Profiler memory; Workload Profiler then compares newly retrieved query execution statistics to this in-memory baseline to calculate the delta.

In contrast, method 3 uses SQL Server to maintain the baseline and calculate the delta.  The data read from SQL is then the actual delta instead of cumulative totals.  This extra SQL Server based processing places slightly more CPU and I/O load on SQL Server but offers increased scalability.

Method 3 keeps the baseline and performs the delta processing (actually nothing more than a simple query join) in tempdb.  Use of tempdb has advantages and disadvantages.  One point to note is that if the database connection from Workload Profiler drops, the baseline must be recalculated (since temp tables are automatically deleted when the connection is closed), which leads to a gap of one data collection.  But this is in fact no different to the other data collection methods, which also skip one data collection after an error/connection interruption for other reasons.

Method 3 identifies the delta by comparing execution counts (as opposed to the last_execution_time field).  As such, my gut instinct is that it is slightly more thorough than the approach of method 1, though I have no actual evidence for this.

Summary Recommendations

Any data collection method can be used when working with small workloads, or large but not very diverse workloads.  (See the "Dealing with Large Workloads" section above for a definition of diverse.)

For servers running large diverse workloads, use method 1 or 3.  Method 3 is possibly slightly more thorough and scalable due to the way it uses SQL Server for more of the data processing, but this places slightly higher load on SQL Server.

Method 2 is not suitable for working with servers running large diverse workloads due to the fact all of the statistics for every query in the entire plan cache are retrieved during each data collection.

Options Reference

[Image: 20140928_14_Tabs]

This section documents the settings that can be found on the "Options" tab in Workload Profiler.  Note that default values for all of the settings described below can be specified in the .config file (see the "Configuration File Settings" section below).

Collect Query Stats From Database

  • "Connection String":  A standard SQL Server connection string to the database/server that you wish to profile.  Note that any database specified here does NOT have any impact on which database statistics are retrieved for.  Use the Data Collection Filter below for that.
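
For example, a typical connection string might look like the following (illustrative values only):

Server=tcp:myserver.database.windows.net,1433;Initial Catalog=MyAppDb;User ID=profiler;Password=********;Encrypt=True;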

Collection Options

  • "Collection Method":  Specifies which approach is used to gather the workload performance statistics from the SQL Server Plan Cache.  See the previous section for a comparison of the different methods.
  • "Use Custom SQL Collection Query":  if checked, instructs Workload Profiler to use the custom queries specified in the configuration file (advanced).
  • "Data Collection":  "Automatic":  If checked, Workload Profiler will automatically perform a statistics data collection at the interval specified in the "Collection Interval" setting.  If not checked, a "Collect Now" button is shown on the "Now" tab that allows you to perform on-demand data collection.
  • "Limits":  "Max Stats. per Collection":  Essentially the X in a "SELECT TOP X" used when retrieving statistics from SQL Server during each data collection.  If this limit is being hit, then the statistics in Workload Profiler will be incomplete to some degree.
  • "Limits":  "Max Stats Total":  A profiling session will automatically end once this number of statistics has been collected.
  • "Limits":  "Max Text Length":  "Read":  The maximum length of the procedure/batch and query text values to retrieve from SQL Server - useful to prevent extremely long SQL batches / queries from bloating the data that is retrieved from SQL and processed by Workload Profiler.
  • "Limits":  "Max Text Length":  "Write":  The maximum length of the procedure/batch and query text values written to the SQL Server table, if that option is enabled.
  • "Collection Filter":  Used to restrict the performance statistics that are read into Workload Profiler.  Any database selection is pushed back to SQL Server in the WHERE clause of the data collection queries.  Text criteria are not pushed back in a WHERE clause due to the potentially excessive CPU overhead this could cause.  This means that the text filtering is applied after the data is read from SQL Server (so if you are only retrieving, say, the first 100 characters of each procedure/batch/query using the limits settings above, then the filtering will only be applied against the text in those first 100 characters).

Write Stats To Database

  • "Enabled":  Tick to enable Workload Profiler to write the performance statistics collected during each data collection to a SQL Server table.
  • "Connection String":  A SQL Server connection string that specifies the database to save the performance statistics into.
  • "Table Name":  The name of the table to save the performance statistics into.
  • "Options":  Click to create an appropriate table for saving the statistics, named as specified above, in the database specified above.
  • "Processing SQL":  Tick to enable Workload Profiler to run a custom data processing script of your choice after each data collection.  The script is executed against the database specified in the "Write Stats To: Connection String".
  • "Browse":  Click to select the custom data processing script to be executed.

Analysis Options

  • "Statistics Source":  Choose where to load analysis statistics from, or disable analysis during a profiling session.
  • "Connection String":  A SQL Server connection string that specifies the database to load previously saved statistics from.
  • "Table Name":  The name of the table containing the previously saved performance statistics.
  • "Load Data":  Click to load the previously saved performance statistics into memory.  This button opens the "Data Load Filter" to allow you to load only a subset of the previously saved data for analysis.

Configuration File Settings

Core Settings

All of the settings on the "Options" tab have corresponding settings in the .config file that accompanies the Workload Profiler executable.  They map as follows, where the settings are listed in the form:  UI Name = Config File Name (allowed values):

Collect Query Stats From Database

  • "Connection String" = AzureSqlQueryStats.Properties.Settings.MonitorConStr

Collection Options

  • "Collection Method" = DataCollectionMethod (ClientDeltaRetrieveRecent | ClientDeltaRetrieveAll | ServerDelta)
  • "Use Custom SQL Collection Query" = UseCustomCollectionSQL (True | False)
  • "Data Collection":  "Automatic" = AutomaticDataCollection (True | False)
  • "Data Collection":  "Collection Interval" = CollectionIntervalSeconds (10 to 3600)
  • "Limits":  "Max Stats. per Collection" = MaxStatsPerCollection (10 to 10000000)
  • "Limits":  "Max Stats Total" = MaxStatsTotal (10000 to 1000000000)
  • "Limits":  "Max Text Length":  "Read" = MaxSqlTextLengthRead (10 to 100000)
  • "Limits":  "Max Text Length":  "Write" = MaxSqlTextLengthWrite (10 to 100000)
  • "Collection Filter" = split into three settings:
    • DataCollectionFilter_DatabaseName (list of database names, one per line)
    • DataCollectionFilter_ProcedureOrBatchText (list of text criteria, one per line)
    • DataCollectionFilter_QueryText (list of text criteria, one per line)

Write Stats To Database

  • "Enabled" = WritingEnabled (True | False)
  • "Connection String" = AzureSqlQueryStats.Properties.Settings.WriteToConStr
  • "Table Name" = WriteToTableName
  • "Processing SQL" = ProcessingEnabled (True | False)
  • "Browse" = ProcessingSqlFilePath (a path to a valid SQL file on the local machine where SQL Workload Profiler is running)

Analysis Options

  • "Statistics Source" = AnalysisSource (Current | Database | Disabled)
  • "Connection String" = AzureSqlQueryStats.Properties.Settings.AnalysisConStr
  • "Table Name" = AnalysisTableName

Advanced Settings

In addition to the core settings, various other advanced options can be configured in the .config file.  All of these settings are prefixed with "Adv_" to denote the fact they do not appear in the UI.

Some of these include:

  • Adv_AutoStart (True | False):  If true, Workload Profiler will start collecting data straight away when opened, assuming valid values have been specified for the other settings.
  • Adv_ListColumnAutoWidth (True | False):  By default, Workload Profiler will resize the width of the data table columns when the Workload Profiler window is resized.  If this behaviour is annoying you, disable it here!
  • Adv_LiveAnalysisMaxHistoryMemoryMB (1 to infinity):  The maximum memory Workload Profiler is allowed to use whilst performing a data capture.  Once this limit is reached, Workload Profiler will begin discarding the oldest data in the Analysis View.
  • Adv_ShowSessionTotalsInAnalysisDeltaDetails (True | False):  To make the "Analysis Details" tab faster in the "Delta" view when working with very large data sets, disable the session total calculation here.
  • Adv_ListViewMaxItems:  Increase or decrease the maximum number of items loaded into the UI data tables.  Note:  higher numbers will make the UI slower.
  • Adv_DisplayDecimalPlaces:  Workload Profiler automatically rounds numbers to an appropriate number of decimal places for display in the data tables.  This algorithm is not perfect - if you would prefer a fixed number of decimal places, specify it here.

Output Table Columns

Most of the columns in the SQL Server output table match those in the source DMVs.  Workload Profiler adds a few additional ones too:

Delta Columns

These columns include the extra information that Workload Profiler calculates i.e. execution statistics for the time periods between each data collection instead of the simple cumulative totals visible in the DMVs:

  • ExecutionCount (note the original execution_count column from the SQL DMV is included in the Workload Profiler table as TotalExecutionCount, to be more consistent with the naming of the other columns, and to make room for the delta ExecutionCount).
  • WorkerTime
  • PhysicalReads
  • LogicalWrites
  • LogicalReads
  • ClrTime
  • ElapsedTime
  • Rows

Time Period Identifiers

  • CollectedAt
  • LastCollectedAt

i.e. a given row in the table describes the executions of one query - plan combination between the LastCollectedAt and CollectedAt times.

Pseudo Identifiers

Some of the values that can be used to identify entities in the DMVs are not exactly easy to work with outside SQL Server (varbinary(64), etc).  To allow simpler analysis in tools such as Excel, a few pseudo identifiers have been added:

  • pProcedureOrBatchId - a pseudo identifier that uniquely identifies each SqlHandle encountered during a profiling session.
  • pQueryId - a pseudo identifier that uniquely identifies each combination of SqlHandle, StatementStartOffset and StatementEndOffset encountered during a profiling session.
  • pPlanId - a pseudo identifier that uniquely identifies each PlanHandle encountered during a profiling session.
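
For example, once a session has been saved, the pseudo identifiers make simple retrospective aggregations straightforward (a sketch - dbo.WorkloadStats stands in for whatever output table name was specified on the "Options" tab):

SELECT pQueryId,
       SUM(ExecutionCount) AS TotalExecutions,
       SUM(LogicalReads) AS TotalLogicalReads
FROM dbo.WorkloadStats
GROUP BY pQueryId
ORDER BY TotalLogicalReads DESC;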

Plan Status

  • Describes how Workload Profiler sees this query plan generation during each data collection, where N = New, E = Existing, R = Reset, F = (Lost and) Found, 0 = (Lost and) Discarded.

Version History

  • 1st Oct 2014:  v0.8.0:  Initial Public Release
  • 31st Mar 2015:  v0.9.0:  Addition of Adv_AutoStart configuration setting.

 

