Azure SQL Database Memory Limits By Service Tier

Background

The new service tiers in Azure SQL Database provide different levels of performance, by offering more of the following at each higher service level:

  • Processing (CPU) Time
  • Read Rate (from physical disk)
  • Write Rate (to the database log file)
  • Memory (space to cache and process data)

One of the frustrating aspects of the new service tiers on a technical level is that Microsoft are not providing any particularly meaningful performance units to measure the new tiers by.  Everywhere you look you find only relative units, be they DTUs or percentages that measure resource utilisation within each tier or compare between tiers (e.g. in the sys.resource_stats / sys.dm_db_resource_stats DMVs, in the master database and your own database respectively).
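
For reference, the relative figures for a database in one of the new tiers can be seen with a query along these lines (15-second intervals, covering roughly the most recent hour):

    -- Resource usage relative to the tier limits, in 15-second intervals
    -- (run inside the user database; new service tiers only).
    SELECT TOP (20)
           end_time,
           avg_cpu_percent,
           avg_data_io_percent,     -- physical read rate vs. the tier limit
           avg_log_write_percent,   -- log write rate vs. the tier limit
           avg_memory_usage_percent
    FROM   sys.dm_db_resource_stats
    ORDER BY end_time DESC;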

My earlier performance tests provided some information about the maximum read and write rates supported by each tier.

I’ve now performed some further tests that provide some information about the memory limits in each tier / performance level.  Results are given first.  The methodology is at the end of this post.

Test Results – Azure SQL DB Memory Allocation

The tests suggest each database is allocated memory as follows:

  • Web / Business Edition – at least 5 GB (likely variable).
  • Basic Tier – 250 MB
  • Standard Tier S0 – 512 MB
  • Standard Tier S1 – 512 MB
  • Standard Tier S2 – 512 MB
  • Premium Tier P1 – 9 GB
  • Premium Tier P2 – 9 GB

Premium P3 was not tested.  It is not possible to obtain exact figures for Web/Business edition since the DMVs don’t contain memory information in that edition.  However, repeating the same tests showed that Web/Business edition can cache at least 5 GB of data in memory. More on this below.
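
As a rough illustration of the general idea behind these tests (the detailed methodology is at the end of this post), the amount of the current database held in the buffer pool can be estimated from sys.dm_os_buffer_descriptors, assuming that DMV is accessible in the tier being tested:

    -- Rough estimate of how much of the current database is cached in the
    -- buffer pool (8 KB per page); assumes sys.dm_os_buffer_descriptors is
    -- accessible in the tier under test.
    SELECT COUNT(*) * 8 / 1024.0 AS buffer_pool_mb
    FROM   sys.dm_os_buffer_descriptors
    WHERE  database_id = DB_ID();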

This test also allowed some write-rate metrics to be captured.  These are more or less identical to those captured during the earlier tests, i.e. nothing has changed here.

Thoughts

Continue reading


Free Tool: SQL Workload Profiler: For On-Premise and Azure SQL DB

Over the past few weeks I have enhanced a tool I previously developed for gathering and analysing SQL Server Performance Statistics.

Today I have made the tool available via this blog.  It is called SQL Workload Profiler:

[Screenshots: SQL Workload Profiler delta statistics view and chart view]

The tool aims to provide performance statistics for the queries running in your SQL Server minute-by-minute (or at shorter intervals), together with a quick and simple analysis interface to allow that data to be explored.  The captured granular statistics can also be written to a SQL table during data collection for retention, more detailed analysis, etc.
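
For those curious how minute-by-minute query statistics are typically gathered: the usual approach is to snapshot the cumulative counters in sys.dm_exec_query_stats on a timer and difference successive snapshots.  The sketch below is a generic illustration of one snapshot, not the tool’s actual implementation:

    -- One snapshot of the cumulative per-query counters; a profiler takes
    -- snapshots on a timer and differences them to get per-interval figures.
    -- Generic sketch only, not the tool's actual implementation.
    SELECT qs.query_hash,
           qs.execution_count,
           qs.total_worker_time,     -- CPU time, microseconds
           qs.total_logical_reads,
           qs.total_elapsed_time,    -- microseconds
           SYSUTCDATETIME() AS captured_at
    FROM   sys.dm_exec_query_stats AS qs;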

Many more details can be found over on the SQL Workload Profiler page.

GA Performance in New Azure SQL Database Performance Tiers

The new Azure SQL Database Service Tiers have now reached General Availability.

Performance Metrics

Update (31st January 2015):  For current Performance Metrics, please see the current performance test results here (for both v11 and v12).

Changes at General Availability

I will also call out here that the pricing for the Standard Tier at GA has been reduced compared to the prices stated during the preview.  This is a good step and actually makes the move into the new Service Tiers attractive for lightly loaded databases for cost reasons alone (even ignoring all the other great new features in the new Service Tiers) – more thoughts on this below.

Changing Your Thinking When Approaching the New Service Tiers

The new SQL Database service tiers require that we, as customers, adopt a completely different mind-set in how we approach SQL Database. The higher the performance we want, the more we have to pay. And that performance scale, at the bottom end of the new service tiers, starts very small compared to Web/Business. The Standard Tier, even S2, offers on average significantly fewer resources than Web/Business.

Continue reading

Microsoft Azure SQL Database Performance Tests: Summary

Performance Testing Series

This is the final post from a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.

Thanks for reading.  I hope the information shared here will be useful in some way.

Quick Navigation:      << Previous post in the series <<

Update:  31st January 2015

All of the information below reflects the tests that were carried out in July 2014.  The prices are also as Microsoft published them in July 2014.

For current Performance Metrics, please see the current performance test results here (for both the current V11 and the new V12).

For initial impressions of the new service tiers and performance information as of July 2014, continue reading.

Continue reading

Microsoft Azure SQL Database Performance Tests: Scale Tests

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Scale Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects, Indirect Selects and Updates tests. Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Selects – 30% – which breaks down as 75% Direct Selects and 25% Indirect Selects
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in these tests are not limited to a specified request rate.
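
Purely as an illustration of the request mix above (the real selection happens inside the Cloud Service worker threads), the cumulative-probability pick amounts to something like this, expressed in T-SQL:

    -- Illustration only: cumulative-probability selection of the next request.
    -- The real selection happens in the Cloud Service worker threads.
    DECLARE @r float = RAND();
    SELECT CASE
               WHEN @r < 0.20 THEN 'Insert'                        -- 20%
               WHEN @r < 0.50 THEN                                 -- 30% Selects...
                    CASE WHEN RAND() < 0.75
                         THEN 'Direct Select'                      -- ...75% of which direct
                         ELSE 'Indirect Select' END                -- ...25% indirect
               WHEN @r < 0.80 THEN 'Update'                        -- 30%
               ELSE 'Delete'                                       -- 20%
           END AS chosen_operation;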

The earlier tests only ran on a relatively small set of data in memory.  These tests run against a newly restored and much larger 36 GB database (except for Basic edition, where a 2 GB database was used).  The size of the database and the fact it is newly restored means that none of the data exists in the buffer pool at the start of the test and the test therefore includes the effect of read rates from physical disk for each tier.

Using the same workload against each tier is, on one hand, not a smart thing to do – i.e. clearly we expect the lower tiers/performance levels to not perform as well as the top end.  On the other hand, running the same workload is a handy way to illustrate exactly what those differences are like.

The previous Insert, Select, Update and Delete tests randomly picked a row from any point in the database with equal probability.  Since those tests were working with data entirely in the buffer pool, that was fine.  In most databases, certain data is more likely to be hit than other data, e.g. typically more recent data is more “active”.

To at least loosely replicate that, two distribution functions were used in these tests so that data towards the end (highest row ID values) of each test table was more likely to be accessed than data at the start.  The first distribution function, used for the single row operations (i.e. Select Direct, Update and Delete), had a high probability of choosing data in the last 5% of each table.  The second distribution function, used for row ranges (i.e. Select Indirect), had a slightly higher probability of selecting slightly older data – i.e. the function was less peaked at the end of the table.

Probability distribution charts for these functions can be found in an appendix at the end of this post.

I am not at all claiming that this workload and these distributions are representative of the majority of databases in use.  They are merely one example workload and data access pattern.  The single row distribution function in particular is quite “peaked”; however, the choice was deliberate – having such a peak should allow some data to be cached relatively quickly in the buffer pool.
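
Purely as a hypothetical illustration of such a skewed pick (not the actual distribution functions used in these tests, and using placeholder table and column names), applying a power transform to a uniform random number produces this kind of shape:

    -- Hypothetical illustration of a pick skewed towards the end of the table
    -- (not the actual distribution functions used in these tests);
    -- dbo.TestTable and Id are placeholder names.
    DECLARE @MaxId bigint = (SELECT MAX(Id) FROM dbo.TestTable);
    -- POWER(RAND(), 1.0/20) concentrates values near 1:
    -- P(pick falls in the last 5% of ids) = 1 - 0.95^20, roughly 64%.
    DECLARE @PickedId bigint = CAST(CEILING(POWER(RAND(), 1.0 / 20) * @MaxId) AS bigint);
    SELECT * FROM dbo.TestTable WHERE Id = @PickedId;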

Each test lasted 120 minutes and was executed three times against each tier.  Tests were performed against Business Edition with 1 thread generating requests, 2 threads, 4 threads and 8 threads.  This is due to the opportunistic nature of that edition (which means more threads = more throughput, to some extent).  This is less true of the new service tiers, where the different quotas come into play and additional threads don’t improve performance once a quota limit is reached (which generally happens in the tests here).

More details of the test parameters and configurations can be found in the appendices at the end of this post.

Test Results

Overview

Results from the 30 Scale Tests are shown in the charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: Scale Test rows per second by service tier and thread count]

Continue reading

Microsoft Azure SQL Database Performance Tests: Sequential Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Sequential Select Test Overview

The Sequential Select test involves reading an entire table or tables to get a simple estimate of maximum read rates.  The data is read from each table sequentially (i.e. roughly speaking SELECT * with no ORDER BY).  Of course, data access at the file level may well not be sequential.

The test involves using a newly restored database.  The database is sized at 4.5 GB for Web Edition, 10 GB for Business Edition, 2 GB for the Basic Tier and 36 GB for the Standard and Premium tiers.

The size of the database and the fact it is newly restored means that none of the data exists in the buffer pool and the test therefore estimates the maximum read rates from physical disk for each tier.

Test Results

Overview

Results from the Sequential Select tests are shown in the two charts below.  These rates represent the “read-limited” rates, i.e. when the disk read % metric was reporting 100%.  In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads.
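
For the new service tiers, those read-limited intervals can be identified from sys.dm_db_resource_stats, for example as follows (the 99.5 threshold is simply an illustrative cut-off):

    -- 15-second intervals where physical reads were pinned at the tier limit
    -- (new service tiers; the 99.5 threshold is just an illustrative cut-off).
    SELECT end_time, avg_data_io_percent
    FROM   sys.dm_db_resource_stats
    WHERE  avg_data_io_percent >= 99.5
    ORDER BY end_time;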

[Chart: Sequential Select read rates in rows per second by service tier and thread count]

Continue reading

Microsoft Azure SQL Database Performance Tests: Inserts, Direct and Indirect Selects, Updates and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Inserts, Direct and Indirect Selects, Updates and Deletes Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects, Indirect Selects and Updates tests. Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Selects – 30% – which breaks down as 75% Direct Selects and 25% Indirect Selects
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in the UT tests are not limited to a specified request rate while the worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition.

The UT tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates combined with read rates from the buffer pool. In contrast to these tests, the later Scale Tests include all of the actions here but acting on data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: UT test rows per second by service tier and thread count]

Continue reading

Microsoft Azure SQL Database Performance Tests: Inserts, Direct Selects, Updates and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Combined Inserts, Direct Selects, Updates and Deletes Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects and Updates tests.  Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Direct Selects – 30%
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in the UT tests are not limited to a specified request rate.  The LT tests were not conducted for this workload.

The UT tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates combined with read rates from the buffer pool.  In contrast to these tests, the later Scale Tests include all of the actions here but acting on data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: UT test rows per second by service tier and thread count]

Continue reading

Microsoft Azure SQL Database Performance Tests: Inserts and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Insert and Delete Test Overview

The Inserts test involves calling two different stored procedures – one to insert rows into a test table, another to delete rows.

The Cloud Service fires up a number of worker threads as specified in the test definition. The worker threads randomly decide whether to insert or delete (with equal probability).

For an insert, the worker thread generates random data to be inserted. Row Id values (the primary key field) are generated sequentially. The worker threads then call a simple stored procedure to insert rows one-by-one (see stored procedure definition at the bottom of the post here).

For a delete, the worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread then calls a simple stored procedure to delete that single row (see stored procedure definition at the bottom of the post here).
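
The actual stored procedure definitions are linked above.  Purely as a hedged sketch of their general shape (procedure, table and column names here are placeholders):

    -- Hedged sketch only; the actual definitions are at the bottom of the
    -- linked post. Procedure, table and column names are placeholders.
    CREATE PROCEDURE dbo.InsertTestRow
        @Id bigint,
        @Payload varchar(400)
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.TestTable (Id, Payload) VALUES (@Id, @Payload);
    END;
    GO

    CREATE PROCEDURE dbo.DeleteTestRow
        @Id bigint
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE FROM dbo.TestTable WHERE Id = @Id;
    END;
    GO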

The worker threads in the UT tests are not limited to a specified request rate. As soon as a stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.

Both the UT and LT Update tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates.  In contrast to these tests, the later Scale Tests include deleting data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT Inserts and Deletes tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: UT Inserts and Deletes test rows per second by service tier and thread count]

Continue reading

Microsoft Azure SQL Database Performance Tests: Updates

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Update Test Overview

The Update test involves calling a stored procedure to update single rows in a pre-populated test table.

The Cloud Service fires up a number of worker threads as specified in the test definition. Each worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread generates new values for all fields in the row except the primary key then calls a simple stored procedure to update that single row (see stored procedure definition at the bottom of the post here).
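
Again, the actual stored procedure definition is linked above.  A similarly hedged sketch of its general shape (placeholder names only):

    -- Hedged sketch only; the actual definition is at the bottom of the
    -- linked post. Procedure, table and column names are placeholders.
    CREATE PROCEDURE dbo.UpdateTestRow
        @Id bigint,
        @Payload varchar(400)
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE dbo.TestTable SET Payload = @Payload WHERE Id = @Id;
    END;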

The worker threads in the UT tests are not limited to a specified request rate. As soon as an update stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.

Both the UT and LT Update tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates.  In contrast to these tests, the later Scale Tests include updating data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT Updates tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: UT Updates test rows per second by service tier and thread count]

Continue reading