Microsoft Azure SQL Database Performance Tests: Summary

Performance Testing Series

This is the final post from a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.

Thanks for reading.  I hope the information shared here will be useful in some way.

Quick Navigation:      << Previous post in the series <<

Update:  31st January 2015

All of the information below reflects the tests that were carried out in July 2014.  The prices are also as Microsoft published them in July 2014.

For current performance metrics, please see the current performance test results here (covering both the current V11 and the new V12).

For initial impressions of the new service tiers and performance information as of July 2014, continue reading.


Microsoft Azure SQL Database Performance Tests: Scale Tests

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Scale Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects, Indirect Selects and Updates tests. Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Selects – 30% – which breaks down as 75% Direct Selects and 25% Indirect Selects
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in these tests are not limited to a specified request rate.
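The weighted request-type selection above is simple to sketch in code. Here is a minimal Python illustration (the actual test harness is a .NET Cloud Service; the function and constant names below are mine):

```python
import random

# Request mix from the test definition described above.
REQUEST_WEIGHTS = {
    "insert": 0.20,
    "select_direct": 0.30 * 0.75,    # 30% selects, 75% of which are direct
    "select_indirect": 0.30 * 0.25,  # ...and 25% indirect
    "update": 0.30,
    "delete": 0.20,
}

def pick_request(rng: random.Random) -> str:
    """Pick the next request type according to the configured mix."""
    r = rng.random()
    cumulative = 0.0
    for kind, weight in REQUEST_WEIGHTS.items():
        cumulative += weight
        if r < cumulative:
            return kind
    return "delete"  # guard against floating-point rounding at the boundary

rng = random.Random(42)
sample = [pick_request(rng) for _ in range(100_000)]
```

Over a long run, the observed proportions converge on the configured 20/30/30/20 mix.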

The earlier tests only ran on a relatively small set of data in memory.  These tests run against a newly restored and much larger 36 GB database (except for Basic edition, where a 2 GB database was used).  The size of the database and the fact it is newly restored means that none of the data exists in the buffer pool at the start of the test and the test therefore includes the effect of read rates from physical disk for each tier.

Using the same workload against each tier is, on one hand, not entirely fair – clearly we expect the lower tiers/performance levels not to perform as well as the top end.  On the other hand, running the same workload is a handy way to illustrate exactly what those differences look like.

The previous Insert, Select, Update and Delete tests randomly picked a row from any point in the database with equal probability.  Since those tests were working with data entirely in the buffer pool, that was fine.  In most databases, certain data is more likely to be hit than other data, e.g. typically more recent data is more “active”.

To at least loosely replicate that, two distribution functions were used in these tests that meant the data towards the end (highest Row Id values) of each test table was more likely to be accessed than data at the start.  The first distribution function, used for the single row operations (i.e. Direct Select, Update and Delete), had a high probability of choosing data in the last 5% of each table.  The second distribution function, used for row ranges (i.e. Indirect Select), had a slightly higher probability of selecting slightly older data – i.e. the function was less peaked at the end of the table.

Probability distribution charts for these functions can be found in an appendix at the end of this post.

I am not at all claiming that this workload and these distributions are representative of the majority of databases in use.  They are merely one example workload and data access pattern.  The single row distribution function is in particular quite “peaked”, however the choice was deliberate – having such a peak should allow some data to be relatively quickly cached in the buffer pool.
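The exact distribution functions only appear as charts in the appendix, but a simple power-law skew is one way to reproduce that peaked-at-the-end shape. A Python sketch (the function name and the `peak` parameter are my own, purely illustrative):

```python
import random

def skewed_row_id(max_row_id: int, peak: float, rng: random.Random) -> int:
    """Pick a row id with probability skewed towards the highest ids.

    peak > 1 concentrates picks near max_row_id; peak = 1 is uniform.
    This is a stand-in shape -- the actual functions used in the tests
    are only shown as charts in the appendix of the original post.
    """
    u = rng.random() ** (1.0 / peak)   # skew a uniform draw towards 1.0
    return int(u * (max_row_id - 1)) + 1

rng = random.Random(7)
# With peak=14, roughly half of all picks land in the last 5% of the table.
picks = [skewed_row_id(1_000_000, 14, rng) for _ in range(50_000)]
in_last_5pct = sum(1 for p in picks if p > 950_000) / len(picks)
```

Varying `peak` gives the two different shapes: a high value for the single-row operations, a lower one for the less-peaked row-range distribution.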

Each test lasted 120 minutes and was executed three times against each tier.  Tests were performed against Business Edition with 1 thread generating requests, 2 threads, 4 threads and 8 threads.  This is due to the opportunistic nature of that edition (which means more threads = more throughput, to some extent).  This is less true of the new service tiers, where the different quotas come into play and additional threads don’t improve performance once a quota limit is reached (which generally happens in the tests here).

More details of the test parameters and configurations can be found in the appendices at the end of this post.

Test Results

Overview

Results from the 30 Scale Tests are shown in the charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 2014-07-05-RowsPerSecond1]


Microsoft Azure SQL Database Performance Tests: Sequential Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Sequential Select Test Overview

The Sequential Select test involves reading an entire table or tables to get a simple estimate of maximum read rates.  The data is read from each table sequentially (i.e. roughly speaking SELECT * with no ORDER BY).  Of course, data access at the file level may well not be sequential.

The test involves using a newly restored database.  The database is sized at 4.5 GB for Web Edition, 10 GB for Business Edition, 2 GB for the Basic tier and 36 GB for the Standard and Premium tiers.

The size of the database and the fact it is newly restored means that none of the data exists in the buffer pool and the test therefore estimates the maximum read rates from physical disk for each tier.

Test Results

Overview

Results from the Sequential Select tests are shown in the two charts below.  These rates represent the “read-limited” rates, i.e. when the disk read % metric was reporting 100%.  In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads.

[Chart: 2014-07-04-SeqSel-Rows]


Microsoft Azure SQL Database Performance Tests: Inserts, Direct and Indirect Selects, Updates and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Inserts, Direct and Indirect Selects, Updates and Deletes Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects, Indirect Selects and Updates tests. Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Selects – 30% – which breaks down as 75% Direct Selects and 25% Indirect Selects
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in the UT tests are not limited to a specified request rate while the worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition.

The UT tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates combined with read rates from the buffer pool. In contrast to these tests, the later Scale Tests include all of the actions here but acting on data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-I-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Inserts, Direct Selects, Updates and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Combined Inserts, Direct Selects, Updates and Deletes Test Overview

This test combines the activities of the earlier Inserts and Deletes, Direct Selects and Updates tests.  Please refer to those posts for a more detailed description of these activities.

The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:

  • Inserts – 20%
  • Direct Selects – 30%
  • Updates – 30%
  • Deletes – 20%

As in those earlier tests, the worker threads in the UT tests are not limited to a specified request rate.  The LT tests were not conducted for this workload.

The UT tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates combined with read rates from the buffer pool.  In contrast to these tests, the later Scale Tests include all of the actions here but acting on data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-H-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Inserts and Deletes

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Insert and Delete Test Overview

The Inserts test involves calling two different stored procedures – one to insert rows into a test table, another to delete rows.

The Cloud Service fires up a number of worker threads as specified in the test definition. The worker threads randomly decide whether to insert or delete (with equal probability).

For an insert, the worker thread generates random data to be inserted. Row Id values (the primary key field) are generated sequentially. The worker threads then call a simple stored procedure to insert rows one-by-one (see stored procedure definition at the bottom of the post here).

For a delete, the worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread then calls a simple stored procedure to delete that single row (see stored procedure definition at the bottom of the post here).

The worker threads in the UT tests are not limited to a specified request rate. As soon as a stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.

Both the UT and LT Insert and Delete tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates.  In contrast to these tests, the later Scale Tests include deleting data that is not always resident in the buffer pool.
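The insert/delete decision and the in-memory key list described above can be sketched as follows. This is Python and purely illustrative – the real harness is a .NET Cloud Service, and the class, method and stored procedure names here are mine:

```python
import random

class InsertDeleteWorker:
    """Sketch of one worker thread's insert/delete logic."""

    def __init__(self, seed: int):
        self.rng = random.Random(seed)
        self.known_ids: list[int] = []   # in-memory list of existing keys
        self.next_id = 1                 # Row Id values generated sequentially

    def step(self, execute_proc) -> str:
        # Insert or delete with equal probability; a delete is only
        # possible once at least one row exists.
        if not self.known_ids or self.rng.random() < 0.5:
            row_id = self.next_id
            self.next_id += 1
            execute_proc("InsertRow", row_id)   # one row per call
            self.known_ids.append(row_id)
            return "insert"
        # Pick an existing key at random and delete that single row.
        idx = self.rng.randrange(len(self.known_ids))
        row_id = self.known_ids.pop(idx)
        execute_proc("DeleteRow", row_id)
        return "delete"

calls = []
worker = InsertDeleteWorker(seed=1)
for _ in range(1000):
    worker.step(lambda proc, rid: calls.append((proc, rid)))
```

Note the inserted Row Ids come out strictly sequential while the deleted ones are scattered, matching the behaviour described above.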

UT Test Results

Overview

Results from the 30 UT Inserts and Deletes tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-G-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Updates

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Update Test Overview

The Update test involves calling a stored procedure to update single rows in a pre-populated test table.

The Cloud Service fires up a number of worker threads as specified in the test definition. Each worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread generates new values for all fields in the row except the primary key then calls a simple stored procedure to update that single row (see stored procedure definition at the bottom of the post here).

The worker threads in the UT tests are not limited to a specified request rate. As soon as an update stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.
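The UT versus LT distinction above comes down to whether a worker paces its requests. A minimal Python sketch of LT-style pacing (illustrative only; the real harness is a .NET Cloud Service and the function name is mine):

```python
import time

def run_rate_limited(make_request, target_per_sec: float, duration_sec: float) -> int:
    """Issue requests at (up to) target_per_sec for duration_sec.

    A UT-style worker would skip the sleep entirely and call
    make_request in a tight loop instead.
    """
    interval = 1.0 / target_per_sec
    issued = 0
    start = time.monotonic()
    next_due = start
    while time.monotonic() - start < duration_sec:
        make_request()
        issued += 1
        next_due += interval
        # Sleep until the next request is due; if we are running late,
        # skip the sleep and catch up on subsequent iterations.
        delay = next_due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return issued

# e.g. ~50 requests/sec for half a second => roughly 25 requests
count = run_rate_limited(lambda: None, target_per_sec=50, duration_sec=0.5)
```

Scheduling against an absolute `next_due` time rather than sleeping a fixed interval means slow responses are compensated for, so the average rate stays close to the target.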

Both the UT and LT Update tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates.  In contrast to these tests, the later Scale Tests include updating data that is not always resident in the buffer pool.

UT Test Results

Overview

Results from the 30 UT Updates tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-F-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Mini Sequential Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Mini Sequential Selects Test Overview

The Sequential Select test involves reading an entire table or tables to get a simple estimate of maximum read rates.  The “Sequential” in the name of the test refers to the nature of the read from the table i.e. roughly speaking SELECT * with no ORDER BY clause – of course this may or may not actually be a sequential read at the file level.

This test, the Mini Sequential Selects test, performs the same operation on a much smaller set of data.  This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates sequential read rates from the buffer pool.

In contrast to these tests, the later (Full) Sequential Select tests, perform the same action against a much larger database for which none of the data is in the buffer pool.

As in the earlier tests, the worker threads in the UT tests are not limited to a specified request rate.  The data is merely read sequentially repeatedly during the test.  No LT tests were performed.

UT Test Results

Overview

Results from the 30 UT Mini Sequential Select tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-J-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Indirect Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Indirect Select Test Overview

The Indirect Select test involves executing a text SQL command to select multiple rows from a pre-populated test table.  The aim of the test is to investigate performance of a read workload.

The Cloud Service fires up a number of worker threads as specified in the test definition. Each worker thread picks a Row Id (primary key value and clustered index field) at random from an in-memory list of primary key values that exist in the database – this is the start of the search range.  The worker thread also chooses a number of rows to scan at random: 100 rows (30% probability), 200 rows (20%), 300 rows (15%), 400 rows (10%) and 600/800/1200/2500/5000 rows (each 5%).  This means the average number of rows examined per request is 660.  A further filter clause is generated against one of the other fields in the test table, chosen at random.  The range used for this additional criterion is also random.  Some examples of WHERE clauses that can be generated include:

  • WHERE (RowId BETWEEN @i1 AND @i2) AND (TinyInt1 BETWEEN @p1 AND @p2)
  • WHERE (RowId BETWEEN @i1 AND @i2) AND (SmallInt2 BETWEEN @p1 AND @p2)
  • WHERE (RowId BETWEEN @i1 AND @i2) AND (Int5 BETWEEN @p1 AND @p2)
  • WHERE (RowId BETWEEN @i1 AND @i2) AND (DateTime2 BETWEEN @p1 AND @p2)
  • WHERE (RowId BETWEEN @i1 AND @i2) AND (Float1 BETWEEN @p1 AND @p2)
  • WHERE (RowId BETWEEN @i1 AND @i2) AND (Money3 BETWEEN @p1 AND @p2)

The worker thread then executes this SQL against the database.  So, in summary, each request selects multiple rows by a primary key range – also the clustered index – and by another field – hence the test name “Indirect Select”.
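The clause generation just described can be sketched in a few lines of Python. The row-count weights are those given above; the builder function and constant names are mine, purely for illustration:

```python
import random

# Candidate secondary-filter columns (from the example WHERE clauses above).
FILTER_COLUMNS = ["TinyInt1", "SmallInt2", "Int5", "DateTime2", "Float1", "Money3"]

# (rows to scan, probability) pairs from the test definition.
ROW_COUNT_CHOICES = [(100, 0.30), (200, 0.20), (300, 0.15), (400, 0.10),
                     (600, 0.05), (800, 0.05), (1200, 0.05),
                     (2500, 0.05), (5000, 0.05)]

def build_where_clause(rng: random.Random) -> tuple[str, int]:
    """Build a random indirect-select WHERE clause.

    Parameter values (@i1/@i2/@p1/@p2) are bound separately when the
    SQL is executed.
    """
    counts, weights = zip(*ROW_COUNT_CHOICES)
    rows_to_scan = rng.choices(counts, weights=weights)[0]
    column = rng.choice(FILTER_COLUMNS)
    clause = (f"WHERE (RowId BETWEEN @i1 AND @i2) "
              f"AND ({column} BETWEEN @p1 AND @p2)")
    return clause, rows_to_scan

rng = random.Random(3)
samples = [build_where_clause(rng) for _ in range(200_000)]
avg_rows = sum(rows for _, rows in samples) / len(samples)
```

Averaging the chosen row counts over many draws confirms the expected value of 660 rows per request quoted above.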

As in the Direct Select test, all fields from the test table are returned (roughly speaking “SELECT *”).

The worker threads in the UT tests are not limited to a specified request rate. As soon as a SQL call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.

Both the UT and LT Indirect Select tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server buffer pool (the test table is pre-populated immediately before the test begins). This test therefore investigates whether reads from the buffer pool are constrained by edition / service tier. In contrast to these tests, the later Sequential Select and Scale Tests purposefully target reading from the physical disk.

UT Test Results

Overview

Results from the 30 UT Indirect Select tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-E-UT-Rows]


Microsoft Azure SQL Database Performance Tests: Direct Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Direct Select Test Overview

The Direct Select test involves calling a stored procedure to select single rows from a pre-populated test table.

The Cloud Service fires up a number of worker threads as specified in the test definition. Each worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread then calls a simple stored procedure to select that single row (see stored procedure definition at the bottom of the post here).  I.e. each request to the stored procedure directly selects a single row by primary key value – also the clustered index – hence the test name “Direct Select”.

The worker threads in the UT tests are not limited to a specified request rate. As soon as a select stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.

Both the UT and LT Direct Select tests run against only a small set of test data.  This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins).  This test therefore investigates whether reads from the buffer pool are constrained by edition / service tier.  In contrast to these tests, the later Sequential Select and Scale Tests purposefully target reading from the physical disk.

UT Test Results

Overview

Results from the 30 UT Direct Select tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-D-UT-Rows]
