Microsoft Azure SQL Database Performance Tests: Direct Selects

Performance Testing Series

This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.  For a summary of all the results please see here.

For a general overview of the test architecture, test components and test types, please see here.

Quick Navigation:      << Previous post in series <<      >> Next post in series >>

Direct Select Test Overview

The Direct Select test involves calling a stored procedure to select single rows from a pre-populated test table.

The Cloud Service fires up a number of worker threads, as specified in the test definition. Each worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread then calls a simple stored procedure to select that single row (see the stored procedure definition at the bottom of the post here). In other words, each request to the stored procedure directly selects a single row by primary key value – which is also the clustered index key – hence the test name “Direct Select”.
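The actual stored procedure definition is given in the linked post; purely as an illustration of the per-request logic, a single request looks something like the sketch below. The procedure name usp_DirectSelect, its @RowId parameter and the cursor wiring are my own placeholder assumptions, not the real test code.

```python
import random

def issue_direct_select(cursor, row_ids):
    """Sketch of a single Direct Select request (illustrative only).

    'cursor' is a database cursor (e.g. from pyodbc) connected to the test
    database; 'row_ids' is the in-memory list of primary key values known to
    exist in the pre-populated test table. 'usp_DirectSelect' and '@RowId'
    are hypothetical names standing in for the real stored procedure.
    """
    row_id = random.choice(row_ids)                              # pick a key at random
    cursor.execute("EXEC usp_DirectSelect @RowId = ?", row_id)   # select that single row
    cursor.fetchall()                                            # read the one row returned
```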

The worker threads in the UT tests are not limited to a specified request rate: as soon as one select stored procedure call has completed, another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition; the LT tests gradually increase this rate over the course of a number of tests.
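To make the distinction concrete, the sketch below contrasts the two worker styles, reusing the issue_direct_select sketch above. The fixed-interval pacing and the function names are my own assumptions about how such a harness could work, not the actual test code.

```python
import time

def ut_worker(cursor, row_ids, stop_event):
    """UT-style worker (sketch): issue the next request as soon as one completes.

    'stop_event' is a threading.Event set when the test duration has elapsed.
    """
    while not stop_event.is_set():
        issue_direct_select(cursor, row_ids)

def lt_worker(cursor, row_ids, requests_per_second, duration_seconds):
    """LT-style worker (sketch): pace requests at a fixed target rate."""
    interval = 1.0 / requests_per_second
    next_due = time.monotonic()
    end_time = next_due + duration_seconds
    while time.monotonic() < end_time:
        issue_direct_select(cursor, row_ids)
        next_due += interval
        sleep_for = next_due - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)        # wait until the next request is due
```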

Both the UT and LT Direct Select tests run against only a small set of test data.  This means the entire data set generally exists in the SQL Server Buffer pool (the test table is pre-populated immediately before the test begins).  This test therefore investigates whether reads from the buffer pool are constrained by edition / service tier.  In contrast to these tests, the later Sequential Select and Scale Tests purposefully target reading from the physical disk.

UT Test Results

Overview

Results from the 30 UT Direct Select tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.

[Chart: 20140630-D-UT-Rows]

[Chart: 20140630-D-UT-MB]

The two charts are similar because the average row size was around 400 bytes throughout.  The data volume figures here are based on the data content only – i.e. summing the sizes of the data values, according to the number of bytes each value requires under its SQL Server data type, and ignoring internal SQL Server storage overheads.
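As a rough illustration of how such a data-content figure relates to the rows-per-second figure (the roughly 400-byte average row size is the only input; the actual test table schema is described elsewhere in the series):

```python
# Data volume based on data content only: rows/second * bytes/row, converted to
# MB/minute, ignoring internal SQL Server storage overheads (row headers, page
# structures, etc.).
def mb_per_minute(rows_per_second, avg_row_bytes=400):
    return rows_per_second * avg_row_bytes * 60 / (1024 * 1024)

# e.g. ~265 rows per second at ~400 bytes per row is roughly 6 MB per minute,
# broadly in line with the 1T figures in the results appendix below.
```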

From these charts we can conclude that these read rates are directly dependent on the number of threads reading data: all the 1T tests show similar performance, the 2T tests show similar performance at roughly double that of the 1T tests, and so on.

These charts suggest that there are no performance limits in place that restrict the rate at which data in the buffer pool can be read.

Test Profiles

The charts below show the performance profiles over the lifetime of the tests in the charts above.  Since 30 lines is rather a lot to fit onto one chart, data for the different editions / tiers has been split across several charts.

It is also worth noting that, due to the way Azure SQL Database logging into sys.resource_stats works, data is only available at five-minute intervals, and the last data point in each test (which would have been plotted at minute 27 in these charts) is often not useful and so is generally omitted.
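For reference, those five-minute figures come from the sys.resource_stats view in the logical server’s master database, and can be pulled with a query along the lines of the sketch below. The column set exposed by sys.resource_stats has changed over time, so treat the columns shown (beyond database_name, start_time and end_time) as indicative rather than exact.

```python
import pyodbc

# Query the five-minute resource usage rows for one database.
# Run against the 'master' database of the Azure SQL Database logical server.
QUERY = """
SELECT start_time, end_time, avg_cpu_percent
FROM sys.resource_stats
WHERE database_name = ?
ORDER BY start_time
"""

def fetch_resource_stats(master_connection_string, database_name):
    conn = pyodbc.connect(master_connection_string)
    try:
        return conn.cursor().execute(QUERY, database_name).fetchall()
    finally:
        conn.close()
```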

Basic to P1 Tiers

[Chart: 20140630-D-UT-Metrics1]

[Chart: 20140630-D-UT-CPU1]

[Chart: 20140630-D-UT-DiskR1]

These profiles are largely as expected.  The reason for the occasional dip in throughput has not been determined.

These profiles also confirm that at no point during these tests was the physical disk hit.

Std2 to P2 Tiers

[Chart: 20140630-D-UT-Metrics2]

[Chart: 20140630-D-UT-CPU2]

[Chart: 20140630-D-UT-DiskR2]

Business

[Chart: 20140630-D-UT-Metrics3]

[Chart: 20140630-D-UT-CPU3]

[Chart: 20140630-D-UT-DiskR3]

These profiles show the more inconsistent nature of the Business Edition.  At times throughput varies wildly.

LT Test Results

Three series of LT tests were carried out, against the Web, Basic and S1 editions / tiers.

Basic Tier

The chart below illustrates how the performance of the Basic Tier varies as the workload (number of requests per second) is increased.  A 30-minute test was conducted at each request rate.

[Chart: 20140630-D-LT-2]

This all makes sense.  As the request rate increases, throughput also increases.  The maximum throughput here matches that seen in the UT test.  Beyond a rate of around 250 requests per second, additional requests aren’t processed.  Again, at no point do we hit the physical disk.

S1 Tier

[Chart: 20140630-D-LT-3]

This chart looks very different to the previous Basic Edition chart.  Despite these tests always starting only a few minutes after the test data was created, the data is no longer resident in the buffer pool (for all but the 50 requests per second test).  Almost all of these tests were constrained by the physical disk read rate.  The maximum read rate of 90 or so rows per second is considerably less than the 250 rows per second seen in the Basic Edition test earlier (I have deliberately left the green vertical axis scale the same as on the previous chart to highlight the difference in throughput).

Web Edition

[Chart: 20140630-D-LT-1]

The Web Edition tests do not hit the physical disk.  The maximum throughput rate seen here is actually slightly higher than that found in the UT tests, again hinting at the variability of Web / Business edition.

Direct Select Test Conclusions

When selecting single rows via the Clustered Index key value and when the data is already in the buffer pool, performance of the Web / Business editions and new service tiers is approximately equal.

Appendix – UT Test Configuration

SQL Edition              Web    Bus.   Bus.   Bus.   Bus.   Basic   Std1   Std2   Prem1   Prem2
Cloud Svc Inst. Size     A1     A2     A2     A2     A3     A1      A1     A2     A2      A2
Req. Gen. Thread Count   1      1      2      4      8      1       1      2      4       8
Initial Test Data (MB)   0.1    0.2    0.2    0.2    0.8    0.1     0.1    0.4    0.8     1.2

Appendix – UT Test Results

Configuration   Avg Rows Per Second        Avg MB Per Minute
                UT1     UT2     UT3        UT1     UT2     UT3
Web 1T          269     269     261        5.79    5.77    5.62
Business 1T     347     341     316        7.46    7.36    6.83
Business 2T     596     837     613        12.84   17.99   13.20
Business 4T     1632    1472    1276       35.17   31.75   27.46
Business 8T     2540    3883    3272       54.68   83.54   70.35
Basic 1T        265     267     249        5.68    5.74    5.38
Std S1 1T       317     321     335        6.82    6.89    7.19
Std S2 2T       706     725     695        15.19   15.64   14.99
Prem P1 4T      1587    1444    1414       34.17   31.06   30.45
Prem P2 8T      3159    3253    3298       68.02   70.00   71.06

Configuration   SQL Phys Disk Read %       SQL Avg CPU %
                UT1     UT2     UT3        UT1     UT2     UT3
Web 1T          0.0     0.0     0.0        33.8    33.6    32.5
Business 1T     0.0     0.0     0.0        37.6    38.1    33.4
Business 2T     0.0     0.0     0.0        64.7    96.3    63.1
Business 4T     0.0     0.0     0.0        100.0   100.0   100.0
Business 8T     0.0     0.0     0.0        100.0   100.0   100.0
Basic 1T        0.0     0.0     0.0        80.2    81.2    77.3
Std S1 1T       0.0     0.0     0.0        37.0    37.7    38.5
Std S2 2T       0.0     0.0     0.0        24.9    26.3    24.9
Prem P1 4T      0.0     0.0     0.0        28.6    24.4    24.8
Prem P2 8T      0.0     0.0     0.0        25.2    25.7    28.8

Configuration   Cloud Svc Avg CPU %        Error Count
                UT1     UT2     UT3        UT1     UT2     UT3
Web 1T          6.7     6.5     6.7        0       0       0
Business 1T     5.4     5.1     4.9        0       0       0
Business 2T     7.7     10.8    9.5        0       0       0
Business 4T     18.4    16.2    16.4       0       0       0
Business 8T     19.1    26.5    23.5       0       0       0
Basic 1T        7.4     8.2     7.3        0       0       0
Std S1 1T       8.9     8.0     8.5        0       0       0
Std S2 2T       10.8    10.2    9.6        0       0       0
Prem P1 4T      18.4    16.3    16.8       0       0       0
Prem P2 8T      36.6    39.4    41.5       0       0       0

Appendix – LT Test Configuration

SQL Edition              Web    Basic   Std1
Cloud Svc Inst. Size     A0     A0      A0
Req. Gen. Thread Count   1      1       1
Initial Test Data (MB)   0.25   0.25    0.5

Appendix – LT Test Results

Threads   Req. Per Sec   Avg Rows Per Second        Avg MB Per Minute
                         Web     Basic   Std 1      Web     Basic   Std 1
1         50             49      50      49         1.05    1.07    1.06
1         100            96      100     90         2.07    2.14    1.94
1         200            194     198     83         4.17    4.26    1.80
1         250            241     247     90         5.19    5.31    1.92
1         300            286     267     89         6.15    5.74    1.93
1         400            306     270     90         6.57    5.82    1.94

Threads   Req. Per Sec   SQL Avg Disk Read %        SQL Avg CPU %
                         Web     Basic   Std 1      Web     Basic   Std 1
1         50             6.5     16.2    7.1        0.3     0.8     0.4
1         100            12.5    31.8    11.8       3.3     8.1     –
1         200            23.9    60.5    11.5       6.1     15.5    5.5
1         250            30.0    77.0    11.3       8.9     23.3    –
1         300            35.0    81.5    10.1       11.6    30.2    10.7
1         400            36.7    87.9    10.4       14.5    37.6    –

The Requests per Second maximum applies in total, not per thread.
