Performance Testing Series
This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here. For a summary of all the results please see here.
For a general overview of the test architecture, test components and test types, please see here.
Mini Sequential Selects Test Overview
The Sequential Select test involves reading one or more entire tables to get a simple estimate of maximum read rates. The “Sequential” in the name of the test refers to the nature of the read from the table, i.e. roughly speaking a SELECT * with no ORDER BY clause. Of course, this may or may not actually be a sequential read at the file level.
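To make that concrete, the statement issued is essentially of the following form; the table name here is purely illustrative (the actual test table name is not given in this post):

```sql
-- Minimal sketch of the sequential read performed by the test.
-- No ORDER BY clause, so SQL Server is free to return rows in whatever order the scan produces them.
SELECT *
FROM dbo.TestData;   -- illustrative table name
```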
This test, the Mini Sequential Selects test, performs the same operation on a much smaller set of data, which means the entire data set generally exists in the SQL Server buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates sequential read rates from the buffer pool.
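One way to sanity-check that such a read is being served from the buffer pool rather than from disk is to run it with I/O statistics enabled; a minimal sketch (again with an illustrative table name) is shown below. Once the table is memory-resident, the physical reads reported should be zero or very close to it.

```sql
-- Report logical vs physical reads for the scan (output appears in the Messages tab in SSMS).
SET STATISTICS IO ON;

SELECT * FROM dbo.TestData;   -- illustrative table name

SET STATISTICS IO OFF;
-- Expect "physical reads 0" once the table is resident in the buffer pool;
-- the logical reads figure is the number of 8 KB pages touched in memory.
```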
In contrast, the later (Full) Sequential Select tests perform the same action against a much larger database, for which none of the data is in the buffer pool.
As in the earlier tests, the worker threads in the UT tests are not limited to a specified request rate; the data is simply read sequentially, over and over, for the duration of the test. No LT tests were performed.
UT Test Results
Overview
Results from the 30 UT Mini Sequential Select tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.
The two charts are similar in shape because the average row size was around 400 bytes throughout. The data volume figures here are based on data content only, i.e. the sum of the sizes of the data values, according to the number of bytes each value requires under the SQL Server data type scheme, ignoring internal SQL Server overheads.
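For reference, a data-content figure of this kind can be approximated with a query along the following lines (table and column names are purely illustrative); it sums the per-value data sizes and so excludes page headers, row overhead and other internal structures:

```sql
-- Approximate data-content volume: sum of column value sizes, ignoring storage overheads.
SELECT COUNT(*) AS TotalRows,
       SUM(CAST(DATALENGTH(SomeIntColumn) AS bigint)
         + CAST(DATALENGTH(SomeVarcharColumn) AS bigint)
         + CAST(DATALENGTH(SomeDateColumn) AS bigint)) / 1048576.0 AS DataContentMB
FROM dbo.TestData;   -- illustrative table and column names
```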
These results may appear confusing at first glance, but they are easily understood once the CPU power of the Cloud Service is taken into account. For example, the Business 2T and Business 4T tests were both conducted on an A2 sized Cloud Service VM, and two threads are clearly enough to saturate that VM. The Business 8T test, however, was conducted on an A3 sized VM; the extra processing power allowed it to push through significantly more data.
Similarly, the Basic and S1 tests were conducted on an A1 sized VM, whilst the S2, P1 and P2 tests were conducted on an A2 sized VM. For the other UT test types an A2 VM was sufficient to drive the test, but not in this case (reading rows in the request generator is a relatively CPU-intensive activity). So the comparison between the Business 8T and Prem P2 8T tests is not really a fair one!
The full set of VM sizes is given in the appendix of this post.
Ultimately, however, what these tests show is that the service tier / edition places no restriction on the rate at which data can be read from the buffer pool.
Test Profiles
The charts below show the performance profiles over the lifetime of the tests in the charts above. Since 30 lines is rather a lot to fit onto one chart, data for the different editions / tiers has been split across several charts.
It is also worth noting that, due to the way Azure SQL Database logs metrics into sys.resource_stats, data is only available at five-minute intervals, and the last data point in each test (which would have been plotted at minute 27 in these charts) is not useful and so has been omitted.
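For reference, these five-minute figures come from the sys.resource_stats view in the master database. A minimal sketch of the kind of query used to pull them is below; only a couple of well-known columns are selected, since the exact column set has varied across versions of the service:

```sql
-- Run against the master database of the logical server.
-- sys.resource_stats exposes resource usage aggregated into five-minute intervals.
SELECT start_time,
       end_time,
       avg_cpu_percent
FROM sys.resource_stats
WHERE database_name = 'MyTestDb'   -- illustrative database name
ORDER BY start_time;
```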
Basic to P1 Tiers
The resolution of the throughput data collection was improved after the Basic and S1 tests and before the S2, P1, P2 and Business tests, which explains the slightly odd appearance of the data.
The Basic tier CPU profile shows it was CPU bound, which explains why the throughput of the Basic 1T UT test is lower than that of the Std S1 1T test in the earlier summary charts.
All of the other service tiers (S2, P1 and P2, as shown below) consistently show much the same throughput.
At no point was any significant volume of data read from the disk.
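As an aside, one way to double-check this at the database level (independently of sys.resource_stats) is to sample the cumulative file I/O statistics before and after a test run, e.g.:

```sql
-- Cumulative physical I/O per database file since the database came online.
-- If the data set is fully resident in the buffer pool, num_of_bytes_read should
-- barely change between samples taken before and after the test.
SELECT file_id,
       num_of_reads,
       num_of_bytes_read
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);
```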
Std2 to P2 Tiers
Business
Business edition also shows good, consistent throughput.
Mini Sequential Selects Test Conclusions
The Mini Sequential Selects test has shown that the maximum sequential read rate from the buffer pool is, roughly speaking, equal across the different service tiers / editions. Or, to be more precise, that it is not significantly limited by the edition / tier of SQL Database.
Appendix – UT Test Configuration
SQL Edition | Web | Bus. | Bus. | Bus. | Bus. | Basic | Std1 | Std2 | Prem1 | Prem2 |
---|---|---|---|---|---|---|---|---|---|---|
Cloud Svc Inst. Size | A1 | A2 | A2 | A2 | A3 | A1 | A1 | A2 | A2 | A2 |
Req. Gen. Thread Count | 1 | 1 | 2 | 4 | 8 | 1 | 1 | 2 | 4 | 8 |
Initial Test Data (MB) | 0.1 | 0.2 | 0.2 | 0.2 | 0.8 | 0.1 | 0.1 | 0.4 | 0.8 | 1.2 |
Again, in hindsight, the Prem P2 8T test should have been conducted on an A3 sized instance.
Appendix – UT Test Results
Configuration | Avg Rows/sec (UT1) | Avg Rows/sec (UT2) | Avg Rows/sec (UT3) | Avg MB/min (UT1) | Avg MB/min (UT2) | Avg MB/min (UT3) |
---|---|---|---|---|---|---|
Web 1T | 27292 | 27448 | 26984 | 586.1 | 589.8 | 580.2 |
Business 1T | 26481 | 26821 | 26029 | 570.0 | 577.7 | 559.4 |
Business 2T | 40677 | 40757 | 40761 | 875.0 | 877.7 | 876.1 |
Business 4T | 40989 | 40804 | 41853 | 881.8 | 880.2 | 900.4 |
Business 8T | 70420 | 71011 | 70646 | 1514.7 | 1526.9 | 1521.2 |
Basic 1T | 20401 | 19921 | 20044 | 438.2 | 428.5 | 430.1 |
Std S1 1T | 23960 | 25902 | 28070 | 514.8 | 556.9 | 603.8 |
Std S2 2T | 40101 | 40457 | 40484 | 861.2 | 872.0 | 869.5 |
Prem P1 4T | 42612 | 42782 | 42791 | 916.5 | 922.3 | 921.0 |
Prem P2 8T | 39418 | 40029 | 39304 | 848.7 | 861.5 | 845.8 |
Configuration | SQL Avg Disk Read % (UT1) | SQL Avg Disk Read % (UT2) | SQL Avg Disk Read % (UT3) | SQL Avg CPU % (UT1) | SQL Avg CPU % (UT2) | SQL Avg CPU % (UT3) |
---|---|---|---|---|---|---|
Web 1T | 0.0 | 0.0 | 0.0 | 53.4 | 49.9 | 55.7 |
Business 1T | 0.0 | 0.0 | 0.0 | 49.7 | 52.1 | 96.8 |
Business 2T | 0.0 | 0.0 | 0.0 | 73.9 | 75.6 | 74.6 |
Business 4T | 0.0 | 0.0 | 0.0 | 75.2 | 78.7 | 79.3 |
Business 8T | 0.0 | 0.0 | 0.0 | 100.0 | 100.0 | 100.0 |
Basic 1T | 0.0 | 0.0 | 0.0 | 99.2 | 99.6 | 99.7 |
Std S1 1T | 0.0 | 0.0 | 0.0 | 36.4 | 40.8 | 44.5 |
Std S2 2T | 0.8 | 0.0 | 21.9 | 20.6 | | |
Prem P1 4T | 0.0 | 0.0 | 0.0 | 23.2 | 72.1 | 51.3 |
Prem P2 8T | 0.0 | 0.0 | 0.0 | 5.7 | 5.3 | 5.7 |
Configuration | Cloud Svc Avg CPU % (UT1) | Cloud Svc Avg CPU % (UT2) | Cloud Svc Avg CPU % (UT3) | Error Count (UT1) | Error Count (UT2) | Error Count (UT3) |
---|---|---|---|---|---|---|
Web 1T | 99.6 | 99.9 | 98.7 | 0 | 0 | 0 |
Business 1T | 50.8 | 51.2 | 51.4 | 0 | 0 | 0 |
Business 2T | 82.0 | 81.3 | 81.7 | 0 | 0 | 0 |
Business 4T | 83.3 | 83.3 | 84.5 | 0 | 0 | 0 |
Business 8T | 59.8 | 60.5 | 59.8 | 0 | 0 | 0 |
Basic 1T | 72.3 | 69.2 | 70.8 | 0 | 0 | 0 |
Std S1 1T | 82.8 | 93.3 | 98.6 | 0 | 0 | 0 |
Std S2 2T | 81.6 | 82.1 | 83.3 | 0 | 0 | 0 |
Prem P1 4T | 83.7 | 84.0 | 83.8 | 0 | 0 | 0 |
Prem P2 8T | 82.8 | 84.0 | 84.4 | 0 | 0 | 0 |