Performance Testing Series
This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here. For a summary of all the results please see here.
For a general overview of the test architecture, test components and test types, please see here.
Combined Inserts, Direct Selects, Updates and Deletes Test Overview
The type of request generated by each worker thread at any point in time is randomly selected, based on the following probabilities:
- Inserts – 20%
- Direct Selects – 30%
- Updates – 30%
- Deletes – 20%
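As a rough illustration, the weighted request-type selection described above could be sketched as follows (the function and names here are hypothetical, not the actual test harness code):

```python
import random
from collections import Counter

# Request mix used in these tests: each worker thread independently
# picks the type of its next request according to these probabilities.
REQUEST_MIX = [
    ("insert", 0.20),
    ("direct_select", 0.30),
    ("update", 0.30),
    ("delete", 0.20),
]

def next_request_type(rng=random):
    """Randomly select the next request type according to the weighted mix."""
    types, weights = zip(*REQUEST_MIX)
    return rng.choices(types, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Over many draws the observed mix converges on the configured weights.
    counts = Counter(next_request_type() for _ in range(100_000))
    print(counts)
```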
As in those earlier tests, the worker threads in the UT tests are not limited to a specified request rate. The LT tests were not conducted for this workload.
The UT tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates combined with read rates from the buffer pool. In contrast, the later Scale Tests include all of the actions here but acting on data that is not always resident in the buffer pool.
UT Test Results
Results from the 30 UT tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.
The two charts are similar because the average row size was around 400 bytes throughout. The data volume figures here are based on the data content (i.e. summing the sizes of the data values, according to the number of bytes each data value requires in the SQL Server Data Type scheme but ignoring internal SQL Server overheads).
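For illustration, the data-content sizing described above (summing the storage size of each value according to the SQL Server data-type scheme, ignoring internal row and page overheads) might be sketched like this. The column types and the example row are assumptions for illustration, not the actual test schema:

```python
# Storage sizes of some fixed-length SQL Server data types, in bytes.
FIXED_TYPE_BYTES = {"int": 4, "bigint": 8, "datetime2": 8, "uniqueidentifier": 16}

def value_size(sql_type: str, value) -> int:
    """Bytes required by one value under the SQL Server data-type scheme."""
    if sql_type in FIXED_TYPE_BYTES:
        return FIXED_TYPE_BYTES[sql_type]
    if sql_type == "nvarchar":   # Unicode: 2 bytes per character
        return 2 * len(value)
    if sql_type == "varchar":    # 1 byte per character
        return len(value)
    raise ValueError(f"unhandled type: {sql_type}")

def row_size(columns) -> int:
    """columns: list of (sql_type, value) pairs for one row."""
    return sum(value_size(t, v) for t, v in columns)

# Hypothetical row: an int key, a 180-character Unicode string, a timestamp.
example_row = [("int", 1), ("nvarchar", "x" * 180), ("datetime2", None)]
print(row_size(example_row))  # data content only, no internal overheads
```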
In contrast to the Business Edition tests, performance of the new service tiers is very consistent. From these results, S2 generally equals or outperforms Business Edition.
It is interesting to note that this workload shows a more pronounced gap between the S2 and P1 tiers, one that was not always evident in the earlier tests.
I will demonstrate below that all of the test results for the new service tiers shown above are limited by the SQL Server Log Write Rate limit, i.e. this is the determining factor in the performance.
The charts below show the performance profiles over the lifetime of the tests in the charts above. Since 30 lines is rather a lot to fit onto one chart, data for the different editions / tiers has been split across several charts.
It is also worth noting that, due to the way Azure SQL Database logs resource usage into sys.resource_stats, data is only available at five-minute intervals, and the last data point in each test (which would have been plotted at minute 27 in these charts) is not useful, so it has been omitted.
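A minimal sketch of that chart preparation, assuming hypothetical sample tuples pulled from sys.resource_stats (the function name and data shapes here are illustrative):

```python
from datetime import datetime, timedelta

def to_chart_series(samples, test_start):
    """Convert five-minute interval samples into minute offsets for plotting.

    samples: list of (interval_end_time, metric_value), ordered by time.
    The final sample's interval only partially overlaps the test run,
    so it is dropped from the series.
    """
    points = [((t - test_start).total_seconds() / 60.0, v) for t, v in samples]
    return points[:-1]  # omit the last, partially-covered interval

# Hypothetical ~30-minute test emitting six five-minute samples.
start = datetime(2014, 8, 1, 12, 0)
samples = [(start + timedelta(minutes=5 * i), 90.0) for i in range(1, 7)]
series = to_chart_series(samples, start)
print(series)  # minute offsets 5..25; the minute-30 sample is dropped
```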
Basic to P1 Tiers
Performance of the new tiers is very consistent, other than some variation in the S2 tests. The charts show that these tests were not constrained on CPU but were clearly constrained on log write quota. All of the data was already present in the buffer pool during these tests, i.e. physical disk read quota had no impact.
S2 to P2 Tiers
The SQL Server performance data (from sys.resource_stats) has a few data points missing on these charts. It appears as though a bug or infrastructure fault within Azure was preventing some usage data being processed correctly into the view.
Nonetheless, these profiles show the much more variable nature of Business Edition. Tests in which multiple threads were executing show considerably less stable performance, sometimes varying wildly.
In addition, the Log Write profile looks suspicious, sitting at 100% for all data points in all tests. As we found in the LT tests of the earlier workloads, this value is rather meaningless.
Combined Inserts, Direct Selects, Updates and Deletes Test Conclusions
The combined tests have shown that, for a workload of purely stored procedure based row-by-row inserts, direct selects, updates and deletes (of average row size 400 bytes), performance of the new S2 / P1 tiers generally equals the current Business Edition. P2 significantly outperforms Business for this workload.
Appendix – UT Test Configuration
| Cloud Svc Inst. Size | A1 | A2 | A2 | A2 | A3 | A1 | A1 | A2 | A2 | A2 |
| Req. Gen. Thread Count | 1 | 1 | 2 | 4 | 8 | 1 | 1 | 2 | 4 | 8 |
| Initial Test Data (MB) | 0.1 | 0.2 | 0.2 | 0.2 | 0.8 | 0.1 | 0.1 | 0.4 | 0.8 | 1.2 |
Appendix – UT Test Results
| Configuration | Avg Rows Per Second (per run) | Avg MB Per Minute (per run) |
| Std S1 1T | 168 / 152 / 162 | 2.88 / 2.61 / 2.77 |
| Std S2 2T | 334 / 322 / 333 | 5.71 / 5.51 / 5.69 |
| Prem P1 4T | 460 / 458 / 450 | 7.86 / 7.85 / 7.69 |
| Prem P2 8T | 895 / 891 / 872 | 15.32 / 15.27 / 14.94 |
| Configuration | SQL Avg Log Write % (per run) | SQL Avg Disk Read % (per run) | SQL Avg CPU % (per run) |
| Std S1 1T | 89.5 / 81.3 / 86.2 | 0.0 / 0.0 / 0.0 | 35.4 / 33.7 / 35.5 |
| Std S2 2T | 89.5 / 82.2 / 90.3 | 3.0 / 0.3 / 0.1 | 36.0 / 31.2 / 33.4 |
| Prem P1 4T | 99.7 / 99.4 / 96.6 | 0.0 / 0.0 / 0.0 | 15.2 / 15.0 / 15.4 |
| Prem P2 8T | 97.7 / 97.3 / 95.5 | 0.0 / 0.0 / 0.0 | 21.2 / 20.7 / 20.6 |
| Configuration | Cloud Svc Avg CPU % (per run) | Error Count (per run) |
| Std S1 1T | 5.0 / 4.9 / 5.2 | 0 / 0 / 0 |
| Std S2 2T | 6.0 / 5.3 / 5.5 | 0 / 0 / 0 |
| Prem P1 4T | 6.6 / 7.1 / 7.1 | 0 / 0 / 0 |
| Prem P2 8T | 12.5 / 13.2 / 12.4 | 0 / 0 / 0 |
The Requests per Second maximum applies in total, not per thread.