Performance Testing Series
This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here. For a summary of all the results please see here.
For a general overview of the test architecture, test components and test types, please see here.
Insert and Delete Test Overview
The Inserts and Deletes test involves calling two different stored procedures – one to insert rows into a test table, another to delete rows.
The Cloud Service fires up a number of worker threads as specified in the test definition. The worker threads randomly decide whether to insert or delete (with equal probability).
For an insert, the worker thread generates random data to be inserted. Row Id values (the primary key field) are generated sequentially. The worker threads then call a simple stored procedure to insert rows one-by-one (see stored procedure definition at the bottom of the post here).
For a delete, the worker thread picks a Row Id (primary key value) at random from an in-memory list of primary key values that exist in the database. The worker thread then calls a simple stored procedure to delete that single row (see stored procedure definition at the bottom of the post here).
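The control flow of a single worker thread can be sketched as follows. This is a minimal illustration, not the actual test harness code: the class name is hypothetical and an in-memory dict stands in for the stored procedure calls, so only the decision logic (equal-probability insert/delete, sequential Row Ids, random choice of delete target from known ids) is shown.

```python
import random

class InsertDeleteWorker:
    """Sketch of one request-generating worker thread.

    Database calls are stubbed out with an in-memory dict so the
    control flow can be shown in isolation.
    """

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.next_id = 1      # Row Id values are generated sequentially
        self.known_ids = []   # in-memory list of ids that exist in the database
        self.fake_db = {}     # stand-in for the test table

    def insert_row(self):
        row_id = self.next_id
        self.next_id += 1
        payload = self.rng.randbytes(400)   # ~400 bytes of random row data
        # Real test: call the insert stored procedure with (row_id, payload)
        self.fake_db[row_id] = payload
        self.known_ids.append(row_id)

    def delete_row(self):
        if not self.known_ids:
            return
        # Pick a primary key value at random from the ids known to exist
        idx = self.rng.randrange(len(self.known_ids))
        row_id = self.known_ids.pop(idx)
        # Real test: call the delete stored procedure with (row_id)
        del self.fake_db[row_id]

    def run(self, iterations):
        for _ in range(iterations):
            # Insert or delete with equal probability
            if self.rng.random() < 0.5:
                self.insert_row()
            else:
                self.delete_row()
```

In the real tests each stored procedure call is a round trip to the database, so one worker thread running this loop generates a continuous stream of single-row requests.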
The worker threads in the UT tests are not limited to a specified request rate. As soon as a stored procedure call has completed another is generated. The worker threads in the LT tests are constrained to generate requests at a limited rate specified in the test definition. The LT tests gradually increase the rate over the course of a number of tests.
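The LT-style pacing could be implemented roughly as below. This is a simplified sketch under my own assumptions (the function name and signature are hypothetical, not the test harness's API): each request is scheduled against a fixed timetable of one request per `1/rate` seconds, so a single slow call does not permanently lower the achieved rate.

```python
import time

def run_rate_limited(issue_request, target_rate_per_sec, duration_sec,
                     clock=time.monotonic, sleep=time.sleep):
    """Issue requests at a fixed target rate for a fixed duration.

    Request n is scheduled at start + n / rate; the loop sleeps until
    each request is due, then fires it. If a call runs long the loop
    falls behind schedule but still issues the full timetable.
    """
    interval = 1.0 / target_rate_per_sec
    start = clock()
    issued = 0
    while issued * interval < duration_sec:
        due = start + issued * interval   # when request #issued should fire
        now = clock()
        if now < due:
            sleep(due - now)
        issue_request()
        issued += 1
    return issued
```

With this scheme a 30 minute test at, say, 70 requests per second issues exactly 70 × 1800 requests, which matches the "requests per second" axis used in the LT charts.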
Both the UT and LT Insert and Delete tests run against only a small set of test data. This means the entire data set generally exists in the SQL Server buffer pool (the test table is pre-populated immediately before the test begins). This test therefore primarily investigates write rates. In contrast to these tests, the later Scale Tests include deleting data that is not always resident in the buffer pool.
UT Test Results
Results from the 30 UT Inserts and Deletes tests are shown in the two charts below. In these charts, the “T” values are the number of worker threads generating requests against the database, e.g. “Business 4T” = a test against Business edition, running with 4 threads continuously generating requests.
The two charts are similar because the average row size was around 400 bytes throughout. The data volume figures here are based on the data content (i.e. summing the sizes of the data values, according to the number of bytes each data value requires in the SQL Server Data Type scheme but ignoring internal SQL Server overheads).
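This data-content accounting can be illustrated as follows. The column types shown are hypothetical examples, not the actual test table schema: the function simply sums the storage size of each value according to SQL Server data type sizes (4 bytes for `int`, 1 byte per character for `varchar`, 2 per character for `nvarchar`, and so on), ignoring row and page overheads.

```python
# Fixed storage sizes (bytes) for a few common SQL Server data types;
# variable-length types are counted by their actual data length.
SQL_TYPE_SIZES = {
    "int": 4,
    "bigint": 8,
    "datetime": 8,
    "uniqueidentifier": 16,
}

def row_data_bytes(row):
    """row: list of (sql_type, value) pairs.

    Returns the summed size of the data values only, ignoring
    SQL Server's internal per-row and per-page overheads (as the
    MB-per-minute figures in the charts do).
    """
    total = 0
    for sql_type, value in row:
        if sql_type in ("varchar", "varbinary"):
            total += len(value)       # 1 byte per character / byte stored
        elif sql_type == "nvarchar":
            total += 2 * len(value)   # 2 bytes per character
        else:
            total += SQL_TYPE_SIZES[sql_type]
    return total
```

For example, a hypothetical row of one `int` key plus a 396-character `varchar` value counts as 400 bytes, matching the average row size in these tests.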
Several different tests of the current Web/Business edition were carried out. The Web 1T test ran on an A1 sized cloud service instance, the Business 1T test on an A2 sized instance, which explains the slightly higher throughput of the Business 1T test. As the number of threads was increased in the Business edition tests, throughput also increased, though variably.
In contrast to the Business Edition tests, performance of the new service tiers is very consistent. From these results, S2 / P1 generally equals or outperforms Business Edition.
I will demonstrate below that all of the test results for the new service tiers shown above are limited by the SQL Server Log Write Rate limit, i.e. this is the determining factor in the performance.
The charts below show the performance profiles over the lifetime of the tests in the charts above. Since 30 lines is rather a lot to fit onto one chart, data for the different editions / tiers has been split across several charts.
It is also worth noting that, due to the way Azure SQL Database logs usage data into sys.resource_stats, data is only available at five minute intervals. The last data point in each test (which would have been plotted at minute 27 in these charts) is not useful and has been omitted.
Basic to P1 Tiers
Performance of the new tiers is very consistent, other than some variation in the S2 tests. The charts show that these tests weren't constrained on CPU but were clearly constrained on log write quota. All of the data was already present in the buffer pool during these tests, i.e. the physical disk read quota had no impact.
S2 to P2 Tiers
The SQL Server performance data (from sys.resource_stats) has a few data points missing in these charts. It appears a bug or infrastructure fault within Azure was preventing some usage data from being processed correctly into the view.
Nonetheless, these profiles show the much more variable nature of Business Edition. The tests with multiple threads executing show considerably less stable performance, sometimes varying wildly.
In addition, the log write profile looks suspicious, sitting at 100% for every data point in every test. More on this below.
LT Test Results
Three series of LT tests were carried out, against the Web, Basic and S1 editions / tiers.
The chart below illustrates how the performance of the Basic Tier varies as the workload (number of requests per second) is increased. A 30 minute test was conducted at each request rate.
As the request rate is increased, CPU utilisation and log write utilisation increase. At around 70 rows per second the log write limit is reached and the database can no longer keep up with the request rate. All of the data for these tests was present in the buffer pool, so the disk read quota had no impact.
Performance is very consistent throughout.
This chart shows a more mixed set of test results. The initial tests, at low request rates, were slightly impeded (though probably not to any significant effect) by having to load some of the test data from disk, despite it having been created only a few minutes earlier. As the request rate climbed in later tests, a significant percentage of requests were being skipped, despite the log write limit not being reached. A period of lower database performance seems the most likely cause.
The log write limit for this workload is reached at around 140 requests per second, consistent with the earlier UT test.
This chart shows good consistency from Web Edition, with linear performance up to 110 requests per second, also consistent with (and actually slightly better than) the earlier UT test.
It is also interesting that this chart demonstrates a point that has been widely reported: there is no strictly enforced log write limit per database in Web Edition. We apparently hit the reported log write limit at 40 requests per second, yet can continue increasing the request rate to around three times this "maximum" – i.e. to 120 requests per second – before a significant number of requests fail to be processed.
Insert and Delete Test Conclusions
The Insert and Delete Tests have shown that, for a workload of purely stored procedure based row-by-row inserts and deletes (of average row size 400 bytes), performance of the new S2 / P1 tiers generally equals or exceeds the current Business Edition. P2 significantly outperforms Business for this workload.
Appendix – UT Test Configuration
| Test | Web 1T | Bus. 1T | Bus. 2T | Bus. 4T | Bus. 8T | Basic 1T | S1 1T | S2 2T | P1 4T | P2 8T |
|---|---|---|---|---|---|---|---|---|---|---|
| Cloud Svc Inst. Size | A1 | A2 | A2 | A2 | A3 | A1 | A1 | A2 | A2 | A2 |
| Req. Gen. Thread Count | 1 | 1 | 2 | 4 | 8 | 1 | 1 | 2 | 4 | 8 |
| Initial Test Data (MB) | 0.1 | 0.2 | 0.2 | 0.2 | 0.8 | 0.1 | 0.1 | 0.4 | 0.8 | 1.2 |
Appendix – UT Test Results
| Configuration | Avg Rows Per Second (3 runs) | Avg MB Per Minute (3 runs) |
|---|---|---|
| Std S1 1T | 128 / 130 / 135 | 1.36 / 1.39 / 1.44 |
| Std S2 2T | 268 / 257 / 249 | 2.86 / 2.74 / 2.66 |
| Prem P1 4T | 341 / 339 / 341 | 3.63 / 3.62 / 3.64 |
| Prem P2 8T | 666 / 662 / 669 | 7.11 / 7.07 / 7.14 |
| Configuration | SQL Avg Log Write % (3 runs) | SQL Avg Disk Read % (3 runs) | SQL Avg CPU % (3 runs) |
|---|---|---|---|
| Std S1 1T | 94.0 / 95.1 / 98.7 | 0.0 / 0.2 / 0.0 | 31.8 / 30.1 / 34.0 |
| Std S2 2T | 98.3 / 93.3 / 88.1 | 0.0 / 0.0 / 0.1 | 33.4 / 31.4 / 28.1 |
| Prem P1 4T | 99.7 / 99.2 / 99.8 | 0.0 / 0.0 / 0.0 | 13.6 / 12.5 / 13.5 |
| Prem P2 8T | 97.4 / 97.0 / 97.5 | 0.0 / 0.0 / 0.0 | 17.9 / 18.3 / 18.5 |
| Configuration | Cloud Svc Avg CPU (3 runs) | Error Count (3 runs) |
|---|---|---|
| Std S1 1T | 4.5 / 4.1 / 4.5 | 0 / 0 / 0 |
| Std S2 2T | 5.3 / 4.2 / 4.3 | 0 / 0 / 0 |
| Prem P1 4T | 5.7 / 5.3 / 5.6 | 0 / 0 / 0 |
| Prem P2 8T | 10.1 / 10.0 / 10.3 | 0 / 0 / 0 |
Appendix – LT Test Configuration
| Test | Web | Basic | Std S1 |
|---|---|---|---|
| Cloud Svc Inst. Size | A0 | A0 | A0 |
| Req. Gen. Thread Count | 1 | 1 | 1 |
| Initial Test Data (MB) | 0.25 | 0.25 | 0.5 |
Appendix – LT Test Results
| Threads | Req. Per Sec | Avg Rows Per Second (Web / Basic / Std 1) | Avg MB Per Minute (Web / Basic / Std 1) |
|---|---|---|---|

| Threads | Req. Per Sec | SQL Avg Log Write % (Web / Basic / Std 1) | SQL Avg Disk Read % (Web / Basic / Std 1) | SQL Avg CPU % (Web / Basic / Std 1) |
|---|---|---|---|---|
The Requests per Second maximum applies in total, not per thread.