Performance Testing Series
This post is part of a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here. For a summary of all the results please see here.
Running these performance tests required deploying some large pre-populated databases (particularly for the Sequential Select and Scale Tests). These databases were deployed via a variety of means – with the dual objectives of minimising deployment time whilst also capturing some useful performance metrics.
So, before we delve into the detail of the performance tests, this post provides a digression into the results of the database deployments and the information inferred from them.
Database Import Results
Fifteen databases of different sizes were deployed into Azure using the logical database BACPAC format during the course of these tests. The import times for these databases are illustrated below.
Even a quick glance shows just how slow importing a database of any significant size into the Standard service tier is. At this rate, importing a 200 GB database into the Standard tier would take almost seven days! Clearly importing under the Premium tier to later switch down to Standard is worth considering (changing tiers can involve data movement and thus latency, though Microsoft explicitly note switching down from Premium to Standard does not involve data movement).
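That seven-day figure can be sanity-checked with a little arithmetic, using the Standard S1 log write rate estimated later in this post (roughly 21.8 MB per minute). A minimal sketch:

```python
def import_days(size_gb, rate_mb_per_min):
    """Estimate import duration in days from database size and a sustained write rate."""
    return size_gb * 1024 / rate_mb_per_min / 60 / 24

# Standard S1 log write rate estimated later in this post (MB per minute)
standard_s1_rate = 21.8

print(round(import_days(200, standard_s1_rate), 1))  # roughly 6.5 days
```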
Examining the chart a little more closely also shows that importing a 10 GB database into the Web / Business service tier is significantly quicker than importing a 2 GB database into the Basic service tier.
This becomes clearer if we flip the chart around to look at write rates instead:
Performance of the Web and Business tiers is quite variable but very much higher than that of the newer tiers, consistently and substantially beating even the P1 tier.
Delving into the sys.resource_stats data provides an explanation. Looking at a typical import of a 2 GB database under the Basic service tier:
For the duration of the import, the Average Log Write metric is reporting 100%. Clearly the BACPAC import process runs under the new performance constraints, i.e. the CPU limit and, most critically, the log write limit, which now heavily restricts the import rate, especially in the Basic and Standard service tiers. Indeed, checking the sys.dm_db_wait_stats view reveals LOG_RATE_GOVERNOR as the predominant wait type. This makes imports typically take significantly longer than under the Web / Business service tiers.
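The "predominant wait type" check amounts to picking the wait type with the largest cumulative wait time. A minimal sketch; the sample rows below are invented for illustration, the real figures come from querying the sys.dm_db_wait_stats view during an import:

```python
def predominant_wait(wait_stats):
    """Return the wait type with the largest cumulative wait time.

    wait_stats: iterable of (wait_type, wait_time_ms) pairs, as might be
    read from the sys.dm_db_wait_stats view.
    """
    return max(wait_stats, key=lambda row: row[1])[0]

# Hypothetical sample figures, for illustration only
sample = [
    ("LOG_RATE_GOVERNOR", 512_000),
    ("PAGEIOLATCH_SH", 48_000),
    ("SOS_SCHEDULER_YIELD", 12_000),
]

print(predominant_wait(sample))  # LOG_RATE_GOVERNOR
```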
The same behaviour of the log write rate limit constraining the import process repeats for imports into the Standard and Premium service tiers, for example here is the equivalent chart for a 36 GB database import under the Premium service tier:
As a final comparison, the chart below compares the import progress over time for the same 36 GB database (i.e. same BACPAC) under the Business, Standard and Premium service tiers:
As is clear, the Business tier import completes much more quickly. Two imports into the Business tier are shown to illustrate the potential variability of this tier.
All this allows us to estimate the maximum log write rates for the different service tiers as:
- Web / Business = 114 MB per minute = 1.90 MB per second
- Basic = 9.7 MB per minute = 0.16 MB per second
- Standard S1 = 21.8 MB per minute = 0.36 MB per second
- Premium P1 = 54.4 MB per minute = 0.91 MB per second
Note: The above rates are based on data throughput. The actual maximum log write rates will be slightly higher (probably around 10% higher) due to the overhead of the log record structure that surrounds the data content in the log records.
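The per-second figures above are just the per-minute figures divided by sixty; as a quick sanity check (values rounded to two decimal places):

```python
# Estimated maximum log write rates, MB per minute (from the list above)
rates_mb_per_min = {
    "Web / Business": 114,
    "Basic": 9.7,
    "Standard S1": 21.8,
    "Premium P1": 54.4,
}

# Convert MB per minute to MB per second, rounded to two decimal places
rates_mb_per_sec = {tier: round(rate / 60, 2) for tier, rate in rates_mb_per_min.items()}

print(rates_mb_per_sec)
```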
Database Import Summary
Database (BACPAC) import has always been relatively slow (compared to an on-premises database restore). In the new service tiers it runs under a log write rate limit, which sadly makes it even slower. Even imports under the Premium P1 tier take about twice as long as under the Web / Business tier.
At the present time, I don’t believe it is possible to import under any performance level higher than P1 (the import defaults to the lowest performance level within each tier). Hopefully this is something Microsoft is addressing during the preview, or, better still, will remove entirely the need to import/export using a logical database format, which becomes progressively more impractical as scale increases.
The import times here cover only the import stage in which the tables are populated (and the tables in these tests had no non-clustered indexes). Any non-clustered indexes are added in a later stage, which can take a significant amount of time, particularly on the heavily performance-limited tiers such as Standard (e.g. a third, a half or even more of the bulk load time). Again, just another reason to dislike BACPAC files!
The fact that the database import process is subject to the log write rate limit in the Basic, Standard and Premium tiers provides a quick way of estimating what that limit is in each of those tiers and comparing very roughly between tiers:
| Service Tier | Estimated Log Write Rate Limit | Log Write Rate Limit Comparison |
|---|---|---|
| Web / Business | 114 MB per minute | – |
| Basic | 9.7 MB per minute | 8.5% of Web / Business |
| Standard S1 | 22 MB per minute | 19% of Web / Business |
| Premium P1 | 54 MB per minute | 47% of Web / Business |
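The comparison column follows directly from the per-minute figures; a minimal sketch (rounded to one decimal place, so slightly more precise than the table):

```python
baseline = 114  # Web / Business log write rate, MB per minute
limits = {"Basic": 9.7, "Standard S1": 22, "Premium P1": 54}

# Each tier's estimated limit as a percentage of the Web / Business rate
comparison = {tier: round(100 * rate / baseline, 1) for tier, rate in limits.items()}

print(comparison)  # {'Basic': 8.5, 'Standard S1': 19.3, 'Premium P1': 47.4}
```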
Database Copy Results
Several database copies were also performed whilst running the performance tests. The copy times and copy rates are shown below.
Clearly database copy is a much faster operation than import, completing in minutes not hours. No database copies were performed on the new Service Tiers, since this operation is not fully supported under the new tiers as described here.
Database Restore Results
Several database restores were also performed whilst running the performance tests. The restore times and restore write rates are shown below.
The restore rate is independent of Service Tier and averages out at 150 MB per minute. This is the same 150 MB per minute used when estimating the time for data movement that occurs when changing database service tier and/or performance level as described under the heading “Impact of Database Changes” here.
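That average makes rough restore-time estimates straightforward. Illustrative arithmetic only, using a database the size of the 36 GB test databases here:

```python
def restore_minutes(size_gb, rate_mb_per_min=150):
    """Estimate restore time from database size at the observed average restore rate."""
    return size_gb * 1024 / rate_mb_per_min

print(round(restore_minutes(36)))  # roughly 246 minutes, i.e. about four hours
```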
It is also worth noting that a restored database automatically adopts the lowest performance level in a given tier. E.g. restoring a database that was P2 at the time of backup / deletion will result in a new P1 database. The performance level is then changed from S1 to S2 or P1 to P2 if required via one of the usual management interfaces. Although the Azure documentation suggests here that these changes can result in data movement and thus some time to take effect, all of the S1 to S2 and P1 to P2 changes in the newly restored databases above were immediate.
This potential data movement after performance level changes may be a load balancing operation within Azure that is simply less likely at present, given the probable relatively low adoption of the new tiers so far.
It is also interesting to note the two restores of the Basic Edition database that appear to proceed at twice the typical rate. Perhaps this is an edge effect that is magnified simply because of the relatively small size of the Basic Edition databases (only 2 GB) compared to the Standard and Premium Edition databases (all 36 GB).
Database Copy and Restore Conclusions
Restoring a database from a backup is significantly faster than importing from a BACPAC file, especially so for the Basic and Standard tiers. It is not subject to the log write rate limit.
Copying a database is, relatively speaking, extremely fast though has more variable performance – and is not fully supported in the new Service Tiers.