At the end of April Microsoft announced changes to Azure SQL Database: see here and here. Alongside many new features, by far the biggest change was the impending retirement of the current Web/Business offering and the announcement of the new Basic/Standard offerings to sit alongside the previously announced Premium offering.
The fundamental change is that Microsoft is moving away from charging by database size (per GB), as the current Web/Business offerings do, to charging by performance for the Basic/Standard/Premium offerings. Microsoft is describing the different performance levels of these new offerings in terms of a new and largely opaque unit, the Database Throughput Unit (DTU), plus some accompanying transaction rate numbers. See here and here. Upon reviewing this information, I was unclear just what throughput I could get for my workloads in the different service tiers. Carrying out some performance tests seemed like the best plan.
So, this is the first in a series of posts that document my tests. I’ll start by describing my objectives below. The next post will describe the test set up. The following posts will then delve into the results of each type of test. The final post summarises all the results in one place if you want to jump right to that.
At the top of each post are links to the next/previous articles in the series. Or use the full contents list below.
Quick Navigation: >> Next post in series >>
Short Of Time – Highlights
If you only have a couple of minutes, just read the Summary.
If you have a little more time, then probably the best bits to read are:
- Bulk Load – to see roughly the maximum write rates per edition/tier,
- Sequential Reads – to see roughly the maximum read rates per edition/tier,
- Scale Tests – to see how the different editions/tiers handle one example workload when read limits are involved,
- Summary – results, rates, concluding thoughts and wrap-up.
All Posts In This Series
- Background and Overview of Performance Tests (this post).
- Test Architecture, Test Types and Test Configurations – Read
Focus: Write Rates
- Database Import, Copy and Restore – Read
- Inserts – Read
- Bulk Load – Read
Focus: Read Rates from the Buffer Pool
- Direct Selects – Read
- Indirect Selects – Read
- Mini Sequential Selects – Read
Focus: Write Rates and Read Rates from the Buffer Pool
- Updates – Read
- Inserts and Deletes – Read
- Inserts, Direct Selects, Updates and Deletes – Read
- Inserts, Direct and Indirect Selects, Updates and Deletes – Read
Tests with Larger Databases – Focus: Write Rates and Read Rates from Physical Disk
- Sequential Selects – Read
- Scale Tests: Inserts, Selects, Updates and Deletes – Read
- Summary – Read
Azure SQL Database Throttling
Azure SQL Database Web and Business Editions manage resources rather opportunistically. These service tiers do not apply strict per-database limits; capacity can, to quite a large extent, be utilised if it is available. Should a server start to become overloaded, an escalating series of throttling actions occurs, as described here. This can lead to sudden changes in SQL Database performance – e.g. due to neighbours with a heavy workload. Database failovers can also occur as the platform attempts to equalise load and performance – though in my experience performance has stabilised in recent months and failovers are less common than they were six months to a year ago.
Life is different in the Basic, Standard and Premium Service Tiers. To deliver predictable performance, Azure SQL Database limits the resources available to each database within the Basic, Standard and Premium Service Tiers in at least the following ways:
- Log Write Rate – i.e. the maximum amount of data that can be written into the database transaction log per second,
- Physical Disk Read Rate – i.e. the maximum amount of data that can be read from the database data files per second,
- CPU time – i.e. the maximum amount of CPU time each database is granted per second.
It is possible that some form of memory limit exists (or will exist) too; this will likely become clearer during the preview.
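For databases in the new service tiers, consumption against these limits can be observed via the `sys.dm_db_resource_stats` DMV, which reports usage as a percentage of each tier limit. A minimal sketch, assuming a connection to a Basic/Standard/Premium database (column names as exposed during the preview; they may change):

```sql
-- Recent resource usage relative to the service tier's limits.
-- sys.dm_db_resource_stats returns one row per 15-second interval,
-- covering roughly the last hour, for Basic/Standard/Premium databases.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,          -- CPU used, as a % of the tier's CPU limit
    avg_data_io_percent,      -- physical data reads, as a % of the tier's limit
    avg_log_write_percent,    -- log writes, as a % of the tier's limit
    avg_memory_usage_percent  -- memory used, as a % of the tier's allocation
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

A sustained value near 100% in any one column suggests which of the limits above is constraining the workload.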
The performance tests I have conducted explored these limits and compared the capabilities of the different tiers.
Objectives of Performance Tests
- Estimate/measure the performance limits of each service tier in more meaningful units than DTUs:
- Maximum write rates
- Maximum read rates
- These terms will be more clearly defined in later posts.
- (It would be helpful if Microsoft published these, as it would allow customers to quickly rule in/out certain service tiers for their workloads purely on the basis of these two limits).
- Perform some direct comparisons between the existing Web / Business service tiers and the newer Basic / Standard / Premium service tiers.
- Keep the tests relatively simple so that the results are easy to understand intuitively, even if this means the workloads are a little abstract (i.e. roughly speaking, the opposite of Microsoft’s Azure SQL Database Benchmark described here). As a developer I prefer to understand the basic boundaries (in well-defined numerical terms) of a platform. I can then design an appropriate solution within those boundaries.
- Apply a reasonably heavy workload to Azure SQL Database in order to establish roughly where the limits are, but without being excessive – particularly in Web/Business, so as not to generate a disruptive load (i.e. be a noisy neighbour) causing hard throttling errors.
- Note any other differences in behaviour that we come across along the way.
I have focussed less on the CPU performance limits of each tier: no tests are designed to push the CPU limits, though the tests do observe whether CPU throttling is occurring. They are more focussed on read/write rates. I made this decision because of the nature of my current Azure SQL DB workloads and limited time for this mini-project. I may revisit this later.
One consequence of focussing on relatively simple tests is that none of the tests involved tables with non-clustered indexes. “WHERE” clauses were either against only the clustered index of each table, or against a range of the clustered index plus (i.e. “AND”) one other field with no covering index – i.e. queries that resolve to Clustered Index Seeks.
Of course, performance with non-clustered indexes is important, but I wanted to test, at least initially, under simple conditions – and time, of course, limits testing all the permutations you or I might like. So I ignored non-clustered indexes. I may revisit this later.
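To illustrate the two query patterns just described, here is a hypothetical sketch – the table and column names are illustrative only, not the actual test schema:

```sql
-- Hypothetical test table: clustered index on Id only, no non-clustered indexes.
CREATE TABLE dbo.TestData
(
    Id INT NOT NULL,
    OtherField INT NOT NULL,
    Payload CHAR(100) NOT NULL,
    CONSTRAINT PK_TestData PRIMARY KEY CLUSTERED (Id)
);

-- Pattern 1: predicate on the clustered index key only
-- => a Clustered Index Seek.
SELECT Payload
FROM dbo.TestData
WHERE Id = 12345;

-- Pattern 2: a range of the clustered index key AND one other field with
-- no covering index => a Clustered Index Seek over the range, with the
-- OtherField predicate applied as a residual filter.
SELECT Payload
FROM dbo.TestData
WHERE Id BETWEEN 10000 AND 20000
  AND OtherField = 7;
```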
Summary of Performance Tests
- Tests cover Web, Business, Basic, Standard S1, Standard S2, Premium P1 and Premium P2.
- No tests conducted against P3 service tier (too costly).
- Tests cover Inserts, Bulk Load, Selects (from buffer pool and from disk), Updates and Deletes.
- Tests performed between 10th and 29th of June 2014, i.e. well after the performance increases announced on the 19th May here.
- Tests performed in the Western Europe Azure Data Centre.
To be clear, I don’t claim that these are the best performance tests for Azure SQL DB. The results may or may not apply to your workload, but I hope I have given enough information for readers to determine that themselves.
I chose these tests because they have helped me get a feel for the new service tiers and because they are relevant to my workload. I am sharing the results because they may be relevant or helpful for other people. Of course SQL Server runs workloads of many different types at many different scales, so I don’t claim the information here is widely applicable.
Of course, the new service tiers are still in preview, and performance is likely to change while they remain so. I deliberately waited a while after the start of the preview, guessing that performance would be especially variable initially. It is still likely to be tweaked by Microsoft as the preview progresses.