Azure SQL Database Memory Limits By Service Tier

Background

The new service tiers in Azure SQL Database provide different levels of performance, by offering more of the following at each higher service level:

  • Processing (CPU) Time
  • Read Rate (from physical disk)
  • Write Rate (to the database log file)
  • Memory (space to cache and process data)

One of the frustrating aspects of the new service tiers on a technical level is that Microsoft are not providing any particularly meaningful absolute performance units to measure the new tiers by.  Everywhere you look you find relative units, be they DTUs or percentages that measure resource utilisation within a tier or compare between tiers (e.g. in the sys.resource_stats / sys.dm_db_resource_stats DMVs, in the master database / your own database respectively).
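For example, the per-database DMV can be queried along the following lines (a minimal sketch; every figure returned is a percentage of the tier’s undisclosed limit):

-- recent resource usage for the current database, as percentages of the
-- (unpublished) limits of its service tier / performance level
select top 10
  end_time,
  avg_cpu_percent,
  avg_data_io_percent,
  avg_log_write_percent,
  avg_memory_usage_percent
from sys.dm_db_resource_stats
order by end_time desc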

My earlier performance tests provided some information about the maximum read and write rates supported by each tier.

I’ve now performed some further tests that provide some information about the memory limits in each tier / performance level.  Results are given first.  The methodology is at the end of this post.

Test Results – Azure SQL DB Memory Allocation

The tests suggest each database is allocated memory as follows:

  • Web / Business Edition – at least 5 GB (likely variable).
  • Basic Tier – 250 MB
  • Standard Tier S0 – 512 MB
  • Standard Tier S1 – 512 MB
  • Standard Tier S2 – 512 MB
  • Premium Tier P1 – 9 GB
  • Premium Tier P2 – 9 GB

Premium P3 was not tested.  It is not possible to obtain exact figures for Web/Business edition since the DMVs don’t contain memory information in that edition.  However, repeating the same tests showed that it is possible in Web/Business edition to cache at least 5 GB in memory.  More on this below.

This test also allowed some write-rate metrics to be captured.  These are more or less identical to those captured during the earlier tests, i.e. nothing has changed here.

Thoughts

Standard to Premium Comparison

The most striking point is the relatively big jump between the Standard Tier and Premium Tier.  Comparing S2 to P1:

  • Price per Month:  75 USD vs. 465 USD = a big 6 times increase
  • Read Rate (MB/min):  140 vs. 400 = a 3 times increase, i.e. smaller than 6
  • Write Rate (MB/min):  38 vs. 48 = only a slight increase
  • Memory:  512 MB vs. 9 GB = a massive 18 times increase.

Clearly, much of that six times price increase is going towards getting a relatively much bigger slice of the memory pie (plus the other additional features in Premium around HA, etc.).

Standard to Web/Business Comparison

Running the same tests on Web/Business edition showed this edition was able to cache at least 5 GB in memory, i.e. at least ten times that of the Standard tier.  Upon reaching this point, I stopped trying to cache any additional data.  Given the way different databases compete for resources in this edition, further tests could have disrupted other workloads running on the same server.  However, even when Web/Business edition is performing badly, it probably still has more than 512 MB available per database.

Like the earlier I/O tests, these tests further underline that Standard Edition, despite the name, is not a like-for-like replacement for Web/Business Edition for larger databases and heavier workloads.  The story is the same – Standard Tier does guarantee resources, but the maximum available is much less than that available in Web/Business Edition.

Standard Tier

It is interesting to note that the Standard tier has the same amount of memory allocated across all three performance levels.

Though, again, I would be interested to see just how many real-world databases are able to perform well in the Standard tier when they are actively used and anything more than a few GB in size.

Microsoft’s DTU Benchmark

The above reinforces my thoughts about Microsoft’s own DTU Benchmark from my earlier posts.  I include it again here:

Microsoft’s own DTU benchmarks against S1 and S2 were made with only 750 MB and 3.6 GB databases respectively.  That Microsoft are testing with such small databases in Standard Tier despite its 250 GB capacity seems odd.

Microsoft’s decision to test with relatively small databases will no doubt have boosted the DTU benchmark rate given the much lower maximum memory allocated to Standard Edition.

Basic Tier

In my Sequential Select tests back in July, the Basic tier was able to cache my entire 2 GB test database.  Clearly, during the preview, the memory limits (in the Basic tier at least) were either not in place or were much higher.

Test Methodology

Indirect Testing

The most important point to note here is that there are currently no means to directly measure the amount of memory allocated to a database in Azure SQL DB – i.e. no DMV, performance counter or similar.  Instead, an indirect test can be used to infer the amount of memory.  Using this method we cannot say the result is 100% accurate; however, I believe these results are likely to be very close.  In addition, the resulting numbers look very plausible (which proves nothing of course, other than the fact that they don’t look wildly, ridiculously wrong!).

The test approach used was as follows (repeated for each service tier performance level):

  1. Create a database locally containing just one table.
  2. Populate that table with some data (200 MB for the Basic tier, 250 MB for the Standard tier and 1 GB for the Premium tier).
  3. Export the database to a BACPAC.
  4. Import the BACPAC into Azure SQL DB to a new database in the correct service tier (e.g. if running a Standard S2 test, then into a new S2 database).
  5. Rebuild the table to remove the fragmentation created by the BACPAC import process (this step is probably not strictly necessary for the test).
  6. Wait a short time until the database is restorable (newly created databases aren’t restorable immediately) – i.e. until the Restore button in the management portal is enabled.
  7. Drop the database.
  8. Restore the database.  This clears out any junk in memory and means the database essentially starts fresh (an empty buffer pool, no cached query plans or other memory usage, etc.).  This can be verified by checking the sys.dm_db_resource_stats DMV, which will show close to zero memory usage.
  9. Run a simple query that performs a full table scan.  This should load the entire table into the buffer pool.  This will take some time to run, since the data is being read from disk into memory.  Because Azure SQL DB relatively heavily restricts I/O, this will likely be of the order of a minute or so.
  10. Immediately after the query finishes, check the sys.dm_db_resource_stats DMV to see the percentage of memory used.
  11. Find the size of the table (this query is also given below).
  12. Assuming all of the table data is now in the buffer pool, the total memory allocated to the database can be estimated (a query sketch for this calculation is given after this list) using:
    Total Memory = 100 / (% memory used) * (size of table).
  13. Check that all of the table data is still in the buffer pool by re-running the query from step 9.  If the data is still cached, the query should now return in just a second or two.  In all of my tests, the second, third, etc runs of this query took one to two seconds (compared to between 40 seconds and 3 minutes for the first runs of the query, depending on the service tier).
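As a rough sketch, the step 12 estimate can be produced directly in T-SQL.  This assumes the test table is named Table1 (as in the queries below) and uses the avg_memory_usage_percent column of the DMV:

declare @MemoryUsedPercent decimal(5,2)
declare @TableSizeMB decimal(18,2)

-- most recent memory usage percentage reported by the DMV (step 10)
select top 1 @MemoryUsedPercent = avg_memory_usage_percent
from sys.dm_db_resource_stats
order by end_time desc

-- size of the test table in MB (reserved pages are 8 KB each)
select @TableSizeMB = sum(reserved_page_count) * 8.0 / 1024
from sys.dm_db_partition_stats
where object_id = object_id('Table1')

-- Total Memory = 100 / (% memory used) * (size of table)
select 100.0 / nullif(@MemoryUsedPercent, 0) * @TableSizeMB as EstimatedTotalMemoryMB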

Queries

The query I used in step 9 was:

select count(*) from Table1
where [Bit1] is not null and
[TinyInt1] between 10 and 240 and
[Int3] between 100000 and 2000000000 and
[Money2] between 0 and 10000000

The count(*) is very simple for SQL Server to calculate.  It requires very little memory to process (which is good, because if it required lots of memory, that would confuse the memory figures in the DMV used for our calculation).

The where clause is just some random criteria to force a table scan, i.e. all of the data to be read into memory.

The query I used in step 11 was:

select
  schema_name(o.schema_id) + '.' + o.name as ObjectName,
  sum(ps.reserved_page_count) * 8.0 / 1024 as ObjectSizeMB
from sys.dm_db_partition_stats as ps
inner join sys.objects as o on ps.object_id = o.object_id
where schema_name(o.schema_id) <> 'sys'
group by schema_name(o.schema_id) + '.' + o.name
order by ObjectSizeMB desc

Result Data

The following lists the table size and corresponding percentage of memory used (from the sys.dm_db_resource_stats DMV):

  • Basic Tier:  198.1 MB, 79.4%
  • Standard S0:  253.6 MB, 49.5%
  • Standard S1: 253.6 MB, 49.6%
  • Standard S2: 253.6 MB, 49.6%
  • Premium P1: 1049 MB, 11.47%  (this database was slightly larger than the P2 database because the REBUILD step failed and I did not have time to re-run it, so that step was skipped in this test run – the result is nonetheless consistent with the P2 result).
  • Premium P2: 1010 MB, 11.03%
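Applying the Total Memory formula from step 12 to these figures gives the estimates quoted at the start of this post, for example:

  • Basic:  198.1 MB / 0.794 ≈ 250 MB
  • Standard S0:  253.6 MB / 0.495 ≈ 512 MB
  • Premium P1:  1049 MB / 0.1147 ≈ 9,146 MB ≈ 9 GB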

Web/Business Edition Tests

There is no DMV available in Web/Business edition to calculate the maximum amount of memory available to a given database.  Nonetheless, the exact same test process was followed (with 1 GB, 2 GB and 5 GB data sizes), with the exception that instead of DROP/RESTORE (which is not possible in Web/Business) a database copy was created (sketched below).  This has the same effect as the DROP/RESTORE – i.e. it results in a new database with close to zero memory usage.

The dramatic time difference between first and subsequent executions of the test query (as described in step 13 of the test method above) shows that the data was still cached.  Even the test query against the 5 GB test data set returned in around four seconds (compared to four minutes on first run).  This is a dramatically more impressive performance than the Standard tier, though as ever performance will probably be variable in Web/Business edition.
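A minimal sketch of the database copy used in place of DROP/RESTORE, run in the master database on the same server (the database names here are placeholders):

create database MemoryTestCopy as copy of MemoryTest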


Comments

  1. Thanks for this analysis! I had a 5GB database running on Business tier for a year without any issues. I “upgraded” to S3 and performance was okay until I increased the DB size to about 8GB. Performance tanked. So, I’m moving to (and paying for) P1 based on your analysis. On the Azure website, S3 and P1 look the same (100 DTUs!). Microsoft is going to need to revisit these tiers. S0-3 are practically useless for any serious database use.
