Microsoft Azure SQL Database Performance Tests: Summary

Performance Testing Series

This is the final post from a series of blog posts about my performance tests in Azure SQL Database. For the first post in this series (and links to all of the other posts) please see here.

Thanks for reading.  I hope the information shared here will be useful in some way.


Update:  31st January 2015

All of the information below reflects the tests that were carried out in July 2014.  The prices are also as Microsoft published them in July 2014.

For current performance metrics, please see the current performance test results here (for both the current V11 and the new V12).

For initial impressions of the new service tiers and performance information as of July 2014, continue reading.

Key Takeaways

The rest of this page is quite long.  The following summarises the main points:

Read and Write Rates

  • These performance tests have estimated the maximum read rates of each Azure SQL Database edition / tier as:  Web/Business = 550 MB per minute, Basic = 40 MB/min, Std S1 = 30 MB/min, Std S2 = 140 MB/min, Prem P1 = 400 MB/min and Prem P2 = 770 MB/min.
  • These performance tests have estimated the maximum write rates of each Azure SQL Database edition / tier as:  Web/Business = 100 MB per minute, Basic = 9 MB/min, Std S1 = 19 MB/min, Std S2 = 38 MB/min, Prem P1 = 47 MB/min and Prem P2 = 90 MB/min.

The above figures are the average rates per database (i.e. total database throughput) taken from three 60 minute tests conducted against each edition / tier – they are not per connection rates.

These figures (like the rest of the test results in this series) already include the increases in performance for the Standard tier – i.e. S1 to 15 DTUs and S2 to 50 DTUs.  This increase was announced in mid-May (e.g. see the post from Tobias at Microsoft in this thread) and was live by early June.  The performance increase was included in another post on the Microsoft Azure blog on 8th July; however, this was merely a repeat of the earlier announcement.  The majority of these performance tests were conducted in mid to late June.

Performance of Newer Service Tiers Compared to Today’s Web / Business Edition

  • Given that Microsoft are closing down the Web / Business edition next year, the relative performance of the newer service tiers (Basic, Standard and Premium) is a key consideration for many Azure SQL Database customers.
  • If you have an Azure SQL database in Web/Business Edition today, and that database is above a few GB and/or has anything above “light” usage, it is likely the Standard Tier – as it is today – will not be a viable replacement once Web/Business is withdrawn.
  • Compared to the Web/Business edition, the newer Standard tier levels S1 and S2 have performed significantly worse in these performance tests.  E.g. S1 achieved only 5% of the disk read performance of Web/Business in these tests, and S2 only 25%.
  • In these tests (with a 36 GB database), Standard Tier S1 and S2 reached only 0.1% and 0.2% of the transaction throughput of Business.  Your Mileage May Vary, though I suspect for anything other than a “small” database and “light” load, the pattern will be similar.
  • This likely means many customers will need to move from Web/Business to Premium P1 to maintain performance, implying a very large cost increase – probably of the order of two to three times the cost, or even more for smaller databases.  It is also worth noting that in these tests, whilst P2 generally outperformed Web/Business, P1 was a more mixed comparison, though at least of the same order of magnitude.
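The relative-performance percentages above follow directly from the measured maximum read rates; a quick sketch of the arithmetic (rates taken from these July 2014 test results – note P1 works out at roughly 73%, which this post rounds to 70%):

```python
# Measured maximum read rates (MB/min) from these tests (July 2014)
read_rates = {
    "Web/Business": 550,
    "Basic": 40,
    "Std S1": 30,
    "Std S2": 140,
    "Prem P1": 400,
    "Prem P2": 770,
}

baseline = read_rates["Web/Business"]
for tier, rate in read_rates.items():
    # Express each tier's read rate as a percentage of Web/Business
    print(f"{tier}: {100 * rate / baseline:.0f}% of Web/Business read rate")
```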

Bottom Line

The Good

  • The new tiers do display impressive consistency (less variability) compared to the Web / Business edition.
  • Premium P2 (and presumably P3) generally significantly outperforms Web / Business.

The Bad

  • Even though Web / Business is less consistent, on average it significantly outperforms Basic, Standard S1 and Standard S2.
  • This makes the new service tiers, particularly Standard, disappointing.

The Ugly

  • The newer tiers are even more disappointing once cost is taken into account – i.e. worse performance at significantly higher cost than Web / Business:
Cost now (Web / Business, by database size):

  • 5 GB = $26 per month
  • 25 GB = $76 per month
  • 50 GB = $126 per month
  • 100 GB = $176 per month
  • 150 GB = $226 per month

Future cost and % of Web/Business performance (flat per tier, regardless of database size):

  • Very Light Usage (Std S1):  $40 per month – 5% read perf, 20% write perf
  • Light Usage (Std S2):  $200 per month – 25% read perf, 40% write perf
  • Moderate Usage (Prem P1):  $930 per month – 70% read perf, 50% write perf
  • Heavy Usage (Prem P2):  $1860 per month – 140% read perf, 95% write perf
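To put those July 2014 figures into perspective, the cost multiple of moving a Web/Business database to P1 can be computed directly (a sketch only; it uses the $930 per month P1 price quoted in this post):

```python
# Web/Business monthly cost (USD) by database size (GB), July 2014
web_business_cost = {5: 26, 25: 76, 50: 126, 100: 176, 150: 226}
P1_COST = 930  # flat per month, regardless of database size

for size_gb, cost in web_business_cost.items():
    # How many times the current bill a move to P1 would represent
    print(f"{size_gb} GB: {P1_COST / cost:.1f}x the current Web/Business cost")
```

As the output shows, the multiple is largest for small databases – the "or even more for smaller databases" point made earlier.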

The Painful

  • The Price-Performance Point can be quantified and compared using the “Price per (MB/min)” of each edition or service tier.
  • Storage performance is of course by no means the only factor affecting performance but, at the moment, in Azure SQL Database, it is a very dominant factor –  especially when considering the move from Web/Business into the new Service Tiers.
  • Starting with read-rate price-performance:
  • Web/Business:  550 MB/min.  Read cost per (MB/min) by database size:  5 GB = $0.05, 25 GB = $0.14, 50 GB = $0.23, 100 GB = $0.32, 150 GB = $0.41.
  • Std S1:  30 MB/min.  $1.33 per (MB/min), for a maximum of 5% of Web/Business read performance.
  • Std S2:  140 MB/min.  $1.43 per (MB/min), for a maximum of 25% of Web/Business read performance.
  • Prem P1:  400 MB/min.  $2.33 per (MB/min), for a maximum of 70% of Web/Business read performance.
  • Prem P2:  770 MB/min.  $2.42 per (MB/min), for a maximum of 140% of Web/Business read performance.
  • The write-rate price-performance comparison is even more painful:
  • Web/Business:  100 MB/min.  Write cost per (MB/min) by database size:  5 GB = $0.26, 25 GB = $0.76, 50 GB = $1.26, 100 GB = $1.76, 150 GB = $2.26.
  • Std S1:  19 MB/min.  $2.11 per (MB/min), for a maximum of 20% of Web/Business write performance.
  • Std S2:  38 MB/min.  $5.26 per (MB/min), for a maximum of 40% of Web/Business write performance.
  • Prem P1:  47 MB/min.  $19.79 per (MB/min), for a maximum of 50% of Web/Business write performance.
  • Prem P2:  90 MB/min.  $20.67 per (MB/min), for a maximum of 95% of Web/Business write performance.
  • These price-performance figures are eye-watering.  Almost all compare very badly with Web/Business.  Price-performance does not scale linearly: it becomes increasingly more expensive to obtain higher throughput rates.
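The "Price per (MB/min)" figures in the two comparisons above are simply monthly cost divided by measured average rate; a minimal sketch of the calculation (costs and rates from this post):

```python
# (monthly cost USD, avg read MB/min, avg write MB/min) per tier, July 2014
tiers = {
    "Std S1":  (40,   30, 19),
    "Std S2":  (200, 140, 38),
    "Prem P1": (930, 400, 47),
    "Prem P2": (1860, 770, 90),
}

for name, (cost, read_mb, write_mb) in tiers.items():
    # Cost per unit of sustained throughput, read and write separately
    print(f"{name}: read ${cost / read_mb:.2f}, write ${cost / write_mb:.2f} per (MB/min)")
```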

Microsoft’s Azure SQL Benchmark (ASB), Database Throughput Units (DTUs) and Costs

  • It is also interesting that, in comparison to these tests, Microsoft’s own DTU benchmarks against S1 and S2 were made with only 750 MB and 3.6 GB databases respectively (see details here).  That Microsoft are testing with such small databases in the Standard tier, despite its 250 GB capacity, seems a little odd.
  • For a 750 MB database, Web/Business cost = 10 USD per month, S1 = 40 USD.
  • For a 3.6 GB database, Web/Business = 22 USD per month, S2 = 200 USD per month.
  • And remember, this is, on average according to these tests, for a service tier with significantly fewer resources.
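Working those benchmark-size comparisons through as cost multiples (a sketch using the July 2014 prices quoted in the bullets above):

```python
# (description, Web/Business cost, new-tier cost) in USD per month, July 2014
comparisons = [
    ("750 MB database, Web/Business vs S1", 10, 40),
    ("3.6 GB database, Web/Business vs S2", 22, 200),
]

for label, web_cost, new_cost in comparisons:
    # Cost multiple at the database sizes Microsoft used for its DTU benchmarks
    print(f"{label}: {new_cost / web_cost:.1f}x the cost")
```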

Standard Tier Footnote

  • S1 has performed very poorly in these performance tests.  I would be very hesitant to place a workload on S1 where the database size is above a few GB or where the load is anything above very light.  S1 provides an initial read “burst”, but after that only a sustained read rate of around 5% of the current Web/Business rate.  If that initial burst fails to get most of your frequently accessed data into the buffer pool, performance can be substantially dragged back by that very low read rate.
  • If your current Azure workload uses Web/Business reasonably heavily, then it is likely S2 will also be insufficient and Prem P1 at a minimum will be required…


  • The new service tiers are still at the Preview Stage.  They haven’t yet reached General Availability.
  • Microsoft have suggested that the performance of the new tiers is not yet fixed and may be adjusted.
  • I suspect, once Microsoft start to receive more “loud and clear” feedback, the price-performance point (which at the moment seems way off for the Standard tier) will be improved.

For more details on the points above, please continue reading below.

Summary of Test Results and Metrics

MB Per Minute Rates

These rates roughly estimate the maximum each edition / service tier is capable of.  For the newer service tiers, these rates were generally measured when the quota usage rates (as reported in sys.resource_stats) were at 100%.  For Web / Business, a series of tests were generally conducted utilising various thread counts to establish maximum rates.

For more details about the individual tests, see the other posts in this series (the Test lines in the tables below link back to the relevant post for that column).

In these tables, I believe the Log Write and Disk Read rates are reasonably representative (due to the simple nature of the tests).  The example workload is just that – one possible workload, so these figures will vary by workload.

(MB Per Min)     DB Import    Insert        Bulk Load    Seq. Select   Example Workload Avg. Rate   CTUs
                 (Log Write)  (Sproc Single (Log Write)  (Disk Read)   Not Read Ltd    Read Ltd
                              Row Writes)

Web / Business      114.0        4.8           93.7          550          115.0          56.5        100
Basic                 9.7        1.4            9.3           39           17.6          16.2         38
Standard S1          22.0        2.7           18.7           29           30.8          0.03        0.1
Standard S2             –        5.5           37.5          144           60.6          0.08        0.2
Premium P1           54.0        7.0           46.9          400           93.2          19.7         50
Premium P2              –       13.5           91.3          766          179.8          75.9        190

(– = not tested at this level)

For comparison purposes, the official DTU allowances per tier at the time of these tests were:
Basic=5, S1=15, S2=50, P1=100 and P2=200.  These tests were conducted throughout June, well after the mid-May announcement of the Basic and Standard Edition DTU increases.  The Scale Tests (in the Read Ltd Example Workload column) were conducted in late June, over a month after the DTU increases.

Comparison to Web / Business Edition (%)

The table below shows the same figures as above, expressed as an approximate percentage of the performance of the Web / Business edition.  This allows easier comparison between Web / Business and the newer service tiers.

Rate (% of       DB Import    Insert        Bulk Load    Seq. Select   Not Read Ltd    Read Ltd    CTUs
Web / Business)

Web / Business      100         100            100          100           100            100        100
Basic                10          30             10            7            60             38         38
Standard S1          20          60             20            5           100            0.1        0.1
Standard S2           –         120             40           25           120            0.2        0.2
Premium P1           45         150             50           70           120             50         50
Premium P2            –         280             95          140           150            134        134

Notes (on the above data tables)

Example Workload

The example workload is 20% single row inserts, 30% single row updates, 30% selects and 20% single row deletes.  Of the selects, 75% are single row selects (so called Direct Selects) and 25% are multiple row selects (so called Indirect Selects).  All the single row operations were performed via stored procedure, selecting rows via the clustered index key.  The multiple row selects covered small ranges of the clustered index (average 660 rows) and executed via text SQL.  For more details, see the Scale Tests post.

Example Workload Rates

For the example workload, “Not Read Ltd” refers to a test where all data was already in the buffer pool at the start of the test (generally not the normal situation, so something of a special case) and thus read rates were not limited by the disk read quota (see details here).  “Read Ltd” refers to a test where none of the data was in the buffer pool at the start of the test, and thus performance was typically limited by the physical disk read quota of the service tier (see details here).  Obviously, some data is loaded into the buffer pool during the tests – each of which was two hours long.  These “Read Ltd” figures are an average over the two hours.

The comparison % figures (and CTUs) have been adjusted to take account of the number of threads across the different tests and thus compare like-for-like.


CTUs

CTUs (Chris’s Throughput Units) were defined, half jokingly, in the previous Scale Tests post.  They accurately reflect my test results, but are not proposed as a serious representative benchmark figure.

Basic Edition

The example workload for Basic edition was obviously conducted against a 2 GB DB, the maximum permitted size for that service tier.


Headline:  Good Consistency

The new service tiers have generally shown consistent performance in these performance tests, generally providing their maximum rates substantially more consistently than observed in Web / Business edition.  This was one of the stated aims of the new service tiers and Microsoft have largely achieved it.

The only area where some moderate inconsistency was shown in the new service tiers was in the physical disk read rates.  The preview is still running, so Microsoft have time to work on this.

Headline:  Low Disk Read Quota Allowances, Poor Performance of Standard Edition

The Scale Tests conducted in this series of performance tests have demonstrated just how poorly Standard Edition can perform if it is read-constrained.  By contrast, the non-read limited performance figures actually compare favourably to Web / Business.

The poor performance when read limited is not surprising when the average sustained sequential read figures (from the Sequential Selects test) are considered.  According to these results, S1 has only 5% of the read performance of Web/Business.  S2 has a little more at 25%, but I suspect this will hardly make S2 a viable replacement for Web/Business in many cases.  And that means P1 is needed – with still only 70% of the read rate of Web/Business, but at least roughly the same order of magnitude.  BUT: moving from Web/Business to P1 means a substantial cost increase.

Again Microsoft still have time to re-balance these things as the preview runs on.

Headline:  Great P2+ Performance

The new P2 tier (tested as part of these tests) performed extremely well compared to Web / Business.  P3 is presumably even better but was not tested in these tests (too costly for this piece of work).

Headline:  Premium Tier Pushes Both Upwards AND Downwards

I would speculate that the introduction of the P2 and P3 tiers has had two effects.

First, quite obviously, at the top end of Premium, it means SQL Database capabilities have pushed higher than ever before.

Secondly, and perhaps a little less obviously, it means the bottom end of Premium has pushed downwards and reduced the capabilities of Standard.  The measured rates for disk read and log write of P1 are both somewhat below Web / Business – and that pushes the capabilities of Standard further down.  Of course, we know performance of Web / Business is more variable but still, these average figures are quite clear.

Scale Test Conclusions

[ The text in this section is reproduced from the previous post].

The Scale Tests (“Example Workload – Read Ltd” columns in the tables above) have illustrated how one example workload performs across the different editions, tiers and performance levels of SQL Database – in particular, how the nature of the workload can produce performance differences vastly different from simple expectations based on the DTU figures for the different tiers and performance levels.

I have created the CTUs (Chris’s Throughput Units) in these tests half jokingly – actually, 90% jokingly, even though they do accurately reflect my test results.  I don’t claim for a second they are any more useful than those figures provided by Microsoft.  Microsoft have undoubtedly put substantially more effort into creating a rate that much better reflects more workloads and scales.

My point really is that DTUs, CTUs or whatever else aren’t a reliable guide to planning / design.  They are perhaps a very rough starting point but SQL Server performance doesn’t scale nicely and simply according to one simple blended benchmark number.

Some basic read and write rates can be helpful though.  By the way – the CTU benchmark comes supplied with these figures in contrast to other xTU style benchmarks that may be available!  Though there is a disclaimer the CTU rates are estimates of course, not official, subject to change, etc!

The CTU benchmark also compares the new service tiers to Web/Business, though again it is noted that Web/Business performance is more variable.  That still doesn’t stop averages being captured nor does it mean readers aren’t intelligent enough to understand that any Web/Business average has a wider standard deviation than for the new Service Tiers.

These tests have also illustrated how read rate appears to quite heavily dominate SQL Database performance under the new service tiers.  And how life is very unpleasant when that quota is running short.  Of course, I could have reduced the fragmentation in the test database, used a more optimum design, etc.  But, just in case you missed it everywhere else I said it, I don’t claim this is the best workload to benchmark with.  Though I am sure that there are plenty of other databases in Web/Business today with similar or bigger problems – that as of today are running perfectly/moderately happily (just like this workload did in Web/Business) – whose owners may be in for a surprise when they try running in an environment that is constrained on read rate.  For these databases, Standard Edition as of today is probably not going to be a happy place to be.

Which neatly brings me to one aspect I haven’t mentioned thus far: cost.  For this workload, clearly Premium P1 would be needed.  That would represent a cost change from 65 UKP (100 USD) to 295 UKP (465 USD) per month, i.e. a multiple of 4.6!  (And that’s just the preview pricing!)  So, that database that is running happily today in Web/Business (despite its “issues”) is potentially going to cost quite a lot more this time next year to keep running…

Which leaves me all the way back with my original thought, posted way back in April/May time – that these new service tiers actually represent quite an increase in the cost of SQL Database.  Even a 150 GB database (i.e. at the top end of SQL Database per-GB pricing in Business edition today) will cost twice as much to run in P1 in future (and again, that’s just the preview pricing; GA pricing is four times today’s price!).

Finally, a couple of other footnotes comparing Web/Business to the new service tiers:

  • The sys.resource_stats DMV recently appeared in the Web/Business edition, presumably to allow comparison between the existing tiers and the new tiers.
  • Firstly, from these tests, we have seen that 100% values in the log write and physical disk read columns for Web/Business actually cover a VERY wide range of actual performance levels, often varying by a factor of 3, 4 or even 5.  So, if you are seeing 100%, that doesn’t tell you the true maximum.
  • Secondly, I believe I read in a blog post that the read and write percentages in this DMV for Web/Business have been set at half of S2, i.e. 100% in Business = half of S2, or equivalently, S2 has twice the performance of Business.  From these tests, the opposite actually appears to be true, e.g. the physical read rate measured for S2 was 140 to 210 MB per minute, versus 550 MB per minute for Business.

This conclusion has turned into something of a stream of consciousness.  It may sound overly critical.  Actually, I do really like the concepts behind the changes Microsoft is bringing to SQL Database.  They bring an increased level of robustness and probably should have been the way SQL Database was designed originally.  There are many good solid technical enhancements.  In practice, P2 significantly outperforms Web/Business in almost all tests (P1 is a more mixed picture).  So my real concerns are about the transition from Web/Business and what appear from here to be quite significant associated cost increases in order to maintain the same performance.  Standard Edition in particular seems a “somewhat” poor relation to Web/Business due to the constrained read quota.  And, great though Premium is, it costs significantly more.

Is this the equivalent, in aviation terms, of moving from a rough-around-the-edges, sometimes unreliable but generally quite high-performing, economy-class-only budget airline to a national/flag carrier with its multiple cabins and seating classes, all generally more expensive than the budget carrier?

As a PS, I should remind that these tests were conducted in the Western Europe data centre.  It is conceivable that there is plenty of spare Web/Business capacity here (that may not exist in other Azure data centres worldwide) so Web/Business may perform better here than elsewhere.  If that is the case, the general conclusions here would be less widely applicable.  Time and the experiences of others will certainly tell if that is the case or if these conclusions are generally applicable…

Next Steps

If I were to continue, I would consider some of the following, in no particular order (there is always more to investigate, but you need to stop / take a break at some point):

  • Try a more optimally designed workload in Standard Edition.
  • Gradually turn up the size of the database and/or request rate in Standard Edition.
  • Run some tests with indexes – though building, reorganizing, rebuilding, etc indexes is painful in Standard Edition.
  • Gather more detailed IO stats, e.g. from the DMVs during tests.  I have a newer version of the Cloud Service that now collects DMV IO metrics throughout tests, but alas this study has come to an end for now.  I may use this in the future.
  • Change the distribution function to favour the buffer pool less (i.e. reduce the Premium tier advantage).
  • Run some tests with no IndirectSelects as they can totally destroy Standard Edition.
  • The new service tiers have shown good consistency (which was something I wanted to validate), so there is less need to run each test three times.  Twice, or even once, is probably sufficient.
  • Re-run some of these tests once the new Service Tiers hit GA.


  1. Hi Chris,

    Thanks for spending the time & money researching & benchmarking these new tiers. I’m quite worried about how our system will perform when they remove Web/Business edition!

    I’ve recently been testing our production system and monitoring the impact of moving from Business to Premium tier and I’ve found the P1 to be very disappointing. Basically to maintain the same level of performance we will have to move to P2 at a minimum (possibly P3!).


    • Hi James,

      Thanks for the feedback. You are not alone. Of the people who have tested the new tiers so far there seems to be agreement that Microsoft has the price-performance point quite a long way from where it should be.

      Of course, this is still the preview stage. There is still plenty of time for things to be changed. I would be very surprised if Microsoft don’t increase the DTUs per tier again before reaching General Availability. I think the question is more how much they will increase them by. Given the price increases, performance needs to be increased in large multiples of what it is today, not just by a few percent.



  2. Thank you very much Chris for sharing your tests results. They are indeed very insightful…and pretty scary too as we are using today the Web Edition.
    No wonder Microsoft is not communicating the current specs of Web/Business editions…

    To Microsoft (hoping someone is reading this post):
    The results from Chris are extremely worrying to your current Web/Business edition clients. You’ve got a PR disaster waiting to happen as this information spreads. We are getting very nervous here facing the same cost/performance scenarios as what Ewart detailed in his post.

    We have been early Azure adopters and strong supporters of the platform (even making a video testimonial with your team, which is published on your website). The results shown on Chris’s blog felt like a huge punch in the face…

  3. This is impressive work!
    We too have been extensively performance testing the new tiers and are extremely unhappy with the results. Our results align with your findings that “Standard” is sub-standard (pun intended) compared to Web/Business.

  4. Pingback: SQL Database の新しいサービスレベルのリソースのフィット状況を確認 | SE の雑記

    • Hi – Thanks for the comment.

      I’m not planning to re-run the tests since I don’t believe the performance changes since I ran them are significant enough to warrant re-testing. So my previous comments still apply, with the exception that the GA pricing for the Standard tier is slightly less than originally announced. When I ran my tests, S1 was 15 DTUs. S0 is now 10 DTUs and S1 20 DTUs. I.e. slightly different, but not all that significant.

      I had an interesting phone conversation with Microsoft a few weeks ago over this – where they were quite clear that they believe only a very small percentage of users will be impacted negatively by cost/performance in the new tiers. They obviously have data on general cost/usage levels over their customer base. I still struggle however to reconcile their viewpoint with the results of my tests since the differences between Web/Business and the new tiers are so stark…

      Bottom line (for me) is still that performance is more consistent in the new tiers (i.e. varies less over time) BUT average performance of standard tier and even Premium P1 is well below average performance of Web/Business. And following on from that – the premium tiers are very pricey. If you are struggling with the more volatile (but actually higher on average) performance of Web/Business, then the higher Premium tiers will certainly help, *if you have the cash!*.

      As a final point, Microsoft were very clear to me, that if any customers are really struggling with the new tiers, they do want to hear from them. I mentioned this to a couple of people/companies I’ve worked for previously and they were reassured by that but they aren’t planning to make the jump just yet.


  5. Great and very thorough measurements, thank you.

    I’ve made my own very basic tests by running load tests (measuring requests per second) for some of our own websites storing their DB in Business, Basic and Standard (S0) tiers. What I’ve seen is that (again: very basic, very simple measurements aiming a very specific use-case) Basic marginally outperforms (where the difference could be barely measured with my approach) Business and S0 marginally outperforms Basic.

    I was happy with the performance supplied by Basic but was surprised that the apparent double throughput of S0 is barely visible in the stats. Although there are of course more components to website performance than just the speed of DB access, the ratio of DB operations contributing to the page loads of these particular sites is quite high (more than half).

    • Thanks for the comment.

      Where a large proportion of database requests hit data already in the SQL Server buffer cache, performance for the new tiers is good.
      On the flip side, IO-heavy databases (where much of the data isn’t in the cache) tend to be the hardest hit under the new service tiers, performing significantly less well and much worse on average than under Web/Business. On small databases (of the order of a few GB) it is relatively easy for a large proportion of the data to be cached. This avoids the read throttling and performance is therefore much higher.

      The bottom line is that, strangely….
      ….the lower tiers perform well for small database sizes.
      ….the higher tiers perform well for large databases.
      ….And then there is a relatively painful middle ground where databases of say 20GB, 40GB, 60GB, etc rapidly become IO constrained on Standard Tier and perform much less well than even a small database on Basic / S0.

      That is why I was querying in my post why Microsoft conducted their DTU benchmarks against such small test databases. Small databases are easily cached by SQL Server and thus avoid most IO throttling and perform very well indeed.

      • Thanks for the reply, very good explanation. I’m surely seeing the caching you mentioned as the two test databases I checked are both small (~50 and ~160MB), the queries are repetitive (i.e. even if the parameters change, the same queries are repeated over and over most of the time) not to mention that in my test the data didn’t change, it was just read again and again.

        I believe this is fairly typical for web applications (relatively small database, a small number of queries repeated, much more read than write), so I guess for web apps with moderate database usage the switch from Web/Business even to Basic will be possible without seeing performance degradation. (Especially since, for web apps, caching happens a lot – on multiple levels in the application, then lastly output caching – so increased traffic doesn’t necessarily mean a linearly increasing load on the DB.)

        Nevertheless these performance stats are worrying.

  6. Hi Chris,
    great article – just wish I had read it before starting down the path of updating our databases!
    Foolishly on receiving the GA email I upgraded a key database to S0, thinking it would be a good starter for 10 – some of the key screens died 😦
    We tracked it back to a view and here are some surface tests that corroborate your findings. Select * from view.
    Web edition – 3 seconds
    Standard S0 – 90 Seconds
    Standard S1 – 11 seconds
    Standard S2 – 5 seconds
    I have not had chance to test P1, but this is looking like the candidate level for our database.
    This will push our costs up 6-fold for the database side of the platform.

    Do you think that MSFT are going to do anything with pooling DTUs? We have about 20 DBs with varying workloads through the month, so putting them all onto P1 would be very painful!



    • Hi Peter

      Thanks for the comment. Microsoft are working on something that will allow resources to be pooled across databases in some way. I have no specific information about this, but it is mentioned in the GA announcement on the Microsoft Azure Blog here:
      See the paragraph that mentions ISV solutions. Whether this will help your database workload, only time will tell…
      Feel free to add any comments you might like to the above Azure blog post. The more feedback Microsoft can get the better.

      IO heavy databases cost much more to run under the new service tiers – so performance tuning will become more important now than on Web/Business. Previously just database size determined cost and the platform could absorb some of the performance spikes (variable performance, but higher than the new Standard Edition on average). Clearly, in the new world, it is not as simple as that. The platform can deliver high performance that is sustained/more consistent over time – but we have to pay a lot more to reach those kind of performance levels.


      • Hi Chris,
        I missed that in the GA notes.
        As we currently have (only) 20 databases under our direct control with lots more we help clients with – it was going to get very painful very quickly.
        I will try and find more details on the ISV solutions mentioned and see where we go from there.

        As a side note, I know of one client which has 300 Standard DB to upgrade, I can only guess as to what they think !!!



  7. Hi Chris,

    Nice work and quite thorough too. I only wished I read this post last week. I got the GA mail from Microsoft and immediately “upgraded”. I had a 13GB database on web/business and moved it to S1. Unfortunately, we started getting “resource limit exceeded exceptions” less than 10 minutes later. Now we are on S2 and the timeouts are still there although much further apart.

    Cost has moved from $46 monthly to $75 (with timeout errors). It appears we have to retune queries or migrate to P0 (>$400 monthly)

      • As per previous posts, I upgraded, found out how bad it was then “downgraded” to web again.
        As a small ISV this move from MSFT is a disaster for us – 20 times cost base 😦

      • Sorry, thought I was replying to Dejisys – as others have pointed out, yes it is possible to move back for now, and wait for a year before you are forced to move. Whether to move or stay will depend on how much performance you need, which will affect the price.

  8. Pingback: Azure SQL Databases – vad, hur och varför | Science & Society

  9. Pingback: Azure SQL Databases – what, how and why | Johan Åhlén, Science & Society

  10. Pingback: Azure SQL Databases – vad, hur och varför | Johan Åhlén, Science & Society

  11. As we know, Microsoft Azure introduced three new service tiers (Basic, Standard and Premium) for Azure SQL databases. For the Standard tier service level “S0”, SQL Database pricing says they will charge $15/month for each DB (Standard S0).

    My question is: do I have to pay only $15/month for Standard “S0”, or are there additional costs, i.e. will the total cost for each SQL DB vary depending on the transactions carried out in a month?

    • The prices are flat costs. There are no transaction-based costs. Each service tier is limited to a certain number of transactions; more expensive service tiers can support larger numbers of transactions.

  12. You mention that the new tiers have an initial “burst” where they’re fast, but then slow down significantly. From what I’ve read, the tiers limit the number of transactions you can have before you are throttled. The B1 is like 5 per second. Is this the cause of the huge performance difference?

  13. Pingback: Azure SQL – Transaction rate per hour Explained | Blake's Application Development Blog
