Azure SQL DB Performance Tests

Introduction

I have conducted various performance tests over time against the different versions, editions, service tiers and performance levels of Azure SQL Database.

Microsoft seem reluctant to publish absolute performance metrics for Azure SQL Database, instead relying on a relative metric – DTUs – that provides only a somewhat vague comparison between tiers.  I hope these tests provide some additional insight.  However, I make no claim that these are the best or most representative tests.  They are merely some basic tests I have used to help me understand how performance varies between the different service tiers and performance levels.  Other tests would no doubt produce different results.  My read and write tests in particular were designed to be basic, average tests, without any significant optimisation being performed.

If the tests help you, then great.  If not, feel free to improve them or write your own tests and share the results!

It is also likely that Microsoft adjust absolute performance on an ongoing basis.  That is another good reason for running some tests of your own (either similar to my tests or different) to get some up-to-date results.

Major posts in this series include:

Please see these posts for test results.

The rest of this page provides details of the current methodology I have used for performance testing (i.e. from December 2014 onwards).  The test database, which is used in all tests, is described first.  Following that are details of the individual tests themselves.

I hope I have provided sufficient detail; please ask if anything is unclear.

Test Database

The structure of the test database used is the same across all service tiers and tests (except the CPU test).  It contains 51 tables (imaginatively named Table0 to Table50), all of identical structure:

Field Data Type
RowId int (PK)
Bit1 bit
Bit2 bit
Bit3 bit
Bit4 bit
TinyInt1 tinyint
TinyInt2 tinyint
SmallInt1 smallint
SmallInt2 smallint
Int1 int
Int2 int
Int3 int
Int4 int
Int5 int
DateTime1 datetime
DateTime2 datetime
DateTime3 datetime
DateTime4 datetime
Float1 float
Float2 float
Money1 money
Money2 money
Money3 money
Text1 char(2)
Text2 char(10)
Text3 varchar(40)
Text4 varchar(100)
Text5 varchar(250)

Read Test Database Content

The RowId field is the primary key and also forms the clustered index.  Values for this field are sequentially generated.

Field values are randomly generated.  The text values are generated to be, on average, just over 2/3 the maximum length of the fields.  Examining sys.dm_db_index_physical_stats reports the average row size as approximately 410 bytes.
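
For anyone repeating that check, a query along the following lines reports the row size (Table1 is just used as an example here; DETAILED or SAMPLED mode is needed for avg_record_size_in_bytes to be populated):

-- Average row size and page count for one of the test tables (run in the test database).
SELECT index_id, avg_record_size_in_bytes, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Table1'), NULL, NULL, 'DETAILED')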

The tests against the higher-spec service tiers make use of a larger number of the tables.  For all of my read tests no more than 36 tables were actually used (this number may seem a little strange, but it was based on the maximum cost I wanted to spend on the earlier Web/Business tests, where cost is a function of database size).  Tables 1 to 36 were each populated with 2,560,000 rows, which equates to just over 1 GB of data per table (except for the Basic tier, where tables 1 to 19 were populated with 100 MB each).  Thus the test database is around 36–37 GB in size (1.9 GB for the Basic tier).  An existing application was used to generate the test data, so there are no SQL scripts for this step of test preparation.  Nonetheless, hopefully there is enough detail here to understand the contents of the database.  If it helps, I would be willing to share a copy of my actual test database, though you’ll need somewhere that I can upload a 36 GB file to.
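
Although I used an external application to generate the data, something along the following lines would produce rows of a broadly similar shape directly in T-SQL.  This is only a rough sketch (not the generator I actually used): it populates just a representative subset of columns, leaves the rest NULL, and inserts row by row, so it is slow:

-- Rough illustration only (not the data generator I actually used).
-- Populates a representative subset of columns in Table1 with random values; the remaining columns stay NULL.
DECLARE @RowId int = 1
WHILE @RowId <= 2560000
BEGIN
 INSERT INTO dbo.Table1 (RowId, Bit1, TinyInt1, Int3, DateTime1, Float1, Money2, Text3)
 SELECT @RowId,
  1,
  rand(cast(cast(newid() as varbinary(32)) as int)) * 255,
  rand(cast(cast(newid() as varbinary(32)) as int)) * 2000000000,
  dateadd(day, rand(cast(cast(newid() as varbinary(32)) as int)) * 3650, '2005-01-01'),
  rand(cast(cast(newid() as varbinary(32)) as int)) * 1000000,
  rand(cast(cast(newid() as varbinary(32)) as int)) * 10000000,
  left('abcdefghijklmnopqrstuvwxyzabcdefghijklmn', cast(13 + rand(cast(cast(newid() as varbinary(32)) as int)) * 27 as int))
 SET @RowId = @RowId + 1
END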

Copies of this same test database have been used for all read tests since July 2014.  The database was created on my development system then loaded into Azure via the standard BACPAC import process.

Database Object Definitions

A script containing definitions for all 51 tables can be downloaded here.  It also contains the Timings table (described later), but not most of the other scripts shown below, since those may be tweaked between series of tests.

An example table definition is shown below (all tables are the same):

CREATE TABLE [dbo].[Table1](
 [RowId] [int] NOT NULL,
 [Bit1] [bit] NULL,
 [Bit2] [bit] NULL,
 [Bit3] [bit] NULL,
 [Bit4] [bit] NULL,
 [TinyInt1] [tinyint] NULL,
 [TinyInt2] [tinyint] NULL,
 [SmallInt1] [smallint] NULL,
 [SmallInt2] [smallint] NULL,
 [Int1] [int] NULL,
 [Int2] [int] NULL,
 [Int3] [int] NULL,
 [Int4] [int] NULL,
 [Int5] [int] NULL,
 [DateTime1] [datetime] NULL,
 [DateTime2] [datetime] NULL,
 [DateTime3] [datetime] NULL,
 [DateTime4] [datetime] NULL,
 [Float1] [float] NULL,
 [Float2] [float] NULL,
 [Money1] [money] NULL,
 [Money2] [money] NULL,
 [Money3] [money] NULL,
 [Text1] [char](2) NULL,
 [Text2] [char](10) NULL,
 [Text3] [varchar](40) NULL,
 [Text4] [varchar](100) NULL,
 [Text5] [varchar](250) NULL,
 CONSTRAINT [PK_T001_DataLoading] PRIMARY KEY CLUSTERED 
(
 [RowId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)

Read Test Methodology

Preparation

The most significant step in preparing for the read tests is creating the test databases.  Since I know of no way to clear the buffer pool in Azure, I create one test database for every test run.  Creating a database copy produces a database that should have no data in the buffer pool (the sys.dm_db_resource_stats DMV also shows zero memory usage in the copies).  So, for simplicity, let’s imagine we are testing only S1 and S2, where each level is tested three times to check for consistency and reduce the effect of random variations.  My preparation steps are:

  1. Import the test database (described above) from the BACPAC file into Azure, to a database called AzureDataTesting.  I use the P1 or P2 performance level to speed the import along.  After the import I reduce the performance level of this database down to S0 since no tests will be run against this database.
  2. Copy this database twice to AzureDataTesting_S1 and AzureDataTesting_S2.  These databases will both initially be at the S0 performance level (it is not possible to change the performance level as part of the database copy process).  Change the performance level of AzureDataTesting_S1 to S1 and the performance level of AzureDataTesting_S2 to S2 (a T-SQL sketch of the copy and performance level change is shown below these steps).
  3. Now, we need to copy each of these databases three times, i.e. to create:
    AzureDataTesting_S1a, AzureDataTesting_S1b, AzureDataTesting_S1c, AzureDataTesting_S2a, AzureDataTesting_S2b, AzureDataTesting_S2c.
  4. These six databases (S1a, S1b, S1c, S2a, S2b and S2c) are now ready to run the test against.

One might ask why not create only four additional databases in step 3 and use the two databases from step 2 in the test, giving six databases in total.  That may produce the same result, or it may not – I have not tested it.  I preferred to have six test databases all created in the same way.
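
For reference, the database copies and performance level changes in steps 2 and 3 can be scripted in T-SQL against the master database on the server.  The following is just a sketch for the S1 databases (the S2 set is created in the same way):

-- Run in the master database of the Azure SQL Database server.
-- Step 2: copy the imported database, then move the copy to the required performance level.
CREATE DATABASE AzureDataTesting_S1 AS COPY OF AzureDataTesting
GO
-- Wait for the copy to complete before changing the performance level.
ALTER DATABASE AzureDataTesting_S1 MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1')
GO
-- Step 3: create the three test copies from the S1 database.
CREATE DATABASE AzureDataTesting_S1a AS COPY OF AzureDataTesting_S1
GO
CREATE DATABASE AzureDataTesting_S1b AS COPY OF AzureDataTesting_S1
GO
CREATE DATABASE AzureDataTesting_S1c AS COPY OF AzureDataTesting_S1
GO
-- Connect to one of the copies and confirm it starts with near-zero memory usage:
SELECT TOP 5 end_time, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC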

Test

The test essentially reads each 1 GB table in turn and records the time taken to read each table.  The total amount of data read is chosen to be larger than the total memory available at that performance level, i.e. currently the following test data sizes are used:

  • Basic: 1.9 GB
  • Standard S0:  10 GB
  • Standard S1:  20 GB
  • Standard S2:  30 GB
  • Standard S3:  36 GB
  • Premium P1 and P2:  36 GB

The test makes use of the following stored procedure:

CREATE PROC dbo.ReadDataChunk
 @TargetTable int,
 @ThreadId varchar(5)
AS
BEGIN
 DECLARE @StartTime datetime
 DECLARE @EndTime datetime
 DECLARE @TableSize bigint
 DECLARE @Sql varchar(1000)
 -- Build a COUNT(*) query that scans the entire target table
 SET @Sql = 'SELECT COUNT(*) FROM Table' + CAST(@TargetTable AS varchar(10)) + ' '
 SET @Sql = @Sql + 'WHERE '
 SET @Sql = @Sql + '[Bit1] IS NOT NULL AND '
 SET @Sql = @Sql + '[TinyInt1] BETWEEN 10 AND 240 AND '
 SET @Sql = @Sql + '[Int3] BETWEEN 100000 AND 2000000000 AND '
 SET @Sql = @Sql + '[Money2] BETWEEN 0 and 10000000'
 -- Record the size of the table being read
 SELECT @TableSize =
 CAST(sum(reserved_page_count) AS bigint) * 8.0 * 1024
 FROM sys.dm_db_partition_stats, sys.objects
 WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id AND 
 schema_name(sys.objects.schema_id) <> 'sys' AND
 ((schema_name(sys.objects.schema_id) +'.' + sys.objects.name) = ('dbo.Table' + cast(@TargetTable as varchar(10))))
 GROUP BY schema_name(sys.objects.schema_id) +'.' + sys.objects.name
 -- Time the scan
 SET @StartTime = getdate()
 EXEC (@Sql)
 SET @EndTime = getdate()
 -- Log the timings and table size
 INSERT INTO dbo.Timings
 VALUES (@StartTime, @EndTime, 'ReadChunk', @TargetTable, -1, -1, @TableSize, @TableSize, @ThreadId)
END

At the core of this stored procedure is a SELECT that performs a scan to calculate a simple row count.  A row count was chosen because it requires minimal CPU effort, minimal memory for the calculation and minimal time / processing / network IO when returning the result.

The results are recorded by the stored procedure into the Timings table.  This table has some additional columns that are not used in the current tests, i.e. ignore the following fields for this test (a sketch of the table definition follows this list):

  • RowFrom and RowTo – since this test performs a simple SELECT on the entire table,
  • TableSizeAfter since a read test obviously does not change the data in the table, and
  • ThreadId – since this is not a concurrency test, only one SQL script is running.
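
The Timings table itself is included in the downloadable script.  For convenience, a definition along the following lines works; the column names here are inferred from the INSERT statements in the stored procedures, so treat this as an approximation rather than the exact original:

-- Approximate definition of the Timings table, inferred from the stored procedures on this page.
CREATE TABLE [dbo].[Timings](
 [StartTime] [datetime] NULL,
 [EndTime] [datetime] NULL,
 [TestType] [varchar](20) NULL,
 [TargetTable] [int] NULL,
 [RowFrom] [int] NULL,
 [RowTo] [int] NULL,
 [TableSizeBefore] [bigint] NULL,
 [TableSizeAfter] [bigint] NULL,
 [ThreadId] [varchar](5) NULL
)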

Running the test is then as simple as:

EXEC dbo.ReadDataChunk 1, 'T1'
EXEC dbo.ReadDataChunk 2, 'T1'
EXEC dbo.ReadDataChunk 3, 'T1'
EXEC dbo.ReadDataChunk 4, 'T1'
EXEC dbo.ReadDataChunk 5, 'T1'
EXEC dbo.ReadDataChunk 6, 'T1'
EXEC dbo.ReadDataChunk 7, 'T1'
EXEC dbo.ReadDataChunk 8, 'T1'
EXEC dbo.ReadDataChunk 9, 'T1'
EXEC dbo.ReadDataChunk 10, 'T1'
...

where the number of lines in the script, i.e. the number of tables read, varies depending on the performance level, as described above.

After the test, I retrieve the data from the Timings table, as well as the resource utilisation data from the sys.dm_db_resource_stats DMV and drop them both into Excel for analysis.
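
The retrieval itself is nothing special.  Something like the following is sufficient, with the output pasted into Excel (the ORDER BY column assumes the Timings definition sketched earlier):

SELECT * FROM dbo.Timings ORDER BY StartTime
SELECT * FROM sys.dm_db_resource_stats ORDER BY end_time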

Write Test Methodology

Preparation

This test starts with a test database of the same structure as used above.  However, all tables are empty except Table0, which is populated with 100 MB of data (I used 247,500 rows).

Again, three test databases are created per performance level, as described above.  In theory, only one database is needed per performance level; however, I prefer to use separate databases because they may end up distributed across different servers in the data centre, which should give a more representative result.

The test involves writing the 100 MB chunk of data repeatedly into the database and measuring the write time for each 100 MB chunk.  The following stored procedure is used:

CREATE PROC dbo.WriteDataChunk
 @SourceTable int,
 @TargetTable int,
 @RowIdOffset int,
 @ThreadId varchar(5)
AS
BEGIN
 DECLARE @StartTime datetime
 DECLARE @EndTime datetime
 DECLARE @TableSizeBefore bigint
 DECLARE @TableSizeAfter bigint
 DECLARE @Sql varchar(1000)
 -- Build an INSERT...SELECT that copies the 100 MB source table into the target table,
 -- offsetting RowId to avoid primary key collisions between batches
 SET @Sql = 'INSERT INTO dbo.Table' + CAST(@TargetTable AS varchar(10)) + ' ('
 SET @Sql = @Sql + 'RowId, '
 SET @Sql = @Sql + 'Bit1, Bit2, Bit3, Bit4,'
 SET @Sql = @Sql + 'TinyInt1, TinyInt2, SmallInt1, SmallInt2,'
 SET @Sql = @Sql + 'Int1, Int2, Int3, Int4, Int5,'
 SET @Sql = @Sql + 'DateTime1, [DateTime2], DateTime3, DateTime4,'
 SET @Sql = @Sql + 'Float1, Float2, Money1, Money2, Money3,'
 SET @Sql = @Sql + 'Text1, Text2, Text3, Text4, Text5'
 SET @Sql = @Sql + ')'
 SET @Sql = @Sql + 'SELECT RowId + ' + CAST(@RowIdOffset AS varchar(10)) + ','
 SET @Sql = @Sql + 'Bit1, Bit2, Bit3, Bit4,'
 SET @Sql = @Sql + 'TinyInt1, TinyInt2, SmallInt1, SmallInt2,'
 SET @Sql = @Sql + 'Int1, Int2, Int3, Int4, Int5,'
 SET @Sql = @Sql + 'DateTime1, [DateTime2], DateTime3, DateTime4,'
 SET @Sql = @Sql + 'Float1, Float2, Money1, Money2, Money3,'
 SET @Sql = @Sql + 'Text1, Text2, Text3, Text4, Text5 '
 SET @Sql = @Sql + 'FROM Table' + CAST(@SourceTable as varchar(10)) + ' '
 -- Record the size of the target table before the insert
 SELECT @TableSizeBefore =
 CAST(SUM(reserved_page_count) AS bigint) * 8.0 * 1024
 FROM sys.dm_db_partition_stats, sys.objects
 WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id AND 
 schema_name(sys.objects.schema_id) <> 'sys' AND
 ((schema_name(sys.objects.schema_id) +'.' + sys.objects.name) = ('dbo.Table' + CAST(@TargetTable as varchar(10))))
 GROUP BY schema_name(sys.objects.schema_id) +'.' + sys.objects.name
 -- Time the insert
 SET @StartTime = getdate()
 EXEC (@Sql)
 SET @EndTime = getdate()
 -- Record the size of the target table after the insert
 SELECT @TableSizeAfter =
 CAST(SUM(reserved_page_count) AS bigint) * 8.0 * 1024
 FROM sys.dm_db_partition_stats, sys.objects
 WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id AND 
 schema_name(sys.objects.schema_id) <> 'sys' AND
 ((schema_name(sys.objects.schema_id) +'.' + sys.objects.name) = ('dbo.Table' + CAST(@TargetTable as varchar(10))))
 GROUP BY schema_name(sys.objects.schema_id) +'.' + sys.objects.name
 -- Log the timings and table sizes
 INSERT INTO dbo.Timings
 VALUES (@StartTime, @EndTime, 'WriteChunk', @TargetTable, -1, -1, @TableSizeBefore, @TableSizeAfter, @ThreadId)
END

The actual test is executed using:

DECLARE @iTableCount int = 10 -- change this as appropriate for the performance level
DECLARE @iBatchesPerTable int = 10 -- change this as appropriate for the performance level
DECLARE @iTargetTable int = 1
DECLARE @BatchIndex int
DECLARE @RowIdOffset int
WHILE @iTargetTable <= @iTableCount
BEGIN
 SET @BatchIndex = 0
 WHILE @BatchIndex < @iBatchesPerTable
 BEGIN
  SET @RowIdOffset = @BatchIndex * 247500 -- approx = 100 MB
  EXEC dbo.WriteDataChunk 0, @iTargetTable, @RowIdOffset, 'T1'
  SET @BatchIndex = @BatchIndex + 1
 END
 SET @iTargetTable = @iTargetTable + 1
END

Before the test is run for a given performance level, the SQL is amended: @iTableCount is set to the number of tables to be populated in the test and @iBatchesPerTable is set to the number of 100 MB chunks to write into each table.  In the Jan 2015 tests, the following data volumes and variable values were used:

  • Basic and S0 – 1 GB:  @iTableCount = 10, @iBatchesPerTable = 1
  • S1, S2 and S3 – 2 GB:  @iTableCount = 20, @iBatchesPerTable = 1
  • P1 – 5 GB:  @iTableCount = 10, @iBatchesPerTable = 5
  • P2 – 10 GB:  @iTableCount = 20, @iBatchesPerTable = 5

After the test, I retrieve the data from the Timings table, as well as the resource utilisation data from the sys.dm_db_resource_stats DMV and drop them both into Excel for analysis.

Total Memory Test

The test procedure used was as follows (repeated for each performance level in each service tier):

  1. Create two empty copies of the test database (i.e. structure only, no data) outside Azure.
  2. Populate Table0 in the first copy with around 100 MB of data, approx 250,000 rows (this copy will be used for the Basic tier and for S0, S1 and S2).
  3. Populate Table0 in the second copy with 1 GB = approx 2,500,000 rows (which will be used for S3 and the Premium tier).
  4. Export the databases to BACPACs.
  5. Import the BACPACs into Azure SQL DB – let’s call them AzureDataTesting100MB and AzureDataTesting1GB.
  6. Copies of these databases should be made in each performance level.  I change the performance level on the source database first, then create the copy.
  7. This procedure clears out any junk in memory and means the database copy essentially starts with an empty buffer pool (no cached query plans or other memory usage, etc.).  This can be verified by checking the sys.dm_db_resource_stats DMV, which will show close to zero memory usage.
  8. Run a simple query that performs a full table scan (query given below).  This should load the entire table into the buffer pool.  This may take some time to run while the data is read from disk into memory.
  9. Immediately after the query finishes, check the sys.dm_db_resource_stats DMV to see the % amount of memory used.
  10. Find the size of the table (this query is also given below).
  11. Assuming all of the table data is now in the buffer pool, the total memory allocated to the database can be estimated using the formula below (a worked example follows these steps):
    Total Memory = 100 / (% memory used) * (size of table).
  12. Check that all of the table data is still in the buffer pool by re-running the query from step 8.  If the data is still cached, the query should now return in just a second or two.
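
To make the step 11 calculation concrete with some example numbers: if the table from step 10 is 1,000 MB and the DMV in step 9 reports 40% memory usage immediately after the scan, the estimated total memory for that performance level is 100 / 40 * 1,000 = 2,500 MB.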

The query used in step 8 was:

SELECT COUNT(*) FROM Table1
WHERE [Bit1] IS NOT NULL AND
[TinyInt1] BETWEEN 10 AND 240 AND
[Int3] BETWEEN 100000 AND 2000000000 AND
[Money2] BETWEEN 0 AND 10000000

Again, the count(*) is very cheap for SQL Server to calculate.  It requires very little memory to process (which is important, because a memory-hungry query would distort the memory figures in the DMV used for the total memory calculation).  The WHERE clause is just some arbitrary criteria to force a table scan, i.e. to force all of the data to be read into memory.

The query used in step 10 was:

SELECT
 schema_name(sys.objects.schema_id) +'.' + sys.objects.name as ObjectName, 
 sum(reserved_page_count) * 8.0 / 1024 AS ObjectSizeMB
FROM sys.dm_db_partition_stats, sys.objects
WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id and 
schema_name(sys.objects.schema_id) <> 'sys'
GROUP BY schema_name(sys.objects.schema_id) +'.' + sys.objects.name
ORDER BY 2 DESC
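
Putting steps 9 to 11 together, the estimate can also be calculated directly in T-SQL.  This is only a sketch: it assumes Table1 was the table scanned in step 8 and that the most recent sys.dm_db_resource_stats sample reflects the state immediately after the scan finished:

-- Sketch of the step 11 calculation for one performance level.
DECLARE @TableSizeMB float
DECLARE @MemoryUsedPercent float
SELECT @TableSizeMB = sum(reserved_page_count) * 8.0 / 1024
FROM sys.dm_db_partition_stats, sys.objects
WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id AND
 ((schema_name(sys.objects.schema_id) +'.' + sys.objects.name) = 'dbo.Table1')
SELECT TOP 1 @MemoryUsedPercent = avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC
SELECT 100 / @MemoryUsedPercent * @TableSizeMB AS EstimatedTotalMemoryMB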

CPU Test

Overview

The test is based around running the same CPU-heavy workload at each of the different performance levels and comparing the results.  For these tests, the chosen workload is a forced loop join where the join condition is something non-trivial (the script is shown later).

Using a forced loop join means that a relatively small number of rows requires a significant amount of time to process.  Using a relatively small number of rows is important because it eliminates any potential delays waiting for the data to be read from disk.  The test table was created with 280,000 rows of data, which is around 10 MB in total.  This entire table was read before the test began properly, to get the data loaded into the buffer pool.  All in all, this means close to all of the query execution time is CPU time.

The query was written so that SQL Server should use a parallelised execution plan.  This allows the workload to expand to exploit the full amount of CPU resources available within the performance level.  The exact same data was used for every test.

The test script used automatically ran the test multiple times, recording the results for each test.  The number of rows selected from the test table was increased with each test (SELECT TOP … ORDER BY RowId was used to ensure data is picked up in the same way at each performance level).  The number of rows was increased until the SELECT ran for over two minutes, at which point no further tests were performed (i.e. the row count was not increased further).

Preparation

This test uses a different database, created as follows (it can be created directly in Azure).  First, create a blank database.  Then create the following tables:

CREATE TABLE [dbo].[CpuTest](
 [RowId] [bigint] NOT NULL,
 [Int1] [int] NULL,
 [Int2] [int] NULL,
 [Int3] [int] NULL,
 [Int4] [int] NULL,
 [Int5] [int] NULL,
 CONSTRAINT [PK_CpuTest] PRIMARY KEY CLUSTERED 
(
 [RowId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
CREATE TABLE [dbo].[CpuTimings](
 [RowId] [bigint] NOT NULL,
 [RowCount] [int] NULL,
 [TimeTaken] [float] NULL,
 CONSTRAINT [PK_CpuTimings] PRIMARY KEY CLUSTERED 
(
 [RowId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)

Next, create the random data to be used in the test as follows:

DECLARE @i int = 0
DECLARE @CurrentDataSizeMB float = 0
DECLARE @iRowCount bigint = 0
TRUNCATE TABLE dbo.CpuTest
-- Temporary table holding 10,000 sequential integers, used to insert the random data in 10,000 row batches
CREATE TABLE #Data
(
 [Int1] [int] NULL
)
WHILE @i < 10000
BEGIN
 INSERT INTO #Data VALUES (@i)
 SELECT @i = @i + 1
END
SELECT @i = count(*) FROM #Data
-- Keep adding batches of random rows until the table reaches approximately 10 MB
WHILE @CurrentDataSizeMB < 10
BEGIN
 INSERT INTO dbo.CpuTest
 SELECT @iRowCount + [Int1], 
  rand(cast(cast(newid() as varbinary(32)) as int)) * 100,
  rand(cast(cast(newid() as varbinary(32)) as int)) * 1000, 
  rand(cast(cast(newid() as varbinary(32)) as int)) * 10000, 
  rand(cast(cast(newid() as varbinary(32)) as int)) * 1000000, 
  rand(cast(cast(newid() as varbinary(32)) as int)) * 1000000000
 FROM #Data
 SELECT @iRowCount = @iRowCount + @i
 -- Recalculate the current size of dbo.CpuTest
 SELECT @CurrentDataSizeMB = 
  sum(reserved_page_count) * 8.0 / 1024
  FROM sys.dm_db_partition_stats, sys.objects
  WHERE (sys.dm_db_partition_stats.object_id = sys.objects.object_id) AND 
  (schema_name(sys.objects.schema_id) <> 'sys') AND
  ((schema_name(sys.objects.schema_id) +'.' + sys.objects.name) = 'dbo.CpuTest')
END
DROP TABLE #Data
PRINT 'Table Size = ' + cast(@CurrentDataSizeMB as varchar(30))

I generated the test data only once and then copied that database around.  Using different sets of test data may affect the results (e.g. if the operators in the join condition are short-circuited, the data used could affect the timings).

The test is then executed using:

SELECT count(*) FROM dbo.CpuTest -- to load the data into the buffer pool
WHERE [Int1] + [Int2] + [Int3] + [Int4] + [Int5] = 12
TRUNCATE TABLE [dbo].[CpuTimings]
DECLARE @TestIndex int = 1
DECLARE @TestRepeat int = 0
DECLARE @RowCount int
DECLARE @StartTime datetime2
DECLARE @sql varchar(1000)
DECLARE @NextId int
DECLARE @TotalTime int = 0
WHILE (@TestIndex < 21) AND (@TotalTime < 300)
BEGIN
 IF @TestIndex = 1  BEGIN SET @RowCount = 100 END
 IF @TestIndex = 2  BEGIN SET @RowCount = 500 END
 IF @TestIndex = 3  BEGIN SET @RowCount = 1000 END
 IF @TestIndex = 4  BEGIN SET @RowCount = 2000 END
 IF @TestIndex = 5  BEGIN SET @RowCount = 3000 END
 IF @TestIndex = 6  BEGIN SET @RowCount = 4000 END
 IF @TestIndex = 7  BEGIN SET @RowCount = 5000 END
 IF @TestIndex = 8  BEGIN SET @RowCount = 7500 END
 IF @TestIndex = 9  BEGIN SET @RowCount = 10000 END
 IF @TestIndex = 10 BEGIN SET @RowCount = 12500 END
 IF @TestIndex = 11 BEGIN SET @RowCount = 15000 END
 IF @TestIndex = 12 BEGIN SET @RowCount = 17500 END
 IF @TestIndex = 13 BEGIN SET @RowCount = 20000 END
 IF @TestIndex = 14 BEGIN SET @RowCount = 22500 END
 IF @TestIndex = 15 BEGIN SET @RowCount = 25000 END
 IF @TestIndex = 16 BEGIN SET @RowCount = 27500 END
 IF @TestIndex = 17 BEGIN SET @RowCount = 30000 END
 IF @TestIndex = 18 BEGIN SET @RowCount = 32500 END
 IF @TestIndex = 19 BEGIN SET @RowCount = 35000 END
 IF @TestIndex = 20 BEGIN SET @RowCount = 40000 END
 SET @TestRepeat = 1
 SET @TotalTime = 0
 WHILE @TestRepeat < 4
 BEGIN
  SET @StartTime = getdate();
  SET @sql = '
  WITH TestData
  AS
  (
   SELECT top ' + cast(@RowCount as varchar(10)) + ' *
   FROM dbo.CpuTest
   ORDER BY 1
  )
  SELECT count(*)
  FROM TestData t1
  INNER LOOP JOIN TestData t2
  ON (abs(((t1.Int1 * t2.Int2) + t1.Int3) - ((t2.Int1 * t1.Int2) + t2.Int4)) < 17) AND
  (abs((power(t1.Int5 % 5, 3) - (t2.Int5 % 25))) < 7)'
  EXEC (@sql)
  DECLARE @TimeMs float = CAST(datediff(ms, @StartTime, getdate()) AS float) / 1000 -- elapsed time in seconds
  SELECT @NextId = max(RowId) FROM CpuTimings
  SET @NextId = isnull(@NextId, 0) + 1
  INSERT INTO [dbo].[CpuTimings]
  VALUES (@NextId, @RowCount, @TimeMs)
   SET @TestRepeat = @TestRepeat + 1
   SET @TotalTime = @TotalTime + @TimeMs
  END
 SET @TestIndex = @TestIndex + 1
END

Once the script completes, the data can be copied into Excel for analysis, as described in the Jan 2015 test results.

While not shown above, it is also a good idea to run the join manually a few times at each performance level with different row counts, to check that a parallelised execution plan is being used.
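
One simple way to check (just a sketch, with an arbitrarily chosen row count) is to capture the actual execution plan for a single run of the join and look for parallelism in the plan:

-- Capture the actual execution plan for one run of the test join, then inspect the plan XML
-- for parallelism (e.g. Parallelism operators, Parallel="true" on the relational operators).
SET STATISTICS XML ON;

WITH TestData
AS
(
 SELECT TOP 5000 *
 FROM dbo.CpuTest
 ORDER BY 1
)
SELECT count(*)
FROM TestData t1
INNER LOOP JOIN TestData t2
ON (abs(((t1.Int1 * t2.Int2) + t1.Int3) - ((t2.Int1 * t1.Int2) + t2.Int4)) < 17) AND
(abs((power(t1.Int5 % 5, 3) - (t2.Int5 % 25))) < 7);

SET STATISTICS XML OFF;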
