Friday, September 29, 2017

Transitioning of SQLOS to SQLPAL


Prior to SQL Server 2005, SQL Server ran as a user-mode process like any other Windows application and depended on the Windows OS to manage hardware resources. Hardware resource management includes the following:
  • Processor scheduling
  • Memory management
  • Network
  • Disk I/O

In SQL Server 2005, the hardware resource management part was incorporated into SQL Server as a separate component known as SQLOS (SQL Server Operating System). The objective of this initiative was to improve SQL Server performance by letting the database engine manage its own tasks itself. SQLOS acts as a non-preemptive scheduler that services the database engine's own tasks while cooperating with the Windows OS. As a result, SQL Server can manage hardware resources better than the Windows OS can.
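You can observe SQLOS's schedulers directly through a DMV that has been available since SQL Server 2005. As a quick illustration:

```sql
-- SQLOS typically creates one scheduler per logical CPU.
-- 'VISIBLE ONLINE' schedulers are the ones servicing user work.
SELECT scheduler_id,
       cpu_id,
       status,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

The runnable_tasks_count column is a useful window into the cooperative scheduling model: it shows how many tasks are waiting for their turn on each scheduler.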


SQLPAL (SQL Platform Abstraction Layer) is an extended version of SQLOS that facilitates platform-independent behaviour, introduced with the SQL Server on Linux initiative. To achieve this objective, Microsoft used a research project called Drawbridge. Drawbridge brings the essential Windows components inside SQLPAL to interact with SQL Server, and SQLPAL in turn interacts with the underlying OS. Essentially, SQL Server on Linux still thinks it is running on Windows. See the figure below;

SQLPAL interacts with the underlying OS (Windows or Linux) through its host extension implementation. The host extension contains the actual platform-specific implementation for both Windows and Linux. The biggest advantage of this innovative approach is that there was no need to rewrite SQL Server, which represents more than 20 years of development on Windows and millions of lines of code.

How about the performance?

The obvious doubt is whether adding an additional abstraction layer (SQLPAL) will negatively impact performance. Testing shows that SQL Server performs almost the same on both operating systems. In fact, the performance of SQL Server 2017 is far better than that of SQL Server 2016.

Need to know more: Click here


Monday, June 19, 2017

How to fix cardinality estimation anomalies [Video]

Use the link below to watch the presentation I delivered for the PASS DBA Virtual Chapter about Filtered Statistics.

Tuesday, May 23, 2017

Understanding Skewed Data in SQL Server

My latest article.

Monday, April 24, 2017

How to get all SQL Server internal tasks

Use the statement below to get details of all SQL Server internal tasks.

SELECT * FROM sys.dm_exec_sessions
WHERE is_user_process=0

The statement below also seems correct, but it does not return all the internal tasks, because sys.dm_exec_requests only lists sessions that are currently executing a request, whereas sys.dm_exec_sessions includes every internal session.

SELECT * FROM sys.dm_exec_requests
WHERE status='background'

Wednesday, April 19, 2017

SQL Server 2017 becomes the first commercial RDMS with built-in AI support

AI Into SQL Server 2017
Today (4/19/2017), Microsoft announced the release of SQL Server 2017 CTP 2.0 (formerly known as SQL Server vNext). It has exciting features, including deep learning AI capabilities. The deep learning model demonstrated today at Microsoft Data Amp used medical data to determine the probability of a particular patient developing lung cancer. Faster early detection of such diseases helps reduce healthcare costs and leads to healthier lives. All of this happens under one roof, meaning in one single data platform, which is SQL Server. All the analytics capabilities of the deep learning system are now integrated into SQL Server, which makes it easy for data scientists to work with and analyze data. The application has to call just a single stored procedure to perform the deep learning analysis of a patient; it is that simple.
The data, the model, and the intelligence are all built into SQL Server, which eliminates many of the challenges of building today's AI applications with many different tools.

Python Integration
SQL Server 2017 has built-in support for running Python code inside T-SQL queries, which provides powerful programming capabilities. SQL Server 2016 already supported R integration. You can use the sp_execute_external_script system stored procedure to call R or Python code, which is external to T-SQL.
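As a minimal sketch of calling Python from T-SQL (assuming Machine Learning Services is installed; the input query and column names here are made up for illustration):

```sql
-- Enable external scripts first (one-time configuration)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Pass a T-SQL result set to Python and return it back unchanged
EXEC sp_execute_external_script
    @language = N'Python',
    @script   = N'OutputDataSet = InputDataSet',
    @input_data_1 = N'SELECT 1 AS PatientId, 0.87 AS RiskScore'
WITH RESULT SETS ((PatientId int, RiskScore float));
```

In a real scenario, @script would hold the model-scoring logic, and @input_data_1 would select the patient rows to score.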

Adaptive Query Processing
Adaptive Query Processing is one of the most powerful database engine enhancements in SQL Server 2017. With this new feature, SQL Server has the intelligence to learn about its own query processing and generate better execution plans based on past query executions, with no human intervention. This is an amazing feature that I have dreamed of for years.

Graph data represents many-to-many relationships, which relational databases typically have to model as multiple one-to-many relationships. SQL Server now supports storing and processing graph data. You can create graph objects, namely nodes and edges, in SQL Server 2017. This essentially extends SQL Server's query processing capabilities, and this query capability is fully integrated into the SQL engine. Previously, you had to use a special type of database, called a graph database, to store this kind of data.
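A minimal sketch of the SQL Server 2017 graph syntax; the Person/FriendOf tables and the name 'Alice' are hypothetical:

```sql
-- A node table holds entities; an edge table holds relationships
CREATE TABLE Person (
    Id   int PRIMARY KEY,
    Name varchar(100)
) AS NODE;

CREATE TABLE FriendOf AS EDGE;

-- Traverse the graph with the MATCH clause:
-- find everyone Alice has a FriendOf edge to
SELECT p2.Name
FROM Person p1, FriendOf f, Person p2
WHERE MATCH(p1-(f)->p2)
  AND p1.Name = 'Alice';
```

The arrow notation in MATCH expresses the direction of the edge, which is how the many-to-many relationship is queried without an explicit junction-table join.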

One platform which supports both Windows and Linux
All the features of SQL Server 2017 now support both Windows and Linux platforms.

In a nutshell, SQL Server is not just another database server, it is an extremely powerful data platform eco-system because it combines AI and analytics into one product. Everything is built in. So much easier than using hundreds of different tools. I can’t wait to get some hands-on of these new features.


Wednesday, April 12, 2017

How To Generate In-Memory OLTP Migration Checklist

In-Memory OLTP, code-named Hekaton, is a separate database engine introduced in SQL Server 2014, and it is integrated with the traditional SQL Server database engine. So when you install SQL Server, you get both database engines. Modern computers and servers are getting more physical memory (RAM); Windows Server 2016 supports 24 TB of physical memory. So it is clear the future is moving towards in-memory databases regardless of database type.

I’d recommend reading the Microsoft-published whitepaper on In-Memory OLTP by Kalen Delaney, here.
In-Memory OLTP simply means you can create memory-optimized tables as well as natively compiled stored procedures. In other words, you can store the entire table's schema and data in memory. However, it obviously has some limitations too. Those limitations are being addressed in each new release of SQL Server.
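As a minimal sketch, a memory-optimized table can be created like this (assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup; the table and column names are hypothetical):

```sql
-- Memory-optimized table: schema and data live in memory,
-- with changes logged for durability (SCHEMA_AND_DATA)
CREATE TABLE dbo.SessionCache (
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload nvarchar(500) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The hash index with an explicit BUCKET_COUNT is one of the In-Memory OLTP-specific constructs; choosing a bucket count close to the expected number of unique keys is the usual guidance.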

It is possible to migrate the tables and other objects in existing databases to memory-optimized structures. Since In-Memory OLTP has limitations, the first step is to identify the objects that are eligible for migration. SSMS (SQL Server Management Studio) provides a wizard for this purpose. The steps below walk through the wizard against the sample database AdventureworksDW2016CTP3, which you can download and install.

Follow the below steps to generate the checklist.

  1. Connect to the server in SSMS.
  2. Right-click on the database and select the Generate In-Memory OLTP Migration Checklist option. (See below screenshot)

  3. Click Next on the first page of the wizard.
  4. In the Choose Objects screen, you can select tables, stored procedures, or both. You can also specify a location for the checklist to be saved. Then click Next. (See below screenshot)

  5. On the Summary page, you can optionally generate a PowerShell script for this task. Click Finish.
  6. It takes some time depending on the number of objects (tables, triggers, foreign keys, indexes, stored procedures, etc.) in the database, because it checks all dependent objects against the In-Memory OLTP rules.

You can view the checklist generated by the Advisor in the location you specified in step #4.

It creates subfolders under the database name folder for each object type. See the screenshot below.

The contents of each folder look like the following. The screenshot below displays the checklist for stored procedures.

Open each HTML file to see the details of the analysis.

I will write another blog post on how to analyze the checklist information.


Monday, April 10, 2017

Sharding implementation in SQL Server

Sharding is a data partitioning technique that is popular in NoSQL databases like MongoDB and Cassandra. You can scale the data set out horizontally using sharding. SQL Server does not support sharding natively; however, you can implement a sharding solution with SQL Server from the application end. The article below describes sharding patterns that you can implement with SQL Server databases.
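As a rough sketch of the application-end approach, one common pattern is a shard map plus a hash of the sharding key; everything here (table names, the customer key, the shard count) is hypothetical:

```sql
-- Hypothetical shard map: each shard is a separate database,
-- possibly on a separate server
CREATE TABLE dbo.ShardMap (
    ShardId      int PRIMARY KEY,
    ServerName   sysname NOT NULL,
    DatabaseName sysname NOT NULL
);

-- Application-side routing: hash the sharding key to pick a shard,
-- then connect to that server/database to run the real query
DECLARE @CustomerId int = 12345, @ShardCount int = 4;

SELECT ServerName, DatabaseName
FROM dbo.ShardMap
WHERE ShardId = ABS(CHECKSUM(@CustomerId)) % @ShardCount;
```

The key design point is that SQL Server itself is unaware of the sharding; the application (or a data-access layer) owns the routing logic and fan-out queries.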

Sunday, April 9, 2017

Incredibly large memory support in Windows Server 2016

The latest release of Microsoft Windows Server is Windows Server 2016. The most salient feature of Windows Server 2016 is its maximum server memory support: 24 TB. Yes, terabytes. This is amazing. If you're running SQL Server 2014 or 2016, you can keep terabyte-sized databases completely in memory using In-Memory OLTP technology. Windows Server 2016 is only available with an x64 address space. The memory support limit is a huge jump from Windows Server 2012, which supports a maximum of 4 TB of physical memory. There is no doubt that the future of relational databases is based on In-Memory OLTP technology rather than traditional storage-based technology. So it's time to get the skill set ready and spend more time with In-Memory OLTP.

Source: msdn


Thursday, March 30, 2017

Developer Friendly T-SQL Enhancements in SQL Server 2016 and vNext

Both SQL Server 2016 and vNext have introduced many new features, and the T-SQL language gets new features in each SQL Server version. The two enhancements below are very important for DBAs and database developers who write T-SQL heavily.


You can create SQL Server objects as shown below by using the CREATE OR ALTER statement.
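A minimal sketch (the dbo.Orders table and procedure name are hypothetical):

```sql
-- Creates the procedure if it does not exist,
-- or alters it in place if it does
CREATE OR ALTER PROCEDURE dbo.GetRecentOrders
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(day, -30, GETDATE());
END;
```

Running the same script repeatedly is now safe; there is no need to wrap it in an existence check.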


This way, you can create or alter stored procedures, functions, views, and triggers. However, this does not apply to table objects. This saves developers tremendous time because it avoids numerous validation steps.


The new DROP ... IF EXISTS syntax also avoids the conditional logic that you previously needed in order to check for an object's existence before dropping it.
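A quick before-and-after sketch (dbo.Staging is a hypothetical table name):

```sql
-- Old pattern: explicit existence check before the drop
IF OBJECT_ID('dbo.Staging', 'U') IS NOT NULL
    DROP TABLE dbo.Staging;

-- SQL Server 2016+: one statement, no error if the table is absent
DROP TABLE IF EXISTS dbo.Staging;
```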


This statement works with many SQL Server objects. See the below figure;

Source: MSDN

So if you have SQL Server 2016 or vNext in your environment, you can start using these new features right away.


Tuesday, March 28, 2017

Why use SELECT * With DMVs?

Dynamic Management Views and Functions are used to retrieve information about SQL Server internals. Dynamic Management Objects were first introduced in SQL Server 2005, and subsequent releases have added many new DMVs while some have changed. Sometimes new columns are added and column names change in new versions. So it is recommended to use SELECT * whenever you want to get internal information using DMVs. This is especially important when developing production monitoring code, so that there is less chance of your code breaking with new SQL Server versions.

Monday, March 27, 2017

SQLSaturday584 - Colorado Springs

Another great SQLSaturday conference ended on 25th March, 2017 at Colorado Springs. I presented on SQL Server Statistics and the New CE Model. You can visit the link below to download the presentations and demos that interest you. There are many valuable topics.


Schedule Page


Thanks to everyone who participated in the event!


Tuesday, March 7, 2017

SQL Server errors are too generic

Below is an actual error captured in the SQL Server error log during an incident.

SQL Server must shut down in order to recover a database (database ID 1). The database is either a user database that could not be shut down or a system database. 
Restart SQL Server. If the database fails to recover after another startup, repair or restore the database.

This error is not quite accurate, because it says SQL Server must shut down in order to recover a database with database ID 1. In SQL Server, database ID 1 is the "master" database. The next sentence says the database is either a user database or a system database; in this particular case, it cannot be a user database.

The error message is quite generic. It appears that SQL Server does not tailor the error message to the database ID in this case, perhaps to avoid a performance impact.

Monday, February 27, 2017

Understanding SQL Server backups

One of my friends and colleagues (Jayaruwan, alias JB) recently asked me a question about SQL Server backups: during a backup operation, what happens to data changes? Are they recorded in the backup? If so, up to what point?

I knew for a fact that data changes made during the backup operation would get written to the backup. To demonstrate this, I did the following test.

You need to open two query windows in SSMS. In this demo, I used SQL Server 2016 and a fairly large sample database named AdventureWorksDW2016CTP3.

To run the demo yourself, you can follow the below steps.
Open two query windows in SSMS and copy and paste the below code in each query window.


--Window 1

--part 1
USE AdventureworksDW2016CTP3;
CREATE TABLE backuptest (
    Id int identity(1,1) primary key,
    Test_Desc varchar(1000)
);

--part 2
DECLARE @msg3 binary(128);
SET @msg3 = CAST('Kick off the full backup of AdventureworksDW2016CTP3' AS binary(128));

-- Loop until you stop the query manually; each pass records
-- the backup request's progress from sys.dm_exec_requests
WHILE 1 = 1
BEGIN
    INSERT INTO backuptest (Test_Desc)
    SELECT command + ' ' + CAST(percent_complete AS varchar) + ' ' + CONVERT(varchar, GETDATE(), 121)
    FROM sys.dm_exec_requests
    WHERE context_info = @msg3;
END

--part 3
USE AdventureworksDW2016CTP3;
SELECT * FROM backuptest;

--part 4
USE master;


Copy and paste the below code snippet in the 2nd Window.


--Window 2

--part 1
USE master;
DECLARE @msg3 binary(128);
SET @msg3 = CAST('Kick off the full backup of AdventureworksDW2016CTP3' AS binary(128));
-- Tag this session so Window 1 can identify the backup request
SET CONTEXT_INFO @msg3;

PRINT 'Backup Database Started at : ' + CONVERT(varchar, GETDATE(), 121);
BACKUP DATABASE AdventureworksDW2016CTP3 TO DISK = 'C:\temp\AdventureWorksDW2016CTP3_full.bak';
PRINT 'Backup Database finished at : ' + CONVERT(varchar, GETDATE(), 121);

--part 2
-- REPLACE allows overwriting the existing database
RESTORE DATABASE AdventureworksDW2016CTP3 FROM DISK = 'C:\temp\AdventureWorksDW2016CTP3_full.bak' WITH REPLACE;


Execute part 1 and part 2 of the query in Window 1.
Immediately move to Window 2 and execute part 1. Watch the execution in the second window, which is the BACKUP DATABASE statement, and once it has completed, stop the execution of Query Window 1.

Copy and paste the messages output of the backup operation, which will be similar to the following.

Backup Database Started at : 2017-02-27 22:21:58.270
Processed 186680 pages for database 'AdventureworksDW2016CTP3', file 'AdventureWorksDW2014_Data' on file 1.
Processed 39 pages for database 'AdventureworksDW2016CTP3', file 'AdventureWorksDW2014_Log' on file 1.
BACKUP DATABASE successfully processed 186719 pages in 14.817 seconds (98.450 MB/sec).
Backup Database finished at : 2017-02-27 22:22:13.200

It has the information of what time the backup started and what time it finished.

Now you can check the data in the backuptest table by executing part 3 of the query in Window 1. See the figure below for a sample data set;




You can see that many rows were inserted, along with the percent complete of the backup statement that executed in parallel in Query Window 2. For later reference, copy and paste the result set into an Excel sheet. After carefully observing the data set, change the database back to master in Query Window 1 (part 4).


At this point, you have a full backup file in the local directory, and we know that while the backup operation was in progress, we inserted data into a table in Query Window 1.


Now switch back to Query Window 2 and restore the database using part 2. Upon completion of the restore, observe the data in the backuptest table using part 3 of the query in Window 1.


You can see that most of the data changes that happened during the backup process are still there. See the figure below;



The last entry recorded in the table is “BACKUP DATABASE 99.9967 2017-02-27 22:22:13.060”. At that point, the backup statement was 99.9967% complete and the time was 22:22:13.060. We also have the backup statement's completion time, which is “2017-02-27 22:22:13.200”. So it actually missed about 140 milliseconds' worth of data.


You can run this demo yourself to gain a better understanding.


Monday, February 20, 2017

SQLSaturday582 - Melbourne

It was an honor to speak at SQLSaturday Melbourne on 11th Feb, 2017. My session was about SQL Server Statistics and the new Cardinality Estimator model. However, I could not go through some of the demos due to time restrictions. I received a lot of positive feedback, so I'm happy about the session overall.

You can find the presentation artifacts here.

Thanks, everyone who attended my session.

How to interpret Disk Latency

I was analyzing IO stats in one of our SQL Servers and noticed that IO latency (Read / Write) are very high. As a rule of thumb, we know tha...