Monday, February 20, 2017
Friday, December 16, 2016
- Open SSMS.
- In object explorer, connect to the SQL Server instance that you want to enable DAC.
- Right click on Server instance and select Facets.
- In the View Facets window, select Server Configuration using the drop-down box. See below figure:
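The same setting can also be changed with T-SQL; a minimal sketch using sp_configure (the facet above toggles this same server option):

```sql
-- Enable the remote Dedicated Administrator Connection (DAC)
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```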
SELECT CASE WHEN ses.session_id IS NOT NULL
            THEN 'DAC is in use' ELSE 'DAC is not in use' END AS dac_status
FROM sys.endpoints AS en
LEFT JOIN sys.dm_exec_sessions AS ses
    ON en.endpoint_id = ses.endpoint_id
WHERE en.name = 'Dedicated Admin Connection';
If you're using the server name when connecting, make sure the SQL Server Browser service is running. Otherwise you will receive the most common logon error, "A network-related or instance-specific error occurred…", which basically means the client cannot find the server.
It is always a good practice to enable DAC on your SQL Server instances.
Friday, December 2, 2016
Tuesday, November 29, 2016
Friday, November 18, 2016
With that, you have a command-line client tool to connect to SQL Server from a Mac.
Friday, October 21, 2016
- RECOMPILE would be the first step to influence the optimizer to recompile the code.
- Compilation is the most important stage
- Cost based optimizer == need to evaluate a query reasonably quickly
- When writing a query, there are many ways to write it, but do not focus on that first. Focus on the result.
- WHERE msal * 12 > literal -> Bad
- WHERE msal > literal / 12 -> Good
- All about statistics / estimation -> Query performance
- In most cases the issue is not out-of-date stats or indexes; it could be parameter sniffing
- EXEC <proc name> <parameters> -> uses the existing plan
- EXEC <proc name> <parameters> WITH RECOMPILE -> generates a new plan
- Using above two, you can identify the parameter sensitiveness (sniffing)
- Updating stats invalidates the plan
- Estimates vs Actuals -> If they differ, it is not always a stats problem. It may be parameter sniffing.
- Optimizer loves highly selective predicates
- Low no.of rows -> high selectivity
- High no.of rows -> low selectivity
- Statistics -> Summarized info about the data distribution of table columns
- DBCC AUTOPILOT -> Undocumented
- Hypothetical Indexes -> just the index structure.
- What if analysis using AUTOPILOT with >= SQL Server 2008
- sp_helpstats '<table name>', 'all'
- Histogram -> 200 steps + 1 row for the null if the column allows null
- SQL Server 7.0 had 300 rows
- When the histogram is large, it increases the compilation time because histogram is not like an index
- EQ_ROWS – Equal rows for the index key
- Rows * All density = Avg no. of rows returned for that column or the combination of columns
- If the table is huge, the data distribution present in histogram is not quite accurate
- Step compression -> When building the histogram, if the values of adjacent steps are approximately similar, the algorithm will compress those steps into one step
- RANGE_ROWS -> values between two HI_KEYs and excluding the HI_KEYs at both ends
- AVG_RANGE_ROWS = RANGE_ROWS / DISTINCT_RANGE_ROWS
- EXEC sp_autostats '<table name>'
- Sys.stats -> shows all the stats including index stats
- sys.dm_db_stats_properties -> gives a lot of details that you can use to update stats more effectively and programmatically.
- Histogram -> direct hit
- DBCC SHOW_STATISTICS ('<table name>', '<stats name>') WITH HISTOGRAM
- Stats will update on Index Rebuild but not on Index re-org.
- Entire stats structure stored in db as a BLOB (Header, Density Vector and Histogram)
- A partitioned table uses table-level stats, which still have a single 200-step histogram covering all partitions
- Online partition-level index rebuild arrived in SQL Server 2014, but partitioning itself was introduced in SQL Server 2005, so it took Microsoft 9 years to deliver online partition-level index rebuilds
- Tuple cardinality -> used to estimate distinct cardinality
- Optimization | Compilation | Execution
- Local variable values are not known at optimization time
- Parameters and literals can be sniffed -> uses histogram
- Variables cannot be sniffed -> density vector
- UPDATE STATISTICS with any sample % will not be parallelized, but FULLSCAN can be parallelized.
- OPTION (QUERYTRACEON 3602, QUERYTRACEON 9204, RECOMPILE) -> note each trace flag needs its own QUERYTRACEON hint
- Dynamic auto updating threshold
- For a large table to reach a 20% change, you need to wait a long time before stats get updated.
- SQL Server does not understand the correlation between columns
- Calculation direction in query plan is from right to left.
- Problem is always with monster tables
- Is 200 steps enough for such tables?
- Even if the table is partitioned, the histogram is still at the table level
- sp_recompile on a table is an expensive operation; it needs a Sch-M lock on the table
- Filtered stats – consider creating filtered stats, even on a daily basis, to tackle estimate problems caused by skewed data, so that you get better execution plans
- QUERYTRACEON (2353) -> Additional info.
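The SARGability point in the notes above (msal * 12 vs msal > literal / 12) can be sketched against a hypothetical dbo.Employees table; the table and column names here are assumptions for illustration:

```sql
-- Non-SARGable: the expression wraps the column, so an index on msal
-- cannot be used for a seek and the estimate comes out poorly
SELECT * FROM dbo.Employees WHERE msal * 12 > 120000;

-- SARGable: the column is left bare; an index on msal can be seeked
SELECT * FROM dbo.Employees WHERE msal > 120000 / 12;
```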
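The sys.dm_db_stats_properties note above can be turned into a monitoring query; a sketch (the modification_counter threshold is an assumption you would tune):

```sql
-- Find statistics with modifications since their last update,
-- so they can be refreshed programmatically
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        s.name                   AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```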
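The UPDATE STATISTICS note above on parallelism looks like this in practice; dbo.BigTable is a hypothetical table name:

```sql
-- FULLSCAN reads every row and can be parallelized
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- A sampled update runs serially (per the session notes)
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 20 PERCENT;
```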
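The filtered-stats suggestion above can be sketched as follows; the table, column, and predicate are assumptions chosen to mimic skewed data:

```sql
-- Filtered statistics scoped to the skewed subset only,
-- giving the optimizer a dedicated histogram for it
CREATE STATISTICS st_Orders_OrderDate_Open
ON dbo.Orders (OrderDate)
WHERE Status = 'Open';
```

Because the histogram covers only the filtered rows, its 200 steps describe the skewed subset in much finer detail than a table-wide histogram could.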
Monday, August 29, 2016
Thursday, August 25, 2016
It is certain that many are excited and welcome Microsoft's official announcement that SQL Server is going to support the Linux platform. Especially the open source community must be thrilled with the news.
However, there is still not much information available to the public about what SQL Server on Linux will look like. As per Microsoft, the initial release will be available in mid-2017.
If we look at the history of SQL Server, the very early versions ran on OS/2, an operating system jointly developed by IBM and Microsoft. As per Wikipedia, the kernel type of OS/2 is listed as hybrid. That was the 1988/89 period.
SQL Server versions 1.0, 1.1, and 4.2 were all based on the OS/2 platform. Microsoft separately developed version 4.2 to support their first version of the Windows NT OS, but with the same features as the version that ran on OS/2.
Historically, SQL Server was born on a UNIX-based platform (through its Sybase lineage) and was later ported to run only on Windows after the agreement with Sybase ended.
Some 25 years later, Microsoft has decided to make SQL Server available on the Linux platform, which is a good move.
source: Microsoft SQL Server - History
In my opinion, the main challenge would be to develop an abstraction layer to support the Linux platform. The architectures of Windows and Linux are distinctly different. Both have preemptive scheduling, but for the process model, Linux has a very unique implementation of threads. SQL Server's process/thread model on Windows is single-process, multi-threaded: you see only a single process for a SQL Server instance, while internally it maintains a thread pool to service client requests. Consequently, the thread is the unit of work at the OS level for accomplishing SQL Server requests on Windows.
Conversely, Linux is process-oriented: the fork() system call creates a child process from the parent process, which runs until it finishes or is killed, and threads are scheduled much like processes rather than as the distinct concept they are on Windows. In a nutshell, in Linux, nearly everything is done in terms of processes.
Bridging this architectural difference is probably the most challenging part for SQL Server. I'm no expert in operating systems, but thinking from a software engineering point of view, this is the image I get of the abstraction layer.
SQL Server has its own user-mode operating system known as SQLOS, a non-preemptive (cooperative) scheduler that interacts with the Windows OS to better serve the special needs of SQL Server. SQLOS was introduced in SQL Server 2005, code name Yukon.
To support the Linux platform, the SQLOS component may need a lot of additions, or there could be a separate SQLOS-like component for Linux. We do not have that information yet.
What features will be in SQL Server on Linux?
It is unclear what features will be available in the SQL Server version that runs on Linux. Providing the full feature set of SQL Server 2016 would be a real challenge. Another question is whether it will include the BI tools. The announcement blog clearly states, "We are bringing the core relational database capabilities to preview today". What are the core relational database capabilities? I presume the following definitely need to be addressed: scheduling, memory management, I/O, exception handling, ACID support, etc. How about high availability options like clustering and AlwaysOn? I predict there will be some sort of high availability option in the first release on Linux. How about the page size, is it 8K or different? By default the Linux memory page size is 4K (4096 bytes), while SQL Server on Windows uses a non-configurable database page size of 8K (8192 bytes). Will SQL Server on Linux have to work with the default page size of 4K? Unlike on Windows, the Linux page size is configurable, but it is not certain how feasible that is or what the consequences of doing it would be. I just checked the page size on Mac OS and it is 4K too. See below figure.
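The OS page size mentioned above is easy to verify from a terminal; on Linux and macOS, getconf reports it (typically 4096 on common x86 systems):

```shell
# Print the OS memory page size in bytes
getconf PAGESIZE
```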
Will there be an In-Memory OLTP engine in SQL Server on Linux? In-Memory OLTP, code name Hekaton, was first introduced in SQL Server 2014 and further developed in SQL Server 2016. In-Memory OLTP is an enormously important feature for future RDBMS products. However, it is not yet clear whether the first release of SQL Server on Linux will include this feature.
What this means to DBAs?
The skill set required of future SQL Server DBAs is going to expand. They need to learn Linux OS commands as well as Python and PowerShell for scripting tasks. In the future, companies will most probably have a mix of Linux-based and Windows-based SQL Server installations. Licensing costs will decide which platform's version of SQL Server becomes the bigger portion. In a nutshell, future SQL Server DBAs will be cross-platform DBAs. It's challenging, isn't it?
How about the Certifications?
Most likely, a new certification path will emerge to cater to the new trends and skills around SQL Server. To become a successful DBA, you might need to be qualified in SQL Server on both Linux and Windows platforms, as well as on cloud platforms.
How this will impact to other RDBMS products?
My sense is that this will directly impact MySQL. MS SQL Server is a proven enterprise-level data platform. If anyone has the chance to run it on a Linux OS, it is highly unlikely they will choose MySQL over SQL Server, given that the SQL Server Developer edition is free.
I'm really excited and looking forward to getting some hands-on experience with SQL Server on Linux in the very near future.
Thursday, August 18, 2016
- How many files (data / log) we need to create? Is it based on logical cores or physical CPUs?
- How to decide the initial size of the tempdb?
- Do we need to enable autogrowth or not? If we do, then, is it for all the files or just one file?
- The size of autogrowth?
chunks less than 64MB and up to 64MB = 4 VLFs
chunks larger than 64MB and up to 1GB = 8 VLFs
chunks larger than 1GB = 16 VLFs
- Do we need to keep all data files and log file in the same drive?
- Changes in SQL Server 2016
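The VLF chunk rules listed above determine how many VLFs each log autogrowth creates, so an oversized growth increment multiplies the VLF count quickly. A sketch for checking the resulting VLF count, assuming SQL Server 2016 SP2 or later where sys.dm_db_log_info is available:

```sql
-- Count virtual log files (VLFs) in the tempdb transaction log
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('tempdb'));
```

On older versions, DBCC LOGINFO returns one row per VLF and can be used the same way.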