CREATE FUNCTION dbo.trim( @val nvarchar(max) )
RETURNS nvarchar(max)
BEGIN
DECLARE @tab nchar(1)
, @lf nchar(1)
, @cr nchar(1)
, @crlf nchar(2)
SET @tab = char(9)
SET @lf = char(10)
SET @cr = char(13)
SET @crlf = char(13) + char(10)
RETURN ( replace(replace(replace(replace(ltrim(rtrim(@val)), @tab, ''), @crlf, ''), @lf, ''), @cr, '' ) )
END --trim()
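A quick sanity check of the function (input value is illustrative; note that LTRIM/RTRIM run before the REPLACEs, so spaces exposed only after a trailing CRLF is removed will remain):
SELECT dbo.trim( N'  hello world' + char(13) + char(10) ) AS trimmed
-- returns: hello world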
Friday, October 14, 2016
a call to LogonUserW failed
xp_cmdshell raises error "a call to LogonUserW failed with error code 1385"
An error occurred during the execution of xp_cmdshell. A call to 'LogonUserW' failed with error code: '1385'.
To fix this, open the Local Security Settings on the host machine.
Navigate to Security Settings -> Local Policies -> User Rights Assignment.
Open "Log on as a batch job" and add the account assigned as the xp_cmdshell proxy account.
Wednesday, September 21, 2016
SSMS connecting to Availability Groups
Connecting to an AlwaysOn High Availability Group using SQL Server Management Studio (SSMS)
1. In SQL Server Management Studio, enter the High Availability Group Listener name.
Then click on the [Options >>] button.
2. On the “Connection Properties” tab, increase the "Connection time-out:"
Default setting is 15 seconds.
Microsoft recommends setting it to 21 seconds per subnet for availability groups.
This setting is available in SSMS versions 2005 and up.
3. Click on the “Additional Connection Parameters” tab.
This tab is available in SSMS versions 2012 and up, and the parameter is supported by SQL clients based on .NET Framework 4 and up.
Add an entry for:
MultiSubnetFailover=True
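The same keyword works in application connection strings for clients that support it (listener and database names below are placeholders):
Server=tcp:MyAgListener,1433;Database=MyDb;Integrated Security=SSPI;MultiSubnetFailover=True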
References:
https://msdn.microsoft.com/en-us/library/gg471494(v=sql.110).aspx
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring(v=vs.100).aspx
Friday, September 16, 2016
Client Recovery Latency During Failover
A multi-subnet FCI by default enables the RegisterAllProvidersIP cluster resource for its network name. In a multi-subnet configuration, both the online and offline IP addresses of the network name will be registered at the DNS server. The client application then retrieves all registered IP addresses from the DNS server and attempts to connect to the addresses either in order or in parallel. This means that client recovery time in multi-subnet failovers no longer depends on DNS update latencies. By default, the client tries the IP addresses in order. When the client uses the new optional MultiSubnetFailover=True parameter in its connection string, it will instead try the IP addresses simultaneously and connect to the first server that responds. This can help minimize the client recovery latency when failovers occur. For more information, see AlwaysOn Client Connectivity (SQL Server) and Create or Configure an Availability Group Listener (SQL Server).
With legacy client libraries or third-party data providers, you cannot use the MultiSubnetFailover parameter in your connection string. To help ensure that your client application works optimally with a multi-subnet FCI in SQL Server 2012, try to adjust the connection timeout in the client connection string by 21 seconds for each additional IP address. This ensures that the client’s reconnection attempt does not time out before it is able to cycle through all IP addresses in your multi-subnet FCI.
The default client connection time-out period for SQL Server Management Studio and sqlcmd is 15 seconds.
https://msdn.microsoft.com/en-us/library/ff878716(v=sql.110).aspx
Linked Servers/OPENROWSET use OLE DB, and OLE DB in the SQL Server Native Client does not support the MultiSubnetFailover keyword.
Try a System DSN using ODBC with MultiSubnetFailover specified, and then create the linked server to use this DSN.
Another option is to increase the timeout of the linked server to give it time to work through all the registered IP addresses.
Use the following guidelines to connect to a server in an availability group or SQL Server 2012 Failover Cluster Instance:
- Use the MultiSubnetFailover=True connection property when connecting to a single subnet or multi-subnet; it will improve performance for both.
- To connect to an availability group, specify the availability group listener of the availability group as the server in your connection string.
- Connecting to a SQL Server instance configured with more than 64 IP addresses will cause a connection failure.
- Behavior of an application that uses the MultiSubnetFailover connection property is not affected based on the type of authentication: SQL Server Authentication, Kerberos Authentication, or Windows Authentication.
- Increase the value of Connect Timeout to accommodate for failover time and reduce application connection retry attempts.
- Distributed transactions are not supported.
If read-only routing is not in effect, connecting to a secondary replica location will fail in the following situations:
- If the secondary replica location is not configured to accept connections.
- If an application uses ApplicationIntent=ReadWrite (discussed below) and the secondary replica location is configured for read-only access.
https://msdn.microsoft.com/en-us/library/hh205662(v=vs.110).aspx
Bottom Line:
1. Upgrade clients where possible
2. Set the MultiSubnetFailover option if using SqlClient in .NET 4.0 and above client libraries and applications
3. Extend connection timeouts to 21 seconds per subnet for legacy connections that cannot be upgraded
4. If the application uses a library that cannot be upgraded but supports ODBC, then upgrade ODBC, create an ODBC connection that uses MultiSubnetFailover=True, and have the application connect using that ODBC connection.
5. If the application uses SqlClient, a SQL Server Native Client alias can be created that points to the primary node (a PowerShell script can be used to change it on failover)
6. If all else fails, connect the application to the primary node (this will have to be changed manually on failover)
7. Not recommended, but RegisterAllProvidersIP can be set to off for the cluster (this could prevent some clients from connecting for up to 20 minutes, depending on the HostRecordTTL setting)
Additional References:
http://www.madeiradata.com/make-sure-clients-can-connect-multi-subnet-cluster/
http://johnlouros.com/blog/leveraging-multi-subnet-failover
https://blogs.technet.microsoft.com/sqlpfeil/2014/03/31/sql-alias-powershell-script/
http://sqlperformance.com/2013/11/system-configuration/ag-connectivity
Wednesday, September 14, 2016
@@SERVERNAME returns name of old server
-- in the case where a server has been cloned or renamed
-- and @@SERVERNAME still returns name of old server
use master
go
SELECT @@SERVERNAME, SERVERPROPERTY('MachineName')
GO
SELECT [server_id]
,[name]
,[product]
,[provider]
,[data_source]
FROM [master].[sys].[servers]
GO
sp_dropserver 'OLDSERVER\OLDINSTANCE';
GO
sp_addserver 'NEWSERVER\NEWINSTANCE', local;
GO
-- has been updated in servers table
SELECT [server_id]
,[name]
,[product]
,[provider]
,[data_source]
FROM [master].[sys].[servers]
GO
-- may need to restart instance
-- for it to take effect for @@SERVERNAME
SELECT @@SERVERNAME, SERVERPROPERTY('MachineName')
GO
Tuesday, September 13, 2016
Installing SQL Server 2000 Desktop Edition
Download MSDE 2000 Release A
This will download a compressed executable file, MSDE2000A, which is SQL Server 2000 Desktop Engine with SP3a (8.00.760).
Executing it will expand the contents to the folder C:\MSDERalA\
From a command line, navigate to this folder and execute one of the following:
To install a default instance configured to use the default Windows Authentication Mode, just pass in the SAPWD="AStrongSAPwd" switch where AStrongSAPwd is a strong password for the "sa" account.
C:\MSDERalA\>setup SAPWD="AStrongSAPwd"
To install a named instance, include the INSTANCENAME="InstanceName" switch. To use mixed mode (Windows Authentication and SQL authentication), use the SECURITYMODE=SQL switch.
C:\MSDERalA\>setup INSTANCENAME="MyInstance" SECURITYMODE=SQL SAPWD="AStrongSAPwd"
To start SQL Server, go to Start->Run and type sqlmangr.exe to bring up the SQL Server Service Manager. You can start the service from there and set it to start when the OS starts.
Now open a command prompt and attempt to connect using osql:
C:\>osql -E
Resources:
How to obtain and install SQL Server 2000 Desktop Engine (MSDE 2000)
https://support.microsoft.com/en-us/kb/324998
http://support.microsoft.com/default.aspx?scid=kb;en-us;810826
http://support.microsoft.com/kb/322336/EN-US/
what version of .NET?
To find .NET Framework versions by viewing the registry (.NET Framework 1-4)
1. On the Start menu, choose Run.
2. In the Open box, enter regedit.exe.
3. In the Registry Editor, open the following subkey:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP
To find .NET Framework versions by viewing the registry (.NET Framework 4.5 and later)
1. On the Start menu, choose Run.
2. In the Open box, enter regedit.exe.
3. In the Registry Editor, open the following subkey:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
https://msdn.microsoft.com/en-us/library/hh925568(v=vs.110).aspx#net_a
Tuesday, August 23, 2016
when were databases last restored
SELECT A.restore_history_id
, A.restore_date --date database was restored
, A.destination_database_name
, A.user_name
, A.restore_type
, B.backup_finish_date --date of backup used to restore database
, C.physical_device_name --backup file used to restore database
, A.stop_at
, A.stop_at_mark_name
, A.stop_before
FROM msdb.dbo.restorehistory A
LEFT JOIN msdb.[dbo].[backupset] B
ON A.backup_set_id = B.backup_set_id
LEFT JOIN msdb.[dbo].[backupmediafamily] C
ON B.[media_set_id] = C.[media_set_id]
WHERE A.restore_date
= ( SELECT MAX(A1.restore_date)
FROM msdb.dbo.restorehistory A1
WHERE A1.destination_database_name = A.destination_database_name )
Thursday, August 11, 2016
Execute ps1 files at command line
1. Open regedit.exe
2. Export copy of the registry
3. Navigate to
HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell
4. Change the default from 'Open' to '0'
5. Optionally, change the default parameters passed to powershell.exe in HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command
Another option:
3. Navigate in registry to HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\Open\Command\
4. Change Old Value:
"C:\Windows\System32\notepad.exe" "%1"
to New Value:
"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo -NoProfile -NonInteractive -ExecutionPolicy "Bypass" -File "%1"
Why not do this:
This makes it easy for dangerous .ps1 files to be executed. With great power comes great responsibility.
Wednesday, August 3, 2016
error accommodating a copy of the model database
Problem: Error encountered during installation of vendor software package that includes creation of database
Symptoms: Error message similar to "37000-1803: [Microsoft] [ODBC SQL Server Driver] [SQL Server] The CREATE DATABASE statement failed. The primary file must be at least XXXX MB to accommodate a copy of the model database."
Cause: Vendor has specified a database size in their create database script that is smaller than the size of the "model" database on that instance.
Solution: Shrink model database
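A minimal sketch of the fix, assuming model still has its default logical data file name (modeldev) and that a 10 MB target is acceptable:
USE [model]
GO
DBCC SHRINKFILE (N'modeldev', 10) -- target size in MB
GO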
Wednesday, July 20, 2016
much ado about NOLOCK
In most cases it does not cause problems and in some cases may improve performance of the query.
If the query includes tables that are extremely volatile (data changing very rapidly), then there is a chance of a “dirty read,” where a record is read while it is in the process of being updated by another transaction.
Contrary to the name of the query hint (NOLOCK), it does not completely eliminate locking caused by the query, but it does reduce it.
I don’t recommend putting it arbitrarily on every query by default -- something I’m guilty of myself :)
Instead, I recommend using it when other optimization strategies are not practical, the risk of dirty reads is acceptable, and it actually makes a difference in how quickly the query returns results.
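For reference, the two common ways to request a dirty read (table name is illustrative):
SELECT col1 FROM dbo.MyTable WITH (NOLOCK)
-- or set it once for the whole session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED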
https://www.brentozar.com/archive/2011/11/theres-something-about-nolock-webcast-video/
https://www.brentozar.com/archive/2013/02/7-things-developers-should-know-about-sql-server/
https://www.brentozar.com/archive/2015/03/top-3-mistakes-t-sql-developers-make/
Wednesday, June 22, 2016
Change DB Owner to SA
SELECT SUSER_SNAME(A.owner_sid) "current_owner"
, A.name "database_name"
, 'ALTER AUTHORIZATION ON DATABASE::[' + A.name + '] TO [sa]; '
+ 'USE [' + name + ']; CREATE USER [' + SUSER_SNAME(A.owner_sid) + '] FOR LOGIN [' + SUSER_SNAME(A.owner_sid) + '] WITH DEFAULT_SCHEMA=[dbo]; '
+ 'ALTER ROLE [db_owner] ADD MEMBER [' + SUSER_SNAME(A.owner_sid) + '];'
-- + 'EXEC sp_addrolemember N''db_owner'', N''' + SUSER_SNAME(A.owner_sid) + ''';'
"cmd"
FROM sys.databases A
WHERE SUSER_SNAME(A.owner_sid) NOT IN ( 'sa')
AND A.[state] = 0
ORDER BY A.name
, A.name "database_name"
, 'ALTER AUTHORIZATION ON DATABASE::[' + A.name + '] TO [sa]; '
+ 'USE [' + name + ']; CREATE USER [' + SUSER_SNAME(A.owner_sid) + '] FOR LOGIN [' + SUSER_SNAME(A.owner_sid) + '] WITH DEFAULT_SCHEMA=[dbo]; '
+ 'ALTER ROLE [db_owner] ADD MEMBER [' + SUSER_SNAME(A.owner_sid) + '];'
-- + 'EXEC sp_addrolemember N''db_owner'', N''' + SUSER_SNAME(A.owner_sid) + ''';'
"cmd"
FROM sys.databases A
WHERE SUSER_SNAME(A.owner_sid) NOT IN ( 'sa')
AND A.[state] = 0
ORDER BY A.name
Thursday, June 16, 2016
Enable AD module for PowerShell in Windows Server 2012R2
To enable AD module for PowerShell, fire up Server Manager and go to Features->Remote Server Administration Tools -> Role Administration Tools -> AD DS and AD LDS Tools -> Active Directory module for Windows PowerShell
Friday, June 10, 2016
TSQL to control SQL Server Agent
To Start the sql server agent,
EXEC master.dbo.xp_ServiceControl 'START', 'SQLServerAgent'
To Stop the sql server agent,
EXEC master.dbo.xp_ServiceControl 'STOP', 'SQLServerAgent'
To Check the status of the sql server agent,
EXEC master.dbo.xp_ServiceControl 'QueryState', 'SQLServerAgent'
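Note that xp_ServiceControl is undocumented. On SQL Server 2008 R2 SP1 and later, service state can also be checked (read-only) through a documented DMV:
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services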
Tuesday, May 24, 2016
Calling Powershell script from Control-M
call PowerShell -NoLogo -NoProfile -NonInteractive -ExecutionPolicy "Bypass" -Command "& '\\path\to\pscript\myscript.ps1'" < NUL
Execute Powershell scripts at command line
PowerShell’s default setting does not allow for execution of scripts:
C:\>powershell # enter powershell
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\> Get-ExecutionPolicy # check what the rights are
Restricted
PS C:\> Set-ExecutionPolicy RemoteSigned # reset the rights
PS C:\> Get-ExecutionPolicy # make sure the change took
Now scripts can be executed at the command line.
Enable Disk Cleanup on Windows Server 2008 / 2008 R2
In order to use cleanmgr.exe you’ll need to copy two files that are already present on the server, cleanmgr.exe and cleanmgr.exe.mui. Use the following table to locate the files for your operating system.
Windows Server 2008 R2 64-bit
C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe
Windows Server 2008 R2 64-bit
C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui
Windows Server 2008 64-bit
C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.0.6001.18000_en-us_b9f50b71510436f2\cleanmgr.exe.mui
Windows Server 2008 64-bit
C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_c962d1e515e94269\cleanmgr.exe
Windows Server 2008 32-bit
C:\Windows\winsxs\x86_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.0.6001.18000_en-us_5dd66fed98a6c5bc\cleanmgr.exe.mui
Windows Server 2008 32-bit
C:\Windows\winsxs\x86_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_6d4436615d8bd133\cleanmgr.exe
Once you’ve located the files move them to the following locations:
Cleanmgr.exe should go in %systemroot%\System32.
Cleanmgr.exe.mui should go in %systemroot%\System32\en-US.
Disk Cleanup can now be launched by running Cleanmgr.exe from a command prompt, or by clicking Start and typing Cleanmgr into the Search bar.
https://technet.microsoft.com/en-us/library/ff630161(WS.10).aspx
Wednesday, May 18, 2016
Setting max file size to current physical size
Scenario: On SQL 2008R2 server, Tempdb has unlimited growth max size and is in danger of filling disk.
Need a quick way to set the max size to the current size to prevent further growth until it can be assessed whether an abnormal process is filling TempDB, what the optimal size of TempDB should be, and/or disk space can be added.
SELECT 'ALTER DATABASE [' + db_name(A.database_id)
+ '] MODIFY FILE ( NAME = N''' + A.name
+ ''', MAXSIZE = '
+ cast((B.size_on_disk_bytes/1024) as varchar) + ' KB)'
, A.*
, B.*
FROM master.sys.master_files A
JOIN sys.dm_io_virtual_file_stats( db_id('tempdb'),NULL) B
ON A.file_id = B.file_id
WHERE A.database_id = db_id('tempdb')
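The first column generates one statement per file, along the lines of the following (logical file name and size are illustrative):
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', MAXSIZE = 8388608 KB)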
Wednesday, April 27, 2016
SQL Service Aliases
SQL Service Aliases (example instance name INST01)
NT SERVICE\MSSQL$INST01
NT SERVICE\SQLAgent$INST01
NT SERVICE\MSOLAP$INST01
Monday, April 25, 2016
Who locked my files?
Find out who has shared folder files locked and even force close them.
Use the Computer Management snap-in: navigate to Shared Folders -> Open Files.
Thursday, April 21, 2016
How to transfer logins and passwords between instances of SQL Server
This article describes how to transfer the logins and the passwords between instances of Microsoft SQL Server 2005, of Microsoft SQL Server 2008, and of Microsoft SQL Server 2012 on different servers.
The script in this article creates two stored procedures in the master database: sp_hexadecimal and sp_help_revlogin.
https://support.microsoft.com/en-us/kb/918992
Having the SIDs of sql logins match across multiple instances is especially useful in various scenarios including:
1. Where the production database is frequently restored onto development and test instances, eliminating the need to run sp_change_users_login after every refresh
2. Mirrored and High Availability Group databases so that sql logins on the secondary instance match the users in the database after a failover.
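Typical usage, per the KB article: create the two procedures on the source instance, then generate the login script (the output is CREATE LOGIN statements carrying the original SIDs and password hashes):
EXEC sp_help_revlogin
-- copy the generated script and run it on the destination instance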
Tuesday, April 12, 2016
Troubleshooting Checklist
When I get a call about poor performance, these are the things I check first. What do you check for? Leave comments below...
Moving SQL Agent output file
If you move the location of the SQL Agent output file by changing the parameter in SQL Server Configuration Manager, you may also need to update the registry entry.
Check the value of the ErrorLogFile entry in the following registry hive (the instance ID, MSSQL.1 here, varies by installation):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\SQLServerAgent
Verify that the path in the ErrorLogFile entry matches the new location.
Wednesday, March 30, 2016
Database Health Assessments
Database configuration and maintenance challenges can, if not addressed, cause serious disruptions to database environments and an organization’s ongoing business. A small problem can go unnoticed today but become a larger problem tomorrow that impacts database and company performance, and data security and protection.
KEY AREAS
Backups
Security
Configuration
Maintenance
Disaster recovery
BACKUPS
- Database backups are a standard task and fundamental core responsibility
- Two most common issues that require a database restoration are user errors and hardware failure
- The database is a constantly changing part of a company’s technology landscape. In fact, studies show that a typical database doubles in size every three years. As a database grows, the backup procedures in place do not keep up with the many additions and changes that are made to its structure.
- Point-in-time recovery provides the capability so that in the event of failure, the database can be recovered to the time right before the database failure
- Backups need to be planned in such a way as to make point-in-time recovery possible
- Since most full backups are scheduled at most daily, without point-in-time recovery the organization will lose up to a full day's worth of transactions
- A large number of broken transaction log chains due to missing backup files
- Only one path to restore a database to the most recent PIT (versus multiple full backups with appropriate differential and transaction log backups)
- A lack of multiple backup chains available (even though disk space is inexpensive)
- Corrupt backup files resulting from a lack of continuous testing. Backups are useless until verified.
- Time and experience demonstrate that a database will eventually fail due to a disk subsystem failure, corrupted metadata, upgrade problems, etc. To meet established RTO SLAs, you must have a complete set of backups. In the event of a disaster, your backup strategy and its execution must meet or exceed the expected levels of data retention.
- For example, a mission-critical database will require near zero data loss in a crash scenario in many cases. Under these business parameters, the database must be in full recovery model and have transaction log backups every 5 to 15 minutes (based on SLAs).
- Many databases in FULL recovery model had no transaction log backups occurring. This scenario leads to several major problems (a quick detection query follows this list):
- The transaction log is not truncated and internal log segments are not marked for reuse unless a transaction log backup is taken (for a database in FULL recovery model).
- This leads to an ever-growing transaction log. It is not uncommon to find a database with the transaction log file multiple times larger than the data file — the log at a 100 percent usage level and continuing to grow.
- In addition, point-in-time recovery is not guaranteed since there is no guarantee that the log can be backed up during a crash scenario.
- Other common mistakes include failing to back up system databases and failing to add recently created databases to existing backup maintenance plans.
- Ensure that the current backup strategy meets the business needs on a database by database basis.
- Ensure that databases are in their proper recovery model, and have the proper backup strategy configured, including nightly full/differential backups and appropriate transaction log backups occurring at a frequency that meets required SLAs for data loss.
- Your organization must also ensure that its database backups are protected by designing and implementing off-site replication solutions for those backups, whether it is to an off-site facility or the cloud.
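A quick detection query for the FULL-recovery/no-log-backup problem noted above (a sketch; assumes msdb backup history has not been purged, and the 24-hour threshold is illustrative):
SELECT d.name, d.recovery_model_desc,
       MAX(b.backup_finish_date) AS last_log_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
  ON b.database_name = d.name AND b.type = 'L'
WHERE d.recovery_model_desc = 'FULL'
  AND d.database_id > 4 -- skip system databases
GROUP BY d.name, d.recovery_model_desc
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())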
SECURITY
- Security vulnerabilities can go unnoticed since they do not impede regular use of the data. In many environments, if the data can be accurately and reliably accessed, little thought is given to the level of access given to the accounts that use the data or the robustness of the security surrounding those accounts. Security vulnerabilities can account for a variety of database problems, including loss of data, undesired alteration of data, and malicious or unintended modifications to database structures.
- Instance-level logins with weak passwords including accounts with an empty password, the same password as the login name, a one character password, or a password with the reverse of the login name.
- These easily-guessed passwords lead to easy and unintended access into the database environment.
- What is even more damaging is that many of these accounts have elevated permissions on either specific databases or the entire instance.
- Lastly, many environments contained far more sysadmin level accounts than necessary, such as developer logins with full privileges to perform an activity in production instances.
- Remove excess privileges and provide the correct levels of security access needed to perform certain tasks
- Remove write permissions for accounts that only need to run reports. This limits the damage a compromised account can do.
- When changing passwords and limiting security, it is a good time to trace the security needs down to specific objects and rights to ensure a minimized attack surface.
- Properly segregate responsibilities and rights across your environment through the planning of appropriate development, testing, and production deployment strategies
CONFIGURATION
- It is extremely common to find SQL Server instances (and databases) configured with their out-of-box default settings permanently left in that state. Unfortunately, there are performance and security risks associated with the default configuration. When first launched, the performance impact is not noticed. However, as databases grow and their associated transactional activity increases — along with resource needs — the performance penalties of default configurations can severely impact the environment’s operations. Worse yet, the impact of default configurations from a security perspective is immediately detrimental and exposes a company’s environment to inadvertent and malicious attacks.
- Service packs and cumulative updates were not installed — all essential in plugging security holes, fixing program bugs, and enabling additional features to improve the environment’s efficiency and performance.
- Many configuration errors involving SQL memory settings were discovered. At default settings, the SQL Server buffer pool will expand until it causes contention with the operating system and other third-party services. This causes the entire server to slow down, perform unneeded I/O, and sometimes even hang or crash
- Many default settings were found at the database level, which can lead to performance problems such as file growth factors. For example, on a large multi-GB file, the default 10% growth factor can lead to many gigabytes of allocation in a single growth session, which can cause significant I/O pauses.
- Businesses must optimize configuration settings and install security updates to ensure the best performance and bug-free operation of their databases. By following these protocols, your organization’s environment will be better protected and function at a more efficient level.
- Database administrators (DBAs) must develop and execute upgrade plans for standalone instances as well as those that utilize advanced features such as mirroring, Microsoft Cluster Server (MSCS), replication, and Always On Availability Groups.
- Continually manage the reconfiguration of memory and other instance-level settings to ensure optimal use of server resources.
- From a database level, DBAs must make sure that the settings are aligned with the functionality and performance. For example, it is important to determine the proper file growth factors by taking into account the frequency of growth, the size of the growth session, and the time the growth occurs.
- Physical Memory - Often it is found that the infrastructure had adequate memory but the SQL Server instance was not able to utilize it for a variety of reasons. Consider virtualization technology, which is being deployed increasingly in virtualized database environments where extra care and attention is required
- CPU Pressure - When unacceptable CPU pressure occurs, the root cause is either a configuration setting or inefficient TSQL code. Reconfigure instance settings and/or tune TSQL statements to reduce or eliminate CPU pressure and the need to add hardware.
- Disk Layout - Pay special attention to the disk sub-system and how it interacts with Microsoft SQL Server databases, checking to see how it is optimized for performance and redundancy. Also check the physical layout of the databases with special attention given to data and log files for both user and system types of databases.
- Reconfigure instance settings and/or tune TSQL statements
- Consider virtualization technology
MAINTENANCE
- Database maintenance keeps a SQL Server environment running at optimum performance and can stop an impending disaster before it occurs. Maintenance routines act as automated tuning agents that preemptively remedy a host of issues that can occur naturally inside the databases such as index fragmentation, early database corruption detection, and system databases and file system cleanup.
- Missing or improperly configured maintenance plans
- No regular defragmentation is done on database indexes to ensure efficient data access and avoid unnecessary additional I/O. Similar to the defragmentation done on a server or PC hard drive, the same needs to be done on database indexes.
- Regular validation of data pages was not performed to ensure no corruption. Undetected corruption can grow over time. A database may appear to be fully operational while the corruption is being backed up. At some point, the corruption can spread and cause the database to become unusable. Unfortunately when that happens, the problem has already been backed up for awhile. Therefore, it is critical to detect corruption and handle it as soon as possible to eliminate or minimize data loss.
- Implement comprehensive set of best practices maintenance procedures. These procedures ensure database structures, such as indexes and statistics, are regularly defragmented and updated — resulting in the repair of lost performance due to naturally occurring data changes.
- Proper Purging Routines - Each instance keeps a history (e.g. for SQL jobs) that is kept in the MSDB system database and in the file system. It is important that this history is purged or archived on a regular basis to ensure optimal database performance
- Index Maintenance - To speed database queries, it is common practice to add indexes to key tables within the database. As data is added and removed from a table that contains an index, the index itself can be affected. Over time, these indexes can become lopsided and heavily fragmented and severely impact the performance of the database. To ensure optimal performance of a SQL Server database, it is a best practice to perform maintenance on indexes (a sample fragmentation check follows this list).
- Data Integrity Checks - A data integrity failure can start small and then grow over time. If an integrity failure is left uncorrected, it can make its way into database backups. Eventually a database will go suspect or unavailable. At that point, data loss is inevitable and can be potentially quite large. Perform regular data integrity checks
- Perform regular data integrity checks
- Perform maintenance on indexes
- Purge or archive instance histories on a regular basis
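As a starting point for the index-maintenance item, a sketch against the current database using the standard DMV (the 30 percent threshold is the usual rule of thumb, not a hard rule):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL -- exclude heaps
ORDER BY ips.avg_fragmentation_in_percent DESC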
DISASTER RECOVERY
- A disaster recovery (DR) plan is crucial in the event of a critical failure at the primary datacenter site and also during small-scale local events, such as extended power outages.
- There is no one-size-fits-all DR solution across such a diverse range of companies because of budget fluctuations and differences in the DR appropriations.
- DR plans could not meet the service level agreements (SLAs) for recovery time objectives (RTO) and/or recovery point objectives (RPO).
- Backups were shipped off-site to a safe location but no facility was in place to restore them or the related instance-level objects in a timely manner.
- Customers assumed that periodically restoring backups on a remote server would allow for recovery in the event of failure — a process that doesn’t meet the requirements for a complete and usable DR site.
- Many customers failed to successfully test and validate DR procedures and plans.
- An organization can achieve its DR goals by employing a variety of technologies, including log shipping, database mirroring, geo-dispersed clusters, SAN/virtual machine replication, and Availability Groups.
- The DR infrastructure and procedures must be implemented based on best practices and rigorously tested on an on-going basis.
- Each organization must take a tailored approach to effectively balance RTO and RPO needs against the implementation and maintenance costs.
- Consider a cloud service that eliminates the costs associated with the physical hardware and software of an on-premises solution.
- Microsoft lessens the cost burden of disaster recovery and fosters the concept of maintaining a DR site by requiring passive sites to pay licenses only once they become production sites for a certain period of time.
Friday, March 25, 2016
When To Break Down Complex Queries
Four problematic query patterns:
OR logic in the WHERE clause
In this pattern, the condition on each side of the OR operator in the WHERE or JOIN clause evaluates different tables. This can be resolved by using a UNION operator instead of the OR operator in the WHERE or JOIN clause (see the sketch after this list).
Aggregations in intermediate results sets
This pattern has joins on aggregated data sets, which can result in poor performance. This can be resolved by placing the aggregated intermediate result sets in temporary tables.
A large number of very complex joins
This pattern has a large number of joins, especially joins on ranges, which can result in poor performance because of progressively degrading estimates of cardinality. This can be resolved by breaking down the query and using temporary tables.
A CASE clause in the WHERE or JOIN clause
This pattern has CASE operators in the WHERE or JOIN clauses, which cause poor estimates of cardinality. This can be resolved by breaking down the cases into separate queries and using the Transact-SQL IF statement to direct the flow for the conditions.
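A sketch of the first pattern (tables and columns are hypothetical):
-- OR across two different tables: the optimizer struggles to estimate this
SELECT o.OrderID
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE o.Region = 'West' OR c.Region = 'West'
-- rewritten with UNION so each branch gets its own plan and indexes
SELECT o.OrderID
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE o.Region = 'West'
UNION
SELECT o.OrderID
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE c.Region = 'West'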
Default Trace
Often overlooked, the default trace is there with some useful audit info should you need it.
--see if default trace is enabled
SELECT *
FROM sys.configurations
WHERE configuration_id = 1568
--list events that are captured
SELECT distinct E.name "EventName"
FROM fn_trace_geteventinfo(1) I
INNER JOIN sys.trace_events E
ON I.eventid = E.trace_event_id
--path to trace file
SELECT reverse(substring(reverse(path), charindex('\',reverse(path)), 256)) "default_tracepath"
FROM sys.traces
WHERE is_default = 1
--query trace file
SELECT *
FROM sys.fn_trace_gettable(convert(varchar(150),
( SELECT top 1
FROM sys.fn_trace_getinfo(NULL) f
WHERE f.property = 2 )), default ) T
INNER JOIN sys.trace_events E
ON T.eventclass = E.trace_event_id
Wednesday, March 23, 2016
SET STATISTICS IO
Displays the amount of disk activity that was generated by a query. A usage sketch follows the output key.
Meaning of output:
• Logical reads—Number of pages read from the buffer pool.
• Physical reads—Number of pages read from disk.
• Read-ahead reads—Number of pages read from disk by the read-ahead mechanism, a performance optimization that anticipates the needed data pages and can read up to 64 contiguous pages from one data file.
• Lob logical reads—Number of large object (LOB) pages read from the buffer pool.
• Lob physical reads—Number of LOB pages read from disk.
• Lob read-ahead reads—Number of LOB pages read from disk using the read-ahead mechanism.
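A minimal usage sketch; dbo.SomeTable is a placeholder for any table:

SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.SomeTable;
SET STATISTICS IO OFF;
-- The Messages tab then reports something like:
-- Table 'SomeTable'. Scan count 1, logical reads 42, physical reads 0, read-ahead reads 0, ...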
Monday, March 21, 2016
Best practices for Virtualizing SQL Server 2012
http://download.microsoft.com/download/6/1/d/61dde9b6-ab46-48ca-8380-d7714c9cb1ab/best_practices_for_virtualizing_and_managing_sql_server_2012.pdf
Predicate patterns that degrade the query processor's cardinality estimates (a sketch of one pattern follows this list):
• Queries with predicates that use comparison operators between different columns of the same table.
• Queries with predicates that use operators, and any one of the following is true:
o There are no statistics on the columns involved on either side of the operators.
o The distribution of values in the statistics is not uniform, but the query seeks a highly selective value set. This situation can be especially true if the operator is anything other than the equality (=) operator.
o The predicate uses the not equal to (!=) comparison operator or the NOT logical operator.
• Queries that use any of the SQL Server built-in functions or a scalar-valued, user-defined function whose argument is not a constant value.
• Queries that involve joining columns through arithmetic or string concatenation operators.
• Queries that compare variables whose values are not known when the query is compiled and optimized.
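A sketch of the built-in-function pattern and its sargable rewrite; dbo.Orders and OrderDate are hypothetical names:

-- A function over a column hides it from column statistics: poor estimate,
-- and no index seek on OrderDate.
SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2016;

-- Equivalent range predicate: the optimizer can use statistics and an
-- index on OrderDate.
SELECT * FROM dbo.Orders
WHERE OrderDate >= '20160101' AND OrderDate < '20170101';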
table variable myths
Table variables are not "in memory" tables; they function basically the same as #temporary tables and even create objects in tempdb.
The difference is that table variables are "lightweight" temporary tables: they exist only for the scope of the batch, and there are no statistics associated with them (the optimizer assumes only one row in the table). Also, you cannot create indexes on them (aside from primary key and unique constraints).
So table variables can improve performance and consume fewer resources with small data sets, while larger data sets often perform better in temporary tables. A sketch of the two declarations follows.
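A minimal sketch contrasting the two; the names are hypothetical:

-- Table variable: scoped to the batch, no column statistics,
-- optimizer assumes ~1 row; only PK/unique constraints allowed.
DECLARE @small TABLE ( id int PRIMARY KEY, val varchar(50) );

-- Temp table: lives in tempdb for the session, gets statistics,
-- and supports additional indexes.
CREATE TABLE #large ( id int PRIMARY KEY, val varchar(50) );
CREATE INDEX ix_large_val ON #large (val);
DROP TABLE #large;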
SQL SERVER 2014 DATA TOOLS
For reasons as yet unknown, Microsoft decided to stop shipping SQL Server Data Tools with the installation media. Adding to the confusion, the tools are now split into different versions.
There is one version named just SQL Server Data Tools (SSDT), which is an add-on to Visual Studio. This version adds the toolset that was formerly known as Data Dude to Visual Studio. These SSDT tools include data and schema comparison, as well as support for refactoring databases, creating views, stored procedures, functions, and triggers. This version of SSDT provides both a live development mode and an offline project mode that helps provide source control for your databases by integrating with Visual Studio for team development, source control, and change tracking. There are different versions of these SSDT tools depending on the version of Visual Studio that you are using.
• SQL Server tooling in Visual Studio 2013—Visual Studio 2013 Express for Web, Express for Windows Desktop, Professional, Premium, and Ultimate include the SSDT tools. You don’t need a separate download. To check for the latest version of SSDT, open Visual Studio 2013 and choose the Tools, Extensions, and Updates menu. Then check the Updates section for Microsoft SQL Server Update for database tooling.
• SSDT Visual Studio 2012—there is a standalone install experience as well as an integrated install for the Visual Studio Professional, Premium, and Ultimate SKUs.
• SSDT Visual Studio 2010—this version of SSDT is no longer being updated.
The other version of SSDT is confusingly called SQL Server Data Tools – Business Intelligence (SSDT-BI). Although their names are almost identical, SSDT-BI is a distinctly different toolset from SSDT. SSDT-BI is the replacement for BIDS, and it enables the development of Integration Services packages, Analysis Services cubes, and Reporting Services reports. Both versions of SSDT are no-cost downloads for SQL Server users.
Source: 55144BC SQL SERVER PERFORMANCE TUNING AND OPTIMIZATION
Tuesday, March 15, 2016
i am the very model of a modern data professional
I am the very model of a modern Data-Professional,
I've information vegetable, animal, and mineral,
I know the kings of England, and I quote the fights historical
From Marathon to Waterloo, in order categorical;
I'm very well acquainted, too, with matters mathematical,
I understand equations, both the simple and quadratical,
About binomial theorem I'm teeming with a lot o' news,
With many cheerful facts about the square of the hypotenuse.
I'm very good at integral and differential calculus;
I know the scientific names of beings animalculous:
In short, in matters vegetable, animal, and mineral,
I am the very model of a modern Data-Professional.
I know our mythic history, King Arthur's and Sir Caradoc's;
I answer hard acrostics, I've a pretty taste for paradox,
I quote in elegiacs all the crimes of Heliogabalus,
In conics I can floor peculiarities parabolous;
I can tell undoubted Raphaels from Gerard Dows and Zoffanies,
I know the croaking chorus from The Frogs of Aristophanes!
Then I can hum a fugue of which I've heard the music's din afore,
And whistle all the airs from that infernal nonsense Pinafore.
Then I can write a washing bill in Babylonic cuneiform,
And tell you ev'ry detail of Caractacus's uniform:
In short, in matters vegetable, animal, and mineral,
I am the very model of a modern Data-Professional.
In fact, when I know what is meant by "mamelon" and "ravelin",
When I can tell at sight a Mauser rifle from a javelin,
When such affairs as sorties and surprises I'm more wary at,
And when I know precisely what is meant by "commissariat",
When I have learnt what progress has been made in modern gunnery,
When I know more of tactics than a novice in a nunnery –
In short, when I've a smattering of elemental strategy –
You'll say a better Data-Professional has never sat a gee.
For my military knowledge, though I'm plucky and adventury,
Has only been brought down to the beginning of the century;
But still, in matters vegetable, animal, and mineral,
I am the very model of a modern Data-Professional.
Wednesday, February 17, 2016
before restarting that instance...
Check number of VLFs (see the query after this list) - virtual log file counts in the 1000s will mean longer recovery time
Check size of database and transaction log files - the bigger the file sizes, the longer it will take to recover
Check size of tempdb file definitions - tempdb will be rebuilt; is it properly sized and in the right location? Also, the bigger the file sizes, the longer it will take to allocate the space.
When was the last Full / Incremental / Transaction log backup taken? If possible, kick off another transaction log backup before restarting.
Check SQL Server Agent jobs and make sure no big jobs are running, then stop the Agent
Update your resume and/or LinkedIn profile :)
After restart:
- If smaller databases and minimal impact, run DBCC CHECKDB WITH PHYSICAL_ONLY
- Check SQL Agent; start it if it has not started automatically
- Trace flags not defined in startup parameters will be lost
- Cumulative statistics in the DMVs (wait stats, index usage, etc.) were reset
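A sketch of the VLF check; DBCC LOGINFO returns one row per virtual log file (on SQL Server 2016 SP2 and later, sys.dm_db_log_info exposes the same information as a DMF):

-- Run in the database of interest; the number of rows returned
-- equals the VLF count for that database's transaction log.
DBCC LOGINFO;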
Wednesday, February 3, 2016
Fix Logins -> DB Users
When a database is restored from one server to another, the database users are still mapped to the SIDs of the logins on the original server. To remap the database users to the logins on the gaining server, you can drop and recreate each user or run sp_change_users_login. The script below runs this for all SQL users in the database.
USE [db_name]
GO
SET NOCOUNT ON
GO
declare @tab TABLE ( id int identity(1,1), sqlcmd nvarchar(4000) )
declare @id int, @sqlcmd nvarchar(4000)
-- build one sp_change_users_login command per SQL user whose name matches a login
insert into @tab ( sqlcmd )
select 'exec sp_change_users_login ''update_one'', ''' + A.name + ''',''' + B.name + ''''
from sys.sysusers A
inner join sys.syslogins B
on lower(A.name) collate SQL_Latin1_General_CP1_CI_AS = lower(B.name) collate SQL_Latin1_General_CP1_CI_AS
where issqluser = 1
-- execute the generated commands, last to first
select @id = max(id) from @tab
while @id > 0
begin
select @sqlcmd = sqlcmd from @tab where id = @id
print @sqlcmd
exec sp_executesql @sqlcmd
set @id = @id - 1
end
GO
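Note that sp_change_users_login is deprecated in later versions of SQL Server; the modern equivalent, sketched here with placeholder names, is:

-- Remap one database user to a server login by name.
ALTER USER [username] WITH LOGIN = [loginname];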
Wednesday, January 27, 2016
Cost Threshold for Parallelism
Was experiencing high CPU usage (95%+) on a 4-vCPU server I had "inherited".
The server has SQL Server, Reporting Services, and Integration Services running on it.
Cost Threshold for Parallelism was set to 25. Increased it to 120 and immediately saw a drop in CPU contention.
Average CPU usage dropped to 25%-30% with short-lived spikes instead of long-running contention.
Lowered it to 60 and saw the contention return, then increased it in steps of 10 up to 100 until the contention was again relieved. A sketch of the change follows.
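How the setting is changed (the value shown is the one that worked on this server; tune for your own workload):

-- cost threshold for parallelism is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 100;
RECONFIGURE;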
Monday, January 25, 2016
sp_recompile
Causes stored procedures, triggers, and user-defined functions to be recompiled the next time that they are run. It does this by dropping the existing plan from the procedure cache, forcing a new plan to be created the next time that the procedure or trigger is run. In a SQL Server Profiler collection, the event SP:CacheInsert is logged instead of the event SP:Recompile.
https://msdn.microsoft.com/en-us/library/ms181647(v=sql.110).aspx
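A usage sketch; dbo.SomeProc is a placeholder object name:

EXEC sp_recompile N'dbo.SomeProc';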
sp_refreshview
Updates the metadata for the specified non-schema-bound view. Persistent metadata for a view can become outdated because of changes to the underlying objects upon which the view depends.
https://msdn.microsoft.com/en-us/library/ms187821(v=sql.110).aspx
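A usage sketch; dbo.SomeView is a placeholder view name:

EXEC sp_refreshview N'dbo.SomeView';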
Friday, January 15, 2016
Log Reuse Waits
SELECT name
, log_reuse_wait_desc
FROM sys.databases;
- NOTHING - VLFs available
- CHECKPOINT - dirty pages in the buffer pool awaiting a checkpoint; issue a checkpoint and/or adjust the RECOVERY INTERVAL setting
- LOG_BACKUP - waiting for the next transaction log backup
- ACTIVE_BACKUP_OR_RESTORE - transaction log backup in progress or "hung"
- ACTIVE_TRANSACTION - open transaction; use DBCC OPENTRAN to identify it (see the sketch after this list)
- DATABASE_MIRRORING - secondary falls behind
- REPLICATION - subscriber falls behind
- DATABASE_SNAPSHOT_CREATION - transient; snapshot creation in progress
- LOG_SCAN - transient; caused by fn_dblog, by the log scan a checkpoint initiates to synchronize log sequence numbers, or by another process that scans the log
- OTHER_TRANSIENT - currently not in use
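A sketch of identifying the open transaction; 'db_name' is a placeholder:

-- Reports the oldest active transaction (plus any distributed or
-- replicated transactions) in the named database.
DBCC OPENTRAN ('db_name');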
DATABASEPROPERTYEX
DATABASEPROPERTYEX ( database, property )
database is the name of the database. It is of type nvarchar(128).
property is the option or property setting to be returned. It is also of type nvarchar(128). The possible property names are listed below; unless noted otherwise, a property returns 1 = TRUE, 0 = FALSE, and NULL on invalid input.
Collation - Default collation name for the database. Returns the collation name.
IsAnsiNullDefault - Database follows SQL-92 rules for allowing null values.
IsAnsiNullsEnabled - All comparisons to a null evaluate to unknown.
IsAnsiPaddingEnabled - Strings are padded to the same length before comparison or insert.
IsAnsiWarningsEnabled - Error or warning messages are issued when standard error conditions occur.
IsArithmeticAbortEnabled - Queries are terminated when an overflow or divide-by-zero error occurs during query execution.
IsAutoClose - Database shuts down cleanly and frees resources after the last user exits.
IsAutoCreateStatistics - Statistics missing during query optimization are automatically created.
IsAutoShrink - Database files are candidates for automatic periodic shrinking.
IsAutoUpdateStatistics - Existing statistics are automatically updated when they become out-of-date because the data in the tables has changed.
IsCloseCursorsOnCommitEnabled - Cursors that are open when a transaction is committed are closed.
IsFulltextEnabled - Database is full-text enabled.
IsInStandBy - Database is online as read-only, with restore log allowed.
IsLocalCursorsDefault - Cursor declarations default to LOCAL.
IsMergePublished - The tables of the database can be published for merge replication, if replication is installed.
IsNullConcat - Null concatenation operand yields NULL.
IsNumericRoundAbortEnabled - Errors are generated when loss of precision occurs in expressions.
IsPublished - The tables of the database can be published for snapshot or transactional replication, if replication is installed.
IsQuotedIdentifiersEnabled - Double quotation marks can be used on identifiers.
IsRecursiveTriggersEnabled - Recursive firing of triggers is enabled.
IsSubscribed - Database can be subscribed for publication.
IsTornPageDetectionEnabled - SQL Server detects incomplete I/O operations caused by power failures or other system outages.
Recovery - Recovery model for the database. Returns FULL, BULK_LOGGED, or SIMPLE.
SQLSortOrder - SQL Server sort order ID supported in previous versions of SQL Server. Returns 0 if the database uses a Windows collation, otherwise the SQL Server sort order ID.
Status - Database status. Returns ONLINE (available for query), OFFLINE (explicitly taken offline), RESTORING, RECOVERING (recovering and not yet ready for queries), or SUSPECT (cannot be recovered).
Updateability - Indicates whether data can be modified. Returns READ_ONLY or READ_WRITE.
UserAccess - Indicates which users can access the database. Returns SINGLE_USER (only one db_owner, dbcreator, or sysadmin user at a time), RESTRICTED_USER (only members of the db_owner, dbcreator, and sysadmin roles), or MULTI_USER (all users).
Version - Internal version number of the SQL Server code with which the database was created; for internal use by SQL Server tools and in upgrade processing. Returns the version number if the database is open, NULL if it is closed.
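A usage sketch:

-- Check the recovery model and current status of a database.
SELECT DATABASEPROPERTYEX('master', 'Recovery') AS RecoveryModel
, DATABASEPROPERTYEX('master', 'Status') AS DatabaseStatus;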