DECLARE @tabs TABLE
( id int identity(1,1) not null
, tabname sysname not null
);
DECLARE @i int
, @tabname sysname;
INSERT INTO @tabs ( tabname )
SELECT name
FROM sysobjects
WHERE type = 'U';
SELECT @i = max(id) FROM @tabs;
WHILE @i > 0
BEGIN
SELECT @tabname = tabname FROM @tabs WHERE id = @i
PRINT 'UPDATE STATISTICS ' + @tabname + ' WITH ALL'
SET @i = @i - 1
END;
GO
Monday, December 5, 2011
Update Statistics
ISNULL, COALESCE
1. Data type determination of the resulting expression: ISNULL uses the first parameter's type, while COALESCE follows the CASE expression rules and returns the type of the value with the highest precedence.
2. The NULLability of the result expression can be different for ISNULL and COALESCE. The ISNULL return value is considered NOT NULLable if the argument it returns is non-nullable (based on a non-null column or a constant), whereas the COALESCE return value is not. So the expressions ISNULL(NULL, 1) and COALESCE(NULL, 1), although equivalent, have different NULLability values. This makes a difference if you are using these expressions in computed columns and creating key constraints, or when making the return value of a scalar UDF deterministic so that it can be indexed.
3. Validation of arguments is also different for ISNULL and COALESCE. For example, a NULL value for ISNULL is converted to int, whereas for COALESCE you have to provide a type. Ex:
ISNULL(NULL, NULL) -- is int
COALESCE(NULL, NULL) -- Will throw an error
COALESCE(CAST(NULL as int), NULL) -- is valid and returns int
4. ISNULL takes only 2 parameters, whereas COALESCE takes a variable number of parameters. You have to nest ISNULL calls to get the same effect as COALESCE.
5. COALESCE is based on the ANSI SQL standard, whereas ISNULL is a proprietary T-SQL built-in function.
6. You could get different plans for queries using ISNULL and COALESCE if the expressions involve scalar sub-queries. This can make a performance difference, and some queries with COALESCE may be sub-optimal.
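Point 2 is easy to demonstrate with computed columns; a minimal sketch (table and column names are made up):
CREATE TABLE dbo.NullabilityDemo
( val int NULL
, val_isnull AS ISNULL(val, 0) -- treated as NOT NULLable, so it can back a PRIMARY KEY
, val_coalesce AS COALESCE(val, 0) -- treated as NULLable, so it cannot
);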
Wednesday, November 30, 2011
Gerunds
The way to implement a "many-to-many" relationship requires that an additional table be created consisting of the primary keys from each of the entities (see the sketch after the list below). This additional table goes by many different names depending on who you talk to.
Association table
Bridge table
Cross-reference (xref) table
Intersection tables
Join table (perhaps most common)
Junction table
Link table, link entity
Many-to-many relationship tables
Map table
Reference table
Relationship table
Swing table
Probably the most proper (being a mathematical model) but least used is "Gerund" -- so named by E.F. Codd, creator of the relational model. This term applies because an entity is functioning as a relationship.
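For example, a minimal junction table between two hypothetical entities (all names made up):
CREATE TABLE student ( student_id int NOT NULL PRIMARY KEY );
CREATE TABLE course ( course_id int NOT NULL PRIMARY KEY );
CREATE TABLE student_course
( student_id int NOT NULL REFERENCES student (student_id)
, course_id int NOT NULL REFERENCES course (course_id)
, PRIMARY KEY (student_id, course_id)
);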
Tuesday, November 22, 2011
Don't Criticize Code
Editorial - Don't Criticize Code
By Phil Factor
SQLServerCentral.com
After many years spent supervising development teams, I've come to recognize warning signs of problems ahead. There are many such signs, but the following is one that always makes me cringe: a developer looks at code that he or she is assigned to maintain, and says something like "Who on earth wrote such stupid and xxxx (unstructured/ mindless/ unintelligible/etc.) code?"
There is a developer showing his inexperience. Only adolescents writing their first application in some exciting new computer language in their bedroom ever believe that it is possible to write perfect code, or even that they are capable of doing it. Intensive commercial experience as a developer normally grinds off the idealism and naivety that inspires this attitude towards legacy code.
Never is it a good idea to criticize legacy code, let alone the poor developer responsible for it, particularly if you don't know the history of the application and the circumstances of the code being written. If you don't show the code due respect, you are in danger of refactoring it into something worse. You probably aren't aware of the circumstances under which the code was written, or the real brief given to the programmer. The pressures of coding in a commercial development project are difficult to describe, but it is perfectly natural under such conditions to take decisions that, in retrospect look daft, but which actually saved the project.
Legacy code is something I pore through with eagerness, as if it were a rich historical archive; even if it merely illustrates the coder's drift into lunacy. Many times, when doing so, I have been jolted out of a growing sense of contempt by coming across a technique or insight that is brilliant. This real sense of humility, when faced with the results of human endeavor under pressure, always served me well in my work as a consultant. When you've been called in to fix a project that is about to hit the wall, making any suggestion of criticism is probably the least helpful thing you could possibly do. Beyond the easy job of fixing things, comes the more difficult trick of tactfully attributing their original predicament to a cruel act of Fate.
For anyone eager for a long-term career in IT Development, humility and tact are probably more important habits to acquire than demon coding-skills.
SQLServerCentral.com
Friday, November 18, 2011
Title Case / Proper Case
CREATE FUNCTION [dbo].[fnTitleCase]( @text AS varchar(8000) )
RETURNS varchar(8000)
AS
BEGIN
DECLARE @Reset bit;
DECLARE @Ret varchar(8000);
DECLARE @i int;
DECLARE @c char(1);
SELECT @Reset = 1, @i = 1, @Ret = '';
WHILE (@i <= len(@text))
BEGIN
SELECT @c = substring(@text,@i,1)
, @Ret = @Ret + case when @Reset = 1 then upper(@c) else lower(@c) end
, @Reset = case when @c like '[a-zA-Z]' then 0 else 1 end
, @i = @i + 1
END
RETURN @Ret
END
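For example, per the reset-on-non-letter rule above:
SELECT dbo.fnTitleCase('annual REPORT for o''brien county');
-- returns: Annual Report For O'Brien County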
Thursday, November 17, 2011
3rd Party References
In Visual Studio I usually get errors when using 3rd party controls due to the assemblies not being deployed along with the application.
To resolve this, I put copies of the control assemblies in a sub-folder of the project called "References" and then add those files to the references in the project references in the solution explorer using browse.
You can then go to the properties page of the reference and set "Copy Local" to true. This will ensure that the module gets deployed to the "bin" folder of the app.
Monday, November 14, 2011
Changing Collation on Columns
Had a problem with this after importing data from one database to another using "SELECT INTO" over a link. A co-worker developed this, so kudos go to him...
declare @table_name varchar(50)
declare @column varchar(50)
declare @datatype varchar(50)
declare @nullornot varchar(15)
declare @length varchar(3)
declare @getprops cursor
SET @getprops = CURSOR for
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
, CASE WHEN IS_NULLABLE = 'YES' THEN 'NULL' ELSE 'NOT NULL' END AS NLL
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = 'Metrics'
AND COLLATION_NAME = 'SQL_Latin1_General_CP1_CI_AS'
OPEN @getprops
FETCH NEXT FROM @getprops into @table_name, @column, @datatype, @length, @nullornot
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT N'ALTER TABLE ' + @table_name + N' ALTER COLUMN ' + @column
PRINT @datatype + N'(' + @length + N') COLLATE SQL_Latin1_General_CP850_CI_AS ' + @nullornot
PRINT N'GO'
PRINT N''
FETCH NEXT FROM @getprops into @table_name, @column, @datatype, @length, @nullornot
END
CLOSE @getprops
DEALLOCATE @getprops
GO
Monday, October 31, 2011
MSAccess: Launch Form on Startup
I recently had to start doing stuff in MS Access 2007. It has been a great many moons since I've worked with Access and, of course, all of the menus have changed.
Now, to get to the settings page for the "Application", instead of "Tools"-->"Options", you now click the "Office Button" and down at the bottom of the menu page that pops up is "Access Options".
This is one location you can specify which form you want to open up automatically when you open the access file.
A nice holdover from previous versions gives another, more flexible method. You can create a macro named "AutoExec" that can launch a macro, form, code module procedure, etc.
To override the default MS Access splash screen, create a 1x1 1-bit bitmap file, give it the same name as your MS Access project (ie: "myproject.bmp"), and place it in the same folder as your MS Access project "*.accdb" file.
MSAccess: GetNetUser()
If you use "CurrentUser()" you will as often as not get "Admin", which is the default Access user.
To get the user's windows authentication login name, place the following code at the top of a standard module:
Private Declare Function WNetGetUserA Lib "mpr.dll" _
(ByVal lpszLocalName As String, ByVal lpszUserName As String, lpcchBuffer As Long) As Long
Public Function GetNetUser() As String
Dim lpUserName As String, lpnLength As Long, lResult As Long
'Create a buffer
lpUserName = String(256, Chr$(0))
'Get the network user
lResult = WNetGetUserA(vbNullString, lpUserName, 256)
If lResult = 0 Then
GetNetUser = Left$(lpUserName, InStr(1, lpUserName, Chr$(0)) - 1)
Else
GetNetUser = "-unknown-"
End If
End Function
It can be used in a query as GetNetUser() and in the control source of a text box on a form or report as =GetNetUser()
Thursday, September 29, 2011
Enable HttpPost in .net Web Service
You will need to add the following to your Web.config:
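A minimal sketch of the relevant section (HttpGet, HttpPost, and Documentation are the documented protocol names; it goes inside system.web):
<configuration>
  <system.web>
    <webServices>
      <protocols>
        <add name="HttpGet"/>
        <add name="HttpPost"/>
        <add name="Documentation"/>
      </protocols>
    </webServices>
  </system.web>
</configuration>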
If you want to turn off one of the protocols, just comment out that add line. The "Documentation" protocol is the .net generated WSDL.
Wednesday, June 29, 2011
Example Function to Look up Top Level Org
CREATE OR REPLACE FUNCTION f_getTopOrg ( orgID IN MYSCHEMA.t_organization.organizationid%TYPE )
RETURN MYSCHEMA.t_organization.organizationid%TYPE
IS
pOrgID MYSCHEMA.t_organization.organizationid%TYPE;
cOrgID MYSCHEMA.t_organization.organizationid%TYPE;
BEGIN
pOrgID := orgID;
WHILE pOrgID IS NOT NULL
LOOP
cOrgID := pOrgID;
SELECT A.parentorganizationid
INTO pOrgID
FROM MYSCHEMA.t_organization A
WHERE A.organizationid = cOrgID;
END LOOP;
RETURN(cOrgID);
END;
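Usage from SQL (the organization ID value is hypothetical):
SELECT f_getTopOrg(42) FROM dual;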
Lock and Kill User Connections
ALTER USER APPLICATION_USER ACCOUNT LOCK;

SELECT 'alter system kill session ''' || sid || ',' || serial# || '''; ' || sql_id death
FROM v$session
WHERE username = 'APPLICATION_USER';
Reset SYS, SYSTEM Password
Log into the server's console and connect as the system administrator, then reset the SYS and SYSTEM password.
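A minimal sketch from the server console (the new password is a placeholder in curly braces):
C:\> sqlplus / as sysdba
SQL> ALTER USER SYS IDENTIFIED BY {new_password};
SQL> ALTER USER SYSTEM IDENTIFIED BY {new_password};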
Clear Transaction Logs
This is how to disable archiving:
C:> sqlplus sys/@dbalias as sysdba
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> archive log list
SQL> alter system switch logfile;
Run this about 10 times, delete all archived log files and then proceed.
This is how to enable archiving:
SQL> archive log list
At this point the numbers should have incremented as many as the number of switches.
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
SQL> archive log list
SQL> alter system switch logfile;
Run this a couple of times and you should see new archived log files.
SQL> archive log list
At this point the numbers should have incremented as many as the number of switches.
Exit SQLPlus and do an immediate full backup of the system.
Example backup batch file
set ORACLE_SID=MYPROJ
exp system/******@myuser file=MYPROJ.dmp log=export.log full=y
for /f "tokens=1-5 delims= " %%d in ("%date%") do set dayfldr=%%d
xcopy MYPROJ.dmp "\\Projects\MYPROJ\Database\Daily_Exports\%dayfldr%\" /Z /R /V /Y
Drop all objects in schema
Normally, it is simplest to drop and add the user. This is the preferred method if you have system or sysdba access to the database.
If you don't have system-level access and want to scrub your schema, the following SQL will produce a series of drop statements, which can then be executed. Note: be sure you are connected as the schema owner; it would be tragic to drop system tables....
spool dropall.sql
select 'drop '||object_type||' '|| object_name|| DECODE(OBJECT_TYPE,'TABLE',' CASCADE CONSTRAINTS;',';')
from user_objects;
spool off
Then you can purge the recycle bin to really clean things up:
purge recyclebin;
This will produce a list of drop statements. Not all of them will execute - if you drop with cascade, dropping the PK_* indices will fail. But in the end, you will have a pretty clean schema. Confirm with:
select * from user_objects;
Advance Sequence Nextvals to sync with table
Here's a way to advance a sequence using an anonymous PL/SQL block:
DECLARE
mn integer;
mx integer;
BEGIN
SELECT MYPROJ.personnelid_seq.CURRVAL INTO mn FROM DUAL;
SELECT max(personnelid) INTO mx FROM MYPROJ.personnel;
WHILE mn <= mx LOOP
SELECT MYPROJ.personnelid_seq.NEXTVAL INTO mn FROM DUAL;
END LOOP;
SELECT MYPROJ.accountid_seq.CURRVAL INTO mn FROM DUAL;
SELECT max(accountid) INTO mx FROM MYPROJ.account;
WHILE mn <= mx LOOP
SELECT MYPROJ.accountid_seq.NEXTVAL INTO mn FROM DUAL;
END LOOP;
END;
/
COMMIT;
/
--uncomment to view current values
--SELECT MYPROJ.personnelid_seq.CURRVAL FROM DUAL;
--SELECT MYPROJ.accountid_seq.CURRVAL FROM DUAL;
Another method is to temporarily alter the sequence to advance, then set it back:
ALTER SEQUENCE seq_name INCREMENT BY 124;
SELECT seq_name.nextval FROM dual;
ALTER SEQUENCE seq_name INCREMENT BY 1;
/
Startup Shutdown Oracle DB
Listener commands at command prompt:
lsnrctl status
lsnrctl stop
lsnrctl start
Connect to Oracle
SET ORACLE_SID={SID}
sqlplus / AS SYSDBA
or
sqlplus /nolog
SQL>connect / AS SYSDBA
Shutdown
SQL>shutdown immediate
Startup
SQL>startup
Drop a corrupt datafile
alter database datafile 'D:\ORACLE\ORADATA\MYPROJDATA01.DBF' offline drop;
drop tablespace MYPROJDATA including contents and datafiles;
Listener Commands
Check and make sure that the SID is correct. The SIDs that are currently registered with the listener can be obtained by typing:
LSNRCTL SERVICES
These SIDs correspond to SID_NAMEs in TNSNAMES.ORA or DB_NAME in the initialisation file.
From a client with the SID registered in the tnsnames.ora, you can do TNSPING and see if the listener responds:
c:\>tnsping DEV01
The listener is a separate process that resides on the server. It receives incoming client connection requests and manages the traffic of these requests to the server. A listener is configured with one or more listening protocol addresses and service information about the destination service. Protocol addresses are configured in the listener configuration file, listener.ora. Service information may or may not be configured in the listener.ora file. The listener must be running on the machine where the database resides. The listener can be started by using the Administrative Tools\Services:
Right click on the OracleTNSListener and select START.
To stop the listener, right click and select STOP.
The listener can also be started at a DOS prompt:
Load the listener:
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 8.1.7.0.0 - Production on 02-APR-2003 08:45:21
(c) Copyright 1998 Oracle Corporation. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL>
LSNRCTL> help
The following operations are available
An asterisk (*) denotes a modifier or extended command:
start
stop
status
services
version
reload
save_config
trace
dbsnmp_start
dbsnmp_stop
dbsnmp_status
change_password
quit
exit
set*
show*
Show the Listener status:
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
32-bit Windows Error: 2: No such file or directory
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=GW0020480835)(PORT=1521)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
32-bit Windows Error: 61: Unknown error
Start the Listener:
LSNRCTL> start
Starting tnslsnr: please wait...
TNSLSNR for 32-bit Windows: Version 8.1.7.0.0 - Production
System parameter file is D:\Oracle\Ora81\network\admin\listener.ora
Log messages written to D:\Oracle\Ora81\network\log\listener.log
Trace information written to D:\Oracle\Ora81\network\trace\listener.trc
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=GW0020480835.xxx.xx.xx.xxx)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=GW0020480835.xxx.xx.xx.xxx)(PORT=2481))(PROTOCOL_STACK=(PRESENTATION=GIOP)(SESSION=RAW)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for 32-bit Windows: Version 8.1.7.0.0 - Production
Start Date 02-APR-2003 08:49:13
Uptime 0 days 0 hr. 0 min. 2 sec
Trace Level user
Security OFF
SNMP OFF
Listener Parameter File D:\Oracle\Ora81\network\admin\listener.ora
Listener Log File D:\Oracle\Ora81\network\log\listener.log
Listener Trace File D:\Oracle\Ora81\network\trace\listener.trc
Services Summary...
AEF has 1 service handler(s)
CWSS has 1 service handler(s)
Develop has 1 service handler(s)
MyWorkDB has 1 service handler(s)
OEMREPO has 1 service handler(s)
PLSExtProc has 1 service handler(s)
Wanda has 1 service handler(s)
afportal has 1 service handler(s)
The command completed successfully
Stop the Listener:
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
The command completed successfully
LSNRCTL>
Exit LSNRCTL
LSNRCTL> exit
ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
Cause: The listener received a request to establish a connection to a database or other service. The connect descriptor received by the listener specified a SID for an instance (usually a database instance) that either has not yet dynamically registered with the listener or has not been statically configured for the listener. This may be a temporary condition such as after the listener has started, but before the database instance has registered with the listener.
Action: - Wait a moment and try to connect a second time.
- Check which instances are currently known by the listener by executing: lsnrctl services
- Check that the SID parameter in the connect descriptor specifies an instance known by the listener.
- Check for an event in the listener.log file.
Lock Unlock UserSchema
ALTER USER {username} ACCOUNT LOCK
/
ALTER USER {username} ACCOUNT UNLOCK
/
Also, you can revoke the user's CREATE SESSION privilege. This will prevent that user from logging in, but all of their objects will still be available.
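For example (username in curly braces is a placeholder):
REVOKE CREATE SESSION FROM {username};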
Recompile Database Objects
Identifying Invalid Objects
The DBA_OBJECTS view can be used to identify invalid objects using the following query:
COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;
With this information you can decide which of the following recompilation methods is suitable for you.
The Manual Approach
For small numbers of objects you may decide that a manual recompilation is sufficient. The following example shows the compile syntax for several object types:
ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;
Notice that the package body is compiled in the same way as the package specification, with the addition of the word "BODY" at the end of the command.
An alternative approach is to use the DBMS_DDL package to perform the recompilations:
EXEC DBMS_DDL.alter_compile('PACKAGE', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PACKAGE BODY', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PROCEDURE', 'MY_SCHEMA', 'MY_PROCEDURE');
EXEC DBMS_DDL.alter_compile('FUNCTION', 'MY_SCHEMA', 'MY_FUNCTION');
EXEC DBMS_DDL.alter_compile('TRIGGER', 'MY_SCHEMA', 'MY_TRIGGER');
This method is limited to PL/SQL objects, so it is not applicable for views.
Custom Script
In some situations you may have to compile many invalid objects in one go. One approach is to write a custom script to identify and compile the invalid objects. The following example identifies and recompiles invalid packages and package bodies.
SET SERVEROUTPUT ON SIZE 1000000
BEGIN
FOR cur_rec IN (SELECT owner,
object_name,
object_type,
DECODE(object_type, 'PACKAGE', 1,
'PACKAGE BODY', 2, 2) AS recompile_order
FROM dba_objects
WHERE object_type IN ('PACKAGE', 'PACKAGE BODY')
AND status != 'VALID'
ORDER BY 4)
LOOP
BEGIN
IF cur_rec.object_type = 'PACKAGE' THEN
EXECUTE IMMEDIATE 'ALTER ' || cur_rec.object_type ||
' "' || cur_rec.owner || '"."' || cur_rec.object_name || '" COMPILE';
ELSE
EXECUTE IMMEDIATE 'ALTER PACKAGE "' || cur_rec.owner ||
'"."' || cur_rec.object_name || '" COMPILE BODY';
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line(cur_rec.object_type || ' : ' || cur_rec.owner ||
' : ' || cur_rec.object_name);
END;
END LOOP;
END;
/
This approach is fine if you have a specific task in mind, but be aware that you may end up compiling some objects multiple times depending on the order they are compiled in. It is probably a better idea to use one of the methods provided by Oracle since they take the code dependencies into account.
DBMS_UTILITY.compile_schema
The COMPILE_SCHEMA procedure in the DBMS_UTILITY package compiles all procedures, functions, packages, and triggers in the specified schema. The example below shows how it is called from SQL*Plus:
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
UTL_RECOMP
The UTL_RECOMP package contains two procedures used to recompile invalid objects. As the names suggest, the RECOMP_SERIAL procedure recompiles all the invalid objects one at a time, while the RECOMP_PARALLEL procedure performs the same task in parallel using the specified number of threads. Their definitions are listed below:
PROCEDURE RECOMP_SERIAL(
schema IN VARCHAR2 DEFAULT NULL,
flags IN PLS_INTEGER DEFAULT 0);
PROCEDURE RECOMP_PARALLEL(
threads IN PLS_INTEGER DEFAULT NULL,
schema IN VARCHAR2 DEFAULT NULL,
flags IN PLS_INTEGER DEFAULT 0);
The usage notes for the parameters are listed below:
schema - The schema whose invalid objects are to be recompiled. If NULL all invalid objects in the database are recompiled.
threads - The number of threads used in a parallel operation. If NULL the value of the "job_queue_processes" parameter is used. Matching the number of available CPUs is generally a good starting point for this value.
flags - Used for internal diagnostics and testing only.
The following examples show how these procedures are used:
-- Schema level.
EXEC UTL_RECOMP.recomp_serial('SCOTT');
EXEC UTL_RECOMP.recomp_parallel(4, 'SCOTT');
-- Database level.
EXEC UTL_RECOMP.recomp_serial();
EXEC UTL_RECOMP.recomp_parallel(4);
-- Using job_queue_processes value.
EXEC UTL_RECOMP.recomp_parallel();
EXEC UTL_RECOMP.recomp_parallel(NULL, 'SCOTT');
There are a number of restrictions associated with the use of this package including:
Parallel execution is performed using the job queue. All existing jobs are marked as disabled until the operation is complete.
The package must be run from SQL*Plus as the SYS user, or another user with SYSDBA.
The package expects the STANDARD, DBMS_STANDARD, DBMS_JOB and DBMS_RANDOM packages to be present and valid.
Running DDL operations at the same time as this package may result in deadlocks.
utlrp.sql and utlprp.sql
The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper on the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql accepts a single integer parameter that indicates the level of parallelism as follows:
0 - The level of parallelism is derived based on the CPU_COUNT parameter.
1 - The recompilation is run serially, one object at a time.
N - The recompilation is run in parallel with "N" number of threads.
Both scripts must be run as the SYS user, or another user with SYSDBA, to work correctly.
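For example, from SQL*Plus connected as SYS (the ? shorthand expands to $ORACLE_HOME):
SQL> @?/rdbms/admin/utlrp.sql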
For further information see:
DBMS_UTILITY.compile_schema
UTL_RECOMP
http://www.oracle-base.com/articles/misc/RecompilingInvalidSchemaObjects.php
Oracle Architecture
The Oracle Instance
The memory structures and server processes that do the work in the database.
The System Global Area is a shared block of memory for Oracle’s use. At a minimum it contains:
- Redo Log buffer: short term storage for redo information so it can be written to the redo logs.
- Shared Pool: further broken down into the library cache, which holds recently parsed and executed code, and the data dictionary cache, which store recently used object definitions.
- Database buffer cache: Oracle’s work area for executing SQL.
The instance also houses the processes
- System Monitor: Opening database, maintain connection between instance and database
- Database Writer: writes to the database files (writes as little as possible. Minimizing disk I/O for performance)
- Process Monitor: Monitors user sessions
- Log Writer: writes to the redo logs (writes as close to real time as possible. Ideally save all changes.)
- Checkpoint: ensure instance is synchronized with the database from time to time.
- Archiver: writes archived redo logs
The Oracle Database
The database refers to the physical files on the os that contain the data and data dictionary. At the minimum the database requires datafiles, control files, and redo logs.
Parameter File: Holds the parameters to start the instance
Password File: Encrypted file that holds the sysdba password. Allows sys to log on regardless of the state of the database.
Datafiles: Core of the database, the files that hold the data.
Control Files: Holds all the parameters used to connect the instance and database. For Example, pointers to the rest of the database (redo logs, datafiles…) and various data to maintain database integrity (scn and timestamp). Often multiplexed to allow recovery from file corruption.
Redo Log: maintains all changes made to the data over a given period of time or until the log is full. Often multiplexed to allow recovery from file corruption.
Archived Redo Logs: Copies of filled redo logs kept for recovery purposes.
(Special thanks to "Josh" for this information)
Drop and Recreate PK Index
Disable PK constraint.
alter table TBL1 disable constraint PK_TBL1 ;
Delete PK index.
drop index PK_TBL1 ;
Create PK index.
create unique index "PK_TBL1" on "TBL1" ("INSPECTORID", "DUTYID", "INSPID")
tablespace "TBLSPCINDX"
pctfree 10 initrans 2 maxtrans 255
storage
(
initial 64K
next 0K
minextents 1
maxextents 2147483645
pctincrease 0
freelists 1
freelist groups 1
)
nologging;
Enable PK constraint.
alter table "TBL1" enable constraint "PK_TBL1" ;
Rebuild index to index tablespace
SELECT 'alter index MYPROJ.' || index_name || ' rebuild tablespace MYPROJINDX;'
FROM dba_indexes
WHERE owner = 'MYPROJ'
AND tablespace_name != 'MYPROJINDX'
ORDER BY index_name;
Example Creating Common Role
1. Log into the oracle instance using SQL*Plus or other oracle scripting tool as SYSTEM user.
2. Create roles SCHEMA_DEVELOPER and SCHEMA_USER.
CREATE ROLE "SCHEMA_DEVELOPER" NOT IDENTIFIED;
GRANT CREATE SESSION TO "SCHEMA_DEVELOPER";
GRANT SELECT ANY DICTIONARY TO "SCHEMA_DEVELOPER";
GRANT ALTER SESSION TO "SCHEMA_DEVELOPER";
GRANT CREATE CLUSTER TO "SCHEMA_DEVELOPER";
GRANT CREATE DATABASE LINK TO "SCHEMA_DEVELOPER";
GRANT CREATE PROCEDURE TO "SCHEMA_DEVELOPER";
GRANT CREATE PUBLIC SYNONYM TO "SCHEMA_DEVELOPER";
GRANT CREATE SEQUENCE TO "SCHEMA_DEVELOPER";
GRANT CREATE TABLE TO "SCHEMA_DEVELOPER";
GRANT CREATE TRIGGER TO "SCHEMA_DEVELOPER";
GRANT CREATE VIEW TO "SCHEMA_DEVELOPER";
GRANT DROP PUBLIC SYNONYM TO "SCHEMA_DEVELOPER";
CREATE ROLE "SCHEMA_USER" NOT IDENTIFIED;
GRANT CREATE SESSION TO "SCHEMA_USER";
GRANT SELECT ANY DICTIONARY TO "SCHEMA_USER";
3. Create the tablespaces (make sure to update the filepath for the datafile according to the server you are working with). Variables are enclosed in curly braces {}. Use this script as a guide:
CREATE TABLESPACE "COMMONDATA"
LOGGING
DATAFILE '{PATH_TO_DATAFOLDER}\COMMONDATA01.DBF'
SIZE 10M REUSE
AUTOEXTEND ON
NEXT 50M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
CREATE TABLESPACE "COMMONINDX"
NOLOGGING
DATAFILE '{PATH_TO_DATAFOLDER}\COMMONINDX01.DBF'
SIZE 5M REUSE
AUTOEXTEND ON
NEXT 10M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
4. Create the user and role using this script as a guide. (Note: if you are using Oracle 10g or higher you will need to remove lines that set quota unlimited on temp).
CREATE USER "COMMON"
profile "DEFAULT"
identified by "{COMMON_PASSWORD}"
default tablespace "COMMONDATA"
temporary tablespace "TEMP"
quota unlimited on COMMONDATA
quota unlimited on COMMONINDX
account UNLOCK;
CREATE ROLE "COMMON_TABLE_USER" NOT IDENTIFIED;
5. Open a command prompt window and import a copy of the common schema from a .dmp file. If you don't have a .dmp file, find another instance with the common schema and export it. Use these scripts as a guide:
For importing the schema:
imp common/{COMMON_PASSWORD}@{INSTANCE_NAME} file=common.dmp log=commonimp.log fromuser=common touser=common
For exporting the schema:
exp common/{COMMON_PASSWORD}@{INSTANCE_NAME} file=common.dmp log=commonexp.log
7. Run the following scripts one at a time to generate grant scripts for the roles needed.
Select the statements that were created by these scripts and run them in order to grant the correct privileges:
SELECT 'grant SELECT, REFERENCES on "COMMON".' || table_name "Grant Privileges", 'TO ' || role || ';' "To Role"
FROM dba_tables
, dba_roles
WHERE owner = 'COMMON'
AND table_name = any (select table_name from dba_tables where table_name like 'CL_%')
AND role = any (select role from dba_roles where role like 'COMMON%')
ORDER BY role, table_name;
col "Grant Privileges" format a50
SELECT 'grant SELECT, INSERT, UPDATE on "COMMON".' || table_name "Grant Privileges", 'TO ' || role || ';' "To Role"
FROM dba_tables
, dba_roles
WHERE owner = 'COMMON'
AND table_name = any (select table_name from dba_tables where table_name like 'CT_%')
AND role = any (select role from dba_roles where role like 'COMMON%')
ORDER BY role, table_name;
8. Grant schema role permissions:
GRANT SCHEMA_DEVELOPER TO COMMON;
GRANT COMMON_TABLE_USER TO SCHEMA_DEVELOPER;
GRANT COMMON_TABLE_USER TO SCHEMA_USER;
Scripted Backups Example 2
USE master
GO
DECLARE @theday char(1)
, @file varchar(128)
SET @theday = datepart(dw, getdate())
ALTER DATABASE dbname SET RECOVERY SIMPLE;
SET @file = 'D:\BACKUPS\dbname_' + @theday + '.bak';
BACKUP DATABASE dbname TO DISK = @file WITH INIT;
BACKUP LOG dbname WITH TRUNCATE_ONLY;
GO
This does a 7 day "rolling backup", overwriting the backup from last week. Set it up on a nightly job.
Scripted Backups Example 1
USE master
GO
DECLARE @dbs AS TABLE ( id int identity(1,1), dbname sysname )
DECLARE @id int
, @dbname sysname
, @path varchar(128)
, @file nvarchar(255)
, @theday char(1)
SET @path = 'D:\DEV_BACKUPS'
SET @theday = datepart(dw, CURRENT_TIMESTAMP)
INSERT INTO @dbs ( dbname )
SELECT name
FROM sys.databases
WHERE database_id > 4 --not system dbs
AND state = 0 --online
ORDER BY name
SELECT @id = max(id) FROM @dbs
WHILE @id > 0
BEGIN
SELECT @dbname = dbname FROM @dbs WHERE id = @id
SET @file = @path + '\' + @dbname + '_BAK' + @theday + '.bak'
BACKUP DATABASE @dbname TO DISK = @file WITH INIT;
BACKUP LOG @dbname WITH TRUNCATE_ONLY;
SET @id = @id - 1
END
GO
Default data file location
To prevent the C:\ drive from filling up, it is a good idea to set the database default location. To do this:
1. Open SQL Server Management Studio
2. Right click the server instance
3. Select "Properties"
4. In the Server Properties window, select "Database Settings"
5. Under "Database default locations", specify path for "Data:" and "Log:", for example: "D:\SQLDATA\"
Additionally, if space on the C:\ drive is limited, check the properties of the TEMPDB.
This can be found under the "Databases" --> "System Databases" branches in the server's tree-view.
First, since the tempdb does not autoshrink, you can manually shrink it by right-clicking tempdb and selecting "Task"-->"Shrink"-->"Database".
Next, right-click the tempdb database and select "Properties". Then select "Files". You can set the "tempdev.mdf" file to be restricted growth and add an additional database file that is unrestricted on another drive.
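For reference, a rough T-SQL equivalent of those file changes (logical names, path, and sizes are hypothetical):
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, MAXSIZE = 4096MB );
ALTER DATABASE tempdb ADD FILE ( NAME = tempdev2, FILENAME = 'E:\SQLDATA\tempdev2.ndf', SIZE = 512MB, MAXSIZE = UNLIMITED );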
Monday, June 27, 2011
StartStop OPMNCTL
D:\oracle\oas10g\opmn\bin>opmnctl stopall
opmnctl: stopping opmn and all managed processes...
D:\oracle\oas10g\opmn\bin>opmnctl startall
opmnctl: starting opmn and all managed processes...
D:\oracle\oas10g\opmn\bin>
Invalid Heap Size
Was getting the following error when trying to start the OAS server:
D:\OAS\opmn\bin>opmnctl startall
opmnctl: starting opmn and all managed processes...
====================================================================
opmn id=MYSERVER:6200
1 of 2 processes started.
ias-instance id=MYSERVER.XXXX.XXXXX ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ias-component/process-type/process-set:
default_group/MYAPP/default_group/
Error
--> Process (index=1,uid=809764963,pid=3936)
failed to start a managed process after the maximum retry limit
Log:
D:\OAS\opmn\logs\\default_group~MYAPP~default_group~1.log
Here's what the log said:
--------
09/02/17 12:37:26 Start process
--------
Error occurred during initialization of VM
Incompatible initial and maximum heap sizes specified
Cracked open the config file and found the following:
D:\OAS\opmn\conf\opmn.xml
<data id="java-options" value="-server -mx512M -ms1024M -Xrs -XX:MaxPermSize=256M -XX:AppendRatio=3 -Djava.security.policy=$ORACLE_HOME/j2ee/MYAPP/config/java2.policy -Djava.awt.headless=true -Dhttp.webdir.enable=false -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled"/>
Fixed it:
<data id="java-options" value="-server -mx1024M -ms512M -Xrs -XX:MaxPermSize=256M -XX:AppendRatio=3 -Djava.security.policy=$ORACLE_HOME/j2ee/MYAPP/config/java2.policy -Djava.awt.headless=true -Dhttp.webdir.enable=false -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled"/>
Enable Administrator Account
On Windows Vista, you might have noticed that the local administrator account is disabled.
To enable the administrator account on windows vista type the following string on the command prompt:
net user administrator /active:yes
Tracing Calls to Stored Procedures from C#
//where cmd is of type SqlCommand
Console.WriteLine("");
Console.WriteLine(cmd.CommandText);
String sep = " ";
foreach (SqlParameter param in cmd.Parameters)
{
if (param.SqlValue.Equals("Null"))
{
Console.WriteLine(sep + param.ParameterName + "=" + param.SqlValue);
}
else
{
switch (param.SqlDbType)
{
case SqlDbType.NChar:
case SqlDbType.NVarChar:
case SqlDbType.VarChar:
case SqlDbType.Char:
case SqlDbType.Date:
case SqlDbType.DateTime:
case SqlDbType.DateTime2:
case SqlDbType.SmallDateTime:
case SqlDbType.Text:
Console.WriteLine(sep + param.ParameterName + "='" + param.SqlValue + "'");
break;
default:
Console.WriteLine(sep + param.ParameterName + "=" + param.SqlValue);
break;
}
}
sep = ", ";
}
Console.WriteLine("");
iis restart
REM ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
REM
REM Use this batch file to start IIS web sites when the server is started. To use, ensure that all following steps have been followed.
REM 1. Place this as a batch file on the server ( C:\StartWebsites.bat )
REM 2. Edit the startup group policy setting for the local machine
REM PATH : ( Group Policy Object Editor - Local ComputerPolicy - Computer Configuration - Windows Settings - Scripts - Startup)
REM 3. Add a script with the path to the batch file as the script name ( Script Name: C:\StartWebsites.bat )
REM
REM ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
REM CScript Host is required to use IIS command line tools
cscript.exe //H:CScript
REM Start web sites
REM web sites
iisweb /start wiki
iisweb /start development
REM Reset default scripting host to WScript in case anything else depends on it.
cscript.exe //H:WScript
Windows Platform FIPS Error
Pass this on to your folks if they get the following error in an asp.net app on the development server:
"This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms"
Have your developer go into the web.config and move the "sessionState" tag to just under the <system.web> tag, and make sure the sessionState is uncommented.
To this point I still only have speculation as to what changed on the server that is now causing this error. Most likely it was a setting that changed as part of a security update.
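A minimal sketch of that placement (the mode and timeout values are just an example):
<system.web>
  <sessionState mode="InProc" timeout="20" />
</system.web>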
Encrypting web.config Sections
1. First, add a configProtectedData section to the web.config, within the <configuration> container:
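A minimal sketch of such a section, assuming the RSA provider; the MY_PROVIDER and MY_KEYS names match the scripts below:
---------------------BEGIN: configProtectedData ---------------------
<configProtectedData>
  <providers>
    <add name="MY_PROVIDER"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         keyContainerName="MY_KEYS"
         useMachineContainer="true" />
  </providers>
</configProtectedData>
---------------------END: configProtectedData ---------------------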
2. Below is an example .bat or .cmd file for encrypting sensitive sections of the web.config for a given .NET web application. Change the {PATH} to the physical path to the web application's folder.
---------------------BEGIN: encrypt.cmd ---------------------
@echo off
REM *********************************************************
REM ** APP_PATH
REM ** Change {PATH} below to path of physical location where
REM ** application is installed
REM **
REM ** ASP_PATH
REM ** Location of ASP.NET framework
REM *********************************************************
SET APP_PATH="{PATH}"
SET ASP_PATH=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe
SET ASP_OPT=-pef
SET ASP_PROV="MY_PROVIDER"
%ASP_PATH% %ASP_OPT% "connectionStrings" %APP_PATH% -prov %ASP_PROV%
pause
---------------------END: encrypt.cmd ---------------------
Managing Key Store
---------------------BEGIN: create_keystore.cmd ---------------------
@echo off
REM *********************************************************
REM ** ASP_PATH
REM ** Location of ASP.NET framework
REM **
REM ** Warning: keep the exported key in a safe place
REM ** you will not be able to decrypt data using
REM ** a recreated keystore even by same name
REM *********************************************************
SET ASP_PATH=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe
REM To Delete Key store
REM %ASP_PATH% -pz "MY_KEYS"
REM To Create key store
%ASP_PATH% -pc "MY_KEYS" -exp
REM To grant access to key store by ASP.NET application service
%ASP_PATH% -pa "MY_KEYS" "NT AUTHORITY\NETWORK SERVICE"
REM To Export key store
%ASP_PATH% -px "MY_KEYS" "d:\temp\crypto\MY_KEYS.xml" -pri
---------------------END: create_keystore.cmd ---------------------
---------------------BEGIN: import_keystore.cmd ---------------------
@echo off
REM *********************************************************
REM **
REM ** ASP_PATH
REM ** Location of ASP.NET framework
REM *********************************************************
SET ASP_PATH=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe
REM To Delete Key store
%ASP_PATH% -pz "MY_KEYS"
REM To Import Key Store
%ASP_PATH% -pi "MY_KEYS" "d:\temp\crypto\MY_KEYS.xml"
REM To grant access to key store by ASP.NET application service
%ASP_PATH% -pa "MY_KEYS" "NT AUTHORITY\NETWORK SERVICE"
---------------------END: import_keystore.cmd ---------------------
A complete walkthrough for this, including information on key stores is available here:
http://msdn.microsoft.com/en-us/library/2w117ede.aspx
setmetabase UploadReadAheadSize
'cscript setmetabase.vbs
'set vdirObj=GetObject("IIS://localhost/W3svc/1/ROOT")
'set vdirObj=GetObject("IIS://localhost/W3SVC/1/Root/rapid/projects/proj1")
'set vdirObj=GetObject("IIS://localhost/W3SVC/1/Root/rapid/projects/proj_Prototype")
set vdirObj=GetObject("IIS://localhost/W3SVC/1/Root/rapid/projects/proj2")
' Print out the current value of some properties:
WScript.Echo "UploadReadAheadSize Before: " & vdirObj.UploadReadAheadSize
' Set some properties:
' default is 49152
'vdirObj.Put "UploadReadAheadSize", 1024000
' Save the property changes in the metabase:
'vdirObj.SetInfo
'WScript.Echo "UploadReadAheadSize After: " & vdirObj.UploadReadAheadSize
Clearing out security log
If you get a message 'security log is full', can take the following steps....
1. log in as administrator
2. right-click 'my computer' icon and select 'manage'
3. in the computer management window, expand 'system tools', then expand 'event viewer'
4. select the 'security' entry
5. from the computer management window menu, select 'action', then 'clear all events'
6. click 'yes' to save a copy of the log, or 'no' to clear the log completely
Create ap_developer and ap_user Roles
USE model
GO
---------------------------------
-- ap_developer
---------------------------------
DECLARE @RoleName sysname
set @RoleName = N'ap_developer'
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = @RoleName AND type = 'R')
Begin
DECLARE @RoleMemberName sysname
DECLARE Member_Cursor CURSOR FOR
select [name]
from dbo.sysusers
where uid in (
select member_principal_id
from sys.database_role_members
where role_principal_id in (
select principal_id
FROM sys.database_principals where [name] = @RoleName AND type = 'R' ))
OPEN Member_Cursor;
FETCH NEXT FROM Member_Cursor
into @RoleMemberName
WHILE @@FETCH_STATUS = 0
BEGIN
exec sp_droprolemember @rolename=@RoleName, @membername= @RoleMemberName
FETCH NEXT FROM Member_Cursor
into @RoleMemberName
END;
CLOSE Member_Cursor;
DEALLOCATE Member_Cursor;
End
GO
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'ap_developer' AND type = 'R')
DROP ROLE [ap_developer]
GO
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'ap_developer' AND type = 'A')
DROP APPLICATION ROLE [ap_developer]
GO
CREATE ROLE [ap_developer] AUTHORIZATION [dbo]
GO
EXEC sp_addrolemember N'db_datareader', N'ap_developer'
EXEC sp_addrolemember N'db_datawriter', N'ap_developer'
GRANT CONTROL ON SCHEMA::[dbo] TO [ap_developer]
GRANT VIEW DEFINITION TO [ap_developer]
GRANT CREATE PROCEDURE TO [ap_developer]
GRANT EXECUTE TO [ap_developer]
GO
---------------------------------
-- ap_user
---------------------------------
DECLARE @RoleName sysname
set @RoleName = N'ap_user'
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = @RoleName AND type = 'R')
Begin
DECLARE @RoleMemberName sysname
DECLARE Member_Cursor CURSOR FOR
select [name]
from dbo.sysusers
where uid in (
select member_principal_id
from sys.database_role_members
where role_principal_id in (
select principal_id
FROM sys.database_principals where [name] = @RoleName AND type = 'R' ))
OPEN Member_Cursor;
FETCH NEXT FROM Member_Cursor
into @RoleMemberName
WHILE @@FETCH_STATUS = 0
BEGIN
exec sp_droprolemember @rolename=@RoleName, @membername= @RoleMemberName
FETCH NEXT FROM Member_Cursor
into @RoleMemberName
END;
CLOSE Member_Cursor;
DEALLOCATE Member_Cursor;
End
GO
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'ap_user' AND type = 'R')
DROP ROLE [ap_user]
GO
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'ap_user' AND type = 'A')
DROP APPLICATION ROLE [ap_user]
GO
CREATE ROLE [ap_user] AUTHORIZATION [dbo]
GO
EXEC sp_addrolemember N'db_datareader', N'ap_user'
EXEC sp_addrolemember N'db_datawriter', N'ap_user'
GRANT VIEW DEFINITION TO [ap_user]
GRANT EXECUTE TO [ap_user]
GO
/**********
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'jmolsen')
DROP USER [jmolsen]
GO
CREATE USER [jmolsen] FOR LOGIN [jmolsen] WITH DEFAULT_SCHEMA=[dbo]
GO
EXEC sp_addrolemember N'ap_developer', N'jmolsen'
GO
************/
Using ROW_NUMBER()
From SQL Server 2005 Books Online:
The following example returns rows with numbers 50 to 60 inclusive in the order of the OrderDate.
USE AdventureWorks;
GO
WITH OrderedOrders AS
(
SELECT SalesOrderID
, OrderDate
, ROW_NUMBER() OVER (ORDER BY OrderDate) AS 'RowNumber'
FROM Sales.SalesOrderHeader
)
SELECT *
FROM OrderedOrders
WHERE RowNumber BETWEEN 50 AND 60;
Default Database
/***********************************************
* This gives users a dummy default database
* in order to prevent them from:
* 1) having 'master' as a default database
* 2) not being able to connect to the server
* because their default database has been
* dropped, recreated, or restored
***********************************************/
USE master
GO
IF NOT EXISTS ( SELECT TOP 1 1 FROM sysdatabases WHERE name = 'uno' )
BEGIN
CREATE DATABASE [uno]
END
GO
ALTER DATABASE [uno] SET READ_WRITE
GO
USE uno
GO
GRANT SELECT TO [public]
GO
DECLARE @name sysname
, @sql nvarchar(max)
DECLARE login_cur CURSOR FAST_FORWARD FOR
SELECT name
FROM master..syslogins WITH (NOLOCK)
WHERE upper(name) LIKE '%'
OPEN login_cur
FETCH login_cur INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'IF NOT EXISTS ( SELECT TOP 1 1 FROM sysusers WHERE name = ''' + @name + ''' ) '
SET @sql = @sql + 'CREATE USER [' + @name + '] FOR LOGIN [' + @name + '] WITH DEFAULT_SCHEMA=[dbo];'
EXEC sp_executesql @sql
SET @sql = 'ALTER LOGIN [' + @name + '] WITH DEFAULT_DATABASE=[uno];'
EXEC sp_executesql @sql
FETCH login_cur INTO @name
END
CLOSE login_cur
DEALLOCATE login_cur
GO
--ALTER DATABASE [uno] SET READ_ONLY
--GO
Drop all constraints (PKs & FKs) and Indexes
DECLARE @objs TABLE ( id int identity(1,1), cmd nvarchar(4000), srt tinyint )
DECLARE @id int
, @cmd nvarchar(4000)
INSERT INTO @objs (cmd,srt)
SELECT 'DROP INDEX [' + A.name + '] ON [' + SCHEMA_NAME(B.schema_id) + '].[' + B.name + ']'
, 1
FROM sys.indexes A
INNER JOIN sys.objects B ON A.object_id = B.object_id
WHERE A.is_primary_key = 0
AND A.is_unique_constraint = 0
AND B.type = 'U'
AND A.name IS NOT NULL
UNION
SELECT 'ALTER TABLE [' + SCHEMA_NAME(B.schema_id) + '].[' + B.name + '] DROP CONSTRAINT [' + A.name + ']'
, 2
FROM sys.objects A
INNER JOIN sys.objects B ON A.parent_object_id = B.object_id
WHERE A.type = 'PK'
OR A.name IN ( 'PK_deemed_raiv','PK_driverchecklog')
ORDER BY 2
SELECT @id = max(id) FROM @objs
WHILE @id > 0
BEGIN
SELECT @cmd = cmd FROM @objs WHERE id = @id
EXEC sp_executesql @cmd
SET @id = @id - 1
END
GO
Dropping All Tables, SPs, FNs, and Views
USE {DATABASE_NAME}
GO
--------------------------------------------------------
-- Drop Procedures
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'P'
OPEN obj_cur
FETCH obj_cur INTO @objname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'DROP PROCEDURE ' + @objname
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname
END
CLOSE obj_cur
DEALLOCATE obj_cur
GO
--------------------------------------------------------
-- Drop Functions
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type IN ('FN','IF','TF')
OPEN obj_cur
FETCH obj_cur INTO @objname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'DROP FUNCTION ' + @objname
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname
END
CLOSE obj_cur
DEALLOCATE obj_cur
GO
--------------------------------------------------------
-- Drop Views
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'V'
OPEN obj_cur
FETCH obj_cur INTO @objname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'DROP VIEW ' + @objname
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname
END
CLOSE obj_cur
DEALLOCATE obj_cur
GO
--------------------------------------------------------
-- Drop Constraints
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE @tabname sysname
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT A.name [const_name]
, B.name [table_name]
FROM sysobjects A
, sysobjects B
, sysconstraints C
WHERE A.id = C.constid
AND B.id = C.id
AND B.name NOT LIKE 'sys%'
AND A.type IN ( 'F', 'K' )
ORDER BY A.type, 2 DESC, 1
OPEN obj_cur
FETCH obj_cur INTO @objname, @tabname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'ALTER TABLE ' + @tabname + ' NOCHECK CONSTRAINT ' + @objname
PRINT @sql
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname, @tabname
END
CLOSE obj_cur
OPEN obj_cur
FETCH obj_cur INTO @objname, @tabname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'ALTER TABLE ' + @tabname + ' DROP CONSTRAINT ' + @objname
PRINT @sql
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname, @tabname
END
CLOSE obj_cur
DEALLOCATE obj_cur
GO
--------------------------------------------------------
-- Drop Tables
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'U'
OPEN obj_cur
FETCH obj_cur INTO @objname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'DROP TABLE ' + @objname
EXEC sp_executesql @sql
FETCH obj_cur INTO @objname
END
CLOSE obj_cur
DEALLOCATE obj_cur
GO
Dropping Erroneous Tables from Master database
USE master
GO
DECLARE @tabname sysname
, @sql nvarchar(256)
DECLARE tab_cur CURSOR FAST_FORWARD FOR
SELECT name
FROM sysobjects WITH (NOLOCK)
WHERE type = 'U'
AND ( name LIKE 'L_%' OR name LIKE 'T_%' )
OPEN tab_cur
FETCH tab_cur INTO @tabname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'DROP TABLE ' + @tabname
EXEC sp_executesql @sql
FETCH tab_cur INTO @tabname
END
CLOSE tab_cur
DEALLOCATE tab_cur
GO
Rebuilding all Indexes
USE [DATABASE_NAME]
GO
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FAST_FORWARD FOR
SELECT table_name
FROM information_schema.tables WITH (NOLOCK)
WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@TableName,' ',90)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
GO
Counts from Each Table in DB
USE [dbname]
GO
SET NOCOUNT ON
DECLARE @tabname sysname
DECLARE @sql nvarchar(max)
DECLARE @rows int
DECLARE table_cur CURSOR FAST_FORWARD FOR
SELECT name
FROM sysobjects
WHERE type = 'U'
CREATE TABLE #results_tmp ( tabname sysname, cnt int )
OPEN table_cur
FETCH table_cur INTO @tabname
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'INSERT INTO #results_tmp SELECT ''' + @tabname + ''', count(1) FROM ' + @tabname
EXEC sp_executesql @sql
FETCH table_cur INTO @tabname
END
CLOSE table_cur
DEALLOCATE table_cur
SELECT cnt, tabname FROM #results_tmp ORDER BY cnt DESC
DROP TABLE #results_tmp
GO
Reordering Subgroup
USE [db]
GO
--declare variables
DECLARE @i int
DECLARE @lastrow int
DECLARE @curPoc int
DECLARE @curDir int
DECLARE @lastDir int
DECLARE @order int
DECLARE @pocs AS TABLE
( rownum int IDENTITY(1, 1) PRIMARY KEY NOT NULL
, pocID int NOT NULL
, fk_directorateID int NOT NULL
)
-- this builds a var table that is pre-sorted according to criteria
INSERT INTO @pocs ( pocID, fk_directorateID )
SELECT A.pocId
, A.fk_directorateID
FROM t_poc A (NOLOCK)
WHERE A.isArchived = 0
ORDER BY A.fk_directorateID
, A.orderBy
, A.pocID
SELECT @lastrow = max(A.rownum)
FROM @pocs A
SET @i = 1
SET @lastDir = 0
SET @order = 0
--loop through all POCs in POC table
WHILE @i <= @lastrow
BEGIN
SELECT @curPoc = A.pocID
, @curDir = A.fk_directorateID
FROM @pocs A
WHERE A.rownum = @i
IF @lastDir = @curDir
BEGIN
SET @order = @order + 1
END
ELSE
BEGIN
SET @order = 0
SET @lastDir = @curDir
END
UPDATE t_poc
SET orderBy = @order
WHERE pocID = @curPoc
SET @i = @i + 1
END
GO
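On SQL Server 2005 and later, the same renumbering can be done in one set-based statement with ROW_NUMBER(), avoiding the loop entirely. A sketch against the same t_poc table:
UPDATE A
SET orderBy = B.rn - 1 -- matches the loop's zero-based ordering per directorate
FROM t_poc A
JOIN ( SELECT pocID
     , ROW_NUMBER() OVER ( PARTITION BY fk_directorateID
                           ORDER BY orderBy, pocID ) AS rn
       FROM t_poc
       WHERE isArchived = 0
     ) B ON A.pocID = B.pocID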
Default data file location
To prevent the C:\ drive from filling up, it is a good idea to set the database default location. To do this:
1. Open SQL Server Management Studio
2. Right click the server instance
3. Select "Properties"
4. In the Server Properties window, select "Database Settings"
5. Under "Database default locations", specify path for "Data:" and "Log:", for example: "D:\SQLDATA\"
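The same defaults can also be set from T-SQL by writing the registry values the dialog edits. A sketch, assuming the DefaultData/DefaultLog value names (xp_instance_regwrite resolves the instance-specific registry path):
USE master
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'D:\SQLDATA'
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'D:\SQLDATA'
GO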
Additionally, if space on the C:\ drive is limited, check the properties of tempdb.
This can be found under the "Databases" --> "System Databases" branch in the server's tree-view.
First, since tempdb does not autoshrink, you can manually shrink it by right-clicking tempdb and selecting "Tasks"-->"Shrink"-->"Database".
Next, right-click the tempdb database and select "Properties", then select "Files". You can set the "tempdev.mdf" file to restricted growth and add an additional database file that is unrestricted on another drive.
Truncate Transaction Log
If the transaction log is not being backed up, it will eventually grow to its maximum size or run out of disk space. The first thing to try when the transaction log is full is to perform a backup of it.
USE MASTER
GO
ALTER DATABASE myproj SET RECOVERY FULL;
GO
BACKUP DATABASE myproj TO DISK = '\\server01\data$\myprojData.bak';
GO
BACKUP LOG myproj TO DISK = '\\server01\data$\myprojLog.bak';
GO
ALTER DATABASE myproj SET RECOVERY SIMPLE;
GO
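To see how full each database's log actually is before and after the log backup, DBCC SQLPERF gives a quick summary:
DBCC SQLPERF(LOGSPACE) -- reports log size (MB) and log space used (%) for every database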
Transaction log
A record of modifications performed to a database. The amount of information logged in the transaction log depends on the recovery model for a database.
Transaction Log Backup
Backs up the active portion and truncates the inactive portion of the transaction log.
http://technet.microsoft.com/en-us/library/cc966495.aspx
Recovery Models
Simple
The simple recovery model does what it implies: it gives you a simple backup that can be used to replace your entire database in the event of a failure, or if you have the need to restore your database to another server. With this recovery model you have the ability to do complete backups (an entire copy) or differential backups (any changes since the last complete backup). With this recovery model you are exposed to losing any changes made since the last backup completed. Here are some reasons why you may choose this recovery model:
Your data is not critical and can easily be recreated
The database is only used for test or development
Data is static and does not change
Losing any or all transactions since the last backup is not a problem
Data is derived and can easily be recreated
Type of backups you can run:
Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Under the simple recovery model, the transaction log is automatically truncated to remove any inactive virtual log files. Truncation usually occurs after each CHECKPOINT but can be delayed under some conditions, such as a long-running transaction, which can keep large portions of the transaction log in an "active" state.
http://msdn.microsoft.com/en-us/library/ms345414.aspx
Bulk_Logged
The bulk-logged recovery model sort of does what it implies. With this model certain bulk operations, such as BULK INSERT, CREATE INDEX, SELECT INTO, etc., are minimally logged and therefore do not take as much space in the transaction log. The advantage of using this recovery model is that your transaction logs will not get that large if you are doing bulk operations, and you keep the ability to do point-in-time recovery as long as your last transaction log backup does not include a bulk operation as mentioned above. If no bulk operations are run, this recovery model works the same as the full recovery model. One thing to note is that if you use this recovery model you also need to issue transaction log backups; otherwise your database transaction log will continue to grow. Here are some reasons why you may choose this recovery model:
Data is critical, but you do not want to log large bulk operations
Bulk operations are done at different times versus normal processing.
You still want to be able to recover to a point in time
Type of backups you can run:
Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
Full
The full recovery model is the most complete recovery model and allows you to recover all of your data to any point in time as long as all backup files are usable. With this model all operations are fully logged, which means that you can recover your database to any point. In addition, if the database is set to the full recovery model you need to also issue transaction log backups; otherwise your database transaction log will continue to grow forever. Here are some reasons why you may choose this recovery model:
Data is critical and cannot be lost.
You always need the ability to do a point-in-time recovery.
You are using database mirroring
Type of backups you can run:
Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
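To check which recovery model each database is currently using (SQL Server 2005 and later):
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name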
Recover from a corrupted Master DB
To rebuild a SQL 2005 master database:
start /wait <CD or DVD drive>\setup.exe /qn
INSTANCENAME=<instance_name> REINSTALL=SQL_Engine
REBUILDDATABASE=1 SAPWD=<new_sa_password>
In SQL 2008 use:
setup.exe
/QUIET
/ACTION=REBUILDDATABASE
/INSTANCENAME=instance_name
/SQLSYSADMINACCOUNTS= accounts
[/SAPWD=password]
[/SQLCOLLATION=collation_name]
Then start in single user mode:
sqlservr.exe -m -s <instance_name>
Then restore the latest backup of master and user databases, bringing the instance to the latest point possible.
More Info:
http://www.sqlservercentral.com/articles/Administration/73207/
http://www.networkworld.com/community/node/39292
Ten SQL Server Stats to Monitor
Top 10 SQL Server Counters for Monitoring SQL Server Performance
1. SQLServer: Buffer Manager: Buffer cache hit ratio
The buffer cache hit ratio counter represents how often SQL Server is able to find data pages in its buffer cache when a query needs a data page. The higher this number the better, because it means SQL Server was able to get data for queries out of memory instead of reading from disk. You want this number to be as close to 100 as possible. Having this counter at 100 means that 100% of the time SQL Server has found the needed data pages in memory. A low buffer cache hit ratio could indicate a memory problem.
2. SQLServer: Buffer Manager: Page life expectancy
The page life expectancy counter measures how long pages stay in the buffer cache in seconds. The longer a page stays in memory, the more likely SQL Server will not need to read from disk to resolve a query. You should watch this counter over time to determine a baseline for what is normal in your database environment. Some say anything below 300 (or 5 minutes) means you might need additional memory.
3. SQLServer: SQL Statistics: Batch Requests/Sec
Batch Requests/Sec measures the number of batches SQL Server is receiving per second. This counter is a good indicator of how much activity is being processed by your SQL Server box. The higher the number, the more queries are being executed on your box. Like many counters, there is no single number that can be used universally to indicate your machine is too busy. Today’s machines are getting more and more powerful all the time and therefore can process more batch requests per second. You should review this counter over time to determine a baseline number for your environment.
4. SQLServer: SQL Statistics: SQL Compilations/Sec
The SQL Compilations/Sec counter measures the number of times SQL Server compiles an execution plan per second. Compiling an execution plan is a resource-intensive operation. Compilations/Sec should be compared with the number of Batch Requests/Sec to get an indication of whether or not compilations might be hurting your performance. To do that, divide the number of batch requests by the number of compiles per second to give you a ratio of the number of batches executed per compile. Ideally you want to have one compile per every 10 batch requests.
5. SQLServer: SQL Statistics: SQL Re-Compilations/Sec
When the execution plan is invalidated due to some significant event, SQL Server will re-compile it. The Re-Compilations/Sec counter measures the number of times a re-compile event was triggered per second. Re-compiles, like compiles, are expensive operations, so you want to minimize the number of re-compiles. Ideally you want to keep this counter at less than 10% of the number of Compilations/Sec.
6. SQLServer: General Statistics: User Connections
The user connections counter identifies the number of different users that are connected to SQL Server at the time the sample was taken. You need to watch this counter over time to understand your baseline user connection numbers. Once you have some idea of your high and low water marks during normal usage of your system, you can then look for times when this counter exceeds the high and low marks. If the value of this counter goes down and the load on the system is the same, then you might have a bottleneck that is not allowing your server to handle the normal load. Keep in mind though that this counter value might go down just because fewer people are using your SQL Server instance.
7. SQLServer: Locks: Lock Waits / Sec: _Total
In order for SQL Server to manage concurrent users on the system, SQL Server needs to lock resources from time to time. The lock waits per second counter tracks the number of times per second that SQL Server is not able to obtain a lock right away for a resource. Ideally you don't want any request to wait for a lock. Therefore you want to keep this counter at zero, or close to zero, at all times.
8. SQLServer: Access Methods: Page Splits / Sec
This counter measures the number of times SQL Server had to split a page when updating or inserting data per second. Page splits are expensive, and cause your table to perform more poorly due to fragmentation. Therefore, the fewer page splits you have the better your system will perform. Ideally this counter should be less than 20% of the batch requests per second.
9. SQLServer: General Statistics: Processes Blocked
The processes blocked counter identifies the number of blocked processes. When one process is blocking another process, the blocked process cannot move forward with its execution plan until the resource that is causing it to wait is freed up. Ideally you don't want to see any blocked processes. When processes are being blocked you should investigate.
10. SQLServer: Buffer Manager: Checkpoint Pages / Sec
The checkpoint pages per second counter measures the number of pages written to disk by a checkpoint operation. You should watch this counter over time to establish a baseline for your systems. Once a baseline value has been established you can watch this value to see if it is climbing. If this counter is climbing, it might mean you are running into memory pressures that are causing dirty pages to be flushed to disk more frequently than normal.
http://www.databasejournal.com/features/mssql/article.php/3932406/Top-10-SQL-Server-Counters-for-Monitoring-SQL-Server-Performance.htm
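Most of these counters can also be read from inside SQL Server through sys.dm_os_performance_counters (SQL 2005 and later), which is handy when Performance Monitor is not available. A sketch; the object_name values are prefixed with the instance name, hence the LIKE patterns:
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE ( object_name LIKE '%Buffer Manager%'
        AND counter_name IN ( 'Page life expectancy', 'Checkpoint pages/sec' ) )
   OR ( object_name LIKE '%SQL Statistics%'
        AND counter_name IN ( 'Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec' ) )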
Starting SQL Server in Maintenance Mode
Open a command prompt and run the commands below (note: your path to the Binn folder may vary) to start SQL Server in minimal-configuration, single-user mode:
cd C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn
sqlservr.exe -f
This can be useful in cases where SQL Server will not start in normal mode, for example because the partition the tempdb is on has become unavailable. If something like this happens, you can start in single-user mode and execute the following:
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'C:\DATA\tempdb.mdf' )
ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, FILENAME = 'C:\DATA\templog.ldf' )
To INDEX or Not to INDEX
0. The exception to any of the below is when the maintenance cost of the index outweighs its benefit, or when the index itself approaches the size of the data in the table (roughly 80% of the table's size or larger).
1. Primary keys should almost always be indexed.
2. Foreign keys should almost always be indexed.
3. Look through the most common executed queries and add indexes for any columns in the "where" clauses that are not covered by steps 1 & 2.
4. Look through long running queries and add indexes for any columns in the "where" clauses that are not covered by steps 1, 2 & 3.
NOTE: The exception in #3 & #4 is when the comparison operator is "LIKE". Also, whether you do #3 then #4, or #4 then #3, depends mostly on how the application is being used and where the bottlenecks are.
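As a concrete sketch of steps 3 and 4 (table and column names here are hypothetical): if a frequent query filters on a non-key column, a plain nonclustered index on that column is the usual starting point.
-- Hypothetical frequent query:
-- SELECT ... FROM t_order WHERE customer_id = @id
CREATE NONCLUSTERED INDEX IX_t_order_customer ON t_order ( customer_id )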
Other things to watch out for:
- Try not to put "too many" indexes on a table, as this would in turn slow down inserts/updates.
- I usually start with step "1" above and implement a step at a time until I stop seeing benefit, then back off a step.
- Compound indexes should be used infrequently and sparingly
- Clustered indexes can actually degrade performance depending on how the column is used. Most design tools automatically create a clustered index on the primary key, but this should not be taken to mean that a clustered index is "always" a good thing. There are actually very few cases where a clustered index results in a benefit; more often than not it creates more overhead than benefit.
Recursive Org List
CREATE FUNCTION [f_GetParentOrgs] ( @org_id int )
RETURNS @parentOrgs TABLE ( ID int IDENTITY PRIMARY KEY, org_id int )
AS
BEGIN
DECLARE @row int
DECLARE @org int
INSERT INTO @parentOrgs ( org_id )
SELECT organizationParentID
FROM t_organization WITH (NOLOCK)
WHERE organizationID = @org_id
AND organizationParentID NOT IN ( SELECT org_id FROM @parentOrgs )
SELECT @row = count(1) FROM @parentOrgs
WHILE @row > 0
BEGIN
SELECT @org = org_id
FROM @parentOrgs
WHERE ID = @row
INSERT INTO @parentOrgs ( org_id )
SELECT org_id
FROM dbo.f_GetParentOrgs( @org )
WHERE org_id NOT IN
( SELECT org_id
FROM @parentOrgs
)
SET @row = @row - 1
END
--this is to include original org parameter in list if needed
INSERT INTO @parentOrgs ( org_id )
VALUES ( @org_id )
RETURN
END
Reducing SQL Server Locks
Reducing SQL Server Locks
By : Brad McGehee
Apr 13, 2006
http://www.sql-server-performance.com/tips/reducing_locks_p1.aspx
If your users are complaining that they have to wait for their transactions to complete, you may want to find out if object locking on the server is contributing to this problem. To do this, use the SQL Server Locks Object: Average Wait Time (ms) counter. You can use this counter to measure the average wait time of a variety of locks, including database, extent, key, page, RID, and table.
If you identify one or more types of locks causing transaction delays, then you will want to investigate further to see if you can identify what specific transactions are causing the locking. Profiler is a good tool for this detailed analysis. [7.0, 2000, 2005] Updated 9-18-2006
*****
Use sp_who and sp_who2 (the sp_who2 stored procedure is not documented in the SQL Server Books Online, but offers more detail than sp_who) to identify which processes may be blocking other processes. While blocking can also be identified using Enterprise Manager or Management Studio, these two commands work much faster and more efficiently than these GUI-based front-ends. [6.5, 7.0, 2000, 2005] Updated 9-18-2006
*****
On tables that change little, if at all, such as lookup tables, consider altering the default lock level. By default, SQL Server uses row-level locking for all tables, unless the Query Optimizer determines that a different locking level, such as page or table locks, is more appropriate. For most lookup tables that aren't huge, SQL Server will automatically use row-level locking. Because row locking has to be done at the row level, SQL Server needs to work harder to maintain row locks than it does for either page or table locks. Since lookup tables aren't being changed by users, it would be more efficient to use a table lock instead of many individual row locks. How do you accomplish this?
You can override how SQL Server performs locking on a table by using the SP_INDEXOPTION command. Below is an example of code you can run to tell SQL Server not to use row or page locks for a specific table:
SP_INDEXOPTION 'table_name', 'AllowRowLocks', FALSE
GO
SP_INDEXOPTION 'table_name', 'AllowPageLocks', FALSE
GO
This code turns off both row and page locking for the table, thus only table locking is available. [7.0, 2000, 2005] Updated 9-18-2006
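Note that sp_indexoption is deprecated in later versions; on SQL Server 2005 and up the same effect is available through ALTER INDEX (the table name is a placeholder):
ALTER INDEX ALL ON dbo.table_name
SET ( ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF )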
*****
Keep all Transact-SQL transactions as short as possible. This helps to reduce the number of locks (of all types), helping to speed up the overall performance of your SQL Server applications. If practical, you may want to break down long transactions into groups of smaller transactions. In addition, only include those Transact-SQL commands within a transaction that are necessary for the transaction. Leave all other code outside of the transaction. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
*****
An often overlooked cause of locking is an I/O bottleneck. Whenever your server experiences an I/O bottleneck, users' transactions take longer to complete. And the longer they take to complete, the longer locks must be held, which can lead to other transactions having to wait for previous locks to be released.
If your server is experiencing excessive locking problems, be sure to check if you are also running into an I/O bottleneck. If you do find that you have an I/O bottleneck, then resolving it will help to resolve your locking problem, speeding up the performance of your server. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
*****
To help reduce the amount of time tables are locked, which hurts concurrency and performance, avoid interleaving reads and database changes within the same transaction. Instead, try to do all your reads first, then perform all of the database changes (UPDATES, INSERTS, DELETES) near the end of the transaction. This helps to minimize the amount of time that exclusive locks are held. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
*****
Any conditional logic, variable assignment, and other related preliminary setup should be done outside of transactions, not inside them. Don't ever pause a transaction to wait for user input. User input should always be done outside of a transaction. Otherwise, you will be contributing to locking, hurting SQL Server's overall performance. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
*****
Encapsulate all transactions within stored procedures, including both the BEGIN TRANSACTION and COMMIT TRANSACTION statements in the procedure. This provides two benefits that help to reduce blocking locks.
First, it limits the client application and SQL Server to communications before and after the transaction, thus forcing any messages between the client and the server to occur at a time other than when the transaction is running (reducing transaction time).
Second, it prevents the user from leaving an open transaction (holding locks open) because the stored procedure forces any transactions that it starts to complete or abort. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
*****
If you have a client application that needs to "check out" data for a while and then perhaps update it later (or maybe not), you don't want the records locked during the entire time the record is being viewed. Assuming "viewing" the data is much more common than "updating" the data, one way to handle this particular circumstance is to have the application select the record (rather than updating it, so no long-held lock is taken on the record) and send it to the client.
If the user just "views" the record and never updates it, then nothing has to be done. But if the user decides to update the record, then the application can perform an UPDATE by adding a WHERE clause that checks to see whether the values in the current data are the same as those that were retrieved.
Similarly, you can check a timestamp column in the record, if it exists. If the data is the same, then the UPDATE can be made. If the record has changed, then the application must include code to notify the user so he or she can decide how to proceed. While this requires extra coding, it reduces locking and can increase overall application performance. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
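A minimal sketch of that pattern, assuming a hypothetical t_widget table with a timestamp/rowversion column named row_ver:
DECLARE @widget_id int, @new_name varchar(50), @row_ver binary(8)
SET @widget_id = 1
SET @new_name = 'updated name'
-- Read phase: fetch the row's version stamp along with the data
SELECT @row_ver = row_ver
FROM t_widget
WHERE widget_id = @widget_id
-- Update phase: succeeds only if the row is unchanged since the read
UPDATE t_widget
SET name = @new_name
WHERE widget_id = @widget_id
AND row_ver = @row_ver
IF @@ROWCOUNT = 0
PRINT 'Record was changed by another user; notify and reload.'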
xp_cmdshell
USE master
GO
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO
EXEC xp_cmdshell 'dir c:\'
GO
Setting up a New Instance
To create an additional instance on a box that already has SQL Server 2005 installed, you will need to:
1. Run the "setup" on disk 1 of SQL Server 2005 install disks
2. One page of the install wizard prompts to use "default" instance, or create a named instance. Select create a named instance and enter the name you want to call the new instance.
3. After installation, you will need to apply any service packs (ie: SQL Server 2005 Service Pack 3). Even if you have applied this service pack to existing instances, you will still need to apply it for the new instance.
4. After install of the service packs, you will need to run the SQL Server Configuration Manager. This can be found in "Start"->"Microsoft SQL Server 2005"->"Configuration Tools"->"SQL Server Configuration Manager". In this tool's UI, expand "SQL Server 2005 Network Configuration", then select "Protocols for xxxxx", where xxxx is the instance name you created in step 2. You will then Enable the TCP/IP protocol.
5. You can then use SQL Server Management Studio to connect to the instance. For the server name use SERVERNAME\INSTANCENAME (ie: vejxsefw3db1510\DEVTEST)
6. During this process, you may need to stop/restart the instance several times. When the instance is restarted, the SQL Server Agent service will need to be manually started.
Security
Use Windows Only authentication mode for security if possible
By using integrated security, you can greatly simplify administration by relying on the OS security and saving yourself from maintaining two separate security models. This also keeps passwords out of connection strings.
Take the time to audit SQL logins for null or weak passwords
Use the following code to check for null passwords:
Use master
Select name,
Password
from syslogins
where password is null
order by name
There are a multitude of free and commercial tools to check for weak passwords. SQLPing2 is free and can be used to check for weak and null passwords.
Frequently check group and role memberships
While the SQL Server security model has many enhancements, it also adds an extra layer of permissions that we must monitor to make sure no one has been given more access than they need, or has circumvented security to elevate themselves. There's also the spectre of users who have changed roles within the company while the SQL Server permissions structure has not been adjusted. This goes back to assigning object access to groups and not individuals.
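A quick way to review memberships from T-SQL (both procedures are built in):
EXEC sp_helpsrvrolemember 'sysadmin' -- members of a fixed server role
EXEC sp_helprolemember -- role memberships in the current database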
Physically secure the SQL Server
Lock it behind a door and lock away the key while you’re at it. Someone sitting in front of the server will always find a way.
Rewrite applications to use more user-defined stored procedures and views
This minimizes the need to give direct access to tables. It gives the developer more control over how data can be accessed.
Enable logging of all user login events
You can do this via script using the following code:
xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',N'AuditLevel', REG_DWORD,3
Check master..Sp_password for trojan code
Compare your production scripts to the default script on a fresh installation and keep that code handy.
Check master..Sp_helpstartup for trojan procedures
Make sure no one has placed a backdoor here. Use Sp_unmakestartup to remove any rogue procedures.
Disable SQL Mail capability unless absolutely necessary
Leaving it open gives a potential attacker another means of delivering potential trojans, viruses, or simply launching a particularly nasty denial of service attack. By itself it is fairly harmless but it can be used to help an attacker.
Remove the Guest user from databases to keep unauthorized users out
This is the default, but stay vigilant in case some dbo gets loose with the access controls. The exception to this is the master and tempdb databases, as the guest account is required there.
Make sure all SQL Server data and system files are installed on NTFS partitions
If someone should gain access to the OS, make sure that the necessary permissions are in place to prevent a catastrophe.
Use a low-privilege user account for SQL Server service rather than LocalSystem or Administrator
This account should only have minimal privileges (a local user is best) and should help contain an attack to the server in case of compromise. Notice that when using Enterprise Manager or SQL Server Configuration Manager (SQL 2005) to make this change, the ACLs on files, the registry, and user rights are done for you automatically.
Secure the “sa” account with a strong password
This assumes you are using the SQL Server and Windows security mode. If possible, use the Windows Only mode and don't worry about people brute-forcing your 'sa' accounts. Of course, even so you'll want to set a strong password in case someone changes modes on you.
Choose only the network libraries you absolutely require
Better yet, if the SQL Server is local-only then why not disable all network libraries and use shared memory to access the SQL Server? Just use the name '(local)' as the server name. If your SQL Server requires connectivity from other hosts, use the TCP/IP netlib and then determine if SSL is needed.
Make sure the latest OS and SQL Server Service Packs/Hot-Fixes are applied
This goes without saying but I'll say it anyway for completeness. Simply perform a "select @@version" on your SQL Server and compare the results to the linked page.
Restrict to sysadmins-only access to dangerous stored procedures and extended stored procedures
There are quite a few of them, and this could take some time. Be careful not to do this on a production server first. Test on a development machine so you don’t break any functionality. Below is a list of the ones we recommend you assess:
sp_sdidebug
xp_availablemedia
xp_cmdshell
xp_deletemail
xp_dirtree
xp_dropwebtask
xp_dsninfo
xp_enumdsn
xp_enumerrorlogs
xp_enumgroups
xp_enumqueuedtasks
xp_eventlog
xp_findnextmsg
xp_fixeddrives
xp_getfiledetails
xp_getnetname
xp_grantlogin
xp_logevent
xp_loginconfig
xp_logininfo
xp_makewebtask
xp_msver
xp_regread
http://www.sqlsecurity.com/
Queries Run Slower as Procedures
Symptom: A stored procedure runs slow, but the same query pasted into Query Analyzer runs fast.
You may be a victim of sql server's so called "parameter sniffing" (also sometimes called parameter spoofing).
The short answer for how to get around this is to adjust your stored procedure to use local variables and assign the parameter values to them. It is silly, but it does consistently work.
CREATE PROCEDURE dbo.MyProcedure( @parm1 int )
AS
DECLARE @myvar1 int
SET @myvar1 = @parm1
SELECT *
FROM mytable
WHERE colA = @myvar1
Apparently, it has to do with how query plans are optimized and cached for stored procedures.
If the procedure is complex enough and contains parameters that the optimizer 'decides' will impact how the query plan is generated, then the procedure takes longer to compile and run.
The method described above is one way to work around this. The above example fools the optimization engine into compiling the procedure and query plan once.
There may be cases where you will need to use the RECOMPILE directive in the stored procedure definition when the parameters actually DO impact query optimization (i.e., dynamic WHERE clauses).
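For reference, both forms of the directive are shown below; OPTION (RECOMPILE) (SQL 2005 and later) recompiles just the one statement rather than the whole procedure. The table and column are the same hypothetical ones as above.
-- Recompile the whole procedure on every execution
CREATE PROCEDURE dbo.MyProcedure( @parm1 int )
WITH RECOMPILE
AS
SELECT * FROM mytable WHERE colA = @parm1
GO
-- Or recompile a single statement
SELECT * FROM mytable WHERE colA = @parm1 OPTION ( RECOMPILE )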
More info and other ways to deal with 'Parameter Sniffing' can be found in links below:
http://www.eggheadcafe.com/tutorials/aspnet/353cb16c-3cde-44dd-a264-750c1ce4e423/sql-server-query-analyzer.aspx
http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx
http://www.sqlmag.com/Article/ArticleID/94369/sql_server_94369.html
http://stackoverflow.com/questions/211355/parameter-sniffing-or-spoofing-in-sql-server/215861
sqlcmd
SQLCMD is a command-line utility (similar to sqlplus for Oracle) that replaces the isql command-line utility found in SQL 2000 and earlier versions of SQL Server.
You can use it to make a .bat file that runs several scripts in a particular order, for example:
sqlcmd -S {server} -d {dbname} -i sqlfile1.sql
sqlcmd -S {server} -d {dbname} -i sqlfile2.sql
Sqlcmd
[-U login id] [-P password] [-S server] [-H hostname]
[-E trusted connection] [-d use database name] [-l login timeout]
[-t query timeout] [-h headers] [-s colseparator] [-w screen width]
[-a packetsize] [-e echo input] [-I Enable Quoted Identifiers]
[-c cmdend] [-L[c] list servers[clean output]] [-q "cmdline query"]
[-Q "cmdline query" and exit] [-m errorlevel] [-V severitylevel]
[-W remove trailing spaces] [-u unicode output]
[-r[0|1] msgs to stderr] [-i inputfile] [-o outputfile]
[-f codepage | i:codepage[,o:codepage]]
[-k[1|2] remove[replace] control characters]
[-y variable length type display width]
[-Y fixed length type display width]
[-p[1] print statistics[colon format]]
[-R use client regional setting] [-b On error batch abort]
[-v var = "value"...]
[-X[1] disable commands[and exit with warning]]
[-? show syntax summary]
To CLR or Not To CLR
After looking at the STIG entries below, my recommendation is that CLR should be disabled on any instance that hosts legacy databases, and that a separate instance be maintained for newer databases that use CLR.
While stored procedures remain a preferred way of doing things, in some cases you may need to do something very complex that might be better developed and maintained in managed code.
In this case, CLR would be the preferred solution instead of making calls from within the DBMS out to code that is external to the DBMS environment.
If CLR is required, then this needs to be documented in the System Security Plan that "access to CLR applications is required."
http://iase.disa.mil/stigs/content_pages/database_security.html
U_INS_sqlserver9_v8r1.7_Checklist_20100827.pdf
STIG ID: DM6123
Use of Common Language Runtime objects should be disabled if not required.
STIG ID: DG0099-SQLServer9
Vulnerability: DBMSs may spawn additional external processes to execute procedures that are defined in the DBMS but stored in external host files (external procedures), or to executables that reside on the external host.
Fix: Redesign applications to use CLR integration.
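For reference, CLR integration is toggled with the 'clr enabled' configuration option; a sketch to disable it:
EXEC sp_configure 'clr enabled', 0
RECONFIGURE
GO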
One-Way Encryption (Hash)
USE [database]
GO
ALTER TABLE [dbo].[t_user]
DROP COLUMN [password]
GO
ALTER TABLE [dbo].[t_user]
ADD [password] varbinary(20) NULL
GO
UPDATE t_user
SET password = HashBytes('SHA1', cast('PASSWORDSTRING' as nvarchar(16)))
GO
SELECT count(1)
FROM t_user
WHERE password = HashBytes('SHA1', cast('PASSWORDSTRING' as nvarchar(16)))
GO
ALTER PROCEDURE p_UserValidate(@username varchar(75), @password nvarchar(16))
AS
BEGIN
SET NOCOUNT ON;
--Returns userID if user/password combo IS valid
--Returns empty recordset if user/password combo IS NOT valid
SELECT A.userID
FROM t_user A WITH (NOLOCK)
WHERE upper(A.username) = upper(@username)
AND A.password = HashBytes('SHA1', @password)
END
GO
ALTER PROCEDURE p_PasswordChange(@userID numeric(18,0), @password nvarchar(16))
AS
BEGIN
SET NOCOUNT ON;
UPDATE t_user
SET password = HashBytes('SHA1', @password)
WHERE userID = @userID
END
GO
More Info: http://msdn.microsoft.com/en-us/library/ms174415.aspx
Column Level Two-way Encryption
Using symmetric encryption to encrypt, and later decrypt, data in a column involves the following:
1. Service Master Key - unique to each SQL Server instance
2. Database Master Key - unique to each database, generated using a password and the service master key
3. One or more database certificates
4. One or more symmetric encryption keys, each using an algorithm and tied to a certificate
----------------------------------------------------------
-- This only needs to be run once after database creation
----------------------------------------------------------
USE [database]
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyEncryptionPassword'
GO
CREATE CERTIFICATE SSN_CERT_01 WITH SUBJECT = 'SSN_CERT_01'
GO
CREATE SYMMETRIC KEY SSN_KEY_01
WITH ALGORITHM = AES_128
ENCRYPTION BY CERTIFICATE SSN_CERT_01
GO
GRANT CONTROL ON CERTIFICATE::[SSN_CERT_01] TO [APPLICATION_USER]
GO
GRANT CONTROL ON SYMMETRIC KEY::[SSN_KEY_01] TO [APPLICATION_USER]
GO
----------------------------------------------------------
-- This is how to encrypt a value to be inserted into
-- column of database table
-- This needs to be encapsulated in stored procedures
-- that are called from the application, so as not to
-- expose encrypt/decrypt process within application code
-- NOTE: Will need to 'OPEN' the key once per session
----------------------------------------------------------
OPEN SYMMETRIC KEY SSN_KEY_01
DECRYPTION BY CERTIFICATE SSN_CERT_01;
UPDATE t_person
SET ssn = EncryptByKey( Key_GUID('SSN_KEY_01'), @ssn )
WHERE person_id = @person_id
CLOSE SYMMETRIC KEY SSN_KEY_01
----------------------------------------------------------
-- This is how to decrypt a value to select from
-- column of database table
-- This needs to be encapsulated in stored procedures
-- that are called from the application, so as not to
-- expose encrypt/decrypt process within application code
-- NOTE: The cast needs to be the same datatype as the
-- data was before it was encrypted
----------------------------------------------------------
OPEN SYMMETRIC KEY SSN_KEY_01
DECRYPTION BY CERTIFICATE SSN_CERT_01;
SELECT TOP 1 @ssn = cast( DecryptByKey(A.ssn) as nvarchar(10) )
FROM t_person A WITH (NOLOCK)
WHERE A.person_id = @person_id
CLOSE SYMMETRIC KEY SSN_KEY_01
MOVING FROM INSTANCE TO INSTANCE
By default, a database's master key is encrypted using the server's service master key. When you move a database to a new server using backup/restore or detach/attach, the database master key can not be decrypted automatically when you attempt to use the symmetric key. This is because the "service master key" is used to automatically open the database's master key. At this point there are a couple of options in order to get encryption working again in the database:
1) Decrypt all data on the source instance; create backup; restore backup; recreate the master key, certificate, and encryption key; then re-encrypt the data... yeah, right.
2) Explicitly OPEN MASTER KEY every time you want to use it... this is problematic as it will require code changes.
3) BACKUP SERVICE MASTER KEY on the original instance and use RESTORE SERVICE MASTER KEY on the target server... This can be bad as it hoses up the MASTER KEYS in all the databases on the instance.
4) You can back up the certificates on the original database; create database backup; restore database on target server; drop the keys, certificates, and master key in restored database; recreate the master key; restore the certificate from the backup file; recreate the encryption key. Probably a good practice to backup the certificate, but still is a lot of work.
--TO BACK UP CERTIFICATE ON SOURCE INSTANCE
BACKUP CERTIFICATE SB_CERT
TO FILE = 'C:\SANDBOX_CERT.cer'
WITH PRIVATE KEY ( FILE = 'C:\SANDBOX_CERT_Key.pvk'
, ENCRYPTION BY PASSWORD = 'SandBox_P@$$w0rd' )
--ON TARGET INSTANCE AFTER DB RESTORE
--AND COPY CERT FILES TO TARGET'S FILE SYSTEM
CREATE CERTIFICATE SB_CERT
FROM FILE = 'C:\SANDBOX_CERT.cer'
WITH PRIVATE KEY ( FILE = 'C:\SANDBOX_CERT_Key.pvk'
, DECRYPTION BY PASSWORD = 'SandBox_P@$$w0rd' )
5) The easiest solution is to use the following commands on the restored database:
USE dbname
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'MyEncryptionPassword'
GO
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'MyEncryptionPassword'
GO
NOTE: The master key password needs to match the password you used to create the master key in the original database. You also need CONTROL permission on the database to fiddle with the master key.
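Once the master key is open with its password, you can also re-encrypt it with the new server's service master key so that it opens automatically from then on:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'MyEncryptionPassword'
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
GO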
Scripting Linked Servers
To create the linked server:
USE [msdb]
GO
EXEC sp_addlinkedserver @server = N'SAMPLE_LINK', @srvproduct=N'Oracle', @provider=N'OraOLEDB.Oracle', @datasrc=N'LINKDSN'
GO
EXEC sp_addlinkedsrvlogin @rmtsrvname = 'SAMPLE_LINK', @rmtuser = 'LINKUSER', @rmtpassword = '*******', @useself = 'FALSE'
GO
To drop the linked server:
USE [msdb]
GO
EXEC sp_droplinkedsrvlogin 'SAMPLE_LINK', NULL
GO
EXEC sp_dropserver 'SAMPLE_LINK'
GO
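To verify the link works, run a query through it; a sketch using OPENQUERY against the Oracle link created above:
SELECT * FROM OPENQUERY( SAMPLE_LINK, 'SELECT sysdate FROM dual' )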
Grant execute
This sql grants blanket execute on all stored procedures in a database:
GRANT EXECUTE TO {USER_NAME}
GO
This sql generates a grant execute statement for each stored procedure in the database:
SELECT 'GRANT EXECUTE ON ' + name + ' TO {USER_NAME}'
FROM sysobjects
WHERE type IN ( 'P', 'FN' )
AND name NOT LIKE 'dt_%'
ORDER BY name
This script grants execute on each stored procedure in the database:
USE {DATABASE_NAME}
GO
DECLARE @proc_name varchar(128)
DECLARE @sql nvarchar(255)
DECLARE proc_cur CURSOR FAST_FORWARD
FOR
SELECT name
FROM sysobjects WITH (NOLOCK)
WHERE type IN ('P','FN')
OPEN proc_cur
FETCH proc_cur INTO @proc_name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'grant execute on ' + @proc_name + ' to {USER_NAME}'
EXECUTE sp_executesql @sql
FETCH proc_cur INTO @proc_name
END
CLOSE proc_cur
DEALLOCATE proc_cur
GO
Enable/Disable Constraints
A quick way to generate script to disable all FK constraints in current db:
SELECT 'ALTER TABLE ' + B.name + ' NOCHECK CONSTRAINT ' + C.name
FROM sysconstraints A
, sysobjects B
, sysobjects C
WHERE A.id = B.id
AND A.constid = C.id
AND C.type = 'F'
A quick way to generate script to enable all FK constraints in current db:
SELECT 'ALTER TABLE ' + B.name + ' CHECK CONSTRAINT ' + C.name
FROM sysconstraints A
, sysobjects B
, sysobjects C
WHERE A.id = B.id
AND A.constid = C.id
AND C.type = 'F'
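Note that a plain CHECK CONSTRAINT re-enables the constraint without validating rows modified while it was disabled, which leaves the constraint untrusted. To re-validate existing data as well, generate WITH CHECK CHECK CONSTRAINT instead:
SELECT 'ALTER TABLE ' + B.name + ' WITH CHECK CHECK CONSTRAINT ' + C.name
FROM sysconstraints A
, sysobjects B
, sysobjects C
WHERE A.id = B.id
AND A.constid = C.id
AND C.type = 'F'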
Tuesday, June 7, 2011
fn_split
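-- Splits @text on @delimiter and returns the pieces as ordered rows;
-- position preserves the original order, and the default delimiter is a space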
CREATE FUNCTION fn_Split
( @text varchar(8000)
, @delimiter varchar(20) = ' '
)
RETURNS @Strings TABLE
(
position int IDENTITY PRIMARY KEY
, value varchar(8000)
)
AS
BEGIN
DECLARE @index int
SET @index = -1
WHILE (LEN(@text) > 0)
BEGIN
SET @index = CHARINDEX(@delimiter , @text)
IF (@index = 0) AND (LEN(@text) > 0)
BEGIN
INSERT INTO @Strings VALUES (@text)
BREAK
END
IF (@index > 1)
BEGIN
INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
ELSE
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
RETURN
END
GO
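A quick usage example:
SELECT position, value
FROM dbo.fn_Split( 'alpha,beta,gamma', ',' )
-- returns three ordered rows: alpha, beta, gamma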
Monday, June 6, 2011
Script out indexes completely
use sandbox1
go
-- Script out indexes completely, including both PK's and regular indexes, each clustered or nonclustered.
-- DOES NOT HANDLE COMPRESSION; that's ok, since 2008 R2 RTM benchmarking shows it's faster and results in smaller indexes to insert uncompressed and then compress later
-- HARDCODES [dbo] schema (i.e. if an index is on [JohnDoe].[table], it scripts it as [dbo].[table])
-- modified from original version from http://www.sqlservercentral.com/Forums/Topic961088-2753-2.aspx
DECLARE
@idxTableName SYSNAME,
@idxTableID INT,
@idxname SYSNAME,
@idxid INT,
@colCount INT,
@IxColumn SYSNAME,
@IxFirstColumn BIT,
@ColumnIDInTable INT,
@ColumnIDInIndex INT,
@IsIncludedColumn INT,
@sIncludeCols VARCHAR(MAX),
@sIndexCols VARCHAR(MAX),
@sSQL VARCHAR(MAX),
@sParamSQL VARCHAR(MAX),
@sFilterSQL VARCHAR(MAX),
@location SYSNAME,
@IndexCount INT,
@CurrentIndex INT,
@CurrentCol INT,
@Name VARCHAR(128),
@IsPrimaryKey TINYINT,
@Fillfactor INT,
@FilterDefinition NVARCHAR(MAX),
@IsClustered BIT, -- used solely for putting information into the result table
@PKDropSQL NVARCHAR(MAX)
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = object_id(N'[tempdb].[dbo].[#IndexSQL]'))
DROP TABLE [dbo].[#IndexSQL]
CREATE TABLE #IndexSQL
( TableName VARCHAR(128) NOT NULL
,IndexName VARCHAR(128) NOT NULL
,IsClustered BIT NOT NULL
,IsPrimaryKey BIT NOT NULL
,IndexCreateSQL NVARCHAR(max) NOT NULL
,PKDropSQL NVARCHAR(max) NULL
)
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = object_id(N'[tempdb].[dbo].[#IndexListing]'))
DROP TABLE [dbo].[#IndexListing]
CREATE TABLE #IndexListing
(
[IndexListingID] INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
[TableName] SYSNAME COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ObjectID] INT NOT NULL,
[IndexName] SYSNAME COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[IndexID] INT NOT NULL,
[IsPrimaryKey] TINYINT NOT NULL,
[FillFactor] INT,
[FilterDefinition] NVARCHAR(MAX) NULL
)
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE id = object_id(N'[tempdb].[dbo].[#ColumnListing]'))
DROP TABLE [dbo].[#ColumnListing]
CREATE TABLE #ColumnListing
(
[ColumnListingID] INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
[ColumnIDInTable] INT NOT NULL,
[Name] SYSNAME COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ColumnIDInIndex] INT NOT NULL,
[IsIncludedColumn] BIT NULL
)
INSERT INTO #IndexListing( [TableName], [ObjectID], [IndexName], [IndexID], [IsPrimaryKey], [FILLFACTOR], [FilterDefinition] )
SELECT OBJECT_NAME(si.object_id), si.object_id, si.name, si.index_id, si.Is_Primary_Key, si.Fill_Factor, NULL --si.filter_definition --filter_definition is in sql 2008
FROM sys.indexes si
LEFT OUTER JOIN information_schema.table_constraints tc ON si.name = tc.constraint_name AND OBJECT_NAME(si.object_id) = tc.table_name
WHERE OBJECTPROPERTY(si.object_id, 'IsUserTable') = 1
ORDER BY OBJECT_NAME(si.object_id), si.index_id
SELECT @IndexCount = @@ROWCOUNT, @CurrentIndex = 1
WHILE @CurrentIndex <= @IndexCount
BEGIN
SELECT @idxTableName = [TableName],
@idxTableID = [ObjectID],
@idxname = [IndexName],
@idxid = [IndexID],
@IsPrimaryKey = [IsPrimaryKey],
@FillFactor = [FILLFACTOR],
@FilterDefinition = [FilterDefinition]
FROM #IndexListing
WHERE [IndexListingID] = @CurrentIndex
-- So - it is either an index or a constraint
-- Check if the index is unique
IF (@IsPrimaryKey = 1)
BEGIN
SET @sSQL = 'ALTER TABLE [dbo].[' + @idxTableName + '] ADD CONSTRAINT [' + @idxname + '] PRIMARY KEY '
-- Check if the index is clustered
IF (INDEXPROPERTY(@idxTableID, @idxname, 'IsClustered') = 0)
BEGIN
SET @sSQL = @sSQL + 'NON'
SET @IsClustered = 0
END
ELSE
BEGIN
SET @IsClustered = 1
END
SET @sSQL = @sSQL + 'CLUSTERED' + CHAR(13) + '(' + CHAR(13)
END
ELSE
BEGIN
SET @sSQL = 'CREATE '
-- Check if the index is unique
IF (INDEXPROPERTY(@idxTableID, @idxname, 'IsUnique') = 1)
BEGIN
SET @sSQL = @sSQL + 'UNIQUE '
END
-- Check if the index is clustered
IF (INDEXPROPERTY(@idxTableID, @idxname, 'IsClustered') = 1)
BEGIN
SET @sSQL = @sSQL + 'CLUSTERED '
SET @IsClustered = 1
END
ELSE
BEGIN
SET @IsClustered = 0
END
SELECT
@sSQL = @sSQL + 'INDEX [' + @idxname + '] ON [dbo].[' + @idxTableName + ']' + CHAR(13) + '(' + CHAR(13),
@colCount = 0
END
-- Get the number of cols in the index
SELECT @colCount = COUNT(*)
FROM sys.index_columns ic
INNER JOIN sys.columns sc ON ic.object_id = sc.object_id AND ic.column_id = sc.column_id
WHERE ic.object_id = @idxtableid AND index_id = @idxid AND ic.is_included_column = 0
-- Get the file group info
SELECT @location = f.[name]
FROM sys.indexes i
INNER JOIN sys.filegroups f ON i.data_space_id = f.data_space_id
INNER JOIN sys.all_objects o ON i.[object_id] = o.[object_id]
WHERE o.object_id = @idxTableID AND i.index_id = @idxid
-- Get all columns of the index
INSERT INTO #ColumnListing( [ColumnIDInTable], [Name], [ColumnIDInIndex],[IsIncludedColumn] )
SELECT sc.column_id, sc.name, ic.index_column_id, ic.is_included_column
FROM sys.index_columns ic
INNER JOIN sys.columns sc ON ic.object_id = sc.object_id AND ic.column_id = sc.column_id
WHERE ic.object_id = @idxTableID AND index_id = @idxid
ORDER BY ic.index_column_id
IF @@ROWCOUNT > 0
BEGIN
SELECT @CurrentCol = 1
SELECT @IxFirstColumn = 1, @sIncludeCols = '', @sIndexCols = ''
WHILE @CurrentCol <= @ColCount
BEGIN
SELECT @ColumnIDInTable = ColumnIDInTable,
@Name = Name,
@ColumnIDInIndex = ColumnIDInIndex,
@IsIncludedColumn = IsIncludedColumn
FROM #ColumnListing
WHERE [ColumnListingID] = @CurrentCol
IF @IsIncludedColumn = 0
BEGIN
SET @sIndexCols = CHAR(9) + @sIndexCols + '[' + @Name + '] '
-- Check the sort order of the index cols
IF (INDEXKEY_PROPERTY (@idxTableID,@idxid,@ColumnIDInIndex,'IsDescending')) = 0
BEGIN
SET @sIndexCols = @sIndexCols + ' ASC '
END
ELSE
BEGIN
SET @sIndexCols = @sIndexCols + ' DESC '
END
IF @CurrentCol < @colCount
BEGIN
SET @sIndexCols = @sIndexCols + ', '
END
END
ELSE
BEGIN
-- Check for any include columns
IF LEN(@sIncludeCols) > 0
BEGIN
SET @sIncludeCols = @sIncludeCols + ','
END
SET @sIncludeCols = @sIncludeCols + '[' + @Name + ']'
END
SET @CurrentCol = @CurrentCol + 1
END
TRUNCATE TABLE #ColumnListing
--append to the result
IF LEN(@sIncludeCols) > 0
SET @sIndexCols = @sSQL + @sIndexCols + CHAR(13) + ') ' + ' INCLUDE ( ' + @sIncludeCols + ' ) '
ELSE
SET @sIndexCols = @sSQL + @sIndexCols + CHAR(13) + ') '
-- Add filtering
IF @FilterDefinition IS NOT NULL
SET @sFilterSQL = ' WHERE ' + @FilterDefinition + ' ' + CHAR(13)
ELSE
SET @sFilterSQL = ''
-- Build the options
SET @sParamSQL = 'WITH ( PAD_INDEX = '
IF INDEXPROPERTY(@idxTableID, @idxname, 'IsPadIndex') = 1
SET @sParamSQL = @sParamSQL + 'ON,'
ELSE
SET @sParamSQL = @sParamSQL + 'OFF,'
SET @sParamSQL = @sParamSQL + ' ALLOW_PAGE_LOCKS = '
IF INDEXPROPERTY(@idxTableID, @idxname, 'IsPageLockDisallowed') = 0
SET @sParamSQL = @sParamSQL + 'ON,'
ELSE
SET @sParamSQL = @sParamSQL + 'OFF,'
SET @sParamSQL = @sParamSQL + ' ALLOW_ROW_LOCKS = '
IF INDEXPROPERTY(@idxTableID, @idxname, 'IsRowLockDisallowed') = 0
SET @sParamSQL = @sParamSQL + 'ON,'
ELSE
SET @sParamSQL = @sParamSQL + 'OFF,'
SET @sParamSQL = @sParamSQL + ' STATISTICS_NORECOMPUTE = '
-- THIS DOES NOT WORK PROPERLY; IsStatistics only says what generated the last set, not what it was set to do.
IF (INDEXPROPERTY(@idxTableID, @idxname, 'IsStatistics') = 1)
SET @sParamSQL = @sParamSQL + 'ON'
ELSE
SET @sParamSQL = @sParamSQL + 'OFF'
-- Fillfactor 0 is actually not a valid percentage on SQL 2008 R2
IF ISNULL( @FillFactor, 90 ) <> 0
SET @sParamSQL = @sParamSQL + ' ,FILLFACTOR = ' + CAST( ISNULL( @FillFactor, 90 ) AS VARCHAR(3) )
IF (@IsPrimaryKey = 1) -- DROP_EXISTING isn't valid for PK's
BEGIN
SET @sParamSQL = @sParamSQL + ' ) '
SET @PKDropSQL = 'ALTER TABLE [' + @idxTableName + '] DROP CONSTRAINT [' + @idxName + ']'
END
ELSE
BEGIN
SET @sParamSQL = @sParamSQL + ' ,DROP_EXISTING = ON ) '
SET @PKDropSQL = NULL
END
SET @sSQL = @sIndexCols + CHAR(13) + @sFilterSQL + CHAR(13) + @sParamSQL
-- 2008 R2 allows ON [filegroup] for primary keys as well, negating the old "IF THE INDEX IS NOT A PRIMARY KEY - ADD THIS - ELSE DO NOT" IsPrimaryKey IF statement
SET @sSQL = @sSQL + ' ON [' + @location + ']'
--PRINT @sIndexCols + CHAR(13)
INSERT INTO #IndexSQL (TableName, IndexName, IsClustered, IsPrimaryKey, IndexCreateSQL, PKDropSQL)
VALUES (@idxTableName, @idxName, @IsClustered, @IsPrimaryKey, @sSQL, @PKDropSQL)
END
SET @CurrentIndex = @CurrentIndex + 1
END
SELECT * FROM #IndexSQL